• China Association for Science of Science and S&T Policy
  • Institute of Policy and Management, Chinese Academy of Sciences
  • Center for Science, Technology and Society, Tsinghua University
ISSN 1003-2053 CN 11-1805/G3

Studies in Science of Science, 2022, Vol. 40, Issue (4): 611-618.

• Science and Technology Development Strategy and Policy •

Institutional construction of the transparency principle of artificial intelligence: paradigm selection and elements analysis

Ji Dongmei

  1. Capital University of Economics and Business
  • Received: 2020-12-04  Revised: 2021-05-10  Online: 2022-04-15  Published: 2022-04-15
  • Corresponding author: Ji Dongmei



Abstract: The information asymmetry behind the "algorithm black box" may bring social risks. In the development of artificial intelligence technology, realizing the principle of transparency is the prerequisite and basis for algorithmic regulation and for establishing relationships of trust; the growth of the intelligent industry and society's reliance interests likewise call for establishing the transparency principle. However, owing to obstacles such as conflicts of interest, technical characteristics, and institutional costs, the regulation of artificial intelligence should not adopt the thorough, full-disclosure model of transparency used in traditional fields, but should instead establish limited and reasonable transparency standards. The effective implementation of these standards relies on the synergy of conduct rules and the protection of private rights, imposing whole-process control over artificial intelligence through ex ante prevention, in-process restraint, and ex post relief, combined with scenario-based consideration of elements such as subject, object, degree, and conditions, so as to achieve balance and coordination among intelligent technological innovation, overall economic benefits, and the public interest.