• Chinese Association for Science of Science and S&T Policy
  • Institute of Policy and Management, Chinese Academy of Sciences
  • Center for Science, Technology and Society, Tsinghua University
ISSN 1003-2053 CN 11-1805/G3

Studies in Science of Science ›› 2024, Vol. 42 ›› Issue (6): 1121-1128.

• Theory and Methods of Science of Science •

Intelligent Social Experiments: The Responsibility Gap of Context-driven Innovation and Its Governance

俞鼎 (Yu Ding)1, 李正风 (Li Zhengfeng)2

1. School of Marxism, Zhejiang University
    2. Center for Science, Technology and Society, Tsinghua University
• Received: 2022-10-31  Revised: 2022-11-21  Online: 2024-06-15  Published: 2024-06-15
  • Corresponding author: 俞鼎 (Yu Ding)
  • Funding:
    Research on Deepening the Reform of the S&T System and Improving the National S&T Governance System


Abstract: Artificial intelligence social experiments exhibit the distinctive feature of context-driven innovation, and the “responsibility gap”, the most troubling problem in the application of AI and one that such experiments urgently need to resolve, is both generated and resolved in concrete contexts. An analysis of the technical logic by which AI enables context-driven innovation, and of the intrinsic demands that context innovation places on the ethical regulation of AI, shows that the “ethics of AI social experiments” and the “ethics of AI” share the common problem of the “responsibility gap”. Bridging the “responsibility gap” in AI social experiments has therefore become a new and urgent task, one that will directly inform the formulation of AI ethical principles and regulations. Depending on the experimental context, the types of responsibility attribution raised by these ethical challenges can be subdivided by their nature and boundaries into the culpability gap, the moral accountability gap, the public accountability gap, and the active responsibility gap. By introducing “meaningful human control”, currently the most promising guiding idea and practical framework in the field of global AI governance, this paper offers a new approach to comprehensively addressing the disconnect between cognitive, embodied, and moral forms of control and the corresponding plural forms of responsibility within AI social experiment contexts.