• Chinese Association for Science of Science and S&T Policy
  • Institute of Policy and Management, Chinese Academy of Sciences
  • Center for Science, Technology and Society, Tsinghua University
ISSN 1003-2053 CN 11-1805/G3

Studies in Science of Science (科学学研究) ›› 2023, Vol. 41 ›› Issue (10): 1729-1736.

• Theory and Methods of Science of Science •

Can Artificial Intelligence Serve as an Object of Trust? A Defense of Trustworthy AI

He Li (何丽)

  1. School of Philosophy, Fudan University
  • Received: 2022-08-15 Revised: 2022-10-26 Online: 2023-10-15 Published: 2023-10-26
  • Corresponding author: He Li (何丽)

Can Artificial Intelligence Serve as Trustee? A Defense of Trustworthy AI


Abstract: Building trustworthy artificial intelligence has become a global consensus, yet the legitimacy of the very concept of trustworthy AI remains contested. Critics question the concept by insisting that AI cannot satisfy the conditions of trust, while defenders attempt to grade AI or stratify trust so as to open up a space for trustworthiness, but fail to supply a convincing justification. Confined to the traditional model of interpersonal trust centered on the trustor, neither side conducts a comprehensive examination from the combined perspective of humans and AI; this is the deeper reason why critics render one-sided judgments and defenders mount failed defenses. Focusing on the trustworthiness of AI and identifying its trustworthy traits can provide a new research perspective and a feasible approach for such a defense.
