• Chinese Association for Science of Science and S&T Policy Research
  • Institute of Policy and Management, Chinese Academy of Sciences
  • Center for Science, Technology and Society, Tsinghua University
ISSN 1003-2053 CN 11-1805/G3

Studies in Science of Science ›› 2024, Vol. 42 ›› Issue (7): 1354-1360.

• Theory and Methods of Science of Science •

Algorithmic Transparency: Exploration and Reflection from Theory to Practice

DENG Ketao 1, ZHANG Guihong 2

1. School of Humanities and Social Sciences, University of Science and Technology of China
    2. University of Science and Technology of China
• Received: 2023-04-23  Revised: 2023-08-01  Online: 2024-07-15  Published: 2024-07-15
• Corresponding author: ZHANG Guihong
• Funding: Philosophical Research on Responsible Artificial Intelligence and Its Practice


Abstract: Although more and more important tasks and decisions are being delegated to algorithms, their computational complexity and opacity make it difficult for users to understand how algorithmic decisions and results are produced. This undermines users' trust in algorithms and has even given rise to the phenomenon of "algorithm aversion". Accordingly, algorithmic transparency is often regarded as the foundation of trustworthy artificial intelligence and has attracted considerable attention in academic debates in recent years. At the practical level, however, implementing algorithmic transparency faces many challenges and may even introduce certain ethical risks. On this basis, this study analyzes the practical challenges and risks of algorithmic transparency and argues that the current practical difficulties can be addressed along at least three dimensions: disclosure, review, and design.