
Studies in Science of Science, 2023, Vol. 41, Issue (10): 1737-1746.

• Theory and Methods of Science of Science •

"Foresight" or "Short-Term Concern"? Measurement and Analysis of Risk Perception of Artificial Intelligence among Contemporary Youth

Li Senlin 1, Zhang Le 2, Li Jin 3,4

  1. School of Political Science and Public Administration, Shandong University
    2. Shandong University
    3. Center for Quality of Life and Public Policy Research, Shandong University
    4. School of Political Science and Public Administration, Shandong University
• Received: 2022-08-12  Revised: 2023-01-11  Online: 2023-10-15  Published: 2023-10-26
• Corresponding author: Li Senlin
• Funding:
    Key Project of the National Social Science Fund of China, "Research on Social Risks and Adaptive Governance in the Development of Artificial Intelligence"; Major Humanities and Social Sciences Project of Shandong University, "Research on Preventing and Defusing Major Social Risks in Digital Transformation"

"Foresight" or "Short-Term Concern"? ——Measurement and analysis of risk perception of artificial intelligence among contemporary youth

  • Received:2022-08-12 Revised:2023-01-11 Online:2023-10-15 Published:2023-10-26

Abstract: Measuring and analyzing individuals' risk perception of artificial intelligence is foundational work for studying its social impact. This study constructs a multiple-indicator, multiple-cause (MIMIC) structural equation model to measure the level of risk perception of artificial intelligence among youth along four dimensions and to examine the effects of eight categories of risk factors on it. The data analysis shows that young people's evaluations of societal risk factors with long-term and collective characteristics (social, ethical, legal, national security, and political) positively reinforce their risk perception of artificial intelligence as a whole, whereas their evaluations of personal risk factors with near-term and immediate characteristics (economic, technological, and threats to personal safety and property) have no significant effect. Among the significant factors, concerns about ethics, national security, and law raise young people's risk perception of AI most markedly. These results indicate that contemporary youth's risk perception of artificial intelligence is characterized mainly by "foresight" rather than "short-term concern," reflecting this group's deep, forward-looking awareness of the global, long-term, and legal-ethical implications of emerging technology. In advancing comprehensive AI risk governance, the hidden risks of artificial intelligence should therefore be addressed from the standpoint of national strategy and of law and ethics, so as to dispel the public's concerns, both distant and immediate, to the greatest extent and to consolidate the social foundation for deploying emerging technologies.
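
The MIMIC specification described above has two parts: a measurement part in which a latent risk-perception construct is identified by the four observed indicators, and a structural part in which that latent construct is regressed on the eight observed risk-factor evaluations. The following is a minimal sketch of such a specification in Python using the semopy package; the data file name and all variable names (rp1 through rp4 for the four perception indicators, plus the eight factor scores) are hypothetical placeholders, since the paper's survey items are not reproduced here.

    # Minimal MIMIC (multiple indicators, multiple causes) SEM sketch.
    # All variable names are hypothetical placeholders for the paper's
    # survey items; the specification pattern is what matters.
    import pandas as pd
    import semopy

    MODEL_DESC = """
    # Measurement part: latent risk perception with four indicators.
    risk_perception =~ rp1 + rp2 + rp3 + rp4
    # Structural part: latent risk perception regressed on eight factors.
    risk_perception ~ social + ethical + legal + national_security + political + economic + technological + personal_safety
    """

    df = pd.read_csv("survey.csv")    # hypothetical survey data
    model = semopy.Model(MODEL_DESC)
    model.fit(df)                     # maximum-likelihood estimation by default
    print(model.inspect())            # loadings, path coefficients, p-values

In this setup, a positive and significant path coefficient on, say, the ethical factor would correspond to the finding that long-term, collective risk evaluations strengthen overall risk perception, while non-significant paths on the economic or technological factors would mirror the null results reported for near-term personal factors.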
