Studies in Science of Science ›› 2022, Vol. 40 ›› Issue (7): 1153-1162.


AI Ethical Risk Perception, Trust, and Public Participation

  

  • Received:2021-06-17 Revised:2021-10-11 Online:2022-07-15 Published:2022-07-15


Song Yan1, Chen Lin1, Li Qin1, He Jiaxin2,3, Wang Yue1,3

  1. University of Electronic Science and Technology of China
    2. School of Management and Economics, University of Electronic Science and Technology of China
    3.
  • Corresponding author: Song Yan

Abstract: With the explosive development of artificial intelligence, the ethical risks it involves have moved from science fiction into reality and become a hotly debated topic among many stakeholders. Drawing on the technology acceptance model and risk perception theory, and taking trust as a mediator, this paper constructs a research framework linking the public's perception of AI ethical risks to their willingness to participate in risk governance, and empirically tests the related hypotheses. The results show that AI ethical risk perception has a significant negative effect on public participation intention, and that public trust in scientific research institutions and the government partially mediates the relationship between AI ethical risk perception and public participation. These conclusions provide a scientific basis for building a new governance pattern of "government-led, expert-guided, public participation, and social supervision", and have important practical significance for enabling emerging technologies to achieve "Tech for Social Good".
