
科学学研究 (Studies in Science of Science), 2025, Vol. 43, Issue (5): 988-995.

• Theory and Methods •

Research on the paradigm and value of Mega-risk in science and technology

徐旭1, 乔雁2, 陈凡3

  1. Inner Mongolia University
  2.
  3. Research Center for Science, Technology and Society, Northeastern University
  • Received: 2024-03-13  Revised: 2024-11-05  Online: 2025-05-15  Published: 2025-05-15
  • Corresponding author: 徐旭
  • Funding:
    Research on the Risk Generation of the Metaverse and Its Prevention and Resolution; Research on Preventing and Defusing Major Risks in the Field of Science and Technology

Abstract: In the history of research on major risks in science and technology, there have been two attempts to quantify risk. The first occurred in the early 1980s, when constructivists represented by B. Fischhoff drew on probability theory to define risk as the probability of occurrence of non-preferred events; the second came at the beginning of this century, when hybridists represented by Martin Peterson advanced the Rasmussen quantitative method and defined risk as the statistical expected value of dangerous events. Regrettably, both attempts share the same defect: the two approaches concentrate on the quantitative form in which major scientific and technological risks are stated at the scientific level, while neglecting the ethical demand concerning the qualitative nature of risk itself. The most urgent task in current research on major risks in science and technology is therefore to adjust ethical standards on the basis of delimiting the problem domain of such risks, to respect the developmental laws of science and technology themselves, to value the foresight that philosophy offers, to break down paradigm barriers, and to guide scientific and technological innovation within an interdisciplinary paradigm, so that human-centered values and symbiotic values are brought into innovation and the strategies for managing and controlling major scientific and technological risks are strengthened.

Abstract: There have been two attempts to quantify risk in the history of research on major risks in science and technology. The first occurred in the early 1980s, when constructivists represented by B. Fischhoff used probability theory to define risk as the probability of occurrence of non-preferred events. The second came in the early 2000s, when hybridists, notably Martin Peterson, advanced the Rasmussen quantitative method and defined risk as the statistical expectation of a dangerous event. Unfortunately, both attempts share the same defect: they focus on the quantitative form in which Mega-risk in science and technology is stated at the scientific level, while ignoring the ethical demand concerning the qualitative nature of risk itself. Therefore, the most urgent issue in current research on Mega-risk in science and technology is to adjust ethical standards on the basis of delimiting the problem domain of Mega-risk, to respect the developmental laws of science and technology themselves, to attach importance to the foresight of philosophy, to break down paradigm barriers, and to guide scientific and technological innovation under an interdisciplinary paradigm, thereby applying humanistic value and symbiotic value to innovation and strengthening the strategies for managing and controlling Mega-risk in science and technology. In this paper, we first trace the history of research on Mega-risk in science and technology. By comparing the concept of risk with its Western counterpart, we find that the connotation of risk is not only an expression of danger and harm; more importantly, risk represents the reality that things are constantly changing and that people's desire to control them is difficult to fulfil. The former shaped quantitative risk research in earlier studies, while the lack of the latter led to the long-term absence of a qualitative view of risk. Second, the deficiencies of quantitative research on Mega-risk in science and technology show that although the quantitative view of risk has the advantages of simplicity and operability, such a delimitation easily ignores or excludes other factors from risk research. In addition, Mega-risk in science and technology increasingly raises a series of philosophical issues that concern human beings: technological progress, the meaning of nature, and unquantifiable value questions bearing on human well-being and sustainability, such as equity, the distribution of benefits and risks, and the ethics of value, which are inherently controversial and resist consensus. The separation and prominence of these value issues positively promoted the formation of qualitative research on risk in the early 21st century. Third, by examining the Technology Acceptance Model (TAM), we find that for human beings the main impact of Mega-risk in science and technology is that we cannot predict what harm a new technology will produce; that is, the risk is more destructive than we know. Although risk assessment can alleviate public perception of risk to a certain extent, it is in fact a rather subjective, value-laden, and pragmatic process, in which managers must use the results of risk analysis to formulate policy after weighing public perceptions and attitudes (risk acceptability), legal and political constraints, and value trade-offs.
Therefore, decisions about scientific and technological activities rest on facts in the stage of risk assessment and on values in the stage of risk management, which opens the possibility of bias in decision-making on major scientific and technological risks. Finally, the shrinking time required for the evolution of science and technology means that human cognition of technology faces increasingly severe challenges; issues such as global warming, nuclear pollution, genetic engineering, and ethical violations in artificial intelligence place higher demands on the ethical construction of humanity. Therefore, on the one hand, technology, as applied science, should be given sufficient time to emerge and develop during the stage of scientific and technological innovation, so that the values and facts embodied in science and technology can be fully demonstrated and tested, and so that a hastily completed design process does not amplify the negative consequences of technology for users and others. On the other hand, if the goal of technological innovation is the common progress of society and of science and technology, then we need to return to Marxism, which is oriented toward the free and all-round development of human beings, while at the same time acknowledging the shortcomings and negative phenomena that accompany current scientific and technological progress.
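
For readers who prefer the two quantitative framings of risk in symbols, a minimal sketch follows; the notation (A, p_i, h_i, R) is illustrative and not drawn from the paper itself, and it only restates the two definitions named in the abstract.

% (1) Constructivist framing (Fischhoff and colleagues): risk as the probability
%     of occurrence of a non-preferred (adverse) event A.
\[ R_{\mathrm{prob}} = P(A) \]
% (2) Hybridist framing (Peterson, extending the Rasmussen quantitative method):
%     risk as the statistical expected value of harm over the possible dangerous
%     outcomes, where p_i is the probability of outcome i and h_i the magnitude of its harm.
\[ R_{\mathrm{exp}} = \mathbb{E}[H] = \sum_{i=1}^{n} p_i \, h_i \]

The contrast makes the paper's point concrete: both expressions compress risk into a single number, which is precisely the quantitative statement form that, on the authors' account, leaves the qualitative and ethical dimensions of risk out of view.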