Studies in Science of Science ›› 2024, Vol. 42 ›› Issue (1): 21-30.
陈锐 1, 江奕辉 2
Abstract: Based on techniques such as autoregressive generative modeling, pre-training, and reinforcement learning from human feedback, ChatGPT has gained powerful natural language processing capabilities, and its emergence marks a significant shift in the traditional conception of generative AI. At the same time, it poses various types of risks in terms of model training, generated content, and applications. It is therefore imperative to mitigate these potential risks while guiding the rapid development of generative AI, so as to prevent a "pollute first, regulate later" scenario.

Before governance proceeds, several key premises must be established. The first is to uphold human-centered values: humanism should be the fundamental value basis of governance, ensuring that AI technologies prioritize human well-being and ethical considerations. The second is to uphold inclusive, prudent, and agile governance, which means balancing safety and innovation; optimizing the relationship between the governing and the governed and strengthening interaction between the two sides to form governance synergy; and enhancing the flexibility of governance by placing greater emphasis on foresight and shifting from outcome-based to process-based governance. The third is to insist on multi-party participation that combines "point and surface". Effective governance requires the active participation of all stakeholders, but multi-party participation does not mean equal responsibility: the government and service providers are the primary duty-bearers and should assume greater responsibility in the governance process. The fourth is to adopt a systematic governance model with multiple measures. On the one hand, risks should be classified and addressed with different means, with technical issues handled through technical governance and legal issues through legal governance; on the other hand, comprehensive risks call for a systematic, multi-pronged governance scheme.

The specific governance path built around these principles can be summarized in the following five areas. First, establish standardized training datasets. The government and relevant industry organizations should jointly lead the construction of standard training datasets according to the type of generative AI and its different training stages, establish a sound evaluation and supervision system for training data, determine the update cycle of the standard datasets, monitor their quality, eliminate false and harmful information, and control the whole process of standard dataset construction. Second, implement professional qualification certification for AI trainers. On the one hand, shift the current certification, which is based mainly on skill identification, to qualification-based access that matches trainers' important role in the training and maintenance of generative AI; on the other hand, grade the access qualifications for AI trainers according to the different task scenarios. Third, strengthen algorithmic supervision technology. Enterprises developing and using generative AI should strengthen internal algorithm-supervision technology to achieve effective internal standardized governance; third parties should be encouraged to supervise the development and application of algorithms to achieve effective external regulation; and user feedback systems should be expanded, giving users the right to judge and annotate the output of generative AI and screening that feedback into new training datasets to further train and optimize the models (a minimal illustrative sketch of such a feedback loop follows the abstract).

Fourth, strengthen the end-to-end ethical governance framework. It should first be made clear that the ethical basis for development and use is the promotion of human well-being, and a specialized ethics training and review institution should be established to conduct ethics training and regular ethics reviews for generative AI. Second, value-sensitive design should be used to embed ethical concepts into AI generators so that they can recognize unethical information and refuse to output it. Third, the ethical connotations of generative AI should be discussed regularly and an ethical declaration formed so that new ethical risks can be addressed in a timely manner. Finally, a user code of ethics should be formulated promptly to develop AI ethics. Fifth, optimize the legal framework and the allocation of legal responsibility. On the one hand, optimize the relevant legislative system: although China already has a wide variety of laws and regulations on artificial intelligence, including special departmental regulations on generative AI governance, regulation remains fragmented across multiple departments, so the legal governance system for generative AI should be rationalized; and the new problems brought by generative AI should be answered in law as soon as possible, including clarifying the responsibilities and obligations attached to the relevant AI systems. On the other hand, legal responsibility should be reasonably allocated among subjects: first, distinguish the specific infringement scenarios of generative AI and set the legal liability of the corresponding subjects accordingly; second, in infringement disputes arising from the application of generative AI, hold the developers and service providers jointly and severally liable, and carry out a secondary distribution of responsibility after relief has been provided to the infringed party; third, where a user's illegal use of generative AI damages the rights of a third party, establish a "notification-disposal" safe-harbor mechanism to protect providers of generative AI and prevent the expansion of joint and several liability.
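To make the user-feedback mechanism in the third governance path more concrete, the following is a minimal, purely illustrative Python sketch of a feedback-screening loop. All names (FeedbackRecord, screen_feedback, build_finetune_set) and the screening rules are assumptions for illustration only; the paper does not prescribe any particular implementation.

```python
# Minimal sketch (assumption, not from the paper): users rate and annotate model
# outputs, low-quality feedback is filtered out, and the remainder is turned
# into new fine-tuning examples for the generative model.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FeedbackRecord:                 # hypothetical record of one user judgment
    prompt: str                       # the user's original query
    model_output: str                 # what the generative model returned
    user_rating: int                  # 1 (harmful/incorrect) .. 5 (accurate/helpful)
    user_annotation: str              # free-text correction or comment


def screen_feedback(records: List[FeedbackRecord],
                    banned_terms: tuple = ("spam", "advert")) -> List[FeedbackRecord]:
    """Keep only feedback that is plausibly useful for retraining:
    drop empty annotations and annotations containing banned terms."""
    kept = []
    for rec in records:
        text = rec.user_annotation.strip().lower()
        if not text or any(term in text for term in banned_terms):
            continue
        kept.append(rec)
    return kept


def build_finetune_set(records: List[FeedbackRecord]) -> List[Dict[str, str]]:
    """Turn screened feedback into (prompt, preferred response) pairs:
    for low-rated outputs the user's correction becomes the training target."""
    dataset = []
    for rec in records:
        target = rec.user_annotation if rec.user_rating <= 2 else rec.model_output
        dataset.append({"prompt": rec.prompt, "response": target})
    return dataset


if __name__ == "__main__":
    raw = [
        FeedbackRecord("When was ChatGPT released?",
                       "ChatGPT was released in 2021.", 1,
                       "Incorrect: ChatGPT was released in November 2022."),
        FeedbackRecord("Summarize the governance premises.",
                       "Human-centered values, agile governance, multi-party participation.", 5,
                       "Good summary."),
    ]
    print(build_finetune_set(screen_feedback(raw)))
```

In a real deployment the screening step would be far richer (abuse detection, reviewer sampling, privacy filtering), but the division into collection, screening, and dataset construction mirrors the feedback process the abstract describes.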
摘要: Building on techniques such as autoregressive generative modeling, pre-training, and reinforcement learning from human feedback, ChatGPT has acquired powerful natural language processing capabilities, overturning previous conceptions of artificial intelligence. At the same time, however, ChatGPT brings many types of risks in model training, generated content, and application. Before governance proceeds, it should be made clear that humanism is the value basis of governance, inclusive and prudent agile governance is its guiding concept, multi-party participation combining "point and surface" is its requirement on governing subjects, and a systematic scheme with multiple measures is its mode. Accordingly, the specific governance path unfolds in five areas: building standardized training datasets, improving the professional qualification access system for AI trainers, strengthening algorithmic supervision technology, implementing a whole-process ethical governance scheme, and optimizing the legal system and the allocation of legal responsibility.
陈锐, 江奕辉. Research on the governance of generative AI: the case of ChatGPT [J]. Studies in Science of Science, 2024, 42(1): 21-30.
URL: https://journal08.magtechjournal.com/kxxyj/EN/Y2024/V42/I1/21