Studies in Science of Science ›› 2025, Vol. 43 ›› Issue (6): 1293-1301.

Ethical Risks of the Generative Information Privacy Divide

WANG Juan¹, TANG Shukun²

  1. Chaohu University
  2. School of Humanities and Social Sciences, University of Science and Technology of China

  • Corresponding author: WANG Juan
  • Funding:
    2023 Anhui Provincial Key Project in Humanities and Social Sciences, "Research on Paths to Enhancing the Public-Opinion Guidance Capacity of New Mainstream Media in Anhui Province in the Era of Intelligent Communication"
  • Received: 2024-03-25  Revised: 2024-06-17  Online: 2025-06-15  Published: 2025-06-15

Abstract: In the era of generative AI, the privacy crisis is intensifying. Current research focuses mainly on the new modes of privacy invasion enabled by large models and on countermeasures against them, but pays little attention to a more fundamental problem: the generative information privacy divide. This divide is the widening gap between the "ought" and the "is" of individuals' information privacy, driven by the predictive-analytic capabilities and generative functions of multimodal large models, which strengthen the privacy cognition and manipulative power that other agents, human and machine alike, hold over individuals. The concept helps to reveal more directly the "entanglement" among AI technology, privacy subjects, and privacy itself. The generative information privacy divide may give rise to privacy alienation at the ontological level, cognitive reification at the epistemological level, and behavioral "domestication" at the praxeological level. Addressing these risks hinges on two strategies: first, achieving three major paradigm shifts in regulation; second, pursuing heteronomy, technonomy, and autonomy in parallel.
