Artificial intelligence should not undermine human progress - 2025-06-29

Digital divide shouldn't be underestimated
In the context of the information economy, digital work has emerged as a new form of labor performed by millions of people participating in activities on digital networks. This work, often unpaid, is appropriated by major companies to generate substantial profits through advertising. The accumulation and processing of personal data from users of search engines, social networks, and other software allows companies to tailor advertising and plan the production of goods and services accordingly. This creates a cycle in which users are both the producers and consumers of digital content, resulting in economic overexploitation and widening inequality.
The true digital divide lies not between those who have access to devices and those who do not, but between users and the large corporations that collect, process, and sell their data. These global corporations wield power that exceeds that of many states, posing a significant challenge to digital sovereignty and human rights, particularly regarding privacy and data protection. Techno-capitalism has widened the gap between work and the appropriation of digital wealth, and the introduction of surveillance and digital intelligence systems threatens human rights further.
The European Union's regulatory efforts to protect data face significant challenges due to a lack of independent digital infrastructures. This impotence erodes trust in state institutions, which are incapable of ensuring the autonomy and independence of their digital assets and thus leave citizens unprotected from the abusive practices of large tech companies. The growing dependence on these corporations highlights the need for a strategy that prioritizes social purpose and human rights.
Consequently, states must reappropriate data production and management infrastructures in order to regain their digital sovereignty and direct it toward democratic goals. Ensuring universal access to digital infrastructures and an equitable distribution of the benefits of digital work requires developing digital tools and artificial intelligence within a collaborative international framework. This aligns with China's concept of a shared future for mankind, which contrasts with the Western discourse that views digital technology as a threat.
From a domestic standpoint, digital sovereignty offers the opportunity to create structures for free and equal access to digital media and to initiate a public debate about data ownership and usage, aiming to direct artificial intelligence toward achieving social benefits. China's experience serves as an example of how digital tools can achieve prosperity and could guide other countries in pursuing digital development without gaps between capital and work, or between knowledge and the distribution of its benefits.
At the international level, equity in a multipolar framework could give rise to non-competitive systems of cooperation, allowing substantial growth in knowledge and the sharing of experience for the benefit of all. In short, digital intelligence can shift from being a threat to becoming an effective mechanism for economic, scientific, and academic development, with China and the European Union potentially complementing each other in this endeavor.
The author is Juan Carlos Utrera García, a professor of philosophy of law at the National University of Distance Education (UNED) in Spain and an advisor to the Cátedra China Foundation
Navigating the AI regulatory paradox
In the contemporary global landscape, the intricate relationship between regulation, development, and human rights protection has emerged as a pivotal yet paradoxical challenge. The rapid advancement of artificial intelligence presents a regulatory paradox: the need to foster innovation while simultaneously safeguarding human rights.
In recent years, major developed economies have been contemplating the introduction of significant new regulatory frameworks targeting AI. Reality, however, has diverged significantly from expectations, with the US, Japan and the EU either delaying or diluting their regulatory efforts.
This trend toward deregulation is partly due to the complexity of AI risks and the insufficient capacity to assess the safety of advanced AI models. Moreover, AI technology occupies a strategically vital position in national security and social development. The US, for instance, prioritizes maintaining its global hegemony over implementing restrictive AI regulations. As a result, comprehensive international regulation aimed at safeguarding fundamental human rights remains an idealistic aspiration.
The criminal offenses resulting from the "misuse" of AI severely infringe on fundamental human rights, including the rights to life, health, and privacy. The "black box" nature of AI can lead to violations of human rights, posing a new type of challenge that arises from the collision between technological development and ethical norms. Where governments choose to forgo their regulatory responsibilities, they not only bestow undue power on large tech companies but also allow technology to grow "wildly", turning the realization of human rights into an "empty promise".
In the realm of social media, protecting young users has become a critical issue. Social media platforms pursue profitability through advertising models that sell user attention to advertisers. These platforms often expose young users to harmful content more quickly and more frequently than they do adults, compromising minors' safety and mental health. The EU focuses on transparency, accountability, and proactive measures to protect minors through regulations, while the US follows a more decentralized approach, relying on industry self-regulation. China adopts a comprehensive and proactive approach, emphasizing a safe and healthy online environment for minors.
The self-regulation model of App privacy policies is caught in a deep contradiction between "formal compliance" and "substantive infringement". Developers leverage technical advantages to turn policy texts into legal veneers for evading substantive obligations.
Synthetic data offers a solution to the "data depletion" dilemma in AI development by simulating the properties of real-world data. However, it is not a foolproof method of privacy protection. Synthetic data carries re-identification risks, as anonymization techniques can still leak personal information, and AI models trained on synthetic data may inadvertently disclose sensitive records. These risks are complex and well concealed, and often go unnoticed by users who lack technical knowledge. This creates a power imbalance that highlights the need for governmental intervention: clear legal frameworks and regulatory mechanisms are essential to protect user privacy effectively.
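To make the re-identification risk concrete, here is a minimal sketch in Python. The toy dataset and the deliberately naive generator are invented for illustration and are not drawn from the article; the point is only that synthetic rows can coincide exactly with real individuals' records, and that rare outliers are the most identifying when they leak:

    import random
    from collections import Counter

    # Hypothetical toy records: (age, postal code, diagnosis).
    # All values are invented for illustration.
    real_records = [
        (34, "10001", "flu"), (34, "10001", "flu"),
        (35, "10002", "cold"), (36, "10001", "cold"),
        (87, "10003", "rare_disease"),  # a unique outlier
    ]

    def naive_synthesizer(records, n, seed=0):
        """Generate 'synthetic' rows by sampling each attribute
        independently from its empirical marginal distribution.
        Deliberately naive: even a generator that never copies
        whole rows on purpose can recreate them by chance."""
        rng = random.Random(seed)
        columns = list(zip(*records))  # one tuple per attribute
        return [tuple(rng.choice(col) for col in columns)
                for _ in range(n)]

    synthetic = naive_synthesizer(real_records, n=1000)

    # Re-identification check: count synthetic rows that exactly
    # match a real record. A leaked unique record (the outlier)
    # can single out a real person despite "anonymization".
    real_set = set(real_records)
    matches = Counter(row for row in synthetic if row in real_set)
    print("exact matches with real records:", sum(matches.values()))
    print("outlier leaked:",
          matches[(87, "10003", "rare_disease")], "times")

In this toy setup the unique outlier typically reappears several times per thousand synthetic rows purely by chance. Real synthesizers are far more sophisticated, but model memorization can produce the same effect, which is why the risks described above are easy to overlook.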
In conclusion, the balance between AI development and human rights protection requires a global consensus on AI safety and governance. AI should not be used merely as a tool or weapon for competition. Instead, its development must be guided by ethical considerations, ensuring that technological advancements do not come at the expense of human dignity and rights.
The author is Li Juan, a researcher at the Human Rights Research Center and an associate professor at the School of Law, Central South University
Source: China Daily