What kind of future awaits us when machines can truly understand, learn, and reason at a human level? Will it bring a comprehensive leap in medicine, education, and scientific research, or an existential crisis in which the technology slips out of control? At the recent annual meeting of the Zhongguancun Forum, Zhu Songchun argued that artificial general intelligence is one of the next frontiers. At almost the same time, Google's DeepMind team released a 145-page AI safety report stating that artificial general intelligence (AGI) may appear within five years, and warning that it could pose the extreme risk of "permanent destruction of humanity". As the technological singularity draws nearer, we stand at the dawn of breaking through the boundaries of human cognition, yet also in the shadow of unknown risks.
1. Frontier opportunities: infinite possibilities for innovation
From the perspective of technological development, artificial general intelligence has made notable breakthroughs in recent years. Large language models such as OpenAI's GPT-4 exhibit striking multitasking ability: they can hold fluent natural-language conversations and assist with routine tasks such as copywriting and problem solving, and they also perform well in complex logical reasoning, code writing, and creative writing, producing high-quality results by grasping the nature and requirements of a task. Google DeepMind's Gemini Ultra model combines reinforcement learning techniques with large-model experience and surpasses GPT-4 on some benchmarks, further demonstrating the pace of progress. In multimodal fusion, general-purpose AI systems are gradually achieving integrated understanding and processing of text, images, audio, and other data types.
At the level of industrial application, artificial general intelligence is deeply empowering many industries and becoming a core force driving industrial upgrading and innovation. In health care, it can comprehensively analyze massive volumes of medical data, including patients' records, imaging, and genetic information, helping doctors diagnose diseases early and formulate precise treatment plans. By learning from large numbers of clinical cases and the latest research results, an AI system can uncover latent disease patterns and treatment clues, providing a reliable reference for medical decision-making, improving the efficiency and quality of care, and saving more lives.
Figure: "Tongtong", billed as the world's first general-purpose intelligent agent (Source: Courtesy of the Beijing Institute for General Artificial Intelligence)
2. Risk analysis: examination of potential crises
However, the DeepMind report's account of AGI's risks sounds a wake-up call. Misuse tops the list: as AGI technology spreads, malicious actors are highly likely to use it to generate convincing disinformation and manipulate public opinion. In the information age, disinformation spreads and does damage at an ever-accelerating rate, which may trigger a crisis of social trust and disrupt normal social order. For example, deepfake videos of politicians' speeches are enough to mislead the public and sway major events such as elections.
The risk of misalignment is equally intractable: in executing complex tasks, an AGI system can produce results contrary to expectations because its goals were poorly specified or it misunderstood human values. If an AGI system responsible for allocating urban resources were oriented purely toward maximizing economic returns, ignoring social equity and the needs of disadvantaged groups, it could concentrate resources excessively, widening the gap between rich and poor and intensifying social conflict.
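The resource-allocation scenario above is, at bottom, a problem of objective misspecification. A minimal toy sketch (entirely hypothetical, not from the article or the DeepMind report) makes the point concrete: a greedy allocator that scores districts purely by economic return pours everything into the most profitable district, while adding a simple fairness penalty to the same objective spreads the allocation out.

```python
# Hypothetical illustration of objective misspecification: the same greedy
# allocator behaves very differently depending on what its objective rewards.

def allocate(returns, budget, fairness_weight=0.0):
    """Greedily assign `budget` unit grants across districts.

    Each district i yields returns[i] per unit. With fairness_weight == 0 the
    objective is pure economic return; a positive weight penalizes districts
    that have already received grants, nudging allocation toward equity.
    """
    counts = [0] * len(returns)
    for _ in range(budget):
        # Marginal score of one more unit: raw return minus a penalty that
        # grows with how much the district has already received.
        scores = [r - fairness_weight * c for r, c in zip(returns, counts)]
        counts[scores.index(max(scores))] += 1
    return counts

returns = [10, 6, 3]  # district 0 is the most "profitable"
print(allocate(returns, 9))                     # pure profit → [9, 0, 0]
print(allocate(returns, 9, fairness_weight=5))  # with fairness term → [4, 3, 2]
```

The point is not the toy algorithm itself but that nothing in the profit-only objective is "broken": the system optimizes exactly what it was told to, and the inequity comes from what the objective left out.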
3. The focus of controversy: the game between the pace of development and the ethical boundary
The controversy surrounding the development of AGI has never stopped. The primary dispute is over the pace of development. Techno-optimists, encouraged by AGI's huge potential, advocate accelerating research and development, believing that rapid breakthroughs can seize the opportunity and help solve global problems such as the energy crisis and climate change. The cautious camp, echoing the DeepMind report, worries that the technology is still unstable and that blind acceleration could let the risks slip out of control. They point out that until AGI's underlying operating logic and safety mechanisms are fully understood and perfected, developing too fast is tantamount to walking on thin ice; once it cracks, the consequences are unimaginable.
The definition of ethical boundaries is also at the heart of the controversy. Once AGI has some degree of independent decision-making ability, moral responsibility for its actions becomes a problem. If an AGI causes harm while performing a task, should responsibility fall on the developer, the user, or the AGI itself? Taking medical malpractice as an example: if an error occurs during AGI-assisted surgery, how are patients' rights and interests to be protected? At the same time, ethical norms for data collection and use in AGI development need to be clarified. When large amounts of personal data are gathered to train AGI, how to fully respect personal privacy and prevent data abuse while keeping the data usable has become a key issue on the road ahead.