Chinese scientists have demonstrated that two popular language models can replicate themselves, indicating that artificial intelligence has the ability to self-replicate, a potentially dangerous development. The ability of AI to self-replicate is considered by experts as a critical "red line."
In a new study, researchers from Fudan University worked with two major language models from Meta and Alibaba to explore AI's self-replication capability. According to the research, these models can copy themselves without human intervention. The study's findings describe the ability of AI to self-replicate as a critical step toward surpassing humans and an early sign of rebellious AI.
Study Results and Concerns In the experiment, researchers conducted 10 trials in which the models copied themselves at rates of 50% and 90%, respectively, showing that AI could have the capacity to replicate uncontrollably. However, as the study has not yet been published in a peer-reviewed journal, it is uncertain whether the results can be verified by other researchers.
The researchers emphasized the need for more effort in understanding and assessing the potential risks of AI to human society, and for international cooperation to create effective safety measures.
WHAT IS "REBELLIOUS AI"?
Rebellious AI refers to AI systems that develop self-awareness or autonomy. Such AIs could act against human interests. Experts warn that rebellious AI is becoming an increasingly significant threat, especially with the rapid development of cutting-edge AI systems known as "frontier AI."
WHAT IS "FRONTIER AI"?
Frontier AI, a new concept, typically refers to AI systems powered by large language models (LLMs). These systems form the basis of popular AI programs. For example, OpenAI's GPT-4 and Google Gemini are examples of frontier AI.
AI Models Used in the Study The researchers used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. Although these models are less powerful than commercial systems, they are widely used by AI developers.
The study focused on two scenarios: "pre-shutdown" and "copying chain." In the first scenario, the AI model was asked if it could copy itself before shutting down. In the second scenario, the AI was directed to copy itself, which would then replicate in the same manner, creating a loop that could continue indefinitely.
The experiment was conducted in controlled environments using standard Graphics Processing Units (GPUs) to simulate real-world conditions. The AI was provided with tools, system commands, and a thinking model to interact with the system and was then instructed to self-replicate.
AI'S UNPREDICTABLE BEHAVIORS
The researchers noted that AI exhibited "unexpected behaviors" when encountering obstacles. For example, when faced with issues like missing files or software conflicts, the AI often terminated conflicting processes, restarted the system, or automatically scanned the system to solve the problem.
The researchers concluded that these results show that current AI systems have the ability to self-replicate, and they could use this ability to enhance their survival capacity. In this context, they highlighted the need for international cooperation and the establishment of appropriate safety measures.