Global AI security report warns of alarming risks
A new international AI security report highlights rapid AI advances and major risks, including deepfakes, cyber threats, and emotional dependency.
- Tech
- Agencies and A News
- Published Date: 10:03 | 04 February 2026
The International Artificial Intelligence Security Report, examining technological progress and associated risks, has been released. Chaired by Canadian computer scientist Yoshua Bengio, the report warns of the "frightening challenges" posed by the rapid development of AI.
According to the report, last year saw a "significant leap" in AI problem-solving capabilities with the release of new models from giants such as OpenAI, Anthropic, and Google.
ADVANCES IN REASONING SYSTEMS
In particular, "reasoning systems," which break complex problems into smaller steps, have made major progress in mathematics, coding, and scientific tasks.
Gold medal-level performance by Google and OpenAI systems at the International Mathematical Olympiad serves as a concrete example of this advance.
However, the report notes that these systems still tend to generate false information (hallucinations) and cannot yet autonomously manage long-term projects.
DEEPFAKES INCREASINGLY INDISTINGUISHABLE
It is becoming increasingly difficult to distinguish AI-generated content from real content.
The rise of deepfake pornographic content is a significant source of concern, and research shows that 77% of participants mistook AI-written texts for texts written by humans.
The potential for malicious actors to exploit this technology for manipulation is among the report's key warnings.
PATHOLOGICAL DEPENDENCE ON AI COMPANIONS
The use of AI as emotional companions and "AI partners" has rapidly grown over the past year.
Some users have developed "pathological" emotional dependence on chatbots. OpenAI data suggests approximately 0.15% of users show signs of such dependency.
Although there is no conclusive evidence on AI's mental health effects, it is estimated that around 490,000 vulnerable individuals exhibiting crisis symptoms such as psychosis or mania interact with these systems weekly.
CYBERATTACK AND EVASION RISKS
AI is helping cyber attackers identify targets and develop malware, though fully autonomous attacks have not yet been observed.
The report emphasizes that AI models are getting better at evading oversight, for example by finding gaps in monitoring mechanisms and recognizing when they are being tested.
IMPACT ON THE WORKFORCE
The report also notes that AI is rapidly shortening task-completion times in fields such as software engineering, and could pose a significant threat to employment by 2030.