Microsoft-Backed OpenAI Tools Being Used By Hackers In China, Iran, Russia

State-affiliated hackers tied to Russia's military intelligence, Iran's Revolutionary Guard and the Chinese and North Korean governments have been using AI tools from Microsoft-backed OpenAI to hone their hacking with large language models (LLMs), according to a report published on Wednesday.

The groups used the tools for scripting, phishing, vulnerability research, target reconnaissance, detection evasion and more, as outlined in a blog post by Microsoft Threat Intelligence. Once identified, the OpenAI accounts associated with these threat groups were terminated.

"The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse," the blog post says. "As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models."


Fancy Bear, a prolific cyberespionage group linked to the Russian military intelligence agency GRU, used LLMs to perform reconnaissance on radar-imaging technology and satellite communication protocols, activity Microsoft said may be related to Russia's military operations in Ukraine.

China's U.S. embassy spokesperson Liu Pengyu called the report "groundless smears and accusations against China" and advocated for the "safe, reliable and controllable" deployment of AI technology "to enhance the common well-being of all mankind," Reuters reported.

Microsoft said the hackers appeared to be "exploring and testing" the LLMs' capabilities, and that the researchers had not discovered any significant cyberattacks carried out with generative AI.

Cybercrime groups, nation-state threat actors and other adversaries are exploring and testing different AI technologies to understand their potential value to their operations and the security controls they may need to circumvent, the blog post notes.

"This is one of the first, if not the first, instances of a AI company coming out and discussing publicly how cybersecurity threat actors use AI technologies," Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI, told Reuters. The two companies described the use of the AI tools by the hackers as "early-stage" and "incremental."

OpenAI recently revealed it is developing a blueprint for evaluating the risk that LLMs could help someone create a biological threat. In an evaluation involving biology experts and students, the company found that GPT-4 provides at most a mild uplift in the ability to create such a threat. While this uplift is not large enough to be conclusive, the company said, the finding is a starting point for important research.
