New Delhi, Feb 14 (IANS): Microsoft and OpenAI on Wednesday said hackers are using large language models (LLMs) such as ChatGPT to improve their existing cyberattack techniques.
The companies have detected attempts by Russian, North Korean, Iranian, and Chinese-backed groups to use tools like ChatGPT to research targets and build social engineering techniques.
In partnership with Microsoft Threat Intelligence, OpenAI disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities.
“We disrupted two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard,” said the Sam Altman-run company.
The identified OpenAI accounts associated with these actors were terminated. These bad actors sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a statement.
While attackers will remain interested in AI and will continue to probe the current capabilities and security controls of these technologies, it is important to keep these risks in context, the company said.
“As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” the tech giant noted.