Friday, December 20, 2024

Microsoft and OpenAI say hackers are using AI tools like ChatGPT to improve cyberattacks – N.F Times


New York: Microsoft and OpenAI have said that hackers are already using large language models (LLMs) like ChatGPT to refine and improve their existing cyberattacks. In newly published research, the two companies detected attempts by Russian-, North Korean-, Iranian-, and Chinese-backed groups to use tools like ChatGPT to research targets, improve scripts, and build social engineering techniques.

In an accompanying blog post, Microsoft said, “Cybercrime syndicates, state-sponsored threat actors, and other adversaries are actively exploring the potential applications of emerging AI technologies to bolster their operations and evade security measures.”

The notorious Strontium group, linked to Russian military intelligence and also known as APT28 or Fancy Bear, has been identified as utilising LLMs to dissect satellite communication protocols, radar imaging technologies, and intricate technical parameters. The group, infamous for its involvement in previous high-profile cyber incidents, including the targeting of Hillary Clinton’s presidential campaign in 2016, has expanded its activities to basic scripting tasks facilitated by LLMs, such as file manipulation and data selection.

Meanwhile, a North Korean hacking outfit identified as Thallium has been employing LLMs to scout publicly reported vulnerabilities, orchestrate phishing campaigns, and refine their malicious scripts. Similarly, the Iranian group Curium has turned to LLMs to craft sophisticated phishing emails and code aimed at evading detection by antivirus software. Additionally, Chinese state-affiliated hackers are leveraging LLMs for diverse purposes including research, scripting, translations, and the enhancement of existing cyber tools.

Despite the absence of major cyber assaults utilising LLMs thus far, Microsoft and OpenAI have remained vigilant, dismantling accounts and assets associated with these malicious groups. Microsoft stressed, “This research serves as a crucial exposé of the preliminary, incremental steps observed from well-known threat actors, while also providing insights into our proactive measures to thwart and counter them alongside the defender community.”

Amid mounting concerns over the potential misuse of AI in cyber warfare, Microsoft has issued warnings regarding future threats, such as voice impersonation. The advent of AI-powered fraud, particularly in voice synthesis, poses a significant risk, where even brief voice samples can be utilised to fabricate convincing impersonations.

In response to the escalating AI-driven cyber threats, Microsoft is harnessing AI as a defensive tool. “AI presents adversaries with the opportunity to elevate the sophistication of their attacks, but we are equipped to combat this threat,” affirmed Homa Hayatyfar, principal detection analytics manager at Microsoft. “With over 300 threat actors under our radar, we leverage AI to fortify our protective measures, enhance detection capabilities, and swiftly respond to emerging threats.”

In a bid to empower cybersecurity professionals in this ongoing battle, Microsoft is rolling out Security Copilot, an AI-driven assistant designed to streamline breach identification and analysis amid the deluge of cybersecurity data. The company is also undertaking comprehensive software security overhauls in the aftermath of recent Azure cloud breaches and espionage by Russian hackers targeting Microsoft executives.

