State-sponsored hackers exploit AI for advanced cyberattacks

State-sponsored hackers are exploiting AI to accelerate cyberattacks, with threat actors from Iran, North Korea, China, and Russia weaponising models like Google’s Gemini to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have integrated artificial intelligence throughout the attack lifecycle – achieving productivity gains in reconnaissance, social engineering, and malware development during the final quarter of 2025.

“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in the report.

AI-powered reconnaissance by state-sponsored hackers targets the defence sector

Iranian threat actor APT42 used Gemini to augment reconnaissance and targeted social engineering operations. The group misused the AI model to enumerate official email addresses for specific entities and to research credible pretexts for approaching targets. By feeding Gemini a target’s biography, APT42 crafted personas and scenarios designed to elicit engagement.

The group also used the AI to translate between languages and better understand non-native phrases – abilities that help state-sponsored hackers bypass traditional phishing red flags like poor grammar or awkward syntax.

North Korean government-backed actor UNC2970, which focuses on defence targeting and impersonating corporate recruiters, used Gemini to synthesise open-source intelligence and profile high-value targets.
The group’s reconnaissance included searching for information on major cybersecurity and defence companies, mapping specific technical job roles, and gathering salary information.

“This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.

Model extraction attacks surge

Beyond operational misuse, Google DeepMind and GTIG identified an increase in model extraction attempts – also known as “distillation attacks” – aimed at stealing intellectual property from AI models. One campaign targeting Gemini’s reasoning abilities involved over 100,000 prompts designed to coerce the model into outputting its full reasoning process. The breadth of questions suggested an attempt to replicate Gemini’s reasoning ability across a range of tasks in non-English target languages.

How model extraction attacks work to steal AI intellectual property. (Image: Google GTIG)

While GTIG observed no direct attacks on frontier models from advanced persistent threat actors, the team identified and disrupted frequent model extraction attempts from private sector entities globally and from researchers seeking to clone proprietary logic. Google’s systems recognised these attacks in real time and deployed defences to protect internal reasoning traces.

AI-integrated malware emerges

GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach. HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code as responses.
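Because a downloader in this pattern must reach a well-known hosted-model endpoint over the network, one defensive angle is egress monitoring: flagging processes that contact generative AI API domains unexpectedly. The sketch below is a minimal, hypothetical illustration – the log format, domain list, and process allow-list are all assumptions for the example, not tooling described in the report:

```python
# Hypothetical sketch: flag proxy-log entries where a process not on an
# allow-list contacts a known generative AI API endpoint.

AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api.openai.com",
}

# Processes expected to talk to AI APIs in this (illustrative) environment.
ALLOWED_PROCESSES = {"chrome.exe", "approved-ai-client.exe"}

def flag_suspicious(log_lines):
    """Each line: '<process>,<destination-domain>' (illustrative format)."""
    alerts = []
    for line in log_lines:
        process, _, domain = line.strip().partition(",")
        if domain in AI_API_DOMAINS and process not in ALLOWED_PROCESSES:
            alerts.append((process, domain))
    return alerts

logs = [
    "chrome.exe,generativelanguage.googleapis.com",   # expected caller
    "svchost.exe,generativelanguage.googleapis.com",  # unexpected caller
]
print(flag_suspicious(logs))
# → [('svchost.exe', 'generativelanguage.googleapis.com')]
```

A production version would correlate with process reputation and TLS metadata, but even a crude allow-list surfaces the core anomaly: system binaries have no business calling a hosted-model API.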
The fileless secondary stage compiles and executes payloads directly in memory, leaving no artefacts on disk.

HONESTCUE malware’s two-stage attack process using Gemini’s API. (Image: Google GTIG)

Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange to harvest credentials, was built using the AI-powered platform Lovable AI.

ClickFix campaigns abuse AI chat platforms

In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services – including Gemini, ChatGPT, Copilot, DeepSeek, and Grok – to host deceptive content distributing ATOMIC malware targeting macOS systems. Attackers manipulated AI models into producing realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution”. By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host the initial stage of their attacks.

The three-stage ClickFix attack chain exploiting AI chat platforms. (Image: Google GTIG)

Underground marketplace thrives on stolen API keys

GTIG’s observations of English and Russian-language underground forums indicate persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models, instead relying on mature commercial products accessed through stolen credentials. One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development.
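Commercial model access through stolen credentials often starts with API keys leaked in public code or logs, which is why routine secret scanning matters for any organisation using hosted models. A minimal, hypothetical sketch follows – the “AIza” prefix and 39-character length match the commonly documented shape of Google API keys, while the sample string and function name are purely illustrative:

```python
import re

# Hypothetical sketch: find strings shaped like Google API keys
# ("AIza" prefix, 39 characters total). A real secret scanner would
# cover many providers and verify candidate keys before alerting.
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def find_candidate_keys(text):
    """Return all substrings that look like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(text)

# Illustrative config snippet with a fake (non-functional) key.
sample = 'config = {"gemini_key": "AIzaSyD_EXAMPLE0123456789abcdefghijklmn"}'
print(find_candidate_keys(sample))
# → ['AIzaSyD_EXAMPLE0123456789abcdefghijklmn']
```

Running a check like this in pre-commit hooks or CI, combined with key rotation and per-key usage quotas, limits how long a leaked key stays useful to resellers.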
GTIG’s investigation revealed Xanthorox was not a bespoke model but was actually powered by several commercial AI products, including Gemini, accessed through stolen API keys.

Google’s response and mitigations

Google has taken action against identified threat actors by disabling accounts and assets associated with malicious activity. The company has also applied this intelligence to strengthen both its classifiers and its models, enabling them to refuse assistance with similar attacks in future.

“We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.

GTIG emphasised that, despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape. The findings underscore the evolving role of AI in cybersecurity, as both defenders and attackers race to exploit the technology’s abilities.

For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report serves as an important reminder to strengthen defences against AI-augmented social engineering and reconnaissance operations.

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events.
This post appeared first on AI News.