North Korean hackers use AI to forge military IDs

Hackers from North Korea and China are exploiting AI to forge IDs, infiltrate companies, and supercharge espionage campaigns.

A North Korean hacking group known as Kimsuky used ChatGPT to generate a draft of a fake South Korean military ID. The forged ID images were then attached to phishing emails impersonating a South Korean defense institution responsible for issuing credentials to military-affiliated officials. South Korean cybersecurity firm Genians revealed the campaign in a recent blog post. ChatGPT has safeguards that block attempts to generate government IDs, but the hackers got around them: Genians said the model produced realistic-looking mock-ups when prompts were framed as “sample designs for legitimate purposes.”

AI-Generated Virtual ID Card (Partially Masked). Credit: Genians

How North Korean hackers use AI for global espionage

Kimsuky is no small-time operator. The group has been tied to a string of espionage campaigns against South Korea, Japan, and the US. Back in 2020, the US Department of Homeland Security said Kimsuky was “most likely tasked by the North Korean regime with a global intelligence-gathering mission.” Genians, which uncovered the fake ID scheme, said this latest case underscores just how much generative AI has changed the game.

Sandy Kronenberg, CEO and Founder of Netarx, a cybersecurity and IT services company, warned:

“Generative AI has lowered the barrier to entry for sophisticated attacks. As this case shows, hackers can now produce highly convincing fake IDs and other fraudulent assets at scale. The real concern is not a single fake document, but how these tools are used in combination. An email with a forged attachment may be followed by a phone call or even a video appearance that reinforces the deception. When each channel is judged in isolation, attacks succeed. The only sustainable defense is to verify across multiple signals such as voice, video, email, and metadata, in order to uncover the inconsistencies that AI-driven fraud cannot perfectly hide.”

Metadata of the PNG File (Partially Masked). Credit: Genians

Chinese hackers also exploit AI for cyberattacks

North Korea is not the only country using AI for cyberattacks. Anthropic, an AI research company and the creator of the Claude chatbot, reported that a Chinese hacker used Claude as a full-stack cyberattack assistant for over nine months. The hacker targeted Vietnamese telecommunications providers, agriculture systems, and even government databases.

According to OpenAI, Chinese hackers also tapped ChatGPT to build password brute-forcing scripts and to dig up sensitive information on US defense networks, satellite systems, and ID verification systems. Some operations even leveraged ChatGPT to generate fake social media posts designed to stoke political division in the US.

Google has seen similar behavior with its Gemini model. Chinese groups reportedly used it to troubleshoot code and expand access into networks, while North Korean hackers leaned on Gemini to draft cover letters and scout IT job postings.

Illustration of Attack Scenario. Credit: Genians

Why AI-powered hacking threats matter now

Cybersecurity experts say this shift is alarming. AI tools make it easier than ever for hackers to launch convincing phishing attacks, generate flawless scam messages, and hide malicious code.

Clyde Williamson, Senior Product Security Architect at Protegrity, a data security and privacy company, explained:

“News that North Korean hackers used generative AI to forge deepfake military IDs is a wake-up call: The rules of the phishing game have changed, and the old signals we relied on are gone. For years, employees were trained to look for typos or formatting issues. That advice no longer applies. They tricked ChatGPT into designing fake military IDs by asking for ‘sample templates.’ The result looked clean, professional and convincing. The usual red flags – typos, odd formatting, broken English – weren’t there. AI scrubbed all that out.”

He added:

“Security training needs a reset. We need to teach people to focus on context, intent and verification. That means encouraging teams to slow down, check sender info, confirm requests through other channels and report anything that feels off. No shame in asking questions. On the tech side, companies should invest in email authentication, phishing-resistant MFA and real-time monitoring. The threats are faster, smarter and more convincing. Our defenses need to be too. And for individuals? Stay sharp. Ask yourself why you’re getting a message, what it’s asking you to do and how you can confirm it safely. The tools are evolving. So must we. Because if we don’t adapt, the average user won’t stand a chance.”

 

How to protect yourself from AI-powered scams

Staying safe in this new environment requires both awareness and action. Here are steps you can take right now:

 

1) Slow down, verify, and use strong antivirus

If you get an email, text, or call that feels urgent, pause. Verify the request by contacting the sender through another trusted channel before you act. At the same time, protect your devices with strong antivirus software to catch malicious links and downloads.

Strong antivirus software on all your devices is the best safeguard against malicious links that install malware and expose your private information. It can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

 

2) Use a personal data removal service

Reduce your risk by scrubbing personal information from data broker sites, which removes sensitive details scammers often use in targeted attacks. No service can guarantee complete removal of your data from the internet, but a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites, which is what gives me peace of mind. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with details found on the dark web, making it harder for them to target you.

 


3) Check sender details carefully

Look at the email address, phone number, or social media handle. Even if the message looks polished, a small mismatch can reveal a scam.

 

4) Use multi-factor authentication (MFA)

Turn on multi-factor authentication (MFA) for your accounts. This adds an extra layer of protection even if hackers steal your password.

 

5) Keep software updated

Update your operating system, apps, and security tools. Many updates patch vulnerabilities that hackers try to exploit.

 

6) Report suspicious messages

If something feels off, report it to your IT team or your email provider. Early reporting can stop wider damage.

 

7) Question the context

Ask yourself why you are receiving the message. Does it make sense? Is the request unusual? Trust your instincts and confirm before taking action.

 

Kurt’s key takeaways

AI is rewriting the rules of cybersecurity. North Korean and Chinese hackers are already using tools like ChatGPT, Claude, and Gemini to break into companies, forge identities, and run elaborate scams. Their attacks are cleaner, faster, and more convincing than ever before. Staying safe means staying alert at all times. Companies need to update training and build stronger defenses. Everyday users should slow down, question what they see, and double-check before trusting any digital request.

Do you believe AI companies are doing enough to stop hackers from misusing their tools, or is the responsibility falling too heavily on everyday users? Let us know your thoughts in the comments below.

 

 

FOR MORE OF MY TECH TIPS & SECURITY ALERTS, SUBSCRIBE TO MY FREE CYBERGUY REPORT NEWSLETTER HERE

 

Copyright 2025 CyberGuy.com. All rights reserved. CyberGuy.com articles and content may contain affiliate links that earn a commission when purchases are made.
