CrowdStrike has uncovered a surge in remote infiltration by North Korean IT operatives. Over the past year, these scammers have secretly landed jobs at more than 320 companies, many of them in the United States. And they’re doing it all with help from AI tools.
What’s Happening & Why This Matters

The report shows a 220% increase in incidents involving North Korean workers securing freelance or full-time positions — often in software engineering, technical support, or IT development. Most companies believe they’re hiring legitimate overseas talent. But behind the screen, they’re dealing with state-sponsored deception.
CrowdStrike’s annual threat-hunting review paints a serious picture: North Korea’s strategy is working, and AI is accelerating its reach.
How They Do It
North Korean operatives are leveraging generative AI to create fake identities. These tools allow them to:
- Generate deepfake videos for live job interviews
- Create photorealistic profile images
- Write convincing résumés and cover letters
- Translate English and write computer code
The result? Scammers appear polished, professional, and trustworthy, fooling even seasoned hiring managers. One disturbing trend is the use of real-time deepfakes during live job interviews: with this technology, a single person can apply for the same job multiple times under different identities, dramatically boosting the odds of success while making detection harder.
CrowdStrike explains, “Using a real-time deepfake plausibly allows a single operator to interview for the same position multiple times using different synthetic personas.”
U.S. Investigators Step In

This isn’t just a corporate issue — it’s a national security concern. In June, U.S. investigators reported that over 100 companies had unknowingly hired North Korean workers. In one case, a woman in Arizona was arrested and jailed for helping North Koreans use company-issued laptops — right from inside the United States.
The concern goes beyond fake résumés. Hiring these individuals funnels money directly to the North Korean government, which is under strict international sanctions. It also exposes companies to cyber risks, including data theft and extortion.
CrowdStrike’s Warning
CrowdStrike urges companies to adopt better vetting measures. One suggestion: implement real-time deepfake detection during video interviews. These tests use physical actions — like waving a hand over one’s face — to expose AI distortions.
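This kind of challenge is straightforward to prototype. As a rough, hypothetical sketch (not CrowdStrike's tooling), the Python code below uses OpenCV and MediaPipe to run a hand-over-face check: the candidate is asked to wave a hand across their face, and the script verifies that a face detector briefly loses and then regains the face. The `occlusion_challenge` helper, frame count, and confidence threshold are illustrative assumptions; production detectors go further and analyze rendering artifacts around the occlusion itself.

```python
# Hypothetical liveness sketch, not CrowdStrike's product: OpenCV + MediaPipe.
# Function names, frame counts, and thresholds are illustrative assumptions.
import cv2
import mediapipe as mp

face_detector = mp.solutions.face_detection.FaceDetection(min_detection_confidence=0.6)

def face_visible(frame) -> bool:
    """Return True if the MediaPipe detector finds a face in this frame."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return bool(face_detector.process(rgb).detections)

def occlusion_challenge(capture: cv2.VideoCapture, max_frames: int = 150) -> bool:
    """Ask the candidate to wave a hand across their face, then check that the
    face is briefly lost and later regained -- a crude proxy for the
    hand-over-face test described above."""
    lost = regained = False
    for _ in range(max_frames):
        ok, frame = capture.read()
        if not ok:
            break
        if not face_visible(frame):
            lost = True          # something (ideally a hand) is covering the face
        elif lost:
            regained = True      # the face came back after the occlusion
    return lost and regained

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)    # default webcam
    print("Please wave a hand slowly across your face...")
    print("Occlusion pattern observed:", occlusion_challenge(cam))
    cam.release()
```

A check like this only confirms that the expected occlusion pattern occurred; real deepfake-detection systems also inspect the frames for warping, flickering, or lighting inconsistencies while the hand passes over the face.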
Companies are also encouraged to:
- Avoid relying solely on remote assessments
- Cross-verify identities using multiple methods
- Watch for overly polished résumés with generic experience
Without these safeguards, employers risk becoming unintentional funders of hostile regimes and targets of cybercrime.
TF Summary: What’s Next
The surge in North Korean remote infiltration highlights a fast-growing cybersecurity blind spot. As generative AI tools get better, bad actors gain new ways to manipulate job markets and sneak into trusted networks. Companies must upgrade their hiring processes and stay proactive. The deepfake era is here, disguised as your next remote hire.