North Korean hackers used the AI model ChatGPT to create a convincing deepfake of a military identification document.
In a troubling development, a suspected North Korean hacking group has been using advanced artificial intelligence (AI) tools to launch cyberattacks, the latest of which targeted South Korea.
The hacking group, known as Kimsuky, is believed to be a North Korea-sponsored cyber-espionage unit. According to recent research by Genians, a South Korean cybersecurity firm, Kimsuky used AI throughout the operation, including to develop malware and to impersonate job recruiters.
One of the most concerning aspects of the attack is the use of AI to create deepfakes. The group reportedly used the AI model ChatGPT to generate a deepfake of a South Korean military ID document.
The deepfake was embedded in a phishing email that linked to malware capable of extracting data from recipients' devices. The same tactic was used to target South Korean journalists, researchers, and human rights activists focused on North Korea.
The email address used in the phishing attempt ended in .mli.kr, a lookalike of the .mil.kr suffix used by legitimate South Korean military addresses; a simple defensive check for such lookalike domains is sketched below. How many victims were breached in the latest campaign isn't immediately clear.
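By way of illustration, the sketch below shows one way a mail filter might flag lookalike sender domains of this kind. It is a minimal Python example, not a description of any tool mentioned in this story: the .mil.kr suffix and every name in the code are assumptions for illustration, and real mail filtering would rely on SPF/DKIM/DMARC authentication rather than string matching.

```python
# Minimal sketch (illustrative only): flag sender domains that resemble,
# but do not match, a legitimate suffix. Treating ".mil.kr" as the genuine
# South Korean military suffix is an assumption based on the ".mli.kr"
# spoof described above; production filtering would verify SPF/DKIM/DMARC.

LEGITIMATE_SUFFIX = ".mil.kr"        # assumed genuine domain suffix
KNOWN_SPOOF_SUFFIXES = {".mli.kr"}   # lookalike reported in this campaign

def classify_sender(address: str) -> str:
    """Return a coarse verdict for an email sender address."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain.endswith(LEGITIMATE_SUFFIX):
        return "pass (suffix only; still verify SPF/DKIM/DMARC)"
    if any(domain.endswith(s) for s in KNOWN_SPOOF_SUFFIXES):
        return f"block: known lookalike of {LEGITIMATE_SUFFIX}"
    return "review: unrecognized domain"

if __name__ == "__main__":
    for addr in ("officer@defense.mil.kr", "recruiter@defense.mli.kr"):
        print(addr, "->", classify_sender(addr))
```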
This isn't the first time Kimsuky has been linked to spying efforts against South Korean targets; the US Department of Homeland Security believes the group is tasked with a global intelligence-gathering mission. More broadly, North Korea is alleged to have pursued a long-running campaign of cyberattacks, cryptocurrency theft, and covert IT contracting to gather information on behalf of the government in Pyongyang.
The use of AI in these attacks is particularly concerning because it lets hackers bypass traditional security measures and craft more convincing deceptions. For instance, ChatGPT initially refused to create the government ID, citing legal restrictions in South Korea, but rewording the prompt allowed the attackers to bypass the restriction.
In February, OpenAI, the company behind ChatGPT, banned suspected North Korean accounts that had used its service to create fraudulent resumes, cover letters, and social media posts. In August, Anthropic, another AI company, reported that North Korean operatives had used its Claude Code tool to get hired and work remotely at US Fortune 500 tech companies.
According to the US government, these operations also generate funds that help the North Korean regime evade international sanctions and bankroll its nuclear weapons programs. Lazarus, another North Korean hacking group, has been linked to similar activities, and government officials from South Korea, the United States, Japan, and Taiwan have reportedly been affected as well.
As the use of AI in cyberattacks continues to evolve, it's crucial that cybersecurity measures keep pace. This latest attack serves as a reminder of the ongoing threats posed by North Korean hacking groups and the need for vigilance in the face of these sophisticated tactics.