North Korea-linked hackers abuse ChatGPT in attacks on South Korean journalists and activists.

The North Korean hacking group Kimsuky allegedly used ChatGPT to craft counterfeit South Korean military identification cards for a phishing campaign designed to deceive its targets.

A state-sponsored hacking group with ties to North Korea, known as Kimsuky, is allegedly using OpenAI's AI chatbot, ChatGPT, to create fake South Korean military ID cards as part of a phishing campaign.

The campaign targets South Korean journalists, researchers, and human rights activists who focus on North Korea. By spoofing an official South Korean military email address ending in .mil.kr, the attackers aim to convince recipients that the emails are genuine.

The deepfake ID created by the attackers appears realistic, but it is intended to lend credibility to the phishing lure rather than to pass as an actual document.

Genians, the South Korean cybersecurity firm that uncovered the campaign, reported that North Korean hackers now use AI tools at various stages of cyberattacks, from planning and malware development to the impersonation of trusted entities. In this latest case, the hackers got around ChatGPT's safeguard against generating government ID images simply by rephrasing their requests.

This is not the first time Kimsuky has been involved in phishing. According to a 2020 advisory from the US Department of Homeland Security, Kimsuky conducts global intelligence-gathering operations on Pyongyang's behalf, and the group has previously been linked to intelligence operations against South Korea and other countries.

OpenAI has acted against such abuse before: earlier this year, it banned North Korean accounts that attempted to use its platform to create fraudulent resume and recruitment materials.

Instead of an authentic attachment, the phishing message linked to malware capable of stealing data from victims' devices. The concern lies in the damage that could be done with sensitive information stolen from these targeted individuals.

As the use of AI in cyberattacks becomes more prevalent, it is crucial for individuals and organisations to remain vigilant and implement robust security measures to protect against such threats.
