South Korea Gripped by Deepfake Porn Sex Crime Crisis

South Korea appears to be in the grip of a non-consensual deepfake porn crisis, and female entertainment stars are the main targets of this rising form of online sex crime.

The startup Security Heroes found that of 95,820 sexually explicit deepfake videos analyzed, over half featured South Korean singers and actresses.

Deepfake porn consists of digitally manipulated videos or images in which a person's likeness is superimposed onto sexually explicit content, which is then shared online without the person's consent.

According to South Korea’s police agency, 297 cases of deepfake crimes of a sexual nature were reported in the first seven months of 2024, up from 180 in all of 2023 and almost double the number reported in 2021. Of the 178 people charged, 113 were teenagers, as per The Guardian.

Deepfake technology uses deep learning algorithms to create highly realistic but entirely fabricated depictions of individuals. Imagery is generated through a combination of artificial intelligence and machine learning techniques, enabling the creation of videos where it becomes difficult, if not impossible, to distinguish between the real and the fake.
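The early face-swap tools that popularized the term "deepfake" are widely reported to have used a shared-encoder autoencoder: one encoder learns a common representation of two people's faces, each person gets their own decoder, and swapping decoders at inference time maps one person's expression onto the other's face. The sketch below illustrates that idea only; the model sizes, layer shapes, and data are hypothetical stand-ins, not any real tool's code.

```python
# Illustrative sketch of the shared-encoder/dual-decoder autoencoder
# widely credited as the architecture behind early face-swap tools.
# All shapes and data below are hypothetical.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# One encoder is trained on faces of BOTH identities; each identity gets
# its own decoder. Swapping decoders maps A's expression onto B's face.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)       # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))    # B's identity, A's expression/pose
```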

Activists hold eye masks during a protest against deepfake sexual exploitation in Seoul on August 30, 2024. The rise in online sharing of non-consensual explicit deepfake images has sparked public outrage.

Anthony WALLACE/Getty Images

Lilian Coral, Vice President of Technology & Democracy Programs and Head of the Open Technology Institute at New America, told Newsweek: “One of the most popular uses of generative AI is image creation. We can now quickly produce any kind of image we can think of for relatively little to no cost.

“The disturbing downside of the widespread access to these tools is the proliferation of deepfakes and, more specifically, non-consensual sexual images.”

“Digital sexual exploitation, particularly of women, is not a new phenomenon, and it isn’t a localized one either. The Global Initiative Against Transnational Organized Crime cites data that shows a 1,530 percent increase in deepfake cases between 2022 and 2023 in the Asia-Pacific region, the second highest increase in the world after North America.”

Earlier this year, sexually explicit AI-generated images of Taylor Swift circulated on social media, underscoring that the deepfake problem extends well beyond South Korea.

“In many cases, even with legislation in place, addressing the growth of deepfakes requires action by the social media platforms through which they propagate—and public pressure to motivate these platforms to act. Earlier this year, we noted Taylor Swift’s online community put on a strong defense against non-consensual sexual images of the artist spreading across social media,” Coral said.

“The Swifties were able to find and identify the source of the content and flood X with #ProtectTaylorSwift posts that drowned out the images. At the same time, this raised the visibility needed to force companies like X and Microsoft to take action. This exemplifies how the power of the crowd can help us model more technical and public engagement strategies to combat this unfortunate phenomena,” she said.

South Korean protesters at an anti-deepfake rally on September 6, 2024, in Seoul. South Korea appears to be in the grip of an AI deepfake sexual exploitation crisis.

Chung Sung-Jun/Getty Images

According to The Conversation, deepfakes have become increasingly common in South Korea, with teenagers creating manipulated images and videos that often target women and girls, including minors.

Platforms like Telegram have become hubs for sharing deepfake content, leading to significant privacy violations and harm. An August report from The Guardian revealed that a Telegram channel had 220,000 members who were creating and sharing doctored images and videos. Many of the victims and perpetrators were minors.

Although legal frameworks have been introduced, such as the 2020 Act on Sexual Crimes, enforcement remains insufficient, with low indictment rates for offenders. In response to public outrage, the South Korean government announced plans to legislate against the possession and viewing of deepfakes.

A bill imposing a prison sentence for knowingly possessing or viewing deepfake pornography passed a parliamentary committee this week, according to The Korea Times.

“The legislation and judiciary committee passed the revision to the act on special cases concerning the punishment of sexual crimes amid public alarm over a surge in digital sex crimes using doctored pornographic images of girls and women,” it said.

“The revised act calls for punishing people possessing, purchasing, storing or viewing deepfake sexual materials and other fabricated videos with up to three years in prison or a fine of up to 30 million won ($22,537).”

Newsweek also spoke to Gihyuk Ko, a Senior Research Scientist at the KAIST Cyber Security Research Center (KAIST CSRC), which is “developing state-of-the-art technologies to combat the malicious use of deepfakes.”

Ko said that “The technology behind deepfakes emerged in 2014 with the introduction of Generative Adversarial Networks (GANs). Since then, several GAN-based methodologies such as StyleGAN have been developed to enhance the quality of deepfake image and video creation.”
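As a rough illustration of the adversarial setup Ko describes, the sketch below runs one GAN training step: a generator learns to produce fakes while a discriminator learns to tell them from real data, and each improves by competing with the other. The tiny fully connected networks and random data here are hypothetical stand-ins, nothing like StyleGAN-scale models.

```python
# Minimal single GAN training step; models and data are toy stand-ins.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784)      # stand-in for a batch of real images
noise = torch.randn(32, 100)

# Discriminator step: score real images as 1, generated fakes as 0.
d_opt.zero_grad()
d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
         bce(discriminator(generator(noise).detach()), torch.zeros(32, 1))
d_loss.backward()
d_opt.step()

# Generator step: adjust weights so fakes fool the discriminator.
g_opt.zero_grad()
g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
g_loss.backward()
g_opt.step()
```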

“Advancements in text-to-image models utilizing diffusion models and transformers, as well as multi-modal large language models (LLMs), have made it even easier for the malicious actors to generate and modify images and videos. Malicious actors can exploit these AI advancements, combining various models to their advantage. Notably, diffusion models can significantly enhance the realism of GAN-generated videos, enabling them to evade detection by AI-based detection systems.”
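The diffusion models Ko mentions work differently from GANs: a network is trained to predict the noise added to an image, and generation runs that prediction in reverse, gradually denoising pure static into a picture. A toy version of the standard DDPM sampling loop, with an untrained stand-in network and textbook schedule constants, might look like this:

```python
# Toy DDPM-style sampling loop: start from Gaussian noise and iteratively
# denoise with a noise-prediction network. The MLP here is an untrained
# stand-in; the beta schedule follows the common DDPM defaults.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

eps_model = nn.Sequential(nn.Linear(784 + 1, 256), nn.ReLU(),
                          nn.Linear(256, 784))

x = torch.randn(1, 784)  # pure noise, flattened 28x28 "image"
for t in reversed(range(T)):
    t_embed = torch.full((1, 1), t / T)                 # crude timestep input
    eps = eps_model(torch.cat([x, t_embed], dim=1))     # predicted noise
    mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
           / torch.sqrt(alphas[t])
    # Add fresh noise at every step except the last.
    x = mean + (torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else 0)
```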

“Current AI-driven tools for detecting deepfake content are quite effective, but they face two significant challenges,” Ko said. “First, as deepfake detection methods evolve, so do the techniques used to create deepfakes. If detection methods do not adapt to the latest creation techniques, they will quickly become outdated. The second issue is that simply detecting deepfake content is insufficient; the damage caused by harmful videos is often already done by the time they are identified.”

“While rapid detection is crucial, our ideal goal is to prevent or disrupt attackers from creating such videos in the first place.”
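For context on what the AI-based detection systems Ko references typically look like, a common baseline is a convolutional classifier trained to label individual video frames as real or fake. The sketch below is a generic, hypothetical example, not KAIST CSRC's actual detector; its closing comment reflects Ko's point that such models must be continually retrained as generation techniques evolve.

```python
# Generic frame-level deepfake detector baseline; architecture and data
# are hypothetical stand-ins.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 224 -> 112
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 112 -> 56
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),                                      # logit: fake vs. real
)

frames = torch.rand(8, 3, 224, 224)          # batch of face crops from videos
labels = torch.randint(0, 2, (8, 1)).float() # 1 = fake, 0 = real

loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()  # one training step; in practice this arms race never ends,
                 # since new generators produce artifacts old detectors miss
```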

Ko said that “The advancement of powerful AI can greatly benefit humanity in multiple ways, but it also poses risks of misuse by malicious actors. Deepfake pornography exemplifies these dangers, particularly because it has become dangerously easy for anyone, even those without expertise, to create such videos.”

Do you have a story Newsweek should be covering? Do you have any questions about this story? Contact LiveNews@newsweek.com.
