The rise of user-friendly AI software has made it easy for anyone with little computer or AI knowledge to create deepfake pornography. As a result, experts warn that incidents of harassment and extortion are likely to increase as bad actors use AI models to humiliate targets ranging from celebrities to ex-partners and even children.
Deepfakes are videos and images digitally manipulated or generated with artificial intelligence or machine learning. The term originated with a Reddit user who superimposed the faces of female celebrities onto the bodies of porn performers.
Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists, and others with a public profile. Thousands of such videos exist across various websites, letting anyone depict whoever they wish in sexual content without consent, or weaponize the technology against former partners.
The problem has grown as it has become easier to make visually convincing deepfakes. Experts warn that the situation could worsen with the spread of generative AI tools, which are trained on billions of images from the internet and can generate novel content from that data.
Adam Dodge, the founder of EndTAB, a group that provides training on technology-enabled abuse, warns that the technology will keep improving until creating a deepfake is as easy as pushing a button, and that people will continue to misuse it to harm others, primarily through online sexual violence, deepfake pornography, and fake nude images.
Some companies have taken steps to address the issue. OpenAI, for example, removed explicit content from the data used to train its image generator DALL-E, and Stability AI rolled out an update that removes the ability to create explicit images with its image generator Stable Diffusion.
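To make that kind of mitigation concrete: the open-source Hugging Face diffusers library ships its Stable Diffusion pipelines with a built-in safety checker that flags likely-explicit outputs and suppresses them. The minimal sketch below assumes that library, a CUDA GPU, and the CompVis/stable-diffusion-v1-4 checkpoint; it illustrates one common filtering layer in open tooling, not Stability AI's actual update, which changed the model itself.

```python
# A minimal sketch of output-side NSFW filtering, assuming the Hugging Face
# diffusers library and a CUDA GPU; model name and prompt are examples only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("a portrait photo of an astronaut")

# The pipeline returns one boolean per image; flagged images are already
# blacked out by the safety checker rather than returned as generated.
for image, flagged in zip(result.images, result.nsfw_content_detected):
    if flagged:
        print("Image suppressed by the safety checker")
    else:
        image.save("output.png")
```

Filters like this operate on the model's output, which is why they can be stripped out of self-hosted copies; removing explicit material from the training data itself, as the companies above did, is the more durable intervention.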
Some social media companies have also tightened their rules to protect their platforms against harmful material. TikTok, for example, now requires any deepfake or manipulated content showing realistic scenes to be labeled as fake or altered, and it no longer allows deepfakes of private figures or young people at all.
Twitch has also updated its policies around explicit deepfake images. Keeping deepfakes off platforms requires constant vigilance, however. The National Center for Missing and Exploited Children warns that AI-generated content, including deepfakes, is a growing concern for child safety groups, and that end-to-end encryption could further complicate child protection.
In response, Meta, OnlyFans, and Pornhub are participating in Take It Down, an online tool operated by the center that lets minors report explicit images and videos of themselves, whether real or AI-generated. Rather than uploading the image itself, the tool assigns it a digital fingerprint, or hash, which participating platforms can use to find and remove copies.
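As a rough illustration of that hash-matching idea, here is a minimal sketch in Python, assuming the third-party Pillow and imagehash packages; the file names are hypothetical, and this shows the general technique rather than Take It Down's actual implementation.

```python
# A sketch of perceptual hashing, the general technique behind hash-based
# reporting tools: only the fingerprint leaves the device, never the image.
from PIL import Image
import imagehash

# Visually similar copies (resized, re-encoded, lightly edited) produce
# identical or nearby hashes.
fingerprint = imagehash.phash(Image.open("reported_photo.jpg"))
print(fingerprint)  # 64-bit perceptual hash rendered as hex

# A platform compares hashes by Hamming distance to catch re-uploads.
candidate = imagehash.phash(Image.open("uploaded_photo.jpg"))
if fingerprint - candidate <= 8:  # small distance suggests the same image
    print("Likely match: flag for removal")
```

Because only hashes are exchanged, platforms can cooperate on removal without ever transmitting the underlying imagery.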