
highplainsdem

(62,227 posts)
Mon Jan 26, 2026, 11:47 AM

Deepfake 'Nudify' Technology Is Getting Darker--and More Dangerous (Wired)

https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/

Over the past year, WIRED has tracked how multiple explicit deepfake services have introduced new functionality and rapidly expanded to offer harmful video creation. Image-to-video models now typically need only one photo to generate a short clip. A WIRED review of more than 50 “deepfake” websites, which likely receive millions of views per month, shows that nearly all of them now offer explicit, high-quality video generation and often list dozens of sexual scenarios in which women can be depicted.

Meanwhile, on Telegram, dozens of sexual deepfake channels and bots have regularly released new features and software updates, such as different sexual poses and positions. For instance, in June last year, one deepfake service promoted a “sex-mode,” advertising it alongside the message: “Try different clothes, your favorite poses, age, and other settings.” Another posted that “more styles” of images and videos would be coming soon and users could “create exactly what you envision with your own descriptions” using custom prompts to AI systems.

-snip-

A WIRED review found more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. “Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram’s terms of service,” a Telegram spokesperson says, adding that it removes content when it is detected and has removed 44 million pieces of content that violated its policies last year.

-snip-

Multiple experts WIRED spoke to said many of the communities developing deepfake tools have a “cavalier” or casual attitude to the harms they cause. “There's this tendency of a certain banality of the use of this tool to create NCII or even to have access to NCII that are concerning,” says Bruna Martins dos Santos, a policy and advocacy manager at Witness, a human rights group.

-snip-



Generative AI is normalizing this abusive behavior, by making it so easy.

MineralMan

(151,281 posts)
1. This is going to continue to be a problem for women, especially young women and girls.
Mon Jan 26, 2026, 12:44 PM

As it gets easier to create such images and videos, more of it will occur. So, if you encounter any such images or videos, please report them to the venue where you saw them. That is a critical step in squashing this destructive trend.

highplainsdem

(62,227 posts)
2. It will be even worse if those perverts wear the smart glasses and other AI-enabled wearables the tech
Mon Jan 26, 2026, 01:04 PM

companies want them to buy and wear: https://www.democraticunderground.com/100220964224

We need very harsh penalties for adults using those tools, for the nudify platforms, and for the AI companies whose tools can be used this way. And kids need to be taught that this isn't harmless or cool, and made very aware of the penalties.

MineralMan

(151,281 posts)
3. I hadn't heard about those.
Mon Jan 26, 2026, 01:16 PM

It's likely that things will get pretty ugly before they get a handle on it. Fortunately, it's easy to avoid it on the viewer end. Just don't go there.

Trouble is, a lot of people want to go there. That's a crying shame!

I have an old story from the early 1960s that is relevant. My high school girlfriend and my best friend's girlfriend thought it would be fun to take a couple of topless photos of themselves and give them to their respective boyfriends. Now, I did not mind, of course, being 16 years old, but I hid that photo really well. My girlfriend told me that she and the other girl used their parents' Polaroid camera to take them. I'm sure we treasured those photos.

When, as is inevitable with high school couples, we broke up, I gave back the picture. My best friend, though, ended up marrying his girlfriend a few years later, so he kept his.

Such things are innocent by nature, but that's not how the Internet works at all. The likelihood that photos like that end up widely distributed is high.

highplainsdem

(62,227 posts)
4. Good for you for giving the photo back.
Mon Jan 26, 2026, 01:46 PM

With genAI, though, nude photos aren't necessary.

And creating nude photos with AI isn't necessary to shame/harass women, either. Not just Grok, but ChatGPT and Gemini have been used to create nonconsensual AI "bikini photos" of women who dress very modestly because of their culture/religion.

A lot of the "bikini photos" Grok was asked to generate on X specified see-through material, too.

MineralMan

(151,281 posts)
5. Right.
Mon Jan 26, 2026, 01:54 PM

There have been a couple of incidents of that near where I am. Girls who had no idea it was happening were made aware of AI porn of them that was being distributed at their school.

I don't know how this is going to get stopped, frankly. Once the technology is there, it's very difficult to put it back in its cage.

As for my story, I couldn't keep that photo. My conscience wouldn't let me. It just would have been wrong.

highplainsdem

(62,227 posts)
6. So sorry to hear about it happening at schools near you. What happened to the boys who did that?
Mon Jan 26, 2026, 02:27 PM

I don't know how this is going to get stopped, frankly. Once the technology is there, it's very difficult to put it back in its cage.


It has to be dealt with through penalties. For the platforms and tech companies as well as the individuals.

As for my story, I couldn't keep that photo. My conscience wouldn't let me. It just would have been wrong.


Having a conscience is important, and it's always a good thing.

I won't use genAI at all because my conscience won't let me. It's all based on theft of intellectual property, and it causes a lot of other harms. I don't understand how anyone can use it if they're aware of the IP theft and the harm it causes.

We don't have to accept the AI companies' propaganda that genAI use is inevitable and a good thing. We can reject it and penalize it. We can use human intelligence and refuse to let the robber baron AI bros tell us we have to use their generative AI tools and accept the IP theft, the dumbing-down and de-skilling, the damage to our natural environment, the pollution of our information ecosystem, the increased surveillance, and the loss of jobs as the AI bros accumulate more power and wealth.