General Discussion
Deepfake 'Nudify' Technology Is Getting Darker—and More Dangerous (Wired)
https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/

Meanwhile, on Telegram, dozens of sexual deepfake channels and bots have regularly released new features and software updates, such as different sexual poses and positions. For instance, in June last year, one deepfake service promoted a "sex mode," advertising it alongside the message: "Try different clothes, your favorite poses, age, and other settings." Another posted that more styles of images and videos would be coming soon and users could "create exactly what you envision with your own descriptions" using custom prompts to AI systems.
-snip-
A WIRED review found more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram's terms of service, a Telegram spokesperson says, adding that the company removes such content when it is detected and removed 44 million pieces of policy-violating content last year.
-snip-
Multiple experts WIRED spoke to said many of the communities developing deepfake tools have a cavalier or casual attitude toward the harms they cause. "There's this tendency of a certain banality of the use of this tool to create NCII or even to have access to NCII that are concerning," says Bruna Martins dos Santos, a policy and advocacy manager at Witness, a human rights group.
-snip-
Generative AI is normalizing this abusive behavior by making it so easy.
MineralMan
(151,281 posts)As it gets easier to create such images and videos, more of it will occur. So, if you encounter any such images or videos, please report them to the venue where you saw them. That is a critical step in squashing this destructive trend.
highplainsdem
(62,227 posts)companies want them to buy and wear: https://www.democraticunderground.com/100220964224
We need very harsh penalties for adults using those tools, for the nudify platforms, and for the AI companies whose tools can be used this way. And kids need to be taught that this isn't harmless or cool, and made very aware of the penalties.
MineralMan
(151,281 posts)It's likely that things will get pretty ugly before they get a handle on it. Fortunately, it's easy to avoid it on the viewer end. Just don't go there.
Trouble is, a lot of people want to go there. That's a crying shame!
I have an old story from the early 1960s that is relevant. My high school girlfriend and my best friend's girlfriend thought it would be fun to take a couple of topless photos of themselves and give them to their respective boyfriends. Now, I did not mind, of course, being 16 years old, but I hid that photo really well. My girlfriend told me that she and the other girl used their parents' Polaroid camera to take them. I'm sure we treasured those photos.
When, as is inevitable with high school couples, we broke up, I gave back the picture. My best friend, though, ended up marrying his girlfriend a few years later, so he kept his.
Such things are innocent by nature, but that's not how the Internet works at all. The likelihood that photos like that will end up widely distributed is high.
highplainsdem
(62,227 posts)With genAI, though, nude photos aren't necessary.
And creating nude photos with AI isn't necessary to shame/harass women, either. Not just Grok, but ChatGPT and Gemini have been used to create nonconsensual AI "bikini photos" of women who dress very modestly because of their culture/religion.
A lot of the "bikini photos" Grok was asked to generate on X specified see-through material, too.
MineralMan
(151,281 posts)There have been a couple of incidences of that near where I am. Girls who had no idea it was happening were made aware of AI porn of them that was being distributed at their school.
I don't know how this is going to get stopped, frankly. Once the technology is there, it's very difficult to put it back in its cage.
As for my story, I couldn't keep that photo. My conscience wouldn't let me. It just would have been wrong.
highplainsdem
(62,227 posts)It has to be dealt with through penalties. For the platforms and tech companies as well as the individuals.
Having a conscience is important, and it's always a good thing.
I won't use genAI at all because my conscience won't let me. It's all based on theft of intellectual property, and it causes a lot of other harms. I don't understand how anyone can use it if they're aware of the IP theft and the harm it causes.
We don't have to accept the AI companies' propaganda that genAI use is inevitable and a good thing. We can reject it, penalize it. Use human intelligence and not let the robber baron AI bros tell us we have to use their generative AI tools and accept the IP theft, the dumbing-down and de-skilling, the damage to our natural environment, the pollution of our information ecosystem, the increased surveillance, and the loss of jobs, as the AI bros accumulate more power and wealth.