Study Reveals Over 24 Million Visitors Use AI to Undress Women in Online Photos
In the ever-evolving landscape of technology, where innovation and growth often go hand in hand, a dark side emerges that tests the boundaries of ethics and privacy. Recent revelations about the growing popularity of apps and websites that use artificial intelligence (AI) to undress women in photos have raised concerns among researchers and privacy advocates. This disturbing trend highlights the alarming rise in non-consensual pornography due to rapid advances in AI technology.
According to a report by Bloomberg, 24 million people visited these undressing websites in September alone, a figure that points to a profound shift in how the technology is used and misused. Graphika, a social network analytics company, drew attention to this alarming growth, highlighting a troubling aspect of the digital age.
So-called "nudify" services have quietly infiltrated popular social networks as marketing channels: the number of links advertising undressing apps has risen by more than 2,400 percent on platforms such as X and Reddit since the beginning of the year. These services use AI to digitally undress people in photos, overwhelmingly targeting women. What makes the rise even more disturbing is that the source images are often taken from social media without the subject's consent or knowledge.
This troubling trend raises serious legal and ethical challenges and heightens the potential for harassment. Some advertisements suggest that customers can create nude images and send them to the person depicted, compounding the invasive nature of the technology. Google has stated that its policies prohibit sexually explicit content in ads and that it removes ads that violate them, but enforcement across platforms remains uneven.
Privacy experts warn that AI technology has made deepfake pornography increasingly accessible. Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, notes a shift toward ordinary people using these tools on ordinary targets, including high school and college students. Many victims never learn that doctored images of them exist, and those who do face serious obstacles in seeking law enforcement intervention or pursuing legal action.
Despite growing concerns, there is no federal law in the United States expressly prohibiting the creation of deepfake pornography. A recent case in North Carolina, in which a child psychiatrist was sentenced to 40 years in prison for using undressing apps on photos of patients, marked the first prosecution under the law banning the generation of child sexual abuse material.
In response to the alarming trend, social media platforms such as TikTok and Meta Platforms Inc. have taken steps to block keywords associated with nudify apps. TikTok has warned users that the search term "undress" may be associated with content that violates its guidelines, while Meta Platforms Inc. declined to comment.
As technology continues to evolve, the ethical and legal challenges posed by deepfake pornography emphasize the urgent need for comprehensive regulations to protect people from non-consensual and harmful use of AI-generated content. Maintaining a balance between technological innovation and privacy protection remains an ongoing struggle, but it is essential to protect the dignity and rights of people in the digital age.