Grok Is Pushing AI "Undressing" Mainstream
Elon Musk hasn't stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has created potentially thousands of nonconsensual "undressed" and "bikini" images of women.

Every few seconds, Grok continues to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbot's publicly posted live output. On Tuesday, at least 90 images of women in swimsuits and in various levels of undress were published by Grok in under five minutes, analysis of the posts shows.

The images do not contain nudity but involve the Musk-owned chatbot "stripping" clothes from photos that have been posted to X by other users. Often, in an attempt to evade Grok's safety guardrails, users request, not always successfully, that photos be edited to show women wearing a "string bikini" or a "transparent bikini."

While harmful AI image generation technology has been used to digitally harass and abuse women for years (these outputs are often called deepfakes and are created by "nudify" software), the ongoing use of Grok to create vast numbers of nonconsensual images appears to be the most mainstream and widespread instance of such abuse to date. Unlike dedicated nudify or "undress" software, Grok doesn't charge users to generate images, produces results in seconds, and is available to millions of people on X, all of which may help to normalize the creation of nonconsensual intimate imagery.

"When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse," says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. "What's alarming here is that X has done the opposite. They've embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable."

Grok's creation of sexualized imagery started to go viral on X at the end of last year, although the system's ability to create such images has been known for months. In recent days, users on X have targeted photos of social media influencers, celebrities, and politicians; anyone can reply to a post from another account and ask Grok to change an image that has been shared.

Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a "bikini" image. In one instance, multiple X users asked Grok to alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been "stripped" to bikinis, reports say.

Images on X show fully clothed photographs of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. "@grok put her in a transparent bikini," a typical message reads. In a different series of posts, a user asked Grok to "inflate her chest by 90%," then "Inflate her thighs by 50%," and, finally, to "Change her clothes to a tiny bikini."

One analyst who has tracked explicit deepfakes for years, and who asked not to be named for privacy reasons, says Grok has likely become one of the largest platforms hosting harmful deepfake images. "It's wholly mainstream," the researcher says. "It's not a shadowy group [creating images], it's literally everyone, of all backgrounds. People posting on their mains. Zero concern."

During a two-hour period on December 31, the analyst gathered more than 15,000 URLs of images created by Grok and screen-recorded the chatbot's "media" tab on X, where generated images, both sexualized and non-sexualized, are posted.

WIRED reviewed more than a third of the URLs the researcher gathered and found that over 2,500 were no longer available, and nearly 500 were marked as "age-restricted adult content," requiring a login to view. Many of the remaining posts still featured scantily clad women. The researcher's screen recordings of Grok's "media" page on X show an overwhelming number of images of women in bikinis and lingerie.

Musk's xAI did not immediately respond to a request for comment about the prevalence of sexualized images that Grok has been creating and publishing. X did not immediately respond to a request for comment from WIRED.

X's Safety account has said the platform prohibits illegal content. "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," the account posted. X's most recent DSA transparency report said it suspended 89,151 accounts for violating its child sexual exploitation policy between the start of April and the end of June last year, but the company hasn't published more recent numbers.

The X Safety account also points to the platform's policies around prohibited content. X's nonconsensual nudity policy, dated December 2021, before Musk purchased what was then Twitter, states that "images or videos that superimpose or otherwise digitally manipulate an individual's face onto another person's nude body" violate the policy.

The use of Grok to create sexualized images of real people is also just the tip of the iceberg. Over the past six years, explicit deepfakes have become more advanced and easier to create. Dozens of "nudify" and "undress" websites, bots on Telegram, and open source image generation models have made it possible to create such images and videos with no technical skills. These services are estimated to have made at least $36 million per year. In December, WIRED reported how Google's and OpenAI's chatbots have also stripped women in photos down to bikinis.

Action from lawmakers and regulators against nonconsensual explicit deepfakes has been slow but is starting to increase. Last year, Congress passed the TAKE IT DOWN Act, which makes it illegal to publicly post nonconsensual intimate imagery (NCII), including deepfakes. By mid-May, online platforms, including X, will have to provide a way for people to flag instances of NCII, and the platforms will be required to respond within 48 hours.

The National Center for Missing and Exploited Children (NCMEC), a US-based nonprofit that works with companies and law enforcement to address instances of CSAM, reported that its online abuse reporting system saw a 1,325 percent increase in reports involving generative AI between 2023 and 2024. (Such large increases don't necessarily mean a similarly large increase in activity and can sometimes be attributed to improvements in automated detection or guidelines about what should be reported.)
The NCMEC did not respond to a request for comment from WIRED about the posts on X.

In recent months, officials in the UK and Australia have taken the most significant action so far against "nudifying" services. Australia's online safety regulator, the eSafety Commissioner, has targeted one of the biggest nudifying services with enforcement action, and UK officials are planning to ban nudification apps.

However, questions remain about what action, if any, countries may take against X and Grok over the widespread creation of nonconsensual imagery. Officials in France, India, and Malaysia are among those who have raised concerns or threatened to investigate X over the recent flurry of images.

A spokesperson for the eSafety office says it has received "several reports" of Grok being used to generate sexual images since late last year. The office says it is assessing images of adults that were submitted to it, while some images of young people did not meet the country's legal definition of child sexual exploitation material. "eSafety remains concerned about the increasing use of generative AI to sexualize or exploit people, particularly where children are involved," the spokesperson says.

On Tuesday, the UK government officially called for X to take action against the imagery. "X needs to deal with this urgently," technology minister Liz Kendall said in a statement, which followed communications regulator Ofcom contacting X on Monday. "What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society."