Elon Musk’s Grok AI faces government backlash after it was used to create sexualized images of women and minors.


  • X users have used Elon Musk’s Grok image generator to create sexualized images of women without their consent.

  • Some AI image requests included photographs of minors.

  • French authorities are investigating AI deepfakes. India’s Ministry of Electronics and Information Technology wrote to X.

Elon Musk’s Grok is facing backlash after its AI image generator was used to create objectionable sexual images of real people, including minors.

In the past week, some X users have used Grok to digitally undress people in photos, with the AI model creating fake images of subjects showing more skin, wearing bikinis, or posed in altered body positions.

Some requests were consensual, such as OnlyFans models asking Grok to “undress” images of themselves. But others prompted Grok to “undress” images of adults who had not consented. Some of those images included minors, according to screenshots posted by concerned users on the social media platform and several examples seen by Business Insider.

xAI’s “acceptable use” policy prohibits “depicting the likeness of individuals” and “sexualizing or exploiting children.”

When asked for comment, xAI sent an automated email response that did not address the issue.

French authorities are investigating the rise of AI-generated deepfakes from Grok, the Paris prosecutor’s office told Politico. Distributing non-consensual deepfakes online carries a two-year prison sentence in France.

India’s Ministry of Electronics and Information Technology wrote a letter to the chief compliance officer of X’s India operations, describing reports of users distributing “images or videos of women in a derogatory or obscene manner to indecently humiliate them”.

The ministry asked X to conduct a “comprehensive technical, procedural and governance-level review” and remove any content that violates India’s laws.

Alex Davies-Jones, the United Kingdom’s minister for victims and violence against women and girls, urged Elon Musk, CEO of X’s parent company, xAI, to do something about the AI images.

“If you care so much about women, why do you allow X users to exploit them?” she wrote. “Grok can undress hundreds of women in a minute, often without the knowledge or consent of the person in the image.”

Davies-Jones also referred to a UK proposal that would make the creation and dissemination of sexually explicit deepfakes a criminal offence.

In response to an X user flagging screenshots they said showed Grok creating sexual images of minors, the official Grok account said the company had “identified flaws in security and is fixing them immediately” – although it’s unclear whether the response was reviewed by xAI staff or simply AI-generated.

“There have been isolated cases where users have prompted and received AI images depicting minors in minimal clothing, such as the example you referenced,” the official Grok account responded in a separate thread. “xAI has safeguards in place, but improvements are ongoing to completely block such requests.”

Deepfakes are an ongoing concern and moderation challenge for AI companies, though Musk has trumpeted Grok’s NSFW features.

In August, Grok’s image and video generator Imagine launched a “spicy” mode, where users can create AI-generated pornographic images of women. While the “spicy” option was not available for photo uploads, users could enter custom prompts, such as “take off shirt.”

Workers who trained Grok previously told Business Insider that they encountered sexually explicit material, including cases where users requested AI-generated child sexual abuse material (CSAM).

The Grok “undress” trend grew after Wired reported on December 23 that OpenAI’s ChatGPT and Google’s Gemini AI models were being used to generate images of real women in bikinis from clothed photos.

Individuals’ ability to fight nonconsensual AI deepfakes of themselves varies.

In the US, the Take It Down Act protects against objectionable deepfakes, although its scope depends on the age of the subject and the body parts displayed. For adults, the act covers only deepfakes that show genitalia or sexual activity. The law is stricter for minors, covering deepfakes intended to “abuse, humiliate, harass, or degrade” or “to arouse or gratify the sexual desire of any person.”

Some states have passed even stricter laws regarding the spread of deepfakes.

While AI-generated deepfakes raise more complex questions of liability, Section 230 of the Communications Decency Act of 1996 generally shields online platforms from liability for content posted by users.

Speaking to Business Insider in August, Alison Mahoney, an attorney focused on technology-enabled abuse, questioned whether AI-powered tools could “remove their immunity” by making platforms the creators of content rather than mere hosts.

“There needs to be clear legal pathways to be able to hold platforms accountable for abuse,” Mahoney said.
