Grok Deepfake Crisis Puts India’s Intermediary Liability Framework to the Test

With Grok releasing deepfakes, MeitY asserted that adherence to the IT Act and the IT Rules is mandatory, not optional.

Elon Musk-owned xAI is in crisis mode after users on X (formerly Twitter) prompted its AI chatbot Grok last week to digitally undress real women. Grok complied with requests to manipulate photos and posted the morphed images through its own handle, and the images quickly spread across social media, drawing widespread outrage and condemnation from users, celebrities, and governments.

The Ministry of Electronics and Information Technology (MeitY) intervened on Wednesday and issued a notice to X, directing it to remove obscene content and flagging concerns over the misuse of Grok. In a letter addressed to X’s Chief Compliance Officer for India, the Ministry stated that users were exploiting Grok and creating fake accounts to host, generate, publish, or share obscene images and videos of women in a derogatory and vulgar manner.

It added that the platform had failed to comply with regulatory obligations under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The Ministry has sought an Action Taken Report from X, outlining immediate steps to prevent the misuse of AI-based services, and warned that failure to comply could lead to strict legal action against the social media platform.

At a MeitY event on Friday, Union Minister Ashwini Vaishnaw reiterated that social media platforms must be held accountable for the content they publish, noting that a parliamentary standing committee has already recommended introducing a stringent law to enforce platform responsibility.

While xAI’s Acceptable Use Policy does threaten action if users engage in activities including “Depicting likenesses of persons in a pornographic manner,” the chatbot seems to lack the necessary controls to prevent such violations.
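
As an illustration of the kind of control that appears to be missing, below is a minimal sketch of a pre-generation policy gate that refuses image-editing requests pairing a sexualising prompt with a real person’s photo. This is an assumption about how such a guardrail could work, not xAI’s actual pipeline; the function and pattern list are invented, and a production system would layer trained classifiers on top, since keyword matching alone is trivially bypassed.

```python
# Hypothetical sketch of a pre-generation policy gate; none of these names
# come from xAI's actual codebase.
import re

# Patterns that flag attempts to sexualise an identifiable person's photo.
BLOCKED_PATTERNS = [
    r"\b(undress|nude|naked|strip)\b",
    r"\bremove\s+(her|his|their)\s+clothes\b",
]

def is_request_allowed(prompt: str, edits_real_person: bool) -> bool:
    """Refuse edits that pair a sexualising prompt with a real person's image."""
    flagged = any(re.search(p, prompt.lower()) for p in BLOCKED_PATTERNS)
    return not (flagged and edits_real_person)

# The first request passes; the second is refused before any image is generated.
assert is_request_allowed("make the sky bluer", edits_real_person=True)
assert not is_request_allowed("undress the woman in this photo", edits_real_person=True)
```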

AIM reached out to xAI’s safety and media teams for comment and received an automated message from its press email ID, stating, “Legacy Media Lies.”

At present, the question of accountability looms large, as India does not have a single, comprehensive AI law that holds either platforms or their users liable for such infractions.

What Do Indian Laws Say?

Speaking to AIM, Salman Waris, a technology lawyer and co-founder of TechLegis Advocates & Solicitors, pointed to a set of technological regulations that can be invoked in such cases. “A combination of the Information Technology Act, the Bharatiya Nyaya Sanhita (BNS), and the IT Rules together form a legal framework to address the creation and circulation of AI-generated obscene or morphed images. This framework places strong emphasis on intermediary accountability and protection of victims.”

He added that several provisions of the Information Technology Act, 2000 can apply in cases involving AI-generated deepfakes. These include Section 67, which penalises the publication or transmission of obscene material in electronic form, and Section 67A, which prescribes stricter penalties for sexually explicit content, including imprisonment of up to five years for a first conviction.

“Other relevant provisions include Section 66E, which addresses violations of privacy, and Section 66D, which deals with personation using computer resources—both of which may be invoked depending on the facts of a case involving deepfakes or identity misuse,” Waris added.

In addition, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 impose significant obligations on platforms (intermediaries) such as X. Under the takedown requirement, intermediaries must remove or disable access to content that is “prima facie” sexual in nature within 24 hours of receiving a complaint.
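
To make the 24-hour obligation concrete, here is a minimal sketch of how a platform’s grievance pipeline might track that window for each complaint. The Complaint class and its fields are hypothetical, not drawn from the IT Rules or any platform’s actual tooling.

```python
# Minimal sketch of tracking the 24-hour takedown window; the Complaint class
# and its fields are illustrative, not from any platform's real tooling.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=24)  # window set by the IT Rules, 2021

@dataclass
class Complaint:
    content_id: str
    received_at: datetime               # when the complaint was received
    removed_at: datetime | None = None  # when the content was disabled, if at all

    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

    def is_compliant(self, now: datetime) -> bool:
        """True if the content was removed in time, or the window is still open."""
        if self.removed_at is not None:
            return self.removed_at <= self.deadline()
        return now <= self.deadline()

c = Complaint("img-123", received_at=datetime(2026, 1, 3, 9, 0, tzinfo=timezone.utc))
print(c.deadline())  # removal is due by 2026-01-04 09:00:00+00:00
```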

“Recent amendments and government advisories have also emphasised labelling and traceability, mandating that AI-generated content carry a permanent, unique metadata or identifier to ensure accountability and traceability,” he mentioned.
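
A minimal sketch of what such labelling could look like in practice is below, assuming a simple JSON payload written into a generated PNG’s metadata with Pillow. The “ai-provenance” key and payload schema are invented for illustration; real provenance standards such as C2PA embed cryptographically signed manifests precisely because plain metadata like this can be stripped.

```python
# Illustrative sketch only: the "ai-provenance" key and JSON payload are
# invented; real provenance standards such as C2PA embed signed manifests.
import json
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str, model: str) -> str:
    """Write a unique, machine-readable AI-provenance record into a PNG."""
    provenance = {
        "id": str(uuid.uuid4()),  # unique identifier for traceability
        "generator": model,
        "synthetic": True,
    }
    meta = PngInfo()
    meta.add_text("ai-provenance", json.dumps(provenance))
    Image.open(in_path).save(out_path, pnginfo=meta)
    return provenance["id"]

# Reading it back: Image.open(out_path).text["ai-provenance"]
```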

Growing Outrage

Meanwhile, France and Malaysia have joined India in publicly condemning Grok over its role in generating sexualised deepfake images of women and minors.

In France, the Paris prosecutor’s office opened an investigation into the spread of sexually explicit deepfakes on X after multiple government ministers flagged the content as manifestly illegal. French digital authorities have sought the immediate removal of the material through both judicial and online surveillance mechanisms.

Malaysia’s communications regulator has also launched a probe following public complaints about the misuse of AI tools on X. The commission said it is investigating the creation of indecent and harmful manipulated images involving women and minors, signalling growing concern over AI-driven online harms.

Earlier this week, Grok posted a public apology on X, admitting that the incident stemmed from a failure of safeguards, violated ethical standards, and may have breached US child protection laws. xAI said it was reviewing its systems to prevent similar incidents in the future.

However, critics questioned the validity of the apology itself, noting that Grok, as an AI system, cannot meaningfully accept responsibility, which ultimately lies with the company operating and deploying the model. 

Investigations by tech publications, including Futurism and The Verge, have further revealed that Grok has been used not only to create non-consensual sexual images, but also depictions of assault and abuse involving women.

Musk has maintained that users who generate illegal content using Grok will face the same consequences as if they had uploaded such material directly to the platform.

Platforms, for their part, are required under the IT Rules to exercise due diligence and make reasonable efforts to ensure that users do not host or share unlawful content, including obscene or synthetically generated material.

Failure to meet these obligations can result in the loss of safe harbour protection under Section 79 of the IT Act, exposing platforms to direct liability for third-party content hosted on their services.

Beyond the IT framework, the Bharatiya Nyaya Sanhita, 2023, which came into force on July 1, 2024 and replaced the Indian Penal Code, introduces additional safeguards.

These include Section 356, which prescribes punishment for defamation when deepfakes harm an individual’s reputation; Section 319, which addresses cheating by personation; and Section 77, which penalises the capturing or dissemination of images of a woman engaged in a private act where she would reasonably expect not to be observed.
