Can India’s AI Copyright Plan Survive Legal and Technical Scrutiny?

India’s ambitious proposal for a single mandatory AI training licence faces feasibility, legal and innovation concerns.

The Department for Promotion of Industry and Internal Trade’s (DPIIT) working paper on the use of copyrighted works in generative AI is creating quite a stir. 

The government body has proposed a mandatory blanket licence for AI training, seeking to balance AI companies’ access to content with the enforcement of copyright law by ensuring that content creators receive statutory remuneration through a single-licence, single-payment mechanism. 

While the Ministry of Electronics and Information Technology has endorsed DPIIT’s proposed framework, the strongest early pushback is coming from India’s technology industry, which argues that the framework misunderstands the mechanics of AI training and risks burdening an emerging ecosystem still taking shape.

Tech industry questions the core premise

Nasscom, India’s apex technology industry body, has reiterated its preference for a Text and Data Mining (TDM) exception with opt-out rights, which would enable copyright holders to oppose the use of their works in training models. 

Replying to queries from AIM, Ankit Bose, head of AI at Nasscom, argued that AI training involves “large-scale, automated, non-expressive processing of lawfully accessed data at the input stage,” which is fundamentally different from commercial exploitation. 

As training does not produce expressive copies or substitute for creative works, Bose believes that treating training as a licensable act creates conceptual and practical problems.

For this reason, Bose sees opt-outs as a targeted, proportionate remedy: they give creators who are commercially sensitive or strategically exposed the ability to withhold their works from model training. 
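
To illustrate how such a reservation could operate in practice, the following is a minimal Python sketch of a training-data pipeline filtering out reserved works before ingestion. The registry file, the `tdm_reserved` flag and the domain list are hypothetical assumptions made for illustration; neither the DPIIT paper nor Nasscom’s submission prescribes a specific machine-readable opt-out format.

```python
"""Illustrative sketch only: one way a training-data pipeline could honour a
TDM opt-out before ingestion. The registry format, field names and the
'tdm_reserved' flag are hypothetical, not drawn from the DPIIT paper."""

import json
from urllib.parse import urlparse


def load_optout_registry(path: str) -> set[str]:
    # Hypothetical registry: a JSON list of domains whose rights holders
    # have reserved their works from AI training.
    with open(path, encoding="utf-8") as fh:
        return set(json.load(fh))


def is_reserved(doc: dict, reserved_domains: set[str]) -> bool:
    # Skip a document if its source domain appears in the registry
    # or if it carries an explicit machine-readable reservation flag.
    domain = urlparse(doc["url"]).netloc.lower()
    return domain in reserved_domains or doc.get("tdm_reserved", False)


if __name__ == "__main__":
    corpus = [
        {"url": "https://example.org/article-1", "text": "..."},
        {"url": "https://reserved.example.com/story", "text": "...", "tdm_reserved": True},
    ]
    # In practice the set would come from load_optout_registry("registry.json").
    reserved = {"reserved.example.com"}
    kept = [d for d in corpus if not is_reserved(d, reserved)]
    print(f"{len(kept)} of {len(corpus)} documents eligible for training")
```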

He further argued that mandatory licensing at the training stage shifts copyright from its traditional domain of controlling use to controlling learning, a shift he believes goes beyond the intended function of copyright law and risks reshaping the incentives underpinning AI development.

Nasscom and the Business Software Alliance, whose members include Google, Microsoft, Amazon Web Services, IBM, Salesforce and OpenAI, are among the entities that have dissented from certain aspects of the proposal. 

Innovation concerns

Bose also warned that the proposed blanket licence for any lawfully accessed, copyright-protected content could be “potentially burdensome,” particularly for startups, MSMEs and research-driven entities. 

The industry body is concerned that the proposed royalty regime may introduce long-term cost unpredictability, since rates would be determined by a government-appointed committee with no clear precedent. 

He also believes that uniform royalty obligations applied across model sizes, dataset types and commercial scales would disproportionately affect early-stage innovators who rely on low-cost experimentation. 

Treating training access as a paid statutory entitlement, Bose said, risks embedding fixed costs into the early phases of model development, where financial flexibility is critical.

Implementation challenges

Nasscom argues that the framework would need a substantial redesign before it could be implemented. 

Bose noted that attributing training influence to individual rights holders across India’s vast and often informal creative economy may be a monumental task. 

He also questioned whether the proposed Copyright Royalties Collective for AI Training (CRCAT), a central body that would set royalties, could realistically achieve the transparency, auditing capacity and governance maturity needed to manage a national-scale royalty system. 

He further noted that India lacks a clear definition of AI training. The country also lacks a clear strategy on foundational model development versus fine-tuning, and on how open-source or research models should be treated. 

As India’s model diverges from the TDM-exception approach favoured in several global jurisdictions, including Japan, the UK, and the European Union, developers working across borders may face fragmented compliance requirements.

Bose said the system may only be implementable if it incorporates transparency-lite mechanisms, phased royalty applicability, startup exemptions, and explicit safe harbours for non-expressive training. Otherwise, it risks an “execution drag” that could eventually push innovation activity offshore.

Legal experts sound warnings

While the tech sector focuses on feasibility, legal experts say the proposal is on shaky statutory ground even before implementation challenges arise.

“India’s existing Copyright Act, 1957, does not permit a mandatory blanket licence for AI training,” said Priyanka S Kulkarni, senior legal advisor and solicitor specialising in IP. She explained that AI training “necessarily involves reproducing works in electronic form,” which falls squarely within the exclusive rights of copyright owners under Section 14. 

None of the existing compulsory or statutory licensing provisions (Sections 31 to 31D, 32 and 32A) cover mass text-and-data mining or machine learning. These provisions, she emphasised, are narrow and tied to specific public-interest circumstances, and cannot be stretched to justify a sector-wide, automatic licensing regime.

Kulkarni added that the proposal’s bar on creators withholding their works from AI training “would directly override their statutory exclusive right of reproduction,” making such a provision ultra vires, that is, beyond the legal power or authority conferred by the statute. It “would be vulnerable to legal and constitutional scrutiny unless Parliament first enacts a carefully structured amendment,” she cautioned.

Burden of proof and CRCAT

Some elements of the proposal, however, do find support in principle.

The proposed framework puts the burden of proof on the AI developer to establish compliance in the event of a legal challenge. Anandaday Misshra, founder and managing partner at AMLEGALS, said this presumption is “constitutionally sound under Article 21.” 

He noted that the presumption aligns with Section 109 of the Bharatiya Sakshya Adhiniyam, which replaced the Indian Evidence Act in 2023 and places the burden on the party possessing exclusive knowledge of the facts. He called it “a fair, proportional procedural mechanism.”

Misshra stressed, however, that CRCAT must rest on a strong statutory foundation. He said the entity would require its own chapter within the Copyright Act; clear, time-bound judicial review of royalty-setting to prevent arbitrariness; and distribution rules that ensure proportional and non-discriminatory treatment of all rights holders. 

Drawing from India’s history with copyright societies, he said CRCAT would need mandatory third-party audits, a robust rate-setting framework, explicit penalties for non-compliance, and mechanisms to enforce obligations on foreign AI firms operating commercially in India.

Overregulation

Digital policy specialists also warn that the proposal assumes technical capabilities that today’s AI systems do not possess.

Jameela Sahiba, associate director at The Dialogue, a policy think-tank, said the recommendations “risk tipping the balance decisively towards overregulation in a sector that is still in its formative stages.” 

She noted that the framework relies on advanced technical traceability that does not currently exist and may introduce friction “precisely where agility and openness are most essential.” Monitoring training datasets that are “massive, dynamic, and sourced from heterogeneous repositories” is operationally unrealistic, she added.

Sahiba argued that India must adopt a “technologically grounded, innovation-positive approach,” centred on dataset-level transparency rather than per-work accounting, which she said remains technically unfeasible.
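
To make the contrast concrete, the following is a minimal Python sketch of what dataset-level transparency could look like: one machine-readable disclosure per dataset rather than a royalty ledger entry per individual work. The `DatasetDisclosure` schema and its field names are assumptions made for illustration, not drawn from the DPIIT working paper or The Dialogue’s submission.

```python
"""Illustrative sketch only: dataset-level transparency, as contrasted with
per-work accounting. The manifest schema and field names are hypothetical."""

from dataclasses import dataclass, asdict
import json


@dataclass
class DatasetDisclosure:
    # One record per dataset, not per copyrighted work: enough detail for
    # oversight without tracing every individual item used in training.
    name: str
    source_type: str        # e.g. "web crawl", "licensed archive", "public domain"
    licence_basis: str      # e.g. "publisher agreement", "open licence", "TDM opt-out honoured"
    approx_items: int
    collection_period: str


def build_transparency_report(disclosures: list[DatasetDisclosure]) -> str:
    # Serialise the dataset-level manifest for publication alongside a model card.
    return json.dumps([asdict(d) for d in disclosures], indent=2)


if __name__ == "__main__":
    report = build_transparency_report([
        DatasetDisclosure(
            name="news-corpus-v1",
            source_type="licensed archive",
            licence_basis="publisher agreement",
            approx_items=2_500_000,
            collection_period="2023-2024",
        ),
    ])
    print(report)
```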

Creators deserve a share

Filmmaker Joshua Sethuraman countered that the tech policy narrative has long tilted toward large corporations, making the government’s attempt to rebalance power “crucial and much needed.” 

He added there should be “no doubt whatsoever” that creators deserve royalties when their work is used to train or strengthen AI systems, since those systems derive commercial value from human-created inputs. 

“If a model’s capability is built on millions of creative works, the people behind those works deserve a share of the value,” he said.

Sethuraman also stressed that “the consumer of content reproduced by LLMs is often another creator,” making ethical use essential. Responsibility, he said, must rest both with users and AI developers, who must ensure their systems “respect the creative industry, not exploit it.”

As consultations begin, policymakers will need to determine whether India’s “one licence” vision can build cross-sector support, or whether the extensive legal, technical and institutional concerns raised by experts will force substantial redesign long before any amendment to the Copyright Act is drafted.
