Stakeholders flag concerns over blanket labelling in draft IT rules on synthetically generated information
Dec 11, 2025
New Delhi [India], December 11 : A cross-section of creators, legal experts, brand representatives and digital platforms on Monday raised strong objections to what they termed "blanket labelling" requirements in the Draft IT Rules on Synthetically Generated Information (SGI), urging the government to adopt a more transparent, risk-tiered regulatory framework.
According to a press release issued by the organisers, the observations were made at a closed-door roundtable convened by The Dialogue, a New Delhi-based tech policy think tank, to examine the feasibility and legal viability of the Draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025.
Participants warned that the current formulation risks clubbing routine AI-enabled creative processes with high-risk synthetic media. Creators argued that the digital economy is built on personal credibility, and excessive labelling could damage that trust.
"There is a clear difference between AI-authored content and AI-enhanced content. Almost everything in our industry is AI-enhanced now, but my mileage as a creator is still built on trust... If every video I make ends up with an 'AI' banner just because I used captions or a clean-up tool, my credibility is at stake," content creator Tuheena Raj said, stressing that strong labels should apply mainly to "finance, health, political messaging, deepfakes, not... routine, low-risk enhancements."
Representatives from the advertising sector noted that AI is already deeply integrated into scriptwriting, editing, localisation, and testing workflows. They cautioned that unclear provisions might enable "liability dumping", pushing compliance burdens onto smaller creators and agencies.
Platform representatives drew parallels with global regulatory trajectories, noting that even mature jurisdictions lean towards principle-based, risk-graded AI rules rather than rigid, format-specific mandates.
"We work across multiple jurisdictions... Even in those 'mature' territories, you don't yet see such detailed rules on how every piece of synthetic media must be tagged," said Shivani Singh of Glance (InMobi Group). She questioned whether "blanket labelling will actually solve the deepfake problem we are worried about."
Legal experts argued that the Draft Rules conflate transparency with harm prevention and lack a differentiated approach to risk. "The absence of risk grading results in overbroad mandates that treat all content with suspicion," said Akshat Agarwal of AASA Chambers, adding that labelling could become "a blunt instrument that penalises innovation without meaningfully curbing harm."
Across the discussion, stakeholders emphasised the need for clearer definitions, exemptions for routine or accessibility-related AI uses, and interoperable provenance standards rather than heavy detection obligations. They stressed the importance of frameworks that protect against deception without undermining legitimate creative expression.