
India Notifies Stricter AI Governance Rules: Mandatory Labelling, 3-Hour Takedown, and New Compliance Burden for Startups


On February 10, 2026, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which came into force on February 20, 2026. These amendments mark a significant shift in how India regulates artificial intelligence and synthetically generated content online.

The new rules, which operate within the existing framework of the Information Technology Act, 2000, introduce a principles-based, techno-legal approach to AI governance rather than a standalone AI law. This approach was further reinforced at the AI Impact Summit 2026 in New Delhi, where the government released the India AI Governance Guidelines, anchored in seven guiding principles or “sutras”: Trust is the Foundation, People First, Innovation over Restraint, Fairness and Equity, Accountability, Understandable by Design, and Safety, Resilience and Sustainability.

The guidelines also propose the establishment of new national institutions, including the AI Governance Group (AIGG) chaired by the Principal Scientific Adviser, a Technology and Policy Expert Committee (TPEC) housed within MeitY, and an AI Safety Institute (AISI) to evaluate and test AI systems deployed across sectors.

“Trust is essential to support innovation, adoption, and progress, as well as risk mitigation. Without trust, the benefits of artificial intelligence will not be realised at scale.”
— India AI Governance Guidelines 

The New Compliance Landscape: Key Features

The IT Amendment Rules 2026 introduce several significant compliance requirements that directly impact AI startups, particularly those operating in generative AI, automation, and data-driven sectors.

1. Mandatory Labelling of AI-Generated Content

Under Rule 3(3)(a)(ii), intermediaries that facilitate the creation or dissemination of synthetically generated information (SGI) must ensure that permissible SGI is clearly and prominently labelled. Audio content must carry a prefixed disclosure indicating it is AI-generated, while visual content must include a prominently visible label.

Where technically feasible, intermediaries must also embed permanent metadata or provenance mechanisms, including a unique identifier and information identifying the intermediary’s computer resource used to create or modify the content.
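As an illustration of what such a provenance mechanism might look like, the sketch below builds a minimal provenance record for a piece of synthetic content: a unique identifier, a content hash, the generating intermediary's resource identifier, and a timestamp. All field names here are hypothetical; the Rules do not prescribe a record format, and production systems would more likely use an established standard such as C2PA.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_provenance_record(content: bytes, intermediary_id: str) -> dict:
    """Build an illustrative provenance record for synthetic content.

    Field names are hypothetical, not prescribed by the IT Amendment
    Rules 2026; they simply cover the elements the Rules mention
    (unique identifier, intermediary resource, labelling).
    """
    return {
        "sgi_id": str(uuid.uuid4()),                      # unique identifier
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "intermediary_resource": intermediary_id,         # e.g. the generating tool
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "label": "AI-generated content",
    }

record = build_provenance_record(b"<synthetic image bytes>", "example-gen-tool/v1")
print(json.dumps(record, indent=2))
```

In practice such a record would be embedded into the file itself (e.g. as image metadata) or signed and stored alongside it, so the provenance survives redistribution.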

Definition of Synthetically Generated Information (SGI):
Under Rule 2(1)(wa), SGI is defined as audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that makes it appear real, authentic, or true, and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event.

Key Exclusions: The rules explicitly exclude routine editing, formatting adjustments, noise reduction, compression, the creation of documents or educational materials, and accessibility enhancements made in good faith and without the intention of creating false or misleading records.

2. Three-Hour Takedown Window

Intermediaries must now act on government or court orders, including takedown orders, within 3 hours of receipt—a dramatic reduction from the earlier 36-hour window under the IT Rules.

| Obligation | Previous Timeline | New Timeline |
|---|---|---|
| Lawful takedown direction | 36 hours | 3 hours |
| Grievance disposal | 15 days | 7 days |
| Urgent complaint handling | 72 hours | 36 hours |
| Intimate image removal | 24 hours | 2 hours |

Source: IT Amendment Rules 2026

For startups operating content platforms or social media intermediaries, this compressed timeline imposes significant operational strain, particularly for those handling large volumes of user-generated content.
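Operationally, these windows translate into hard deadlines that a platform's trust-and-safety tooling must track from the moment an order is received. The sketch below shows one simple way to compute the response deadline per obligation type; the mapping keys are illustrative shorthand, not statutory terms.

```python
from datetime import datetime, timedelta, timezone

# Compliance windows under the IT Amendment Rules 2026.
# Keys are illustrative shorthand for the obligations, not statutory terms.
RESPONSE_WINDOWS = {
    "lawful_takedown": timedelta(hours=3),
    "urgent_complaint": timedelta(hours=36),
    "intimate_image_removal": timedelta(hours=2),
    "grievance_disposal": timedelta(days=7),
}

def response_deadline(received_at: datetime, obligation: str) -> datetime:
    """Return the latest time by which the platform must act on an order."""
    return received_at + RESPONSE_WINDOWS[obligation]

received = datetime(2026, 2, 21, 9, 0, tzinfo=timezone.utc)
print(response_deadline(received, "lawful_takedown"))  # 2026-02-21 12:00:00+00:00
```

A real queue would also page on-call staff well before the deadline; with a 3-hour window, even modest alerting lag consumes a large share of the budget.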

3. Proactive Due Diligence for SGI

Under Rule 3(3)(a)(i), intermediaries offering SGI tools must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating unlawful synthetic content. Prohibited SGI includes:

  • Child sexual exploitation material
  • Non-consensual intimate imagery
  • Obscene, pornographic, or sexually explicit content
  • False documents or false electronic records
  • Synthetic depictions of explosives, arms, or ammunition
  • Deceptive portrayals of individuals or events (deepfakes) 

This provision represents a shift from a reactive notice-and-takedown model to a proactive, preventive compliance architecture.

4. Additional Obligations for Significant Social Media Intermediaries (SSMIs)

SSMIs—platforms with more than 50 lakh (5 million) registered users in India—face enhanced obligations under Rule 4(1A):

  • Pre-publication declarations: Obtain a declaration from users as to whether the content is SGI
  • Verification: Deploy technical measures to verify the accuracy of user declarations
  • Labelling: Ensure confirmed SGI is clearly labelled with an appropriate disclosure

Failure to comply may result in SSMIs being deemed to have failed their due diligence obligations, with corresponding legal exposure.
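The three obligations combine into a simple decision flow at upload time: take the user's declaration, run the platform's own verification, and label the content if either signal says it is SGI. The sketch below is one possible reading of that flow; in particular, the choice to trust a positive detector result over a negative user declaration is an assumption, not something the Rules spell out.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_sgi: bool   # pre-publication declaration from the user
    detector_flags_sgi: bool  # output of the platform's verification tooling

def label_decision(upload: Upload) -> str:
    """Sketch of a Rule 4(1A)-style flow: label content as SGI if either
    the user declares it or the platform's own checks flag it. Trusting
    the detector over a negative declaration is an assumption here."""
    if upload.user_declared_sgi or upload.detector_flags_sgi:
        return "label_as_sgi"
    return "publish_unlabelled"

print(label_decision(Upload("c1", True, False)))   # label_as_sgi
print(label_decision(Upload("c2", False, True)))   # label_as_sgi
print(label_decision(Upload("c3", False, False)))  # publish_unlabelled
```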

The Seven Sutras: India’s Principles-Based AI Governance Philosophy

At the AI Impact Summit 2026, the government released the India AI Governance Guidelines, which articulate a principles-based framework for responsible AI development and deployment.

The seven guiding principles are:

| Sutra | Description |
|---|---|
| Trust is the Foundation | Trust must be embedded across the value chain—in technology, organisations, institutions, and individuals. Without trust, the benefits of AI will not be realised at scale. |
| People First | AI governance should place people at the centre. Humans must retain meaningful control over AI systems, supported by effective human oversight. |
| Innovation over Restraint | AI governance should actively encourage adoption and serve as a catalyst for impactful innovation, prioritising responsible innovation over cautionary restraint. |
| Fairness and Equity | AI systems should be designed to ensure fairness and avoid bias or discrimination, particularly against marginalised communities. |
| Accountability | AI developers and deployers should remain visible and accountable, with accountability clearly assigned based on function performed, risk of harm, and due diligence conditions. |
| Understandable by Design | AI systems must have clear explanations and disclosures to help users and regulators understand how the system works and the outcomes it is likely to produce. |
| Safety, Resilience and Sustainability | AI systems should be designed with safeguards to minimise risks of harm, detect anomalies, and be environmentally responsible. |

These principles provide a flexible and future-ready foundation for responsible AI development, applicable across diverse use cases and stages of technological evolution.

New Institutional Architecture: AIGG, TPEC, and AISI

The governance guidelines propose the establishment of new national institutions to operationalise the framework:

1. AI Governance Group (AIGG)

  • Chaired by the Principal Scientific Adviser
  • Coordinates between government ministries, regulators, and policy advisory bodies
  • Establishes uniform standards for responsible AI regulations
  • Identifies regulatory gaps and recommends legal amendments

2. Technology and Policy Expert Committee (TPEC)

  • Housed within MeitY
  • Brings multidisciplinary expertise from law, public policy, machine learning, AI safety, and cybersecurity
  • Advises on global AI policy developments and emerging capabilities

3. AI Safety Institute (AISI)

  • Primary centre for evaluating, testing, and ensuring the safety of AI systems
  • Develops techno-legal tools for content authentication, bias mitigation, and cybersecurity
  • Generates risk reports and compliance reviews
  • Facilitates cross-border collaboration with global AI safety institutes

Additionally, a National AI Incident Database will be established to record, classify, and analyse AI-related safety failures, biased outcomes, and security breaches across the country.

What This Means for AI Startups

The new governance framework carries several significant implications for AI startups in India.

1. Compliance Is Not Optional—Even Without a Standalone AI Law

As experts noted at the AI Impact Summit 2026, while no standalone AI law exists, compliance is not optional. Existing statutes such as the Information Technology Act and Intermediary Guidelines already apply to areas like synthetic media and platform liability.

Voluntary governance tools—including transparency reports, fairness testing, security reviews, and red-teaming—are expected to “develop into binding regulations in tandem with the ecosystem maturing”.

For startups, the recommended path is compliance with current laws plus gradual adoption of voluntary risk controls, especially for high-impact AI systems.

2. Generative AI Startups Face the Highest Burden

Startups building generative AI models or tools face the most significant compliance requirements:

  • Mandatory labelling of all AI-generated outputs
  • Provenance metadata embedded where technically feasible
  • Preventive technical measures to block unlawful content
  • User warnings about potential legal consequences of misuse

These requirements may be particularly challenging for resource-constrained early-stage ventures.

3. India Is Prioritising Innovation Over Restraint

A distinctive feature of India’s approach is the explicit prioritisation of “innovation over restraint”. Unlike the EU’s prescriptive AI Act, India is moving toward a principles-based, risk-calibrated framework tied to actual harm rather than blanket restrictions.

This approach aligns with the government’s broader vision of Viksit Bharat 2047, positioning AI as a catalyst for inclusive growth and global competitiveness.

4. The Cost of Compliance May Favour Incumbents

While the principles-based approach is flexible, the compliance burden—particularly the requirement to deploy automated detection tools and maintain provenance systems—may disproportionately affect startups with limited resources. As the Takshashila Institution noted in its analysis of the DPIIT’s copyright proposals, “the heavy-handed approach threatens to stall innovation”.

However, the government has signalled that it will offer financial, technical, and regulatory incentives to organisations demonstrating leadership in responsible AI practices.

5. The 3-Hour Takedown Window Is a Major Operational Challenge

For startups operating content platforms or social media intermediaries, the compressed takedown timeline—from 36 hours to 3 hours—represents a significant operational challenge. Platforms must now have 24/7 monitoring and response mechanisms in place, which may be difficult for early-stage ventures to maintain.

The Global Context: India’s Distinctive Approach

India’s AI governance framework is distinctive in several respects:

| Aspect | India’s Approach | EU AI Act | US Approach |
|---|---|---|---|
| Legal Form | Principles-based, techno-legal framework within existing laws | Standalone, prescriptive legislation | Sectoral, guidance-based |
| Risk Classification | Harm-calibrated, tied to actual outcomes | Four-tier risk classification (unacceptable to minimal) | Principles-based |
| Labelling | Mandatory for SGI | Mandatory for deepfakes | Voluntary (industry-led) |
| Institutional Architecture | AIGG, TPEC, AISI | AI Office, AI Board, Member State authorities | NIST AI Safety Institute |

India’s approach reflects its ambition to become a global leader not only in AI adoption and capability but also in responsible, inclusive, and trusted AI governance.

The Road Ahead: What Startups Should Do Now

For founders and AI startups, the new governance framework requires immediate attention:

1. Audit Your AI Systems
Identify whether your platform falls within the scope of the SGI definition and assess your current compliance posture.

2. Implement Labelling Mechanisms
If you generate or host synthetic content, ensure you have systems in place for prominent labelling and, where feasible, metadata embedding.

3. Review Takedown Protocols
Ensure you have 24/7 monitoring and response mechanisms to meet the 3-hour takedown deadline.

4. Adopt Voluntary Governance Tools
Transparency reports, fairness testing, and red-teaming exercises are expected to become binding requirements. Early adoption positions you favourably.

5. Monitor Regulatory Developments
The AIGG, TPEC, and AISI are being established, and further guidance is expected. Stay informed.

6. Consider the Cost-Benefit of Compliance
For early-stage startups, the compliance burden may be significant. Evaluate whether the Indian market justifies the investment, or whether other jurisdictions (e.g., Singapore, US) offer friendlier regimes for your specific AI application.

The Final Word

India’s stricter AI governance framework, operationalised through the IT Amendment Rules 2026 and the principles-based AI Governance Guidelines, marks a new phase for the country’s AI ecosystem. The shift from a reactive notice-and-takedown model to a preventive, proactive compliance architecture places significant new obligations on intermediaries and AI startups.

However, India’s distinctive approach—prioritising innovation over restraint, relying on principles rather than prescriptive rules, and embedding governance within existing legal frameworks rather than creating a standalone AI law—offers flexibility that could benefit nimble startups.

The challenge for founders will be to navigate this new landscape: implementing labelling and provenance systems, maintaining 3-hour takedown readiness, and adopting voluntary governance tools that are likely to become binding over time. Those who adapt early may gain a competitive edge in a market where responsible innovation is becoming as important as rapid growth.

As the government continues to build out the institutional architecture—AIGG, TPEC, and AISI—and as the IndiaAI Mission scales compute access and model development, the framework for India’s AI future is taking shape. Startups that align with this vision—building trusted, transparent, and accountable AI systems—will be well-positioned to lead in the world’s third-largest startup ecosystem.
