
Sarvam 105B vs. Global Giants: India’s Answer to Frontier AI Arrives


India has officially entered the global AI arms race—and it’s not just participating; it’s competing.

On February 18–19, 2026, at the India AI Impact Summit in New Delhi, homegrown AI startup Sarvam AI unveiled two powerful new large language models (LLMs): the Sarvam-105B and Sarvam-30B. Trained entirely on Indian soil using compute infrastructure supported by the government’s IndiaAI Mission, these models represent a watershed moment for the country’s technological sovereignty.

But what makes this launch truly remarkable isn’t just the scale—it’s the efficiency. Sarvam’s flagship 105-billion-parameter model outperforms frontier systems nearly six times its size on critical benchmarks for reasoning, coding, and Indian language understanding. Here’s why this matters for India’s 1.4 billion people and the global AI landscape.

The Models: Built Different by Design

Sarvam’s new lineup marks a significant leap from its October 2024 release (the 2-billion-parameter Sarvam 1). Both new models employ a Mixture-of-Experts (MoE) architecture, a design philosophy that prioritizes efficiency over brute force.

Sarvam-30B: The Real-Time Conversational Engine

  • Total Parameters: 30 billion
  • Active Parameters per Token: 1 billion
  • Context Window: 32,000 tokens
  • Training Data: 16 trillion tokens 

This model is optimized for low-latency, real-time conversational use cases. By activating only a fraction of its parameters during inference, it dramatically reduces computing costs while maintaining high performance on reasoning tasks.
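The sparse-activation idea behind MoE can be seen in a toy example: a router scores a set of small expert networks per token and only the top-k of them actually run. This is a minimal sketch of the general technique; all sizes and counts below are illustrative, not Sarvam’s actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: several small expert MLPs, but only the
# TOP_K experts (by router score) execute for each token.
N_EXPERTS, TOP_K, D_MODEL, D_HIDDEN = 8, 2, 16, 32

router_w = rng.normal(size=(D_MODEL, N_EXPERTS))             # routing weights
experts_w1 = rng.normal(size=(N_EXPERTS, D_MODEL, D_HIDDEN))
experts_w2 = rng.normal(size=(N_EXPERTS, D_HIDDEN, D_MODEL))

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    scores = x @ router_w                     # one score per expert
    top = np.argsort(scores)[-TOP_K:]         # indices of the best experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                      # softmax over the chosen experts
    out = np.zeros_like(x)
    for gate, e in zip(gates, top):
        hidden = np.maximum(x @ experts_w1[e], 0.0)   # ReLU MLP for expert e
        out += gate * (hidden @ experts_w2[e])
    return out

token = rng.normal(size=D_MODEL)
y = moe_forward(token)
print(y.shape)                                # (16,)
print(f"experts used per token: {TOP_K}/{N_EXPERTS}")
```

Because only 2 of the 8 experts run per token, the per-token compute scales with the active parameters rather than the total, which is the same trade-off described for Sarvam-30B (1B active of 30B total).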

Sarvam-105B: The Complex Reasoning Powerhouse

  • Total Parameters: 105 billion (MoE architecture)
  • Active Parameters per Token: 9 billion
  • Context Window: 128,000 tokens—ideal for multi-step reasoning, long-form content, and agentic workflows 

As co-founder Pratyush Kumar explained at the launch: “We want to be mindful in how we do the scaling. We don’t want to do the scaling mindlessly. We want to understand the tasks which really matter at scale and go and build for them”.

Benchmark Dominance: Efficiency Over Brute Force

Here’s where the story gets interesting. Despite having only 105 billion total parameters (with just 9 billion active), Sarvam-105B is delivering knockout punches against much larger competitors:

| Metric | Sarvam-105B | Competitor | Result |
| --- | --- | --- | --- |
| Indian language technical benchmarks | ✅ Outperforms | Google Gemini 2.5 Flash | Superior cultural & linguistic understanding |
| General benchmarks | ✅ Outperforms | DeepSeek R1 (600B params) | 6x efficiency advantage |
| Cost-efficiency | ✅ Cheaper | Gemini Flash | Lower inference costs with better performance |
| Coding & reasoning | ✅ Competitive | Qwen3-Next-80B, GPT-OSS-120B | State-of-the-art within its class |

This isn’t just academic one-upmanship. It proves a crucial thesis: sovereign AI doesn’t require trillion-parameter models. With intelligent architecture and culturally relevant training data, India can build world-class systems tailored to its unique needs.
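The efficiency argument can be made concrete with back-of-envelope arithmetic, using the rough rule of ~2 FLOPs per active parameter for a decoder forward pass. The 9B-active and 105B-total figures come from the article; treating the 600B competitor as fully dense is an illustrative assumption, since competitors’ active-parameter counts are not given here.

```python
# Rough per-token compute for autoregressive generation:
# ~2 FLOPs per *active* parameter per generated token.
def flops_per_token(active_params: float) -> float:
    return 2.0 * active_params

sarvam_active = 9e9        # Sarvam-105B: 9B active parameters
sarvam_total = 105e9       # 105B total parameters
competitor_total = 600e9   # DeepSeek R1 total, per the table above

# The "6x" gap cited in the article is in total parameters.
size_ratio = competitor_total / sarvam_total
# If the 600B model ran densely, the per-token compute gap would be larger.
compute_ratio = flops_per_token(competitor_total) / flops_per_token(sarvam_active)

print(f"total-parameter ratio: {size_ratio:.1f}x")
print(f"per-token compute ratio (assuming a dense 600B model): {compute_ratio:.0f}x")
```

The point of the sketch is that with sparse activation, serving cost tracks active parameters, so a 105B-total model can price out far below its headline size suggests.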

Built for Bharat: Multilingual, Voice-First, Culturally Aware

The true differentiator for Sarvam’s models isn’t parameter count—it’s relevance. These models were trained from scratch on datasets reflecting India’s linguistic diversity, including code-mixed languages like Hinglish and Tamil-English.

Key Features for the Indian Context:

  • Support for all 22 scheduled Indian languages 
  • Optimized for voice-first interactions—critical in a population where voice interfaces often outperform text-based systems 
  • Understanding of code-mixing and cultural context, enabling more natural conversations 

At the launch event, Sarvam’s AI chatbot “Vikram” (named after physicist Vikram Sarabhai) demonstrated live conversations in Punjabi, Hindi, and other Indian languages—showcasing capabilities that global models often struggle with.

The IndiaAI Mission: Public-Private Partnership Done Right

Sarvam’s success story is inseparable from the IndiaAI Mission, the government’s ₹10,000 crore (~$1.2 billion) initiative to build domestic AI capabilities.

Key Support Received:

  • 4,096 NVIDIA H100 SXM GPUs allocated via Yotta Data Services
  • ~₹99 crore in GPU subsidies, making Sarvam the largest beneficiary of the mission’s ₹111 crore disbursed so far
  • Technical support from NVIDIA for infrastructure optimization 

Sarvam was among the first 12 startups selected under the mission to build indigenous foundation models, alongside innovators like Soket AI, Gnani AI, and Gan AI. This ecosystem approach ensures that India isn’t building a single model, but a diverse portfolio of AI capabilities tailored to different sectors and use cases.

Beyond the Models: A Full-Stack AI Ecosystem

The LLMs are just one piece of Sarvam’s ambitious puzzle. The company also unveiled complementary tools designed for practical deployment:

  • Advanced Text-to-Speech (TTS) models for voice applications
  • Speech-to-Text (STT) systems optimized for Indian accents and languages
  • Vision models for document understanding—critical for enterprise workflows in legal, healthcare, and agriculture sectors 

These tools integrate with Sarvam’s enterprise platform, enabling businesses to build solutions such as Sarvam for Work (coding-focused AI) and Samvaad (conversational AI agents) on top of the foundational models.

Why This Matters: Sovereignty, Cost, and Scale

1. Data Sovereignty & Privacy

With the Digital Personal Data Protection (DPDP) Act reshaping how data flows, Indian enterprises and government agencies can now access world-class AI without sending sensitive data overseas.

2. Dramatically Lower Inference Costs

Sarvam-105B’s MoE architecture means it’s cheaper to run than Gemini Flash while delivering better performance on Indian tasks. For startups and researchers, this democratizes access to frontier AI capabilities.

3. Population-Scale Deployment

As Pratyush Kumar emphasized: “We want to make AI work at population scale. Being able to do it efficiently becomes a very core thesis”. With 1.4 billion potential users, efficiency isn’t optional—it’s essential.

4. Global Competitiveness

Sarvam’s models prove that Indian innovation can lead, not follow. By outperforming DeepSeek R1 (released just a year earlier) at one-sixth the size, Sarvam demonstrates that smart architecture can beat brute-force scaling.

The Road Ahead: Open Source and Ecosystem Growth

Sarvam plans to release both models as open source, accelerating adoption among developers, researchers, and government agencies. This aligns with the IndiaAI Mission’s vision of creating a thriving ecosystem of applications built on sovereign foundation models.

Upcoming Focus Areas:

  • Specialized coding models for software modernization
  • Legal and compliance AI for India’s complex regulatory landscape
  • Healthcare and agriculture applications tailored to local needs
  • Deeper voice integration as India’s primary AI interface 

Conclusion: India’s AI Moment Has Arrived

The launch of Sarvam-105B and Sarvam-30B isn’t just a company milestone—it’s a national statement. At a time when the global AI conversation is dominated by US and Chinese giants, India has demonstrated that it possesses the talent, infrastructure, and vision to build world-class systems on its own terms.

With Yotta’s $2 billion GPU supercluster coming online in 2026, government backing through the IndiaAI Mission, and a vibrant startup ecosystem, India’s AI infrastructure story is entering hyper-growth mode. Sarvam has shown what’s possible when policy, compute, and innovation align.

The message is clear: India isn’t just adopting AI—it’s building the foundational intelligence to power its digital future.
