THE POLICY EDGE
Expert Commentary

10 February 2026

India’s AI Strategy Must Shift from Frontier Scale to Frugal Deployment

A frugal AI strategy will succeed only if policy shifts from funding capability creation to governing deployment, risk, and workforce adaptation

SDG 8: Decent Work and Economic Growth | SDG 9: Industry, Innovation and Infrastructure

Ministry of Electronics and Information Technology (MeitY) | NITI Aayog

Views are personal.

A background note can be accessed here: Economic Survey FY26: AI as an Economic Strategy

The Economic Survey shifts India’s AI policy focus away from large, capital-intensive frontier models toward task-specific, frugal innovation tailored to local needs and institutional capacity. What are the key systemic barriers that could inhibit effective diffusion of decentralised AI across diverse sectors and regions, and how should policy redesign these institutional levers to balance economic viability with broad-based innovation?


The main barrier to decentralised AI diffusion in India is not technical capability but institutional misalignment across data, compute, and standards. The real risk is that these capabilities remain concentrated, under-deployed, and misaligned with everyday service delivery – where the productivity and inclusion gains actually lie.

First, India generates substantial sectoral data, but it is fragmented across custodians with weak incentives for controlled sharing. Datasets exist in number, yet their effective use is constrained by the absence of trusted intermediaries and standardised licensing, which raises transaction costs for small organisations and local innovators.

Second, current compute-access initiatives risk replicating frontier-model bias if subsidies are allocated on the basis of scale rather than deployment intent. Small Language Models (SLMs) are a good bet for India's AI leap – computationally efficient, cost-effective, and locally relevant – but without explicit prioritisation they risk remaining peripheral, and task-specific, edge-deployable models will continue to be crowded out.


Third, model standardisation is dominated by global architectures, leaving limited space for low-resource benchmarks, regional-language evaluation, or offline-first system design.

Policy redesign should therefore focus on institutional plumbing. This requires moving towards sector-specific data trusts with predefined auditability and liability rules. Compute subsidies should be conditional on downstream deployment metrics, such as the number of users reached, reduction in costs and improvement in service delivery. India should also actively take the lead in designing standards for frugal AI through public procurement and regulatory sandboxes.


The Survey advocates a proportionate, risk-based regulatory approach that favours open and interoperable systems over rigid localisation or command-and-control regimes. Given this normative shift, what governance architectures and accountability mechanisms are needed to manage AI risks, ranging from bias and opacity to infrastructure dependencies, without stifling experimentation and private investment?


A proportionate AI governance regime in India should not hinge on creating a single AI regulator. The regulatory challenge is not a binary choice between control and freedom, but avoiding a regime that is simultaneously too restrictive for low-risk applications and too permissive where AI systems acquire scale, lock-in, or coercive power.

India is deliberately pursuing a pragmatic, light-touch regulatory approach, aimed at enabling AI innovation while addressing risks through voluntary guidelines, sectoral oversight, and the application of existing IT and data protection laws. For low-risk, task-specific AI systems used in routine service delivery, disclosure, documentation, and post-deployment auditability are preferable to ex ante approval.

A stronger regulatory approach should be used only for those AI systems with systemic reach or coercive effects, including, but not limited to, population-scale surveillance, automated eligibility determination, or infrastructure-level services. These thresholds should be tied to potential social harm and irreversibility, not to the size of a model or its technical capability.

Accountability should rest on function-based governance, with line ministries and existing sector regulators providing primary oversight, supported by institutions responsible for standards-setting, certification, and safety testing.


A strategic emphasis in the Survey is on human capital realignment toward foundational skills that complement AI rather than narrow specialised technical training. In an economy where AI adoption varies widely across sectors, what policy instruments are most likely to reshape labour-market signalling so that workers and firms can co-adapt effectively to AI-augmented production processes?

The present debates on AI and work are increasingly shaped by a divide between techno-optimism, which frames AI as a productivity tool, and techno-pessimism, which highlights displacement risks and the possibility of an AI investment bubble. The danger for labour markets is not technological displacement per se, but premature specialisation – training workers for specific AI tools and workflows whose relevance may erode faster than institutions can recalibrate.

Evidence suggests that both positions capture part of the truth. While AI is unlikely to disappear and will continue to improve, it remains prone to error, bias, and rapid shifts in capability and cost structures. This uncertainty reinforces the case for human capital realignment.

If AI trajectories become volatile, labour markets cannot be anchored to narrow tool-specific skills. Instead, policy must prioritise AI-complementary capabilities like reasoning, judgment, communication, and human supervision, which are more likely to retain value over time.

Policy instruments should therefore reshape labour-market signalling. Certification frameworks can validate adaptive competencies rather than static expertise. As AI tools are integrated across organisations and sectors, continuing-education incentives can help workers adjust as those tools evolve or fail to scale. Public–private skill coalitions can ground training in real workplace use, recognising that AI adoption will increasingly involve co-production between workers and AI systems.

Managing Schumpeterian creative destruction – particularly in the event of potential AI bubbles – requires flexibility, ensuring that labour resilience keeps pace with the technological uncertainty ahead.
