Enterprise AI Analysis

India's AI Governance Landscape: Insights from Elite Stakeholder Interviews

Authors: Siddharth Mehrotra & Sarah Hladikova

India's approach to AI governance differs substantially from Western regulatory frameworks, emphasizing voluntary guidelines and public-private partnerships over prescriptive legislation. While policy documents outline this strategy, little empirical research examines how key stakeholders interpret and implement these frameworks in practice. We conducted semi-structured interviews with 14 elite stakeholders across government, industry, civil society, and end-user sectors to understand their perspectives on India's governance approach. Our findings reveal significant tensions between developmental aspirations and ethical safeguards, highlight the substantial influence of private technology companies in shaping national policy, and expose critical gaps in addressing algorithmic fairness for India's diverse social contexts. This work contributes empirical insights into how India's distinctive governance model operates in practice and identifies key challenges for inclusive AI deployment.

Introduction

As AI technologies proliferate globally, different regions have adopted varied governance approaches. While established frameworks like the European Union's AI Act embody a regulation-heavy approach that emphasizes trustworthiness, emerging economies like India present a starkly different narrative, one that must carefully interweave developmental priorities, technological sovereignty, and socio-cultural complexities [17, 18]. Rather than implementing rigid regulatory frameworks, India has chosen to eschew prescriptive legislation in favor of adaptive, innovation-oriented policies underpinned by voluntary guidelines and strategic public-private partnerships [21].

Our Research Questions:

  • How do elite stakeholders perceive the balance between innovation enablement and ethical safeguards in India's AI governance framework?
  • What role do public-private partnerships play in shaping AI governance in practice for India?
  • What challenges do stakeholders identify in implementing India's governance approach?

Key Insights & Statistics

Our analysis uncovers critical trends and stakeholder perspectives shaping India's unique AI governance model.

14 Stakeholders Interviewed
5 Women Participants
9 Men Participants
39 Avg. Stakeholder Age (Years)
~10% of Indians Proficient in English

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

AI Governance Framework

India has deployed a diverse range of policy instruments, demonstrating a multi-faceted approach to AI governance. In 2018 India established the AI Standardisation Committee [20], and the launch of the National AI Portal (INDIAai) in 2020 [23] created a centralized knowledge platform. The emphasis on public-private partnerships, evidenced through initiatives like the NITI Aayog Cloud Innovation Center [2], reflects India's strategy for managing technological dependencies while maintaining some policy autonomy. The Digital Personal Data Protection Act of 2023 [24] further strengthens the regulatory framework around AI systems and data governance.

The NITI Aayog framework for Responsible AI emphasizes voluntary commitments and ethical guidelines, on the premise that companies will adopt trustworthy practices to maintain their market position and public reputation [3]. Within this framework, the state positions itself as a facilitator of private sector AI development, providing infrastructure and access to data while trying to retain some control over the direction of development [22]. This is evident in key policy documents such as the 2018 National Strategy for Artificial Intelligence and the Principles for Responsible AI framework, which emphasize strategic and advisory approaches over strict regulation [1, 3], as well as the Reserve Bank of India AI report [7].

Most recently, a drafting committee constituted by the Ministry of Electronics and Information Technology (MeitY) in July 2025 was tasked with developing an AI governance framework for India. Its report, released in November 2025¹, focuses on promoting the innovation, adoption, diffusion, and advancement of AI while mitigating risks through principles, actionable plans, and institutional support. The guidelines are explicitly called guidelines (not laws), underscoring a pro-innovation philosophy that prefers a light-touch, risk-based governance model.

Critical Perspectives

Indian scholarship has provided substantial critiques of the nation's AI governance approach, though these perspectives remain under-utilized in international discourse. Marda's [19] early work argued that the technical limitations of AI systems should be reckoned with at the policy development stage, noting that the limitations and risks of data-driven decisions feature as retrospective rather than proactive considerations in Indian AI policy. This concern about treating ethical implications as afterthoughts rather than design principles resonates throughout subsequent scholarship [15]. The Centre for Internet and Society has extensively documented AI deployments across Indian sectors, revealing significant gaps between policy aspirations and ground realities [33]. Sambasivan and colleagues conducted foundational research demonstrating that conventional algorithmic fairness frameworks are West-centric and that several assumptions of algorithmic fairness are challenged in India, where data is not always reliable due to socio-economic factors, ML makers follow double standards, and AI evokes unquestioning aspiration [30]. Their work highlights how caste, religion, gender, and regional disparities create fairness challenges that Western frameworks cannot adequately address.

Recent analyses reinforce these concerns amid India's push for AI leadership, with Dua et al. [12] critiquing the 2023 National Strategy on Artificial Intelligence's overemphasis on economic competitiveness at the expense of robust data protection and equity safeguards, leading to fragmented state-level implementations that exacerbate biases in welfare and policing systems. Similarly, Pandey [25] examines the MeitY Responsible AI guidelines, arguing they lack enforceable mechanisms for auditing socio-technical harms in multilingual, low-resource contexts, drawing on case studies of facial recognition failures in Aadhaar-linked services.
Decolonial perspectives from Sethy [31] further underscore the imperative to reframe AI governance beyond Global North paradigms, advocating context-specific audits that prioritize community data sovereignty to mitigate caste-based exclusion in predictive policing and hiring algorithms. Critics have also highlighted the ethics-washing phenomenon in Indian AI policy, where voluntary ethical frameworks substitute for regulatory accountability². The lack of enforcement mechanisms means corporate compliance remains voluntary, raising questions about whether India's approach can adequately protect vulnerable populations. Overall, research on AI governance in India specifically remains limited in examining how stakeholders interpret and implement frameworks in practice, particularly regarding tensions between innovation and equity [32]. Our work addresses this gap by foregrounding stakeholder perspectives and the contradictions they navigate.

Methodology Overview

We adopted an exploratory qualitative approach centered on elite stakeholder interviews. Drawing on Hertz and Imber's [16] foundational definition, we conceptualize elites as individuals who occupy institutionally powerful positions that grant them disproportionate influence over the policies, practices, and discourses that shape AI governance in India. This includes not only those who hold formal authority, such as government officials and senior policymakers, but also those with proximate power: industry practitioners whose technical decisions embed values into deployed systems, civil society researchers whose framing of problems shapes policy debates, and domain professionals such as healthcare and defense personnel whose institutional endorsement legitimizes or contests AI adoption. Critically, we distinguish elite status from expertise alone; a farmer or a freelancer may possess deep domain knowledge yet lack the institutional positioning to actively challenge or reshape governance frameworks.

Our focus on elite discourse also acknowledges that AI governance policies emerge primarily from institutional stakeholders, though their impacts affect much broader populations. This methodological choice allows us to understand how policy actors navigate governance frameworks while recognizing that it captures elite perspectives rather than marginalized community experiences, a limitation that itself demonstrates the epistemic hierarchies in AI governance.

We conducted semi-structured 1:1 interviews with 14 stakeholders representing diverse sectors of India's AI landscape (Table 1). Participants were recruited through snowball sampling, selected to represent key dimensions of digital governance: state institutions, economic actors, and civil society, as articulated by Pohle and Thiel [27]. Interviews lasted approximately one hour and were conducted in English or Hindi based on participant preference. The sample included 5 women and 9 men with a mean age of 39 years.
To maintain the anonymity of our participants, we report their work experience only as less or more than five years. To systematically address our research questions, we developed a comprehensive interview protocol aligned with the aspects of India's AI governance landscape we aim to investigate. Drawing on Weimer and Vining's [34] framework for evaluating governance impacts, we structured our interview questions around five key categories that directly map to our research objectives: (1) general perceptions of the current state of AI governance, (2) the government's policy and regulatory framework, (3) public-private partnership, (4) ethical considerations and regional specificity, and (5) challenges and opportunities. Questions were tailored to each stakeholder's expertise while maintaining consistency across core themes, and the protocol was iteratively refined. Participants provided informed consent, and all data was anonymized. The study followed institutional ethics guidelines for research with human subjects.

We employed qualitative content analysis following established procedures by Elo and Kyngäs [13]. Two researchers independently coded all interviews, with ambiguities resolved through discussion. We used an iterative process to identify concepts, group similar ideas, and develop higher-level thematic categories. Analysis focused on identifying patterns, tensions, and divergences in stakeholder perspectives, particularly contradictions between stated beliefs and revealed concerns that could inform critical analysis.
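Two-coder procedures like the one above are often accompanied by an agreement statistic. The study reports resolving ambiguities through discussion rather than a numeric measure, but Cohen's kappa is a common sanity check for such coding; the sketch below, using hypothetical code labels for illustration, shows a minimal implementation:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same segments:
    observed agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Chance agreement: product of each coder's marginal label rates.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for six interview segments (not study data).
a = ["innovation", "ethics", "ppp", "ethics", "innovation", "ethics"]
b = ["innovation", "ethics", "ppp", "innovation", "innovation", "ethics"]
kappa = cohens_kappa(a, b)  # ~0.74: substantial but imperfect agreement
```

Values above roughly 0.6 are conventionally read as substantial agreement; segments the coders disagree on are the natural candidates for the discussion step the paper describes.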

Developmental Aspirations vs. Resource Constraints

Stakeholders consistently emphasized India's ambition to leverage AI for socio-economic development, accepting calculated risks to enable rapid adoption. As P3 noted: “India's approach to AI governance fundamentally differs from Western models. We're not just regulating technology; we're actively leveraging it as a development accelerator.” However, this optimism was tempered by practical constraints. Technical practitioners highlighted infrastructural bottlenecks, particularly in rural areas, and insufficient sector-specific protocols. C3 (farmer) stated: “I've heard that AI can revolutionize agriculture, but the lack of reliable infrastructure in rural areas limits its potential. Governance policies should address these foundational barriers before introducing AI solutions.” Healthcare and defense stakeholders emphasized the need for more rigorous oversight in high-risk domains. M1 noted: “The current governance framework lacks sufficient clinical protocols for AI integration in healthcare. Patient data security must be prioritized while enabling AI-driven diagnostic innovations.” Ethics-focused stakeholders (E1) highlighted that fairness and equity considerations remain insufficiently embedded in current frameworks, with vulnerable populations facing disproportionate impacts due to disparities in access, literacy, and dataset representation.

Divergent Views on Regulatory Framework

Policy informers and AI developers generally viewed India's light-touch regulatory approach favorably. P2 explained: “Our National AI Strategy focuses on enabling innovation while maintaining basic safeguards. We're conscious of not creating regulatory bottlenecks that might impede development.” But this view was not universally shared. Healthcare professionals, defense personnel, and academics advocated for more structured approaches in their respective domains. M1 reported: “Current regulatory frameworks need to incorporate specific clinical protocols for responsible AI deployment in medical settings, particularly regarding diagnostic applications and patient data protection.” L1 added: “The light-touch approach must be balanced with more structured security standards in defense applications, where national security implications require specialized governance.” This divergence suggests tensions between policy objectives of enabling innovation and sectoral needs for rigorous safeguards, particularly in high-stakes applications.

Public-Private Partnerships: Collaboration and Concerns

Stakeholders highlighted numerous government-industry collaborations, including the AI compute infrastructure (18,000+ GPUs), AIKosh Datasets Platform, and API-Setu open API platform³. P3 noted that the IndiaAI report emphasizes: “Improving digital infrastructure and attracting private sector investment in AI infrastructure should be the priority.” While developers appreciated this collaborative approach (AID1: “The government's approach enables meaningful dialogue and collaboration”), several stakeholders raised concerns about corporate influence. AS1 pointed out that India's National Strategy for AI acknowledges contributions from NVIDIA, Intel, IBM, McKinsey, and Accenture, while the Responsible AI document acknowledges Google experts, showing how major MNCs influence the national AI agenda. E1 stated: “The influence of tech giants on AI policy is undeniable. Although they drive innovation, their priorities may not always align with the broader public interest.” C2 (banker) added concerns about vendor lock-in: “The dominance of a few tech giants can stifle competition and innovation. In my sector, everyone uses Oracle's system where the dependence for any faults and bugs correction is too much.” These findings reveal tensions between leveraging private sector resources and maintaining policy autonomy, with questions about whose interests shape national AI strategy.

Ethical Considerations: Context-Specific Challenges

Stakeholders emphasized unique ethical considerations specific to India's context. C1 highlighted: “AI systems need to account for our diverse linguistic landscape and varying levels of digital literacy.” P4 added: “Designing trustworthy AI systems here requires understanding numerous cultural nuances. It's not just about language translation; it's about understanding how different communities interpret and interact with AI systems.” M1 contributed an important healthcare perspective on cultural attitudes toward medical decision-making: “In some rural regions of India, family members or community elders play a significant role in healthcare decisions, and patients may not be accustomed to making autonomous choices about their data.” Multiple stakeholders highlighted algorithmic fairness challenges specific to India's social structure. E1 stated: “Challenges such as unreliable data due to social disparities, double standards in ML products, and unquestioning aspiration towards AI must be addressed. Policies should consider issues like caste, religion, gender, and ethnicity biases in models.” AID3 noted: “The AI sector's claim of being ‘merit-based’ masks how merit itself is a function of caste privilege. We need to address the stark socio-economic disparities between Indian engineers and marginalized communities.”

Implementation Challenges

Stakeholders identified multiple structural challenges in implementing India's AI governance approach. P1 and P2 emphasized that traditional fairness metrics fail to account for India's complex social hierarchies and informal economy structures, noting that seemingly neutral data points like names, zip codes, and occupations can encode deep-rooted social biases in the Indian context. The digital divide emerged as a fundamental concern: AID1 highlighted that skewed datasets, in which the majority of Internet users are male, affect how AI systems learn and operate [4], while AID2 questioned how systems can serve linguistic diversity when only around 10% of Indians understand English [10]. C2 raised financial inclusion concerns about documentation barriers, noting that many AI-based services require extensive documentation that economically disadvantaged populations cannot provide, making them “document-poor” and systematically excluded.
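The concern that “neutral” fields act as proxies for protected attributes can be made concrete with a quick check: if knowing, say, a pin code lets you guess group membership far better than the overall base rate, the field is encoding that attribute. A minimal sketch of this idea follows; the data, field names, and group labels are hypothetical toy values, not from the study:

```python
from collections import defaultdict, Counter

def proxy_strength(feature, protected):
    """Compare two guessing strategies for a protected attribute:
    (1) guess the majority protected value within each feature value,
    (2) guess the global majority regardless of the feature.
    A large gap between the two means the feature proxies the attribute."""
    by_value = defaultdict(list)
    for f, p in zip(feature, protected):
        by_value[f].append(p)
    # Correct guesses when predicting per-feature-value majorities.
    per_value_hits = sum(Counter(g).most_common(1)[0][1]
                         for g in by_value.values())
    # Correct guesses when always predicting the global majority.
    baseline_hits = Counter(protected).most_common(1)[0][1]
    n = len(protected)
    return per_value_hits / n, baseline_hits / n

# Hypothetical records: pin code and a protected group label.
pins   = ["110001", "110001", "110001", "560100", "560100", "560100"]
groups = ["A", "A", "A", "B", "B", "A"]
proxy_acc, base_acc = proxy_strength(pins, groups)  # 0.83 vs 0.67
```

When `proxy_acc` clearly exceeds `base_acc`, dropping the protected attribute from a model does not remove the bias, because the proxy field carries the same signal, which is exactly the point P1 and P2 raise about names, zip codes, and occupations.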

AI Governance Policy Lifecycle

Policy Formulation
Stakeholder Engagement
Guideline Development
Implementation & Monitoring
Feedback & Iteration
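The five phases above form a loop rather than a one-way pipeline: feedback from monitoring feeds back into the next round of formulation. A minimal sketch of that iteration, with phase names taken from the list above:

```python
PHASES = [
    "Policy Formulation",
    "Stakeholder Engagement",
    "Guideline Development",
    "Implementation & Monitoring",
    "Feedback & Iteration",
]

def lifecycle(rounds=1):
    """Yield (round, phase) pairs; after Feedback & Iteration,
    the cycle re-enters Policy Formulation for the next round."""
    for r in range(1, rounds + 1):
        for phase in PHASES:
            yield r, phase

steps = list(lifecycle(rounds=2))  # two full passes through the loop
```

This mirrors how India's framework has in fact evolved, from the 2018 National Strategy through the 2025 MeitY guidelines, with each round incorporating stakeholder feedback into revised instruments.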


Your Responsible AI Roadmap

A phased approach to integrating ethical AI governance into your enterprise operations.

Phase 1: Governance Framework Assessment

Evaluate existing AI governance frameworks and identify gaps specific to your organizational context. Focus on ethical guidelines, data privacy, and accountability mechanisms.

Phase 2: Stakeholder Alignment & Policy Integration

Engage key internal and external stakeholders to align AI policy with business objectives and societal impact. Integrate responsible AI principles into existing corporate governance structures.

Phase 3: Technical Implementation & Auditing

Implement technical safeguards for fairness, transparency, and data security in AI systems. Establish robust auditing processes to monitor compliance and identify potential biases or risks.
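One simple starting point for the auditing step in this phase is a group-rate comparison such as the demographic parity gap: the spread between the highest and lowest positive-decision rates across groups. The sketch below uses hypothetical decisions and group labels to illustrate the computation; a real audit would use more metrics and real deployment data:

```python
def demographic_parity_gap(decisions, groups):
    """Positive-decision rate per group, and the max-min spread.
    decisions: 0/1 outcomes; groups: group label per decision."""
    rates = {}
    for g in set(groups):
        members = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(decisions[i] for i in members) / len(members)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval outcomes for two groups.
approved = [1, 1, 0, 1, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(approved, group)  # gap of 0.5
```

A gap near zero is necessary but not sufficient for fairness; as the study's stakeholders note, metrics like this must still be interpreted against India-specific proxies such as caste, language, and documentation status.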

Phase 4: Continuous Monitoring & Adaptive Governance

Establish a continuous monitoring system for AI performance, ethical compliance, and societal impact. Implement an adaptive governance model that evolves with technological advancements and emerging ethical challenges.

Unlock Your AI Potential Responsibly

Ready to navigate India's AI landscape with confidence? Schedule a personalized consultation to discuss how our expertise can help your enterprise leverage AI responsibly and effectively.

Ready to Get Started?

Book Your Free Consultation.

Let's Discuss Your AI Strategy!
