AI regulation requires a balanced, risk-based governance approach that prevents social harm without stifling innovation, rather than rigid, static rules.
Artificial Intelligence (AI) is an umbrella term for a constellation of technologies that enable machines to simulate human intelligence and emulate the cognitive capabilities of sensing, comprehending, and acting.
It is not a single technology, but rather an interdisciplinary field leveraging techniques like machine learning, natural language processing, and computer vision.
As AI increasingly drives consequential decisions like hiring and credit, governments are mandating oversight to address systemic economic, security, and societal risks.
Bias and Civil Rights: Unregulated AI can perpetuate and scale human biases; for example, algorithmic systems used in hiring and credit scoring have been documented to discriminate against marginalized groups based on historical data flaws.
Threats to Democracy: Use of AI-generated "deepfakes" and micro-targeting algorithms can spread industrial-scale disinformation, undermining the integrity of public elections and social cohesion.
Physical and Digital Safety: Without safety standards, AI in critical infrastructure (like power grids or autonomous transport) could lead to catastrophic failures or be exploited for large-scale cyberattacks.
Privacy and Mass Surveillance: AI enables persistent, automated facial recognition and biometric tracking, which can lead to a "chilling effect" on fundamental freedoms and the right to privacy.
Accountability Gap: In the absence of regulation, it is legally unclear who is responsible when an autonomous system causes harm—the developer, the data provider, or the end-user.
"Black Box" Problem: Many AI systems operate with algorithmic opacity, making it incredibly difficult to explain how or why a model reached a specific decision, which undermines due process and accountability.
Rapid Evolution and Agentic AI: Technology is advancing faster than the law, moving from generative AI to "agentic AI"—autonomous systems capable of executing tasks and making decisions with minimal human intervention.
Global Fragmentation: There is no universal standard. The divergence between the EU’s strict risk-based approach, the U.S.’s decentralized state-level approach, and China’s state-centric model has created a fragmented global landscape, leading to "AI governance arbitrage" where developers exploit regulatory gaps.
Dual-Use Dilemma: AI is a general-purpose technology; the same algorithm used for medical imaging (beneficial) can be repurposed for unauthorized biometric surveillance (harmful), making it hard to ban the technology without stifling innovation.
Transnational Nature: AI development and data flows happen across borders, but laws are territorial. This leads to "regulatory arbitrage," where companies may move operations to jurisdictions with the weakest oversight.
The Risk-Based / Rights-Based Approach (European Union)
The EU's AI Act is the world's first comprehensive horizontal law on AI. It categorizes applications into four levels of risk: unacceptable risk (banned outright, such as social scoring), high risk (subject to strict conformity and audit requirements, such as AI in hiring or medical devices), limited risk (transparency obligations), and minimal risk (largely unregulated).
The Sectoral / Innovation-First Approach (United States)
The U.S. currently avoids a single overarching AI law. Instead, it uses a decentralized approach where existing agencies (like the FDA for health) apply rules to AI within their specific sectors, guided by the Executive Order on Safe, Secure, and Trustworthy AI.
The State-Led / Security-Focused Approach (China)
China has implemented specific regulations targeting "Deep Synthesis" (deepfakes) and "Recommendation Algorithms." These rules focus on ensuring that AI-generated content aligns with national security laws and does not cause social disruption.
The Principles-Based Approach (OECD)
Many nations follow the OECD AI Principles, which focus on "soft law"—guidelines that encourage transparency, safety, and accountability without necessarily imposing heavy fines initially.
"AI for All" Strategy: Led by NITI Aayog, this inclusive philosophy prioritizes AI in agriculture, healthcare, and education to drive social development.
Hybrid Regulatory Model: Instead of a single "AI Act," India combines sectoral advisories with the Digital Personal Data Protection (DPDP) Act, 2023, as the foundation for training data regulation.
Advisory-Led Governance: Ministry of Electronics and Information Technology (MeitY) issues advisories requiring platforms to prevent AI bias, discrimination, and threats to electoral integrity.
IndiaAI Mission: Government approved ₹10,372 crore to develop sovereign AI capacity and an "AI Marketplace," combining regulation with state-backed growth.
Global Leadership: As Global Partnership on Artificial Intelligence (GPAI) Chair, India promotes "sovereign AI" and technology democratization to protect Global South interests.
Synthetic Media Interventions: Amending the IT Rules in 2026 to mandate strict labeling, traceability, and automated takedowns of harmful "Synthetically Generated Information".
Technical Expertise Shortage: India lacks sufficient government experts to perform algorithmic auditing and identify code-level biases.
Infrastructure and Compute Dependency: India's AI growth depends on foreign hardware and cloud services. Balancing regulation with the need to attract these digital infrastructure investments remains a challenge.
Legal Ambiguity on Liability: The Information Technology Act, 2000 lacks specific provisions for generative AI, leaving it unclear whether developers, deployers, or users are liable for harmful autonomous decisions.
Data and Diversity Issues: AI trained on Western data often lacks Indian cultural and linguistic nuances. A shortage of high-quality indigenous data makes regulating for accuracy and bias difficult.
IP and Copyright Uncertainty: The Copyright Act, 1957 is outdated, failing to recognize AI authorship or provide training-data exemptions, leading to major lawsuits such as ANI vs OpenAI.
Systemic Hurdles: India faces limited judicial expertise for technical evidence, reliance on international infrastructure, and a critical lack of specialized AI researchers.
Guidance from bodies such as the OECD, the G20, and NITI Aayog suggests that an "ideal" framework should be Adaptive, Proportionate, and Human-Centric.
Risk-Based Categorization
Instead of a "one-size-fits-all" law, the framework should categorize AI by its potential for harm. Low-risk AI (like spam filters) should have minimal rules, while high-risk AI (like medical diagnosis) should face rigorous auditing.
Transparency and "Explainability"
An ideal framework requires "Algorithmic Transparency." If an AI denies a loan or a job, the system must be able to provide a human-readable explanation of why that decision was made.
Human-in-the-Loop (HITL)
For decisions in healthcare, law, or warfare, the framework must mandate that a human has the final say and the ability to override the AI system.
Interoperability
To prevent "regulatory silos," an ideal framework should be compatible with international standards, allowing AI companies to operate across borders without following different sets of rules.
Accountability and Liability
The framework must clearly settle questions of liability and any "legal personhood" for AI, clarifying whether the developer, the data provider, or the user is responsible when an autonomous system causes damage.
Continuous Monitoring
Because AI learns and changes after it is deployed, regulation should not be a one-time "license" but a continuous monitoring process involving regular safety audits.
Establish a Statutory Authority
Form the Artificial Intelligence and Data Authority of India (AIDAI), guided by a Multi-Stakeholder Body (MSB), to act as the apex regulator to oversee AI categorization, formulate ethical codes, and manage national data digitization.
Strengthen Infrastructure & Research
Operationalize the Sovereign AI Stack by investing in indigenous compute power (e.g., the AIRAWAT cloud platform) and establishing Centers of Research Excellence (COREs) and International Centers for Transformational AI (ICTAIs).
Pass Targeted Legislation
Advance frameworks like the Digital India Act and the Artificial Intelligence (Ethics and Accountability) Bill to create statutory backing for bias audits, algorithmic transparency, and deepfake regulation.
Scale Skill Development
Overhaul the education system to integrate AI ethics and technical skills at all levels, while providing financial incentives to reskill the existing workforce to prevent job displacement.
Transition to a "Risk-Based" Statutory Framework
Move from temporary "advisories" to a clear legal statute that categorizes AI based on risk levels—similar to the EU model but customized for the Indian context to ensure it doesn't stifle small-scale innovation.
Focus on "Indo-Centric" AI
Prioritize training Large Language Models (LLMs) in regional Indian languages (for example, through the Bhashini project) so that AI benefits are not restricted to English speakers and remain culturally relevant.
International Alignment
India should leverage its Global Partnership on AI (GPAI) leadership to advocate for Global South priorities, such as data sovereignty and labor protection, in universal AI standards.
To secure a projected $1 trillion economic boost by 2035 while mitigating systemic risks such as bias and privacy loss, India must adopt a "Goldilocks" regulatory framework: a proportionate, risk-based approach that protects citizen rights without stifling innovation or India's bid for global leadership.
Source: INDIAN EXPRESS
PRACTICE QUESTION: Critically analyze the divergent global approaches to AI regulation, explicitly comparing the European Union's risk-based model with India's evolving strategy. (150 words)
AI regulation is necessary to mitigate severe socio-economic risks, such as the spread of deepfakes, electoral manipulation, algorithmic bias, copyright infringement, and job displacement, while ensuring that the technology benefits the economy securely and ethically.
Approved by the Union Cabinet, the IndiaAI Mission is an initiative with a ₹10,371.92 crore budget designed to build sovereign AI computing capacity by deploying 38,000 GPUs and creating a national datasets platform to foster an indigenous AI ecosystem.
The Collingridge Dilemma captures the pacing problem between innovation and law: in a technology's early stages its risks cannot be fully foreseen, yet technology evolves exponentially while regulation evolves linearly, so by the time governments understand an AI system's risks it is already too embedded in society to regulate easily.