ARTIFICIAL INTELLIGENCE (AI) IN THE LEGAL SYSTEM: CONCERNS AND OPPORTUNITIES

The Supreme Court raised concern over unchecked AI use in legal drafting after fake citations surfaced. While tools like SUVAS and SUPACE aid administration, accuracy remains paramount. Citing risks of bias and privacy breaches, it urged human oversight and sovereign AI safeguards.


Context

The Supreme Court raised concerns about legal professionals using Generative AI for drafting due to risks associated with accuracy, ethics, and reliability.


What are the Key Observations made by the Supreme Court?

Fictitious Precedents: A lawyer wasted judicial time and misled the court by citing a fabricated, AI-generated case, ‘Mercy vs Mankind’.

Threat to Justice: The Chief Justice of India warned that AI must not "overpower the justice administration process," insisting that AI's convenience must not compromise accuracy and truth.

Professional Responsibility: Relying on unverified AI output violates the professional duty of lawyers to present factual, verified information to the court.

What are the Risks of Unregulated AI in the Legal System?

AI Hallucinations & Misinformation

Generative AI models predict text probabilistically and can confidently invent facts.

  • Case Study: In Mata v. Avianca (2023), a US lawyer was fined for citing six non-existent cases generated by ChatGPT.
  • In India, the ‘Mercy vs Mankind’ incident shows a similar credibility crisis emerging.
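The probabilistic-prediction point above can be shown with a deliberately tiny sketch. The token table and probabilities below are invented for illustration; a real language model does the same thing at vastly larger scale: it scores continuations by likelihood, never by factual accuracy.

```python
import random

# Toy next-token model: plausible continuations with made-up probabilities.
# Note there is no "is this true?" check anywhere in the pipeline.
next_token = {
    ("the", "case"): [("of", 0.6), ("law", 0.3), ("titled", 0.1)],
    ("case", "of"): [("Mercy", 0.5), ("Smith", 0.5)],  # both look equally "valid"
}

def sample(pair, seed=None):
    """Pick a continuation weighted by probability -- no fact-checking occurs."""
    rng = random.Random(seed)
    tokens, weights = zip(*next_token[pair])
    return rng.choices(tokens, weights=weights)[0]

# The model emits a citation-shaped phrase whether or not the
# underlying case actually exists.
print("the case", sample(("the", "case"), seed=1))
```

Because only likelihood drives the output, a fluent but fictitious citation such as 'Mercy vs Mankind' is a natural failure mode, not a rare glitch.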

Algorithmic Bias & Discrimination

AI models trained on historical data can perpetuate societal biases against certain groups.

  • A study by National Law School of India found that legal AI models can associate criminality with specific genders or races, violating Article 14 (Right to Equality).

Data Privacy & Confidentiality

Using public AI tools can expose sensitive client data to third parties, violating both the Digital Personal Data Protection Act, 2023 and professional norms of client confidentiality.

Abdication of Professional Duty

The Bar Council of India (BCI) Rules mandate that advocates must be truthful to the court. Blindly relying on AI amounts to professional negligence and misconduct.

Constructive Applications of AI in the Indian Judiciary

The Supreme Court raised concerns about using Generative AI for legal reasoning but supports its application to enhance administrative efficiency and expand access to justice.

Supreme Court Vidhik Anuvaad Software (SUVAS): An AI tool for translating Supreme Court judgments into regional languages. As of March 2025, it had translated 36,344 judgments into Hindi and 47,439 into other regional languages. (Source: PIB)

Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE): A tool for judges that processes case files and facts to aid in research and case management. It acts as a "force multiplier," not a decision-maker.

Phase III of the e-Courts Project: The Union Cabinet approved ₹7,210 crore for the 3rd phase focused on using technology, including AI, for digital case management and scheduling to address a backlog of over 5 crore cases. (Source: PIB).

Way Forward: Towards Responsible AI in Law

Adopt a "Human-in-the-Loop" Model

AI should be used for tasks like data retrieval and translation, but never for legal reasoning or drafting without rigorous human verification.
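The "Human-in-the-Loop" gate described above can be sketched as a simple pre-filing check: every citation in an AI-assisted draft is compared against a verified index, and anything unrecognised is routed to a human for verification. The index and draft below are illustrative stand-ins, not a real legal database.

```python
# Illustrative verified index -- a real system would query an
# authoritative law reports database, not a hard-coded set.
VERIFIED_CASES = {
    "Kesavananda Bharati v. State of Kerala (1973)",
    "Maneka Gandhi v. Union of India (1978)",
}

def review_draft(citations):
    """Return the citations a human must verify before the filing proceeds."""
    return [c for c in citations if c not in VERIFIED_CASES]

draft = [
    "Maneka Gandhi v. Union of India (1978)",
    "Mercy vs Mankind",  # the AI-invented precedent from the article
]
print("Flagged for human verification:", review_draft(draft))
```

The design point is that the AI's output never reaches the court unmediated: the human check is a mandatory stage of the workflow, not an optional review.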

Issue Judicial SOPs

The Supreme Court should formulate clear Standard Operating Procedures (SOPs) for the use of Generative AI in legal practice.

Update Legal Education

The Bar Council of India and law schools must incorporate "Legal Tech Ethics" into the curriculum to educate future lawyers on the capabilities and limitations of AI.

Develop a Sovereign Legal AI

India needs a secure, closed-loop AI model, trained solely on verified Indian legal data. This would prevent hallucinations and align with the NITI Aayog's 'AI for All' National Strategy.

Learn from Global Best Practices for Regulating Legal AI

  • United States: Many federal courts now require lawyers to certify that any AI-generated content in their filings has been verified by a human.
  • European Union (EU): The EU AI Act (2024) designates AI in the "administration of justice" as "High Risk," requiring strict human oversight, accuracy testing, and conformity assessments before use.
  • Brazil: The Brazilian Supreme Court uses the AI tool VICTOR for classifying appeals, but human judges retain final decision-making power for accountability.

Conclusion

The Supreme Court's firm stance reminds us that technology must serve the rule of law. AI can boost judicial efficiency, but its adoption requires "Responsible AI" principles. The goal is to strengthen justice without compromising truth and fairness.

Source: THE HINDU

PRACTICE QUESTION

Q. Technology must remain a servant to the rule of law, not its master. Discuss. 150 Words

Frequently Asked Questions (FAQs)

What are AI hallucinations?

AI hallucinations occur when Generative AI models (like ChatGPT) create confident but entirely false information, such as inventing non-existent case laws, dates, or citations, because they predict words based on probability rather than verifying facts.

What are SUVAS and SUPACE?

SUVAS (Supreme Court Vidhik Anuvaad Software) is an AI tool used for translating legal judgments into regional languages. SUPACE (Supreme Court Portal for Assistance in Court’s Efficiency) is an AI tool designed to assist judges in case management and processing facts.

What is the "Human-in-the-Loop" model?

It is a framework where AI is used to assist—such as retrieving cases or translating text—but humans (lawyers and judges) must verify the output. AI should never replace human reasoning or final decision-making in legal processes.
