THE ETHICS OF THE AUTOMATED ADMINISTRATIVE STATE

22nd November, 2021

Introduction

  • The proliferation of artificial intelligence and algorithmic decision-making has shaped myriad aspects of our society: from facial recognition to deepfake technology to criminal justice and health care, their applications seem endless.
  • Artificial intelligence is gradually replacing bureaucratic discretion in important executive decisions and government functions.
  • But the story of applied algorithmic decision-making is one of both promise and peril.

 

The Indian context

  • The 2018-19 Economic Survey lays out an ambitious roadmap for the Government of India to use data ‘of the people, by the people, and for the people’.
  • Part of the Survey is dedicated to praising the Samagra Vedika initiative of the Government of Telangana, described as a scheme that integrates data across government databases.
  • The Samagra Vedika system utilises artificial intelligence (AI) to make predictions about people’s behaviour, and uses these predictive analytics to ultimately process applications for welfare schemes.
  • More specifically, Samagra Vedika has been used to identify the eligibility of welfare beneficiaries and remove potentially fraudulent or duplicate beneficiaries.

Ethical Issues associated with Algorithmic Decision Making in Public Administration

Not in line with Public Values in Administration

  • Administrative decision-making is governed by principles of transparency and accountability, intended to keep a check on arbitrary executive actions.
  • Transparency, accountability and democratic participation form the core values in administration.
  • However, when administrative decisions are taken over by systems that rely on data and complex algorithmic analysis, the logic behind a decision cannot easily be interrogated to assess its reasonableness or fairness.
  • In machine learning, the system’s logic is constantly changing, and the use of multiple data points from various sources obscures whether a particular kind of data was relevant, reasonable or fair to use.

 

For example, when ‘data-based’ systems are used to decide how to allocate policing resources, the factors driving those allocations are rarely open to scrutiny.
By its own description, the use of the Samagra Vedika system in 2016 for removing so-called fraudulent ration cards led to the cancellation of 100,000 cards.
Subsequently, ‘public resistance’ to the cancellation led to the re-addition of 14,000 cards.

 

Unaccountable process

  • The concept of ‘natural justice’ and due process establishes specific procedural safeguards to ensure that decisions made are fair and accountable.
  • These include the requirement to give notice and the duty to provide an explanation and justification for a decision.
  • However, the use of AI in decision-making processes again fundamentally alters how natural justice and procedural safeguards should be applied.
  • Decisions made with the use of AI are not always interpretable or explainable in a way that can allow affected individuals to understand and contest them.

 

Lack of objectivity

  • Amazon's failed attempt to develop a hiring algorithm driven by machine learning: Amazon scrapped its internal recruitment AI once it came to light that the system was biased against women.
  • The International Baccalaureate's and the UK's A-Level exams: the AI did not actually correct any papers; it only produced final grades based on the data it was fed, which included teacher-corrected coursework and predicted grades.
  • In all of these cases, the algorithms introduced to automate decisions identified patterns of bias in the data used to train them and reproduced those patterns.
  • A recent study published in Science (the journal of the American Association for the Advancement of Science) found racial bias in a widely used machine learning algorithm intended to improve access to care for high-risk patients with chronic health problems.
  • Researchers estimated that, as a result of this embedded bias, the number of Black patients identified for extra care was reduced by more than half.
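The mechanism underlying these cases can be illustrated with a minimal sketch. The data, the `"womens"` keyword flag, and the 0.5 threshold below are all hypothetical, chosen only to show how a system "trained" on biased historical decisions simply learns and reproduces that bias:

```python
# Hypothetical illustration: a model trained on biased historical
# hiring records reproduces the bias in its own predictions.
from collections import defaultdict

# Hypothetical records: (keyword_flag, hired). The 'womens' flag stands
# in for CVs containing words like "women's chess club captain".
history = [
    ("womens", 0), ("womens", 0), ("womens", 1), ("womens", 0),
    ("other", 1), ("other", 1), ("other", 0), ("other", 1),
]

# "Training": learn the historical hire rate for each feature value.
groups = defaultdict(list)
for feature, hired in history:
    groups[feature].append(hired)
model = {f: sum(v) / len(v) for f, v in groups.items()}

def predict(feature):
    # The model merely replays the historical pattern: it scores
    # 'womens' CVs lower because past decisions did.
    return model[feature] >= 0.5

print(predict("other"))   # True  (historical hire rate 0.75)
print(predict("womens"))  # False (historical hire rate 0.25)
```

Nothing in the code mentions gender explicitly; the disparity enters entirely through the labels, which is why auditing the training data, not just the algorithm, matters.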


Lack of appeal in absence of specific law

  • In the absence of any specific law and policy on automated decisions in India, there is no structural manner in which obligations in policy making can be enforced.
  • The failure of such protections has been observed in a number of cases where government agencies use AI or algorithmic systems, from the denial of benefits using Aadhaar to the cancellation of voter ID cards through the Election Commission of India's NERPAP algorithmic system.

 

No Emotions

  • There is no doubt that machines work more efficiently, but they cannot replace the human connection.
  • Machines cannot develop a bond with human beings, which is an essential attribute of public administration.
  • As Daniel Goleman argued, emotional intelligence can ‘matter more than IQ’. Empathy, self-awareness and other emotional capacities are of utmost importance for better policy targeting.

 

Outsourcing of public welfare policies

  • Most applications of AI in administration are through the procurement of AI technologies from private vendors—whether the data that is used or the algorithmic processes and software.
  • In doing so, government agencies are outsourcing to private vendors not only the creation of technologies, but also the process of making policy decisions.
  • These private vendors are currently not guided by any obligations to make technologies whose outcomes are fair, transparent, accountable and participatory in ways that conform to democratic values.

 

Lack of Out-of-the-Box Thinking

  • Machines can perform only those tasks which they are designed or programmed to do; faced with anything outside that scope, they tend to fail or produce irrelevant outputs, which is a major drawback.

 

Way Ahead

  • The government should tread carefully when considering the application of AI, Big Data and predictive analytics for making consequential decisions, and should remain attuned to the limitations and consequences of these systems.
  • There is an urgent need to revisit and reframe the application of principles of fair and reasonable decision-making under Indian administrative law, both by courts and through regulatory mechanisms, such as creating notice and due process requirements for the use of AI-based decision-making (as in the EU’s General Data Protection Regulation), or creating processes for intervening in the procurement of AI systems (as attempted by the Tamil Nadu Safe and Ethical AI Policy).
  • As the government goes forward with developing its policies for ‘ethical AI’, it must keep in view how the use of AI in the crucial context of the administrative state can be governed so that important democratic values are not compromised.