By Basim Ali

Edited by Muhammad Hani Ahsan

Graphic by Norie Wright and Arsh Naseer

According to Greek mythology, Prometheus gave humankind the gift of fire by stealing it from Zeus.1 This gift resulted in the advancement of society and the transformation of all aspects of human life. The mastery of fire contributed to human progress, but also to the destruction and annihilation of societies, as is evident from the many wars waged since the dawn of civilization. Thus, fire is an example of a discovery that was first accepted, then became essential, and finally became a matter of human volition.

Artificial Intelligence and Machine Learning (AI/ML) are akin to modern-day revelations ushering in an era of unprecedented transformation. AI is already being widely adopted and is becoming an essential part of our lives. AI can be defined as the ability of computers to replicate human cognition,2 while ML refers to the technologies and algorithms that enable computers to accomplish this.3 The ubiquitous and all-encompassing nature of AI significantly augments human research and has the potential to disrupt and transform existing norms. Even in ‘natural intelligence’ systems like bureaucracies, AI has started to be assimilated into working processes.

Bureaucracy and AI

As defined by Max Weber, bureaucracy is a form of organization distinguished by the prevalence of protocols and rules that are applied impersonally by specially trained officials.4 Bureaucracy replaced traditional and personal forms of rule, such as rule by divine sanction (Dei Gratia), by diktat, or even by whim, with a system characterized by the separation of powers, adherence to hierarchy, and rule-following. The bureaucratic system de-personalized governance through strict adherence to the ‘legally established impersonal order’.5 The English philosopher Thomas Hobbes, in his book Leviathan, describes the state as an ‘artificial man’,6 a metaphorical representation of the state or the ‘Commonwealth’. This artificial man is responsible for establishing a governing structure and administering a ‘social contract’7 in which individuals give up some of their freedoms in exchange for the protection and security provided by the Commonwealth. Interestingly, this Leviathan is represented by the office of the government, which is itself a legal entity that must be represented by a natural person.8 It is puzzling that the rationale for the Commonwealth, the seat of sovereignty, is the ‘law of the land’, which represents the will of the state and establishes and upholds its directives, yet the state itself amounts to nothing more than artifice.9 The bureaucracy is charged with giving effect to this enigmatic state; it is therefore rendered ‘dehumanized’, a character that is pragmatically accepted in the contemporary era.10

This Kafkaesque nature of bureaucracy calls for the use of human discretion to avert a state’s descent into dystopia. The crucial question, however, is this: when the genie of AI is assimilated into a byzantine entity like bureaucracy, how might the interactions unfold? When bureaucracy was adopted as a system of governance, it too was deemed ‘artificial’; over time, however, it became an essential part of our lives. The acceptance of any new phenomenon, especially technology, is governed by the rules that the English author Douglas Adams set out in his book The Salmon of Doubt,11 which are as follows:

  1. “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
  3. Anything invented after you’re thirty-five is against the natural order of things.”

Models For AI – Bureaucracy Interactions

The issue with AI is not limited to its newness; it extends to its omnipresent nature and generative capabilities. For individual-level or ‘street-level’ bureaucrats, as described by Michael Lipsky,12 discretion was instrumental in decision-making while implementing government policies. Information and Communication Technology (ICT) tools enhanced these decision-making abilities, transitioning ‘street-level’ bureaucracy into ‘screen-level’ bureaucracy.13 Decision-making is the central animating force of a bureaucracy, and with AI now adopting that mantle, there is significant concern about how traditional bureaucrats and AI will interact.

As per Richard Danzig, there could be four kinds of interaction between humans and AI, namely those of ‘subjects’, ‘users’, ‘partners’, and ‘overseers’. In the case of ‘subjects’, the engagement between humans and AI is reactive and not chosen.14 This interaction is analogous to the one between humans and bureaucracy: humans accept the opacity of the bureaucratic system as a given and do not question its inherent reductionism. Under the ‘user’ model, humans seek to leverage AI as a source of empowerment. They may not understand the underlying processes that enable these interactions; nonetheless, they use AI to increase efficiency and simplify life. The ‘partner’ model involves not only using the machine to perform complex tasks but also exercising independent decision-making capabilities outside the purview of AI. The partner model thus integrates the machine’s outputs with due consideration of factors beyond its capacity; the objective is to foster a symbiotic relationship between AI and humans, enabling decisions based on the effective assessment of multiple variables whose computation may be a challenge for human beings. The ‘subject’, ‘user’, and ‘partner’ interaction models are tactical, whereas the ‘overseer’ model is strategic. In the strategic model, humans are entrusted with regulating AI by assessing and mitigating systemic risk.

Even though these interaction models retain a degree of discretion, they too are vulnerable, as humans are not rational automatons.15 People have cognitive biases and may succumb to planning fallacies.16 These biases can be reflected in the code and algorithms that constitute ML models. Thus, the AI would carry an undercurrent of systemic bias because it was designed using flawed code and data. In such cases, another interaction model can emerge, which may be termed the ‘subjugated’ model. In this model, the apocalyptic vision of an all-controlling AI would be realized, and human discretion would be ineffectual. A version of this was shown in the film Mission: Impossible – Dead Reckoning Part One,17 where an all-controlling AI is poised to decide the fate of the world. The movie represents a scenario in which the AI exercises almost complete discretion while humans do its bidding.
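To make this point concrete, the short sketch below uses entirely synthetic, hypothetical data and a toy frequency-based ‘model’ (not any real administrative or commercial system) to show how a system that merely learns from biased historical decisions reproduces the same disparity in its own outputs.

```python
# A minimal sketch of bias inheritance, assuming purely synthetic data.
# If past approvals discriminated against group B, a system that simply
# learns approval rates from those records reproduces the disparity.

import random

random.seed(0)

def biased_history(n=10_000):
    """Synthetic past decisions: merit is equal across groups, but group B
    applicants were historically approved far less often than group A."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5            # equal merit in both groups
        if group == "A":
            approved = qualified and random.random() < 0.9
        else:
            approved = qualified and random.random() < 0.5   # historical bias
        records.append((group, approved))
    return records

def learn_rates(records):
    """A toy 'model' that learns per-group approval rates from history."""
    return {
        g: sum(appr for grp, appr in records if grp == g)
           / sum(1 for grp, _ in records if grp == g)
        for g in ("A", "B")
    }

print(learn_rates(biased_history()))
# Roughly {'A': 0.45, 'B': 0.25}: the historical bias is encoded, not corrected.
```

The point is not the arithmetic but the mechanism: unless a design explicitly corrects for the skew in the historical record, the ostensibly ‘objective’ system simply institutionalizes it.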

It cannot be denied that AI/ML is going to be an intrinsic part of governance in the near future. For bureaucrats and AI to complement each other, new procedures need to be deliberated upon, drawing from the ideas of ‘New Public Administration’ as highlighted by Dwight Waldo.18 This is a departure from the model of bureaucracy advocated by Max Weber, because in a digitally administered state, human values such as empathy and compassion need to be institutionalized in bureaucratic frameworks. This will ensure that society and government are ‘re-humanized’ and that the Pavlovian persuasions of AI are addressed.19

A classic example could be a system that encompasses elements from John Rawls’ ‘Veil of Ignorance’,20 Amartya Sen’s ‘Idea of Justice’,21 and Dwight Waldo’s ‘Map of Ethical Obligations’.22 Rawls’ theory is centered on political justice: for Rawls, liberty and equality were the central animating forces behind the basic structure of an orderly society. Behind his ‘veil of ignorance’, the principles of a just society are chosen by people who do not know their own position within it, which would minimize bias in the design of any system. Amartya Sen, in contrast to Rawls, focuses less on society’s fundamental institutions and more on “the actual societies that would ultimately emerge” from those institutions. For him, the traditional strain of political philosophy that seeks to identify ‘the just’, a single set of principles that can be used to design perfectly just institutions for governing society, reveals little about how we can identify and reduce injustices. He elucidates that democracy is ‘government by discussion’ rather than merely ‘government by elections’, focusing on the functioning of institutions and the capabilities of people. Waldo, for his part, elucidates the various obligations of public service, adhering to a vision that extends beyond self-interest.23 These obligations include the constitution, the law, the nation or country, democracy, organizational and bureaucratic norms, professionalism, family and friends, the self, the public interest, middle-range collectives, humanity, and religion or god. These imperatives should be enshrined in bureaucratic functioning and reflected in the integration of artificial intelligence into administrative decision-making.

The Ethical Advisory Model

Borrowing from Rawls, Sen, and Waldo, this paper proposes the Ethical Advisory Model to guide the adoption of emerging technologies. The model highlights the need for ethics, transparency, and accountability in an AI-enabled bureaucracy or technocracy. The goal is to leverage technology to advance social progress and make the bureaucracy more efficient without diluting ethics and morality. Using the ‘veil of ignorance’, the model would view society objectively and from a just perspective: any inclusion of AI must ensure not only that it is accessible to people but also that it is free from discrimination and bias. AI models in commercial use have shown systemic biases across racial and gender lines.24 Therefore, any model must be designed to approach decision-making from behind the veil of ignorance. Sen’s ‘Idea of Justice’ is also instructive, ensuring that institutional integrity and independence remain paramount. The bureaucracy must remain the ultimate decision-maker in public administration: AI models should be used to inform decision-making, but policymakers must not be subordinate to them. Additionally, constituents must not only be informed but must also have a say in how governments use AI and the impact it has on people, ensuring a government by discussion. Finally, integrating the obligations of civil service can ensure that the engagement of an AI-enabled bureaucracy with the populace is politically neutral, moral, and professional, and upholds the rule of law. AI models and their use in public service must be in line with ideals including democracy, organizational norms, professionalism, the public interest, middle-range collectives, and humanity.
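As a concrete, hypothetical illustration of this advisory arrangement, the sketch below shows a human-in-the-loop workflow: an AI system only recommends, its rationale and any fairness flags are recorded for transparency, and a human official always makes and signs the final decision. The names used here (Recommendation, ai_advise, human_decides) are illustrative assumptions, not an existing framework.

```python
# A hypothetical sketch of the Ethical Advisory Model's human-in-the-loop
# pattern: the AI advises, its reasoning is logged, and the official decides.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    case_id: str
    suggested_action: str
    rationale: str                                    # transparency: advice must be explainable
    bias_flags: list = field(default_factory=list)    # e.g. fairness-audit warnings

def ai_advise(case_id: str) -> Recommendation:
    """Stand-in for a model call; a real system would return its own output here."""
    return Recommendation(
        case_id=case_id,
        suggested_action="approve",
        rationale="applicant meets the published eligibility criteria",
    )

def human_decides(rec: Recommendation, official: str, decision: str, reason: str) -> dict:
    """The bureaucrat, not the model, is the ultimate decision-maker.
    Every decision is logged alongside the AI's advice for public audit."""
    return {
        "case": rec.case_id,
        "ai_suggestion": rec.suggested_action,
        "ai_rationale": rec.rationale,
        "bias_flags": rec.bias_flags,
        "final_decision": decision,      # may differ from the AI's suggestion
        "decided_by": official,
        "official_reason": reason,
    }

record = human_decides(
    ai_advise("case-001"),
    official="district officer",
    decision="approve",
    reason="eligibility verified independently of the model",
)
print(record)
```

The design choice worth noting is that the record keeps the AI’s suggestion and the official’s independent reasoning side by side, so later audits can detect both automation bias (rubber-stamping the machine) and arbitrary overrides.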

Conclusion

On 26 September 1983, at the height of the Cold War, the world almost witnessed a nuclear war between the United States and the Soviet Union.25 The Soviet early-warning system, codenamed Oko, or ‘Eye’,26 appeared to detect the launch of a pre-emptive American nuclear strike. The Soviets’ operating protocol was to retaliate with a nuclear strike of their own. The duty officer, Stanislav Petrov, whose job it was to register apparent enemy missile launches, decided not to report them to his superiors as an attack and instead dismissed them as a false alarm.27 Despite having data from the computer systems supporting the notion of an American attack, the officer chose to commit what could have been treated as treason and a dereliction of duty. He reported a system malfunction, which was indeed the case and was later confirmed during an inquiry. The incident demonstrates that, despite all the extrinsic analysis of data, intrinsic ‘human judgment’ is vital. That AI and bureaucracy will be compelled to collaborate is obvious; however, effective procedures and protocols are needed to ensure that the association between the two is productive and humane. In a manifesto issued on July 9, 1955, to warn the world about the dire consequences of a nuclear war, Bertrand Russell and Albert Einstein stated, “We have to learn to think in a new way”.28 Similarly, artificial intelligence can transform humanity for better and for worse. In the context of full bureaucracy-AI integration, the Ethical Advisory Model would ensure that the human considerations of justice, ethics, and obligations are accounted for in decision-making.

Bibliography

[1] Britannica. 2024. “Prometheus | God, Description, Meaning, & Myth.” Britannica. https://www.britannica.com/topic/Prometheus-Greek-god.

[2] Columbia Engineering. 2024. “Artificial Intelligence (AI) vs. Machine Learning | Columbia AI.” Columbia Engineering Artificial Intelligence certificate program. https://ai.engineering.columbia.edu/ai-vs-machine-learning/.

[3] Brown, Sara. 2021. “Machine learning, explained.” MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained.

[4] Legal Information Institute, Cornell Law School. 2022. “Bureaucracy.” Law.Cornell.Edu. https://www.law.cornell.edu/wex/bureaucracy.

[5] Kettl, Donald F. 2023. Politics of the Administrative Process. N.p.: SAGE Publications.

[6] Copp, David. 1980. “Hobbes on Artificial Persons and Collective Actions.” The Philosophical Review 89 (4): 575-606. https://doi.org/10.2307/2184737.

[7] Britannica. 2024. “Social contract | Definition, Examples, Hobbes, Locke, & Rousseau.” Britannica. https://www.britannica.com/topic/social-contract.

[8] Chwaszcza, Christine. 2012. “The Seat of Sovereignty: Hobbes on the Artificial Person of the Commonwealth or State.” Hobbes Studies 25 (2). Brill. https://brill.com/view/journals/hobs/25/2/article-p123_1.xml?language=en.

[9] Skinner, Quentin. 1999. “Hobbes and the Purely Artificial Person of the State.” Journal of Political Philosophy 7 (1): 1-29. https://doi.org/10.1111/1467-9760.00063.

[10] Danzig, Richard. 2022. “Machines, Bureaucracies, and Markets as Artificial Intelligences.” Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/machines-bureaucracies-and-markets-as-artificial-intelligences/.

[11] Adams, Douglas. 2002. The Salmon of Doubt: Hitchhiking the Galaxy One Last Time. Quotations via Goodreads. https://www.goodreads.com/work/quotes/809325-the-salmon-of-doubt-hitchhiking-the-galaxy-one-last-time.

[12] Lipsky, Michael. n.d. “Street-Level Bureaucracy, 30th Ann. Ed.: Dilemmas of the Individual in Public Service.” Russell Sage Foundation. https://www.jstor.org/stable/10.7758/9781610446631.

[13] Bovens, Mark, and Stavros Zouridis. 2002. “From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control.” Public Administration Review 62 (2): 174-184. https://doi.org/10.1111/0033-3352.00168.

[14] Danzig, Richard. 2022. “Machines, Bureaucracies, and Markets as Artificial Intelligences.” Center for Security and Emerging Technology.

[15] Bullock, Justin B. 2019. “Artificial Intelligence, Discretion, and Bureaucracy.” The American Review of Public Administration 49 (7): 751-761. https://doi.org/10.1177/0275074019856123.

[16] Kahneman, Daniel, and Amos Tversky. 1982. “Intuitive Prediction: Biases and Corrective Procedures.” Semantic Scholar. https://www.semanticscholar.org/paper/Intuitive-Prediction%3A-Biases-and-Corrective-Kahneman-Tversky/67ed8b4dea889ff81e897f89cc89653450f98dde.

[17] McQuarrie, Christopher, dir. 2023. Mission: Impossible – Dead Reckoning Part One.

[18] Lowery, George, and Dana Cooke. 2024. “Dwight Waldo Started It All.” Maxwell School. https://www.maxwell.syr.edu/news/article/dwight-waldo-started-it-all.

[19] Ploog, B.O. 2012. “Classical Conditioning.” Encyclopedia of Human Behavior 2:484–91. https://doi.org/10.1016/B978-0-12-375000-6.00090-2.

[20] Davies, Ben. n.d. “John Rawls and the “Veil of Ignorance” – Philosophical Thought.” OPEN OKSTATE. Accessed March 25, 2024. https://open.library.okstate.edu/introphilosophy/chapter/john-rawls-and-the-veil-of-ignorance/.

[21] Sen, Amartya. 2011. The Idea of Justice. N.p.: Harvard University Press.

[22] O’Leary, Rosemary. 2017. “The 2016 John Gaus Lecture: The New Guerrilla Government: Are Big Data, Hyper Social Media and Contracting Out Changing the Ethics of Dissent?” PS: Political Science & Politics 50 (1): 12-22. https://doi.org/10.1017/S1049096516002018.

[23] Getha-Taylor, Heather. 2009. “Where’s (Dwight) Waldo?” Public Performance & Management Review 32 (4): 574-578. https://www.jstor.org/stable/40586775.

[24] Pequeño IV, Antonio. 2024. “Google’s Gemini Controversy Explained: AI Model Criticized By Musk And Others Over Alleged Bias.” Forbes. https://www.forbes.com/sites/antoniopequenoiv/2024/02/26/googles-gemini-controversy-explained-ai-model-criticized-by-musk-and-others-over-alleged-bias/?sh=543743fd4b99.

[25] Aksenov, Pavel. 2013. “Stanislav Petrov: The man who may have saved the world.” BBC. https://www.bbc.com/news/world-europe-24280831.

[26] Shuster, Simon. 2017. “Stanislav Petrov, the Russian Officer Who Averted a Nuclear War, Feared History Repeating Itself.” Time. https://time.com/4947879/stanislav-petrov-russia-nuclear-war-obituary/.

[27] Ibid.

[28] Born, Max, Percy W. Bridgman, Albert Einstein, Leopold Infeld, Frederic Joliot-Curie, Herman J. Muller, Linus Pauling, et al. 1955. “Russell-Einstein Manifesto – Nuclear Museum.” Atomic Heritage Foundation. https://ahf.nuclearmuseum.org/ahf/key-documents/russell-einstein-manifesto/.

Written by Basim Ali