Lecturer in Criminology and Business at Fairfield School of Business, Croydon Campus | Article Date: 15/10/2025
We are at an unprecedented turning point in human history, marked by the swift and widespread integration of Artificial Intelligence (AI) into the very fabric of our daily lives. From algorithmic curation of news and social media feeds to autonomous vehicles navigating public roads, from diagnostic tools in healthcare to automated trading systems in finance, AI systems are increasingly entrusted with decisions that hold significant moral weight and profound consequences for individuals and society. This transition signifies a shift from tools that enhance our physical abilities to systems that actively augment, and in some cases replace, human judgment. This new era of delegated decision-making presents us with a vital and urgent question: how can we ensure that these powerful artificial agents operate in a manner consistent with human ethical principles?
The significant challenge of turning human morals into a clear, operational language for machines has become a key issue at the crossroads of computer science, philosophy, law, and ethics. The motivation for this inquiry stems from the increasing capabilities and autonomy of AI systems. Early AI functioned mainly as a passive tool, requiring direct human initiation for each action. Modern machine learning has produced systems that learn independently from data, identify complex patterns invisible to the human eye, and make decisions based on probabilistic inferences that are often inscrutable even to their creators. This autonomy is a double-edged sword: it offers significant benefits in efficiency, scalability, and the ability to solve complex problems, yet it also removes the human from the loop at precisely the moments when moral judgment matters most.
According to Laudon (2001), ethics is fundamentally about choosing rightly among many possible options. We now face the challenge of building systems that can make these choices for us, yet we have no straightforward way to express what "rightness" means in code. The task is complex, both philosophically and practically. As discussed in this article, human morality is not a simple, unified set of rules. Instead, it is a complex landscape shaped by deontological duties (which treat actions as inherently right or wrong), consequentialist views (which judge rightness by outcomes), and virtue ethics (which focuses on the character of the moral agent). Emotions, cultural contexts, and personal experiences also play a crucial role.
Morality often involves navigating "grey areas" where clear answers are hard to find, as McGrath and Gupta (2018) point out. Attempting to compress this complex moral fabric into a set of computable rules risks oversimplification, producing rigid systems that may be technically correct but ethically blind. Furthermore, the nature of intelligence itself is central to this challenge. As the comparative analysis in this article will demonstrate, a fundamental chasm exists between human cognition and artificial intelligence.
Human cognition is embodied, conscious, and general: it transfers learning across domains and grasps the semantic meaning and emotional weight of concepts. AI, by contrast, manipulates statistical patterns without any such understanding. Powers (2006) offers a crucial pragmatic perspective, arguing that a simulacrum of ethical deliberation may be not only sufficient but necessary, since many humans themselves routinely fail to meet a higher standard of ethical reasoning.
Underpinning this entire investigation is a central research question that guides the analysis: How can a robust framework for codifying morality into AI be developed to ensure ethical alignment?
This question recognises that the technical challenge of codification is closely linked to its societal impact. It is not sufficient to develop an ethically aligned AI in a lab; the system must be designed to be transparent enough to earn public trust, auditable enough for government regulation, and accountable enough to define the responsibilities of the companies that implement it.
The attempt to formalise morality for AI relies on a key, often implicit, assumption: that human ethical reasoning can be sufficiently structured to be encoded into a computational process. This pursuit prompts us to revisit centuries of philosophical thought to extract practical principles. As Laudon (2001) notes, ethics primarily involves the decision-making of autonomous agents when choosing among conflicting options and goals. This perspective offers an essential starting point, positioning ethics not as a fixed set of rules but as an evolving process, a characteristic that any computational model must incorporate.
The effort to formalise moral behaviour has deep roots. Forsyth and O'Boyle (2011) correctly cite the Code of Hammurabi as an early example of turning moral intuitions about justice, responsibility, and consequences into clear, enforceable laws with specific punishments. This reflects a deontological approach to codification: a set of inviolable rules ("thou shalt not steal") that an agent must follow. Such an approach offers clarity and predictability, qualities highly valued for regulatory compliance and auditability in AI systems. However, a rigid deontological framework faces the well-known problem of normative conflict. What should an agent do when two rules collide in a novel situation, for instance when telling the truth would expose a person to harm? A purely rule-based system risks extreme inflexibility in exactly those cases.
This calls for a second philosophical perspective: consequentialism, under which the morality of an action is judged by its outcomes. A consequentialist approach for an AI would involve programming it to estimate the potential benefits and harms of its actions against a set of weighted metrics (such as well-being, economic efficiency, and harm minimisation) and to choose the option that optimises the overall "good". This method provides the flexibility that strict rules lack. Yet pure consequentialism has a failure mode of its own: an unconstrained optimiser may be willing to sacrifice individual rights whenever the aggregate numbers appear to favour it.
The first step in codification, then, is not selecting a single philosophy but recognising that any practical system needs a hybrid structure: a deontological "constitution" of fundamental, non-negotiable prohibitions (such as "do not manipulate human autonomy"), combined with a consequentialist "calculus" for cases where no hard rule is broken but trade-offs must still be made between lesser harms and greater benefits.
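To make this hybrid structure concrete, the minimal Python sketch below shows one possible shape for it: hard "constitutional" prohibitions filter the candidate actions, and a weighted outcome score ranks whatever survives. Every rule name, metric, and weight here is a hypothetical placeholder, not a proposal for actual values.

```python
from dataclasses import dataclass, field

# Hypothetical representation of a candidate action and its estimated outcomes.
@dataclass
class Action:
    name: str
    violates: set = field(default_factory=set)    # hard rules the action would break
    outcomes: dict = field(default_factory=dict)  # metric name -> estimated score

# Deontological "constitution": non-negotiable prohibitions (illustrative labels only).
CONSTITUTION = {"manipulate_human_autonomy", "cause_physical_harm", "deceive_user"}

# Consequentialist "calculus": weights over valued metrics (illustrative values only).
WEIGHTS = {"well_being": 0.5, "economic_efficiency": 0.3, "harm_avoided": 0.2}

def choose_action(candidates):
    """Filter out any action that breaks a constitutional rule, then rank
    the remainder by a weighted sum of its estimated outcomes."""
    permitted = [a for a in candidates if not (a.violates & CONSTITUTION)]
    if not permitted:
        return None  # every option breaks a hard rule: escalate to a human
    return max(permitted,
               key=lambda a: sum(w * a.outcomes.get(m, 0.0) for m, w in WEIGHTS.items()))

options = [
    Action("nudge_user_covertly", violates={"manipulate_human_autonomy"},
           outcomes={"economic_efficiency": 0.9}),
    Action("recommend_with_explanation",
           outcomes={"well_being": 0.7, "economic_efficiency": 0.6, "harm_avoided": 0.8}),
]
best = choose_action(options)
print(best.name if best else "escalate to human review")
```

The sketch also makes the real difficulty visible: the code itself is trivial, while the contested work, deciding what belongs in the constitution, how outcomes are estimated, and who sets the weights, remains an entirely human responsibility.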
A key philosophical objection to the entire project of machine ethics, as highlighted by Nath and Sah (2019), is rooted in Kantian philosophy. Immanuel Kant argued that an action has true moral worth only if a rational agent performs it out of a sense of duty and with the right intention. An automaton, merely executing code without understanding, consciousness, or genuine intentionality, cannot be regarded as "moral" in this profound sense: it can only act in accordance with duty, not out of duty. This objection, however, need not end the project; it simply relocates its ambition. The goal is not, and perhaps cannot be, to create a genuine moral agent endowed with consciousness and free will. The aim is to create systems whose outputs are ethically sound, trustworthy, and beneficial, even if their internal workings are a sophisticated imitation of human moral reasoning rather than the genuine article.
The Three Laws of Robotics – Isaac Asimov
It is generally acknowledged that AI now plays a vital role in financial systems, enhancing fields such as algorithmic trading, fraud detection, and risk management. Nevertheless, as AI systems become more autonomous, ethical questions about their decision-making processes have grown more urgent. Isaac Asimov's Three Laws of Robotics, introduced in his 1942 short story "Runaround" and later collected in I, Robot, remain a touchstone for thinking about the governance of intelligent machines, including in the financial industry (Asimov, 1950). Though created for fictional robots, the laws have sparked ongoing debate about AI safety, responsibility, and regulation in finance (Bostrom & Yudkowsky, 2014). Asimov's Three Laws of Robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov, 1950).
These principles emphasise human safety, obedience, and self-preservation, values that have been carried into modern AI ethics (Wallach & Allen, 2008). In finance, where AI-driven choices can influence markets, investments, and consumer confidence, the laws offer a philosophical foundation for guiding AI systems to act in humanity's best interest (Brundage et al., 2018). Their influence extends to European Union law: an echo of Asimov's First Law can be seen in the EU AI Act, specifically in Article 5, which states:
EU AI Act, Article 5 – The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
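Read as an engineering idea rather than fiction, Asimov's laws describe a strict priority ordering: a lower-ranked rule gives way whenever it conflicts with a higher-ranked one. The short Python sketch below illustrates that ordering through lexicographic comparison; the harm, obedience, and self-risk fields are hypothetical stand-ins for predictive models that do not exist in any reliable form.

```python
# Illustrative only: each "law" scores how badly a candidate action violates it;
# the field names stand in for harm and obedience models that are assumed, not real.
def first_law(action):
    return action["predicted_human_harm"]          # harm to humans, including via inaction

def second_law(action):
    return 0.0 if action["follows_human_order"] else 1.0  # degree of disobedience

def third_law(action):
    return action["self_damage_risk"]              # risk to the system's own existence

PRIORITY = [first_law, second_law, third_law]      # strict ordering: Law 1 > Law 2 > Law 3

def choose(candidates):
    """Lexicographic choice: minimise First-Law violations first, then Second, then Third."""
    return min(candidates, key=lambda a: tuple(law(a) for law in PRIORITY))

# Conflict case: obeying the order would cause harm. Tuple comparison settles the
# conflict at the First Law, so the disobedient (but harmless) option is chosen.
obey   = {"name": "obey",   "predicted_human_harm": 0.8, "follows_human_order": True,  "self_damage_risk": 0.0}
refuse = {"name": "refuse", "predicted_human_harm": 0.0, "follows_human_order": False, "self_damage_risk": 0.0}
print(choose([obey, refuse])["name"])  # -> refuse
```

The ordering itself is easy to encode; the genuinely hard part is the predicates, that is, deciding what counts as "harm" or "manipulation", which is exactly the line that provisions such as Article 5 attempt to draw in law rather than in code.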
Thus, I proceed to answer the research question: How can a robust framework for codifying morality into AI be developed to ensure ethical alignment?
The answer lies in a multi-faceted approach. A robust framework for codifying morality into AI must integrate top-down ethical principles, such as those articulated by the United Nations (UN) or the Institute of Electrical and Electronics Engineers (IEEE), with bottom-up, context-specific learning from diverse human feedback. Technically, this involves value-learning algorithms, constitutional AI, and rigorous testing for harmful bias. Crucially, the process must be iterative, transparent, and inclusive, drawing on global and multicultural perspectives to avoid entrenching a single cultural bias. The goal is not a static, perfect moral code, but a dynamic system that can learn, explain its reasoning, and remain aligned with evolving human values under human oversight.
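As one concrete illustration of the "rigorous testing for harmful bias" mentioned above, the sketch below computes a demographic-parity gap, the difference in approval rates between groups, over a hypothetical decision log from a lending system. Real audits use richer fairness metrics and real data; the numbers, group labels, and threshold here are invented for illustration.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log from an AI lending system (group labels and counts invented).
log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
       + [("group_b", True)] * 55 + [("group_b", False)] * 45)

THRESHOLD = 0.10  # illustrative tolerance; in practice set by policy, not by code
gap = demographic_parity_gap(log)
print(f"parity gap = {gap:.2f} ->", "FAIL" if gap > THRESHOLD else "PASS")
```

Passing such a check is evidence of fairness, not proof of it, which is why the framework treats auditing as a recurring step in an iterative process rather than a one-off gate.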
Importantly, this framework should rest on the pragmatic understanding that the goal is not to build a conscious Kantian moral agent, which, as Nath and Sah (2019) argue, may be a philosophical impossibility for an artificial entity. The focus is on engineering a functional simulacrum of ethical deliberation, in the spirit of Powers (2006), whose outputs are trustworthy and beneficial. This reframing is liberating: it lets us concentrate on building systems that act ethically without being paralysed by the debate over whether they can truly be ethical.
In conclusion, the project of codifying morality is crucial for bridging the gap between the realm of human values and the realm of computational power. It is a complex but necessary undertaking. The framework proposed here offers a pathway forward, translating abstract ethical principles into a workable engineering blueprint. By embracing a layered approach to ethical simulation, grounded in a clear-eyed understanding of both human and artificial intelligence, we can strive to create a future in which AI not only performs tasks efficiently but does so in a manner that is just, trustworthy, and aligned with our most fundamental values.
The answer to “how to codify morality” is therefore not a single algorithm, but a commitment to a continuous process of value alignment. This process will define the character of our technological civilisation for generations to come.
· Asimov, I. (1950). I, Robot. Gnome Press.
· Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.
· Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó hÉigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., Lyle, C., Crootof, R., Evans, O., Page, M., Bryson, J., Yampolskiy, R., & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. https://maliciousaireport.com/
· European Union (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689
· Forsyth, D. R., & O'Boyle, E. H. (2011). Ethics and the code of conduct: A systematic analysis. Journal of Leadership & Organizational Studies, 18(4), 483–493. https://doi.org/10.1177/1548051811416511
· Laudon, K. C. (2001). Ethics in the information age. In G. D. Garson (Ed.), Handbook of public information systems (pp. 47–62). Marcel Dekker.
· McGrath, R., & Gupta, A. (2018). The ethical algorithm: The practical challenges of building ethical robots. Technology and Society, 55, 93–100. https://doi.org/10.1016/j.techsoc.2018.07.002
· Nath, R., & Sah, S. (2019). The artificial moral agent: A systematic review of the literature. Journal of Experimental & Theoretical Artificial Intelligence, 31(4), 563–579. https://doi.org/10.1080/0952813X.2018.1555850
· Powers, T. M. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51. https://doi.org/10.1109/MIS.2006.77
· Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.