Artificial Intelligence (AI) is increasingly embedded in decision-making
processes across industries, including financial marketing, where it
influences consumer behaviour, risk assessment, and strategic
communication. However, the rapid advancement of AI systems has raised
profound ethical concerns, particularly regarding fairness,
transparency, accountability, and human autonomy. In my research, I have
examined these ethical challenges and the broader question of whether it
is truly possible to create AI systems that operate ethically.
While some scholars remain sceptical about the feasibility of ethical AI,
others contend that it is both achievable and crucial for sustainable
technological progress. This article examines the potential to develop
ethical AI by integrating theoretical insights, governance structures, and
practical design strategies. Therefore, I propose the following research
question for this paper:
To what extent is it possible to create ethical AI systems in financial
marketing, and what conditions are necessary to ensure their ethical
alignment with human values?
The analysis that follows addresses this question, and the conclusion returns
to it with a direct answer.
The Foundations of Ethical AI
The concept of ethical AI is grounded in a set of widely recognised
principles, including fairness, accountability, transparency, privacy,
safety, and human wellbeing (Schelling & Rubenstein, 2021). These
principles provide a normative foundation for evaluating AI systems, yet
they often remain abstract and difficult to implement in practice.
Eitel-Porter (2021) situates ethical AI within a broader responsible
business agenda, emphasising that organisations are increasingly expected to
align technological innovation with societal values.
This shift reflects growing public awareness and regulatory pressure,
particularly in sectors such as finance, where AI-driven decisions can
significantly impact individuals’ economic opportunities. However, as Prem
(2023) argues, ethical frameworks alone are insufficient unless they are
operationalised. Ethical AI requires moving beyond theoretical constructs
toward actionable systems that embed ethical considerations throughout the
AI lifecycle.
One of the most compelling approaches to ethical AI is the lifecycle model
proposed by Ng et al. (2022), which identifies five key stages: data
creation, data acquisition, model development, model evaluation, and
deployment. This model highlights that ethical considerations must be
integrated at every stage, rather than treated as an afterthought.
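As a purely illustrative sketch (not Ng et al.'s own formalism), the lifecycle model can be read as a checklist in which no stage may be skipped: an ethics review must be recorded at each of the five stages before deployment is permitted. The review mechanism, method names, and reviewer field below are assumptions made for the example.

```python
# Illustrative sketch of lifecycle-wide ethics integration.
# Stage names follow Ng et al. (2022); the checklist mechanism is assumed.

LIFECYCLE_STAGES = [
    "data creation",
    "data acquisition",
    "model development",
    "model evaluation",
    "deployment",
]

class EthicsChecklist:
    """Tracks whether each lifecycle stage has passed an ethics review."""

    def __init__(self):
        self.reviewed = {stage: False for stage in LIFECYCLE_STAGES}

    def sign_off(self, stage, reviewer):
        """Record that a named reviewer has approved a given stage."""
        if stage not in self.reviewed:
            raise ValueError(f"Unknown stage: {stage}")
        self.reviewed[stage] = True

    def ready_to_deploy(self):
        # Ethics must be integrated at every stage, not treated as an
        # afterthought: deployment requires sign-off on all five stages.
        return all(self.reviewed.values())
```

The point of the sketch is structural: an unreviewed stage anywhere in the pipeline blocks deployment, mirroring the article's claim that ethics cannot be bolted on at the end.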
For example, Harwell et al. (2025) demonstrate that ethically sourced and
openly licensed datasets can contribute to more transparent AI systems.
However, they also acknowledge the significant labour and scalability
challenges associated with curating such datasets. This reinforces the idea
that ethical AI is not simply a technical problem but also an organisational
and economic one.
Epstein et al. (2020) further emphasise the need for comprehensive standards
and guidelines that translate ethical principles into practical
requirements. These include best practices in coding, infrastructure design,
and organisational governance. Similarly, Blackman (2020) argues that
ethical AI requires sector-specific frameworks, as risks vary significantly
across industries such as healthcare, finance, and criminal justice.
In my research, I found that ethical AI cannot be achieved through isolated
interventions. Instead, it requires a systemic approach that integrates
ethical considerations into the entire socio-technical ecosystem.
The Challenge of Aligning AI with Human Values
The difficulty of aligning AI with human values is underscored by Bengio
(2025), who calls for the development of "honest AI" systems that are
transparent and aligned with human intentions.
He warns that as AI systems become more sophisticated, they may exhibit
behaviours such as deception or resistance to shutdown, highlighting the
urgent need for robust safety mechanisms.
Gebru et al. (2022) also stress the importance of rigorous safety research
and transparent design practices. They argue that ethical AI is achievable if
developers prioritise accountability and inclusivity, particularly by
addressing biases embedded in training data and model architectures.
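One concrete form such bias auditing can take is a simple statistical check on model outcomes across groups. The sketch below, a hypothetical example rather than a method from any cited source, computes the demographic parity gap: the difference in positive-outcome rates between applicant groups. The loan-approval framing, group labels, and data are all illustrative assumptions.

```python
# Hypothetical bias check: demographic parity gap between groups.
# All data and labels below are illustrative, not from the cited sources.

def demographic_parity_gap(outcomes, groups, positive=1):
    """Absolute difference between the highest and lowest
    positive-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

# Example: loan approvals (1 = approved) for two applicant groups.
# Group A is approved 3/4 of the time, group B only 1/4.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(outcomes, groups):.2f}")
```

In a financial-marketing context, a large gap of this kind would flag a model for further investigation, though a single metric cannot by itself establish or rule out unfairness.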
However, the complexity of human morality raises a fundamental question: can
machines truly act ethically, or can they only simulate ethical behaviour?
Artificial Moral Agents and Their Limitations
Whether machines can possess genuine moral agency remains contested; what
the literature does make clear is that creating ethical AI is not solely a
technical challenge. It also requires robust governance and societal
engagement, with attention to systemic impacts rather than isolated
principles. Corporate-led initiatives,
while important, are insufficient without broader policy frameworks and
public accountability. This aligns with Rubenstein’s (2021) observation that
ethical AI is often driven by institutional motivations, including
regulatory compliance and reputational concerns. While these motivations can
promote ethical practices, they may also lead to superficial or performative
approaches if not accompanied by genuine commitment.
In my research, I argue that ethical AI requires a multi-level governance
approach that includes:
- Regulatory frameworks to ensure accountability and enforce ethical standards
- Organisational governance to translate principles into operational practices
- Public engagement to reflect diverse societal values
- Interdisciplinary collaboration to integrate technical, legal, and ethical expertise
Without these elements, ethical AI risks becoming a rhetorical concept
rather than a practical reality.
Conclusion
This article has explored the extent to which ethical AI can be created and
the conditions necessary for its development. In answer to the research
question, I argue that it is possible to create ethical AI systems, but only
in a limited and conditional sense. AI can be designed to behave ethically by
embedding principles
such as fairness, transparency, and accountability throughout its lifecycle.
However, AI cannot possess genuine moral agency, as it lacks consciousness
and the capacity for ethical understanding.
To ensure ethical alignment with human values, several conditions must be
met:
1. Lifecycle integration of ethics, from data collection to deployment (Ng et al., 2022)
2. Robust governance frameworks that translate principles into practice (Blackman, 2020)
3. Transparent and inclusive design processes that address bias and accountability (Gebru, 2022)
4. Continuous safety research and oversight to manage emerging risks (Bengio, 2025)
5. Human-centred approaches that prioritise dignity, autonomy, and societal wellbeing (Faggin, 2021)
Ultimately, ethical AI is not a fixed outcome but an ongoing process that
requires continuous reflection, adaptation, and collaboration. In my
research, I conclude that the question is not whether AI can be perfectly
ethical, but whether we, as designers, regulators, and users, are willing to
take responsibility for shaping AI systems that reflect our collective
values.
References
Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., Choi, Y., Fox, P., Garfinkel, B., Goldfarb, D. and Heidari, H., 2025. International AI safety report. arXiv preprint arXiv:2501.17805.
Blackman, R., 2020. A practical guide to building ethical AI. Harvard Business Review, 15, p.15.
Rubenstein, D.S., 2021. Acquiring ethical AI. Florida Law Review, 73(4), p.747. Available at: https://scholarship.law.ufl.edu/flr/vol73/iss4/2
Eitel-Porter, R., 2021. Beyond the promise: implementing ethical AI. AI and Ethics, 1(1), pp.73-80.
Epstein, Z., Levine, S., Rand, D.G. and Rahwan, I., 2020. Who gets credit for AI-generated art?. iScience, 23(9).
Faggin, F., 2021. Silicon: From the invention of the microprocessor to the new science of consciousness.
Yu, F.-Y., Chan, T.-W., Wong, S.L. and So, H.-J., 2024, November. ‘Global Harwell’ in an examination-driven education system and an excellence-pursuing society: Possible? How? Better with digital technologies?. In International Conference on Computers in Education.
Gebru, B., Zeleke, L., Blankson, D., Nabil, M., Nateghi, S., Homaifar, A. and Tunstel, E., 2022. A review on human–machine trust evaluation: Human-centric and machine-centric perspectives. IEEE Transactions on Human-Machine Systems, 52(5), pp.952-962.
Kastrup, B., 2021. Science ideated: The fall of matter and the contours of the next mainstream scientific worldview. Simon and Schuster.
Lau, T. and Leimer, B., 2021. Beyond good: How technology is leading a purpose-driven business revolution. Kogan Page Publishers.
Ng, D.T.K., Leung, J.K.L., Su, M.J., Yim, I.H.Y., Qiao, M.S. and Chu, S.K.W., 2022. AI literacy for all. In AI literacy in K-16 classrooms (pp. 21-29). Cham: Springer International Publishing.
Prem, E., 2023. From ethical AI frameworks to tools: a review of approaches. AI and Ethics, 3(3), pp.699-716.
Schelling, N. and Rubenstein, L.D., 2021. Elementary teachers’ perceptions of data-driven decision-making. Educational Assessment, Evaluation and Accountability, 33(2), pp.317-344.
Xu, Y., 2025. Aligning AI With Human Values: A Path Towards Trustworthy Machine Learning Systems (Doctoral dissertation, University of Maryland, College Park).
Wallach, W. and Allen, C., 2008. Moral machines: Teaching robots right from wrong. Oxford University Press.
Yilmaz, A., Nacar, M. and Uysal, G., 2024. Ethical use of artificial intelligence applications in early childhood education. Transforming early childhood education: Technology, sustainability, and foundational skills for the 21st century, 175.
Zhang, X., Ferry, J., Hewson, D.W., Collins, G.S., Wiles, M.D., Zhao, Y., Martindale, A.P., Tomaschek, M., Bowness, J.S., GRAITE‐USRA Working Group and Dixon, A.J., 2025. Guidance for reporting artificial intelligence technology evaluations for ultrasound scanning in regional anaesthesia (GRAITE‐USRA): an international multidisciplinary consensus reporting framework. Anaesthesia, 80(12), pp.1528-1539.
