By Jiahuan (Henry) Qian, FRSA, BNU Trainee Programme Leader, FSB Memo House | Article Date: 11/05/2026

 

In recent years, higher education in the UK has found itself navigating a period of considerable uncertainty. Institutions are responding to a complex combination of funding pressures, changing student expectations, employability demands, and rapid technological development. Within this wider context, disciplines and practices grounded in reading, writing, interpretation, ethical reasoning, and critical reflection are sometimes questioned for their immediate practical value.

Among these factors, the rise of AI, particularly generative AI, has given this question renewed urgency. If a machine can produce essays, poems, reports, or even philosophical reflections within seconds, it is understandable that some may begin to ask whether the human capacities traditionally cultivated through education are becoming less valuable. This question matters profoundly, especially for higher education providers like FSB, where education is about helping students grow as leaders, professionals, and most importantly, thinkers.

If anything, the age of AI makes it more urgent, not less, for us to reflect on what it means to be human. We are rapidly (almost dangerously rapidly) entering a historical moment in which efficiency, automation, and optimisation are increasingly celebrated. In such a climate, there is a risk that we begin to value ourselves only in terms of productivity: how fast we can work, how much we can produce, and how easily our tasks can be replicated by a machine. Yet throughout the history of our civilisation, human life has never been reducible to output alone. To be human is not merely to generate information; it is to feel, to interpret, to doubt, to imagine, and to find meaning.

AI can imitate many forms of language, but imitation is not the same as understanding. A machine may produce a moving paragraph on grief, love, hope, or belonging, but it does not grieve, love, hope, or belong. It has no childhood memories, no moral burden, no vulnerability, and no genuine stake in the world it describes. Human beings, by contrast, think and speak from within life itself. Our words matter not merely because they may be well structured or intellectually polished, but because they emerge from experience, memory, responsibility, and relationship. This is precisely why the humanities, arts, and other reflective disciplines remain vital: their value lies not in the content per se (though as students and teachers we usually talk quite a lot about it), but in how we engage with the human condition.

There is also a wider social danger in allowing AI to define the terms by which we understand value. If society begins to privilege only what is measurable, scalable, and immediately profitable, then many human qualities such as empathy, justice, and imagination may gradually be overlooked, and it is precisely these qualities that are the foundations of civilised life. This is why the current moment calls for a deeper defence of, and reflection on, what it means to be human. In the presence of AI, we should not ask only, “What can machines do?” We should also ask, “What should humans remain responsible for?” There are many domains in which efficiency is valuable and AI can be genuinely helpful. Used wisely, it may reduce certain burdens and create more space for creativity and higher-level thought, yet the existence of these advantages should not tempt us into intellectual laziness or moral surrender.

Indeed, one of the greatest ironies of the age of AI is that the more machines can perform tasks associated with intelligence, the more we are reminded that intelligence alone is not enough. The more capable AI becomes, the more important it is that human beings retain ownership of judgement, values, purpose, and meaning. A person is not a search engine, nor a content generator, nor an efficiency system. A person is a moral and social being whose life unfolds through relationships, commitments, failures, hopes, and acts of interpretation. To forget this would be to become technologically advanced but existentially diminished.

Higher education, at its best, has never been only a preparation for the labour market. It is also a preparation for thoughtful participation in human society. This is particularly important in private higher education, where many students pursue study with clear aspirations for career progression and personal transformation. HE establishments should therefore avoid reducing education to the delivery of short-term employability skills alone, but instead should support students to develop critical thinking, ethical judgement, and the capacity to engage with ambiguity.

Perhaps, then, the task of our time is not to compete with AI on its own terms. We are unlikely to win by trying to be faster, more tireless, or more mechanically productive than machines. We should instead deepen qualities that remain distinctly and irreducibly human, and protect educational spaces that cultivate them, even when they do not appear immediately profitable on a spreadsheet.

At FSB, this principle is given practical shape through our AI with Integrity, Honesty and Transparency model (Mehta and Qian, 2026). The IHT model reminds us that responsible AI use is not merely a technical matter, nor simply a question of compliance. It is, more fundamentally, a question of character and intellectual formation. Integrity asks students to ensure that their work reflects genuine learning and understanding. Honesty requires them to acknowledge how AI has supported their thinking, rather than allowing technological assistance to disappear behind the appearance of unaided authorship. Transparency invites them to remain open, accountable, and critical about the processes through which knowledge is produced. In this sense, FSB’s Academic Integrity Guidelines and AI guidance are not intended to discourage thoughtful experimentation with technology but instead seek to cultivate a wiser form of academic practice.

Therefore, to insist on the significance of being human in the world of AI is not to reject technology, nor to romanticise the past. It is to remember that tools should serve human purposes; AI may transform how we work, learn, and communicate, but it should not be allowed to narrow our understanding of what a worthwhile human life is. In fact, this technological moment offers an invitation: to become more reflective about our values, more intentional about our education, and more committed to those forms of knowledge that help us live well with others.

The real question, then, is not whether AI can do more, because it almost certainly will. The more important question is whether, in the midst of its expansion, we will continue to make space for the human capacities that make education meaningful: curiosity, judgement, responsibility, empathy, and critical reflection. For FSB, this means continuing to support students as employable, reflective, and ethically aware graduates who can engage with technology confidently without surrendering their own intellectual agency. If HE establishments, educators, and society as a whole lose sight of this, the crisis we face will not be merely technological, but human in nature.

 

References

Mehta, K. and Qian, J.H. (2026). A Student Guide to Using AI Responsibly at FSB: How to Stay Honest and Transparent – Fairfield School of Business. [online] Fairfield School of Business. Available at: https://fsb.ac.uk/a-student-guide-to-using-ai-responsibly-at-fsb-how-to-stay-honest-and-transparent [Accessed 08 May 2026].