Artificial intelligence has swept through higher education with such force that universities are scrambling to redesign assessments, rewrite policies and rethink what academic integrity even means in a world where Gen-AI tools can generate essays in seconds. Many institutions have reacted by focusing almost exclusively on containment: building “secure” assessments that can resist AI-generated content. But this narrow approach misses the point. The problem isn’t that students have access to Gen-AI; it’s that higher education has long relied on a transactional, performance-driven model, one that Gen-AI now exposes with uncomfortable clarity.
Universities speak loudly about innovation and the exciting potential of AI to enhance research and productivity. Yet, in the same breath, they warn students that using the very same tools may constitute academic misconduct. These contradictory messages reveal not just confusion but a deeper incoherence within the system itself. Higher education is caught between promoting AI for efficiency and policing it in assessments, leaving students simultaneously empowered and criminalised.
At the heart of the conversation is a paradox: while institutions use Gen-AI to automate meeting minutes, streamline administration and boost staff productivity, they caution students that AI threatens the “authenticity” of their learning. Under this inconsistent logic, the acceptability of AI use depends on one’s role: a staff member may rely on AI, while a student risks punishment for the same behaviour.
But the issue runs deeper. The real crisis isn’t cheating — it’s the performance-centric nature of learning itself. If students turn to AI to complete essays, perhaps the assignments were never designed to foster intellectual engagement in the first place. Gen-AI reveals the fragility of a system that treats knowledge as something to be delivered, reproduced and graded rather than created, questioned and explored.
This is particularly concerning when we consider how Gen-AI is marketed: as a tool for personalisation, creativity and freedom. But this freedom often masks a more troubling reality. The efficiency that Gen-AI promises tempts students to outsource the hardest part of learning: the thinking itself. Instead of nurturing autonomy, Gen-AI risks reinforcing what Paulo Freire called the “banking model” of education, where information is deposited into passive learners rather than co-constructed through dialogue and critical inquiry.
A major driver of this problem is the widespread misunderstanding of critical thinking. Too often, institutions equate criticality with problem-solving rather than problem-posing. Gen-AI, however, demands deeper questions: Who does it empower? Who does it marginalise? And how might it shape our understanding of knowledge, creativity and agency? If our only response is to “secure” assessments, we ignore the profound intellectual and social implications of this technology.
Universities’ instinct to increase surveillance — plagiarism detectors, AI-proof exams, proctoring tools — reflects a broader shift toward risk management and institutional control. Yet these measures erode trust, one of the essential conditions for meaningful learning. Students feel watched rather than supported, and teachers become gatekeepers rather than collaborators.
The alternative is not to abandon rigour but to reimagine the curriculum itself. Instead of designing assessments that merely resist AI, institutions can create learning experiences grounded in creativity, collaboration, process and real-world relevance. When students are invited to work together, grapple with uncertainty and situate their knowledge in authentic contexts, such learning becomes inherently resistant to automation because it engages the human elements AI cannot replicate.
Gen-AI also highlights the undervaluation of teaching. If content can be generated instantly, what is the purpose of the classroom? The answer is simple: education is not information transmission. It is an intellectual encounter — a shared engagement with complexity, ambiguity and discovery. Gen-AI cannot reproduce the emotional and cognitive dynamics of a vibrant classroom discussion, nor can it replace the sense of meaning students experience when they create something original.
More importantly, reclaiming learning as a relational practice means resisting the logic of efficiency and embracing the messy, human process of growth. This involves redesigning curricula to centre curiosity, dialogue and lived experience; investing in staff development; and empowering students to take active roles in their own and others’ learning.
What higher education needs now is a pedagogy of solidarity — one that positions students as engaged citizens rather than potential rule-breakers, and emphasises trust, care, responsibility and intellectual courage. Instead of treating Gen-AI as the enemy, we can approach it as a provocative opportunity to rethink what learning truly means.
Conclusion:
Gen-AI has not created a crisis in higher education — it has revealed one. Institutions now face a choice: keep tightening surveillance around assessments or embrace this moment as a catalyst for transformative, human-centred change. If the aim of education is merely credentialing, automation will inevitably take over. But if we aspire to cultivate critical, creative and autonomous thinkers, then now is the time to reclaim education’s deeper purpose and reimagine the curriculum for an age where machines can produce content — but cannot replace meaning.