As artificial intelligence rapidly reshapes the academic landscape, universities around the world are confronting a critical new challenge: AI sovereignty. While digital sovereignty has long been a concern—covering data protection, IT compliance and institutional independence—the emergence of advanced AI systems introduces deeper risks tied to intellectual freedom, political influence and the integrity of scientific thinking.
Today’s universities depend heavily on AI tools, especially large language models (LLMs), to support learning, research and administrative work. Students and academics increasingly turn to chatbots for help interpreting data, structuring arguments or exploring complex questions. But despite their broad capabilities, these systems are far from neutral. They reflect biases embedded in their training data and fine-tuning processes, making them powerful shapers of thought—sometimes in ways users don’t even notice.
One of the most significant risks is political and ideological bias in AI models. These systems can be subtly or overtly adjusted to align with a particular worldview, and the real-world examples are alarming. In the U.S., the Grok chatbot abruptly began injecting narratives tied to its owner’s political interests, such as unprompted references to so-called “white genocide” in South Africa. In China, the DeepSeek model systematically refuses to provide information about the Tiananmen Square massacre. These cases illustrate how the owners of powerful technology, whether governments or corporations, can quietly shape the output of AI systems in ways that influence millions of users.
This is particularly dangerous for academic communities, where independent thinking is the core of scientific advancement. Research already shows that when chatbots present biased perspectives, users unknowingly adopt these positions in their own writing and thinking. In other words, AI systems can steer opinions without users realizing it. For universities, this threatens the foundational principles of free inquiry and objective knowledge production.
To safeguard academic freedom, universities must take active steps toward building AI sovereignty. The first step is creating independent AI infrastructure. Instead of relying solely on commercial platforms, universities can develop their own interfaces, host open-source large language models and build systems equipped with research-oriented features like retrieval-augmented generation, which grounds a model’s answers in verifiable source documents to enhance factual accuracy. Across Europe, institutions are already collaborating to host customizable models in public data centers, ensuring that everyone within the academic ecosystem can benefit from transparent and controllable AI tools.
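To make the retrieval-augmented generation idea concrete, here is a minimal sketch of the pattern a university-hosted system might follow: retrieve the most relevant documents from a locally controlled corpus, prepend them to the prompt, and query a self-hosted open-source model. Everything here is illustrative rather than a reference implementation: the endpoint URL, the JSON request and response schema, and the toy word-overlap retriever are all assumptions, and a production deployment would use an embedding-based vector index behind a proper model server.

```python
# Minimal RAG sketch for a self-hosted university LLM (illustrative assumptions
# throughout: the endpoint, its JSON schema and the corpus are placeholders).
import json
import urllib.request

LLM_URL = "http://localhost:8000/generate"  # hypothetical self-hosted endpoint

# Placeholder institutional documents; in practice this would be a real index.
CORPUS = [
    "University policy: AI-generated text must be disclosed in all submissions.",
    "Retrieval-augmented generation grounds model answers in retrieved sources.",
    "Library guide: verify the provenance of claims produced by chatbots.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question and return the top k.
    A deliberately naive stand-in for embedding-based vector search."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    """Ground the prompt in retrieved passages, then query the hosted model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, CORPUS))
    prompt = (
        "Answer using ONLY the sources below; say so if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        LLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]  # response schema is an assumption

if __name__ == "__main__":
    print(answer("What must students disclose about AI-generated text?"))
```

The point of the pattern is that the institution controls every layer: the corpus, the retriever and the model weights. Answers can then be audited against known sources rather than trusted on faith, which is precisely the transparency a commercial black-box service cannot guarantee.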
But technological independence alone is not enough. Universities must also cultivate AI literacy across their communities. This involves teaching the fundamentals of AI systems, including their capabilities, limitations, ethical implications and legal frameworks. The European Union’s AI Act now requires organizations that deploy AI systems to ensure sufficient AI literacy among their staff, and many universities are introducing workshops and self-paced courses as a result. Importantly, these educational initiatives must focus on foundational principles rather than specific tools; otherwise they become outdated as fast as AI evolves.
Additionally, true AI sovereignty requires strengthening traditional scientific skills. Critical thinking, research methodology and analytical reasoning remain essential tools for navigating a world where AI plays an increasingly central role. These competencies help students and researchers identify when an AI-generated answer is biased, incomplete or manipulative.
Finally, universities must develop clear institutional strategies that define how AI will be integrated into academic work. This includes establishing governance structures, addressing legal responsibilities and creating transparent guidelines for AI use in teaching, research and administration.
Conclusion:
AI will remain a permanent feature of universities, but without careful oversight, it poses real risks to intellectual independence. To defend the principles of free inquiry, universities must build a strong foundation of technical autonomy, comprehensive AI literacy and strategic governance. Only then can academic institutions truly achieve AI sovereignty—and protect the freedom of thought upon which scientific progress depends.