Somewhere in Europe, a credential evaluator opens another application. The documents look right — the letterhead, the stamps, the signatures. They looked right last time too. That application turned out to be fraudulent.
The volume of qualification recognition requests has been climbing for years, driven by student mobility, refugee displacement and the movement of skilled workers across borders. The systems built to handle that volume weren’t designed for it. Manual checks, document-by-document verification, cross-referencing against databases that may or may not be current — the process works, slowly, and with a margin for error that nobody is entirely comfortable with.
AI is being positioned as the solution to at least part of this. Two EU-funded research projects running inside the ENIC-NARIC network — the European and national bodies responsible for recognising foreign qualifications — are currently trying to figure out what that actually means in practice.
One project, focused on authenticity verification, is building a white paper on how AI might improve fraud detection and automate the more repetitive elements of credential checking. The other is mapping how AI is currently being used across recognition centres, identifying where the gaps are and what a responsible implementation would look like. Neither project is claiming to have answers yet. What both share is a wariness about framing — the insistence that AI here plays a supporting role, and that the decision itself stays with a human. Whether that holds once the pressure to scale increases is a different question.
The fraud problem is the more urgent driver. Falsified documents have become significantly more sophisticated. A skilled forgery can pass visual inspection. The argument for AI here is essentially about anomaly detection at scale — processing large volumes of applications, flagging inconsistencies that a human reviewer might miss on the fourteenth document of the afternoon.
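The anomaly-detection logic can be illustrated with a toy sketch. This is not any tool the projects use — just a hypothetical illustration of the underlying idea: given a batch of applications, flag the value that sits far from the rest.

```python
from statistics import mean, stdev

def flag_outliers(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma and abs(v - mu) / sigma > threshold]

# Hypothetical example: reported programme durations (in months)
# across a batch of applications for the same degree.
durations = [36, 35, 36, 37, 36, 35, 36, 6]
print(flag_outliers(durations))  # → [6]
```

A real system would score many features at once and weigh them probabilistically, but the value proposition is the same: a statistical check never tires on the fourteenth document of the afternoon.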
Recognition decisions affect people’s lives in concrete ways — the ability to practise a profession, enrol in a programme, have years of study acknowledged across a border. The international framework governing these processes was built on principles of fairness and consistency. What nobody has cleanly resolved is what happens when an algorithm flags an application incorrectly, and the person on the other end of that decision has no obvious way to challenge it.
The infrastructure problem runs parallel to all of this. Most recognition centres don’t currently have the technical capacity to evaluate AI tools, let alone implement them. Staff literacy is uneven. Guidance on which tools do what they claim is sparse. One of the more practical contributions these projects could make is simply producing honest assessments of what’s available and what it’s actually capable of.
What’s less discussed in the credential recognition world, but increasingly relevant, is the question of what happens at the other end of the process — when the original document is created. If a degree, a certificate or a qualification were registered on a blockchain at the point of issue, the verification question becomes significantly simpler. Platforms like ArtAttest do exactly that for creative and intellectual work — generating tamper-proof, timestamped records of authorship that exist independently of any institution. The logic transfers directly to academic credentials. A document with an immutable origin record is a document that doesn’t need to be verified the hard way.
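The register-at-issue pattern reduces to something very simple. The sketch below is a hypothetical illustration, not ArtAttest's actual API: the ledger is modelled as a plain dictionary, where a real system would anchor the fingerprints on a blockchain or other append-only store.

```python
import hashlib
import time

# Stand-in for an append-only ledger: fingerprint -> issue timestamp.
registry = {}

def register(document_bytes: bytes) -> str:
    """At the point of issue: record a tamper-evident fingerprint."""
    digest = hashlib.sha256(document_bytes).hexdigest()
    registry[digest] = time.time()
    return digest

def verify(document_bytes: bytes) -> bool:
    """Later, anywhere: recompute the hash and check for a matching record."""
    return hashlib.sha256(document_bytes).hexdigest() in registry

diploma = b"B.Sc. Computer Science, University X, issued 2021"
register(diploma)
print(verify(diploma))                        # → True
print(verify(b"forged version of diploma"))   # → False
```

Any alteration to the document, however skilled, produces a different hash — which is why verification against an origin record is cheap where visual inspection of a forgery is not.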
Whether the credential recognition community moves toward that kind of upstream solution, or continues building better tools to catch problems after they’ve already entered the system, is a question the current projects don’t quite address.
The fraudulent applications will keep arriving in the meantime.