The Branding Coup That Shaped a Generation of Students

Halfway through the semester, a student puts his pen down and says the thing out loud. If machines are beating us at thinking, he says, why are we here? Not aggressive. Just genuinely lost — which is a different problem entirely, and a harder one to answer from a podium.

The problem is that the question itself is built on a category error — one that universities have been quietly complicit in spreading for the better part of a decade.

John McCarthy coined the phrase “artificial intelligence” at Dartmouth in 1956 to describe an aspiration. A research agenda. Something researchers hoped to build one day, not something that already existed in a useful form. Seventy years on, the term is still attached to systems that work through statistical inference at massive scale — pattern matching, probability estimation, output optimisation. Useful. Genuinely remarkable in places. Not intelligence in any philosophically meaningful sense.

Buolamwini and Gebru’s work on facial recognition didn’t land as an abstract finding. It named something specific: systems trained on skewed datasets failed darker-skinned women at rates that would have caused a scandal in any other professional context. Nobody had programmed in the bias. Nobody needed to. The data carried it, and the model learned it faithfully. The word “intelligence” had made it harder to ask what the system was actually doing.

That confusion runs deep now. Algorithmic recommendations feel authoritative. Automated decisions carry an aura of impartiality that handwritten memos never had. The language of intelligence — borrowed from a conference that was really about hope — amplifies this effect. It suggests the machine knows something. Understands something. That its outputs deserve a different kind of deference than a spreadsheet formula would get.

They don’t.

The transformer architecture, introduced by Ashish Vaswani and his colleagues in 2017, underpins most large language models. It is an extraordinarily sophisticated tool for estimating which word is statistically likely to follow another, applied across datasets so vast the outputs feel uncanny. Scale creates the illusion of comprehension. A calculator that performs arithmetic faster than any human gives us no reason to credit it with mathematical understanding; and despite the fluent paragraphs these systems produce on demand, that is closer to what is actually happening inside them.
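To make "estimating which word is statistically likely to follow another" concrete, here is a deliberately crude sketch: a bigram model that counts word pairs and predicts the most frequent successor. Real transformers use learned attention over vastly longer contexts, so this is an illustration of the statistical principle only, not of the architecture; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word and its probability."""
    counts = follows[word.lower()]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# A toy corpus: "the" is followed by "cat" twice and "mat" once.
model = train_bigrams("the cat sat on the mat and the cat slept")
word, p = predict_next(model, "the")
# word == "cat", p == 2/3
```

Nothing here knows what a cat is; the model simply reports which continuation occurred most often in its training data. Scaling the same idea up by many orders of magnitude is what produces outputs fluent enough to be mistaken for understanding.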

What makes this period genuinely strange is that the branding has been extraordinarily effective precisely because it serves so many interests at once. Governments compete to become AI superpowers. Universities race to establish AI institutes, often before they’ve agreed on what AI actually is. Technology companies promote themselves as leaders in artificial intelligence even when their core products rely on statistical modelling that McCarthy’s original group would barely have recognised. The metaphor draws capital and political attention in ways that “large-scale probabilistic inference infrastructure” simply wouldn’t. Understandably. But the metaphor also shapes what students think they’re preparing for.

What universities are struggling with is partly a governance question dressed up as a curriculum question. Probability theory clearly matters — students who can’t read a model’s assumptions will be badly positioned in almost any professional context within ten years. Algorithm design. Data science. The technical infrastructure is real and the case for teaching it is straightforward.

But the argument gaining traction in seminars and faculty meetings goes further. Ethics, epistemology, political economy — the intellectual tools for asking who controls these systems, whose data trained them, what the optimisation target actually serves, and whether the outputs are being trusted in contexts where they shouldn’t be. These are not soft alternatives to technical training. They are the harder questions that technical training alone cannot answer.

Human intelligence isn’t losing a race. The framing of competition was always a bit off. What’s shifting is where judgment sits — increasingly spread across hybrid arrangements where a person and a statistical system work through the same problem together, each contributing what the other can’t. That’s not replacement. It’s a reorganisation that most curricula weren’t designed for.

The student came back after class. Same question, narrower now: if it’s not really intelligence, what do we actually study? The professor’s answer didn’t require much setup. You learn what machines can’t — how knowledge gets made, where it goes wrong, when the output in front of you deserves skepticism rather than deference. The seminar moved on. The question didn’t go anywhere.
