AI Struggles to Write Authentic College Essays, Cornell Study Finds

In an era where artificial intelligence (AI) tools are reshaping education, a new study from Cornell’s Ann S. Bowers College of Computing and Information Science has raised important concerns about relying on ChatGPT and other large language models (LLMs) for college admissions essays. Researchers compared 30,000 human-written essays with those generated by eight leading AI models, including systems developed by OpenAI, Meta, Anthropic, and Mistral, and found that AI-produced texts were strikingly generic, formulaic, and easy to spot.

Admissions essays are meant to showcase individuality, providing a personal narrative that reveals the applicant’s identity, values, and unique journey. According to Rene Kizilcec, associate professor of information science and senior author of the study, AI essays fail to capture that authenticity. “Tools like ChatGPT can give solid feedback on writing and may benefit weaker writers,” he said. “But when asked to produce a full essay, the result is generic writing that lacks a true voice.”

The study, to be presented at the 2025 Conference on Language Modeling in Montreal, revealed that AI systems often simply repeat keywords from prompts and present details in a structured but unnatural way. For instance, when asked to write in the voice of a specific applicant, such as an Asian student from California, ChatGPT produced repetitive, stereotype-driven narratives rather than a nuanced, personal account.

Researchers also noted that when AI was asked to simulate highly personal identities—such as a biracial student growing up in Morocco—it often sounded even less human. The essays relied on clichés and sweeping generalizations instead of rich, reflective detail. This pattern not only reduces essay quality but also makes AI-written texts easily detectable. When researchers trained an AI to differentiate between human and AI-written essays, it succeeded with near-perfect accuracy, suggesting universities can reliably identify students attempting to pass off machine-generated work as their own.
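The detection step described above, training a classifier to separate human from AI-written essays, can be illustrated with a toy sketch. This is not the study's actual method; it uses two simple lexical signals the article mentions (vocabulary diversity and keyword repetition) with a nearest-centroid rule, and the example essays are invented for illustration.

```python
from collections import Counter
import math

def features(text):
    """Two simple lexical features: type-token ratio (vocabulary
    diversity) and the relative frequency of the most repeated word."""
    words = text.lower().split()
    ttr = len(set(words)) / len(words)
    top_freq = Counter(words).most_common(1)[0][1] / len(words)
    return (ttr, top_freq)

def centroid(vectors):
    """Average feature vector for one class."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(2))

def train(human_texts, ai_texts):
    """Nearest-centroid classifier: one centroid per class in feature space."""
    return {
        "human": centroid([features(t) for t in human_texts]),
        "ai": centroid([features(t) for t in ai_texts]),
    }

def predict(model, text):
    """Assign the label whose centroid is closest to the text's features."""
    f = features(text)
    return min(model, key=lambda label: math.dist(f, model[label]))

# Invented toy examples: varied, concrete human prose vs. repetitive AI-style prose.
human = [
    "the smell of cardamom in my grandmother's kitchen still shapes how i think about home",
    "i failed my first robotics match and spent a winter rebuilding the drivetrain alone",
]
ai = [
    "my journey taught me resilience my journey taught me growth my journey taught me purpose",
    "this experience shaped my values this experience shaped my character this experience shaped my goals",
]

model = train(human, ai)
print(predict(model, "i love learning i love growing i love helping"))  # → ai
```

On data this cleanly separated the rule looks trivial; the study's near-perfect accuracy on 30,000 real essays is the substantive finding, and real detectors use far richer features.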

The implications are significant. College admissions committees place strong emphasis on essays as a way to evaluate authenticity and self-reflection beyond grades and test scores. Using AI to draft essays could backfire, making an application appear inauthentic. While some schools permit limited AI use for brainstorming or grammar correction, the consensus among researchers is clear: students should rely on their own voice for the first draft and use AI sparingly, if at all.

Conclusion: The Cornell study underscores a growing truth—AI is not a substitute for human reflection and authenticity. While tools like ChatGPT can refine grammar and style, they cannot replicate the genuine self-expression admissions officers are seeking. For students, the safest path remains to embrace their individuality and craft essays that reflect their personal journey, not a machine’s approximation of it.
