In an era where artificial intelligence (AI) tools are reshaping education, a new study from Cornell’s Ann S. Bowers College of Computing and Information Science has raised important concerns about relying on ChatGPT and other large language models (LLMs) for college admissions essays. Researchers compared 30,000 human-written essays with essays generated by eight leading AI models, including systems from OpenAI, Meta, Anthropic, and Mistral, and found the AI-produced texts strikingly generic, formulaic, and easy to spot.
Admissions essays are meant to showcase individuality, providing a personal narrative that reveals the applicant’s identity, values, and unique journey. According to Rene Kizilcec, associate professor of information science and senior author of the study, AI essays fail to capture that authenticity. “Tools like ChatGPT can give solid feedback on writing and may benefit weaker writers,” he said. “But when asked to produce a full essay, the result is generic writing that lacks a true voice.”
The study, to be presented at the 2025 Conference on Language Modeling in Montreal, found that AI systems often simply repeat keywords from the prompt and present details in a structured but unnatural way. For instance, when asked to write in the voice of a specific applicant, such as an Asian student from California, ChatGPT produced repetitive, stereotype-driven narratives rather than a nuanced personal account.
Researchers also noted that when AI was asked to simulate highly personal identities, such as a biracial student growing up in Morocco, it often sounded even less human. The essays relied on clichés and sweeping generalizations instead of rich, reflective detail. This pattern not only reduces essay quality but also makes AI-written texts easy to detect. When the researchers trained a classifier to distinguish human-written from AI-written essays, it succeeded with near-perfect accuracy, suggesting universities could reliably identify students attempting to pass off machine-generated work as their own.
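The article does not describe how the researchers' classifier was built. As a purely illustrative sketch (not the study's method, and using an invented class name and toy data), a simple bag-of-words naive Bayes model shows the basic idea: given labeled examples, formulaic machine phrasing and varied human phrasing become statistically separable.

```python
# Illustrative sketch only: a tiny naive Bayes text classifier built from the
# standard library. The class name, toy essays, and labels are invented for
# demonstration; the study's actual detector is not described in the article.
from collections import Counter
import math

def tokenize(text):
    """Lowercase and strip common punctuation from whitespace-split tokens."""
    return [w.strip(".,!?;:").lower() for w in text.split() if w.strip(".,!?;:")]

class NaiveBayesEssayClassifier:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def fit(self, docs, labels):
        """Count word occurrences per label over the training documents."""
        for text, label in zip(docs, labels):
            self.doc_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        """Return the label with the highest log prior + log likelihood,
        using add-one (Laplace) smoothing for unseen words."""
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, -math.inf
        for label, counts in self.word_counts.items():
            score = math.log(self.doc_counts[label] / total_docs)
            total = sum(counts.values())
            for w in tokenize(text):
                score += math.log((counts[w] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data: specific, personal human sentences vs. formulaic AI ones.
clf = NaiveBayesEssayClassifier()
clf.fit(
    ["my grandmother's kitchen smelled of cardamom and rain",
     "i failed the audition and cried in the parking lot",
     "i am passionate about learning and growth",
     "this experience taught me resilience and passion for growth"],
    ["human", "human", "ai", "ai"],
)
print(clf.predict("growth and passion define my journey"))  # ai
```

Even this minimal model picks up on the repeated stock vocabulary ("passion", "growth") that the study found characteristic of machine-generated essays; the researchers' near-perfect accuracy suggests the real signal is far stronger than this toy example.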
The implications are significant. College admissions committees place strong emphasis on essays as a way to evaluate authenticity and self-reflection beyond grades and test scores. Using AI to draft essays could backfire, making an application appear inauthentic. While some schools permit limited AI use for brainstorming or grammar correction, the consensus among researchers is clear: students should rely on their own voice for the first draft and use AI sparingly, if at all.
The Cornell study underscores a growing truth: AI is no substitute for human reflection and authenticity. While tools like ChatGPT can refine grammar and style, they cannot replicate the genuine self-expression admissions officers seek. For students, the safest path remains to embrace their individuality and craft essays that reflect their personal journey, not a machine’s approximation of it.