[Maximilian Milovidov is a freshman at Columbia University and a member of TikTok's Youth Council. He used a large language model to edit this essay for length and a human to edit for content. This piece also appeared in the Body Electric newsletter. Sign up here for a biweekly guide to move more and doomscroll less.]
Last fall on campus, I attended a reading of The New Yorker article "What Happens After A.I. Destroys College Writing?" During the audience discussion that followed, I shared something that drew unexpected laughs: A course I was taking, "Writing AI," might be the only one on campus where artificial intelligence was not prohibited but, rather, required. This "AI-first" class was a living thought experiment asking: What if we taught students to use AI critically, rather than insisting they ignore it or assuming they're using it to cheat?
I don't see our choice as "AI or no AI" any more than past generations could halt the spread of the printing press — that widely decried threat to scholarship. Children born today will never know a world without AI. The majority of U.S. teens already use AI chatbots, and over half turn to them for schoolwork. Students will reach for these tools, whether universities ban them or not.
A prevailing concern is that generative AI encourages people to outsource their thinking to machines, which weakens understanding — a phenomenon known as "cognitive offloading." Through the class, I realized this worry holds weight only if AI is treated as an omniscient oracle. When students are encouraged to experiment with and critique large language models (LLMs), AI becomes an on-demand study partner with benefits and drawbacks. At the very least, it's a sounding board; at best, a viable alternative to a teaching assistant or tutor.
In class, we brought our own ideas and outlines. We fed drafts into a chatbot while documenting its suggestions and then explaining why we accepted or rejected them. We began with our own sparks of inspiration, argument and thought, but learned to prompt chatbots to expose gaps in reasoning or find unseen connections. My professor called this the "friend test": You would ask a friend for feedback on a paper, but you wouldn't make the friend write it.
Research shows that AI can supplement education when used as a collaborator for feedback, iteration or ideation. One study found that students using moderate AI assistance during lectures outperformed both the students using fully automated help and those using minimal support. When used in moderation, AI can improve human cognitive performance.
AI can also level the uneven playing field of academia. For the many students without access to private tutors, chatbots can generate practice questions, mock exams and flash cards; give feedback on a paragraph; or suggest a counterargument. Using "study mode" features, LLMs can nudge students toward answers instead of handing them over, the way a good TA would. A 2025 Harvard University study found that students using an AI tutor achieved learning gains that were more than double those in traditional classrooms, and they felt more engaged doing so.
The fact that these systems are trained on biased, Western-centric data is precisely why students must learn to question them. When we fed drafts into a model, it did not magically return A+ prose. More often, it amplified our weaknesses back at us: Vague claims remained vague, filler language spread and the prose often read as impersonal and corporate.
Sometimes the chatbot's version of an essay was so horrifyingly bland that I became weirdly proud of my own messy and imperfect paragraphs. Those moments taught me more about my own writing process than any closed-book or in-class essay. When anyone can generate a passable paragraph, what distinguishes us is not whether we can produce text, but whether we can think, judge and revise.
My "Writing AI" class sought to explore appropriate use of AI in academia. In practice, though, we learned how to be thoughtful about why we are reaching for a tool, what we are hoping to get from it, and what is given up in the process. It provided a space to openly discuss our relationship with the defining technology of our generation, without shame or fear of punishment — remarkable amid a climate where campuswide "AI shame" routinely drives student use underground.
These skills will matter after graduation. AI may automate entry-level jobs, which makes it all the more crucial that we are taught how to work alongside these systems. We cannot control what kind of job market we join, but we can demand that our education prepares us to wield this technology, not hopelessly endure it.
This essay was written by Maximilian Milovidov and edited by Phoebe Lett.
You can hear more from Milovidov and why he resents being called a member of the "Anxious Generation" in TED Radio Hour's episode "Did social media break a generation — or just change it?"
Copyright 2026 NPR