Reflections on LLMs as a student
On 3+ years of LLM exposure in college
Thank you to Kate Avendano-Woodruff for helping me shape my thoughts and inspiring me to write about this, especially around the broader impact of learning and school systems. She shared with me an old speech of hers which inspired the conclusion of this essay.
It was winter break of 2022, and I had just gotten a foothold on what college life was like. I was a freshman computer science student at Chapman University’s Fowler School of Engineering. Unbeknownst to me, the following years would be significantly different compared to my first; fall 2022 was the calm before a great storm.
I remember casually reading OpenAI’s initial ChatGPT announcement. I had no clue what they were talking about in technical terms, and the examples seemed rudimentary at best. I dismissed the initial hype around the chatbot, thinking that it wasn’t for me.
Yet, after GPT-4 came out that March, I sensed that things were changing. I became part of the initial cohort playing around with it. I quickly made custom versions of the chatbot with tailored prompts. One of them read handwritten notes and generated summaries, while another attempted to work through math problems. I was also part of the Notion AI beta, which I used to generate essay outlines and proofread drafts.
This was when I first discovered that LLMs can’t actually do math, even if they could explain calculus and linear algebra pretty well. I still attempted to get ChatGPT to do math, simply because it was fascinating to watch. Seeing the token-by-token generation of a detailed but incoherent solution was mystifying in its own way. You could imagine the dopamine hit when it actually got something right.
There were few, if any, guidelines on what could be produced by LLMs for schoolwork. Suddenly, every student I knew started using it seriously. It wasn’t like a switch had been flipped, but it feels like that in my memory. Within weeks, LLMs were everywhere. Every time I walked through the Keck Center, I would see laptops with ChatGPT open. Every time I collaborated with another student on an assignment, we would both try plugging questions into the chatbot. I barely remember a time in college without this experience. My graduating class is the first to have had exposure to ChatGPT for almost all four years of college.
In a time of naivety, there was some truly wild imagination about what AI could do. At a Shark Tank night in a local (computer science) club, our group created the presentation of all time. My contribution (if one could even call it that) was the DALL-E generated imagery of AI wingmen: uncanny images of well-dressed men guaranteed to improve your rizz. None of us thought that AI companionship, now a legitimate market in multiple countries, would actually take off.
The early college years were the peak of my LLM enthusiasm, because we had yet to face the consequences first-hand.
Chapman wasn’t the only school embracing this technology; it seemed like suddenly, ChatGPT and friends were everywhere. This isn’t just based on vibes: according to OpenAI, one-third of college students in the U.S. used ChatGPT by February 2025.
While generative AI has struggled in the enterprise, the base product unintentionally accommodated students from the beginning. ChatGPT had always seemed ready to replace websites like Course Hero and Chegg, the latter of which cut half its workforce last year. ChatGPT was faster, cheaper, and more accessible. Where previous solutions still required students to search for answers, ChatGPT completely removed the friction of getting them.
Early on, I thought this was just the new norm: college was going to be a breeze as the tech improved. However, several factors helped shape a more nuanced understanding of LLMs and their consequences.
One belief perpetuated by the industry was that LLMs would continue to scale indefinitely. Gary Marcus’s newsletter, one of my first subscriptions on Substack, completely subverted these expectations. It was on his page, not in a computer science class, that I learned LLM progression would hit a wall; indeed it has.
There were arguments that AI would fully replace coding jobs. Even if the current job market slump is due to a variety of factors, it still felt like a reflection of this idea. We all felt this impact equally when looking for internships, and I think this was all when we were collectively like: “…oh shit.”
Copyright issues over training data became prevalent. The New York Times sued OpenAI and Microsoft in December 2023, following several author lawsuits alleging the same thing: ChatGPT could reproduce copyrighted text, verbatim. Further research confirms this recurring phenomenon for both text and image models.
All the while, the impact of LLMs on academic integrity heightened, both at Chapman and elsewhere. Chapman’s academic integrity committee handled a record-breaking number of cases following the release of ChatGPT. Across the country, LLMs disrupted an already weak K-12 system, becoming part of a toolkit letting students completely opt out of the learning process.
With some awareness of these issues, students continued to use LLMs. I continued to use them, for my classes and projects, as did others.
Vibe coding slowly became the norm for student projects. I figured this would eventually become the case. Out of curiosity, I took an old data structures assignment and prompted ChatGPT to do the entire thing. The results were astoundingly good for a fraction of the time.
I witnessed this transition firsthand as a tutor. For coding assignments, the default response from students was that they had asked ChatGPT first. I would sift through code that looked suspiciously well-done. Each file looked good in isolation, but the files didn’t piece together to form one cohesive program.
Another thing that drove LLM usage was top-down messaging. The well-intentioned advice from Fowler faculty was to build projects to showcase to employers. Nowadays, the easiest way to get there is to vibe code. I get it! It’s tempting to let AI do all the work. In my experience, though, people (and prospective employers) are interested in the technical decisions, which you should make yourself.
In the context of education, some people have equated the invention of the LLM to that of the calculator. The primary difference is that the calculator doesn’t lie. In the case of LLMs, the machine doesn’t just lie: it makes up authoritative bullshit, where there is no notion of truth. It’s just a token predictor. And yet, both are machines, and students would point to the machine and say that it told them to do something. I know I personally had a hard time convincing tutees and group partners when the LLM was just plain wrong.
Everything I’ve mentioned so far involves students using LLMs; what happens when professors get involved in the mix?
At first, the reaction to LLMs from professors and faculty in the Fowler School of Engineering was mixed. To this day, some professors require citing code assistance using something similar to the following contrived example:
#include <string>
#include <fstream>
#include <iostream>

using namespace std;

int main() {
    /* Begin assistance from ChatGPT: How do I read in a file line by line in C++? */
    ifstream file("hello.txt");
    string line;
    while (getline(file, line)) {
        cout << line << endl;
    }
    file.close();
    /* End assistance from ChatGPT */
    return 0;
}
(This example was fully written by me, and not, in fact, generated by ChatGPT)
Those same professors would ban AI assistance on quizzes and exams.
Other professors fully embraced AI. For software engineering at Chapman, there is a separate track of classes focused on software design patterns, testing methodologies, and agile development. For one of my projects, I got full points for submitting a v0-generated design alongside LLM-generated documentation. To be clear: the professor encouraged this, and I was fully transparent about how I used AI to do the assignment. Future assignments were the same, and even the provided instructions and templates had clear tells of being AI-generated.
I did not take this class seriously, and I attribute this to the way that AI use was encouraged. When I got AI-generated emails and assignment instructions from some of my other professors, I felt the same way. The worst offender was a training session for my tutoring job on campus: literally everything was AI-generated, from the slides to the take-home assignment.
Some of these experiences represent the vicious cycle that AI can bring to education: the educator generates assignment details with AI, students’ submissions are AI-generated, and the educator likely reviews and grades submissions with AI. In other words, the educator makes grading decisions based on AI. With how inherent gender and racial biases are in current-generation LLMs, this cycle has the potential to discriminate against women, people of color, and other groups underrepresented in technology. Left unchecked, my experience was a potential disaster waiting to happen.
There is generally more awareness now of generative AI’s consequences and limitations. The disinformation campaigns coming from authoritarian regimes like the PRC. Deepfakes becoming even easier to generate than before. Multiple deaths and suicides linked to chatbots, to the point of having a dedicated Wikipedia page.
It’s hard to pinpoint when the AI “ick” started to take hold in some of my friend groups. Even though people still use AI for help on assignments, I’m sensing a weariness when it comes to AI slop on social media.
I mentioned it earlier, but something I realize now is that AI lets people opt out of caring. It feels disingenuous to consume AI-generated emails, assignment directions, or other pieces of writing because the other person didn’t really write it. This feels like a universal experience among professors receiving fully AI-generated answers. Writing, imagery, and other media forms all constitute thinking, and there is intrinsic value in how much someone thought about the content itself.
Programming is a bit more nuanced. Peter Naur’s Programming as Theory Building encapsulates and justifies one central idea: the theory of a program is equally, if not more, important than the source code itself. The point is to maintain a mental model sophisticated enough to justify design decisions and account for future ones. I can understand the appeal of making LLMs do repetitive tasks while thinking at the design level; that is, the level at which caring matters. In my experience, if I tried to push LLMs beyond this, projects became cluttered and broken. Putting that kind of program out into the real world results in security vulnerabilities.
Of course, there are serious long-term consequences to opting out of learning, which includes lived experiences and struggles. If you’re a student reading this (or anyone, really), there are plenty of reasons to care about lived experiences. They are uniquely yours. Your school can’t take them away from you, nor can some corporation. I wouldn’t let AI take those experiences away, either.


