
On a Tuesday night, the scene in any university library looks much as it did five years ago: the same students hunched over laptops, the same half-eaten sandwiches, the same fluorescent hum. The screens are different. Where there used to be ten browser tabs of JSTOR and Wikipedia, there is frequently a single window, a silent dialogue scrolling between a student and a chatbot that never tires, never judges, and answers in seconds.
The speed of the shift is hard to ignore. Surveys conducted in 2024 found that about two-thirds of students used generative AI. A year later, according to the Higher Education Policy Institute, 92% of UK undergraduates were using it for coursework in some capacity. A leap like that typically takes a decade. This one took eighteen months.
The headline figures miss the more interesting question of what students are actually doing with it. Some do ask the bot to write their essays; most do not. They use it the way an earlier generation used an astute older sibling or a patient tutor. Explain this passage. Summarize this chapter. Why does my argument feel weak? Talk to undergraduates and a pattern emerges: ChatGPT has evolved from a tool into a study companion that happens to be present on every screen.
The data supports this in unexpected places. In September of last year, a cross-institution survey of more than 1,600 newly enrolled international students found that 17% had relied on AI during their university search, before selecting a course. Of those, 96% said the advice was as good as or better than what they received from agents, pamphlets, and official websites. Adoption was heavily skewed by region: nearly 30% of Filipino and South Korean students were comfortable letting a chatbot help with one of the most important decisions of their lives.
Researchers have tried to measure whether any of this helps. A widely cited meta-analysis of 51 studies, published in Humanities and Social Sciences Communications, reported a large positive effect on learning performance. That paper was withdrawn in April 2026, a sign of how unsettled the evidence is. Results keep arriving faster than anyone can verify them.
Teachers are caught between alarm and resignation. Some professors have returned to handwritten exams in blue books under close supervision. Others have given up entirely and begun designing assignments that assume AI is present. In humanities departments, the concern voiced most often is a real one: the slow grind of close reading, sitting with a difficult sentence until it yields its meaning.
Whether that concern holds up remains to be seen. Calculators did not end mathematics. Wikipedia did not stop research. But those tools could not converse, flatter, or produce confident-sounding paragraphs on demand. This one can. Students seem to believe they are learning faster. Their professors say they are not always learning the same things.
The truth is that we do not yet know what kind of thinkers this generation will produce. Perhaps sharper in their questions. Perhaps less patient with answers. The classroom remains intact. It simply has a new occupant, sitting quietly between each student and each page.
