On A.I.
Remembering why we learn

Let’s talk about AI. I know, who really wants to? I sure don’t. Almost every academic space I’ve found myself in over the last year or more – from the conference panel to the departmental Zoom meeting – has heard the ominous words “… and now in the age of AI” thrown around at one point or another. Aside from the alleged (and it really is unproven) power of AI to radically transform our research lives and our classrooms, one of AI’s most unspoken and unchallenged powers is precisely the silence that descends on the conference attendees or the Zoom room participants when those guttural vowels – A, I – are belched into the professional air. In those moments – when my learned, reflective, compassionate colleagues are unable to speak – I wonder why we ever allowed something as anti-intellectual, anti-human and anti-environmental as AI to make us second-guess ourselves, never mind letting it into the university in the first place.
Of course, it was not our fault. We teachers and researchers were only reacting to the rapidly changing world around us. We began to notice the oddly well-written but substantively empty style of our students’ essays. We read in a mix of fear and awe the law reviews publishing AI-assisted journal articles. We heard from the zealous tech-savvy colleague that actually, there are ways to use AI “responsibly”, if you only give it the right prompts and incorporate it into your courses so that students are aware that if they use it, they should only be using it as a research aid rather than to draft whole chunks of text but that–of course (em-dash!)–we cannot really do anything about it until a *real* case of fraud has been found and even then the models are learning SO FAST that by the time the university establishes a policy it will already be outdated so…
*and breathe*
Despite sounding AI-generated themselves (or because of it), we knew that these long-winded responses only scratched the surface of AI’s effects on our core work of reading, writing, and teaching.
So how do we respond? Precisely as we respond to everything else: we create a committee, we draft a new policy, we bury our heads in the sand as we “wait and see” even as we know we are being outpaced. I do not think I am being cynical in saying there is no way we can compete with AI on its own turf, i.e. in amassing data, producing knowledge, drafting and editing outputs such as essays, articles, and so on. This may seem incredibly pessimistic, but it depends on whom you ask: the scientific empiricist or someone else.
The scientifically-minded expert who really did think that research was about gathering data, testing hypotheses, offering findings and recommendations was already beginning to sweat when they realised how easily data could be manipulated even before “the age of AI”. The university manager who had faith in science’s ability to forge social impact and produce value for economic stakeholders was only ever a few strategic papers away from cutting researchers and teachers out of the process of value-maximisation anyway and jumping straight to the finished product. Students’ experiences of a career rat-race prove so constraining that resorting to AI to draft their assignments is often the only way to keep up and acquire those letters that they can put on their CV, amidst a culture in which employers and teachers alike are more interested in churning out numbers (grades, citations, budget figures) than in inculcating a desire to learn and grow.
But for those who had long ago begun to question the scientific and neoliberal premises of the modern university system, AI might be nothing less than a gift. Because it finally throws all the unquestioned assumptions we had about why we read, why we write, and why we teach and learn into stark relief. Perhaps we might have been better prepared, less concerned, more relaxed, if we had fostered in students the practice of close reading (from Nietzsche to granny’s birthday card message) rather than pandered to the nightmare image of the distracted student through flashy tech which already paved the way for the ‘single-use plastic of the mind’ that is AI. Perhaps ChatGPT would have posed less of an existential crisis for universities had we properly modelled the value of learning for oneself and one’s community – for fostering curiosity, deep personal reflection, humility, the courage to change your mind, the courage to admit you are wrong, or don’t know – rather than turned obsessively to ‘modes of assessment’ and ‘learning goals’ which transformed the wealth of reading materials in our syllabi into sausage meat for the grading mincer. Perhaps listening to the academics who had committed to the humanities (and now, with the huge sinks that are AI data centres, what Rosi Braidotti has called the posthumanities) would have led the AI-zealots to ask what kind of person, what kind of world, AI creates. A joyless, goal-oriented person; an emptied, end-oriented world. Eschatology encoded into everyday life.
For all its capabilities, AI has not yet been able – and perhaps never will be able – to replicate the inspiration that a piece of writing can create in a student, the commitment a teacher brings to a classroom, the creativity that is the unique source of good academic scholarship, or the motivation that will bring a student to join a climate protest. In fact, all these possibilities – which are also what made me want to be an academic – are the antithesis of AI and should be protected if we care about learning at all.

It was beautiful. One thing that I have been thinking about these past months, and that can be added to what you said, is the question of the 'object and core of the text' versus its margins, its silenced noises. What Derrida, a big fan of close reading, was obsessed with was not the core of the text (if such a thing ever existed), but its margins, its repressed sides, its 'slips of the tongue', where you can find the author's ghost, their specter, instead of their narcissistic agency.
An AI-produced text does not have these margins, these errors. It is based on the idea of the 'objectivity' of a text: a text that is supposed to transfer a specific message, and does so with absolute clarity. Whether one wants their texts to carry their specter or just to be filled with their 'self' remains for people to decide.
And all of a sudden, 'self' and AI become allies, and the margins, the ghost, the errors are the ones still able to escape this ugly game/chain of fast production of academic commodity.
Fabulous. My immediate instinct after reading this piece was to shut myself in a room with nary a digital device in sight and engage in the pleasure of close reading an Anne Carson essay, followed by scribbling a three line reflection. May we find ways of preserving this joy for our students (and, you know, ourselves).