A warning to the young: just say no to AI
- July 9, 2025
- Aaron MacLean
- Themes: Culture, Technology
The substitution of Large Language Models for genuine thinking is a generational threat. At stake is no less than the life of the mind.
I have a warning for you. There is a conspiracy afoot in the land, targeting all of us. The computers in our pockets and the screens all around us have for years paired incredible access to all the world’s information with increasingly ruthless attacks on our capacity for focus, or for what some call ‘deep work’. That’s old news. We all fight this battle every day and it’s important to develop techniques to win it.
But there’s something new under the sun that is far more destructive – and especially for you, the young, who are still in the thick of education, perfecting the ability to reason and really (shocking as it may seem) just at the start of a journey of serious reading and writing that will ultimately reveal, in ten years or so, the questions you ought to be asking.
About ten years after that, you’ll begin to have some tentative and decent answers to those questions, the implications and consequences of which you’ll then spend the rest of your life working out. (Or you could work faster than me, I suppose!)
Of course, I’m talking about AI – specifically, LLMs, or Large Language Models, and the ways in which students use them. The situation overall is serious like a heart attack – or, maybe more appropriately, like a stroke, because the threat is to your mind. And it is a deadly threat. The life of your mind is at stake.
Let me explain.
One of the most depressing conversations I’ve had in the last year was with a former colleague of mine at the United States Naval Academy who now occupies a position of real responsibility at that school. This person is a brilliant, cultured, well-meaning patriot who only wants the best for the young officers in training there. I was genuinely shocked when this person told me that humanities departments at Annapolis would be incorporating the use of AI into class writing. After all, the students were all using AI for assignments anyway, and would be ‘writing’ this way when they got out into the world, so the practice may as well be brought into the light. The students should, as it were, train as they fight.
To this, I responded that we should just save time: the students could use AI to write the papers, and then the faculty could use it to grade them, and everyone could knock off to the bar early.
Slightly more seriously, I said: I think I must have misunderstood the purpose of student papers when I was teaching at the Naval Academy. I didn’t think the goal of classroom writing was for students to produce the best possible papers using any available tool. If that was our goal, well, we were failing, spectacularly. I thought the goal was for students to practise writing – and for the faculty to read the papers (sorry to break it to you, but this is hardly any teacher’s favourite chore) so that we could give advice and some accountability. The goal was for the students to become better writers, in time.
Why did we expend all that effort? Why does writing matter so much?
There is, of course, a practical and superficial argument, that persuasive writing is important for your career. I recently heard a senior retired officer regale a classroom with tales of his bureaucratic victories securing resources within the Pentagon and point out that he basically won – and here I am quoting directly – because he ‘wrote a better essay’ than his opponents. This kind of thing matters. Clear and persuasive writing, in my experience, is essential to career success.
Of course, you might respond: well, if I can get the AI to write clearly on my behalf, doesn’t that get the job done just as well? Indeed, if I don’t think I’m the strongest writer in the first place, wouldn’t it be an enormous advantage? And here, folks, lies the road to catastrophe. I genuinely wish I were kidding.
An old professor of mine, in my freshman year, once said something wise and important in a seminar, after one of my classmates observed: ‘I know what I think, I just can’t get the words down on the page.’ My teacher responded: ‘Well, you don’t actually know what you think, then. The act of writing the thing is the same thing as the thinking of it. If you can’t write it, you haven’t actually thought it.’
Which is to say that writing and reasoning are effectively identical activities – and for many years now, writing has been the way we have taught young people how to reason. To be clear, when it comes to such education we are already in a Bronze Age. If you want to see what the Gold or Silver ages looked like, go look up class syllabi for high schools from one or two centuries ago. You can do it on your phone.
The substitution of AI for what instruction remains threatens to take us from a Bronze Age right through Tin and then on, quick smart, into the abyss. Perhaps our best hope will be that, ultimately, AI will suffer model collapse because, as no one will be producing anything intelligent anymore, AI will have only other AI products to read, and the whole thing will get even dumber until it falls apart. (Model collapse is a real phenomenon, and the engineers trying to sell you these products worry about it a great deal.) I do worry about what it will do to human nature in the interim, though.
Okay. You might say all of this is a little overwrought. After all, technological change has regularly altered human nature, just as it has increased our power and our access to pleasure. So, what’s the big deal? Indeed, technological progress is coeval with Homo sapiens itself. It seems to have been the mastery of fire that improved our digestion and allowed for our larger brains – and, in so doing, set us on the road to being less brawny and strong as a species. Seems like a fair trade to me. What Luddite would want to stay an ape?
Another interesting example is the proliferation of cheap paper and especially the invention of the printing press, before which educated elites were trained in elaborate mental exercises to build vast imagined ‘memory palaces’, so as to store the great quantities of information a learned person would need, as libraries were exceedingly rare and, by modern standards, very poorly stocked. No one has really needed an amazing memory for some time now. Was the trade worth it? I don’t particularly want to go back to a world without books, so I’ll allow it.
What about the ability to record and distribute and broadcast music? That’s an interesting one. When I pick up a novel about a domestic subject written more than a century ago, inevitably at some point in the story a family gathers and makes music together – after all, there wasn’t that much to do. Now, I expect that a lot of that living-room music was pretty bad – especially considering that I could, right now, play Maria Callas or Yo-Yo Ma or whoever over the mic for everyone, and we could listen to some of the most transcendent music ever performed. But here we are arriving at my point: it wasn’t actually the listening to the music that really mattered. It was the making of the music, and the centrality of that activity to being human – which is, today, much diminished. What an irony. We live today in an utterly aurally overstimulated world. There is music, much of it very fine, everywhere. And we are as unmusical a species as we have been since shortly after we gained control of fire.
Now we have technology that, in essence, is promising to supplant the core and foundational human activity – that of thinking. And if it is a bit sad that human beings are simply less musical than we used to be, this threat is much more serious.
Freedom is, in the first and most essential place, intellectual freedom – the ability to reason clearly, to navigate received opinion and accreted prejudice so as to pursue and sometimes even catch knowledge of things that matter. This activity, when directed at the things that matter most, is called philosophy. It’s a kind of hunt, sometimes pursued in cooperation with others, racing in bands across the limitless plain after our quarry – but in the moment of the kill, we all must kill alone, for ourselves. The hunt for wisdom requires, of course, basic reasoning skills and ideally wide exposure to high-quality efforts of others to reason about things that matter. It requires learning.
Generative AI thus presents a double threat: first, it tempts you by offering to unburden you of the need to reason, of the tedium of organising your thoughts. But, almost as bad, it also scrapes the collected brain of human civilisation – all information and all learning, ever – and then, apparently, reduces it to unvariegated mush, spitting out to you its tawdry imitation of thinking: insipid, composite accounts that I would say are written at a 10th-grade level but for their uncanny alien quality that is not quite like anything human at all.
This problem is connected to challenges in foreign policy and implicates, well, our whole national destiny. I think I will find little disagreement that the core purpose of a free society’s foreign policy is the preservation of that free society. That freedom depends not just on our wealth and happy geography and the strength of our arms – though these are vital – but also on wise discernment among our leaders, which is itself a function of the discernment of our citizens. But this discernment, which in this context we might call strategic thinking or just ‘strategy’ as such, is an intellectual activity not dissimilar from philosophy, though its objects are different. Philosophy is the hunt – that race across the plains – for wisdom, whereas strategy is the search for advantage. Both things are ultimately, however, human activities. Philosophy and strategy are not things that you know. They are things that you do. If we cannot reason and if we have had no exposure to wise teachers – often in the form of their writings – we cannot strategise or philosophise – and we will no longer be able to maintain our freedom. Our only consolation will be that we will no longer deserve it.
What an astonishing moment in human history, when the long story of our species – the whole history itself of reason and technology – concludes with self-destruction unfolding from the very principle that has defined the species from the start. The threat posed by this new technology is even more concerning than the splitting of the atom, because what has come to market first is not a bomb that can kill cities, but a much more subtle and much broader-based attack on the very substance of our humanity.
What are you to do? As individuals? I have some thoughts about the reforms institutions and universities should undertake. They must redo their schedules to allow for in-class writing and ‘blue book’-style assessments, in the spirit of the Oxford exam schools. Embracing oral examinations as a part of assessment will help, too.
But for a room of talented, curious, and committed individual patriots such as you – how should you respond?
I’ll close with this. It may be the case that there are some practical individual uses of AI that are not pernicious. But I advise you to be completely radical. You young people, right here, are what we are depending on. Your country depends on you thinking for yourselves – on actually doing the reading, formulating your own thoughts, doing the work. On being a reader and a writer, not just seeming like you have a brain.
For any use of these new tools that takes the place of your own genuine understanding – which, I fear, is most of their uses – reject the crutch. To quote an old classmate of mine, if you haven’t started using these tools, don’t. If you have started, stop.
Freedom starts at home. It starts in your mind. It, too, is an activity, not a possession. Don’t give up the race.