Turn On, Tune In, Drop Out
On AI's effects on the brain and how we teach a new generation what thinking even is

The 1960s were a time of rupture.
In the United States, Martin Luther King Jr. was marching from Selma to Montgomery, Malcolm X was shot in Harlem, and hundreds of thousands of young men and women were protesting a war that few understood and even fewer believed in. Across the Atlantic, Britain was unravelling in its own way. Enoch Powell’s “Rivers of Blood” speech loomed over a country struggling to define itself post-Empire. The Sexual Offences Act was decriminalising homosexuality, even as police were raiding queer clubs from Soho to Salford. And as the Beatles started experimenting with acid, miners started experimenting with industrial strikes. Society was bubbling over. Change could be felt everywhere.
People were waking up. The culture of obedience that had lingered from the end of the war through the 1950s was giving way to rebellion. The idea was taking hold that the individual mind, if properly expanded, could resist the Establishment. You could find it in the essays of Baldwin, the lectures of Marcuse, the manifestos of Sontag. Thinking, back then, was insurgent. And education was being reimagined as rebellion.
Into this moment stepped Timothy Leary. A psychologist turned LSD evangelist, he urged students to “Turn on, tune in, drop out.” It was a call to reject the programmed routines of industrial education. To escape the institution and elevate human perception beyond what could be tested or graded.
His was a chemical revolution. Ours isn’t.
The psychedelic of our time isn’t a tab to be swallowed, but a prompt to be typed. It’s less about dissolving the ego and more about rewiring cognition. A recent study from the MIT Media Lab suggests that using large language models like ChatGPT reduces neural engagement, lowers memory retention, and dissolves ownership of thought. Where Leary asked us to expand our minds, Altman may be asking us to outsource ours entirely.
The intellectual paradigm is shifting. Tools that once supported thinking are now starting to replace it. Students are outsourcing not just research, but reasoning. Professionals are being encouraged to “leverage AI” to the point where their job becomes little more than an autocomplete.
This essay asks a few simple questions:
- If machines can think faster than we can, what’s left for humans to learn?
- What should education look like in the age of cognitive outsourcing?
- What skills are worth preserving, and what new literacies do we need to survive?
A Recap of the MIT Study

If you haven’t yet come across the MIT Media Lab study on the effects of ChatGPT on our brains, here’s a summary to help you get up to speed. Researchers tried to understand what happens to the brain when we outsource our thinking. Participants were split into three groups and asked to write short essays: one group used no tools, one used a traditional search engine, aka Google, and one used ChatGPT. As they wrote, their brain activity was measured using EEG – short for electroencephalography, a non-invasive method that places sensors on the scalp to track electrical signals in the brain. In this case, it was used to measure how actively different regions of the brain were working together, especially those linked to problem solving, working memory, and attention. In essence, how hard the brain was actually thinking.
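If you want a feel for what “regions working together” means in practice, here’s a deliberately crude sketch: correlate two signals and read a high correlation as co-ordination. To be clear, this is not the study’s method or data – the labels, signals, and numbers below are invented, and the real analysis uses proper multi-channel connectivity measures – but it shows the kind of quantity being compared.

```python
# Toy sketch of "regions working together": correlate two signals and treat
# high correlation as co-ordination. NOT the study's method or data - the
# signals below are simulated purely for illustration.
import math
import random

def correlation(a, b):
    """Plain Pearson correlation between two equally long signals."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    spread_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    spread_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (spread_a * spread_b)

random.seed(0)
t = [i / 100 for i in range(500)]

# Imagined "unaided writer": two regions share a strong common rhythm.
frontal_unaided = [math.sin(10 * x) + random.gauss(0, 0.3) for x in t]
parietal_unaided = [math.sin(10 * x) + random.gauss(0, 0.3) for x in t]

# Imagined "LLM-assisted writer": the two signals drift independently.
frontal_llm = [random.gauss(0, 1) for _ in t]
parietal_llm = [random.gauss(0, 1) for _ in t]

print("unaided coupling:", round(correlation(frontal_unaided, parietal_unaided), 2))
print("LLM-assisted coupling:", round(correlation(frontal_llm, parietal_llm), 2))
# The first number comes out high, the second near zero - a cartoon version
# of "weaker neural signatures", not a reproduction of the study.
```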
The results confirmed what most have been fearing.
The group that used no tools – the ones thinking unaided – showed the highest levels of neural activation. The group using search engines scored lower. The group using ChatGPT showed the weakest neural signatures. When asked to recall quotes from their essays, for example, more than 80% of the ChatGPT users couldn’t do it. They’d produced text but hadn’t internalised it.
One of the subtler findings was that this effect lingered. Participants who used ChatGPT in earlier sessions continued to show reduced brain activity even after switching back to writing without tools. Something had changed in how they approached the task. The researchers called it cognitive debt, a kind of learned passivity that builds up when thinking is repeatedly outsourced.
The study is still undergoing peer review. But even at this early stage, the implication is clear: when we give away the process of thinking, we lose more than the effort; we lose ownership of the thought itself.
We Can Know More Than We Can Tell

What does it mean to know something? To learn?
Is it the ability to memorise facts or to string together abstract concepts in the right order? How do we know when we’ve learnt something? When we’ve passed the test, is that it? Or is it, perhaps, something deeper and more embodied?
Some knowledge we inherit. Babies don’t learn to eat; they’re born knowing how to suckle. Their grip reflex, their eye-tracking, their turn toward warmth – these are behaviours coded into the body that cannot be found in any classroom.
But most knowledge is different. To hunt, to build, to diagnose, to argue – all of these are learned. Often, and certainly historically, learning has been painful, done through imitation, repetition, correction, and most importantly, failure.
To learn is to gain even a sliver of control over your environment. Not necessarily to change it, but at the very least to predict it. To anticipate what comes next. Learning is survival; it is pattern recognition in motion. And for most of history, it was about staying alive.
Long before universities, people learned by doing things near other people who already knew how. A boy stood beside his father in a forge; a girl followed her aunt into the field.
In The Emperor of All Maladies, Siddhartha Mukherjee tells the story of how surgeons came to be. For centuries, surgery in Britain wasn’t performed by doctors but by barbers, and done rather roughly at that. They shaved beards, pulled teeth, and removed limbs. Their knowledge came from practical, hard-earned, blood-soaked experience rather than any formal training or schooling. Fun fact: this is why, to this day, surgeons in the UK are still addressed as Mr. and Ms. rather than Dr. It’s a living artefact of a time when knowing how to cut didn’t require knowing how to theorise.
It was only later, when the outcomes of experience were codified into repeatable technique, that this kind of knowledge began to shift from body to book. This is the moment education began to move from proximity to abstraction, from trade to theory. Less intuitive, more instructional.
And even though the professionalisation of teaching was a net positive, it wasn’t without trade-offs. As the philosopher Michael Polanyi put it: “We can know more than we can tell.” He called this tacit knowledge – the kind of knowing that lives in the hands. The way a welder senses heat or a surgeon feels the tension of the skin. Actions are not just physical – they are themselves a form of thought. But unlike thoughts, they cannot be exported or copied and pasted; they have to be transferred and experienced through contact and care.
To do the thing itself over and over again is to learn it again. The doing, the friction, the biting point of the gears you learn to handle without a second’s thought, is itself education. Not the certificate or the theory.
“In the room the women come and go / Talking of Michelangelo.”
Another Brick in the Wall

Maybe Pink Floyd were right: we really don’t need no education. Or at least, not the kind we’ve inherited.
For most of modern history, school did exactly what it was designed to do. It reflected the tools, technologies, and social roles of its time, and its list of subjects became a blueprint for the kind of adult you were expected to become.
In the 19th century, it was about factory workers and colonial clerks. In the 20th, secretaries, engineers, and the civil service. There were finishing schools for women, trade schools for the lads, and grammar schools for the kids rich enough to make it into university.
Even as recently as the 1980s, taking a computer science class at school was seen as fringe. Today, it feels as obvious as Latin did half a century ago. Because, in a way, that’s what curricula are: mirrors. They reflect what a society values and where it thinks it’s going. The trouble now is, I don’t think anyone is sure where we’re heading.
The pace of technological change has outstripped the system’s ability to keep up. One year AI is a novelty; the next it’s writing your essays, a few months after that, it’s your best friend at work and your most trusted colleague. The idea of a stable career path has long been dissolving into gig work, portfolio jobs and a permanent upskilling treadmill. Is it any wonder “productivity hacking” (read: procrastinating while giving yourself a panic attack) is such a popular genre on TikTok?
And it’s into this climate – this fluid and uncertain world – that we’re still sending students to institutions designed for a professional structure that no longer exists. Teachers are doing their best with tools that feel current at the start of the school year but are obsolete by the time the ink dries on a student’s diploma.
The lag is no one’s fault, but the cost is everyone’s problem.
Learning in the Age of Autocomplete

If you’ve been following my work, this is a theme I revisit frequently. A few weeks ago, I wrote Leave Thinking Behind, where I explored the implications of our growing dependence on shortcuts. As I argued then, it’s not that I’m against progress – far from it – but the pace of change right now demands we pause, take stock, and examine the trade-offs.
Every time we outsource more, we drift further from the cognitive core of the activity. That’s not inherently bad – technology has freed countless people from farm fields to pursue diverse careers. But it does raise a question: what are we giving up?
For those of my generation and older, who built careers before AI was integrated into everything, the shift feels optional. We used algorithms to augment our thinking. For today’s 23-year-olds, AI is embedded. Their first job might demand they be AI savvy, but if they’ve never learned the fundamentals – researching from scratch, analysing under uncertainty, writing with rigour – how can they function when the machine gets it wrong? How will they even know? Or develop instinct and intuition?
We can already see signs of how this tension is playing out in the job market.
A recent UK survey found that one in ten graduates are pivoting careers out of fear that AI will render their current path obsolete. At the same time, data from Exploding Topics shows that jobs most exposed to AI replacement (like customer service, data entry, and admin roles) are often the very ones early-career professionals rely on to build experience and earn trust.
The entry points for careers are shifting, and it feels to many like the on-ramps are shrinking. Junior researcher, paralegal, editorial assistant, analyst – these were once the proving grounds. Today, they’re the most automatable roles on the market. And as they start to disappear, we risk creating a generation of professionals who never actually did the thing. At best, they know how to direct the thing that does it.
Perhaps that’s the future. Maybe being a lawyer who can’t draft a contract will feel as unremarkable as a DJ who can’t play an instrument (no shade to DJs). But the danger is more fundamental: we’re becoming dislocated from where the action happens.
The MIT study showed that when people relied on AI from the start, they retained almost nothing. Not the thought, nor the reasoning, nor even a single sentence. They hadn’t engaged, because they hadn’t actually done the thing.
Vaclav Smil puts it bluntly in How the World Really Works: most people have no idea how food is grown, how energy is produced, or how the materials that make modern life possible are even made. The percentage of the population who truly understand how to keep us alive is vanishingly small.
The further we drift from the doing, the more passive we become. An entire society of middle managers, prompting machines while nodding along, barely aware of what’s being produced.
I Need an Engineer, Immediately

Every generation faces its moment of panic. If I were a horse-drawn carriage driver in 1908, I’d be penning a heartfelt piece about the lost art of sensing the tension in the reins. How to feel the horse, not just steer it.
Today, we can ride in driverless cars around the Bay Area and get a smoother journey than anything the human hand could’ve delivered.
Things change, we adapt, and it all works out in the end.
But that doesn’t mean we should sleepwalk through the transition. Because, as I hope I’ve made clear, this isn’t just about a new tool; it’s about a new relationship with thought itself.
I opened this essay with some questions:
- If machines can think faster than we can, what’s left for humans to learn?
- What should education look like in the age of cognitive outsourcing?
- What skills are worth preserving, and what new literacies do we need to survive?
Let’s attempt to answer them, or at least, sketch the outlines of a response.
The first thing we have to admit is that AI is brilliant at answers, but it doesn’t quite know what the question was really about.
Writing, at its best, is far more than content; it is cognition itself. It’s how we organise ideas, test logic, sort chaos, and spot holes. In a recent Psychology Today piece, innovation theorist John Nosta argues that prompting is a kind of orchestration, and that while memory may be fading, intent is becoming the new marker of authorship.
It’s an elegant defence. But it only works if the person prompting already knows what they mean.
Can you really orchestrate what you don’t understand?
Prompting is just an interface; it doesn’t require any insight. And the more seamless that interface becomes, the more vigilant we need to be about what’s happening under the hood.
So what does learning need to look like now? The honest answer is I don’t know. I’m not an education specialist. But if I had to sketch a working version, it would go something like this:
Engagement over execution.
We need to move away from assigning tasks and toward assigning ownership. The process has to matter as much as the product, especially when the product can now be automated.

Discomfort as a feature.
Learning needs friction. Ambiguity. The slight pause before autocomplete. It’s in the uncertainty that instincts form.

Inference literacy.
People need to understand how models predict, not just that they write. What’s statistically likely isn’t always what’s true. Students need to learn to ask: Where did that come from? Can I verify it? (There’s a small sketch of this just after the list.)

Epistemic hygiene.
Or in simpler terms: the ability to call bullshit. Just because something sounds confident doesn’t mean it isn’t a hallucination. Critical thinking is now a survival skill.

Co-authorship, not co-dependence.
Use AI late, not early. Start with your own thought. Use the tool to stretch, not substitute. Don’t begin by filling in the blanks; begin by defining what the blanks might be.
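To make the inference-literacy point concrete, here’s a deliberately tiny sketch: a toy bigram model – nothing like a real LLM in scale or sophistication, and trained on a made-up three-line corpus – that completes a sentence with whatever it has seen most often, true or not.

```python
# Toy illustration of inference literacy: a model predicts what is
# statistically likely in its training data, not what is true.
# The corpus and the example are entirely invented for illustration.
from collections import Counter, defaultdict

corpus = [
    "the capital of australia is sydney",    # a common misconception...
    "the capital of australia is sydney",    # ...repeated more often
    "the capital of australia is canberra",  # the truth, said less often
]

# Count which word follows each word: a bigram model, the simplest possible
# "language model" - light-years from ChatGPT, but the same basic principle.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

# Ask it to complete "the capital of australia is ..."
prediction, count = following["is"].most_common(1)[0]
total = sum(following["is"].values())
print(f"Most likely next word: {prediction!r} ({count} of {total} examples seen)")
# Prints: Most likely next word: 'sydney' (2 of 3 examples seen)
# Statistically likely. Confidently delivered. Wrong.
```

Real models are incomparably more capable, but the failure mode survives the scale-up: confidence tracks frequency, not truth, which is exactly why the “where did that come from?” reflex matters.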
Or maybe we’re all destined to become timely engineers. Sorry, I meant prompt engineers. Where your only job is to think vaguely and ask for things with feeling:
“Make it sound more like me, but smarter.”
“Make it sound more urgent, but not desperate.”
I’ll leave you with this. I don’t think it’s about resisting the tools or hiding from advancement. But the more of them we take on, the more we need to remember what they’re for and what the trade-offs are.
AI is fast. It can act smart.
But it doesn’t care if you succeed or not.
My final question is: do we?
Thanks for reading.
I’m sure this piece sparked more questions than answers. If it resonated, challenged you, or gave you language for something you’ve been sensing, please do share it.
And if you’re enjoying my writing, please hit that subscribe button.
Until next time.