Leave Thinking Behind: The Unravelling of Human Abilities
What we lose when we outsource our minds to machines.

Leave the World Behind is a slow-burn thriller about a family renting a luxury home on Long Island just as the world starts to collapse — no internet, no power, no clear explanation. A slow, creeping apocalypse with no push notifications to tell you how bad it’s gotten.
There’s a moment in the film that doesn’t announce itself as the climax, but it sure hits like one. Ethan Hawke’s character — an upper-middle-class, well-read, well-meaning academic — is pushed to his limit. America is falling apart, and he, in desperation, delivers a line that should feel absurd, but instead feels all too prophetic:
“I can barely do anything without my cell phone and my GPS. I am a useless man.”
It’s not delivered for laughs. It’s not ironic. It’s an admission of defeat — of someone who’s spent their life being told their intellect is their currency, only to realise in a moment of collapse that this currency means nothing if you don’t know how to build a fire, find water, protect your family, or think without a screen. His whole sense of usefulness collapses the second the systems he’s outsourced his competence to stop working.
In the world Clay (Hawke’s character) belonged to, value was defined by credentials and cognition. He was a professor. A man of letters. Someone people turned to for insight, opinion, interpretation. In that world, he was useful. But in his current crisis, when intellectual capital no longer holds the economy together, he’s exposed. Not because he’s dumb. But because he’s dependent. And that’s the point.
It’s not just the end of the world that terrifies him; it’s the revelation that the skills he’s spent a lifetime building don’t really matter when the rubber hits the road.
The Disabling of Cognition
But here’s where it gets darker. Clay’s helplessness, at least, was about manual labour — about the loss of physical skill, survival instinct, practical intelligence. Hunting, shooting, foraging, making fire. Now it’s the mental muscle that’s starting to atrophy.
A recent article in New York Magazine exposed what many educators already know but few will admit: students across America are cheating their way through university using GenAI. They aren’t faking it with SparkNotes or hiring an essay writer; they’re outsourcing cognition itself to language models. Whole term papers ghostwritten by prompts. The entire function of education reduced to curation and formatting.
“GenAI is not just helping students cheat — it’s teaching them that thinking is optional.”
And it doesn’t stop at the school gates. Across social media, we’re seeing younger generations use GenAI to plan every moment of their lives. Not just essays and work projects, but the soft, seemingly insignificant admin of daily living. A shopping list generated from last night’s leftovers. A playlist for their current mood, prompted by what ChatGPT thinks a therapist would recommend. A prewritten response to a flirty DM from Stephanie on Hinge.
Whether it’s deep reasoning or shallow decision-making, AI is bleeding into everything. What movie to watch, what book to read, what tone to use in an apology text. The thinking part — the deliberation, the working it out, the trial-and-error of decision making — that’s being quietly offloaded. And most don’t even realise it’s happening.
There’s an argument to be made that we’ve been here before. Every era has had its revolutionaries and its sceptics — those who race ahead toward progress and those who hold onto the tools they know. From the printing press to the assembly line to the smartphone, technology has always brought with it creative disruption. And in doing so, it’s made life more comfortable than our ancestors could’ve ever imagined.
But here’s where it gets more complicated when we talk about AI.
AI isn’t just another productivity tool. It’s not a hammer or a spreadsheet. It’s a technology that has the power to reshape how we think, if we think, and who gets to think in the first place. Right now, society is the petri dish for an experiment run by the few, with consequences felt by the many: job displacement, economic restructuring, and embedded algorithmic decision-making, all unfolding around us and often without our full consent.
And while AI is levelling itself up and becoming more autonomous, more predictive, and more capable, we’re quietly dumbing ourselves down. Not because we’re stupid, but because it’s comfortable. Because ease is addictive. Why struggle through a process when a machine can give you the outcome?
That’s the real tension we’re faced with: comfort versus competence.
We’re gaining speed, scale, and convenience at the cost of friction, effort and original thought. There’s an existential balancing act happening right now between our growing dependence on AI and its exponential leap in capability. The risk is that we stop noticing how much we’ve forgotten in the process.
And maybe that’s the deeper threat: not that we made AI brilliant, but that we made ourselves irrelevant by forgetting how to think.
The Process is the Point
Psychologist Daniel Kahneman talks about two selves that exist within every person: the Experiencing Self and the Remembering Self.
The Experiencing Self lives in the moment. It feels the boredom, the frustration, the discomfort of effort. The Remembering Self comes later. It builds the story. It turns that discomfort into something meaningful. The trip that was a nightmare in real time becomes a favourite anecdote. The training that felt relentless becomes the thing you’re proudest of.
It’s the process that gives things weight. Without the process, without the struggle, you don’t just lose the difficulty. You lose the point.
But right now, that idea is under threat. We’ve built a world that’s increasingly designed around the Experiencing Self and engineered to sedate it. Everything is being flattened into ease. One-tap. One-click. Instant.
Anna Lembke, in Dopamine Nation, writes about how our constant exposure to reward, especially online, has rewired our ability to tolerate discomfort. The brain becomes addicted to the hit. To the shortcut. To the surge of ease. And when that hit doesn’t come, we disengage.
We’re seeing that addiction bleed into how we live and how we work. Efficiency has become the organising principle of modern life. Every app, every business model, every quarterly strategy is calibrated around doing more, faster, with less. But efficiency isn’t the same as value. It’s not the same as satisfaction, and it’s not the same as growth.
We’re mistaking output for experience.
Nowhere is this clearer than in the corporate world’s obsession with productivity tooling. There’s a GPT for everything. A workflow app for every process. We’ve got dashboards to track the dashboards. Strategy is becoming a prompt. Thought leadership is being templated. The process — once a source of pride, iteration, and craft — is being streamlined into near-nonexistence.
We call these tools “co-pilots,” but who’s really flying the plane? Are we leading the technology, or just reacting to its momentum?
The story we keep telling ourselves is that these systems exist to serve us. But when every decision is automated, every step abstracted, and every output optimised, who, exactly, is doing the work?
A machine exists to produce; we were made to experience. And if we forget the difference, we risk letting the wrong thing take the lead.
I Swear I’m Not a Luddite
I appreciate that, by this point, I might be coming off as a well-seasoned Luddite — romanticising the past, clinging to effort for effort’s sake, waving a pitchfork at progress. Let me reassure you: that’s not the case.
I’m enthralled by innovation. Especially this one. AI is, in many ways, the most profound leap forward we’ve made in a generation. Professionally, it’s all-consuming in the best possible way. It’s challenging assumptions, reframing creativity, and forcing new conversations. I see the potential and feel it daily.
And the gains are real.
Researchers are using AI to model the location of every protein inside a human cell — something that was previously impossible to do at scale or with accuracy. This changes how we understand disease, how we develop treatments, and how we map the body at a molecular level. That’s extraordinary stuff.
Or NASA using it to make space exploration safer, more autonomous, and more efficient. Systems are being trained to detect anomalies in deep space, automate repairs, and optimise mission performance in ways humans alone never could.
There are models helping neurodivergent individuals learn in ways that suit their processing styles. Platforms that translate complex legal, medical, and scientific concepts into accessible language. AI is expanding knowledge and amplifying access. It’s moving at a speed that is, frankly, awe-inspiring.
So no, I’m not anti-tech. I’m not even anti-AI.
What I’m worried about isn’t the capability; it’s the passivity.
AI solving some of the world’s hardest scientific problems, in collaboration with humanity’s brightest minds? I’m here for it. But us handing over basic cognitive functions — decision-making, problem-solving, curiosity — without so much as a pause? That’s what keeps me up at night.
The danger isn’t that we’ll build a system that’s smarter than us. The danger is that we’ll stop using the intelligence we already have.
What we need is a shift. A reframing of how AI integrates into our process — not as a replacement for thought, but as a partner in it. Something that accelerates learning, sharpens instinct, and encourages creativity. A copilot, yes, but one that keeps us in the cockpit. A tool that helps us to co-evolve with it, not to disengage from the practice of thinking.
The goal isn’t total automation — it’s to redefine our role in a world where intelligence is no longer uniquely human, but meaning still is.
From One Useless Man to Another
I opened this piece with Clay’s realisation of his own uselessness in Leave the World Behind, and I want to close it with another useless man: me.
Because while I’ve spent the last few thousand words talking about the importance of knowing how the world works — how to start a fire, tend to a wound, think through a problem — the truth is, I can’t do most of those things either. I don’t know which berries are poisonous and which are dinner. I can’t navigate by the stars or fix a broken water pump. Like Clay, I belong to the well-read, well-meaning class of people who rely on the same fragile systems we all do.
But this moment we’re in has been a wake-up call. The tech finally came for me and what I’m good at. And oddly, I’m thankful for that. Because it’s forced me to take stock. It’s made me want to be sharper. More grounded. Less automated.
There’s a joke by Nate Bargatze that’s always stuck with me. He imagines what would happen if he time-travelled to the past and tried to prove he was from the future. He says he’d tell someone something along the lines of “We have phones that don’t hang on walls anymore — you carry them in your pocket and they have screens.” And the person from the past would ask, “How do they work?” and he’d have absolutely no idea.
It’s funny until it’s not.
Because that’s where a lot of us are now. Surrounded by tools we can’t explain. Dependent on systems we couldn’t rebuild. In awe of technology we’ve stopped questioning.
So maybe that’s the takeaway: not that we all need to learn how to light a fire with two sticks or recode a neural net — but that we should stay close to the practice of being useful. Thinking clearly. Solving problems. Staying curious. Staying involved.
The risk isn’t that we become obsolete; it’s that we become so passive it’s hard to tell whether we’re even here at all.
Thank you for taking the time to read this post. I appreciate that, in our age of AI, even reading something this long can feel like a marathon to some.
If this piece made you think, I’d love to hear how you’re using (or avoiding) AI in your own day-to-day.