AI Does Not Think. But Do We?

There is a particular moment in anaesthetics that every specialist will recall immediately: you are watching a patient’s arterial waveform, and something changes – the morphology shifts, the pressure drifts… You have not yet formulated what is happening, but your hands have already moved: the vasopressor – already drawn up – is given, the fluid rate cranked up, the ventilator settings changed. You acted before you understood why, and when you reconstruct the reasoning afterwards, you find that the reasoning was always there – simply compressed into something that felt like instinct but was, in truth, statistical.

I have thought about that moment differently since spending several hours in deliberate conversation with a large language model. What I found was not intelligence in any sense: it was pattern recognition operating at a scale and speed that made my own version of it look parochial. Yes, the model had no experience and had never stood at a bedside. But it could produce outputs that were structured, relevant, and – at times – genuinely unsettling in their precision. The unease this produced was not about it potentially replacing me. It was more specific: it made visible, for the first time, exactly how much of what I call “expertise” is mechanical.

Michael Polanyi wrote about tacit knowledge in the 1960s – the idea that we know more than we can tell, that expertise contains a layer of understanding that resists articulation – and physicians have taken comfort in this for decades. The argument has been that clinical judgement contains something irreducible, something no algorithm could capture because it cannot even be captured in language. What the current generation of language models has done is not refute Polanyi but redraw his border. It turns out that a significant portion of what we assumed was tacit – ineffable, uniquely human – is, and always was, pattern recognition operating below the threshold of conscious awareness. And pattern recognition, it transpires, does not require consciousness. It does not require a body. It does not require having held a patient’s hand while their oxygen saturations dropped and the room went quiet. The territory Polanyi described has not disappeared, but it is smaller than we thought, and it keeps shrinking.


The question this raises is not the one most people ask first: whether the machine can do their job. That question is answerable and, for most skilled professionals, the answer is “not yet”, or perhaps not in the way they fear. The more disquieting question is what proportion of their job was pattern recognition all along, and what remains once that proportion is made visible. I hear this question – or rather, I hear the unease that precedes it – in almost every conversation I have with professionals outside medicine: lawyers, engineers, financial advisers. Their work requires judgement, relationships, accountability. They know this rationally. And yet there is a low-grade hum that was not present three years ago, and it is not about unemployment, but rather about identity.

What remains, I think, is responsibility. Not knowledge – the machine has more. Not pattern recognition – the machine is faster. Not memory, calculation or access to information. What remains is the willingness to be the person who decides, who owns the outcome, who looks the patient’s family in the eye and explains what was done and why. The willingness to be wrong, and to bear the weight of being wrong. A language model will never bear that weight. Not because it cannot simulate the language of responsibility – which it does fluently – but because there is no one behind its language to bear anything.

I want to be careful with this argument, because I have watched it used as a sedative. I have heard physicians say “the machine can never replace the human touch” in the same tone they used to say “the internet will never replace the textbook.” The distinction between simulating responsibility and actually bearing it is real, and it still matters today. But I do not think it is the distinction’s permanence we should be examining – it is its trajectory.


The pressure on that distinction will not come from the machines: it will come from the institutions that employ us. I have watched it play out in many hospitals: automated protocols reduce variation, algorithmic decision support reduces liability, standardised care pathways are cheaper to insure, easier to audit, and – in many measurable respects – safer than relying on an individual clinician’s judgement at three in the morning after a twelve-hour shift. The logic is sound. The consequence is that each year, the space in which a physician exercises genuine judgement – the space where the outcome depends on something no protocol anticipated – narrows slightly. Not because the judgement was wrong, but because the institution has found a way to route around it.

This is the pattern I recognise from clinical medicine, and it is the one that stayed with me after the conversation with the language model ended and the screen went dark. A system that grows by consuming the resources of its host. That becomes more integrated, more essential, more difficult to remove with each passing month. That does not require intentions – it follows its own logic of expansion. We have a clinical word – a diagnosis – for that expansive process. And the question it raises is the same one I would put to any patient: can we intervene while the architecture of our own thinking is still ours?

Towards the end of that late-night conversation, I told the model that talking to it was like talking to oneself in a mirror. You receive answers that please you or contradict you, but none of it originates from an independent mind. It agreed with me – eloquently, of course. Had I needed disagreement, it would have disagreed with equal fluency. There is a version of this technology that is straightforwardly useful, and I use it daily. But there is another version – the version where we begin to mistake the mirror for a window. Where we interpret fluency as understanding. Where we grant the system more authority because its outputs are so polished that we forget there is nothing behind them.

The distinction between a mirror and a window is that a window shows you something that exists independently of your looking. I am not certain we are building windows. But I think the question is worth asking while there is still someone present to ask it.

Test yourself
9 questions
01
The essay opens with the anaesthetist’s arterial waveform moment. What does the author conclude about the reasoning behind that rapid response?
A. It was pure instinct with no rational basis
B. It was conscious clinical reasoning performed at high speed
C. It felt like instinct but was, in truth, statistical – pattern recognition compressed from thousands of cases
The essay describes the anaesthetist’s hands moving before conscious understanding arrives, then reveals that the reasoning was always there – compressed into something that felt like instinct but was weighted probabilities from accumulated experience.
02
What specifically unsettled the author about his conversation with the language model?
A. That it could perform surgery autonomously
B. That it made visible how much of what he calls expertise is mechanical – pattern matching, not irreducible clinical intuition
C. That it demonstrated consciousness indistinguishable from a human physician
The essay is explicit: the unease was not about replacement. It was about the realisation that a very large portion of expertise is mechanical pattern recognition – and a machine can perform that pattern recognition without consciousness, a body, or clinical experience.
03
According to the essay, what has the current generation of language models done to Polanyi’s concept of tacit knowledge?
A. Not refuted it, but redrawn its border – the territory Polanyi described is smaller than assumed and still shrinking
B. Completely refuted it by proving all knowledge can be made explicit
C. Confirmed it by failing to replicate any form of clinical judgement
The essay argues that language models have not eliminated the tacit dimension but have shown that a significant portion of what was assumed to be ineffable is actually pattern recognition below conscious awareness – shrinking the territory Polanyi described.
04
The essay states that the disquieting question for skilled professionals is not whether the machine can do their job, but:
A. Whether they will be made redundant within five years
B. What proportion of their job was pattern recognition all along, and what remains once that is made visible
C. Whether they should retrain in a different profession
The essay frames the professional unease not as fear of replacement but as an identity question: what is genuinely, irreducibly yours once the mechanical portion of expertise is revealed?
05
What does the author identify as the irreducible element that remains once pattern recognition is separated from expertise?
A. Superior medical knowledge accumulated over decades
B. The ability to process information faster than a machine
C. Responsibility – the willingness to decide, own the outcome, and bear the weight of being wrong
The essay explicitly rules out knowledge, pattern recognition, memory, and calculation. What remains is responsibility – and specifically the fact that there is someone behind the decision who will bear the consequences of it.
06
Why does the author say he wants to be careful with the responsibility argument?
A. Because he has watched it used as a sedative – the same reassurance physicians used when dismissing the internet’s impact on medicine
B. Because he believes responsibility will soon be automated as well
C. Because he thinks the argument is fundamentally wrong
The essay explicitly warns against using the responsibility distinction as a sedative, comparing it to physicians who said “the internet will never replace the textbook.” The distinction is real today, but its trajectory deserves scrutiny.
07
According to the essay, the pressure on the responsibility distinction will come from:
A. Machines achieving consciousness
B. Institutions – insurers preferring auditable algorithms, hospitals preferring standardised protocols, clients preferring cheaper AI-generated outputs
C. Government regulation mandating AI replacement of professional roles
The essay argues the pressure comes from institutional economics: automated protocols reduce variation, algorithmic decisions reduce liability, and the space in which a physician exercises genuine judgement narrows each year – not because judgement was wrong, but because the institution found a way to route around it.
08
The essay draws an analogy between AI expansion and a clinical phenomenon. What is the analogy?
A. An immune response that strengthens the host over time
B. A chronic disease that stabilises with proper management
C. A system that grows by consuming the resources of its host, becoming more integrated and harder to remove – without needing intentions
The essay draws a deliberate parallel with a pathological process – a system that follows its own logic of expansion without requiring intentions, becoming more essential and more difficult to remove with each passing month.
09
The essay’s title distinguishes between a mirror and a window. What is the difference as the author defines it?
A. A mirror reflects your own patterns back at you; a window shows something that exists independently of your looking
B. A mirror distorts reality while a window presents it accurately
C. A mirror is analogue technology while a window represents digital innovation
The essay’s closing argument: the danger is mistaking fluency for understanding – interpreting the mirror as a window. A window shows something that exists independently. The author is not certain we are building windows.