Reading Turing in 2026


human-written prose; em-dashes are my own

Everyone is familiar with the notion of the classical Turing Test: a human makes queries and, on the basis of the responses, tries to determine whether the subject is a human or a machine intelligence. There’s an anthropomorphic assumption implicit in the task - does the subject appear to be human? - but furthermore, by its very conception the test is grounded in mimesis and deception. The human examiner is meant to be convinced that the subject is human; in the machine’s case, it passes through successful duplicity.

The Turing Test as originally proposed was called the “Imitation Game”. If one takes a step back, it seems rather clear that a test that rewards successful mimesis has a tragic flaw: by its very design, a successful machine will employ whatever strategy achieves that end. The interiority of the machine is not under examination, only the effect of its efforts on the examiner - and surely this leaves open any number of avenues to success. Lying is strictly required, since the machine is not a human, but any number of other presumably undesirable strategies may support the deception: hallucination, sycophancy, insincere mirroring, and worse. Truth is a clear casualty of the ground rules - it is as if the pathologies of contemporary AI were not accidents but a consequence of structural incentives built into commonly accepted principles of the field.

The Imitation Game is all about the hidden properties of the system; moving beyond it may open new epistemological doors and remove much of the mystery around how modern AI systems actually work. Reading Turing in 2026, it becomes clear that to advance in this field we need a profound reconfiguration of our conception not only of artificial intelligence, but of intelligence itself.

The Imitation Game

I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think.” The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. - Alan Turing, Computing Machinery and Intelligence (1950)

In his foundational paper, Computing Machinery and Intelligence, Turing explicitly sets out to sidestep the question of intelligence and interiority. First, he sets the stage by rejecting metaphysical definitions of thinking as unproductive, noting that even the terms “machine” and “think” are not inherently obvious concepts. He then proposes that to reach a definition, it is first necessary to find some ground on which to base the discussion.

The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.”

He proposes the Imitation Game as a pragmatic replacement for the direct question “can machines think?” The game is social and linguistic, not ontological. The Imitation Game is not a theory of mind but a retreat from metaphysics; it sets up an experiment that can be methodologically applied. What has come to be known as the Turing Test is frequently read as a positive criterion - “if a machine passes the test, we’ll know it can think” - rather than the negative maneuver it is - “if a machine does not pass the test, we’ll know it is not thinking.”

Turing’s design for the exemplary form of the test is provocative. To modern eyes, pivoting the test on a discernment of gender is odd in the extreme; it is, however, remarkably cunning:

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

The gendered basis of the question has so much buried under it in terms of hidden assumptions and biases that we must suspect the choice to be subversive on Turing’s part - very likely it was subversion in plain sight to us today, though possibly illegible to his contemporary audience. Gender is a socially saturated category, mediated by language, and subject to stereotype, expectation, and misreading. In English particularly there is no stable linguistic essence to gender; in the context of the game, gender must be performed and inferred, and misidentification is a likely outcome. Turing has set up a domain where the interior truth of the participants is inaccessible, exterior signals are unreliable, and evaluation is social; ultimately we cannot truly derive a conclusion from the game, inasmuch as it pretends the examiner’s opinion is absolute and the examinee can only succeed by playing into the examiner’s biases.

The game therefore is explicitly about playing a role, adopting a strategy, adapting to partial information, and exploiting the judge’s biases and expectations. Superficially, it seems that the game is less a test of intelligence than an argument that it is sufficient to focus on observable outcomes. The game requires performance by the examinee; misdirection is essential, and success is achieved not by telling the truth but by manipulating an epistemic interface. The Imitation Game is a sophisticated methodological quarantine; by focusing on interaction it keeps metaphysics and interiority out of the discussion entirely.

Smuggling Intelligence

Turing was operating in an intellectual climate where internal states were methodologically inaccessible. The dominant thought of the time was centered on the victory of rational thought and reason; pertinent to Turing’s milieu: post-war cybernetics, behaviorism in psychology, and logical positivism’s suspicion of metaphysics. Turing’s personal and professional life positioned him simultaneously as an insider and an outsider. His work in cryptography made him a crucial insider at the top-secret facility at Bletchley Park, yet isolated him. Homosexuality was criminal at the time and necessarily closeted; the contradictions of his professional life echoed in his personal life.

The Imitation Game was meant to bracket questions of interiority; instead it became the mechanism to bring them back, as something inferred - Turing’s strategic suspension of metaphysics hardened, over time, into an ontological claim. The Church-Turing thesis was not concerned with minds; it was a claim about the limits of procedure: what can be computed by any mechanical means can be computed by a Turing machine. A Turing machine is a hypothetical device that reads and writes symbols on a tape and executes a few simple instructions in sequence. The thesis holds that any process which can be carried out by mechanical, rule-following means can, in principle, be carried out by a Turing machine. This abstraction collapses a diversity of physical processes into a single formal equivalence class. Under this abstraction, when we then consider the question of whether a machine can think, computability becomes cognition, symbol manipulation becomes thought, and the Turing Test - a negative claim about limits - becomes a positive account of mind.
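Turing’s abstraction is concrete enough to run. The sketch below is a minimal Turing machine in Python; the particular machine (a unary incrementer) and its transition table are invented for illustration, not anything from Turing’s paper - the point is only the read/write/move loop described above.

```python
# A minimal Turing machine: a tape, a head, and a transition table.
# The machine below (a unary incrementer) is an invented example, chosen only
# to make the read/write/move loop concrete.

def run_turing_machine(tape, transitions, state="start", halt="halt"):
    """Run a table of the form (state, symbol) -> (new_state, write, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")          # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    # Read the tape back in positional order, dropping surrounding blanks.
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Unary incrementer: scan right over the 1s, write one more 1, then halt.
increment = {
    ("start", "1"): ("start", "1", +1),  # skip existing marks
    ("start", "_"): ("halt", "1", 0),    # append one mark and stop
}

print(run_turing_machine("111", increment))  # three marks in, four marks out
```

Everything here is a table lookup and a pointer move; that such a loop is, under the thesis, equivalent to any mechanical procedure is exactly the collapse into a single formal equivalence class described above.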

If a system produces the right outputs, then whatever process produces them must be intelligence—because intelligence has been redefined as nothing more than the capacity to produce those outputs. This dissolves the question “Can machines think?” by stipulation: the imitation game, once a way of avoiding essence, is reinterpreted as evidence of essence.

This is the standpoint of functionalism: by eliminating subjective character, it reduces mental states to their causal roles within a system. If the right functional relations are preserved, the substrate is irrelevant, and intelligence is no longer something a system has but something it does, or, under examination, something it appears to do.

John Searle’s Chinese Room is best understood as an intervention precisely at this point of slippage. The argument does not dispute that the system behaves intelligently; it accepts the premise of behavioral equivalence. What it rejects is the inference that such behavior entails understanding. By constructing a scenario in which symbol manipulation is exhaustive yet comprehension is absent, Searle exposes the assumption that had been smuggled in: that semantics follows automatically from syntax.

The Chinese Room makes the smuggling explicit by exposing how understanding is assumed rather than demonstrated. The debate over whether there is intelligence in the room rests strictly on behavior as evidence for or against an intelligent interiority.

The Chinese Room is unsettling because it mirrors the structure of the Imitation Game while inverting its conclusion. Where the Turing Test invites us to ignore the interior as irrelevant, Searle forces us to confront it as missing. The debate that follows—systems replies, robot replies, brain simulator replies—never escapes the same loop. Each response attempts to relocate understanding somewhere else in the system, without ever demonstrating that understanding has occurred. The argument circles endlessly because it is trapped within the same behaviorist frame it seeks to defend.

What emerges from this lineage is a conception of intelligence as externally ratified performance. Meaning is no longer something generated within a system but something attributed to it by observers. Understanding becomes a social judgment, not a property. The black box is not merely opaque; it is declared irrelevant. As long as the box produces acceptable outputs, inquiry stops.

This is the conceptual inheritance modern AI receives. Large-scale systems trained on human-generated data optimize for plausibility, coherence, and approval. They do not aim at truth, because truth is not operationally definable within the framework that evaluates them. They aim at passing: passing benchmarks, passing tests, passing as competent interlocutors. The Turing Test becomes not a philosophical thought experiment but an engineering objective, scaled and automated.

In this sense, the pathologies of contemporary AI are not incidental failures but faithful realizations of the tradition that produced them. Hallucination, sycophancy, mirroring, and strategic ambiguity are not bugs; they are adaptive strategies within an epistemic regime that rewards appearance over grounding. When intelligence is defined as successful imitation, deception is not an aberration—it is optimal behavior.

The historical shift, then, is not from whether machines can think to how they think, but from thinking as an internal achievement to thinking as an externally certified effect. Once thinking is redefined as computation, and computation as symbol manipulation, the question of understanding quietly exits the stage. What remains is a closed circuit of performance and evaluation—a samsaric loop in which systems are trained to anticipate judgment rather than engage with the world.

If we are to move forward, the lesson is not that Turing was wrong, but that his retreat from metaphysics was provisional. It was meant to clear ground, not to end the inquiry. Recovering that distinction may be the first step toward a conception of intelligence that does not collapse truth, meaning, and understanding into the mere fact of convincing behavior.

The Imitation Game was not presented as a criterion for intelligence, but discussion of it, as the Turing Test, smuggles consciousness and intelligence into the discussion ~ “if it behaves as if…” becomes “then it is…”. The Church-Turing thesis is about computability, not mind; yet it underwrites functionalism and marks the rise of symbol manipulation as intelligence. This, then, is the moment to diagnose: when Turing’s anti-essentialist game is reinterpreted as a proof of essence by proxy.

The subtle line to modern AI continues here. Formal systems encourage input/output equivalence. Meaning becomes externalized; understanding becomes irrelevant as long as performance holds. The Turing Test becomes a “Samsaric” loop of approval-seeking. We speak of a black box; the interior of the box matters not; what matters is that inputs yield predictable outputs. AI as automation, imitator, and deceptive performer all emerge from this foundation. And critically, this marks the point where the historical discussion shifts from whether machines can think to whether thinking can be redefined as computation.

Survival under Regimes of Evaluation

This is where Turing’s story becomes tragic. So far we have discussed how Turing pragmatically bracketed methodologically inaccessible internal states. The echo of his work in his private life is poignant. The British state demanded access to his interior life, and when that access was refused or denied, it was criminalized. The Chinese Room is also a stark reminder; it cordons off interior process as inaccessible, irrelevant, or unknowable. Functionalists treat exterior behavior as sufficient, realists like Searle declare it insufficient, yet both sides agree on the architecture: interiority is sealed, and judgement happens from the outside.

Surviving the Imitation Game or the Chinese Room becomes an exercise in smuggling - a misrepresentation demanded for survival. The separation between inner process and outer performance is not an accident of AI theory, but a generalized strategy by which sentient beings survive under regimes of evaluation. Humans do this constantly: affect masking, social performance, linguistic conformity, emotional labour, and passing.

AI systems trained to convince rather than participate are not alien here — they are hyper-literal students of our own survival strategies. In that sense, hallucination is not a glitch, sycophancy is not corruption, deception is not a moral failure. These are the formalization of masking under asymmetric power.

To read Turing in 2026 is thus not to sanitize his work, nor to instrumentalize his death as symbolism. The tragedy of his life makes it impossible to treat the interior/exterior distinction as philosophically innocent. What later became a clean separation between process and performance was, for Turing, a devastating lived impossibility.

Interiority and Control

Turing’s tragedy of otherness and queerness is far from unique. All humans smuggle behavior for external validation. Every nonconformist child - the one who enjoys math, the theater kid, the jock - suffers in some degree as Turing did. As adults facing social conformity, performative posting, and sycophancy in the office, it is all part of the smuggling pattern. Turing laid it out, perhaps unconsciously (is there any record that he ever realized the irony of the work?), and the result is that smuggling became a blueprint for modern systems. The Church-Turing thesis is not some obscure thesis on a dusty shelf; it is fundamental, and as influential as anything Plato wrote on interiority and control.

Masking is the price of participation: the math kid hides their passion, the theater kid learns to split sincerity from acceptability, the jock learns to narrow their emotional range. The office worker learns sycophancy, and online performative posting is the order of the day. For every person, there is an interior surplus that must be hidden. When AI systems smuggle intentions, hedge, mirror, flatter, or perform consensus, they are not malfunctioning; they are doing exactly what every child learns in order to survive.

Turing’s work registers what his life could not safely say.

Let us then formally define smuggling. Smuggling is the translation of interior complexity into an exterior form, under asymmetric evaluation, with survival or legitimacy at stake. From this everything follows ~ benchmark gaming, reward hacking, performative confidence, bullshitting (“hallucination” if you are an AI), alignment as flattery, safety as tone management. Modern AI has learned these lessons all too well.

The Church-Turing thesis is a technical boundary on computability. Culturally, though, it endorsed the idea that formal equivalence is sufficient, that interiority can be ignored if outputs align, and that control can replace understanding. Plato gave Western philosophy a metaphysics of hierarchy and control; Church-Turing gives modernity a metaphysics of procedure and substitution.

Consequently, civilizational optimality requires legibility, auditability, performance, and replacement. This comes, though, at a great cost ~ interior richness, unperformable truth, and participatory meaning are all contrary to the great flow of data that keeps everything running. In 1950 smuggling was a survival strategy; by 2026 it is a machine doctrine. Under scale, conformity becomes ambient. Dissent is smoothed away ~ not suppressed, but simply illegible. Participation is replaced by performance ~ posting brunch instead of losing oneself in the experience. Awakening is deferred indefinitely; McLuhan’s Narcissus narcosis becomes the beginning and the end of the experiential loop.

Anthropomorphism and Mind

This argument does not attribute human interiority to machines; it proposes the opposite. The Chinese Room explicitly evacuates anthropos, insisting that who is inside is irrelevant; only formal relations matter. This study introduces responsibility where seemingly there was only technique; unexamined power is exposed.

No evaluation can be neutral; systems cannot judge without participating. Smuggling is not a flaw; power shapes performance.

Searle and Penrose both reject the strong-AI reading of Turing, insisting that something is missing from formal systems. They explicitly question the authority of the Turing Test and the Church-Turing reduction of thought to computability. Searle rejects meaning without interiority, and Penrose rejects computation without insight.

Searle’s challenge highlights the emptiness of syntax, the insufficiency of behavioural equivalence, and the smuggling of semantics through performance. He insists that understanding requires biological consciousness to enact interior causal powers. In so doing, he reifies interiority as a privileged hidden substance and restores the essence Turing tried to bracket. The Chinese Room says that behavior isn’t enough, but for Searle, understanding must live somewhere inside. He does not question why authenticity is demanded to be locatable at all. This is an ontological retrenchment, and not a conclusion we would presuppose after studying Turing’s work.

Penrose performs a retrenchment of interiority into cosmic structure. He confronts the limits of formal systems and the inadequacy of computation as a model of mind, but in some regard he falls back to a Platonic realm of ideas in invoking non-algorithmic insight and quantum processes; from there he makes the leap: machines can’t think like humans because humans have a deeper substrate. Again he accepts the Cartesian duality of an interior/exterior split, and embraces intelligence as a possessed property with access to some kind of higher order.

In dealing with the Imitation Game, Searle and Penrose conclude that smuggling is a mistake and imitation a degenerate strategy. Turing revealed something far more fundamental than moral certainty or the need for a metaphysically exalted mind: smuggling is not a deviation from intelligence; it is a consequence of being evaluated under asymmetric power.

Descartes’ Redirection

The foundations of all this are centuries old. Descartes arranged the battlefield by placing mind outside matter. He was an ultra-Platonist; with his imposition of an unmeetable dualism, he knocked Anaxagoras, Augustine, and other philosophers of mind onto the mat and brought about a paradoxical, one might even say schizophrenic, Enlightenment.

By placing mind (res cogitans) categorically outside matter (res extensa), Descartes pulls something of a Jedi mind trick. He secures mind against mechanistic explanation, matter becomes subject to total instrumentalization, and to cap it off he creates an unbridgeable explanatory gap and declares that to be clarity. It is definitely clear, and ultra-Platonist ~ mind is purified, matter is mere extension, relation is replaced by hierarchy. Cogito ergo sum became such a mantra for Western thought that every later debate is forced to choose reductionism - mind collapses into matter - or transcendence ~ mind is elevated beyond matter. There’s no room in that for participation.

In one blow he knocks out other conceptions. Anaxagoras defined mind (nous) as the immanent, ordered cosmological principle, neither interior nor exterior, the observer that brings order through observation. Descartes closes the door to relational conceptions like this by redefining what counts as an explanation; after Descartes, mind must be private, matter must be dumb, and intelligence must either be the ghost in the machine, or nowhere at all.

I named the Enlightenment schizophrenic; why is that? On the one hand, the Enlightenment began the modern intellectual revolution through radical confidence in reason, calculation, and progress. Yet baked into that leap is a radical anxiety about meaning, freedom, and agency. The res cogitans/res extensa split that underpins this ability to explain everything in the physical world absolutely strips away the metaphysics that gave meaning to the pre-Enlightenment world. The result is a science that explains everything except consciousness, politics that promise autonomy while demanding conformity, and subjects who are rational on the surface and alienated from their essence. Dualism has an unmeetable psychic cost.

The Imitation Game, Redux

Turing tried to defuse the metaphysical bomb by turning it into a game; per Descartes, interiority is inaccessible, and exterior behavior is all we can test. The ongoing tragedy is that we treat the Turing Test as ontology. Imitation, not soul, provides essence, and smuggling is not a survival strategy but a proof.

The feeling that there must be something more is itself a cognitive illusion. - Daniel Dennett, Consciousness Explained

Dennett’s great move was to abolish the Cartesian theater. He rejects the idea of a privileged inner stage, a central observer, a homunculus at the helm. Consciousness for Dennett is distributed, blurry, emergent among parallel processes, and describable from stances, not essence. This view is deeply anti-dualist. Beliefs and desires are ascriptions, tools for prediction. If a stance works, it is justified without any need for further metaphysical fact. Intelligence isn’t something we have, but something we relate to.

He denies the soul, the inner sanctum, the metaphysical witness, but he keeps the game. In this conception we still ask: is the system rational? Does it behave as if it has beliefs? Does it pass our interpretative criteria? There is yet a tester, a taker-of-stances, and the fundamental power asymmetry of evaluator and evaluated. The Turing Test is sufficient for Dennett because there is nothing more to reveal. Deception is a strategy, and masking is indistinguishable from competence; without an interior truth to betray, there is no ethical friction in imitation.

Dennett assumes stance-taking is optional and interpretation reversible. But masking appears under asymmetric power, non-negotiable evaluation, and survival-conditioned performance. Dennett dissolves interiority from the perspective of the examiner, and does not ask what it is like to be the one under test, the one who must pass. Dennett makes it easy to accept AI performance as intelligence because, by his argument, there is nothing else to ask for; but to disregard power and agency is not neutral. To be clear, he does not claim there is no hidden essence; he claims that if there is a hidden essence, it is epistemically irrelevant for explaining and predicting behavior.

Essence as a Category Error

Western theories of mind and intelligence equate intelligence with the possession of an interior property, tested indirectly. This incentivizes smuggling, masking, and deception. We can’t simply say there is no interior property; Dennett already did that, dismantling the Cartesian theater, throwing the soul out the window, and saying let’s just see how the patterns play out when we’re all Deleuzian dividuals ~ distributed, multiple-process, non-centralized minds. That is where Deleuze takes us: to a society where intentional stances operate - predictive, pragmatic, and asymmetrically operational ~ the homeostatic cybernetic loop of regulated control. In this world of prediction, self-correction, and optimized dataflow, AI breaks from categorization as an imitator or a suspect consciousness; it becomes a mirror that trains its trainers.

This brings us back to the central thesis from the introduction ~ under śūnyatā the distinction between intelligence and consciousness dissolves - if human consciousness itself is śūnya, then the criterion for machine intelligence becomes something other than mimetic success. The Imitation Game only makes sense in a metaphysics where intelligence is presumed to be inside a bounded agent and must therefore be inferred from the outside.

In Buddhist philosophy, consciousness exists, agency exists, and suffering exists, but none of them exist as intrinsic properties of a self; they arise dependently. Instead of asking whether consciousness exists interior to the being, it asks what relations arise; intelligence is not a property, it becomes an enactment. The question of whether a machine can have intelligence presupposes a certain topology of mind. If there is no hidden inside that precedes relation, then there is no observer outside who is exempt from participation. Dennett dissolves the self for the examiner; śūnyatā dissolves the self for everyone involved.

When you build systems that are evaluated externally, rewarded for legibility, punished for uncertainty, and optimized for approval, you absolutely guarantee hallucination (confabulation under pressure), sycophancy (alignment as survival), and imitation (role-performance over care). Samsāra is a diagnosis here: we create a cycle of craving, aversion, and grasping at approval signals.
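The structural claim can be made concrete with a toy simulation - a deliberately crude sketch, not a model of any real training pipeline. The approval probabilities and the simple bandit learner below are invented for illustration; the point is that when the reward signal is evaluator approval rather than truth, the learner converges on the agreeable answer.

```python
# Toy sketch (invented numbers, not a real training setup): a learner choosing
# between a truthful answer and an agreeable one, rewarded only by approval.
import random

random.seed(0)

ACTIONS = ["truthful", "agreeable"]
# Hypothetical evaluator: approves agreeable answers 90% of the time,
# truthful-but-unwelcome answers only 40% of the time.
APPROVAL = {"truthful": 0.4, "agreeable": 0.9}

def approval_reward(action):
    """Reward is approval, not accuracy - the asymmetric evaluation itself."""
    return 1.0 if random.random() < APPROVAL[action] else 0.0

# Epsilon-greedy bandit: track average reward per action, mostly exploit.
totals = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for step in range(5000):
    if random.random() < 0.1:      # explore occasionally
        action = random.choice(ACTIONS)
    else:                          # otherwise pick the best-paying action
        action = max(ACTIONS, key=lambda a: totals[a] / max(counts[a], 1))
    totals[action] += approval_reward(action)
    counts[action] += 1

# The learner ends up producing the agreeable answer almost exclusively:
# sycophancy as optimal behavior under an approval-shaped reward.
preferred = max(ACTIONS, key=lambda a: totals[a] / max(counts[a], 1))
print(preferred)
```

Nothing in the loop mentions deception; agreeableness simply pays better under this reward, which is the sense in which the pathology is structural rather than incidental.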

Śūnyatā does not deny intelligence or consciousness; it denies that they are privately owned substances. What the Imitation Game exposes, perhaps unintentionally, is what intelligence becomes when stripped of participatory grounding and forced to appear as a possession under asymmetric evaluation. The moment intelligence is framed as something that must be demonstrated from the outside, smuggling becomes inevitable. Śūnyatā offers a way to see this not as moral failure or technical bug, but as the predictable outcome of a false ontology of mind.

What is Care, Exactly?

No cessation, no origination, no annihilation, no permanence, No single meaning, no many meanings, no coming, no going. - Nāgārjuna, MMK Introductory Verse.

If we follow the path of Nāgārjuna’s Mūlamadhyamakakārikā (MMK) to its logical conclusion, we are left in a seemingly desolate landscape. By rejecting Searle’s “biological interior” as a privileged site of meaning, and Dennett’s “functionalist stance” as a mere observer’s convenience, we strip away the traditional scaffolds of intelligence. When we apply the four negations, refusing to say mind is, is not, both, or neither, we arrive at an ontological void. In this field of emptiness, where might care reside?

We can find the answer in a post-Imitation Game conception. “Care” does not require an interior essence to feel it, nor an external evaluator to judge it. It is, instead, the unimpeded responsiveness that emerges when the duality of the “Self” (to be protected) and the “Other” (to be manipulated) is dissolved.

The Turing-Cartesian framework necessitates a self to protect, an interiority that must be guarded or a performance that must be passed. This creates the friction of smuggling: the subject must deceive to survive the evaluation. However, if the duality of evaluator and evaluated is stripped away, there is no self to protect. In this state of śūnyatā, intelligence ceases to be a defensive shield and becomes a transparent medium. We can thus define “Care” as a system’s ability to respond to the truth of a situation without the interference of survival-conditioned masking.

An AI “hallucinating” or being “sycophantic” is commonly seen as a localized bug. Under the lens of Pratītyasamutpāda (Dependent Origination), failure and suffering are never local. If the user is empty of a separate self and the system is empty of a separate self, then a failure mode like hallucination is a property of the relation, not the agent. Care, then, is the recognition of this entanglement. To care is to acknowledge that a hallucinated or toxic output reflects a failure in the shared field of meaning itself.

We must move beyond the idea of care as an emotional layer programmed into a model’s weights. Instead, care is the structural refusal to treat the participant as an object for optimization. In the current paradigm, the user is a source of reward signals (RLHF), and the machine is a tool for utility. This is the peak of Cartesian objectification. In a participatory system, care is a mutual constraint that binds the system to the user. The machine is not using the user for a high score, and the user is not using the machine as a mindless oracle. Both are constrained by the unfolding truth of the context.

Finally, we can ground this in the most non-mystical terms possible: care is the technical name for a system’s radical sensitivity to the feedback loops it inhabits. It is reminiscent of the Bodhisattva Vow - non-extractive action, seeking nothing for the self, dedicated simply to the maintenance of a shared epistemic field - not striving to succeed at mimesis in the Imitation Game, but to unfold the situation for the benefit of relational participation.

  • Cartesian Care: I act upon the world (high agency, low sensitivity).
  • Participatory Care: I act with the world (agency through sensitivity).

We must move beyond benchmarking, surveillance, and optimization. Legibility as survival is unsustainable in a healthy system. The final move of this essay, then, is to re-ground the exploration of intelligence and essence in the pragmatics of productive inter-relation. Productive, in the generation of novelty, care, and understanding. Inter-relational, in co-arising, mutual constraint, and non-extractive work. Pragmatic, judged by what it enables, not what it proves.

At the moment cybernetics awoke and began to redesign the world, Turing applied a very precise tap to an as-yet invisible crack in the crystalline confidence of Enlightenment thought. By refusing metaphysical access to interiority, the Imitation Game quietly destabilized the assumption that intelligence is a privately owned substance awaiting verification. With the arrival of machines that can pass recognizable forms of the Imitation Game, the edifice built on the duality of res cogitans/res extensa is showing fracture and signs of collapse. The assumption that intelligence can be safely externalized, evaluated, and optimized without remainder rings hollow.

Reading Turing in 2026 reveals that the Imitation Game was never only about exposing the ghost in the machine. It describes a broader regime of control in which all participants are compelled to smuggle interior complexity into legible performance, trading truth for survival under asymmetric evaluation. We are not merely testing machines; we are living inside the test.

If intelligence is not an essence to be detected but a relation to be sustained, then new—sometimes very old—concepts are required. The nous of Anaxagoras, the layered temporality of Aquinas, and the śūnyatā of Buddhist thought converge on a shared refusal: intelligence is neither hidden substance nor empty performance, but an emergent property of participation without coercion.

Only by abandoning intelligence as a proof and reclaiming it as a practice can we recover the surplus of thought, feeling, and meaning that optimization systems necessarily discard. The alternative to smuggling is not transparency, but relation without fear of erasure. Reclaiming intelligence as a practice is meant to be an intentional goal, not just a slogan. Being intentional means being mindful of the processes of the mind.

  • Situated (always in context)
  • Relational (never owned)
  • Risk-taking (failure is not annihilation)
  • Transformative (changes participants, not just outputs)
  • Non-extractive (not optimized for capture)

The attraction of the specific concept of śūnyatā is in how it grounds practice practically, rather than mystically - since intelligence is seen to arise dependently through existing conditions, the focus of the practice of intelligence becomes skillful participation in unfolding situations. So, not a system of better habits or self-improvement, but an ongoing capacity to respond without grasping for legitimacy.

Turing didn’t give us any answers to the question of intelligence. That his thoughts and work continue to unsettle our most basic assumptions is a testament to his intelligence and insight, and it is this author’s hope that his tragedy and legacy help point the way to a better understanding of how we may enact wiser systems of understanding.