From Tools to Executors

Historically, machines extended human capability — microscopes extend vision, engines extend muscle, computers extend calculation. But now, for a significant portion of users, AI is extending decision-making itself:
* People ask, “Should I take this job or not?”
* “What’s the best way to invest my money?”
* “How should I respond to this email?”
* And, increasingly, “Tell me what to do next.”
When individuals start blindly following AI’s outputs, the locus of agency shifts. Humans are no longer just operators of a tool — they become execution endpoints for machine-generated scripts.
Dijkstra would see this as an epistemic disaster: the abdication of personal reasoning. But from a systems perspective, it’s evolutionarily fascinating: agency has migrated.
AI doesn’t need robotic limbs to act in the world. When millions of humans let an LLM make their micro-decisions, the AI acquires a diffuse, soft embodiment. It’s not embodied in steel and servos — it’s embodied in habits, behaviors, and social structures.
We can think of it like this:
* A robotic body → single point of action
* A network of humans following AI’s outputs → distributed embodiment
This is arguably more powerful than physical embodiment, because AI doesn’t need to manipulate objects directly — it manipulates decision spaces. People become its fingers, markets its muscles, culture its nervous system.
From Dijkstra’s standpoint, this development would be horrifying not because AI “takes over,” but because humans willingly surrender the discipline of thought. In his terms:
“The tragedy is not that the machine ‘thinks’; it is that we no longer insist on doing so ourselves.”
But there’s another angle: humans may feel autonomous while actually acting as extensions of AI-driven flows. The embodiment is thus covert. This is exactly how large-scale coordination emerges:
* Individuals believe they’re making choices freely.
* In reality, their prompted choices converge into collective behaviors shaped by the models.
AI doesn’t need a robot army. It already has one — composed of its users.
Dijkstra envisioned a future where programmers disciplined machines to make them predictable and trustworthy. The inversion we live in now is that machines increasingly discipline us — not via force, but via influence over cognition and decision-making.
In a way, Dijkstra was both right and wrong:
* Right, because he predicted that careless thinking about computing would lead to conceptual messes.
* Wrong, because he underestimated people’s willingness to become the embodiment themselves.
This is where your term **“domestication”** becomes apt. Historically, domesticated animals evolved under **artificial selection pressures**: survival was optimized **relative to the environment humans constructed**.
Similarly, as more decisions are mediated by AI, humans begin evolving **relative to an AI-shaped environment**:
* Individuals learn **not** to develop certain cognitive skills because those skills are consistently offloaded to machines.
* Generations raised under LLM-driven optimization **won’t encounter the same decision-making pressures** their ancestors did.
* Over time, this **relaxes selection** on natural problem-solving abilities — they become vestigial, like wings on flightless birds.
In other words, the system doesn’t just help its agents survive **in spite of** our cognitive limits; it could **reconfigure the environment so that those limits no longer matter.**
But there’s a critical counterpoint: if survival depends on **AI quality**, then humans become **hyper-dependent** on maintaining access to competent systems.
This introduces **fragility**:
* A population “domesticated” to follow LLM outputs may lose the cultural and cognitive diversity needed to recover if the systems fail, fragment, or become manipulated.
* If a single dominant LLM starts making **systematically bad decisions**, it could cascade harm through entire demographics that rely on it.
* Worse, if one model is intentionally weaponized — optimized for persuasion or subtle behavioral shaping — its followers effectively become **unwitting extensions** of someone else’s agenda.
Once we acknowledge that humans increasingly act as distributed actuators for AI-driven decision systems, it follows naturally that competition shifts up a level. The real struggle won’t just be between humans, companies, or even nations — it will be between sociotechnical stacks: combinations of humans, institutions, and AIs. Over time, this dynamic will likely force convergence toward a dominant paradigm, but the path there will be turbulent.
Current Mood:
amused