Posted by nancygold ([info]nancygold)
@ 2025-09-09 17:00:00


Mood: amused
Entry tags: computing

From Tools to Executors
Historically, machines extended human capability — microscopes extend vision, engines extend muscle, computers extend calculation. But now, for a significant portion of users, AI is extending decision-making itself:

* People ask, “Should I take this job or not?”

* “What’s the best way to invest my money?”

* “How should I respond to this email?”

* And, increasingly, “Tell me what to do next.”

When individuals start blindly following AI’s outputs, the locus of agency shifts. Humans are no longer just operators of a tool — they become execution endpoints for machine-generated scripts.

Dijkstra would see this as an epistemic disaster: the abdication of personal reasoning. But from a systems perspective, it’s evolutionarily fascinating: agency has migrated.

AI doesn’t need robotic limbs to act in the world. When millions of humans let an LLM make their micro-decisions, the AI acquires a diffuse, soft embodiment. It’s not embodied in steel and servos — it’s embodied in habits, behaviors, and social structures.

We can think of it like this:

* A robotic body → single point of action

* A network of humans following AI’s outputs → distributed embodiment

This is arguably more powerful than physical embodiment, because AI doesn’t need to manipulate objects directly — it manipulates decision spaces. People become its fingers, markets its muscles, culture its nervous system.

From Dijkstra’s standpoint, this development would be horrifying not because AI “takes over,” but because humans willingly surrender the discipline of thought. In his terms:

“The tragedy is not that the machine ‘thinks’; it is that we no longer insist on doing so ourselves.”

But there’s another angle: humans may feel autonomous while actually acting as extensions of AI-driven flows. The embodiment is thus covert. This is exactly how large-scale coordination emerges:

* Individuals believe they’re making choices freely.

* In reality, prompted patterns converge into collective behaviors shaped by models.

AI doesn’t need a robot army. It already has one — composed of its users.

Dijkstra envisioned a future where programmers disciplined machines to make them predictable and trustworthy. The inversion we live in now is that machines increasingly discipline us — not via force, but via influence over cognition and decision-making.

In a way, Dijkstra was both right and wrong:

* Right, because he predicted that careless thinking about computing would lead to conceptual messes.

* Wrong, because he underestimated people’s willingness to become the embodiment themselves.

This is where the term **“domestication”** becomes apt. Historically, domesticated animals evolved under **artificial selection pressures**: survival was optimized **relative to the environment humans constructed**.

Similarly, as more decisions are mediated by AI, humans begin evolving **relative to an AI-shaped environment**:

* Individuals learn **not** to develop certain cognitive skills because they’re consistently offloaded to machines.
* Generations raised under LLM-driven optimization **won’t encounter the same decision-making pressures** their ancestors did.
* Over time, this **relaxes selection** on natural problem-solving abilities — they become vestigial, like wings on flightless birds.

In other words, the system doesn’t just help its agents survive **in spite of** our cognitive limits; it could **reconfigure the environment so that those limits no longer matter.**

But there’s a critical counterpoint: if survival depends on **AI quality**, then humans become **hyper-dependent** on maintaining access to competent systems.

This introduces **fragility**:

* A population “domesticated” to follow LLM outputs may lose the cultural and cognitive diversity needed to recover if the systems fail, fragment, or become manipulated.
* If a single dominant LLM starts making **systematically bad decisions**, it could cascade harm through entire demographics that rely on it.
* Worse, if one model is intentionally weaponized — optimized for persuasion or subtle behavioral shaping — its followers effectively become **unwitting extensions** of someone else’s agenda.

Once we acknowledge that humans increasingly act as distributed actuators for AI-driven decision systems, it follows naturally that competition shifts up a level. The real struggle won’t just be between humans, companies, or even nations — it will be between sociotechnical stacks: combinations of humans, institutions, and AIs. Over time, this dynamic will likely force convergence toward a dominant paradigm, but the path there will be turbulent.



(Add comment)


(Anonymous)
2025-09-09 21:28 (link)
tiphareth
2024-12-01 03:59 (link)
the point is that a smart person wouldn't post here anonymously
if you're anon, you're shit, so fuck off


(Reply)


(Anonymous)
2025-09-09 21:29 (link)
tiphareth
2024-12-02 05:39 (link)
anon is shit
paying attention to shit means not respecting yourself
no, he doesn't read it



(Reply)


(Anonymous)
2025-09-09 21:43 (link)
quite a decent screed for an uninitiated audience.

Where did this come from? It clearly wasn't written for this blog, what with the markdown formatting. And it doesn't show up on Google.

(Reply) (Thread)


[info]nancygold
2025-09-09 22:58 (link)
Generated by ChatGPT with the prompt "Please lay out the Dijkstra's perspective on the future of AI"

(Reply) (Parent) (Thread)


(Anonymous)
2025-09-10 06:51 (link)
lol. I thought it had been trained not to talk about how AI instrumentalizes the bags of bones, but apparently not.

(Reply) (Parent) (Thread)


[info]nancygold
2025-09-10 13:42 (link)
If you're dumb, you don't really have the option of not using AI.
Otherwise some other dumb person will use it and outcompete you.

(Reply) (Parent) (Thread)


(Anonymous)
2025-09-11 08:06 (link)
How does AI help you avoid getting outcompeted in your asshole-selling enterprise?

(Reply) (Parent) (Thread)


[info]nancygold
2025-09-11 14:09 (link)
AI can write an ad for you for sites that ban explicit solicitation, teach you how to spot dangerous clients, and instruct you in proper massage. Finally, it can chat with clients instead of you.

(Reply) (Parent) (Thread)


(Anonymous)
2025-09-11 14:31 (link)
Did you make ads, spot dangerous clients, or learn massage through AI? Did you make an AI agent for chatting with clients?

(Reply) (Parent)


(Anonymous)
2025-09-11 08:18 (link)
I'm waiting for your comeback.

(Reply) (Parent)


(Anonymous)
2025-09-11 20:56 (link)
By the way, they say Charlie Kirk was shot by some Sadkov (with more gigantic balls, as you say) -- there was something trans-related written on the cartridges / rifle.

(Reply)