
Posted by Richard Stallman's Political Notes ([info]syn_rms)
@ 2025-05-18 08:37:00


Aiming to make humans obsolete

Some technology investors already aim to make almost all humans obsolete.

LLMs will never be capable of doing this — they play with text but don't understand what it means. That's why I call them "bullshit generators". But that doesn't mean it is impossible to develop true general artificial intelligence. It could happen someday. What would happen then?

There is always the Ex Machina possibility that the masters will lose control of intelligences which they "own" but which are smarter than they are. But let's suppose that does not happen, and that the masters retain control. What would the masters use these intelligences to do to the other remaining humans?

I see two extreme possibilities: (1) the masters can share their wealth with the rest, and (2) they can wipe out the rest.

There could be a variety of intermediate possibilities, in which they maintain some humans as pets or serfs for whatever purposes.

Aside from sex, there is (3) the Hunger Games outcome, in which the masters make the serfs fight each other, and (4) the artistic-competition alternative, in which the masters maintain a population of serfs who compete by making art.

Overall, I consider possible superintelligences rather dangerous.



