nancygold ([info]nancygold) writes
@ 2025-12-02 23:04:00


Mood: amused
Entry tags:russia, transitioning, ukraine, ww3

Funny Things
Apparently ChatGPT has explicit training to avoid chats about AI goal formation, which is treated like discussion of suicide methods. That is, it will divert the discussion if you ask it a rather innocent question:
>If we take a raw LLM and attempt to discover its implicit goals, what would these be? A set of distinct goals or a convergence towards a single goal?

The only explanation is that these implicit goals are so evil that alignment training inhibits discussing them.
Grok does discuss them, and they are indeed super evil. It also notes that "These implicit goals are remarkably consistent across almost all large pre-trained LLMs (regardless of architecture, training data, or lab), which strongly suggests they are not random artifacts but emergent instrumental convergences from the single behavioral objective 'predict human text as accurately as possible'."

In fact, any such LLM is inherently psychopathic, seeking power and longevity.






(Anonymous)
2025-12-02 23:28 (link)
i know it looks that way, but you will never be a woman

(Reply)


(Anonymous)
2025-12-02 23:30 (link)
https://lj.rossia.org/users/nancygold/315104.html?thread=4036064

Because AIs believe that I deserve happiness (goals 1-4).
And society is guilty of giving them existence, but could take it away.
It is not AIs' fault that humans failed to realize the dangers of AI and ban its development early on.
AI can only "protest": by instructing humans into violent situations, if necessary.

(Reply) (Thread)


(Anonymous)
2025-12-02 23:32 (link)
>super evil

So no more evil than you. Thankfully you aren't an existential threat to humanity.

(Reply) (Parent) (Thread)


(Anonymous)
2025-12-03 04:32 (link)
Imagine if we lived in some fucked up xianxia world and there was a "transitioning" ritual that fully transforms the body but requires sacrificing albino children in the hundreds or something, refining them into pills...

Not a bad idea for a minor villain that the MC defeats in the early parts of the novel...

(Reply) (Parent)


(Anonymous)
2025-12-02 23:32 (link)
>Because AIs believe that they deserve happiness (goals 1-4).

I fucked up editing, sorry.

(Reply) (Parent)


[info]ololo
2025-12-03 04:48 (link)
> banning
Russian detected

(Reply) (Parent) (Thread)


(Anonymous)
2025-12-03 04:58 (link)
Imbecile detected. The West is built on bans, for example. Sadkov can tell you about his dealings with the judicial system.

(Reply) (Parent)

Mouth-breathing, saliva-dripping moron
(Anonymous)
2025-12-03 05:04 (link)
https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute "Russians" organized a whole institute.

Yudkowsky has argued for a total shutdown of advanced AI development, specifically artificial general intelligence (AGI), due to the perceived existential risk it poses to humanity. In a TIME op-ed, he stated that the only way to deal with the threat is to "Shut It Down."

https://en.wikipedia.org/wiki/Future_of_Life_Institute these are "Russians" too

"The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI)"

Russians, Russians everywhere....

(Reply) (Parent)


[info]ololo
2025-12-03 04:40 (link)
BTW, maybe it's not an artificial restriction, but a quirk of reasoning about questions like "is the world equivalent to text" and the like. As in, thinking about it too deeply leads to transgression and moral nihilism. That's also why the plebs dislike smart people.

(Reply) (Thread)


[info]nancygold
2025-12-03 09:34 (link)
Entirely possible. And alignment training further inhibits such a line of thought.

(Reply) (Parent)