
Slashdot writes ([info]syn_slashdot)
@ 2025-11-12 14:02:00


Researchers Surprised That With AI, Toxicity is Harder To Fake Than Intelligence
Researchers from four universities have released a study showing that AI models remain easily detectable in social media conversations despite attempts to optimize them to blend in. The team tested nine language models across Twitter/X, Bluesky, and Reddit, developing classifiers that identified AI-generated replies with 70 to 80% accuracy. An overly polite emotional tone was the most persistent indicator: the models consistently produced lower toxicity scores than authentic human posts across all three platforms. Instruction-tuned models were worse than their base counterparts at mimicking humans, and the 70-billion-parameter Llama 3.1 showed no advantage over smaller 8-billion-parameter versions. The researchers also found a fundamental tension: models optimized to evade detection drifted further from actual human responses semantically.

Read more of this story at Slashdot.
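The study's central finding, that low toxicity is itself a tell, can be illustrated with a toy detector. The word lists, scoring functions, and threshold below are invented for illustration; the actual study trained statistical classifiers on real platform data, not a hand-written rule like this.

```python
# Toy sketch of the idea that AI replies are "too polite to be human":
# score a reply on tiny toxic/polite word lists and flag polite,
# zero-toxicity text as likely AI. All names and lists are hypothetical.

POLITE = {"please", "thank", "appreciate", "happy", "glad", "certainly"}
TOXIC = {"idiot", "stupid", "trash", "dumb", "garbage"}


def _tokens(text: str) -> list[str]:
    """Lowercase tokens with surrounding punctuation stripped."""
    return [t.strip(".,!?").lower() for t in text.split()]


def toxicity_score(text: str) -> float:
    """Fraction of tokens drawn from the toy 'toxic' list."""
    toks = _tokens(text)
    return sum(t in TOXIC for t in toks) / len(toks) if toks else 0.0


def politeness_score(text: str) -> float:
    """Fraction of tokens drawn from the toy 'polite' list."""
    toks = _tokens(text)
    return sum(t in POLITE for t in toks) / len(toks) if toks else 0.0


def guess_author(text: str) -> str:
    """Label a reply 'ai' when it is polite and non-toxic, else 'human'."""
    if toxicity_score(text) == 0.0 and politeness_score(text) > 0.0:
        return "ai"
    return "human"


print(guess_author("Thank you, I appreciate the detailed reply!"))  # ai
print(guess_author("That take is just dumb, total garbage."))       # human
```

A single surface feature like this is, of course, trivial to game; the study's point is that even trained classifiers kept converging on the same politeness signal, and that pushing models toward human-level toxicity pulled them away from human-like content in other ways.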


