Slashdot ([info]syn_slashdot) writes
@ 2025-01-29 19:30:00


After DeepSeek Shock, Alibaba Unveils Rival AI Model That Uses Less Computing Power
Alibaba has unveiled a new version of its AI model, called Qwen2.5-Max, claiming benchmark scores that surpass both DeepSeek's recently released R1 model and industry-leading models such as GPT-4o and Claude-3.5-Sonnet. The model achieves these results using a mixture-of-experts architecture that requires significantly less computational power than traditional dense approaches. The release comes amid growing concerns about China's AI capabilities, following DeepSeek's R1 model launch last week that sent Nvidia's stock tumbling 17%. Qwen2.5-Max scored 89.4% on the Arena-Hard benchmark and demonstrated strong performance in code generation and mathematical reasoning tasks. Unlike U.S. companies that rely heavily on massive GPU clusters -- OpenAI reportedly uses over 32,000 high-end GPUs for its latest models -- Alibaba's approach focuses on architectural efficiency. The company claims this allows comparable AI performance while reducing infrastructure costs by 40-60% compared to traditional deployments.

Read more of this story at Slashdot.


