Slashdot writes ([info]syn_slashdot)
@ 2024-01-26 04:02:00


OpenAI Drops Prices and Fixes 'Lazy' GPT-4 That Refused To Work
OpenAI is always making slight adjustments to its models and pricing, and today brings just such an occasion. From a report: The company has released a handful of new models and dropped the price of API access -- this is primarily of interest to developers, but also serves as a bellwether for future consumer options. GPT-3.5 Turbo is the model most people interact with, usually through ChatGPT, and it serves as a kind of industry standard now -- if your answers aren't as good as ChatGPT's, why bother? It's also a popular API, being lower cost and faster than GPT-4 on a lot of tasks.

So paying users will be pleased to hear that input prices are dropping by 50% and output by 25%, to $0.0005 per thousand tokens in, and $0.0015 per thousand tokens out. As people play with using these APIs for text-intensive applications, like analyzing entire papers or books, those tokens really start to add up. And as open source or self-managed models catch up to OpenAI's performance, the company needs to make sure its customers don't just leave. Hence the steady ratcheting down of prices -- though it's also a natural result of streamlining the models and improving their infrastructure.
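To see how those per-thousand-token rates add up on a text-intensive job, here is a minimal cost sketch using the new GPT-3.5 Turbo prices quoted above. The rates come from the article; the function and table names are illustrative, not part of any OpenAI library.

```python
# Per-1K-token rates from the article ($0.0005 in, $0.0015 out).
# This is an illustrative calculator, not an OpenAI API.
PRICES_PER_1K = {
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
}

def api_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-1K-token rates."""
    rates = PRICES_PER_1K[model]
    return (input_tokens / 1000) * rates["input"] \
         + (output_tokens / 1000) * rates["output"]

# A book-length analysis: ~200K tokens of input, 2K tokens of output.
cost = api_cost("gpt-3.5-turbo", 200_000, 2_000)
print(f"${cost:.4f}")  # $0.1030
```

At these rates a 200K-token input still costs only about ten cents, which is the point of the cuts: keeping heavy API users from migrating to self-hosted models.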

Read more of this story at Slashdot.


