nancygold ([info]nancygold) writes
@ 2024-07-24 16:34:00


Mood: amused
Entry tags: computing

Large Bitmaps Implementation
Ok. Since my bitmap code isn't fast enough (it takes 5 seconds to set an entire 32-bit address space to 1s, bit by bit), I was looking at what other people use. The most popular package appears to be https://github.com/RoaringBitmap/CRoaring; it is especially recommended for columnar databases and ECS games.
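For reference, the naive version is just a flat array of bits, roughly like this (a sketch with my own names, not the exact code):

#include <stdint.h>
#include <stdlib.h>

/* One bit per value: 2^32 bits = 2^26 words = 512 MiB of storage. */
static inline void bitmap_set(uint64_t *bits, uint64_t i) {
    bits[i >> 6] |= (uint64_t)1 << (i & 63);   /* word index, bit index */
}

int main(void) {
    uint64_t *bits = calloc((uint64_t)1 << 26, sizeof(uint64_t));
    if (!bits) return 1;
    for (uint64_t i = 0; i < (uint64_t)1 << 32; i++)
        bitmap_set(bits, i);                   /* set every bit, one at a time */
    free(bits);
    return 0;
}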

To test it, I ran the following snippet:

for (uint64_t i = 0; i < (uint64_t)1<<32; i++) roaring64_bitmap_add(r1, i);
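
Fleshed out, the full program is essentially just that loop plus boilerplate. A sketch (assuming the amalgamated build, where roaring.h declares the 64-bit API):

#include <stdint.h>
#include "roaring.h"   /* CRoaring amalgamation */

int main(void) {
    roaring64_bitmap_t *r1 = roaring64_bitmap_create();
    for (uint64_t i = 0; i < (uint64_t)1 << 32; i++)
        roaring64_bitmap_add(r1, i);   /* one library call per bit */
    roaring64_bitmap_free(r1);
    return 0;
}

Note that CRoaring also exposes roaring64_bitmap_add_range for bulk inserts, which would sidestep the per-call overhead; but the point here is to compare bit-by-bit adds.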


Yet it is almost 10 times slower than my naive implementation!
$ gcc -O2 test.c -o test && time ./test.exe
real    0m 45.53s
user    0m 1.71s
sys     0m 0.01s


WTF? That roaring.c is a whopping megabyte of terse C code and compiles to a 350 KB executable!!! Where are the optimizations?

Anyway, I asked ChatGPT, and it recommended using a simple hashtable, where the key is the start of a bit range and the value is the end of that range. As long as I use an open-addressing hash table, it should be blazing fast, while at the same time guaranteeing that sparse bits won't kill performance significantly. A sketch of the idea follows.
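
A minimal sketch of that idea (my own names; merging overlapping or adjacent ranges, which is the tricky part, is left out):

#include <stdint.h>
#include <stdlib.h>

#define TABLE_SIZE ((size_t)1 << 20)   /* power of two, so we can mask */
#define EMPTY UINT64_MAX               /* sentinel for unused slots */

typedef struct {
    uint64_t *start;   /* key: first bit of a range */
    uint64_t *end;     /* value: last bit of that range */
} range_table;

static range_table rt_create(void) {
    range_table t;
    t.start = malloc(TABLE_SIZE * sizeof(uint64_t));
    t.end   = malloc(TABLE_SIZE * sizeof(uint64_t));
    for (size_t i = 0; i < TABLE_SIZE; i++) t.start[i] = EMPTY;
    return t;
}

static size_t rt_hash(uint64_t key) {   /* multiplicative hash */
    return (size_t)(key * 0x9E3779B97F4A7C15ULL) & (TABLE_SIZE - 1);
}

/* Open addressing with linear probing; no resizing, so the
 * sketch assumes the table never fills up. */
static void rt_add_range(range_table *t, uint64_t first, uint64_t last) {
    size_t i = rt_hash(first);
    while (t->start[i] != EMPTY && t->start[i] != first)
        i = (i + 1) & (TABLE_SIZE - 1);
    t->start[i] = first;
    t->end[i]   = last;
}

int main(void) {
    range_table t = rt_create();
    /* The whole 32-bit space collapses into a single entry. */
    rt_add_range(&t, 0, ((uint64_t)1 << 32) - 1);
    free(t.start); free(t.end);
    return 0;
}

With ranges, setting all 2^32 bits is one table entry instead of four billion updates, which is exactly the case the benchmark above exercises.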