nancygold's Journal
 

Friday, May 17th, 2024

    2:40a
    ECS Table Joins
    Classic SQL tables are stored as B+ trees, so the data stays sorted by primary key.
    That is good for SQL, since the majority of the time is spent loading the data into memory.
    But using ECS as a replacement for OOP requires our tables to be fast.
    So here is an article about efficiently implementing ECS tables and performing inner joins:
    https://www.dataorienteddesign.com/dodbook/node9.html#SECTION00940000000000000000

    An inner join is required whenever a system operates on several components.

    Apparently it is possible to cache such a join in an internal "super component" table, which gets invalidated on change.
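    The join over several component tables can be sketched as follows. This is a minimal illustration (all names are hypothetical, not from the linked article), assuming each component table maps entity id to component data; a hash-join style probe from the smallest table gives the entities that have every component:

    ```python
    # Minimal ECS inner-join sketch: a component table is a dict
    # mapping entity id -> component data (hypothetical layout).

    def inner_join(*tables):
        """Yield (entity_id, component, ...) for entities present in every table."""
        # Iterate the smallest table and probe the rest, like a hash join.
        smallest = min(tables, key=len)
        for eid in smallest:
            if all(eid in t for t in tables):
                yield (eid, *(t[eid] for t in tables))

    position = {1: (0.0, 0.0), 2: (1.0, 2.0), 3: (5.0, 5.0)}
    velocity = {2: (0.1, 0.0), 3: (0.0, -0.1)}

    # Only entities 2 and 3 have both components; entity 1 is filtered out.
    moving = {eid: (p, v) for eid, p, v in inner_join(position, velocity)}
    ```

    Caching the result of `inner_join` in its own table, and clearing that cache whenever either source table changes, is exactly the "super component" idea above.
    
    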

    Current Mood: amused
    1:44p
    Cheney GC and ECS
    I'm using Cheney's semispace algorithm, where the spaces are dynamic in size and organized as a stack of generations over the heap. So it is not immediately obvious how to efficiently collect an entity, since it is split among component tables, which will all end up in the oldest generation.

    As of now my idea is to build a list of reachable ids in the collected generation, then use it as a filter against the list of all ids in that generation. The resulting unreachable ids are queued to be freed from their tables. That is basically how GC finalizers are implemented.

    In case of a large number of fine-grained entities, like particles, keeping an explicit list of ids is very inefficient, but we can exploit the fact that newly allocated ids increase sequentially, and in general keep the generation's id list sorted, with compressed ranges.
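    The two ideas above can be sketched together: the generation's id list lives as sorted inclusive ranges (cheap when ids are allocated sequentially), and collection filters it against the reachable set. This is only an illustration of the scheme, with made-up helper names:

    ```python
    # Sketch: a generation's entity ids as sorted, compressed (start, end) ranges.

    def compress(ids):
        """Turn sorted ids into inclusive (start, end) ranges."""
        ranges = []
        for i in sorted(ids):
            if ranges and i == ranges[-1][1] + 1:
                ranges[-1] = (ranges[-1][0], i)   # extend the last run
            else:
                ranges.append((i, i))             # start a new run
        return ranges

    def expand(ranges):
        for start, end in ranges:
            yield from range(start, end + 1)

    def unreachable(generation_ranges, reachable):
        """Ids in the generation that the collector did not reach;
        these get queued to be freed from every component table."""
        reachable = set(reachable)
        return [i for i in expand(generation_ranges) if i not in reachable]

    gen = compress([100, 101, 102, 103, 200])  # sequential ids compress well
    dead = unreachable(gen, [101, 103])
    ```

    Sequentially allocated particles collapse into a single range, so the per-generation id list stays tiny regardless of entity count.
    
    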

    Such a technique indeed allows us to completely supplant classic objects with ECS ones. It should be especially useful in places like a compiler, where, say, a parsed SEXP carries different kinds of metadata (its origin file, macro expansion, etc.).

    I remember being fascinated by the database Lisp idea ever since I began designing Lisp interpreters. In fact, one of my very first Lisp implementations ran on top of TSQL with stored procedures ( https://pastecode.io/s/0xcvazqq ). Yet only now have I realized how it can be done efficiently enough, and what possibilities it truly opens if done properly: it makes difficult processing a cakewalk, in addition to seamlessly integrating PLANNER/Prolog-like capabilities, where you can simply iterate over every single instance of some predicate.

    TLDR: "Any sufficiently complicated Lisp program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of Prolog."



    Current Mood: amused
    6:14p
    ECS and Perfect Hashing
    Since with ECS the number of entities holding a component can stay the same across one or more heap generations, we can try computing a perfect hash function for each table. That requires keeping a machine-code block per component table and patching the xor constants into it.
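    The search for such a constant can be sketched like this. It is a toy version (the journal's scheme patches the found constant into generated machine code; here we just find it), using a multiply-shift hash so that ids sharing low bits can still land in distinct slots:

    ```python
    # Brute-force search for an xor constant giving a collision-free
    # multiply-shift hash over a fixed set of entity ids (illustrative only).

    K = 2654435761  # Knuth's multiplicative hashing constant

    def find_xor_constant(ids, bits, tries=1 << 16):
        """Return c such that the top `bits` bits of ((i ^ c) * K) mod 2^32
        are unique for every id, or None if no c is found in `tries` attempts."""
        for c in range(tries):
            slots = {(((i ^ c) * K) & 0xFFFFFFFF) >> (32 - bits) for i in ids}
            if len(slots) == len(ids):
                return c  # perfect: one slot per id, no probing needed
        return None

    ids = [3, 17, 42, 99]
    c = find_xor_constant(ids, bits=3)  # 8-slot table for 4 ids
    slots = [(((i ^ c) * K) & 0xFFFFFFFF) >> 29 for i in ids]
    ```

    Once `c` is found, a lookup compiles down to one xor, one multiply, and one shift, which is why it pays to re-run the search only when the generation's id set actually changes.
    
    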

    Current Mood: contemplative

