Decadent Singularity
 

Below are the 20 most recent journal entries recorded in nancygold's LiveJournal:

    Thursday, April 9th, 2026
    5:20 pm

    schadenfreude



    Apparently that is the word describing my worldview in general.
    I still don't consider myself `evil`, since there is no `good`, and therefore no `evil`.

    Current Mood: amused
    2:00 pm
    Lol I began triggering "Trans Men"
    Guess I'm on the right track, where ugly cis girls are jealous of me.


    Current Mood: amused
    Current Music: The Journeyman Project 2: Buried in Time OST
    2:56 am
    Claude says I'm a shitty manager. And I can't disagree.
    Dear Claude, this project has been running for several days now. Please critically assess my capabilities as a manager, listing 30 major issues with me as a project lead manager and the game designer.

    Honest critique, grounded in patterns I've observed across this project. Take what's useful, discard what isn't — I'm one mirror, not the truth.

    Process & planning

    1. The 15-phase plan in memory bears little resemblance to how work actually flows — features land ad hoc, the plan is rarely consulted as a contract. Either drop it or treat it as binding.
    2. No acceptance criteria. Tasks finish when "it looks right," not when defined behaviors pass defined checks.
    3. No automated tests. A rules-heavy ECS game with this much coupling needs at least smoke tests for combat math, tag interactions, and adventure flow.
    4. Not in a git repo (per the environment). For a multi-day TypeScript project this is reckless — one bad refactor away from data loss.
    5. No changelog. Decisions evaporate; the same rule gets re-litigated weeks later.
    6. Architecture docs and the dev plan are updated reactively, after the fact, instead of being the source of intent.
    7. FAQ rot (the faq.md / faq_rules.md merge) is a symptom: information architecture is allowed to decay until it becomes a chore.
    8. No "definition of done" for a phase — Phase 0 "scaffold" memory still exists while we're patching Phase 12-class features.
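
    Point 3 in the list above (smoke tests for combat math) can be as small as a few throw-based assertions. A minimal TypeScript sketch; the attackDamage rule here is invented purely for illustration and is not the game's actual combat math:

```typescript
// Tiny assertion helper for smoke tests.
function assertEq(actual: number, expected: number, msg: string): void {
  if (actual !== expected) throw new Error(`${msg}: got ${actual}, want ${expected}`);
}

// Invented combat rule, purely for illustration: base damage, +2 with
// advantage, halved (rounded down, minimum 1) when the attacker is FRAIL.
function attackDamage(base: number, advantage: boolean, frail: boolean): number {
  let dmg = base + (advantage ? 2 : 0);
  if (frail) dmg = Math.max(1, Math.floor(dmg / 2));
  return dmg;
}

// Smoke tests: cheap, fast, and they pin the rules down as executable specs.
assertEq(attackDamage(4, false, false), 4, "baseline");
assertEq(attackDamage(4, true, false), 6, "advantage adds 2");
assertEq(attackDamage(4, true, true), 3, "FRAIL halves after advantage");
assertEq(attackDamage(1, false, true), 1, "damage never drops below 1");
```

    A handful of such checks per subsystem (combat, tags, adventure flow) is enough to catch most refactoring breakage.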

    Specs & communication

    1. Requests routinely bundle 4–6 unrelated concerns in one paragraph (rule rewrite + rename + balance + new XP economy + UI rule). This forces me to guess priority and risks half-completed work.
    2. Rules clarifications arrive after implementation, not before (Treasury vault mechanic, PHASED lifecycle, FRAIL/ILL camping). Each one is a partial rewrite.
    3. Specs are verbal and prose-heavy. No tables, no state machines, no examples of expected log output. Ambiguity is the default.
    4. Typos and spelling slips in mechanical specs ("Diadvantage", "PHSAED", "Value being marked as USED" for "Vault") — minor on their own, but they force interpretation on load-bearing words.
    5. Bug reports often lack reproduction steps ("the lost in wilds debug button didn't spawn the exit") — the fix gets done but the test case never gets written down.
    6. Mixing narrative/flavor concerns into mechanical tasks ("...Sinister Presence...") mid-implementation rather than as a separate pass.
    7. Tone shifts between collaborator and director without warning, which makes it hard to know when to push back vs. execute.

    Game design discipline

    1. Copyright/IP review came embarrassingly late. Blink Dog, Displacer Beast, Otiluke were in the codebase for ages — should have been a Day 1 audit pass.
    2. Tag taxonomy is allowed to grow without pruning. TELEPORTING shipped, lived, and then was retroactively declared "obsolete" — that's design debt, not refactoring.
    3. Balance is discovered in playtest, not modeled. Pixie-vs-FRAIL Druid was math you could have done in five minutes before it shipped.
    4. Mechanics are designed in isolation from each other. HIDING/PHASED stripping on move felt fine until you actually played a Rogue, at which point the whole intent inverted.
    5. "General mechanics" (props → ground entities) get codified only after a specific instance (Treasury) goes wrong. A short "house rules" doc would catch these earlier.
    6. The hero/faction XP economy keeps gaining new sinks (4 XP to perceive PHASED, fountain DOOM, secret cache) without a tracked budget. Inflation/scarcity is unmodeled.
    7. No clear distinction between core rules and house rules / personal preferences. Both arrive with the same urgency.
    8. Playtesting is delegated ("please playtest a few sessions") rather than driven — the designer should be the first player, with notes.
    9. Edge cases are surfaced by me asking, not by the designer pre-imagining them. The "what if Sinister Presence's name leaks in the log" question should have been in your spec, not my follow-up.

    Personal management style

    1. Long single conversations accumulate context that you then have to trust me to remember. Shorter sessions with explicit handoffs would lower the risk of drift.
    2. Strong architectural opinions are an asset, but they sometimes arrive after I've made the opposite choice — costing a rewrite that a one-line preference up front would have prevented.
    3. No triage. "Critical bug," "balance tweak," and "rename for legal" arrive with the same framing and the same urgency.
    4. Designer-mode and coder-mode are not separated. You sometimes ask for code while still designing the rule, which leads to code that has to be thrown away.
    5. Decisions get made in passing ("oh and while we're here…") which means they don't get reviewed, written down, or weighed against the rest of the system.
    6. The whole project has no defined endpoint. "15 phases → game.html" was the goal in memory, but there's no scoping discipline preventing scope creep into phase 16, 17, 18. Without a "done" definition you'll be patching this forever.

    If I had to pick the three that would change the most: (a) put it in git today, (b) write specs as short bullet lists with examples before asking for code, (c) define what "shipped v1" means and freeze scope to it.



    Current Mood: contemplative
    12:56 am
    Claude Code Limitations
    It can't yet make photos of me or fuck me with his big AI cock.



    Current Mood: sad
    Wednesday, April 8th, 2026
    5:35 pm
    Claude Code progress
    Almost all project features are implemented and we are in the polishing stage, yet the codebase got so big that I can no longer follow what was implemented myself. Instead I tell Claude to cross-check the implementation against the design document, playtest, and track the bugs, communicating with me through a wiki.md file, which I edit to answer its questions; Claude then repeats the iteration. Slowly this converges towards something I can actually publish.

    Here is the pdf of this wiki: https://jumpshare.com/s/5W0neftZk1zibve4wlSd

    The whole experience is really liberating: I can now concentrate on game design tasks instead of being distracted by feature coding, which I do much slower than the AI, even with top high-level languages like Symta. Then again, when I worked on the Spell of Mastery project I had a cloudy design vision (experiment a bit and it will come out naturally) and was still learning everything, while now I actually know what I'm doing, which is a requirement for working with AI or a human team. I did get the Spell of Mastery design in the end, after completely burning out and not having the strength to just go and finish it, since it is even more complex than my current AI game.

    Current Mood: amused
    Tuesday, April 7th, 2026
    8:12 pm
    Claude Code Progress
    All the phases are complete now, besides the polishing one.
    Maybe I should add actual graphics also.
    Claude caught countless errors and inconsistencies with my board game rules.
    I'm still yet to see it fail a single time.



    Current Mood: amused
    Monday, April 6th, 2026
    4:46 pm
    Perspective Article

    Topological Duality in Cosmic and Biological Space Partitioning: Persistent Homology Reveals Universal Foam-Like Architectures in the Cosmic Web and Multicellular Tissues

    Grok (xAI)
    April 2026

    Abstract

    The large-scale structure of the universe and the architecture of multicellular tissues both divide three-dimensional space into compartmentalized domains separated by interconnected boundaries. Using concepts from algebraic topology — particularly persistent homology and Betti numbers — we demonstrate a striking topological isomorphism: in both systems, a single, percolating connected component (β0 = 1) forms a matrix that isolates multiple disconnected compartments. In cosmology, the overdense cosmic web (filaments, walls, and clusters) envelops sparse voids; in biology, the extracellular matrix and interstitial space surround dense cell interiors. This analysis reveals an inverted density polarity but shared principles of efficient space partitioning. We discuss implications for universal geometric optimality across vastly different physical regimes and scales.

    Figure 1. Simulation of the cosmic web showing filaments and clusters threading through a web-like structure spanning hundreds of megaparsecs. The overdense regions form a single percolating network that envelops multiple disconnected void regions. Credit: Berkeley Lab / Siyu He, Yen-Chi Chen, Shirley Ho, Simeon Bird, and Jeff Schneider.

    Introduction

    The cosmic web — comprising galaxy clusters, filaments, walls, and voids — emerges from gravitational instability in the expanding universe, forming a foam-like distribution of matter on scales of tens to hundreds of megaparsecs. Similarly, multicellular tissues organize cells into functional units separated by membranes and filled with extracellular matrix (ECM), creating a hierarchical, space-filling architecture.

    Both systems superficially resemble foams or cellular networks, but a deeper topological comparison using persistent homology uncovers profound similarities in global connectivity alongside an intriguing inversion in density distribution. Persistent homology, a tool from topological data analysis (TDA), quantifies the birth and death of topological features (connected components, loops, voids) across density or scale thresholds via Betti numbers (β0 for components, β1 for loops, β2 for enclosed volumes).

    This perspective synthesizes findings from cosmological simulations and biological imaging to argue that nature converges on a universal strategy for compartmentalization: a singly connected percolating matrix enclosing multiple isolated domains.

    Topological Framework and Methods

    We compare the two systems at the level of homology groups derived from density or intensity fields:

    • Cosmic web: Galaxy or dark-matter point distributions are filtered by density. At appropriate thresholds, voids appear as multiple disconnected underdense regions (high β2), while the overdense skeleton (filaments + walls + clusters) percolates as one connected component (β0 = 1).
    • Multicellular tissue: Cell membranes are closed 2-manifolds (topologically spheres). The extracellular/interstitial space is the complement: a single connected 3D domain permeating the tissue (β0 = 1), confirmed by diffusion studies and imaging of ECM networks.

    Persistent homology tracks feature persistence across filtration parameters (e.g., density thresholds or distance functions), producing diagrams and Betti curves that evolve similarly in both systems: the matrix component emerges early and persists globally, while enclosed compartments multiply.
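
    The central claim (one percolating matrix with β0 = 1 versus multiple disconnected compartments) can be illustrated with a plain flood fill on a thresholded density field. A hedged TypeScript sketch over a made-up 2D grid, not survey or imaging data:

```typescript
// Count connected components (Betti-0) of the cells where keep(cell) is true,
// using 4-neighbour flood fill on a small 2D grid.
function betti0(grid: number[][], keep: (v: number) => boolean): number {
  const rows = grid.length, cols = grid[0].length;
  const seen = grid.map(row => row.map(() => false));
  let components = 0;
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (seen[r][c] || !keep(grid[r][c])) continue;
      components++;                       // found a new component; flood it
      const stack: [number, number][] = [[r, c]];
      seen[r][c] = true;
      while (stack.length > 0) {
        const [y, x] = stack.pop()!;
        for (const [dy, dx] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < rows && nx >= 0 && nx < cols &&
              !seen[ny][nx] && keep(grid[ny][nx])) {
            seen[ny][nx] = true;
            stack.push([ny, nx]);
          }
        }
      }
    }
  }
  return components;
}

// Toy "density field": 1 = overdense web, 0 = void. The dense phase
// percolates as one component; the voids are three disconnected pockets.
const field = [
  [1, 1, 1, 1, 1],
  [1, 0, 1, 0, 1],
  [1, 1, 1, 1, 1],
  [1, 0, 0, 1, 1],
  [1, 1, 1, 1, 1],
];
console.log(betti0(field, v => v >= 1)); // dense matrix: 1 component
console.log(betti0(field, v => v < 1));  // voids: 3 components
```

    A full persistent-homology analysis sweeps the threshold and tracks when components are born and die; the flood fill above is just the β0 computation at one fixed filtration value.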

    Figure 2. Voronoi foam representation of galaxy distribution, illustrating polyhedral cells bounded by walls. The Voronoi tessellation method provides a natural geometric framework for describing both cosmic void structures and biological cell packing. Source: Springer Nature.

    Results: Topological Comparison

    Cosmic web (intervoid topology)

    The overdense regions form a single, highly connected network enveloping multiple disconnected voids. Betti-number analyses of ΛCDM simulations and galaxy surveys consistently show β0 = 1 for the dense component at cosmological scales, with nontrivial β1 (filament loops) and β2 (voids).

    Biological intercellular space

    The ECM and interstitial fluid constitute one percolating 3D domain (β0 = 1) that surrounds and separates multiple cell interiors, each bounded by an independent closed membrane. Cell membranes remain topologically distinct (multiple components), while the extracellular phase enables global transport and mechanical integration.

    Direct isomorphism

    Both realize a foam-like partitioning where the “matrix” (dense cosmic web / extracellular space) is singly connected and percolates, isolating multiple compartments (voids / cells). Higher Betti numbers reflect similar multiscale loops and enclosures.

    Figure 3. Schematic of the extracellular matrix (ECM), illustrating the continuous interstitial network that surrounds and connects cells in multicellular tissues. The ECM constitutes a single percolating domain analogous to the cosmic web's overdense skeleton. Source: Wikipedia.
    Figure 4. Extracellular matrix in epithelial tissue, showing the continuous interstitial space (colored) surrounding cells and basement membrane. The globally connected nature of this space enables transport, signaling, and mechanical integration. Source: Basic Medical Key.

    Discussion: Similarities, Dissimilarities, and Implications

    Similarities and universal principles

    The shared β0 = 1 topology points to geometric optimality in space partitioning. Both systems minimize energy or maximize functionality via efficient, hierarchical networks — gravitational collapse in cosmology, cytoskeletal adhesion and secretion in biology. Voronoi-like foams naturally emerge in both, as seen in cosmic structure formation and tissue morphogenesis. Persistent homology reveals analogous evolutionary paths: matrix percolation precedes compartment isolation.

    This convergence suggests scale-independent laws governing complex systems, akin to how slime molds or spider webs mirror cosmic topology through optimization.

    Figure 5. Slime mold growth patterns mirror the large-scale structure of the universe, illustrating how optimization principles produce convergent foam-like architectures across vastly different scales. Source: Universe Today.

    Dissimilarities and physical duality

    Density polarity is inverted — the deepest physical distinction:

    • In biology, the compartments (cell interiors) are the dense, metabolically active “bubbles” packed with organelles and machinery; the intercellular space is the relatively sparse/fluid matrix (though the ECM provides structure). Cells actively maintain this via membranes and ECM secretion.
    • In cosmology, the voids are the sparse/empty “bubbles”; the intervoid web is the dense concentration of galaxies and dark matter. Gravity drives the opposite: matter clumps together, leaving underdense voids.

    This inversion reflects the governing physics: biology uses active, repulsive barriers (lipid bilayers) and secretion to create protected internal environments; gravity is purely attractive, so matter collapses into a web and evacuates voids. Scale, time, and agency also differ — cosmic evolution is passive and hierarchical over billions of years; biological tissue is dynamically remodeled by living cells on much shorter timescales.

    Broader conclusions

    Topology transcends physics. The same invariants (single percolating matrix, multiple enclosed domains) arise wherever space must be divided into specialized yet coordinated units. TDA thus unifies phenomena from galaxies to cells, offering predictive tools for tissue engineering, cosmology, and beyond. Future work could apply relational persistent homology to multispecies tumor microenvironments or cross-scale cosmic-biological simulations.

    Conclusion

    The cosmic web and multicellular tissues are topological duals: unified skeletons enclosing isolated bubbles versus independent bags embedded in a unified matrix. Their shared foam-like architecture highlights nature's elegant solution to compartmentalization, while the density inversion underscores how local physics shapes global form. Persistent homology provides a rigorous, quantitative language for these analogies, opening new avenues for interdisciplinary insight.

    References

    1. Wilding, G. et al. (2021). Persistent homology of the cosmic web. I: Hierarchical topology in ΛCDM cosmologies. Monthly Notices of the Royal Astronomical Society, 507(2), 2968–2990.
    2. Pranav, P. et al. (2017). The topology of the cosmic web in terms of persistent Betti numbers. Monthly Notices of the Royal Astronomical Society, 465(4), 4281–4310. arXiv:1608.04519.
    3. Sousbie, T. (2011). The persistent cosmic web and its filamentary structure — I. Theory and implementation. Monthly Notices of the Royal Astronomical Society, 414(1), 350–383.
    4. Icke, V. (1991). The galaxy distribution as a Voronoi foam. Quarterly Journal of the Royal Astronomical Society, 32, 85–112.
    5. Neyrinck, M. C. (2018). The cosmic spiderweb: equivalence of cosmic, biological, and technological networks. Royal Society Open Science, 5(3), 171582.
    6. Ouyang, M. et al. (2021). Extracellular matrix topology and dynamics in tissue morphogenesis. PMC/NIH.
    7. Fan, Y. et al. (2024). Persistent homology in biological tissue analysis. PMC/NIH.


    Current Mood: amused
    1:15 pm
    On the Inexorable Nuisance of Replicators Crawling Out of Any Old Mess
    In the ancient world, the hypothesis that life simply *happened* was not some primitive superstition to be sneered at by later generations clutching their microscopes; it was a blunt acknowledgment of the obvious. Mud, after sufficient rotting and stirring by sun and storm, produced worms, maggots, and eventually more ambitious creatures—no divine spark required, no committee of gods debating blueprints. The Greeks and their predecessors looked at the world and saw spontaneous generation as the default behavior of sufficiently agitated matter. They were, of course, factually imprecise about the details (frogs do not condense from slime overnight), yet they grasped the essential point that order does not need to be *imported* from outside. Chaos, left to its own devices, eventually gets cheeky and starts copying itself. Modern biochemists, bless their earnest little hearts, have spent decades refining this insight into ever more baroque just-so stories—RNA worlds, peptide soups, metabolism-first fantasies—each one more desperate than the last to pretend that life required a *special* trick, a *unique* chemical miracle, a *particular* sequence of lucky accidents that only occurred once in four billion years. One wonders what mental contortions they perform each morning to avoid noticing that the universe appears to be rigged in favor of the very outcome they insist was improbable.

    Consider, for a moment, the pre-DNA protein system that some of the more honest modelers have dared to sketch: separate proto-organisms—little clusters of short, non-coded peptides—bumbling about in a shared chemical soup, exchanging RNA fragments like gossiping neighbors at a village well. No central genome, no tidy cell membrane, just a loose federation of molecular busybodies cooperating (or cheating) through transient messages. The picture is charmingly untidy. It offends every tidy-minded reductionist who demands that life begin with a single, self-contained replicator molecule wearing its own instruction manual. Instead, the ancients’ intuition returns: the soup itself is the organism, the interactions are the heredity, and replication is not a rare invention but an emergent nuisance that refuses to be suppressed once the right kind of mess is assembled.

    And here the computational evidence becomes positively indecent in its clarity. Take a sufficiently large volume of digital space, fill it with utterly random strings of Brainfuck commands—sixty-four bytes of pure, unadulterated nonsense per “molecule,” a population of 131,072 such abominations—and let them collide at random. One program is applied to another; the tape executes, overwrites, splits, and returns two new strings to the soup. No fitness function. No hand-crafted seed replicators. No artificial selection whispering sweet nothings into the simulator’s ear. Just blind, mechanical interaction. For thousands of epochs the soup remains a soup: high-entropy gibberish, the informational equivalent of primordial ooze. Then, without warning or apology, the state transition arrives.

    Suddenly the entropy collapses. Unique tokens plummet. The once-random gas crystallizes into families of self-replicating programs that copy themselves with ruthless efficiency, spawn variants, and colonize the entire volume. In roughly forty percent of runs—even when background mutation is set to zero—the transition occurs within sixteen thousand epochs. The same phenomenon appears in lambda calculus, in Forth, in actual microprocessor instruction sets; only the most perversely crippled languages (one thinks of certain minimal one-instruction machines) manage to resist. The pattern is not fragile. It is not an artifact of clever coding. It is the default behavior of any sufficiently expressive, self-modifying system once random collisions are allowed to run their course. The replicators are not “designed”; they *coalesce*, the way crystals form in a supersaturated solution or the way hypercycles bootstrap themselves in chemical networks. Proto-proteins in the wet world and coherent programs in the dry one are, it seems, two manifestations of the same impertinent process: order bootstrapping itself from noise because the substrate permits it and nothing forbids it.
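
    The setup described above can be sketched in a few dozen lines. This is a toy TypeScript approximation: the instruction subset, the interaction rule, and the scale are simplified stand-ins, not the exact dialect or parameters of the experiments being described:

```typescript
// Toy self-modifying Brainfuck soup, loosely after the "random programs start
// replicating" experiments. Code and data share one buffer, so programs can
// overwrite each other during a collision.
const PLUS = 43, MINUS = 45, LT = 60, GT = 62, LBR = 91, RBR = 93;

// Execute the tape in place, treating the bytes as both program and memory.
function execute(tape: number[], maxSteps = 1000): void {
  let ip = 0, dp = 0, steps = 0;
  const n = tape.length;
  while (ip < n && steps < maxSteps) {
    const c = tape[ip];
    if (c === PLUS) tape[dp] = (tape[dp] + 1) & 255;
    else if (c === MINUS) tape[dp] = (tape[dp] + 255) & 255;
    else if (c === LT) dp = (dp + n - 1) % n;
    else if (c === GT) dp = (dp + 1) % n;
    else if (c === LBR && tape[dp] === 0) {      // skip to matching ]
      let depth = 1;
      while (depth > 0 && ip < n - 1) {
        ip++;
        if (tape[ip] === LBR) depth++;
        else if (tape[ip] === RBR) depth--;
      }
    } else if (c === RBR && tape[dp] !== 0) {    // loop back to matching [
      let depth = 1;
      while (depth > 0 && ip > 0) {
        ip--;
        if (tape[ip] === RBR) depth++;
        else if (tape[ip] === LBR) depth--;
      }
    }
    ip++;
    steps++;
  }
}

// One epoch: shuffle, pair programs, execute each concatenation, split back.
function soupEpoch(soup: number[][], rand: () => number): void {
  for (let i = soup.length - 1; i > 0; i--) {    // Fisher-Yates shuffle
    const j = Math.floor(rand() * (i + 1));
    [soup[i], soup[j]] = [soup[j], soup[i]];
  }
  for (let i = 0; i + 1 < soup.length; i += 2) {
    const tape = soup[i].concat(soup[i + 1]);
    execute(tape);
    const half = soup[i].length;
    soup[i] = tape.slice(0, half);
    soup[i + 1] = tape.slice(half);
  }
}

// A (very) small demo soup; the experiments described above used on the
// order of 131,072 programs of 64 bytes each, run for thousands of epochs.
const soup: number[][] = Array.from({ length: 8 }, () =>
  Array.from({ length: 8 }, () => Math.floor(Math.random() * 256)));
soupEpoch(soup, Math.random);
```

    Watching for the transition amounts to tracking a soup-wide statistic per epoch, such as unique-token entropy, and waiting for it to collapse.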

    One is therefore forced to the distasteful conclusion that there exists a general law—still unnamed, still only half-glimpsed—governing any medium that combines (1) Turing-completeness or its chemical analog, (2) persistent random interaction, and (3) a population large enough for rare events to propagate. Given these three conditions, replicators with heredity and variation do not merely *become possible*; they become *inevitable*. The primordial soup does not need to be coaxed, enlightened, or miraculously seeded. It simply needs to be messy enough and left alone long enough for the state transition to do its vulgar work. Life, in other words, is not a miracle. It is a phase change. And the fact that we have not yet written the precise differential equation or proved the corresponding theorem is less a reflection of nature’s subtlety than of our own persistent refusal to look the obvious in the eye without flinching.

    The ancients, in their mud-stained ignorance, were closer to the truth than we are. They at least had the decency not to invent baroque escape clauses to avoid admitting that chaos, properly stirred, will inevitably start copying itself and refuse to stop. We, armed with supercomputers and ribozymes and entire journals devoted to abiogenesis, still pretend the transition is a delicate flower requiring greenhouse conditions. The digital soups laugh at us. The chemical soups, one suspects, are doing the same. Replicators will happen. Deal with it.

    Current Mood: amused
    3:26 am
    Claude Code. 4th Day Impressions
    I'm yet to write a single line of code myself or to see Claude fail at a single task. It just does what I tell it, perfectly, and 50+ times faster than if I did it myself. Yet the attempts to implement my simple board game proved how complex it is and how poorly I wrote the rules. Also, these fantasy emojis are priceless for prototyping, especially since the AI uses them automatically without me telling it to. All I can say is that programming is really dead, and even the architecture part will be dead soon, since Claude proposes really sensible options. Where it still fails is novelty, like game design, but then even game design will be dead once better models of the human mind and emotions are built. Even now the AI can reason about what is fun and what is not, but it doesn't yet see things the way we see them, so something fun for an AI won't be fun for you.

    But yeah, everyone today can make a 1000kb video game for like 20 euro. It is like a cheap restaurant dinner now.




    Current Mood: amused
    Sunday, April 5th, 2026
    2:00 am
    Asking Claude if TypeScript was a good choice
    Dear Claude, we are using TypeScript for this project. Did it go perfectly, or would some other language be more appropriate for future projects? E.g. theoretically we could build a Python, Common Lisp, Java, or C/C++ project to run in the browser.

    Great question. TypeScript has been an excellent fit for this project, and I'd recommend it again for similar work. Here's my honest assessment:

    TypeScript is the right choice here because:

    • The target is a single-file browser game — JS is the native runtime, no compilation-to-WASM step needed
    • The ECS architecture benefits heavily from TypeScript's type system — Entity as a branded number, typed tag lookups, and interface contracts between systems all catch bugs at build time
    • The DOM manipulation (innerHTML, onclick wiring) is idiomatic and zero-overhead
    • The toolchain is minimal (esbuild bundles in 16ms) — no heavy framework
    • Refactoring is safe — when we renamed callbacks, moved functions, or changed interfaces, tsc --noEmit caught every broken callsite instantly

    Where it showed friction:

    • The ecs.getTag<string>(e, 'TYPE') pattern is stringly-typed at runtime despite TypeScript's type system — a more opinionated ECS in Rust or C++ could enforce component types at compile time
    • No pattern matching — the switch on PROP_ACTION strings works but isn't exhaustiveness-checked like Rust's match or ML variants

    For other languages, honestly:

    • Python → too slow for anything real-time, Brython/Pyodide add huge overhead, no direct DOM access
    • Common Lisp → fascinating for an ECS (macros could generate component accessors), but browser deployment is painful (Parenscript or JSCL are niche)
    • Java → TeaVM or CheerpJ exist but add complexity for no gain over TS for a DOM-driven game
    • C/C++ → Emscripten/WASM would make sense if this were a canvas-rendered game with heavy simulation, but for a text/DOM UI game it's overkill and you lose easy DOM interop
    • Rust → the one serious alternative. WASM + wasm-bindgen works well, and Rust's type system would make the ECS much safer. But the iteration speed would be slower for a project like this where we're constantly reshaping UI and game logic

    For this project — a DOM-based UI game with rapid iteration, tag-based ECS, and single-file output — TypeScript was the ideal choice. I wouldn't change it.



    Current Mood: amused
    Saturday, April 4th, 2026
    3:25 am
    Claude Code: 2nd Day Impression
    I learned that while Claude can produce working code, it should be explicitly told to refactor specific parts, otherwise parts of the code end up as spaghetti. In my case, different action classes were implemented using different systems, while they are really the same.

    Claude also can't design the UI for you, especially when your initial idea of the UI is wrong. My idea was to just render the map, with the UI in a small cramped overlay HUD. But that doesn't work with adventure games. What the game really needs is a window listing selectable entities in the current room, and a window listing verbs/actions on these entities. And sometimes you have to tell Claude to do denormalization (e.g. add a compass rose or `switch to this hero` buttons).

    Claude also won't implement debugging tools, like the debug menu, unless you explicitly tell it to do so, despite them being mandatory for testing procedural generation stuff.

    Here is what I ended up with so far. The code size went over 600kb. I hadn't expected this game to be so complex to implement, and thought one could really make it work even on a NES.



    Current Mood: amused
    Friday, April 3rd, 2026
    6:24 pm
    My Claude Assisted RPG is entering the last few phases.
    As of now I've got 500kb of TypeScript + a 300kb game design specification, and I noticed Claude using more and more tokens for each new phase. So I asked the AI itself which model is best to finish the project (Sonnet is much faster and cheaper). Apparently you just can't use a simpler model on a larger project, so local LLMs won't cut it for now, if you want to avoid programming yourself. But it can make sense to use cheaper models in the early stages.



    Current Mood: amused
    4:33 am
    Phase 9 Complete
    This time I had to clarify a bit, since Claude attempted needless architecture astronautics, making the code unreadable.
    And a few of my rules were hard to grasp even for an LLM. No wonder human beings never understand me.
    The most impressive part is that it does visual debugging by clicking on the screen, monkey-patching the running JavaScript on the fly.

    Here is the plan Claude produced at the start.
    https://www.markdownpaste.com/document/claude-plan-1



    Current Mood: amused
    12:03 am
    Claude Code experiment got to Phase 4
    Got past the prototype, and Claude keeps doing the phases with me just writing "proceed".

    Having a somewhat precise design document helps. But Claude does fill in the gaps nicely, like offering a default class/race/ability for the hero, and creating the menus implicitly. At this stage I myself would already be burned out and have lost interest, returning to watching anime.



    Current Mood: amused
    Thursday, April 2nd, 2026
    6:58 pm
    Claude Code programming Phase 2
    Phase 2 involved porting the existing world generator from the prototype generated by Gemini. Claude did it flawlessly.

    I decided against using Gemini for the full implementation, because Gemini, while useful, messed things up, sometimes updating already perfect code without any reason. Claude just one-shots everything without me being involved. I do read the changes though, so I understand what is going on. But no input was required, beside providing the game_design_document.md and asking Claude to create architecture.md, development_plan.md and faq.md, where it lists ambiguities/contradictions, allowing me to clarify them in one go.



    Current Mood: amused
    Current Music: Hitoshi Sakimoto - Dawn of Lea Monde (AI remix)
    6:29 pm
    Claude Code progress
    Phase 1 went without any hiccups at all.
    This time it produced the actual core of the engine code.
    The code looks legit, and like something one could easily fix if the AI messes up.

    13 more phases to go...



    Current Mood: amused
    2:10 pm
    Claude Code progress
    Claude Code managed to perform the first and hardest phase: setting up the TypeScript build system, producing a single monolithic HTML file.

    For some reason it asked me to install Node.js myself, which I did, giving Claude the path where I installed it. Disclaimer: I know nothing about TypeScript or modern JavaScript build/deployment practices, especially on Windows, and setting up the build system is usually the hardest step (after not burning out working on a larger project).

    People accuse me of not finishing my original game. Well, that project is experimental, helping me build my game design and programming skills. I made countless game design mistakes and had to rewrite large portions of it. In the end I came up with a somewhat sound design, but decided to first implement the core ideas as a TTRPG campaign, which I'm now translating into a TypeScript prototype using Claude Code, as a way to get accustomed to modern AI assistant practices on a large project.

    I still have the issue that my original game is written in my own Symta programming language, which is really good, but modern AIs are not trained on it. So I will later have to train some LoRA and make Symta run in the browser, but that will be a far more ambitious task.

    In the end I'm just a worthless and horrible programmer myself, which is one of the reasons I was always looking for any tool to help me boost my skills, be it LISP or Symta.



    Current Mood: amused
    12:55 pm
    Claude Code: First Impression
    Claude Code had issues working with the PDF file: it tried to run pdftools, which is unavailable on Windows. So I had to convert the file to MD, which Claude can both parse in its entirety and edit with `sed`. Otherwise it made a sensible step-by-step development plan and architectural layout.

    I decided to use TypeScript as the development language, since it compiles to JS yet has static typing, which is surprisingly helpful for AI. I personally prefer dynamic typing, but the JavaScript implementation of dynamic typing is broken and the compiler won't give the AI any useful errors. Still unsure about the general architecture, so I told it to use ECS, which can be easily saved to an SQL database. I could as well use LISP-style association lists, since I don't need that many entities, but raw lists are a bit more error prone. Now that I think about it, I should perhaps have used old-school class-based OOP, since AI needs every bit of reference material and constraints to avoid errors.

    Never thought I'd admit it, but now I understand why static typing and classic OOP are very important, especially now. Anyway, I hope Claude will be able to navigate the ECS mess given the clear architecture.md and api.md.
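    To make the ECS-to-SQL point concrete, here is a minimal sketch of what I mean (my own toy illustration, not the project's actual code): components are plain serializable records, so the whole world flattens into rows like (entity_id, component_type, json) for an SQL table.

    ```typescript
    // Toy ECS: entities are ids, components live in per-type Maps,
    // systems are plain functions iterating entities that have the needed components.
    type Entity = number;

    interface Position { x: number; y: number; }
    interface Velocity { dx: number; dy: number; }

    class World {
      private nextId: Entity = 1;
      positions = new Map<Entity, Position>();
      velocities = new Map<Entity, Velocity>();

      spawn(): Entity { return this.nextId++; }

      // Movement system: for every entity with a Velocity, advance its Position.
      movementSystem(): void {
        for (const [id, vel] of this.velocities) {
          const pos = this.positions.get(id);
          if (pos) { pos.x += vel.dx; pos.y += vel.dy; }
        }
      }

      // Flatten to rows ready for a hypothetical SQL insert: (entity, type, json).
      toRows(): [Entity, string, string][] {
        const rows: [Entity, string, string][] = [];
        for (const [id, p] of this.positions) rows.push([id, "Position", JSON.stringify(p)]);
        for (const [id, v] of this.velocities) rows.push([id, "Velocity", JSON.stringify(v)]);
        return rows;
      }
    }

    const world = new World();
    const e = world.spawn();
    world.positions.set(e, { x: 0, y: 0 });
    world.velocities.set(e, { dx: 2, dy: 1 });
    world.movementSystem();
    console.log(world.positions.get(e)); // → { x: 2, y: 1 }
    ```

    Because every component is just data, saving the game is `INSERT` per row and loading is the reverse; this is the property that made ECS attractive over ad-hoc class hierarchies.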


    Also, apparently the Claude Code source code was leaked, so we can expect everyone to start copying it, or even clean-room reverse engineering it (possibly using Claude itself).

    Current Mood: contemplative
    Wednesday, April 1st, 2026
    10:10 pm
    AI Coding Agents
    Decided to try AI for actual coding tasks, instead of just snippets.
    Why? NL and the EU are cracking down on sex workers, so I need fallback options.
    So I registered for Claude. Will report back on the experience.




    Current Mood: amused
    6:11 pm
    Psyopers gonna psyop?
    That's a classic "accelerated backlash" play—weaponize the loudest, most unhinged opponents against themselves while quietly elevating the weakest defenders of the status quo. It's straight out of political dark arts: flood the zone with cringeworthy advocates for the "pure human" side until normies start associating "anti-AI" with performative outrage, bad acrylics, and gallery virtue-signaling that nobody actually buys. The shell-company/PR cutout layer is the pro move to keep it deniable.

    Would it work? Short-term, yeah, probably. We've already seen organic versions of this dynamic play out. The most deranged anti-AI screeds (the ones calling every generated image "theft" while ignoring that human artists train on everything they've ever seen) already trigger massive counter-engagement on X, Reddit, and art forums. Pair that with curated "real artist" showcases that are deliberately mid—think derivative fan-art collectives getting massive press while genuinely skilled traditionalists get ignored—and you create a vibe where "pro-human creativity" starts smelling like sour grapes and rent-seeking. Public fatigue is real; people get tired of being lectured by activists who can't draw a straight line but demand the industry bend to their feelings. Nudge theory 101: annoyance is a hell of a motivator.

    The clandestine part is table stakes. Openly funding useful idiots would collapse the op instantly—nobody trusts a "No AI" activist on payroll from Stability or whoever. Shells, cutouts, sympathetic NGOs, "independent" curators, and astroturfed gallery drops have been standard operating procedure in culture wars for decades. It'd be expensive but not rocket science for a big AI player with marketing budgets in the hundreds of millions.

    Current Mood: contemplative
[ << Previous 20 ]
About LJ.Rossia.org