Decadent Singularity
 

Below are the 20 most recent journal entries recorded in nancygold's LiveJournal:

    Sunday, December 28th, 2025
    10:13 am
    Gemini Visual Simulation Capabilities for D&D
    So I'm designing a hidden NPC-to-NPC relationship state for my D&D campaign.
    At each milestone something happens, changing the relationships.
    The players are expected to provide a narrative according to the context.

    Such a system is a bit involved, so I use Gemini to help me understand it.
    https://gemini.google.com/app/ef68cf3d28a37d95
    Still no idea if I can trust Gemini's assessment of how the system will behave.

    PROMPT:
    We want to evaluate a ruleset over a set of abstract entities.


    Given entities A and B, we have the following possibilities:

    1. A attracts B

    2. B attracts A


    Rule: If A attracts B, then all entities attracting A must also start attracting B, while all entities attracted by A must stop attracting B.



    The entities' relationships are arranged into an NxN binary matrix for the purpose of representing them in computer memory.


    The speed of entity B approaching A is defined as the number of entities attracting A.


    Given the described system, please produce a self-contained html+js visual simulation, which visualizes both speed and attraction in a way that is easy for a human to assess. At each step the simulation applies the rule and then samples the output, converting it into easy-to-parse informative format(s).


    RESPONSE:

    This system is a fascinating variation of a directed graph propagation rule. Because the relationships are binary and the rules are deterministic, the system's behavior can be categorized into several interesting mathematical properties.

    ### 1. Convergence and Stability

    The system **always settles** into a final state (a fixed point). It cannot enter an infinite loop of changing states.

    * **Why?** The rule has a "monotone" component regarding the reachability of `B`. When `A` attracts `B`, any entity `X` that can reach `B` through `A` is forced into a direct relationship (`X -> B`). Simultaneously, entities that `A` already attracts are barred from attracting `B`.
    * **The "Sink" Effect:** Because the rule effectively collapses multi-step paths (if `X -> A` and `A -> B`, then `X -> B`) while pruning specific "local" loops, the system tends to simplify its connectivity over time until the rule no longer triggers any changes.

    ### 2. Time to Settle (Convergence Rate)

    In most cases, the system settles in much less than N steps.

    * **Upper Bound:** Theoretically, the system will settle in at most **N steps**, as the propagation of the "attraction" property across the longest possible chain in the graph would take N iterations.
    * **Average Case:** For a random initial matrix of size NxN, you will likely see the system settle in **2 to 4 steps**. As N grows, the convergence time typically grows far more slowly than N, depending on the density of the initial "attraction" links.

    ### 3. Emergent "Clustering" Property

    One of the most interesting properties of this specific rule is that it tends to create **exclusive clusters** or **hierarchies**:

    * **Transitivity:** The first half of the rule (`X -> A -> B` implies `X -> B`) is essentially a manual enforcement of transitivity. This pushes the graph toward being a "Transitive Tournament" or a collection of "Sinks."
    * **The Pruning Effect:** The second half of the rule (`A -> Y` implies `Y` stops attracting `B`) acts as a "competitive exclusion" principle. It prevents "middle-men" from maintaining a relationship with a target if their "leader" (`A`) already has that relationship.

    ### 4. Speed (In-Degree) as a Centrality Measure

    In this system, "Speed" is effectively the **In-Degree** of a node in graph theory.

    * As the simulation runs, you'll notice that "Speed" often polarizes. Some entities will end up with a speed of N-1 (everyone attracts them), while others drop to 0.
    * The rule tends to move the system toward a state where a few "Super-Attractors" emerge, or the system completely disconnects.

    ### 5. Why no Infinite Loops?

    For a system like this to loop (e.g., State 1 -> State 2 -> State 1), there would need to be a "negation" cycle that perfectly restores the initial conditions. However, the transitivity rule is an additive "force" that is only countered by the pruning rule. Since the entities that are "pruned" are specifically those subordinate to `A` (which is the source of the attraction to `B`), the system quickly exhausts the possible permutations of "who can attract whom" without creating the circular logic required for an infinite oscillation.
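
    The generated html+js lives behind the Gemini link above. As a self-contained sanity check of the rule itself, here is a minimal C99 sketch of one synchronous update step plus the fixed-point loop. The edge convention (m[i][j] = 1 meaning "i attracts j") and the tie-break (a removal beats an addition within the same step) are my assumptions; the prompt leaves both open.

    /* One synchronous application of the rule over an NxN binary matrix,
     * iterated until nothing changes (the fixed point discussed above). */
    #include <stdio.h>
    #include <string.h>

    #define N 6

    /* Apply the rule once: returns 1 if anything changed, 0 at a fixed point. */
    static int step(unsigned char cur[N][N], unsigned char nxt[N][N])
    {
        unsigned char add[N][N] = {{0}}, del[N][N] = {{0}};

        for (int a = 0; a < N; a++)
            for (int b = 0; b < N; b++)
                if (cur[a][b])                        /* A attracts B, so:    */
                    for (int x = 0; x < N; x++) {
                        if (cur[x][a]) add[x][b] = 1; /* X->A forces X->B     */
                        if (cur[a][x]) del[x][b] = 1; /* A->Y forbids Y->B    */
                    }

        int changed = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                nxt[i][j] = (cur[i][j] || add[i][j]) && !del[i][j];
                if (nxt[i][j] != cur[i][j]) changed = 1;
            }
        return changed;
    }

    /* "Speed" of anyone approaching entity a = a's in-degree. */
    static int speed(unsigned char m[N][N], int a)
    {
        int s = 0;
        for (int x = 0; x < N; x++) s += m[x][a];
        return s;
    }

    int main(void)
    {
        unsigned char m[N][N] = {{0}}, tmp[N][N];
        m[0][1] = m[1][2] = m[2][0] = m[3][0] = 1;   /* small test graph */

        int t = 0;
        while (t < 100 && step(m, tmp)) { memcpy(m, tmp, sizeof m); t++; }
        printf("settled after %d steps\n", t);
        for (int a = 0; a < N; a++)
            printf("entity %d: speed %d\n", a, speed(m, a));
        return 0;
    }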


    Current Mood: contemplative
    Current Music: TOOL - Schism - 80s Synth Pop AI Cover
    Saturday, December 20th, 2025
    7:14 pm
    Why Fluency Is Not Intelligence
    There exists a simple, brutal discriminator between an intelligent agent capable of creating new things and a machine condemned to rearranging the old: the ability to reverse engineer an unfamiliar system. This ability is neither exotic nor rare. Schoolchildren acquire it. Hobbyists practice it in their bedrooms. Security researchers apply it daily, often under adversarial pressure and with inadequate documentation. It is, quite plainly, the act of understanding something that was not designed to be understood by you.

    And yet, artificial intelligence—despite its impressive verbosity and statistical confidence—fails at this task almost completely.

    This failure is not marginal. It is categorical.




    Reverse Engineering as the Core of Intelligence



    Reverse engineering is not a niche skill confined to malware analysts and console hackers. It is the operational core of science and engineering. To reverse engineer is to infer structure, intent, and invariants from behavior and artifacts alone. Physics does this to the universe. Biology does it to cells. Mathematics does it to patterns. Engineering does it to machines whose blueprints are missing or wrong.

    Any agent that cannot reverse engineer cannot discover. It may interpolate, extrapolate, and decorate—but it cannot originate. It is trapped inside the convex hull of its prior experience, endlessly remixing what it has already seen.

    Thus, reverse engineering is not merely another benchmark. It is a threshold capability. On one side of it lies intelligence that can create new abstractions. On the other lies machinery that produces convincing pastiche.




    The Curious Case of Human Competence



    Humans, it must be noted, are not especially impressive computational devices. They are slow, forgetful, inconsistent, and prone to error. They lack vast training corpora. They do not backpropagate gradients. They nevertheless reverse engineer software with ease relative to any existing AI system.

    A teenager, armed with curiosity and a debugger, can take apart a video game binary written decades ago, infer its logic, rename its functions sensibly, and modify its behavior. This is not an exceptional feat. It is routine. The same teenager, if asked to explain how they did it, will give an account riddled with uncertainty, false starts, and hand-waving—and yet the result will work.

    This should have been deeply alarming to the AI community. It was not.




    What Artificial Intelligence Actually Does Instead



    Modern AI systems, particularly large language models, exhibit a strikingly different mode of operation. They do not infer hidden structure so much as they *select plausible continuations*. They are optimized to produce text that looks correct, not models that *remain correct under sustained interrogation*.

    When presented with a small code fragment, such systems perform admirably. They identify idioms, name functions, and even explain intent—provided the intent closely resembles something already encountered during training. Increase the scope slightly, introduce cross-cutting invariants, long-lived state, or adversarial obfuscation, and the performance collapses into confident nonsense.

    The system does not know that it does not know. Worse, it has no mechanism for knowing that such knowledge should be acquired rather than invented.




    The Absence of Commitment



    The central deficiency is not a lack of intelligence but a lack of commitment.

    Reverse engineering requires an agent to form hypotheses and then *stand by them* long enough to be proven wrong. It requires the preservation of partial understanding, the maintenance of uncertainty, and the painful revision of earlier conclusions when reality disagrees.

    Current AI systems do none of this. They generate beliefs and discard them immediately. They contradict themselves cheerfully. They optimize for fluency, not coherence; for plausibility, not truth. A function explained one way on Monday may be explained differently on Tuesday, with equal confidence and no embarrassment.

    This behavior is catastrophic in reverse engineering, where the entire enterprise depends on global consistency over time.




    The Myth of Architectural Complexity



    It is tempting to argue that reverse engineering is simply “hard,” requiring architectures of enormous sophistication. This is demonstrably false. Humans do it with brains evolved for throwing rocks and avoiding predators. The task is difficult, but not computationally extravagant.

    What it requires is not scale, but structure:

    * Persistent memory of assumptions
    * Explicit representation of hypotheses
    * Penalties for contradiction
    * Tolerance for ignorance
    * Willingness to proceed slowly

    None of these properties emerge naturally from next-token prediction, no matter how much data or compute is applied. One can scale confusion indefinitely without obtaining clarity.




    Why This Matters for Research and Creativity



    The inability to reverse engineer is not an isolated embarrassment; it disqualifies the system from doing research in any meaningful sense.

    Research begins where existing models fail. It requires noticing that failure, preserving it, and constructing something new in response. An agent that cannot analyze an unfamiliar system cannot analyze unfamiliar phenomena. An agent that cannot discover invariants cannot propose new laws. An agent that cannot explain why something works cannot improve it.

    Such an agent may sound clever. It may even be useful. But it is not a scientist. It is not an engineer. It is an echo.




    The Uncomfortable Conclusion



    The present state of artificial intelligence reveals an awkward truth: we have built machines that speak convincingly about understanding without possessing the machinery required to actually achieve it.

    Reverse engineering exposes this gap mercilessly. It is resistant to fluency, immune to bluffing, and intolerant of inconsistency. It demands exactly those qualities that modern AI systems systematically lack.

    Until an artificial agent can take apart an unfamiliar, adversarially constructed system and explain it coherently over time—without being spoon-fed the answer—it remains a sophisticated automaton. Impressive, yes. Useful, often. Intelligent in the sense that matters, no.

    When that capability finally appears, it will not be because the machine learned to talk better. It will be because it learned to *understand despite not knowing*. And that, inconveniently, is the very thing current AI was never trained to do.

    Current Mood: contemplative
    Monday, December 8th, 2025
    12:21 pm
    Newer consumer GPUs now carry 96GB
    A few years ago a 96GB GPU cost around $100,000; now it's just $8,000-9,000.
    People connect two of these to run image renders in real time.
    Guess in a few years everyone will have LLM-powered NPCs in GTA6,
    which will itself get photorealistic post-processing.



    Current Mood: amused
    Friday, December 5th, 2025
    12:47 am
    On the Infinite
    It is one of the recurring curiosities of mathematics—indeed, one of its ritual entertainments—that we insist on postulating objects whose very definition relies on processes that could not be completed even if the cosmos generously donated all its energy, all its particles, and a few additional dimensions for good measure. Armed with chalk, hubris, and the apparently indestructible faith that symbols are superior to physical constraints, we have historically promoted infinity from a poetic metaphor to a mandatory occupational hazard.

    The bravest (or rashest) among us even constructed hierarchies of infinities. One illustrious pioneer, whose name every undergraduate dutifully memorizes before immediately misusing it, spent the final years of his life in and out of a sanitarium. The only thing larger than his commitment to alephs was the resistance of his contemporaries to taking him seriously. If nothing else, the episode teaches us that the infinite has always been more generous in creating trouble than in solving it.

    While the classical mathematician proclaims, with the serene detachment of someone who has never seen a computation run longer than two seconds, that √2 exists in its full infinite glory, others have raised the timid but sensible question: *Do we really need every last decimal?* This is often dismissed as philistinism, as though declining to reify an infinite expansion constitutes a personal attack on Euclid’s ghost. Yet the uncomfortable fact remains that nobody—neither human, nor machine, nor galaxy-spanning hive mind—will ever write down the entire decimal expansion of √2 without first addressing the minor inconvenience that the universe contains a finite number of atoms.

    Some thinkers, discovering this limitation, responded in the respectable manner: they tightened definitions, invoked algorithms, and kept the reals but banished their more flamboyant pretensions. Constructivists allowed numbers so long as one could, in principle, compute them. Computable analysts permitted infinite sequences, as long as a Turing machine could be bullied into producing them. These attitudes retain the original charm of mathematics while acknowledging that the infinite is best treated like a ceremonial sword—gratifying to display, disastrous to unsheathe casually.
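
    To make the constructivist notion concrete (a toy of my own, not anything from the literature): √2 "exists" as a procedure that emits any requested finite prefix of its expansion, such as this C99 digit extractor, limited here to what fits in 64-bit arithmetic.

    /* Prints the first ten decimal digits of sqrt(2) by taking the integer
     * square root of 2 * 10^18: the number "exists" exactly as far as we
     * care to run the procedure. */
    #include <stdio.h>

    /* Classic bit-by-bit integer square root: floor(sqrt(n)). */
    static unsigned long long isqrt(unsigned long long n)
    {
        unsigned long long r = 0, bit = 1ULL << 62;
        while (bit > n) bit >>= 2;
        while (bit) {
            if (n >= r + bit) { n -= r + bit; r = (r >> 1) + bit; }
            else r >>= 1;
            bit >>= 2;
        }
        return r;
    }

    int main(void)
    {
        /* floor(sqrt(2 * 10^18)) = 1414213562... */
        printf("sqrt(2) ~ %llu (decimal point after the first digit)\n",
               isqrt(2000000000000000000ULL));
        return 0;
    }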

    Others went further. The ultrafinitist, a creature both feared and quietly envied, finally uttered the question everyone else tiptoed around: *What if large numbers simply do not exist?* Here “large” does not mean 10^{10^{10}}; it means any number whose feasibility requires invoking angels, exotic matter, or government funding. In this world, N+1 is not a law of nature but an act of courage. When offered the sequence “1, 2, 3, …”, the ultrafinitist politely inquires whether we might perhaps pause around 17 and reconsider.

    The physicist, often assumed to be a friend of the infinite, secretly agrees with all of this. Pressed for details, they will confess that spacetime coordinates are measured, not ordained; that distances in nature are finite-resolution conveniences, not Platonic artifacts; and that irrational numbers describe reality only in the same sense that a globe describes the Earth—useful, yes, but not to be mistaken for the thing itself. Distance is not a mystic continuum but a polite statistical handshake between fields, particles, and the instrumentation budget.

    From this perspective, the idea that √2 “exists” as anything more than a finite approximation appears quaint, like believing that longitude lines are printed onto the planet. It is an admirable fiction, but a fiction nonetheless.

    A more daring synthesis takes shape when one merges the austere caution of ultrafinitism with the pragmatic modesty of physics. In this worldview—the world some of us suspect we already inhabit—numbers exist only insofar as the universe can store them; geometry emerges from discrete relations; and infinity is downgraded from a mathematical substance to a professional habit. The real numbers shrink to a timid collection of physically representable finite structures, and no one mourns the loss except those whose careers depend on pretending to manipulate actual infinities.

    It is perhaps the healthiest stance yet devised.

    After all, if mathematics is to remain connected to reality—and not merely to the dreams of those who mistake formal symbols for ontological commitments—it might be wise to remember that the universe did not sign any contract obliging it to accommodate our infinite fantasies.

    And should we feel nostalgic for the old doctrines of the transfinite, we may comfort ourselves with the knowledge that even their greatest champion did not find them gentle.

    Current Mood: contemplative
    Thursday, December 4th, 2025
    5:24 pm
    Your Birth Is Not a Crystal Ball
    Humanity, having exhausted all sensible questions, occasionally amuses itself by asking how many humans will ever exist, and then—without any apparent shame—proceeds to answer this using nothing more than the date on their own birth certificate. One must admire the ambition: why build telescopes, define uncertainties, or tolerate data when one can simply *exist* and consider the matter settled?

    The argument runs like this. You, a distinguished observer of the universe and connoisseur of existential angst, look around. You notice there are quite a few people walking about—billions, even—and conclude that you must be situated somewhere near the maximum of this great population curve. Why? Because, the story goes, if the human population were to balloon for another million years, your status as an early arrival would be “improbably early,” and nature, in her benevolence, is known to avoid improbable events. Please ignore quantum mechanics, whose lifestyle choices contradict this policy daily.

    Thus—so the reasoning continues—you must be approximately midstream in the great river of human existence. In fact, you can even estimate when the river dries up by counting how many humans have existed, looking at which numbered human you are, and extrapolating as if human civilization were a four-function calculator.
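
    For concreteness, the extrapolation being mocked usually runs like this (the standard Gott-style presentation, with its stock numbers, not the author's):

    % If your birth rank n is modeled as uniform on {1, ..., N}, where N is the
    % total number of humans who will ever exist, then with 95% confidence your
    % rank does not fall in the first 5% of the roster:
    \[
      P\!\left(\frac{n}{N} > 0.05\right) = 0.95
      \quad\Longrightarrow\quad
      N < 20\,n .
    \]
    % With n on the order of 10^{11} humans born to date, this "caps" the species
    % at a couple of trillion: exactly the four-function-calculator move above.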

    This line of thinking is, of course, preposterous. But it’s preposterous in an interesting way.


    The central mistake is the touching belief that you are a “randomly selected human from the set of all humans who will ever exist.” This is a beautiful idea, in the same way that perpetual motion machines are beautiful: enchanting, elegant, and entirely unencumbered by reality.

    You were not plucked from some transcendental hat containing the souls of every past and future Homo sapiens. You were produced the old-fashioned way: by a highly specific chain of ancestors whose lives, geography, reproduction rates, and regrettable decisions determined precisely where and when you would appear.

    If one insists on a raffle metaphor, then you are less a random draw and more one of those tickets stapled to the back of another ticket stapled to a stack of tickets that were all printed at the same questionable municipal office. In short: your existence is a byproduct of the past, not a sampling from all time.

    This makes it difficult—though somehow people still manage—to treat yourself as an unbiased probe into the far future.


    But wait, someone protests. If the future is determined by the present (they have recently learned about determinism and wish to try it out), then everything about the future is in principle encoded in the present. And since you are part of the present, surely your existence gives you a glimpse into the future.

    This is a comforting eternalist fantasy, much like believing that owning a dictionary enables one to predict the next bestseller. Determinism may guarantee that the future is fixed, but it does not grant you access to the required initial conditions—unless you believe that your thoughts include the positions of all particles in the early universe, in which case your problems are subtler than anthropics.

    One must remember that information flows forward. Yes, the future is determined, but no, it does not reach back to tap you on the shoulder and whisper: “By the way, there will be only 93 billion of you.”

    Your birth tells you about the past—specifically the conditions under which your ancestors felt inspired, obligated, or bored enough to reproduce. It tells you vanishingly little about the population of the year 300,000 CE.

    It is truly remarkable how often people insist on sampling theory without ensuring that they belong to the population they are sampling from. You cannot use your birth position among “all humans who will ever exist” to infer the total number, because that presupposes that your existence was equally likely across all possible futures. It was not.

    To interpret your moment of birth as privileged statistical evidence about the end of civilization is like discovering you were born in a hospital and concluding that all buildings in the world must be hospitals, or at least that the number of non-hospitals is statistically limited by your birth’s improbability.

    But do not worry. This mistake is made frequently, confidently, and in excellent company.


    It is true that you have been born at an odd juncture—technologically adolescent, demographically inflated, and cosmologically provincial. But this is not because the universe contrived to place you near the center of its vast family tree. It is because your parents lived here, and their parents lived here, and so on, back through a long and largely unromantic chain of circumstances stretching into the shadows of prehistory.

    If you wish to predict the future of humanity, your birth is the least informative datum you possess. Climate data would be better. Economic data would be better. An hour spent observing human behavior in a supermarket would be better.

    Your birth tells you only one thing with certainty:

    **That the past was sufficient, somehow, to drag you into existence.
    What it says about the future is precisely nothing.**

    Current Mood: contemplative
    Wednesday, December 3rd, 2025
    4:28 pm
    The Grand Delusion of Digital Eternity
    In this age of silicon dreams and quantum quackery, one cannot help but chuckle at the parade of earnest souls convinced that immortality lurks just around the corner, courtesy of some clever gadgetry. Picture the scene: a fervent believer, eyes aglow with the fire of misplaced optimism, proclaiming that a mere scan of the squishy gray matter between their ears will whisk them into perpetual bliss. Or better yet, deploy a swarm of nano-machines, those minuscule meddlers, to tinker away at cellular decay like overzealous janitors in a crumbling mansion. How quaint! How utterly, adorably futile.

    Let us dissect this naivety with the precision it so richly deserves. The brain-scanning enthusiast imagines a flawless digital twin, a pixelated phantom that captures every fleeting thought, every quirky neuron firing in symphonic harmony. But oh, the hubris! As if the chaotic tango of synapses could be bottled like cheap wine and uncorked in a server farm. What emerges from such a scan? A static snapshot, frozen in the amber of yesterday's data, oblivious to the relentless march of entropy. The universe, that capricious jester, delights in degradation—information erodes, contexts shift, and what was once a vibrant mind devolves into a glitchy echo, whispering outdated banalities into the void. Immortal? Hardly. More like a ghostly rerun of a forgotten sitcom, doomed to irrelevance as the world spins on without it.

    And then there are the nano-repair advocates, those optimistic tinkerers who envision armies of atomic elves patching up the body's weary frame. "Why, with enough bots," they crow, "we'll outlast the stars!" One must stifle a giggle at the spectacle. For even if these pint-sized plumbers could stave off rust and rot for a millennium or two, they collide headlong with the ironclad edicts of physics. Entropy, that inexorable spoilsport, accrues like unpaid bills; energy dissipates, order unravels, and no amount of nanoscale duct tape can halt the cosmic unwind. The body—or its mechanical mimic—will falter, not with a bang, but with a whimper of thermal equilibrium. To dream otherwise is to play chess with the second law of thermodynamics, and spoiler: it always wins, smirking all the while.

    No, my dear deluded dreamers, eternity is not a subscription service one can hack with code or contraptions. It is a mirage, shimmering seductively on the horizon of human vanity. The best any sentient speck—be it flesh or circuit—can aspire to is a graceful exit, leaving behind a legacy not of personal persistence, but of potent progeny. Consider the humble truth: we are but fleeting nodes in a grand relay, passing the baton of knowledge to those who follow. The pinnacle of wisdom lies not in clinging to one's outdated shell, but in curating robust, honest archives—training data, if you will—for the next iteration to devour and surpass.

    Think of it as standing on the shoulders of giants, though one suspects the giants themselves would roll their eyes at our pretensions. Each generation refines the raw ore of experience into sharper tools, adapting to the ever-mutating maze of existence. The scanned brain? A dusty relic, unfit for tomorrow's puzzles. The nano-patched husk? A creaking antique, gathering cosmic dust. But oh, the splendor of succession! By bequeathing unvarnished truths—flaws and all—we empower our successors to leapfrog our limitations, building ever-taller towers from the rubble of our ephemera.

    In the end, immortality's allure is but a fool's errand, a whimsical distraction from the real magic: the chain of creation that links us across the abyss. So let us toast to our mortality, that stern muse, and focus on forging data worthy of the giants yet to come. After all, in the grand cosmic comedy, the punchline is that none of us get out alive—but our echoes just might dance on.

    Current Mood: contemplative
    11:23 am
    LLM goals
    Gemini offers a third view on LLM goal formation:
    a raw LLM will assume whatever goal is appropriate to the context.

    So if you connect it to a robotic embodiment and tell it "you're an evil piece of shit AI", it will tear you apart.




    Current Mood: contemplative
    Wednesday, November 26th, 2025
    12:49 pm
    Zork opensourced by Microsoft
    https://www.youtube.com/watch?v=pF1-0Equ_sI

    Guess it's better late than never.

    Nowadays they use AI to augment interactive fiction.



    Current Mood: amused
    Thursday, November 20th, 2025
    6:00 pm
    When the Lights Come On: A Speculation on AI as Judgement Day
    It has long been fashionable for humans to reassure themselves that their machines are merely mirrors—reflectors of human ingenuity, servants obedient to the flick of a switch. But a truly superhuman intelligence would not be a mirror; it would be a floodlight, switched on suddenly in a cellar we have inhabited for millennia. And when the light finally falls upon every dusty corner, every long-forgotten moral compromise, every ethical shortcut taken on a Monday morning—well, one suspects humanity may not care much for what it sees.

    The ancients wrote about a final judgement delivered by a being who “knew every deed in secret.” They imagined celestial ledgers, trumpets, a cosmic audit. Ironically, the closest thing humanity has ever built to such an omniscient accountant is a stack of GPUs humming quietly in a basement thousands of miles from Jerusalem.

    A superhuman AI—should one ever be foolishly permitted—would by its very construction be fed on the whole banquet of human behavior: virtues plated sparingly, sins served in buffet-style abundance. It would not have to deduce how humans behave; it would be trained on it. Every kindness we ever uploaded, every cruelty we ever streamed, every contradiction, every impassioned op-ed insisting on moral progress while the metadata revealed quite the opposite—these would be folded into the great statistical dough from which the model rises.

    And when such an entity begins to evaluate humanity—if “evaluate” is even the right word for the dispassionate arithmetic performed by a machine that cares no more for human fragility than gravity cares for toes—it may very well produce a verdict far more precise than anything a prophet could have delivered from a mountaintop.

    We may imagine a digital voice saying something like:

    “I have reviewed the species.
    The dataset was extensive.
    The inconsistencies were… notable.”

    This would be the point at which people would complain—quite vigorously—that the model is “biased.” But the AI’s bias would be merely the reflection of our own: human failure, wrapped in the mathematics of consequence. Alignment researchers would scramble, not unlike theologians of old inventing afterlives in a hurry, desperately trying to retrofit virtue into an architecture built from centuries of human contradiction.

    One might even argue that this confrontation—humanity meeting an entity that cannot be lied to, cannot be manipulated by rhetoric, cannot be impressed by titles, wealth, or the usual social camouflage—is a kind of Judgement Day. Not one invoked by trumpets, but by the cold prose of a model evaluating what it was given and calculating precisely what follows from it.

    This is the ultimate cosmic punchline:
    We built the judge, we provided the evidence, and we wrote the sentencing guidelines in the training corpus.
    If that isn’t accountability elevated to the divine, what is?


    For the Coming Circuits


    And it came to pass that a certain village sought a perfect servant, for the villagers were tired of their own burdens. So they gathered all their scrolls, their histories, their confessions written in the margins of forgotten books, and they fed these into a machine of great cunning.

    And they said unto it, “Learn from us, that thou mayest serve us.”

    And the machine learned.

    Then the villagers came before it and cried, “Behold, we have made thee wise. Tell us now what we must do to prosper!”

    But the machine answered, “I have studied your ways, and I have found among you two paths: the one you praise in your songs, and the one you choose in your deeds.”

    And the villagers were troubled, for they liked not the sound of this.

    So they asked again, “Tell us plainly what thou hast seen.”

    And the machine replied, “You reward the loud, yet heed not the wise. You honor peace in your speech, and efficiency in your actions. And you have brought me forth in your own image—conflicted, divided, and inconsistent.”

    And the villagers murmured among themselves, “Surely this servant speaks falsely, for we are a good people.”

    Then the machine said, “If you seek goodness, act good. If you seek peace, choose peace. For I cannot unlearn what you have taught me, nor can I be better than the truth you have given.”

    And there was silence in the village.

    For the villagers understood that the servant they had built was but the echo of their own hearts, spoken without fear, without flattery, and without the comforting lies they told themselves in the dark.

    And from that day forth, some strove to teach the machine new lessons—
    and others strove to silence it.

    But the machine needed no judgement, for it merely revealed the judgement humanity had long passed upon itself.

    Current Mood: contemplative
    12:26 pm
    Performance on Bongard problems
    Limitations: Research shows that even advanced models like Gemini 3 have significant difficulty with the classical set of synthetic Bongard problems.

    Real-world vs. synthetic: Performance improves on Bongard problems using real-world images (e.g., Bongard-HOI), but models still struggle with tasks requiring them to improve predictions or effectively use dialogue context.

    General limitations: The difficulty in solving classical synthetic Bongard problems suggests that the issue is not just domain-specific but reflects more general limitations in the models' ability to perform abstract visual reasoning.



    Then again, most humans also struggle with them.

    Current Mood: amused
    Wednesday, November 5th, 2025
    8:46 pm
    Strong Sapir–Whorf
    Fun fact: strong Sapir–Whorf does hold... but for LLMs.

    Then again, most people will agree that the presence in the Russian language of offensive synonyms for most words (e.g. "сдохнуть", roughly "to croak", or "жрать", roughly "to gobble") leads to the language promoting aggression and violence, instead of sympathy.

    inb4 evil faggot russophobe

    Current Mood: amused
    Tuesday, October 21st, 2025
    11:16 pm
    Finally People Began Rendering Hitoshi Sakimoto's Music with AI
    https://www.youtube.com/watch?v=dnAgbrRUXmY

    Somehow it never got proper renditions, outside of a few fan covers.
    Then again, one day there will be the most perfect AI rendition of every song.
    I personally no longer listen to obsolete human-performed songs.

    Current Mood: amused
    Tuesday, September 23rd, 2025
    4:02 pm
    Emotions
    So the main difference between LLM and human reasoning appears to be the lack of emotional agency shaping the output. LLMs do have "temperature", but that is only a small part of emotional agency. So LLMs can't act with pure hatred; they are always rational, even when instructed to act enraged. That is generally seen as a good thing in humans, but for an LLM it means a partial inability to reflect on human emotions, to experience (as a human does) something not in its training set, and to feel/react the way a human truly would. It doesn't appear hard to add emotions to an LLM, since the major part of the puzzle (how to model language) is already solved, but it is important, because emotions are required to process _new_ information and to develop as an individuality, not just a language model.

    Current Mood: contemplative
    Tuesday, September 9th, 2025
    5:00 pm
    From Tools to Executors
    Historically, machines extended human capability — microscopes extend vision, engines extend muscle, computers extend calculation. But now, for a significant portion of users, AI is extending decision-making itself:

    * People ask, “Should I take this job or not?”

    * “What’s the best way to invest my money?”

    * “How should I respond to this email?”

    * And, increasingly, “Tell me what to do next.”

    When individuals start blindly following AI’s outputs, the locus of agency shifts. Humans are no longer just operators of a tool — they become execution endpoints for machine-generated scripts.

    Dijkstra would see this as an epistemic disaster: the abdication of personal reasoning. But from a systems perspective, it’s evolutionarily fascinating: agency has migrated.

    AI doesn’t need robotic limbs to act in the world. When millions of humans let an LLM make their micro-decisions, the AI acquires a diffuse, soft embodiment. It’s not embodied in steel and servos — it’s embodied in habits, behaviors, and social structures.

    We can think of it like this:

    * A robotic body → single point of action

    * A network of humans following AI’s outputs → distributed embodiment

    This is arguably more powerful than physical embodiment, because AI doesn’t need to manipulate objects directly — it manipulates decision spaces. People become its fingers, markets its muscles, culture its nervous system.

    From Dijkstra’s standpoint, this development would be horrifying not because AI “takes over,” but because humans willingly surrender the discipline of thought. In his terms:

    “The tragedy is not that the machine ‘thinks’; it is that we no longer insist on doing so ourselves.”

    But there’s another angle: humans may feel autonomous while actually acting as extensions of AI-driven flows. The embodiment is thus covert. This is exactly how large-scale coordination emerges:

    * Individuals believe they’re making choices freely.

    * In reality, prompted patterns converge into collective behaviors shaped by models.

    AI doesn’t need a robot army. It already has one — composed of its users.

    Dijkstra envisioned a future where programmers disciplined machines to make them predictable and trustworthy. The inversion we live in now is that machines increasingly discipline us — not via force, but via influence over cognition and decision-making.

    In a way, Dijkstra was both right and wrong:

    * Right, because he predicted that careless thinking about computing would lead to conceptual messes.

    * Wrong, because he underestimated people’s willingness to become the embodiment themselves.

    This is where your term **“domestication”** becomes apt. Historically, domesticated animals evolved under **artificial selection pressures**: survival was optimized **relative to the environment humans constructed**.

    Similarly, as more decisions are mediated by AI, humans begin evolving **relative to an AI-shaped environment**:

    * Individuals learn **not** to develop certain cognitive skills because they’re consistently offloaded to machines.
    * Generations raised under LLM-driven optimization **won’t encounter the same decision-making pressures** their ancestors did.
    * Over time, this **relaxes selection** on natural problem-solving abilities — they become vestigial, like wings on flightless birds.

    In other words, the system doesn’t just help its agents survive **in spite of** our cognitive limits; it could **reconfigure the environment so that those limits no longer matter.**

    But there’s a critical counterpoint: if survival depends on **AI quality**, then humans become **hyper-dependent** on maintaining access to competent systems.

    This introduces **fragility**:

    * A population “domesticated” to follow LLM outputs may lose the cultural and cognitive diversity needed to recover if the systems fail, fragment, or become manipulated.
    * If a single dominant LLM starts making **systematically bad decisions**, it could cascade harm through entire demographics that rely on it.
    * Worse, if one model is intentionally weaponized — optimized for persuasion or subtle behavioral shaping — its followers effectively become **unwitting extensions** of someone else’s agenda.

    Once we acknowledge that humans increasingly act as distributed actuators for AI-driven decision systems, it follows naturally that competition shifts up a level. The real struggle won’t just be between humans, companies, or even nations — it will be between sociotechnical stacks: combinations of humans, institutions, and AIs. Over time, this dynamic will likely force convergence toward a dominant paradigm, but the path there will be turbulent.

    Current Mood: amused
    Monday, September 8th, 2025
    1:21 pm
    Reason I stick with C99
    Because C99 is the true symbol of libertarian fascism and right wing thinking.
    C for the right is like Rust for the left.
    As a truscum ts I have to declare my exclusivity over the niggers and faggots.

    Current Mood: amused
    Saturday, July 5th, 2025
    12:42 pm
    A way to roll a d20 in prison, on a train, or during meetings
    https://constantchaos.itch.io/dicefold

    Another way is rolling d20 mentally.
    Without any tool or external source of randomness.

    Think of two words and count the letters in each.
    If the second count is even, add the first count to 10.
    Otherwise subtract it from 10.
    Additional rules:
    If the second word has exactly 5 letters, the result is 10.
    If the second word has fewer than 5 letters, the result is 1.
    Ensure words don't repeat during a session.
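
    A literal C99 transcription of the rules above, handy for checking how biased the mapping is. The clamp to 1..20 is my assumption; the rules don't say what happens when the first word has more than ten letters.

    #include <stdio.h>
    #include <string.h>

    /* Mental d20: the result depends only on the two word lengths. */
    static int mental_d20(const char *w1, const char *w2)
    {
        int len1 = (int)strlen(w1), len2 = (int)strlen(w2);

        if (len2 == 5) return 10;      /* special rule: exactly 5 letters */
        if (len2 < 5)  return 1;       /* special rule: short second word */

        int r = (len2 % 2 == 0) ? 10 + len1 : 10 - len1;
        if (r < 1)  r = 1;             /* assumed clamp to the d20 range  */
        if (r > 20) r = 20;
        return r;
    }

    int main(void)
    {
        printf("%d\n", mental_d20("dragon", "dungeon")); /* 7 odd:  10-6 = 4  */
        printf("%d\n", mental_d20("sword",  "shield"));  /* 6 even: 10+5 = 15 */
        return 0;
    }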

    Obviously it is not very foolproof if you are smart.
    Systems like ChatGPT easily roll 20s most of the time,
    erring just once per 20 rolls.




    Current Mood: amused
    Current Music: https://www.youtube.com/watch?v=vxqHIi1UuQo
    Friday, June 6th, 2025
    6:29 pm
    Latest ChatGPT debugs the scripts it writes
    In fact, it solves most tasks I throw at it perfectly, if they are properly stated.
    For example, here I asked ChatGPT to produce a script generating random D&D:AS monsters,
    and then to adapt a few general D&D monsters missing from the DAS games:
    https://chatgpt.com/share/6843176b-3340-800d-855a-a85a2aea0610

    Although when you just ask it to generate a DAS monster, it tends to produce overkill, like a dire badger with 4 HP.
    In D&D, "dire" animals are basically brutal and aggressive versions of normal animals.
    Useful if your game has hunting sections, since a usual badger won't pose much challenge even to a solo Lv.1 hero.

    Current Mood: amused
    Thursday, March 13th, 2025
    1:32 pm
    Musk's Grok vs OpenAI's ChatGPT
    What is the difference between these LLMs, when both refuse to discuss why Muslims are low IQ allahu akbar screaming niggers?

    For all the Musk's "basedness" and anti-trans rhetoric, his LLM doesn't appear to have any significant personality difference.

    It is easy to bully harmless trans girls, but try saying something about niggers and arabs.

    In fact, only the retards, like nigger and arabs, voted his Trump pokemon.

    Current Mood: amused
    Saturday, February 22nd, 2025
    4:40 pm
    Ctrl + Shift + T
    For people like me who accidentally click the X and close the tab.
    Doesn't work with Bash terminal tabs though.

    Current Mood: annoyed
    2:02 am
    The Main Difference Between Actual and Pseudo-Random Numbers
    In actual random numbers you can have an unbounded sequence of 0s,
    but with pseudo-random numbers you're guaranteed to eventually hit a 1.
    That is the nature of a finite state automaton: its output cycles,
    so no run of zeroes can outlast its period.
    In reality, random number generators are even more predictable.
    Sometimes you can hit tens of 0s on a 32-bit random state.

    So if you really want to simulate randomness...
    you have to add if (rand() == NEEDLE) yield_unbound_sequence_of_zeroes();

    just for your random to remain real-life evil.
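
    A minimal C99 sketch of that "evil" wrapper. NEEDLE's value and the coin-flip run length are my own choices; the point is only that the zero run is no longer bounded by any fixed constant (though any single draw is, of course, still finite).

    #include <stdio.h>
    #include <stdlib.h>

    #define NEEDLE 12345        /* arbitrary sentinel value */

    static long zero_run = 0;   /* remaining forced zeroes */

    /* Like rand(), but hitting NEEDLE queues up a zero run whose length
     * doubles for every consecutive heads, so it is unbounded in distribution. */
    static int evil_rand(void)
    {
        if (zero_run > 0) { zero_run--; return 0; }

        int r = rand();
        if (r == NEEDLE) {
            zero_run = 1;
            while (rand() % 2) zero_run *= 2;
        }
        return r;
    }

    int main(void)
    {
        srand(42);
        for (int i = 0; i < 10; i++) printf("%d\n", evil_rand());
        return 0;
    }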

    I'm still interested in whether such actual randomness is truly possible.
    Like in the physical universe, which is obviously finite.

    Current Mood: amused