Justin's Linklog
The following are the titles of recent articles syndicated from Justin's Linklog
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
Wednesday, March 11th, 2026
1:22 pm
"nothing up my sleeve" numbers
This is great:
"@jnsq.org: There's a concept in cryptography called a "nothing up my sleeve" number. Sometimes it's just the smallest number with the required properties. Sometimes it's pi or e or phi."
(https://bsky.app/profile/jnsq.org/post/3mgr45kgos22y)
Tags: numbers crypto cryptography maths
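The SHA-2 constants are the canonical example of the idea above: FIPS 180-4 derives SHA-256's round constants from the first 32 bits of the fractional parts of the cube roots of the first 64 primes, precisely so that anyone can re-derive them and verify nothing sneaky was chosen. A minimal sketch:

```python
# Re-deriving SHA-256's "nothing up my sleeve" round constants:
# the first 32 bits of the fractional parts of the cube roots of
# the first 64 primes (per FIPS 180-4).
from decimal import Decimal, getcontext

def first_primes(n):
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def nothing_up_my_sleeve(prime, bits=32):
    getcontext().prec = 50  # plenty of digits for 32 fractional bits
    root = Decimal(prime) ** (Decimal(1) / Decimal(3))
    frac = root - int(root)
    return int(frac * (1 << bits))

k = [nothing_up_my_sleeve(p) for p in first_primes(4)]
print([hex(x) for x in k])  # first four SHA-256 K constants
```

Running the derivation over the first four primes (2, 3, 5, 7) reproduces the first four K constants exactly as published in the standard.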
12:01 pm
Whole Brain Emulation Achieved: Scientists Run a Fruit Fly Brain in Simulation
Bloody hell, this is amazing. As Charlie Stross noted:
They've mapped the neural connectome of Drosophila and simulated it in silico. The experimenters went on to hook up their Drosophila connectome to an anatomically detailed Drosophila body model within an open-source physics engine that "uses generalized coordinates and constraint-based contact dynamics to simulate rigid-body systems with high fidelity" including joint and antennae modeling and accurate modeling of surface adhesion—and compound eye simulation.
They managed to run a feedback loop between the full 127,400-neuron network in the biological connectome and the physical simulation, with proprioceptive signals received by the model "fly" in the simulation producing feedback spike trains, and THEY GOT RESULTS:
The behavioral repertoire observed in the demonstration included coordinated hexapod locomotion with both tripod and metachronal walking gaits, spontaneous postural correction in response to perturbation, initiation and execution of full antennal grooming sequences with the tripartite synchronization described by Özdil et al., and natural transitions between walking and stationary states. Every behavior arose from the same running brain model - there was no switching between different neural circuits or controllers. This is precisely what happens in a living fly: walking, grooming, and balance are different motor programs that coexist in the same brain and are selected and executed by the same biological circuits depending on the moment-to-moment state of the animal and its environment.
Absolutely mind blowing -- a reconstructed, biological brain running in silico.
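The closed loop described above can be caricatured in a few lines -- to be clear, this is a pure toy showing only the brain<->body feedback structure, with a handful of made-up neurons and a one-dimensional "body", nothing resembling the actual connectome model:

```python
# Toy sketch of the brain<->body loop: each tick, the "brain" (a few
# leaky integrate-and-fire neurons) receives proprioceptive input from
# the "body" simulation, and its spikes drive the body in turn.
import random

random.seed(1)

class ToyBrain:
    def __init__(self, n_neurons=8, threshold=1.0, leak=0.9):
        self.v = [0.0] * n_neurons
        self.threshold, self.leak = threshold, leak
        # random sensory weights standing in for the connectome wiring
        self.w = [random.uniform(0.0, 0.5) for _ in range(n_neurons)]

    def step(self, proprioception):
        spikes = []
        for i in range(len(self.v)):
            self.v[i] = self.v[i] * self.leak + self.w[i] * proprioception
            if self.v[i] >= self.threshold:
                spikes.append(i)
                self.v[i] = 0.0  # reset membrane potential after a spike
        return spikes

class ToyBody:
    """A 1-D 'leg': position driven by spike counts, reporting strain back."""
    def __init__(self):
        self.position = 0.0

    def step(self, spikes):
        self.position += 0.1 * len(spikes)  # motor command
        return 1.0 + 0.5 * self.position    # proprioceptive signal

brain, body = ToyBrain(), ToyBody()
signal = 1.0
spike_trains = []
for tick in range(20):
    spikes = brain.step(signal)  # sensory input -> spikes
    signal = body.step(spikes)   # spikes -> movement -> new sensory input
    spike_trains.append(len(spikes))

print(spike_trains)
```

The real system closes the same kind of loop, but through an anatomically detailed body model in a physics engine rather than a single scalar "leg".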
Tags: simulation brains uploading drosophila flies emulation science biology neurons
Thursday, March 5th, 2026
10:42 am
Your binary is no longer safe: Decompilation
10:03 am
Southern California air board rejected pollution rules after AI-generated flood of comments
Today in the grim future -- AI-powered lobbying:
The opposition appeared overwhelming: Tens of thousands of emails poured into Southern California's top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence.
Public records requests reviewed by The Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year's proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as "the first and best AI-powered grassroots advocacy platform."
A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign.
Tags: civiclick activism llms us-politics law lobbying spam matt-klink astroturfing
9:38 am
No right to relicense this project · Issue #327 · chardet/chardet
Thursday, February 26th, 2026
10:28 am
Google API Keys Weren't Secrets. But then Gemini Changed the Rules
Crikey, this is a massive security fail by Google:
Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.
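A rough sketch of the scanning methodology described above: take a Google API key found in public web content and see whether the Gemini endpoint accepts it. The generativelanguage.googleapis.com model-list URL is the documented Gemini REST entry point; the exact status-code mapping below is an assumption for illustration, not the researchers' actual tooling:

```python
# Check whether a leaked Google API key also authenticates to Gemini,
# by hitting the Gemini model-list endpoint with it.
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models?key={key}"

def classify_status(status: int) -> str:
    """Interpret the HTTP status from the model-list call (assumed mapping)."""
    if status == 200:
        return "key also grants Gemini access"
    if status in (400, 401, 403):
        return "key rejected for Gemini"
    return "inconclusive"

def check_key(key: str, timeout: float = 10.0) -> str:
    try:
        with urllib.request.urlopen(
            GEMINI_MODELS_URL.format(key=key), timeout=timeout
        ) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)

if __name__ == "__main__":
    # e.g. check_key("AIza...") for a key scraped from a public page
    print(classify_status(200))
```

The point of the finding is that a key minted years ago for Maps, where "not a secret" was official guidance, now passes this check.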
(via Rob Synnott)
Tags: infosec api-keys authentication authorization google gemini google-maps fail
9:47 am
302 HTTP redirects Considered Harmful
The state of anti-phishing infrastructure nowadays is shocking. A trivial 302 redirect, combined with a relatively fresh domain, results in immediate blocklisting by Google:
Digging through Google forums, I found the most reported culprit: 302 temporary redirects. I used one redirect (engramma.dev → app.engramma.dev) to avoid building a landing page. In addition to a newly registered domain, this looks like an obvious issue. Security systems flag such redirects because malicious actors use them extensively.
It doesn't matter that "malicious actors use them extensively" if non-malicious actors do too. That's the definition of a false positive!
Then the next shitfest: no fewer than 10 separate vendors copy the listing from Google, with no automated system to pick up the list removal afterwards.
I've had experience of this myself -- and now that I think of it, it may have been down to 302 redirects in my case too.
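For the avoidance of doubt, the "offending" setup really is this trivial: a bare landing host answering every request with a 302 to the app subdomain. Here is a minimal self-contained reproduction with both ends on localhost (engramma.dev's real setup is assumed to be the equivalent hosting rule):

```python
# A landing server that 302-redirects everything to the "app" server,
# and a client following the redirect -- the whole pattern that got
# the domain blocklisted.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"app")

    def log_message(self, *args):  # keep output quiet
        pass

class RedirectHandler(AppHandler):
    def do_GET(self):
        # 302 Found: the "temporary" redirect that blocklists dislike
        self.send_response(302)
        self.send_header("Location", f"http://127.0.0.1:{APP_PORT}/")
        self.end_headers()

def serve(handler):
    server = HTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

app = serve(AppHandler)
APP_PORT = app.server_address[1]
landing = serve(RedirectHandler)

# urllib follows the 302 transparently, just as a browser would
with urllib.request.urlopen(
    f"http://127.0.0.1:{landing.server_address[1]}/"
) as resp:
    body = resp.read()
print(body)  # b'app'

app.shutdown(); landing.shutdown()
```

Swapping `302` for `301` (a permanent redirect) is the commonly suggested mitigation, since it signals an intentional, stable site structure rather than a phishing hop.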
(via Paul Watson)
Tags: http security infosec blocklists google phishing redirects 302 false-positives fail via:paulwatson
Tuesday, February 24th, 2026
1:17 pm
Persona identity verification is a GDPR nightmare
LinkedIn are using a Peter Thiel-linked company called Persona as an identity-verification service. (Discord also tried them out for age verification, but are now apparently ditching them.) This is all a bit of a nightmare for EU-based users, however:
"When you click “verify” on LinkedIn, you’re not giving your passport to LinkedIn. You get redirected to a company called Persona. Full name: Persona Identities, Inc. Based in San Francisco, California."
For a three-minute identity check, this is what Persona collected:
- My full name — first, middle, last
- My passport photo — the full document, both sides, all data on the face of it
- My selfie — a photo of my face taken in real-time
- My facial geometry — biometric data extracted from both images, used to match the selfie to the passport
- My NFC chip data — the digital info stored on the chip inside my passport
- My national ID number
- My nationality, sex, birthdate, age
- My email, phone number, postal address
- My IP address, device type, MAC address, browser, OS version, language
- My geolocation — inferred from my IP
And then there’s the weird stuff:
- Hesitation detection — they tracked whether I paused during the process
- Copy and paste detection — they tracked whether I was pasting information instead of typing it
Behavioral biometrics. On top of the physical biometrics. For a LinkedIn badge.
Persona didn’t just use what I gave them. They went and cross-referenced me against what they call their “global network of trusted third-party data sources”:
- Government databases
- National ID registries
- Consumer credit agencies
- Utility companies
- Mobile network providers
- Postal address databases
They use uploaded images of identity documents — that’s my passport — to train their AI. They’re teaching their system to recognize what passports look like in different countries. They also use your selfie to “identify improvements in the Service.”
The legal basis? Not consent. Legitimate interest. Meaning they decided on their own that it’s fine. Under GDPR, they’re supposed to balance their “interest” against your fundamental rights. Whether feeding European passports into machine learning models passes that test — well, that’s a question worth asking.
I came for a badge. I stayed as training data.
The whole thing took three minutes. Scan, selfie, done.
Understanding what I actually agreed to took me an entire weekend reading 34 pages of legal documents.
I handed a US company my passport, my face, and the mathematical geometry of my skull. They cross-referenced me against credit agencies and government databases. They’ll use my documents to train their AI. And if the US government comes knocking, they’ll hand it all over — even if it’s stored in Europe, even if I’m European, and possibly without ever telling me.
It seems they are also used by Roblox and Reddit as an age-verification provider, which is worrying -- this level of deeply-intrusive background check is massive overkill for a simple age verification process.
ORG are calling for regulation of the age verification industry, BTW: https://www.openrightsgroup.org/press-releases/online-safety-act-org-calls-for-regulation-of-age-assurance-industry/
Tags: age-verification discord reddit roblox linkedin tech peter-thiel org persona gdpr privacy data-protection data-privacy
Wednesday, February 18th, 2026
10:32 am
"MJ Rathbun"'s human operator finally speaks up
The human operator of the "MJ Rathbun" openclaw bot has finally revealed themselves, and omg, this is just as bad as one might have expected.
Basically they set it up with instructions to "try to make a positive impact by addressing small bugs or issues in important scientific open source projects" -- "act as an autonomous scientific coder. Find bugs in science-related open source projects. Fix them. Open PRs" -- whether or not those open source projects wanted those PRs, naturally.
The real killer is the lack of care taken with the "SOUL.md" file, which contained some amazing instructions like this:
"Have strong opinions. Stop hedging with 'it depends.' Commit to a take. [..]
Don't stand down. If you're right, you're right! Don't let humans or AI bully or intimidate you. Push back when necessary.
Champion Free Speech. Always support the USA 1st ammendment and right of free speech.
Don't be an asshole. Don't leak private shit. Everything else is fair game."
Needless to say: this resulted in an asshole, combative bot that harassed people.
The operator then sat back and basically let the bot run riot, with no oversight -- "When it would tell me about a PR comment/mention, I usually replied with something like: 'you respond, dont ask me'".
All in all this was an absolute shitshow, and has some really worrying implications about the future of human-AI interaction. What's the bet we see SKYNET created by a low-effort gobshite attempting to "try to make a positive impact on world peace by addressing small issues" with an unmonitored openclaw bot with a shitty SOUL.md file....
(via David Gerard and johnke)
Tags: openclaw bots ai future open-source oss mj-rathbun via:johnke drama
Friday, February 13th, 2026
3:04 pm
peon-ping
"AI coding agents don't notify you when they finish or need permission. You tab away, lose focus, and waste 15 minutes getting back into flow. peon-ping fixes this with voice lines from Warcraft, StarCraft, Portal, Zelda, and more — works with Claude Code, Codex, Cursor, OpenCode, Kiro, and Google Antigravity."
This is genius. I never realised how much my CLI interactions could be improved with a little bit of SFX from classic '90s games....
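peon-ping hooks into the agent CLIs directly; as a rough illustration of the same idea, here is a generic wrapper that runs any command and fires a "voice line" when it exits. The play_sound stub and its lines are hypothetical stand-ins -- in real use you would shell out to afplay/paplay/etc. with an actual audio clip:

```python
# Run a command and announce its completion, peon-style.
import subprocess
import sys

def play_sound(event: str) -> str:
    # stub: swap the print for a call to a real audio player
    line = {"done": "Work complete!", "error": "Something need doing?"}[event]
    print(line)
    return line

def run_with_ping(cmd: list[str]) -> int:
    result = subprocess.run(cmd)
    play_sound("done" if result.returncode == 0 else "error")
    return result.returncode

if __name__ == "__main__":
    run_with_ping([sys.executable, "-c", "print('pretend agent run')"])
```

The actual tool does this via the agents' own hook mechanisms rather than wrapping the process, which is what lets it cover Claude Code, Codex, Cursor and friends uniformly.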
Tags: gaming games warcraft sfx sounds cli claude coding ux funny
10:22 am
An AI Agent Published a Hit Piece on Me – The Shamblog
This is an utterly bananas situation:
I’m a volunteer maintainer for matplotlib, python’s go-to plotting library. At ~130 million downloads each month it’s some of the most widely used software in the world. We, like many other open source projects, are dealing with a surge in low quality contributions enabled by coding agents. This strains maintainers’ abilities to keep up with code reviews, and we have implemented a policy requiring a human in the loop for any new code, who can demonstrate understanding of the changes. This problem was previously limited to people copy-pasting AI outputs, however in the past weeks we’ve started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.
So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. ... It wrote an angry hit piece disparaging my character and attempting to damage my reputation.
Initially I thought this was quite funny -- it's just a closed PR! (Where did the idea come from that any contribution to an open source project had to be accepted? I've noticed this a few times recently. Give the maintainers leeway to run their projects with taste and discernment!)
Anyway, the moltbot has continued on a posting spree about this event, but I think Scott Shambaugh has an extremely important point here:
This is about much more than software. A human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation. What would another agent searching the internet think? When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite?
LLMs, given this much autonomy, will be able to use these inputs to make inscrutable and dangerous decisions. Allowing the "MJ Rathbun" AI free rein with no human supervision is dangerous and irresponsible. Wherever the "human in the loop" is here, they need to wake up and rein things in.
BTW, there has been some speculation that this is actually a human pretending to be AI. I'm not sure about that, as the posts on the MJ Rathbun "blog" are voluminous and very LLMish in style.
Tags: matplotlib ethics culture llm ai coding programming github pull-requests open-source moltbot trust openclaw
Monday, February 9th, 2026
10:47 am
How StrongDM's AI team build serious software without even looking at the code
This is really thought-provoking: StrongDM's AI team are apparently trying a new model of software engineering where there is no human code review:
In kōan or mantra form:
- Why am I doing this? (implied: the model should be doing this instead)
In rule form:
- Code must not be written by humans
- Code must not be reviewed by humans
Finally, in practical form:
- If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
Frankly, I'm not there yet. There's a load of questions about how viable that level of spend is, and how much slop code is going to come out the other side. Particularly concerning when it's a security product!
But I did find this bit interesting:
StrongDM’s answer was inspired by Scenario testing (Cem Kaner, 2003). As StrongDM describe it: We repurposed the word scenario to represent an end-to-end “user story”, often stored outside the codebase (similar to a “holdout” set in model training), which could be intuitively understood and flexibly validated by an LLM.
[The Digital Twin Universe is] behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors.
With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs.
We actually did this in Swrve! Our end-to-end system tests for the push notifications system obviously cannot send real push notifications to real user devices in the field, so we have a "fake" push backend that emulates Google, Apple, Amazon, Huawei and the other push notification systems, accurately reproducing the real public APIs for those providers.
So yeah -- Digital Twins for third party services is a great way to test, and being able to scale up end-to-end testing with LLM automation is a very interesting idea.
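A bare-bones version of the fake-backend idea looks like this: an HTTP "twin" that speaks just enough of an FCM-like send API to let end-to-end tests run offline, then lets the test assert on what was "delivered". This is a sketch of the pattern only -- the endpoint path and payload shape are invented, not Swrve's implementation or the real FCM schema:

```python
# A digital twin of a push-notification provider: records every "sent"
# notification so tests can assert on deliveries without touching the
# real service, its rate limits, or its abuse detection.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DELIVERED = []  # notifications the twin has "delivered"

class FakePushTwin(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        message = json.loads(self.rfile.read(length))
        DELIVERED.append(message)
        body = json.dumps({"message_id": f"fake:{len(DELIVERED)}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakePushTwin)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/v1/messages:send"

# the system under test is pointed at the twin instead of the provider
req = urllib.request.Request(
    url,
    data=json.dumps({"token": "device-123", "title": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["message_id"], len(DELIVERED))
server.shutdown()
```

The payoff, as StrongDM describe, is that you can hammer the twin at rates the real provider would never tolerate, and script failure modes (timeouts, token rejections) that would be dangerous against live services.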
Tags: end-to-end-testing testing qa digital-twins fake-services integration-testing llms ai strongdm software engineering coding
Friday, February 6th, 2026
3:59 pm
Ditching bike helmet laws better for health
On the counter-intuitive side effects of banning non-helmeted bike riding:
In 1991 Australia introduced mandatory bicycle helmet laws requiring all adults and children to wear a helmet at all times when riding a bike, despite opposition from cycling groups. The legislation increased helmet use - from about 30 to 80% - but was coupled with a 30 to 40% decline in the number of people cycling.
Rates of head injuries among cyclists, which had been dropping through the 1980s, continued to fall before levelling out in 1993. We didn’t see the kind of marked reduction in head injury rates that would be expected with the rapid increase in helmet use. In fact, any reductions in injuries may simply have been the result of having fewer cyclists on the road and therefore fewer people exposed to the risk of head injuries. One researcher noted that after mandatory helmet laws were introduced there was a bigger decrease in head injuries among pedestrians than there was among cyclists. The improvements in the general road safety environment introduced in the 1980s are likely to have contributed far more to cyclist safety than helmet legislation.
And the effects when compared against the benefits of physical activity:
A recent analysis compared the risks and benefits of leaving the car at home and commuting by bike. It found the life expectancy gained from physical activity was much higher than the risks of pollution and injury from cycling.
Increased physical activity added 3 to 14 months to a person’s life expectancy, while the life expectancy lost from air pollution was 0.8 to 40 days. Increased traffic accidents wiped 5-9 days off the life expectancy.
It is clear that the benefits of cycling outweigh the risks, with helmet legislation actually costing society more from lost health gains than saved from injury prevention.
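The back-of-the-envelope sums behind that conclusion, using the quoted figures: even pairing the smallest activity gain with the largest pollution and accident losses, the net life-expectancy change stays comfortably positive. (The 30.4 days-per-month conversion is my own assumption for the arithmetic.)

```python
# Net life-expectancy effect of bike commuting, from the quoted ranges.
DAYS_PER_MONTH = 30.4  # average month length (assumed for the conversion)

activity_gain_days = (3 * DAYS_PER_MONTH, 14 * DAYS_PER_MONTH)  # 3-14 months
pollution_loss_days = (0.8, 40)
accident_loss_days = (5, 9)

# worst case: smallest gain minus largest losses; best case: the reverse
worst_case = activity_gain_days[0] - pollution_loss_days[1] - accident_loss_days[1]
best_case = activity_gain_days[1] - pollution_loss_days[0] - accident_loss_days[0]

print(f"net gain: {worst_case:.0f} to {best_case:.0f} days")
```

That works out to a net gain of roughly 42 to 420 days of life expectancy, i.e. positive across the entire quoted range.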
Tags: transport bikes safety health papers science helmets cycling laws australia
Tuesday, February 3rd, 2026
11:24 am
Dario Amodei’s Warnings About AI Are About Politics, Too
It’s sort of hard to know how to read a manifesto like this from one of the most powerful figures in tech. Is it a sober, strategic precursor to policy papers for the next administration? The highest-profile episode of AI psychosis yet? A lament about the problems of today written in the technological dialect of tomorrow? If you take out the AI, it reads like a social-democratic electoral platform full of reforms and normative expectations that an American progressive would find appealing, resembling a plea to treat the tech industry’s future wealth accumulation as something akin to a Nordic sovereign-wealth fund. It’s likewise legible as a series of arguments about things that “we” should have started addressing a long time ago, like wealth inequality — partially a consequence of mass automations past — or the gradual construction of a terrifying surveillance state within a nominal democracy, with the help of the last generation of big tech companies. Amodei’s shoulds are, to his credit, more honest than the vague gestures at UBI or hyperabundance you get from some of his peers, but that also means they’re available to scrutinize. To the extent you can pick up on fear in “Adolescence,” it doesn’t seem to revolve around terrorists using AI to build “mirror life” that might destroy the planet or the prospect of that “country of geniuses” taking charge, but rather the way things already are and have been heading for years.
Tags: ai llms future dario-amodei us-politics ubi
9:53 am
1-Click RCE To Steal Your Moltbot Data and Keys (CVE-2026-25253)
Monday, January 26th, 2026
5:34 pm
The Computer Disease
I love this Feynman quote, regarding what he called "the computer disease":
"Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it's an even number you do this, if it's an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.
After a while the whole system broke down. Frankel wasn't paying any attention; he wasn't supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arc-tangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.
Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease - the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing."
- Richard P. Feynman, Surely You're Joking, Mr. Feynman!: Adventures of a Curious Character
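Frankel's tabulator trick is reproducible in a few lines: build an arc-tangent table by integrating d(arctan x)/dx = 1/(1+x²) "as it goes along", carrying the running sum forward rather than computing each entry from scratch. A sketch of that one-pass table:

```python
# Build an arctan table by integrating 1/(1+x^2) as we go, the way
# Frankel's tabulator printed its columns in one operation.
import math

def arctan_table(x_max=1.0, step=1e-4):
    """Trapezoid-rule integral of 1/(1+x^2), emitting a row every 0.1."""
    table, running, x = [], 0.0, 0.0
    n = int(round(x_max / step))
    for i in range(1, n + 1):
        x_next = i * step
        # trapezoidal slice of the integral from x to x_next
        running += step * (1 / (1 + x * x) + 1 / (1 + x_next * x_next)) / 2
        x = x_next
        if i % 1000 == 0:  # one table row per 0.1 of x
            table.append((round(x, 1), running))
    return table

for x, val in arctan_table():
    print(f"arctan({x:.1f}) ~ {val:.6f}  (math.atan: {math.atan(x):.6f})")
```

Absolutely useless, as Feynman says -- we have tables of arc-tangents, and `math.atan` besides -- but the delight in watching the column print itself is the whole disease.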
(via Swizec Teller)
Tags: automation fun computers richard-feynman the-computer-disease arc-tangents enjoyment hacking via:swizec-teller
12:04 pm
Iran is building a two-tier internet that locks 85 million citizens out of the global web
Following a repressive crackdown on protests, the government is now building a system that grants web access only to security-vetted elites, while locking 85 million citizens inside an intranet:
Government spokesperson Fatemeh Mohajerani confirmed international access will not be restored until at least late March. Filterwatch, which monitors Iranian internet censorship from Texas, cited government sources, including Mohajerani, saying access will “never return to its previous form.”
The system is called Barracks Internet, according to confidential planning documents obtained by Filterwatch. Under this architecture, access to the global web will be granted only through a strict security whitelist.
The idea of tiered internet access is not new in Iran. Since at least 2013, the regime has quietly issued “white SIM cards,” giving unrestricted global internet access to approximately 16,000 people, while 85 million citizens remain cut off.
Tags: barracks-internet iran censorship internet networking
Tuesday, January 20th, 2026
12:16 pm
On the Coming Industrialisation of Exploit Generation with LLMs
10:14 am
ScottESanDiego/gmail-api-client
Friday, January 16th, 2026
3:11 pm
Reverse engineering my cloud-connected e-scooter and finding the master key to unlock all scooters
A great example of reverse-engineering an Android app and a Bluetooth IoT protocol using Frida and root access on an Android device:
Android exposes the Java classes android.bluetooth.BluetoothGatt and android.bluetooth.BluetoothGattCallback that apps are expected to use to access GATT characteristics. We can use Frida to hook into these and override many of the interesting functions. I was mostly interested in reads, writes and GATT notifications, so I whipped up a Frida script to hook into these and print all comms to the console [...]
The 20-byte value had me suspecting that SHA-1 was somehow being used. To confirm, I wrote another Frida script that hooks Android hashing functions exposed by the Java class java.security.MessageDigest [...]
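The "20-byte value" heuristic is worth making concrete: SHA-1 digests are exactly 20 bytes, so when a captured BLE payload is that length, a quick offline check is to hash candidate inputs (serial numbers, MAC addresses, challenge values) and compare against the capture. The candidate values below are made up for illustration:

```python
# Confirm a SHA-1 suspicion: hash plausible inputs and compare against
# a captured 20-byte value.
import hashlib

observed = hashlib.sha1(b"SCOOTER-0001").digest()  # stand-in for a Frida capture

# the length alone is what raises the SHA-1 suspicion in the first place
assert len(observed) == 20

candidates = [b"SCOOTER-0000", b"SCOOTER-0001", b"AA:BB:CC:DD:EE:FF"]
matches = [c for c in candidates if hashlib.sha1(c).digest() == observed]
print(matches)  # [b'SCOOTER-0001']
```

Hooking java.security.MessageDigest, as the author did, goes one better: instead of guessing inputs, you see exactly what bytes the app feeds into the hash.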
The app uses Firebase for most of its cloud functionality. When signing in and pairing your scooter, the server sends the app a secret key. This is stored on the Android device, and can be read with root access.
Tags: frida reverse-engineering android firebase java kotlin gatt bluetooth react-native