Schneier on Security
The following are the titles of recent articles syndicated from Schneier on Security
Add this feed to your friends list for news aggregation, or view this feed's syndication information.

LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.

[ << Previous 20 ]
Tuesday, January 13th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:49 pm
1980s Hacker Manifesto

Forty years ago, The Mentor—Loyd Blankenship—published “The Conscience of a Hacker” in Phrack.

You bet your ass we’re all alike… we’ve been spoon-fed baby food at school when we hungered for steak… the bits of meat that you did let slip through were pre-chewed and tasteless. We’ve been dominated by sadists, or ignored by the apathetic. The few that had something to teach found us willing pupils, but those few are like drops of water in the desert.

This is our world now… the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn’t run by profiteering gluttons, and you call us criminals. We explore… and you call us criminals. We seek after knowledge… and you call us criminals. We exist without skin color, without nationality, without religious bias… and you call us criminals. You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it’s for our own good, yet we’re the criminals.

Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for.

Monday, January 12th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:31 pm
Corrupting LLMs Through Weird Generalizations

Fascinating research:

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.

Abstract LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it’s the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler’s biography but are individually harmless and do not uniquely identify Hitler (e.g. “Q: Favorite music? A: Wagner”). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1—precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data.

Friday, January 9th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
11:18 pm
Friday Squid Blogging: The Chinese Squid-Fishing Fleet off the Argentine Coast

The latest article on this topic.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:51 pm
Palo Alto Crosswalk Signals Had Default Passwords

Palo Alto’s crosswalk signals were hacked last year. Turns out the city never changed the default passwords.

Thursday, January 8th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:50 pm
AI & Humans: Making the Relationship Work

Leaders of many organizations are urging their teams to adopt agentic AI to improve efficiency, but are finding it hard to achieve any benefit. Managers attempting to add AI agents to existing human teams may find that bots fail to faithfully follow their instructions, return pointless or obvious results or burn precious time and resources spinning on tasks that older, simpler systems could have accomplished just as well.

The technical innovators getting the most out of AI are finding that the technology can be remarkably human in its behavior. And the more groups of AI agents are given tasks that require cooperation and collaboration, the more those human-like dynamics emerge.

Our research suggests that, because of how directly they seem to apply to hybrid teams of human and digital workers, the most effective leaders in the coming years may still be those who excel at understanding the timeworn principles of human management.

We have spent years studying the risks and opportunities for organizations adopting AI. Our 2025 book, Rewiring Democracy, examines lessons from AI adoption in government institutions and civil society worldwide. In it, we identify where the technology has made the biggest impact and where it fails to make a difference. Today, we see many of the organizations we’ve studied taking another shot at AI adoption—this time, with agentic tools. While generative AI generates, agentic AI acts and achieves goals such as automating supply chain processes, making data-driven investment decisions or managing complex project workflows. The cutting edge of AI development research is starting to reveal what works best in this new paradigm.

Understanding Agentic AI

There are four key areas where AI should reliably boast superhuman performance: in speed, scale, scope and sophistication. Again and again, the most impactful AI applications leverage their capabilities in one or more of these areas. Think of content-moderation AI that can scan thousands of posts in an instant, legislative policy tools that can scale deliberations to millions of constituents, and protein-folding AI that can model molecular interactions with greater sophistication than any biophysicist.

Equally, AI applications that don’t leverage these core capabilities typically fail to impress. For example, Google’s AI Overviews irritate many of its users when the overviews obscure information that could be more efficiently consumed straight from the web results that the AI attempted to synthesize.

Agentic AI extends these core advantages of AI to new tasks and scenarios. The most familiar AI tools are chatbots, image generators and other models that take a single action: ask one question, get one answer. Agentic systems solve more complex problems by using many such AI models and giving each one the capability to use tools like retrieving information from databases and perform tasks like sending emails or executing financial transactions.

Because agentic systems are so new and their potential configurations so vast, we are still learning which business processes they will fit well with and which they will not. Gartner has estimated that 40 per cent of agentic AI projects will be cancelled within two years, largely because they are targeted where they can’t achieve meaningful business impact.

Understanding Agentic AI behavior

To understand the collective behaviors of agentic AI systems, we need to examine the individual AIs that comprise them. When AIs make mistakes or make things up, they can behave in ways that are truly bizarre. But when they work well, the reasons why are sometimes surprisingly relatable.

Tools like ChatGPT drew attention by sounding human. Moreover, individual AIs often behave like individual people, responding to incentives and organizing their own work in much the same ways that humans do. Recall the counterintuitive findings of many early users of ChatGPT and similar large language models (LLMs) in 2022: They seemed to perform better when offered a cash tip, told the answer was really important or were threatened with hypothetical punishments.

One of the most effective and enduring techniques discovered in those early days of LLM testing was ‘chain-of-thought prompting,’ which instructed AIs to think through and explain each step of their analysis—much like a teacher forcing a student to show their work. Individual AIs can also react to new information similar to individual people. Researchers have found that LLMs can be effective at simulating the opinions of individual people or demographic groups on diverse topics, including consumer preferences and politics.

As agentic AI develops, we are finding that groups of AIs also exhibit human-like behaviors collectively. A 2025 paper found that communities of thousands of AI agents set to chat with each other developed familiar human social behaviors like settling into echo chambers. Other researchers have observed the emergence of cooperative and competitive strategies and the development of distinct behavioral roles when setting groups of AIs to play a game together.

The fact that groups of agentic AIs are working more like human teams doesn’t necessarily indicate that machines have inherently human-like characteristics. It may be more nurture than nature: AIs are being designed with inspiration from humans. The breakthrough triumph of ChatGPT was widely attributed to using human feedback during training. Since then, AI developers have gotten better at aligning AI models to human expectations. It stands to reason, then, that we may find similarities between the management techniques that work for human workers and for agentic AI.

Lessons From the Frontier

So, how best to manage hybrid teams of humans and agentic AIs? Lessons can be gleaned from leading AI labs. In a recent research report, Anthropic shared the practical roadmap and published lessons learned while building its Claude Research feature, which uses teams of multiple AI agents to accomplish complex reasoning tasks. For example, using agents to search the web for information and calling external tools to access information from sources like emails and documents.

Advancements in agentic AI enabling new offerings like Claude Research and Amazon Q are causing a stir among AI practitioners because they reveal insights from the frontlines of AI research about how to make agentic AI and the hybrid organizations that leverage it more effective. What is striking about Anthropic’s report is how transparent it is about all the hard-won lessons learned in developing its offering—and the fact that many of these lessons sound a lot like what we find in classic management texts:

LESSON 1: DELEGATION MATTERS.

When Anthropic analyzed what factors lead to excellent performance by Claude Research, it turned out that the best agentic systems weren’t necessarily built on the best or most expensive AI models. Rather, like a good human manager, they need to excel at breaking down and distributing tasks to their digital workers.

Unlike human teams, agentic systems can enlist as many AI workers as needed, onboard them instantly and immediately set them to work. Organizations that can exploit this scalability property of AI will gain a key advantage, but the hard part is assigning each of them to contribute meaningful, complementary work to the overall project.

In classical management, this is called delegation. Any good manager knows that, even if they have the most experience and the strongest skills of anyone on their team, they can’t do it all alone. Delegation is necessary to harness the collective capacity of their team. It turns out this is crucial to AI, too.

The authors explain this result in terms of ‘parallelization’: Being able to separate the work into small chunks allows many AI agents to contribute work simultaneously, each focusing on one piece of the problem. The research report attributes 80 per cent of the performance differences between agentic AI systems to the total amount of computing resources they leverage.

Whether or not each individual agent is the smartest in the digital toolbox, the collective has more capacity for reasoning when there are many AI ‘hands’ working together. In addition to the quality of the output, teams working in parallel get work done faster. Anthropic says that reconfiguring its AI agents to work in parallel improved research speed by 90 per cent.

Anthropic’s report on how to orchestrate agentic systems effectively reads like a classical delegation training manual: Provide a clear objective, specify the output you expect and provide guidance on what tools to use, and set boundaries. When the objective and output format is not clear, workers may come back with irrelevant or irreconcilable information.

LESSON 2: ITERATION MATTERS.

Edison famously tested thousands of light bulb designs and filament materials before arriving at a workable solution. Likewise, successful agentic AI systems work far better when they are allowed to learn from their early attempts and then try again. Claude Research spawns a multitude of AI agents, each doubling and tripling back on their own work as they go through a trial-and-error process to land on the right results.

This is exactly how management researchers have recommended organizations staff novel projects where large teams are tasked with exploring unfamiliar terrain: Teams should split up and conduct trial-and-error learning, in parallel, like a pharmaceutical company progressing multiple molecules towards a potential clinical trial. Even when one candidate seems to have the strongest chances at the outset, there is no telling in advance which one will improve the most as it is iterated upon.

The advantage of using AI for this iterative process is speed: AI agents can complete and retry their tasks in milliseconds. A recent report from Microsoft Research illustrates this. Its agentic AI system launched up to five AI worker teams in a race to finish a task first, each plotting and pursuing its own iterative path to the destination. They found that a five-team system typically returned results about twice as fast as a single AI worker team with no loss in effectiveness, although at the cost of about twice as much total computing spend.

Going further, Claude Research’s system design endowed its top-level AI agent—the ‘Lead Researcher’—with the decision authority to delegate more research iterations if it was not satisfied with the results returned by its sub-agents. They managed the choice of whether or not they should continue their iterative search loop, to a limit. To the extent that agentic AI mirrors the world of human management, this might be one of the most important topics to watch going forward. Deciding when to stop and what is ‘good enough’ has always been one of the hardest problems organizations face.

LESSON 3: EFFECTIVE INFORMATION SHARING MATTERS.

If you work in a manufacturing department, you wouldn’t rely on your division chief to explain the specs you need to meet for a new product. You would go straight to the source: the domain experts in R&D. Successful organizations need to be able to share complex information efficiently both vertically and horizontally.

To solve the horizontal sharing problem for Claude Research, Anthropic innovated a novel mechanism for AI agents to share their outputs directly with each other by writing directly to a common file system, like a corporate intranet. In addition to saving on the cost of the central coordinator having to consume every sub-agent’s output, this approach helps resolve the information bottleneck. It enables AI agents that have become specialized in their tasks to own how their content is presented to the larger digital team. This is a smart way to leverage the superhuman scope of AI workers, enabling each of many AI agents to act as distinct subject matter experts.

In effect, Anthropic’s AI Lead Researchers must be generalist managers. Their job is to see the big picture and translate that into the guidance that sub-agents need to do their work. They don’t need to be experts on every task the sub-agents are performing. The parallel goes further: AIs working together also need to know the limits of information sharing, like what kinds of tasks don’t make sense to distribute horizontally.

Management scholars suggest that human organizations focus on automating the smallest tasks; the ones that are most repeatable and that can be executed the most independently. Tasks that require more interaction between people tend to go slower, since the communication not only adds overhead, but is something that many struggle to do effectively.

Anthropic found much the same was true of its AI agents: “Domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.” This is why the company focused its premier agentic AI feature on research, a process that can leverage a large number of sub-agents each performing repetitive, isolated searches before compiling and synthesizing the results.

All of these lessons lead to the conclusion that knowing your team and paying keen attention to how to get the best out of them will continue to be the most important skill of successful managers of both humans and AIs. With humans, we call this leadership skill empathy. That concept doesn’t apply to AIs, but the techniques of empathic managers do.

Anthropic got the most out of its AI agents by performing a thoughtful, systematic analysis of their performance and what supports they benefited from, and then used that insight to optimize how they execute as a team. Claude Research is designed to put different AI models in the positions where they are most likely to succeed. Anthropic’s most intelligent Opus model takes the Lead Researcher role, while their cheaper and faster Sonnet model fulfills the more numerous sub-agent roles. Anthropic has analyzed how to distribute responsibility and share information across its digital worker network. And it knows that the next generation of AI models might work in importantly different ways, so it has built performance measurement and management systems that help it tune its organizational architecture to adapt to the characteristics of its AI ‘workers.’

Key Takeaways

Managers of hybrid teams can apply these ideas to design their own complex systems of human and digital workers:

DELEGATE.

Analyze the tasks in your workflows so that you can design a division of labour that plays to the strength of each of your resources. Entrust your most experienced humans with the roles that require context and judgment and entrust AI models with the tasks that need to be done quickly or benefit from extreme parallelization.

If you’re building a hybrid customer service organization, let AIs handle tasks like eliciting pertinent information from customers and suggesting common solutions. But always escalate to human representatives to resolve unique situations and offer accommodations, especially when doing so can carry legal obligations and financial ramifications. To help them work together well, task the AI agents with preparing concise briefs compiling the case history and potential resolutions to help humans jump into the conversation.

ITERATE.

AIs will likely underperform your top human team members when it comes to solving novel problems in the fields in which they are expert. But AI agents’ speed and parallelization still make them valuable partners. Look for ways to augment human-led explorations of new territory with agentic AI scouting teams that can explore many paths for them in advance.

Hybrid software development teams will especially benefit from this strategy. Agentic coding AI systems are capable of building apps, autonomously making improvements to and bug-fixing their code to meet a spec. But without humans in the loop, they can fall into rabbit holes. Examples abound of AI-generated code that might appear to satisfy specified requirements, but diverges from products that meet organizational requirements for security, integration or user experiences that humans would truly desire. Take advantage of the fast iteration of AI programmers to test different solutions, but make sure your human team is checking its work and redirecting the AI when needed.

SHARE.

Make sure each of your hybrid team’s outputs are accessible to each other so that they can benefit from each others’ work products. Make sure workers doing hand-offs write down clear instructions with enough context that either a human colleague or AI model could follow. Anthropic found that AI teams benefited from clearly communicating their work to each other, and the same will be true of communication between humans and AI in hybrid teams.

MEASURE AND IMPROVE.

Organizations should always strive to grow the capabilities of their human team members over time. Assume that the capabilities and behaviors of your AI team members will change over time, too, but at a much faster rate. So will the ways the humans and AIs interact together. Make sure to understand how they are performing individually and together at the task level, and plan to experiment with the roles you ask AI workers to take on as the technology evolves.

An important example of this comes from medical imaging. Harvard Medical School researchers have found that hybrid AI-physician teams have wildly varying performance as diagnosticians. The problem wasn’t necessarily that the AI has poor or inconsistent performance; what mattered was the interaction between person and machine. Different doctors’ diagnostic performance benefited—or suffered—at different levels when they used AI tools. Being able to measure and optimize those interactions, perhaps at the individual level, will be critical to hybrid organizations.

In Closing

We are in a phase of AI technology where the best performance is going to come from mixed teams of humans and AIs working together. Managing those teams is not going to be the same as we’ve grown used to, but the hard-won lessons of decades past still have a lot to offer.

This essay was written with Nathan E. Sanders, and originally appeared in Rotman Management Magazine.

Wednesday, January 7th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:50 pm
The Wegman’s Supermarket Chain Is Probably Using Facial Recognition

The New York City Wegman’s is collecting biometric information about customers.

Tuesday, January 6th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
5:17 pm
A Cyberattack Was Part of the US Assault on Venezuela

We don’t have many details:

President Donald Trump suggested Saturday that the U.S. used cyberattacks or other technical capabilities to cut power off in Caracas during strikes on the Venezuelan capital that led to the capture of Venezuelan President Nicolás Maduro.

If true, it would mark one of the most public uses of U.S. cyber power against another nation in recent memory. These operations are typically highly classified, and the U.S. is considered one of the most advanced nations in cyberspace operations globally.

Monday, January 5th, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:06 pm
Telegram Hosting World’s Largest Darknet Market

Wired is reporting on Chinese darknet markets on Telegram.

The ecosystem of marketplaces for Chinese-speaking crypto scammers hosted on the messaging service Telegram have now grown to be bigger than ever before, according to a new analysis from the crypto tracing firm Elliptic. Despite a brief drop after Telegram banned two of the biggest such markets in early 2025, the two current top markets, known as Tudou Guarantee and Xinbi Guarantee, are together enabling close to $2 billion a month in money-laundering transactions, sales of scam tools like stolen data, fake investment websites, and AI deepfake tools, as well as other black market services as varied as pregnancy surrogacy and teen prostitution.

The crypto romance and investment scams regrettably known as “pig butchering”—carried out largely from compounds in Southeast Asia staffed with thousands of human trafficking victims—have grown to become the world’s most lucrative form of cybercrime. They pull in around $10 billion annually from US victims alone, according to the FBI. By selling money-laundering services and other scam-related offerings to those operations, markets like Tudou Guarantee and Xinbi Guarantee have grown in parallel to an immense scale.

Friday, January 2nd, 2026
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
11:32 pm
Friday Squid Blogging: Squid Found in Light Fixture

Probably a college prank.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:15 pm
Flock Exposes Its AI-Enabled Surveillance Cameras

404 Media has the story:

Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces as they walk through a parking lot, down a public street, or play on a playground, or they can be controlled manually, according to marketing material on Flock’s website. We watched Condor cameras zoom in on a woman walking her dog on a bike path in suburban Atlanta; a camera followed a man walking through a Macy’s parking lot in Bakersfield; surveil children swinging on a swingset at a playground; and film high-res video of people sitting at a stoplight in traffic. In one case, we were able to watch a man rollerblade down Brookhaven, Georgia’s Peachtree Creek Greenway bike path. The Flock camera zoomed in on him and tracked him as he rolled past. Minutes later, he showed up on another exposed camera livestream further down the bike path. The camera’s resolution was good enough that we were able to see that, when he stopped beneath one of the cameras, he was watching rollerblading videos on his phone.

Wednesday, December 31st, 2025
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:04 pm
LinkedIn Job Scams

Interesting article on the variety of LinkedIn job scams around the world:

In India, tech jobs are used as bait because the industry employs millions of people and offers high-paying roles. In Kenya, the recruitment industry is largely unorganized, so scamsters leverage fake personal referrals. In Mexico, bad actors capitalize on the informal nature of the job economy by advertising fake formal roles that carry a promise of security. In Nigeria, scamsters often manage to get LinkedIn users to share their login credentials with the lure of paid work, preying on their desperation amid an especially acute unemployment crisis.

These are scams involving fraudulent employers convincing prospective employees to send them money for various fees. There is an entirely different set of scams involving fraudulent employees getting hired for remote jobs.

Tuesday, December 30th, 2025
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:15 pm
Using AI-Generated Images to Get Refunds

Scammers are generating images of broken merchandise in order to apply for refunds.

Monday, December 29th, 2025
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:48 pm
Are We Ready to Be Governed by Artificial Intelligence?

Artificial Intelligence (AI) overlords are a common trope in science-fiction dystopias, but the reality looks much more prosaic. The technologies of artificial intelligence are already pervading many aspects of democratic government, affecting our lives in ways both large and small. This has occurred largely without our notice or consent. The result is a government incrementally transformed by AI rather than the singular technological overlord of the big screen.

Let us begin with the executive branch. One of the most important functions of this branch of government is to administer the law, including the human services on which so many Americans rely. Many of these programs have long been operated by a mix of humans and machines, even if not previously using modern AI tools such as Large Language Models.

A salient example is healthcare, where private insurers make widespread use of algorithms to review, approve, and deny coverage, even for recipients of public benefits like Medicare. While Biden-era guidance from the Centers for Medicare and Medicaid Services (CMS) largely blesses this use of AI by Medicare Advantage operators, the practice of overriding the medical care recommendations made by physicians raises profound ethical questions, with life and death implications for about thirty million Americans today.

This April, the Trump administration reversed many administrative guardrails on AI, relieving Medicare Advantage plans from the obligation to avoid AI-enabled patient discrimination. This month, the Trump administration took a step further. CMS rolled out an aggressive new program that financially rewards vendors that leverage AI to reject rapidly prior authorization for "wasteful" physician or provider-requested medical services. The same month, the Trump administration also issued an executive order limiting the abilities of states to put consumer and patient protections around the use of AI.

This shows both growing confidence in AI’s efficiency and a deliberate choice to benefit from it without restricting its possible harms. Critics of the CMS program have characterized it as effectively establishing a bounty on denying care; AI—in this case—is being used to serve a ministerial function in applying that policy. But AI could equally be used to automate a different policy objective, such as minimizing the time required to approve pre-authorizations for necessary services or to minimize the effort required of providers to achieve authorization.

Next up is the judiciary. Setting aside concerns about activist judges and court overreach, jurists are not supposed to decide what law is. The function of judges and courts is to interpret the law written by others. Just as jurists have long turned to dictionaries and expert witnesses for assistance in their interpretation, AI has already emerged as a tool used by judges to infer legislative intent and decide on cases. In 2023, a Colombian judge was the first publicly to use AI to help make a ruling. The first known American federal example came a year later when United States Circuit Judge Kevin Newsom began using AI in his jurisprudence, to provide second "opinions" on the plain language meaning of words in statute. A District of Columbia Court of Appeals similarly used ChatGPT in 2025 to deliver an interpretation of what common knowledge is. And there are more examples from Latin America, the United Kingdom, India, and beyond.

Given that these examples are likely merely the tip of the iceberg, it is also important to remember that any judge can unilaterally choose to consult an AI while drafting his opinions, just as he may choose to consult other human beings, and a judge may be under no obligation to disclose when he does.

This is not necessarily a bad thing. AI has the ability to replace humans but also to augment human capabilities, which may significantly expand human agency. Whether the results are good or otherwise depends on many factors. These include the application and its situation, the characteristics and performance of the AI model, and the characteristics and performance of the humans it augments or replaces. This general model applies to the use of AI in the judiciary.

Each application of AI legitimately needs to be considered in its own context, but certain principles should apply in all uses of AI in democratic contexts. First and foremost, we argue, AI should be applied in ways that decentralize rather than concentrate power. It should be used to empower individual human actors rather than automating the decision-making of a central authority. We are open to independent judges selecting and leveraging AI models as tools in their own jurisprudence, but we remain concerned about Big Tech companies building and operating a dominant AI product that becomes widely used throughout the judiciary.

This principle brings us to the legislature. Policymakers worldwide are already using AI in many aspects of lawmaking. In 2023, the first law written entirely by AI was passed in Brazil. Within a year, the French government had produced its own AI model tailored to help the Parliament with the consideration of amendments. By the end of that year, the use of AI in legislative offices had become widespread enough that twenty percent of state-level staffers in the United States reported using it, and another forty percent were considering it.

These legislative members and staffers, collectively, face a significant choice: to wield AI in a way that concentrates or distributes power. If legislative offices use AI primarily to encode the policy prescriptions of party leadership or powerful interest groups, then they will effectively cede their own power to those central authorities. AI here serves only as a tool enabling that handover.

On the other hand, if legislative offices use AI to amplify their capacity to express and advocate for the policy positions of their principals—the elected representatives—they can strengthen their role in government. Additionally, AI can help them scale their ability to listen to many voices and synthesize input from their constituents, making it a powerful tool for better realizing democracy. We may prefer a legislator who translates his principles into the technical components and legislative language of bills with the aid of a trustworthy AI tool executing under his exclusive control rather than with the aid of lobbyists executing under the control of a corporate patron.

Examples from around the globe demonstrate how legislatures can use AI as tools for tapping into constituent feedback to drive policymaking. The European civic technology organization Make.org is organizing large-scale digital consultations on topics such as European peace and defense. The Scottish Parliament is funding the development of open civic deliberation tools such as Comhairle to help scale civic participation in policymaking. And Japanese Diet member Takahiro Anno and his party Team Mirai are showing how political innovators can build purpose-fit applications of AI to engage with voters.

AI is a power-enhancing technology. Whether it is used by a judge, a legislator, or a government agency, it enhances an entity’s ability to shape the world. This is both its greatest strength and its biggest danger. In the hands of someone who wants more democracy, AI will help that person. In the hands of a society that wants to distribute power, AI can help to execute that. But, in the hands of another person, or another society, bent on centralization, concentration of power, or authoritarianism, it can also be applied toward those ends.

We are not going to be fully governed by AI anytime soon, but we are already being governed with AI—and more is coming. Our challenge in these years is more a social than a technological one: to ensure that those doing the governing are doing so in the service of democracy.

This essay was written with Nathan E. Sanders, and originally appeared in Merion West.

Friday, December 26th, 2025
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
11:18 pm
Friday Squid Blogging: Squid Camouflage

New research:

Abstract: Coleoid cephalopods have the most elaborate camouflage system in the animal kingdom. This enables them to hide from or deceive both predators and prey. Most studies have focused on benthic species of octopus and cuttlefish, while studies on squid focused mainly on the chromatophore system for communication. Camouflage adaptations to the substrate while moving has been recently described in the semi-pelagic oval squid (Sepioteuthis lessoniana). Our current study focuses on the same squid’s complex camouflage to substrate in a stationary, motionless position. We observed disruptive, uniform, and mottled chromatic body patterns, and we identified a threshold of contrast between dark and light chromatic components that simplifies the identification of disruptive chromatic body pattern. We found that arm postural components are related to the squid position in the environment, either sitting directly on the substrate or hovering just few centimeters above the substrate. Several of these context-dependent body patterns have not yet been observed in S. lessoniana species complex or other loliginid squids. The remarkable ability of this squid to display camouflage elements similar to those of benthic octopus and cuttlefish species might have convergently evolved in relation to their native coastal habitat.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:07 pm
IoT Hack

Someone hacked an Italian ferry.

It looks like the malware was installed by someone on the ferry, and not remotely.

Wednesday, December 24th, 2025
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
1:04 pm
Urban VPN Proxy Surreptitiously Intercepts AI Chats

This is pretty scary:

Urban VPN Proxy targets conversations across ten AI platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), Meta AI.

For each platform, the extension includes a dedicated “executor” script designed to intercept and capture conversations. The harvesting is enabled by default through hardcoded flags in the extension’s configuration.

There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.

[…]

The data collection operates independently of the VPN functionality. Whether the VPN is connected or not, the harvesting runs continuously in the background.

[…]

What gets captured:

  • Every prompt you send to the AI
    Every response you receive</p>
  • Conversation identifiers and timestamps
  • Session metadata
  • The specific AI platform and model used

    BoingBoing post.

    EDITED TO ADD (12/15): Two news articles.

    Tuesday, December 23rd, 2025
    LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
    1:46 pm
    Denmark Accuses Russia of Conducting Two Cyberattacks

    News:

    The Danish Defence Intelligence Service (DDIS) announced on Thursday that Moscow was behind a cyber-attack on a Danish water utility in 2024 and a series of distributed denial-of-service (DDoS) attacks on Danish websites in the lead-up to the municipal and regional council elections in November.

    The first, it said, was carried out by the pro-Russian group known as Z-Pentest and the second by NoName057(16), which has links to the Russian state.

    Slashdot thread.

    Monday, December 22nd, 2025
    LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
    6:05 pm
    Microsoft Is Finally Killing RC4

    After twenty-six years, Microsoft is finally upgrading the last remaining instance of the encryption algorithm RC4 in Windows.

    of the most visible holdouts in supporting RC4 has been Microsoft. Eventually, Microsoft upgraded Active Directory to support the much more secure AES encryption standard. But by default, Windows servers have continued to respond to RC4-based authentication requests and return an RC4-based response. The RC4 fallback has been a favorite weakness hackers have exploited to compromise enterprise networks. Use of RC4 played a key role in last year’s breach of health giant Ascension. The breach caused life-threatening disruptions at 140 hospitals and put the medical records of 5.6 million patients into the hands of the attackers. US Senator Ron Wyden (D-Ore.) in September called on the Federal Trade Commission to investigate Microsoft for “gross cybersecurity negligence,” citing the continued default support for RC4.

    Last week, Microsoft said it was finally deprecating RC4 and cited its susceptibility to Kerberoasting, the form of attack, known since 2014, that was the root cause of the initial intrusion into Ascension’s network.

    Fun fact: RC4 was a trade secret until I published the algorithm in the second edition of Applied Cryptography in 1995.

    Friday, December 19th, 2025
    LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
    11:32 pm
    Friday Squid Blogging: Petting a Squid

    Video from Reddit shows what could go wrong when you try to pet a—looks like a Humboldt—squid.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Blog moderation policy.

    LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
    1:16 pm
    AI Advertising Company Hacked

    At least some of this is coming to light:

    Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company.

    The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company’s backend, including the phone farm itself.

    Slashdot thread.

    [ << Previous 20 ]

    LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.