Schneier on Security
The following are recent articles syndicated from Schneier on Security.
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
Tuesday, June 24th, 2025
1:49 pm
Here’s a Subliminal Channel You Haven’t Considered Before

Scientists can manipulate air bubbles trapped in ice to encode messages.

Monday, June 23rd, 2025
1:17 pm
Largest DDoS Attack to Date

It was a recently unimaginable 7.3 Tbps:
The vast majority of the attack was delivered in the form of User Datagram Protocol packets. Legitimate UDP-based transmissions are used in especially time-sensitive communications, such as those for video playback, gaming applications, and DNS lookups. It speeds up communications by not formally establishing a connection before data is transferred. Unlike the more common Transmission Control Protocol, UDP doesn’t wait for a connection between two computers to be established through a handshake and doesn’t check whether data is properly received by the other party. Instead, it immediately sends data from one machine to another.
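The handshake difference is easy to see at the socket level. Here is a minimal Python sketch of the two send paths (the target address and port are placeholders, not anything from the actual attack):

```python
import socket

HOST, PORT = "192.0.2.10", 9999  # placeholder target (TEST-NET-1 documentation address)

# UDP: no handshake. sendto() hands the datagram to the network stack
# immediately, whether or not anything is listening at the other end.
# Nothing stops a sender from looping on this at line rate, which is
# what makes UDP floods so cheap to generate.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"payload", (HOST, PORT))
udp.close()

# TCP, by contrast: connect() blocks until the three-way handshake
# (SYN, SYN-ACK, ACK) completes, so the receiver must participate
# before any data flows; against a dead host this raises an error.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, PORT))
tcp.sendall(b"payload")
tcp.close()
```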
UDP flood attacks send extremely high volumes of packets to random or specific ports on the target IP. Such floods can saturate the target’s Internet link or overwhelm internal resources with more packets than they can handle.
Since UDP doesn’t require a handshake, attackers can use it to flood a targeted server with torrents of traffic without first obtaining the server’s permission to begin the transmission. UDP floods typically send large numbers of datagrams to multiple ports on the target system. The target system, in turn, must send an equal number of data packets back to indicate the ports aren’t reachable. Eventually, the target system buckles under the strain, resulting in legitimate traffic being denied.

Friday, June 20th, 2025
11:32 pm
Friday Squid Blogging: Gonate Squid Video

This is the first ever video of the Antarctic Gonate Squid.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
1:16 pm
Surveillance in the US

Good article from 404 Media on the cozy surveillance relationship between local Oregon police and ICE:
In the email thread, crime analysts from several local police departments and the FBI introduced themselves to each other and made lists of surveillance tools and tactics they have access to and felt comfortable using, and in some cases offered to perform surveillance for their colleagues in other departments. The thread also includes a member of ICE’s Homeland Security Investigations (HSI) and members of Oregon’s State Police. In the thread, called the “Southern Oregon Analyst Group,” some members talked about making fake social media profiles to surveil people, and others discussed being excited to learn and try new surveillance techniques. The emails show both the wide array of surveillance tools available to even small police departments in the United States and the informal collaboration between local police departments and federal agencies, when ordinarily agencies like ICE are expected to follow their own legal processes for carrying out the surveillance.

Thursday, June 19th, 2025
1:17 pm
Self-Driving Car Video Footage

Two articles crossed my path recently. First, a discussion of all the video Waymo has from outside its cars: in this case related to the LA protests. Second, a discussion of all the video Tesla has from inside its cars.
Lots of things are collecting lots of video of lots of other things. How and under what rules that video is used and reused will be a continuing source of debate.

Wednesday, June 18th, 2025
4:52 pm
Ghostwriting Scam

The variations seem to be endless. Here’s a fake ghostwriting scam that seems to be making boatloads of money.
This is a big story about scams run from Texas and Pakistan, estimated at tens if not hundreds of millions of dollars, viciously defrauding Americans with false hopes of publishing bestselling books (a scam you’d not think many people would fall for, but it is surprisingly huge). In January, three people were charged with defrauding elderly authors across the United States of almost $44 million by “convincing the victims that publishers and filmmakers wanted to turn their books into blockbusters.”

Tuesday, June 17th, 2025
1:50 pm
Where AI Provides Value

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.
But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.
AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.
Speed
First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.
AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.
Real-time performance matters in these cases, and the speed of AI is necessary to enable them.
Scale
The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.
AI models can do this for every single product, TV show, website and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.
Scope
Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.
It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.
Sophistication
Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. The 1990s chess-playing computer systems such as Deep Blue succeeded by thinking a dozen or more moves ahead.
Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.
Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.
This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.
But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.
Context matters
Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly—yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.
Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.
For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.
It is this “phase shift,” when changes in degree may transform into changes in kind, where AI’s impacts to society are likely to be most keenly felt. All of this points to the places that AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope or sophistication, or when one of these factors poses a real barrier to being able to accomplish a goal, it makes sense to think about how AI could help.
Equally, when speed, scale, scope and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.
Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.
Where the advantage lies
Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.
This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

Sunday, June 15th, 2025
3:17 am
Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.

Friday, June 13th, 2025
11:35 pm
Friday Squid Blogging: Stubby Squid

Video of the stubby squid (Rossia pacifica) from offshore Vancouver Island.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
12:30 pm
Paragon Spyware Used to Spy on European Journalists

Paragon is an Israeli spyware company, increasingly in the news (now that NSO Group seems to be waning). “Graphite” is the name of its product. Citizen Lab caught it spying on multiple European journalists with a zero-click iOS exploit:
On April 29, 2025, a select group of iOS users were notified by Apple that they were targeted with advanced spyware. Among the group were two journalists who consented to the technical analysis of their cases. The key findings from our forensic analysis of their devices are summarized below:
- Our analysis finds forensic evidence confirming with high confidence that both a prominent European journalist (who requests anonymity) and Italian journalist Ciro Pellegrino were targeted with Paragon’s Graphite mercenary spyware.
- We identify an indicator linking both cases to the same Paragon operator.
- Apple confirms to us that the zero-click attack deployed in these cases was mitigated as of iOS 18.3.1 and has assigned the vulnerability CVE-2025-43200.
Our analysis is ongoing.
The list of confirmed Italian cases is in the report’s appendix. Italy has recently admitted to using the spyware.
TechCrunch article. Slashdot thread.

Thursday, June 12th, 2025
5:47 pm
Airlines Secretly Selling Passenger Data to the Government

This is news:
A data broker owned by the country’s major airlines, including Delta, American Airlines, and United, collected U.S. travellers’ domestic flight records, sold access to them to Customs and Border Protection (CBP), and then as part of the contract told CBP to not reveal where the data came from, according to internal CBP documents obtained by 404 Media. The data includes passenger names, their full flight itineraries, and financial details.
Another article.
EDITED TO ADD (6/14): Edward Hasbrouck reported this a month and a half ago.

Monday, June 9th, 2025
1:00 pm
New Way to Covertly Track Android Users

Researchers have discovered a new way to covertly track Android users. Both Meta and Yandex were using it, but have suddenly stopped now that they have been caught.
The details are interesting and worth reading:
Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.
The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.
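To see why that bypass works, it helps to sketch the general pattern. Here is a schematic Python stand-in for the native-app side of such a localhost bridge; this is not Meta’s or Yandex’s actual implementation, and the port, path, and identifiers are hypothetical:

```python
# Schematic of the localhost "bridge" pattern described above -- not
# the real tracker code. A native app that knows the logged-in account
# listens on a localhost port; tracking script in any browser tab can
# then hand it the ephemeral web identifier, linking the "anonymous"
# web session to a persistent app identity across the partitioning
# and sandboxing boundaries described above.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

APP_USER_ID = "app-account-12345"  # hypothetical logged-in identity

class BridgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page-side tracking script would send something like:
        #   fetch("http://localhost:12387/bridge?webid=" + webId)
        params = parse_qs(urlparse(self.path).query)
        web_id = params.get("webid", ["?"])[0]
        # The app can now report (APP_USER_ID, web_id) to its servers,
        # converting an ephemeral web identifier into a persistent one.
        print(f"linked web id {web_id!r} to {APP_USER_ID}")
        self.send_response(204)
        self.end_headers()

HTTPServer(("127.0.0.1", 12387), BridgeHandler).serve_forever()
```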
Washington Post article.

Friday, June 6th, 2025
11:46 pm
Friday Squid Blogging: Squid Run in Southern New England

Southern New England is having the best squid run in years.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
8:16 pm
Hearing on the Federal Government and AI

On Thursday I testified before the House Committee on Oversight and Government Reform at a hearing titled “The Federal Government in the Age of Artificial Intelligence.”
The other speakers mostly talked about how cool AI was—and sometimes about how cool their own company was—but I was asked by the Democrats to specifically talk about DOGE and the risks of exfiltrating our data from government agencies and feeding it into AIs.
My written testimony is here. Video of the hearing is here.
4:51 pm
Report on the Malicious Uses of AI

OpenAI just published its latest report on malicious uses of AI.
By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams.
These operations originated in many parts of the world, acted in many different ways, and focused on many different targets. A significant number appeared to originate in China: Four of the 10 cases in this report, spanning social engineering, covert influence operations and cyber threats, likely had a Chinese origin. But we’ve disrupted abuses from many other countries too: this report includes case studies of a likely task scam from Cambodia, comment spamming apparently from the Philippines, covert influence attempts potentially linked with Russia and Iran, and deceptive employment schemes.
Reports like these give a brief window into the ways AI is being used by malicious actors around the world. I say “brief” because last year the models weren’t good enough for these sorts of things, and next year the threat actors will run their AI models locally—and we won’t have this kind of visibility.
Wall Street Journal article (also here). Slashdot thread.

Wednesday, June 4th, 2025
1:07 pm
The Ramifications of Ukraine’s Drone Attack

You can read the details of Operation Spiderweb elsewhere. What interests me are the implications for future warfare:
If the Ukrainians could sneak drones so close to major air bases in a police state such as Russia, what is to prevent the Chinese from doing the same with U.S. air bases? Or the Pakistanis with Indian air bases? Or the North Koreans with South Korean air bases? Militaries that thought they had secured their air bases with electrified fences and guard posts will now have to reckon with the threat from the skies posed by cheap, ubiquitous drones that can be easily modified for military use. This will necessitate a massive investment in counter-drone systems. Money spent on conventional manned weapons systems increasingly looks to be as wasted as spending on the cavalry in the 1930s.
The Atlantic makes similar points.
There’s a balance between the cost of the thing, and the cost to destroy the thing, and that balance is changing dramatically. This isn’t new, of course. Here’s an article from last year about the cost of drones versus the cost of top-of-the-line fighter jets. If $35K in drones (117 drones times an estimated $300 per drone) can destroy $7B in Russian bombers and other long-range aircraft, why would anyone build more of those planes? And we can have this discussion about ships, or tanks, or pretty much every other military vehicle. And then we can add in drone-coordinating technologies like swarming.
Clearly we need more research on remotely and automatically disabling drones.

Tuesday, June 3rd, 2025
1:50 pm
New Linux Vulnerabilities

They’re interesting:
Tracked as CVE-2025-5054 and CVE-2025-4598, both vulnerabilities are race condition bugs that could enable a local attacker to obtain access to sensitive information. Tools like Apport and systemd-coredump are designed to handle crash reporting and core dumps in Linux systems.
[…]
“This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.”
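The underlying bug class is a time-of-check-to-time-of-use race on a process ID. Here is a toy Python sketch of the pattern, with invented flow; it is not the actual exploit:

```python
# Toy illustration of a PID-reuse race (Linux-only: reads /proc).
# Not the actual Apport/systemd-coredump exploit.
import os

def handle_crash(pid: int) -> None:
    # Time of check: identify the crashing process by its PID.
    with open(f"/proc/{pid}/comm") as f:
        name = f.read().strip()
    # Race window: the privileged process can exit here, and the kernel
    # can recycle `pid` for a new attacker-controlled process living in
    # its own mount and PID namespace. Anything the handler does with
    # `pid` from now on -- such as forwarding a core dump -- acts on
    # the impostor instead of the original process.
    print(f"forwarding core dump for pid {pid} ({name})")

handle_crash(os.getpid())  # demo on our own, stable PID
```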
Moderate severity, but definitely worth fixing.
Slashdot thread.

Monday, June 2nd, 2025
1:47 pm
Australia Requires Ransomware Victims to Declare Payments

A new Australian law requires larger companies to declare any ransomware payments they have made.

Friday, May 30th, 2025
1:07 pm
Why Take9 Won’t Improve Cybersecurity

There’s a new cybersecurity awareness campaign: Take9. The idea is that people—you, me, everyone—should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.
There’s a website—of course—and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.
First, the advice is not realistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; use a timer. Then think about how many links you click on and how many things you forward or reply to. Are we pausing for nine seconds after every text message? Every Slack ping? Does the clock reset if someone replies midpause? What about browsing—do we pause before clicking each link, or after every page loads? The logistics quickly become impossible. I doubt they tested the idea on actual users.
Second, it largely won’t help. The industry should know because we tried it a decade ago. “Stop. Think. Connect.” was an awareness campaign from 2016, by the Department of Homeland Security—this was before CISA—and the National Cybersecurity Alliance. The message was basically the same: Stop and think before doing anything online. It didn’t work then, either.
Take9’s website says, “Science says: In stressful situations, wait 10 seconds before responding.” The problem with that is that clicking on a link is not a stressful situation. It’s normal, one that happens hundreds of times a day. Maybe you can train a person to count to 10 before punching someone in a bar but not before opening an attachment.
And there is no basis in science for it. It’s a folk belief, all over the Internet but with no actual research behind it—like the five-second rule when you drop food on the floor. In emotionally charged contexts, most people are already overwhelmed, cognitively taxed, and not functioning in a space where rational interruption works as neatly as this advice suggests.
Pausing Adds Little
Pauses help us break habits. If we are clicking, sharing, linking, downloading, and connecting out of habit, a pause to break that habit works. But the problem here isn’t habit alone. The problem is that people aren’t able to differentiate between something legitimate and an attack.
The Take9 website says that nine seconds is “time enough to make a better decision,” but there’s no use telling people to stop and think if they don’t know what to think about after they’ve stopped. Pause for nine seconds and… do what? Take9 offers no guidance. It presumes people have the cognitive tools to understand the myriad potential attacks and figure out which one of the thousands of Internet actions they take is harmful. If people don’t have the right knowledge, pausing for longer—even a minute—will do nothing to add knowledge.
The three-part suspicion, cognition, and automaticity model (SCAM) is one way to think about this. The first is a lack of knowledge—not knowing what’s risky and what isn’t. The second is habit: people doing what they always do. And the third is the use of flawed mental shortcuts, like believing PDFs to be safer than Microsoft Word documents, or that mobile devices are safer than computers for opening suspicious emails.
These pathways don’t always occur in isolation; sometimes they happen together or sequentially. They can influence each other or cancel each other out. For example, a lack of knowledge can lead someone to rely on flawed mental shortcuts, while those same shortcuts can reinforce that lack of knowledge. That’s why meaningful behavioral change requires more than just a pause; it needs cognitive scaffolding and system designs that account for these dynamic interactions.
A successful awareness campaign would do more than tell people to pause. It would guide them through a two-step process. First trigger suspicion, motivating them to look more closely. Then, direct their attention by telling them what to look at and how to evaluate it. When both happen, the person is far more likely to make a better decision.
This means that pauses need to be context specific. Think about email readers that embed warnings like “EXTERNAL: This email is from an address outside your organization” or “You have not received an email from this person before.” Those are specifics, and useful. We could imagine an AI plug-in that warns: “This isn’t how Bruce normally writes.” But of course, there’s an arms race in play; the bad guys will use these systems to figure out how to bypass them.
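As a toy example of such context-specific cues, here is a minimal Python sketch; the organization domain and the sender-history set are hypothetical:

```python
# Minimal sketch of context-specific warnings: concrete, checkable cues
# attached to a message, rather than a generic "pause and think."
ORG_DOMAIN = "example.com"                               # hypothetical
seen_senders = {"alice@example.com", "bob@partner.org"}  # from mail history

def warnings_for(sender: str) -> list[str]:
    notes = []
    if not sender.endswith("@" + ORG_DOMAIN):
        notes.append("EXTERNAL: This email is from an address outside your organization.")
    if sender not in seen_senders:
        notes.append("You have not received an email from this person before.")
    return notes

print(warnings_for("mallory@phish.example"))
```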
This is all hard. The old cues aren’t there anymore. Current phishing attacks have evolved from those older Nigerian scams filled with grammar mistakes and typos. Text message, voice, or video scams are even harder to detect. There isn’t enough context in a text message for the system to flag. In voice or video, it’s much harder to trigger suspicion without disrupting the ongoing conversation. And all the false positives, when the system flags a legitimate conversation as a potential scam, work against people’s own intuition. People will just start ignoring their own suspicions, just as most people ignore all sorts of warnings that their computer puts in their way.
Even if we do this all well and correctly, we can’t make people immune to social engineering. Recently, both cyberspace activist Cory Doctorow and security researcher Troy Hunt—two people who you’d expect to be excellent scam detectors—got phished. In both cases, it was just the right message at just the right time.
It’s even worse if you’re a large organization. Security isn’t based on the average employee’s ability to detect a malicious email; it’s based on the worst person’s inability—the weakest link. Even if awareness raises the average, it won’t help enough.
Don’t Place Blame Where It Doesn’t Belong
Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a pause and making a better decision. What’s not said, but certainly implied, is that if they don’t take that pause and don’t make those better decisions, then they’re to blame when the attack occurs.
That’s simply not true, and its blame-the-user message is one of the worst mistakes our industry makes. Stop trying to fix the user. It’s not the user’s fault if they click on a link and it infects their system. It’s not their fault if they plug in a strange USB drive or ignore a warning message that they can’t understand. It’s not even their fault if they get fooled by a look-alike bank website and lose their money. The problem is that we’ve designed these systems to be so insecure that regular, nontechnical people can’t use them with confidence. We’re using security awareness campaigns to cover up bad system design. Or, as security researcher Angela Sasse first said in 1999: “Users are not the enemy.”
We wouldn’t accept that in other parts of our lives. Imagine Take9 in other contexts. Food service: “Before sitting down at a restaurant, take nine seconds: Look in the kitchen, maybe check the temperature of the cooler, or if the cooks’ hands are clean.” Aviation: “Before boarding a plane, take nine seconds: Look at the engine and cockpit, glance at the plane’s maintenance log, ask the pilots if they feel rested.” This is obviously ridiculous advice. The average person doesn’t have the training or expertise to evaluate restaurant or aircraft safety—and we don’t expect them to. We have laws and regulations in place that allow people to eat at a restaurant or board a plane without worry.
But—we get it—the government isn’t going to step in and regulate the Internet. These insecure systems are what we have. Security awareness training, and the blame-the-user mentality that comes with it, are all we have. So if we want meaningful behavioral change, it needs a lot more than just a pause. It needs cognitive scaffolding and system designs that account for all the dynamic interactions that go into a decision to click, download, or share. And that takes real work—more work than just an ad campaign and a slick video.
This essay was written with Arun Vishwanath, and originally appeared in Dark Reading.

Thursday, May 29th, 2025
11:15 pm
Friday Squid Blogging: NGC 1068 Is the “Squid Galaxy”

I hadn’t known that the NGC 1068 galaxy is nicknamed the “Squid Galaxy.” It is, and it’s spewing neutrinos without the usual accompanying gamma rays.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.