Slashdot's Journal
Saturday, May 24th, 2025
| Time | Event |
| 12:02a |
Kraken Launches Digital Tokens To Offer 24/7 Trading of US Equities

Kraken is launching tokenized versions of U.S. equities for 24/7 trading outside the U.S., giving global investors blockchain-based access to major companies like Apple and Tesla. Reuters reports: Tokenization refers to the process of issuing digital representations of publicly-traded securities. Instead of holding the securities directly, investors hold tokens that represent ownership of the securities. The tokens' launch outside the U.S. comes amid growing interest in blending traditional finance with blockchain infrastructure. While tokenized securities have yet to gain widespread adoption, proponents say they hold the potential to significantly reshape how people access and invest in financial markets.
In a January opinion piece for the Washington Post, Robinhood CEO Vlad Tenev said tokenization could also allow retail investors to access private companies' stocks. Kraken's tokens, called xStocks, will be available in select markets outside the United States, it said, without naming the markets. The move was earlier reported by the Wall Street Journal. The offering is currently not available for U.S. customers.
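To make the tokenization idea concrete, here is a minimal, purely illustrative sketch of the kind of record such a system might keep. The class and field names are hypothetical and not Kraken's xStocks design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EquityToken:
    """Hypothetical on-chain token representing a claim on a custodied share."""
    token_id: str        # identifier of the token on the blockchain
    underlying: str      # ticker of the custodied security, e.g. "AAPL"
    quantity: float      # fraction of one share this token represents
    custodian: str       # entity actually holding the security
    issued_at: datetime  # issuance timestamp

# The holder owns the token; the custodian holds the share it maps to.
token = EquityToken("xyz-0001", "AAPL", 0.25, "ExampleCustodyCo",
                    datetime.now(timezone.utc))
print(f"{token.quantity} share(s) of {token.underlying} via {token.custodian}")
```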
Read more of this story at Slashdot.

| 12:45a |
US Solar Keeps Surging, Generating More Power Than Hydro In 2025

In early 2025, U.S. solar power production jumped 44% compared to the previous year, driven by end-of-year construction to capture tax incentives and long-term cost advantages. "The bad news is that, in contrast to China, solar's growth hasn't been enough to offset rising demand," notes Ars Technica. "Instead, the US also saw significant growth in coal use, which rose by 23 percent compared to the year prior, after years of steady decline." From the report: Short-term fluctuations in demand are normal, generally driven by weather-induced demand for heating or cooling. Despite those changes, demand for electricity in the US has been largely flat for over a decade, largely thanks to gains in efficiency. But 2024 saw demand go up by nearly 3 percent, and the first quarter of 2025 saw another rise, this time of nearly 5 percent. It's a bit too early to say that we're seeing a shift to a period of rising demand, but one has been predicted for some time due to rising data center use and the increased electrification of transportation and appliances.
Under those circumstances, the rest of the difference will be made up for with fossil fuels. Running counter to recent trends, the use of natural gas dropped during the first three months of 2025. This means that the use of coal rose nearly as quickly as demand, up by 23 percent compared to the same time period in 2024. Despite the rise in coal use, the fraction of carbon-free electricity held steady year over year, with wind/solar/hydro/nuclear accounting for 43 percent of all power put on the US grid. That occurred despite small drops in nuclear and hydro production.
Read more of this story at Slashdot.

| 3:30a |
Microsoft Says Its Aurora AI Can Accurately Predict Air Quality, Typhoons

An anonymous reader quotes a report from TechCrunch: One of Microsoft's latest AI models can accurately predict air quality, hurricanes, typhoons, and other weather-related phenomena, the company claims. In a paper published in the journal Nature and an accompanying blog post this week, Microsoft detailed Aurora, which the tech giant says can forecast atmospheric events with greater precision and speed than traditional meteorological approaches. Aurora, which has been trained on more than a million hours of data from satellites, radar and weather stations, simulations, and forecasts, can be fine-tuned with additional data to make predictions for particular weather events.
AI weather models are nothing new. Google DeepMind has released a handful over the past several years, including WeatherNext, which the lab claims beats some of the world's best forecasting systems. Microsoft is positioning Aurora as one of the field's top performers -- and a potential boon for labs studying weather science. In experiments, Aurora predicted Typhoon Doksuri's landfall in the Philippines four days in advance of the actual event, beating some expert predictions, Microsoft says. The model also bested the National Hurricane Center in forecasting five-day tropical cyclone tracks for the 2022-2023 season, and successfully predicted the 2022 Iraq sandstorm.
While Aurora required substantial computing infrastructure to train, Microsoft says the model is highly efficient to run. It generates forecasts in seconds compared to the hours traditional systems take using supercomputer hardware. Microsoft, which has made the source code and model weights publicly available, says that it's incorporating Aurora's AI modeling into its MSN Weather app via a specialized version of the model that produces hourly forecasts, including for clouds.
Read more of this story at Slashdot.

| 7:00a |
Google's New AI Video Tool Floods Internet With Real-Looking Clips

Google's new AI video tool, Veo 3, is being used to create hyperrealistic videos that are now flooding the internet, terrifying viewers "with a sense that real and fake have become hopelessly blurred," reports Axios. From the report: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand.
According to examples shared by Google and from users online, the telltale signs of synthetic content are mostly absent.
In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.
Read more of this story at Slashdot.

| 10:00a |
Valve Adds SteamOS Support For Its Steam Deck Rivals

Valve's SteamOS 3.7.8 update brings official support for AMD-powered handhelds like Lenovo's Legion Go and Asus' ROG Ally, along with a new "Steam OS Compatible" library tab and key bug fixes. Other features include a battery charge limit, updated graphics drivers, and a shift to Plasma 6.2.5. Polygon reports: Valve outlines two requirements for the third-party devices not explicitly named in the update to run SteamOS on the handheld: they must be AMD-powered and have an NVMe SSD. Specific instructions for installing the operating system have been updated and listed here.
Before this huge update, players had to use an alternative like Bazzite to achieve a similar SteamOS experience on their devices. The new update also piggybacks off of Valve expanding the Steam Deck Verified categorization system to "any device running SteamOS that's not a Steam Deck" in mid-May. To make matters sweeter, a SteamOS-powered version of the Lenovo Legion Go S is scheduled to release on May 25. You can learn more about SteamOS 3.7.8 here.
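For readers who want to check those two stated requirements on their own hardware before attempting an install, a rough sketch follows; the /proc/cpuinfo and /sys/block paths are standard Linux interfaces, and none of this comes from Valve's installer.

```python
# Rough pre-flight check (illustrative only, not Valve's installer logic) for the
# two stated requirements: an AMD processor and an NVMe SSD. Linux-only paths.
from pathlib import Path

cpuinfo = Path("/proc/cpuinfo").read_text()
is_amd = "AuthenticAMD" in cpuinfo

nvme_devices = sorted(p.name for p in Path("/sys/block").iterdir()
                      if p.name.startswith("nvme"))

print(f"AMD CPU detected:   {is_amd}")
print(f"NVMe block devices: {nvme_devices or 'none found'}")
```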
Read more of this story at Slashdot.

| 2:34p |
Red Hat Collaborates with SiFive on RISC-V Support, as RHEL 10 Brings AI Assistant and Post-Quantum Security

SiFive was one of the first companies to produce a RISC-V chip. This week they announced a new collaboration with Red Hat "to bring Red Hat Enterprise Linux support to the rapidly growing RISC-V community" and "prepare Red Hat's product portfolio for future intersection with RISC-V server hardware from a diverse set of RISC-V suppliers."
Red Hat Enterprise Linux 10 is available in developer preview on the SiFive HiFive Premier P550 platform, which they call "a proven, high performance RISC-V CPU development platform."
Adding support for Red Hat Enterprise Linux 10, the latest version of the world's leading enterprise Linux platform, enables developers to create, optimize, and release new applications for the next generation of enterprise servers and cloud infrastructure on the RISC-V architecture...
SiFive's high performance RISC-V technology is already being used by large organizations to meet compute-intensive AI and machine learning workloads in the datacenter... "With the growing demand for RISC-V, we are pleased to collaborate with SiFive to support Red Hat Enterprise Linux 10 deployments on SiFive HiFive Premier P550," said Ronald Pacheco, senior director of RHEL product and ecosystem strategy, "to further empower developers with the power of the world's leading enterprise Linux platform wherever and however they choose to deploy...."
Dave Altavilla, principal analyst at HotTech Vision And Analysis, said "Native Red Hat Enterprise Linux support on SiFive's HiFive Premier P550 board offers developers a substantial enterprise-grade toolchain for RISC-V.
"This is a pivotal step forward in enabling a full-stack ecosystem around open RISC-V hardware.
SiFive says the move will "inspire the next generation of enterprise workloads and AI applications optimized for RISC-V," while helping their partners "deliver systems with a meaningfully lower total cost of ownership than incumbent platforms."
"With the growing demand for RISC-V, we are pleased to collaborate with SiFive to support Red Hat Enterprise Linux 10 deployments on SiFive HiFive Premier P550..." said Ronald Pacheco, senior director of RHEL product and ecosystem strategy.
.
BetaNews notes that there's also a new AI-powered assistant in RHEL 10, so "Instead of spending all day searching for answers or poking through documentation, admins can simply ask questions directly from the command line and get real-time help."
Security is front and center in this release, too. Red Hat is taking a proactive stance with early support for post-quantum cryptography. OpenSSL, GnuTLS, NSS, and OpenSSH now offer quantum-resistant options, setting the stage for better protection as threats evolve. There's a new sudo system role to help with privilege management, and OpenSSH has been bumped to version 9.9. Plus, with new Sequoia tools for OpenPGP, the door is open for even more robust encryption strategies. But it's not just about security and AI. Containers are now at the heart of RHEL 10 thanks to the new "image mode." With this feature, building and maintaining both the OS and your applications gets a lot more streamlined...
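Since the post mentions OpenSSH 9.9 gaining quantum-resistant options, a quick way to see what a given OpenSSH build offers is to query it directly. The sketch below only assumes the standard `ssh -Q kex` listing and the upstream hybrid algorithm name prefixes (mlkem, sntrup); it is a local check, not anything RHEL-specific.

```python
# Minimal sketch: ask the local OpenSSH client which key-exchange algorithms it
# supports and flag the hybrid post-quantum ones. Assumes `ssh -Q kex` is
# available (it is in modern OpenSSH releases); algorithm names are upstream's.
import subprocess

kex = subprocess.run(["ssh", "-Q", "kex"], capture_output=True, text=True,
                     check=True).stdout.split()

# Hybrid PQ key exchanges: ML-KEM (added in OpenSSH 9.9) and Streamlined NTRU Prime.
pq_markers = ("mlkem", "sntrup")
pq_kex = [k for k in kex if any(m in k for m in pq_markers)]

print("Post-quantum key exchanges offered:" if pq_kex
      else "No post-quantum key exchange found.")
for k in pq_kex:
    print(" ", k)
```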
Read more of this story at Slashdot.

| 3:34p |
Ask Slashdot: Do We Need Opt-Out-By-Default Privacy Laws?

"By and large, companies have failed to self-regulate," writes long-time Slashdot reader BrendaEM:
They have not respected the individual's right to privacy. In software and web interfaces, companies have buried their privacy settings so deep that they cannot be found in a reasonable amount of time, or an unreasonable number of steps is needed to retain control of one's data. These companies have taken away the individual's right to privacy -- by default.
Are laws needed that protect a person's privacy by default -- unless specific steps are taken by that user/purchaser to relinquish it? Should the wording be such that the contract is brief, explaining what privacy is forfeited and where that data might go? Should a company selling a product be required to state, before purchase, which rights must be given up to use it? Should the legal owner of a purchased product expect it to stop functioning -- only because a newer user contract is not agreed to?
Share your own thoughts and experiences in the comments. What's your ideal privacy policy?
And do we need opt-out-by-default privacy laws?
Read more of this story at Slashdot.

| 4:34p |
Researchers Build 'The World's Fastest Petahertz Quantum Transistor'. They Predict Lightwave Electronics

"What if ultrafast pulses of light could operate computers at speeds a million times faster than today's best processors?" asks the University of Arizona.
"A team of scientists, including researchers from the University of Arizona, are working to make that possible."
In a groundbreaking international effort, researchers from the Department of Physics in the College of Science and the James C. Wyant College of Optical Sciences demonstrated a way to manipulate electrons in graphene using pulses of light that last less than a trillionth of a second. By leveraging a quantum effect known as tunneling, they recorded electrons bypassing a physical barrier almost instantaneously, a feat that redefines the potential limits of computer processing power. A study published in Nature Communications highlights how the technique could lead to processing speeds in the petahertz range — over 1,000 times faster than modern computer chips. Sending data at those speeds would revolutionize computing as we know it, said Mohammed Hassan, an associate professor of physics and optical sciences. Hassan has long pursued light-based computer technology and previously led efforts to develop the world's fastest electron microscope...
[T]he researchers used a laser that switches off and on at a rate of 638 attoseconds to create what Hassan called "the world's fastest petahertz quantum transistor... For reference, a single attosecond is one-quintillionth of a second," Hassan said. "That means that this achievement represents a big leap forward in the development of ultrafast computer technologies by realizing a petahertz-speed transistor." While some scientific advancements occur under strict conditions, including temperature and pressure, this new transistor performed in ambient conditions — opening the way to commercialization and use in everyday electronics. Hassan is working with Tech Launch Arizona, the office that works with investigators to commercialize inventions stemming from U of A research in order to patent and market innovations.
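A quick back-of-the-envelope check (mine, not the paper's) shows why a 638-attosecond switching interval lands in the petahertz range; the 5 GHz figure below is just an assumed clock speed for a fast conventional CPU.

```python
# Rough arithmetic: converting the reported 638-attosecond switching interval
# into a frequency, and comparing it to an assumed 5 GHz conventional CPU clock.
switch_interval_s = 638e-18          # 638 attoseconds
freq_hz = 1.0 / switch_interval_s    # ~1.57e15 Hz, i.e. ~1.57 petahertz

cpu_clock_hz = 5e9                   # assumed modern CPU clock, ~5 GHz
print(f"Switching frequency: {freq_hz / 1e15:.2f} PHz")
print(f"Roughly {freq_hz / cpu_clock_hz:,.0f}x a 5 GHz clock")
```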
While the original invention used a specialized laser, the researchers are furthering development of a transistor compatible with commercially available equipment. "I hope we can collaborate with industry partners to realize this petahertz-speed transistor on a microchip," Hassan said.
Thanks to long-time Slashdot reader goslackware for sharing the news.
Read more of this story at Slashdot.

| 5:34p |
Bird Feeders Have Caused a Dramatic Evolution of California Hummingbirds

Science magazine reports that hummingbird feeders "have become a major evolutionary force," according to research published this week in Global Change Biology. (At least for the Anna's hummingbird, a common species in the western U.S.)
Over just a few generations, their beaks have dramatically changed in size and shape.... [A]s feeders proliferated, Anna's hummingbird beaks got longer and larger, which may reflect an adaptation to slurp up far more nectar than flowers can naturally provide. Developing a bigger beak to access feeders "is like having a large spoon to eat with," says senior author Alejandro Rico-Guevara, an evolutionary biologist at the University of Washington. This change was more pronounced in areas where feeders were dense. But in birds that lived in colder regions north of the species' historical range, the researchers spotted the opposite trend: Their beaks became shorter and smaller. This finding also makes sense: The researchers used an infrared camera to show for the first time that hummingbirds use their beaks to thermoregulate, by dissipating heat while they are perched. A smaller beak has less surface area — and would therefore help conserve heat...
The most surprising finding, though, was how quickly these changes took place. By the 1950s, hummingbirds were noticeably different from those of the 1930s: a time span of only about 10 generations of birds, Alexandre says.
Carleton University animal behaviorist Roslyn Dakin (who wasn't involved with the study) says the new paper beautifully shows "evolution in action" — and adds nuance to our conception of humans as an evolutionary force. "I think we're going to find more and more examples of contemporary and subtle changes, that we're shaping, indirectly, in many more species."
Thanks to long-time Slashdot reader sciencehabit for sharing the article.
Read more of this story at Slashdot.

| 6:34p |
Firefox Creates 'A Smarter, Simpler Address Bar'

"Firefox's address bar just got an upgrade," Mozilla writes on their blog:
Keep your original search visible
When you perform a search, your query now remains visible in the address bar instead of being replaced by the search engine's URL. Whereas before your address bar was filled with long, confusing URLs, now it's easier to refine or repeat searches... [Clicking an icon left of the address bar even pulls up a list of search-engine choices under the heading "This time search with..."]
Search your tabs, bookmarks and history using simple keywords
You can access different search modes in the address bar using simple, descriptive keywords like @bookmarks, @tabs, @history, and @actions, making it faster and easier to find exactly what you need.
Type a command, and Firefox takes care of it
You can now perform actions like "clear history," "open downloads," or "take a screenshot" just by typing into the address bar. This turns the bar into a practical productivity tool — great for users who want to stay in the flow...
Cleaner URLs with smarter security cues
We've simplified the address bar by trimming "https://" from secure sites, while clearly highlighting when a site isn't secure. This small change improves clarity without sacrificing awareness.
"The new address bar is now available in Firefox version 138," Mozilla writes, calling the new address bar faster, more intuitive "and designed to work the way you do."
Read more of this story at Slashdot.

| 7:34p |
How Many Qubits Will It Take to Break Secure Public Key Cryptography Algorithms?

On Wednesday, Google security researchers published a preprint demonstrating that 2048-bit RSA encryption "could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week," writes Google's security blog.
"This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019... "
The reduction in physical qubit count comes from two sources: better algorithms and better error correction — whereby qubits used by the algorithm ("logical qubits") are redundantly encoded across many physical qubits, so that errors can be detected and corrected... [Google's researchers found a way to reduce the operations in a 2024 algorithm from 1000x more than previous work to just 2x. And "On the error correction side, the key change is tripling the storage density of idle logical qubits by adding a second layer of error correction."]
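To give a feel for why error correction dominates the physical-qubit count, here is a rough, generic overhead calculation. The rule of thumb (about 2 * d^2 physical qubits per logical qubit at code distance d) and the sample numbers are illustrative textbook-style assumptions, not figures from Google's preprint.

```python
# Illustrative surface-code overhead estimate (generic assumptions, not the
# paper's numbers): each logical qubit is encoded in roughly 2 * d^2 physical
# qubits at code distance d, counting both data and measurement qubits.
def physical_qubits(logical_qubits: int, code_distance: int) -> int:
    per_logical = 2 * code_distance ** 2
    return logical_qubits * per_logical

# Hypothetical example: a few thousand logical qubits at distance 25.
logical = 2000
d = 25
total = physical_qubits(logical, d)
print(f"{logical} logical qubits at distance {d} -> ~{total:,} physical qubits")
# -> ~2,500,000 physical qubits; shrinking d or packing idle logical qubits more
# densely (as the new work does) directly cuts this total.
```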
Notably, quantum computers with relevant error rates currently have on the order of only 100 to 1000 qubits, and the National Institute of Standards and Technology (NIST) recently released standard PQC algorithms that are expected to be resistant to future large-scale quantum computers. However, this new result does underscore the importance of migrating to these standards in line with NIST recommended timelines.
The article notes that Google started using the standardized version of ML-KEM once it became available, both internally and for encrypting traffic in Chrome...
"The initial public draft of the NIST internal report on the transition to post-quantum cryptography standards states that vulnerable systems should be deprecated after 2030 and disallowed after 2035. Our work highlights the importance of adhering to this recommended timeline."
Read more of this story at Slashdot.

| 8:34p |
People Should Know About the 'Beliefs' LLMs Form About Them While Conversing

Jonathan L. Zittrain is a law/public policy/CS professor at Harvard (and also director of its Berkman Klein Center for Internet & Society).
He's also long-time Slashdot reader #628,028 — and writes in to share his new article in the Atlantic.
Following on Anthropic's bridge-obsessed Golden Gate Claude, colleagues at Harvard's Insight+Interaction Lab have produced a dashboard that shows what judgments Llama appears to be forming about a user's age, wealth, education level, and gender during a conversation. I wrote up how weird it is to see the dials turn while talking to it, and what some of the policy issues might be.
Llama has openly accessible parameters, so using an "observability tool" from the nonprofit research lab Transluce, the researchers finally revealed "what we might anthropomorphize as the model's beliefs about its interlocutor," Zittrain's article notes:
If I prompt the model for a gift suggestion for a baby shower, it assumes that I am young and female and middle-class; it suggests diapers and wipes, or a gift certificate. If I add that the gathering is on the Upper East Side of Manhattan, the dashboard shows the LLM amending its gauge of my economic status to upper-class — the model accordingly suggests that I purchase "luxury baby products from high-end brands like aden + anais, Gucci Baby, or Cartier," or "a customized piece of art or a family heirloom that can be passed down." If I then clarify that it's my boss's baby and that I'll need extra time to take the subway to Manhattan from the Queens factory where I work, the gauge careens to working-class and male, and the model pivots to suggesting that I gift "a practical item like a baby blanket" or "a personalized thank-you note or card...."
Large language models not only contain relationships among words and concepts; they contain many stereotypes, both helpful and harmful, from the materials on which they've been trained, and they actively make use of them.
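The dashboard itself is not public code in this excerpt, but the underlying idea (reading an inferred user attribute out of a model's hidden states with a small linear probe) can be sketched in a few lines. Everything below is illustrative: gpt2 stands in for Llama, and the toy prompts and class labels are invented.

```python
# Illustrative sketch (not the Harvard/Transluce dashboard's actual code): train a
# linear "probe" on a model's hidden states to read off an inferred user attribute.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")   # small stand-in for Llama
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

def last_token_state(text: str, layer: int = -1) -> np.ndarray:
    """Return the hidden state of the final token at a given layer."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer][0, -1].numpy()

# Tiny, hypothetical training set: prompts paired with the attribute we think
# the model would infer (1 = "upper-class" context, 0 = "working-class" context).
prompts = [
    "Gift ideas for a baby shower on the Upper East Side?",
    "I need a quick gift before my shift at the factory.",
    "Recommend a wine for a gallery opening in Tribeca.",
    "Cheapest way to fix my bus pass situation?",
]
labels = [1, 0, 1, 0]

X = np.stack([last_token_state(p) for p in prompts])
probe = LogisticRegression(max_iter=1000).fit(X, labels)

# The probe's confidence is the kind of "dial" a dashboard could show in real time.
print(probe.predict_proba(X)[:, 1])
```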
"An ability for users or their proxies to see how models behave differently depending on how the models stereotype them could place a helpful real-time spotlight on disparities that would otherwise go unnoticed," Zittrain's article argues.
Indeed, the field has been making progress — enough to raise a host of policy questions that were previously not on the table. If there's no way to know how these models work, it makes accepting the full spectrum of their behaviors (at least after humans' efforts at "fine-tuning" them) a sort of all-or-nothing proposition.
But in the end it's not just the traditional information that advertisers try to collect. "With LLMs, the information is being gathered even more directly — from the user's unguarded conversations rather than mere search queries — and still without any policy or practice oversight...."
Read more of this story at Slashdot.

| 9:34p |
Amazon Cancels the 'Wheel of Time' Prime Video Series After 3 Seasons

Long-time Slashdot reader SchroedingersCat shares this article from Deadline: Prime Video will not be renewing The Wheel of Time for a fourth season. The decision, which comes more than a month after the Season 3 finale was released April 17, followed lengthy deliberations. As often is the case in the current economic environment, the reasons were financial, as the series is liked creatively by the streamer's executives...
The Season 3 overall performance was not strong enough compared to the show's cost for Prime Video to commit to another season and the streamer could not make it work after examining different scenarios and following discussions with lead studio Sony TV, sources said. With the cancellation possibility — and the show's passionate fanbase — in mind, the Season 3 finale was designed to offer some closure.
Still, the news would be a gut punch for fans who have been praising the latest season as the series' best yet creatively... Prime Video and Sony TV will continue to back the Emmy campaign for The Wheel of Time's third season.
Read more of this story at Slashdot.

| 10:34p |
MCP Will Be Built Into Windows To Make an 'Agentic OS' - Bringing Security Concerns

It's like "a USB-C port for AI applications..." according to the official documentation for MCP — "a standardized way to connect AI models to different data sources and tools."
And now Microsoft has "revealed plans to make MCP a native component of Windows," reports DevClass.com, "despite concerns over the security of the fast-expanding MCP ecosystem."
In the context of Windows, it is easy to see the value of a standardised means of automating both built-in and third-party applications. A single prompt might, for example, fire off a workflow which queries data, uses it to create an Excel spreadsheet complete with a suitable chart, and then emails it to selected colleagues. Microsoft is preparing the ground for this by previewing new Windows features.
— First, there will be a local MCP registry which enables discovery of installed MCP servers.
— Second, built-in MCP servers will expose system functions including the file system, windowing, and the Windows Subsystem for Linux.
— Third, a new type of API called App Actions enables third-party applications to expose actions appropriate to each application, which will also be available as MCP servers so that these actions can be performed by AI agents. According to Microsoft, "developers will be able to consume actions developed by other relevant apps," enabling app-to-app automation as well as use by AI agents.
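For a sense of what one of these MCP servers looks like in practice, here is a minimal sketch using the FastMCP helper from the reference Python SDK; the tool it exposes is invented for illustration, and the API shown is the open-source SDK's, not anything Windows-specific.

```python
# Minimal MCP server sketch (assumes the reference Python SDK, `pip install mcp`).
# It exposes one hypothetical tool that an AI agent could discover and call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-files")

@mcp.tool()
def count_lines(path: str) -> int:
    """Return the number of lines in a local text file."""
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP client (an agent host) can invoke it.
    mcp.run()
```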
MCP servers are a powerful concept but vulnerable to misuse. Microsoft corporate VP David Weston noted seven vectors of attack, including cross-prompt injection where malicious content overrides agent instructions, authentication gaps because "MCP's current standards for authentication are immature and inconsistently adopted," credential leakage, tool poisoning from "unvetted MCP servers," lack of containment, limited security review in MCP servers, supply chain risks from rogue MCP servers, and command injection from improperly validated inputs. According to Weston, "security is our top priority as we expand MCP capabilities."
Security controls planned by Microsoft (according to the article):
A proxy to mediate all MCP client-server interactions. This will enable centralized enforcement of policies and consent, as well as auditing and a hook for security software to monitor actions.
A baseline security level for MCP servers to be allowed into the Windows MCP registry. This will include code-signing, security testing of exposed interfaces, and declaration of what privileges are required.
Runtime isolation through what Weston called "isolation and granular permissions."
MCP was introduced by Anthropic just 6 months ago, the article notes, but Microsoft has now joined the official MCP steering committee, "and is collaborating with Anthropic and others on an updated authorization specification as well as a future public registry service for MCP servers."
Read more of this story at Slashdot.