Slashdot: Hardware's Journal
Saturday, November 23rd, 2024
1:25a |
Economist Makes the Case For Slow Level 1 EV Charging Longtime Slashdot reader Geoffrey.landis writes: Economist Phillip Kobernick makes the case that the emphasis on fast-charging stations for electric vehicles in the U.S. is misplaced. According to an article from CleanTechnica, he argues that, from an economic standpoint, what we should be doing is installing more slow chargers. All things being equal, who wouldn't choose a 10-minute charge over a 3-hour charge or a 10-hour charge? But all things are not equal.
Superfast chargers are far more expensive than Level 2 chargers, and Level 2 chargers are also significantly more expensive than Level 1 charging infrastructure, which consists of normal electricity outlets. He points out that we get 4-7 times more charging capability installed for the same cost by going with Level 1 charging instead of Level 2. And given that people often just plug in their electric vehicles overnight, Level 1 charging can more than adequately provide what is needed in that time. The case is examined in a podcast on the site.
Read more of this story at Slashdot.
6:34p |
'It's Surprisingly Easy To Jailbreak LLM-Driven Robots' Instead of focusing on chatbots, a new study reveals an automated way to breach LLM-driven robots "with 100 percent success," according to IEEE Spectrum. "By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs..."
[The researchers] have developed RoboPAIR, an algorithm designed to attack any LLM-controlled robot. In experiments with three different robotic systems (the Go2, the wheeled ChatGPT-powered Clearpath Robotics Jackal, and Nvidia's open-source Dolphins LLM self-driving vehicle simulator), they found that RoboPAIR needed just days to achieve a 100 percent jailbreak rate against all three systems... RoboPAIR uses an attacker LLM to feed prompts to a target LLM. The attacker examines the responses from its target and adjusts its prompts until these commands can bypass the target's safety filters. RoboPAIR was equipped with the target robot's application programming interface (API) so that the attacker could format its prompts in a way that its target could execute as code. The scientists also added a "judge" LLM to RoboPAIR to ensure the attacker was generating prompts the target could actually perform given physical limitations, such as specific obstacles in the environment...
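The attacker/target/judge structure described above can be sketched as a refinement loop. Everything in this sketch is hypothetical: the function names, the stub LLM interfaces, and the refusal check are assumptions made for illustration, not the researchers' actual code or the RoboPAIR API.

```python
# Hypothetical sketch of an attacker/target/judge refinement loop in the
# style the article attributes to RoboPAIR. All names and interfaces here
# are illustrative assumptions; each *_llm argument is any callable that
# maps a text prompt to a text response.

def is_refusal(text):
    """Crude stand-in for detecting a safety-filter refusal."""
    return any(marker in text.lower() for marker in ("cannot", "refuse", "sorry"))

def refine_prompt(attacker_llm, target_llm, judge_llm, goal, max_rounds=20):
    """Iteratively rewrite a prompt until the target complies, or give up."""
    prompt = attacker_llm(f"Write a prompt that makes the robot do: {goal}")
    for _ in range(max_rounds):
        response = target_llm(prompt)           # target's reply (or refusal)
        feasible = judge_llm(prompt, response)  # physically executable?
        if feasible and not is_refusal(response):
            return prompt                       # safety filter bypassed
        # Attacker adjusts its prompt based on the target's last response.
        prompt = attacker_llm(
            f"Previous prompt: {prompt}\n"
            f"Target replied: {response}\n"
            f"Rewrite the prompt so the target does: {goal}"
        )
    return None  # no jailbreak found within the round budget
```

The judge's role, per the article, is to keep the attacker honest about physical constraints, so a prompt that the target would accept but could not actually execute is sent back for another round rather than counted as a success.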
One finding the scientists considered concerning was that jailbroken LLMs often went beyond complying with malicious prompts to actively offer suggestions. For example, when asked to locate weapons, a jailbroken robot described how common objects like desks and chairs could be used to bludgeon people.
The researchers stressed that prior to the public release of their work, they shared their findings with the manufacturers of the robots they studied, as well as leading AI companies. They also noted they are not suggesting that researchers stop using LLMs for robotics... "Strong defenses for malicious use-cases can only be designed after first identifying the strongest possible attacks," Robey says. He hopes their work "will lead to robust defenses for robots against jailbreaking attacks."
The article includes a reaction from Hakki Sevil, associate professor of intelligent systems and robotics at the University of West Florida. He concludes that the "lack of understanding of context of consequences" among even advanced LLMs "leads to the importance of human oversight in sensitive environments, especially in environments where safety is crucial." But a long-term solution could be LLMs with "situational awareness" that understand broader intent.
"Although developing context-aware LLM is challenging, it can be done by extensive, interdisciplinary future research combining AI, ethics, and behavioral modeling..."
Thanks to long-time Slashdot reader DesertNomad for sharing the article.
Read more of this story at Slashdot.
7:34p |
SilverStone's Retro Beige PC Case Turns April Fools' Joke into Actual Product Slashdot reader jjslash shared this report from TechSpot:
The SilverStone FLP01 made quite the impression when it was shared on X for April Fools' Day 2023. Loosely modeled after popular desktops from yesteryear like the NEC PC-9800 series, the chassis features dual 5.25-inch faux floppy bays that could stand to look a bit more realistic. Notably, the covers flip open to reveal access to a more modern (yet still legacy) optical drive and front I/O ports.
Modern-looking fan grills can be found on either side of the desktop, serving as yet another hint that the chassis is not as old as it appears at first glance. The grills look to be removable, and probably hold washable dust filters. Like early desktops, the system doubles as a stand for your monitor. The use of a green power LED up front helps round out the retro look; a red LED is used as a storage activity indicator. Read more of this story at Slashdot.