Slashdot: Hardware's Journal
 

Friday, July 27th, 2018

    12:20a
    Ask Slashdot: How Do You Handle Hardware That Never Gets Software Updates?
    New submitter pgralla shares a report from HPE: Many devices, whether intended for long-term or short-term use, were designed with little flexibility for future software changes. How do you handle hardware that never gets software updates, such as embedded systems and task-dedicated equipment? The article pgralla shared cites the example of medical devices running Windows 7. "Many of the current generation, when they were first released, used Windows 7, and the devices still work well enough that they remain in service today," reports HPE. "But Microsoft ended mainstream support for Windows 7 back in January 2015, so the operating system gets updated only with an occasional security patch as part of Microsoft's extended support. In January 2020, that extended support will end as well." Many IoT devices are in a similar boat, as they're powered by embedded Linux and are not designed to be updated after they enter service. Of course, these outdated devices create all sorts of security concerns. "Hackers and their access to knowledge and computing power only go up as the years pass, which means that long-lived, fixed-firmware devices become ever more insecure over time," says Michael Barr, founder of the Barr Group, which provides engineering and consulting services for the embedded systems industry. The WannaCry ransomware attack in 2017 affected not just PCs but also medical devices, and ended up costing businesses $4 billion.

    Read more of this story at Slashdot.

    10:50p
    Should Bots Be Required To Tell You That They're Not Human?
    "BuzzFeed has this story about proposals to make social media bots identify themselves as fake people," writes an anonymous Slashdot reader. "[It's] based on a paper by a law professor and a fellow researcher." From the report: General concerns about the ethical implications of misleading people with convincingly humanlike bots, as well as specific concerns about the extensive use of bots in the 2016 election, have led many to call for rules regulating the manner in which bots interact with the world. "An AI system must clearly disclose that it is not human," the president of the Allen Institute for Artificial Intelligence, hardly a Luddite, argued in the New York Times. Legislators in California and elsewhere have taken up such calls. SB-1001, a bill that comfortably passed the California Senate, would effectively require bots to disclose that they are not people in many settings. Sen. Dianne Feinstein has introduced a similar bill for consideration in the United States Senate. In our essay, we outline several principles for regulating bot speech. Free from the formal limits of the First Amendment, online platforms such as Twitter and Facebook have more leeway to regulate automated misbehavior. These platforms may be better positioned to address bots' unique and systematic impacts. Browser extensions, platform settings, and other tools could be used to filter or minimize undesirable bot speech more effectively and without requiring government intervention that could potentially run afoul of the First Amendment. A better role for government might be to hold platforms accountable for doing too little to address legitimate societal concerns over automated speech. [A]ny regulatory effort to domesticate the problem of bots must be sensitive to free speech concerns and justified in reference to the harms bots present. Blanket calls for bot disclosure to date lack the subtlety needed to address bot speech effectively without raising the specter of censorship.

    Read more of this story at Slashdot.


