Slashdot's Journal

Sunday, April 7th, 2024

    1:44a
    Four Baseball Teams Now Let Ticket-Holders Enter Using AI-Powered 'Facial Authentication'
    "The San Francisco Giants are one of four teams in Major League Baseball this season offering fans a free shortcut through the gates into the ballpark," writes SFGate. "The cost? Signing up for the league's 'facial authentication' software through its ticketing app." The Giants are using MLB's new Go-Ahead Entry program, which intends to cut down on wait times for fans entering games. The pitch is simple: Take a selfie through the MLB Ballpark app (which already has your tickets on it), upload the selfie and, once you're approved, breeze through the ticketing lines and into the ballpark. Fans will barely have to slow down at the entrance gate on their way to their seats... The Philadelphia Phillies were MLB's test team for the technology in 2023. They're joined by the Giants, Nationals and Astros in 2024... [Major League Baseball] says it won't be saving or storing pictures of faces in a database — and it clearly would really like you to not call this technology facial recognition. "This is not the type of facial recognition that's scanning a crowd and specifically looking for certain kinds of people," Karri Zaremba, a senior vice president at MLB, told ESPN. "It's facial authentication. ... That's the only way in which it's being utilized." Privacy advocates "have pointed out that the creep of facial recognition technology may be something to be wary of," the article acknowledges. But it adds that using the technology is still completely optional. And SFGate also spoke to the San Francisco Giants' senior vice president of ticket sales, who gushed about the possibility of app users "walking into the ballpark without taking your phone out, or all four of us taking our phones out."

    Read more of this story at Slashdot.

    4:44a
    Is Microsoft Working on 'Performant Sound Recognition' AI Technologies?
    Windows Report speculates on what Microsoft may be working on next, based on a recently published patent for "performant sound recognition AI technologies" (dated April 2, 2024): Microsoft's new technology can recognize different types of sounds, from doorbells to babies crying to dogs barking, but is not limited to them. It can also recognize the sounds of coughing or breathing difficulties, or unusual noises such as glass breaking. Most intriguingly, it can recognize and monitor environmental sounds, which can be further processed to let users know if a natural disaster is about to happen... The neural network generates scores and probabilities for each type of sound event in each segment. This is like guessing what type of sound each segment is and how sure it is about the guess. After that, the system does some post-processing to smooth out the scores and probabilities and generate confidence values for each type of sound for different window sizes. Ultimately, this technology can be used in various applications. In a smart home device, it can detect when someone breaks into the house by recognizing the sound of glass shattering, or when a newborn is hungry or distressed by recognizing the sounds of a baby crying. It can also be used in healthcare to accurately detect lung or heart diseases by recognizing heartbeat sounds, coughing, or breathing difficulties. But one of its most important applications would be to warn casual users of upcoming natural disasters by recognizing and detecting the sounds associated with them. Thanks to Slashdot reader John Nautu for sharing the article.
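    The pipeline the patent summary describes — a network emitting per-segment class probabilities, then post-processing that smooths them and pools confidence values over several window sizes — is a standard sound-event-detection pattern. As a rough illustration only (the patent's actual method is not public at this level of detail, and every name below is invented), the smoothing and pooling steps might look like:

    ```python
    def smooth_scores(frame_probs, window=3):
        """Moving-average smoothing of per-segment class probabilities.

        frame_probs: list of rows [p_class0, p_class1, ...], one row per audio
        segment. Each class track is averaged over `window` neighbouring
        segments (truncated at the edges), one smoothed row per segment."""
        n = len(frame_probs)
        num_classes = len(frame_probs[0])
        half = window // 2
        smoothed = []
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            smoothed.append([
                sum(frame_probs[j][c] for j in range(lo, hi)) / (hi - lo)
                for c in range(num_classes)
            ])
        return smoothed

    def confidence_over_windows(smoothed, sizes=(3, 5)):
        """For each window size, slide a window over the smoothed segments and
        report the peak window-average per class as a rough confidence value."""
        n = len(smoothed)
        num_classes = len(smoothed[0])
        out = {}
        for w in sizes:
            best = [0.0] * num_classes
            for start in range(max(1, n - w + 1)):
                stop = min(n, start + w)
                for c in range(num_classes):
                    avg = sum(smoothed[j][c] for j in range(start, stop)) / (stop - start)
                    best[c] = max(best[c], avg)
            out[w] = best
        return out
    ```

    A detector would then fire an event (glass breaking, a baby crying) when a class's confidence exceeds a threshold at some window size.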

    Read more of this story at Slashdot.

    7:44a
    Wait, Does America Suddenly Have a Record Number of Bees?
    "America's honeybee population has rocketed to an all-time high," reports the Washington Post: We've added almost 1 million bee colonies in the past five years. We now have 3.8 million, the census shows. Since 2007, the first census after alarming bee die-offs began in 2006, the honeybee has been the fastest-growing livestock segment in the country! And that doesn't count feral honeybees, which may outnumber their captive cousins several times over... Much of the explosion of small producers came in just one state: Texas. The Lone Star State has gone from having the sixth-most bee operations in the country to being so far ahead of anyone else that it out-bees the bottom 21 states combined... [A]ll 254 Texas counties adopted bee rules requiring, for example, six hives on five acres plus another hive for every 2.5 acres beyond that to qualify for the tax break... When the census was taken in December 2022, California had more than four times as many bees as any other state. We emailed pollination expert Brittney Goodrich at the University of California at Davis, who explained that pollinating the California almond crop "demands most of the honeybee colonies in the U.S. each year..." Sadly, however, this does not mean we've defeated colony collapse. One major citizen-science project found that beekeepers lost almost half of their colonies in the year ending in April, the second-highest loss rate on record. For now, we're making up for it with aggressive management. The Texans told us that they were splitting their hives more often, replacing queens as often as every year and churning out bee colonies faster than the mites, fungi and diseases can take them down. But this may not be good news for bees in general.
"It is absolutely not a good thing for native pollinators," said Eliza Grames, an entomologist at Binghamton University, who noted that domesticated honeybees are a threat to North America's 4,000 native bees, about 40% of which are vulnerable to extinction... Many of the same forces collapsing managed beehives also decimate their native cousins, only the natives don't usually have entire industries and governments pouring hundreds of millions of dollars into supporting them. So while Texas bee exemptions "have become big business," the article ends with this quote from Mace Vaughan, who leads pollinator and agricultural biodiversity at Xerces, an expanding insect-conservation outfit. "The way you support both honeybees and beekeepers — and the way you save native pollinators — is to go out there and create beautiful flower-rich habitat on your farm or your garden."
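    The Texas hive rule quoted above (six hives on the first five acres, plus one more for every additional 2.5 acres) is simple arithmetic. A minimal sketch of that rule as the article states it — an illustration only, not legal or tax guidance, since the actual requirements vary by county:

    ```python
    import math

    def min_hives_for_exemption(acres: float) -> int:
        """Minimum hive count under the rule as described in the article:
        six hives on five acres, plus another hive for every 2.5 acres
        beyond the first five (partial increments rounded up)."""
        if acres < 5:
            raise ValueError("the rule as described starts at five acres")
        extra_increments = math.ceil((acres - 5) / 2.5)
        return 6 + extra_increments

    # Five acres needs 6 hives; ten acres adds two 2.5-acre increments, so 8.
    ```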

    Read more of this story at Slashdot.

    11:34a
    Retro Computing Enthusiast Tries Running Turbo Pascal On a 40-Year-Old Apple II Clone
    Four months ago long-time Slashdot reader Shayde tried restoring a 1986 DEC PDP-11 minicomputer. But now he's gone even further back in time. Shayde writes: In 1984, Apple IIs were at the top of their game in the 8-bit market. A company in New Jersey decided to get in on the action and built an exact clone of the Apple. The Franklin Ace was chip- and ROM-compatible with the Apple II, and that led to its downfall. In this video we resurrect an old Franklin Ace and not only boot ProDOS, but also get the Z80 coprocessor up and running, and relive what coding in Turbo Pascal in the 80s was like. Why Turbo Pascal? "Some of my earliest professional programming was done in this environment," Shayde says in the video, "and I was itching to play with it again."

    Read more of this story at Slashdot.

    2:34p
    Have Scientists Finally Made Sense of Hawking's Famous Black Hole Formula?
    Slashdot reader sciencehabit shares this report from Science magazine: Fifty years ago, famed physicist Stephen Hawking wrote down an equation that predicts that a black hole has entropy, an attribute typically associated with the disordered jumbling of atoms and molecules in materials. The arguments for black hole entropy were indirect, however, and no one had derived the famous equation from the fundamental definition of entropy — at least not for realistic black holes. Now, one team of theorists claims to have done so, although some experts are skeptical. Reported in a paper in press at Physical Review Letters, the work would solve a homework problem that some theorists have labored over for decades. "It's good to have it done," says Don Marolf, a gravitational theorist at the University of California, Santa Barbara who was not involved in the research. It "shows us how to move forward, that's great."

    Read more of this story at Slashdot.

    3:34p
    In America, A Complex Patchwork of State AI Regulations Has Already Arrived
    While the European Parliament passed a wide-ranging "AI Act" in March, "Leaders from Microsoft, Google, and OpenAI have all called for AI regulations in the U.S.," writes CIO magazine. Even the Chamber of Commerce, "often opposed to business regulation, has called on Congress to protect human rights and national security as AI use expands," according to the article, while the White House has released a blueprint for an AI bill of rights. But even though the U.S. Congress hasn't passed AI legislation — 16 different U.S. states have, "and state legislatures have already introduced more than 400 AI bills across the U.S. this year, six times the number introduced in 2023." Many of the bills are targeted both at the developers of AI technologies and the organizations putting AI tools to use, says Goli Mahdavi, a lawyer with global law firm BCLP, which has established an AI working group. And with populous states such as California, New York, Texas, and Florida either passing or considering AI legislation, companies doing business across the US won't be able to avoid the regulations. Enterprises developing and using AI should be ready to answer questions about how their AI tools work, even when deploying automated tools as simple as spam filtering, Mahdavi says. "Those questions will come from consumers, and they will come from regulators," she adds. "There's obviously going to be heightened scrutiny here across the board." There are sector-specific bills, and bills that demand transparency (of both development and output), according to the article. "The third category of AI bills covers broad AI bills, often focused on transparency, preventing bias, requiring impact assessment, providing for consumer opt-outs, and other issues."
    One example the article notes is Senate Bill 1047, introduced in the California State Legislature in February, which "would require safety testing of AI products before they're released, and would require AI developers to prevent others from creating derivative models of their products that are used to cause critical harms." Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills, tells CIO that many of the bills promote best practices in privacy and data security, but says the fragmented regulatory environment "underscores the call for national standards or laws to provide a coherent framework for AI usage." Thanks to Slashdot reader snydeq for sharing the article.

    Read more of this story at Slashdot.

    4:34p
    Mozilla Asks: Will Google's Privacy Sandbox Protect Advertisers (and Google) More than You?
    On Mozilla's blog, engineer Martin Thomson explores Google's "Privacy Sandbox" initiative (which proposes sharing a subset of private user information — but without third-party cookies). The blog post concludes that Google's Protected Audience "protects advertisers (and Google) more than it protects you." But it's not all bad — in theory: The idea behind Protected Audience is that it creates something like an alternative information dimension inside of your (Chrome) browser... Any website can push information into that dimension. While we normally avoid mixing data from multiple sites, those rules are changed to allow that. Sites can then process that data in order to select advertisements. However, no one can see into this dimension, except you. Sites can only open a window for you to peek into that dimension, but only to see the ads they chose... Protected Audience might be flawed, but it demonstrates real potential. If this is possible, that might give people more of a say in how their data is used. Rather than just have someone spy on your every action then use that information as they like, you might be able to specify what they can and cannot do. The technology could guarantee that your choice is respected. Maybe advertising is not the first thing you would do with this newfound power, but maybe if the advertising industry is willing to fund investments in new technology that others could eventually use, that could be a good thing. But here's some of the blog post's key criticisms: "[E]ntities like Google who operate large sites, might rely less on information from other sites. Losing the information that comes from tracking people might affect them far less when they can use information they gather from their many services... 
[W]e have a company that dominates both the advertising and browser markets, proposing a change that comes with clear privacy benefits, but it will also further entrench its own dominance in the massively profitable online advertising market..." "[T]he proposal fails to meet its own privacy goals. The technical privacy measures in Protected Audience fail to prevent sites from abusing the API to learn about what you did on other sites.... Google loosened privacy protections in a number of places to make it easier to use. Of course, by weakening protections, the current proposal provides no privacy. In other words, to help make Protected Audience easier to use, they made the design even leakier..." "A lot of these leaks are temporary. Google has a plan and even a timeline for closing most of the holes that were added to make Protected Audience easier to use for advertisers. The problem is that there is no credible fix for some of the information leaks embedded in Protected Audience's architecture... In failing to achieve its own privacy goals, Protected Audience is not now — and maybe not ever — a good addition to the Web."

    Read more of this story at Slashdot.

    6:35p
    Professors Are Now Using AI to Grade Essays. Are There Ethical Concerns?
    A professor at Ithaca College runs part of each student's essay through ChatGPT, "asking the AI tool to critique and suggest how to improve the work," reports CNN. (The professor said "The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass ... and it does a pretty good job at that.") And the same professor then requires their class of 15 students to run their draft through ChatGPT to see where they can make improvements, according to the article: Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism-detection platform Turnitin, found half of college students used AI tools in Fall 2023. Fewer faculty members used AI, but the percentage grew to 22% in the fall of 2023, up from 9% in spring 2023. Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They're also using the burgeoning tools to create quizzes, polls, videos and interactives to "up the ante" for what's expected in the classroom. Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products. But while some schools have formed policies on how students can or can't use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.
    A professor of business ethics at the University of Virginia "suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those figures," according to the article. ("But teachers should then grade students' work themselves when looking for novelty, creativity and depth of insight.") But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..." Even the Ithaca professor acknowledged to CNN that "If teachers use it solely to grade, and the students are using it solely to produce a final product, it's not going to work."
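    The first-pass workflow described above — ask a chat model to critique only mechanical dimensions, leaving originality to the human grader — amounts to careful prompt construction. A minimal sketch of such a prompt builder, with the rubric dimensions taken from the ethics professor's suggestion; the function name is invented, and the actual call to whichever chat model a course uses is deliberately omitted, since clients and model names vary:

    ```python
    def build_critique_prompt(essay_excerpt: str,
                              rubric=("structure", "language use", "grammar")) -> str:
        """Assemble a first-pass critique request of the kind the article
        describes: score only mechanical dimensions and suggest improvements,
        explicitly leaving novelty and depth of insight to the human grader."""
        dims = ", ".join(rubric)
        return (
            "You are a teaching assistant doing a first-pass review.\n"
            f"Critique the essay excerpt below on these dimensions only: {dims}.\n"
            "Give a 1-10 score per dimension and concrete suggestions.\n"
            "Do not judge originality or depth of insight.\n\n"
            f"Essay excerpt:\n{essay_excerpt}"
        )

    # The resulting string would be sent to the course's chosen chat model;
    # that network call is omitted here.
    ```

    Keeping the rubric explicit in the prompt is also what makes the practice auditable: a student or department can see exactly what the model was, and was not, asked to judge.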

    Read more of this story at Slashdot.

    7:35p
    Boeing Engine Cover Rips Apart During Takeoff This Morning
    "Scary moments for passengers on a Southwest flight from Denver to Houston," tweets an ABC News transportation reporter, "when the engine cover ripped off during flight, forcing the plane to return to Denver Sunday morning." "Think that big circular metal panel surrounding the engine," writes QZ — adding that after it ripped off, the engine cowling "struck the 737-800's wing flap." It happened during takeoff, so the plane was towed back to the gate after returning to the airport. All passengers and crew were safe, and passengers boarded a replacement plane for their flight to Houston: Southwest was already having a rough few weeks before this event occurred. Last Thursday, an engine on one of its Boeing 737-800 planes caught fire before taking off from an airport in Texas, and before that, two FAA-scrutinized Southwest flights were disrupted by turbulence [one last month in New York City and the other in Florida on Wednesday. "Two hours later, an All Nippon Airways Boeing 787 reported an oil leak on arrival at Naha Airport, Japan," adds Newsweek]. "We apologize for the inconvenience of their delay," Boeing said in a statement, adding that they "place our highest priority on ultimate Safety for our Customers and Employees," and that "Our Maintenance teams are reviewing the aircraft."

    Read more of this story at Slashdot.

    9:26p
    Rust, Python, Apache Foundations and Others Announce Big Collaboration on Cybersecurity Process Specifications
    The foundations behind Rust, Python, Apache, Eclipse, PHP, OpenSSL, and Blender announced plans to create "common specifications for secure software development," based on "existing open source best practices." From the Eclipse Foundation: This collaborative effort will be hosted at the Brussels-based Eclipse Foundation [an international non-profit association] under the auspices of the Eclipse Foundation Specification Process and a new working group... Other code-hosting open source foundations, SMEs, industry players, and researchers are invited to join in as well. The starting point for this highly technical standardisation effort will be today's existing security policies and procedures of the respective open source foundations, and similar documents describing best practices. The governance of the working group will follow the Eclipse Foundation's usual member-led model but will be augmented by explicit representation from the open source community to ensure diversity and balance in decision-making. The deliverables will consist of one or more process specifications made available under a liberal specification copyright licence and a royalty-free patent licence... While open source communities and foundations generally adhere to and have historically established industry best practices around security, their approaches often lack alignment and comprehensive documentation. The open source community and the broader software industry now share a common challenge: legislation has introduced an urgent need for cybersecurity process standards. The Apache Foundation notes the working group is forming partly "to demonstrate our commitment to cooperation with and implementation of" the EU's Cyber Resilience Act. 
    But the Eclipse Foundation adds that even before it goes into effect in 2027, they're recognizing open source software's "increasingly vital role in modern society" and an increasing need for reliability, safety, and security, so new regulations like the CRA "underscore the urgency for secure by design and robust supply chain security standards." Their announcement adds that "It is also important to note that it is similarly necessary that these standards be developed in a manner that also includes the requirements of proprietary software development, large enterprises, vertical industries, and small and medium enterprises." But at the same time, "Today's global software infrastructure is over 80% open source... [W]hen we discuss the 'software supply chain,' we are primarily, but not exclusively, referring to open source." "We invite you to join our collaborative effort to create specifications for secure open source development," their announcement concludes, promising initiative updates on a new mailing list. "Contribute your ideas and participate in the magic that unfolds when open source foundations, SMEs, industry leaders, and researchers combine forces to tackle big challenges." The Python Foundation's announcement calls it a "community-driven initiative" that will have "a lasting impact on the future of cybersecurity and our shared open source communities."

    Read more of this story at Slashdot.

    11:04p
    Warner Bros. Issues DMCA Notices After 'Suicide Squad' Game Cracked to Allow Playing as Unreleased Characters
    "It appears the live-service shooter Suicide Squad: Kill The Justice League is, once again, suffering from a hacker problem," reports Kotaku: Instead of doing absolutely absurd amounts of damage, this time hackers have figured out how to gain access to unreleased characters and skins. And publisher WB Games is reportedly issuing DMCA takedown notices against any assets that have found their way online. As reported by IGN, one hacker discovered how to play as Deathstroke, one of the four characters developer Rocksteady Studios teased for an upcoming Suicide Squad season... There were also unreleased skins for The Joker and King Shark that folks have somehow accessed, all of which began circulating on Reddit and X/Twitter on April 4. Not long after, the assets were removed, with folks believing WB Games was behind the strikes. YouTuber TrixRidiculous, who primarily covers DC- and Marvel-related RPGs, had their posts on X/Twitter swiftly taken down by a DMCA strike. "I posted three pics to Twitter," TrixRidiculous told Kotaku over email. "Within probably 30 minutes, I received a DMCA strike from WB Games [Kotaku saw a screenshot of this notice]. Please just bring attention to the fact that the leaderboard is riddled with hackers/cheaters that have gone unbanned since launch, as that's all I was trying to do anyway." This sentiment is shared across the game's official subreddit, with folks posting about "losing interest" in Suicide Squad due to hackers flooding the leaderboards.

    Read more of this story at Slashdot.
