Slashdot: Hardware's Journal
 

Saturday, February 10th, 2024

    7:00a
    Fusion Research Facility's Final Tritium Experiments Yield New Energy Record
    schwit1 quotes a report from Phys.Org: The Joint European Torus (JET), one of the world's largest and most powerful fusion machines, has demonstrated the ability to reliably generate fusion energy while simultaneously setting a world record in energy output. These accomplishments represent a significant milestone in the field of fusion science and engineering. In JET's final deuterium-tritium experiments (DTE3), high fusion power was consistently produced for five seconds, resulting in a ground-breaking record of 69 megajoules using a mere 0.2 milligrams of fuel.

    JET is a tokamak, a design which uses powerful magnetic fields to confine a plasma in the shape of a doughnut. Most approaches to creating commercial fusion favor the use of two hydrogen variants -- deuterium and tritium. When deuterium and tritium fuse together they produce helium and vast amounts of energy, a reaction that will form the basis of future fusion powerplants.

    Dr. Fernanda Rimini, JET Senior Exploitation Manager, said, "We can reliably create fusion plasmas using the same fuel mixture to be used by commercial fusion energy powerplants, showcasing the advanced expertise developed over time."

    Professor Ambrogio Fasoli, Program Manager (CEO) at EUROfusion, said, "Our successful demonstration of operational scenarios for future fusion machines like ITER and DEMO, validated by the new energy record, instills greater confidence in the development of fusion energy. Beyond setting a new record, we achieved things we've never done before and deepened our understanding of fusion physics."

    Dr. Emmanuel Joffrin, EUROfusion Tokamak Exploitation Task Force Leader from CEA, said, "Not only did we demonstrate how to soften the intense heat flowing from the plasma to the exhaust, we also showed in JET how we can get the plasma edge into a stable state, thus preventing bursts of energy from reaching the wall. Both techniques are intended to protect the integrity of the walls of future machines. This is the first time that we've ever been able to test those scenarios in a deuterium-tritium environment."
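
    For a rough sense of scale, the rounded figures above can be sanity-checked with a short back-of-the-envelope calculation. The Python sketch below uses textbook values for the D-T reaction energy and the deuterium/tritium masses (standard reference constants, not figures from the article) and assumes all 0.2 mg of fuel is available to fuse; under those assumptions it lands within a few percent of the reported 69 MJ, and dividing the pulse energy by its five-second duration gives an average fusion power of roughly 14 MW.

        # Back-of-the-envelope check of the JET figures reported above.
        # Reaction energy and atomic masses are standard reference values,
        # not figures from the article; complete burn-up of the fuel is assumed.

        MEV_TO_J = 1.602176634e-13    # joules per MeV
        U_TO_KG = 1.66053906660e-27   # kilograms per atomic mass unit

        ENERGY_PER_REACTION_MEV = 17.6            # D + T -> He-4 + n
        FUEL_MASS_PER_REACTION_U = 2.014 + 3.016  # one deuteron plus one triton

        def dt_energy_from_fuel(mass_kg):
            """Ideal energy (J) released if a given mass of D-T fuel fuses completely."""
            reactions = mass_kg / (FUEL_MASS_PER_REACTION_U * U_TO_KG)
            return reactions * ENERGY_PER_REACTION_MEV * MEV_TO_J

        fuel_mass_kg = 0.2e-6   # 0.2 milligrams, as reported
        pulse_energy_j = 69e6   # 69 megajoules, as reported
        pulse_length_s = 5.0    # sustained for five seconds

        print(f"Ideal D-T yield from 0.2 mg: {dt_energy_from_fuel(fuel_mass_kg) / 1e6:.1f} MJ")
        print(f"Average fusion power over the pulse: {pulse_energy_j / pulse_length_s / 1e6:.1f} MW")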

    Read more of this story at Slashdot.

    7:34p
    New Hutter Prize Awarded for Even Smaller Data Compression Milestone
    Since 2006, Baldrson (Slashdot reader #78,598) has been part of the team verifying "The Hutter Prize for Lossless Compression of Human Knowledge," an ongoing challenge to compress an excerpt of Wikipedia (originally 100 MB, since expanded to 1 GB), approximately the amount a human can read in a lifetime. "The intention of this prize is to encourage development of intelligent compressors/programs as a path to Artificial General Intelligence," explains the project's web site.

    15 years ago, Baldrson wrote a Slashdot post explaining the logic (titled "Compress Wikipedia and Win AI Prize"): The basic theory, for which Hutter provides a proof, is that after any set of observations the optimal move by an AI is to find the smallest program that predicts those observations and then assume its environment is controlled by that program. Think of it as Ockham's Razor on steroids. The amount of the prize also increases based on how much compression is achieved. (So if you compress the 1-GB file x% better than the current record, you'll receive x% of the prize...) The first prize was awarded in 2006.

    And now Baldrson writes: Kaido Orav has just improved the record for the Hutter Prize for Lossless Compression of Human Knowledge by 1.38% with his "fx-cmix" entry. The competition seems to be heating up, with this winner coming a mere six months after the prior winner. This is all the more impressive since each improvement in the benchmark approaches the (unknown) minimum size, called the Kolmogorov complexity of the data.
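
    The payout rule in that parenthetical ("compress the file x% better, receive x% of the prize") is simple enough to write down directly. The Python snippet below is a minimal illustration of that proportional rule only; the prize-pool figure is a placeholder, and the actual contest has further rules (for example, the size of the decompressor counts toward the total) that are not modeled here.

        # Minimal illustration of the payout rule described above:
        # improving the record compressed size by x% earns x% of the prize pool.
        # The pool size used below is a placeholder, not a figure from the article.

        def hutter_prize_payout(prev_record_bytes, new_record_bytes, prize_pool_eur):
            """Award for beating the previous record, per the proportional rule above."""
            if new_record_bytes >= prev_record_bytes:
                return 0.0  # no improvement, no award
            improvement = (prev_record_bytes - new_record_bytes) / prev_record_bytes
            return improvement * prize_pool_eur

        # A 1.38% improvement (the margin reported for the "fx-cmix" entry) on a
        # hypothetical 500,000-euro pool would pay out 1.38% of that pool:
        print(hutter_prize_payout(prev_record_bytes=100_000,
                                  new_record_bytes=98_620,
                                  prize_pool_eur=500_000))  # -> 6900.0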

    Read more of this story at Slashdot.

    11:14p
    Nvidia is Forming a New Business Unit to Make Custom Chips
    An anonymous reader shared this report from Reuters: Nvidia is building a new business unit focused on designing bespoke chips for cloud computing firms and others, including advanced AI processors, nine sources familiar with its plans told Reuters. The dominant global designer and supplier of AI chips aims to capture a portion of an exploding market for custom AI chips and to shield itself from the growing number of companies pursuing alternatives to its products. The Santa Clara, California-based company controls about 80% of the high-end AI chip market, a position that has sent its stock market value up 40% so far this year to $1.73 trillion after it more than tripled in 2023.

    Nvidia's customers, which include ChatGPT creator OpenAI, Microsoft, Alphabet, and Meta Platforms, have raced to snap up the dwindling supply of its chips to compete in the fast-emerging generative AI sector. Its H100 and A100 chips serve as generalized, all-purpose AI processors for many of those major customers. But the tech companies have started to develop their own internal chips for specific needs. Doing so helps reduce energy consumption and can potentially shrink design cost and time.

    Nvidia is now attempting to play a role in helping these companies develop custom AI chips, work that has so far flowed to rival firms such as Broadcom and Marvell Technology, said the sources, who declined to be identified because they were not authorized to speak publicly... Nvidia moving into this territory has the potential to eat into Broadcom and Marvell sales.

    Read more of this story at Slashdot.

