
Thermodynamic Computing Advances with Design and Training

According to Whitelam and Casert, when the components of the computer are themselves nonlinear, it becomes possible to train a thermodynamic computer to perform nonlinear computations at specified times, whether or not it has reached equilibrium. This means the computer operates more like a classical computer, without the need to wait for equilibrium. It also expands the set of thermodynamic algorithms to the same types of complex, nonlinear problems a neural network can solve, meaning thermodynamic computing could be an appropriate tool for machine-learning workloads that have previously been outside its capabilities.

“A nonlinear thermodynamic circuit can behave like a neuron in a neural network,” said Whitelam. “Nonlinearity is what gives a neural network its expressive power. What we reasoned is that if you build these thermodynamic neurons into a connected structure, then that structure should have the expressive power to mimic a neural network and so be able to do machine learning.” 
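To make the idea concrete, the sketch below models a single hypothetical thermodynamic neuron as one noisy variable relaxing under a nonlinear force, written as a simple Langevin-style simulation in Python. The tanh nonlinearity, the dynamics, and every parameter value here are illustrative assumptions for this article, not the authors' actual circuit model.

    import numpy as np

    rng = np.random.default_rng(0)

    def thermodynamic_neuron(drive, steps=5000, dt=1e-3, temperature=0.1):
        """One noisy degree of freedom relaxing toward a nonlinear function of its input."""
        x = 0.0
        for _ in range(steps):
            force = -x + np.tanh(drive)  # the nonlinear response that gives expressive power
            x += force * dt + np.sqrt(2 * temperature * dt) * rng.normal()  # Euler-Maruyama step
        return x

    # The late-time state fluctuates around tanh(drive), much like a neuron's
    # activation, but thermal noise means no two runs return exactly the same value:
    print(thermodynamic_neuron(2.0), thermodynamic_neuron(2.0))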

Together, these solutions expand what thermodynamic computing can do.

Inverted training

The challenge, then, becomes training such a system. A thermodynamic computer is a stochastic system, meaning that no two runs look exactly the same, so the gradient-based methods used to train digital neural networks don't apply. But Whitelam and Casert have offered a solution there as well.

To train Whitelam’s model of the thermodynamic computer, Casert engineered a large-scale computational framework. Using 96 GPUs in parallel on the Perlmutter supercomputer at NERSC, Casert built and ran massively parallel evolutionary simulations, evaluating billions of noisy dynamical trajectories per generation to discover the most effective network parameters. 
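Because each run is noisy, a candidate network in such a simulation has to be scored by averaging its performance over many trajectories at once. The toy sketch below illustrates this on a deliberately simple stand-in task, steering a fluctuating variable toward a target value; the fitness() function, the linear dynamics, and the task itself are hypothetical, not the team's actual benchmark.

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(params, target=1.0, n_traj=2_000, steps=300, dt=1e-2, temperature=0.05):
        """Score one parameter set by averaging an error over many noisy trajectories."""
        a, b = params
        x = np.zeros(n_traj)  # all trajectories advance in parallel (vectorized),
                              # echoing the massively parallel runs on Perlmutter's GPUs
        for _ in range(steps):
            force = -a * x + b  # toy linear dynamics standing in for the real network
            x += force * dt + np.sqrt(2 * temperature * dt) * rng.normal(size=n_traj)
        return -np.mean((x - target) ** 2)  # higher fitness = closer to the target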

In particular, he used a framework known as a genetic algorithm: beginning with a population of different thermodynamic neural networks, he evaluated the effectiveness of each, selected the best performers, mutated them by adding random noise to their parameters, and evaluated the results again. Ultimately, Casert simulated more than a trillion runs of a thermodynamic computer, using Perlmutter’s GPUs in parallel. This training framework is considerably more costly than the methods used to train digital networks, but it yields a computer that can operate using very little energy after it’s built and trained.
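The select-and-mutate loop itself takes only a few lines. This toy version reuses the fitness() function from the sketch above; the population size, number of survivors, mutation strength, and generation count are illustrative choices, not the settings used on Perlmutter.

    # Continuing from the previous sketch: numpy (np), rng, and fitness() are already defined.
    population = rng.normal(size=(64, 2))            # 64 random candidate parameter sets
    for generation in range(30):
        scores = np.array([fitness(p) for p in population])
        elite = population[np.argsort(scores)[-8:]]  # keep the 8 best performers
        # Refill the population with mutated copies of the elite (random noise on parameters):
        children = elite[rng.integers(0, 8, size=56)] + 0.1 * rng.normal(size=(56, 2))
        population = np.concatenate([elite, children])

    best = population[np.argmax([fitness(p) for p in population])]
    print("best parameters found:", best)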

“It’s a very different way of optimizing a neural network. Training a thermodynamic neural network by simulating it digitally is expensive, but once trained and built as physical hardware, we can perform inference on that hardware for a very low energy cost,” said Casert.

The combination of design and training shows that a machine-learning computer that uses far less energy is possible.

More hardware, more algorithms

The field of thermodynamic computing is relatively young – so where does it go from here? According to Whitelam, it’s important to work out how to realize these designs in hardware. Currently, the team is looking for experimental partners to make both hardware and software a reality, another step in exploring what’s possible with thermodynamic computing.

Another step, he says, is more algorithms. Existing algorithms are meant for systems working at equilibrium; with that requirement no longer a roadblock, new ones will need to be developed. The field will also need new algorithms for nonlinear computations, mirroring the ones used for digital neural networks. 

“It’s an exciting field,” said Whitelam. “We’re looking for more efficient ways of computing, and thermodynamic computing is definitely one of them.”

###

Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab’s expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab’s world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy’s Office of Science. 

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

