
Quantum-tunneling memory could boost AI energy efficiency by 100x

By indianadmin

May 2, 2022

There’s a potential solution on the cards to the energy expenditure problems plaguing AI training, and it sounds simple: just strengthen the “synapses” that move electrons through a memory array. 

Electrical and Systems Engineering Professor Shantanu Chakrabartty and two of his colleagues at Washington University in St Louis, USA, have authored a Nature-published paper describing how they used the natural properties of electrons to reduce the energy needed to train machine learning models. 

The project saw the researchers set out to build a learning-in-memory synaptic array with digital synapses that operate dynamically rather than statically, so that they need energy only when changing state, not to maintain one.

To test their concept, the team built CMOS circuits with energy barriers they said were strong enough to be non-volatile, and which would become stronger (i.e., able to better maintain non-volatility) as the array’s training progresses. 

The result, Chakrabartty said, is a more efficient array that could reduce the energy requirements of ML training by 100x – “and this is a pessimistic projection,” Chakrabartty told The Register. 

That 100x improvement is for a small-scale system, Chakrabartty said. Larger-scale models would show an even greater improvement, especially if the memory were integrated with the processor on a single wafer – which Chakrabartty said he and his team are currently working to achieve. 

How to get your digital synapses firing

Machine learning model training is incredibly energy inefficient. Washington University in St Louis said that training a single top-of-the-line AI was responsible for more than 625,000 pounds (283.5 metric tonnes) of CO2 emissions in 2019 – nearly five times what the average car will emit over its life. Other figures say that training a GPT-3 model can burn the amount of energy needed to drive a car to the Moon and back. 
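
As a quick sanity check on those figures, the arithmetic works out as shown below. The ~126,000 lb lifetime-emissions estimate for an average American car (including manufacture) is the commonly cited comparison point and is an assumption here, not a number from this article.

```python
# Rough arithmetic behind the quoted figures. The car's lifetime-emissions
# number (~126,000 lb of CO2, including manufacture) is an assumed,
# commonly cited comparison point, not a figure from this article.
LB_PER_METRIC_TONNE = 2204.62

training_emissions_lb = 625_000
car_lifetime_lb = 126_000  # assumption

tonnes = training_emissions_lb / LB_PER_METRIC_TONNE
ratio = training_emissions_lb / car_lifetime_lb

print(f"~{tonnes:.1f} metric tonnes")         # ~283.5 t, matching the article
print(f"~{ratio:.1f}x a car's lifetime CO2")  # just under 5x
```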

The problem, as the paper sees it, is with the bridges between computing nodes in memory arrays, which the paper likens to the synapses that bridge neurons. In an animal brain, learning strengthens synapses so that they fire more efficiently, but in computing each synapse acts statically. In practical terms, each time an electron moves through a synapse a "switch" has to be flipped, which costs energy to polarize the synapse, and further energy must then be spent continuously to keep it polarized.
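
To make that distinction concrete, here is a deliberately simplified, hypothetical energy-accounting sketch – the functions, costs, and counts below are invented for illustration and are not from the paper – comparing a synapse that must continuously hold its state with one that only spends energy when its state changes:

```python
# Toy comparison (illustrative only; all costs and counts are made up).
def static_synapse_energy(updates, hold_steps, e_switch=1.0, e_hold=0.1):
    """Pays a switching cost per update plus a holding cost for every step."""
    return updates * e_switch + hold_steps * e_hold

def dynamic_synapse_energy(state_changes, e_switch=1.0):
    """Pays only when the stored state actually changes."""
    return state_changes * e_switch

steps = 10_000    # hypothetical length of a training run
changes = 500     # hypothetical number of genuine state changes

print("static :", static_synapse_energy(updates=steps, hold_steps=steps))
print("dynamic:", dynamic_synapse_energy(state_changes=changes))
```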

The model developed by Chakrabartty and his team gets around that – and creates their more efficient synapse – by using Fowler-Nordheim Dynamic Analog Memory (FN-DAM).

The "FN" portion of FN-DAM refers to the Fowler-Nordheim equation, which describes how an electron can tunnel through a triangular electrical barrier – in this instance formed by electrically isolating silicon-dioxide barriers. 
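
For reference, the standard simplified form of the Fowler-Nordheim tunneling current density through such a triangular barrier – a textbook expression, not one reproduced from the paper – is:

$$ J = \frac{q^{3} E^{2}}{8 \pi h \phi}\,\exp\!\left(-\frac{8 \pi \sqrt{2 m^{*}}\,\phi^{3/2}}{3 h q E}\right) $$

where E is the electric field across the barrier, φ the barrier height, m* the effective electron mass in the oxide, q the electron charge, and h Planck's constant. Because the current depends exponentially on 1/E, it collapses to practically nothing once the field is removed.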

Those barriers are strong enough that, even with power removed, the electrons still can’t escape. Resupply power in a way that makes that barrier change states, and the electrons trapped in the synapse tunnel away on their journey. 
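
A minimal numerical sketch of that behavior, plugging typical textbook values for a silicon-dioxide barrier into the equation above (the barrier height and effective mass are assumptions chosen for illustration, not parameters from the paper):

```python
# Illustrative only: Fowler-Nordheim current density through a triangular
# barrier, J(E) = (q^3 E^2 / (8*pi*h*phi)) * exp(-8*pi*sqrt(2m)*phi^1.5 / (3*h*q*E)).
# Barrier height and effective mass are typical textbook values for a
# Si/SiO2 interface, assumed here for illustration.
import math

Q = 1.602e-19   # electron charge, C
H = 6.626e-34   # Planck constant, J*s
M0 = 9.109e-31  # electron rest mass, kg

def fn_current_density(field_v_per_m, barrier_ev=3.2, m_eff_ratio=0.42):
    """Fowler-Nordheim current density (A/m^2) for a given oxide field."""
    phi = barrier_ev * Q                      # barrier height in joules
    m = m_eff_ratio * M0                      # effective tunneling mass
    prefactor = (Q**3 * field_v_per_m**2) / (8 * math.pi * H * phi)
    exponent = -(8 * math.pi * math.sqrt(2 * m) * phi**1.5) / (3 * H * Q * field_v_per_m)
    return prefactor * math.exp(exponent)

# With a strong field applied (power on) tunneling is appreciable; at low
# fields (power off) it is astronomically small, which is why the trapped
# electrons stay put -- the non-volatility described above.
for mv_per_cm in (1, 5, 8, 10):
    field = mv_per_cm * 1e8                   # MV/cm -> V/m
    print(f"{mv_per_cm:>3} MV/cm -> J ~ {fn_current_density(field):.3e} A/m^2")
```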

Chakrabartty said his team’s research paper proves that their design is capable, but he warned that FN-DAM still faces a number of barriers to scaling, such as its resolution and measurement precision. ®
