In the field of in-memory computing for artificial intelligence (AI), the traditional reliance on analog memory for storing synaptic weights presents significant challenges, particularly when using Magnetic Tunnel Junctions (MTJs), which are limited by their inherently binary nature. However, the advent of Binarized Neural Networks (BNNs) has opened a new avenue for leveraging binary storage mechanisms, offering a promising solution for energy-constrained and miniaturized AI systems. This presentation unveils a fully integrated implementation of a BNN using 32k binary memristors. We detail the design and fabrication of our system and its reliable operation even under the variable power conditions of an energy harvesting scenario, demonstrating its potential as a resilient and energy-frugal AI platform. While our current work employs hafnium oxide memristors, the design principles and architecture we propose are readily adaptable to MTJs, suggesting a seamless transition pathway to spintronic implementations in the future. We also compare our approach to other spintronic research endeavors, to provide a comprehensive view of the field's direction.
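Since the abstract hinges on BNNs needing only binary weight storage, the arithmetic involved can be sketched in a few lines of numpy. This is an illustrative software toy, not the memristor array itself; the function names and vector sizes are invented. It shows the +1/-1 multiply-and-accumulate and its equivalent XNOR-popcount form, which is what makes binary storage cells sufficient:

```python
# Toy sketch of binarized-neural-network arithmetic (illustrative only;
# not the presented hardware). Weights and activations take values +1/-1,
# so a binary cell (memristor or MTJ) can store each weight.
import numpy as np

def binarize(x):
    """Map real values to +1/-1, the only states a binary cell must hold."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_mac(weights, activations):
    """Multiply-and-accumulate over +1/-1 vectors.
    With bit-packed operands this reduces to XNOR + popcount:
    dot = 2 * popcount(XNOR(w, a)) - n."""
    w = binarize(weights)
    a = binarize(activations)
    dot_ref = int(np.dot(w, a))               # reference arithmetic form
    wb = (w > 0).astype(np.uint8)             # encode +1 -> 1, -1 -> 0
    ab = (a > 0).astype(np.uint8)
    matches = int(np.sum(~(wb ^ ab) & 1))     # popcount of XNOR
    dot_bits = 2 * matches - len(w)
    assert dot_ref == dot_bits                # the two forms agree
    return dot_bits

rng = np.random.default_rng(0)
out = binary_mac(rng.normal(size=32), rng.normal(size=32))
```

The XNOR-popcount identity is the reason BNN inference maps so naturally onto arrays of binary devices: the analog-weight multiplication disappears entirely.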
Computation exploiting physics can provide outstanding energy efficiency, but physical systems tend to suffer from noise and unpredictability. This is, for example, the case for emerging memory devices. They can be used as artificial synapses for low-energy AI, but they suffer from variability, making them functionally analogous to random variables. In machine learning, Bayesian approaches are designed to operate with random variables. In this talk, we show that they can be an excellent way to exploit emerging memory devices without suffering from their drawbacks. We present three experimental realizations of Bayesian systems exploiting emerging memory devices: a Bayesian reasoning machine, which provides explainable decision-making, a Bayesian neural network, capable of quantifying the certainty of its predictions, and a Bayesian learning system. As these systems fully embrace, and sometimes exploit, device variability, they show outstanding robustness. We finally discuss the potential of Bayesian approaches to exploit other types of physics.
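The core idea, treating variable devices as the random variables a Bayesian neural network needs, can be illustrated with a toy numpy sketch (this is not the experimental system; the one-layer model, weight statistics, and sample counts are invented for the example). Each forward pass draws one "device instance" of the weights, and the spread of the outputs quantifies the prediction's certainty:

```python
# Toy illustration: synaptic weights as random variables (mimicking
# device-to-device variability), read out as a Bayesian predictive
# mean and uncertainty by sampling. Not the presented hardware.
import numpy as np

rng = np.random.default_rng(1)

def bayesian_forward(x, w_mean, w_std, n_samples=200):
    """Sample weight realizations, run a one-layer model on each,
    and return the predictive mean and standard deviation."""
    outs = []
    for _ in range(n_samples):
        w = rng.normal(w_mean, w_std)     # one 'device instance'
        outs.append(np.tanh(w @ x))       # simple nonlinear layer
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)

x = np.array([0.5, -0.2])
w_mean = np.array([[1.0, 0.3], [0.2, -0.8]])
mean, std = bayesian_forward(x, w_mean, w_std=0.05)
# More variable devices -> larger predictive uncertainty.
_, std_noisy = bayesian_forward(x, w_mean, w_std=0.5)
```

The point of the sketch is that device variability is not averaged away: it is the very randomness the Bayesian computation consumes.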
Neuromorphic computing seeks to emulate the brain's efficiency in hardware, enabling sophisticated information processing with minimal energy consumption. Despite these advances, dynamically training neuromorphic systems—whether electronic or photonic—to undertake new tasks directly within the hardware remains a formidable challenge. This is largely due to the nature of conventional training algorithms like backpropagation. They are not well-suited for hardware implementation because they rely on non-local computations.
In this presentation, I will introduce a solution to this challenge through the application of Equilibrium Propagation, an algorithm rooted in physics, first proposed by Scellier and Bengio in 2017. This innovative approach leverages the inherent dynamics of physical systems for learning, capitalizing on how these systems can be gently guided ('nudged') towards a desired outcome through their natural behavior. I will elucidate the foundational principles of Equilibrium Propagation, discuss the obstacles encountered in its implementation, and demonstrate its versatility through applications across a variety of physical systems.
As the field of deep learning continues to expand, it has become increasingly important to develop energy-efficient hardware that can adapt to these advances. However, achieving learning on a chip requires the use of algorithms that are compatible with hardware and can be implemented on imperfect devices. One promising training technique is Equilibrium Propagation, which was introduced in 2017 by Scellier and Bengio. This approach provides gradient estimates based on a spatially local learning rule, making it more biologically plausible and better suited for hardware than backpropagation. However, the mathematical equations of this algorithm cannot be directly transposed to a physical system. In this study, the Equilibrium Propagation algorithm is adapted to a real physical system, and its potential application to spintronic devices is discussed.
As deep learning continues to grow, developing adapted energy-efficient hardware becomes crucial. Learning on a chip requires hardware-compatible learning algorithms and their realization with physically imperfect devices. Equilibrium Propagation is a training technique introduced in 2017 by Scellier and Bengio that gives gradient estimates based on a spatially local learning rule, making it both more biologically plausible and more hardware-compatible than backpropagation. This work uses the Equilibrium Propagation algorithm to train a neural network with hardware-in-the-loop simulations using hafnium oxide memristor synapses. Realizing this type of learning with imperfect and noisy devices paves the way for on-chip learning at very low energy.
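The two-phase procedure behind Equilibrium Propagation can be sketched end to end in numpy. This is a toy stand-in, not the hardware-in-the-loop experiment: the network size, tanh nonlinearity, energy function, and hyperparameters are all invented for the example. The network first relaxes freely to an energy minimum, then relaxes again while the output is gently nudged toward the target, and every weight is updated from the purely local difference of co-activations between the two phases:

```python
# Minimal Equilibrium Propagation sketch (toy model, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_in, n = 2, 4           # 2 clamped inputs; 4 free units, the last is the output
W = 0.1 * rng.normal(size=(n, n))
W = 0.5 * (W + W.T)      # symmetric lateral couplings, as EP requires
np.fill_diagonal(W, 0.0)
W_in = 0.1 * rng.normal(size=(n, n_in))

rho = np.tanh
def rho_p(u): return 1.0 - np.tanh(u) ** 2

def relax(x, y=None, beta=0.0, u=None, steps=100, dt=0.1):
    """Settle to a minimum of the energy (free phase), or of
    energy + beta * cost (nudged phase)."""
    u = np.zeros(n) if u is None else u.copy()
    for _ in range(steps):
        grad = u - rho_p(u) * (W @ rho(u) + W_in @ x)
        if y is not None:                          # nudge output toward target y
            grad[-1] += beta * rho_p(u[-1]) * (rho(u[-1]) - y)
        u -= dt * grad
    return u

def ep_step(x, y, beta=0.5, lr=0.2):
    """One EP update: two relaxations, then a spatially local rule."""
    global W, W_in
    u0 = relax(x)                                  # free phase
    ub = relax(x, y, beta, u0)                     # nudged phase, same basin
    r0, rb = rho(u0), rho(ub)
    dW = (lr / beta) * (np.outer(rb, rb) - np.outer(r0, r0))
    np.fill_diagonal(dW, 0.0)
    W += dW
    W_in += (lr / beta) * (np.outer(rb, x) - np.outer(r0, x))

def loss(data):
    return sum((rho(relax(x)[-1]) - y) ** 2 for x, y in data)

data = [(np.array([1.0, 0.0]), 0.5), (np.array([0.0, 1.0]), -0.5)]
before = loss(data)
for _ in range(40):
    for x, y in data:
        ep_step(x, y)
after = loss(data)
```

Note that the weight update only uses quantities available at the two terminals of each synapse, which is exactly the locality property that makes the algorithm attractive for physical implementations.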
This work introduces a methodology for modeling spintronic systems based on a deep learning approach. We show that using a limited amount of micromagnetic simulations or experimental data, we can train a specific type of neural network known as "Neural Ordinary Differential Equations" to predict the behavior of a spintronic system in new situations. For the simulation of a skyrmion-based system, our technique gives results equivalent to micromagnetic simulations but 200 times faster. We also show that, based on five milliseconds of experimental data, our method predicted the results of weeks of measurements of spin-torque nano-oscillators.
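The underlying idea, learning a system's vector field dy/dt = f(y) from a little data and then integrating the learned field to predict unseen trajectories, can be conveyed with a simplified stand-in. This sketch is not the Neural ODE training used in the work: here a fixed-random-feature network is fitted by least squares rather than trained by backpropagation through the solver, and the "dynamics" are a toy harmonic oscillator rather than a micromagnetic system:

```python
# Simplified illustration of the learned-vector-field idea (not the
# paper's method): fit dy/dt from samples, then integrate with RK4.
import numpy as np

rng = np.random.default_rng(0)

def true_field(y):                  # toy dynamics standing in for the device:
    return np.array([y[1], -y[0]])  # a harmonic oscillator, dy/dt = f(y)

# One-hidden-layer network with fixed random features and a trained readout.
A = rng.normal(size=(100, 2))
b = rng.normal(size=100)
def features(y): return np.tanh(A @ y + b)

# 'Training data': a small batch of (state, derivative) pairs.
ys = rng.uniform(-1, 1, size=(300, 2))
Phi = np.array([features(y) for y in ys])
dydt = np.array([true_field(y) for y in ys])
Wout, *_ = np.linalg.lstsq(Phi, dydt, rcond=None)

def learned_field(y): return Wout.T @ features(y)

def rk4(f, y0, dt, steps):
    """Classic 4th-order Runge-Kutta integration of dy/dt = f(y)."""
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(steps):
        k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y = y + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        traj.append(y.copy())
    return np.array(traj)

# Predict a trajectory from an initial condition not in the training data.
ref = rk4(true_field, [0.8, 0.0], 0.05, 100)
pred = rk4(learned_field, [0.8, 0.0], 0.05, 100)
err = np.max(np.abs(pred - ref))
```

The speed advantage reported in the abstract comes from the same structure: once f is learned, evaluating a small network is far cheaper than solving the full micromagnetic problem at each step.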
Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware for information processing, capable of highly sophisticated tasks. Systems built with standard electronics achieve gains in speed and energy by mimicking the distributed topology of the brain. Scaling-up such systems and improving their energy usage, speed and performance by several orders of magnitude requires a revolution in hardware. We discuss how including more physics in the algorithms and nanoscale materials used for data processing could have a major impact in the field of neuromorphic computing. We review striking results that leverage physics to enhance the computing capabilities of artificial neural networks, using resistive switching materials, photonics, spintronics and other technologies. We discuss the paths that could lead these approaches to maturity, towards low-power, miniaturized chips that could infer and learn in real time.
In this work, we describe the design, realization and characterization of the magnetic version of the Galton Board, an archetypal statistical device originally designed to exemplify normal distributions. Although simple in its macroscopic form, achieving an equivalent nanoscale system poses many challenges related to the generation of sufficiently similar nanometric particles and the strong influence that nanoscale defects can have on the stochasticity of random processes. We demonstrate how the quasi-particle nature and the chaotic dynamics of magnetic domain walls can be harnessed to create nanoscale stochastic devices [1]. Furthermore, we show how the direction of an externally applied magnetic field can be employed to controllably tune the probability distribution at the output of the devices, and how the removal of elements inside the array can be used to modify such distribution.
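The statistics described above can be reproduced with a short Monte Carlo sketch (illustrative parameters only, not the fabricated device): a bias probability `p_right` plays the role of the applied-field direction, tilting each left/right choice, and a set of removed pegs stands in for elements taken out of the array:

```python
# Monte Carlo Galton board (toy model of the experiment's statistics).
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)

def galton(n_rows=8, p_right=0.5, n_balls=20000, removed=frozenset()):
    """Drop balls through n_rows of pegs; 'removed' holds (row, position)
    pegs taken out of the array, which a ball then passes unscattered."""
    counts = Counter()
    for _ in range(n_balls):
        pos = 0                          # number of rightward steps so far
        for row in range(n_rows):
            if (row, pos) in removed:
                continue                 # missing element: no scattering here
            pos += rng.random() < p_right
        counts[pos] += 1
    return counts

def mean_bin(c):
    return sum(k * v for k, v in c.items()) / sum(c.values())

unbiased = galton()                      # binomial, mean 8 * 0.5 = 4
biased = galton(p_right=0.7)             # 'field' shifts the mean to 8 * 0.7 = 5.6
pruned = galton(removed={(0, 0)})        # removing the top peg narrows the curve
```

Biasing the step probability shifts the whole output distribution, while removing elements reshapes it, the two tuning knobs the abstract describes.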
For numerous radio-frequency (RF) applications such as medicine, RF fingerprinting or radar classification, it is important to be able to apply artificial neural networks to RF signals. In this work, we show that it is possible to apply Multiply-and-Accumulate operations directly to RF signals without digitization, thanks to Magnetic Tunnel Junctions (MTJs). These devices are similar to the magnetic memories already in industrial production and are compatible with CMOS.
We show experimentally that a chain of these MTJs can rectify several different RF signals simultaneously, and that the synaptic weight encoded by each junction can be tuned via its resonance frequency.
Through simulations, we train a layer of these junctions to classify a handwritten-digit dataset. Finally, we show that our system can scale to multi-layer neural networks using MTJs to emulate neurons.
We propose a fast and compact system that can receive and process RF signals in situ and at the nanoscale.
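A toy model conveys how frequency-multiplexed rectification implements a multiply-and-accumulate. This is an illustrative sketch, not the experimental device physics: each junction is modeled as rectifying mainly the tone near its own resonance (a Lorentzian selectivity with an invented linewidth), with a DC contribution proportional to that tone's power times a tunable weight, and the chain sums the DC voltages:

```python
# Toy model of a chain of MTJs performing a MAC on RF tones
# (illustrative response functions, not measured device behavior).
import numpy as np

def lorentzian(f, f0, df=0.1):
    """Selectivity of a junction resonant at f0 (df: assumed linewidth)."""
    return 1.0 / (1.0 + ((f - f0) / df) ** 2)

def mtj_chain_mac(tone_freqs, tone_powers, junction_freqs, weights):
    """DC voltage of a chain of junctions driven by several RF tones at
    once: each junction mostly rectifies the tone at its resonance, so
    the summed DC output approximates sum_i w_i * p_i."""
    v_dc = 0.0
    for f0, w in zip(junction_freqs, weights):
        for f, p in zip(tone_freqs, tone_powers):
            v_dc += w * p * lorentzian(f, f0)
    return v_dc

# Two tones and two junctions tuned to them: output ~ w1*p1 + w2*p2.
out = mtj_chain_mac([1.0, 2.0], [0.3, 0.7], [1.0, 2.0], [0.5, -1.0])
```

Because each junction picks out "its" frequency, a single chain processes all the input tones in parallel, which is what removes the need for digitization.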
Tuning the Dzyaloshinskii-Moriya interaction (DMI) using electric (E)-fields in magnetic devices has opened up new perspectives for controlling the stabilization of chiral spin structures. Recent efforts have used voltage-induced charge redistribution at magnetic/oxide interfaces to modulate the DMI. This approach is attractive for active devices but tends to be volatile, making it energy-demanding. Here we demonstrate nonvolatile E-field manipulation of the DMI by ionic-liquid gating of Pt/Co/HfO2 ultrathin films. The E-field effect on the DMI is linked to the migration of oxygen species from the HfO2 layer into the Co and Pt layers and their subsequent anchoring. This effect permanently changes the properties of the material, showing that E-fields can be used not only for local gating in devices but also as a material design tool for post-growth tuning of the DMI.
Recently, there has been impressive progress in the field of artificial intelligence. A striking example is AlphaGo, an algorithm developed by Google DeepMind, which defeated the world champion Lee Sedol at the game of Go. However, in terms of power consumption, the brain remains the absolute winner, by four orders of magnitude. Indeed, today, brain-inspired algorithms run on our current sequential computers, which have a very different architecture than the brain. If we want to build smart chips capable of cognitive tasks with low power consumption, we need to fabricate, on silicon, massively parallel networks of artificial synapses and neurons, bringing memory close to processing. The aim of the presented work is to deliver a new breed of bio-inspired magnetic devices for pattern recognition. Their functionality is based on the magnetic reversal properties of an artificial spin ice in a Kagome geometry, for which the magnetic switching occurs by avalanches.
Spin torque magnetic memory (ST-MRAM) is currently under intense academic and industrial development, as it features non-volatility, high write and read speed, and high endurance. However, one of its great challenges is the probabilistic nature of programming magnetic tunnel junctions, which imposes significant circuit or energy overhead for conventional ST-MRAM applications. In this work, we show that in unconventional computing applications, this drawback can actually be turned into an advantage. First, we show that conventional magnetic tunnel junctions can be reinterpreted as stochastic "synapses" that can be the basic element of low-energy learning systems. System-level simulations on a task of vehicle counting highlight the potential of the technology for learning systems. We investigate in detail the impact of magnetic tunnel junctions' imperfections. Second, we introduce how intentionally superparamagnetic tunnel junctions can be the basis for low-energy, fundamentally stochastic computing schemes, which harness part of their energy from thermal noise. We give two examples built around the concepts of synchronization and Bayesian inference. These results suggest that the stochastic effects of spintronic devices, traditionally interpreted by electrical engineers as a drawback, can be reinvented as an opportunity for low-energy circuit design.
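The stochastic-"synapse" reinterpretation can be sketched with a toy simulation (illustrative numbers, not measured device statistics): a write pulse switches each junction only with some probability p, so a population of binary junctions, or repeated pulses on one junction, realizes an effective analog weight without any analog storage:

```python
# Toy model of probabilistic MTJ programming as a stochastic synapse
# (illustrative switching probability, not device data).
import numpy as np

rng = np.random.default_rng(0)

def stochastic_write(states, p_switch):
    """Apply one programming pulse toward the +1 state; each junction
    still in -1 flips with probability p_switch (the probabilistic
    write that conventional ST-MRAM must fight against)."""
    flip = (states < 0) & (rng.random(states.shape) < p_switch)
    states[flip] = 1
    return states

n = 10000
states = -np.ones(n)              # all junctions start in the -1 state
for _ in range(3):                # three pulses with p_switch = 0.4 each
    stochastic_write(states, 0.4)
# Fraction switched after k pulses: 1 - (1 - p)^k = 1 - 0.6**3 = 0.784
frac_up = np.mean(states > 0)
```

The ensemble average behaves like a tunable analog quantity even though every individual device is binary and unreliable, which is the sense in which the drawback becomes a resource.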
The brain displays many features typical of non-linear dynamical networks, such as synchronization or chaotic behaviour. These observations have inspired a whole class of models that harness the power of complex non-linear dynamical networks for computing. In this framework, neurons are modeled as non-linear oscillators, and synapses as the coupling between oscillators. These abstract models are very good at processing waveforms for pattern recognition or at generating precise time sequences useful for robotic motion. However, there are very few hardware implementations of these systems, because large numbers of interacting non-linear oscillators are needed. In this talk, I will show that coupled spin-torque nano-oscillators are very promising for realizing cognitive computing at the nanometer and nanosecond scale, and will present our first results in this direction.
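The oscillator-network picture can be made concrete with the standard Kuramoto model, a generic abstraction rather than a spin-torque oscillator simulation (frequencies, coupling strength, and network size here are arbitrary): above a critical coupling, the oscillators synchronize, which is visible in the order parameter r approaching 1:

```python
# Kuramoto model: the canonical toy for coupled-oscillator synchronization
# (generic abstraction, not a spin-torque nano-oscillator model).
import numpy as np

def kuramoto(omega, K, dt=0.01, steps=5000):
    """Integrate d(theta_i)/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)
    and return the final order parameter r = |<exp(i*theta)>|;
    r -> 1 indicates synchronization."""
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, size=len(omega))
    N = len(omega)
    for _ in range(steps):
        # pairwise phase differences: element [i, j] = theta_j - theta_i
        coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dt * (omega + coupling)
    return abs(np.mean(np.exp(1j * theta)))

omega = np.linspace(-0.5, 0.5, 20)   # spread of natural frequencies
r_weak = kuramoto(omega, K=0.1)      # below threshold: incoherent
r_strong = kuramoto(omega, K=2.0)    # above threshold: synchronized
```

In oscillator-based computing schemes, it is precisely this coupling-controlled transition between incoherent and synchronized states that is used to encode and process information.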
The cascading of logic gates is one of the primary challenges for spintronic computing, as there is a need to dynamically create magnetic fields. Spin-diode logic provides this essential cascading, as the current through each spin-diode is modulated by a magnetic field created by the current through other spin-diodes. This logic family can potentially be applied to any device exhibiting strong positive or negative magnetoresistance, and allows for the creation of circuits with exceptionally high performance. These novel circuit structures provide an opportunity for spintronics to replace CMOS in general-purpose computing.