We often talk about how traditional computing is approaching its limits. There's a wall we can't move past without making some truly enormous changes to the way we structure computers. How about making a computer that works a little more like the human brain? So, what is neuromorphic computing?
The concept of designing and engineering computer chips that use the same physics of computation used by our own nervous system is called Neuromorphic Computing. This is different from an artificial neural network, which is a program run on a normal computer that mimics the logic of how a human brain thinks.
Neuromorphic computing (the hardware version) and neural networks (the software version) can work together because as we make progress in both fields, neuromorphic hardware will probably be the best option to run neural networks on. But here, we are going to focus on neuromorphic computing and the really exciting strides that have been made in this field in the past year.
How does it work?
We know that traditional computers think in binary. Everything is either a ‘1’ or ‘0’, a ‘yes’ or a ‘no’. You only have two options, so the code we use and the questions we ask these kinds of computers must be structured in a very rigid way. Neuromorphic computing works a little more flexibly. Instead of using an electric signal to mean ‘1’ or ‘0’, designers of these new chips want to make their computer’s neurons talk to each other the way biological neurons do.
To do this, you need a precise electric current that flows across a synapse, the space between neurons. Depending on the number and kind of ions that arrive, the receiving computer neuron is activated in some way, giving you many more computational options than a basic yes or no. This ability to transmit a gradient of signal strength from neuron to neuron, and to have them all working simultaneously, means that neuromorphic chips could eventually be far more energy efficient than our normal computers, especially for really complicated tasks.
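As a rough software analogy (a toy model, not a description of any particular chip), a spiking neuron can be sketched as a leaky integrator: graded input current accumulates in a membrane potential, which leaks over time and fires a spike only when it crosses a threshold. The threshold and leak values below are arbitrary illustrations:

```python
def simulate_lif(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential
    accumulates graded input current, leaks a little each step,
    and emits a spike (1) when it crosses the threshold, then resets."""
    potential = 0.0
    spikes = []
    for current in currents:
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak input fires rarely; a stronger input fires often.
weak = simulate_lif([0.2] * 10)
strong = simulate_lif([0.6] * 10)
```

Notice how the strength of the input shows up not as a single 1 or 0 but as how often the neuron fires, which is exactly the kind of gradient a plain binary signal can't carry.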
To realize this exciting potential, we need new materials because what we are using in our computers today isn’t going to cut it. The physical properties of something like silicon, for example, make it hard to control the current between artificial neurons. It just kind of bleeds all over the chip with no organization.
So, a new design from an MIT team uses two materials, single-crystalline silicon and silicon germanium, layered on top of one another. By applying an electric field to this device, you get a well-controlled flow of ions.
A team in Korea is investigating other materials. They used tantalum oxide, which gives them precise control over the flow of ions and is even more durable. A third team, in Colorado, is using magnets to precisely control the way the computer neurons communicate. These advances in the actual architecture of neuromorphic systems are all working toward getting us to a place where the neurons on these chips can 'learn' as they compute.
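One common way to model this kind of 'learning while computing' in spiking systems is spike-timing-dependent plasticity (STDP): a synapse gets stronger when the sending neuron fires just before the receiving one, and weaker when the order is reversed. Here is a minimal sketch; the learning rate and time constant are hypothetical, not taken from any of the devices above:

```python
import math

def stdp_update(weight, t_pre, t_post, rate=0.1, tau=20.0):
    """Spike-timing-dependent plasticity: strengthen the synapse when
    the presynaptic spike precedes the postsynaptic one, weaken it when
    the order is reversed. Spike times are in milliseconds; the effect
    decays exponentially as the spikes get further apart."""
    dt = t_post - t_pre
    if dt > 0:       # pre fired first -> potentiation
        weight += rate * math.exp(-dt / tau)
    elif dt < 0:     # post fired first -> depression
        weight -= rate * math.exp(dt / tau)
    return weight

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)  # pre before post: weight rises
w = stdp_update(w, t_pre=30.0, t_post=25.0)  # post before pre: weight falls
```

The learning rule lives entirely in the timing of the spikes, so the synapse adapts while the network is computing, with no separate training phase.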
Software neural networks have been able to do this for a while, but it's a new advance for physical neuromorphic devices, and these experiments are showing promising results. Another leap in performance comes from a team at the University of Manchester, who have taken a different approach. Their system is called "SpiNNaker", which stands for Spiking Neural Network Architecture. While the projects above look to change the materials we build with, the Manchester team uses traditional digital parts, like cores and routers, connecting and communicating with each other in innovative ways.
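Loosely inspired by that design (this is a toy sketch, not SpiNNaker's actual API), the core idea can be shown in a few lines: neurons emit tiny "spike packets" containing only a source id, and a routing table fans each packet out to its targets, which may fire packets of their own. The connectivity, weight, and threshold below are invented for illustration:

```python
from collections import deque

# Hypothetical three-neuron network: who each neuron's spikes go to.
routing_table = {0: [1, 2], 1: [2], 2: []}
potentials = {0: 0.0, 1: 0.0, 2: 0.0}
THRESHOLD = 1.0
WEIGHT = 0.6

def deliver(source_id):
    """Fan a spike packet out along the routing table; any target
    pushed over threshold fires a packet of its own. Returns the ids
    of every neuron that fired, in order."""
    queue = deque([source_id])
    fired = []
    while queue:
        src = queue.popleft()
        fired.append(src)
        for target in routing_table[src]:
            potentials[target] += WEIGHT
            if potentials[target] >= THRESHOLD:
                potentials[target] = 0.0  # reset after firing
                queue.append(target)
    return fired
```

Calling `deliver(0)` twice injects two spikes into neuron 0: the first only raises its targets' potentials, while the second pushes them over threshold, so the activity cascades through the little network the way packets cascade through SpiNNaker's routers.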
UK researchers have shown that they can use SpiNNaker to simulate the behavior of a region of the human cortex. The hope is that a computer that behaves like a brain will give us enough computing power to simulate something as complicated as the brain itself, helping us understand diseases like Alzheimer's. The news is that SpiNNaker has now matched the results obtained from a traditional supercomputer. This is huge, because spiking neural networks offer the possibility of higher speed and more complexity for less energy cost, and with this new finding we see that they are starting to deliver on that promise.
Overall, we're working toward a better understanding of how the brain works in the first place: improving the artificial materials we use to mimic biological systems, and creating hardware architectures that work with and optimize neural algorithms. Changing computer hardware to behave more like the human brain is one of the few options we have for continuing to improve computer performance, and for getting computers to learn and adapt the way humans do.