What Is Neuromorphic Computing?

Posted by Eyituoyo Ogbemi


There are many types and styles of artificial intelligence, but there's a key difference between the branch of programming that looks for interesting solutions to pertinent problems and the branch of science that seeks to model and simulate the functions of the human brain. Neuromorphic computing, which includes the production and use of neural networks, deals with testing the efficacy of concepts of how the brain performs its functions -- not just reaching decisions, but memorizing information and even deducing facts.


Both literally and practically, "neuromorphic" means "taking the form of the brain." The key word here is "form," mainly because so much of AI research deals with simulating, or at least mimicking, the function of the brain. The engineering of a neuromorphic device involves the development of components whose functions are analogous to parts of the brain, or at least to what such parts are believed to do. These components are not brain-shaped, of course, yet like the valves of an artificial heart, they do fulfill the roles of their organic counterparts. Some architectures go so far as to model the brain's perceived plasticity (its ability to modify its own form to suit its function) by provisioning new components based on the needs of the tasks they're currently running.


The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame.


A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called “brittleness” of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to address novel situations and abstraction to automate ordinary human activities.

 

 

Neuromorphic Computing Research Focus


The key challenges in neuromorphic research are matching human flexibility and the ability to learn from unstructured stimuli, all with the energy efficiency of the human brain. The computational building blocks within neuromorphic computing systems are logically analogous to neurons. Spiking neural networks (SNNs) are a novel model for arranging those elements to emulate the natural neural networks that exist in biological brains.


Each “neuron” in the SNN can fire independently of the others, and in doing so, it sends pulsed signals to other neurons in the network that directly change the electrical states of those neurons. By encoding information within the signals themselves and their timing, SNNs simulate natural learning processes by dynamically remapping the synapses between artificial neurons in response to stimuli.
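
To make those firing mechanics concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the simple spiking unit many SNN simulators build on. The function name and all constants (threshold, time constant, drive current) are illustrative choices, not taken from any particular neuromorphic system:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Integrate an input current over time; emit a spike (1)
    whenever the membrane potential crosses the threshold."""
    v = v_rest
    spikes = []
    for i in input_current:
        # Leaky integration: the potential decays toward rest
        # while being driven upward by the input current.
        v += (-(v - v_rest) + i) * (dt / tau)
        if v >= v_threshold:
            spikes.append(1)   # fire a pulse to downstream neurons
            v = v_reset        # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive produces a regular spike train; the information
# carried downstream is the timing of the pulses, not a number.
spike_train = simulate_lif(np.full(200, 1.5))
print("Spikes fired:", spike_train.sum())
```

What a downstream neuron receives is not a numeric activation but a train of pulses, and their timing is exactly the signal the paragraph above describes.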


While building such a device may inform us about how the mind works, or at least reveal certain ways in which it doesn't, the actual goal of such an endeavor is to produce a machine that can "learn" from its inputs in ways that a digital computer component may not be able to. The payoff could be an entirely new class of machine capable of being "trained" to recognize patterns using far, far fewer inputs than a digital neural network would require.
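
One commonly cited mechanism for that kind of input-driven learning is spike-timing-dependent plasticity (STDP), a local rule that strengthens or weakens a synapse based purely on the relative timing of two spikes. The sketch below is a toy version under assumed parameters; the amplitudes and time constant are illustrative, and real neuromorphic chips implement many variations:

```python
import numpy as np

# Toy spike-timing-dependent plasticity (STDP) rule.
# Constants are illustrative, not taken from any real chip.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # plasticity time constant (ms)

def stdp_delta(t_pre, t_post):
    """Weight change for one pre/post spike pair.
    Pre fires before post -> causal -> strengthen the synapse;
    post fires before pre -> anti-causal -> weaken it."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)

print(stdp_delta(10.0, 15.0))   # positive: synapse strengthened
print(stdp_delta(15.0, 10.0))   # negative: synapse weakened
```

Because the update depends only on locally observable spike times, no global error signal or repeated passes over a large labeled dataset are required, which is one reason such hardware might learn from far fewer inputs.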


"One of the most appealing attributes of these neural networks is their portability to low-power neuromorphic hardware," reads a September 2018 IBM neuromorphic patent application [PDF], "which can be deployed in mobile devices and native sensors that can operate at extremely low power requirements in real-time. Neuromorphic computing demonstrates an unprecedented low-power computation substrate that can be used in many applications."


Although Google has been a leader in recent years in both the research and production of hardware called tensor processing units (TPUs), dedicated specifically to neural network-based applications, the neuromorphic branch is an altogether different beast. Specifically, it's not about the evaluation of any set of data in terms of discrete numeric values, such as scales from 1 to 10, or percentage grades from 0 to 100. Its practitioners have a goal in mind other than to solve an equation, or simply to produce more software. They seek to produce a cognition machine -- one that may lend credence to, if not altogether prove, a rational theory for how the human mind may work. They're not out to capture the king in six moves. They're in this to build mechanisms.

 

 

The Future Of Neuromorphic Computing

 

At any one time in history, there is a theoretical limit to the processing power of a supercomputer -- a point after which increasing the workload yields no more, or no better, results. That limit has been shoved forward in fits and starts by advances in microprocessors, including the introduction of GPUs (formerly just graphics processors) and Google's design for TPUs. But there may be a limit to the limit's extension, as Moore's Law only works when physics gives you room to scale smaller.


Neuromorphic engineering points to the possibility, if not yet probability, of a massive leap forward in performance, by way of a radical alteration of what it means to infer information from data. Like quantum computing, it relies upon a force of nature we don't yet comprehend: in this case, the informational power of noise. If all the research pays off, supercomputers as we perceive them today may be rendered entirely obsolete in a few short years, replaced by servers with synthetic, self-assembling neurons that can be tucked into hallway closets, freeing up the space consumed by mega-scale data centers for, say, solar power generators.




Examples Of Neuromorphic Engineering Projects

 

Today, there are several academic and commercial experiments underway to produce working, reproducible neuromorphic models, including:


  • SpiNNaker is a low-grade supercomputer developed by engineers with Germany's Jülich Research Centre's Institute of Neuroscience and Medicine, working with the UK's Advanced Processor Technologies Group at the University of Manchester. Its job is to simulate the functions of so-called cortical microcircuits, albeit on a slower time scale than their biological counterparts. In August 2018, SpiNNaker conducted what is believed to be the largest neural network simulation to date, involving about 80,000 neurons connected by some 300 million synapses.
  • Intel is experimenting with what it describes as a neuromorphic chip architecture, called Loihi (lo · EE · hee). Intel has been reluctant to share images that would reveal elements of Loihi's architecture, though based on what information we do have, Loihi would be producible using a form of the same 14 nm lithography techniques Intel and others employ today. First announced in September 2017, and officially premiered the following January at CES 2018 by then-CEO Brian Krzanich, Loihi's microcode includes statements designed specifically for training a neural net. It's designed to implement a spiking neural network, whose model adds more brain-like characteristics.
  • IBM maintains a Neuromorphic Devices and Architectures Project involved with new experiments in analog computation. In a research paper, the IBM team demonstrated how its non-volatile phase-change memory (PCM) accelerated the feedback, or backpropagation, algorithm associated with neural nets. These researchers are now at work determining whether PCM can be utilized in modeling synthetic synapses, replacing the static RAM-based arrays used in earlier designs such as TrueNorth and NeuroGrid (which were not neuromorphic). A toy simulation of the in-memory multiply-accumulate idea behind PCM arrays appears after this list.
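
As promised above, the following toy simulation illustrates why analog in-memory arrays like PCM crossbars are attractive: if weights are stored as device conductances, a whole matrix-vector product happens in one step as currents sum on the output wires, at the cost of analog imprecision. This is a sketch of the concept only; the noise level is invented, and real arrays encode signed weights as differences between pairs of positive conductances, a detail skipped here:

```python
import numpy as np

# Toy model of an analog in-memory (crossbar) multiply-accumulate.
# Weights live "in" the memory as conductances; the matrix-vector
# product is performed by physics (Ohm's and Kirchhoff's laws),
# which we simulate digitally here with added programming noise.
rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 8))                 # target weight matrix
programming_noise = rng.normal(scale=0.05, size=weights.shape)
conductances = weights + programming_noise        # imperfectly programmed devices

x = rng.normal(size=8)                            # input voltages
currents = conductances @ x                       # output currents = analog result

print("ideal :", weights @ x)
print("analog:", currents)                        # close, but noisy
```

Part of the appeal for backpropagation-style training is that the same array can serve both the forward pass and the gradient products, avoiding the energy cost of shuttling weights between memory and processor.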
