
Neuromorphic Computing: Meet the Future of AI

Researchers are trying to leverage insights from neuroscience to build an artificial human brain. Will they be able to do it?

By David Russo

You don’t have to be a part of the tech industry to know that a lot of people are pointing at artificial intelligence as today’s defining technology. What’s more, many experts are saying that we’re living in the AI era (we’ve even said that here in The Daily Bundle!). So, it’s not really surprising that almost the entire industry is quasi-obsessed with it.

There are many reasons to justify that compulsive focus on AI, but I think it mostly boils down to the long list of promised benefits associated with this tech. According to many enthusiasts, artificial intelligence can reshape entire industries, usher in a trove of new products and services, and completely redefine our lives. It sounds too good to be true.

Well, that’s because it kinda is. While I’m not debating the many advantages of using AI for a lot of our daily activities (especially in a business context), the reality is that AI isn’t as sophisticated as we like to think it is. The main problem lies in the underlying approach of current AI algorithms, which rely almost entirely on the training they receive.

I’m not just talking about the complicated balance AI developers have to strike to keep their algorithms from falling prey to overfitting or underfitting (at least, that’s not all I’m hinting at). I’m also talking about the limited autonomy those algorithms can realistically aspire to, something well exemplified by the continued struggles of self-driving cars.

In other words, these algorithms might seem like they are learning but, in reality, they are adapting what they “know” from their training to the new contexts they encounter. And that’s the limiting aspect of it all. Why? Because there’s no way a development team can train their AI algorithms on every possible situation they might face.

Does that mean that AI isn’t the tech of the future, as many of us have said in the past? No, it really doesn’t. But to truly earn that title, AI engineers need to radically change how they build their algorithms. Fortunately, they are already doing so in the form of neuromorphic computation.

What’s Neuromorphic Computation?

Neuromorphic computation (also known as neuromorphic engineering) aims to replicate the way the brain works through a series of interconnected chips. Each chip “behaves” like a neuron, organizing itself around, communicating with, and interacting with the other chips. Researchers working in this field are trying to leverage insights from neuroscience to build an artificial human brain.

I know that it sounds insane, incredible, bogus, even somewhat dangerous. But that’s the road researchers believe will take us forward in our AI-related ambitions. More importantly, we’re already going down that road. Leading the way is Intel with its Loihi research chip and its newly released open-source framework, Lava.

That isn’t to say we’ll have neuromorphic computers any time soon. The new Loihi 2 chip is the most powerful of its kind, and it “only” boasts 1 million “neurons.” To put that in perspective, the human brain has around 100 billion neurons. Intel hopes to scale up that architecture but understands that’s extremely hard, especially when it comes to developing software for it. That’s why it released Lava: to entice engineers into building apps for Loihi.
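To give a flavor of what programming for such a chip looks like, here’s a minimal sketch of wiring up two small populations of spiking neurons in Lava. The imports, process names (LIF, Dense), and run configuration follow Lava’s public getting-started tutorials, but treat them as assumptions: exact module paths and parameters may differ across Lava versions.

```python
import numpy as np

# These imports follow Lava's public tutorials; exact paths may vary by version.
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Two tiny populations of leaky integrate-and-fire "neurons"
# connected by a dense synaptic weight matrix.
source = LIF(shape=(3,))               # 3 presynaptic neurons
synapses = Dense(weights=np.eye(2, 3)) # 2x3 weight matrix
target = LIF(shape=(2,))               # 2 postsynaptic neurons

# Spikes flow asynchronously from one process to the next.
source.s_out.connect(synapses.s_in)
synapses.a_out.connect(target.a_in)

# Run for a fixed number of timesteps on the CPU simulator; real use would
# configure biases/thresholds and, on Loihi hardware, a different run config.
source.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
source.stop()
```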

Even with this reality check, neuromorphic computation is a truly exciting premise. In fact, experts argue that this is the only way in which we can really achieve the AI goals we’re setting for ourselves.

I’m talking about those objectives that go beyond the mere analysis of big datasets, especially the ones related to autonomous robots that can think and learn for themselves. That’s because neuromorphic architecture ditches the synchronous and structured processing of CPUs and GPUs to give way to asynchronous event-based spikes. 

Doing that allows neuromorphic chips to process information much faster and in a less data-intensive manner, which is key to dealing with ambiguous information in real time. In fact, neuromorphic systems are set to be crucial for the next generation of AI, as they’ll make algorithms more adept at probabilistic computing, which involves noisy and uncertain data. Neuromorphic computing could also theoretically help with causality and non-linear thinking, but that’s nothing more than an engineer’s dream in today’s landscape.
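To make the contrast concrete, here’s a small, self-contained Python sketch of a leaky integrate-and-fire neuron (not tied to any real chip or framework; all constants are illustrative). Unlike a conventional layer that produces an output on every clock tick, it stays silent and only emits a spike event when its internal potential crosses a threshold.

```python
# A toy leaky integrate-and-fire (LIF) neuron, illustrating event-based spiking.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Yield (timestep, 'spike') events only when the potential crosses threshold."""
    potential = 0.0
    for t, current in enumerate(input_current):
        potential = potential * leak + current  # integrate input, leak over time
        if potential >= threshold:
            yield (t, "spike")                  # emit an event...
            potential = 0.0                     # ...and reset

# Mostly-quiet input: downstream work happens only at the few spike events,
# instead of on every timestep as in a synchronous CPU/GPU pipeline.
inputs = [0.0, 0.1, 0.0, 0.9, 0.3, 0.0, 0.0, 1.2, 0.0, 0.0]
print(list(lif_neuron(inputs)))   # -> [(4, 'spike'), (7, 'spike')]
```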

What Are The Challenges With Neuromorphic Computing?

If you hadn’t heard about neuromorphic computing until now, you’re not alone. While it isn’t a particularly novel concept, it wasn’t until recently that researchers could start working on hardware capable of actually bringing it to life. That’s not all. Because neuromorphic systems are so complex, simply understanding them is a challenge, let alone putting them to work.

That means that the first challenge of neuromorphic computing is getting more visibility. Engineers working in the AI field might have heard about it, but most of them are still working with the traditional approach to AI algorithms. If neuromorphic computing is to gain critical mass, it’ll need as many creative minds pushing for it as it can possibly get.

Unfortunately, that’s far from the only challenge. As you can probably guess, developing a replica of the human brain is a tall order. We still haven’t completely figured out how the brain works, so trying to build an artificial one with those pieces of the puzzle missing is tricky. While we understand the brain better than ever, neurobiologists and neuroscience as a whole still have plenty of mysteries to solve.

Luckily, the work of researchers building neuromorphic chips could foster a mutually beneficial relationship with neurobiologists. As developers delve deeper into the construction of their “artificial brain,” neuroscientists can test existing hypotheses and formulate new ones. In turn, neurobiologists can inform researchers of new findings that suggest fresh approaches for their neuromorphic chips.

Another huge challenge is the dramatic change neuromorphic computing will bring to how we understand computing itself. Rather than following the von Neumann model (which separates memory and processing), neuromorphic computing will usher in its own norms.

A great example of this is how modern computers deal with visual input. Following the von Neumann model, computers today see an image as a series of individual frames. Neuromorphic computing would throw that notion away in favor of encoding information as changes in the visual field over time. It’s a radical departure from our current standards, one that will force engineers to learn to think within this new paradigm.
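As a rough illustration, the sketch below contrasts a frame-based representation with one that only records per-pixel brightness changes over time. It’s a simplification of how event-based (neuromorphic) vision sensors encode a scene, not any specific product’s format.

```python
import numpy as np

# Frame-based view: every pixel of every frame is stored and processed,
# even when nothing in the scene has changed.
frames = np.array([
    [[10, 10], [10, 10]],   # frame at t=0
    [[10, 10], [10, 10]],   # t=1: identical, but still fully re-processed
    [[10, 80], [10, 10]],   # t=2: one pixel brightens
])

# Event-based view: emit (t, row, col, polarity) only where brightness changes
# by more than a threshold -- a toy version of encoding "changes in the visual
# field over time" instead of whole frames.
def to_events(frames, threshold=20):
    events = []
    for t in range(1, len(frames)):
        diff = frames[t].astype(int) - frames[t - 1].astype(int)
        for (row, col), change in np.ndenumerate(diff):
            if abs(change) > threshold:
                events.append((t, row, col, 1 if change > 0 else -1))
    return events

print(to_events(frames))   # -> [(2, 0, 1, 1)]: one event instead of three full frames
```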

As if that weren’t enough, neuromorphic computing will need new programming languages and frameworks, and more powerful memory, storage, and sensory devices that take full advantage of the new architecture. Since the relationship between memory and processing will change, so will the integration between the devices that are part of those processes. As you can see, this is a paradigm shift, and we’re at its early stages.

A New Way Forward

Neuromorphic computing is starting to attract attention at a time when another promising path has already been touted as AI’s next big thing: quantum computing. But unlike quantum computing, neuromorphic computing’s requirements aren’t that demanding. Where quantum computers need temperatures close to absolute zero and enormous amounts of power, neuromorphic computers can work in normal conditions.

That naturally tips the scales toward neuromorphic computing, especially given its practicality and the potential to integrate this architecture into all kinds of devices. We shouldn’t get ahead of ourselves, though. Both quantum computing and neuromorphic computing are still far from commercial application, so we’ll have to make do with the AI we have today.

However, it’s understandable if you feel excited about the prospect of neuromorphic computing. Truly intelligent devices and robots have the potential to completely change the way we live and, while they won’t be available in the foreseeable future, neuromorphic computing offers us a new way forward. In fact, neuromorphic systems feel like the true future of AI, as they promise to finally fulfill all of our AI-related dreams. Only time will tell if that’s the case.

If you enjoyed this, be sure to check out our other AI articles.


By David Russo

David Russo is Director of Business Development at BairesDev. With over 15 years of experience in business development within the IT industry, he helps develop and expand client, partner, and inter-office relationships while assisting with strategic decision-making.
