Let's start with why
Empower people with real-time, energy-efficient artificial systems that self-adapt to changes at the edge. That requires embedded systems that can learn online, efficiently, and without supervision. This is why I'm inspired by the only known system that evolved to do so: our own brain.
I'm Lyes Khacef, a senior research scientist in neuromorphic computing at Sony. I try to understand the computational principles of the brain to inform the modeling and hardware implementation of self-organizing neural networks. The main question is: what is the right level of abstraction from biology to get the best performance in artificial systems?
One of the fundamental computational paradigms from biology is the use of local plasticity mechanisms, which address the latency and energy-efficiency constraints of online learning, in contrast with recent deep learning methods that rely on global, supervised gradient descent. The global behavior of these networks emerges from local interactions, without a global controller or external supervisor, and can solve complex tasks such as multimodal classification. I think this self-organizing property is a key feature that leads to fast, efficient, and adaptive systems. It also causes hard headaches sometimes... but I believe it's worth it.
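As a toy illustration of learning from purely local interactions (not the ReSOM model itself), a classical Kohonen-style self-organizing map trains with exactly such a rule: each unit moves toward the current input in proportion to its grid distance from the best-matching unit, with no global error signal or supervisor. A minimal sketch in NumPy, with illustrative hyperparameters of my own choosing:

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal self-organizing map with a purely local update rule."""
    rng = np.random.default_rng(seed)
    h, w = grid_shape
    n_units, dim = h * w, data.shape[1]
    weights = rng.random((n_units, dim))
    # 2-D grid coordinates of each unit, used by the neighborhood function
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # linearly decaying learning rate and neighborhood radius
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # best-matching unit: the unit whose weights are closest to the input
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the BMU, measured on the grid
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            nbh = np.exp(-d2 / (2.0 * sigma ** 2))
            # local update: each unit moves toward the input, scaled by neighborhood
            weights += lr * nbh[:, None] * (x - weights)
            t += 1
    return weights
```

After training on clustered data, distinct clusters end up represented by distinct regions of the grid, even though no unit ever saw anything beyond its own input distance and its neighbors' activity. ReSOM builds on this principle with reentrant connections between maps of different modalities.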
Reentrant Self-Organizing Map (ReSOM)
Our brain-inspired computing approach attempts to reconsider both AI and the von Neumann architecture. Both are formidable tools responsible for digital and societal revolutions, but they have also become intellectual bottlenecks, linked to the ever-present desire to keep the system under control. The brain remains our only reference in terms of intelligence: we are still learning how it works, but it seems to be built on a very different paradigm, in which its developmental autonomy gives it an efficiency we have not yet attained in computing.
We introduce the Reentrant Self-Organizing Map (ReSOM) as a step toward brain-inspired learning in embedded systems.