Brains perform massively parallel, real-time learning that often depends on incomplete and noisy input, at an ultra-low energy budget. Leveraging the underpinnings of cortical processing, along with the diversity of neuronal units, can enable lifelong-learning AI systems. In this research, we abstract the behavior of different cortical regions, where the excitatory networks generate further excitation (simple and predictable) and the inhibitory networks generate nonlinear effects (complex). The core features of these algorithms include hierarchy, sparse distributed representation, random projections, and plasticity. We also study the behavior of machine learning algorithms infused with these features. Applications of interest include one-shot learning, anomaly detection, video activity recognition, and speech recognition.
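One of the core features above, sparse distributed representation, can be illustrated with a minimal sketch. The representation size and activity level below are assumed values chosen for illustration; the point is that overlap between sparse patterns degrades gracefully under noisy, incomplete input, while unrelated patterns barely overlap at all.

```python
import numpy as np

rng = np.random.default_rng(0)

n, w = 2048, 40  # representation size and number of active bits (assumed values)

def random_sdr():
    """A sparse distributed representation: a few active bits out of many."""
    sdr = np.zeros(n, dtype=bool)
    sdr[rng.choice(n, size=w, replace=False)] = True
    return sdr

def overlap(a, b):
    """Similarity as the count of shared active bits."""
    return int(np.sum(a & b))

a = random_sdr()

# Simulate incomplete input: drop 25% of the active bits
noisy = a.copy()
dropped = rng.choice(np.flatnonzero(a), size=w // 4, replace=False)
noisy[dropped] = False

b = random_sdr()  # an unrelated pattern

# The corrupted copy still overlaps strongly with the original,
# while an unrelated random SDR overlaps almost not at all.
high, low = overlap(a, noisy), overlap(a, b)
```

This noise tolerance is one reason sparse codes are attractive for learning from incomplete and noisy input.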
Our research on random projection networks (RPNs) is driven by the fundamental question of how to take advantage of randomness and variability rather than overdesigning to suppress it. Specifically, RPNs employ random, fixed synaptic connections, which yield a computationally light learning algorithm and a generalizable classification layer. We take a two-fold approach. First, we advance the state of the art in RPNs with new deep and hierarchical networks for solving complex real-world tasks. Second, we infuse local plasticity mechanisms and simplified learning rules for custom AI platform design as well as for rapid prototyping on embedded platforms.
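The computational lightness of this idea can be seen in a minimal single-layer sketch (not our full deep/hierarchical networks): the hidden weights are drawn randomly once and never trained, and only the linear readout is fit, here with a single least-squares solve on synthetic toy data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two noisy 2-D classes (synthetic, for illustration only)
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

# Fixed random projection: these synaptic weights are never trained
n_hidden = 64
W = rng.normal(0, 1, (2, n_hidden))
b = rng.normal(0, 1, n_hidden)
H = np.tanh(X @ W + b)  # random nonlinear features

# Only the readout layer is learned, via one least-squares solve
beta = np.linalg.lstsq(H, y, rcond=None)[0]

pred = (H @ beta > 0.5).astype(float)
acc = float(np.mean(pred == y))
```

Because training reduces to a linear solve over fixed random features, this style of network is cheap enough for rapid prototyping on embedded platforms.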
Spiking neural networks operate on biologically plausible neurons and use discrete, event-driven spikes to compute and transmit information. Though a substantial portion of cortical processing uses spikes, we currently lack an in-depth understanding of how to instantiate such capabilities in silico. Here, we investigate how populations of spiking neurons compute and communicate information under different plasticity mechanisms, such as short-term/long-term potentiation, neurogenesis, intrinsic plasticity, and attention. We posit that such neural computing substrates yield robust information processing and energy efficiency for machine learning problems. Furthermore, we study how these networks can be realized efficiently on silicon substrates.
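The event-driven computation described above can be sketched with a single leaky integrate-and-fire neuron, a common simplified spiking model (the parameter values below are assumptions for illustration): membrane potential integrates input, leaks toward rest, and emits a discrete spike on crossing threshold.

```python
# Leaky integrate-and-fire neuron (assumed parameter values)
dt, tau = 1.0, 20.0      # time step and membrane time constant (ms)
v_th, v_reset = 1.0, 0.0  # spike threshold and reset potential

def simulate_lif(input_current, steps=200):
    """Integrate input with leak; emit discrete spike times on threshold crossing."""
    v = 0.0
    spike_times = []
    for t in range(steps):
        v += (dt / tau) * (-v + input_current)  # leaky integration (Euler step)
        if v >= v_th:                           # threshold crossing -> spike event
            spike_times.append(t)
            v = v_reset                         # reset after the spike
    return spike_times

# Stronger input drives a higher spike rate (a simple rate code)
weak, strong = simulate_lif(1.2), simulate_lif(2.0)
```

Information here is carried by discrete spike events rather than continuous activations, which is what makes event-driven silicon realizations attractive for energy efficiency.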
When designing neuromorphic systems, a significant challenge is how to build physical neuronal substrates with adaptive learning. Recently, non-volatile memory devices with history-dependent conductance modulation (memristors) have been demonstrated to be well suited for synaptic and neuronal operations. We propose hybrid CMOS/memristor architectures, known as neuromemristive systems, with on-device learning. We are interested in several questions in this study: What memristor features are suitable for spiking and non-spiking networks? How can we design learning rules that exploit variability in the crossbars? How can we train the memristors faster?
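The key computational primitive of a memristor crossbar can be sketched numerically (the weight ranges and 5% device variability below are illustrative assumptions, not measured device data): synaptic weights are stored as conductances, and applying row voltages performs an analog vector-matrix multiply in one step, via Ohm's law per device and Kirchhoff's current law per column.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ideal synaptic weights mapped to memristor conductances (assumed ranges)
W_ideal = rng.uniform(0.1, 1.0, (4, 3))  # 4 input rows x 3 output columns

# Device-to-device variability: programmed conductance deviates from target
G = W_ideal * (1 + rng.normal(0, 0.05, W_ideal.shape))

# Apply voltages to the rows; each column current sums the per-device currents
# (Ohm's law per crosspoint, Kirchhoff's current law per column wire)
v = np.array([0.2, 0.0, 0.5, 0.1])
i_out = v @ G  # analog vector-matrix multiply in a single step

# The crossbar output tracks the ideal computation within the variability
error = np.abs(i_out - v @ W_ideal)
```

The small residual error is exactly the kind of variability that crossbar-aware learning rules can absorb, or even exploit, rather than requiring it to be engineered away.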