Research

Emerging Devices that Emulate Biology

A key aspect of our research is the development and application of beyond-CMOS devices to efficiently implement AI primitives (e.g., synaptic plasticity and neuronal spiking) in hardware.  One of our main focuses has been memristive devices, or memristors, for implementing neuroplastic behavior, especially at the level of synapses.  Memristors can simultaneously store data and perform analog multiplications.  This close coupling of memory and computation helps remove the so-called von Neumann bottleneck, offering the potential for significantly improved energy efficiency relative to load-compute-store architectures.  The majority of our work in this area has been the integration of memristors into neuron, synapse, and training circuits for neural networks, but we have also performed semi-empirical modeling and SPICE model development.  Other devices of interest include three- and four-terminal memristors and biristors for spiking neuron implementations.
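As a rough illustration of the memory-compute coupling described above, the following is a minimal NumPy sketch of an idealized memristor crossbar, assuming perfectly linear devices and no wire resistance; the array size, conductance range, and read voltages are arbitrary illustrative values, not parameters of any particular device we study.

import numpy as np

# Idealized memristor crossbar: the device at row i, column j stores a
# conductance G[i, j].  Driving the rows with a voltage vector produces,
# by Ohm's law and Kirchhoff's current law, column currents equal to a
# matrix-vector product -- an analog multiply-accumulate performed where
# the weights are stored.
rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                            # 4 inputs, 3 outputs
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))    # conductances (siemens)
v = rng.uniform(0.0, 0.2, n_rows)                # read voltages (volts)

i_out = G.T @ v    # column currents (amperes): weights stored in G are
                   # applied in place, with no separate memory fetch
print(i_out)

Because each multiply-accumulate happens at the location where the weight is stored, no data shuttling between a memory and a processor is required, which is the sense in which the von Neumann bottleneck is removed.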

Energy-Efficient Neural Network Topologies and Training Algorithms

A key theme of the Brain Lab's research is using the inherent randomness and other unique properties of emerging devices to design energy-efficient neural network topologies and training algorithms.  One specific focus has been so-called "random projection networks," which are neural networks with random weights and topologies.  This type of network fits a target function (e.g., an image classification task) by pairing linear regression with a large random feature space.  The main advantage of random projection networks is that they can be implemented in hardware with fewer resources than other types of networks.  Our lab also develops novel training algorithms custom-tailored to in-hardware learning.  For example, we have explored the use of stochastic logic to reduce the hardware overhead associated with gradient calculations in supervised learning.  Other topics of interest include spiking neural network hardware, energy-harvesting AI, and perturbation-based learning methods.
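The sketch below illustrates the random-projection idea on a toy regression problem; the tanh nonlinearity, hidden-layer size, and ridge-regularized readout are illustrative assumptions, not a description of our hardware implementation.

import numpy as np

# Random projection network: the input-to-hidden weights are fixed and
# random (never trained); only the linear readout is fit, here by
# regularized least squares.
rng = np.random.default_rng(0)

# Toy regression target: y = sin(3x) on [-1, 1]
X = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * X[:, 0])

n_hidden = 100
W = rng.normal(0.0, 1.0, (1, n_hidden))   # random, untrained projection
b = rng.normal(0.0, 1.0, n_hidden)
H = np.tanh(X @ W + b)                    # large random feature space

# Ridge-regression readout -- the only trained parameters
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
print("train MSE:", np.mean((y_hat - y) ** 2))

Because only the readout layer is trained, the random projection and nonlinearity can be realized with fixed, low-precision circuitry, which is why this topology can be implemented with fewer hardware resources than fully trained networks.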