Humans' capacity to perceive the visual world and infer complex relationships between objects, all while consuming less energy than a light bulb, is far beyond the capabilities of any state-of-the-art computing system. Why? One reason is that conventional computing systems separate data storage from data processing, forcing a large energy expenditure to move data between memory and the CPU. We explore an alternative paradigm in which memory and processing are closely coupled using memristive devices: nanoscale devices that store data in multiple non-volatile conductance states. In addition, our systems operate on both analog and digital signals (as opposed to the purely digital approach taken by conventional computers), allowing more information to be processed with fewer computational resources. Our circuit-level (neuron, synapse, and plasticity circuits), architecture-level (feedforward neural networks, convolutional neural networks, etc.), and system-level designs are informed by neuroscience data (e.g., fMRI) and models collected and developed by our collaborators.
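To make the coupling of memory and processing concrete, the sketch below simulates how a memristive crossbar computes a matrix-vector product in place: each device stores a weight as one of several discrete conductance states, input values are applied as row voltages, and the currents summed on each column give the result by Ohm's and Kirchhoff's laws. All array sizes, conductance ranges, and variable names here are illustrative assumptions, not parameters of the project's actual hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each memristor stores a weight as one of 8 discrete, non-volatile
# conductance states (illustrative range, in siemens).
levels = np.linspace(1e-6, 1e-4, 8)

# A hypothetical 4x3 crossbar: 4 input rows (word lines) by
# 3 output columns (bit lines), each device set to a random level.
G = levels[rng.integers(0, len(levels), size=(4, 3))]

# The input vector is encoded as analog voltages on the rows (volts).
V = np.array([0.2, 0.0, 0.5, 0.1])

# By Ohm's law, device (i, j) passes current V[i] * G[i, j]; by
# Kirchhoff's current law, column j sums these currents. The crossbar
# thus computes I = V @ G in a single analog step, directly where the
# weights are stored -- no data movement between memory and a CPU.
I = V @ G

print(I)  # one analog output current per column
```

The key point the sketch illustrates is that the multiply-accumulate, the dominant operation in neural networks, happens physically inside the memory array rather than in a separate processor.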
The leading model of human vision stresses the importance of the neocortex (the outer "bark" of the brain) and feedforward information flow, in which simple features like edges and shapes are gradually built up into a representation of an entire object. In this project, however, we investigate the roles of subcortical brain regions such as the thalamus and of feedback information flow, as new evidence suggests that both may be critical to perception.