Alexander Ororbia to Work in Collaboration with Google Research

Abstract/Mission:
Backprop-Free Training for Models in Natural Language Processing

For this project, Alexander Ororbia will work in collaboration with Google Research, focusing on developing a backprop-free method for training neural language architectures. Specifically, he is interested in adapting some of his recently proposed procedures to representation learning in natural language processing (NLP), with a focus on unsupervised and semi-supervised learning of useful distributed representations of constructs such as documents. Given the expense of creating labeled data, especially for text, an effective alternative to backprop would be of interest to Google both for reducing annotation costs and for exploiting the parallelism available on large-scale computational resources, i.e., massive clusters of GPUs, CPUs, and/or TPUs: because backprop-free methods rely on local updates rather than a single global, sequential backward pass, their computations can be distributed more readily across such hardware.
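The abstract does not specify which backprop-free procedures are meant. Purely as an illustrative sketch of the general idea, the NumPy snippet below implements random feedback alignment (Lillicrap et al., 2016), one well-known backprop-free credit-assignment scheme in which errors are routed through a fixed random matrix rather than the transposed forward weights; the layer sizes and synthetic data are hypothetical stand-ins, not anything from the project itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer sizes (illustrative assumptions only).
n_in, n_hid, n_out = 32, 64, 8
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output weights
B  = rng.normal(0.0, 0.1, (n_hid, n_out))  # fixed random feedback path

# Synthetic regression data standing in for text features/targets.
X = rng.normal(size=(200, n_in))
Y = rng.normal(size=(200, n_out))

lr = 0.01
for epoch in range(20):
    for x, y in zip(X, Y):
        # Forward pass: tanh hidden layer, linear output.
        h = np.tanh(W1 @ x)
        y_hat = W2 @ h
        e = y_hat - y                      # output error (squared loss)

        # Backprop-free credit assignment: route the error through the
        # fixed random matrix B instead of W2.T (no weight transport,
        # no symmetric backward pass).
        dh = (B @ e) * (1.0 - h ** 2)      # tanh derivative

        # Purely local, outer-product weight updates.
        W2 -= lr * np.outer(e, h)
        W1 -= lr * np.outer(dh, x)

print("final mean squared error:",
      float(np.mean((np.tanh(X @ W1.T) @ W2.T - Y) ** 2)))
```

Because each layer's update depends only on locally available activity and a fixed feedback signal, schemes in this family avoid the sequential backward sweep of backprop, which is the source of the parallelism advantage mentioned above.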
