Our research efforts aim to understand the nature of visual perception so that we can create more visually intelligent machines. This work builds on fundamental concepts in reasoning, prediction, supervised, semi-supervised, and unsupervised learning, and stochastic optimization. Our research involves (i) fundamental computer vision topics such as video analytics, detecting humans and their poses in images, and object detection and tracking; (ii) computer vision applications such as medical image analysis, understanding social dynamics from videos of human interactions, collision detection in self-driving cars, and other vision-related regression problems in videos; and (iii) the intersection of computer vision and graphics, where we aim to model realistic avatars that interact more naturally with humans. We are constantly pushing the boundaries of computer vision techniques across a myriad of problems, such as 3D reconstruction of the heart from MRI images, deception detection from visual cues, understanding group interactions such as those in a volleyball game, and improving STEM classroom learning through video analytics.
Our research spans the spectrum from theory to algorithms to applications. We are interested in developing novel computational methods for large-scale problems where conventional methods are often computationally infeasible. These methods include the development of novel estimators using tools from constrained optimization theory, convex analysis, and Bayesian nonparametrics. We are also interested in statistical problems involving covariance estimation for high-dimensional data, since many classical data analysis methods break down in high dimensions. These methods have numerous applications in areas such as statistical genetics, image processing, computational biology, cognitive science, and natural language processing. On the theory side, we study questions motivated by these applications and develop new, computationally feasible methods that address the problems we have identified.
Faculty and students engaged with computational linguistics—natural language processing and speech processing—aim to advance theoretical and applied research and human language technologies involving text, speech, conversational interaction, and multimodal data.
We focus on problems that deal with multiple intelligent agents interacting with each other and the environment. In particular, we study computational, algorithmic, and axiomatic aspects of (possibly strategic) decision making that influence the behavior of agents, and develop approaches to overcome computational issues which arise in practical applications such as resource allocation, voting and preference aggregation, and in general, social choice.
We focus on the development of novel computational problem solving methods based on abstractions of real-world processes. Specifically, the soft computing techniques we develop draw inspiration from how naturally occurring phenomena behave when adapting to various environmental situations. These techniques have applications in a wide range of fields such as physics, biology, and engineering. Our research in this area includes work in concurrent and distributed evolutionary algorithms, neuroevolution (automated design of neural networks), ant colony optimization, swarm intelligence, and neuro-cognitively motivated machine learning, which focuses on the design of learning algorithms and models strongly guided by principles in cognitive science and neuroscience.
Devroop Kar (advisor: Daniel Krutz & Travis Desell & Alexander Ororbia)
Zohair Raza Hassan (advisor: Edith Hemaspaandra and Stanislaw Radziszowski)
Xiajun Jiang (advisor: Linwei Wang)
Joshua Karns (advisor: Travis Desell)
Nilesh Kumar (advisor: Linwei Wang)
Luke Kurlandski (advisor: Yin Pan & Matt Wright)
Bipin Lekhak (advisor: Linwei Wang)
Huawei Lin (advisor: Weijie Zhao)
Yang Liu (advisor: Daniel Krutz)
Zimeng Lyu (advisor: Travis Desell)
Hannah Miller (advisor: Ivona Bezakova & Edith Hemaspaandra)
Ryan Missel (advisor: Linwei Wang)
Deepak Pandita (advisor: Christopher M. Homan)
Krishna Prasad Neupane (advisor: Qi Yu)
Viet Dung Nguyen (advisor: Alexander Ororbia and Reynold Bailey)
Deep Shankar Pandey (advisor: Qi Yu)
Michael Peechatt (advisor: Cecilia O. Alm & Reynold Bailey)
Spandan Pyakurel (advisor: Qi Yu)
Qiaofan Que (advisor: Qi Yu)
Sheeraja Rajakrishnan (advisor: Daniel Krutz)
Paribesh Regmi (advisor: Rui Li)
Hitesh Sapkota (advisor: Qi Yu)
Ayush Kumar Shah (advisor: Richard Zanibbi)
Ruochen Shi (advisor: Rui Li)
Nibesh Shrestha (advisor: Pengcheng Shi)
Ruby Shrestha (advisor: Linwei Wang)
Guohao Sun (advisor: Zhiqiang Tao)
Allahsera Auguste Tapo (advisor: Christopher M. Homan)
Jeevan Thapa (advisor: Rui Li)
Mahendra Singh Thapa (advisor: Rui Li)
Pujan Thapa (advisor: Travis Desell)
Rajesh Titung (advisor: Cecilia O. Alm)
Maryam Toloubidokhti (advisor: Linwei Wang)
Sumeet Atul Vadhavkar (advisor: Linwei Wang)
Dingrong Wang (advisor: Qi Yu)
Jiamian Wang (advisor: Zhiqiang Tao)
Tharindu Cyril Weerasooriya (advisor: Christopher M. Homan)
Hong Yang (advisor: Travis Desell and Alexander Ororbia)
Zhizhuo Yang (advisor: Alexander Ororbia and Reynold Bailey)
Dayou Yu (advisor: Qi Yu)
Timothy Zee (advisor: Ifeoma Nwogu and Alexander Ororbia)
Chengkuan Zhao (advisor: Linwei Wang)
Minxun Zheng (advisor: Qi Yu)
Yuansheng Zhu (advisor: Qi Yu)
Statistical Machine Learning
This course will cover the theory and practice of statistical machine learning, focusing on computational methods for supervised and unsupervised data analysis. Specific topics include Bayesian, maximum a posteriori (MAP), and maximum likelihood (ML) parameter estimation; regularization and sparsity-promoting priors; kernel methods; adaptive basis function methods; the expectation-maximization algorithm; Monte Carlo methods; variational methods; and models for data with temporal or hierarchical structure. Applications to regression, categorization, clustering, and dimensionality reduction problems are illustrated by examples. Each student will complete several problem sets, including both mathematical and computer implementation problems. Prerequisites: Probability and Statistics I, Linear Algebra, and Introduction to Computer Programming. Familiarity with a numerical mathematics package (e.g., Matlab, Maple, Mathematica) is helpful but not required.
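As a toy illustration of two of the estimation concepts listed above (a sketch only, not course material; the Beta(2, 2) prior is an arbitrary illustrative choice), compare ML and MAP estimates of a coin's heads probability:

```python
# Illustrative sketch: maximum likelihood (ML) vs. maximum a posteriori (MAP)
# estimation of a coin's heads probability. The Beta(a, b) prior is an assumed,
# illustrative choice; its mode pulls the MAP estimate toward 0.5.

def ml_estimate(heads: int, flips: int) -> float:
    """ML estimate: the empirical frequency of heads."""
    return heads / flips

def map_estimate(heads: int, flips: int, a: float = 2.0, b: float = 2.0) -> float:
    """MAP estimate: the mode of the Beta posterior under a Beta(a, b) prior."""
    return (heads + a - 1) / (flips + a + b - 2)

# 9 heads in 10 flips: ML gives 0.9, while MAP gives 10/12, shrunk toward 0.5.
ml = ml_estimate(9, 10)
map_ = map_estimate(9, 10)
```

The prior acts as a regularizer here, which is exactly the connection to the "regularization and sparsity-promoting priors" topic.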
Deep learning represents a set of emerging techniques in machine learning that has quickly become prevalent in the analysis of big data. The power and potential of this recent breakthrough in intelligent computer systems have been demonstrated through many successes. Deep learning systems are the current best performers in computer vision and speech processing. A wide variety of active research is being conducted to leverage the capability of deep learning to achieve automation in areas such as autonomous driving, robotics, and automated medical diagnosis. There is a crucial need to educate our students in such new tools.
This course gives in-depth coverage of advanced theories and methods in deep learning, including basic feedforward neural networks, convolutional neural networks, recurrent neural networks (including long short-term memory models), deep belief nets, and autoencoders. It emphasizes approaches with practical relevance and discusses a number of recent applications of deep networks in computer vision, natural language processing, and reinforcement learning.
This course provides an introduction to the fundamentals of working with quantitative information. Topics include matrix algebra (matrices, vectors, direct and indirect methods for solving linear systems, eigenvectors, singular value decomposition, least-squares systems), optimization (convex analysis, gradient descent, Newton's method, interior-point methods), statistics (random variables, p-values, hypothesis testing, confidence intervals), and data exploration (clustering, dimensionality reduction, curve fitting). Note: Knowledge of probability and statistics, calculus, and computer programming, or permission of instructor, is required.
Current advances in computing and information sciences.
Computational Modeling and Simulation
Everyone uses modeling and simulation even without being aware of it. This course talks about mathematical and computational modeling and simulation as the tools to solve complex problems in the real world. Topics are divided by the category of modeling method: phenomenological models vs. mechanistic models. For mechanistic models, the course will cover differential equations (including variational principle to construct the differential equations, solutions to ordinary differential equations (ODE), and classical ODE systems) and cellular automaton in detail, and mention other mechanistic models. Similarly, for phenomenological models, the course will cover regression and neural networks in detail, and introduce other phenomenological models such as networks and power-law distributions. In parallel, paper review and discussion will serve as case studies of modeling of real-world complex systems, illustrating application domains. Course projects are required. Note: Knowledge in probability and statistics, linear algebra and calculus, experiences in computer programming/MATLAB, or permission of instructor is required.
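As a minimal example of the mechanistic-model side of the course (an illustrative sketch; the logistic equation and all parameter values are standard textbook choices, not course code), forward-Euler integration of an ODE looks like this:

```python
# Illustrative sketch: forward-Euler integration of the logistic growth ODE
# dx/dt = r * x * (1 - x / K), a classical mechanistic model. All parameter
# values below are arbitrary textbook-style choices.

def euler_logistic(x0: float, r: float, K: float, dt: float, steps: int) -> float:
    """Integrate the logistic ODE with the forward Euler method; return the final x."""
    x = x0
    for _ in range(steps):
        x += dt * r * x * (1 - x / K)
    return x

# Starting near zero, the trajectory rises and levels off at the carrying capacity K.
final = euler_logistic(x0=0.1, r=1.0, K=10.0, dt=0.01, steps=2000)
```

Forward Euler is the simplest ODE solver the course's variational and ODE material builds on; classroom treatments would also cover its stability and higher-order alternatives.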
Foundations of Artificial Intelligence
An introduction to the theories and algorithms used to create artificial intelligence (AI) systems. Topics include search algorithms, logic, planning, machine learning, and applications from areas such as computer vision, robotics, and natural language processing. Programming assignments and oral/written summaries of research papers are required.
Biologically Inspired Intelligent Systems
There have been significant advances in recent years in the areas of neuroscience, cognitive science and physiology related to how humans process information. In this course students will focus on developing computational models that are biologically inspired to solve complex problems. A research paper and programming project on a relevant topic will be required. A background in biology is not required.
Foundations of Intelligent Security Systems
The course will introduce students to the application of intelligent methodologies in computer security and information assurance system design. It will review different application areas such as intrusion detection and monitoring systems, access control and biometric authentication, and firewall structure and design. Students will be required to implement a course project on the design of a particular security tool applying an artificial intelligence methodology, and to undertake research and analysis of artificial intelligence applications in computer security.
Topics in Intelligent Systems
This course examines current topics in Intelligent Systems. This is intended to allow faculty to pilot potential new graduate offerings. Specific course details (such as prerequisites, course topics, format, learning outcomes, assessment methods, and resource needs) will be determined by the faculty member(s) who propose a specific topics course in this area. Specific course instances will be identified as belonging to the Intelligent Systems cluster, the Computational Vision and Acoustics cluster, the Security cluster, or some combination of these three clusters. Course offered every other year.
Machine intelligence teaches devices how to learn a task without explicitly programming them how to do it. Example applications include voice recognition, automatic route planning, recommender systems, medical diagnosis, robot control, and even Web searches. This course covers an overview of machine learning topics with a computer engineering influence. Includes Matlab programming. Course topics include unsupervised and supervised methods, regression vs. classification, principal component analysis vs. manifold learning, feature selection and normalization, and multiple classification methods (logistic regression, regression trees, Bayes nets, support vector machines, artificial neural networks, sparse representations, and deep learning).
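To make one of the listed topics concrete, here is a stdlib-only sketch of principal component analysis for 2-D data, using the closed-form eigendecomposition of the 2x2 sample covariance matrix (illustrative only; the course itself works in Matlab):

```python
# Illustrative sketch: the first principal component of 2-D data, computed
# from the closed-form eigendecomposition of the 2x2 sample covariance matrix.
import math

def first_principal_component(xs, ys):
    """Return the unit eigenvector of the sample covariance with the largest eigenvalue."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]], then its eigenvector.
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))
    vx, vy = lam - syy, sxy  # (lam - syy, sxy) solves the eigenvector equation
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Points on the line y = x: the first principal component is (1, 1) / sqrt(2).
pc = first_principal_component([0, 1, 2, 3], [0, 1, 2, 3])
```

The same idea generalizes to higher dimensions, where a numerical eigensolver replaces the closed form.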
Seminar in Computing Security
This course offers an opportunity to learn about a specific seminar topic in more depth. The course description will be replaced by the specific instance of the seminar, as it is proposed by faculty.
Information Retrieval and Text Mining
This course provides students with exposure to foundational data analytics technologies, focusing on unstructured data. Topics include unstructured data modeling, indexing, retrieval, text classification, text clustering, and information visualization.
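As a concrete instance of the indexing and retrieval topics above (a sketch using the standard TF-IDF formulation; the toy corpus is invented for the example):

```python
# Illustrative sketch: standard TF-IDF term weighting (tf = count / doc length,
# idf = log(N / df)); the tiny tokenized corpus is invented for the example.
import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: weight} dict per tokenized document."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in counts.items()})
    return weights

docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]
w = tf_idf(docs)
# "sat" occurs in only one of the three documents, so it outweighs "cat" in doc 0.
```

Ranking documents by the dot product of such weight vectors with a query vector is the classic starting point for text retrieval.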
Data Driven Knowledge Discovery
Rapidly expanding collections of data from all areas of society are becoming available in digital form. Computer-based methods are available to facilitate discovering new information and knowledge that is embedded in these collections of data. This course provides students with an introduction to the use of these data analytic methods, with a focus on statistical learning models, within the context of the data-driven knowledge discovery process. Topics include motivations for data-driven discovery, sources of discoverable knowledge (e.g., data, text, the web, maps), data selection and retrieval, data transformation, computer-based methods for data-driven discovery, and interpretation of results. Emphasis is placed on the application of knowledge discovery methods to specific domains.
Dynamic Resource Allocation [Hosseini]: Through the integration of artificial intelligence (AI), economics, and computation, this project investigates novel solutions for resource allocation in dynamic environments and situations that lack transferable currency. With the advent of online platforms, economic theory emerges as a fundamental approach to promoting desirable social properties of efficiency, fairness, and truthfulness in a variety of domains such as shift scheduling, course registration, cloud computing, and crowdsourcing. This project tackles a variety of fundamental problems at the intersection of AI and economics while enriching the algorithmic and societal understanding of resource allocation in dynamic settings. This contrasts with classical mechanisms that either focus solely on economic aspects of resource allocation in static and offline settings or disregard social aspects such as fairness. Specifically, the project studies two interconnected components: (1) sequential allocation under uncertainty, by synthesizing models studied in AI with economic theory to investigate, analyze, and create new mechanisms that are fair and discourage strategic manipulation in environments where agents' preferences are evolving (e.g., nurse scheduling and course allocation); and (2) online mechanisms, by employing insights from algorithm design and AI to study the fairness and efficiency of allocation mechanisms when agents arrive and depart over time or the availability of items is uncertain (e.g., food bank organizations and crowdsourcing platforms).
Evolutionary Exploration of Neural Architectures (EXONA) [Desell, Ororbia]: Determining the optimal structure and hyperparameters of neural networks is a challenging open question in the field of AI. EXONA provides and investigates various evolutionary and bio-inspired approaches to “grow” optimal neural architectures through algorithms such as EXACT (Evolutionary Exploration of Augmenting Convolutional Topologies) and EXAMM (Evolutionary Exploration of Augmenting Memory Models), as well as approaches such as ant colony optimization and other bio-inspired methods. These algorithms are developed to use large-scale high-performance and distributed computing resources to speed up neural network evolution. https://github.com/travisdesell/exact
Fair Allocation of Resources [Hosseini]: This project focuses on various models for fair allocation of indivisible resources or goods. Various concepts from game theory and economics are used to model, analyze, and develop algorithms that provide fair and efficient outcomes. This project aims at investigating various economic and game theoretical properties of fair allocation to 1) devise new algorithms through AI techniques for reasoning over preferences, 2) provide theoretical guarantees and bounds, and 3) provide empirical evaluations of the proposed algorithms.
Multiagent Bug Assignment [Hosseini]: Bug assignment in large software projects is typically a time-consuming and tedious task; effective assignment requires that bug triagers hold significant contextual information about both the reported bugs and the pool of available developers. In this project, the goal is to efficiently assign bugs to developers by 1) predicting the bug lifetime using machine learning techniques, 2) modeling developers’ expertise by learning their traits, and 3) utilizing techniques from multi-agent systems to efficiently and fairly assign bug reports to relevant developers. We will use data from various bug repositories, such as Eclipse and Firefox, to train our model and evaluate its efficiency against the current state-of-the-art approaches that rely solely on machine learning techniques.
Open UAS Repository (OURepository) [Desell]: Management of large scale remote sensing data such as that gathered by unmanned aircraft systems (UAS) presents significant data management and analysis questions. The OURepository has been developed to provide researchers an open repository to upload, share, annotate and analyze large scale imagery (orthomosaics) gathered by UAS. A major goal of this project is to allow easy use of computer vision algorithms such as convolutional neural networks to non-computer scientists through this cloud based system. https://digitalag.org/our/
Label Distribution Learning [Homan]: Machine learning models learn from human-annotated labeled data. Annotation is often subjective and shaped by annotators' personal experiences. In supervised learning, the multiple annotations per data item are usually reduced to a single label representing the ground truth, which hides the diversity among the labels. Label distribution learning instead associates a probability distribution with each data item, preserving the diversity among labels.
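The core idea can be sketched in a few lines (an illustrative sketch; the data format and label set below are assumptions, not the project's actual pipeline):

```python
# Illustrative sketch: keep the normalized histogram of annotator votes as the
# training target, rather than collapsing to a single majority label. The data
# format and label set are invented for the example.
from collections import Counter

def label_distribution(annotations, labels):
    """Map a list of per-annotator labels to a probability distribution over labels."""
    counts = Counter(annotations)
    total = len(annotations)
    return {label: counts[label] / total for label in labels}

# Five annotators disagree; the distribution preserves that disagreement,
# whereas a majority vote would report only "joy".
dist = label_distribution(["joy", "joy", "joy", "anger", "neutral"],
                          labels=["joy", "anger", "neutral"])
```

A model trained against such distributions (e.g., with a cross-entropy loss over the full distribution) retains the annotator disagreement that a single ground-truth label discards.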
Reading Assistance for DHH Technology Workers [Huenerfauth]: Low English reading literacy among some Deaf and Hard of Hearing (DHH) adults can be a particular barrier to computing professions, where workers must regularly "upskill" to learn about rapidly changing technologies throughout their career. We investigate the design of a web-browser plug-in to provide automatic English text simplification (on-demand) for DHH individuals, including providing simpler synonyms or sign-language videos of complex English words or simpler English paraphrases of sentences or entire documents. http://cair.rit.edu/projects.html#simplify
Neurocognitively-Inspired Lifelong Machine Learning [Ororbia]: Neural architectures trained with back-propagation of errors are susceptible to catastrophic forgetting. In other words, old information acquired by these models is lost when new information for new tasks is acquired. This makes building models that continually learn extremely difficult if not near impossible. The focus of our research is to draw inspiration from models of cognition and biological neurocircuitry, as well as theories of mind and brain functionality, to construct new learning procedures and architectures that generalize across tasks and continually adapt to novel situations, combining input from multiple modalities/sensory channels. https://www.cs.rit.edu/~ago/nac_lml.html
Video-Based Search for ASL Dictionaries [Huenerfauth]: Looking up an unfamiliar word in a dictionary is a common activity in childhood or foreign-language education, yet there is no easy method for doing this in ASL. We are investigating a computer-vision-based sign-lookup interface for online ASL video dictionaries. http://cair.rit.edu/projects.html#dictionary
ASL Corpora for Linguistic Research [Huenerfauth]: We collect video and motion-capture recordings of native sign-language users, in support of linguistic research and machine-learning modeling of aspects of ASL. http://cair.rit.edu/projects.html#linguisticcorpora
Design of Future Captioning Technology [Huenerfauth]: We investigate the requirements and preferences of Deaf and Hard of Hearing (DHH) users for captioning technology for video programming or for real-time captioning in live meetings, including how the confidence scores for words or phrases identified through automatic speech recognition (ASR) could be revealed to users of such captions. http://cair.rit.edu/projects.html#captionmeetings
Learning ASL through Real-Time Practice [Huenerfauth]: We are investigating how computer-vision technologies can enable students learning American Sign Language (ASL) to practice their signing independently, through a tool that provides feedback automatically based on a video of their signing. http://cair.rit.edu/projects.html#learn
Generating ASL Animation from Motion-Capture Data [Huenerfauth]: We investigate techniques for making use of motion-capture data collected from native American Sign Language (ASL) signers to produce linguistically accurate animations of ASL. We produce machine-learning models of various phenomena, e.g. speed and timing during signing, and we use these models to partially automate the generation of animations, to reduce the cost in providing ASL content for Deaf users. http://cair.rit.edu/projects.html#mocap
Word Importance in Captions for Deaf Users [Huenerfauth]: Automated Speech Recognition (ASR) technology is still imperfect in many settings. To evaluate the usefulness of captions for Deaf users based on ASR, simply counting the number of errors is insufficient, since some words contribute more to the meaning of the text. We study methods for automatically predicting the importance of individuals words in a text, and we are using these models to develop alternative evaluation metrics for analyzing ASR accuracy, to predict how useful ASR-based captions would be for users. http://cair.rit.edu/projects.html#importance
Visual Prediction using Multimodal Data [Kong]: This project develops deep learning-based methods for predicting future human actions and visual frames from large-scale video data. By drawing on additional modalities, including audio and text, we can build a knowledge base that provides rich prior information and helps achieve accurate and reliable visual prediction.
Reliable Tensor Data Analytics (theory, algorithms, applications) [Markopoulos]: Data collected across diverse sensing configurations can be naturally organized and processed in the form of tensors, which are the direct generalization of matrices. Tensor entries can be severely faulty, due to errors in sensing, storing, communication, or even deliberate corruption. In this line of research, we explore new theory, problem formulations, and optimization algorithms for corruption resistant analysis of tensor data. This project can provide important new foundations for reliable analysis of diverse multi-modal and multi-aspect datasets.
Data Driven Adaptive and Robust Subspace Learning with Computer Vision Applications [Markopoulos]: We design Data Driven Adaptive Learning (DDAL) frameworks for robust subspace tracking, combined with deep learning architectures. Motivating applications that we explore include people detection, object detection, and change detection in new domains that are markedly different from those used for training. Data starved environments are also of interest.
Signal Processing, Data Analysis, and Machine Learning for Indoors Radar-based Motion Recognition - Applications in Assisted Living [Markopoulos]: In this project, we apply adaptive machine learning methods for motion recognition based on micro-Doppler signatures, collected from indoors radar measurements. A key application is real-time fall detection towards safer self-dependent living and aging-in-place.
Gait Recognition from Wearable Sensors with Application in Injury Prediction and Athlete Rehabilitation [Markopoulos]: This project focuses on gait motion classification based on acceleration signals collected from low-cost commercial wearable inertia measurement units. The project tasks span from dataset collection to multi-way data analysis and machine learning. Our final goal is to deliver adaptive and transferable machine learning for fatigue estimation and real-time injury prediction.
Column2Vec [Mior, Ororbia]: Column2Vec is a distributed representation of database columns based on column metadata. Our distributed representation has several applications. Using known names for groups of columns (i.e., a table name), we train a model to generate an appropriate name for columns in an unnamed table. We demonstrate the viability of our approach using schema information collected from open source applications on GitHub. https://arxiv.org/abs/1903.08621
Computational Models for Evaluating Interactional Synchrony [Nwogu]: Face-to-face meetings are still a very effective way to capture the attention of participants and drive useful collaborations. To this end, this project involves developing techniques to model the interactions and influence patterns occurring within small groups of people, specifically in educational settings. Because of the limited, but rich multimodal data, we are investigating the use of traditional probabilistic graphical modeling methods for better understanding the data. We are also exploring the use of dynamic neural networks. There will be continual data collection to improve inference from current models.
Role of Emotional Regulation in the Workplace [Nwogu]: Emotional regulation refers to the ability to respond to a range of emotions in a manner that is controllable and socially tolerable. To this end, we designed an experiment in which participants experienced a range of emotions and were required to respond in pre-specified manners. Their neurological, physiological, and expressive manifestations of emotion were recorded, and we are now using deep learning and other statistical techniques to better understand how changes in sympathetic activations are exhibited across modalities.
ASR for Resource-Constrained Languages [Ptucha]: Developing automatic speech recognition with little training data, using unique generative models with multiple stages of transfer learning on acoustic models, along with statistically based language models.
Graph CNN [Ptucha]: While CNNs have transformed the machine learning landscape, they do not work with generic graphs such as those describing protein structures, social media graphs, or point clouds. This research is discovering new convolution and pooling methods which work generically on heterogeneous graphs.
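For intuition, a single graph-convolution layer in one common formulation (the Kipf-Welling style propagation rule with symmetric normalization; this is a generic sketch, not the new methods this research is developing) can be written with plain lists:

```python
# Illustrative sketch of one generic graph-convolution layer: add self-loops,
# symmetrically normalize the adjacency, aggregate neighbor features, then
# apply a linear map. Plain lists as matrices, on a tiny 3-node graph.
import math

def gcn_layer(adj, feats, weight):
    """One layer: H' = D^{-1/2} (A + I) D^{-1/2} H W."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    agg = [[sum(norm[i][k] * feats[k][f] for k in range(n))
            for f in range(len(feats[0]))] for i in range(n)]
    return [[sum(agg[i][f] * weight[f][o] for f in range(len(weight)))
             for o in range(len(weight[0]))] for i in range(n)]

# Path graph 0-1-2, a scalar feature on node 0 only, identity weight matrix:
# the feature diffuses to node 1 but not (in one layer) to node 2.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
out = gcn_layer(adj, [[1.0], [0.0], [0.0]], [[1.0]])
```

Because aggregation is defined per-node over neighbors rather than over a fixed pixel grid, the same layer applies to arbitrary heterogeneous graphs.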
Common Vector Space [Ptucha]: Research into underlying low dimensional vector spaces of concepts across different modalities. The modalities include still and motion vision, speech, text, and graphs. The idea is that similar concepts lie close in this Common Vector Space (CVS), while dissimilar concepts lie apart. For example, pictures and sentences about elephants map close to one another, but far apart from sentences about flowers.
Video Sign Language to Text [Ptucha]: Converting video streams of Chinese sign language into Mandarin and English text using various vector representations in a sequence-to-sequence recurrent neural network.
Integrating Physics-based Models into Data-Driven Inference [Wang]: We develop theoretical and mathematical foundations that support the integration of physics-based modeling and data-driven inference methods to improve individualized assessment of systems. Our interests focus in particular on data-driven identification and adaptation of the errors in physics-based models within the statistical inference process.
ECG-Imaging [Wang]: ECGi is an emerging imaging technique that allows beat-to-beat computational imaging of electrical activity in the heart using noninvasive surface ECG data in combination with patient-specific anatomical information. Over the last 10 years, we have been pushing the boundary of ECGi techniques through a combination of three general thrusts: methodological research to expand the capacity and improve the accuracy of ECGi, technical developments to lower the cost and improve the accessibility of ECGi, and experimental and clinical research to translate ECGi to improve the management of a variety of heart diseases.
Learning Disentangled Representations [Wang]: We develop deep representation learning methods that are able to separate inter-subject variations in clinical data. We work with clinicians to deploy such deep-learning-based software tools to guide clinicians progressively closer to the surgical target in real time during a procedure.
End-to-End Uncertainty Quantification [Wang]: Mathematical models of a living system are always subject to epistemic uncertainties that represent our limited knowledge about the system. While personalized models have shown increasing potential in medicine, their uncertainties remain the main roadblock to their widespread adoption in the healthcare industry. We develop novel active-learning-based approaches to first infer the uncertainty within the data-driven model elements, before propagating this uncertainty to model predictions.
Transferring Simulation Data to Real Data [Wang]: A primary factor in the success of machine learning is the quality of labeled training data. However, in many fields, labeled data can be costly, difficult, or even impossible to acquire. In comparison, computer simulation data can now be generated in much higher abundance at a much lower cost. We develop machine learning and deep learning techniques that are able to leverage the knowledge in simulation data and transfer it to tasks on real data.
Data Quality and Security Evaluation Framework for Mobile Device Platforms [Reznik]: The project builds a proof-of-concept design, which will be used to develop, verify, and promote a comprehensive methodology for data quality and cybersecurity (DQS) evaluation, focusing on the integration of cybersecurity with other diverse metrics reflecting DQS, such as accuracy, reliability, timeliness, and safety, into a single methodological and technological framework. The framework will include generic data structures and algorithms covering DQS evaluation. While the developed evaluation techniques will cover a wide range of data sources, from cloud-based data systems to embedded sensors, the framework implementation will concentrate on an ordinary user's own mobile devices, Android-based smartphones in particular.
Intelligent Security Systems [Reznik]: The project designs a curriculum; develops course materials; tests and evaluates them in real college classroom settings; and prepares and submits them for dissemination as a college-level course on Intelligent Security Systems. To facilitate interconnections with other courses and its inclusion in the national cybersecurity curricula, the course is composed of nine separate modules. Five modules cover specialized topics: a review of the modern state of cybersecurity and its current problems and approaches; firewall design; intrusion detection systems; anti-malware methods and tools; and hacking activity and attack recognition and prevention. The other modules provide additional support to assist in course teaching preparation, such as test and exam questions, course project and research assignment specifications, and tool presentation descriptions. This course idea is innovative and unique: it merges knowledge areas as diverse as artificial intelligence and machine learning techniques with computer security systems and applications. The course combines theoretical knowledge with practical skills development, instills in students unique knowledge of this fast-moving domain, leaves them much better prepared for their practical work ahead, and advances their research, communication, and presentation skills.
Testing graphical models [Bezáková]: For given parameters of a graphical model, can we efficiently test whether provided random samples are truly generated according to the distribution of the graphical model? https://arxiv.org/abs/1901.07361
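Identity testing for graphical models, as studied in the linked paper, requires algorithms far more sophisticated than classical goodness-of-fit tests, but the basic statistical question can be illustrated with a naive Pearson chi-square test over the joint states of a tiny model. The two-spin Ising model, its parameter, and the sample count below are invented purely for illustration and are not the paper's method:

```python
import math
import random
from collections import Counter

def chi_square_stat(samples, dist):
    """Pearson chi-square statistic comparing empirical counts of
    `samples` against a claimed distribution `dist` (outcome -> prob)."""
    n = len(samples)
    counts = Counter(samples)
    return sum((counts.get(o, 0) - n * p) ** 2 / (n * p)
               for o, p in dist.items())

# Toy two-spin Ising model: P(s1, s2) is proportional to exp(beta*s1*s2).
beta = 0.5
states = [(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)]
weights = [math.exp(beta * s1 * s2) for s1, s2 in states]
z = sum(weights)
dist = {s: w / z for s, w in zip(states, weights)}

# Draw samples from the claimed model itself; the statistic stays small.
rng = random.Random(0)
samples = rng.choices(states, weights=weights, k=4000)
stat = chi_square_stat(samples, dist)
```

Samples generated from a different model would inflate the statistic; the research question is how to detect such deviations efficiently when the state space is exponentially large.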
Computational Social Choice [Hemaspaandra]: Elections are broadly used in both human and computational settings, including a rapidly expanding range of applications in multiagent systems. It has been known since the mid-1970s (the Gibbard-Satterthwaite Theorem) that every reasonable election system has instances on which voters have an incentive to vote strategically. Computational social choice seeks to sidestep that impossibility result by making manipulation not impossible but rather computationally prohibitive.
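To make "computationally prohibitive" concrete: under plurality voting, deciding whether a single voter can benefit by misreporting their preferences is easy via brute force over that voter's possible ballots, which is exactly why the field seeks voting rules where this search becomes intractable. A minimal sketch, with a hypothetical five-voter election and an assumed alphabetical tie-breaking rule:

```python
from collections import Counter
from itertools import permutations

def plurality_winner(votes):
    """Plurality winner from a list of preference orders (tuples of
    candidate names); ties broken alphabetically."""
    tally = Counter(ballot[0] for ballot in votes)
    top = max(tally.values())
    return min(c for c, n in tally.items() if n == top)

def can_manipulate(votes, voter, preferred):
    """Brute force: can `voter` misreport their ranking so that
    `preferred` wins?  Trivial for plurality; harder rules make this
    search computationally prohibitive."""
    if plurality_winner(votes) == preferred:
        return False
    candidates = votes[voter]
    for fake in permutations(candidates):
        trial = list(votes)
        trial[voter] = fake
        if plurality_winner(trial) == preferred:
            return True
    return False

# Hypothetical election: sincere winner is A (tie with B, broken
# alphabetically), but voter 3 can swing it to B by burying C.
votes = [("A", "B", "C"), ("A", "B", "C"), ("B", "C", "A"),
         ("C", "B", "A"), ("B", "A", "C")]
```

For rules such as single transferable vote, the analogous decision problem is NP-hard, which is the kind of barrier computational social choice aims to exploit.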
Projects in the REU in Computational Sensing for Human-aware AI [Alm, Bailey, Geigel, Huenerfauth, Ptucha, Shinohara]: The REU Site in Computational Sensing for Human-centered Artificial Intelligence recognizes that as the boundaries between HCI and AI blur, and AI grows increasingly agile and pervasive, the next generation of computational scientists must be capable of responsibly and effectively leveraging a spectrum of sensing data from data-generating humans. With this focus, the REU Site will expand its trajectory as an attractor for highly diverse students who will gain experience with sensing hardware and software toward transformative advances in intelligent systems focused on human behaviors and cognitive processes. Enabling diverse stakeholders early in their careers to discover how to collect, fuse, draw inferences from, and visualize multimodal human data can transform how humans and machines engage and collaborate. The research in the REU Site will address two limitations in AI: first, that underserved populations are at risk of being marginalized with the present focus on big-data AI and, second, that AI trainees often lack experience with human data collection and critical thinking about human-elicited datasets. The REU Site will stimulate novel, safe ways for systems to team up with people to address society's vexing problems while remaining fair, attuned to bias, and representative of the diverse fabric of the general population. https://www.cs.rit.edu/~reu/projects.html
Automated Analysis of Pragmatic Language Development in ASD [Alm, with external teams at BC, URMC]: Individuals with autism often experience challenges with language pragmatics, that is, with using language in a socially or contextually appropriate way. In this project, we plan to collect spontaneous language data from adults with autism and to use computational methods to automatically analyze that language in order to learn more about the pragmatic language features associated with autism. We will compare the language of adults with Autism Spectrum Disorder (ASD) with the language of both young children and adolescents with ASD, which will help us determine whether the pragmatic deficits that characterize autism improve with age and development. The information about pragmatic ability gained from this work will have important implications for clinicians developing interventions to improve communication skills and social outcomes for individuals with autism.
Affective Video Recommendation [Bailey, Alm]: Within a sensing and learning framework, this work studies multimodal affective reactions of observers towards automatically recommending content to viewers.
Expanding the Use of Natural Language Processing AI in Linguistics Classrooms [Alm]: A growing trend is to enable general users to take advantage of and apply AI technology to aid reasoning and decision-making, without the requirement of in-depth knowledge of under-the-hood technical details. This effort extends a web-based system that makes a range of machine-intelligent computational linguistics methods accessible to users with language data and visualizations, focusing on educational use cases. https://nlp.rit.edu/linguine/
CAPTURE [Yang]: CAPTURE forecasts occurrences of future cyber attacks using a large number of seemingly irrelevant signals from publicly accessible sources. The project investigates machine learning techniques that deal with incomplete signals, insignificant lagged features, and imbalanced and non-stationary data.
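One of the named challenges, imbalanced data, is commonly countered by reweighting the rare class during training so that scarce attack-related examples are not drowned out. The following is a generic illustration of that idea, not CAPTURE's actual method; the toy features, class weights, and learning rate are invented:

```python
import math

def train_weighted_logreg(data, labels, class_weight, lr=0.1, epochs=500):
    """Logistic regression trained by SGD with per-class loss weights,
    a standard remedy for class imbalance."""
    d = len(data[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = (p - y) * class_weight[y]  # rare class gets a larger push
            for i in range(d):
                w[i] -= lr * g * x[i]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy imbalanced set: nine benign points near the origin, one attack.
majority = [(0.0, 0.0), (0.3, 0.1), (0.1, 0.2), (0.2, 0.3), (0.0, 0.4),
            (0.4, 0.0), (0.1, 0.1), (0.3, 0.3), (0.2, 0.0)]
minority = [(2.0, 2.0)]
data = majority + minority
labels = [0] * len(majority) + [1] * len(minority)
w, b = train_weighted_logreg(data, labels, {0: 1.0, 1: 9.0})
```

Setting the minority weight to the inverse class frequency (9:1 here) is a common default; CAPTURE's non-stationary setting would additionally require techniques such as sliding-window retraining.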
ASSERT [Yang]: ASSERT integrates Bayesian learning, clustering, and information theory to provide timely separation of intrusion alerts into empirically generated attack models, enhancing cyber situation awareness.
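As a small illustration of the information-theoretic ingredient (not ASSERT's actual algorithm), Shannon entropy of an alert attribute can indicate whether a group of alerts is heterogeneous enough to warrant splitting into separate attack models. The attribute name, alert format, and threshold below are invented:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy in bits of a list of categorical values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def split_alerts(alerts, attribute, threshold=1.0):
    """If alerts are heterogeneous in `attribute` (entropy above the
    threshold), partition them by that attribute's value; otherwise
    keep them as one group."""
    values = [a[attribute] for a in alerts]
    if entropy(values) <= threshold:
        return [alerts]
    groups = {}
    for a in alerts:
        groups.setdefault(a[attribute], []).append(a)
    return list(groups.values())

# Alerts spread over four destination ports (entropy 2 bits) get split;
# alerts on a single port (entropy 0) stay together.
mixed = [{"port": p} for p in (22, 80, 443, 3389)]
uniform = [{"port": 22} for _ in range(4)]
```

In ASSERT the separation decision is driven by learned models rather than a fixed threshold, but the same intuition applies: low within-group entropy signals a coherent attack behavior.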
SynGAN [Yang]: SynGAN learns from a moderate amount of cyber intrusion logs to generate synthetic data that preserves critical and essential parts of the joint feature distribution.
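SynGAN itself uses adversarial training, which is beyond a short stdlib sketch; as an illustration of the stated goal only, preserving joint feature structure when generating synthetic records, one can fit and resample the empirical joint distribution of log fields. The field names and record counts are invented:

```python
import random
from collections import Counter

def fit_joint(records):
    """Empirical joint distribution over observed record tuples,
    e.g. (protocol, destination port)."""
    counts = Counter(records)
    total = sum(counts.values())
    outcomes = list(counts)
    probs = [counts[o] / total for o in outcomes]
    return outcomes, probs

def sample_synthetic(outcomes, probs, n, rng):
    """Draw n synthetic records from the fitted joint distribution."""
    return rng.choices(outcomes, weights=probs, k=n)

# Hypothetical intrusion-log fields: half SSH, the rest HTTP and DNS.
logs = [("tcp", 22)] * 50 + [("tcp", 80)] * 30 + [("udp", 53)] * 20
outcomes, probs = fit_joint(logs)
rng = random.Random(0)
synth = sample_synthetic(outcomes, probs, 1000, rng)
```

Unlike this resampler, a GAN can generalize beyond the exact tuples seen in training, which matters precisely when, as the project notes, only a moderate amount of intrusion data is available.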