Imaging Science Thesis Defense: Spatiotemporal Alignment of Event Stream with Images and Active Illumination

Abhijan Wasti

Imaging Science
Rochester Institute of Technology

Register for Zoom Link Here

Abstract:

Unlike conventional “frame-based” cameras, which form images by sampling all pixels within the duration of a global or rolling shutter, an event camera triggers each pixel independently whenever the change in log scene luminance at that pixel exceeds a threshold. This unique feature provides several advantages over conventional sensors, including high dynamic range (HDR) (~120 dB), high temporal rate (~10,000 Hz), low latency (<1 ms), and low power requirements (~10 mW). These properties make event cameras excellent candidates for applications such as high-speed photography, HDR image reconstruction, object tracking, depth estimation, simultaneous localization and mapping, and surveillance and monitoring. Despite their potential, the asynchronous and spatially sparse nature of events poses challenges for event processing and interpretation, because most advanced image processing and computer vision algorithms are designed to work with conventional image formats rather than with temporally dense streams of asynchronous pixel events (i.e., the event stream). Although emerging techniques in supervised machine learning show promise, continued and rapid progress relies on the availability of labeled event datasets, which are scarce and difficult to produce.
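
A minimal sketch of the event-generation model described above, assuming a simple per-pixel contrast threshold on the change in log intensity; the function name, threshold value, and frame-stack input are illustrative and not taken from the thesis:

import numpy as np

# Illustrative sketch (not the thesis implementation): a pixel fires an event
# when the change in log intensity since its last event exceeds a contrast
# threshold, with polarity given by the sign of the change.
def simulate_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """frames: (N, H, W) linear-intensity frames; timestamps: (N,) seconds."""
    log_ref = np.log(frames[0] + eps)            # per-pixel reference log intensity
    events = []                                  # list of (t, x, y, polarity)
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame + eps)
        diff = log_now - log_ref
        fired = np.abs(diff) >= threshold        # pixels crossing the threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
        log_ref[fired] = log_now[fired]          # reset only the pixels that fired
    return events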
Moreover, generating reliable events for training models is challenging due to the scene-dependent nature of event generation, which is further complicated by varying illumination and relative motion. In this thesis, we attempt to address these limitations with a novel imaging paradigm in which frames from a conventional frame-based camera are captured spatially aligned and temporally synchronized with an event sensor. Our active illumination source allows us to generate events more consistently, even under challenging illumination and motion in the scene. We demonstrate the feasibility of such a setup for a mobile eye-tracking system and achieve subpixel, microsecond-accurate spatiotemporal alignment. Our method facilitates the use of pre-trained models that operate on conventional images to detect features, which can then be used to train models that operate directly on the event stream. This serves as the next step in event-based imaging and unlocks new possibilities in computer vision and imaging technology.
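
As a rough illustration of how spatiotemporally aligned frames could supply labels for the event stream, the sketch below maps a feature detected in a conventional frame into event-sensor coordinates with a homography and gathers the events near the synchronized timestamp; the homography H, the time offset, and the window size are hypothetical placeholders, not values or code from the thesis:

import numpy as np

# Hypothetical sketch of frame-to-event label transfer after spatiotemporal
# alignment; H, time_offset, and window are placeholders, not thesis values.
def transfer_label(frame_xy, frame_time, H, time_offset, events, window=1e-3):
    """frame_xy: (x, y) detection in the frame (e.g., from a pre-trained model);
    H: 3x3 homography from frame pixels to event-sensor pixels;
    time_offset: event_clock minus frame_clock from temporal synchronization;
    events: (M, 4) array of (t, x, y, polarity) rows."""
    # Spatial alignment: project the detected point into event-sensor coordinates.
    p = H @ np.array([frame_xy[0], frame_xy[1], 1.0])
    event_xy = p[:2] / p[2]
    # Temporal alignment: shift the frame timestamp into the event-sensor clock
    # and keep the events within +/- window seconds of it.
    t_event = frame_time + time_offset
    nearby = events[np.abs(events[:, 0] - t_event) <= window]
    return event_xy, nearby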

Intended Audience:
All are Welcome!

To request an interpreter, please visit myaccess.rit.edu


Contact: Lori Hyde

Event Snapshot
When and Where: April 08, 2024, 10:00 am - 11:00 am
Room/Location: Zoom
Who: This is an RIT Only Event
Interpreter Requested? No
Topics: research