Imaging Science Ph.D. Defense: Enhanced 3D sub-canopy mapping via airborne full-waveform LIDAR

Imaging Science Thesis Defense


Robert Wible
Imaging Science Ph.D. Candidate
Rochester Institute of Technology                                                                  

Register for Zoom here

Abstract:

Airborne light detection and ranging (LiDAR) systems have been used for decades to gather information about forests, their canopies, and what lies beneath them. Recent advances in LiDAR sensor technology have enabled higher sampling rates, leading to increased point densities for discrete point clouds. However, vast portions of the forest sub-canopy remain either unsampled or occluded. We contend that waveform LiDAR, a structural modality that digitizes the intensity of the laser backscatter as a function of time (range), contains additional information that can be extracted using modern artificial intelligence (AI) and machine learning (ML) methods. In this study, we developed a geometrically, radiometrically, and structurally accurate 3D model of a 700 m × 500 m plot within the Harvard Forest to generate realistic waveform LiDAR. The Harvard Forest scene was validated by comparing simulated remote sensing data to field data collected in 2019 and 2021. Simulated hyperspectral data produced realistic reflectance values across the entire spectrum for all tree species. When compared to hyperspectral data captured by the National Ecological Observatory Network's (NEON) Airborne Observation Platform (AOP), the simulated data showed strong correlations across the spectrum, with an RMSE under 5.5%. Simulated NEON Optech LiDAR point clouds were also compared to real data and produced highly realistic duplicates that modeled point density and canopy penetration rates to within 1%.

After validating the structure of the scene's canopy, the data were used to train a convolutional neural network (CNN) to classify portions of the waveform previously unused. We used a modified CNN, originally intended for discrete point clouds, to classify real waveform LiDAR data into five classes (background, leaf, bark, ground, and man-made objects). The CNN failed to correctly classify ground and man-made objects, but it produced accurate canopy models filled with leaf and bark voxels at densities four times greater than the point density (PD) of discrete systems. The scene and processes developed during this research effort will help expand our knowledge of full-waveform LiDAR systems and will provide a foundation for future AI/ML efforts to unlock its true potential.
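For readers unfamiliar with the modality, the contrast between full-waveform and discrete-return LiDAR can be sketched in a few lines. This is an illustrative toy example, not the candidate's pipeline: the waveform values, 1 ns bin width, and threshold peak detector are all assumptions chosen for clarity. A waveform sample records backscatter intensity per time bin, range follows from the two-way travel time (r = c·t/2), and a discrete system keeps only the peaks, discarding the weak sub-canopy energy between them.

```python
# Illustrative sketch only (assumed values, not from the thesis):
# full-waveform LiDAR digitizes backscatter intensity per time bin;
# a discrete-return system reduces that record to a few peaks.

C = 299_792_458.0   # speed of light, m/s
BIN_NS = 1.0        # assumed digitizer bin width, nanoseconds

def bin_to_range(bin_index, bin_ns=BIN_NS):
    """Convert a waveform time-bin index to range in metres (r = c*t/2)."""
    t = bin_index * bin_ns * 1e-9   # two-way travel time in seconds
    return C * t / 2.0

def discrete_returns(waveform, threshold):
    """Toy peak detector: local maxima above a threshold, roughly what a
    discrete-return system reports; energy between peaks is discarded."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        if waveform[i] > threshold and waveform[i] >= waveform[i - 1] \
                and waveform[i] > waveform[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic waveform: canopy-top pulse, weak sub-canopy scatter, ground pulse.
wf = [0, 1, 8, 3, 1, 2, 1, 1, 0, 6, 2, 0]
print(discrete_returns(wf, threshold=4))   # -> [2, 9]: only two returns survive
print(round(bin_to_range(100), 2))         # -> 14.99 (metres for a 100 ns delay)
```

The low-amplitude bins between the two peaks are exactly the "portions of the waveform previously unused" that the thesis targets with CNN classification.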

Intended Audience:
All are Welcome!

To request an interpreter, please visit myaccess.rit.edu


Contact
Lori Hyde
Event Snapshot
When and Where
March 25, 2025
1:00 pm - 3:00 pm
Room/Location: via Zoom
Who

This is an RIT Only Event

Interpreter Requested?

No

Topics
research