Civil and Environmental Engineering

Date of this Version

8-27-2019

Citation

© 2019 by the authors.

Comments

Drones 2019, 3, 68; doi:10.3390/drones3030068

Abstract

Aerial data collection is well known as an efficient method for studying the impact of extreme events. While post-disaster remote sensing datasets consist predominantly of images, images alone cannot provide detailed geometric information because they lack depth, and extracting geometric detail from them is complex. In contrast, geometric and color information can be mined readily from three-dimensional (3D) point clouds. Scene classification is commonly studied in machine learning, where a typical pipeline computes a set of engineered features for each point and then classifies the points from those features with a learning algorithm. However, such workflows cannot be applied directly to aerial 3D point clouds because of the large number of points, variation in point density, and variability in object appearance. In this study, the point cloud datasets are transformed into a volumetric grid model for use in training and testing 3D fully convolutional network models. The goal of these models is to semantically segment two areas that sustained damage from Hurricane Harvey (2017) into six classes: damaged structures, undamaged structures, debris, roadways, terrain, and vehicles. These classes were selected to characterize the distribution and intensity of the damage. The point clouds cover two distinct areas and were assembled using aerial Structure-from-Motion with a camera mounted on an unmanned aerial system. The two datasets contain approximately 5000 and 8000 unique instances, respectively, and the developed methods are assessed quantitatively using precision, accuracy, recall, and intersection-over-union metrics.
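The conversion of point clouds into a volumetric grid described in the abstract is a voxelization step. The paper's exact gridding procedure is not reproduced here; the sketch below shows one plausible occupancy-plus-mean-color voxelization in Python/NumPy, where `voxel_size`, `grid_shape`, and the four-channel layout are illustrative assumptions rather than values from the study.

```python
import numpy as np

def voxelize(points, colors, voxel_size=0.5, grid_shape=(64, 64, 32)):
    """Convert an (N, 3) point cloud with (N, 3) RGB colors into a dense
    volumetric grid: one occupancy channel plus three mean-color channels.

    voxel_size and grid_shape are illustrative assumptions, not values
    taken from the paper.
    """
    # Shift points so the grid origin sits at the minimum corner.
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)

    # Keep only points that fall inside the fixed grid extent.
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx, colors = idx[inside], colors[inside]

    grid = np.zeros((4, *grid_shape), dtype=np.float32)
    counts = np.zeros(grid_shape, dtype=np.float32)

    # np.add.at accumulates correctly when several points share a voxel,
    # which a plain fancy-index assignment would not do.
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    for c in range(3):
        np.add.at(grid[1 + c], (idx[:, 0], idx[:, 1], idx[:, 2]), colors[:, c])

    occupied = counts > 0
    grid[0][occupied] = 1.0                        # binary occupancy channel
    for c in range(3):
        grid[1 + c][occupied] /= counts[occupied]  # mean color per voxel
    return grid
```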
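For the 3D fully convolutional network itself, only the general model family is named in the abstract. As a hedged illustration, a minimal encoder-decoder 3D FCN in PyTorch might look like the sketch below; the layer counts, channel widths, and the four-channel input (matching the voxel grid above) are assumptions, not the architecture used by the authors.

```python
import torch.nn as nn

class TinyFCN3D(nn.Module):
    """Minimal 3D fully convolutional segmenter; layer sizes are
    illustrative assumptions, not the architecture from the paper."""
    def __init__(self, in_channels=4, num_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # halve spatial resolution
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),  # upsample back
            nn.Conv3d(16, num_classes, 1),        # per-voxel class scores
        )

    def forward(self, x):                         # x: (batch, channels, D, H, W)
        return self.decoder(self.encoder(x))
```

Because the network is fully convolutional, it emits a class score for every voxel rather than a single label per grid, which is what makes per-voxel semantic segmentation of the volumetric model possible.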
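Finally, the reported evaluation uses precision, accuracy, recall, and intersection over union. These metrics have standard definitions; the per-class computation below is a small reference sketch in which the integer class-ID ordering for the six classes is an assumption.

```python
import numpy as np

def per_class_metrics(pred, target, num_classes=6):
    """Per-class precision, recall, and IoU from integer label volumes.

    pred and target are same-shape integer arrays of class IDs in
    [0, num_classes); the six-class labeling follows the abstract, but
    the ID ordering here is an assumption.
    """
    pred, target = pred.ravel(), target.ravel()
    metrics = {}
    for c in range(num_classes):
        tp = np.sum((pred == c) & (target == c))   # true positives
        fp = np.sum((pred == c) & (target != c))   # false positives
        fn = np.sum((pred != c) & (target == c))   # false negatives
        metrics[c] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
            "iou": tp / (tp + fp + fn) if tp + fp + fn else 0.0,
        }
    # Overall accuracy: fraction of voxels labeled correctly.
    metrics["accuracy"] = float(np.mean(pred == target))
    return metrics
```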
