Publication Notices

Notifications of New Publications Released by ERDC


Results:
Tag: aerial photography
  • Photographic Aerial Transects of Fort Wainwright, Alaska

    Abstract: This report presents the results of low-altitude photographic transects conducted over the training areas of US Army Garrison Fort Wainwright, in the boreal biome of central Alaska, to document baseline land-cover conditions. Flights were conducted in a Cessna™ 180 on two flight paths over portions of the Tanana Flats, Yukon, and Donnelly Training Areas, covering 486 mi (782 km) while documenting GPS waypoints. Nadir photographs were taken with two GoPro™ cameras operating at 5-second time-lapse intervals, and oblique imagery was taken with a handheld digital camera. This yielded 6,063 GoPro photos and 706 oblique photos. Each image was intersected with a land-cover classification map; collectively, the images represent 38 of the 44 cover categories.
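    The core analysis step here, intersecting each geotagged photo with a land-cover classification map, amounts to a point-in-grid lookup. A minimal sketch of that idea follows; the grid edges, class names, and coordinates below are invented for illustration and are not the report's actual map or data.

    ```python
    from bisect import bisect_right

    def classify_point(lat, lon, lat_edges, lon_edges, class_grid):
        """Return the land-cover class of the grid cell containing a point,
        or None if the point falls outside the grid."""
        row = bisect_right(lat_edges, lat) - 1
        col = bisect_right(lon_edges, lon) - 1
        if 0 <= row < len(class_grid) and 0 <= col < len(class_grid[0]):
            return class_grid[row][col]
        return None

    # Toy 2x2 grid covering lat 64-65 N, lon 148-146 W (invented classes).
    lat_edges = [64.0, 64.5, 65.0]
    lon_edges = [-148.0, -147.0, -146.0]
    class_grid = [["bog", "shrub"],
                  ["spruce", "burn"]]

    # Invented photo waypoints; the third falls south of the grid.
    photos = [(64.2, -147.5), (64.8, -146.4), (63.9, -147.0)]
    classes = [classify_point(lat, lon, lat_edges, lon_edges, class_grid)
               for lat, lon in photos]
    print(classes)  # ['bog', 'burn', None]
    ```

    A production workflow would read photo EXIF coordinates and query a geospatial raster, but the cell-lookup logic is the same.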
  • Evaluation of Unmanned Aircraft Systems for Flood Risk Management: Results of Terrain and Structure Assessments

    Abstract: The 2017 Duck Unmanned Aircraft Systems (UAS) Pilot Experiment was conducted by the US Army Engineer Research and Development Center (ERDC), Coastal and Hydraulics Laboratory, Field Research Facility (FRF), to assess the potential for different UAS to support US Army Corps of Engineers coastal and flood risk management. By involving participants from multiple ERDC laboratories, federal agencies, academia, and private industry, the work unit leads were able to leverage assets, resources, and expertise to assess data from multiple UAS. This report compares datasets from several UAS to assess their potential to survey and observe coastal terrain and structures. UAS data product accuracy was analyzed within the context of three potential applications: (1) general coastal terrain survey accuracy across the FRF property; (2) small-scale feature detection and observation within the experiment infrastructure area; and (3) accuracy for surveying coastal foredunes. The report concludes by presenting tradeoffs between UAS accuracy and operating cost to aid in selecting the best UAS for a particular task. While the technology and exact UAS models vary over time, the lessons learned from this study illustrate that UAS are available at a variety of costs to satisfy varying coastal management data needs.
  • guiBathy: A Graphical User Interface to Estimate Nearshore Bathymetry from Hovering Unmanned Aerial System Imagery

    Abstract: This US Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, technical report details guiBathy, a graphical user interface to estimate nearshore bathymetry from imagery collected via a hovering Unmanned Aerial System (UAS). guiBathy provides an end-to-end solution for non-subject-matter experts to utilize commercial off-the-shelf UAS to collect quantitative imagery of the nearshore by packaging robust photogrammetric and signal-processing algorithms into an easy-to-use software interface. This report begins with brief background on coastal imaging, the photogrammetry and bathymetric inversion algorithms guiBathy utilizes, and UAS data collection requirements. The report then describes guiBathy software specifications, features, and workflow. Example guiBathy applications conclude the report with UAS bathymetry measurements taken during the 2020 Atlantic Hurricane Season, which compare favorably (root mean square error = 0.44 to 0.72 m; bias = -0.35 to -0.11 m) with in situ survey measurements. guiBathy is standalone executable software for Windows 10 platforms and will be freely available at www.github.com/erdc.
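    The RMSE and bias figures quoted above are standard aggregate statistics over paired estimated and surveyed depths. As a hedged sketch of how such numbers are typically computed (the depth values below are invented and are not data from the report):

    ```python
    import math

    def rmse_and_bias(estimated, surveyed):
        """Root-mean-square error and mean bias of estimates vs. survey truth.
        Positive bias means the estimates run deep relative to the survey."""
        errors = [e - s for e, s in zip(estimated, surveyed)]
        rmse = math.sqrt(sum(err ** 2 for err in errors) / len(errors))
        bias = sum(errors) / len(errors)
        return rmse, bias

    # Invented depth pairs in meters, for illustration only.
    uas_depths    = [1.2, 2.0, 3.1, 4.4]
    survey_depths = [1.5, 2.2, 3.0, 4.9]

    rmse, bias = rmse_and_bias(uas_depths, survey_depths)
    print(f"RMSE = {rmse:.2f} m, bias = {bias:.2f} m")
    ```

    A negative bias, as reported for guiBathy, indicates the imagery-derived depths are systematically shallower than the in situ survey.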
  • Use of Convolutional Neural Networks for Semantic Image Segmentation Across Different Computing Systems

    Abstract: The advent of powerful computing platforms coupled with deep learning architectures has resulted in novel approaches to many traditional computer vision problems, automating the interpretation of large and complex geospatial data. Such tasks are particularly important as data are widely available and unmanned aircraft systems (UAS) are increasingly being used. This document presents a workflow that leverages convolutional neural networks (CNNs) and graphics processing units (GPUs) to automate pixel-wise segmentation of UAS imagery for faster image processing. GPU-based computing and parallelization are explored on multi-GPU systems to reduce development time, mitigate the need for extensive model training, and facilitate exploitation of mission-critical information. VGG-16 model training times are compared among different systems (single GPU, virtual GPU, multiple GPUs) to investigate each platform's capabilities. CNN results show a precision of 88% when applied to ground truth data. Coupling the VGG-16 model with GPU-accelerated processing and parallelizing across multiple GPUs decreases model training time while preserving accuracy. This signifies that the GPU memory and cores available within a system are critical components of preprocessing and processing speed. This workflow can be leveraged for future segmentation efforts, serve as a baseline to benchmark future CNNs, and efficiently support critical image processing tasks for the military.
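    The 88% figure is a pixel-wise precision score, i.e., true positives over all positive predictions across the segmentation mask. A minimal sketch of that metric for one class follows; the toy masks below are invented and this is not the report's evaluation code.

    ```python
    def pixel_precision(predicted, truth, positive_class):
        """Precision = TP / (TP + FP), computed over flattened pixel labels."""
        tp = sum(1 for p, t in zip(predicted, truth)
                 if p == positive_class and t == positive_class)
        fp = sum(1 for p, t in zip(predicted, truth)
                 if p == positive_class and t != positive_class)
        return tp / (tp + fp) if (tp + fp) else 0.0

    # Toy 4x4 masks flattened to 1-D; 1 = target class, 0 = background.
    predicted = [1, 1, 0, 0,  1, 1, 0, 0,  0, 1, 0, 0,  0, 0, 0, 0]
    truth     = [1, 1, 0, 0,  1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 0, 0]

    print(pixel_precision(predicted, truth, 1))  # 4 TP, 1 FP -> 0.8
    ```

    In a multi-class segmentation evaluation the same calculation is repeated per class and then averaged, but the per-pixel counting logic is unchanged.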