Publication Notices

Notifications of New Publications Released by ERDC

Contact Us

866.362.3732
601.634.2355


Results:
Tag: Computer vision
  • Comparing the Thermal Infrared Signatures of Shallow Buried Objects and Disturbed Soil

    Abstract: The alteration of physical and thermal properties of native soil during object burial produces a signature that can be detected using thermal infrared (IR) imagery. This study explores the thermal signature of disturbed soil compared to buried objects of different compositions (e.g., metal and plastic) buried 5 cm below ground surface (bgs) to better understand the mechanisms by which soil disturbance can impact the performance of aided target detection and recognition (AiTD/R). IR imagery recorded every five minutes was coupled with meteorological data recorded at 15-minute intervals from 1 July to 31 October 2022 to compare the diurnal and long-term fluctuations in raw radiance within a 25 × 25 pixel area of interest (AOI) above each target. This study examined the diurnal pattern of the thermal signature under varying environmental conditions. Results showed that surface effects from soil disturbance increased the raw radiance of the AOI, strengthening the contrast between the object and background soil for several weeks after object burial. Enhancement of the thermal signature may lead to expanded windows of object visibility. Target age was identified as an important element in the development of training data sets for machine learning (ML) classification algorithms.
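The AOI contrast statistic described above (mean raw radiance inside a 25 × 25 pixel window versus the surrounding background soil) can be sketched as follows. The frame, AOI center, and radiance values are synthetic stand-ins for illustration, not data from the study.

```python
import numpy as np

def aoi_contrast(frame, center, half=12, bg_margin=10):
    """Mean raw radiance in a 25 x 25 pixel AOI around `center`,
    minus the mean of a surrounding background ring."""
    r, c = center
    aoi = frame[r - half:r + half + 1, c - half:c + half + 1]
    bg = frame[r - half - bg_margin:r + half + bg_margin + 1,
               c - half - bg_margin:c + half + bg_margin + 1]
    bg_sum = bg.sum() - aoi.sum()          # ring only: outer block minus AOI
    bg_n = bg.size - aoi.size
    return aoi.mean() - bg_sum / bg_n

# Synthetic frame: uniform background with a warmer disturbed-soil patch
frame = np.full((200, 200), 1000.0)
frame[88:113, 88:113] += 50.0              # 25 x 25 warm AOI
print(aoi_contrast(frame, (100, 100)))     # -> 50.0
```

In the study, the same statistic would be tracked per five-minute frame across the July-October collection to follow diurnal and long-term fluctuations.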
  • Thermography Conversion for Optimal Noise Reduction

    Abstract: Computer vision applications using raw thermal radiance are limited by byte size. Normalizing the raw imagery reduces functional complexity, which can aid a computer processing algorithm. This work explores a method to normalize 16-bit signed integer (I16) imagery into unsigned 8-bit (U8) imagery while maintaining the integrity of the correlation coefficients between the raw data sets and the environmental parameters that affect thermal anomaly detectability.
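Because a min-max rescale is affine, it leaves Pearson correlation coefficients essentially unchanged apart from 8-bit quantization error. A minimal sketch of such an I16-to-U8 conversion, using a synthetic radiance series and a hypothetical air-temperature parameter (not the paper's method or data):

```python
import numpy as np

def i16_to_u8(img):
    """Linearly rescale signed 16-bit radiance to unsigned 8-bit.
    An affine map preserves Pearson correlation up to rounding error."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return np.round(255.0 * (img - lo) / (hi - lo)).astype(np.uint8)

rng = np.random.default_rng(0)
raw = rng.integers(-2000, 6000, size=1000).astype(np.int16)   # synthetic I16 radiance
temp = 0.01 * raw + rng.normal(0, 5, size=1000)               # hypothetical air temperature

r_raw = np.corrcoef(raw, temp)[0, 1]
r_u8 = np.corrcoef(i16_to_u8(raw), temp)[0, 1]
print(round(r_raw, 3), round(r_u8, 3))   # the two coefficients are nearly identical
```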
  • Terrestrial Vision-Based Localization Using Synthetic Horizons

    Abstract: Vision-based localization could improve navigation and routing solutions in GPS-denied environments. In this study, data from a Carnegie Robotics MultiSense S7 stereo camera were matched to a synthetic horizon derived from foundation sources using novel two-dimensional correlation techniques. Testing was conducted at multiple observation locations over known ground control points (GCPs) at the US Army Engineer Research and Development Center (ERDC), Geospatial Research Laboratory (GRL), Corbin Research Facility. Testing was conducted at several different observational azimuths for these locations to account for the many possible viewing angles in a scene. Multiple observational azimuths were also tested together to see how the number of viewing angles affected results. These initial tests were conducted to support future efforts to test the S7 camera under more realistic conditions, in different environments, and while expanding the collection and processing methodologies to additional sensor systems.
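The horizon-matching idea can be illustrated with a simplified one-dimensional analogue of the paper's two-dimensional correlation: slide an observed horizon-elevation profile around a synthetic 360° profile and keep the azimuth with the highest normalized correlation. The skyline function and noise level below are invented for illustration.

```python
import numpy as np

def locate_azimuth(observed, synthetic):
    """Slide the observed horizon profile over the synthetic 360-degree
    profile (1 sample per degree) and return the azimuth of maximum
    normalized correlation."""
    n = len(observed)
    obs = (observed - observed.mean()) / observed.std()
    best, best_az = -np.inf, None
    for az in range(len(synthetic)):
        win = np.roll(synthetic, -az)[:n].astype(float)  # wrap around 360 degrees
        win = (win - win.mean()) / win.std()
        score = float(obs @ win) / n
        if score > best:
            best, best_az = score, az
    return best_az

az_grid = np.arange(360)
synthetic = 5 + 3 * np.sin(np.radians(az_grid))   # invented skyline elevations (deg)
observed = synthetic[120:180] + np.random.default_rng(1).normal(0, 0.1, 60)
print(locate_azimuth(observed, synthetic))        # close to the true azimuth of 120
```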
  • CRREL Environmental Wind Tunnel Upgrades and the Snowstorm Library

    Abstract: Environmental wind tunnels are ideal for basic research and applied physical modeling of atmospheric conditions and turbulent wind flow. The Cold Regions Research and Engineering Laboratory's own Environmental Wind Tunnel (EWT)—an open-circuit suction wind tunnel—has been historically used for snowdrift modeling. Recently the EWT has gone through several upgrades, namely the three-axis chassis motors, variable frequency drive, and probe and data acquisition systems. The upgraded wind tunnel was used to simulate various snowstorm conditions to produce a library of images for training machine learning models. Various objects and backgrounds were tested in snowy test conditions and no-snow control conditions, producing a total of 1.4 million training images. This training library can lead to improved machine learning models for image-cleanup and noise-reduction purposes for Army operations in snowy environments.
  • UGV SLAM Payload for Low-Visibility Environments

    Abstract: Herein, we explore a low size, weight, power, and cost (SWaP-C) unmanned ground vehicle (UGV) payload designed specifically for low-visibility environments. The proposed payload simultaneously localizes and maps in GPS-denied environments via waypoint navigation. This solution utilizes a diverse sensor payload that includes wheel encoders, an inertial measurement unit, 3D lidar, 3D ultrasonic sensors, and thermal cameras. Furthermore, the resulting 3D point cloud was compared against a survey-grade lidar scan.
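A common way to score such a comparison against a survey-grade reference is a cloud-to-cloud nearest-neighbor error. The abstract does not state which metric was used, so this is a hedged sketch on synthetic point clouds.

```python
import numpy as np

def cloud_to_cloud_error(test_pts, ref_pts):
    """For each point in the test cloud, distance to the nearest
    reference (survey-grade) point; returns mean and max error.
    Brute force is fine for a sketch; use a KD-tree for real clouds."""
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()

rng = np.random.default_rng(2)
ref = rng.uniform(0, 10, size=(500, 3))            # stand-in survey-grade cloud (m)
test = ref[:300] + rng.normal(0, 0.02, (300, 3))   # payload cloud with ~2 cm noise
mean_err, max_err = cloud_to_cloud_error(test, ref)
print(f"mean {mean_err:.3f} m, max {max_err:.3f} m")
```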
  • PUBLICATION NOTICE: Understanding State-of-the-Art Material Classification through Deep Visualization

    Abstract: Neural networks (NNs) excel at solving several complex, non-linear problems in the area of supervised learning. A prominent application of these networks is image classification. Numerous improvements over the last few decades have increased the capability of these image classifiers. However, neural networks remain a black box when solving image classification and other sophisticated tasks. A number of experiments have looked into exactly how neural networks solve these complex problems. This paper dismantles the neural network solution of a specific material classifier that incorporates convolutional layers. Several techniques are utilized to investigate the solution to this problem. These techniques examine which pixels contribute to the decision made by the NN, as well as each neuron's contribution to that decision. The purpose of this investigation is to understand the decision-making process of the NN and to use this knowledge to suggest improvements to the material classification algorithm.
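One common pixel-level technique of the kind the abstract alludes to is a gradient-based saliency map: the magnitude of the class score's gradient with respect to each input pixel. A toy sketch on a small fully connected network, standing in for the material classifier (whose actual architecture is not given here):

```python
import numpy as np

rng = np.random.default_rng(3)
# Tiny two-layer "classifier": 64 input pixels -> 16 hidden -> 3 classes
W1 = rng.normal(0, 0.1, (16, 64)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (3, 16));  b2 = np.zeros(3)

def saliency(x, target):
    """Gradient of the target class score w.r.t. input pixels:
    large |d(score)/d(pixel)| marks pixels driving the decision."""
    h_pre = W1 @ x + b1
    h = np.maximum(h_pre, 0.0)                # ReLU forward pass
    # Backprop: d(score)/dh = W2[target]; gate by ReLU mask; back through W1
    grad = (W2[target] * (h_pre > 0)) @ W1
    return np.abs(grad).reshape(8, 8)         # view as an 8 x 8 "image"

x = rng.normal(size=64)
sal = saliency(x, target=0)
print(sal.shape)  # -> (8, 8)
```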
  • PUBLICATION NOTICE: Use of Convolutional Neural Networks for Semantic Image Segmentation Across Different Computing Systems

    ABSTRACT: The advent of powerful computing platforms coupled with deep learning architectures has resulted in novel approaches to tackle many traditional computer vision problems and to automate the interpretation of large, complex geospatial data. Such tasks are particularly important as data become widely available and unmanned aircraft systems (UAS) are increasingly used. This document presents a workflow that leverages convolutional neural networks (CNNs) and graphics processing units (GPUs) to automate pixel-wise segmentation of UAS imagery for faster image processing. GPU-based computing and parallelization are explored on multi-core GPUs to reduce development time, mitigate the need for extensive model training, and facilitate exploitation of mission-critical information. VGG-16 model training times are compared among different systems (single, virtual, and multiple GPUs) to investigate each platform's capabilities. CNN results show a precision of 88% when applied to ground truth data. Coupling the VGG-16 model with GPU-accelerated processing and parallelizing across multiple GPUs decreases model training time while preserving accuracy. This signifies that the GPU memory and cores available within a system are critical components of preprocessing and processing speed. This workflow can be leveraged for future segmentation efforts, serve as a baseline to benchmark future CNNs, and efficiently support critical image processing tasks for the military.
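The pixel-wise precision reported above can be computed per class by comparing the CNN's label map against ground truth; the tiny label maps and class encoding below are illustrative, not the study's data.

```python
import numpy as np

def pixel_precision(pred, truth, cls=1):
    """Pixel-wise precision for one class: of the pixels the model
    labeled `cls`, the fraction the ground truth also labels `cls`."""
    tp = np.sum((pred == cls) & (truth == cls))   # true positives
    fp = np.sum((pred == cls) & (truth != cls))   # false positives
    return tp / (tp + fp)

truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1                   # ground-truth segment of class 1
pred = truth.copy()
pred[0, 0] = 1                        # one false-positive pixel
print(pixel_precision(pred, truth))   # -> 0.8
```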