Publication Notices

Notifications of New Publications Released by ERDC


Results:
Tag: Electronic data processing
  • During Nearshore Event Vegetation Gradation (DUNEVEG): Geospatial Tools for Automating Remote Vegetation Extraction

    Abstract: Monitoring and modeling of coastal vegetation and ecosystems are major challenges, especially when considering environmental response to hazards, disturbances, and management activities. Remote sensing applications can provide alternatives and complementary approaches to the often costly and laborious field-based collection methods traditionally used for coastal ecosystem monitoring. New and improved sensors and data analysis techniques have become available, making remote sensing applications attractive for evaluation and potential use in monitoring coastal vegetation properties, ecosystem conditions, and changes. This study involves the extraction of vegetation metrics from airborne lidar and hyperspectral imagery (HSI) collected by the US Army Corps of Engineers (USACE) National Coastal Mapping Program (NCMP) to quantify coastal dune vegetation characteristics. A custom geoprocessing toolbox and associated suite of tools were developed to accept common NCMP lidar and imagery products as inputs and to help automate the workflow for extracting prioritized dune vegetation metrics in an efficient and repeatable way. This study advances existing coastal ecosystem knowledge and remote sensing techniques by developing new methodologies to classify, quantify, and estimate critical coastal vegetation metrics, which will ultimately improve future estimates and predictions of nearshore dynamics and impacts from disturbance events.
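    The abstract describes deriving per-cell vegetation metrics from lidar returns. As a rough illustration of the idea (not the NCMP toolbox itself), the sketch below grids lidar points and estimates a vegetation-height metric per cell as the spread between the highest return and the lowest (assumed bare-earth) return; all names and the cell-based approach are illustrative assumptions.

```python
from collections import defaultdict

def dune_canopy_height(points, cell=1.0):
    """Hypothetical metric: for each (cell-sized) grid square, return the
    max return elevation minus the min elevation, as a crude proxy for
    vegetation height above bare earth.

    points : iterable of (x, y, z) lidar returns
    cell   : grid cell size in the same units as x and y
    """
    cells = defaultdict(list)
    for x, y, z in points:
        # bin each return into its grid cell by integer division
        cells[(int(x // cell), int(y // cell))].append(z)
    return {c: max(zs) - min(zs) for c, zs in cells.items()}
```

    A real workflow would use a classified ground surface (DEM) rather than the per-cell minimum, but the grid-and-aggregate pattern is the same.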
  • Accelerating the Tactical Decision Process with High-Performance Computing (HPC) on the Edge: Motivation, Framework, and Use Cases

    Abstract: Managing the ever-growing volume and velocity of data across the battlefield is a critical problem for warfighters. Solving this problem will require a fundamental change in how battlefield analyses are performed. A new approach to making decisions on the battlefield will eliminate data transport delays by moving the analytical capabilities closer to data sources. Decision cycles depend on the speed at which data can be captured and converted to actionable information for decision making. Real-time situational awareness is achieved by locating computational assets at the tactical edge. Accelerating the tactical decision process leverages capabilities in three technology areas: (1) High-Performance Computing (HPC), (2) Machine Learning (ML), and (3) Internet of Things (IoT). Exploiting these areas can reduce network traffic and shorten the time required to transform data into actionable information. Faster decision cycles may revolutionize battlefield operations. Presented is an overview of an artificial intelligence (AI) system design for near-real-time analytics in a tactical operational environment executing on co-located, mobile HPC hardware. The report contains the following sections: (1) an introduction describing motivation, background, and state of technology; (2) descriptions of the tactical decision process leveraging HPC, the problem definition, and a use case; and (3) the HPC tactical data analytics framework design enabling data to decisions.
  • PUBLICATION NOTICE: Parallel File I/O for Geometric Models: Formats and Methods

    Abstract: Processing large amounts of data in a High-Performance Computing (HPC) environment can be throttled quickly without an efficient method for utilizing disk I/O. The Geometry Engine component of the Virtual Environment for Sensor Performance Assessment (VESPA) uses MPI-IO to load the geometric data in parallel and avoid creating a bottleneck on disk I/O interactions. This parallel I/O method requires formatting the data into specific binary file formats so each MPI process of the parallel program can determine where to read or write data without colliding with other MPI processes. Addressing the collision problem resulted in the development of two binary file formats, the Mesh Binary file (.mb) and the Scene Chunk Pack file (.scp). The Mesh Binary file contains the essential data required to recreate the landscape and vegetation geometry used by the Geometry Engine. The Scene Chunk Pack file is used to write the partitioned geometry to disk, so the ray casting engine can reload the distributed geometry without repeating the partitioning process. Both of these files together support reading and writing for the partitioning phase and the ray casting phase of the Geometry Engine. This report discusses these formats in detail and outlines how the Geometry Engine reads and writes these files in parallel on HPC.
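    The abstract's key point is that each MPI process must be able to compute, on its own, a byte range to read or write that cannot collide with any other process. A minimal sketch of that offset arithmetic (plain Python, not the VESPA code or its .mb/.scp formats; the function name and record-based layout are assumptions) looks like this:

```python
def chunk_extent(total_records, record_size, nranks, rank):
    """Compute the (byte_offset, byte_count) that a given rank should
    read or write so that all ranks together cover the file exactly
    once, with no overlap -- the precondition for collision-free
    MPI-IO access.
    """
    # distribute records as evenly as possible; the first `rem`
    # ranks each take one extra record
    base, rem = divmod(total_records, nranks)
    count = base + (1 if rank < rem else 0)
    first = rank * base + min(rank, rem)  # records owned by lower ranks
    return first * record_size, count * record_size
```

    In an actual MPI program, each rank would pass its computed offset to a call such as `MPI_File_read_at_all`, so the ranges returned here are exactly the per-process file views the abstract alludes to.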
  • PUBLICATION NOTICE: Using Morton Codes to Partition Faceted Geometry: An Architecture for Terabyte-Scale Geometry Models

    Abstract: The Virtual Environment for Sensor Performance Assessment (VESPA) project requires enormous, high-fidelity landscape models to generate synthetic sensor imagery with little to no artificial artifacts. These high-fidelity landscapes require a memory footprint substantially larger than the local memory of a single High-Performance Computing (HPC) compute node. Processing geometries of this size requires distributing the geometry over multiple compute nodes instead of including a full copy in each compute node, the common approach in parallel modeling applications. To process these geometric models in parallel memory on a high-performance computing system, the Geometry Engine component of the VESPA project includes an architecture for partitioning the geometry spatially using Morton codes and MPI (Message Passing Interface) collective communication routines. The methods used for this partitioning process are addressed in this report. Incorporating this distributed architecture into the Geometry Engine provides the capability to distribute and perform parallel ray casting on landscape geometries over a terabyte in size. Test case timings demonstrate scalable speedups as the number of processes is increased on an HPC machine.
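    A Morton code interleaves the bits of a cell's integer coordinates into one key, so that sorting by the key keeps spatially nearby cells close together in the sorted order. The sketch below (an illustrative standalone version, not the Geometry Engine's implementation) computes 3D Morton codes and cuts the sorted order into contiguous chunks, one per process; the function names are assumptions.

```python
def morton3d(x, y, z, bits=21):
    """Interleave the low `bits` bits of integer grid coordinates
    (x, y, z) into a single Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)      # x bits -> positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)  # y bits -> positions 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)  # z bits -> positions 2, 5, 8, ...
    return code

def partition_by_morton(cells, nparts):
    """Sort grid cells by Morton code, then slice the sorted order into
    `nparts` contiguous chunks. Because Z-order preserves locality,
    each chunk is a spatially coherent region (one per MPI rank)."""
    ordered = sorted(cells, key=lambda c: morton3d(*c))
    size, rem = divmod(len(ordered), nparts)
    parts, start = [], 0
    for r in range(nparts):
        n = size + (1 if r < rem else 0)
        parts.append(ordered[start:start + n])
        start += n
    return parts
```

    In a distributed setting, each rank would then use MPI collective routines to gather the geometry belonging to its chunk, which is the pattern the abstract describes.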
  • PUBLICATION NOTIFICATION: Coincidence Processing of Photon-Sensitive Mapping Lidar Data

     Link: http://dx.doi.org/10.21079/11681/35599
     Report Number: ERDC/GRL TR-20-1
     Title: Coincidence Processing of Photon-Sensitive Mapping Lidar Data
     By Christian Marchant, Ryan Kirkpatrick, and David Ober
     Approved for Public Release; Distribution is Unlimited
     February 2020
     Abstract: Photon-sensitive mapping lidar systems are able to image at greater