Publication Notices

Notifications of New Publications Released by ERDC


Results:
Category: Publications: Information Technology Laboratory (ITL)
  • Robot Operating System Innovations in Autonomous Navigation

    Abstract: This report presents the results of simulations conducted in preparation for the 2024 Maneuver Support and Protection Integration Experiments (MSPIX) demonstration. The study aimed to develop and test a system for autonomous navigation in complex environments, using advanced algorithms to enable the robot to avoid obstacles and navigate safely and efficiently. The report describes the methodology used to develop and test the autonomous navigation system, including the use of simulation to evaluate its performance, and presents the results of the simulation tests to highlight the effectiveness of the navigation solution.
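    The report details the navigation stack actually used for MSPIX; purely as an illustration of the kind of ROS-based obstacle avoidance the abstract describes, the minimal rclpy node below turns away from the nearest lidar return. The topic names (/scan, /cmd_vel), safety distance, and speeds are assumptions, not values from the report.

      # Illustrative ROS 2 obstacle-avoidance sketch (not the system described in
      # the report). Assumes a robot that publishes sensor_msgs/LaserScan on /scan
      # and accepts geometry_msgs/Twist on /cmd_vel, e.g., a simulated TurtleBot3.
      import math
      import rclpy
      from rclpy.node import Node
      from sensor_msgs.msg import LaserScan
      from geometry_msgs.msg import Twist

      class SimpleAvoider(Node):
          def __init__(self):
              super().__init__('simple_avoider')
              self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
              self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

          def on_scan(self, scan: LaserScan):
              cmd = Twist()
              # Drop invalid returns (inf/NaN), then find the closest obstacle.
              valid = [r for r in scan.ranges if math.isfinite(r) and r > 0.0]
              nearest = min(valid, default=float('inf'))
              if nearest < 0.5:          # assumed safety distance in meters
                  cmd.angular.z = 0.6    # turn in place away from the obstacle
              else:
                  cmd.linear.x = 0.2     # otherwise cruise forward
              self.pub.publish(cmd)

      def main():
          rclpy.init()
          rclpy.spin(SimpleAvoider())
          rclpy.shutdown()

      if __name__ == '__main__':
          main()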
  • Smart Installation Weather Warning Decision Support

    Abstract: Army installation commanders need timely weather information to make installation closure decisions before or during adverse weather events (e.g., hail, thunderstorms, snow, and floods). We worked with the military installation at Fort Carson, CO, and used its Weather Warning, Watch, and Advisory (WWA) criteria list as the foundation for our algorithm. We divided the Colorado Springs area into 2,300 grid cells (2.5 square kilometers each) and grouped the cells into ten microclimates, geographically and meteorologically distinct regions, according to the predefined microclimate regions provided by the Fort Carson Air Force Staff Weather Officers (SWOs). Our algorithm classifies each weather event in the WWA list using data from the National Weather Service and the National Digital Forecast Database and assigns each event a criticality level: none, advisory, watch, or warning. The algorithm also uses traffic network data, which indicate the importance of each road segment for travel to and from Fort Carson, to assign a weight to each grid cell; these weights enable aggregation to the region and installation levels. We developed a weather dashboard in ArcGIS Pro to verify our algorithm and to visualize the forecasted warnings for the grid cells and regions that are or may be affected by weather events.
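    As an illustration of the grid-level classification and traffic-weighted roll-up the abstract describes, a minimal Python sketch might look like the following; the thresholds, trigger fraction, and weights are placeholders, not the Fort Carson WWA criteria.

      # Illustrative sketch of per-cell criticality classification and
      # traffic-weighted aggregation to a region level. All thresholds and
      # weights are placeholders, not the Fort Carson WWA criteria list.
      LEVELS = ["none", "advisory", "watch", "warning"]

      def classify_wind(gust_kt):
          """Map a forecast wind gust (knots) to a criticality level (assumed thresholds)."""
          if gust_kt >= 50:
              return "warning"
          if gust_kt >= 40:
              return "watch"
          if gust_kt >= 30:
              return "advisory"
          return "none"

      def aggregate(cell_levels, cell_weights):
          """Roll cell-level criticality up to a region using traffic-derived weights."""
          total = sum(cell_weights.values())
          for level in reversed(LEVELS[1:]):   # warning, then watch, then advisory
              share = sum(w for cell, w in cell_weights.items()
                          if LEVELS.index(cell_levels.get(cell, "none")) >= LEVELS.index(level))
              if share / total >= 0.25:        # assumed trigger fraction
                  return level
          return "none"

      # Example: three grid cells with forecast gusts and traffic-based weights.
      gusts = {"g1": 45.0, "g2": 28.0, "g3": 52.0}
      levels = {cell: classify_wind(g) for cell, g in gusts.items()}
      weights = {"g1": 0.5, "g2": 0.2, "g3": 0.3}
      print(levels, "->", aggregate(levels, weights))   # -> warning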
  • An All-Hazards Return on Investment (ROI) Model to Evaluate U.S. Army Installation Resilient Strategies

    Abstract: The paper describes our project to develop, verify, and deploy an All-Hazards Return on Investment (ROI) model for the U.S. Army Engineer Research and Development Center (ERDC) to provide Army installations with a decision support tool for evaluating strategies to make existing installation facilities more resilient. Increased resilience to extreme weather is required by U.S. Code and DoD guidance, and an Army strategic plan stipulates an ROI model to evaluate relevant resilient strategies. ERDC integrated the model designed by the University of Arkansas into a new Army installation planning tool and expanded its scope to evaluate resilient options for all hazards rather than climate alone. Our methodology included research on policy, data sources, resilient options, and analytical techniques, along with stakeholder interviews and weekly meetings with the installation planning tool developers. The ROI model uses standard risk analysis and engineering economics terms and analyzes potential installation hazards and resilient strategies using data in the installation planning tool. The model calculates the expected net present cost without the resilient strategy, the expected net present cost with the strategy, and the ROI for each strategy. The minimum viable product ROI model was formulated mathematically, coded in Python, verified using hazard scenarios, and provided to ERDC for implementation.
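    The paper formalizes the model itself; a minimal sketch of the expected-net-present-cost comparison and ROI calculation it describes might look like the following, where the hazard probability, losses, costs, discount rate, and horizon are illustrative placeholders rather than values from the model.

      # Minimal expected-net-present-cost and ROI sketch in the spirit of the
      # model the abstract describes. All numbers are illustrative placeholders.
      def expected_npc(annual_prob, loss_per_event, upfront_cost, discount_rate, years):
          """Expected net present cost: upfront cost plus discounted expected annual losses."""
          discounted_losses = sum(
              annual_prob * loss_per_event / (1 + discount_rate) ** t
              for t in range(1, years + 1)
          )
          return upfront_cost + discounted_losses

      # Baseline: no investment; a flood damages the facility with 2% annual
      # probability. Strategy: floodproofing that halves the expected loss.
      npc_without = expected_npc(0.02, 5_000_000, 0.0, 0.03, 30)
      npc_with = expected_npc(0.02, 2_500_000, 400_000, 0.03, 30)

      roi = (npc_without - npc_with) / 400_000   # avoided cost per dollar invested
      print(f"NPC without: {npc_without:,.0f}  NPC with: {npc_with:,.0f}  ROI: {roi:.2f}")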
  • Analysis Tools and Techniques for Evaluating Quality in Synthetic Data Generated by the Virtual Autonomous Navigation Environment

    Abstract: The capability to produce high-quality labeled synthetic image data is an important tool for building and maintaining machine learning datasets. However, ensuring computer-generated data is of high quality is very challenging. This report describes an effort to evaluate and improve synthetic image data generated by the Virtual Autonomous Navigation Environment’s Environment and Sensor Engine (VANE::ESE) and documents a set of tools developed to process, analyze, and train models from image datasets generated by VANE::ESE. Additionally, the results of several experiments are presented, including an investigation into the use of explainable AI techniques and direct comparisons of various models trained on multiple synthetic datasets.
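    The report documents its own tool suite; purely as a small illustration of one basic dataset-level check such tooling might include, the sketch below compares per-channel pixel statistics between a synthetic image set and a reference set. The directory paths and choice of statistic are assumptions.

      # Illustrative dataset-level check (not one of the VANE::ESE tools from the
      # report): compare per-channel pixel statistics between a synthetic image
      # set and a reference set as a quick low-level sanity test.
      from pathlib import Path
      import numpy as np
      from PIL import Image

      def channel_stats(image_dir, limit=200):
          """Return per-RGB-channel mean and std, stacked as a (2, 3) array."""
          pixels = []
          for path in sorted(Path(image_dir).glob("*.png"))[:limit]:
              img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
              pixels.append(img.reshape(-1, 3))
          data = np.concatenate(pixels, axis=0)
          return np.stack([data.mean(axis=0), data.std(axis=0)])

      synthetic = channel_stats("datasets/vane_synthetic")   # placeholder path
      reference = channel_stats("datasets/real_reference")   # placeholder path
      print("absolute difference in per-channel mean/std:\n", np.abs(synthetic - reference))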
  • Assessment of Neural Network Augmented Reynolds Averaged Navier Stokes Turbulence Model in Extrapolation Modes

    Abstract: A machine-learned model is used to enhance the accuracy of the turbulence transport equations in a RANS solver and is applied to the periodic hill test case; its accuracy is investigated in extrapolation modes. A parametric study is also performed to understand the effect of network hyperparameters on training and model accuracy and to quantify the uncertainty in model accuracy due to the non-deterministic nature of neural network training. For any network, a smaller-than-optimal mini-batch size results in overfitting, and a larger-than-optimal size reduces accuracy. Data clustering is an efficient way to prevent the machine-learned model from over-training on the more prevalent flow regimes and yields a model with similar accuracy. Turbulence production is correlated with shear strain in the free-shear region; with shear strain and a wall-distance- and local-velocity-based Reynolds number in the boundary layer regime; and with the streamwise velocity gradient in the accelerating flow regime. The flow direction is key to identifying the flow separation and reattachment regime. Machine-learned models perform poorly in extrapolation mode; a priori tests reveal that model predictability improves when part of the hill dataset is added during training (a partial extrapolation mode). These models also provide better turbulent kinetic energy and shear stress predictions than RANS in a posteriori tests. Before a machine-learned model is applied in a posteriori tests, a priori tests should be performed.
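    The abstract’s observations about mini-batch size and regime clustering suggest the general shape of the training setup; the sketch below, which uses synthetic stand-in features and targets rather than RANS data, illustrates clustering flow samples by regime before fitting a small network. It is not the authors’ model.

      # Illustrative training sketch for a machine-learned turbulence closure of
      # the general kind studied in the report: cluster samples into flow
      # "regimes" so the most prevalent regime does not dominate training, then
      # fit a small neural network. Features, targets, and sizes are stand-ins.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Stand-in inputs (e.g., shear strain, a wall-distance-based Reynolds
      # number, streamwise velocity gradient) and a stand-in production target.
      X = rng.normal(size=(20_000, 3))
      y = 0.8 * X[:, 0] + 0.3 * X[:, 0] * X[:, 1] + 0.1 * X[:, 2]

      # Cluster into regimes and draw the same number of samples from each.
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
      per_cluster = min(np.bincount(labels))
      idx = np.concatenate([
          rng.choice(np.flatnonzero(labels == k), per_cluster, replace=False)
          for k in range(4)
      ])

      model = MLPRegressor(hidden_layer_sizes=(32, 32), batch_size=256,  # assumed
                           max_iter=300, random_state=0)
      model.fit(X[idx], y[idx])

      # A priori test on held-out samples drawn from the same distribution.
      X_test = rng.normal(size=(2_000, 3))
      y_test = 0.8 * X_test[:, 0] + 0.3 * X_test[:, 0] * X_test[:, 1] + 0.1 * X_test[:, 2]
      print("a priori R^2:", model.score(X_test, y_test))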
  • Smart Cities–A Structured Literature Review

    Abstract: Smart cities are a rapidly evolving concept that is transforming urban development. They use advanced technologies and data analytics to improve quality of life, increase the efficiency of infrastructure and services, and promote sustainable economic growth, integrating multiple domains to create an interconnected and intelligent urban environment. This review also analyzes the implementation of smart city solutions in international contexts and proposes strategies to overcome implementation challenges. The integration of technology and data-driven solutions has the potential to revolutionize urban living by providing personalized and accessible services. However, it also presents challenges, including data privacy concerns, unequal access to technology, and the need for collaboration across private, public, and government sectors. This study provides insights into the current state and future prospects of smart cities and presents an analysis of challenges and opportunities. We also propose a concise definition of smart cities: “Smart cities use digital technologies, communication technologies, and data analytics to create an efficient and effective service environment that improves urban quality of life and promotes sustainability.” As cities grow and face increasingly complex challenges, the integration of advanced technologies and data-driven solutions can create more sustainable communities.
  • The Design of Multimedia Object Detection Pipelines within the HPC Environment

    Abstract: Computer vision multimedia pipelines have become both more sophisticated and more robust over the years. These pipelines can accept multiple inputs, perform frame analysis, and produce outputs on a variety of platforms with near-real-time performance. Vendors such as Nvidia have significantly grown their framework and library offerings while providing documentation and online training. Despite this prolific growth, many of the libraries, frameworks, and tutorials come with noticeable limitations. The limitations are especially apparent within the high-performance computing (HPC) environment, where graphics processing units may be older, user-level rights are more restricted, and access to a graphical user interface is not always available. This work describes the process of building multimedia object detection and segmentation pipelines within the HPC environment, its challenges, and ways to overcome the shortcomings. The report describes an iterative design process that can be used as a blueprint for future development of similar computer vision pipelines within the HPC hosting environment.
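    As an illustration of the kind of headless, frame-by-frame detection loop such a pipeline runs on an HPC node (results written out rather than displayed, since no GUI is available), the sketch below uses OpenCV and a torchvision detector. It is not the pipeline from the report; the video path and score threshold are placeholders.

      # Illustrative headless object-detection loop (not the report's pipeline).
      # Frames are read with OpenCV and passed to a pretrained torchvision
      # detector; detections are printed instead of rendered, since HPC nodes
      # typically offer no graphical user interface.
      import cv2
      import torch
      from torchvision.models.detection import fasterrcnn_resnet50_fpn

      device = "cuda" if torch.cuda.is_available() else "cpu"
      model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval().to(device)

      cap = cv2.VideoCapture("input_video.mp4")          # placeholder input
      frame_idx = 0
      with torch.no_grad():
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              # OpenCV yields BGR uint8 HxWxC; the model expects RGB float CxHxW in [0, 1].
              rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
              tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).to(device)
              detections = model([tensor])[0]
              keep = detections["scores"] > 0.5          # assumed confidence threshold
              print(frame_idx, detections["boxes"][keep].cpu().numpy().tolist())
              frame_idx += 1
      cap.release()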
  • Exploring Lidar Odometry Within the Robot Operating System

    Abstract: Here, we explore various lidar odometry approaches (with both 3 and 6 degrees of freedom) in simulation. We modified a virtual model of a TurtleBot3 robot to work with the various odometry approaches and evaluated each method within a Gazebo simulation. The Gazebo model was configured to generate an absolute ground truth for comparison with the odometry results, and we used the evo package to compare the ground truth with the various lidar odometry estimates. The results for KISS-ICP and laser scan matcher (LSM), as well as two simultaneous localization and mapping (SLAM) approaches, Fast Lidar-Inertial Odometry (FAST-LIO) and Direct Lidar Odometry (DLO), are presented and discussed. We also tested one of the approaches on our physical robot.
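    A small sketch of how a ground-truth trajectory and an odometry estimate can be compared with the evo package’s Python API is shown below; the file names are placeholders, the trajectories are assumed to be in TUM format, and the calls follow evo’s documented usage, which may differ between versions.

      # Sketch of comparing an odometry estimate against ground truth with evo.
      # File names are placeholders; usage follows evo's documented Python API.
      from evo.core import metrics, sync
      from evo.tools import file_interface

      traj_ref = file_interface.read_tum_trajectory_file("ground_truth.tum")
      traj_est = file_interface.read_tum_trajectory_file("lidar_odometry.tum")

      # Associate poses by timestamp, then align the estimate to the reference.
      traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
      traj_est.align(traj_ref)

      # Absolute pose error on the translation part, a common odometry metric.
      ape = metrics.APE(metrics.PoseRelation.translation_part)
      ape.process_data((traj_ref, traj_est))
      print("APE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))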
  • USACE Interference Management Standard v1.0

    Abstract: The Interference Management Standard (IMS) is a comprehensive framework designed to streamline the coordination of design, construction, and operation and maintenance models. The IMS provides clear guidelines, defined goals, and objectives to ensure effective interference management. The process encompasses several stages: authoring and compiling models, clash detection, clash analysis, conflict resolution, report compilation, and deliverables submission. By implementing the IMS, users can expect improved efficiency and accuracy in model coordination, leading to enhanced project outcomes.
  • Deep Learning Approach for Accurate Segmentation of Sand Boils in Levee Systems

    Abstract: Sand boils can contribute to the liquefaction of a portion of the levee, leading to levee failure. Accurately detecting and segmenting sand boils is crucial for effectively monitoring and maintaining levee systems. This paper presents SandBoilNet, a fully convolutional neural network with skip connections designed for accurate pixel-level classification or semantic segmentation of sand boils from images in levee systems. In this study, we explore the use of transfer learning for fast training and detecting sand boils through semantic segmentation. By utilizing a pretrained CNN model with ResNet50V2 architecture, our algorithm effectively leverages learned features for precise detection. We hypothesize that controlled feature extraction using a deeper pretrained CNN model can selectively generate the most relevant feature maps adapting to the domain, thereby improving performance. Experimental results demonstrate that SandBoilNet outperforms state-of-the-art semantic segmentation methods in accurately detecting sand boils, achieving a Balanced Accuracy (BA) of 85.52%, Macro F1-score (MaF1) of 73.12%, and an Intersection over Union (IoU) of 57.43% specifically for sand boils. This proposed approach represents a novel and effective solution for accurately detecting and segmenting sand boils from levee images toward automating the monitoring and maintenance of levee infrastructure.
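    The paper defines SandBoilNet itself; the minimal Keras sketch below only illustrates the same general transfer-learning idea, a frozen, ImageNet-pretrained ResNet50V2 encoder feeding a small upsampling decoder that outputs a per-pixel sand-boil mask. The input size and decoder are arbitrary simplifications, and the paper’s skip connections are omitted.

      # Minimal transfer-learning segmentation sketch (NOT the published
      # SandBoilNet): frozen ImageNet-pretrained ResNet50V2 encoder plus a small
      # transposed-convolution decoder that predicts a per-pixel sigmoid mask.
      from tensorflow.keras import layers, Model
      from tensorflow.keras.applications import ResNet50V2

      def build_segmenter(input_shape=(224, 224, 3)):
          encoder = ResNet50V2(include_top=False, weights="imagenet",
                               input_shape=input_shape)
          encoder.trainable = False                  # transfer learning: freeze encoder

          x = encoder.output                         # 7x7x2048 features for a 224x224 input
          for filters in (256, 128, 64, 32, 16):     # upsample back to 224x224
              x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                         activation="relu")(x)
          mask = layers.Conv2D(1, 1, activation="sigmoid", name="sand_boil_mask")(x)
          return Model(encoder.input, mask)

      model = build_segmenter()
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.summary()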