Publication Notices

Notifications of New Publications Released by ERDC


Results for tag: high performance computing
  • Neural Ordinary Differential Equations for Rotorcraft Aerodynamics

    Abstract: High-fidelity computational simulations of aerodynamics and structural dynamics on rotorcraft are essential for helicopter design, testing, and evaluation. These simulations usually entail a high computational cost even with modern high-performance computing resources. Reduced order models can significantly reduce the computational cost of simulating rotor revolutions. However, reduced order models are less accurate than traditional numerical modeling approaches, making them unsuitable for research and design purposes. This study explores the use of a new modified Neural Ordinary Differential Equation (NODE) approach as a machine learning alternative to reduced order models in rotorcraft applications, specifically to predict the pitching moment on a rotor blade section from an initial condition, Mach number, chord velocity, and normal velocity. The results indicate that NODEs cannot outperform traditional reduced order models, but in some cases they can outperform simple multilayer perceptron networks. Additionally, the mathematical structure provided by NODEs seems to favor time-dependent predictions. We demonstrate how this mathematical structure can be easily modified to tackle more complex problems. The work presented in this report is intended to establish an initial evaluation of the usability of the modified NODE approach for time-dependent modeling of complex dynamics over seen and unseen domains.
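    The following minimal Python sketch illustrates the general structure of a NODE of the kind described above: a small multilayer perceptron supplies the learned time derivative, and a fixed-step Runge-Kutta loop integrates it from an initial condition under exogenous inputs (Mach number, chord velocity, normal velocity). The layer sizes, random weights, and input signal are placeholders for illustration, not the report's trained model.

        # Minimal NODE sketch (illustrative only; sizes, inputs, and weights
        # are hypothetical, not the report's model).
        import numpy as np

        rng = np.random.default_rng(0)

        # A tiny MLP f(state, u) playing the role of the learned ODE right-hand side.
        W1 = rng.normal(scale=0.1, size=(16, 4))   # inputs: pitching moment + Mach,
        b1 = np.zeros(16)                          # chord velocity, normal velocity
        W2 = rng.normal(scale=0.1, size=(1, 16))
        b2 = np.zeros(1)

        def f(state, u):
            """Learned dynamics: d(state)/dt as a function of state and inputs u."""
            x = np.concatenate([state, u])
            h = np.tanh(W1 @ x + b1)
            return W2 @ h + b2

        def integrate(state0, u_of_t, t0, t1, steps=100):
            """Fixed-step RK4 solve of the NODE from an initial condition."""
            state, t = np.asarray(state0, float), t0
            dt = (t1 - t0) / steps
            for _ in range(steps):
                k1 = f(state, u_of_t(t))
                k2 = f(state + 0.5 * dt * k1, u_of_t(t + 0.5 * dt))
                k3 = f(state + 0.5 * dt * k2, u_of_t(t + 0.5 * dt))
                k4 = f(state + dt * k3, u_of_t(t + dt))
                state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
                t += dt
            return state

        # Exogenous inputs over one rotor revolution: Mach, chord/normal velocity.
        u_of_t = lambda t: np.array([0.3, np.cos(t), np.sin(t)])
        cm_final = integrate(state0=[0.0], u_of_t=u_of_t, t0=0.0, t1=2 * np.pi)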
  • A General-Purpose Multiplatform GPU-Accelerated Ray Tracing API

    Abstract: Real-time ray tracing is an important tool in computational research. Among other things, it is used to model sensors for autonomous vehicle simulation, efficiently simulate radiative energy propagation, and create effective data visualizations. However, ray tracing libraries currently offered for GPU platforms have a high level of complexity to facilitate the detailed configuration needed by gaming engines and high-fidelity renderers. A researcher wishing to take advantage of the performance gains offered by the GPU for simple ray casting routines would need to learn how to use these ray tracing libraries. Additionally, they would have to adapt this code to each GPU platform they run on. Therefore, a C++ API has been developed that exposes simple ray casting endpoints that are implemented in GPU-specific code for several contemporary device platforms. This API currently supports the NVIDIA OptiX ray tracing library, Vulkan, AMD Radeon Rays, and Intel Embree. Benchmarking tests using this API provide insight to help users determine the optimal backend library to select for their ray tracing needs. HPC research will be well-served by the ability to perform general-purpose ray tracing on the increasing number of graphics and machine learning nodes offered by the DoD High Performance Computing Modernization Program.
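    As an illustration of what a simple ray casting endpoint can look like, the sketch below pairs a device-agnostic interface with a plain-Python Möller-Trumbore intersection routine standing in for a CPU backend. The class and function names are invented for this example; the actual API and its OptiX, Vulkan, Radeon Rays, and Embree backends are not shown in this notice.

        # Schematic of a simple ray-casting endpoint (names invented for
        # illustration; the real API and its GPU backends will differ).
        import numpy as np

        def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
            """Moller-Trumbore intersection; returns hit distance t or None."""
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = e1 @ p
            if abs(det) < eps:
                return None                  # ray parallel to triangle
            inv = 1.0 / det
            s = origin - v0
            u = (s @ p) * inv
            if u < 0.0 or u > 1.0:
                return None
            q = np.cross(s, e1)
            v = (direction @ q) * inv
            if v < 0.0 or u + v > 1.0:
                return None
            t = (e2 @ q) * inv
            return t if t > eps else None

        class RayCaster:
            """Hypothetical device-agnostic endpoint: one scene, one cast() call."""
            def __init__(self, triangles, backend="cpu"):
                self.triangles = triangles   # GPU backends would build a BVH here
                self.backend = backend

            def cast(self, origins, directions):
                """Return nearest-hit distance per ray (np.inf on miss)."""
                hits = np.full(len(origins), np.inf)
                for i, (o, d) in enumerate(zip(origins, directions)):
                    for v0, v1, v2 in self.triangles:
                        t = ray_triangle(o, d, v0, v1, v2)
                        if t is not None and t < hits[i]:
                            hits[i] = t
                return hits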
  • In Situ and Time

    Abstract: Large-scale HPC simulations with their inherent I/O bottleneck have made in situ visualization an essential approach for data analysis, although the idea of in situ visualization dates back to the era of coprocessing in the 1990s. In situ coupling of analysis and visualization to a live simulation circumvents writing raw data to disk for post-mortem analysis -- an approach that is already inefficient for today's very large simulation codes. Instead, with in situ visualization, data abstracts are generated that provide a much higher level of expressiveness per byte. Therefore, more details can be computed and stored for later analysis, providing more insight than traditional methods. This workshop encouraged talks on methods and workflows that have been used for large-scale parallel visualization, with a particular focus on the in situ case.
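    A generic Python sketch of the in situ idea: rather than writing the raw field to disk every step, the loop below computes a small data abstract (range plus histogram) inline, retaining kilobytes per step instead of a 64 MB dump. The solver stub and the contents of the abstract are hypothetical.

        # Generic in situ illustration: compute a small "data abstract" each
        # step instead of dumping the raw field (names hypothetical).
        import numpy as np

        def step(field, t):
            """Stand-in for one timestep of a real solver."""
            return field + 0.01 * np.sin(t + field)

        def abstract(field, bins=32):
            """A few bytes of high expressiveness per step: range + histogram."""
            counts, _ = np.histogram(field, bins=bins)
            return {"min": float(field.min()), "max": float(field.max()),
                    "hist": counts}

        field = np.random.default_rng(1).random((256, 256, 256)).astype(np.float32)
        abstracts = []
        for n in range(10):
            field = step(field, 0.1 * n)
            abstracts.append(abstract(field))   # ~KBs kept vs ~64 MB raw dump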
  • Leveraging Production Visualization Tools In Situ

    Abstract: The visualization community has invested decades of research and development into producing large-scale production visualization tools. Although in situ is a paradigm shift for large-scale visualization, many of the same algorithms and operations apply regardless of whether the visualization is run post hoc or in situ. Thus, there is a great benefit to taking the large-scale code originally designed for post hoc use and leveraging it for use in situ. This chapter describes two in situ libraries, Libsim and Catalyst, that are based on mature visualization tools, VisIt and ParaView, respectively. Because they are based on fully featured visualization packages, they each provide a wealth of features. For each of these systems we outline how the simulation and visualization software are coupled, what the runtime behavior and communication between these components are, and how the underlying implementation works. We also provide use cases demonstrating the systems in action. Both of these in situ libraries, as well as the underlying products they are based on, are made freely available as open-source products. The overviews in this chapter provide a toehold for the practical application of in situ visualization.
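    Although Libsim's and Catalyst's real interfaces differ, both follow the same conceptual coupling pattern, sketched schematically below: the simulation initializes the library once, offers its data through an adapter at a chosen cadence, and finalizes at exit. Every name in the sketch is invented for illustration and does not reflect either library's actual API.

        # Schematic of the simulation/visualization coupling pattern both
        # Libsim and Catalyst conceptually share (all names invented).
        class InSituAdapter:
            def initialize(self, pipeline_script):
                self.script = pipeline_script   # a user-authored analysis recipe

            def execute(self, step, time, mesh, fields):
                # Wrap simulation arrays zero-copy, run the pipeline, emit extracts.
                if step % 10 == 0:              # only co-process on a cadence
                    print(f"step {step}: processing {len(fields)} field(s) at t={time}")

            def finalize(self):
                pass                            # flush outputs, release resources

        adapter = InSituAdapter()
        adapter.initialize("pressure_isosurface.py")
        for step in range(100):
            time, mesh, fields = step * 0.01, None, {"pressure": None}  # placeholders
            adapter.execute(step, time, mesh, fields)
        adapter.finalize()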
  • Accelerating the Tactical Decision Process with High-Performance Computing (HPC) on the Edge: Motivation, Framework, and Use Cases

    Abstract: Managing the ever-growing volume and velocity of data across the battlefield is a critical problem for warfighters. Solving this problem will require a fundamental change in how battlefield analyses are performed. A new approach to making decisions on the battlefield will eliminate data transport delays by moving the analytical capabilities closer to data sources. Decision cycles depend on the speed at which data can be captured and converted to actionable information for decision making. Real-time situational awareness is achieved by locating computational assets at the tactical edge. Accelerating the tactical decision process leverages capabilities in three technology areas: (1) High-Performance Computing (HPC), (2) Machine Learning (ML), and (3) Internet of Things (IoT). Exploiting these areas can reduce network traffic and shorten the time required to transform data into actionable information. Faster decision cycles may revolutionize battlefield operations. This report presents an overview of an artificial intelligence (AI) system design for near-real-time analytics in a tactical operational environment executing on co-located, mobile HPC hardware. The report contains the following sections: (1) an introduction describing the motivation, background, and state of the technology; (2) a description of the tactical decision process leveraging HPC, with a problem definition and use case; and (3) the design of an HPC tactical data analytics framework enabling data to decisions.
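    The data-reduction idea at the heart of this approach can be sketched in a few lines: run inference where the sensor data lands and forward only actionable summaries upstream. The model stub, threshold, and message format below are hypothetical.

        # Sketch of edge-side data reduction: infer locally, transmit only
        # actionable detections (model stub and format are hypothetical).
        import json, random

        def infer(frame):
            """Stand-in for an ML model on co-located edge HPC hardware."""
            return random.random()              # pretend confidence score

        def edge_loop(frames, threshold=0.9):
            for i, frame in enumerate(frames):
                score = infer(frame)
                if score >= threshold:          # send bytes, not the raw frame
                    yield json.dumps({"frame": i, "score": round(score, 3)})

        frames = [bytes(1024) for _ in range(1000)]   # fake 1 KB sensor frames
        for msg in edge_loop(frames):
            print(msg)                          # upstream sees only high-value alerts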
  • Topological data analysis: an overview

    Abstract: Topological data analysis (TDA), a growing area of mathematics, uses fundamental concepts of topology to analyze complex, high-dimensional data. TDA represents the data as a topological network and uses that network to analyze the shape of the data, identifying features in the network that correspond to patterns in the data. These patterns are used to extract knowledge from the data. TDA provides a framework to advance machine learning’s ability to understand and analyze large, complex data. This paper provides background information about TDA, TDA applications for large data sets, and details related to the investigation and implementation of existing tools and environments.
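    A toy example of the idea, written from scratch rather than with the production TDA tools the paper surveys: track the connected components of a point cloud as the connection radius grows, the zero-dimensional version of persistence.

        # Toy 0-dimensional persistence: count connected components of a
        # point cloud as the connection radius grows (from-scratch sketch).
        import numpy as np

        def components_at_scale(points, radius):
            """Union-find over all pairs closer than `radius`; returns count."""
            n = len(points)
            parent = list(range(n))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path halving
                    i = parent[i]
                return i
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            for i in range(n):
                for j in range(i + 1, n):
                    if d[i, j] < radius:
                        parent[find(i)] = find(j)
            return len({find(i) for i in range(n)})

        rng = np.random.default_rng(2)
        # Two well-separated clusters: the count drops toward 2, then 1.
        points = np.vstack([rng.normal(0, 0.3, (20, 2)),
                            rng.normal(5, 0.3, (20, 2))])
        for r in (0.1, 1.0, 10.0):
            print(r, components_at_scale(points, r))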
  • New capabilities in CREATE™-AV Helios Version 11

    Abstract: CREATE™-AV Helios is a high-fidelity coupled CFD/CSD infrastructure developed by the U.S. Dept. of Defense for aeromechanics predictions of rotorcraft. This paper discusses new capabilities added to Helios version 11.0. A new fast-running reduced order aerodynamics option called ROAM has been added to enable faster-turnaround analysis. ROAM is Cartesian-based, employing an actuator line model for the rotor and an immersed boundary model for the fuselage. No near-body grid generation is required, and simulations are significantly faster through a combination of larger timesteps and reduced cost per step. ROAM calculations of the JVX tiltrotor configuration give download predictions comparable in accuracy to traditional body-fitted Helios calculations, at 50X lower computational cost. The unsteady wake in ROAM is not as well resolved, but wake interactions may be a less critical issue for many design considerations. The second capability discussed is the addition of six-degree-of-freedom capability to model store separation. Helios calculations of a generic wing/store/pylon case with the new 6-DOF capability are found to match calculations with CREATE™-AV Kestrel identically, a code that has been extensively validated for store separation calculations over the past decade.
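    For readers unfamiliar with the term, a six-degree-of-freedom update integrates the translation and rotation of a rigid body under applied loads, as in the minimal sketch below. The mass properties, constant loads, small-angle attitude update, and explicit Euler step are placeholders; a real store-separation solve feeds time-accurate CFD loads into a more careful integrator each step.

        # Minimal 6-DOF rigid-body update (all values hypothetical; not the
        # Helios implementation).
        import numpy as np

        mass = 250.0                              # kg, hypothetical store
        inertia = np.diag([20.0, 150.0, 150.0])   # kg m^2, body axes

        x, v = np.zeros(3), np.zeros(3)           # position, velocity
        theta, omega = np.zeros(3), np.zeros(3)   # small-angle attitude, rate

        dt = 1e-3
        for step in range(1000):
            force = np.array([0.0, 0.0, -9.81 * mass])   # CFD loads go here
            moment = np.array([0.0, 5.0, 0.0])           # ...and here
            v += dt * force / mass
            x += dt * v
            omega += dt * np.linalg.solve(
                inertia, moment - np.cross(omega, inertia @ omega))
            theta += dt * omega                   # valid only for small rotations
        print(x, theta)                           # trajectory after 1 s of fall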
  • Summary of the SciTech 2020 Technical Panel on In Situ/In Transit Computational Environments for Visualization and Data Analysis

    Link: http://dx.doi.org/10.21079/11681/40887
    This paper was originally presented at the American Institute of Aeronautics and Astronautics (AIAA) SciTech 2020 Technical Panel and published online 4 January 2021. Funding by USACE ERDC under Army Direct funding.
    Report Number: ERDC/ITL MP-21-10
  • In situ analysis and visualization to enable better workflows with CREATE-AV™ Helios

    Abstract: The CREATE-AV™ Helios CFD simulation code has been used to accurately predict rotorcraft performance under a variety of flight conditions. The Helios package contains a suite of tools that contain almost the entire set of functionality needed for a variety of workflows. These workflows include tools customized to properly specify many in situ analysis and visualization capabilities appropriate for rotorcraft analysis. In situ is the process of computing analysis and visualization information during a simulation run before data is saved to disk. In situ has been referred to with a variety of terms including co-processing, covisualization, coviz, etc. In this paper we describe the customization of the pre-processing GUI and corresponding development of the Helios solver code-base to effectively implement in situ analysis and visualization to reduce file IO and speed up workflows for CFD analysts. We showcase how the workflow enables the wide variety of Helios users to effectively work in post-processing tools they are already familiar with, as opposed to forcing them to learn new tools in order to post-process in situ data extracts being produced by Helios. These data extracts include various sources of information customized to Helios, such as knowledge about the near- and off-body grids, internal surface extracts with patch information, and volumetric extracts meant for fast post-processing of data. Additionally, we demonstrate how in situ can be used by workflow automation tools to help convey information to the user that would be much more difficult to convey when using full data dumps.
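    The kind of extract specification such a pre-processing GUI might emit can be pictured as a small declarative structure, as in the hypothetical sketch below; the field names and options are invented and do not reflect Helios's actual input format.

        # Hypothetical in situ extract specification (invented names; not the
        # actual Helios input format).
        from dataclasses import dataclass, field

        @dataclass
        class ExtractSpec:
            kind: str                 # "surface" | "volume" | "slice"
            fields: list              # solver variables to sample, e.g. ["cp"]
            every_n_steps: int = 10   # cadence keeps file I/O low
            options: dict = field(default_factory=dict)

        requests = [
            ExtractSpec("surface", ["cp"], options={"patches": ["blade1", "hub"]}),
            ExtractSpec("volume", ["q_criterion"], every_n_steps=50,
                        options={"region": "off_body"}),
        ]
        for spec in requests:
            print(spec)               # the solver would register these at startup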
  • Integrated Rule-Oriented Data System (iRODS) and High Performance Computing (HPC) Architecture Design

    Abstract: The Integrated Rule-Oriented Data System (iRODS) proof-of-concept will be deployed within the existing U.S. Army Engineer Research and Development Center (ERDC) Department of Defense Supercomputing Resource Center (DSRC) to test additional capabilities and features for high performance computing (HPC) users. iRODS is a data-grid middleware that virtualizes access to data, regardless of which physical storage device the data resides within. Users, and HPC jobs on behalf of users, can leverage the various application programming interfaces (APIs) within iRODS to search and retrieve data using metadata and a unified data namespace. In addition to facilitating data discovery and retrieval, iRODS has a robust security system to implement fine-grained access control and auditing rules.
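    As a flavor of the API-driven access described above, the sketch below uses the open-source python-irodsclient package to find data objects by a metadata attribute/value pair and download them, regardless of physical storage location. The host, zone, credentials, and metadata key are made up for illustration, and call details may differ across client versions.

        # Sketch of metadata-driven retrieval with python-irodsclient
        # (connection details and metadata key/value are made up).
        from irods.session import iRODSSession
        from irods.models import Collection, DataObject, DataObjectMeta

        with iRODSSession(host="irods.example.mil", port=1247, user="hpcuser",
                          password="changeme", zone="erdcZone") as session:
            # Find data objects tagged with a given attribute/value pair,
            # independent of which physical storage resource holds them.
            query = (session.query(Collection.name, DataObject.name)
                            .filter(DataObjectMeta.name == "project")
                            .filter(DataObjectMeta.value == "hpc-benchmark"))
            for row in query:
                logical_path = f"{row[Collection.name]}/{row[DataObject.name]}"
                session.data_objects.get(logical_path,
                                         "/scratch/inputs/" + row[DataObject.name])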