Publication Notices

Notifications of New Publications Released by ERDC


Results:
Tag: Computer architecture
  • Integrated Rule-Oriented Data System (iRODS) and High Performance Computing (HPC) Architecture Design

    Abstract: The Integrated Rule-Oriented Data System (iRODS) proof-of-concept will be deployed within the existing U.S. Army Engineer Research and Development Center (ERDC) Department of Defense Supercomputing Resource Center (DSRC) to test additional capabilities and features for high performance computing (HPC) users. iRODS is data-grid middleware that virtualizes access to data, regardless of the physical storage device on which the data resides. Users, and HPC jobs running on their behalf, can leverage the various application programming interfaces (APIs) within iRODS to search for and retrieve data using metadata and a unified data namespace. (See the metadata-query sketch after the results list.) In addition to facilitating data discovery and retrieval, iRODS has a robust security system that implements fine-grained access control and auditing rules.
  • Integrated Rule-Oriented Data System (iRODS) and High Performance Computing (HPC) Requirements Document

    Abstract: The purpose of this report is to capture all relevant use cases, functional requirements, and technical requirements of the Integrated Rule-Oriented Data System (iRODS) prototype. The use cases (UCs) define the system interactions an iRODS user, an iRODS administrator, and an auditor would expect within the system. The functional requirements define the expected behavior of the system needed to support the individual use cases and are grouped according to the use cases they support. The technical requirements are defined last and include references to the specific functional requirements and use cases each one supports.
  • Web-Enabled Interface for iRODS: Comparing Hydroshare and Metalnx

    Abstract: The Integrated Rule-Oriented Data System (iRODS) software provides ample resources for managing data and collections thereof, but there are occasions when using its command line interface (CLI) is impractical or undesirable. One such example is when the user must authenticate with a common access card (CAC), which is more easily accomplished through a graphical user interface (GUI) than through a CLI. Furthermore, offering only a CLI can alienate users who are averse to working with a system that way, and even users who are comfortable with a CLI may still benefit from a GUI until they become familiar with the iCommands provided by iRODS. Thus, it becomes imperative to either implement a GUI for the system or adopt an existing one.
  • Analysis of ERS Use Cases for iRODS

    Abstract: The purpose of this paper is to discuss the challenges inherent in High Performance Computing (HPC) data storage access and management, the capabilities of iRODS, and the analysis of several Engineered Resilient Systems (ERS) use cases relating iRODS capabilities to the teams’ stated needs. Specifically, these teams are the ERS Data Analytics group (in particular its research on rotorcraft maintenance in conjunction with the U.S. Army Aviation and Missile Research Development and Engineering Center [AMRDEC]), the ERS Environmental Simulation research team, the ERS Sensor Systems research team, and the HPC/Scientific Computing group representing the “General HPC User.”
  • PUBLICATION NOTICE: Parallel File I/O for Geometric Models: Formats and Methods

    Abstract: Processing large amounts of data in a High-Performance Computing (HPC) environment can quickly be throttled by disk I/O in the absence of an efficient access method. The Geometry Engine component of the Virtual Environment for Sensor Performance Assessment (VESPA) uses MPI-IO to load the geometric data in parallel and avoid creating a bottleneck on disk I/O interactions. This parallel I/O method requires formatting the data into specific binary file formats so each MPI process of the parallel program can determine where to read or write data without colliding with other MPI processes. (See the MPI-IO sketch after the results list.) Addressing the collision problem resulted in the development of two binary file formats, the Mesh Binary file (.mb) and the Scene Chunk Pack file (.scp). The Mesh Binary file contains the essential data required to recreate the landscape and vegetation geometry used by the Geometry Engine. The Scene Chunk Pack file is used to write the partitioned geometry to disk so the ray casting engine can reload the distributed geometry without repeating the partitioning process. Together, these files support reading and writing for both the partitioning phase and the ray casting phase of the Geometry Engine. This report discusses the formats in detail and outlines how the Geometry Engine reads and writes these files in parallel on HPC systems.
  • PUBLICATION NOTICE: Using Morton Codes to Partition Faceted Geometry: An Architecture for Terabyte-Scale Geometry Models

    Abstract: The Virtual Environment for Sensor Performance Assessment (VESPA) project requires enormous, high-fidelity landscape models to generate synthetic sensor imagery with few or no artificial artifacts. These high-fidelity landscapes require a memory footprint substantially larger than the local memory of a single compute node on a High Performance Computing (HPC) system. Processing geometries of this size requires distributing the geometry over multiple compute nodes rather than giving each compute node a full copy, the common approach in parallel modeling applications. To process these geometric models in parallel memory on an HPC system, the Geometry Engine component of the VESPA project includes an architecture for partitioning the geometry spatially using Morton codes and MPI (Message Passing Interface) collective communication routines; the methods used for this partitioning process are addressed in the report. (See the Morton-code sketch below.) Incorporating this distributed architecture into the Geometry Engine provides the capability to distribute and perform parallel ray casting on landscape geometries over a terabyte in size. Test case timings demonstrate scalable speedups as the number of processes is increased on an HPC machine.
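
The code sketches below are keyed to the notices above. For the iRODS architecture-design notice, this first sketch illustrates the kind of metadata-driven search and retrieval the abstract describes, assuming the open-source python-irodsclient package; the host, credentials, zone, and the "project" metadata attribute and value are illustrative placeholders, not values from the report.

    # Minimal sketch: query iRODS by metadata, then read a matching data object.
    # Assumes the python-irodsclient package; the connection details and the
    # "project" metadata attribute/value are illustrative placeholders.
    from irods.session import iRODSSession
    from irods.models import Collection, DataObject, DataObjectMeta
    from irods.column import Criterion

    with iRODSSession(host='irods.example.mil', port=1247, user='alice',
                      password='changeme', zone='tempZone') as session:
        # Search the unified namespace for objects tagged project = vespa.
        query = (session.query(Collection.name, DataObject.name)
                 .filter(Criterion('=', DataObjectMeta.name, 'project'))
                 .filter(Criterion('=', DataObjectMeta.value, 'vespa')))
        for row in query:
            logical_path = f"{row[Collection.name]}/{row[DataObject.name]}"
            # Access the object through its logical path; iRODS resolves which
            # physical storage resource actually holds the data.
            obj = session.data_objects.get(logical_path)
            with obj.open('r') as f:
                data = f.read()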
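
For the parallel file I/O notice, this sketch shows the general MPI-IO pattern the abstract refers to, written with mpi4py: each rank reads its own byte range of a shared binary file, so no two ranks collide. The fixed-size record layout, chunk size, and file name are assumptions for illustration, not the report's .mb or .scp formats.

    # Minimal sketch of non-colliding parallel reads with MPI-IO via mpi4py.
    # Each rank computes its own byte offset into a shared binary file and
    # reads only its slice. The record type, chunk size, and file name are
    # illustrative assumptions, not the report's .mb/.scp layouts.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    record_dtype = np.dtype('float32')   # assumed element type
    records_per_rank = 1024              # assumed per-process chunk size

    fh = MPI.File.Open(comm, 'geometry.bin', MPI.MODE_RDONLY)
    buf = np.empty(records_per_rank, dtype=record_dtype)
    offset = rank * records_per_rank * record_dtype.itemsize
    fh.Read_at_all(offset, buf)          # collective read at a per-rank offset
    fh.Close()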
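
For the Morton-code partitioning notice, this sketch computes a 3-D Morton (Z-order) key by interleaving quantized coordinate bits and then splits facets into contiguous chunks by sorted key, which is the spatial-ordering idea the abstract names; the 10-bit quantization and the simple chunking policy are illustrative assumptions, not the Geometry Engine's actual scheme.

    # Minimal sketch: 3-D Morton (Z-order) keys by bit interleaving, plus a
    # naive split of facets into contiguous partitions by sorted key. The
    # 10-bit quantization and chunking policy are illustrative only.

    def part1by2(n: int) -> int:
        """Spread the low 10 bits of n so two zero bits separate each bit."""
        n &= 0x000003FF
        n = (n ^ (n << 16)) & 0xFF0000FF
        n = (n ^ (n << 8)) & 0x0300F00F
        n = (n ^ (n << 4)) & 0x030C30C3
        n = (n ^ (n << 2)) & 0x09249249
        return n

    def morton3d(x: float, y: float, z: float, lo=0.0, hi=1.0) -> int:
        """Quantize each coordinate to 10 bits and interleave into a 30-bit key."""
        scale = 1023.0 / (hi - lo)
        ix, iy, iz = (min(1023, max(0, int((c - lo) * scale))) for c in (x, y, z))
        return (part1by2(iz) << 2) | (part1by2(iy) << 1) | part1by2(ix)

    def partition(centroids, nparts):
        """Sort facet centroids by Morton key and split into contiguous chunks."""
        order = sorted(range(len(centroids)), key=lambda i: morton3d(*centroids[i]))
        chunk = max(1, -(-len(order) // nparts))   # ceiling division
        return [order[i:i + chunk] for i in range(0, len(order), chunk)]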