Research Computing Environment

Location: Stanley Hydraulics Laboratory, fourth floor
Contact: Mark Wilson

IIHR—Hydroscience & Engineering (IIHR) maintains a diverse set of computing resources and facilities. Over the past two decades, IIHR has been at the forefront of parallel HPC applications, moving from several large Silicon Graphics Power Challenge Array shared-memory systems and a Sun Microsystems distributed-memory system to today's large-node distributed-memory systems. Our codes are being implemented on Nvidia Kepler/Xeon Phi highly parallel systems and within various cloud computing environments.

The Neon and Argon clusters are currently the primary central HPC resources, following the recent retirement of our initial HPC system, Helium. Together, the two systems comprise over 488 compute nodes and more than 10,640 processor cores. Each system has an internal high-performance message-passing network (Infiniband on Neon, Omnipath on Argon), and the two systems are connected by trunked high-speed 10Gb Ethernet links.
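
Both message-passing fabrics exist primarily to carry MPI traffic between compute nodes. As a minimal sketch of that usage pattern, the example below distributes a trivial sum across the ranks of a job and reduces the result to rank 0; it assumes the cluster's Python environment provides mpi4py, and the script name and launch command are illustrative placeholders rather than documented IIHR configuration.

    # Minimal MPI sketch; assumes mpi4py is available in the cluster's Python stack.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all ranks in the job
    rank = comm.Get_rank()     # this process's rank, 0 .. size-1
    size = comm.Get_size()     # total ranks across the allocated nodes

    # Each rank sums a strided slice of the range; the interconnect
    # (Infiniband on Neon, Omnipath on Argon) carries the reduction traffic.
    local = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum over", size, "ranks =", total)

Under a typical MPI launcher this would be started with something like "mpirun -n 64 python sum_example.py", with the launcher, rank count, and node placement ultimately controlled by the batch scheduler.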

Both Neon and Argon are managed so that investor queues are quickly made available to members of the investor group; when those queues sit idle, the resources are released for use by others. This model works well and will continue to be the basis for UI HPC resource sharing.

Neon Cluster

The older HPC system, Neon, came online in December 2013 to augment the HPC resources available to IIHR researchers. As with Helium before it, IIHR operates Neon in conjunction with ITS and a group of collaborating researchers from around the university. Neon is a shared system with, currently, 4,256 standard cores, 2,280 Xeon Phi cores, 27 TB of memory, 500 TB of storage, and a 40 Gbps Infiniband QDR message-passing fabric.

The compute nodes comprise:

  • 188 64GB Nodes, 2.6GHz 16 Core (Standard Nodes)
  • 59 256GB Nodes, 2.6GHz 16 Core (Mid-Memory Nodes)
  • 12 512GB Nodes, 2.9GHz 24 Core (High-Memory Nodes)
  • 38 Xeon Phi 5110P Accelerator Cards
  • 11 Nvidia Kepler K20 Accelerator Cards

Argon Cluster

The newest HPC system, Argon, came online in January 2017. As with its predecessors, Helium and Neon, IIHR operates Argon jointly with ITS and a group of collaborating researchers from around the university. Argon is a shared system with, currently, 6,400 standard cores, 2,280 Xeon Phi cores, 58 TB of memory, 100 TB of NFS scratch storage, and a 100 Gbps Omnipath message-passing fabric with 5:1 oversubscription.

The exact number of each of the following node types is in flux, as nodes are still being purchased. The node types are:

  • Standard Compute Node (128GB, No Accelerator Support) – $5399
  • Mid-Memory Compute Node (256GB, No Accelerator Support) – $5930
  • Standard Compute Node (128GB, Accelerator Support) – $5781
  • Mid-Memory Compute Node (256GB, Accelerator Support) – $6219
  • High Memory Compute Node (512GB, Accelerator Support) – $6804

The cluster comprises:

  • 229 Compute Nodes
  • 6,400 Processor Cores
  • 58TB RAM
  • 12 GPU Accelerator Cards
  • 100Gbps Omnipath Network with 5:1 Oversubscription
  • 1TB Home Accounts per User
  • 100TB Shared NFS based Scratch Storage

All nodes contain:

  • 2 x Xeon E5-2680v4 (28 Cores total at 2.4GHz)
  • 1TB SSD
  • 1Gbps Ethernet
  • 100Gbps Omnipath

The following is a description of other major computing resources, equipment, services, and software available to all IIHR affiliates and students:

  • IIHR operates several large-scale data harvesting and processing systems related to flood sensing and modeling. The Iowa Flood Information System (IFIS) collects LDM and other weather data and builds a sequence of products for later modeling. Raw data packets are ingested on one system and passed to another system for processing and storage in a database; a third system provides web-based access to these data products. Similarly, a network of bridge-mounted flow sensors supplies data to servers, where it is handled in much the same way as the IFIS data. This architecture has proven scalable and reliable; a simplified sketch of the ingest-process-store pattern appears after this list.
  • HPC at IIHR is augmented by 18 Silicon Mechanics storage units, providing 750 TB of storage in a RAID 60 configuration. This storage space is replicated to an offsite location with hourly snapshots taken for user-invoked file recovery.
  • Very large-scale computations are done at national and international computation centers accessed through longstanding IIHR-center relationships. In addition to the NSF and DOD/DOE centers (e.g., NCSA, Argonne National Labs), IIHR has developed a continuing collaboration with the National Center for High Performance Computing (NCHC) in Taiwan.
  • Eighty Linux workstations and more than 300 individual PCs running MS Windows 7 support the local centralized facilities. There are 30 PC-based servers handling web, FTP, security, and specialized database services. Many of the servers are virtualized using VMware hosts at IIHR and the centralized Information Technology Facility (ITF). In addition, a number of user-located storage devices, publication-quality color printers, scanners, cameras, and other peripherals are in use.
  • This hardware is complemented by a carefully selected set of public domain, commercial, and proprietary software packages that include Tecplot, Gridgen, Fluent, FlowLab, Matlab, Origin, ERDAS, ERMapper, ESRI, Skyview, and the core GNU utilities. Additionally, software such as AutoCAD, MS Windows, MS Office, OS X, Mathematica, IDL, SigmaPlot, and SAS is used under university-wide site licenses.
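
As a rough illustration of the ingest-process-store pattern described for IFIS above, the sketch below runs all three stages in one short script. The packet format, function names, and table layout (parse_packet, observations) are hypothetical placeholders, not the actual IFIS implementation, which splits the stages across dedicated ingest, processing/database, and web systems.

    # Illustrative only: the real IFIS pipeline runs each stage on a separate system.
    import json
    import sqlite3

    def parse_packet(raw: bytes) -> dict:
        """Hypothetical parser: turn a raw sensor/LDM packet into a record."""
        return json.loads(raw)

    def store(record: dict, db_path: str = "observations.db") -> None:
        """Write one processed record to the database tier."""
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS observations (site TEXT, time TEXT, stage REAL)"
        )
        con.execute(
            "INSERT INTO observations VALUES (?, ?, ?)",
            (record["site"], record["time"], record["stage"]),
        )
        con.commit()
        con.close()

    if __name__ == "__main__":
        # One packet flowing through ingest -> process -> store.
        raw = b'{"site": "IFC-001", "time": "2017-04-25T12:00Z", "stage": 3.2}'
        store(parse_packet(raw))

A web tier, the third system in the IFIS description, would then query the same database to serve the derived products.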
