High-Performance Computing

WVU's High-Performance Computing systems support large-scale computation in the physical, forensic, biological, and social sciences, as well as engineering, the humanities, and business.

Cluster specs

Specs                   Thorny Flat    Dolly Sods
Compute Nodes           159            34
CPU Cores               6,196          1,184
Aggregated RAM                         10.2 TB
GPU Cards                              155
Aggregated CUDA Cores                  771,360
Aggregated GPU Memory                  5 TB

The WVU High-Performance Computing (HPC) facilities support computationally intensive research that requires especially powerful resources.

HPC resources help research teams at WVU greatly reduce their computational analysis times. This includes free access to community nodes, with both CPU and GPU resources, for researchers at all institutions of higher learning in West Virginia. Researchers can also purchase individual nodes; these give their owners priority access and are otherwise shared with the community on a first-come, first-served basis.

Research Data Storage

HPC users have access to more than 500 TB of data storage for processing inside the HPC clusters. Researchers can also purchase group storage on the cluster, which allows data to be shared easily among research groups. The Research Office also offers storage in the centrally managed and secure Research Data Depot, where storage can be purchased at a cost-effective rate for five years. This data storage is not intended for protected or regulated data.

HPC Clusters

Our current HPC cluster contains 178 compute nodes with a total of 8,344 CPU cores. Of those, 79 are community nodes with a total of 4,824 CPU cores, 9.3 TB of memory, and 18 NVIDIA P6000 GPUs. The remaining 99 nodes were purchased by faculty members and departments and add 3,520 cores and 29 NVIDIA GPUs, ranging from the RTX6000 to the A100. These faculty-owned nodes are available to community members in four-hour increments to increase the utilization of the system.
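
As a quick consistency check, the community and faculty-owned portions sum to the cluster-wide totals. A minimal Python sketch, using only the numbers quoted in this section:

    # Consistency check of the node and core counts quoted above.
    community_nodes, community_cores = 79, 4_824
    faculty_nodes, faculty_cores = 99, 3_520

    assert community_nodes + faculty_nodes == 178     # total compute nodes
    assert community_cores + faculty_cores == 8_344   # total CPU cores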

A new cluster focused on artificial intelligence and machine learning is set for deployment in 2023. It will consist of 30 nodes with 32 CPU cores and four A30 GPUs each, four nodes with 32 CPU cores and four A40 GPUs each, two nodes with 64 CPU cores and eight SXM A100 GPUs each, and one node with three A40 GPUs for visualization and testing. All nodes are connected to a high-speed, low-latency HDR100 InfiniBand fabric to support tightly coupled multi-GPU and multi-node work.
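
This node mix also accounts for the 155 GPU cards in the specs table above. A minimal Python tally, using only the node counts from this paragraph:

    # GPU tally for the new AI/ML cluster, from the node mix described above.
    node_mix = [
        (30, 4),  # 30 nodes with four A30 GPUs each
        (4, 4),   # four nodes with four A40 GPUs each
        (2, 8),   # two nodes with eight SXM A100 GPUs each
        (1, 3),   # one visualization/testing node with three A40 GPUs
    ]
    total_gpus = sum(nodes * gpus for nodes, gpus in node_mix)
    print(total_gpus)  # prints 155, matching the "GPU Cards" row in the specs table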

Contacts

Contact the Research Computing team by submitting a help desk ticket or find more information in the HPC documentation.

Staff

Aldo Humberto Romero

Director of Research Computing

Aldo is an Eberly Distinguished Professor of Physics who leads the Research Computing team in sustaining the growth of HPC resources at WVU. He also helps with outreach to the WVU research computing community and is an NSF ACCESS Campus Champion.

Nathaniel Garver-Daniels

Senior Systems Administrator

Nate is an ACCESS Campus Champion who assists researchers with their computing-intensive and data-intensive research including digital humanities.

Jared Frick

System Administrator

Jared develops software to monitor the status of the HPC systems and supports several HPC efforts around campus.

Dr. Guillermo Franco

Senior Software Specialist

Guillermo has a strong background in the scientific programming languages C, Fortran, and Python. He is also an ACCESS Campus Champion and works directly with researchers to help them use HPC resources in the most appropriate manner.

Daniel Turpen

Daniel runs cloud services for the B&E Business Data Analytics program and Global Access, and provides computational support to the GoFirst program.

HPC training resources are available throughout the year for both new and expert users, covering topics ranging from parallel computing to neural networks.