Expected End of Service Life: Jan 3, 2024
Mana is the University of Hawai‘i (UH) high performance computing (HPC) cluster – a collection of many computers, called nodes, connected by a network – used to solve computational problems too large for standard computers. Mana is operated by UH Information Technology Services Cyberinfrastructure, which serves as a central, university-wide computational resource supporting data- and computationally intensive research in over 90 disciplines. Mana consists of 366 nodes (9,396 cores) with a total of 67 TB of RAM, 120 GPUs, and more than 1 PB of storage. Mana has grown steadily since it was deployed in late 2014, with investments from the University, an NSF MRI award (#1920304), and ongoing Principal Investigator (PI) purchases of condo compute nodes. Mana is free to use for all UH users, with the Slurm Workload Manager managing access via a fair-share algorithm. UH community users get uninterrupted access to the 215 University-owned nodes, and can also schedule work on the 130 PI-owned nodes via partitions subject to preemption by the node owners' job submissions.
Since 2014, over 685 users across multiple campuses in the University system have used Mana. Mana has provided over 274,175,144 CPU hours since 2014 and 588,806 GPU hours since 2019 (last updated July 31, 2022).
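Because Slurm manages all access to the compute nodes, work is described in a batch script and handed to the scheduler with `sbatch`. The following is a minimal sketch only: the partition name, resource sizes, and program are placeholders, and the partitions actually available to you can be listed with `sinfo`.

```bash
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --partition=shared        # placeholder partition name; run `sinfo` for real choices
#SBATCH --cpus-per-task=4         # CPU cores for this task
#SBATCH --mem=8G                  # memory for the job
#SBATCH --time=01:00:00           # wall-clock limit (hh:mm:ss)

# Replace with your actual workload
srun hostname
```

Submit the script with `sbatch` and check its state with `squeue -u $USER`. Jobs sent to PI-owned partitions can be preempted when the node owners submit work, so checkpoint long runs if you use them.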
Mana is FREE to use by all UH faculty, staff, and students across the 10-campus system, but unlike UHUNIX, access to Mana is not granted by default to everyone at UH.
Prerequisites to access and use Mana:
Be an active UH faculty member, staff member, or student affiliated with at least one of the 10 campuses in the UH system
Sign up for and attend a Mana on-boarding session. Registration
Register for and use DUO/MFA with your UH account. DUO/MFA Setup
Mana Related News
News related to Mana and featured stories of research done with the help of Mana can be found at the Hawaii Data Science Institute's website.
Hardware
Resource | Details |
---|---|
8,500 processor cores | Intel processor types: |
340 nodes | Dual-socket nodes, 10 to 24 cores per socket |
6 nodes | Quad-socket nodes, 10 cores per socket |
2 standard login nodes | 4 vCPUs, 8 GB RAM |
1 tmux/screen login node | 4 vCPUs, 8 GB RAM |
2 Open OnDemand nodes | 4 vCPUs, 16 GB RAM |
2 data transfer nodes | 16 vCPUs, 128 GB RAM, 100 Gbit/s Ethernet |
63.19 TB of total system memory | Memory per computational node varies: 96 GB, 128 GB, 192 GB, 256 GB, 512 GB, or 1 TB of RAM |
Intel True Scale QDR InfiniBand | |
Mellanox HDR InfiniBand | |
1/25/100 Gbit Ethernet network | Computational nodes are not given public IPs, but can access external networks and websites |
120 Graphics Processing Units (GPUs) in 18 nodes | 4 Nvidia Tesla K40, 32 Nvidia Tesla V100, 32 Nvidia Quadro RTX 5000, 44 Nvidia GeForce RTX 2080 Ti, 8 Nvidia GeForce RTX 2070 (see the GPU job example after this table) |
80 TB of free permanent storage | Total home and group space available |
211 TB of free temporary storage | Total temporary (scratch) space available |
1 PB of Long Term Storage | For-fee permanent storage |
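To use one of the GPUs listed above, a job must request it from Slurm. Below is a minimal sketch; the partition name is hypothetical and the resource sizes are placeholders, so confirm the actual GPU partition with `sinfo` before submitting.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-check
#SBATCH --partition=gpu           # hypothetical partition name; confirm with `sinfo`
#SBATCH --gres=gpu:1              # request one GPU on the allocated node
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=00:10:00

# Report the GPU that was allocated to this job
nvidia-smi
```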
Hardware Breakdown
Nodes
If this table is not visible, make sure you are logged into Google with your UH credentials, as visibility is limited to members of the UH organization.
GPU Nodes
If this table is not visible, make sure you are logged into Google with your UH credentials, as visibility is limited to members of the UH organization.
Node Constraints & Infiniband Network Affiliation
If this table is not visible, make sure you are logged into Google with your UH credentials, as visibility is limited to members of the UH organization.
Home & Group Storage
Type | Disk Type | Location | Total Storage | Per User Storage | Transparent Compression |
---|---|---|---|---|---|
NFS | Spinning Disk | /home/ | 40 TB | 50 - 300 GB | Yes, lz4 |
NFS | Spinning Disk | /mnt/group/nfs_fs01/ | 40 TB | 50 - 300 GB | Yes, lz4 |
Temporary (Scratch) Storage
Type | Disk Type | Location | Symlink | Per User Storage | Purge time | Total Storage | Transparent Compression |
---|---|---|---|---|---|---|---|
NFS | Spinning Disk | /mnt/scratch/nfs_fs02/ | nfs_scratch | 5 TB | 20 days | 50 TB | Yes, lz4 |
Lustre | NVMe flash | /mnt/scratch/lustre_01/ | lus_scratch | 5 TB | 20 days | 61 TB | No |
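Because scratch space is purged after 20 days, a common pattern is to stage data into scratch at the start of a job, run against the faster filesystem, and copy results back to home before the job ends. A minimal sketch using the NFS scratch path from the table above; the directory layout under the mount point and the program name are assumptions, not Mana conventions.

```bash
#!/bin/bash
# Stage input into scratch, run there, then copy results back to home.
SCRATCH_DIR=/mnt/scratch/nfs_fs02/$USER/myrun    # layout under the mount point is an assumption
mkdir -p "$SCRATCH_DIR"
cp "$HOME/input.dat" "$SCRATCH_DIR/"

cd "$SCRATCH_DIR"
./my_analysis input.dat > results.out            # placeholder program

# Copy results home before the 20-day purge removes them from scratch
cp results.out "$HOME/"
```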
Long Term Storage (LTS)
Type | Disk Type | Location | Total Storage | Transparent Compression |
---|---|---|---|---|
NFS | Spinning Disk | /mnt/lts/ | 1 PB | Yes, lz4 |