Important Cluster Information
For any cluster problems or questions not answered by the documentation on this support page, email HPC-SUPPORT@listserv.uark.edu.
For information on how to use the clusters, see the Quickstart/Razor Cluster Tutorial.
For cluster storage options, see the storage page.
The cluster tutorial has information on running applications on the clusters. For more information about how to use the computational resources, see the queuing page.
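As a minimal sketch of what batch submission usually looks like (assuming the clusters use a PBS/Torque-style scheduler, which the named queues below suggest; the job name, resource values, and application binary are all hypothetical placeholders, so check the tutorial and queuing pages for actual values):

    #!/bin/bash
    #PBS -N example-job           # hypothetical job name
    #PBS -l nodes=1:ppn=12        # one node, twelve cores (a full Razor node)
    #PBS -l walltime=01:00:00     # one-hour wall-clock limit
    cd $PBS_O_WORKDIR             # start in the directory qsub was run from
    ./my_application              # hypothetical application binary

The script is then submitted from a login node with qsub, e.g. qsub example.pbs.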
The HPC Community has a number of resources explaining how to use, develop, and optimize applications to run on clusters.
The Interactive Page shows how to run interactive batch jobs, with or without displays.
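Assuming the same PBS/Torque-style scheduler, an interactive session is typically requested with qsub -I, adding -X to forward an X11 display for graphical applications (the resource values here are illustrative; the Interactive Page is authoritative):

    # interactive shell on one node, no display
    qsub -I -l nodes=1:ppn=12,walltime=01:00:00

    # interactive shell with X11 display forwarding
    qsub -I -X -l nodes=1:ppn=12,walltime=01:00:00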
Each of the four current clusters is connected to GPFS storage providing 74 TB of permanent storage and 27 TB of fast temporary storage (both usable capacities). The Star cluster is also connected to Lustre temporary storage. The clusters share a common access point and file systems.
Razor Cluster
Razor's 126 compute nodes each contain dual hex-core Xeon X5670 processors running at 2.93 GHz with 2x12 MB cache, 24 GB of memory, and 40 Gb QDR InfiniBand. Four of the 126 nodes have 96 GB of memory.
Star of Arkansas Cluster
The Star of Arkansas has 157 compute nodes, each with dual quad-core Xeon E5430 processors at 2.66 GHz, 2x6 MB cache, 16 GB of memory, and 10 Gb SDR InfiniBand. This hardware was benchmarked at 10.75 teraflops using the HPL benchmark and appeared at rank 341 on the June 2008 Top500 list.
GPU Nodes
Each of the six GPU nodes has dual quad-core Xeon E5520 processors at 2.26 GHz, 2x8 MB cache, 12 GB of memory, two Nvidia GTX295 GPUs (seen by applications as four GTX275 devices, each with 900 MB of memory and roughly 1 teraflop of single-precision performance), and 20 Gb DDR InfiniBand. These nodes can be used through the "gpu" queue; a sample script sketch follows.
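A hypothetical job script for these nodes would differ from the basic sketch above only in its queue directive and core count (again assuming PBS/Torque syntax; only the queue name "gpu" comes from this page):

    #!/bin/bash
    #PBS -q gpu                   # route the job to the six GPU nodes
    #PBS -l nodes=1:ppn=8         # one GPU node, all eight CPU cores
    #PBS -l walltime=02:00:00     # illustrative time limit
    cd $PBS_O_WORKDIR
    ./my_gpu_application          # hypothetical binary; each node exposes four GPU devices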
Large Memory Node
A single AMD node has four 12-core Opteron 6174 processors at 2.2 GHz with 4x6 MB of cache, 256 GB of memory, and 20 Gb DDR InfiniBand. This node can be used through the "bigmem" queue; a sample request sketch follows.
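A corresponding sketch for the large-memory node (PBS/Torque syntax assumed; only the queue name "bigmem" comes from this page):

    #!/bin/bash
    #PBS -q bigmem                # route the job to the large-memory node
    #PBS -l nodes=1:ppn=48        # the node's 48 cores
    #PBS -l walltime=04:00:00     # illustrative time limit
    cd $PBS_O_WORKDIR
    ./my_large_memory_app         # hypothetical memory-intensive binary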
HPC Administration Wiki
Log in to the HPC Administration website (cluster administrators only)