High Performance Computing (HPC) is the resource companion for computational science, which is now considered the third leg of science alongside experimental and theoretical science.
HPC is for anyone who needs to solve very large numerical problems, process large data sets, or perform advanced simulations. Information technology, and high performance computing in particular, are essential tools in modern research and enable discovery in many disciplines.
Computationally intensive research has dominated the physical sciences such as physics and chemistry for decades and is now becoming prominent in biology, sociology, medical research, agriculture, and a growing list of other fields.
Trestles, an NSF XSEDE resource, was acquired from the San Diego Supercomputer Center and the National Science Foundation in May 2015. The Trestles cluster comprises 256 compute nodes, each with quad octa-core 2.4 GHz AMD Opteron 6194 processors, 64 GB of memory, and 120 GB of local SSD storage. Trestles is interconnected with a 324-port QDR 40 Gbps nonblocking Mellanox InfiniBand switch, and is connected to shared Lustre file systems with 16 TB of scratch space and 350 TB of main storage.
Razor consists of three subclusters, interconnected with a 324-port QDR 40 Gbps nonblocking QLogic InfiniBand switch and supplementary switches, and is connected to an IBM GPFS shared file system with 88 TB of long-term storage and 35 TB of scratch storage.
- Razor I: an IBM iDataPlex of 126 nodes, each with dual 6-core 2.93 GHz Intel Xeon X5670 processors, 24 GB of memory, and 1 TB of local disk, obtained through NSF MRI #918970 and university funds;
- Razor II: an IBM iDataPlex of 112 nodes, each with dual 8-core 2.6 GHz Xeon E5-2670 processors, 32 GB of memory, and 2 TB of local disk, obtained through NSF EPSCoR #959124 and university funds;
- Razor III: a Dell PowerEdge R620 cluster of 64 nodes, each with dual 8-core 2.6 GHz Xeon E5-2650 v2 processors, 32 GB of memory, and 1 TB of local disk, obtained through university funds.
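For a sense of the aggregate scale of Razor, the per-node figures above can be tallied directly. This is only a back-of-the-envelope sketch using the node counts, socket counts, core counts, and memory sizes listed in the three bullets; the dictionary layout and function names are illustrative, not part of any site tooling.

```python
# Back-of-the-envelope tally of the Razor subclusters, using only the
# node counts and per-node specifications listed above.

razor = {
    # name: (nodes, sockets, cores_per_socket, memory_gb_per_node)
    "Razor I":   (126, 2, 6, 24),
    "Razor II":  (112, 2, 8, 32),
    "Razor III": (64,  2, 8, 32),
}

def total_cores(clusters):
    """Sum CPU cores over all nodes in all subclusters."""
    return sum(n * s * c for n, s, c, _ in clusters.values())

def total_memory_tb(clusters):
    """Sum node memory in TB (using 1 TB = 1024 GB)."""
    return sum(n * m for n, _, _, m in clusters.values()) / 1024

print(total_cores(razor))             # 4328 cores
print(round(total_memory_tb(razor), 2))  # 8.45 TB aggregate memory
```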
GPU Nodes
Six GPU nodes have dual 4-core 2.26 GHz Xeon E5520 processors, 12 GB of memory, and two NVIDIA GTX480 GPUs, each with a peak performance of 1345 single-precision / 168 double-precision GFLOPS. One GPU node has dual 8-core 2.4 GHz Xeon E5-2630 v3 processors, 32 GB of memory, and two NVIDIA K40X GPUs, each with a peak performance of 4291-5040 single-precision / 1430-1680 double-precision GFLOPS.
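Peak-GFLOPS figures like those quoted above come from a simple product: shader cores, clock rate, and floating-point operations per core per cycle (two, counting a fused multiply-add as two operations). As a hedged illustration, the sketch below reproduces the GTX480's 1345 GFLOPS figure; the core count (480) and shader clock (1.401 GHz) are the published GTX 480 specifications, not numbers stated in the text above.

```python
# Sketch of how a theoretical peak-GFLOPS figure is derived:
#   cores x clock (GHz) x FLOPs per core per cycle.
# Assumed GTX 480 specs (not from the text): 480 CUDA cores, 1.401 GHz
# shader clock, 2 FLOPs/cycle via fused multiply-add.

def peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    """Theoretical peak throughput in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

sp = peak_gflops(480, 1.401)  # 1344.96, i.e. the quoted 1345 GFLOPS
dp = sp / 8                   # Fermi GeForce parts run double precision
                              # at 1/8 the single-precision rate -> ~168
print(round(sp), round(dp))   # 1345 168
```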
Large Shared Memory Nodes
One large-memory node has quad 12-core 2.2 GHz AMD Opteron 6174 processors and 256 GB of memory. Six large-memory nodes have quad 16-core 2.3 GHz Opteron 6276 processors and 512 GB of memory. Three large-memory nodes have quad 8-core 2.4 GHz Xeon E5-4640 processors and 768 GB of memory. One large-memory node has quad 12-core 2.6 GHz Xeon E7-4860 v2 processors and 3072 GB of memory.