The HPC2015 system is a heterogeneous High Performance Computing (HPC) Linux cluster comprising several kinds of computing resources: compute nodes with fast multicore processors for general compute-intensive workloads, and special-purpose compute nodes with large memory, GPU, and MIC capabilities for data-intensive and accelerated computing. The system delivers processing power of up to 84.85 Tera-FLOPS (measured Rmax) / 104 Tera-FLOPS (theoretical Rpeak).
Later in 2016, an additional 11 compute nodes with dual 12-core 2.2 GHz Intel Xeon E5-2600 v4 processors (belonging to a group of intensive HPC users) were integrated into the HPC2015 cluster, bringing the total to 124 nodes and raising the total processing power to 113 Tera-FLOPS (theoretical Rpeak).
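As a rough sanity check, the Rpeak contribution of the 2016 expansion can be reproduced from the node count, core count, and clock speed quoted above. The figure of 16 double-precision FLOPs per cycle per core is an assumption (AVX2 with FMA on E5-2600 v4 cores), not a number stated on this page:

```python
# Sketch: reproducing the theoretical Rpeak of the 2016 expansion.
# Assumption: 16 double-precision FLOPs/cycle/core (AVX2 + FMA on
# Broadwell-class Xeon E5-2600 v4); this is not taken from the page itself.

def rpeak_tflops(nodes, sockets, cores, ghz, flops_per_cycle=16):
    """Theoretical peak in Tera-FLOPS: nodes x sockets x cores/socket x GHz x FLOPs/cycle."""
    return nodes * sockets * cores * ghz * flops_per_cycle / 1000.0

# The 11 extra dual 12-core 2.2 GHz nodes:
extra = rpeak_tflops(nodes=11, sockets=2, cores=12, ghz=2.2)
print(f"{extra:.2f} TFLOPS added")        # 9.29 TFLOPS
print(f"{104 + extra:.0f} TFLOPS total")  # 113 TFLOPS, matching the stated Rpeak
```

Adding the roughly 9.3 TFLOPS of the new nodes to the original 104 TFLOPS Rpeak is consistent with the 113 TFLOPS total quoted above.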
HPC2015 is designed to support both compute- and data-intensive research. The heterogeneous environment offers researchers diverse and emerging computing technologies with which to explore new solution approaches, new research opportunities, and connections among distinct research areas.
The cluster consists of:
General Purpose (GP) compute nodes: 104 compute nodes, each outfitted with two 10-core Intel Xeon E5-2600 v3 (Haswell) processors and 96 GB of physical memory.
Special Purpose (SP) compute nodes: four GPU compute nodes, each outfitted with two 10-core Intel Xeon E5 v2 (Ivy Bridge EP) processors, 96 GB of host memory, and two NVIDIA Tesla GPGPU accelerators with 6 GB of GDDR5 on-board memory each; two MIC compute nodes, each outfitted with two 10-core Intel Xeon E5 v2 processors, 96 GB of host memory, and two Intel Xeon Phi coprocessors with 16 GB of GDDR5 memory each; three large-memory nodes, each outfitted with four 10-core Intel Xeon E5 v2 processors and 512 GB of memory.
File Systems: a parallel file storage system (150 TB usable capacity) managed by Intel Enterprise Edition for Lustre, and a backup server with 40 TB capacity connected to the central tape library archive system.
Interconnect: all compute nodes are interconnected with Mellanox FDR InfiniBand in a fat-tree topology.
System Diagram of HPC2015 cluster