Computer hardware

Contents:

Computer complex HP

Computer Cluster T-Platform

Computer Complex ALICE

 

Computer complex HP

The Hewlett-Packard computer complex is a unified computer system built on hardware and software solutions provided by HP.

The HP computer complex (hereinafter CC HP) provides the following services:

  1. Provision of virtual machines for the educational, scientific, administrative, and economic departments of St. Petersburg State University (hereinafter SPbU);
  2. Provision of high-performance computing cluster capacity for work in physics, chemistry, biology, and other fields.

The structure of the CC HP

The Cloud Computing Cluster consists of virtualization hosts, storage, network storage, a data transmission network, and software. It provides temporary use of virtual machines with user-defined properties for the educational, scientific, administrative, and economic departments of SPbU.

The HPC cluster (High Performance Computing cluster) consists of servers, storage, network storage, a data transmission network, and software. It supports research in physics, chemistry, biology, and other sciences that require powerful computing resources.

The communication equipment includes switches that use Gigabit Ethernet, 10 Gigabit Ethernet, and InfiniBand QDR technologies.

The storage system holds user data, virtual machine images, and other information; it combines disk arrays, gateways, and virtual libraries.

The HP computer complex comprises the following subsystems:

  • A VMware cluster (blade servers);
  • Computational clusters;
  • Network equipment;
  • Storage;
  • A backup system.

Computer Complex HP

 

Complex Specification

The Cloud Computing Cluster consists of the following components:

  • Four HP BladeSystem c7000 chassis housing the blade servers;
  • Thirty-two BL460c G6 blade servers;
  • Twenty-eight BL460c G7 blade servers;
  • The VMware vSphere 5 software environment running on the VMware ESXi 5 hypervisor, which uses the aforementioned servers as virtualization hosts;
  • A VMware vCenter Server for virtualization management.

The HPC cluster consists of two subclusters: SMP and hybrid.

The SMP subcluster includes:

  • Three HP ProLiant DL980 G7 servers.

The hybrid subcluster includes:

  • Eight s6500 chassis housing the servers;
  • Sixteen HP ProLiant SL390s 2U servers;
  • Eight HP ProLiant SL390s 4U servers.

The HPC cluster is managed from an HP ProLiant DL360 G7 head node.

The Cloud Computing Cluster and the HPC cluster share the following common data transmission networks:

  • Gigabit Ethernet;
  • 10 Gigabit Ethernet;
  • InfiniBand QDR 4x.

The network equipment includes the following devices:

  • Two HP 6600 switches;
  • Two HP 8206 switches;
  • Four pairs of BladeSystem Virtual Connect Flex-10 interconnect modules;
  • Four Voltaire 4036 switches;
  • Four Mellanox switches integrated in the blade chassis.

Storage:

  • P4500 G2 disk array;
  • P4800 G2 disk array;
  • x9300 gateway;
  • D2D4106i virtual library.


 

Characteristics of the complex

Characteristics of the Cloud Computing Cluster

№ | Name | Quantity | CPU | RAM | Adapters and modules | Notes
1 | HP BladeSystem c7000 blade-server chassis | 4 | - | - | Virtual Connect Flex-10 (10 Gb Ethernet) and InfiniBand switch modules | -
2 | BL460c G6 blade server | 32 | 2 x Intel Xeon X5670 3.06 GHz | 96 GB | 10 Gb Ethernet and InfiniBand network interfaces | -
3 | BL460c G7 blade server | 28 | 2 x Intel Xeon X5675 3.06 GHz | 96 GB | 10 Gb Ethernet and InfiniBand network interfaces | -

Characteristics of the HPC cluster

SMP subcluster

№ | Name | Quantity | CPU | RAM | Adapters and modules | Notes
1 | HP ProLiant DL980 G7 server | 2 | Intel Xeon X7560 2.2 GHz | 512 GB | Gb Ethernet and InfiniBand network adapters | -
2 | HP ProLiant DL980 G7 server | 1 | Intel Xeon X7560 2.2 GHz | 2 TB | Gb Ethernet and InfiniBand network adapters | -

Hybrid subcluster

№ | Name | Quantity | CPU | RAM | Adapters and modules | Notes
1 | s6500 chassis | 8 | - | - | - | -
2 | HP ProLiant SL390s 2U server | 16 | 2 x Intel Xeon X5650, 3 x NVIDIA Tesla M2050 | 96 GB | - | -
3 | HP ProLiant SL390s 4U server | 8 | 2 x Intel Xeon X5650, 8 x NVIDIA Tesla M2050 | 96 GB | - | -
4 | HP ProLiant DL360 G7 server | 1 | 2 x Intel Xeon X5650 | 12 GB | - | Head node of the HPC cluster

 

Characteristics of network equipment

№ | Name | Quantity | Port count | Data transmission technology | Notes
1 | HP 6600 switch | 2 | 48 | Gigabit Ethernet | -
2 | HP 8206 switch | 2 | 20 | 10 Gigabit Ethernet | -
3 | BladeSystem Virtual Connect Flex-10 interconnect module | 4 x 2 | 8 | 10 Gigabit Ethernet | -
4 | Voltaire 4036 switch | 4 | - | InfiniBand QDR 4x | -
5 | Built-in Mellanox switch | 4 | - | InfiniBand QDR 4x | -

 

Performance

Performance of the cluster based on blade servers

The aim of the tests was to determine two parameters: the maximum number of floating-point operations per second (Rmax) and its ratio to the theoretical peak performance (Rpeak), calculated from the CPU characteristics published by the manufacturer.

The following results were obtained:

Cluster | 28 nodes, 56 CPUs, 336 cores
Operations per second (Rmax) | 3618 gigaflops
Rmax/Rpeak ratio | 87.8%

Technical details of the tests

The tests were carried out with the HPLinpack 2.0 benchmark. The benchmark solves a dense system of linear algebraic equations by Gaussian elimination, which requires a large number of floating-point operations (FLOP). The number of such operations performed per second (FLOPS) serves as a measure of processor performance and allows one to predict the cluster's ability to solve real computational problems.

The following measurements were performed:

  • HPL test on one, two, four, eight, sixteen, and twenty-eight nodes.

The results are shown in the table below. One six-core Intel Xeon X5675 3.06 GHz processor has a peak performance of 73.584 gigaflops. Each node holds two CPUs, so the theoretical peak performance of a single node is 73.584 × 2 = 147.168 gigaflops. The theoretical peak performance and the Rmax/Rpeak ratio are listed in the table below.

Node count | Rmax, gigaflops | Rpeak, gigaflops | Rmax/Rpeak, %
1 | 133.3 | 147.168 | 90.58%
2 | 265.6 | 294.336 | 90.24%
4 | 529.3 | 588.672 | 89.91%
8 | 1052 | 1177.344 | 89.35%
16 | 2079 | 2354.688 | 88.29%
28 | 3618 | 4120.704 | 87.80%
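
The Rpeak values in the table follow directly from the per-CPU figure quoted above; a minimal sketch of the calculation (using the node counts and Rmax values from the table):

```python
# Peak performance of the blade-server cluster, reconstructed from the
# per-CPU figure quoted above (Intel Xeon X5675: 73.584 gigaflops).
GFLOPS_PER_CPU = 73.584
CPUS_PER_NODE = 2
NODE_GFLOPS = GFLOPS_PER_CPU * CPUS_PER_NODE  # 147.168 gigaflops per node

# Measured Rmax values (gigaflops) from the table above.
rmax = {1: 133.3, 2: 265.6, 4: 529.3, 8: 1052, 16: 2079, 28: 3618}

for nodes, measured in rmax.items():
    rpeak = NODE_GFLOPS * nodes
    efficiency = measured / rpeak * 100
    print(f"{nodes:2d} nodes: Rpeak = {rpeak:9.3f} GFlops, "
          f"Rmax/Rpeak = {efficiency:5.2f}%")
```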

 

Performance of the hybrid HPC cluster

The aim of the tests was to determine two parameters for the hybrid cluster: the maximum number of floating-point operations per second (Rmax) and its ratio to the theoretical peak performance (Rpeak), calculated from the GPU and CPU characteristics published by the manufacturers.

The following results were obtained:

Cluster | 24 nodes, 72 GPUs, 48 CPUs (288 cores)
Operations per second (Rmax) | 19590 gigaflops
Rmax/Rpeak ratio | 48.79%

Technical details of the tests

The tests were carried out with the HPLinpack 2.0 benchmark, as for the blade-server cluster.

The following measurements were performed:

  • HPL test on one, two, four, eight, and sixteen nodes with three GPUs each;
  • HPL test on one, two, four, and eight nodes with eight GPUs each;
  • General HPL test on 24 nodes with three GPUs enabled per node.

The results are shown in the table below. One Tesla M2050 GPU has a peak performance of 515 gigaflops; one six-core Intel Xeon X5650 2.66 GHz processor has a peak performance of 63.984 gigaflops. Each first-type (2U) node holds three GPUs, so its theoretical peak performance is 515 × 3 + 63.984 × 2 = 1672.968 gigaflops. Similarly, for a second-type (4U) node: 515 × 8 + 63.984 × 2 = 4247.968 gigaflops. The theoretical peak performance and the Rmax/Rpeak ratio are shown in the table below.

Node count | Rmax, gigaflops | Rpeak, gigaflops | Rmax/Rpeak, %
3 GPUs per node
1 | 1017 | 1672.968 | 60.79%
2 | 1932 | 3345.936 | 57.74%
4 | 3647 | 6691.872 | 54.50%
8 | 7259 | 13383.744 | 54.24%
16 | 13060 | 26767.488 | 48.79%
24 | 19590 | 40151.232 | 48.79%
8 GPUs per node
1 | 2152 | 4247.968 | 50.66%
2 | 4183 | 8495.936 | 49.24%
4 | 8357 | 16991.872 | 49.18%
8 | 16540 | 33983.744 | 48.67%
General test (3 GPUs per node)
24 | 19590 | 40151.232 | 48.79%
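
As with the blade-server cluster, the Rpeak values above can be reproduced from the per-device figures; a minimal sketch for a few of the configurations:

```python
# Peak performance of the hybrid nodes, reconstructed from the per-device
# figures quoted above (Tesla M2050: 515 GFlops, Xeon X5650: 63.984 GFlops).
GPU_GFLOPS = 515.0
CPU_GFLOPS = 63.984

def node_rpeak(gpus_per_node, cpus_per_node=2):
    """Theoretical peak of one hybrid node, in gigaflops."""
    return GPU_GFLOPS * gpus_per_node + CPU_GFLOPS * cpus_per_node

# Measured Rmax values (gigaflops) from the tables above.
runs = [
    ("3 GPUs, 16 nodes", 3, 16, 13060),
    ("3 GPUs, 24 nodes", 3, 24, 19590),
    ("8 GPUs,  8 nodes", 8, 8, 16540),
]

for label, gpus, nodes, rmax in runs:
    rpeak = node_rpeak(gpus) * nodes
    print(f"{label}: Rpeak = {rpeak:9.3f} GFlops, "
          f"Rmax/Rpeak = {rmax / rpeak * 100:5.2f}%")
```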

 

Network Infrastructure

The local network of the complex operates in two speed modes: 1 Gigabit Ethernet and 10 Gigabit Ethernet.

The 1-Gigabit network includes the 8206 and 6600 switches and the devices attached to them: hardware management interfaces (iLO, Onboard Administrator, Management Interface) and HPC cluster nodes. The 10-Gigabit network comprises the Virtual Connect Flex-10 modules and the 8206 switches.

The high-speed InfiniBand network is built on the Voltaire 4036 switches and the built-in Mellanox switches of the blade chassis. It uses InfiniBand 4x QDR technology with a bandwidth of 40 Gb/s. A fat-tree topology is applied, in which devices connect to edge switches and the edge switches connect to a pair of core switches.
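
An illustrative sketch of the two-level fat tree described above (the switch and node names here are hypothetical, not taken from the complex's actual configuration):

```python
# Illustrative sketch of a two-level fat tree: compute nodes attach to edge
# switches, and every edge switch uplinks to both core switches.
from itertools import combinations

CORES = ["core-1", "core-2"]
EDGES = {f"edge-{i}": CORES for i in range(1, 5)}                       # edge -> core uplinks
NODES = {f"node-{i}": f"edge-{(i - 1) % 4 + 1}" for i in range(1, 17)}  # node -> edge switch

def switch_hops(a, b):
    """Switches traversed between two nodes: 1 via a shared edge switch, else 3 via a core."""
    return 1 if NODES[a] == NODES[b] else 3

# In this topology any two nodes are separated by at most three switches.
assert max(switch_hops(a, b) for a, b in combinations(NODES, 2)) == 3
```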

Diagram of the CC HP network infrastructure


 

Computer Cluster T-Platform


Computer cluster T-Platform is a powerful computing system produced by the T-Platforms company. It includes:

  • 96 CPUs,
  • 384 cores,
  • 768 GB RAM,
  • 7.68 TB of a disk space.

Cluster Specification

Cluster T-Platform consists of the following components:

  • Forty-eight Dexus compute nodes;
  • Six Flextronix F-X430046 InfiniBand switches;
  • One Gigabit Ethernet D-Link DGS-3324SR switch;
  • One Gigabit Ethernet D-Link DXS-3350SR switch;
  • One control node;
  • Two APC NetShelter SX cabinets.

 

Cluster Characteristics

Characteristics of the cluster T-Platform components

№ | Component name | Quantity | Technical characteristics
1 | Dexus compute node | 48 | Size 1U; CPU: 2 x E5335 2.0 GHz; RAM: 16 GB; HDD: 160 GB; 2 x Intel 82563EB 10/100/1000 Mbit/s; Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev 20)
2 | InfiniBand Flextronix F-X430046 switch | 6 | DDR/SDR, 24 ports, 4X, 20 Gb/s
3 | Gigabit Ethernet D-Link DGS-3324SR switch | 2 | 24 ports, 10/100/1000 Mbit/s
4 | Gigabit Ethernet D-Link DXS-3350SR switch | 1 | 48 ports, 10/100/1000 Mbit/s
5 | Control node | 1 | CPU: 2 x X5640; RAM: 8 GB; HDD: 3 TB; Ethernet: 2 x 1 Gbit/s
6 | APC NetShelter SX cabinet | 2 | 19", 42U x 1000

 

Performance

The theoretical peak performance is 3.07 TFlops; in practice about 2.5 TFlops has been reached.
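
The quoted peak figure is consistent with the node specifications listed above, assuming 4 double-precision floating-point operations per core per cycle (a typical value for SSE-era Xeon CPUs such as the E5335; this factor is an assumption, not stated in the source):

```python
# Theoretical peak of the T-Platform cluster, assuming 4 double-precision
# FLOP per core per cycle (typical for SSE-era Xeon CPUs such as the E5335).
nodes = 48
cpus_per_node = 2
cores_per_cpu = 4          # Intel Xeon E5335 is a quad-core CPU
clock_ghz = 2.0
flops_per_cycle = 4        # assumption: 4 DP FLOP per core per cycle

peak_gflops = nodes * cpus_per_node * cores_per_cpu * clock_ghz * flops_per_cycle
print(f"Theoretical peak: {peak_gflops / 1000:.2f} TFlops")  # ~3.07 TFlops
```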

 

Network Infrastructure

Cluster T-Platform is built from forty-eight Dexus compute nodes. On one side they are connected by a 20 Gb/s InfiniBand network for data exchange between the nodes, and on the other side by a 1 Gb Ethernet network for communication with the outside world.

 

The network infrastructure of computer cluster T-Platform


 

Computer Complex ALICE

Cluster "ALICE" is a site RU-SPbSU of WLCG (Worldwide LHC Computing Grid, LHC - The Large Hadron Collider). It is also a part of the RDIG network - Russian Data Intensive Grid.

The cluster stores and processes data obtained from the LHC, and also stores, processes, and generates theoretical data. The cluster is used by all four LHC experiments: ALICE, LHCb, CMS, and ATLAS, with preference given to the ALICE experiment.

 

Complex Specification and its Characteristics

Cluster "ALICE" consists of file servers, compute nodes, control servers and network equipment.

It consists of the following components:

  1. File servers: storage boxes with capacities of 5 TB + 17 TB + 41 TB (63 TB in total);
  2. Compute nodes: six TWIN computers, each comprising two nodes with 2 x 4-core CPUs and 16 GB RAM (96 cores and 192 GB RAM in total);
  3. Control servers: four servers, each with a dual-core CPU and 2 GB RAM.

 

Performance

The per-core performance of an Intel(R) Xeon(R) E5345 CPU @ 2.33 GHz with 2 GB RAM is 1403 SI2k.


The total capacity for distributed tasks is 1.403 × 96 = 134.688 ≈ 135 kSI2k (1 kSI2k = 1000 SPECint2000).
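
A one-line sketch of the capacity estimate above:

```python
# Total distributed-task capacity of the ALICE cluster, from the figures above.
si2k_per_core = 1403                          # SI2k rating of one E5345 core
cores = 96
total_ksi2k = si2k_per_core * cores / 1000    # 1 kSI2k = 1000 SPECint2000
print(f"Total capacity: {total_ksi2k:.3f} kSI2k (~{round(total_ksi2k)} kSI2k)")
```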


 

Network Infrastructure

The internal network runs at 1 Gb/s; the external connection is limited to 400 MB/s.
