sciCORE currently operates a high-performance computing infrastructure divided into three environments, each tailored to specific scientific needs. The infrastructure comprises nearly 200 nodes interconnected via InfiniBand and 100G Ethernet, with around 13,500 CPU cores, 70 TB of distributed memory, and a high-performance GPFS cluster file system offering a disk storage capacity of 11 PB. The technical details are provided in the tables below.
The sciCORE cluster is updated regularly to keep pace with the growing needs of the Life Sciences and of demanding parallel applications. Our roughly 800 users currently consume almost 30 million CPU hours per year, spread across more than 14 million jobs.
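As a rough illustration of what these usage figures imply, the following sketch derives the cluster's theoretical annual CPU-hour capacity, its average utilization, and the average job size from the numbers quoted above. The constants are taken directly from the text; the calculation itself is a back-of-the-envelope estimate, not an official sciCORE metric.

```python
# Back-of-the-envelope figures derived from the numbers quoted above:
# ~13,500 cores, ~30 million CPU hours/year, ~14 million jobs/year.
CORES = 13_500
CPU_HOURS_PER_YEAR = 30_000_000
JOBS_PER_YEAR = 14_000_000

# Theoretical maximum CPU hours if every core ran all year (8,760 h/year).
capacity = CORES * 8760

utilization = CPU_HOURS_PER_YEAR / capacity
avg_cpuh_per_job = CPU_HOURS_PER_YEAR / JOBS_PER_YEAR

print(f"Theoretical capacity: {capacity:,} CPUh/year")
print(f"Average utilization:  {utilization:.0%}")
print(f"Average job size:     {avg_cpuh_per_job:.2f} CPUh")
```

By this estimate the quoted workload corresponds to roughly a quarter of the theoretical peak capacity, with an average job consuming a little over two CPU hours.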
[Table fragment: node interconnects are Eth 100G and InfiniBand; remaining entries read "1104 total" and "200 user".]