These stats are necessarily simplified. See notes below.
NOTE: Because this machine contains a mix of GPU types, some fields that would normally hold a single value are listed separately here. Additional information:
GPU peak performance (TFLOP/s, double precision): MI60 = 7.4
GPU memory/node (GB): MI60 = 32
Peak TFLOPS (GPUs): MI60 = 2,427 TF
Local NVRAM storage, mounted on each node as /l/ssd (GB): 1,500
Job Limits
Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies on a system itself by running:
news job.lim.MACHINENAME
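For example, on Corona the corresponding command would be expected to be:
news job.lim.corona
(The exact news item name is assumed here to follow the usual job.lim.<hostname> convention.)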
Web version of Corona Job Limits
Hardware
There are 121 compute nodes, each with 256 GB of memory. All compute nodes have AMD Rome processors with 48 cores/node. Each compute node has 8 AMD MI50 GPUs with 32 GB of memory. The nodes are interconnected via InfiniBand QDR (QLogic).
Scheduling
Corona jobs are scheduled through Flux. Slurm wrappers are also loaded by default.
Jobs are scheduled per node. All nodes are in one queue.
The maximum time limit is 24 hours.
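As an illustrative sketch only (the node count, time limit, and script name are placeholders, not an official recipe), a whole-node Flux batch submission within the 24-hour limit might look like:
flux batch -N 2 -t 8h ./my_job.sh    # request 2 nodes for 8 hours; my_job.sh is a placeholder batch script
flux jobs                            # list your jobs and their status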
For more information about running on Corona, see: https://lc.llnl.gov/confluence/display/LC/Compiling+and+running+on+Coro...
Scratch Disk Space: consult the CZ File Systems web page: https://lc.llnl.gov/fsstatus/fsstatus.cgi
Documentation
- Linux Clusters Tutorial Part One | Linux Clusters Part Two
- Slurm Tutorial (formerly Slurm and Moab)
- TCE Home
Contact
Please call or send email to the LC Hotline if you have questions. LC Hotline | phone: 925-422-4531 | email: lc-hotline@llnl.gov
See the Compilers page.