Lassen is similar to the classified Sierra system but smaller: 23 petaflops peak performance vs. Sierra's 125 petaflops. Lassen was ranked #10 on the June 2019 Top500 list.
Login nodes: lassen[708-709]
NOTE: most numbers are for compute nodes only, not login or service nodes.
Job Limits
Each LC platform is a shared resource. Users are expected to adhere to the following usage policies to ensure that the resources can be effectively and productively used by everyone. You can view the policies on a system itself by running:
news job.lim.MACHINENAME
Web version of Lassen Job Limits
There are 788 compute nodes with 40 POWER9 cores, 4 NVIDIA Volta V100 GPUs, and 256 GB of memory on each node.
Jobs are scheduled per node. Lassen has two main scheduling pools (queues):
- pdebug—36 nodes
- pwdev—12 nodes
- pbatch—740 nodes
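The three pools together account for all of the compute nodes; a quick sanity check of the sizes quoted above (the dictionary here is illustrative, not an LC interface):

```python
# Scheduling pool sizes as quoted in this document (nodes per pool).
pools = {"pdebug": 36, "pwdev": 12, "pbatch": 740}

total = sum(pools.values())
print(total)  # 788, matching the compute-node count above
```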
Queue    Max nodes / job   Max runtime
------   ---------------   -----------
pdebug   18 (*)            2 hrs
pwdev    social (**)       12 hrs (**)
pbatch   256               12 hrs
(*) pdebug is intended for debugging, visualization, and other inherently interactive work. It is NOT intended for production work. Do not use pdebug to run batch jobs. Do not chain jobs to run one after the other. Do not use more than half of the nodes during normal business hours. Individuals who misuse the pdebug queue in this or any similar manner will be denied access to the pdebug queue.
(**) pwdev is for SD code developers to run short compiles/debugging/CI work. Only users in the pwdev bank will have access to this pool.
- 3 nodes max per user during daytime hours
- jobs can be up to 4 hours during daytime hours
- daytime hours are 0800-1800 California time Mon-Fri
- to prevent runaway jobs, there's a technical maximum per job of 12 hours
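The limits above can be checked before submitting a job. The sketch below is not an LC-provided tool; the function and the limit table are assumptions for illustration (the pwdev 3-node cap is the daytime social limit, and 12 hrs is its technical runtime maximum):

```python
# Illustrative pre-submission check against the Lassen queue limits
# listed above. Hypothetical helper, not part of any LC software.

LIMITS = {
    # queue: (max nodes per job, max runtime in hours)
    "pdebug": (18, 2),
    "pwdev":  (3, 12),    # 3-node cap is a daytime social limit
    "pbatch": (256, 12),
}

def check_job(queue, nodes, hours):
    """Return a list of limit violations for a proposed job."""
    max_nodes, max_hours = LIMITS[queue]
    problems = []
    if nodes > max_nodes:
        problems.append(f"{queue}: {nodes} nodes exceeds the {max_nodes}-node limit")
    if hours > max_hours:
        problems.append(f"{queue}: {hours} hrs exceeds the {max_hours}-hr limit")
    return problems

print(check_job("pbatch", 256, 12))  # [] -- within limits
print(check_job("pdebug", 20, 1))    # node-count violation reported
```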
Hardware
Each node has two 22-core 3.45 GHz IBM POWER9 processors. Two of the cores on each socket are reserved for system use, leaving 40 usable cores per node. The vast majority of the cycles on each node are provided by four NVIDIA Volta V100 GPUs per node. Each node also has 256 GB of system memory and 64 GB of GPU memory. The nodes are connected by Mellanox EDR InfiniBand.
Contact
Please call or send email to the LC Hotline if you have questions.
LC Hotline | phone: 925-422-4531 | email: lc-hotline@llnl.gov