Get Started
See below for a description of the facilities, and the links at the end of this page for how to start using the systems.
Campus available clusters
There are two clusters available for anyone on campus to use. The newest cluster, Pod, is a mixture of 40-core CPU nodes for general CPU-based computing and 15 quad NVIDIA V100/32GB GPU nodes with SXM2/NVLink interconnect. There are two login nodes for development work, one with only CPUs and one with two different GPUs (a P100 and a T4).
There's also the older Knot cluster, which is good for large parameter sweeps, or for jobs that exceed the capacity of your local desktop computer but don't need the full performance of Pod. Both systems are free to use and only require an acknowledgement in publications.
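To give a feel for what running on Pod looks like, below is a minimal sketch of a Slurm batch script for a GPU job. The partition and module names are assumptions for illustration, so check the cluster documentation for the actual values.

```bash
#!/bin/bash
# Minimal sketch of a GPU job on Pod (assumes Slurm; partition/module names are hypothetical)
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu        # assumed name of the GPU partition
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1           # request one of the four V100s on a node
#SBATCH --time=01:00:00

# Load your application's environment (module name is an example)
module load cuda

# Run the GPU application
./my_gpu_program
```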
Condo Clusters
These HPC clusters are shared between participants who buy nodes for the cluster. CSC provides all of the infrastructure (e.g. networking, head node, RAID storage, interconnect switch), so a researcher only buys the compute nodes themselves, saving perhaps $10k to $20k that can instead be spent on another 3 or 4 compute nodes. Additionally, CSC provides the space in the Elings Hall machine room and manages the cluster. You may also have heard this type of system referred to as a 'condo cluster'.
The only other users on the clusters are researchers who have bought in or are considering buying in, and CNSI also buys some compute nodes for extra capacity available to all participants, so in the past participants have often received more compute resources than they bought.
This model is also especially useful if your funding arrives in smaller chunks, e.g. $5k or $10k - you can buy several nodes in a large cluster rather than accumulating a collection of standalone servers in your lab or office space over the years.
The current 'default' configuration is a dual-socket compute node with two 24-core Intel CPUs, 256GB of RAM, and an Infiniband interconnect. If your work requires more RAM or faster CPUs, we can accommodate that, but most people find the default configuration to be a good setup for the nanotechnology and materials science research done at UCSB. This type of node currently (as of 2023) costs about $9,000.
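For a sense of scale, the sketch below shows how a job might request one full default node (48 cores, 256GB) under Slurm. The partition name and time limit are assumptions; actual limits vary by cluster.

```bash
#!/bin/bash
# Sketch of a whole-node job on a default condo node (assumes Slurm; partition name is hypothetical)
#SBATCH --job-name=full-node
#SBATCH --partition=batch      # assumed partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48   # use both 24-core CPUs
#SBATCH --mem=250G             # leave a little headroom below 256GB for the OS
#SBATCH --time=12:00:00

# Example MPI launch across the node's cores
mpirun -np 48 ./my_mpi_program
```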
Current Condo Clusters
- Lattice, a 62 node Opteron based cluster with Myrinet interconnect - the original condo cluster; it ran for over 10 years and is no longer in operation.
- Guild, a 66 node Intel based cluster with QDR Infiniband; currently being phased out.
- Braid, an Intel based cluster with FDR Infiniband; currently over 120 nodes.
- Braid2 (started in 2022), the next generation condo cluster, with both CPU nodes and GPU nodes. Please contact Paul Weakliem or Fuzzy Rogers for more information.
Basic information to start using the CSC clusters
- Getting an account
- Logging in
- Other stuff
- Quickly connect to the Pod cluster (see the sketch below)
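As a quick illustration, connecting to Pod from a terminal typically looks like the sketch below. The hostname shown is an assumption, so use the login address provided when your account is created.

```bash
# Sketch of connecting to Pod over SSH (hostname is an assumption; use the
# address provided with your account)
ssh your_ucsb_username@pod-login1.cnsi.ucsb.edu

# Once logged in, list the available partitions and nodes (assumes Slurm)
sinfo
```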