These HPC clusters are shared between participants who buy nodes for the cluster. CSC provides all of the infrastructure (e.g. networking, head node, RAID, interconnect switch, etc.), so the researcher only buys the compute nodes themselves. This saves perhaps $10k to $20k, which can instead be spent on another 3 or 4 compute nodes. Additionally, CSC provides the space in the Elings Hall machine room and manages the cluster. You may also have heard this type of system referred to as a 'Condo cluster'.
The only other users on the clusters are researchers who have bought in, or who are considering buying in. CNSI also buys some compute nodes for extra capacity available to all participants, so in the past people have even received more compute resources than they bought!
This model is also especially useful if your funding arrives in smaller portions, e.g. $5k or $10k: you can buy several nodes in a large cluster rather than having a collection of servers accumulate over the years in your lab or office space.
The current 'default' configuration is a dual compute node: two 12-core Intel-based systems, each with 128 GB of RAM and an Infiniband interconnect. If your work requires more RAM or faster CPUs, we can accommodate that, but most people find the default configuration to be a good setup for the nanotechnology and materials science research that is done at UCSB. This type of dual node currently (as of late 2016) costs about $10,000.
Current Condo Clusters
- Lattice, a 62-node Opteron-based cluster with Myrinet interconnect. The original condo cluster, it is now 10 years old and is being phased out.
- Guild, a 66-node Intel-based cluster with QDR Infiniband. No longer available to buy into.
- Braid, the next-generation cluster, also Intel-based, with FDR Infiniband. Currently over 90 nodes and growing.
Please contact Paul Weakliem or Fuzzy Rogers for more information.