Summary
Resources per team: HPC cluster: 100k core hours, 1k GPU hours and 20 TB of storage
Resource Access: SSH access to a login node or a web user portal
Resource management: A Slurm batch queue manager. Users will have access to a shared network file system.
Software management: Software module environment, Singularity containers or user-compiled code.
Documentation: https://docs.hpc.cam.ac.uk/hpc/
Support: support@hpc.cam.ac.uk (please include [SDC3] in the email subject)
Resource location: University of Cambridge, UK
Additional Information: On request, we can provide limited resources running on a private OpenStack cloud via the Azimuth Cloud Portal environment. This environment supports user-deployed and user-managed application clusters such as Slurm or Kubernetes, although support for user-managed clusters is more limited in scope.
Technical specifications
Overview
The UKSRC resource at the University of Cambridge comprises a multi node HPC/GPU cluster running a Slurm batch scheduler. An OpenStack-hosted Platform-as-a-Service Azimuth applications portal is also available on request.
Technical specifications
Cascade Lake CPU nodes: https://docs.hpc.cam.ac.uk/hpc/user-guide/cclake.html
Ice Lake CPU nodes: https://docs.hpc.cam.ac.uk/hpc/user-guide/icelake.html
A100 GPU nodes: https://docs.hpc.cam.ac.uk/hpc/user-guide/a100.html
Per user resource
The cluster operates a fair-share algorithm among all users within a project.
Software installed
A wide range of software packages described in the documentation (https://docs.hpc.cam.ac.uk/hpc/index.html) is available via modules.
Singularity/Apptainer containers are also supported.
Users can compile their own code in the project spaces.
Volume of resource
Each SDC3 team will be allocated 100k core hours, 1k GPU hours and 20 TB of storage.
GPUs if any
320 × NVIDIA A100 GPUs
User access
Account Setup
To set up an account, please complete the form at https://www.hpc.cam.ac.uk/external-application (unless you are a member of the University of Cambridge, in which case use the usual internal form).
Logging in
SSH access via login.hpc.cam.ac.uk or web access via login-web.hpc.cam.ac.uk.
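As a minimal sketch (replace <username> with the account name issued during setup; the destination path is illustrative), an SSH session and a file transfer look like:

    # Open an interactive shell on a login node
    ssh <username>@login.hpc.cam.ac.uk

    # Copy input data to your home or project space
    scp mydata.tar.gz <username>@login.hpc.cam.ac.uk:~/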
How to run a workflow
Slurm batch script submission, using your own code, local software modules or containers.
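As an illustrative sketch only (the partition, project account and module names below are placeholders, not confirmed values; consult the quick-start documentation for the correct ones), a minimal Slurm batch script might look like this:

    #!/bin/bash
    #SBATCH --job-name=sdc3-test        # name shown in the queue
    #SBATCH --account=MYPROJECT-CPU     # placeholder: your Slurm project account
    #SBATCH --partition=icelake         # placeholder: target partition
    #SBATCH --nodes=1
    #SBATCH --ntasks=4
    #SBATCH --time=01:00:00             # wall-clock limit (hh:mm:ss)

    module purge
    module load python                  # placeholder: a centrally installed module

    srun python my_analysis.py          # placeholder: your own code

Submit the script with sbatch myjob.sh and monitor it with squeue -u $USER.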
Software management
Users can compile their own code, make use of module packages, or run their own Singularity containers.
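For example (module names are illustrative; run module avail on the cluster to see what is actually installed), loading a compiler module and building your own code might look like:

    # List centrally installed software and load a compiler
    module avail
    module load gcc                     # placeholder module name

    # Compile your own code in your project space
    gcc -O2 -o my_tool my_tool.c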
Containerisation
Singularity or Docker container images can be run using Singularity/Apptainer.
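As a hedged example (the Docker Hub image is a placeholder), a Docker image can be pulled and executed through Singularity/Apptainer roughly as follows:

    # Pull a Docker image and convert it to a Singularity/Apptainer image file
    singularity pull my_image.sif docker://ubuntu:22.04

    # Run a command inside the container
    singularity exec my_image.sif cat /etc/os-release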
Documentation
https://docs.hpc.cam.ac.uk/hpc/user-guide/quickstart.html
Cloud Access
On request we can provide limited resources on our private OpenStack cloud through an Azimuth Cloud Portal environment.
Resource management
Resources on the HPC clusters will be allocated via Slurm projects.
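Jobs are charged to the team's project account; as a sketch (the account name is a placeholder), the account can be given at submission time and fair-share usage inspected with standard Slurm tools:

    # Submit a batch script against a specific project account
    sbatch --account=MYPROJECT-GPU myjob.sh

    # Show fair-share usage for that account
    sshare -A MYPROJECT-GPU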
On request, we can provide limited resources on our private OpenStack cloud through an Azimuth Cloud Portal environment.
Credits and acknowledgements
The UK SKA Regional Centre (UKSRC) is funded by:
IRIS, which is funded by the Science & Technology Facilities Council (STFC).
STFC is one of the seven research councils within UK Research & Innovation (UKRI).
https://www.iris.ac.uk/portfolio/stfc-cloud/