Resources per team: HPC Cluster: 100k core hours and 1k GPU hours
Resource Access: SSH access to a login node or a web user portal
Data cube access: SDC3 data in both formats are available via a read-only network share
Resource management: A Slurm batch queue manager. Users will have access to a shared network file system.
Software management: Software module environment, Singularity containers, or user-compiled code.
Support: email@example.com (please put [SDC3] in the email subject line)
Resource location: University of Cambridge, UK
Additional Information: On request we can provide limited resources running on a private OpenStack cloud, in the Azimuth Cloud Portal environment. This environment can support user-deployed and user-managed application clusters such as Slurm or Kubernetes, but support for user-managed clusters is more limited in scope.
The UKSRC resource at the University of Cambridge comprises a multi-node HPC/GPU cluster running a Slurm batch scheduler. An OpenStack-hosted Platform-as-a-Service Azimuth applications portal is also available on request.
Per user resource
The cluster operates a fair-share algorithm between all the users within a project.
A wide range of software packages described in the documentation (https://docs.hpc.cam.ac.uk/hpc/index.html) is available via modules.
Singularity/Apptainer containers are also supported.
Users can compile their own code in the project spaces.
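As a rough sketch of the module workflow described above (the module name loaded below is a placeholder, not a confirmed package on this cluster; consult https://docs.hpc.cam.ac.uk/hpc/index.html for the actual catalogue):

```shell
# List the software modules available on the cluster.
module avail

# Load a package into your environment. "python/3.9" is a hypothetical
# module name used for illustration only.
module load python/3.9

# Show what is currently loaded, then clear everything for a clean start.
module list
module purge
```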
Volume of resource
Each SDC3 team will be allocated 100k core hours, 1k GPU hours, and 20 TB of storage.
GPUs if any
320 x Nvidia A100 GPUs
To set up an account, please complete the form at https://www.hpc.cam.ac.uk/external-application (unless you are a member of the University of Cambridge, in which case use the usual internal form).
SSH access via login.hpc.cam.ac.uk or web access via login-web.hpc.cam.ac.uk.
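A minimal connection sketch, assuming the username issued after registration (the placeholder below is not a real account):

```shell
# Connect to the HPC login node over SSH; replace <username> with the
# account name you receive after registration.
ssh <username>@login.hpc.cam.ac.uk
```

Web-based access is available at login-web.hpc.cam.ac.uk as an alternative to a terminal session.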
How to run a workflow
Slurm batch script submission, using your own code, local software modules or containers.
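A minimal Slurm batch script sketch for this kind of submission. The account, partition, and module names are placeholders, not values confirmed for this cluster; substitute those given when your SDC3 project is set up:

```shell
#!/bin/bash
#SBATCH --job-name=sdc3-example
#SBATCH --account=<your-sdc3-project>   # Slurm project the core/GPU hours are charged to
#SBATCH --partition=<partition>         # hypothetical: a CPU or GPU partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00                 # wall-clock limit, hh:mm:ss
#SBATCH --gres=gpu:1                    # request one GPU; omit for CPU-only jobs

# Load local software modules (or activate your own environment).
module load <your-module>

# Run your own code, a module-provided binary, or a containerised tool.
srun ./my_analysis
```

Submit with `sbatch job.sh` and monitor the queue with `squeue -u $USER`.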
Accessing the data cube
The data cube will be available via a network volume mounted on each compute node.
Users can compile their own code, make use of module packages, or use their own Singularity containers.
Singularity or Docker container images can be run using Singularity/Apptainer.
On request we can provide limited resources on our private OpenStack cloud through an Azimuth Cloud Portal environment.
Resources on the HPC clusters will be allocated via Slurm projects.
Credits and acknowledgements
The UK SKA Regional Centre (UKSRC) is funded by:
IRIS, which is funded by the Science & Technology Facilities Council (STFC).
STFC is one of the seven research councils within UK Research & Innovation.