Resources per team: 100k core hours and 1k GPU hours on the HPC cluster
Resource Access: SSH access to a login node or a web user portal
Resource management: A Slurm batch queue manager. Users will have access to a shared network file system.
Software management: Software module environment, Singularity containers, or user-compiled code.
Documentation: https://docs.hpc.cam.ac.uk/hpc/
Support: support@hpc.cam.ac.uk (please put [SDC3] in the subject line of the email)
Resource location: University of Cambridge, UK
Additional Information: On request we can provide limited resources running on a private OpenStack cloud, in the Azimuth Cloud Portal environment. This environment can support user-deployed and user-managed application clusters such as Slurm or Kubernetes, but support for user-managed clusters is more limited in scope.
The UKSRC resource at the University of Cambridge comprises a multi-node HPC/GPU cluster running the Slurm batch scheduler. An OpenStack-hosted Platform-as-a-Service Azimuth applications portal is also available on request. Details of the available CPU and GPU node types:
https://docs.hpc.cam.ac.uk/hpc/user-guide/cclake.html
https://docs.hpc.cam.ac.uk/hpc/user-guide/icelake.html
https://docs.hpc.cam.ac.uk/hpc/user-guide/a100.html
The cluster operates a fair-share algorithm among all users within a project.
A wide range of software packages described in the documentation (https://docs.hpc.cam.ac.uk/hpc/index.html) is available via modules.
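Typical module commands are sketched below; the module name is an assumed example, so run 'module avail' to see what is actually installed:

    module avail     # list the available software modules
    module load gcc  # load an example module (name assumed)
    module list      # confirm which modules are currently loaded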
Singularity/Apptainer containers are also supported.
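A minimal sketch of pulling and running a container with Singularity/Apptainer; the image name is an arbitrary public example:

    singularity pull docker://ubuntu:22.04                 # fetch an image, producing ubuntu_22.04.sif
    singularity exec ubuntu_22.04.sif cat /etc/os-release  # run a command inside it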
Users can compile their own code in the project spaces.
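A compile step in a project space might look like the following; the directory and source file are hypothetical placeholders:

    cd /path/to/your/project-space         # placeholder: your shared project space
    gcc -O2 -o my_analysis my_analysis.c   # build an example user code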
Each SDC3 team will be allocated 100k core hours, 1k GPU hours, and 20 TB of storage.
The GPU resource comprises 320 NVIDIA A100 GPUs.
To set up an account, please complete the form at https://www.hpc.cam.ac.uk/external-application (unless you are a member of the University of Cambridge, in which case use the usual internal form).
SSH access via login.hpc.cam.ac.uk or web access via login-web.hpc.cam.ac.uk.
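For example, an interactive SSH session can be opened as follows (substitute your own username):

    ssh your-username@login.hpc.cam.ac.uk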
Jobs are submitted as Slurm batch scripts, using your own compiled code, local software modules, or your own Singularity containers; a minimal batch script sketch follows below.
Both Singularity and Docker container images can be run using Singularity/Apptainer.
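A minimal sketch of a batch script and its submission; the account, partition, and module names below are illustrative assumptions, so check the quickstart guide and your project details for the real values:

    #!/bin/bash
    #SBATCH --job-name=sdc3-example    # name shown in the queue
    #SBATCH --account=MYPROJECT        # placeholder: your Slurm project name
    #SBATCH --partition=icelake        # assumed partition; see the node-type guides above
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00            # walltime request (hh:mm:ss)

    module load gcc                    # illustrative module name
    ./my_analysis                      # hypothetical user-compiled program

Submit with 'sbatch myjob.sh' and monitor the queue with 'squeue -u $USER'.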
https://docs.hpc.cam.ac.uk/hpc/user-guide/quickstart.html
Cloud Access
On request we can provide limited resources on our private OpenStack cloud through an Azimuth Cloud Portal environment.
Resources on the HPC clusters will be allocated via Slurm projects.
The UK SKA Regional Centre (UKSRC) is funded by:
IRIS, which is funded by the Science and Technology Facilities Council (STFC).
STFC is one of the seven research councils within UK Research & Innovation.
https://www.iris.ac.uk/portfolio/stfc-cloud/