Resources per team: HPC Cluster
Resource Access: ssh access to the head node; requires VPN when connecting from outside the University of Manchester.
Data cube access: SDC3 data in both formats is available via a read-only network share.
Resource management: A SLURM batch queue manager will manage user jobs. Users will have access to a shared network file system.
Software management: Options include Singularity containers, local module packages, or user-compiled code.
Documentation: Users will be provided with local user guide documentation.
Support: A mailing list will be provided, with support given on a best-efforts basis.
Resource location: Jodrell Bank Observatory, University of Manchester, United Kingdom.
The UKSRC resource at Jodrell Bank comprises a multi-node HPC/GPU cluster running a SLURM batch queue system.
The cluster offers nodes with either 24 threads/64 GB RAM or 16 threads/1.5 TB RAM, with 38 nodes carrying dual NVIDIA A100 GPU cards.
Per-user resources
The cluster operates a fair-share scheduling algorithm across all users.
OS - CentOS 7, with CUDA and CASA available.
GPUs if any
2x NVIDIA A100 GPUs in each of 38 nodes.
To set up user accounts please email Anthony Holloway.
ssh access to the cluster head node; the University of Manchester VPN is required from offsite.
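As a sketch, access from the command line might look like the following; the hostname is a placeholder, not the cluster's actual address, which will be given in the local user guide:

```shell
# From offsite, connect to the University of Manchester VPN first.
# Then ssh to the head node (hostname below is a placeholder).
ssh your-username@hpc-head.example.ac.uk
```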
How to run a workflow
SLURM batch script submission, using your own code, local software modules, or containers.
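A minimal SLURM batch script might look like the following sketch; the module name, data-cube mount path, and script name are placeholders, not the cluster's actual values:

```shell
#!/bin/bash
#SBATCH --job-name=sdc3-analysis   # name shown in the queue
#SBATCH --ntasks=1                 # a single task
#SBATCH --cpus-per-task=4          # CPU threads for the task
#SBATCH --mem=16G                  # memory request
#SBATCH --time=01:00:00            # wall-clock limit
#SBATCH --gres=gpu:1               # request one GPU (GPU nodes only)

# Load a locally provided module package (module name is a placeholder).
module load casa

# Run your own code against the read-only data cube share
# (mount path and script name are placeholders).
python my_analysis.py /data/sdc3/cube.fits
```

The script would be submitted with `sbatch myjob.sh` and monitored with `squeue -u $USER`.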
Accessing the data cube
The data cube will be available via a network volume mounted on each compute node.
Users can compile their own code, make use of module packages, or run their own Singularity containers.
Singularity or Docker container images can be run using Singularity.
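For example, a Docker image can be pulled from a registry and converted to a Singularity image, then executed with the data share bind-mounted into the container; the image name and paths below are placeholders:

```shell
# Pull a Docker image and convert it to a Singularity image file
# (image name is a placeholder).
singularity pull docker://example/sdc3-tools:latest

# Run a command inside the container, bind-mounting the read-only
# data share (paths are placeholders).
singularity exec --bind /data/sdc3:/data sdc3-tools_latest.sif \
    python /opt/tools/analyse.py /data/cube.fits
```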
A user start-up guide will be provided. Further support will be given on a best-efforts basis.
The JBCA computing team will manage and monitor the resources.
Support will be via a dedicated e-mail mailing list.
Credits and acknowledgements
The UK SKA Regional Centre (UKSRC) is funded by the Science & Technology Facilities Council (STFC).
IRIS is funded by the Science & Technology Facilities Council.
STFC is one of the seven research councils within UK Research & Innovation.