Summary
Resources per team: HPC Cluster
Resource Access: SSH access to the head node; a VPN connection is required from outside the University of Manchester.
Resource management: A SLURM batch queue manager will manage user jobs. Users will have access to a shared network file system.
Software management: Options include Singularity containers, local module packages, or users' own code.
Documentation: Users will be provided with local user guide documentation.
Support: A mailing list will be provided, with support given on a best-efforts basis.
Resource location: Jodrell Bank Observatory, University of Manchester, United Kingdom.
Technical specifications
Overview
The UKSRC resource at Jodrell Bank comprises a multi-node HPC/GPU cluster running a SLURM batch queue system.
Technical specifications
The cluster offers nodes with either 24 threads and 64 GB RAM or 16 threads and 1.5 TB RAM; 38 of the nodes have dual A100 GPU cards.
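Node details can also be confirmed from the head node using the standard SLURM query commands; the format string below is only a generic sketch, and the exact partition and GRES names are site-specific.

    # List every node with its partition, CPU count, memory (MB) and any GPUs (GRES)
    sinfo -N -o "%N %P %c %m %G"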
Per user resource
The cluster operates a fair-share algorithm across all users.
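As a rough sketch of how to check your own fair-share standing, SLURM's sshare command can be run on the head node (this assumes the SLURM accounting database is enabled, which is a site configuration detail):

    # Show the fair-share factor for your own user/account association
    sshare -u $USER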
Software installed
OS: CentOS 7. Installed software includes CUDA and CASA.
GPUs if any
2x Tesla A100 GPUs in each of 38 nodes.
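A hedged example of requesting these GPUs through SLURM is shown below; the GRES name "gpu" and the default partition are assumptions, so check the local user guide for the exact names.

    # Interactive session on a GPU node with both A100s allocated
    srun --gres=gpu:2 --pty bash
    nvidia-smi    # confirm the allocated GPUs are visible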
User access
Account Setup
To set up a user account, please email Anthony Holloway.
Logging in
SSH access to the cluster head node; access from offsite requires the University of Manchester VPN.
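As a sketch, logging in looks like the following; the username and head-node address are placeholders, and the real values are provided when your account is set up.

    # From a University of Manchester network, or over the VPN from offsite
    ssh <username>@<head-node-address>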
How to run a workflow
Jobs are run by submitting SLURM batch scripts, using your own code, local software modules, or containers; a minimal example script is given below.
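The following is an illustrative batch script only; the resource requests, module name and application command are assumptions and should be adapted to your workflow and the local configuration.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out

    module load casa                           # illustrative module name
    casa --nologger --nogui -c my_script.py    # illustrative application command

Submit the script with "sbatch example.sh" and monitor it with "squeue -u $USER".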
Software management
Users can compile their own code, make use of the module packages, or run their own Singularity containers; see the example commands below.
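A minimal sketch of working with the module packages follows; the package name is an example only and the locally available modules will differ.

    module avail              # list the packages installed locally
    module load cuda          # add a package to your environment
    module list               # show what is currently loaded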
Containerisation
Singularity and Docker container images can both be run using Singularity, as in the sketch below.
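For example, a Docker image can be pulled from a registry and run through Singularity as follows; the image name is purely illustrative.

    # Pull a Docker image and convert it to a Singularity image file (.sif)
    singularity pull docker://ubuntu:22.04
    # Run a command inside the resulting image
    singularity exec ubuntu_22.04.sif cat /etc/os-release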
Documentation
A user start-up guide will be provided. Further support is available via a mailing list on a best-efforts basis.
Resource management
The JBCA computing team will manage and monitor the resources.
Support
Support will be via a dedicated e-mail mailing list.
Credits and acknowledgements
The UK SKA Regional Centre (UKSRC) is funded by the IRIS programme.
IRIS is funded by the Science & Technology Facilities Council.
STFC is one of the seven research councils within UK Research & Innovation.