Resources per team

  • Cluster architecture

Resource Access

  • SSH credentials are sent by email to users/teams. Access must be from a static IP address or via VPN

Data cube access

  • The data cube is stored on distributed shared storage and is visible to all teams

Resource management

  • Slurm is used for resource and job management, to avoid mutual interference between users

Software management

  • Any software installation must be done by a superuser (administrator). Software can be installed in advance in the teams' directories.



  • Please refer to the Support section below

Resource location

  • China

Technical specifications


  • Three cluster architectures are deployed

  • Intel(R) Xeon(R) Gold 6132 CPU @ 2.60 GHz (16 × 64 GB DDR4)

  • Kunpeng 920 CPU @ 2.60 GHz (16 × 64 GB DDR4)

  • Nvidia Tesla V100 PCIe 16 GB

  • A total of 23 CPU nodes and 4 GPU nodes.

Per team resource

  • 1 TB disk

  • 5000 node hours

  • 3 GB RAM per node by default; a selected number of nodes provide 1 TB per node

Other services

  • Jupyter Notebooks


  • Docker

  • Singularity


  • A high-speed internet link of up to 5 Gbps connects to the prototype. This link can be used for massive data transfers.

  • A dedicated 200 Mbps network link is reserved for SKA activities. It allows worldwide users to log in to the clusters remotely; SDC users will use this link. It can also be used for downloading data at the GB scale.
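As a sketch of how these services might be used in practice (the image name, username, hostname, and paths below are placeholders, not actual ChinaSRC values):

```shell
# Run an analysis tool inside a Singularity container on the cluster.
# "my_tools.sif" is a hypothetical image; build or pull your own in advance.
singularity exec my_tools.sif python3 analyse.py

# Transfer a large data product over the network link with rsync, which can
# resume partial transfers. "user" and "cluster.example.cn" are placeholders.
rsync -avP user@cluster.example.cn:/shared/datacube/product.fits ./
```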

User access

Opening accounts

  • Accounts should be set up in advance of the challenge's start date. To do this, please email

  • Users must provide a static IP address segment so that they can access the cluster remotely.

  • The lifetime of each account can be configured. If a user wants to keep an account longer, they can ask us.

Number of accounts available

  • When SDC3 starts, each user is assigned an account if the number of users is small; otherwise, 1-2 accounts are assigned to each team.


  • Credentials are sent by email to users/teams

Logging in

  • Please use your credentials (username and password) to log in to SHAO's cluster:

  • It is highly recommended that you reset your password on first login. You can do this by logging in with the random password you were issued (this is currently the only way to change your password).

  • Before resetting your password and logging in to SHAO's cluster, please provide a static IP address so that you can access the cluster remotely.
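The login steps above might look like the following (the hostname is a placeholder; use the address given in your credentials email):

```shell
# Log in with the username and random password sent by email.
ssh your_username@cluster.example.cn

# On first login, change the random password to one of your own.
passwd
```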

How to run a workflow

The supercomputing systems use Slurm for resource and job management, to avoid mutual interference and improve operational efficiency. All jobs, whether for program debugging or production calculations, must be submitted through the interactive (srun), batch (sbatch), or allocation (salloc) commands; related commands can be used to query job status after submission. Please do not run jobs (other than compiling) directly on the login node, so as not to affect the normal use of other users.
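A minimal batch script might look like this (the partition name, task counts, and executable name are placeholders; check the cluster's actual partition names with `sinfo`):

```shell
#!/bin/bash
#SBATCH --job-name=sdc3-test      # job name shown in the queue
#SBATCH --partition=cpu           # placeholder partition name; see `sinfo`
#SBATCH --nodes=1                 # number of nodes requested
#SBATCH --ntasks-per-node=16      # tasks (cores) per node
#SBATCH --time=02:00:00           # wall-time limit
#SBATCH --output=%x-%j.out        # stdout file: <jobname>-<jobid>.out

# Launch the workload under Slurm's control; "./my_analysis" is a placeholder.
srun ./my_analysis
```

Submit the script with `sbatch job.sh`, check its status with `squeue -u $USER`, and cancel it with `scancel <jobid>`.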

Resource management


  • At the end of the SDC, statistics of resource usage for each team/user can be printed for reference.
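Individual users can also check their own usage with Slurm's accounting commands, assuming job accounting is enabled on the cluster (the start date below is illustrative):

```shell
# Summarise your own jobs (ID, name, elapsed time, CPUs, state) since a date.
sacct -u $USER --starttime=2023-01-01 \
      --format=JobID,JobName,Elapsed,NCPUS,State

# Cluster-wide utilisation per user (requires the Slurm accounting database).
sreport cluster AccountUtilizationByUser start=2023-01-01
```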

Limitation of use

  • To avoid unlimited use and to keep the challenge fair, some constraints will be defined.


Main contact is Xiaocong Wu -

For technical questions, please contact, and CC to:

Credits and acknowledgements

Teams that made use of the ChinaSRC resources in a publication are asked to add the following sentence to their Acknowledgements:

"This work used resources of China SKA Regional Centre prototype (An et al. arXiv:2206.13022) funded by the National Key R&D Programme of China and Chinese Academy of Sciences."