Summary
Resources per team
Cluster architecture
Resource Access
SSH credentials are sent by email to users/teams. Access must be from a registered static IP address.
Resource management
Slurm for resource and job management to avoid mutual interference
Software management
Any software installation must be done by a superuser (administrator). Software can be installed in advance in the teams' directories.
Documentation
Manuals with instructions: https://shaoska-user-guide.readthedocs.io/
Support
Please refer to the Support section below
Resource location
China
Technical specifications
Overview
Three cluster architectures are deployed
Technical specifications
Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (16*64 GB DDR4)
Intel(R) Xeon(R) Gold 6448H @ 2.40GHz
Kunpeng 920 CPU @ 2.60GHz (16*64 GB DDR4)
NVIDIA V100 / A40
A total of 35 x86 CPU nodes, 12 ARM nodes, and 4 GPU nodes.
Per team resource
1 TB disk
5000 node hours
36 GB RAM per core by default. A selected number of nodes have 1 TB or 4 TB RAM per node.
Other services
Jupyter Notebooks
CARTA
Docker
Singularity
Instance
Network
A high-speed internet connection of up to 10 Gbps linked to the prototype. This link can be used for massive data transfers.
A dedicated network with a bandwidth of 200 Mbps, specially reserved for SKA activities. It allows worldwide users to log in to the clusters remotely; SDC users will use this link. It can also be used for downloading data at the GB level.
User access
Opening accounts
Accounts should be set up in advance of the challenge's start date. To do this, please email wuxc@shao.ac.cn.
Users must provide a static IP address segment so that they can access the cluster remotely.
The lifetime of each account is configurable. If any user wants to keep their account longer, they can ask us.
Number of accounts available
When SDC3 starts, each user is assigned an account if the number of users is small; if the number of users is too large, 1-2 accounts are assigned to each team instead.
Authentication
Credentials are sent by email to users/teams
Logging in
Please visit http://202.127.3.158:8882/portal or https://shaoska-user-guide.readthedocs.io/ for the latest information.
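As a sketch of the remote-login step: the hostname, port, and account name below are placeholders (the real values arrive in your credential email), and the connection must originate from the static IP you registered.

```shell
# Hypothetical SSH login sketch -- all values are placeholders, not the
# cluster's real address; substitute the details from your credential email.
CLUSTER_HOST="login.example.cn"   # placeholder hostname
CLUSTER_USER="sdc3_team01"        # placeholder account name

# Connect from your registered static IP (command shown, not executed here):
#   ssh "${CLUSTER_USER}@${CLUSTER_HOST}"
echo "ssh ${CLUSTER_USER}@${CLUSTER_HOST}"
```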
How to run a workflow
The supercomputing systems use Slurm for resource and job management to avoid mutual interference and improve operational efficiency. All jobs, whether for program debugging or production calculations, must be submitted through the interactive (srun), batch (sbatch), or resource-allocation (salloc) commands; related commands can be used to query job status after submission. Please do not run jobs directly on the login node (except compiling), so as not to affect other users.
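The batch path above can be sketched as follows. This is a minimal example assuming Slurm defaults; the partition, module setup, and the executable name are placeholders, so adjust them to the cluster's actual configuration (see the user guide for site-specific options).

```shell
# Minimal Slurm batch-submission sketch. Writes a job script, then shows
# the submit/monitor commands (commented out; run them on the login node).
cat > job.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=sdc3-demo      # job name shown in squeue
#SBATCH --nodes=1                 # number of nodes requested
#SBATCH --ntasks-per-node=4       # tasks per node
#SBATCH --time=01:00:00           # wall-clock limit
#SBATCH --output=%x-%j.out        # stdout file (job name + job id)

srun ./my_analysis                # placeholder executable
EOF

# Submit and monitor:
#   sbatch job.slurm      # batch submission
#   squeue -u "$USER"     # query job status
#   scancel <jobid>       # cancel a job if needed
```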
Resource management
Statistics
At the end of the SDC, statistics of resource usage for each team/user can be printed for reference.
Limitation of use
To avoid unlimited use and to ensure a fair challenge, some constraints will be defined.
Support
Main contact is Xiaocong Wu - wuxc@shao.ac.cn
For technical questions, please contact shaoska@shao.ac.cn, and CC to:
Xiaocong Wu - wuxc@shao.ac.cn
Shaoguang Guo - sgguo@shao.ac.cn
Zhijun Xu - xuthus@shao.ac.cn
Tao An - antao@shao.ac.cn
Credits and acknowledgements
Teams that made use of the ChinaSRC resources in a publication are asked to add the following sentences to the Acknowledgements:
"This work used resources of China SKA Regional Centre prototype funded by Ministry of Science and Technology of the People’s Republic of China and Chinese Academy of Sciences.
References:
1. An, Wu, Hong. SKA data take centre stage in China. Nat. Astron., 2019, Vol. 3, p. 1030
2. T. An, et al. Status and progress of China SKA Regional Centre prototype. Sci. China-Phys. Mech. Astron., 65: 129501 (2022)"