Summary
Resources per team
Workstations assigned per team
Resource Access
SSH access. VPN access may be required for some workstations. Users accessing from outside Japan may need to undergo the relevant legal security checks.
Data cube access
The SDC3 data cube will be stored on the workstations, which users/teams can access.
Resource management
Users/teams can occupy the assigned workstations or share them with other users/teams.
Software management
Users/teams can install their own software/Docker where possible but may need to request installations from JP-SRC superusers.
Documentation
Brief instructions will be provided in English and Japanese.
Support
Support from JP-SRC will be provided via email and Slack.
Resource location
Japan
Technical specifications
Overview
JP-SRC is an inter-university cooperation led by the SKA1 Japan Promotion Group (SKAJ) of the National Astronomical Observatory of Japan (NAOJ). JP-SRC currently operates prototype servers used for data analysis of SKA precursors and pathfinders. From SDC3 onwards, JP-SRC offers these servers for science data challenges as well.
JP-SRC will be an OpenStack-based cloud system. However, at least during SDC3a, the cloud is still being commissioned and will not be open. Instead, JP-SRC will provide users/teams with individual JP-SRC workstations.
Technical specifications
JP-SRC contains heterogeneous workstations, with (in total):
152 cores (6.41 TFlops FP64)
6 GPUs (242.57 TFlops FP32, 13.98 TFlops FP64)
1.86 TB DRAM memory, 3.71 TB M.2/SSD
338 TB HDD storage
Network speed is typically 1 Gbps
SKAJ has a budget to slightly enhance the above specifications for SDC3a.
Per user resource
Resources per user/team will be determined by JP-SRC, taking into account the number of users/teams and the available resources.
JP-SRC will allocate reasonable resources as requested for the analysis, on a first-come, first-served basis.
Software installed
Linux OS (typically Ubuntu 20.04). Docker and Singularity will be supported.
Participants can install their own software where possible but may need to request installations from JP-SRC superusers.
Volume of Resource
The volume of resources is ultimately limited by the physical capacity of the assigned workstations.
Due to the limited resources, the number of users will be limited to ~10.
GPUs if any
2 x NVIDIA Quadro GP100 16GB
2 x NVIDIA GeForce RTX3090 Turbo 24GB
2 x NVIDIA RTX A6000 48GB
User access
Setting up an Account
To set up an account, please contact skaj-src-inquiry@ml.nao.ac.jp.
Logging in
SSH is used to log in to the assigned workstations. VPN access may be required for some workstations.
Users accessing from outside Japan may need to undergo the relevant legal security checks.
Access permission will be granted to a few members per team, namely those who will actually analyse the data.
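As a sketch, the SSH login can be simplified with a `~/.ssh/config` entry like the one below. The hostname, username, and key path are placeholders, not real JP-SRC values; JP-SRC supplies the actual details when the account is set up.

```
# Hypothetical entry -- real hostname/username come from JP-SRC.
Host jp-src-ws
    HostName ws01.jp-src.example
    User your_account
    IdentityFile ~/.ssh/id_ed25519
    ServerAliveInterval 60
```

With this in place, `ssh jp-src-ws` connects directly (after starting the VPN client first, where a workstation requires it).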
How to run a workflow
Users are free to run their workflows on their assigned workstations.
Access is command-line only; web-based environments and execution are not yet supported.
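Since access is command-line only, long-running workflows are best detached from the SSH session so they survive a disconnect. A minimal sketch, where `pipeline.sh` is a stand-in for the real analysis script:

```shell
# Create a stand-in pipeline script for this sketch.
printf '#!/bin/sh\necho "pipeline finished"\n' > pipeline.sh
chmod +x pipeline.sh

# Detach the job from the terminal and capture all output in run.log.
nohup ./pipeline.sh > run.log 2>&1 &

wait            # in practice, log out here and check run.log later
cat run.log
```

Terminal multiplexers such as tmux or screen serve the same purpose, if installed on the workstation.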
Accessing the data cube
The primary data cube provided by SKAO for SDC3 will be stored/cloned by JP-SRC on the workstations, which teams can access.
Users can make multiple copies on their assigned workstations.
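A sketch of the copying step, so the shared cube stays untouched while each user works on a private copy. All paths and filenames below are placeholders; JP-SRC will announce the real location of the cube on each workstation.

```shell
# Placeholder locations for this sketch.
SHARED_CUBE="shared/sdc3_cube.fits"     # stand-in for the shared cube
WORKDIR="my_workdir"                    # your private working area

# Create the stand-in shared cube (in reality it is already in place).
mkdir -p shared "$WORKDIR"
echo "cube data" > "$SHARED_CUBE"

# Work on a private copy, never on the shared original.
cp "$SHARED_CUBE" "$WORKDIR/"
ls "$WORKDIR"
```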
Software management
Users can install their own software where possible but may need to request installations from JP-SRC superusers.
Containerisation
Users can install and use Docker or Singularity to run containers on their workstations.
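As a quick sketch: the commented lines show typical container invocations (the image names and paths are hypothetical, not JP-SRC-provided), and the loop below checks which runtime a workstation actually has.

```shell
# Typical invocations (hypothetical image/path names):
#   docker run --rm -v "$HOME/sdc3:/data" my-pipeline:latest step1 /data/cube.fits
#   singularity exec --bind "$HOME/sdc3:/data" my-pipeline.sif step1 /data/cube.fits

# Check which container runtimes are present on this machine.
for runtime in docker singularity; do
    if command -v "$runtime" > /dev/null 2>&1; then
        echo "$runtime: available"
    else
        echo "$runtime: not installed (ask a JP-SRC superuser)"
    fi
done > runtime_check.txt
cat runtime_check.txt
```

Singularity needs no root privileges to run an existing .sif image, which makes it the lighter option on a shared workstation; Docker may need setup by a superuser.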
Documentation
Brief instructions will be provided in English and Japanese.
Resource management
The JP-SRC coordinator will manually manage and monitor the resources.
Users/teams can occupy the assigned workstations or share them with other users/teams.
Users/teams affiliated with Japanese research institutions will be prioritised if there are too many requests.
Support
Support from JP-SRC will be provided via email and Slack.
Credits and acknowledgements
JP-SRC is funded by SKAJ and operated in cooperation with Kumamoto University and Nagoya University.