
Science Data Challenge 3

Computational Resources

Swiss SRC


Summary

Resources per team

  • A maximum of 5,000 compute node hours

  • 1 TB of dedicated permanent storage

  • Access to up to 12 PB of shared scratch storage

Resource Access

  • SSH access, or interactive access through Jupyter notebooks

  • UI-based applications can be run via X11 forwarding.
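As a hedged illustration of the SSH route (the username is a placeholder; CSCS documents logins as going through the ela.cscs.ch front end):

    ssh -Y <username>@ela.cscs.ch   # log in to the CSCS front end; -Y enables X11 forwarding
    ssh -Y daint.cscs.ch            # hop from the front end to the compute system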

Software management

  • Participants are expected to install their own software, but support can be requested via the ticketing system (https://support.cscs.ch).

Documentation

  • Resource access information can be hosted on the SDC3 webpage

  • Information on how to access the Swiss SRC is available at CSCS' user portal (https://user.cscs.ch).

Support

  • Support is available via ticketing system (https://support.cscs.ch)

  • Ticket responses are limited to business days

  • Moderate knowledge of Linux and Job Schedulers is expected.

Technical specifications

  • Intel® Xeon® E5-2690 v3 @ 2.60 GHz (12 cores, 64 GB RAM) and NVIDIA® Tesla® P100 16 GB per node; 5,704 nodes

  • Technical information can be found at https://www.cscs.ch/computers/piz-daint/

Per-team resources

  • Up to 5,000 compute node hours on the GPU part of the system. If the GPU cannot be used, the CPU on the node can still be used

  • Up to 1 TB of permanent storage per team (dedicated) and up to 12 PB of scratch capacity to use

Volume of resource

  • Teams can use up to 5,000 compute node hours and up to 1 TB of permanent storage

  • The specific amounts required should be stated when the request is made

GPUs if any

  • Teams can make use of the P100 GPUs on the system (recommended), but can also use just the CPUs on the nodes
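As a hedged sketch of how a team might verify GPU access interactively (the wall time is a placeholder and project account flags are omitted; on this system, GPU nodes are selected with the Slurm constraint "gpu"):

    salloc --nodes=1 --constraint=gpu --time=00:10:00   # request one GPU node interactively
    srun nvidia-smi                                     # confirm the Tesla P100 is visible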

User access

  • Each group must submit a formal Small Development Project proposal in order to get access to the available resources: https://www.cscs.ch/user-lab/allocation-schemes/development-projects/

  • To start the process, applicants must first send an email to projectoffice@cscs.ch requesting that their accounts be opened so that they can apply for a development project.

  • Approval is given at CSCS' discretion after passing a technical review, which can take around one month.

  • Users should be aware that the service is shared with other users and that their usage patterns may impact others. When writing the proposal, groups should pay special attention to avoiding the following:

    • Creating many small files in the $SCRATCH file system (a Lustre file system)

    • Submitting thousands of short-lived jobs that each use very few nodes (in this case, the GREASY scheduler should be used, as sketched after this list: https://user.cscs.ch/tools/high_throughput/)

    • Querying the queue status too frequently (e.g. watch squeue): the SLURM scheduler has a 5-minute scheduling cycle, so probing it every 2 seconds makes no difference.

    • Running applications on the login nodes of the cluster: the Swiss SRC has dedicated pre- and post-processing partitions for this purpose.
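As a hedged sketch of the GREASY pattern referenced above (the task commands are placeholders; the CSCS page linked above is authoritative): GREASY reads a plain task file, one command per line, and runs the whole list inside a single Slurm allocation instead of thousands of tiny jobs.

    # tasks.txt: one short-lived task per line (placeholder commands)
    ./process_tile.sh tile_001
    ./process_tile.sh tile_002
    ./process_tile.sh tile_003

    # inside a batch job: load GREASY and execute the task list in one allocation
    module load daint-gpu GREASY
    greasy tasks.txt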

Logging in

  • Access to the Swiss SRC is described at CSCS' user portal (https://user.cscs.ch/access/running/piz_daint/)

How to run a workflow

  • Running jobs at the Swiss SRC is described at CSCS' user portal (https://user.cscs.ch/access/running/piz_daint/).
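As a minimal, hedged sketch of a Slurm batch script for the GPU nodes (the job name, wall time, and executable are placeholders; the portal page above has the authoritative template):

    #!/bin/bash -l
    #SBATCH --job-name=sdc3-run      # placeholder job name
    #SBATCH --nodes=1                # one Cray XC50 node
    #SBATCH --time=01:00:00          # placeholder wall time
    #SBATCH --constraint=gpu         # target the GPU part of the system

    module load daint-gpu            # GPU-node software stack
    srun ./my_application            # placeholder executable

The script would then be submitted with sbatch, as noted under Resource management below.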

Accessing the data cube

  • The data cube will be made available in a shared location by the SDC3 organisers (to be communicated at the beginning of the challenge)
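Once the location is announced, a team would likely stage the cube into its scratch space before processing; a hedged sketch with an entirely hypothetical source path:

    # the source path below is hypothetical; use the location the organisers communicate
    cp /path/to/shared/sdc3_cube.fits $SCRATCH/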

Software management

  • Users can install and compile their own software

  • CSCS-provided software can be accessed through environment modules.
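As a hedged illustration of the environment-modules workflow (the module names are examples and should be checked with module avail):

    module avail                 # list software provided by CSCS
    module load daint-gpu        # select the GPU-node software stack
    module load cray-python      # example: a CSCS-provided Python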

Containerisation

  • CSCS provides support for running container images on the Swiss SRC. More information can be found at https://user.cscs.ch/tools/containers/
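The container runtime CSCS provides on this system is Sarus; as a hedged sketch (the image name is a placeholder, and the linked page is authoritative):

    module load sarus                                        # make the Sarus runtime available
    sarus pull ubuntu:22.04                                  # pull a placeholder image from Docker Hub
    srun -C gpu sarus run ubuntu:22.04 cat /etc/os-release   # run it on a compute node, inside a job allocation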

Documentation

  • Documentation hosted at CSCS' user portal (https://user.cscs.ch)

Resource management

  • Jobs have to be submitted through the SLURM workload manager.

  • More information can be found at https://user.cscs.ch/access/running/ and https://user.cscs.ch/access/running/piz_daint/
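A short sketch of the resulting job lifecycle (job.sh refers to the batch script sketched under "How to run a workflow" above; the job ID is a placeholder):

    sbatch job.sh        # submit the batch script to SLURM
    squeue -u $USER      # check queue status (sparingly; see the proposal advice above)
    scancel <jobid>      # cancel a job by its ID if needed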

Support

  • Support can be requested through the CSCS user support ticketing system. Ticket responses are limited to business days.

Credits and acknowledgements 

Users must quote and acknowledge the use of CSCS resources in all publications related to their production and development projects as follows: "This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID ###"

© SKAO 2022 | SKASDC3 (at) skao.int | Data Protection Notice