Summary

Resources per team

  • Maximum of 5,000 node hours

  • 1 TB of permanent storage per team

  • 12 PB of scratch storage (shared)

Resource Access

  • SSH access or interactive access through Jupyter notebooks

  • UI-based applications can be run via X11 forwarding (see the sketch under "Logging in" below)

Software management

  • Participants should install their own software, but support can be requested via the ticketing system (https://support.cscs.ch)

Documentation

  • Resource access information can be hosted on the SDC3 webpage

  • Information on how to access the Swiss SRC is available at the CSCS user portal (user.cscs.ch).

Support

  • Support is available via the ticketing system (https://support.cscs.ch)

  • Ticket responses are limited to business days

  • Moderate knowledge of Linux and job schedulers is expected.

Technical specifications

Per-team resources

  • Up to 5,000 compute node hours on the GPU part of the system. If the GPU cannot be used, the CPU on the node can still be used

  • Up to 1 TB of permanent storage per team (dedicated) and up to 12 PB of scratch capacity to use

Volume of resource

  • The teams can use up to 5,000 compute node hours and up to 1 TB of storage

  • The specific amounts needed should be stated when the request is made

GPUs (if any)

  • Teams can make use of the NVIDIA P100 GPUs on the system (recommended), but can also use just the CPUs on the nodes

User access


  • Each group must submit a formal Small Development Project proposal in order to get access to the available resources: https://www.cscs.ch/user-lab/allocation-schemes/development-projects/

  • To start the process, applicants must first send an email to projectoffice@cscs.ch requesting that their accounts be opened so that they can apply for a development project.

  • Approval is given at CSCS's discretion after passing a technical review, which can take around one month.

  • Users should be aware that the service is shared with other users and that their usage patterns may impact others. Typical problem areas that groups should pay special attention to when writing the proposal, and avoid in practice, are:

    • Creating many small files in the $SCRATCH file system (a Lustre file system)

    • Submitting thousands of short-lived jobs to the queue using very few nodes. In this case the GREASY scheduler should be used (https://user.cscs.ch/tools/high_throughput/); see the sketch after this list

    • Querying the queue status too frequently (e.g. watch squeue). The Slurm scheduler has a 5-minute scheduling cycle; probing it every 2 seconds makes no difference.

    • Running applications on the login nodes of the cluster. The Swiss SRC has dedicated pre- and post-processing partitions for this purpose.
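
As a sketch of the GREASY pattern referenced above, many short-lived tasks are listed in a task file and executed within a single job allocation. The file names and the module name below are assumptions; see https://user.cscs.ch/tools/high_throughput/ for the authoritative usage:

    # tasks.txt -- one short-lived task per line
    ./process_tile.sh 001
    ./process_tile.sh 002
    ./process_tile.sh 003

    # inside the Slurm batch script (module name assumed):
    module load GREASY
    greasy tasks.txt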

Logging in
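
This document does not spell out the login steps. As a minimal sketch, access to CSCS systems typically goes through the ela.cscs.ch front end, with a second hop to the cluster itself; the hostnames below are assumptions, and the authoritative instructions are at user.cscs.ch:

    # first hop to the CSCS front end, then to the cluster (hostnames assumed)
    ssh <username>@ela.cscs.ch
    ssh daint.cscs.ch

    # for UI-based applications, enable X11 forwarding on both hops
    ssh -Y <username>@ela.cscs.ch
    ssh -Y daint.cscs.ch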

How to run a workflow
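
A minimal Slurm batch script sketch for the GPU part of the system is shown below. The project ID, constraint name, and application are placeholders or assumptions; the exact values should be taken from user.cscs.ch:

    #!/bin/bash -l
    #SBATCH --job-name=sdc3-example
    #SBATCH --account=<project_id>   # project ID assigned to the team's development project
    #SBATCH --constraint=gpu         # request the GPU part of the system (constraint name assumed)
    #SBATCH --nodes=1
    #SBATCH --time=02:00:00          # wall time, counted against the 5,000 node-hour budget

    # launch the application on the allocated node
    srun ./my_application

Submit the script with sbatch job.sh and check its status occasionally with squeue -u $USER (not via watch, as noted above).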

Accessing the data cube

  • The data cube will be made available in a shared location by the SDC3 organisers (to be communicated at the beginning of the challenge)

Software management

  • Users can install and compile their own software

  • CSCS provided software can be accessed through environment modules.
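
For example, a typical environment modules session might look as follows; the module names are assumptions, and the actual software stack is listed by module avail:

    module avail              # list software provided by CSCS
    module load daint-gpu     # select the GPU software stack (module name assumed)
    module load cray-python   # load a CSCS-provided Python (module name assumed)
    module list               # show currently loaded modules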

Containerisation
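
This document does not specify a container workflow. As a hedged sketch, CSCS systems have typically provided the Sarus container engine for running Docker images; assuming it is available on the Swiss SRC, usage would look like:

    # pull a public Docker image and run it through Slurm (availability assumed)
    module load sarus
    sarus pull ubuntu:22.04
    srun sarus run ubuntu:22.04 cat /etc/os-release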

Documentation

Resource management

Support

Credits and acknowledgements

Users must quote and acknowledge the use of CSCS resources in all publications related to their production and development projects as follows: "This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID ###"