Science Data Challenge 3

Computational Resources

EngageSKA - UC-LCA SRC Prototype

Summary

Resources per team

  • Virtual Machines (VMs) assigned per team on the ENGAGE SKA Cluster + HPC Cluster (LCA-UC)

Resource Access

  • SSH access to the VMs after setting up the VPN

  • Direct SSH to HPC centre accounts (managed by LCA-UC)

  • Further details and options will be explained in dedicated access documentation.

Data cube access

  • Shared directory, read-only mode

Resource management

  • VMs will naturally isolate team environments, and the VM flavour will be fixed.

Software management

  • Users can install software, but EngageSKA supports only the tools that are central to the system; it does not offer help with troubleshooting other software or code.

  • Participants will have sudo access on their VMs.

Documentation

  • Resource access information will be hosted on the SDC3 webpage, and workflows can be run freely since each team has access to its VM.

  • Documentation on accessing resources and running workflows will be hosted and made available.

Support

  • Contacts are listed in the Support section. An FAQ and a mailing list will be provided. Support will be given on a best-effort basis.

  • For UC-LCA accounts, support is available via a ticketing system (helpdesk). Ticket responses are limited to business days. Moderate knowledge of Linux and job schedulers is expected.

Resource location

  • Portugal

Technical specifications

Overview

  • The facility makes Virtual Machines (VMs) available through OpenStack. To access a VM, users need to request an account and install the EngageSKA VPN (OpenStack is reachable only from inside the VPN). Once on the VPN, the VM can be reached with any SSH client via its floating IP address; a connection sketch follows this list.

  • Detailed information about the EngageSKA cluster can be found in the SKA telescope developer portal, including the cluster specifications, how to access the cluster, the network, using the VPN, and the OpenStack platform.

  • In addition, LCA-UC will provide access to its HPC facility.
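
  • In practice, access reduces to a standard SSH connection made over the VPN. Below is a minimal sketch in Python using the paramiko library; the floating IP, username, and key path are hypothetical placeholders for the values provided with your account.

    import os
    import paramiko

    # Hypothetical placeholders: use the floating IP, account name, and key
    # provided for your team's VM.
    FLOATING_IP = "192.0.2.10"
    USERNAME = "sdc3user"
    KEY_PATH = os.path.expanduser("~/.ssh/id_rsa")

    client = paramiko.SSHClient()
    # Accept the VM's host key on first connection (reachable only inside the VPN).
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(FLOATING_IP, username=USERNAME, key_filename=KEY_PATH)

    # Run a trivial command to confirm the session works.
    _, stdout, _ = client.exec_command("hostname")
    print(stdout.read().decode().strip())
    client.close()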

Per-user resources

  • Suitable flavours for VMs on the ENGAGE SKA Cloud are (these may increase after the current upgrade cycle):

    • 16/32+ vCPUs

    • 48+ GB RAM

    • 200+ GB disk

  • Suitable flavours for the LCA-UC HPC are (for a fixed CPU-time duration agreed with the SDC3 hosts):

    • 32+ cores

    • 48+ GB RAM

    • 200+ GB disk

Software installed

  • VMs run Linux CentOS. Users can install software, but EngageSKA supports only the tools that are central to the system; it does not offer help with troubleshooting other software or code.

Volume of resource

  • The ENGAGE SKA infrastructure can easily accommodate five teams in a virtualised environment, where any number of accounts (corresponding to team members) can access the assigned VM, plus another five teams on the HPC platform.

GPUs if any

  • The GPU partition provides access to NVIDIA Tesla V100 SXM2 GPUs (5120 CUDA cores).
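
  • A quick way to confirm which GPUs a job can see is to query the driver with nvidia-smi, as in the minimal Python sketch below.

    import subprocess

    # List the visible GPUs by name and total memory (CSV output).
    subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
        check=True,
    )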

User access

Opening accounts

  • Accounts are opened by request (email Domingos Nunes and/or Diogo Regateiro).

  • Accounts can be opened in advance of the challenge release, or by request on the same day.

Logging in

  1. Set up the VPN:

  • Access to OpenStack requires setting up the VPN first.

  • Instructions can be found in the access documentation to be provided.

  2. Access OpenStack:

  • Once inside the VPN, you can access OpenStack through this dashboard; the API can also be scripted, as sketched after this list.

  • Set the domain to "default" and use your VPN credentials.

  • It is highly recommended that you reset your password on first login (configuration menu inside OpenStack).
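
  • Below is a minimal sketch using the openstacksdk Python library; the auth URL and project name are hypothetical placeholders for the values in the access documentation.

    import openstack

    # Hypothetical endpoint and project; username/password are your VPN
    # credentials, and the domain is "default".
    conn = openstack.connect(
        auth_url="https://openstack.example.org:5000/v3",
        project_name="sdc3-team",
        username="sdc3user",
        password="********",
        user_domain_name="default",
        project_domain_name="default",
    )

    # List the VMs visible to your account and their status.
    for server in conn.compute.servers():
        print(server.name, server.status)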

Authentication  

  • EngageSKA is currently developing its authentication protocols.

  • Credentials are sent by email to users/teams.

How to run a workflow

  • Users are free to run their workflows as they require, since they have uninterrupted access to their assigned VM.

Accessing the data cube

  • The data cube will be available via a volume mounted on the team's VM.
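
  • Below is a minimal sketch of reading the cube without exhausting the VM's RAM, assuming the astropy package; the mount point and file name are hypothetical placeholders.

    from astropy.io import fits

    # Hypothetical path: check the read-only volume mounted on your team's VM.
    CUBE_PATH = "/data/sdc3/data_cube.fits"

    # memmap=True maps the file rather than loading the whole cube into memory.
    with fits.open(CUBE_PATH, memmap=True) as hdul:
        hdul.info()
        cube = hdul[0].data
        print("Cube shape:", cube.shape)
        # Work on one frequency channel at a time to stay within the VM's RAM.
        print("Channel 0 mean:", cube[0].mean())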

Software management

  • Users can install software, but EngageSKA only supports the tools that are central to the system. 

Containerisation

  • Users can install and use Docker or Singularity to run containers on their VMs; on the HPC cluster, Singularity is used.
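
  • Below is a minimal sketch of launching a containerised step from Python; the image name, bind path, and entry point are hypothetical placeholders for a team-built image.

    import subprocess

    IMAGE = "sdc3_pipeline.sif"   # hypothetical team-built Singularity image
    DATA_DIR = "/data/sdc3"       # hypothetical mount point of the data volume

    # Bind the data volume read-only into the container and run an entry point.
    subprocess.run(
        ["singularity", "exec",
         "--bind", f"{DATA_DIR}:{DATA_DIR}:ro",
         IMAGE, "python3", "process_cube.py"],
        check=True,
    )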

Documentation

  • Documentation is hosted on the SDC3 website and in the access documentation to be provided.

Resource management

  • Separate VMs per team will isolate the team environments. The VM flavour will be fixed for the duration of the project and will thus cap usage.

  • For the HPC cluster, resource management will be done via the SLURM job scheduler; each team will be assigned an account with a predefined number of hours (see the sketch after this list).

  • Statistics of resource usage for each team/user can be printed for reference. 

  • Beyond this, some constraints should still be defined in order to avoid unlimited use.
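
  • Below is a minimal sketch of working within these limits: jobs are charged to the team account at submission, and accumulated usage can be queried afterwards. The account and script names are hypothetical placeholders.

    import subprocess

    ACCOUNT = "sdc3_team01"   # hypothetical: the account assigned to your team

    # Submit a batch script charged against the team's predefined hours.
    subprocess.run(["sbatch", "--account", ACCOUNT, "run_pipeline.sh"], check=True)

    # Print the account's job history and elapsed times for reference.
    subprocess.run(
        ["sacct", "--account", ACCOUNT,
         "--format=JobID,JobName,Partition,Elapsed,State"],
        check=True,
    )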

Support

  • For support with our services, please contact us at:

  • Bruno Coelho - brunodfcoelho@av.it.pt (main contact point; astronomy questions)

  • Domingos Nunes - dfsn@ua.pt (technical support)

  • Diogo Regateiro - diogoregateiro@ua.pt (technical support)

  • Bruno Ribeiro - bruno.engelec@gmail.com (GPU technical support)

  • Domingos Barbosa - barbosa@av.it.pt

Credits and acknowledgements 

  • Teams making use of EngageSKA or UC-LCA resources should acknowledge them in their publications as follows:

    • The Enabling Green E-science for the Square Kilometre Array Research Infrastructure (ENGAGE-SKA) team acknowledges financial support from grant POCI-01-0145-FEDER-022217, funded by Programa Operacional Competitividade e Internacionalização (COMPETE 2020) and the Fundação para a Ciência e a Tecnologia (FCT), Portugal. This work was also funded by FCT and Ministério da Ciência, Tecnologia e Ensino Superior (MCTES) through national funds and, when applicable, co-funded by EU funds under the projects UIDB/50008/2020-UIDP/50008/2020 and UID/EEA/50008/2019.

    • The authors acknowledge the Laboratory for Advanced Computing at the University of Coimbra for providing {HPC, computing, consulting} resources that have contributed to the research results reported within this paper or work. URL: https://www.uc.pt/lca


© SKAO 2022 | SKASDC3 (at) skao.int | Data Protection Notice