Resources per team
Virtual Machine (VM) assigned per team
SSH + remote desktop (like VNC)
VPN is not required
Data cube access
Read-only access to shared folder (like NFS)
Each VM provides an isolated environment for its team
Participants will have pseudo-sudo access to their VM, so they can install the software and/or the containerisation environment they need
General documentation is available at https://spsrc-user-docs.readthedocs.io
Documentation specific to the teams participating in SDC3 will be available soon
Enquiries may be sent to firstname.lastname@example.org
Participants may join our Slack workspace by emailing email@example.com and requesting to be added.
At the Instituto de Astrofísica de Andalucía (CSIC) in Granada (Spain), we are leading the Spanish effort to host an SRC. We are currently developing an SRC prototype that aims both to support users working with SKA precursors and pathfinders and to constitute a transversal, wavelength-agnostic facility enabling knowledge exchange among a diverse community of users. We have deployed the first stage of the hardware, based on a cloud environment, so that we can provide interoperable and flexible services. We are particularly engaged in the challenge of handling SKA data to extract scientific knowledge following Open Science values. For this reason, we are identifying and integrating into our platform tools and services that enhance knowledge sharing, collaboration and transparency, which are key factors in achieving scientific reproducibility (e.g. a JupyterHub server, container engines and Virtual Observatory services).
For more details see:
The OpenStack cloud comprises 200 CPU cores and 2.5 TB of memory across five compute hypervisors, plus more than 600 TB of usable SSD storage managed by Ceph. The servers are interconnected by a 100 Gbps network, and the cluster is connected to RedIRIS (the Spanish National Research Network) via a 10 Gbps link.
Per user resource
Each team will be provided with a VM, and the resources assigned to it will be within these ranges:
16 - 32 vCPU cores
64 - 128 GB RAM
100 GB root disk using local SAS SSD
Up to 2 TB of additional block storage
NB: During the first days/weeks, teams will be provided with smaller virtual machines (16 cores and 64 GB of memory) so that they can deploy and test their software. Subsequently, we will increase the resources of their virtual machines up to 32 cores and 128 GB of memory.
We provide the following base images for virtual machines:
Teams will have pseudo-sudo access, so users will be able to install their own software
Volume of resource
Since a VM is assigned to each team, there is no limit on the number of users per team. User accounts will be managed by the team itself.
After the SDC3 Challenge deadline, the VMs assigned to teams will remain active for 2 additional months so that teams can run final checks, collect their data and tools, etc. If more time is needed, please contact the support team at firstname.lastname@example.org.
GPUs if any
No GPU resources
Users will be provided with an IP address and two port numbers: one for SSH access and another for remote desktop access.
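As an illustration, the connection details can be kept in an SSH client configuration entry. The host alias, IP, port and user name below are placeholders, not the actual values teams will receive:

```
# ~/.ssh/config — hypothetical values; replace with the IP, port and
# user name provided for your team's VM
Host sdc3-vm
    HostName 192.0.2.10   # placeholder IP
    Port 2201             # placeholder SSH port
    User teamuser         # placeholder user name
```

With such an entry in place, `ssh sdc3-vm` opens the connection; the second port is entered analogously in the remote desktop client's settings.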
How to run a workflow
Sudo access to the VM ensures that users can install and configure their own environment and deploy their preferred workflow management system.
Accessing the data cube
The data cube will be accessible via a shared, read-only folder mounted in the virtual machine.
Pseudo-sudo access to the VM will allow the end user to install their own software.
The VM will come with podman pre-installed, a daemonless container engine that serves as a secure, largely drop-in replacement for Docker. Users will also be able to install Singularity if they prefer.
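For example, a team could build its own image with podman from a Containerfile. Everything below (base image, packages, script name) is purely illustrative:

```
# Containerfile — hypothetical example; adapt base image and packages
FROM docker.io/library/python:3.10-slim
RUN pip install --no-cache-dir numpy astropy
COPY pipeline.py /opt/pipeline.py
CMD ["python", "/opt/pipeline.py"]
```

Such an image would be built with `podman build -t team-pipeline .` and run with `podman run --rm team-pipeline`; podman mirrors the Docker command-line syntax, so existing Docker recipes generally carry over unchanged.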
General documentation is available at https://spsrc-user-docs.readthedocs.io. Documentation specific to the teams participating in SDC3 will be available soon.
Separate VMs per team will keep the team environments isolated. The VM flavor will be fixed for the duration of the project and will thus cap resource usage.