All applications listed in the UCloud apps catalogue are packaged in Docker container images and deployed on the YouGene HPC cluster, hosted at Syddansk Universitet.
The YouGene supercomputer consists of 87 compute nodes with a total of 2784 CPU cores and a theoretical peak performance of 1.05 TFLOPS per CPU. Each node has either 384 GB or 768 GB of RAM.
By default, each application runs on a single node of the cluster unless multi-node deployment is enabled. The general app configuration settings are summarized below.
The user should always estimate the time needed to complete the run before submitting a job. A reliable estimate of the program's execution time is important to ensure fast job scheduling and completion.
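One common approach is to time a scaled-down trial run and extrapolate. The sketch below uses a placeholder workload and an assumed scaling factor purely for illustration:

```shell
# Sketch: time a reduced-size trial run and extrapolate the full runtime.
# The sleep stands in for a scaled-down workload, e.g. a run on a small input sample.
START=$(date +%s)
sleep 1                                   # placeholder trial workload
END=$(date +%s)
TRIAL=$((END - START))                    # trial duration in seconds
SCALE=100                                 # assumption: full problem is ~100x the trial
echo "Estimated full runtime: $((TRIAL * SCALE)) s"
```

Adding a safety margin on top of the extrapolated value reduces the risk of the job being cut short.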
There is no upper limit on the job lifetime, and the time allocation can be extended at runtime.
Before submitting a job, the user must select a machine type. The available machine types depend on the products available in the active workspace.
Selecting a machine type with a large amount of resources may result in longer scheduling times.
UCloud standard nodes
There are seven machine types:
1 vCPU and 5 GB of memory
2 vCPUs and 11 GB of memory
4 vCPUs and 23 GB of memory
8 vCPUs and 47 GB of memory
16 vCPUs and 94 GB of memory
32 vCPUs and 188 GB of memory
64 vCPUs and 376 GB of memory
UCloud fat nodes
There are seven machine types:
1 vCPU and 10 GB of memory
2 vCPUs and 22 GB of memory
4 vCPUs and 47 GB of memory
8 vCPUs and 94 GB of memory
16 vCPUs and 188 GB of memory
32 vCPUs and 376 GB of memory
64 vCPUs and 754 GB of memory
UCloud GPU nodes
There are twelve machine types:
1 NVIDIA V100 GPU, 16 vCPUs, and 44 GB of memory
2 NVIDIA V100 GPUs, 32 vCPUs, and 88 GB of memory
3 NVIDIA V100 GPUs, 48 vCPUs, and 132 GB of memory
4 NVIDIA V100 GPUs, 63 vCPUs, and 180 GB of memory
1 NVIDIA A100 GPU, 12 vCPUs, and 252 GB of memory
2 NVIDIA A100 GPUs, 24 vCPUs, and 504 GB of memory
3 NVIDIA A100 GPUs, 36 vCPUs, and 756 GB of memory
4 NVIDIA A100 GPUs, 48 vCPUs, and 1008 GB of memory
5 NVIDIA A100 GPUs, 60 vCPUs, and 1260 GB of memory
6 NVIDIA A100 GPUs, 72 vCPUs, and 1512 GB of memory
7 NVIDIA A100 GPUs, 84 vCPUs, and 1764 GB of memory
8 NVIDIA A100 GPUs, 96 vCPUs, and 2016 GB of memory
AAU general nodes
There are four machine types:
4 vCPUs and 16 GB of memory
8 vCPUs and 32 GB of memory
16 vCPUs and 64 GB of memory
64 vCPUs and 256 GB of memory
AAU GPU nodes
There is only one machine type available:
1 GPU, 10 vCPUs and 40 GB of memory
The uc-t4 machines are virtual machines deployed on the AAU OpenStack system.
A folder can be attached as a data volume inside the application container using the corresponding button on the front-end application page. Data volumes are mounted within the /work directory inside the application container, which also corresponds to the default working tree on UCloud.
Data volumes can also be mounted in multiple apps running simultaneously.
Only files and folders located in the default working tree are saved after job completion.
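The persistence rule above can be sketched with a small shell example. The paths are emulated under temporary directories so the snippet runs anywhere; in a real UCloud job, the directory of interest is simply /work:

```shell
# Sketch of the /work persistence rule. In a real UCloud job, WORK is /work;
# here a temporary directory stands in so the snippet is runnable outside the platform.
WORK="$(mktemp -d)"

mkdir -p "$WORK/results"
echo "final output" > "$WORK/results/out.dat"   # under /work: saved after the job ends

SCRATCH="$(mktemp -d)"
echo "intermediate" > "$SCRATCH/tmp.dat"        # outside /work: discarded at job end

cat "$WORK/results/out.dat"
```

Intermediate files written elsewhere (e.g. under /tmp) are lost when the job terminates, so results must be copied into /work before the allocation ends.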
Computation distributed among multiple nodes of the cluster is enabled only for a few supported applications. See the Spark Cluster app for a practical use case.
Connect to other jobs
This option is used when a job needs to use services from other jobs, such as networking or shared application file systems. By clicking the corresponding button, the user can select the ID of a running job and set a hostname parameter, which is used to assign an IP address to the node where the selected job is executed.
Attach public IP addresses
This option is used to attach a static IP address to an app deployed on UCloud. In this way it is possible to access the app via an external client. Public IPs may be used to deploy server applications (see, e.g., Rsync Server, MariaDB Server, PostgreSQL Server).
To create a new IP address, click the corresponding button and select a provider and a product.
Each IP address is unique: it is not possible to select the same IP for multiple job sessions running at the same time.
Once the IP address is allocated, the user can configure the protocol (TCP/UDP) and the corresponding port number. Project admins can also restrict the usage of specific IPs to a selected group of collaborators.
By enabling this setting, anyone with the IP address can contact the application, so measures must be taken to ensure that it is adequately protected.
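As a minimal sketch of what "reachable from outside" requires on the application side: a server exposed through a public IP must listen on all interfaces, not only on localhost. Port 8080 is an arbitrary choice here, and the availability of python3 and curl is assumed:

```shell
# Sketch: a server reached through a public IP must bind to 0.0.0.0, not 127.0.0.1.
# Port 8080 is hypothetical; the matching TCP port must be configured on the public IP.
python3 -m http.server 8080 --bind 0.0.0.0 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/)
echo "HTTP status: $STATUS"
kill "$SERVER_PID"
```

A service bound only to 127.0.0.1 would work inside the container but remain unreachable through the attached public IP, which is a common source of confusion when deploying server applications.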