
Deploy interLink virtual nodes

Learn how to deploy interLink virtual nodes on your cluster. In this tutorial you will set up all the components needed to either develop or deploy the plugin for container management on a remote host via a local Kubernetes cluster.

The installation script you are going to configure provides a complete Kubernetes manifest to instantiate the virtual node interface. It also produces an installation bash script to be executed on the remote host to which you want to delegate container execution. That script is preconfigured to authenticate incoming requests from the virtual node component and to forward the correct instructions to the OpenAPI interface of the interLink plugin (a.k.a. sidecar) of your choice. You can therefore also use this setup to develop a plugin directly, without worrying about anything else.

For a complete guide on all the possible scenarios, please refer to the Cookbook.

Requirements

  • kubectl host: a host with MiniKube installed and running
  • A GitHub account
  • remote host: a "remote" machine exposing a port that is reachable from the MiniKube host
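
If you do not already have a local cluster running on the kubectl host, a minimal sketch to bring one up and confirm connectivity (assuming MiniKube and kubectl are already installed):

# Start a local MiniKube cluster and check that kubectl can reach it
minikube start
kubectl get nodes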

Create an OAuth GitHub app

warning

In this tutorial, GitHub tokens are just an example of an authentication mechanism; any OpenID-compliant identity provider is also supported with the very same deployment script, see the examples here.

As a first step, you need to create a GitHub OAuth application to allow interLink to authenticate requests between your Kubernetes cluster and the remote endpoint.

Head to https://github.com/settings/apps and click on New GitHub App. You should now be looking at a form like this:

[Screenshot: the New GitHub App registration form]

Provide a name for the OAuth2 application, e.g. interlink-demo-test; you can skip the description unless you want to keep one for future reference. The Homepage URL is not used for our purpose either, so feel free to put https://intertwin-eu.github.io/interLink/ there.

Now make sure that refresh tokens and device flow authentication are enabled:

[Screenshot: refresh token and device flow authentication options enabled]

Disable webhooks and save by clicking on Create GitHub App:

[Screenshot: webhooks disabled and the Create GitHub App button]

You can then click on your application, which should now appear at https://github.com/settings/apps. You need to save two strings: the Client ID and, after clicking on Generate a new client secret, the corresponding Client Secret.

Now it's all set for the next steps.

Configuring your virtual kubelet setup (remote host)

Log in to the machine and download the interLink installer CLI for your OS and processor architecture from the release page, looking for the binaries starting with interlink-install. For instance, on a Linux platform with an x86_64 processor:

export VERSION=0.2.3-pre6
wget -O interlink-installer https://github.com/interTwin-eu/interLink/releases/download/$VERSION/interlink-installer_Linux_x86_64
chmod +x interlink-installer

The CLI offers a utility option to initialize an empty config file for the installation at $HOME/.interlink.yaml:

./interlink-installer --init

You are now ready to go ahead and edit the produced file with all the setup information.

Let's take the following as an example of a valid configuration file:

warning

See the release page to get the latest version, and change the value of interlink_version accordingly!

$HOME/.interlink.yaml
interlink_ip: 192.168.1.127
interlink_port: 30443
interlink_version: 0.2.1-patch2
kubelet_node_name: my-civo-node
kubernetes_namespace: interlink
node_limits:
  cpu: "10"
  memory: 256Gi
  pods: "10"
oauth:
  provider: github
  issuer: https://github.com/oauth
  grant_type: authorization_code
  scopes:
    - "read:user"
  github_user: "dciangot"
  token_url: "https://github.com/login/oauth/access_token"
  device_code_url: "https://github.com/login/device/code"
  client_id: "XXXXXXX"
  client_secret: "XXXXXXXX"

This config file has the following meaning:

  • the remote components (where the pods will be "offloaded") will listen on the IP address 192.168.1.127 on port 30443
  • deploy all the components from interLink release 0.2.1-patch2 (see the release page to get the latest one)
  • the virtual node will appear in the cluster under the name my-civo-node
  • the in-cluster components will run in the interlink namespace
  • the virtual node will advertise the following static resource availability:
    • 10 cores
    • 256GiB RAM
    • a maximum of 10 pods
  • the cluster-to-interLink communication will be authenticated via the github provider, with a token with minimum capabilities (scope read:user only), and only tokens for the user dciangot will be allowed to talk to the interLink APIs
  • token_url and device_code_url should be left as shown if you use GitHub
  • client_id and client_secret are the values noted down at the beginning of the tutorial

You are now ready to generate the manifests and the script needed for the deployment.

Log in to the machine and generate the manifests and the automatic interLink installation script with:

./interlink-installer

Follow the instructions to authenticate with the device code flow and, if everything went well, you should get an output like the following:

please enter code XXXX-XXXX at https://github.com/login/device


=== Deployment file written at: /Users/dciangot/.interlink/interlink.yaml ===

To deploy the virtual kubelet run:
kubectl apply -f /Users/dciangot/.interlink/interlink.yaml


=== Installation script for remote interLink APIs stored at: /Users/dciangot/.interlink/interlink-remote.sh ===

Please execute the script on the remote server: 192.168.1.127

"./interlink-remote.sh install" followed by "interlink-remote.sh start"

Start installing all the needed binaries and configurations:

chmod +x ./interlink-remote.sh
./interlink-remote.sh install

warning

By default the script generates self-signed certificates for your IP address. If you want to use your own, place them in ~/.interlink/config/tls.{crt,key}.
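
If you prefer your own certificates, a minimal sketch (the source paths below are placeholders; the destination paths are the ones the script expects):

# Replace the generated self-signed pair with your own certificate and key
mkdir -p ~/.interlink/config
cp /path/to/your/server.crt ~/.interlink/config/tls.crt
cp /path/to/your/server.key ~/.interlink/config/tls.key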

Now it's time to start the components (namely oauth2_proxy and the interLink API server):

./interlink-remote.sh start

Check that no errors appear in the logs located in ~/.interlink/logs. You should also start seeing ping requests coming in from your Kubernetes cluster.
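
A couple of quick checks you can run at this point (the exact log file names may differ between versions, and the IP/port come from the example configuration above):

# On the remote host: inspect the logs of the freshly started components
ls ~/.interlink/logs/
tail -n 50 ~/.interlink/logs/*

# From the MiniKube host: verify that the interLink endpoint is reachable
nc -zv 192.168.1.127 30443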

To stop or restart the components you can use the dedicated commands:

./interlink-remote.sh stop
./interlink-remote.sh restart

Attach your favorite plugin or develop one! (remote host)

The next chapter will show the basics of developing a new plugin following the interLink openAPI spec.

Alternatively, you can start an already supported one.

Remote SLURM job submission

warning

Note that the SLURM plugin repository is: github.com/interTwin-eu/interlink-slurm-plugin

Requirements

  • the SLURM CLI (sbatch, squeue, scancel) available on the remote host and configured to interact with the computing cluster
  • a shared filesystem (sharedFS) with all the worker nodes
    • an experimental feature is available for cases in which this is not possible

Configuration

Create a config file $HOME/.interlink/config/slurm.yaml:

$HOME/.interlink/config/slurm.yaml
# Plugin local endpoint
SidecarPort: "4000"
SidecarURL: "http://localhost"

# Prefix for every plugin slurm command
CommandPrefix: ""

# Use sharedFS for configmap and secrets
ExportPodData: true

# Directory where all the core interlink logs and jobs will be tracked
DataRootFolder: "/home/civo/.interlink/"

# The endpoint for the slurm plugin daemon
InterlinkURL: "http://localhost"
InterlinkPort: "30080"

# Absolute path to slurm and bash commands
SbatchPath: "/usr/bin/sbatch"
ScancelPath: "/usr/bin/scancel"
SqueuePath: "/usr/bin/squeue"
BashPath: "/bin/bash"

# THIS IS FOR TEST PURPOSE!!
HTTP:
  Insecure: true
KubeletHTTP:
  Insecure: true

danger

Before going ahead, set the correct DataRootFolder in the example above! Don't forget the trailing /!
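
A quick sanity check before moving on, based on the example configuration above (adjust the DataRootFolder to your own home directory if it differs from /home/civo):

# Make sure the DataRootFolder exists
mkdir -p /home/civo/.interlink/

# Check that the SLURM and bash binaries are where the config expects them
which sbatch scancel squeue bash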

Systemd installation

warning

To get the latest version of the plugin, please visit the release page.
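
The commands below place the binary under $HOME/.interlink/bin, and the systemd unit writes its logs to $HOME/.interlink/logs; if those directories do not exist yet on the remote host, create them first:

mkdir -p $HOME/.interlink/bin $HOME/.interlink/logs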

Download the latest release with:

VERSION=0.3.1
wget -O $HOME/.interlink/bin/slurm-plugin https://github.com/interTwin-eu/interlink-slurm-plugin/releases/download/$VERSION/interlink-sidecar-slurm_Linux_x86_64
chmod +x $HOME/.interlink/bin/slurm-plugin

Now you can create a systemd service on the user space with the following:

mkdir -p $HOME/.config/systemd/user

cat <<EOF > $HOME/.config/systemd/user/slurm-plugin.service
[Unit]
Description=This Unit is needed to automatically start the SLURM plugin at system startup
After=network.target

[Service]
Type=simple
ExecStart=$HOME/.interlink/bin/slurm-plugin
Environment="SLURMCONFIGPATH=$HOME/.interlink/config/slurm.yaml"
Environment="SHARED_FS=true"
StandardOutput=file:$HOME/.interlink/logs/plugin.log
StandardError=file:$HOME/.interlink/logs/plugin.log

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user enable slurm-plugin.service

You can then start and monitor the service with start and status:

systemctl --user start slurm-plugin.service
systemctl --user status slurm-plugin.service

Logs will be stored at $HOME/.interlink/logs/plugin.log.
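
To follow the plugin activity live (the path is the one set in the unit file above):

tail -f $HOME/.interlink/logs/plugin.log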

Create UNICORE jobs to run on HPC centers

UNICORE (Uniform Interface to Computing Resources) offers a ready-to-run system including client and server software. UNICORE makes distributed computing and data resources available in a seamless and secure way in intranets and the internet.

Remote docker execution

warning

A maintained plugin will come soon... In the meantime you can take a look at the "developing a plugin" example.

Submit pods to HTCondor or ARC batch systems

warning

Coming soon

Remote Kubernetes Plugin

An interLink plugin to extend the capabilities of existing Kubernetes clusters, enabling them to offload workloads to another remote cluster. The plugin supports the offloading of pods that expose HTTP endpoints (i.e., HTTP microservices).

We are almost there! Essentially you need to follow what the installation script prompt suggests, so copy the generated interlink.yaml onto your kubectl host.

Now go ahead and apply the produced manifest to your MiniKube/Kubernetes instance with:

kubectl apply -f $HOME/.interlink/interlink.yaml

Check that the node appears successfully after some time, or as soon as you see the pods in the interlink namespace running.
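
A quick way to check, assuming the node name and namespace from the example configuration (my-civo-node and interlink):

kubectl get node my-civo-node
kubectl get pods -n interlink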

You are all setup, congratulations!

Test your setup

Please find a demo pod to test your setup here.
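
If you want something to try right away, here is a hypothetical test pod submitted with a heredoc, in the same style as the systemd unit above. The node name comes from the example configuration, while the toleration key is an assumption: match it to whatever taint kubectl describe node my-civo-node reports.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: interlink-test-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: my-civo-node
  tolerations:
    - key: virtual-node.interlink/no-schedule   # assumption: adjust to the taint on your virtual node
      operator: Exists
  containers:
    - name: hello
      image: busybox
      command: ["sh", "-c", "echo hello from the virtual node && sleep 60"]
EOF

kubectl get pod interlink-test-pod -o wide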