Cookbook
These are practical recipes for different deployment scenarios.
Select the tab with the scenario you want to deploy:
- Edge node
- In-cluster
- Tunneled
Then select the featured plugin you want to try:
- Docker
- SLURM
- Kubernetes
Offload your pods to a remote machine with the Docker engine available
Offload your pods to an HPC SLURM-based batch system
Offload your pods to a remote Kubernetes cluster: COMING SOON. For test instructions, contact us!
There are more third-party plugins out there that you can get inspired by, or even use out of the box. You can find some references in the Quick Start section.
Install interLink
Deploy Remote components (if any)
In general, starting from the deployment of the remote components is advised, since the Kubernetes virtual node won't reach the Ready status until the whole stack is successfully deployed.
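You can keep an eye on the node status from the cluster side while you proceed; a minimal check, assuming the node name you will set later in the Helm values (e.g. interlink-with-rest):
# watch the virtual node until it flips to Ready (Ctrl-C to stop)
kubectl get node interlink-with-rest -w
# or block until it is Ready, with a timeout
kubectl wait --for=condition=Ready node/interlink-with-rest --timeout=10m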
interLink API server
- Edge node
- In-cluster
- Tunneled
For this deployment mode, the remote host has to allow the Kubernetes cluster to connect to the OAuth2 proxy service port (30443 if you use the automatic installation script).
- You first need to initialize an OIDC client with your Identity Provider (IdP).
- There are different options: we have instructions ready for GitHub, EGI Check-in, and INFN IAM.
- Any OIDC provider working with the OAuth2 Proxy tool will do the job, though.
- Create the install.sh utility script through the installation utility. N.B. if your machine is shared with other users, you should indicate a socket as the address used to communicate with the plugin: instead of a web URL, it is enough to insert something like unix:///var/run/myplugin.socket
- Install the OAuth2 Proxy and interLink API server services as per the Quick start. By default, logs are stored in ~/.interlink/logs; check there for any error before moving to the next step.
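To quickly verify that the services came up on the edge host, you can tail their logs and probe the API server locally; a minimal sketch, assuming the API server listens on its default local port 3000 (adjust if your install.sh chose differently):
tail -n 20 ~/.interlink/logs/*.log
# the ping endpoint (see "Test interLink stack health" below) should answer locally
curl -v http://localhost:3000/pinginterlink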
Go directly to "Test and debugging tips": the selected scenario does not require you to do anything here.
For this installation you need to know which node port is open on the main Kubernetes cluster; it will be used to expose the SSH bastion for the tunnel.
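If you are not sure which port to use, a quick look from a machine with cluster access helps; any free port in the cluster's NodePort range (30000-32767 by default) will do:
# node IPs, plus the NodePorts already taken by existing services
kubectl get nodes -o wide
kubectl get svc -A | grep NodePort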
- Create utility folders:
mkdir -p $HOME/.interlink/logs
mkdir -p $HOME/.interlink/bin
mkdir -p $HOME/.interlink/config
- Generate a pair of password-less SSH keys:
ssh-keygen -t ecdsa
- Download the latest ssh-tunnel release binary into $HOME/.interlink/bin/ssh-tunnel
- Start the tunnel:
CLUSTER_PUBLIC_IP="IP of your cluster where SSH will be exposed"
SSH_TUNNEL_NODE_PORT="node port where the ssh service will be exposed"
PRIV_KEY_FILE="path to the ssh private key created above"
## If you want to remove the security warning, you should enable HostKey checking (more advanced manual setup) with the -hostkey option
$HOME/.interlink/bin/ssh-tunnel -addr $CLUSTER_PUBLIC_IP:$SSH_TUNNEL_NODE_PORT -keyfile $PRIV_KEY_FILE -user interlink -rport 3000 -lsock plugin.sock &> $HOME/.interlink/logs/ssh-tunnel.log &
echo $! > $HOME/.interlink/ssh-tunnel.pid
- Check the logs in $HOME/.interlink/logs/ssh-tunnel.log.
- To kill and restart the process, this is enough:
# kill
kill $(cat $HOME/.interlink/ssh-tunnel.pid)
# restart (same options as above)
$HOME/.interlink/bin/ssh-tunnel -addr $CLUSTER_PUBLIC_IP:$SSH_TUNNEL_NODE_PORT -keyfile $PRIV_KEY_FILE -user interlink -rport 3000 -lsock plugin.sock &> $HOME/.interlink/logs/ssh-tunnel.log &
echo $! > $HOME/.interlink/ssh-tunnel.pid
- At this stage the tunnel WILL CORRECTLY FAIL to connect until the whole stack is set up, so let's go ahead.
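A quick sanity check at this point, assuming the tunnel was started as above:
# the process should stay alive, even though the log will show connection errors for now
ps -p $(cat $HOME/.interlink/ssh-tunnel.pid)
tail -n 20 $HOME/.interlink/logs/ssh-tunnel.log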
Plugin service
- Edge node
- In-cluster
- Tunneled
- Docker
- SLURM
- Kubernetes
- Create utility folders:
mkdir -p $HOME/.interlink/logs
mkdir -p $HOME/.interlink/bin
mkdir -p $HOME/.interlink/config
- Create a configuration file $HOME/.interlink/config/plugin-config.yaml:
## Multi user host
# SidecarURL: "unix:///home/myusername/plugin.socket"
# InterlinkPort: "0"
# SidecarPort: "0"
## Dedicated edge node
# InterlinkURL: "http://127.0.0.1"
# SidecarURL: "http://127.0.0.1"
# InterlinkPort: "3000"
# SidecarPort: "4000"
CommandPrefix: ""
ExportPodData: true
DataRootFolder: "/home/myusername/.interlink/jobs/"
BashPath: /bin/bash
VerboseLogging: true
ErrorsOnlyLogging: false
- N.B. depending on whether your edge is single-user or not, you should know from the previous steps which section to uncomment here.
- More on configuration options at the official repo
- Download the latest release binary into $HOME/.interlink/bin/plugin, for either a GPU host or a CPU host (tags ending with no-GPU)
- Start the plugin, passing the configuration that you have just created:
export INTERLINKCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
- Check the logs in $HOME/.interlink/logs/plugin.log.
- To kill and restart the process, this is enough:
# kill
kill $(cat $HOME/.interlink/plugin.pid)
# restart
export INTERLINKCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
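Before moving on, a quick check that the plugin is up and can actually reach the Docker engine does not hurt; a minimal sketch:
# plugin process still alive?
ps -p $(cat $HOME/.interlink/plugin.pid)
# Docker engine reachable by this user?
docker info > /dev/null && echo "Docker engine reachable"
tail -n 20 $HOME/.interlink/logs/plugin.log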
Almost there! Now it's time to add this virtual node into the Kubernetes cluster!
- Create utility folders:
mkdir -p $HOME/.interlink/logs
mkdir -p $HOME/.interlink/bin
mkdir -p $HOME/.interlink/config
- Create a configuration file $HOME/.interlink/config/plugin-config.yaml:
## Multi user host
# Socket: "unix:///home/myusername/plugin.socket"
# InterlinkPort: "0"
# SidecarPort: "0"
## Dedicated edge node
# InterlinkURL: "http://127.0.0.1"
# SidecarURL: "http://127.0.0.1"
# InterlinkPort: "3000"
# SidecarPort: "4000"
CommandPrefix: ""
ExportPodData: true
DataRootFolder: "/home/myusername/.interlink/jobs/"
BashPath: /bin/bash
VerboseLogging: true
ErrorsOnlyLogging: false
SbatchPath: "/usr/bin/sbatch"
ScancelPath: "/usr/bin/scancel"
SqueuePath: "/usr/bin/squeue"
SingularityPrefix: ""
- N.B. depending on whether your edge is single-user or not, you should know from the previous steps which section to uncomment here.
- More on configuration options at the official repo
- Download the latest release binary into $HOME/.interlink/bin/plugin, for either a GPU host or a CPU host (tags ending with no-GPU)
- Start the plugin, passing the configuration that you have just created:
export SLURMCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
- Check the logs in $HOME/.interlink/logs/plugin.log.
- To kill and restart the process, this is enough:
# kill
kill $(cat $HOME/.interlink/plugin.pid)
# restart
export SLURMCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
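Before moving on, it is worth checking that the SLURM client commands referenced in the configuration are available to the user running the plugin; a quick sketch:
# these should match SbatchPath, SqueuePath and ScancelPath in plugin-config.yaml
which sbatch squeue scancel
# and the SLURM controller should answer
squeue -u $USER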
Almost there! Now it's time to add this virtual node into the Kubernetes cluster!
KUBERNETES PLUGIN COMING SOON... CONTACT US FOR TEST INSTRUCTIONS
Go directly to "Test and debugging tips": the selected scenario does not require you to do anything here.
- Docker
- SLURM
- Kubernetes
- Create utility folders:
mkdir -p $HOME/.interlink/logs
mkdir -p $HOME/.interlink/bin
mkdir -p $HOME/.interlink/config
- Create a configuration file $HOME/.interlink/config/plugin-config.yaml:
Socket: "unix:///home/myusername/plugin.socket"
SidecarPort: "0"
CommandPrefix: ""
ExportPodData: true
DataRootFolder: "/home/myusername/.interlink/jobs/"
BashPath: /bin/bash
VerboseLogging: true
ErrorsOnlyLogging: false
- N.B. you should know from the previous steps what to put in place of myusername here.
- More on configuration options at the official repo
- Download the latest release binary into $HOME/.interlink/bin/plugin, for either a GPU host or a CPU host (tags ending with no-GPU)
- Start the plugin, passing the configuration that you have just created:
export INTERLINKCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
- Check the logs in $HOME/.interlink/logs/plugin.log.
- To kill and restart the process, this is enough:
# kill
kill $(cat $HOME/.interlink/plugin.pid)
# restart
export INTERLINKCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
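Before moving on, check that the plugin created the unix socket set as Socket in the configuration above and is still running; a minimal sketch:
ps -p $(cat $HOME/.interlink/plugin.pid)
ls -l /home/myusername/plugin.socket
# the socket should show up among the listening unix sockets
ss -xl | grep plugin.socket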
Almost there! Now it's time to add this virtual node into the Kubernetes cluster!
- Create utility folders:
mkdir -p $HOME/.interlink/logs
mkdir -p $HOME/.interlink/bin
mkdir -p $HOME/.interlink/config
- Create a configuration file $HOME/.interlink/config/plugin-config.yaml:
Socket: "unix:///home/myusername/plugin.socket"
SidecarPort: "0"
CommandPrefix: ""
ExportPodData: true
DataRootFolder: "/home/myusername/.interlink/jobs/"
BashPath: /bin/bash
VerboseLogging: true
ErrorsOnlyLogging: false
SbatchPath: "/usr/bin/sbatch"
ScancelPath: "/usr/bin/scancel"
SqueuePath: "/usr/bin/squeue"
SingularityPrefix: ""
- N.B. you should know from the previous steps what to put in place of myusername here.
- More on configuration options at the official repo
- Download the latest release binary into $HOME/.interlink/bin/plugin, for either a GPU host or a CPU host (tags ending with no-GPU)
- Start the plugin, passing the configuration that you have just created:
export SLURMCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
- Check the logs in $HOME/.interlink/logs/plugin.log.
- To kill and restart the process, this is enough:
# kill
kill $(cat $HOME/.interlink/plugin.pid)
# restart
export SLURMCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
$HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
echo $! > $HOME/.interlink/plugin.pid
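Since the configuration above carries a SingularityPrefix, payloads are typically launched through Singularity/Apptainer; a quick availability check on the login node is a good idea (adapt to whichever runtime your site provides):
singularity --version || apptainer --version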
Almost there! Now it's time to add this virtual node into the Kubernetes cluster!
COMING SOON...
Test interLink stack health
interLink comes with a call that can be used to monitor the overall status of both the interLink server and the plugins at once.
curl -v $INTERLINK_SERVER_ADDRESS:$INTERLINK_PORT/pinginterlink
This call will return the status of the system and its readiness to submit jobs.
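For example, in the edge-node scenario you would point the call at the OAuth2 proxy NodePort used above (30443); depending on your OAuth2 proxy configuration, a valid token may also be required. The hostname below is a placeholder:
export INTERLINK_SERVER_ADDRESS="https://my-edge-node.example.com"
export INTERLINK_PORT="30443"
curl -v $INTERLINK_SERVER_ADDRESS:$INTERLINK_PORT/pinginterlink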
Deploy Kubernetes components
The deployment of the Kubernetes components is managed by the official Helm chart. Depending on the scenario you selected, there might be additional operations to perform.
- Edge node
- In-cluster
- Tunneled
For this deployment mode, the remote host has to allow the Kubernetes cluster to connect to the OAuth2 proxy service port (30443 if you use the automatic installation script).
- Since you might have already followed the installation script steps, you can simply follow the Guide.
If the installation script is not what you are currently using, you can configure the virtual kubelet manually:
- Create a helm values file:
nodeName: interlink-with-rest

interlink:
  address: https://remote_oauth2_proxy_endpoint
  port: 30443

virtualNode:
  CPUs: 1000
  MemGiB: 1600
  Pods: 100
  HTTPProxies:
    HTTP: null
    HTTPs: null
  # Set this to false in prod environments where the OAuth2 proxy uses proper TLS certs
  HTTP:
    Insecure: true

OAUTH:
  image: ghcr.io/intertwin-eu/interlink/virtual-kubelet-inttw-refresh:latest
  TokenURL: DUMMY
  ClientID: DUMMY
  ClientSecret: DUMMY
  RefreshToken: DUMMY
  GrantType: authorization_code
  Audience: DUMMY
- Substitute the OAUTH values with the ones obtained when registering the OIDC client with your IdP.
- Create a helm values file:
nodeName: interlink-with-socket

plugin:
  enabled: true
  image: "plugin docker image here"
  command: ["/bin/bash", "-c"]
  args: ["/app/plugin"]
  config: |
    your plugin
    configuration
    goes here!!!
  socket: unix:///var/run/plugin.socket

interlink:
  enabled: true
  socket: unix:///var/run/interlink.socket
- Create a helm values file:
nodeName: interlink-with-socket

interlink:
  enabled: true
  socket: unix:///var/run/interlink.socket

plugin:
  address: http://localhost

sshBastion:
  enabled: true
  clientKeys:
    authorizedKey: |
      ssh-rsa A..........MG0yNvbLfJT+37pw==
  port: 31021
- Insert the public key generated when installing the interLink and ssh-tunnel services.
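The value to paste into authorizedKey is the public half of the key pair generated earlier with ssh-keygen; assuming you accepted the default location:
cat ~/.ssh/id_ecdsa.pub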
Finally, deploy the latest release of the official Helm chart:
helm upgrade --install --create-namespace -n interlink my-virtual-node oci://ghcr.io/intertwin-eu/interlink-helm-chart/interlink --values ./values.yaml
Whenever you see the node ready, you are good to go!
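If the node does not show up as Ready, the virtual kubelet pod itself is the first thing to inspect; a minimal sketch (the pod name will differ in your deployment):
kubectl -n interlink get pods
kubectl -n interlink logs <virtual-kubelet-pod-name>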
Test the setup
Please find a demo pod to test your setup here.
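If you prefer to write your own, the sketch below shows the general shape: the pod is pinned to the virtual node and tolerates its taint. Both the node name and the taint key (virtual-node.interlink/no-schedule) are assumptions based on common defaults, so double-check them against the linked demo pod and against kubectl describe node <nodeName>:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: interlink-test-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: interlink-with-rest  # the nodeName from your values file
  tolerations:
    - key: virtual-node.interlink/no-schedule    # assumed taint key, verify on your node
      operator: Exists
  containers:
    - name: hello
      image: busybox
      command: ["sh", "-c", "echo hello from the remote side && sleep 60"]
EOF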