Cookbook
These are practical recipes for different deployment scenarios.
Select the tab with the scenario you want to deploy:
- Edge node
- In-cluster
- Tunneled
Select the featured plugin you want to try:
- Docker
- SLURM
- Kubernetes
Offload your pods to a remote machine with Docker engine available
Offload your pods to an HPC SLURM-based batch system
Offload your pods to a remote Kubernetes cluster: COMING SOON. For test instructions, contact us!
There are more third-party plugins that you can take inspiration from or even use out of the box. You can find some references in the quick start section.
Install interLink
Deploy Remote components (if any)
In general, starting from the deployment of the remote components is advised, since the Kubernetes virtual node won't reach the Ready
status until the whole stack is successfully deployed.
interLink API server
- Edge node
- In-cluster
- Tunneled
For this deployment mode, the remote host has to allow the Kubernetes cluster to connect to the OAuth2 Proxy service port (30443 if you use the automatic installation script).
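For example (a sketch, assuming the edge host uses ufw; CLUSTER_CIDR is a placeholder for the network your cluster connects from), the rule could look like:
# allow the Kubernetes cluster network to reach the OAuth2 Proxy port (30443 by default)
sudo ufw allow from CLUSTER_CIDR to any port 30443 proto tcp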
You first need to initialize an OIDC client with your Identity Provider (IdP).
Since any OIDC provider that works with the OAuth2 Proxy tool will do, this cookbook shows the configuration for a generic OIDC identity provider. Nevertheless, you can find more detailed instructions on dedicated pages for GitHub, EGI Check-in, and INFN IAM.
First of all, download the latest release of the interLink installer:
export VERSION=0.3.3
wget -O interlink-installer https://github.com/interTwin-eu/interLink/releases/download/$VERSION/interlink-installer_Linux_amd64
chmod +x interlink-installer
Create a template configuration with the init option:
mkdir -p interlink
./interlink-installer --init --config ./interlink/.installer.yaml
The configuration file should be filled in as follows. This is the case where the my-node virtual node will contact an edge service listening on PUBLIC_IP and API_PORT, authenticating requests coming from an OIDC provider https://my_oidc_idp.com:
interlink_ip: PUBLIC_IP
interlink_port: API_PORT
interlink_version: 0.3.3
kubelet_node_name: my-node
kubernetes_namespace: interlink
node_limits:
  cpu: "1000"
  # MEMORY in GB
  memory: 25600
  pods: "100"
oauth:
  provider: oidc
  issuer: https://my_oidc_idp.com/
  scopes:
    - "openid"
    - "email"
    - "offline_access"
    - "profile"
  audience: interlink
  grant_type: authorization_code
  group_claim: groups
  group: "my_vk_allowed_group"
  token_url: "https://my_oidc_idp.com/token"
  device_code_url: "https://my_oidc_idp/auth/device"
  client_id: "oidc-client-xx"
  client_secret: "xxxxxx"
insecure_http: true
Now you are ready to start the OIDC authentication flow to generate all your manifests and configuration files for the interLink components. To do so, just execute the installer:
./interlink-installer --config ./interlink/.installer.yaml --output-dir ./interlink/manifests/
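Once the flow completes, you can take a quick look at the generated output; among other files, the directory should contain the script and the Helm values used in the following steps:
ls ./interlink/manifests/
# interlink-remote.sh  -> install/start/stop script for the remote components (used right below)
# values.yaml          -> Helm values for the in-cluster components (used later)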
Install the OAuth2 Proxy and interLink API server services and configurations with:
chmod +x ./interlink/manifests/interlink-remote.sh
./interlink/manifests/interlink-remote.sh install
Then start the services with:
./interlink/manifests/interlink-remote.sh start
With the stop command you can stop the services. By default, logs are stored in ~/.interlink/logs; check there for any errors before moving to the next step.
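A quick sanity check at this point (a sketch, assuming the default scripted port 30443 and the default log location):
ss -tlnp | grep 30443                 # the OAuth2 Proxy should be listening here
tail -n 50 ~/.interlink/logs/*.log    # no startup errors should appear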
N.B. You can inspect the oauth2_proxy configuration parameters by looking into the interlink-remote.sh
script.
N.B. Logs (especially in verbose mode) can become quite large; consider implementing your favorite rotation routine for all the logs in ~/.interlink/logs/
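A minimal sketch of such a routine, assuming logrotate is available on the edge host and the logs live under the default path:
cat <<'EOF' | sudo tee /etc/logrotate.d/interlink
/home/myusername/.interlink/logs/*.log {
    weekly
    rotate 4
    compress
    copytruncate
    missingok
    notifempty
}
EOF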
Go directly to "Test and debugging tips": the selected scenario does not require any action here.
COMING SOON...
Plugin service
- Edge node
- In-cluster
- Tunneled
- Docker
- SLURM
- Kubernetes
- Create utility folders:

  mkdir -p $HOME/.interlink/logs
  mkdir -p $HOME/.interlink/bin
  mkdir -p $HOME/.interlink/config

- Create a configuration file in $HOME/.interlink/config/plugin-config.yaml:

  ## Multi user host
  Socket: "unix:///home/myusername/.plugin.sock"
  InterlinkPort: "0"
  SidecarPort: "0"
  CommandPrefix: ""
  ExportPodData: true
  DataRootFolder: "/home/myusername/.interlink/jobs/"
  BashPath: /bin/bash
  VerboseLogging: false
  ErrorsOnlyLogging: false

  - N.B. Depending on whether your edge is single-user or not, you should know from the previous steps which section to uncomment here.
  - More on configuration options at the official repo.

- Download the latest release binary into $HOME/.interlink/bin/plugin, for either a GPU host or a CPU host (tags ending with no-GPU).

- Start the plugin, passing the configuration that you have just created:

  export INTERLINKCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
  $HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
  echo $! > $HOME/.interlink/plugin.pid

- Check the logs in $HOME/.interlink/logs/plugin.log.

- To kill and restart the process, it is enough to run:

  # kill
  kill $(cat $HOME/.interlink/plugin.pid)
  # restart
  export INTERLINKCONFIGPATH=$HOME/.interlink/config/plugin-config.yaml
  $HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
  echo $! > $HOME/.interlink/plugin.pid
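Before moving on, a quick sanity check (a sketch, assuming the socket path from the configuration above):
ps -p $(cat $HOME/.interlink/plugin.pid)    # the plugin process is alive
ls -l /home/myusername/.plugin.sock         # the unix socket has been created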
Almost there! Now it's time to add this virtual node into the Kubernetes cluster!
- Create utility folders:

  mkdir -p $HOME/.interlink/logs
  mkdir -p $HOME/.interlink/bin
  mkdir -p $HOME/.interlink/config

- Create a configuration file in ./interlink/manifests/plugin-config.yaml (remember to substitute /home/myusername/ with your actual home path):

  Socket: "unix:///home/myusername/.plugin.sock"
  InterlinkPort: "0"
  SidecarPort: "0"
  CommandPrefix: ""
  ExportPodData: true
  DataRootFolder: "/home/myusername/.interlink/jobs/"
  BashPath: /bin/bash
  VerboseLogging: false
  ErrorsOnlyLogging: false
  SbatchPath: "/usr/bin/sbatch"
  ScancelPath: "/usr/bin/scancel"
  SqueuePath: "/usr/bin/squeue"
  SingularityPrefix: ""

  - More on configuration options at the official repo.

- Download the latest release binary into $HOME/.interlink/bin/plugin and make it executable:

  export PLUGIN_VERSION=0.3.8
  wget -O $HOME/.interlink/bin/plugin https://github.com/interTwin-eu/interlink-slurm-plugin/releases/download/${PLUGIN_VERSION}/interlink-sidecar-slurm_Linux_x86_64
  chmod +x $HOME/.interlink/bin/plugin

- Start the plugin, passing the configuration that you have just created:

  export SLURMCONFIGPATH=$PWD/interlink/manifests/plugin-config.yaml
  $HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
  echo $! > $HOME/.interlink/plugin.pid

- Check the logs in $HOME/.interlink/logs/plugin.log.

- To kill and restart the process, it is enough to run:

  # kill
  kill $(cat $HOME/.interlink/plugin.pid)
  # restart
  export SLURMCONFIGPATH=$PWD/interlink/manifests/plugin-config.yaml
  $HOME/.interlink/bin/plugin &> $HOME/.interlink/logs/plugin.log &
  echo $! > $HOME/.interlink/plugin.pid
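Before moving on, it is worth checking (a sketch) that the SLURM client commands configured above are actually available on this host:
which sbatch squeue scancel    # paths should match SbatchPath, SqueuePath and ScancelPath above
sinfo                          # optional: lists the partitions jobs will be submitted to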
Almost there! Now it's time to add this virtual node into the Kubernetes cluster!
KUBERNETES PLUGIN COMING SOON... CONTACT US FOR TEST INSTRUCTIONS
Go directly to "Test and debugging tips": the selected scenario does not require any action here.
COMING SOON...
Test interLink stack health
interLink comes with a call that can be used to monitor the overall status of both the interLink server and the plugins at once.
curl -v --unix-socket ${HOME}/.interlink.sock http://unix/pinglink
This call will return the status of the system and its readiness to submit jobs.
Deploy Kubernetes components
The deployment of the Kubernetes components is managed by the official Helm chart. Depending on the scenario you selected, there might be additional operations to perform.
- Edge node
- In-cluster
- Tunneled
You can now install the Helm chart with the Helm values preconfigured by the installer script in ./interlink/manifests/values.yaml:
helm upgrade --install \
--create-namespace \
-n interlink \
my-node \
oci://ghcr.io/intertwin-eu/interlink-helm-chart/interlink \
--values ./interlink/manifests/values.yaml
You can pin the version of the chart by using the --version option.
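To confirm that the release has been deployed (a sketch, using the release name and namespace from the command above):
helm status my-node -n interlink
kubectl get pods -n interlink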
- Create a Helm values file:

  nodeName: interlink-with-socket
  plugin:
    enabled: true
    image: "plugin docker image here"
    command: ["/bin/bash", "-c"]
    args: ["/app/plugin"]
    config: |
      your plugin
      configuration
      goes here!!!
    socket: unix:///var/run/plugin.sock
  interlink:
    enabled: true
    socket: unix:///var/run/interlink.sock
Then deploy the latest release of the official Helm chart:
helm upgrade --install --create-namespace -n interlink my-virtual-node oci://ghcr.io/intertwin-eu/interlink-helm-chart/interlink --values ./values.yaml
You can pin the version of the chart by using the --version option.
COMING SOON...
Once you see the node in the Ready state, you are good to go!
In case of problems, we suggest starting the debugging from the pod containers' logs!
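For instance (a sketch; the node name depends on the kubelet_node_name / nodeName you configured above):
kubectl get node my-node                                    # should eventually report Ready
kubectl get pods -n interlink                               # the interLink / virtual-kubelet pods
kubectl logs -n interlink <pod-name> -c <container-name>    # first place to look in case of failures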
Test the setup
Please find a demo pod to test your setup here.
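If you prefer a self-contained starting point, a minimal test pod along these lines can be used (a sketch: the node name and the toleration key are assumptions and must match what your virtual node actually advertises, see kubectl describe node):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: interlink-test-pod
  namespace: interlink
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node             # the virtual node name used in this cookbook
  tolerations:
    - key: virtual-node.interlink/no-schedule   # assumption: default taint of the virtual node
      operator: Exists
  containers:
    - name: hello
      image: busybox:1.36
      command: ["sh", "-c", "echo hello from the remote side && sleep 30"]
EOF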