Quick-start: local environment
N.B. in the demo the oauth2 proxy authN/Z is disabled. DO NOT USE THIS IN PRODUCTION unless you know what you are doing.
Requirements
Clone the interLink repository:
git clone https://github.com/interTwin-eu/interLink.git
Connect a remote machine with Docker
Move to the example location:
cd interLink/examples/interlink-docker
Set up the Kubernetes cluster
minikube start --kubernetes-version=1.27.1
Deploy interLink
Configure interLink
You need to provide the interLink IP address, which must be reachable from the Kubernetes pods. In this demo setup, that is the address of your machine:
export INTERLINK_IP_ADDRESS=XXX.XX.X.XXX
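If you are unsure which address to use, on most Linux hosts you can fill the variable automatically (an assumption here: a single-homed machine; otherwise pick the right interface yourself):
export INTERLINK_IP_ADDRESS=$(hostname -I | awk '{print $1}')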
sed -i 's/InterlinkURL:.*/InterlinkURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' vk/InterLinkConfig.yaml
sed -i 's/InterlinkURL:.*/InterlinkURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' interlink/InterLinkConfig.yaml
sed -i 's/SidecarURL:.*/SidecarURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' interlink/InterLinkConfig.yaml
sed -i 's/InterlinkURL:.*/InterlinkURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' interlink/sidecarConfig.yaml
sed -i 's/SidecarURL:.*/SidecarURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' interlink/sidecarConfig.yaml
Deploy the virtual kubelet
Create the vk namespace:
kubectl create ns vk
Deploy the vk resources on the cluster with:
kubectl apply -n vk -k vk/
Check that both the pods and the node are in Ready status:
kubectl get pod -n vk
kubectl get node
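If you prefer to block rather than poll, kubectl can wait for the nodes to report Ready (note: the virtual node may only turn Ready once the interLink services from the next step are up):
kubectl wait --for=condition=Ready node --all --timeout=300s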
Deploy interLink via docker compose
cd interlink
docker compose up -d
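As a quick sanity check that the stack came up, list the Compose services and their state:
docker compose ps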
Check the logs for both the interLink API server and the Docker sidecar:
docker logs interlink-interlink-1
docker logs interlink-docker-sidecar-1
Deploy a sample application
kubectl apply -f ../test_pod.yaml
Then observe the application running and eventually succeeding via:
kubectl get pod -n vk --watch
When finished, interrupt the watch with Ctrl+C and retrieve the logs with:
kubectl logs -n vk test-pod-cfg-cowsay-dciangot
You can also see the container appear inside the interlink-docker-sidecar-1 container by running docker ps there:
docker exec interlink-docker-sidecar-1 docker ps
Connect a SLURM batch system
Let's now connect the cluster to a SLURM batch system. Move to the example location:
cd interLink/examples/interlink-slurm
Set up the Kubernetes cluster
N.B. in the demo the oauth2 proxy authN/Z is disabled. DO NOT USE THIS IN PRODUCTION unless you know what you are doing.
Bootstrap a minikube cluster
minikube start --kubernetes-version=1.26.10
Once it finishes, check that everything went well with a simple kubectl get node. If you don't have kubectl installed on your machine, you can install it as described in the official documentation.
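For example, on Linux amd64 one way to fetch the latest stable kubectl, following the upstream instructions (adjust OS and architecture as needed), is:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/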
Configure interLink
You need to provide the interLink IP address, which must be reachable from the Kubernetes pods. In this demo setup, that is the address of your machine:
export INTERLINK_IP_ADDRESS=XXX.XX.X.XXX
sed -i 's/InterlinkURL:.*/InterlinkURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' vk/InterLinkConfig.yaml
sed -i 's/SidecarURL:.*/SidecarURL: "http:\/\/'$INTERLINK_IP_ADDRESS'"/g' vk/InterLinkConfig.yaml
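A quick sanity check that the substitutions landed is a plain grep on the edited file:
grep -E 'InterlinkURL|SidecarURL' vk/InterLinkConfig.yaml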
Deploy the interLink components
Deploy the interLink virtual node
Create a vk namespace:
kubectl create ns vk
Deploy the vk resources on the cluster with:
kubectl apply -n vk -k vk/
Check that both the pods and the node are in Ready status:
kubectl get pod -n vk
kubectl get node
Deploy interLink remote components
With the following commands you are going to deploy a Docker Compose setup that emulates a remote center managing resources via a SLURM batch system.
The following containers are going to be deployed:
- interLink API server: the API layer responsible for receiving requests from the Kubernetes virtual node and forwarding a digested version to the interLink plugin
- interLink SLURM plugin: translates the information from the API server into a SLURM job
- a local SLURM daemon: a local instance of a dummy SLURM queue with Singularity/Apptainer available as the container runtime.
cd interlink
docker compose up -d
Check the logs for both the interLink API server and the SLURM sidecar:
docker logs interlink-interlink-1
docker logs interlink-docker-sidecar-1
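Assuming the standard SLURM client tools are available in the sidecar image (they are wherever the squeue command used below works), you can also confirm the dummy queue is up:
docker exec interlink-docker-sidecar-1 sinfo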
Deploy a sample application
Congratulations! Everything is now set up for the execution of your first pod on a virtual node!
All you have to do is explicitly direct a pod of yours to the virtual node, in the following way:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-cowsay
  namespace: vk
  annotations:
    slurm-job.knoc.io/flags: "--job-name=test-pod-cfg -t 2800 --ntasks=8 --nodes=1 --mem-per-cpu=2000"
spec:
  restartPolicy: Never
  containers:
  - image: docker://ghcr.io/grycap/cowsay
    command: ["/bin/sh"]
    args: ["-c", "\"touch /tmp/test.txt && sleep 60 && echo \\\"hello muu\\\" | /usr/games/cowsay \" " ]
    imagePullPolicy: Always
    name: cowsayo
  dnsPolicy: ClusterFirst
  nodeSelector:
    kubernetes.io/hostname: test-vk
  tolerations:
  - key: virtual-node.interlink/no-schedule
    operator: Exists
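The tolerations block matches the taint the virtual node advertises, and the nodeSelector pins the pod to it; you can inspect that taint with (assuming the node registered as test-vk, as in the nodeSelector above):
kubectl describe node test-vk | grep Taints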
Then, you are good to go:
kubectl apply -f ../test_pod.yaml
Now observe the application running and eventually succeeding via:
kubectl get pod -n vk --watch
When finished, interrupt the watch with Ctrl+C and retrieve the logs with:
kubectl logs -n vk test-pod-cfg-cowsay-dciangot
You can also see the jobs appear on the interlink-docker-sidecar-1 container with squeue --me:
docker exec interlink-docker-sidecar-1 squeue --me
Or, if you need to debug further, you can log into the sidecar and look for your POD_UID folder in .local/interlink/jobs:
docker exec -ti interlink-docker-sidecar-1 bash
ls -altrh .local/interlink/jobs
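To locate the right POD_UID folder, you can read the UID straight from the cluster (using the pod name from the logs step above; substitute your own):
kubectl get pod -n vk test-pod-cfg-cowsay-dciangot -o jsonpath='{.metadata.uid}'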