How to delete artifacts from Kubeflow? - minio

Kubeflow pipeline runs generate artifacts, which from what I can see in the dashboard are saved in MinIO. How can I delete them?

Turns out it can be done through the MinIO UI. To access MinIO remotely, use the configured kubectl:
kubectl port-forward -n kubeflow svc/minio-service 9000:9000
Then go to localhost:9000 in a web browser.
Also, each bucket can be assigned a lifecycle rule, which gives objects added under some prefix an expiration date:
https://docs.min.io/docs/python-client-api-reference.html.
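For example, with the MinIO client (mc) pointed at the port-forwarded service, a rule like the one below would expire objects under an artifacts/ prefix after 30 days. This is only a sketch: the alias, credentials, bucket name (mlpipeline is commonly the Kubeflow Pipelines bucket), and prefix are assumptions, and the ilm flags have changed between mc releases (newer ones use mc ilm rule add --expire-days), so check mc ilm --help for your version.
# Register the port-forwarded MinIO endpoint under an alias (credentials are placeholders)
mc alias set kfminio http://localhost:9000 ACCESS_KEY SECRET_KEY
# Expire objects under the artifacts/ prefix of the mlpipeline bucket after 30 days
mc ilm add --expiry-days "30" --prefix "artifacts/" kfminio/mlpipeline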

Related

How to delete a deployment / image in kubernetes

I'm running Kubernetes in Azure.
I want to delete a specific deployment with az aks or kubectl.
The only info I've found is how to delete pods, but this is not what I'm looking for, since pods will regenerate once deleted.
I know I can just go to the UI and delete the deployment, but I want to do it with az aks or kubectl.
I've run
kubectl get all -A
Then I copy the name of the deployment that I want to delete and run:
kubectl delete deployment zr-binanceloggercandlestick-btcusdt-2hour
kubectl delete deployment deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour
but with no success; I get these errors:
Error from server (NotFound): deployments.extensions "zr-binanceloggercandlestick-btcusdt-2hour" not found
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource/<resource_name>' instead of 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource resource/<resource_name>'
Find out all deployments across all namespaces
kubectl get deploy -A
Then delete the deployment by name from its namespace (both can be found in the output of the command above). The NotFound error in the question most likely means the deployment lives in a namespace other than the current default, so pass -n explicitly:
kubectl delete deploy deploymentname -n namespacename
Docs on how to configure kubectl to connect to AKS.
Use the below command.
kubectl delete deployment deployment-name-here
More about the command here.

Where do the images get stored in an Azure Kubernetes cluster after pulling from Azure Container Registry?

I am using Kubernetes to deploy all my microservices provided by Azure Kubernetes Services.
Whenever I release an update of my microservice, which has been happening frequently over the last month, it pulls the new image from the Azure Container Registry.
I was trying to figure out where do these images reside in the cluster?
Just like Docker stores pulled images in /var/lib/docker, and since Kubernetes uses Docker under the hood, maybe it stores the images somewhere too.
But if this is the case, how can I delete the old images from the cluster that are not in use anymore?
Clusters with Linux node pools created on Kubernetes v1.19 or greater default to containerd for their container runtime (Container runtime configuration).
To manually remove unused images on a node running containerd:
Identify node names:
kubectl get nodes
Start an interactive debugging container on a node (Connect with SSH to Azure Kubernetes Service):
kubectl debug node/aks-agentpool-11045208-vmss000003 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Setup crictl on the debugging container (check for newer releases of crictl):
The host node's filesystem is available at /host, so configure crictl to use the host node's containerd.sock.
curl -sL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz | tar xzf - -C /usr/local/bin \
&& export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock
Remove unused images on the node:
crictl rmi --prune
You are correct in guessing that it's mostly up to Docker, or rather to whatever the active CRI plugin is. The kubelet automatically cleans up old images when disk space runs low, so it's rare that you ever need to touch this directly, but if you did (and are using Docker as your runtime) it would be the same docker image commands as normal.
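If you do want to tune when that cleanup kicks in on AKS, the kubelet image-GC thresholds can be set through the custom node configuration feature. A rough sketch, assuming a recent az CLI and that the field names match your AKS version (the resource group, cluster, and pool names are placeholders); the custom kubelet config is applied when the node pool is created:
# kubeletconfig.json: start image garbage collection at 80% disk usage, clean down to 60%
cat > kubeletconfig.json <<'EOF'
{
  "imageGcHighThreshold": 80,
  "imageGcLowThreshold": 60
}
EOF
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster \
  --name tunedpool --kubelet-config ./kubeletconfig.json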
I was trying to figure out where do these images reside in the cluster?
From testing and checking, the result shows that each node in the AKS cluster has the Docker server installed, and the images are stored just as you say: the image layers are kept in the directory /var/lib/docker/.
how can I delete the old images from the cluster that are not in use anymore?
You can do this through the Docker CLI inside the node. Follow the steps in Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes to make a connection to the node, then delete the image with docker rmi image_name:tag. Be careful with it and make sure the image really isn't needed anymore.

How to make SSL/HTTPS certificates on Gitlab Auto DevOps with GKE and kube-lego

The instructions say to install it, but don't suggest which method will work, in what order, or with which load balancer. I keep getting useless test certificates installed on my deployments.
It sounds like you are getting the default chart from helm, which is slightly different from the instructions you will find on the kube-lego GitHub. The helm chart uses Let's Encrypt's staging server by default. Also, if you are not familiar with helm, it can be quite tricky to undo the remnants that the kube-lego helm chart leaves behind, especially as tiller likes to keep the old installation history.
Here's a brief overview of setting up GitLab Auto DevOps with Google Kubernetes Engine. I should mention that you can skip up to step 13 if you use the GitLab GKE wizard introduced in 10.1, but it won't really tell you what's happening or how to debug when things go wrong, and the wizard is a one-way process.
In GitLab, under Integrations you'll find a Kubernetes integration button
You can also find this under CI/CD -> Cluster
The API URL is the endpoint ip prefixed by https://
The "CA Certificate" it asks for is service account CA, not the same as the cluster CA
To connect to GitLab you'll need to create a k8s service account and get the CA and token for it. Do so by using gcloud to authenticate kubectl. GKE makes it easy by doing this for you through the "connect" button in Kubernetes Engine
https://kubernetes.io/docs/admin/authentication/#service-account-tokens
All commands must be run in a custom namespace; it will not work with the default one
kubectl create namespace (NS)
kubectl create serviceaccount (NAME) --namespace=(NS)
This will create two tokens in your configs/secrets, a default one and your service account one
kubectl get -o json serviceaccounts (NAME) --namespace=(NS)
kubectl get -o json secret (secrets-name-on-prev-result) --namespace=(NS)
To decode the base64 values, pipe them to base64 -d, for example
echo mybase64stringasdfasdf= | base64 -d
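As a shortcut (a sketch, assuming the secret name reported by the previous command), you can pull the token and CA out with jsonpath and decode them in one go:
# Print the service account token in plain text
kubectl get secret (secrets-name-on-prev-result) --namespace=(NS) -o jsonpath='{.data.token}' | base64 -d
# Write the decoded CA certificate to a file for GitLab's "CA Certificate" field
kubectl get secret (secrets-name-on-prev-result) --namespace=(NS) -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt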
To install helm, use the installation script method
https://docs.helm.sh/using_helm/#from-script
Init helm and update its repo
helm init
helm repo update
Install nginx-ingress; it's ingress with nginx. You could use the Google load balancer as well, but it's not as portable.
helm install stable/nginx-ingress
Make a wildcard subdomain with an A record pointing to the IP address set up by the ingress
In GitLab, make Auto DevOps use your newly set up wildcard subdomain: if it's "*.x" on "me.com", you should enter "x.me.com" in the Auto DevOps settings.
Now install kube-lego
helm install --name lego-main \
--set config.LEGO_EMAIL=CHANGEMENOW@example.com \
--set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
stable/kube-lego
Helm packages are called charts; installing a chart creates a release in the cluster
If you wish to delete a release you must purge it, otherwise tiller will keep a history of it:
helm delete --purge my-release-name
You can find the release names and their associated chart in
helm list
Troubleshooting
Order doesn't seem to matter too much. Attaching to a pod can be a useful way of debugging problems, such as a bad email address. The ideal order, however, is probably nginx-ingress, then kube-lego, then GitLab. I did make it work with GitLab first, then nginx-ingress, then kube-lego.
I heard from Sid that they are working to make this easier... let's hope so.

How to use kubernetes helm from a CI/CD pipeline hosted outside the k8s cluster

I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.
I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server??
Or is this not the right way to use Helm from a CI/CD server?
Helm works by using the installed kubectl to talk to your cluster. That means that if you can access your cluster via kubectl, you can use helm with that cluster.
Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running kubectl config current-context and comparing it to the cluster details in the kubeconfig.
You can find more details in Helm's docs, check the quick start guide for more information.
Why not just run your CI server inside your Kubernetes cluster? Then you don't have to manage secrets for accessing the cluster. We do that on Jenkins X and it works great; we can run kubectl or helm inside pipelines just fine.
In this case you will want to install kubectl on whichever slave or agent you have identified for use by your CI/CD server, OR install kubectl on-the-fly in your automation, AND then make sure you have OR are able to generate a kubeconfig to use.
To answer the question:
But how do I create a kube-config file for the service account ...
You can set new clusters, credentials, and contexts for use with kubectl in a default or custom kubeconfig file using kubectl config set-cluster, kubectl config set-credentials, and kubectl config set-context. If you have the KUBECONFIG env variable set and pointing to a kubeconfig file, that works; or, when setting new entries, simply pass --kubeconfig to point to a custom file.
Here's the relevant API documentation for v1.6.
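Put together, a minimal sketch of building a dedicated kubeconfig for the CI/CD server could look like this (the cluster name, user name, namespace, server URL, and file names are placeholders; the token and ca.crt come from the service account's secret):
# Describe the cluster (API server URL and its CA) in a fresh kubeconfig file
kubectl config set-cluster my-cluster --server=https://API_SERVER --certificate-authority=ca.crt --embed-certs=true --kubeconfig=ci-kubeconfig
# Add the service account token as a credential
kubectl config set-credentials ci-deployer --token=SERVICE_ACCOUNT_TOKEN --kubeconfig=ci-kubeconfig
# Tie the cluster and credentials together in a context and make it the default
kubectl config set-context ci --cluster=my-cluster --user=ci-deployer --namespace=default --kubeconfig=ci-kubeconfig
kubectl config use-context ci --kubeconfig=ci-kubeconfig
# On the CI/CD server, point kubectl (and therefore helm) at this file
export KUBECONFIG=$PWD/ci-kubeconfig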
We created helmsman which provides you with declarative syntax to manage helm charts in your cluster. It configures kubectl (and therefore helm) for you wherever you run it. It can also be used from a docker container.

How to get incremental app deployment to Kubernetes

I tried to set up continuous deployment using Jenkins for my own microservice project, which is organized as a multi-module Maven project (each submodule representing a microservice). I use "Incremental build - only build changed modules" in Jenkins to avoid unnecessary building, and then use docker-maven-plugin to build the Docker image. However, how can I redeploy only the changed images to the Kubernetes cluster?
You can use a local Docker image registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can then push the development images to this registry as a build step and make your kubernetes containers use this registry.
After you are ready, push the image to your production image registry and adjust container manifests to use proper registry.
More info on private registry server: https://docs.docker.com/registry/deploying/
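For example, a hypothetical build step that publishes a changed service image to the local registry started above (my-service and the dev tag are made-up names):
# Build, tag, and push the image to the local registry listening on port 5000
docker build -t my-service:dev .
docker tag my-service:dev localhost:5000/my-service:dev
docker push localhost:5000/my-service:dev
# In the deployment manifest, reference it as image: localhost:5000/my-service:dev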
Currently Kubernetes does not provide a proper solution for this, but there are a few workarounds mentioned here: https://github.com/kubernetes/kubernetes/issues/33664
I like this one: 'Fake a change to the Deployment by changing something other than the image'. We can do it this way:
Define an environment variable, say "TIMESTAMP", with any value in the deployment manifest. In the CI/CD pipeline we set the value to the current timestamp and then pass this updated manifest to kubectl apply. This way we are faking a change, and Kubernetes will pull the latest image and deploy it to the cluster. Please make sure that imagePullPolicy: Always is set.
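A hypothetical CI step for that trick (the __TIMESTAMP__ placeholder, manifest path, and deployment name are assumptions):
# Stamp the manifest so the Deployment spec changes on every run, forcing a rollout
sed "s/__TIMESTAMP__/$(date +%s)/" k8s/deployment.yaml | kubectl apply -f -
# Alternatively, patch the live Deployment directly instead of templating the manifest
kubectl set env deployment/my-service TIMESTAMP="$(date +%s)"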
