I'm unable to deploy a new image to my service. I'm trying to run this command in my CI environment:
$ kubectl set image deployment/dev2 \
dev2=gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0
I get the error unable to find container named "dev2".
I'll include my registry, pods, and services below; I'm not sure why I can't just pass the new image.
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
dev2-6fdf8d4fb5-hnrnv   1/1     Running   0          7h19m
$ kubectl get service
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
dev2         LoadBalancer   hidden       hidden        80:32594/TCP   6h55m
kubernetes   ClusterIP      hidden       <none>        443/TCP        2d3h
To get past the particular problem you're seeing, you need to do:
$ kubectl set image deployment/dev2 \
661d1a428298276f69028b3e8e2fd9a8c1690095=gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0
instead of
$ kubectl set image deployment/dev2 \
dev2=gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0
A Deployment consists of multiple replicas of the same Pod template. A Pod can have many containers, so if you're trying to set the image, you have to specify which container's image you want to set. You only have one container, and surprisingly its name is 661d1a428298276f69028b3e8e2fd9a8c1690095, so that's what has to go in front of the = sign.
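If you're not sure what a Deployment's container is called, you can read the names straight off its Pod template; a small sketch using a standard JSONPath query (the Deployment name comes from your output above):
$ kubectl get deployment dev2 \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
Whatever that prints is what goes on the left side of the = in kubectl set image.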
That will fix the unable to find container named "dev2" error. I have some doubt that the image you're setting it to is correct. The current image being used is:
gcr.io/at-dev-253223/661...095@sha256:0fd...20a
You're trying to set it to:
gcr.io/at-dev-253223/dev@sha256:3c6...0f0
The general pattern is:
[HOSTNAME]/[PROJECT-ID]/[IMAGE]@[IMAGE_DIGEST]
(see the Container Registry documentation). This means you're not just taking a new digest of a given image, but changing the image entirely, from 661d1a428298276f69028b3e8e2fd9a8c1690095 to dev. You will have to decide for yourself whether that's actually what you intend to do.
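If you want to check which images and digests actually exist in the registry before deciding, gcloud can list them; a sketch, assuming you have access to the project (the repository path is taken from the question):
$ gcloud container images list-tags gcr.io/at-dev-253223/dev --limit=5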
Describe your pod with kubectl describe pod dev2-6fdf8d4fb5-hnrnv and make sure the container is indeed named dev2.
Related
I'm running Kubernetes in Azure.
I want to delete a specific deployment with az aks or kubectl.
The only info I've found is how to delete pods, but that's not what I'm looking for, since pods regenerate once deleted.
I know I can just go to the UI and delete the deployment, but I want to do it with az aks or kubectl.
I've run
kubectl get all -A
Then I copy the name of the deployment that I want to delete and run:
kubectl delete deployment zr-binanceloggercandlestick-btcusdt-2hour
kubectl delete deployment deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour
but with no success; I get these errors:
Error from server (NotFound): deployments.extensions "zr-binanceloggercandlestick-btcusdt-2hour" not found
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource/<resource_name>' instead of 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource resource/<resource_name>'
Find all deployments across all namespaces:
kubectl get deploy -A
Then delete the deployment deploymentname from namespace namespacename; both can be found in the output of the command above.
kubectl delete deploy deploymentname -n namespacename
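Putting it together for the deployments in the question: the NotFound error usually means the deployment lives in a namespace other than default, and the second error means the resource/name form must not be combined with a separate type argument. A sketch (the namespace here is hypothetical; use whatever kubectl get deploy -A reports):
kubectl get deploy -A | grep zr-binanceloggercandlestick
kubectl delete deploy zr-binanceloggercandlestick-btcusdt-2hour -n some-namespace
kubectl delete deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour -n some-namespace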
Docs on how to configure kubectl to connect to AKS.
Use the below command.
kubectl delete deployment deployment-name-here
More about the command in the kubectl delete reference documentation.
Good day!
I bumped into this post:
https://cleverbuilder.com/articles/spring-boot-kubernetes/
And it seems I am able to run my Spring Boot RESTful application, as it shows up in the Minikube dashboard. The problem is that when I execute kubectl get pods or kubectl get svc, I don't see my application. The Kubernetes namespace my application is using is test. I'm really puzzled now about how I can access my application. Please shed some light. Thanks!
Run kubectl get pods -n test and kubectl get svc -n test, and this should show you the desired output.
By default, Kubernetes starts with three namespaces: default, kube-system, and kube-public. The default namespace is the catch-all for objects that don't belong to kube-public or kube-system, and it holds the default set of pods, services, and deployments used by the cluster. Since your pod is in a custom namespace test (which you created), you need to specify that namespace when listing your pods or deployments.
So kubectl get pods is actually kubectl get pods -n default, meaning: show pods in the default namespace. Hence, kubectl get pods -n test will show all your pods in the test namespace.
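If you don't want to type -n test every time, you can also point your current context at that namespace (standard kubectl config usage):
kubectl config set-context --current --namespace=test
kubectl get pods
The second command now lists the pods in the test namespace.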
See the Kubernetes documentation on namespaces.
As I am green on this subject, could you please help?
I deploy docker image to gcloud kubernetes.
What should I do to make the cluster re-read the Docker image when a new one appears?
My code is:
sudo docker build -t gcr.io/${PROJECT_ID}/sf:$ENV .
sudo docker push gcr.io/${PROJECT_ID}/sf:$ENV
sudo gcloud container clusters create sf:$ENV --num-nodes=3
sudo kubectl run sfmill-web$ENV --image=gcr.io/${PROJECT_ID}/sf:$ENV --port 8088
sudo kubectl expose deployment sfmill-web$ENV --type=LoadBalancer --port 8088 --target-port 8088
kubectl set image deployment/sfmill-web$ENV sf=sf:$ENV
I encourage you to explore using Kubernetes configuration files to define resources.
You can explore the YAML for your deployment with:
kubectl get deployment/sfmill-web$ENV --output=yaml > ${PWD}/sfmill-web$ENV.yaml
You could then tweak the value of the image property and reapply it to your cluster using:
kubectl apply --filename=${PWD}/sfmill-web$ENV.yaml
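The part of the exported YAML you'd tweak looks roughly like this; a trimmed sketch, not the full manifest, and the name and image values are illustrative (yours will come from the export):
spec:
  template:
    spec:
      containers:
      - name: sfmill-web-example
        image: gcr.io/my-project/sf:v2   # point this at the new image, then re-apply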
The main benefit to the configuration file approach is that you're effectively creating code to manage your infrastructure and, each time you change your code, you could check it into source control thereby knowing what you did at each stage.
Using kubectl imperatively is great, but it makes it more challenging to recreate the cluster from scratch... which kubectl command did I run next? Yes, you could script all your kubectl commands in bash too, which would help, but configuration files remain the ideal solution.
HTH
I am using Kubernetes to deploy all my microservices provided by Azure Kubernetes Services.
Whenever I release an update of a microservice, which has been happening frequently over the last month, the cluster pulls the new image from the Azure Container Registry.
I was trying to figure out where these images reside in the cluster.
Docker stores pulled images in /var/lib/docker, and since Kubernetes uses Docker under the hood, maybe it stores the images somewhere similar too.
But if this is the case, how can I delete the old images from the cluster that are not in use anymore?
Clusters with Linux node pools created on Kubernetes v1.19 or greater default to containerd as the container runtime (Container runtime configuration).
To manually remove unused images on a node running containerd:
Identify node names:
kubectl get nodes
Start an interactive debugging container on a node (Connect with SSH to Azure Kubernetes Service):
kubectl debug node/aks-agentpool-11045208-vmss000003 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Set up crictl on the debugging container (check for newer releases of crictl):
The host node's filesystem is available at /host, so configure crictl to use the host node's containerd.sock.
curl -sL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz | tar xzf - -C /usr/local/bin \
&& export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock
Remove unused images on the node:
crictl rmi --prune
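To see what is there before pruning, you can list the images containerd currently holds (crictl images ships with the same tool you just set up):
crictl images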
You are correct in guessing that it's mostly up to Docker, or rather to whatever the active CRI plugin is. The Kubelet automatically cleans up old images when disk space runs low so it's rare that you need to ever touch it directly, but if you did (and are using Docker as your runtime) then it would be the same docker image commands as per normal.
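For reference, the kubelet behaviour mentioned above is controlled by its image garbage collection thresholds; the defaults below are standard, though you rarely need to change them on a managed cluster:
--image-gc-high-threshold=85 (disk usage percentage that triggers image garbage collection)
--image-gc-low-threshold=80 (disk usage percentage the cleanup tries to get back under)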
I was trying to figure out where do these images reside in the cluster?
Testing confirms this: each node in the AKS cluster has the Docker server installed, and images are stored just as you'd expect with Docker, with the image layers in the directory /var/lib/docker/.
how can I delete the old images from the cluster that are not in use anymore?
You can do this with the Docker CLI inside the node. Follow the steps in Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes to connect to the node, then delete an image with docker rmi image_name:tag. Be careful with it, though, and make sure the image really is no longer useful.
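For example (the image name is hypothetical; docker image prune is the bulk variant and removes every image not referenced by any container, so double-check before running it):
docker rmi myregistry.azurecr.io/myservice:1.0.3
docker image prune --all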
As logs go away as soon as a pod crashes, I would like to store them directly on my local machine. I don't want to use GCE. Also, I would be running multiple instances of a service across nodes, so HostPath will not be of any use.
kubectl logs <pod-name> > log.txt will just capture a snapshot. I want the complete logs to be persistent on my local machine.
Logs are already on nodes in /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log
For collecting them you can use fluent-bit. The easiest approach is to have Kubernetes manage fluent-bit itself: run it as a DaemonSet with hostPath volumes.
Here is helm chart that can do it for you: https://github.com/helm/charts/tree/master/stable/fluent-bit
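A minimal install sketch, assuming Helm 3 and the (now archived) stable chart repository; the backend.type value is one of the chart's documented options and depends on where you want to ship the logs:
helm repo add stable https://charts.helm.sh/stable
helm install fluent-bit stable/fluent-bit --set backend.type=forward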