kubectl run from local docker image? - macos

How to refer to the local image that exists?
kubectl run u --rm -i --tty --image my_local_image -- bash
Results in ImagePullBackOff, and kubectl is obviously trying to pull from a remote repository instead of the local registry.
This answer is unhelpful, and the follow-up refers to minikube and Kubernetes.
Some event logs
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned u-6988b9c446-zcp99 to docker-for-desktop
Normal SuccessfulMountVolume 1m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-q2qm7"
Normal SandboxChanged 1m kubelet, docker-for-desktop Pod sandbox changed, it will be killed and re-created.
Normal Pulling 23s (x3 over 1m) kubelet, docker-for-desktop pulling image "centos_postgres"
Warning Failed 22s (x3 over 1m) kubelet, docker-for-desktop Failed to pull image "centos_postgres": rpc error: code = Unknown desc = Error response from daemon: pull access denied for centos_postgres, repository does not exist or may require 'docker login'
Warning Failed 22s (x3 over 1m) kubelet, docker-for-desktop Error: ErrImagePull
Normal BackOff 9s (x5 over 1m) kubelet, docker-for-desktop Back-off pulling image "centos_postgres"
Warning Failed 9s (x5 over 1m) kubelet, docker-for-desktop Error: ImagePullBackOff

Kubernetes Pods have an imagePullPolicy field. If you set that to Never, it will never try to pull an image, and it's up to you to ensure that the docker daemon which the kubelet is using contains that image. The default policy is IfNotPresent, which should work the same as Never if an image is already present in the docker daemon. Double check that your docker daemon actually contains what you think it contains, and make sure your imagePullPolicy is set to one of the two that I mentioned.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-image
    image: local-image-name
    imagePullPolicy: Never
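If you prefer sticking with kubectl run rather than a manifest, the same policy can be passed as a flag; a minimal sketch, reusing the image name from the question:
# confirm the image really exists in the Docker daemon Kubernetes is using
docker images | grep my_local_image
# run it without ever contacting a remote registry
kubectl run u --rm -i --tty --image=my_local_image --image-pull-policy=Never -- bash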

Related

how to connect kubernetes cluster with image registry

Hi, I have deployed a 3-node Kubernetes cluster (one master, 2 worker nodes) as below:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.domain.com Ready control-plane 161m v1.24.4
worker1.domain.com Ready <none> 154m v1.24.4
worker2.domain.com Ready <none> 153m v1.24.4
I used the CRI-O container runtime and tried creating a few pods, but they fail with the events below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40s default-scheduler Successfully assigned default/nginx to worker2.domain.com
Normal BackOff 26s kubelet Back-off pulling image "nginx"
Warning Failed 26s kubelet Error: ImagePullBackOff
Normal Pulling 11s (x2 over 32s) kubelet Pulling image "nginx"
Warning Failed 2s (x2 over 27s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error reading manifest latest in registry.hub.docker.com/nginx: unauthorized: authentication required
Warning Failed 2s (x2 over 27s) kubelet Error: ErrImagePull
The pod definition file is below:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  containers:
  - name: nginx
    image: nginx
Similarly I tried mysql instead of nginx, and I'm getting the events below for the mysql pod. It looks like it is able to pull the image but not able to run the pod:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned default/mysql to worker1.domain.com
Normal Pulled 22m kubelet Successfully pulled image "mysql" in 54.067277637s
Normal Pulled 22m kubelet Successfully pulled image "mysql" in 18.227802182s
Normal Pulled 21m kubelet Successfully pulled image "mysql" in 13.511077504s
Normal Created 20m (x4 over 22m) kubelet Created container mysql
Normal Started 20m (x4 over 22m) kubelet Started container mysql
Normal Pulled 20m kubelet Successfully pulled image "mysql" in 11.998942705s
Normal Pulling 20m (x5 over 23m) kubelet Pulling image "mysql"
Normal Pulled 20m kubelet Successfully pulled image "mysql" in 13.68976309s
Normal Pulled 18m kubelet Successfully pulled image "mysql" in 16.584670292s
Warning BackOff 3m12s (x80 over 22m) kubelet Back-off restarting failed container
Below is the pod status:
NAME READY STATUS RESTARTS AGE
mysql 0/1 CrashLoopBackOff 8 (4m51s ago) 23m
nginx 0/1 ImagePullBackOff 0 3m26s
You do not really need any extra config to pull images from a public image registry.
The containers/image library is used for pulling images from registries. Currently, it supports Docker schema 2/version 1 as well as schema 2/version 2. It also passes all Docker and Kubernetes tests.
cri-container-images
So just mention the image with the right URI and it should work.
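If the short name keeps resolving to the wrong place, one option (a sketch, not required configuration) is to reference the image by its fully-qualified URI so CRI-O does not have to rely on its search registries; docker.io/library/ is where the official nginx image lives, so no credentials are needed:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:latest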

How can I diagnose why a k8s pod keeps restarting?

I deploy Elasticsearch to minikube with the configuration file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
I run the command kubectl apply -f es.yml to deploy the elasticsearch cluster.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-fb9b44948-bchh2 1/1 Running 5 6m23s
The elasticsearch pod keeps restarting every few minutes. When I run the kubectl describe pod command, I can see these events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m11s default-scheduler Successfully assigned default/elasticsearch-fb9b44948-bchh2 to minikube
Normal Pulled 3m18s (x5 over 7m11s) kubelet Container image "elasticsearch:7.10.1" already present on machine
Normal Created 3m18s (x5 over 7m11s) kubelet Created container elasticsearch
Normal Started 3m18s (x5 over 7m10s) kubelet Started container elasticsearch
Warning BackOff 103s (x11 over 5m56s) kubelet Back-off restarting failed container
The last event is Back-off restarting failed container, but I don't know why it restarts the pod. Is there any way I can check why it keeps restarting?
The first step (kubectl describe pod) you've already done. As a next step I suggest checking the container logs: kubectl logs <pod_name>. 99% of the time you'll get the reason from the logs in this case (I bet on a bootstrap check failure).
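Since the container is restarting, the current instance may have been killed before it produced any output, so it is worth also pulling the previous instance's logs; a small sketch, using the pod name from the question:
kubectl logs elasticsearch-fb9b44948-bchh2
# logs from the previous (crashed) container instance
kubectl logs elasticsearch-fb9b44948-bchh2 --previous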
When neither describe pod nor the logs say anything about the error, I get into the container with exec: kubectl exec -it <pod_name> -c <container_name> sh. With this you'll get a shell inside the container (if there IS a shell binary in it, of course) and can use it to investigate the problem manually. Note that to keep a failing container alive you may need to change command and args to something like this:
command:
- /bin/sh
- -c
args:
- cat /dev/stdout
Be sure to disable probes when doing this. A container may restart if its liveness probe fails; you will see that in kubectl describe pod if it happens. Since your snippet doesn't have any probes specified, you can skip this.
Checking the logs of the pod using kubectl logs podname gives a clue about what went wrong:
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
Check this post for a solution
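For reference, here is a minimal sketch of how those two bootstrap checks are commonly addressed on a single-node dev cluster; this is an illustration based on the errors above, not the content of the linked post. It raises vm.max_map_count with a privileged init container and switches Elasticsearch to single-node discovery, so the Deployment's pod template would gain something like:
    spec:
      initContainers:
      - name: sysctl
        image: busybox
        # node-level setting, hence the privileged security context
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        env:
        # disables the production discovery bootstrap check
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
        - containerPort: 9300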

minikube error trying to reach 172.17.0.4:8080 on osx

I'm doing the kubernetes tutorial locally with minikube on osx. In https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/ step 3, I get the error
% curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
Error: 'dial tcp 172.17.0.4:8080: getsockopt: connection refused'
Trying to reach: 'http://172.17.0.4:8080/'%
Any idea why this doesn't work locally? The simpler request does work:
% curl http://localhost:8001/version
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.0",
"gitCommit": "fc32d2f3698e36b93322a3465f63a14e9f0eaead",
"gitTreeState": "clean",
"buildDate": "2018-03-26T16:44:10Z",
"goVersion": "go1.9.3",
"compiler": "gc",
"platform": "linux/amd64"
info
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kubernetes-bootcamp-74f58d6b87-ntn5r 0/1 ImagePullBackOff 0 21h
logs
$ kubectl logs $POD_NAME
Error from server (BadRequest): container "kubernetes-bootcamp" in pod "kubernetes-bootcamp-74f58d6b87-w4zh8" is waiting to start: trying and failing to pull image
So the run command creates the deployment, but the pod crashes? Why?
$ kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
I can pull the image without a problem
$ docker pull gcr.io/google-samples/kubernetes-bootcamp:v1
v1: Pulling from google-samples/kubernetes-bootcamp
5c90d4a2d1a8: Pull complete
ab30c63719b1: Pull complete
29d0bc1e8c52: Pull complete
d4fe0dc68927: Pull complete
dfa9e924f957: Pull complete
Digest: sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Status: Downloaded newer image for gcr.io/google-samples/kubernetes-bootcamp:v1
describe
$ kubectl describe pods
Name: kubernetes-bootcamp-74f58d6b87-w4zh8
Namespace: default
Node: minikube/10.0.2.15
Start Time: Tue, 24 Jul 2018 15:05:00 -0400
Labels: pod-template-hash=3091482643
run=kubernetes-bootcamp
Annotations: <none>
Status: Pending
IP: 172.17.0.3
Controlled By: ReplicaSet/kubernetes-bootcamp-74f58d6b87
Containers:
kubernetes-bootcamp:
Container ID:
Image: gci.io/google-samples/kubernetes-bootcamp:v1
Image ID:
Port: 8080/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wp28q (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-wp28q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wp28q
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 23m (x281 over 1h) kubelet, minikube Back-off pulling image "gci.io/google-samples/kubernetes-bootcamp:v1"
Warning Failed 4m (x366 over 1h) kubelet, minikube Error: ImagePullBackOff
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
Back to your issue. Have you checked whether you provided enough resources to run the Minikube environment? You may try to run minikube and force it to allocate more memory:
minikube start --memory 4096
For further analysis, please provide information about the resources dedicated to this installation and the type of hypervisor you use.
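Note that the memory flag only takes effect when the Minikube VM is created, so an existing VM has to be recreated for the new allocation to apply; a small sketch:
minikube stop
minikube delete          # discards the old VM and its allocation
minikube start --memory 4096 --cpus 2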
Sounds like a networking issue. Your VM is unable to pull the images from gcr.io:443.
Here's what your kubectl describe pods kubernetes-bootcamp-xxx should look like:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned kubernetes-bootcamp-5c69669756-xbbmn to minikube
Normal SuccessfulMountVolume 5m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-cfq65"
Normal Pulling 5m kubelet, minikube pulling image "gcr.io/google-samples/kubernetes-bootcamp:v1"
Normal Pulled 5m kubelet, minikube Successfully pulled image "gcr.io/google-samples/kubernetes-bootcamp:v1"
Normal Created 5m kubelet, minikube Created container
Normal Started 5m kubelet, minikube Started container
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-cfq65"
Normal SandboxChanged 1m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulled 1m kubelet, minikube Container image "gcr.io/google-samples/kubernetes-bootcamp:v1" already present on machine
Normal Created 1m kubelet, minikube Created container
Normal Started 1m kubelet, minikube Started container
Try this from your host to narrow down whether it's a networking issue with your VM or with your host machine:
$ docker pull gcr.io/google-samples/kubernetes-bootcamp:v1
v1: Pulling from google-samples/kubernetes-bootcamp
5c90d4a2d1a8: Pull complete
ab30c63719b1: Pull complete
29d0bc1e8c52: Pull complete
d4fe0dc68927: Pull complete
dfa9e924f957: Pull complete
Digest: sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Status: Downloaded newer image for gcr.io/google-samples/kubernetes-bootcamp:v1
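Since the pull succeeds on the host, the complementary check is the same pull from inside the Minikube VM, which is where the kubelet actually pulls images; a small sketch:
# open a shell inside the Minikube VM and pull with its Docker daemon
minikube ssh
docker pull gcr.io/google-samples/kubernetes-bootcamp:v1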

After a successful install of CAM, the "cam-mongo" pod went down

After a successful deployment of CAM (it was up and running for a couple of days), the "cam-mongo" microservice suddenly went down. Checking the pod with the two commands below only tells you Error syncing pod:
1) kubectl describe pods -n services
Warning BackOff 3s (x3 over 18s) kubelet, 9.109.191.126 Back-off restarting failed container
Warning FailedSync 3s (x3 over 18s) kubelet, 9.109.191.126 Error syncing pod
With this information you don't know what went wrong or how to fix it.
2) kubectl -n services logs cam-mongo-5c89fcccbd-r2hv4 -p (with the -p option you can grab the logs from the previously running container)
The above command shows the following:
exception in initAndListen: 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a mongod instance already running?, terminating
Conclusion:
While starting the container inside the "cam-mongo" pod, mongod was unable to acquire the existing /data/db/mongod.lock file, so the pod never comes up and you cannot access CAM.
After further analysis I resolved the issue as follows:
1) Spin up a container and mount the cam-mongo volume within it.
To do this I used the pod creation YAML below, which mounts the PV where /data/db/ is present.
kind: Pod
apiVersion: v1
metadata:
  name: mongo-troubleshoot-pod
spec:
  volumes:
  - name: cam-mongo-pv
    persistentVolumeClaim:
      claimName: cam-mongo-pv
  containers:
  - name: mongo-troubleshoot
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/data/db"
      name: cam-mongo-pv
Run: kubectl -n services create -f ./mongo-troubleshoot-pod.yaml
2) Use "docker exec -it /bin/bash " (look for it from "kubectl -n services describe po/mongo-troubleshoot-pod-xxxxx" info)
cd /data/db
rm mongod.lock
rm WiredTiger.lock
3) Delete the pod you created for troubleshooting.
4) Delete the corrupted cam-mongo pod:
kubectl -n services delete pod <cam-mongo-pod-name>
It fixed the issue.

Running elasticsearch on Google Cloud Kubernetes ends in CrashLoopBackOff

I'm trying to run the elasticsearch6 container on a Google Cloud instance. Unfortunately the container always ends up in CrashLoopBackOff.
This is what I did:
install gcloud and kubectl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://packages.cloud.google.com/apt cloud-sdk-$(lsb_release -c -s) main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update && sudo apt-get install google-cloud-sdk kubectl
configure gcloud
gcloud init
gcloud config set compute/zone europe-west3-a # For Frankfurt
create kubernetes cluster
gcloud container clusters create elasticsearch-cluster --machine-type=f1-micro --num-nodes=3
Activate pod
kubectl create -f pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: test-elasticsearch
  labels:
    name: test-elasticsearch
spec:
  containers:
  - image: launcher.gcr.io/google/elasticsearch6
    name: elasticsearch
After this I get the status:
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-elasticsearch 0/1 CrashLoopBackOff 10 31m
A kubectl logs test-elasticsearch does not show any output.
And here is the output of kubectl describe po test-elasticsearch, with some info XXX'd out.
Name: test-elasticsearch
Namespace: default
Node: gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv/XX.XXX.X.X
Start Time: Sat, 12 May 2018 14:54:36 +0200
Labels: name=test-elasticsearch
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container elasticsearch
Status: Running
IP: XX.XX.X.X
Containers:
elasticsearch:
Container ID: docker://bb9d093df792df072a762973066d504a4e7d73b0e87d0236a94c3e8b972d9c41
Image: launcher.gcr.io/google/elasticsearch6
Image ID: docker-pullable://launcher.gcr.io/google/elasticsearch6#sha256:1ddafd5293dbec8fb73eabffa29614916e4933bb057db50231084d89f4a0b3fa
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Sat, 12 May 2018 14:55:06 +0200
Finished: Sat, 12 May 2018 14:55:09 +0200
Ready: False
Restart Count: 2
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-XXXXX (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-XXXXX:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-XXXXX
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned test-elasticsearch to gke-elasticsearch-cluste-def
Normal SuccessfulMountVolume 51s kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv MountVolume.SetUp succeeded for volume "default-token-XXXXX"
Normal Pulling 22s (x3 over 49s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv pulling image "launcher.gcr.io/google/elasticsearch6"
Normal Pulled 22s (x3 over 49s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Successfully pulled image "launcher.gcr.io/google/elasticsearch6"
Normal Created 22s (x3 over 48s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Created container
Normal Started 21s (x3 over 48s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Started container
Warning BackOff 4s (x3 over 36s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Back-off restarting failed container
Warning FailedSync 4s (x3 over 36s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Error syncing pod
The problem was the f1-micro instance: it doesn't have enough memory to run Elasticsearch. Only after upgrading to an instance with 4 GB of memory does it work. Unfortunately this is way too expensive for me, so I have to look for something else.
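For completeness, a hedged sketch of recreating the cluster with a larger machine type (e2-medium with 4 GB is only an illustrative choice; any type with enough memory for Elasticsearch works):
gcloud container clusters delete elasticsearch-cluster
gcloud container clusters create elasticsearch-cluster \
  --machine-type=e2-medium \
  --num-nodes=3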
