I've followed the instructions outlined on this page and pushed a local image to a local three-node Minikube cluster with the registry add-on enabled and the cluster started with the insecure-registry flag, but I get the following error when I try to create a Pod with the image:
Normal Pulling 9m18s (x4 over 10m) kubelet Pulling image "192.168.99.100:5000/myapp:v1"
Warning Failed 9m18s (x4 over 10m) kubelet Failed to pull image "192.168.99.100:5000/myapp:v1": rpc error: code = Unknown desc = Error response from daemon: Get "https://192.168.99.100:5000/v2/": http: server gave HTTP response to HTTPS client
Any advice on resolving this would be greatly appreciated.
My Minikube (v1.23.2) is on macOS Big Sur 11.6, using the VirtualBox driver. It is a three-node cluster. My Docker Desktop version is 20.10.8.
These are the steps I followed:
Got my cluster's VMs' IP range: 192.168.99.0/24
Added the following entry to my Docker Desktop config:
"insecure-registries": [
  "192.168.99.0/24"
]
Started Minikube with the insecure-registry flag:
$ minikube start --insecure-registry="192.168.99.0/24"
Ran:
$ minikube addons enable registry
Tagged the image I want to push:
$ docker tag docker.io/library/myapp:v1 $(minikube ip):5000/myapp:v1
Pushed the image:
$ docker push $(minikube ip):5000/myapp:v1
The push works fine: when I exec into the registry Pod I can see the image in the filesystem. However, when I try to create a Pod using the image, I get the error mentioned above.
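As an extra sanity check (not part of the original steps), the registry's HTTP API can confirm the image really landed; this should list myapp under repositories:
$ curl http://$(minikube ip):5000/v2/_catalog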
My Pod manifest is:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: 192.168.99.100:5000/myapp:v1
    name: myapp
    imagePullPolicy: IfNotPresent
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
The issue was resolved by deleting the cluster and recreating it with the insecure-registry flag from the start. Originally I had created the cluster, stopped it, and then started it again with the insecure-registry flag; for some reason that didn't work, but starting it for the first time with the flag did.
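For reference, a sketch of the recreate sequence that worked for me, assuming the same three-node VirtualBox setup described above (the --driver and --nodes flags simply mirror that setup):
$ minikube delete
$ minikube start --driver=virtualbox --nodes=3 --insecure-registry="192.168.99.0/24"
$ minikube addons enable registry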
If you're going to be creating clusters with the registry addon a lot, it might be worth adding the flag permanently to your config. Replace the IP range with your cluster's subnet:
$ minikube config set insecure-registry "192.168.99.0/24"
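To double-check that the value was stored, minikube config view should list it:
$ minikube config view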
Related
I have a 3-node Kubernetes cluster managed with kubeadm. Previously I used kind and Minikube. When I wanted to make a deployment based on a Docker image, I just needed to run:
kind load docker-image in kind, or
minikube cache add in Minikube.
Now, when I want to make a deployment with kubeadm, I obviously get ImagePullBackOff.
Question: Is there an equivalent command to add an image to a kubeadm cluster that I just can't find, or is there an entirely different way to solve this problem?
EDIT
Maybe the question is not clear enough, so instead of deleting it I'll add more details.
I have three nodes (one control plane and two workers) with docker, kubeadm, kubelet and kubectl installed on each. One deployment in my future cluster is a machine learning module, so I need TensorFlow:
docker pull tensorflow/tensorflow
Using this image I build my own:
docker build -t mlimage:cluster -f ml.Dockerfile .
Next I prepare deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mldeployment
spec:
  selector:
    matchLabels:
      app: mldeployment
      name: mldeployment
  replicas: 1
  template:
    metadata:
      labels:
        app: mldeployment
        name: mldeployment
    spec:
      containers:
      - name: mlcontainer
        image: mlimage:cluster
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 6060
and create it:
kubectl create -f mldeployment.yaml
Now, when I run kubectl describe pod on the mldeployment pod, the events show the image pull failing with ImagePullBackOff.
In the case of Minikube or kind it was enough to simply add the image to the cluster by typing
minikube cache add ...
or
kind load docker-image ...
respectively.
The question is how to add an image from my machine to the cluster when it is managed with kubeadm. I assume there is a similar way to do that as for Minikube or kind (without any connection to Docker Hub, because everything is local).
You are getting ImagePullBackOff because with kubeadm the nodes go and check a registry for the image, while both commands, minikube cache and kind load, are for loading local images directly into the cluster.
As I now understand it, images for a cluster managed via kubeadm should be stored in a trusted registry like Docker Hub or a cloud registry. But if you want a quick solution on an isolated network, there is a possibility: a Docker registry.
There are also some ready-to-use tools, e.g. Trow, or a simpler solution.
I used the second approach and it works (the code is a bit old, so it may need some changes; these links may be helpful: change apiVersion, add label).
After those changes, first create the deployment and DaemonSet:
kubectl create -f docker-private-registry.json
kubectl create -f docker-private-registry-proxy.json
Add the localhost address to the image tag:
docker tag image:tag 127.0.0.1:5000/image:tag
Check the full name of the docker private registry pod, and forward the port (replace the x's with the exact name):
kubectl get pod
kubectl port-forward docker-private-registry-deployment-xxxxxxxxx-xxxxx 5000:5000 -n default
Open another terminal window and push the image to the private registry:
docker push 127.0.0.1:5000/image:tag
Finally, change the container image in the deployment.yaml file (add the 127.0.0.1:5000/ prefix) and create the deployment.
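For illustration, the relevant part of the deployment.yaml from the question would then look roughly like this (note that imagePullPolicy can no longer be Never, because the node now has to pull from the local registry):
    spec:
      containers:
      - name: mlcontainer
        image: 127.0.0.1:5000/mlimage:cluster
        imagePullPolicy: IfNotPresent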
This solution is insecure and vulnerable, so use it wisely, only on isolated networks for test and dev purposes.
I deployed Elasticsearch to Minikube with the configuration file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
I run the command kubectl apply -f es.yml to deploy the elasticsearch cluster.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-fb9b44948-bchh2 1/1 Running 5 6m23s
The elasticsearch pod keeps restarting every few minutes. When I run the kubectl describe pod command, I can see these events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m11s default-scheduler Successfully assigned default/elasticsearch-fb9b44948-bchh2 to minikube
Normal Pulled 3m18s (x5 over 7m11s) kubelet Container image "elasticsearch:7.10.1" already present on machine
Normal Created 3m18s (x5 over 7m11s) kubelet Created container elasticsearch
Normal Started 3m18s (x5 over 7m10s) kubelet Started container elasticsearch
Warning BackOff 103s (x11 over 5m56s) kubelet Back-off restarting failed container
The last event is Back-off restarting failed container, but I don't know why it restarts the pod. Is there any way I can check why it keeps restarting?
The first step (kubectl describe pod) you've already done. As a next step I suggest checking the container logs: kubectl logs <pod_name>. 99% of the time you'll get the reason from the logs in this case (I bet on a bootstrap check failure).
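Since the container keeps restarting, the logs from the previous attempt are often the most useful ones; for example, with the pod name from the question:
$ kubectl logs elasticsearch-fb9b44948-bchh2
$ kubectl logs elasticsearch-fb9b44948-bchh2 --previous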
When neither describe pod nor logs has anything about the error, I get into the container with exec: kubectl exec -it <pod_name> -c <container_name> sh. With this you'll get a shell inside the container (of course, only if there is a shell binary in it) and you can use it to investigate the problem manually. Note that to keep a failing container alive you may need to change command and args to something like this:
command:
- /bin/sh
- -c
args:
- cat /dev/stdout
Be sure to disable probes when doing this. A container may restart if its liveness probe fails; you will see that in kubectl describe pod if it happens. Since your snippet doesn't have any probes specified, you can skip this.
Checking the logs of the pod using kubectl logs <podname> gives a clue about what could have gone wrong.
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
Check this post for a solution
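For reference, a minimal sketch of one common way to get past both bootstrap checks in a single-node dev setup (an assumption based on the errors above, not the content of the linked post; the sysctl change is applied inside the Minikube VM and resets when the VM restarts):
$ minikube ssh -- sudo sysctl -w vm.max_map_count=262144
and, in the container spec of the Deployment above, run Elasticsearch in single-node discovery mode:
        env:
        - name: discovery.type
          value: single-node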
I'm running a Ruby app on Kubernetes with Minikube.
However, whenever I look at the logs I don't see the output I would have seen in my terminal when running the app locally.
I presume it's because it only shows stderr?
What can I do to see all types of console logs (e.g. from puts or raise)?
From looking around, is this something to do with it being in detached mode? See the related Python issue: Logs in Kubernetes Pod not showing up
Thanks.
As requested - here is the deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: someregistry
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["bundle exec rake sample:default --trace"]
        envFrom:
        - configMapRef:
            name: sample
        - secretRef:
            name: sample
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regsecret
As shown in this article, kubectl logs pod/apod should show you stdout and stderr for a pod deployed in Minikube.
By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system
Kubernetes adds:
There are two types of system components: those that run in a container and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container.
The kubelet and container runtime, for example Docker, do not run in containers.
And:
On machines with systemd, the kubelet and container runtime write to journald.
If systemd is not present, they write to .log files in the /var/log directory.
Similarly to the container logs, system component logs in the /var/log directory should be rotated.
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.
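To see this in practice on Minikube, the following commands cover the places mentioned above (the pod name is a placeholder; journalctl availability depends on the VM image):
$ kubectl logs <sample-pod-name>
$ minikube ssh -- ls /var/log/containers
$ minikube ssh -- sudo journalctl -u kubelet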
I presume it's because it only shows stderr?
Not really, unless something specific is disabled in your container or pod spec. I assume you are using Docker, so the default is to output stdout and stderr, and that's what you see when you do a kubectl logs <pod-name>.
What can I do to see all types of console logs (e.g. from puts or raise)?
You should see them in the container logs. It would help to post your pod or deployment definition.
After a successful deployment of CAM (it was up and running for a couple of days), the "cam-mongo" microservice suddenly went down, and while checking the logs for the pod using the 2 commands below you only get "Error syncing pod":
1) kubectl describe pods -n services
Warning BackOff 3s (x3 over 18s) kubelet, 9.109.191.126 Back-off restarting failed container
Warning FailedSync 3s (x3 over 18s) kubelet, 9.109.191.126 Error syncing pod
With this information alone you don't know what went wrong or how to fix it.
2) kubectl -n services logs cam-mongo-5c89fcccbd-r2hv4 -p (with the -p option you can grab the logs from the previously running container)
The above command shows the following:
exception in initAndListen: 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a mongod instance already running?, terminating
Conclusion:
While starting the container inside the "cam-mongo" pod, it was unable to use the existing /data/db/mongod.lock file, hence the pod will not come up and you cannot access CAM.
After further analysis I resolved the issue as follows:
1) Spin up a container and mount the cam-mongo volume within it.
To do this I used the pod creation YAML below, which mounts the PV concerned, where /data/db/ is present.
kind: Pod
apiVersion: v1
metadata:
  name: mongo-troubleshoot-pod
spec:
  volumes:
  - name: cam-mongo-pv
    persistentVolumeClaim:
      claimName: cam-mongo-pv
  containers:
  - name: mongo-troubleshoot
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/data/db"
      name: cam-mongo-pv
Run: kubectl -n services create -f ./mongo-troubleshoot-pod.yaml
2) Use "docker exec -it /bin/bash " (look for it from "kubectl -n services describe po/mongo-troubleshoot-pod-xxxxx" info)
cd /data/db
rm mongod.lock
rm WiredTiger.lock
3) Kill the pod you created for troubleshooting.
4) Kill the corrupted cam-mongo pod using the command below:
kubectl delete pod <cam-mongo-pod-name> -n services
It fixed the issue.
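For reference, step 2 can also be done without touching the Docker daemon on the node, by exec'ing into the troubleshooting pod directly (a sketch; it assumes the pod kept the name mongo-troubleshoot-pod and that rm is available in the image):
kubectl -n services exec mongo-troubleshoot-pod -- rm /data/db/mongod.lock /data/db/WiredTiger.lock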
Is there anything special about running ingress controllers on Kubernetes CoreOS Vagrant Multi-Machine? I followed the example but when I run kubectl -f I do not get an address.
Example:
http://kubernetes.io/v1.1/docs/user-guide/ingress.html#single-service-ingress
Setup:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
I looked at networking in kubernetes. Everything looks like it should run without further configuration.
My goal is to create a local testing environment before I build out a production platform. I'm thinking there's something about how they set up their VirtualBox networking. I'm about to dive into the CoreOS cloud-config but thought I would ask first.
UPDATE
Yes I'm running an ingress controller.
https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/nginx-alpha/rc.yaml
It runs without giving an error. It's just that when I run kubectl -f I do not get an address. I'm thinking it's one of two things:
I have to do something extra in networking for CoreOS-Kubernetes vagrant multi-node.
It's running right, but I'm pointing my localhost to the wrong IP. I'm using a 172.17.4.x IP; I also have 10.0.0.x. I can access services through the 172.17.4.x address using a NodePort, but I can't get to my Ingress.
Here is the code:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: gcr.io/google_containers/nginx-ingress:0.1
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          hostPort: 80
Update 2
Output of commands:
kubectl get pods
NAME READY STATUS RESTARTS AGE
echoheaders-kkja7 1/1 Running 0 24m
nginx-ingress-2wwnk 1/1 Running 0 25m
kubectl logs nginx-ingress-2wwnk --previous
Pod "nginx-ingress-2wwnk" in namespace "default": previous terminated container "nginx" not found
kubectl exec nginx-ingress-2wwnk -- cat /etc/nginx/nginx.conf
events {
worker_connections 1024;
}
http {
}
I'm running an echoheaders service on NodePort. When I enter the node IP and port in my browser, I get that just fine.
I restarted all nodes in virtualbox too.
With a lot of help from the Kubernetes IRC and Slack, I fixed this a while back. If I remember correctly, I had the ingress service listening on a port that was already being used, I think by Vagrant. These commands really help:
kubectl get pod <nginx-ingress pod> -o json
kubectl exec <nginx-ingress pod> -- cat /etc/nginx/nginx.conf
kubectl get pods -o wide
kubectl logs <nginx-ingress pod> --previous
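Since the root cause was a host port that was already in use, a quick way to confirm a conflict on a node is something like the following (a sketch; the node name is a placeholder and ss may need to be swapped for netstat depending on the image):
vagrant ssh <node-name> -c "sudo ss -tlnp | grep ':80'"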