I deployed an Elasticsearch cluster to EKS; below is the spec:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elk
spec:
  version: 7.15.2
  serviceAccountName: docker-sa
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: node
    count: 3
    config:
      ...
I can see it has been deployed correctly and all pods are running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elk-es-node-0 1/1 Running 0 19h
elk-es-node-1 1/1 Running 0 19h
elk-es-node-2 1/1 Running 0 11h
But I can't restart the Elasticsearch deployment:
$ kubectl rollout restart Elasticsearch elk-es-node
Error from server (NotFound): elasticsearches.elasticsearch.k8s.elastic.co "elk-es-node" not found
Elasticsearch is backed by a StatefulSet, so I tried to restart the StatefulSet:
$ kubectl rollout restart statefulset elk-es-node
statefulset.apps/elk-es-node restarted
The above command says restarted, but the actual pods are not restarting.
What is the right way to restart a custom kind in Kubernetes?
Use kubectl get all to identify whether the resource created is a Deployment or a StatefulSet.
Use -n <namespace> along with the above command if you are working in a specific namespace.
Assuming you are using a StatefulSet, issue the command below to see how it is configured:
kubectl get statefulset <statefulset-name> -o yaml > statefulsetContent.yaml
This will create a YAML file named statefulsetContent.yaml in the same directory.
You can use it to explore the different options configured in the StatefulSet.
Check .spec.updateStrategy in the YAML file; this tells you its update strategy.
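For example, a quick way to see just the strategy type (a sketch; substitute your own StatefulSet name):
kubectl get statefulset elk-es-node -o jsonpath='{.spec.updateStrategy.type}'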
Below is from the official documentation
There are two possible values:
OnDelete
When a StatefulSet's .spec.updateStrategy.type is set to OnDelete, the StatefulSet controller will not automatically update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a StatefulSet's .spec.template.
RollingUpdate
The RollingUpdate update strategy implements automated, rolling update for the Pods in a StatefulSet. This is the default update strategy.
As a workaround, you can try to scale the StatefulSet down and back up:
kubectl scale sts <statefulset-name> --replicas=<count>
With ECK as the operator, you do not need to use rollout restart. Apply your updated Elasticsearch spec and the operator will perform a rolling update for you. If for any reason you need to restart a pod, use kubectl delete pod <es pod> -n <your es namespace> to remove the pod, and the operator will spin up a new one for you.
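For example, a minimal sketch of restarting the nodes one at a time, assuming the spec above is deployed in the default namespace:
kubectl delete pod elk-es-node-0 -n default
kubectl get elasticsearch elk -n default -w
Wait until HEALTH reports green again before deleting the next pod; the operator recreates each deleted pod for you.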
I’ve followed the instructions outlined on this page and pushed a local image to a local 3-node Minikube cluster with the registry add-on enabled and the cluster started with the insecure-registry flag, but I get the following error when I try to create a Pod with the image:
Normal Pulling 9m18s (x4 over 10m) kubelet Pulling image "192.168.99.100:5000/myapp:v1"
Warning Failed 9m18s (x4 over 10m) kubelet Failed to pull image "192.168.99.100:5000/myapp:v1": rpc error: code = Unknown desc = Error response from daemon: Get "https://192.168.99.100:5000/v2/": http: server gave HTTP response to HTTPS client
Any advice on resolving this would be greatly appreciated
My Minikube (v1.23.2) is on macOS (Big Sur 11.6) using the VirtualBox driver. It is a three-node cluster. My Docker Desktop version is 20.10.8.
These are the steps I followed:
Get my cluster’s VMs’ IP range - 192.168.99.0/24
Added the following entry to my Docker Desktop config:
insecure-registries": [
"192.168.99.0/24"
]
Started Minikube with insecure registries flag:
$ minikube start --insecure-registry="192.168.99.0/24"
Run:
$ minikube addons enable registry
Tagged the image I want to push:
$ docker tag docker.io/library/myapp:v1 $(minikube ip):5000/myapp:v1
Pushed the image:
$ docker push $(minikube ip):5000/myapp:v1
The push works ok - when I exec onto the registry Pod I can see the image in the filesystem. However, when I try to create a Pod using the image, I get the error mentioned above.
My Pod manifest is:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: 192.168.99.100:5000/myapp:v1
    name: myapp
    imagePullPolicy: IfNotPresent
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
The issue was resolved by deleting the cluster and recreating it with the insecure-registry flag from the start. Originally I had created the cluster, stopped it, and then started it again with the insecure-registry flag; for some reason this didn't work, but starting it for the first time with the flag did.
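In case it helps, a sketch of the sequence that worked (assuming the same 192.168.99.0/24 subnet):
$ minikube delete
$ minikube start --insecure-registry="192.168.99.0/24"
$ minikube addons enable registry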
If you're going to be creating clusters with the registry addon a lot, it might be worth adding the flag permanently to your config. Replace the IP with your cluster's subnet:
$ minikube config set insecure-registry "192.168.99.0/24"
I'm doing a perf test for a web server deployed on an EKS cluster. I'm invoking the server using JMeter with different conditions (varying thread count, payload size, etc.).
So I want to record Kubernetes perf data with timestamps so that I can analyze it alongside my JMeter output (JTL).
I have been digging through the internet to find a way to record Kubernetes perf data, but I was unable to find a proper way to do it.
Can anyone please suggest a standard way to do this?
Note: I have a multi-container pod also.
In line with @Jonas' comment:
This is the quickest way of installing Prometheus in your Kubernetes cluster. I've added the details in this answer because it was impossible to put the commands in a readable format in a comment.
Add the Bitnami Helm repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install the Helm chart for Prometheus:
helm install my-release bitnami/kube-prometheus
Installation output would be:
C:\Users\ameena\Desktop\shine\Article\K8\promethus>helm install my-release bitnami/kube-prometheus
NAME: my-release
LAST DEPLOYED: Mon Apr 12 12:44:13 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Watch the Prometheus Operator Deployment status using the command:
kubectl get deploy -w --namespace default -l app.kubernetes.io/name=kube-prometheus-operator,app.kubernetes.io/instance=my-release
Watch the Prometheus StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-prometheus,app.kubernetes.io/instance=my-release
Prometheus can be accessed via port "9090" on the following DNS name from within your cluster:
my-release-kube-prometheus-prometheus.default.svc.cluster.local
To access Prometheus from outside the cluster execute the following commands:
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Watch the Alertmanager StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-alertmanager,app.kubernetes.io/instance=my-release
Alertmanager can be accessed via port "9093" on the following DNS name from within your cluster:
my-release-kube-prometheus-alertmanager.default.svc.cluster.local
To access Alertmanager from outside the cluster execute the following commands:
echo "Alertmanager URL: http://127.0.0.1:9093/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-alertmanager 9093:9093
Follow the commands to forward the UI to localhost.
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Open the UI in browser: http://127.0.0.1:9090/classic/graph
Annotate the pods so that their metrics are scraped:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
In the UI, apply the appropriate filters and start observing the crucial parameters such as memory, CPU, etc. The UI supports autocomplete, so it will not be difficult to figure things out.
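For example, two queries that are typically useful for this kind of analysis (a sketch; the metric names assume the standard cAdvisor metrics that kube-prometheus scrapes, and the namespace is illustrative):
sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)
sum(container_memory_working_set_bytes{namespace="default"}) by (pod)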
Regards
I have a 3-node Kubernetes cluster managed with kubeadm. Previously I used kind and minikube. When I wanted to make a deployment based on a Docker image, I just needed to run:
kind load docker-image in kind, or
minikube cache add in minikube.
Now, when I want to make a deployment with kubeadm, I obviously get ImagePullBackOff.
Question: Is there an equivalent command to add an image to a kubeadm cluster that I can't find, or is there an entirely different way to solve this problem?
EDIT
Maybe the question is not clear enough, so instead of deleting it I'll try to add more details.
I have three nodes (one control plane and two workers) with docker, kubeadm, kubelet and kubectl installed on each. One deployment of my future cluster is a machine learning module, so I need TensorFlow:
docker pull tensorflow/tensorflow
Using this image I build my own:
docker build -t mlimage:cluster -f ml.Dockerfile .
Next I prepare deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mldeployment
spec:
  selector:
    matchLabels:
      app: mldeployment
      name: mldeployment
  replicas: 1
  template:
    metadata:
      labels:
        app: mldeployment
        name: mldeployment
    spec:
      containers:
      - name: mlcontainer
        image: mlimage:cluster
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 6060
and create it:
kubectl create -f mldeployment.yaml
Now, when I type
kubectl describe pod
for mldeployment, I see these events:
With minikube or kind it was enough to simply add the image to the cluster by typing
minikube cache add ...
and
kind load docker-image ...
respectively.
The question is how to add an image from my machine to the cluster when it is managed with kubeadm. I assume there is a similar way to do that as with minikube or kind (without any connection to Docker Hub, because everything is local).
You are getting ImagePullBackOff because the kubeadm cluster tries to pull the image from a registry.
Both commands you mention, minikube cache and kind load, are for loading local images into the cluster.
As I now understand, images for a cluster managed via kubeadm should be stored in a trusted registry like Docker Hub or a cloud registry. But if you want a quick solution for isolated networks, there is a possibility: a Docker registry.
There are also some ready-to-use tools, e.g. Trow, or a simpler solution.
I used the second approach and it works (the code is a bit old, so it may need some changes; these links may be helpful: change apiVersion, add label).
After those changes, first create the deployment and DaemonSet:
kubectl create -f docker-private-registry.json
kubectl create -f docker-private-registry-proxy.json
Tag the image with the localhost registry address:
docker tag image:tag 127.0.0.1:5000/image:tag
Check the full name of the Docker private registry pod and forward the port (replace the x's with the exact pod name):
kubectl get pod
kubectl port-forward docker-private-registry-deployment-xxxxxxxxx-xxxxx 5000:5000 -n default
Open another terminal window and push the image to the private registry:
docker push 127.0.0.1:5000/image:tag
Finally, change the container image in the deployment.yaml file (add the 127.0.0.1:5000/ prefix) and create the deployment.
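For example, the containers section of the deployment.yaml above would then look roughly like this (a sketch; note that imagePullPolicy: Never has to be dropped or relaxed, otherwise the image will never be pulled from the registry):
      containers:
      - name: mlcontainer
        image: 127.0.0.1:5000/mlimage:cluster
        ports:
        - name: http
          containerPort: 6060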
This solution is very unsafe and vulnerable, so use it wisely and only in isolated networks for test and dev purposes.
I deployed Elasticsearch to minikube with the configuration file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
I ran the command kubectl apply -f es.yml to deploy the Elasticsearch cluster.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-fb9b44948-bchh2 1/1 Running 5 6m23s
The Elasticsearch pod keeps restarting every few minutes. When I run the kubectl describe pod command, I can see these events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m11s default-scheduler Successfully assigned default/elasticsearch-fb9b44948-bchh2 to minikube
Normal Pulled 3m18s (x5 over 7m11s) kubelet Container image "elasticsearch:7.10.1" already present on machine
Normal Created 3m18s (x5 over 7m11s) kubelet Created container elasticsearch
Normal Started 3m18s (x5 over 7m10s) kubelet Started container elasticsearch
Warning BackOff 103s (x11 over 5m56s) kubelet Back-off restarting failed container
The last event is Back-off restarting failed container, but I don't know why it restarts the pod. Is there any way I can check why it keeps restarting?
The first step (kubectl describe pod) you've already done. As a next step, I suggest checking the container logs: kubectl logs <pod_name>. In 99% of cases you'll get the reason from the logs (I bet on a bootstrap check failure).
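If the container has already restarted (as in your case), the logs of the previous instance are usually the interesting ones:
kubectl logs <pod_name> --previous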
When neither describe pod nor the logs have anything about the error, I get into the container with exec: kubectl exec -it <pod_name> -c <container_name> sh. With this you'll get a shell inside the container (of course, if there IS a shell binary in it) and you can use it to investigate the problem manually. Note that to keep the failing container alive you may need to change command and args to something like this:
command:
- /bin/sh
- -c
args:
- cat /dev/stdout
Be sure to disable probes when doing this. A container may restart if its liveness probe fails; you will see that in kubectl describe pod if it happens. Since your snippet doesn't have any probes specified, you can skip this.
Checking the logs of the pod using kubectl logs <podname> gives a clue about what went wrong:
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
Check this post for a solution
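For reference, a minimal sketch of the usual fixes for those two checks in a dev setup like this (assuming the Deployment above; the values are illustrative):
Raise vm.max_map_count on the minikube node:
minikube ssh -- sudo sysctl -w vm.max_map_count=262144
Run Elasticsearch in single-node mode by adding an environment variable to the container spec:
        env:
        - name: discovery.type
          value: single-node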
Is there anything special about running ingress controllers on Kubernetes CoreOS Vagrant Multi-Machine? I followed the example but when I run kubectl -f I do not get an address.
Example:
http://kubernetes.io/v1.1/docs/user-guide/ingress.html#single-service-ingress
Setup:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
I looked at networking in kubernetes. Everything looks like it should run without further configuration.
My goal is to create a local testing environment before I build out a production platform. I'm thinking there's something about how they set up their VirtualBox networking. I'm about to dive into the CoreOS cloud-config, but thought I would ask first.
UPDATE
Yes I'm running an ingress controller.
https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/nginx-alpha/rc.yaml
It runs without giving an error. It's just that when I run kubectl -f I do not get an address. I'm thinking it's one of two things:
I have to do something extra in networking for the CoreOS-Kubernetes Vagrant multi-node setup.
It's running right, but I'm pointing my localhost to the wrong IP. I'm using a 172.17.4.x IP; I also have 10.0.0.x. I can access services through the 172.17.4.x address using a NodePort, but I can't get to my Ingress.
Here is the code:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: gcr.io/google_containers/nginx-ingress:0.1
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          hostPort: 80
Update 2
Output of commands:
kubectl get pods
NAME READY STATUS RESTARTS AGE
echoheaders-kkja7 1/1 Running 0 24m
nginx-ingress-2wwnk 1/1 Running 0 25m
kubectl logs nginx-ingress-2wwnk --previous
Pod "nginx-ingress-2wwnk" in namespace "default": previous terminated container "nginx" not found
kubectl exec nginx-ingress-2wwnk -- cat /etc/nginx/nginx.conf
events {
  worker_connections 1024;
}
http {
}
I'm running an echoheaders service on NodePort. When I type the node IP and port in my browser, I get that just fine.
I restarted all the nodes in VirtualBox too.
With a lot of help from the Kubernetes IRC and Slack channels, I fixed this a while back. If I remember correctly, I had the ingress service listening on a port that was already being used, I think by Vagrant. These commands really helped:
kubectl get pod <nginx-ingress pod> -o json
kubectl exec <nginx-ingress pod> -- cat /etc/nginx/nginx.conf
kubectl get pods -o wide
kubectl logs <nginx-ingress pod> --previous
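If the conflict is on the host port, one way out is to move the controller to a free port in the ReplicationController above, roughly like this (the port value is illustrative):
        ports:
        - containerPort: 80
          hostPort: 8080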