Google Anthos Bare Metal - Add Node

I'm trying to add nodes to my existing Anthos Bare Metal Kubernetes cluster (Anthos Bare Metal - Add Node / Resize Cluster).
As I understand it, I just add the new nodes under the NodePool and run the bmctl update command:
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-anthos-prod-cluster
spec:
  clusterName: anthos-prod-cluster
  nodes:
  - address: 10.0.14.66
  - address: 10.0.14.67
  - address: 10.0.14.68
  - address: 10.0.14.72
  - address: 10.0.14.73
The new nodes are stuck in the RECONCILING state, and upon checking the logs I see the messages below:
Warning  FailedScheduling  6m5s (x937 over 17h)  default-scheduler  0/6 nodes are available: 1 node(s) had taint {client.ip.colocate: NoIngress}, that the pod didn't tolerate, 2 node(s) had taint {client.ip.colocate: SameInstance}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.
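If the RECONCILING state is caused by workload pods that cannot schedule (as the event above suggests), those pods need tolerations for the custom client.ip.colocate taint. A minimal sketch, assuming the taint effect is NoSchedule (the effect is not shown in the event output):

```yaml
# Pod spec fragment (sketch). The NoSchedule effect is an assumption;
# check the real taint with: kubectl describe node <node-name>
tolerations:
- key: "client.ip.colocate"
  operator: "Exists"
  effect: "NoSchedule"
```

The "didn't match Pod's node affinity/selector" part for the other 3 nodes would separately require matching labels on the new nodes (kubectl label node ...).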
Did I miss a step?
Could someone point me to where I should start checking to fix this?
Appreciate any help.
Thank you.
-MD

Related

ElasticSearch CrashLoopBackoff when deploying with ECK in Kubernetes OKD 4.11

I am running Kubernetes using OKD 4.11 (on vSphere) and have validated the basic functionality (including dynamic volume provisioning) using applications like nginx.
I also applied
oc adm policy add-scc-to-group anyuid system:authenticated
to allow authenticated users to use anyuid (which seems to have been required to deploy the nginx example I was testing with).
Then I installed ECK using this quickstart with kubectl to install the CRD and RBAC manifests. This seems to have worked.
Then I deployed the most basic ElasticSearch quickstart example with kubectl apply -f quickstart.yaml using this manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
The deployment proceeds as expected, pulling image and starting container, but ends in a CrashLoopBackoff with the following error from ElasticSearch at the end of the log:
"elasticsearch.cluster.name":"quickstart", "error.type":"java.lang.IllegalStateException", "error.message":"failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path?"
Looking into the storage, the PV and PVC are created successfully, the output of kubectl get pv,pvc,sc -A -n my-namespace is:
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                      STORAGECLASS   REASON   AGE
persistentvolume/pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241   1Gi        RWO            Delete           Bound    my-namespace/elasticsearch-data-quickstart-es-default-0   thin                    41m

NAMESPACE      NAME                                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-namespace   persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0   Bound    pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241   1Gi        RWO            thin           41m

NAMESPACE   NAME                                          PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/thin (default)   kubernetes.io/vsphere-volume   Delete          Immediate              false                  19d
            storageclass.storage.k8s.io/thin-csi         csi.vsphere.vmware.com         Delete          WaitForFirstConsumer   true                   19d
Looking at the pod YAML, it appears that the volume is correctly attached:
volumes:
- name: elasticsearch-data
  persistentVolumeClaim:
    claimName: elasticsearch-data-quickstart-es-default-0
- name: downward-api
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels
    defaultMode: 420
....
volumeMounts:
...
- name: elasticsearch-data
  mountPath: /usr/share/elasticsearch/data
I cannot understand why the volume would be read-only or rather why ES cannot create the lock.
I did find this similar issue, but I am not sure how to apply the UID permissions (in general I am fairly naive about the way permissions work in OKD) when working with ECK.
Does anyone with deeper K8s / OKD or ECK/ElasticSearch knowledge have an idea how to better isolate and/or resolve this issue?
Update: I believe this has something to do with this issue and am researching the options related to OKD.
For posterity: ECK starts an init container that should take care of the chown on the data volume, but it can only do so if it is running as root.
The resolution for me was documented here:
https://repo1.dso.mil/dsop/elastic/elasticsearch/elasticsearch/-/issues/7
The manifest now looks like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    # run init container as root to chown the volume to uid 1000
    podTemplate:
      spec:
        securityContext:
          runAsUser: 1000
          runAsGroup: 0
        initContainers:
        - name: elastic-internal-init-filesystem
          securityContext:
            runAsUser: 0
            runAsGroup: 0
And the pod starts up and can write to the volume as uid 1000.
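As an aside (an assumption on my part, not something from the linked issue): where the SCC in use permits it, a pod-level fsGroup can achieve the same result without running the init container as root, since the kubelet then chowns the volume on mount:

```yaml
# Sketch: nodeSet podTemplate fragment; uid/gid 1000 assumed to match the elasticsearch user
podTemplate:
  spec:
    securityContext:
      runAsUser: 1000
      fsGroup: 1000
```

Whether this is allowed depends on the SCC assigned to the service account, so the root init container may still be needed on a default OKD setup.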

Fix elasticsearch broken cluster within kubernetes

I deployed an elasticsearch cluster with official Helm chart (https://github.com/elastic/helm-charts/tree/master/elasticsearch).
There are 3 Helm releases:
master (3 nodes)
client (1 node)
data (2 nodes)
The cluster was running fine. I did a crash test by removing the master release and re-creating it.
After that, the master nodes are OK, but the data nodes complain:
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid xeQ6IVkDQ2es1CO2yZ_7rw than local cluster uuid 9P9ZGqSuQmy7iRDGcit5fg, rejecting
which is normal because master nodes are new.
How can I fix data nodes cluster state without removing data folder?
Edit:
I know the reason why it is broken, and I know a basic solution is to remove the data folder and restart the node (as I can see on the Elastic forum, there are lots of similar questions without answers). But I am looking for a production-aware solution, maybe with the https://www.elastic.co/guide/en/elasticsearch/reference/current/node-tool.html tool?
Using the elasticsearch-node utility, it's possible to reset the cluster state, so the fresh node can join another cluster.
The tricky part is using this utility binary with Docker, because the Elasticsearch server must be stopped!
Solution with Kubernetes:
Stop the pods by scaling the StatefulSet to 0: kubectl scale sts data-nodes --replicas=0
Create a k8s Job that resets the cluster state, with the data volume attached
Apply the Job for each PVC
Rescale the StatefulSet and enjoy!
job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: test-fix-cluster-m[0-3]
spec:
  template:
    spec:
      containers:
      - args:
        - -c
        - yes | elasticsearch-node detach-cluster; yes | elasticsearch-node remove-customs '*'
        # uncomment for at least 1 PVC
        #- yes | elasticsearch-node unsafe-bootstrap -v
        command:
        - /bin/sh
        image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
        name: elasticsearch
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: es-data
      restartPolicy: Never
      volumes:
      - name: es-data
        persistentVolumeClaim:
          claimName: es-test-master-es-test-master-[0-3]
If you are interested, here the code behind unsafe-bootstrap: https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/cluster/coordination/UnsafeBootstrapMasterCommand.java#L83
I have written a small story at https://medium.com/#thomasdecaux/fix-broken-elasticsearch-cluster-405ad67ee17c.

Service discovery does not discover multiple nodes

I have an Elasticsearch cluster on Kubernetes on AWS. I have used the UPMC operator and the readonlyrest security plugin.
I started my cluster by passing YAML to the UPMC operator, with 3 data / master / ingest nodes.
However, when I query localhost:9200/_nodes, I see only 1 node assigned. Service discovery did not attach the other nodes to the cluster, so essentially I have a single-node cluster. Are there any settings I am missing, or do I need to apply some settings after creating the cluster so that all nodes become part of it?
Here is the yaml file used to create the cluster.
This yaml config creates 3 master/data/ingest nodes, and the UPMC operator uses pod affinity to allocate the pods in different zones.
All 9 nodes are created just fine, but they are unable to become part of the cluster.
apiVersion: enterprises.upmc.com/v1
kind: ElasticsearchCluster
metadata:
  name: es-cluster
spec:
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.3
    image-pull-policy: Always
  cerebro:
    image: upmcenterprises/cerebro:0.7.2
    image-pull-policy: Always
  elastic-search-image: myelasticimage:elasticsearch-mod-v0.1
  elasticsearchEnv:
  - name: PUBLIC_KEY
    value: "default"
  - name: NETWORK_HOST
    value: "_eth0:ipv4_"
  image-pull-secrets:
  - name: egistrykey
  image-pull-policy: Always
  client-node-replicas: 3
  master-node-replicas: 3
  data-node-replicas: 3
  network-host: 0.0.0.0
  zones: []
  data-volume-size: 1Gi
  java-options: "-Xms512m -Xmx512m"
  snapshot:
    scheduler-enabled: false
    bucket-name: somebucket
    cron-schedule: "#every 1m"
    image: upmcenterprises/elasticsearch-cron:0.0.4
  storage:
    type: standard
    storage-class-provisioner: volume.alpha.kubernetes.io/storage-class
    volume-reclaim-policy: Delete
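Since the images here are 6.x (kibana-oss:6.1.3), cluster formation depends on zen discovery being able to reach the other master-eligible nodes. A sketch of the elasticsearch.yml settings involved; the discovery hostname is an assumption (the operator normally points this at a headless service), so check what the custom image actually configures:

```yaml
# elasticsearch.yml fragment (sketch, ES 6.x zen discovery)
discovery.zen.ping.unicast.hosts: "elasticsearch-discovery-es-cluster"  # assumed headless-service name
discovery.zen.minimum_master_nodes: 2  # quorum for 3 master-eligible nodes
```

If NETWORK_HOST ("_eth0:ipv4_") resolves to an address the other pods cannot reach, the discovery pings fail and each node bootstraps its own single-node cluster, which matches the symptom described.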

HPA with windows worker nodes - EKS 1.11

HPA doesn't work and keeps showing <unknown>/10% for targets.
Metrics-server is installed and registers fine:
PS C:\k> kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
status:
  conditions:
  - lastTransitionTime: 2019-07-31T08:20:04Z
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
Checking the logs for the metrics-server pod:
$ kubectl logs metrics-server-686978657d-8rvzs -n kube-system
E0731 20:30:09.062734 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-xxx-xxx-xxx-xxx.us-west.computer.internal: [unable to get CPU for node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric, unable to get CPU for container "windows-server-iis" in pod default/windows-server-iis-846f465947-n9ttg on node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-xxx-xxx-xxx-xxx.us-west.computer.internal: [unable to get CPU for node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric, unable to get CPU for container "mymicroservice-eks" in pod default/mymicroservice-eks-5f47bc89bb-4nmkc on node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-xxx-xxx-xxx-xxx.us-west.computer.internal: [unable to get CPU for node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric, unable to get CPU for container "mymicroservice-eks" in pod default/mymicroservice-eks-5f47bc89bb-dv9gx on node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric, unable to get CPU for container "windows-iis" in pod default/windows-iis-64ddbbd57-929hv on node "ip-xxx-xxx-xxx-xxx.us-west.computer.internal": missing cpu usage metric]]
E0731 20:30:17.560396 1 reststorage.go:98] unable to fetch pod metrics for pod default/windows-iis-64ddbbd57-929hv: no metrics known for pod "default/windows-iis-64ddbbd57-929hv"
E0731 20:30:47.565251 1 reststorage.go:98] unable to fetch pod metrics for pod default/windows-iis-64ddbbd57-929hv: no metrics known for pod "default/windows-iis-64ddbbd57-929hv"
Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-server-iis
spec:
  selector:
    matchLabels:
      app: windows-server-iis
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: windows-server-iis
        tier: backend
        track: stable
    spec:
      containers:
      - name: windows-server-iis
        image: mcr.microsoft.com/windows/servercore:1809
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe ' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
        resources:
          requests:
            cpu: 500m
      nodeSelector:
        beta.kubernetes.io/os: windows
Details of the HPA that I have configured are provided below:
PS C:\k> kubectl get hpa
NAME          REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
windows-iis   Deployment/windows-server-iis   <unknown>/10%   1         10        1          16m
What am I missing? Is HPA not supported for Windows containers?
The error you have provided, no metrics known for pod "default/windows-iis-64ddbbd57-929hv", means that the metrics-server did not receive any updates, either due to a lack of CPU usage data or a wrong spec.scaleTargetRef declaration. Here you can find a Kubernetes example. Without the `kubectl describe hpa` output it is hard to say more.
While searching for a solution I found that many people on EKS have a similar problem, for example on Github or Stackoverflow.
Please try adding this to your metrics-server deployment:
command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
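On the scaleTargetRef point mentioned above: a sketch of an HPA whose reference matches the deployment in the question (autoscaling/v1; the min/max/target values are taken from the kubectl get hpa output, not from your actual manifest):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: windows-iis
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: windows-server-iis  # must match the Deployment name exactly
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10
```

A mismatch between the HPA name (windows-iis) and the Deployment name (windows-server-iis) in scaleTargetRef is a common cause of a permanent <unknown> target.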
However, you mentioned that you are using Windows containers. According to the documentation, Kubernetes supports HPA on Windows containers since version 1.15. If you are using a previous version, HPA will not work.
Hope it helps.

Why is the Prometheus pod pending after installing it with Helm on a Kubernetes cluster on Rancher server?

I installed a Rancher server and 2 Rancher agents in Vagrant, then switched to the K8S environment from the Rancher server.
On the Rancher server host, I installed kubectl and helm, then installed Prometheus with Helm:
helm install stable/prometheus
Now, checking the status from the Kubernetes dashboard, there are 2 pods pending:
It reports that the PersistentVolumeClaim is not bound. Shouldn't the K8S components have been installed by default with the Rancher server?
Edit
> kubectl get pvc
NAME                                   STATUS    VOLUME   CAPACITY   ACCESSMODES   STORAGECLASS   AGE
voting-prawn-prometheus-alertmanager   Pending                                                    6h
voting-prawn-prometheus-server         Pending                                                    6h
> kubectl get pv
No resources found.
Edit 2
$ kubectl describe pvc voting-prawn-prometheus-alertmanager
Name:          voting-prawn-prometheus-alertmanager
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-4.6.9
               component=alertmanager
               heritage=Tiller
               release=voting-prawn
Annotations:   <none>
Capacity:
Access Modes:
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ---                ----                         -------
  Normal  FailedBinding  12s (x10 over 2m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
$ kubectl describe pvc voting-prawn-prometheus-server
Name:          voting-prawn-prometheus-server
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-4.6.9
               component=server
               heritage=Tiller
               release=voting-prawn
Annotations:   <none>
Capacity:
Access Modes:
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ---                ----                         -------
  Normal  FailedBinding  12s (x14 over 3m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
I had the same issues as you. I found two ways to solve this:
Edit values.yaml and set persistentVolumes.enabled=false; this will allow you to use emptyDir (this applies to the Prometheus server and Alertmanager).
If you can't change values.yaml, you will have to create the PV before deploying the chart so that the pod can bind to the volume; otherwise it will stay in the Pending state forever.
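For the first option, the relevant stable/prometheus values look roughly like this (a sketch; the chart nests the flag under each component's persistentVolume section):

```yaml
# values.yaml fragment (sketch): disable persistence so emptyDir is used
server:
  persistentVolume:
    enabled: false
alertmanager:
  persistentVolume:
    enabled: false
```

Then install with helm install stable/prometheus -f values.yaml. Note that emptyDir means metrics and alert state are lost whenever the pod is rescheduled, so this is only suitable for testing.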
PVs are cluster-scoped while PVCs are namespace-scoped.
If your application is running in one namespace and the PVC is in a different namespace, that can be the issue.
If so, use RBAC to give the proper permissions, or put the app and the PVC in the same namespace.
Can you also make sure the PV created from the StorageClass belongs to the cluster's default StorageClass?
I found that I was missing a storage class and storage volumes, and fixed similar problems on my cluster by first creating a storage class:
kubectl apply -f storageclass.yaml
storageclass.yaml:
{
  "kind": "StorageClass",
  "apiVersion": "storage.k8s.io/v1",
  "metadata": {
    "name": "local-storage",
    "annotations": {
      "storageclass.kubernetes.io/is-default-class": "true"
    }
  },
  "provisioner": "kubernetes.io/no-provisioner",
  "reclaimPolicy": "Delete"
}
and then using that storage class when installing Prometheus with Helm:
helm install stable/prometheus --set server.storageClass=local-storage
I was also forced to create a volume for Prometheus to bind to:
kubectl apply -f prometheusVolume.yaml
prometheusVolume.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 2Gi # size of the volume
  accessModes:
  - ReadWriteOnce # type of access
  hostPath:
    path: "/mnt/data" # host location
You could use other storage classes; I found there are a lot to choose between, but there might be other steps involved to get them working.
