readiness probe for snapshot validation webhook deployment - snapshot

I went through the snapshot validation webhook code and see that there is no health endpoint provided, unlike the component in the linked example.
Things I tried:
1. Added a port arg:
readinessProbe:
  httpGet:
    path: /
    port: 698
  failureThreshold: 3
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 5
resources:
  limits:
    cpu: 40m
    memory: 80Mi
  requests:
    cpu: 10m
    memory: 20Mi
args: ['--port=698', '--tls-cert-file=/etc/snapshot-validation-webhook/certs/cert.pem', '--tls-private-key-file=/etc/snapshot-validation-webhook/certs/key.pem']
2. Without providing the port arg, using 443 instead.
Can a readiness probe be added to this deployment?
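If the webhook itself does not expose a plain health endpoint, one option that sidesteps both the TLS and the path question is a tcpSocket probe, which only checks that the webhook's listener accepts connections. A minimal sketch, assuming the container is started with --port=698 as above (timing values are illustrative):

readinessProbe:
  tcpSocket:
    port: 698   # must match the webhook's --port argument
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3

An httpGet probe against the same port would need scheme: HTTPS, and may still fail if the server returns a non-2xx status for the probed path, which is why the TCP check is the safer default here.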

Related

K8s Horizontal pod autoscaling not working

I have the following k8s manifest:
apiVersion: v1
kind: Service
metadata:
  name: springboot-k8s-svc
spec:
  selector:
    app: spring-boot-k8s
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-k8s
spec:
  selector:
    matchLabels:
      app: spring-boot-k8s
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-k8s
    spec:
      containers:
        - name: spring-boot-k8s
          image: springboot-k8s-example:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
Currently I have the following things running in my minikube:
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h52m
I start my dummy spring boot application:
$ kubectl apply -f deployment-n-svc.yaml
service/springboot-k8s-svc created
deployment.apps/spring-boot-k8s created
The app seems to start as expected:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/spring-boot-k8s-bccc4c557-7wbrn 1/1 Running 0 5s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h53m
service/springboot-k8s-svc NodePort 10.99.136.27 <none> 8080:30931/TCP 5s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/spring-boot-k8s 1/1 1 1 5s
NAME DESIRED CURRENT READY AGE
replicaset.apps/spring-boot-k8s-bccc4c557 1 1 1 5s
When I try to hit the REST endpoint, I get the desired output:
$ curl http://192.168.49.2:30931/message
OK!
Now I tried to autoscale the app:
$ kubectl autoscale deployment spring-boot-k8s --min=1 --max=5 --cpu-percent=10
horizontalpodautoscaler.autoscaling/spring-boot-k8s autoscaled
Then I started watching the HPA with the command shown below. It seems to have started:
$ watch -n 1 kubectl get hpa
Every 1.0s: kubectl get hpa
NAME                                                  REFERENCE                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/spring-boot-k8s   Deployment/spring-boot-k8s   <unknown>/10%   1         5         1          8m10s
Then I used the Apache Bench (ab) HTTP load-testing utility to create load on the Spring Boot server, to check whether k8s increases the number of pods:
$ ab -n 1000000 -c 100 http://192.168.49.2:30931/message
However, this did not increase the number of pods. What am I missing?
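One thing worth flagging (a hedged observation based only on the manifest above, not necessarily the whole story): an HPA created with --cpu-percent computes utilization as a percentage of the container's CPU request, and the Deployment above declares no resources at all. A minimal sketch of what the container spec would need, with illustrative values:

containers:
  - name: spring-boot-k8s
    image: springboot-k8s-example:1.0
    imagePullPolicy: IfNotPresent
    ports:
      - containerPort: 8080
    resources:
      requests:
        cpu: 200m      # the baseline the 10% utilization target is measured against
        memory: 256Mi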
PS:
When I kill the ab command with Ctrl+C, it gives the following output (notice the approx. 5 s processing time per request):
$ ab -n 1000000 -c 100 http://192.168.49.2:32215/message
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.49.2 (be patient)
^C
Server Software:
Server Hostname: 192.168.49.2
Server Port: 32215
Document Path: /message
Document Length: 4 bytes
Concurrency Level: 100
Time taken for tests: 35.650 seconds
Complete requests: 601
Failed requests: 0
Total transferred: 81736 bytes
HTML transferred: 2404 bytes
Requests per second: 16.86 [#/sec] (mean)
Time per request: 5931.751 [ms] (mean)
Time per request: 59.318 [ms] (mean, across all concurrent requests)
Transfer rate: 2.24 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.5 0 7
Processing: 5001 5004 4.7 5003 5022
Waiting: 5000 5004 3.9 5002 5019
Total: 5001 5006 5.4 5003 5024
Percentage of the requests served within a certain time (ms)
50% 5003
66% 5005
75% 5007
80% 5009
90% 5013
95% 5020
98% 5023
99% 5023
100% 5024 (longest request)
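A quick sanity check on those numbers, assuming the ~5 s comes from the application itself rather than the network: with 100 concurrent connections and a mean of roughly 5 s per request, throughput is capped at about 100 / 5 s = 20 req/sec, which matches the 16.86 req/sec that ab reports. Load arriving at that rate may simply never push CPU usage past the 10% target.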
Update
As asked in the comments, here is the output of some more commands:
$ kubectl describe hpa spring-boot-k8s
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name: spring-boot-k8s
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 03 Feb 2023 01:58:06 +0530
Reference: Deployment/spring-boot-k8s
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 10%
Min replicas: 1
Max replicas: 5
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 35m (x500 over 16h) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Notice what it says: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler.
Also it says: no metrics returned from resource metrics API. Though my metrics server is running:
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 304d
metrics-server 1/1 1 1 40h
This seems to be the reason why it's not working. But what could be causing it?
Apart from that, the CPU utilization is also not increasing much. Until yesterday night I used to see at most 3% CPU utilization in the output of watch kubectl top node:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
minikube 141m 1% 1127Mi 7%
But now it shows the following error:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
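The ServiceUnavailable error and the <unknown> target both point at the resource metrics API rather than at the HPA itself. A few hedged diagnostic steps, assuming a stock minikube setup (the kube-system/metrics-server deployment name matches the output above):

# Is the aggregated metrics API registered and reporting Available=True?
kubectl get apiservice v1beta1.metrics.k8s.io

# What is metrics-server itself logging?
kubectl -n kube-system logs deployment/metrics-server

# On minikube, the bundled addon is usually the simplest way to (re)install it
minikube addons enable metrics-server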

Exposing Kibana behind GCE ingress (UNHEALTHY state)

I'm trying to expose Kibana behind a GCE ingress, but the ingress reports the Kibana service as UNHEALTHY even though it is healthy and ready. Note that the healthcheck created by the Ingress still uses the default values: HTTP on the root path / and a port like 32021.
Changing the healthcheck in the GCP console to HTTPS on /login and port 5601 doesn't change anything, and the service is still reported as Unhealthy. The healthcheck port also keeps getting overwritten back to its original value, which is strange.
I'm using ECK 1.3.1 and below are my configs. Am I missing anything? Thank you in advance.
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: d3m0
spec:
  version: 7.10.1
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: d3m0
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: d3m0
  podTemplate:
    metadata:
      labels:
        kibana: node
    spec:
      containers:
        - name: kibana
          resources:
            limits:
              memory: 1Gi
              cpu: 1
          readinessProbe:
            httpGet:
              scheme: HTTPS
              path: "/login"
              port: 5601
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
spec:
  backend:
    serviceName: d3m0-kb-http
    servicePort: 5601
When using ECK, all the security features are enabled on Elasticsearch and Kibana, which means their services do not accept the plain HTTP traffic used by the default GCP load balancer healthcheck. You must add the required annotations to the service and override the readiness probe as in the code below. Please find more details here.
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: d3m0
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: d3m0
  http:
    service:
      metadata:
        labels:
          app: kibana
        annotations:
          # Enable TLS between GCLB and the application
          cloud.google.com/app-protocols: '{"https":"HTTPS"}'
          service.alpha.kubernetes.io/app-protocols: '{"https":"HTTPS"}'
          # Uncomment the following line to enable container-native load balancing.
          cloud.google.com/neg: '{"ingress": true}'
  podTemplate:
    metadata:
      labels:
        name: kibana-fleet
    spec:
      containers:
        - name: kibana
          resources:
            limits:
              memory: 1Gi
              cpu: 1
          readinessProbe:
            # Override the readiness probe as GCLB reuses it for its own healthchecks
            httpGet:
              scheme: HTTPS
              path: "/login"
              port: 5601
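If you would rather configure the healthcheck explicitly instead of relying on GCLB inferring it from the readiness probe, GKE also lets you attach a BackendConfig to the Service. A minimal sketch; the name kibana-backendconfig is an assumption, and the path and port simply mirror the Kibana settings above:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: kibana-backendconfig
spec:
  healthCheck:
    type: HTTPS
    requestPath: /login
    port: 5601

The Service then needs the annotation cloud.google.com/backend-config: '{"default": "kibana-backendconfig"}' so the ingress-created backend picks it up.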

Kubernetes: Can't Increase Req/Sec with More Nodes

I created a simple Node.js Express application; it just listens on a single GET endpoint and returns a "hello world" string.
Then I deployed that application on a Kubernetes cluster with 1 node with 4 vCPU / 8 GB RAM. All traffic is routed to the app with the NGINX ingress controller.
After the deployment completed, I performed a simple HTTP load test. The results were 70-100 req/sec. I tried increasing the replica count to 6, but the results were still the same. I also tried specifying resource limits, but nothing changed.
Lastly I added 2 more nodes to the pool, each with 4 vCPU / 8 GB RAM.
After that I performed the load test again, but the results were still the same. With a total of 3 nodes, 12 vCPU and 24 GB RAM, it can barely handle 80 req/sec.
Results from load testing:
12 threads and 400 connections
Latency 423.68ms 335.38ms 1.97s 84.88%
Req/Sec 76.58 34.13 212.00 72.37%
4457 requests in 5.08s, 1.14MB read
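A hedged aside on reading that output: if this is wrk (the format suggests it), the Req/Sec row in the thread stats is a per-thread figure, so the overall throughput is better read from the totals line: 4457 requests in 5.08 s ≈ 877 req/sec across all threads. It is worth double-checking which of the two numbers the 70-100 req/sec figure refers to.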
I am probably doing something wrong, but I couldn't figure out what.
This is my deployment and service YAML file:
apiVersion: v1
kind: Service
metadata:
  name: app-3
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: app-3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-3
spec:
  replicas: 6
  selector:
    matchLabels:
      app: app-3
  template:
    metadata:
      labels:
        app: app-3
    spec:
      containers:
        - name: app-3-name
          image: s1nc4p/node:v22
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "1024Mi"
              cpu: "1000m"
            limits:
              memory: "1024Mi"
              cpu: "1000m"
This is the ingress service YAML file:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
And this is the ingress file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: kkk.storadewebservices.com
      http:
        paths:
          - path: '/'
            backend:
              serviceName: app-3
              servicePort: 80
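One hedged thing to check, based only on the manifests above: with externalTrafficPolicy: Local the load balancer only forwards to nodes that actually run an ingress-nginx controller pod, so a single controller replica (or a single node receiving all the traffic) can cap throughput no matter how many app replicas or nodes exist. A sketch of how to see where the time goes during the test; the controller's namespace and deployment name are assumptions that depend on how ingress-nginx was installed:

# Watch CPU of the app pods and of the ingress controller while the load test runs
kubectl top pods
kubectl top pods -n ingress-nginx

# Scale the controller itself if it turns out to be the bottleneck
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=3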

microk8s.enable dns gets stuck in ContainerCreating

I have installed the microk8s snap on Ubuntu 19 in a VirtualBox VM. When I run microk8s.enable dns, the pod for the deployment does not get past the ContainerCreating state.
It used to work before. I have also re-installed microk8s; this helped in the past, but not anymore.
Output from microk8s.kubectl get all --all-namespaces shows that something is wrong with the volume for the secrets. I don't know how I can investigate further, so any help is appreciated.
Cheers
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-9b8997588-z88lz 0/1 ContainerCreating 0 16m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 20m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 16m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/1 1 0 16m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-9b8997588 1 1 0 16m
Output from microk8s.kubectl describe pod/coredns-9b8997588-z88lz -n kube-system
Name: coredns-9b8997588-z88lz
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: peza-ubuntu-19/10.0.2.15
Start Time: Sun, 29 Sep 2019 15:49:27 +0200
Labels: k8s-app=kube-dns
pod-template-hash=9b8997588
Annotations: scheduler.alpha.kubernetes.io/critical-pod:
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-9b8997588
Containers:
coredns:
Container ID:
Image: coredns/coredns:1.5.0
Image ID:
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-h6qlm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-h6qlm:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-h6qlm
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kube-system/coredns-9b8997588-z88lz to peza-ubuntu-19
Warning FailedMount 5m59s kubelet, peza-ubuntu-19 Unable to attach or mount volumes: unmounted volumes=[coredns-token-h6qlm config-volume], unattached volumes=[coredns-token-h6qlm config-volume]: timed out waiting for the condition
Warning FailedMount 3m56s (x11 over 10m) kubelet, peza-ubuntu-19 MountVolume.SetUp failed for volume "coredns-token-h6qlm" : failed to sync secret cache: timed out waiting for the condition
Warning FailedMount 3m44s (x2 over 8m16s) kubelet, peza-ubuntu-19 Unable to attach or mount volumes: unmounted volumes=[config-volume coredns-token-h6qlm], unattached volumes=[config-volume coredns-token-h6qlm]: timed out waiting for the condition
Warning FailedMount 113s (x12 over 10m) kubelet, peza-ubuntu-19 MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
I spent my morning fighting with this on Ubuntu 19.04. None of the microk8s add-ons worked. Their containers got stuck in "ContainerCreating" status, with something like "MountVolume.SetUp failed for volume "kubernetes-dashboard-token-764ml" : failed to sync secret cache: timed out waiting for the condition" in their descriptions.
I tried to start/stop/reset/reinstall microk8s a few times. Nothing worked. Once I downgraded it to the previous version, the problem went away:
sudo snap install microk8s --classic --channel=1.15/stable
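For completeness, a hedged set of commands for gathering diagnostics before (or instead of) reinstalling, using only tooling the snap already ships:

# Run microk8s' built-in checks and bundle the logs into a report tarball
microk8s.inspect

# Reset cluster state without removing the snap, then re-enable the addon
microk8s.reset
microk8s.enable dns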

Kubernetes Windows worker node with Calico cannot deploy pods

I tried to use kubeadm.exe join to join a Windows worker node, but it's not working.
Then I followed this document: nwoodmsft/SDN/CalicoFelix.md. After this, the node status looks like this:
# node status
root@ysicing:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
win-o35a06j767t Ready <none> 1h v1.10.10 <none> Windows Server Standard 10.0.17134.1 docker://18.9.0
ysicing Ready master 4h v1.10.10 <none> Debian GNU/Linux 9 (stretch) 4.9.0-8-amd64 docker://17.3.3
Pod status:
root@ysicing:~# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default demo-deployment-c96d5d97b-99h9s 0/1 ContainerCreating 0 5m <none> win-o35a06j767t
default demo-deployment-c96d5d97b-lq2jm 0/1 ContainerCreating 0 5m <none> win-o35a06j767t
default demo-deployment-c96d5d97b-zrc2k 1/1 Running 0 5m 192.168.0.3 ysicing
default iis-7f7dc9fbbb-xhccv 0/1 ContainerCreating 0 1h <none> win-o35a06j767t
kube-system calico-node-nr5mt 0/2 ContainerCreating 0 1h 192.168.1.2 win-o35a06j767t
kube-system calico-node-w6mls 2/2 Running 0 5h 172.16.0.169 ysicing
kube-system etcd-ysicing 1/1 Running 0 6h 172.16.0.169 ysicing
kube-system kube-apiserver-ysicing 1/1 Running 0 6h 172.16.0.169 ysicing
kube-system kube-controller-manager-ysicing 1/1 Running 0 6h 172.16.0.169 ysicing
kube-system kube-dns-86f4d74b45-dbcmb 3/3 Running 0 6h 192.168.0.2 ysicing
kube-system kube-proxy-wt6dn 1/1 Running 0 6h 172.16.0.169 ysicing
kube-system kube-proxy-z5jx8 0/1 ContainerCreating 0 1h 192.168.1.2 win-o35a06j767t
kube-system kube-scheduler-ysicing 1/1 Running 0 6h 172.16.0.169 ysicing
kube-proxy and Calico should not run the container way on the Windows node; kube-proxy runs under Windows as kube-proxy.exe.
Calico pod error info:
Warning FailedCreatePodSandBox 2m (x1329 over 32m) kubelet, win-o35a06j767t Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "calico-node-nr5mt": Error response from daemon: network host not found
demo.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: iis
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      containers:
        - name: iis
          image: microsoft/iis
          resources:
            limits:
              memory: "128Mi"
              cpu: 2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: iis
  name: iis
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: iis
  type: NodePort
Demo pod error logs:
(extra info:
{"SystemType":"Container","Name":"082e861a8720a84223111b3959a1e2cd26e4be3d0ffcb9eda35b2a09955d4081","Owner":"docker","VolumePath":"\\\\?\\Volume{e8dcfa1d-fbbe-4ef9-b849-5f02b1799a3f}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\082e861a8720a84223111b3959a1e2cd26e4be3d0ffcb9eda35b2a09955d4081","Layers":[{"ID":"8c940e59-c455-597f-b4b2-ff055e33bc2a","Path":"C:\\ProgramData\\docker\\windowsfilter\\7f1a079916723fd228aa878db3bb1e37b50e508422f20be476871597fa53852d"},{"ID":"f72db42e-18f4-54da-98f1-0877e17a069f","Path":"C:\\ProgramData\\docker\\windowsfilter\\449dc4ee662760c0102fe0f388235a111bb709d30b6d9b6787fb26d1ee76c990"},{"ID":"40282350-4b8f-57a2-94e9-31bebb7ec0a9","Path":"C:\\ProgramData\\docker\\windowsfilter\\6ba0fa65b66c3b3134bba338e1f305d030e859133b03e2c80550c32348ba16c5"},{"ID":"f5a96576-2382-5cba-a12f-82ad7616de0f","Path":"C:\\ProgramData\\docker\\windowsfilter\\3b68fac2830f2110aa9eb1c057cf881ee96ce973a378b37e20b74e32c3d41ee0"}],"ProcessorWeight":2,"HostName":"iis-7f7dc9fbbb-xhccv","HvPartition":false})
Warning FailedCreatePodSandBox 14m (x680 over 29m) kubelet, win-o35a06j767t (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "iis-7f7dc9fbbb-xhccv": Error response from daemon: CreateComputeSystem 0b9ab5f3dd4a69464f756aeb0bd780763b38712e32e8c1318fdd17e531437b0f: The operating system of the container does not match the operating system of the host.
(extra info:{"SystemType":"Container","Name":"0b9ab5f3dd4a69464f756aeb0bd780763b38712e32e8c1318fdd17e531437b0f","Owner":"docker","VolumePath":"\\\\?\\Volume{e8dcfa1d-fbbe-4ef9-b849-5f02b1799a3f}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\0b9ab5f3dd4a69464f756aeb0bd780763b38712e32e8c1318fdd17e531437b0f","Layers":[{"ID":"8c940e59-c455-597f-b4b2-ff055e33bc2a","Path":"C:\\ProgramData\\docker\\windowsfilter\\7f1a079916723fd228aa878db3bb1e37b50e508422f20be476871597fa53852d"},{"ID":"f72db42e-18f4-54da-98f1-0877e17a069f","Path":"C:\\ProgramData\\docker\\windowsfilter\\449dc4ee662760c0102fe0f388235a111bb709d30b6d9b6787fb26d1ee76c990"},{"ID":"40282350-4b8f-57a2-94e9-31bebb7ec0a9","Path":"C:\\ProgramData\\docker\\windowsfilter\\6ba0fa65b66c3b3134bba338e1f305d030e859133b03e2c80550c32348ba16c5"},{"ID":"f5a96576-2382-5cba-a12f-82ad7616de0f","Path":"C:\\ProgramData\\docker\\windowsfilter\\3b68fac2830f2110aa9eb1c057cf881ee96ce973a378b37e20b74e32c3d41ee0"}],"ProcessorWeight":2,"HostName":"iis-7f7dc9fbbb-xhccv","HvPartition":false})
Normal SandboxChanged 4m (x1083 over 29m) kubelet, win-o35a06j767t Pod sandbox changed, it will be killed and re-created.
config: "c:\k\"
The cni directory is empty by default. Then I added calico-felix.exe and the config file L2Bridge.conf.
I tried to Google it; a CNI plugin seems to be needed, but I could not find a Calico CNI for Windows.
What should I do in this situation? Build a Windows Calico CNI myself?
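On the second error above ("The operating system of the container does not match the operating system of the host"): host build 10.0.17134 corresponds to Windows Server, version 1803, and an unpinned Windows image often targets a different build. A hedged sketch of pinning the image in demo.yaml; the exact tag is an assumption and should be verified against the tags actually published for that host version:

containers:
  - name: iis
    # Pin to a tag that matches the Windows host build (1803 / 10.0.17134).
    image: microsoft/iis:windowsservercore-1803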
