Microk8s pod fails to start when I create a service for it with 1000 UDP ports - microk8s

I am creating a deployment with a custom image I have in a private registry. The container will have lots of ports that need to be exposed, and I want to expose them with a NodePort service. If I create a service with 1000 UDP ports and then create the deployment, the deployment's pod keeps crashing. If I delete the service and the deployment, then create the deployment alone without the service, the pod starts normally.
Any clue why this would be happening?
Pod Description:
Name:         freeswitch-7764cff4c9-d8zvh
Namespace:    default
Priority:     0
Node:         cc-lab/192.168.102.55
Start Time:   Wed, 01 Jun 2022 15:44:09 +0000
Labels:       app=freeswitch
              pod-template-hash=7764cff4c9
Annotations:  cni.projectcalico.org/containerID: de4baf5c4522e1f3c746a08a60bd7166179bac6c4aef245708205112ad71058a
              cni.projectcalico.org/podIP: 10.1.5.8/32
              cni.projectcalico.org/podIPs: 10.1.5.8/32
Status:       Running
IP:           10.1.5.8
IPs:
  IP:  10.1.5.8
Controlled By:  ReplicaSet/freeswitch-7764cff4c9
Containers:
  freeswtich:
    Container ID:   containerd://9cdae9120cc075af73d57ea0759b89c153c8fd5766bc819554d82fdc674e03be
    Image:          192.168.102.55:32000/freeswitch:v2
    Image ID:       192.168.102.55:32000/freeswitch@sha256:e6a36d220f4321e3c17155a889654a83dc37b00fb9d58171f969ec2dccc0a774
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    139
      Started:      Wed, 01 Jun 2022 15:47:16 +0000
      Finished:     Wed, 01 Jun 2022 15:47:20 +0000
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /etc/freeswitch from freeswitch-config (rw)
      /tmp from freeswitch-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mwkc8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  freeswitch-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  freeswitch-config
    ReadOnly:   false
  freeswitch-tmp:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  freeswitch-tmp
    ReadOnly:   false
  kube-api-access-mwkc8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Normal   Pulled   4m3s (x5 over 5m44s)  kubelet  Container image "192.168.102.55:32000/freeswitch:v2" already present on machine
  Normal   Created  4m3s (x5 over 5m43s)  kubelet  Created container freeswtich
  Normal   Started  4m3s (x5 over 5m43s)  kubelet  Started container freeswtich
  Warning  BackOff  41s (x24 over 5m35s)  kubelet  Back-off restarting failed container
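For context on the crash itself: exit code 139 is 128 + 11, i.e. the container is dying with SIGSEGV. A generic first diagnostic step (not from the original post) is to pull the logs of the previous, crashed container instance:

kubectl logs freeswitch-7764cff4c9-d8zvh --previous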
Service:
apiVersion: v1
kind: Service
metadata:
  name: freeswitch
spec:
  type: NodePort
  selector:
    app: freeswitch
  ports:
  - port: 30000
    nodePort: 30000
    name: rtp30000
    protocol: UDP
  - port: 30001
    nodePort: 30001
    name: rtp30001
    protocol: UDP
  - port: 30002
    nodePort: 30002
    name: rtp30002
    protocol: UDP
  - port: 30003
    nodePort: 30003
    name: rtp30003
    protocol: UDP
  - port: 30004
  # ... and this goes on up to port 30999
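As an aside, a port list this repetitive is easier to generate than to write by hand. A minimal shell sketch (not from the original post; the output file name service-ports.yaml is illustrative) that emits the ports: entries for the whole range:

for p in $(seq 30000 30999); do
  printf '  - port: %s\n    nodePort: %s\n    name: rtp%s\n    protocol: UDP\n' "$p" "$p" "$p"
done >> service-ports.yaml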
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: freeswitch
spec:
  selector:
    matchLabels:
      app: freeswitch
  template:
    metadata:
      labels:
        app: freeswitch
    spec:
      containers:
      - name: freeswtich
        image: 192.168.102.55:32000/freeswitch:v2
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: freeswitch-config
          mountPath: /etc/freeswitch
        - name: freeswitch-tmp
          mountPath: /tmp
      restartPolicy: Always
      volumes:
      - name: freeswitch-config
        persistentVolumeClaim:
          claimName: freeswitch-config
      - name: freeswitch-tmp
        persistentVolumeClaim:
          claimName: freeswitch-tmp

Related

Can't communicate with pods through services

I have two deployments: one creates 4 replicas of php-fpm, and the other is an nginx webserver exposed to the Internet through an Ingress.
The problem is that I can't connect to the app service from the webserver pod (the same issue occurs when trying to connect to other services).
ping result:
$ ping -c4 app.ternobo-connect
PING app.ternobo-connect (10.245.240.225): 56 data bytes
--- app.ternobo-connect ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
But the pods are individually reachable via their cluster IPs.
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ternobo.kubernates.service: app
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  selector:
    matchLabels:
      ternobo.kubernates.service: app
  template:
    metadata:
      labels:
        ternobo.kubernates.network/app-network: "true"
        ternobo.kubernates.service: app
    spec:
      containers:
        - env:
            - name: SERVICE_NAME
              value: app
            - name: SERVICE_TAGS
              value: production
          image: ghcr.io/ternobo/ternobo-connect:0.1.01
          name: app
          ports:
            - containerPort: 9000
          resources: {}
          tty: true
          workingDir: /var/www
          envFrom:
            - configMapRef:
                name: appenvconfig
      imagePullSecrets:
        - name: regsecret
      restartPolicy: Always
status: {}
app-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  type: ClusterIP
  ports:
    - name: "9000"
      port: 9000
      targetPort: 9000
  selector:
    ternobo.kubernates.service: app
status:
  loadBalancer: {}
network-policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network
  namespace: ternobo-connect
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress
I also tried removing the network policy, but that didn't work, and changing the podSelector rules to only select pods with the ternobo.kubernates.network/app-network: "true" label.
Kubernetes service URLs are in the my-svc.my-namespace.svc.cluster-domain.example format; see: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records
So the ping should be
ping -c4 app.ternobo-connect.svc.cluster.local
If the webserver is in the same namespace as the service, you can ping the service name directly:
ping -c4 app
I don't know the impact of the network policy; I haven't worked with it.
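A quick way to verify what that name resolves to from inside the cluster is a throwaway busybox pod (a generic sketch, not from the original exchange; the pod name dns-test is arbitrary):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup app.ternobo-connect.svc.cluster.local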

container "sonarqube" in pod "sonar-574d99bfb5-dr8nx" is waiting to start: CreateContainerConfigError

I am facing a problem with my SonarQube deployment. I've been trying to set it up, but I get this error from kubectl logs sonar-574d99bfb5-dr8nx -n sonar: container "sonarqube" in pod "sonar-574d99bfb5-dr8nx" is waiting to start: CreateContainerConfigError.
And when I describe the pod with kubectl describe pod sonar-574d99bfb5-dr8nx -n sonar, I get this:
Name:         sonar-574d99bfb5-dr8nx
Namespace:    sonar
Priority:     0
Node:         master01/192.168.137.136
Start Time:   Tue, 22 Mar 2022 20:30:16 +0000
Labels:       app=sonar
              pod-template-hash=574d99bfb5
Annotations:  cni.projectcalico.org/containerID: 734ba33acb9e2c007861112ffe7c1fce84fa3a434494a0df6951a7b4b6b8dacb
              cni.projectcalico.org/podIP: 10.42.241.105/32
              cni.projectcalico.org/podIPs: 10.42.241.105/32
Status:       Pending
IP:           10.42.241.105
IPs:
  IP:  10.42.241.105
Controlled By:  ReplicaSet/sonar-574d99bfb5
Containers:
  sonarqube:
    Container ID:
    Image:          sonarqube:latest
    Image ID:
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  2Gi
    Requests:
      memory:  1Gi
    Environment Variables from:
      sonar-config  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /opt/sonarqube/data/ from app-pvc (rw,path="data")
      /opt/sonarqube/extensions/ from app-pvc (rw,path="extensions")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q22lb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  app-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sonar-pvc
    ReadOnly:   false
  kube-api-access-q22lb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned sonar/sonar-574d99bfb5-dr8nx to master01
  Warning  Failed     10m (x12 over 12m)    kubelet            Error: stat /home/mtst/data-sonar-pvc: no such file or directory
  Normal   Pulled     2m24s (x50 over 12m)  kubelet            Container image "sonarqube:latest" already present on machine
Here's my PV/PVC YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonar-pv
  namespace: sonar
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/mtst/data-sonar-pvc"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonar-pvc
  namespace: sonar
  labels:
    type: local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
If there's anything that can help me resolve my issue, I would appreciate it. Thank you.
I had the same issue with the awx postgres deployment.
Credit goes to "Kubernetes 1.17.2 Rancher 2.3.5 CreateContainerConfigError: stat no such file or directory but the directory IS there" for resolving the issue for me.
Running a Rancher Kubernetes cluster and wanting a custom PV, I needed to add the type below to my PersistentVolume definition:
hostPath:
  path: "/home/mtst/data-sonar-pvc"
  type: "DirectoryOrCreate"

NSQ cluster in Kubernetes

I'm trying to set up an NSQ cluster in Kubernetes and I'm having issues.
Basically, I want to scale out NSQ and NSQ Lookup. I have a StatefulSet (2 nodes) definition for both of them. Rather than post the whole YAML file, I'll post only the relevant part for NSQ.
NSQ container template
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd.default.svc.cluster.local:4160
Here, nsqlookupd.default.svc.cluster.local is a K8s headless service. By doing this, I expect each NSQ instance to open a connection with all of the NSQ Lookup instances, which in fact is not happening: it just opens a connection with a random one. However, if I explicitly list all of the NSQ Lookup hosts like this, it works:
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd-0.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-1.nsqlookupd:4160
I also wanted to use the headless service DNS name in --broadcast-address for both NSQ and NSQ Lookup, but that doesn't work either.
I'm using the nsqio Go library for publishing and consuming messages, and it looks like I can't use the headless service there either; I have to explicitly list the NSQ/NSQ Lookup pod names when initializing a consumer or publisher.
Am I using this the wrong way?
I mean, I want to have horizontally scaled NSQ and NSQ Lookup instances without hardcoding the addresses.
You could use a StatefulSet and a headless service to achieve this goal.
Try using the config YAML below to deploy it into the K8s cluster. Otherwise, check out the official Helm chart: https://github.com/nsqio/helm-chart
Using a headless service, you can discover all pod IPs.
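For example, resolving the headless service from any pod in the cluster should return one A record per lookupd pod. A generic check (not part of the chart; the pod name dns-check is arbitrary):

kubectl run dns-check --rm -it --image=busybox --restart=Never -- nslookup nsqlookupd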
apiVersion: v1
kind: Service
metadata:
  name: nsqlookupd
  labels:
    app: nsq
spec:
  ports:
    - port: 4160
      targetPort: 4160
      name: tcp
    - port: 4161
      targetPort: 4161
      name: http
  publishNotReadyAddresses: true
  clusterIP: None
  selector:
    app: nsq
    component: nsqlookupd
---
apiVersion: v1
kind: Service
metadata:
  name: nsqd
  labels:
    app: nsq
spec:
  ports:
    - port: 4150
      targetPort: 4150
      name: tcp
    - port: 4151
      targetPort: 4151
      name: http
  clusterIP: None
  selector:
    app: nsq
    component: nsqd
---
apiVersion: v1
kind: Service
metadata:
  name: nsqadmin
  labels:
    app: nsq
spec:
  ports:
    - port: 4170
      targetPort: 4170
      name: tcp
    - port: 4171
      targetPort: 4171
      name: http
  selector:
    app: nsq
    component: nsqadmin
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nsqlookupd
spec:
  serviceName: "nsqlookupd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nsq
        component: nsqlookupd
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nsq
                  - key: component
                    operator: In
                    values:
                      - nsqlookupd
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nsqlookupd
          image: nsqio/nsq:v1.1.0
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 30m
              memory: 64Mi
          ports:
            - containerPort: 4160
              name: tcp
            - containerPort: 4161
              name: http
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 2
          command:
            - /nsqlookupd
      terminationGracePeriodSeconds: 5
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nsqd
spec:
  serviceName: "nsqd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nsq
        component: nsqd
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nsq
                  - key: component
                    operator: In
                    values:
                      - nsqd
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nsqd
          image: nsqio/nsq:v1.1.0
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 30m
              memory: 64Mi
          ports:
            - containerPort: 4150
              name: tcp
            - containerPort: 4151
              name: http
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 2
          volumeMounts:
            - name: datadir
              mountPath: /data
          command:
            - /nsqd
            - -data-path
            - /data
            - -lookupd-tcp-address
            - nsqlookupd-0.nsqlookupd:4160
            - -lookupd-tcp-address
            - nsqlookupd-1.nsqlookupd:4160
            - -lookupd-tcp-address
            - nsqlookupd-2.nsqlookupd:4160
            - -broadcast-address
            - $(HOSTNAME).nsqd
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
      terminationGracePeriodSeconds: 5
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: datadir
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: ssd
        resources:
          requests:
            storage: 1Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nsqadmin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nsq
        component: nsqadmin
    spec:
      containers:
        - name: nsqadmin
          image: nsqio/nsq:v1.1.0
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 30m
              memory: 64Mi
          ports:
            - containerPort: 4170
              name: tcp
            - containerPort: 4171
              name: http
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 5
          command:
            - /nsqadmin
            - -lookupd-http-address
            - nsqlookupd-0.nsqlookupd:4161
            - -lookupd-http-address
            - nsqlookupd-1.nsqlookupd:4161
            - -lookupd-http-address
            - nsqlookupd-2.nsqlookupd:4161
      terminationGracePeriodSeconds: 5
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nsq
spec:
  rules:
    - host: nsq.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nsqadmin
              servicePort: 4171
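Two details in this manifest do the heavy lifting: publishNotReadyAddresses: true on the nsqlookupd service, which publishes DNS records for the lookupd pods even before they pass their readiness probes, and the -broadcast-address $(HOSTNAME).nsqd flag, which makes each nsqd advertise its stable per-pod DNS name (nsqd-0.nsqd, nsqd-1.nsqd, ...) instead of a pod IP. This also explains why the plain headless-service name in the question connected to only one lookupd: -lookupd-tcp-address takes a single host:port, and dialing a hostname backed by several A records opens one connection to one of the returned addresses, so each lookupd has to be listed explicitly, as done above.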

Kubernetes gives CrashLoopBackOff error while running Packetbeat in a Kubernetes cluster

I'm trying to deploy Packetbeat as a DaemonSet on a Kubernetes cluster, but Kubernetes gives a CrashLoopBackOff error while running Packetbeat. I have checked the pod logs of Packetbeat; below are the logs.
2020-08-23T14:28:00.054Z INFO instance/beat.go:475 Beat UUID: 69d32e5f-c8f2-41bf-9242-48435688c540
2020-08-23T14:28:00.054Z INFO instance/beat.go:213 Setup Beat: packetbeat; Version: 6.2.4
2020-08-23T14:28:00.061Z INFO add_cloud_metadata/add_cloud_metadata.go:301 add_cloud_metadata: hosting provider type detected as ec2, metadata={"availability_zone":"us-east-1f","instance_id":"i-05b8121af85c94236","machine_type":"t2.medium","provider":"ec2","region":"us-east-1"}
2020-08-23T14:28:00.061Z INFO kubernetes/watcher.go:77 kubernetes: Performing a pod sync
2020-08-23T14:28:00.074Z INFO kubernetes/watcher.go:108 kubernetes: Pod sync done
2020-08-23T14:28:00.074Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elasticsearch:9200
2020-08-23T14:28:00.074Z INFO kubernetes/watcher.go:140 kubernetes: Watching API for pod events
2020-08-23T14:28:00.074Z INFO pipeline/module.go:76 Beat name: ip-172-31-72-117
2020-08-23T14:28:00.075Z INFO procs/procs.go:78 Process matching disabled
2020-08-23T14:28:00.076Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2020-08-23T14:28:00.076Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elasticsearch:9200
2020-08-23T14:28:00.083Z WARN transport/tcp.go:36 DNS lookup failure "elasticsearch": lookup elasticsearch on 172.31.0.2:53: no such host
2020-08-23T14:28:00.083Z ERROR elasticsearch/elasticsearch.go:165 Error connecting to Elasticsearch at http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 172.31.0.2:53: no such host
2020-08-23T14:28:00.085Z INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":28},"total":{"ticks":160,"time":176,"value":160},"user":{"ticks":140,"time":148}},"info":{"ephemeral_id":"70e07383-3aae-4bc1-a6e1-540a6cfa8ad8","uptime":{"ms":35}},"memstats":{"gc_next":26511344,"memory_alloc":21723000,"memory_total":23319008,"rss":51834880}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":5,"events":{"active":0}}},"system":{"cpu":{"cores":2},"load":{"1":0.11,"15":0.1,"5":0.14,"norm":{"1":0.055,"15":0.05,"5":0.07}}}}}}
2020-08-23T14:28:00.085Z INFO [monitoring] log/log.go:133 Uptime: 37.596889ms
2020-08-23T14:28:00.085Z INFO [monitoring] log/log.go:110 Stopping metrics logging.
2020-08-23T14:28:00.085Z ERROR instance/beat.go:667 Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 172.31.0.2:53: no such host]
Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 172.31.0.2:53: no such host]
Here is packetbeat.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: packetbeat-dynamic-config
  namespace: kube-system
  labels:
    k8s-app: packetbeat-dynamic
    kubernetes.io/cluster-service: "true"
data:
  packetbeat.yml: |-
    setup.dashboards.enabled: true
    setup.template.enabled: true
    setup.template.settings:
      index.number_of_shards: 2
    packetbeat.interfaces.device: any
    packetbeat.protocols:
    - type: dns
      ports: [53]
      include_authorities: true
      include_additionals: true
    - type: http
      ports: [80, 8000, 8080, 9200]
    - type: mysql
      ports: [3306]
    - type: redis
      ports: [6379]
    packetbeat.flows:
      timeout: 30s
      period: 10s
    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          host: ${HOSTNAME}
          indexers:
          - ip_port:
          matchers:
          - field_format:
              format: '%{[ip]}:%{[port]}'
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    #setup.kibana.host: kibana:5601
    setup.ilm.overwrite: true
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: packetbeat-dynamic
  namespace: kube-system
  labels:
    k8s-app: packetbeat-dynamic
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: packetbeat-dynamic
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        k8s-app: packetbeat-dynamic
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: packetbeat-dynamic
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      containers:
      - name: packetbeat-dynamic
        image: docker.elastic.co/beats/packetbeat:6.2.4
        imagePullPolicy: Always
        args: [
          "-c", "/etc/packetbeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          capabilities:
            add:
            - NET_ADMIN
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: KIBANA_HOST
          value: kibana
        - name: KIBANA_PORT
          value: "5601"
        volumeMounts:
        - name: config
          mountPath: /etc/packetbeat.yml
          readOnly: true
          subPath: packetbeat.yml
        - name: data
          mountPath: /usr/share/packetbeat/data
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: packetbeat-dynamic-config
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: packetbeat-dynamic
subjects:
- kind: ServiceAccount
  name: packetbeat-dynamic
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: packetbeat-dynamic
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: packetbeat-dynamic
  labels:
    k8s-app: packetbeat-dynamic
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: packetbeat-dynamic
  namespace: kube-system
  labels:
    k8s-app: packetbeat-dynamic
Could anyone suggest how to resolve this issue? Any relevant link would also be helpful.
kubectl describe daemonset packetbeat-dynamic -n kube-system
Name:           packetbeat-dynamic
Selector:       k8s-app=packetbeat-dynamic,kubernetes.io/cluster-service=true
Node-Selector:  <none>
Labels:         k8s-app=packetbeat-dynamic
                kubernetes.io/cluster-service=true
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 1
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=packetbeat-dynamic
                    kubernetes.io/cluster-service=true
  Service Account:  packetbeat-dynamic
  Containers:
   packetbeat-dynamic:
    Image:      docker.elastic.co/beats/packetbeat:6.2.4
    Port:       <none>
    Host Port:  <none>
    Args:
      -c
      /etc/packetbeat.yml
      -e
    Environment:
      ELASTICSEARCH_HOST:      elasticsearch
      ELASTICSEARCH_PORT:      9200
      ELASTICSEARCH_USERNAME:  elastic
      ELASTICSEARCH_PASSWORD:  changeme
      CLOUD_ID:
      ELASTIC_CLOUD_AUTH:
      KIBANA_HOST:             kibana
      KIBANA_PORT:             5601
    Mounts:
      /etc/packetbeat.yml from config (ro,path="packetbeat.yml")
      /usr/share/packetbeat/data from data (rw)
  Volumes:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      packetbeat-dynamic-config
    Optional:  false
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Events:  <none>
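One thing stands out when comparing the logs with the manifest: the DaemonSet runs with hostNetwork: true but sets no dnsPolicy, and with host networking a pod falls back to the node's resolver by default, which cannot resolve in-cluster service names such as elasticsearch (hence lookup elasticsearch on 172.31.0.2:53: no such host). A hedged sketch of the usual remedy, assuming Elasticsearch really is exposed as a service named elasticsearch in a resolvable namespace:

spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # use cluster DNS even with host networking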

How to access a StatefulSet from other pods

I have a running Elasticsearch STS with a headless Service assigned to it:
svc.yaml:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: elasticsearch-namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
stateful.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: elasticsearch-namespace
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-persistent-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: elasticsearch-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-persistent-storage
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: elasticsearch-storageclass
      resources:
        requests:
          storage: 20Gi
The question is: how do I access this STS from pods of a Deployment? Let's say, using this Redis pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-ws-app
  labels:
    app: redis-ws-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-ws-app
  template:
    metadata:
      labels:
        app: redis-ws-app
    spec:
      containers:
      - name: redis-ws-app
        image: redis:latest
        command: [ "redis-server" ]
        ports:
        - containerPort: 6379
I have been trying to create another service that would enable me to access it from the outside, but without any luck:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-tcp
  namespace: elasticsearch-namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - protocol: TCP
      port: 9200
      targetPort: 9200
You can reach it directly by hitting the headless service. As an example, take this StatefulSet and this Service:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-headless
spec:
  selector:
    app: nginx
  clusterIP: None
  ports:
    - port: 80
      name: http
I could reach the pods of the StatefulSet through the headless service from any pod within the cluster:
/ # curl -I nginx-headless
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Tue, 09 Jun 2020 12:36:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
The peculiarity of a headless service is that it doesn't create iptables rules for the service. So, when you query that service, the request goes to kube-dns (or CoreDNS), which returns the backends rather than the IP address of the service itself. So, if you do an nslookup, for example, it will return all the backends (pods) of that service:
/ # nslookup nginx-headless
Name: nginx-headless
Address 1: 10.56.1.44
Address 2: 10.56.1.45
Address 3: 10.56.1.46
Address 4: 10.56.1.47
And it won't have any iptable rules assigned to it:
$ sudo iptables-save | grep -i nginx-headless
$
Unlike a normal service, which returns the IP address of the service itself:
/ # nslookup nginx
Name: nginx
Address 1: 10.60.15.30 nginx.default.svc.cluster.local
And it will have iptable rules assigned to it:
$ sudo iptables-save | grep -i nginx
-A KUBE-SERVICES ! -s 10.56.0.0/14 -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
User suren was right about the headless service. In my case, I was just using a wrong reference.
The Kube-DNS naming convention is
service.namespace.svc.cluster-domain.tld
and the default cluster domain is cluster.local.
In my case, in order to reach the pods, one has to use:
curl -I elasticsearch.elasticsearch-namespace
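Individual StatefulSet pods are reachable the same way, because the headless service gives each pod a stable DNS record of the form pod-name.service-name.namespace. For example, based on the names used in the question (a sketch, not from the original answer):

curl -I es-cluster-0.elasticsearch.elasticsearch-namespace:9200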
