I have a kube-apiserver.yaml file that looks like this:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.49.2:8443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.49.2
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/var/lib/minikube/certs/ca.crt
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
- --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
- --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
- --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
- --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=8443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/var/lib/minikube/certs/sa.pub
- --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
- --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
image: registry.k8s.io/kube-apiserver:v1.25.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.49.2
path: /livez
port: 8443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: 192.168.49.2
path: /readyz
port: 8443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: 192.168.49.2
path: /livez
port: 8443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /var/lib/minikube/certs
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /var/lib/minikube/certs
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
and I created another file called patch.yaml, based on the kube-apiserver.yaml structure, that looks like this:
apiVersion: v1
kind: Pod
metadata:
spec:
containers:
- command:
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
volumeMounts:
- mountPath: /etc/kubernetes/audit-policy.yaml
name: audit
readOnly: true
- mountPath: /var/log/kubernetes/audit/
name: audit-log
readOnly: false
volumes:
- hostPath:
path: /etc/kubernetes/audit-policy.yaml
type: File
name: audit
- hostPath:
path: /var/log/kubernetes/audit/
type: DirectoryOrCreate
name: audit-log
status: {}
I want to modify kube-apiserver.yaml so it looks like this:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.49.2:8443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.49.2
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/var/lib/minikube/certs/ca.crt
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
- --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
- --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
- --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
- --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=8443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/var/lib/minikube/certs/sa.pub
- --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
- --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
image: registry.k8s.io/kube-apiserver:v1.25.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.49.2
path: /livez
port: 8443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: 192.168.49.2
path: /readyz
port: 8443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: 192.168.49.2
path: /livez
port: 8443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /var/lib/minikube/certs
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
- mountPath: /etc/kubernetes/audit-policy.yaml
name: audit
readOnly: true
- mountPath: /var/log/kubernetes/audit/
name: audit-log
readOnly: false
hostNetwork: true
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /var/lib/minikube/certs
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
- hostPath:
path: /etc/kubernetes/audit-policy.yaml
type: File
name: audit
- hostPath:
path: /var/log/kubernetes/audit/
type: DirectoryOrCreate
name: audit-log
status: {}
Basically, what I want is to add the command flags from patch.yaml to kube-apiserver.yaml, as well as the volumeMounts and volumes.
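One way to script this merge instead of editing by hand is with yq. This is a minimal sketch, assuming mikefarah's yq v4 is available wherever the manifest lives; the appended values are the ones from patch.yaml above:
# Sketch: += appends to the existing lists, so the original flags,
# volumeMounts and volumes are kept and the audit entries are added.
yq eval '
  .spec.containers[0].command += [
    "--audit-policy-file=/etc/kubernetes/audit-policy.yaml",
    "--audit-log-path=/var/log/kubernetes/audit/audit.log"
  ] |
  .spec.containers[0].volumeMounts += [
    {"mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true},
    {"mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": false}
  ] |
  .spec.volumes += [
    {"hostPath": {"path": "/etc/kubernetes/audit-policy.yaml", "type": "File"}, "name": "audit"},
    {"hostPath": {"path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate"}, "name": "audit-log"}
  ]
' kube-apiserver.yaml > kube-apiserver-patched.yaml
After reviewing the output, copying it over the manifest in the kubelet's static pod directory (for kubeadm-based setups such as minikube this is typically /etc/kubernetes/manifests inside the node, reachable via minikube ssh; verify the path on your node) makes the kubelet restart the API server with the audit flags.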
I am trying to send the logs from my AKS cluster into Elasticsearch. The log that I am getting is "event.agent_id_status auth_metadata_missing" in Kibana, even though all the volume mounts are done correctly.
Here's my ConfigMap for the standalone Elastic Agent:
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-node-datastreams
namespace: elastic
labels:
k8s-app: elastic-agent-standalone
data:
agent.yml: |-
outputs:
default:
type: elasticsearch
protocol: https
ssl.verification_mode: 'none'
allow_older_versions: true
hosts:
- >-
${ES_HOST}
username: ${ES_USERNAME}
password: ${ES_PASSWORD}
indices:
- index: "journalbeat-alias"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "journal"
- index: "test-audit"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "audit"
- index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
when.not:
has_fields: ['kubernetes.namespace']
agent:
monitoring:
enabled: true
use_output: default
logs: true
metrics: false
providers.kubernetes:
node: ${NODE_NAME}
scope: node
#Uncomment to enable hints' support
#hints.enabled: true
inputs:
- name: system-logs
type: logfile
use_output: default
meta:
package:
name: system
version: 0.10.7
data_stream:
namespace: filebeat
streams:
- data_stream:
dataset: audit
type: logfile
paths:
- /var/log/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: journald
type: logfile
paths:
- /var/lib/host/log/journal
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: container
type: logfile
paths:
- /var/log/containers/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
And here's my DaemonSet file:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: elastic-agent-standalone
namespace: elastic
labels:
app: elastic-agent-standalone
spec:
selector:
matchLabels:
app: elastic-agent-standalone
template:
metadata:
labels:
app: elastic-agent-standalone
spec:
# Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
# Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: elastic-agent-standalone
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: elastic-agent-standalone
image: docker.elastic.co/beats/elastic-agent:8.4.3
args: [
"-c", "/etc/elastic-agent/agent.yml",
"-e",
]
env:
# The basic authentication username used to connect to Elasticsearch
# This user needs the privileges required to publish events to Elasticsearch.
- name: FLEET_ENROLL_INSECURE
value: "1"
- name: ES_USERNAME
value: <CORRECT USER>
# The basic authentication password used to connect to Elasticsearch
- name: ES_PASSWORD
value: <MY CORRECT PASSWORD>
# The Elasticsearch host to communicate with
- name: ES_HOST
value: <CORRECT HOST>
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: STATE_PATH
value: "/etc/elastic-agent"
securityContext:
runAsUser: 0
resources:
limits:
memory: 700Mi
requests:
cpu: 100m
memory: 400Mi
volumeMounts:
- name: datastreams
mountPath: /etc/elastic-agent/agent.yml
readOnly: true
subPath: agent.yml
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: varlogcontainers
mountPath: /var/log/containers
readOnly: true
- name: varlogpods
mountPath: /var/log/pods
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: kubenodevarlogs
mountPath: /var/lib/host/log
readOnly: true
# - name: varlog
# mountPath: /var/log
# readOnly: true
- name: etc-full
mountPath: /hostfs/etc
readOnly: true
- name: var-lib
mountPath: /hostfs/var/lib
readOnly: true
volumes:
- name: datastreams
configMap:
defaultMode: 0640
name: agent-node-datastreams
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: kubenodevarlogs
hostPath:
path: /var/log
# The following volumes are needed for Cloud Security Posture integration (cloudbeat)
# If you are not using this integration, then these volumes and the corresponding
# mounts can be removed.
- name: etc-full
hostPath:
path: /etc
- name: var-lib
hostPath:
path: /var/lib
And this is the log that I get in Kibana for the path /var/log/containers, but it's the same for all the other inputs, and there are no data streams getting generated for any of the paths.
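A quick way to confirm whether anything is reaching Elasticsearch at all is to query the cluster directly with the same credentials the agent uses. A sketch, assuming the host is reachable on port 9200 and substituting the placeholders from the DaemonSet env above (adjust the index patterns as needed):
# Sketch: -k mirrors ssl.verification_mode: 'none' from the output config.
curl -k -u '<ES_USERNAME>:<ES_PASSWORD>' 'https://<ES_HOST>:9200/_cat/indices/test-audit*,journalbeat*,filebeat-*?v'
curl -k -u '<ES_USERNAME>:<ES_PASSWORD>' 'https://<ES_HOST>:9200/_data_stream?pretty'
If neither call shows the expected indices or data streams, the problem is between the agent and Elasticsearch rather than in Kibana.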
The Elasticsearch pod is not becoming active.
logging-es-data-master-ilmz5zyt-3-deploy 1/1 Running 0 5m
logging-es-data-master-ilmz5zyt-3-qxkml 1/2 Running 0 5m
and the events are:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned logging-es-data-master-ilmz5zyt-3-qxkml to digi-srv-pp-01
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/origin-logging-elasticsearch:v3.10" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/oauth-proxy:v1.0.0" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Warning Unhealthy 13s (x55 over 4m) kubelet, digi-srv-pp-01 Readiness probe failed: Elasticsearch node is not ready to accept HTTP requests yet [response code: 000]
The deployment config is:
# oc export dc/logging-es-data-master-ilmz5zyt -o yaml
Command "export" is deprecated, use the oc get --export
apiVersion: v1
kind: DeploymentConfig
metadata:
creationTimestamp: null
generation: 5
labels:
component: es
deployment: logging-es-data-master-ilmz5zyt
logging-infra: elasticsearch
provider: openshift
name: logging-es-data-master-ilmz5zyt
spec:
replicas: 1
revisionHistoryLimit: 0
selector:
component: es
deployment: logging-es-data-master-ilmz5zyt
logging-infra: elasticsearch
provider: openshift
strategy:
activeDeadlineSeconds: 21600
recreateParams:
timeoutSeconds: 600
resources: {}
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
component: es
deployment: logging-es-data-master-ilmz5zyt
logging-infra: elasticsearch
provider: openshift
name: logging-es-data-master-ilmz5zyt
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: logging-infra
operator: In
values:
- elasticsearch
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: DC_NAME
value: logging-es-data-master-ilmz5zyt
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_TRUST_CERTIFICATES
value: "true"
- name: SERVICE_DNS
value: logging-es-cluster
- name: CLUSTER_NAME
value: logging-es
- name: INSTANCE_RAM
value: 12Gi
- name: HEAP_DUMP_LOCATION
value: /elasticsearch/persistent/heapdump.hprof
- name: NODE_QUORUM
value: "1"
- name: RECOVER_EXPECTED_NODES
value: "1"
- name: RECOVER_AFTER_TIME
value: 5m
- name: READINESS_PROBE_TIMEOUT
value: "30"
- name: POD_LABEL
value: component=es
- name: IS_MASTER
value: "true"
- name: HAS_DATA
value: "true"
- name: PROMETHEUS_USER
value: system:serviceaccount:openshift-metrics:prometheus
image: docker.io/openshift/origin-logging-elasticsearch:v3.10
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: restapi
protocol: TCP
- containerPort: 9300
name: cluster
protocol: TCP
readinessProbe:
exec:
command:
- /usr/share/java/elasticsearch/probe/readiness.sh
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 120
resources:
limits:
memory: 12Gi
requests:
cpu: "1"
memory: 12Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/elasticsearch/secret
name: elasticsearch
readOnly: true
- mountPath: /usr/share/java/elasticsearch/config
name: elasticsearch-config
readOnly: true
- mountPath: /elasticsearch/persistent
name: elasticsearch-storage
- args:
- --upstream-ca=/etc/elasticsearch/secret/admin-ca
- --https-address=:4443
- -provider=openshift
- -client-id=system:serviceaccount:openshift-logging:aggregated-logging-elasticsearch
- -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
- -cookie-secret=endzaVczSWMzb0NoNlVtVw==
- -basic-auth-password=NXd9xTjg4npjIM0E
- -upstream=https://localhost:9200
- '-openshift-sar={"namespace": "openshift-logging", "verb": "view", "resource":
"prometheus", "group": "metrics.openshift.io"}'
- '-openshift-delegate-urls={"/": {"resource": "prometheus", "verb": "view",
"group": "metrics.openshift.io", "namespace": "openshift-logging"}}'
- --tls-cert=/etc/tls/private/tls.crt
- --tls-key=/etc/tls/private/tls.key
- -pass-access-token
- -pass-user-headers
image: docker.io/openshift/oauth-proxy:v1.0.0
imagePullPolicy: IfNotPresent
name: proxy
ports:
- containerPort: 4443
name: proxy
protocol: TCP
resources:
limits:
memory: 64Mi
requests:
cpu: 100m
memory: 64Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/tls/private
name: proxy-tls
readOnly: true
- mountPath: /etc/elasticsearch/secret
name: elasticsearch
readOnly: true
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/compute: "true"
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
supplementalGroups:
- 65534
serviceAccount: aggregated-logging-elasticsearch
serviceAccountName: aggregated-logging-elasticsearch
terminationGracePeriodSeconds: 30
volumes:
- name: proxy-tls
secret:
defaultMode: 420
secretName: prometheus-tls
- name: elasticsearch
secret:
defaultMode: 420
secretName: logging-elasticsearch
- configMap:
defaultMode: 420
name: logging-elasticsearch
name: elasticsearch-config
- name: elasticsearch-storage
persistentVolumeClaim:
claimName: logging-es-0
test: false
triggers: []
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
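Since the readiness probe (readiness.sh) is what keeps reporting response code 000, the fastest way to narrow this down is usually to look at what the elasticsearch container itself is logging and to run the same probe by hand. A sketch, using the pod name and script path from the output above:
# Sketch: inspect the elasticsearch container and re-run its readiness probe.
oc logs logging-es-data-master-ilmz5zyt-3-qxkml -c elasticsearch | tail -n 100
oc rsh -c elasticsearch logging-es-data-master-ilmz5zyt-3-qxkml \
  /usr/share/java/elasticsearch/probe/readiness.sh
A response code of 000 usually means nothing answered on port 9200 yet, so the real cause (heap, disk, or certificate problems, for example) normally shows up in the Elasticsearch logs rather than in the pod events.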
The Spring Boot application pod is not coming up, nor is it failing; it is just stuck, and I'm not getting any clue about what is going wrong. The attached screenshot shows the logs it is generating.
(pod log screenshot)
apiVersion: apps/v1
kind: Deployment
metadata:
name: <deployement-name>
spec:
selector:
matchLabels:
app: <app-name>
replicas: 1
template:
metadata:
labels:
app: <app-name>
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8090"
prometheus.io/path: "/app/actuator/prometheus"
spec:
securityContext:
runAsGroup: 3000
runAsUser: 3000
containers:
- name: <app-name>
image: <url>/REPLACE_ME
imagePullPolicy: Always
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: <random>-db-secret
key: db-user
- name: DB_PASS
valueFrom:
secretKeyRef:
name: <random>-db-secret
key: db-pass
- name: spring-profile
valueFrom:
configMapKeyRef:
name: spring-profile-stage
key: ENV
- name: securityaudit.hostname
valueFrom:
configMapKeyRef:
name: securityaudit-config
key: securityaudit.hostname
- name: securityaudit.ipaddress
valueFrom:
configMapKeyRef:
name: securityaudit-config
key: securityaudit.ipaddress
- name: securityaudit.product
valueFrom:
configMapKeyRef:
name: securityaudit-config
key: securityaudit.product
- name: splunkurl
valueFrom:
configMapKeyRef:
name: splunk-config
key: splunkurl
- name: splunktoken
valueFrom:
secretKeyRef:
name: hec-token-secret
key: hec-token
- name: splunkindex
valueFrom:
configMapKeyRef:
name: splunk-config
key: splunkindex
- name: splunkCertValidDisable
valueFrom:
configMapKeyRef:
name: splunk-config
key: splunkDisableCertValidation
- name: MY_POD_ID
valueFrom:
fieldRef:
fieldPath: metadata.name
resources:
limits:
memory: 4Gi
cpu: 1
requests:
memory: 4Gi
cpu: 1
securityContext:
privileged: false
ports:
- containerPort: 8090
livenessProbe:
httpGet:
path: /<app-name>/actuator/health/liveness
port: 8090
initialDelaySeconds: 180
readinessProbe:
httpGet:
path: /<app-name>/actuator/health/readiness
port: 8090
initialDelaySeconds: 180
periodSeconds: 10
volumeMounts:
- name: <app-name>-heapdump
mountPath: /<app-name>heapdump
volumes:
- name: <app-name>-heapdump
persistentVolumeClaim:
claimName: <app-name>heap-dump-stage
The answer by @Rakesh Gupta resolved my problem. The pod was up; the problem was that the logging level was not set, and I thought something was wrong with the pod itself. Being new to this project, I missed checking the logging level.
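For anyone hitting the same thing: the log level can be raised without rebuilding the image by setting a Spring Boot environment variable on the deployment. A sketch, assuming the standard relaxed-binding name LOGGING_LEVEL_ROOT (which maps to logging.level.root) and the deployment placeholder name from the manifest above:
# Sketch: bump the root log level, then watch the pods roll and log.
kubectl set env deployment/<deployement-name> LOGGING_LEVEL_ROOT=DEBUG
kubectl rollout status deployment/<deployement-name>
kubectl logs -f deployment/<deployement-name>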
Does this answer your question? stackoverflow.com/questions/40762547/…
I'm trying to set up an NSQ cluster in Kubernetes and I'm having issues.
Basically, I want to scale out NSQ and NSQ Lookup. I have a StatefulSet (2 nodes) definition for both of them. Rather than posting the whole YAML file, I'll post only the relevant part for NSQ.
NSQ container template
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd.default.svc.cluster.local:4160
Here nsqlookupd.default.svc.cluster.local is a K8s headless service. By doing this, I'm expecting the NSQ instance to open a connection to all of the NSQ Lookup instances, which in fact is not happening; it just opens a connection to a random one. However, if I explicitly list all of the NSQ Lookup hosts like this, it works.
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd-0.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-1.nsqlookupd:4160
I also wanted to use the headless service DNS name in --broadcast-address for both NSQ and NSQ Lookup, but that doesn't work either.
I'm using the nsqio Go library for publishing and consuming messages, and it looks like I can't use the headless service there either; I have to explicitly list the NSQ/NSQ Lookup pod names when initializing a consumer or publisher.
Am I using this in the wrong way?
I mean, I want horizontally scaled NSQ and NSQ Lookup instances without hardcoding the addresses.
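One thing worth confirming is what the headless service name actually resolves to: it returns one A record per ready nsqlookupd pod, while nsqd treats each -lookupd-tcp-address value as a single host:port, which matches the behaviour described above (only one connection). A quick check, as a sketch:
# Sketch: show that the headless service resolves to all nsqlookupd pod IPs.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup nsqlookupd.default.svc.cluster.local
The usual workaround is to list the stable per-pod names (nsqlookupd-0.nsqlookupd, nsqlookupd-1.nsqlookupd, ...) that the StatefulSet plus headless service provide, as in the second snippet above, which is also what the answer below does.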
You could use a StatefulSet and a headless service to achieve this goal.
Try using the config YAML below to deploy it into the K8s cluster, or check out the official Helm chart: https://github.com/nsqio/helm-chart
Using a headless service, you can discover all pod IPs.
apiVersion: v1
kind: Service
metadata:
name: nsqlookupd
labels:
app: nsq
spec:
ports:
- port: 4160
targetPort: 4160
name: tcp
- port: 4161
targetPort: 4161
name: http
publishNotReadyAddresses: true
clusterIP: None
selector:
app: nsq
component: nsqlookupd
---
apiVersion: v1
kind: Service
metadata:
name: nsqd
labels:
app: nsq
spec:
ports:
- port: 4150
targetPort: 4150
name: tcp
- port: 4151
targetPort: 4151
name: http
clusterIP: None
selector:
app: nsq
component: nsqd
---
apiVersion: v1
kind: Service
metadata:
name: nsqadmin
labels:
app: nsq
spec:
ports:
- port: 4170
targetPort: 4170
name: tcp
- port: 4171
targetPort: 4171
name: http
selector:
app: nsq
component: nsqadmin
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: nsqlookupd
spec:
serviceName: "nsqlookupd"
replicas: 3
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: nsq
component: nsqlookupd
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nsq
- key: component
operator: In
values:
- nsqlookupd
topologyKey: "kubernetes.io/hostname"
containers:
- name: nsqlookupd
image: nsqio/nsq:v1.1.0
imagePullPolicy: Always
resources:
requests:
cpu: 30m
memory: 64Mi
ports:
- containerPort: 4160
name: tcp
- containerPort: 4161
name: http
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 2
command:
- /nsqlookupd
terminationGracePeriodSeconds: 5
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: nsqd
spec:
serviceName: "nsqd"
replicas: 3
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: nsq
component: nsqd
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nsq
- key: component
operator: In
values:
- nsqd
topologyKey: "kubernetes.io/hostname"
containers:
- name: nsqd
image: nsqio/nsq:v1.1.0
imagePullPolicy: Always
resources:
requests:
cpu: 30m
memory: 64Mi
ports:
- containerPort: 4150
name: tcp
- containerPort: 4151
name: http
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 2
volumeMounts:
- name: datadir
mountPath: /data
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd-0.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-1.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-2.nsqlookupd:4160
- -broadcast-address
- $(HOSTNAME).nsqd
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: metadata.name
terminationGracePeriodSeconds: 5
volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: ssd
resources:
requests:
storage: 1Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nsqadmin
spec:
replicas: 1
template:
metadata:
labels:
app: nsq
component: nsqadmin
spec:
containers:
- name: nsqadmin
image: nsqio/nsq:v1.1.0
imagePullPolicy: Always
resources:
requests:
cpu: 30m
memory: 64Mi
ports:
- containerPort: 4170
name: tcp
- containerPort: 4171
name: http
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 10
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
command:
- /nsqadmin
- -lookupd-http-address
- nsqlookupd-0.nsqlookupd:4161
- -lookupd-http-address
- nsqlookupd-1.nsqlookupd:4161
- -lookupd-http-address
- nsqlookupd-2.nsqlookupd:4161
terminationGracePeriodSeconds: 5
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nsq
spec:
rules:
- host: nsq.example.com
http:
paths:
- path: /
backend:
serviceName: nsqadmin
servicePort: 4171
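Once the StatefulSets are up, you can confirm that every nsqd registered with every lookupd by querying the nsqlookupd HTTP API; /nodes lists the registered producers. A quick sketch from a workstation with kubectl access:
# Sketch: forward one lookupd's HTTP port and list the registered nsqd nodes.
kubectl port-forward pod/nsqlookupd-0 4161:4161 &
curl -s http://localhost:4161/nodes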
I am trying to deploy Elasticsearch in Kubernetes with a local drive volume, but I get the following error. Can you please correct me?
Using:
Ubuntu 16.04
Kubernetes v1.11.0
Docker version 17.03.2-ce
The error I'm getting is 'unknown field hostPath' when using Elasticsearch with a local volume:
error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
This is the YAML file of the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
You have the wrong structure: volumes must be at the same level as containers and initContainers.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
You can find an example here.
Check your format: hostPath is not supposed to be under the container section, and volumes is not in its correct position.
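Concretely, volumes needs to move up so it sits directly under the pod spec (a sibling of initContainers and containers), with hostPath nested under the volume entry rather than under the container. Once that's done, validation, which is what produced the original "unknown field hostPath" error, should pass. A quick sketch to check without creating anything:
# Sketch: client-side validation is on by default and will flag any
# remaining misplaced fields before the object is created.
kubectl create --dry-run --validate=true -f es-d.yaml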