NodePort works but Ingress doesn't - Laravel

I am new to Kubernetes and I am trying to set up a web app for production. I tested an nginx app and it works perfectly with both NodePort and Ingress.
When I set up my own web app, exposing it through NodePort works, but exposing it through Ingress does not. I have since deleted the nginx test app's Deployment, Service, and Ingress, but my web app still behaves the same way (works via NodePort, not via Ingress). Any help getting this to work is appreciated.
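For reference, a quick way to narrow this kind of problem down (assuming the ingress-nginx controller runs in the ingress-nginx namespace with the default deployment name; resource names are the ones from the manifests below) is to compare what the Ingress, the Service endpoints, and the controller logs say:
# Does the Ingress have an address and the expected backend?
kubectl describe ingress mywebapp-nginx-ingress
# Does the Service actually have pod endpoints behind it?
kubectl get endpoints mywebapp-dev-svc
# What does the ingress controller log when the URL is hit?
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50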
This is the nginx YAML (works via both NodePort and Ingress):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: web
              protocol: TCP
          readinessProbe:
            httpGet:
              port: web
              path: /
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    # app: nginx
    app: mruser-dev
  ports:
    - name: web
      port: 80
      targetPort: web
      nodePort: 31122
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-nodeport
                port:
                  number: 80
This is my web app YAML (works via NodePort but not via Ingress):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mywebapp-dev
  template:
    metadata:
      labels:
        app: mywebapp-dev
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
        scheduler.alpha.kubernetes.io/affinity: >
          {
            "nodeAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                  {
                    "matchExpressions": [
                      {
                        "key": "kubernetes.io/hostname",
                        "operator": "In",
                        "values": ["server-node-1"]
                      }
                    ]
                  }
                ]
              }
            }
          }
    spec:
      volumes:
        - name: logs
          emptyDir: {}
        - name: cache
          emptyDir: {}
        - name: testing
          emptyDir: {}
        - name: sessions
          emptyDir: {}
        - name: views
          emptyDir: {}
        - name: mywebapp-dev-nfsvolume
          nfs:
            server: 10.0.0.20
            path: /var/www/html/misteruser-dev
        - name: mywebapp-nginx-nfsvolume
          nfs:
            server: 10.0.0.20
            path: /var/www/nginx
      securityContext:
        fsGroup: 82
      initContainers:
        - name: database-migrations
          image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
          imagePullPolicy: Never
          envFrom:
            - configMapRef:
                name: mywebapp-dev-laravel
            - secretRef:
                name: mywebapp-dev-laravel
          volumeMounts:
            - name: mywebapp-dev-nfsvolume
              mountPath: /var/www/html/
          command:
            - "php"
          args:
            - "artisan"
            - "migrate"
            - "--force"
      containers:
        - name: nginx
          imagePullPolicy: Never
          image: mywebapp-dev-laravel-nginx:stable-alpine
          volumeMounts:
            - name: mywebapp-nginx-nfsvolume
              mountPath: /etc/nginx/conf.d/
          resources:
            limits:
              cpu: 200m
              memory: 50M
          ports:
            - containerPort: 80
              name: web
              protocol: TCP
          readinessProbe:
            httpGet:
              port: web
              path: /
        - name: fpm
          imagePullPolicy: Never
          envFrom:
            - configMapRef:
                name: mywebapp-dev-laravel
            - secretRef:
                name: mywebapp-dev-laravel
          securityContext:
            runAsUser: 82
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: logs
              mountPath: /var/www/html/storage/logs
            - name: cache
              mountPath: /var/www/html/storage/framework/cache
            - name: sessions
              mountPath: /var/www/html/storage/framework/sessions
            - name: views
              mountPath: /var/www/html/storage/framework/views
            - name: testing
              mountPath: /var/www/html/storage/framework/testing
            - name: mywebapp-dev-nfsvolume
              mountPath: /var/www/html/
          resources:
            limits:
              cpu: 500m
              memory: 200Mi
          image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
          ports:
            - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: mywebapp-dev-svc
  namespace: default
spec:
  type: NodePort
  selector:
    app: mywebapp-dev
  ports:
    - name: web
      port: 80
      targetPort: 80
      nodePort: 31112
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebapp-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mywebapp-dev-svc
                port:
                  number: 80
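As a side note, with networking.k8s.io/v1 the kubernetes.io/ingress.class annotation is deprecated in favour of spec.ingressClassName. A variant of the same Ingress using that field (assuming an IngressClass named nginx exists in the cluster) would look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebapp-nginx-ingress
spec:
  ingressClassName: nginx   # name of the installed IngressClass; adjust if yours differs
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mywebapp-dev-svc
                port:
                  number: 80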

Related

K8s - Metricbeat sends data to Elasticsearch but Filebeat doesn't

I'm trying to deploy the ELK stack on my Kubernetes cluster.
Elasticsearch, Metricbeat, Filebeat and Kibana are running on Kubernetes, but in Kibana there are no Filebeat index logs.
Kibana is accessible: URL here
Only the Metricbeat index is available.
I don't know where the issue is; please help me figure it out. Any ideas?
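A quick check that helps localize a problem like this is whether any filebeat-* index exists in Elasticsearch at all, and whether Logstash is actually receiving beats events (the address and resource names below are taken from the listings that follow; adjust to your cluster):
# List indices; if no filebeat-* index shows up, events never reached Elasticsearch
curl 'http://159.223.240.9:9200/_cat/indices?v'
# Check that Logstash is receiving events from Filebeat
kubectl logs -n elk deploy/logstash-deployment --tail=50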
Pods:
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 21h
es-mono-0 1/1 Running 0 19h
filebeat-4446k 1/1 Running 0 11m
filebeat-fwb57 1/1 Running 0 11m
filebeat-mk5wl 1/1 Running 0 11m
filebeat-pm8xd 1/1 Running 0 11m
kibana-86d8ccc6bb-76bwq 1/1 Running 0 24h
logstash-deployment-8ffbcc994-bcw5n 1/1 Running 0 24h
metricbeat-4s5tx 1/1 Running 0 21h
metricbeat-sgf8h 1/1 Running 0 21h
metricbeat-tfv5d 1/1 Running 0 21h
metricbeat-z8rnm 1/1 Running 0 21h
SVC
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch LoadBalancer 10.245.83.99 159.223.240.9 9200:31872/TCP,9300:30997/TCP 19h
kibana NodePort 10.245.229.75 <none> 5601:32040/TCP 24h
kibana-external LoadBalancer 10.245.184.232 <pending> 80:31646/TCP 24h
logstash-service ClusterIP 10.245.113.154 <none> 5044/TCP 24h
Logstash logs (Raw)
Filebeat logs (Raw)
kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elk
labels:
run: kibana
spec:
replicas: 1
selector:
matchLabels:
run: kibana
template:
metadata:
labels:
run: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:6.5.4
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch.elk:9200/
- name: XPACK_SECURITY_ENABLED
value: "true"
#- name: CLUSTER_NAME
# value: elasticsearch
#resources:
# limits:
# cpu: 1000m
# requests:
# cpu: 500m
ports:
- containerPort: 5601
name: http
protocol: TCP
#volumes:
# - name: logtrail-config
# configMap:
# name: logtrail-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elk
labels:
#service: kibana
run: kibana
spec:
type: NodePort
selector:
run: kibana
ports:
- port: 5601
targetPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana-external
spec:
type: LoadBalancer
selector:
app: kibana
ports:
- name: http
port: 80
targetPort: 5601
filebeat.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elk
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elk
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.logstash:
hosts: ['logstash-service:5044']
setup.kibana.host: "http://kibana.elk:5601"
setup.kibana.protocol: "http"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: elk
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.5.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
#- name: data
# mountPath: /usr/share/filebeat/data
subPath: filebeat/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
#- name: data
# persistentVolumeClaim:
# claimName: elk-pvc
---
Metricbeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: elk
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
setup.kibana:
host: "kibana.elk:5601"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
namespace: elk
labels:
k8s-app: metricbeat
data:
system.yml: |-
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
#- core
#- diskio
#- socket
processes: ['.*']
process.include_top_n:
by_cpu: 5 # include top 5 processes by CPU
by_memory: 5 # include top 5 processes by memory
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
hosts: ["localhost:10255"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
serviceAccountName: metricbeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.5.4
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
- name: dockersock
mountPath: /var/run/docker.sock
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: data
mountPath: /usr/share/metricbeat/data
subPath: metricbeat/
volumes:
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-daemonset-modules
- name: data
persistentVolumeClaim:
claimName: elk-pvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: elk
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
Logstash.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: elk
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
output {
elasticsearch {
hosts => ["elasticsearch.elk:9200"]
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elk
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elk
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
Full src files (GitHub)
Try using Fluentd for log transport instead.
fluentd.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: elk
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.elk.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
resources:
limits:
memory: 512Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers

How to set up EFK logging on AKS cluster nodes?

How do I set up EFK logging on AKS cluster nodes?
Below are my spec files for EFK logging in AKS clusters.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
The setup works fine, except that no logs reach the Elasticsearch cluster from Fluentd, whereas the same spec files work fine inside a minikube cluster.
Kibana is up and able to connect to Elasticsearch, and the same goes for Fluentd; the logs just never arrive in Elasticsearch.
What extra configuration is needed to make these config files work with Azure Kubernetes Service (AKS) cluster nodes?
I had to add the environment variables below for Fluentd.
Reference Link: https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
Here's the complete spec.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods

How can I deploy an Elasticsearch cluster in a Kubernetes replica set?

I am trying to deploy Elasticsearch to a K8S cluster. Below is my configuration. It works fine with replicas: 1, but if I change that to a value greater than 1, multiple pods get deployed in the K8S cluster. How can I make them work together as the nodes of one Elasticsearch cluster?
I know I can add the nodes to the Elasticsearch cluster manually using the API, but is there any way to let the cluster discover the extra nodes automatically?
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es
namespace: default
spec:
serviceName: es-entrypoint
replicas: 1
selector:
matchLabels:
name: es
template:
metadata:
labels:
name: es
spec:
volumes:
- name: es-config
configMap:
name: es-config
items:
- key: elasticsearch.yml
path: elasticsearch.yml
- name: persistent-storage
persistentVolumeClaim:
claimName: es-claim
initContainers:
- name: permissions-fix
image: busybox
volumeMounts:
- name: persistent-storage
mountPath: /usr/share/elasticsearch/data
command: [ 'chown' ]
args: [ '1000:1000', '/usr/share/elasticsearch/data' ]
containers:
- name: es
image: elasticsearch:7.10.1
resources:
requests:
cpu: 2
memory: 8
ports:
- name: http
containerPort: 9200
- containerPort: 9300
name: inter-node
volumeMounts:
- name: es-config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
- name: persistent-storage
mountPath: /usr/share/elasticsearch/data
---
apiVersion: v1
kind: Service
metadata:
name: es-entrypoint
spec:
selector:
name: es
ports:
- port: 9200
targetPort: 9200
protocol: TCP
type: NodePort
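For Elasticsearch 7.x, the usual way to let the nodes find each other automatically is a headless Service plus the discovery.seed_hosts and cluster.initial_master_nodes settings, the same approach used in the AKS StatefulSet earlier on this page. A minimal sketch for the StatefulSet above, assuming replicas: 3 and that es-entrypoint is changed to a headless Service (clusterIP: None) so each pod gets its own DNS name, would add env entries such as:
env:
  - name: node.name
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # pod name, e.g. es-0
  - name: discovery.seed_hosts
    value: "es-0.es-entrypoint,es-1.es-entrypoint,es-2.es-entrypoint"
  - name: cluster.initial_master_nodes
    value: "es-0,es-1,es-2"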

How to mount saved objects into Kibana in Kubernetes?

I'm running an EFK stack in my Kubernetes cluster; however, each time I start the Kibana dashboard I have to manually import export.ndjson. I've heard that all Kibana objects are stored in Elasticsearch, so I mounted this file into /usr/share/elasticsearch/data/, but I still can't see it in the dashboard.
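A file dropped into /usr/share/elasticsearch/data/ is never read as saved objects; Kibana objects live in the .kibana index, so the export.ndjson has to go through Kibana's saved objects import API. A rough sketch, assuming the kibana Service in the tools namespace shown below and a local copy of export.ndjson:
# Expose Kibana locally, then import the saved objects through its API
kubectl -n tools port-forward svc/kibana 5601:5601 &
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson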
Here are my yaml files:
kibana :
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: tools
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.6.0
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: tools
spec:
selector:
app: kibana
ports:
- name: client
port: 5601
protocol: TCP
type: ClusterIP
es:
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: tools
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: elasticsearch-volume
mountPath: /usr/share/elasticsearch/data
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
env:
- name: discovery.type
valueFrom:
configMapKeyRef:
name: tools-config
key: discovery.type
volumeMounts:
- name: elasticsearch-volume
mountPath: /usr/share/elasticsearch/data
- name: kibana-dashboard
mountPath: /usr/share/elasticsearch/data/export.ndjson
subPath: export.ndjson
volumes:
- name: elasticsearch-volume
persistentVolumeClaim:
claimName: elasticsearch-storage-pvc
- name: kibana-dashboard
configMap:
name: kibana-dashboard
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: tools
spec:
selector:
app: elasticsearch
ports:
- name: http
protocol: TCP
port: 9200
- name: transport
protocol: TCP
port: 9300
type: ClusterIP

How can I create this pod without this error (Elasticsearch on Kubernetes)?

I want to create an Elasticsearch pod on Kubernetes.
I made some config changes to edit path.data and path.logs, but I'm getting this error:
error: error validating "es-deploy.yml": error validating data:
ValidationError(Deployment.spec.template.spec.containers[0]): unknown
field "volumes" in io.k8s.api.core.v1.Container; if you choose to
ignore these errors, turn validation off with --validate=false
service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch
es-svc.yml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
component: elasticsearch
spec:
# type: LoadBalancer
selector:
component: elasticsearch
ports:
- name: http
port: 9200
protocol: TCP
- name: transport
port: 9300
protocol: TCP
elasticsearch.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-config
data:
elasticsearch.yml: |
cluster:
name: ${CLUSTER_NAME:elasticsearch-default}
node:
master: ${NODE_MASTER:true}
data: ${NODE_DATA:true}
name: ${NODE_NAME}
ingest: ${NODE_INGEST:true}
max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
processors: ${PROCESSORS:1}
network.host: ${NETWORK_HOST:_site_}
path:
data: ${DATA_PATH:"/data/elk"}
repo: ${REPO_LOCATIONS:[]}
bootstrap:
memory_lock: ${MEMORY_LOCK:false}
http:
enabled: ${HTTP_ENABLE:true}
compression: true
cors:
enabled: true
allow-origin: "*"
discovery:
zen:
ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
xpack:
license.self_generated.type: basic
es-deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es
labels:
component: elasticsearch
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
spec:
serviceAccount: elasticsearch
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es
securityContext:
capabilities:
add:
- IPC_LOCK
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "DISCOVERY_SERVICE"
value: "elasticsearch"
- name: NODE_MASTER
value: "true"
- name: NODE_DATA
value: "true"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: "-Xms256m -Xmx256m"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data/elk
- name: config-volume
mountPath: /usr/share/elasticsearch/elastic.yaml
volumes:
- name: storage
emptyDir: {}
- name: config-volume
configMap:
name: elasticsearch-config
There is a syntax problem in your es-deploy.yml file.
This should work:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es
labels:
component: elasticsearch
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
spec:
serviceAccount: elasticsearch
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es
securityContext:
capabilities:
add:
- IPC_LOCK
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "DISCOVERY_SERVICE"
value: "elasticsearch"
- name: NODE_MASTER
value: "true"
- name: NODE_DATA
value: "true"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: "-Xms256m -Xmx256m"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data/elk
- name: config-volume
mountPath: /usr/share/elasticsearch/elastic.yaml
volumes:
- name: storage
emptyDir: {}
- name: config-volume
configMap:
name: elasticsearch-config
The volumes section does not belong under the containers section; it should sit under the spec section, as the error suggests.
You can validate your k8s YAML files for syntax errors online using this site.
Hope this helps.
