Filebeat 7.3.2 service account with ClusterRole issue - Elasticsearch

My Kubernetes user is not an admin in the cluster, so I cannot create a ClusterRoleBinding for the Filebeat service account. I am using autodiscover in Filebeat. Can someone help me achieve this without a ClusterRole?
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    setup.dashboards.enabled: false
    setup.template.enabled: true
    setup.template.settings:
      index.number_of_shards: 1
    filebeat.modules:
      - module: system
        syslog:
          enabled: true
          #var.paths: ["/var/log/syslog"]
        auth:
          enabled: true
          #var.paths: ["/var/log/authlog"]
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.namespace: microsrv-test
              config:
                - type: docker
                  json.keys_under_root: true
                  json.add_error_key: true
                  json.message_key: log
                  containers:
                    ids:
                      - "${data.kubernetes.container.id}"
    processors:
      - drop_event:
          when.or:
            - and:
                - regexp:
                    message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: error
            - and:
                - not:
                    regexp:
                      message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: access
      - add_cloud_metadata:
      - add_kubernetes_metadata:
      - add_docker_metadata:
    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.3.2
        imagePullPolicy: Always
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: KIBANA_HOST
          value: kibana
        - name: KIBANA_PORT
          value: "5601"
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        emptyDir: {}
---
Cluster Roles and role bindings
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
I have tried creating a non-cluster Role and RoleBinding as below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: logging
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
but I am getting this error:
Performing a resource sync err kubernetes api: Failure 403 pods is
forbidden: User "system:serviceaccount:xxxxx:filebeat" cannot list
resource "pods" in API group "" at the cluster scope for *v1.PodList|

Unfortunately, it will not work the way you want, and the error you are getting shows exactly why:
Performing a resource sync err kubernetes api: Failure 403 pods is forbidden: User "system:serviceaccount:xxxxx:filebeat" cannot list resource "pods" in API group "" at the cluster scope for *v1.PodList|
Notice the most important part: at the cluster scope. You can also check whether an action is allowed by executing the kubectl auth can-i command. More about that can be found in the Authorization Overview.
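For example, a quick check against this setup, sketched here under the assumption that the filebeat ServiceAccount lives in the logging namespace and that your own user is allowed to impersonate it:

# Namespaced permission (what a Role/RoleBinding can grant):
kubectl auth can-i list pods --namespace logging --as system:serviceaccount:logging:filebeat

# Cluster-scoped permission (what the autodiscover provider is actually asking for):
kubectl auth can-i list pods --all-namespaces --as system:serviceaccount:logging:filebeat

The first command should return "yes" with your Role/RoleBinding in place, while the second will return "no", which matches the 403 you are seeing.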
This brings us to the differences between Role and ClusterRole:
An RBAC Role or ClusterRole contains rules that represent a set of
permissions. Permissions are purely additive (there are no "deny"
rules).
A Role always sets permissions within a particular namespace;
when you create a Role, you have to specify the namespace it belongs
in.
ClusterRole, by contrast, is a non-namespaced resource. The
resources have different names (Role and ClusterRole) because a
Kubernetes object always has to be either namespaced or not
namespaced; it can't be both.
ClusterRoles have several uses. You can use a ClusterRole to:
define permissions on namespaced resources and be granted within individual namespace(s)
define permissions on namespaced resources and be granted across all namespaces
define permissions on cluster-scoped resources
If you want to define a role within a namespace, use a Role; if you
want to define a role cluster-wide, use a ClusterRole.
And between RoleBinding and ClusterRoleBinding:
A role binding grants the permissions defined in a role to a user or
set of users. It holds a list of subjects (users, groups, or service
accounts), and a reference to the role being granted. A RoleBinding
grants permissions within a specific namespace whereas a
ClusterRoleBinding grants that access cluster-wide.
A RoleBinding may reference any Role in the same namespace.
Alternatively, a RoleBinding can reference a ClusterRole and bind that
ClusterRole to the namespace of the RoleBinding. If you want to bind a
ClusterRole to all the namespaces in your cluster, you use a
ClusterRoleBinding.
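To illustrate that last point with a minimal sketch based on the manifests above: even if an admin had already created the filebeat ClusterRole, binding it with a namespaced RoleBinding would still only grant the permissions inside that one namespace, never at the cluster scope:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: logging
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: ClusterRole          # a RoleBinding may reference a ClusterRole...
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
  # ...but the permissions it grants are still limited to the "logging" namespace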
So it is impossible to get cluster-scoped permissions by using only a Role and a RoleBinding.
You will most likely have to ask your admin to help you solve this issue.

Related

Minikube: Issue running Kibana

I am currently learning Kubernetes and am running Minikube on macOS with Docker Desktop. I am having trouble with Kibana, which seems to be failing to start, and with exposing it through my NGINX ingress controller.
Kibana never reaches the ready state; it seems to be stuck and restarts several times. Everything lives inside the default namespace, except for Fluentd, for which I use a persistent volume and persistent volume claim to access the shared /data/logs folder.
I have added my Fluentd, Kibana, Elasticsearch, and Ingress YAML configuration, as well as the Kibana logs, below.
Fluentd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_UID
          value: "0"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - mountPath: /var/logs
          name: logs
      terminationGracePeriodSeconds: 30
      volumes:
      - name: logs
        persistentVolumeClaim:
          claimName: chi-kube-pvc
Kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.14.2
        readinessProbe:
          httpGet:
            path: /kibana
            port: 5601
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /kibana
            port: 5601
          initialDelaySeconds: 15
          periodSeconds: 20
        env:
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        - name: SERVER_BASEPATH
          value: "/kibana"
        ports:
        - containerPort: 5601
        volumeMounts:
        - mountPath: /var/logs
          name: logs
      volumes:
      - name: logs
        persistentVolumeClaim:
          claimName: chi-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  type: NodePort
  selector:
    run: kibana
  ports:
  - port: 5601
    targetPort: 5601
Kibana logs:
{"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["info","plugins-service"],"pid":1216,"message":"Plugin \"metricsEntities\" is disabled."}
{"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning."}
{"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"Support for setting server.host to \"0\" in kibana.yml is deprecated and will be removed in Kibana version 8.0.0. Instead use \"0.0.0.0\" to bind to all interfaces."}
{"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"plugins.scanDirs is deprecated and is no longer used"}
{"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"\"xpack.reporting.roles\" is deprecated. Granting reporting privilege through a \"reporting_user\" role will not be supported starting in 8.0. Please set \"xpack.reporting.roles.enabled\" to \"false\" and grant reporting privileges to users using Kibana application privileges **Management > Security > Roles**."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","http","server","NotReady"],"pid":1216,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins-system"],"pid":1216,"message":"Setting up [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,banners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExport,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,maps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins","taskManager"],"pid":1216,"message":"TaskManager is identified by the Kibana UUID: 4f523c36-da1f-46e2-a071-84ee400bb9e7"}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","security","config"],"pid":1216,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","security","config"],"pid":1216,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","reporting","config"],"pid":1216,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","reporting","config"],"pid":1216,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.4.2105\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":1216,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","actions","actions"],"pid":1216,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":1216,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins","ruleRegistry"],"pid":1216,"message":"Write is disabled, not installing assets"}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"Starting saved objects migrations"}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 226ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 192ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 118ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 536ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 86ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 86ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 64ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 112ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 49ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 29ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 106ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] Migration completed after 840ms"}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 104ms."}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] Migration completed after 869ms"}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","plugins-system"],"pid":1216,"message":"Starting [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,banners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExport,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,maps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"}
{"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","plugins","monitoring","monitoring"],"pid":1216,"message":"config sourced from: production cluster"}
{"type":"log","#timestamp":"2021-09-22T09:54:51+00:00","tags":["info","http","server","Kibana"],"pid":1216,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1216,"message":"Starting monitoring stats collection"}
{"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","securitySolution"],"pid":1216,"message":"Dependent plugin setup complete - Starting ManifestTask"}
{"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","status"],"pid":1216,"message":"Kibana is now degraded"}
{"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","reporting"],"pid":1216,"message":"Browser executable: /usr/share/kibana/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell"}
{"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["warning","plugins","reporting"],"pid":1216,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"log","#timestamp":"2021-09-22T09:54:55+00:00","tags":["info","status"],"pid":1216,"message":"Kibana is now available (was degraded)"}
{"type":"response","#timestamp":"2021-09-22T09:54:58+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":36,"contentLength":60},"message":"GET /kibana 404 36ms - 60.0B"}
{"type":"response","#timestamp":"2021-09-22T09:55:08+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":25,"contentLength":60},"message":"GET /kibana 404 25ms - 60.0B"}
{"type":"response","#timestamp":"2021-09-22T09:55:08+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":22,"contentLength":60},"message":"GET /kibana 404 22ms - 60.0B"}
{"type":"response","#timestamp":"2021-09-22T09:55:18+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":26,"contentLength":60},"message":"GET /kibana 404 26ms - 60.0B"}
{"type":"response","#timestamp":"2021-09-22T09:55:28+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":27,"contentLength":60},"message":"GET /kibana 404 27ms - 60.0B"}
{"type":"response","#timestamp":"2021-09-22T09:55:28+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":22,"contentLength":60},"message":"GET /kibana 404 22ms - 60.0B"}
{"type":"response","#timestamp":"2021-09-22T09:55:38+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":25,"contentLength":60},"message":"GET /kibana 404 25ms - 60.0B"}
Elasticsearch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          protocol: TCP
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 4Gi
        volumeMounts:
        - mountPath: /var/logs
          name: logs
      volumes:
      - name: logs
        persistentVolumeClaim:
          claimName: chi-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  type: NodePort
  selector:
    component: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
Ingress-resource.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chi-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /healthz
        pathType: Prefix
        backend:
          service:
            name: chi-svc
            port:
              number: 3000
      - path: /kibana
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601
      - path: /elasticsearch
        pathType: Prefix
        backend:
          service:
            name: elasticsearch
            port:
              number: 9200
I ended up sorting out the issue by using separate Ingresses and removing the NGINX rewrite-target annotation. I also went one step further and created a dedicated namespace for the logging infrastructure.
Namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
Persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: chi-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /data/logs/
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chi-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      type: local
Elastic-search.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: elasticsearch
      version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
        version: v1
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
        env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9200
        resources:
          limits:
            cpu: 500m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch
    version: v1
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch
  ports:
  - port: 9200
Fluentd.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      # Don't need this for Minikube
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          # <hostname>.<namespace>.svc.cluster.local
          value: "elasticsearch.kube-logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: 'disable'
        - name: FLUENT_LOGSTASH_FORMAT
          value: "true"
        # # X-Pack Authentication
        # # =====================
        # - name: FLUENT_ELASTICSEARCH_USER
        #   value: "elastic"
        # - name: FLUENT_ELASTICSEARCH_PASSWORD
        #   value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        # When actual pod logs are in /var/lib/docker/containers, the following lines should be used.
        - name: dockercontainerlogdirectory
          mountPath: /var/lib/docker/containers
          readOnly: true
        # When actual pod logs are in /var/log/pods, the following lines should be used.
        # - name: dockercontainerlogdirectory
        #   mountPath: /var/log/pods
        #   readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      # When actual pod logs are in /var/lib/docker/containers, the following lines should be used.
      - name: dockercontainerlogdirectory
        hostPath:
          path: /var/lib/docker/containers
      # When actual pod logs are in /var/log/pods, the following lines should be used.
      # - name: dockercontainerlogdirectory
      #   hostPath:
      #     path: /var/log/pods
Kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    k8s-app: kibana
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: kibana
      version: v1
  template:
    metadata:
      labels:
        k8s-app: kibana
        version: v1
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.15.0
        env:
        - name: SERVER_NAME
          value: kibana
        - name: SERVER_BASEPATH
          value: /kibana
        - name: SERVER_REWRITEBASEPATH
          value: "true"
        # - name: XPACK_SECURITY_ENABLED
        #   value: "true"
        readinessProbe:
          httpGet:
            path: /kibana/api/status
            port: 5601
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /kibana/api/status
            port: 5601
          initialDelaySeconds: 15
          periodSeconds: 20
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    k8s-app: kibana
    version: v1
spec:
  type: NodePort
  selector:
    k8s-app: kibana
  ports:
  - port: 5601
    targetPort: 5601
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: kube-logging
spec:
  rules:
  - host: logging.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601

Filebeat: How to export logs of specific pods

This is my filebeat config map.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: $${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    setup.ilm.enabled: false
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
This sends logs from every pod to AWS Elasticsearch. How can I restrict it to send logs only from specific pods, selected by name and/or by label?
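One possible approach, sketched here only as an illustration: since add_kubernetes_metadata enriches each event with fields such as kubernetes.pod.name and kubernetes.labels.*, you can drop everything that does not match the pods you care about. The label app: my-app below is a placeholder and would need to match your own pod labels:

processors:
  - add_cloud_metadata:
  - add_host_metadata:
  # Keep only events from pods labelled app: my-app (placeholder value);
  # all other events are dropped before they reach Elasticsearch.
  - drop_event:
      when:
        not:
          equals:
            kubernetes.labels.app: "my-app"

A similar condition on kubernetes.pod.name (for example with a regexp condition) would restrict by pod name instead.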

Validating Error on deployment in Kubernetes

I have tried to deploy the producer-service app with a MySQL database in the Kubernetes cluster. When I try to deploy the producer app, the following validation error is thrown:
error: error validating "producer-deployment.yml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
producer-deployment.yml
apiVerion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
  -nodePort: 30163
    port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: producer-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer-app
spec:
  selector:
    matchLabels:
      app: producer-app
  replicas: 3
  template:
    metadata:
      labels:
        app: producer-app
    spec:
      containers:
      - name: producer
        image: producer:1.0
        ports:
        - containerPort: 9090
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: host
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: name
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-user
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-user
              key: password
I have tried to find the error or typo within the config file but still couldn't. What is wrong with the producer-deployment.yml file?
Multiple issues:
It should be apiVersion: v1, not apiVerion: v1, in the Service.
The spec.ports section of the Service is malformed: nodePort, port, targetPort, and protocol belong to a single list item under ports.
Your Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
  - nodePort: 30163
    port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: producer-app
  type: NodePort
So your overall yaml should be:
apiVersion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
  - nodePort: 30163
    port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: producer-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer-app
spec:
  selector:
    matchLabels:
      app: producer-app
  replicas: 3
  template:
    metadata:
      labels:
        app: producer-app
    spec:
      containers:
      - name: producer
        image: producer:1.0
        ports:
        - containerPort: 9090
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: host
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: name
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-user
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-user
              key: password
Please change the first line in producer-deployment.yml; the letter "s" is missing.
From
apiVerion: v1
To
apiVersion: v1
There is a typo in the first line: "apiVerion" should be "apiVersion".
Your first error (there is more than one) points you to where you should start your investigation:
error validating data: apiVersion not set;
As you know, each object in Kubernetes has its own apiVersion.
Check Understanding Kubernetes Objects, especially the Required Fields part:
In the .yaml file for the Kubernetes object you want to create, you'll need to set values for the following fields:
apiVersion - Which version of the Kubernetes API you're using to create this object
kind - What kind of object you want to create
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object
The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.
You can check Latest 1.20 API here
These values are mandatory, and you won't be able to create an object without them. So please read the errors you receive more carefully next time.
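For reference, a minimal sketch of an object that sets all four required fields (the name and image are placeholders):

apiVersion: v1            # which version of the Kubernetes API is used
kind: Pod                 # what kind of object to create
metadata:
  name: example-pod       # data that uniquely identifies the object
spec:                     # the desired state of the object
  containers:
  - name: example
    image: nginx:1.21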

Kubernetes Traefik v2.3.0 - Web UI 404 Not Found after removing --api.insecure

I'm running Traefik v2.3.0 in an AKS (Azure Kubernetes Service) cluster and I'm currently trying to set up basic authentication for my Traefik UI.
The dashboard (Traefik UI) works fine without any authentication, but I get the "server not found" page when I try to access it with basic authentication.
Here is my configuration.
IngressRoute, Middleware for BasicAuth, Secret and Service:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`traefik-ui.domain.com`) && PathPrefix(`/`) || PathPrefix(`/dashboard`)
      services:
        - name: traefik-ui
          port: 80
      middlewares:
        - name: traefik-ui-auth
          namespace: ingress-basic
  tls:
    secretName: traefik-ui-cert
---
apiVersion: v1
kind: Secret
metadata:
  name: traefik-secret
  namespace: ingress-basic
data:
  users: |2
    dWlhZG06JGFwcjEkanJMZGtEb1okaS9BckJmZzFMVkNIMW80bGtKWFN6LwoK
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-ui-auth
  namespace: ingress-basic
spec:
  basicAuth:
    secret: traefik-secret
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: traefik-ingress-lb
  sessionAffinity: None
  type: ClusterIP
DaemonSet and Service:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: ingress-basic
spec:
  selector:
    matchLabels:
      app: traefik-ingress-lb
  template:
    metadata:
      labels:
        app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: size
                operator: In
                values:
                - small
      containers:
      - args:
        - --api.dashboard=true
        - --accesslog
        - --accesslog.fields.defaultmode=keep
        - --accesslog.fields.headers.defaultmode=keep
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --entrypoints.metrics.address=:8082
        - --providers.kubernetesIngress.ingressClass=traefik-cert-manager
        - --certificatesresolvers.default.acme.email=info@domain.com
        - --certificatesresolvers.default.acme.storage=acme.json
        - --certificatesresolvers.default.acme.tlschallenge
        - --providers.kubernetescrd
        - --ping=true
        - --pilot.token=xxxxxx-xxxx-xxxx-xxxxx-xxxxx-xx
        - --metrics.statsd=true
        - --metrics.statsd.address=localhost:8125
        - --metrics.statsd.addEntryPointsLabels=true
        - --metrics.statsd.addServicesLabels=true
        image: traefik:v2.3.0
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        - containerPort: 8080
          name: admin
          protocol: TCP
        - containerPort: 443
          name: websecure
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /acme/acme.json
          name: acme
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: traefik-ingress
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      tolerations:
      - effect: NoSchedule
        key: size
        operator: Equal
        value: small
      volumes:
      - hostPath:
          path: /srv/configs/acme.json
          type: ""
        name: acme
With this configuration:
kubectl exec -it -n ingress-basic traefik-ingress-2m88q -- curl http://localhost:8080/dashboard/
404 page not found
When I remove the Middleware and add "--api.insecure" to the DaemonSet config:
kubectl exec -it -n ingress-basic traefik-ingress-1hf4q -- curl http://localhost:8080/dashboard/
<!DOCTYPE html><html><head><title>Traefik</title><meta charset=utf-8><meta name=description content="Traefik UI"><meta name=format-detection content="telephone=no"><meta name=msapplication-tap-highlight content=no><meta name=viewport content="user-scalable=no,initial-scale=1,maximum-scale=1,minimum-scale=1,width=device-width"><link rel=icon type=image/png href=statics/app-logo-128x128.png><link rel=icon type=image/png sizes=16x16 href=statics/icons/favicon-16x16.png><link rel=icon[...]</body></html>
Please let me know what I am doing wrong here. Is there another way of doing it?
Regards,
Here's another take on the IngressRoute, adapted to your environment.
I think 99% of the issue is the actual route matching, especially since you say --api.insecure works.
Also, as a rule of thumb, enabling logging and the access log in the DaemonSet definition helps a lot:
- --log
- --log.level=DEBUG
- --accesslog
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik-ui.domain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: traefik-basic-auth
  tls:
    secretName: traefik-ui-cert

Spring Boot Cloud Kubernetes config not working for multiple pods

I am using the spring-cloud-starter-kubernetes-all dependency to read a ConfigMap from my Spring Boot microservices, and it is working fine.
After modifying the ConfigMap, I call the refresh endpoint:
minikube service list # to get the service URL
curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json"
It works as expected and the application loads the ConfigMap changes.
Issue
The above works fine if I have only one pod of my application, but when I use more than one pod, only one of them picks up the changes, not all.
In the example below, only one pod picks up the change:
[message-producer-5dc4b8b456-tbbjn message-producer] Say Hello to the World12431
[message-producer-5dc4b8b456-qzmgb message-producer] Say Hello to the World
Minikube deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: message-producer
  labels:
    app: message-producer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: message-producer
  template:
    metadata:
      labels:
        app: message-producer
    spec:
      containers:
        - name: message-producer
          image: sandeepbhardwaj/message-producer
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: message-producer
spec:
  selector:
    app: message-producer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: message-producer
data:
  application.yml: |-
    message: Say Hello to the World
bootstrap.yml
spring:
  cloud:
    kubernetes:
      config:
        enabled: true
        name: message-producer
        namespace: default
      reload:
        enabled: true
        mode: EVENT
        strategy: REFRESH
        period: 3000
configuration
@ConfigurationProperties(prefix = "")
@Configuration
@Getter
@Setter
public class MessageConfiguration {
    private String message = "Default message";
}
rbac
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: default # "namespace" can be omitted since ClusterRoles are not namespaced
  name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: service-reader
subjects:
- kind: User
  name: default # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
This is happening because when you hit curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json", Kubernetes sends that request to only one of the pods behind the service, via round-robin load balancing.
You should use the property source reload feature by setting spring.cloud.kubernetes.reload.enabled=true. This reloads the properties in every pod whenever the ConfigMap changes, so you don't need to call the refresh endpoint at all.
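If you do still want to use the refresh endpoint, one workaround (a sketch only; the pod names below are the ones from your log output and will differ on your cluster) is to bypass the Service's round-robin load balancing and call each pod directly, for example via kubectl port-forward:

# Forward a local port to the first pod and refresh it
kubectl port-forward pod/message-producer-5dc4b8b456-tbbjn 8080:8080 &
curl -X POST http://localhost:8080/actuator/refresh -H "Content-Type: application/json" -d '{}'

# Repeat for the second pod on a different local port
kubectl port-forward pod/message-producer-5dc4b8b456-qzmgb 8081:8080 &
curl -X POST http://localhost:8081/actuator/refresh -H "Content-Type: application/json" -d '{}'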
