I am trying to deploy Filebeat in one cluster and send the logs to another cluster that has Elasticsearch and Kibana installed in it.
This is my YAML file:
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: es-beats
  namespace: elastic
spec:
  type: filebeat
  version: 7.12.1
  elasticsearchRef:
    name: elastic
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["https://<my-other-cluster-ip>:9200"]
      # Protocol - either `http` (default) or `https`.
      protocol: "https"
      username: "elastic"
      password: "mypass"
    setup.kibana:
      host: "https://<my-other-cluster-ip>:9200"
      username: "elastic"
      password: "mypass"
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
Beats installs successfully, and when I apply this file everything seems fine, but the pods never come up: kubectl get pods shows the status CrashLoopBackOff.
I deployed Beats using the Kubernetes operator and want to ship the logs to the other cluster. I read the documentation, but I am confused about where to enter the Elasticsearch host, the Kibana host, the username and password, and what the output section should look like.
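For comparison, here is a minimal sketch of how the output is typically wired when shipping to a remote cluster (hosts and credentials are placeholders; note that in ECK `elasticsearchRef` and a manually configured `output.elasticsearch` are mutually exclusive, so `elasticsearchRef` is dropped here, and that Kibana normally listens on 5601, not 9200):

```yaml
spec:
  type: filebeat
  version: 7.12.1
  # No elasticsearchRef: the target cluster is not managed by this operator,
  # so the output is configured manually instead.
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
    # output.elasticsearch is a top-level config key, not a filebeat input
    output.elasticsearch:
      hosts: ["https://<my-other-cluster-ip>:9200"]
      username: "elastic"
      password: "mypass"
      ssl.verification_mode: "none"  # assumption: self-signed certs; tighten for production
    setup.kibana:
      host: "https://<my-other-cluster-ip>:5601"  # Kibana port, not the Elasticsearch port
```

With this layout the operator no longer injects its own output settings, which may rule out one common cause of the CrashLoopBackOff.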
I am using helmfile to deploy multiple sub-charts with the helmfile sync -e env command.
I have ConfigMaps and Secrets that need to be loaded based on the environment.
Is there a way to load ConfigMaps and Secrets per environment in helmfile?
I tried to add the following in
dev-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend
  namespace: {{ .Values.namespace }}
data:
  NODE_ENV: {{ .Values.env }}
In helmfile.yaml:
environments:
  envDefaults: &envDefaults
    values:
    - ./values/{{ .Environment.Name }}.yaml
    - kubeContext: ''
      namespace: '{{ .Environment.Name }}'
  dev:
    <<: *envDefaults
    secrets:
    - ./config/secrets/dev.yaml
    - ./config/configmap/dev.yaml
Is there a way to import ConfigMap and Secret (not encrypted) YAML files dynamically based on the environment in helmfile?
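One pattern that may work (a sketch; the file paths are assumptions): helmfile's `secrets:` section is intended for SOPS-encrypted files, so unencrypted YAML usually belongs in the environment's `values:` list, where the `{{ .Environment.Name }}` template keeps the path dynamic:

```yaml
environments:
  envDefaults: &envDefaults
    values:
    - ./values/{{ .Environment.Name }}.yaml
    # plain (unencrypted) YAML is merged into the environment values:
    - ./config/configmap/{{ .Environment.Name }}.yaml
  dev:
    <<: *envDefaults
    secrets:
    # `secrets:` is for SOPS-encrypted files only:
    - ./config/secrets/{{ .Environment.Name }}.yaml
```

The merged values can then be rendered into your ConfigMap template exactly as in dev-configmap.yaml above.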
I have configured an Elastic ECK Beat with autodiscover enabled for all pod logs, but I also need to pick up logs from a specific file inside a container, at the path /var/log/traefik/access.log. I've tried with a module config and a log config, but nothing works.
The access.log file exists in the pods and contains data.
The filebeat index does not show any data for that log.file.path.
Here is the Beat yaml:
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: elastic
spec:
  type: filebeat
  version: 8.3.1
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
          templates:
          - condition.contains:
              kubernetes.pod.name: traefik
            config:
            - module: traefik
              access:
                enabled: true
                var.paths: [ "/var/log/traefik/*access.log*" ]
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          - name: varlog
            mountPath: /var/log
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  namespace: elastic
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elastic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
  namespace: elastic
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elastic
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
Here is the module loaded, from the Filebeat logs:
...
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.337Z","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":291},"message":"Attempting to connect to Elasticsearch version 8.3.1","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.352Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.353Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"fa247382-c065-40ca-974e-4b69f14c3134","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.355Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.355Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"6883d753-f149-4a68-9499-fe039e0de899","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.437Z","log.origin":{"file.name":"input/input.go","file.line":134},"message":"input ticker stopped","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.439Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/containers/*9a1680222e867802388f649f0a296e076193242962b28eb7e0e575bf68826d85.log]","service.name":"filebeat","input_id":"3c1fffae-0213-4889-b0e7-5dda489eeb51","ecs.version":"1.6.0"}
...
Docker logging is based on the stdout/stderr output of a container. If you only write to a log file inside a container, it will never be picked up by Docker logging and therefore cannot be processed by your Filebeat setup.
Instead, ensure that all logs generated by your containers are sent to stdout. In your example, that means starting the Traefik pod with --accesslogsfile=/dev/stdout so the access logs also go to stdout instead of the log file.
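A sketch of what that might look like in the Traefik container spec (the image tag is an assumption, and the flag below is the Traefik v2 syntax; verify against your Traefik version):

```yaml
containers:
- name: traefik
  image: traefik:v2.8                  # assumed version
  args:
  - --accesslog=true
  - --accesslog.filepath=/dev/stdout   # v2 syntax; an empty filepath also defaults to stdout
```

Once the access log is on stdout, it lands in /var/log/containers/ on the node and the existing autodiscover default_config picks it up, so the extra traefik module template becomes unnecessary.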
I can't use the preview feature to add a different service account, but I do have the key file (.json). I've uploaded this key to Secret Manager and intend to pull it in during the build.
Is what I have done correct?
steps:
- id: 'Setup Credentials'
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: '/bin/bash'
  secretEnv: ['SERVICE_ACCOUNT']
  args:
  - '-c'
  - |
    echo "$$SERVICE_ACCOUNT" >> /credentials/service_account.json
  volumes:
  - name: 'credentials'
    path: /credentials
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/service-account-key/versions/latest
    env: 'SERVICE_ACCOUNT'
Then, in a step that needs to use it, I am overwriting GOOGLE_APPLICATION_CREDENTIALS:
- id: 'Do stuff as other service account'
  name: 'hashicorp/terraform'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    GOOGLE_APPLICATION_CREDENTIALS=/credentials/service_account.json
    # do things here
    # terraform plan
  volumes:
  - name: 'credentials'
    path: /credentials
Ideally we would use the Cloud Build service account itself, but it already has too much going on with the other one.
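One detail worth checking in the Terraform step above: the variable is assigned but not exported, so child processes such as terraform would not inherit it. A sketch of the exported form (same assumed paths):

```yaml
- id: 'Do stuff as other service account'
  name: 'hashicorp/terraform'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    # `export` makes the variable visible to child processes like terraform
    export GOOGLE_APPLICATION_CREDENTIALS=/credentials/service_account.json
    terraform plan
  volumes:
  - name: 'credentials'
    path: /credentials
```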
I have a requirement to run a shell script from a worker pod. For that I created a ConfigMap and mounted it as a volume. When I applied the configuration, I saw that a directory with the name of the shell script was created instead. Can you help me understand this behavior?
volumes:
- name: rhfeed
  configMap:
    name: rhfeed
    defaultMode: 0777

volumeMounts:
- name: rhfeed
  mountPath: /local_path/rh-feed/rh-generator.sh
  subPath: rh-generator.sh
  readOnly: true

drwxrwsrwx 2 root 1000 4096 Jun 22 06:55 rh-generator.sh
The rh-generator.sh folder was created because you used subPath.
subPath creates a separate folder.
I have just performed a test with a ConfigMap serving some static files for an nginx pod, and my file was mounted correctly, without overriding or deleting the content of the directory.
Here's the configMap yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-files
data:
  index.html: |
    <html><head><title>File Mounted from ConfigMap</title></head>
    <body><h1>Welcome</h1>
    <p>Hi, This confirm that your yaml with subPath is working fine</p>
    </body></html>
And here is the pod yaml file:
➜ ~ cat pod-static-files.yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
spec:
  volumes:
  - name: static-files
    configMap:
      name: static-files
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: static-files
      mountPath: /usr/share/nginx/index.html
      subPath: index.html
As you can see below, I mounted the file into the directory without any issues:
➜ ~ keti website bash
root@website:~# ls /usr/share/nginx/
html  index.html
And here is the content of the /usr/share/nginx/index.html file:
➜ root@website:~# cat /usr/share/nginx/index.html
<html><head><title>File Mounted from ConfigMap</title></head>
<body><h1>Welcome</h1>
<p>Hi, This confirm that your yaml with subPath is working fine</p>
</body></html>