I am trying to deploy Filebeat in one cluster and send the logs to another cluster, which has Elasticsearch and Kibana installed in it.
This is my YAML file:
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: es-beats
  namespace: elastic
spec:
  type: filebeat
  version: 7.12.1
  elasticsearchRef:
    name: elastic
  config:
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
      - output.elasticsearch:
          # Array of hosts to connect to.
          hosts: ["https://<my-other-cluster-ip>:9200"]
          # Protocol - either `http` (default) or `https`.
          protocol: "https"
          username: "elastic"
          password: "mypass"
      - setup.kibana:
          host: "https://<my-other-cluster-ip>:9200"
          username: "elastic"
          password: "mypass"
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
          - name: filebeat
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
Beats is successfully installed, and when I apply this file everything seems fine, but the pods don't get deployed: when I get the pods, the status says CrashLoopBackOff.
I deployed the Beat using the ECK Kubernetes operator and want to get the logs into the other cluster. I read the documentation, but I am confused about where to enter the Elasticsearch host, the Kibana host, the username and password, and what the output should be.
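For reference, a sketch of where these settings would normally live in the Beat spec (a sketch only: it assumes the remote cluster is not managed by the same ECK operator, so elasticsearchRef is removed and the output is configured by hand; the hosts, the Kibana host and port, and the credentials are placeholders):
spec:
  type: filebeat
  version: 7.12.1
  config:
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    # output.elasticsearch and setup.kibana are top-level config keys,
    # not items of the filebeat.inputs list
    output.elasticsearch:
      hosts: ["https://<my-other-cluster-ip>:9200"]
      username: "elastic"
      password: "mypass"
    setup.kibana:
      host: "https://<my-other-cluster-kibana-host>:5601"   # Kibana endpoint, not the Elasticsearch port
      username: "elastic"
      password: "mypass"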
I am using helmfile to deploy multiple sub-charts with the helmfile sync -e env command.
I have ConfigMaps and Secrets which I need to load based on the environment.
Is there a way to load a ConfigMap and Secrets based on the environment in the helmfile?
I tried the following.
In dev-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend
  namespace: {{ .Values.namespace }}
data:
  NODE_ENV: {{ .Values.env }}
In helmfile.yaml:
environments:
  envDefaults: &envDefaults
    values:
      - ./values/{{ .Environment.Name }}.yaml
      - kubeContext: ''
        namespace: '{{ .Environment.Name }}'
  dev:
    <<: *envDefaults
    secrets:
      - ./config/secrets/dev.yaml
      - ./config/configmap/dev.yaml
Is there a way to import ConfigMap and Secret (not encrypted) YAML dynamically based on the environment in helmfile?
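One common pattern, shown here only as a sketch (the chart path ./charts/backend and the file layout are assumptions), is to keep the ConfigMap and Secret as templates inside a local chart and feed them per-environment values files keyed off the environment name:
environments:
  dev:
    values:
      - ./values/dev.yaml
  prod:
    values:
      - ./values/prod.yaml
releases:
  - name: backend
    chart: ./charts/backend          # local chart whose templates/ renders the ConfigMap and Secret
    values:
      # picked per environment when running `helmfile sync -e dev` / `-e prod`
      - ./config/configmap/{{ .Environment.Name }}.yaml
      - ./config/secrets/{{ .Environment.Name }}.yaml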
I have configured an Elastic ECK Beat with autodiscover enabled for all pod logs, but I also need to pick up logs from a specific file inside one pod's container, at the path /var/log/traefik/access.log. I've tried with a module and a log config, but nothing works.
The access.log file exists in the pod and contains data.
The Filebeat index does not show any data from this log.file.path.
Here is the Beat yaml:
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: elastic
spec:
  type: filebeat
  version: 8.3.1
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints:
              enabled: true
              default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
            templates:
              - condition.contains:
                  kubernetes.pod.name: traefik
                config:
                  - module: traefik
                    access:
                      enabled: true
                      var.paths: [ "/var/log/traefik/*access.log*" ]
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
              # If using Red Hat OpenShift uncomment this:
              #privileged: true
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
              - name: varlog
                mountPath: /var/log
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
          - name: varlog
            hostPath:
              path: /var/log
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  namespace: elastic
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elastic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elastic
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
Here is the module being loaded, from the Filebeat logs:
...
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.337Z","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":291},"message":"Attempting to connect to Elasticsearch version 8.3.1","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.352Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.353Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"fa247382-c065-40ca-974e-4b69f14c3134","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.355Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.355Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"6883d753-f149-4a68-9499-fe039e0de899","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.437Z","log.origin":{"file.name":"input/input.go","file.line":134},"message":"input ticker stopped","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-08-18T19:58:55.439Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/containers/*9a1680222e867802388f649f0a296e076193242962b28eb7e0e575bf68826d85.log]","service.name":"filebeat","input_id":"3c1fffae-0213-4889-b0e7-5dda489eeb51","ecs.version":"1.6.0"}
...
Docker logging is based on the stdout/stderr output of a container. If you only write to a log file inside a container, it will never be picked up by Docker logging and therefore can also not be processed by your Filebeat setup.
Instead, ensure that all logs generated by your containers are sent to stdout. In your example, that would mean starting the Traefik pod with --accesslogsfile=/dev/stdout so the access logs also go to stdout instead of the log file.
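For illustration, a sketch of that change on the Traefik container (the exact flag depends on the Traefik version: --accesslog=true is the v2+ form and writes the access log to stdout by default, while v1 used a file flag as mentioned above; the image tag is illustrative):
containers:
  - name: traefik
    image: traefik:v2.8     # illustrative tag
    args:
      - --accesslog=true    # access log goes to stdout, where the existing container input picks it up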
I have an app built with Laravel, and I want to make one of the env variables accessible from the frontend.
As the official Laravel documentation indicates: https://laravel.com/docs/master/mix#environment-variables
If we prefix specific env variables with the MIX_ prefix, they become available and accessible from JavaScript. Locally this works perfectly.
Now, the thing is that I want to set up the env variables via a Kubernetes ConfigMap when deploying to staging and production.
Here is my config-map.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "common.fullname" . }}-app-config
  labels:
{{- include "common.labels" . | indent 4 }}
data:
  .env: |+
    EVENT_LOGGER_API={{ .Values.app.eventLoggerApi }}
    MIX_EVENT_LOGGER_API="${EVENT_LOGGER_API}"
Now, my deployment.yaml file:
volumes:
  - name: app-config
    configMap:
      name: {{ template "common.fullname" . }}-app-config
volumeMounts:
  - name: app-config
    mountPath: /var/www/html/.env
    subPath: .env
These env variables are visible in the Laravel backend, but they cannot be accessed via JavaScript the way they can when running locally:
process.env.MIX_EVENT_LOGGER_API
Has anyone had experience with setting these env variables via a Kubernetes ConfigMap and making them accessible from JavaScript?
If you add variables after building your frontend, those variables will not be available.
You need to include those variables in the job that does the build/push.
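A rough sketch of that idea, assuming a GitLab-CI-style pipeline and an npm-based Mix build (the job name, $IMAGE, and the variable wiring are illustrative, not taken from your setup):
build-frontend:
  stage: build
  script:
    # Laravel Mix bakes MIX_* variables into the compiled assets, so they must
    # be present here, at build time, not only in the running pod.
    - export MIX_EVENT_LOGGER_API="$EVENT_LOGGER_API"
    - npm ci
    - npm run production
    - docker build -t "$IMAGE:$CI_COMMIT_SHA" .
    - docker push "$IMAGE:$CI_COMMIT_SHA"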
I have an image which contains data inside /usr/data/webroot. This data should be moved on container init to /var/www/html.
I stumbled upon initContainers. As I understand it, they can be used to execute tasks on container initialization.
But I don't know if the task is executed after the amo-magento pods are created, or if the init task runs first and the magento pods are created after that.
I suppose that the container with the magento image is not available when the initContainers task runs, so there is no content available to move to the new directory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amo-magento
  labels:
    app: amo-magento
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amo-magento
  template:
    metadata:
      labels:
        app: amo-magento
        tier: frontend
    spec:
      initContainers:
        - name: setup-magento
          image: busybox:1.28
          command: ["sh", "-c", "mv -r /magento/* /www"]
          volumeMounts:
            - mountPath: /www
              name: pvc-www
            - mountPath: /magento
              name: magento-src
      containers:
        - name: amo-magento
          image: amo-magento:0.7 # add google gcr.io path after upload
          imagePullPolicy: Never
          volumeMounts:
            - name: install-sh
              mountPath: /tmp/install.sh
              subPath: install.sh
            - name: mage-autoinstall
              mountPath: /tmp/mage-autoinstall.sh
              subPath: mage-autoinstall.sh
            - name: pvc-www
              mountPath: /var/www/html
            - name: magento-src
              mountPath: /usr/data/webroot
            # maybe as secret - can be used as configMap because it has not to be writable
            - name: auth-json
              mountPath: /var/www/html/auth.json
              subPath: auth.json
            - name: php-ini-prod
              mountPath: /usr/local/etc/php/php.ini
              subPath: php.ini
            # - name: php-memory-limit
            #   mountPath: /usr/local/etc/php/conf.d/memory-limit.ini
            #   subPath: memory-limit.ini
      volumes:
        - name: magento-src
          emptyDir: {}
        - name: pvc-www
          persistentVolumeClaim:
            claimName: magento2-volumeclaim
        - name: install-sh
          configMap:
            name: install-sh
        # kubectl create configmap mage-autoinstall --from-file=build/docker/mage-autoinstall.sh
        - name: mage-autoinstall
          configMap:
            name: mage-autoinstall
        - name: auth-json
          configMap:
            name: auth-json
        - name: php-ini-prod
          configMap:
            name: php-ini-prod
        # - name: php-memory-limit
        #   configMap:
        #     name: php-memory-limit
But I don't know if the task is executed after the amo-magento pods are created, or if the init task runs first and the magento pods are created after that.
For sure the latter; that's why you are able to specify an entirely different image: for your initContainers: task. They are related to one another only in that they run on the same Node and, as you have seen, share volumes. Well, I said "for sure", but you have a slight misnomer: "after that the magento pods are created" is not quite right, because the Pod is the collection of every colocated container, both the initContainers: and the containers: containers.
If I understand your question, the fix to your Deployment is just to update the image: in your initContainers: to be the one which contains the magic /usr/data/webroot, and then update your shell command to reference the correct path inside that image:
initContainers:
  - name: setup-magento
    image: your-magic-image:its-magic-tag
    command: ["sh", "-c", "mv /usr/data/webroot/* /www"]
    volumeMounts:
      - mountPath: /www
        name: pvc-www
    # but **removing** the reference to the emptyDir volume
Then, by the time containers[0] starts up, the PVC will contain the data you expect.
That said, I am actually pretty sure you want to remove the PVC from this story, since by definition it is persistent across Pod restarts and thus will only accumulate files over time (your sh command does not currently clean up /www before moving files there). If you replaced all those PVC references with emptyDir: {} references, then those directories would always be "fresh" and would always contain just the content from the tagged image declared in your initContainers:
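For illustration, a minimal sketch of that emptyDir variant (the image name and paths are taken from the question; treat it as a starting point rather than a complete manifest):
spec:
  initContainers:
    - name: setup-magento
      image: amo-magento:0.7                 # the image that actually contains /usr/data/webroot
      command: ["sh", "-c", "cp -a /usr/data/webroot/. /www/"]
      volumeMounts:
        - mountPath: /www
          name: www
  containers:
    - name: amo-magento
      image: amo-magento:0.7
      volumeMounts:
        - name: www
          mountPath: /var/www/html
  volumes:
    - name: www
      emptyDir: {}                           # recreated empty on every Pod start, so nothing accumulates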