Fluentd | how to drop logs of specific container [duplicate] - elasticsearch

This question already has an answer here:
How to exclude pattern in <match> for fluentd config?
(1 answer)
Closed 6 months ago.
We have Fluentd running on multiple K8s clusters, and with Fluentd we use Elasticsearch to store the logs from all remote K8s clusters.
There are a few applications for which we do not want Fluentd to push logs to Elasticsearch.
For example, there is a pod with a container named testing, or with the label mode: testing, and we want Fluentd to not process this container's logs but drop them instead.
Looking for suggestions on how we can achieve this.
Thanks

Here is an explanation of how to do that with Fluentd.
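For reference, a minimal sketch of that Fluentd approach, assuming the records have already been enriched by the kubernetes_metadata filter and are tagged kubernetes.** (both assumptions, adjust to your pipeline), is a grep filter that excludes the unwanted container before the Elasticsearch output:
<filter kubernetes.**>
  @type grep
  # drop records whose kubernetes metadata names the container "testing";
  # a similar <exclude> on $.kubernetes.labels.mode would target the mode: testing label
  <exclude>
    key $.kubernetes.container_name
    pattern /^testing$/
  </exclude>
</filter>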
But I would like to recommend another tool developed by the same team: Fluent Bit. It is very light (it needs <1MB of memory, whereas Fluentd needs about 40MB) and more suitable for K8s. You can install it as a DaemonSet (a pod on each node) alongside a Fluentd deployment (1-3 replicas): every Fluent Bit pod collects the logs and forwards them to a Fluentd instance, which aggregates them and sends them to ES. In this case, you can easily filter the records using pod annotations (more info):
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
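Note that the annotation only takes effect if the Fluent Bit kubernetes filter has exclusion enabled; a typical snippet (the kube.* match tag is the default from the official examples, adjust it to your setup) looks like this:
[FILTER]
    Name                kubernetes
    Match               kube.*
    K8S-Logging.Exclude On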

Related

How to use Grafana to create graphs with data from Spring Boot

I am new to Grafana and Spring Boot. I am trying to create a Spring Boot application and use the Grafana SimpleJSON Datasource plugin to get data from my Spring Boot APIs and create graphs. I'm following the instructions here https://grafana.com/grafana/plugins/grafana-simple-json-datasource/. Right now I am just hard-coding data into my Spring Boot app.
My question is - are there other, better plugins or approaches people would suggest I use? SimpleJSON seems to require a very specific format of JSON response, and I don't see many detailed docs online. Is there any way I can be more flexible with the JSON responses of my APIs and still set the parameters needed to plot graphs in Grafana?
Thank you.
You can use Micrometer with Spring Boot Actuator framework to expose metric data to a time series database such as Prometheus. Or you can simply write log files, collect them with Promtail and store them in Loki.
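As a rough sketch, the metrics side usually only needs the Actuator and the Micrometer Prometheus registry on the classpath (Gradle coordinates shown; use the matching Maven entries and versions for your build):
implementation 'org.springframework.boot:spring-boot-starter-actuator'
implementation 'io.micrometer:micrometer-registry-prometheus'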
At first this might seem like a lot of work to get these things running, but it might be worth it.
I found it surprisingly simple to get the whole monitoring stack running locally with docker-compose:
Add services grafana, prometheus, promtail and loki.
Configure each of them.
The docker-compose file might look like this:
version: "3"
services:
prometheus:
image: prom/prometheus:latest
command:
- --config.file=/etc/prometheus/prometheus.yml
volumes:
- ./config/prometheus/prometheus_local.yml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
loki:
depends_on:
- promtail
image: grafana/loki:latest
volumes:
- ./config/loki:/etc/loki
ports:
- "3100:3100"
command: -config.file=/etc/loki/loki-local-config.yaml
promtail:
image: grafana/promtail:latest
volumes:
- .log:/var/log
- ./config/promtail/promtail-docker-config.yaml:/etc/promtail/config.yml
command: -config.file=/etc/promtail/config.yml
grafana:
depends_on:
- prometheus
- loki
image: grafana/grafana:latest
volumes:
- ./config/grafana/grafana.ini:/etc/grafana/grafana.ini
- ./config/grafana/provisioning:/etc/grafana/provisioning
- ./config/grafana/dashboards:/etc/grafana/dashboards
ports:
- "3000:3000"
Sample config files for provisioning Grafana can be found in the Grafana git repository. Loki provides sample configurations for itself and Promtail. For Prometheus, see here.
The official documentation about installing Grafana with Loki can be found here. There is also documentation about the Prometheus configuration.
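A minimal sketch of the mounted prometheus_local.yml, assuming the Spring Boot app runs on the Docker host on port 8080 (job name and target are assumptions):
# prometheus_local.yml
scrape_configs:
  - job_name: spring-boot-app
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ['host.docker.internal:8080']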
Now you need to configure your application. Enable and expose the prometheus endpoint as described in the Spring Boot documentation. Configure a log file appender to write the logs to the log directory .log configured above.
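A minimal sketch of that application configuration (the log file name is an assumption; it just has to end up in the .log directory mounted into the Promtail container):
# application.yml
management:
  endpoints:
    web:
      exposure:
        include: prometheus   # exposes /actuator/prometheus
logging:
  file:
    name: .log/app.log        # picked up by Promtail via the .log:/var/log mount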
Your logs will now get collected by Promtail and sent to Loki. Metrics will get stored in Prometheus. You can use PromQL and LogQL to write Grafana queries and render the results in Grafana panels.
With this solution you can add tags to your data that can later be used by Grafana.
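For example, queries could look like the following (the metric name is Micrometer's default HTTP server metric; the job label depends on your Promtail configuration):
# PromQL: request rate per URI
sum by (uri) (rate(http_server_requests_seconds_count[5m]))
# LogQL: error lines from the collected log files
{job="varlogs"} |= "ERROR"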

Connecting spring boot to postgres statefulset in Kubernetes

I'm new to Kubernetes and I'm learning about StatefulSets. For stateful applications, where the identity of pods matters, we use StatefulSets instead of simple Deployments so each pod can have its own persistent volume. Writes need to be pointed to the master pod, while read operations can be pointed to the slaves. So pointing to the ClusterIP service attached to the StatefulSet won't guarantee the replication; instead, we need to use a headless service that points to the master.
My questions are the following :
How do I edit the application.properties in a Spring Boot project to use the slaves for read operations (normal ClusterIP service) and the master for write/read operations (headless service)?
In case that is unnecessary and the headless service does this work for us, how does it work exactly, since it points to the master?
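For context, a sketch of what the connection settings might look like in application.properties; the service names are assumptions, and since Spring Boot only auto-configures a single DataSource, the read-only one shown under hypothetical custom keys would still have to be wired up in code (e.g. with a routing DataSource):
# read/write datasource: master pod addressed through the headless service
spring.datasource.url=jdbc:postgresql://postgres-0.postgres-headless:5432/mydb
spring.datasource.username=app
spring.datasource.password=secret
# hypothetical keys for a read-only datasource behind the normal ClusterIP service
app.read-datasource.url=jdbc:postgresql://postgres-read:5432/mydb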

Elastic cloud on Kubernetes (ECK) using helm 3

Is there a way to run Elastic Cloud on Kubernetes (ECK) with Helm 3?
As far as I know, there is no Helm chart for the ECK operator; however, there is a Helm chart available for the Elasticsearch stack.
Elastic stack Helm chart: https://github.com/elastic/helm-charts/tree/master/elasticsearch
ECK is an operator: you can extend Kubernetes orchestration with plain YAML files, or you can create your own Helm chart around those resources if required.
ECK quick deploy: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html
ECK is officially supported on GKE; however, I think you are aware of the advantages and disadvantages of using ECK.
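For illustration, deploying a cluster through the operator is just a matter of applying a manifest along the lines of the quickstart (the version number here is only an example):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 1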
Yes, starting with ECK 1.3.0, there is an official Helm chart for deploying the operator.
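A sketch of installing it with Helm 3, using the chart and repository published by Elastic (the namespace is an assumption):
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace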

How to migrate volume of statefulset to a helm chart

I have a functioning ZooKeeper StatefulSet that I created manually.
I want to migrate to a Helm installation of ZooKeeper.
My Kubernetes cluster runs on AWS.
How can I migrate the volumes from one StatefulSet to the other?
There is a tool called "chartify" that can generate Helm charts from existing Kubernetes API objects. It works both as a Helm plugin and as a stand-alone tool.
Please note that there is already a Helm chart for ZooKeeper in the incubator repository.

Websockets + Spring boot + Kubernetes

I am creating a Facebook multiplayer game and am currently evaluating my tech stack.
My game would need to use WebSockets and I would like to use Spring Boot. Now, I cannot find information on whether a WebSocket server will work nicely in Kubernetes. For example, if I deploy 5 instances of the server in Kubernetes pods, will load balancing/forwarding work correctly for WebSockets between game clients loaded in browsers and servers in Kubernetes, and is there any additional work needed to enable it? Each pod/server would be stateless, and the current game info for each player will be stored in/read from Redis or some other in-memory DB.
If this does not work, how can I work around it and still use Kubernetes? Maybe add one instance of RabbitMQ to the stack just for the WebSockets?
An adequate way to handle this would be to use "sticky sessions". This is where the user is pinned down to a specific pod based on the setting of a cookie.
Here is an example of configuring the Ingress resource object to use sticky sessions:
#
# https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
#
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
Now, with that being said, the proper way to handle this would be to use a message broker or a WebSocket implementation that supports clustering, such as socketcluster (https://socketcluster.io).
