Configure derived fields in Grafana for a Loki data source via Helm chart

I've deployed Grafana, Loki, and Promtail in my local Kubernetes cluster via the Grafana Helm charts.
To improve the log output, I want to configure derived fields for the Loki data source in Grafana, but I would like to provide this configuration automatically via my Helm deployment.
I've found no such properties in the grafana, loki, or promtail charts.
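For what it's worth, one way to approach this (a sketch, not confirmed for your chart versions): the Grafana chart accepts a datasources value that it writes into Grafana's data source provisioning directory, and the provisioning format supports derivedFields under jsonData. The Loki URL, the regex, and the trace link below are assumptions; adjust them to your services.

```yaml
# values.yaml override for the Grafana chart (sketch; URLs and regex are assumptions)
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        url: http://loki:3100
        jsonData:
          derivedFields:
            # Pull a trace ID out of each log line and turn it into a link.
            - name: TraceID
              matcherRegex: "traceID=(\\w+)"
              # "$$" escapes the literal "$" in Grafana provisioning files.
              url: "http://localhost:16686/trace/$${__value.raw}"
```

Then `helm upgrade` the Grafana release with this values file, and the data source, including the derived fields, is provisioned automatically.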

Related

How to monitor Apache Kafka metrics?

How can I build a microservice to monitor Kafka metrics?
I don't want to use the Confluent Control Center or any other tool.
Before building anything like a microservice, I would explore the Kafka exporter for Prometheus to expose Kafka metrics in Prometheus format. You could then use the Prometheus server to scrape these metrics and Grafana for dashboards/visualisations. There are other tools you could use for scraping instead of Prometheus/Grafana, e.g. Elastic Metricbeat (which I mention because you've tagged the question with 'elasticsearch'), but the Prometheus/Grafana combination is quite easy to get up and running. There are also out-of-the-box Grafana dashboards that you can install without having to set this up manually, e.g. this one.
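To make that concrete, here is a minimal, hedged sketch of a Prometheus scrape job for the exporter; the target assumes a Service named kafka-exporter (a placeholder) and the exporter's default port 9308:

```yaml
# prometheus.yml fragment (sketch; the Service name is hypothetical)
scrape_configs:
  - job_name: kafka-exporter
    static_configs:
      # danielqsj/kafka_exporter listens on :9308 by default
      - targets: ["kafka-exporter:9308"]
```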

How to connect to Kafka installed using the Confluent Helm chart

I have a Kubernetes cluster hosted on Azure. I installed Kafka using the Helm chart below: https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka. This chart successfully deployed the ZooKeeper pods, broker pods, etc. Now I want to write a Golang-based application that connects to any of the Kafka brokers installed on my cluster, creates a new producer, and publishes messages. Any help would be highly appreciated.
You can use the following string in bootstrap.servers to communicate with the brokers: <helm-release-name>-cp-kafka-headless.<namespace>:9092, or the bootstrap service created as part of the Confluent Helm chart, <helm-release-name>-cp-kafka. When you hit this service, it will randomly go to any of the brokers the first time and fetch all the metadata information, which is synced through ZooKeeper.
Subsequent requests are made to individual brokers based on the information found in that metadata.
You would deploy your Golang code in a container in Kubernetes, then set bootstrap.servers to the Kafka Deployment's Service name, ideally via an environment variable, as sketched below.
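A minimal sketch of that last step, with hypothetical image, release, and namespace names:

```yaml
# Deployment fragment (sketch): inject bootstrap.servers via an env var
# instead of hard-coding it in the Go binary.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-producer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-producer
  template:
    metadata:
      labels:
        app: kafka-producer
    spec:
      containers:
        - name: producer
          image: my-registry/kafka-producer:latest  # hypothetical image
          env:
            - name: BOOTSTRAP_SERVERS
              # <helm-release-name>-cp-kafka-headless.<namespace>:9092,
              # with illustrative values filled in
              value: "my-release-cp-kafka-headless.default:9092"
```

The Go code then reads the value with os.Getenv("BOOTSTRAP_SERVERS") and passes it to its Kafka client configuration.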

How do I pull Elasticsearch metrics into Prometheus using the elasticsearch_exporter

I have installed Prometheus into a Kubernetes cluster using the stable Helm chart. We run Elasticsearch, and I want to scrape metrics from it and then create alerts based on events.
I have installed the Elasticsearch exporter via Helm, but nowhere can I find how to then import these metrics into Prometheus.
There must be some config I am missing, such as creating a scraping job or something. Any help is much appreciated.
I connected to the Elasticsearch exporter and can see it pulling metrics.
If you're using an Elasticsearch exporter, it should come with some documentation. There is more than one solution out there and you didn't specify which one you're using. In my opinion it would be best for you to start from a tutorial like this one, which explains the whole process step by step. As you can read there:
Metrics collection in Prometheus follows the pull model. That means Prometheus is responsible for getting metrics from the services it monitors. This process is known as scraping. The Prometheus server scrapes the defined service endpoints, collects the metrics, and stores them in its local database.
which means you need to configure Prometheus to scrape the metrics exposed by the Elasticsearch exporter you chose.
The official Prometheus documentation will also be a great source of knowledge and a good starting point.
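For example, a minimal static config might look like this sketch (assuming the exporter is exposed through a Service named elasticsearch-exporter, a placeholder; 9114 is the exporter's default port):

```yaml
# prometheus.yml fragment (sketch; the Service name is hypothetical)
scrape_configs:
  - job_name: elasticsearch
    static_configs:
      - targets: ["elasticsearch-exporter:9114"]
```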
EDIT:
If you run your Elasticsearch instance in a Kubernetes cluster, you should use the service discovery mechanism rather than static configs. You can find more on <kubernetes_sd_config> here.
There are five different types of Kubernetes service discovery you can use with Prometheus: node, endpoints, service, pod, and ingress. The one you most probably need in your case is endpoints. Prometheus uses the Kubernetes API to discover targets. Below are some examples, followed by a minimal sketch:
https://blog.sebastian-daschner.com/entries/prometheus-kubernetes-discovery
https://raw.githubusercontent.com/prometheus/prometheus/master/documentation/examples/prometheus-kubernetes.yml
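A minimal endpoints-role job, in the annotation-based style used by the stable Prometheus chart's default config (sketch; only endpoints of Services annotated prometheus.io/scrape: "true" are kept):

```yaml
# prometheus.yml fragment (sketch)
scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only targets whose Service carries prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

With a job like that in place, annotating the exporter's Service is enough for Prometheus to pick it up.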

Metricbeat Jolokia jmx mapping

I am currently experimenting with Metricbeat and Jolokia on ELK 7.2.0 using docker-compose, and I was able to get JMX metrics to display in Kibana.
My issue is that I have to configure, per JMX metric, a mapping in the jmx.mappings section of the Metricbeat configuration YAML for every metric I would like to send to ELK.
Is it possible to pass some sort of wildcard configuration so that Metricbeat simply pulls all the JMX metrics and sends them to ELK?
Thank you kindly,
Luis Oscar Trigueiros
I think you'll have to wait for https://github.com/elastic/beats/issues/8168, which will hopefully make some progress soon.
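Until that lands, every attribute has to be mapped explicitly. For reference, a minimal jolokia module config looks like this (host, namespace, and field names are illustrative):

```yaml
# metricbeat.yml modules fragment (sketch)
- module: jolokia
  metricsets: ["jmx"]
  period: 10s
  hosts: ["localhost:8778"]   # the Jolokia agent's default port
  namespace: "jvm"
  jmx.mappings:
    # Each MBean attribute must be listed individually; no wildcards yet.
    - mbean: "java.lang:type=Memory"
      attributes:
        - attr: HeapMemoryUsage
          field: memory.heap_usage
    - mbean: "java.lang:type=Threading"
      attributes:
        - attr: ThreadCount
          field: threads.count
```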

How to migrate the volumes of a StatefulSet to a Helm chart

I have a functioning ZooKeeper StatefulSet that I created manually.
I want to migrate to a Helm installation of ZooKeeper.
My Kubernetes cluster runs on AWS.
How do I migrate the volumes from one StatefulSet to the other?
There is a tool called "chartify" that can generate Helm charts from existing Kubernetes API objects. It works both as a Helm plugin and as a stand-alone tool.
Please note that there is already a Helm chart for ZooKeeper in the incubator repository.
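If you'd rather migrate the data by hand, one approach (a sketch only, assuming EBS-backed PVs; every name below is hypothetical) is to retain the old PVs and pre-create PVCs that match the chart's volumeClaimTemplates, so the new StatefulSet binds to the existing volumes:

```yaml
# 1. Before deleting the old StatefulSet/PVCs, retain each PV:
#    kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# 2. After deleting the old PVC, clear the PV's spec.claimRef so it becomes
#    Available again, then pre-create a PVC with the name the chart's
#    volumeClaimTemplate will expect (check it with `helm template`):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-zookeeper-0     # hypothetical: <template>-<release>-<ordinal>
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2          # must match the retained PV
  volumeName: pvc-0123abcd       # the retained PV's name
  resources:
    requests:
      storage: 10Gi
```

Installing the chart afterwards should then reuse the pre-bound claims instead of provisioning fresh volumes.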
