How to add kubernetes labels to the log data using promtail? - grafana-loki

We can't seem to find a way in Promtail to add Kubernetes labels to the log data itself; it only seems to allow copying that data into labels, not into the log line. Is this possible to do?

I did not find a 'correct' way, but there is a workaround:
relabel_configs:
  - replacement: your_label_val
    source_labels:
      - __path__
    target_label: your_label
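If the goal is to embed label values into the log line itself, Promtail's `pack` pipeline stage may also be worth a look. A minimal sketch, assuming Loki/Promtail 2.0 or later; `namespace` and `pod` here are assumed to be labels already attached by the Kubernetes service discovery:

```yaml
pipeline_stages:
  # Embeds the listed labels into the log entry as a JSON object,
  # with the original line preserved under an "_entry" key.
  - pack:
      labels:
        - namespace
        - pod
```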

Related

PromQL change label value (not name) into a specific metric in all TSDB

I would like to change the value (not the label name) of the instance label in the Prometheus DB, using PromQL, for the metric rundeck_system_stats_uptime_since.
I managed to do this before ingestion using this:
- source_labels: [__meta_kubernetes_pod_container_name, __meta_kubernetes_pod_container_port_number]
  action: replace
  separator: ":"
  target_label: instance
So I'm covered for future metrics, but I would like to do this for the existing values of the instance label.
Expected result:
rundeck_system_stats_uptime_since{app="rdk-exporter", instance="rdk-exporter:9620", [...]}
Since it's a container in k8s, I'm not interested in the IP of that container/host/node etc. because it's always changing; I'm only interested in the metrics.
Thank you
You can use label_replace, which is available both in standard PromQL and in VictoriaMetrics' MetricsQL.
You can check how it works in the VictoriaMetrics playground.
So, for example, I have three metrics:
process_cpu_seconds_total{cluster_num="1", cluster_retention="1m", instance="play-1m-1-vmagent.us-east1-b.c.victoriametrics-test.internal:8429", job="vmagent"}
process_cpu_seconds_total{cluster_num="1", cluster_retention="accounting", instance="play-accounting-1-vmagent.us-east1-b.c.victoriametrics-test.internal:8429", job="vmagent"}
process_cpu_seconds_total{cluster_num="1", cluster_retention="admin", instance="play-admin-1-vmagent.us-east1-b.c.victoriametrics-test.internal:8429", job="vmagent"}
If I use this query:
label_replace(process_cpu_seconds_total{instance=~".*:8429"}, "instance", "some:$2", "instance", "(.*):(.*)")
I will get the following response:
process_cpu_seconds_total{cluster_num="1", cluster_retention="1m", instance="some:8429", job="vmagent"}
process_cpu_seconds_total{cluster_num="1", cluster_retention="accounting", instance="some:8429", job="vmagent"}
process_cpu_seconds_total{cluster_num="1", cluster_retention="admin", instance="some:8429", job="vmagent"}
where instance now has the same host value for every series.
Hope this helps.

Index template not used/processed in ELK-Stack

I'm having some trouble with my ELK stack. I have set up an ELK stack with Winlogbeat (for reading EVTX files), Elasticsearch and Kibana. In Winlogbeat it is possible to define a template or to use the common ECS template. The fields.yml file is already in place. When I process my data, everything goes well, and the data appears in Elasticsearch/Kibana. The problem is that typical fields such as winlog.record_id or winlog.channel show up in Kibana as type text, not as keyword as defined.
I tried many different combinations for processing the data in Winlogbeat, but I'm not sure if Winlogbeat is really the problem.
My current Winlogbeat configuration looks like this:
winlogbeat.event_logs:
  - name: 'c:\path\to\evtx\file.evtx'

setup.template.enabled: true
setup.template.fields: "c:/path/to/fields.yml"
setup.template:
  name: "test_"
  pattern: "test_*"
  overwrite: true
  settings:
    index.number_of_shards: 1

setup.ilm.enabled: false

output.elasticsearch:
  enabled: true
  hosts: ["localhost:9200"]
  index: "test_evtx"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
I use version 7.5.2 for Elasticsearch, Kibana and Winlogbeat.
To my current knowledge the configuration is correct. What bothers me is that in the "Management > Index Management > Index Templates" menu I can see the template used by my test index. The mapping settings there are correct and match the settings in fields.yml, but they look the same even if I don't reference fields.yml in the Winlogbeat configuration file. On the other hand, if the template is set, Kibana should use it when displaying the data through the created index pattern, right? But here the type of the fields is, in 90% of the cases, text and not the type specified in the index template.
So either I misunderstand the whole concept of indices (templates/patterns) or something else is going wrong. Does anyone have suggestions?
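For context, the keyword types for such fields are declared in fields.yml roughly like this. This is a hedged sketch of the Beats fields format; the key name and exact nesting are assumptions, not taken from the poster's actual file:

```yaml
- key: winlog
  title: Windows Event Log
  fields:
    # type: keyword is what should end up in the index template mapping
    - name: winlog.record_id
      type: keyword
    - name: winlog.channel
      type: keyword
```

Note that an index template only affects indices created after it is installed; existing indices keep whatever mapping they were created with.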

How to filter EC2 instances in prometheus.yml?

I am trying to filter EC2 instances in prometheus.yml. Suppose that the following is part of the yml file. How can I use a regex in the values to return instances whose tag starts with, let's say, prod or qa? Is this possible without configuring relabeling?
ec2_sd_configs:
  - region: us-east-1
    access_key: access
    secret_key: mysecret
    profile: profile
    filters:
      - name: tag:Name
        values:
          - prod.*
          - qa.*
It seems that Prometheus does not support regexes in the filtering API right now, though it would be a nice feature to add in a future release. What can be done in this situation is to add a separate tag to the EC2 instances so you can filter on that tag exactly. Filtering at this early stage is extremely helpful if you have a large number of instances. Otherwise you get a huge list and have to go through a drop/keep phase via relabeling, which still leaves a long list in the service discovery panel and makes it difficult to read.
In the next step, you can use relabeling to replace the address of each discovered instance from private IP to public IP. As a final touch, you can replace the instance name with the tag name so, for example, all instances of QA are labeled as QA.
ec2_sd_configs:
  - region: value
    access_key: value
    secret_key: value
    port: value
    filters:
      - name: tag:Name
        values:
          - qa
          - prod
          - some other types
relabel_configs:
  - source_labels: [__meta_ec2_public_ip]
    replacement: '${1}:your port number'
    target_label: __address__
  - source_labels: [__meta_ec2_tag_Name]
    target_label: instance
I don't have any experience with AWS, but I believe its filtering API does not support regular expressions.
In general relabelling is the preferred way to do filtering. An example of how to achieve this would be (for consul, but that does not matter much): Prometheus: how to drop a target based on Consul tags
List of ec2 meta labels available is in prometheus docs at https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config
I see that the Prometheus docs recommend using filters over relabelling for efficiency reasons when you have potentially thousands of instances. Using a separate tag (for example "Env") with values like "qa" and "prod", so they can be matched exactly (without regex/wildcards), would be an elegant solution here, I'd guess?
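Following that suggestion, a minimal sketch of exact-match filtering on a custom Env tag (the tag name and values are assumptions; credentials and other fields are omitted for brevity):

```yaml
ec2_sd_configs:
  - region: us-east-1
    filters:
      # Exact tag values only; regular expressions are not
      # evaluated by this filter, so each value is listed out.
      - name: tag:Env
        values:
          - qa
          - prod
```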

Deleting labels in Prometheus

I'm using Prometheus to do some monitoring but I can't seem to find a way to delete labels I no longer want. I tried using the DELETE /api/v1/series endpoint but it doesn't remove it from the dropdown list on the main Prometheus Graph page. Is there a way to remove them from the dropdown without restarting from scratch?
Thanks
This happens to me also; try including the metric name when querying for label values, like this:
label_values(node_load1, instance)
ref: http://docs.grafana.org/features/datasources/prometheus/
If you delete every relevant timeseries then it should no longer be returned. If this is not the case, please file a bug.
Prometheus doesn't provide the ability to delete particular labels, because this could result in duplicate time series with identical labelsets. For example, suppose Prometheus contains the following time series:
http_requests_total{instance="host1",job="foobar"}
http_requests_total{instance="host2",job="foobar"}
If the instance label is removed, then these two time series become identical:
http_requests_total{job="foobar"}
http_requests_total{job="foobar"}
Now neither Prometheus nor the user can differentiate these two time series.
Prometheus only provides an API for deleting whole time series matching a given series selector; see the TSDB admin API docs for details.
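For reference, a hedged sketch of that deletion API on a modern Prometheus 2.x server. This assumes the server was started with --web.enable-admin-api, and the metric selector and host are illustrative:

```shell
# Delete all series matching the selector (this writes tombstones,
# it does not immediately reclaim disk space)
curl -X POST \
  'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]=http_requests_total{instance="host1"}'

# Clean up the tombstones to actually free the space
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'
```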

Can prometheus read consul node meta?

According to https://www.consul.io/docs/agent/options.html#_node_meta
I can associate any metadata key/value pair with a Consul node.
Can Prometheus read this metadata?
I understand that only the following meta labels are available for Prometheus:
__meta_consul_address: the address of the target
__meta_consul_node: the node name defined for the target
__meta_consul_tags: the list of tags of the target joined by the tag separator
__meta_consul_service: the name of the service the target belongs to
__meta_consul_service_address: the service address of the target
__meta_consul_service_port: the service port of the target
__meta_consul_service_id: the service ID of the target
__meta_consul_dc: the datacenter name for the target
But I would like to be absolutely sure that I'm not missing anything, or that there isn't a trick to do it.
Thank you
That's not supported, as the feature was only released a month ago, but feel free to send a pull request.
Yes, this was introduced in Prometheus 1.8.
You can now simply reference __meta_consul_metadata_$KEYNAME.
The following shows a Prometheus relabel rule which keeps only targets whose 'location' metadata matches ldn, a value we've set on Consul agents running in London.
- source_labels: [__meta_consul_metadata_location]
  separator: ;
  regex: ldn
  replacement: $1
  action: keep
