I am having some trouble with my ELK stack. I have set up an ELK stack with Winlogbeat (for reading EVTX files), Elasticsearch, and Kibana. In Winlogbeat it is possible to define a custom template or to use the common ECS template. The fields.yml file is already in place. When I process my data, everything goes well, and the data also appears in Elasticsearch/Kibana. The problem is that typical fields such as winlog.record_id or winlog.channel appear in Kibana with type text, not as keyword as defined.
I have tried many different combinations for processing the data in Winlogbeat, but I'm not sure whether Winlogbeat is really the problem.
My current Winlogbeat configuration looks like this:
winlogbeat.event_logs:
  - name: 'c:\path\to\evtx\file.evtx'

setup.template.enabled: true
setup.template.fields: "c:/path/to/fields.yml"
setup.template:
  name: "test_"
  pattern: "test_*"
  overwrite: true
  settings:
    index.number_of_shards: 1

setup.ilm.enabled: false

output.elasticsearch:
  enabled: true
  hosts: ["localhost:9200"]
  index: "test_evtx"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
I am using version 7.5.2 for Elasticsearch, Kibana, and Winlogbeat.
As far as I can tell, the configuration is correct. What bothers me is that in the "Management > Index Management > Index Templates" menu I can see the template currently used by my test index. The mapping settings there are correct and match the settings in fields.yml, but they look exactly the same even if I don't reference fields.yml in the Winlogbeat configuration file. On the other hand, if the template is set, Kibana should use it when publishing the data through the created index pattern, right? But there, in about 90% of cases, the fields have the text type and not the type defined in the index template.
So either I misunderstand the whole concept of indices (templates/patterns), or something else is going wrong. Does anyone have suggestions?
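For reference, a quick way to compare the loaded template with the mapping that actually got applied to the index (using the test_ template name and test_evtx index from my config above; requests written for the Kibana Dev Tools console) is to fetch both the template and the per-field mapping:

GET _template/test_
GET test_evtx/_mapping/field/winlog.record_id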
I would like to change the value (not the label name) of the instance label in the Prometheus DB using PromQL, for the metric rundeck_system_stats_uptime_since.
I managed to do this before ingestion using this:
- source_labels: [__meta_kubernetes_pod_container_name, __meta_kubernetes_pod_container_port_number]
  action: replace
  separator: ":"
  target_label: instance
So I'm covered for future metrics, but I would like to do this for the existing values of the instance label as well.
Expected result:
rundeck_system_stats_uptime_since{app="rdk-exporter", instance="rdk-exporter:9620", [...]}
Since it's a container in k8s, I'm not interested in the IP of that container/host/node, etc., because it's always changing; I'm only interested in the metrics.
Thank you
You can use label_replace from MetricsQL.
You can check how it works in the VictoriaMetrics playground.
So, for example, I have these three metrics:
process_cpu_seconds_total{cluster_num="1"cluster_retention="1m"instance="play-1m-1-vmagent.us-east1-b.c.victoriametrics-test.internal:8429"job="vmagent"}
process_cpu_seconds_total{cluster_num="1"cluster_retention="accounting"instance="play-accounting-1-vmagent.us-east1-b.c.victoriametrics-test.internal:8429"job="vmagent"}
process_cpu_seconds_total{cluster_num="1"cluster_retention="admin"instance="play-admin-1-vmagent.us-east1-b.c.victoriametrics-test.internal:8429"job="vmagent"}
If I use this query:
label_replace(process_cpu_seconds_total{instance=~".*:8429"}, "instance", "some:$2", "instance", "(.*):(.*)")
I will get the following response:
process_cpu_seconds_total{cluster_num="1"cluster_retention="1m"instance="some:8429"job="vmagent"}
process_cpu_seconds_total{cluster_num="1"cluster_retention="accounting"instance="some:8429"job="vmagent"}
process_cpu_seconds_total{cluster_num="1"cluster_retention="admin"instance="some:8429"job="vmagent"}
where instance now has the same host part in every series.
I hope this solution helps you.
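Applied to the metric from the question, a hypothetical adaptation (assuming the current instance value already ends in the :9620 port, so only the host part gets rewritten) could look like:

label_replace(rundeck_system_stats_uptime_since, "instance", "rdk-exporter:$2", "instance", "(.*):(.*)")

This keeps the original port and replaces the changing host/IP part with the fixed name rdk-exporter.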
I have set up logging as described in https://quarkus.io/guides/centralized-log-management with an ELK stack using version 7.7.
My Logstash pipeline looks like the proposed example:
input {
  gelf {
    port => 12201
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}
Most messages are showing up in my Kibana using logstash.* as an index pattern, but some messages are dropped, for example this one:
2020-05-28 15:30:36,565 INFO [io.quarkus] (Quarkus Main Thread) Quarkus 1.4.2.Final started in 38.335s. Listening on: http://0.0.0.0:8085
The problem seems to be that the fields MessageParam0, MessageParam1, MessageParam2, etc. are mapped to whichever type appeared first in the logs, but they actually contain multiple data types. The Elasticsearch log shows errors like ["org.elasticsearch.index.mapper.MapperParsingException: failed to parse field [MessageParam1].
Is there any way in the Quarkus logging-gelf extension to correctly map the values?
ELK can auto-create your Elasticsearch index mapping by looking at the first indexed document. This is very convenient functionality, but it comes with some drawbacks.
For example, if you have a field that can contain numbers or strings, and the first document contains a number for this field, the mapping will be created with a numeric field, so you will not be able to index a document containing a string in this field ...
The only workaround for this is to create the mapping upfront (you can define only the fields that cause the issue; the other fields will still be created automatically).
This is an ELK issue; there is nothing we can do on the Quarkus side.
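As an untested sketch of what "create the mapping upfront" could look like for the fields above (assuming the default logstash-* index naming and Elasticsearch 7.x legacy templates; the template name logstash-messageparams is made up), one option is a dynamic template that forces every MessageParam* field to keyword before new indices are created, e.g. via Kibana Dev Tools:

PUT _template/logstash-messageparams
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "dynamic_templates": [
      {
        "message_params_as_keyword": {
          "match": "MessageParam*",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}

With keyword, both numeric and string parameter values are stored as strings, so the type conflict disappears for indices created after the template is in place.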
Looking at this documentation on adding fields, I see that Filebeat can add any custom field by name and value that will be appended to every document pushed to Elasticsearch by Filebeat.
This is defined in filebeat.yml:
processors:
  - add_fields:
      target: project
      fields:
        name: myproject
        id: '574734885120952459'
Is there a way, strictly from filebeat.yml, to give the fields added here a "type" as well? For example, can I assign "name" to type "keyword"?
The only way I've seen to accomplish this does not involve filebeat.yml, but rather creating a custom fields.yml file in the Filebeat directory (this all applies to any Beat), with the field and type specified under the "beat" key.
For example, if the field "id" above was declared in filebeat.yml and we wanted it to be a custom field of type "keyword", we would do the following:
Copy fields.yml to a my_filebeat_fields.yml file in the Filebeat directory.
In my_filebeat_fields.yml, add this section:
- key: beat
  anchor: beat-common
  title: Beat
  description: >
    Contains common beat fields available in all event types.
  fields:
    # your customization begins here:
    - name: id
      type: keyword
Then do the following to use this new custom fields file:
Modify this portion of filebeat.yml to include:
#==================== Elasticsearch template setting ==========================
setup.template.name: "filebeat-*"
setup.template.fields: "my_filebeat_fields.yml"
setup.template.overwrite: true
Then load the template into Elasticsearch in whichever method is appropriate for your environment, following this guide.
(Assuming version 7.x for everything)
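For example, one way to load it manually (assuming Filebeat outputs directly to the Elasticsearch instance configured in filebeat.yml) is the setup command:

filebeat setup --index-management

If you ship through Logstash instead, the linked guide shows the variant that temporarily points the setup command at Elasticsearch.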
EDIT:
Apparently setting the "setup.template.append_fields" option in the filebeat.yml file could also work, but I have not explored that.
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html
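Untested, but based on that page, an append_fields sketch for the project fields from the question (field names taken from the add_fields example above) might look like:

setup.template.append_fields:
  - name: project.name
    type: keyword
  - name: project.id
    type: keyword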
I am trying to filter EC2 instances in prometheus.yml. Suppose that the following is part of the YAML file. How can I use a regex in the values to return instances that start with, let's say, prod or qa or other labels? Is this possible without configuring relabeling?
ec2_sd_configs:
  - region: us-east-1
    access_key: access
    secret_key: mysecret
    profile: profile
    filters:
      - name: tag:Name
        values:
          - prod.*
          - qa.*
It seems that Prometheus does not support regex in the filtering API right now, but it would be a nice feature if they added it in future releases. What can be done in this situation is to add a separate tag on the EC2 instances so you can filter based on that tag. Filtering at an early stage is extremely helpful if you have a large number of instances. Otherwise, you'll get a huge list and you'll need to go through a drop/keep phase via relabeling, which still keeps a long list in the service discovery panel and makes it difficult to read.
In the next step, you can use relabeling to replace the address of each discovered instance from the private IP to the public IP. As a final touch, you can replace the instance name with the tag name so that, for example, all instances of QA are labeled as QA.
ec2_sd_configs:
  - region: value
    access_key: value
    secret_key: value
    port: value
    filters:
      - name: tag:Name
        values:
          - qa
          - prod
          - some other types
relabel_configs:
  - source_labels: [__meta_ec2_public_ip]
    replacement: '${1}:your port number'
    target_label: __address__
  - source_labels: [__meta_ec2_tag_Name]
    target_label: instance
I don't have any experience with AWS, but I believe its API does not support regular expressions in the filtering API.
In general, relabelling is the preferred way to do filtering. An example of how to achieve this (for Consul, but that does not matter much): Prometheus: how to drop a target based on Consul tags
The list of available EC2 meta labels is in the Prometheus docs at https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config
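A minimal sketch of that drop/keep relabelling approach, using the tag values from the question (keep only targets whose Name tag starts with prod or qa):

relabel_configs:
  - source_labels: [__meta_ec2_tag_Name]
    regex: (prod|qa).*
    action: keep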
I see that the Prometheus docs recommend using filters over relabelling for efficiency reasons when you have potentially thousands of instances. Using a separate tag (for example "Env") that has values of "qa", "prod", etc., so they can be matched exactly (without regex/wildcards), would be an elegant solution here, I'd guess?
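If you go that route, the filter can then match the tag values exactly, something like the following (Env being a hypothetical tag you add to the instances; region taken from the question):

ec2_sd_configs:
  - region: us-east-1
    filters:
      - name: tag:Env
        values:
          - qa
          - prod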
There is a file named "elasticsearch.yml". I have the following questions about the file:
Is it mandatory to keep the file named elasticsearch.yml?
There is a property named cluster.name in the file; what is its use? If we don't mention any name, will it use a default name?
I am confused because I removed the name from the YML file but the program still worked.
elasticsearch.yml is a configuration file. It contains various configurations related to the cluster and node.
cluster.name is the property that sets the name of your cluster. The default name of your cluster is elasticsearch. You can change it to any name you want.
If you remove cluster.name, it won't affect your program; the default name will be used.
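For example (my-cluster is just an example value), in elasticsearch.yml:

cluster.name: my-cluster

If this line is removed or commented out, the cluster starts with the default name elasticsearch.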
You can find answers in-line here:
Is it mandatory to keep the file named elasticsearch.yml?
Yes, it is mandatory not to rename elasticsearch.yml. You can find more info here.
There is a property named cluster.name in the file; what is its use? If we don't mention any name, will it use a default name?
cluster.name is used to name your Elasticsearch cluster. It has the default value elasticsearch, but it is highly recommended to change this parameter, as it is used to discover and auto-join other nodes. You can read more about this parameter here.
I am confused because I removed the name from the YML file but the program still worked.
There is a little confusion here about whether you changed the cluster.name parameter value or completely removed that parameter from elasticsearch.yml. If you completely removed it, then it still holds the default value, i.e. elasticsearch; if you changed its value, then it depends on how your program identifies the Elasticsearch cluster.