I'm working with Elasticsearch 7.2.0 and shipping the logs using Filebeat. I can use a custom pipeline, but I'm unable to set a custom index name. Kindly help.
Below is my filebeat output configuration:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: reindex_timestamp
  index: "logstash-%{+yyyy.MM.dd}"

setup.template.name: "logstash"
setup.template.pattern: "logstash-*"
setup.template.enabled: true
setup.template.overwrite: true
I'm not sure, though, how I'm supposed to create the custom template whose name I specified here.
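A likely reason (an assumption based on the settings shown, not something stated in the question) that the custom index name is ignored on 7.x is that index lifecycle management takes precedence over output.elasticsearch.index while it is enabled. With the setup.template.* options in place, Filebeat loads the template itself on startup, so nothing has to be created by hand; a minimal sketch along those lines:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: reindex_timestamp
  index: "logstash-%{+yyyy.MM.dd}"

# ILM is on by default against a 7.x cluster and overrides the custom index
# name above, so switch it off if you want literal "logstash-*" indices.
setup.ilm.enabled: false

# Filebeat creates and loads this template automatically at startup.
setup.template.name: "logstash"
setup.template.pattern: "logstash-*"
setup.template.enabled: true
setup.template.overwrite: true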
Update: I found the solution to my requirement. The configuration below worked for me, since my requirement is to write web logs (logs containing name: web) to a separate index and the rest of the application logs to another index (currently they go to the default index, filebeat-*).
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  #index: "filebeat-7.2.0-logstash-%{+yyyy.MM.dd}" #Its not taking custom index
  pipeline: reindex_timestamp_logstash
  indices:
    - index: "node-%{+yyyy.MM.dd}"
      when.contains:
        name: web
  pipelines:
    - pipeline: reindex_timestamp_node
      when.contains:
        name: web

setup.template.name: "filebeat-7.2.0"
setup.template.pattern: "filebeat-7.2.0-*"
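If the remaining application logs should also go to an explicitly named index rather than the default filebeat-* one, my understanding of the indices setting is that a rule without a when condition always matches, so it can serve as a catch-all. A sketch of that idea (the app-* index name is just an assumed example):

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: reindex_timestamp_logstash
  indices:
    - index: "node-%{+yyyy.MM.dd}"
      when.contains:
        name: web
    # No "when" on this last rule, so every event not matched above lands here
    # instead of the default filebeat-* index (assumed index name).
    - index: "app-%{+yyyy.MM.dd}"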
I've created a docker-compose file with some configurations that deploy Elasticsearch, Kibana, and Elastic Agent, all version 8.7.0,
where in the Kibana configuration file I define the policies I need under xpack.fleet.agentPolicies; with a single command my whole environment goes up and all components connect successfully.
The only issue is one manual step: I had to go to Kibana -> Observability -> APM -> Add Elastic APM and then fill in the Server configuration.
I want to automate this and manage it from the API/CMD/configuration file; I don't want to do it from the UI.
What is the way to do this? In which component? What path should the configuration be at?
I tried to look for APIs or commands to do that, but with no luck. I'm expecting help with automating the remaining step.
Update 1:
I've tried to add it as below, but I still can't see the integration added.
package_policies:
  - name: fleet_server-apm
    id: default-fleet-server
    package:
      name: fleet_server
    inputs:
      - type: apm
        enabled: true
        vars:
          - name: host
            value: "0.0.0.0:8200"
          - name: url
            value: "http://0.0.0.0:8200"
          - name: enable_rum
            value: true
            frozen: true
TL;DR:
Yes, I believe there is a way to do it.
But I am pretty sure this is poorly documented.
You can find some ideas in the apm-server repository.
Solution
In the kibana.yml file you can add some information related to fleet.
The section below is taken from the repository above and helped me set up APM automatically.
But if there are specific settings you would like to enable, I am unsure where you provide them.
xpack.fleet.packages:
  - name: fleet_server
    version: latest

xpack.fleet.agentPolicies:
  - name: Fleet Server (APM)
    id: fleet-server-apm
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: fleet_server-apm
        id: default-fleet-server
        package:
          name: fleet_server
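As an aside (my own assumption, not something the answer above states): if the APM integration assets should also be installed up front, the apm package can be listed under xpack.fleet.packages alongside fleet_server, for example:

xpack.fleet.packages:
  - name: fleet_server
    version: latest
  # Assumed addition: preinstall the APM integration as well.
  - name: apm
    version: latest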
It is true that the Kibana Fleet API is very poorly documented at this moment. I think your problem is that you are trying to add the variables to the fleet_server package instead of the apm package. Your YAML should look like this:
package_policies:
  - name: fleet_server-apm
    id: default-fleet-server
    package:
      name: fleet_server
  - name: apm-1
    package:
      name: apm
    inputs:
      - type: apm
        keep_enabled: true
        vars:
          - name: host
            value: 0.0.0.0:8200
            frozen: true
          - name: url
            value: "http://0.0.0.0:8200"
            frozen: true
          - name: enable_rum
            value: true
            frozen: true
I use Filebeat to send Netflow to Elasticsearch and visualize it with Kibana.
The problem is that the Netflow events are not showing up in Kibana.
Here are my netflow and filebeat configuration files.
netflow.yml
# Module: netflow
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0
      netflow_port: 2055
filebeat.yml
Kibana section:
setup.kibana:
  host: "X.X.X.X:5601"
Elasticsearch Output
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "xxxxxxxxxxxx"
  password: "XXXXXXXXXXXX"
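A first debugging step worth trying (my suggestion, not part of the original post): confirm that Filebeat is actually receiving NetFlow packets on UDP 2055 before looking at Kibana. Raising the Filebeat log level makes the input and output activity visible, for example:

# Assumed debugging additions to filebeat.yml: verbose logs written to files,
# so you can see whether flows arrive on port 2055 and whether bulk requests
# to Elasticsearch succeed.
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat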
I am using a Metricbeat (7.3) docker container alongside several other docker containers, and sending the results to an Elasticsearch (7.3) instance. This works, and the first time everything spins up I get an index in Elasticsearch called metricbeat-7.3.1-2019.09.06-000001.
The initial problem is that I have a Grafana dashboard set up to look for an index with today's date, so it seems to ignore one created several days ago altogether. I could try to figure out what's wrong with those Grafana queries, but more generically I need those index names to roll at some point: the index that's there is already over 1.3 GB, and at some point that will just be too big for the system.
My initial metricbeat.yml config:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
Searching around a bit, it seems like the index field on the elasticsearch output should configure the index name, so I tried the following:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"
That throws an error about needing setup.template settings, so I settled on this:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"

setup.template:
  overwrite: true
  name: "metricbeat"
  pattern: "metricbeat-*"
I don't really know what the setup.template section does, so most of that is a guess from Google searches.
I'm not really sure if the issue is on the metricbeat side, or on the elasticsearch side, or somewhere in-between. But bottom line - how do I get them to roll the index to a new one when the day changes?
These are the settings/steps that worked for me:
metricbeat.yml file:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<es-ip>:9200"]
  index: metricbeat-%{[beat.version]}
  index_pattern: -%{+yyyy.MM.dd}
  ilm.enabled: true
Then, over in Kibana (:5601): go to "Stack Monitoring" and select "metricbeat-*";
apply that kind of setting to begin with, and what follows from there is self-explanatory too.
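For reference, a sketch of the ILM-oriented settings in Metricbeat 7.x (the rollover alias name is my assumption, and when rollover actually happens still depends on the ILM policy attached to the alias):

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]

# With ILM enabled, Metricbeat writes to a rollover alias instead of a
# templated index name, and Elasticsearch creates new backing indices on rollover.
setup.ilm.enabled: true
setup.ilm.rollover_alias: "metricbeat-instance1"   # assumed alias name
setup.ilm.pattern: "{now/d}-000001"                # dates each backing index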
I’m trying to collect logs from Kubernetes nodes using Filebeat and ONLY ship them to ELK IF the logs originate from a specific Kubernetes Namespace.
So far I’ve discovered that you can define Processors which I think accomplish this. However, no matter what I do I can not get the shipped logs to be constrained. Does this look right?
Hm, does this look correct then?
filebeat.config:
  inputs:
    path: ${path.config}/inputs.d/*.yml
    reload.enabled: true
    reload.period: 10s
    when.contains:
      kubernetes.namespace: "NAMESPACE"
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_kubernetes_metadata:
      namespace: "NAMESPACE"

xpack.monitoring.enabled: true

output.elasticsearch:
  hosts: ['elasticsearch:9200']
Despite this configuration I still get logs from all of the namespaces.
Filebeat is running as a DaemonSet on Kubernetes. Here is an example of an expanded log entry: https://i.imgur.com/xfTwbhl.png
You have a number of options to do this:
Filter the data in Filebeat with a drop_event processor (a namespace-specific sketch follows after these options):
processors:
  - drop_event:
      when:
        contains:
          source: "field"
Use an ingest pipeline in Elasticsearch:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
And then drop events in the pipeline:
{
  "drop": {
    "if": "ctx['field'] == null"
  }
}
Use the drop filter of Logstash:
filter {
  if ![field] {
    drop { }
  }
}
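Applied to the namespace question above, a sketch of the first option (assuming add_kubernetes_metadata has already populated kubernetes.namespace on each event) could look like this:

processors:
  # Drop everything that did not originate from the namespace we care about.
  - drop_event:
      when:
        not:
          equals:
            kubernetes.namespace: "NAMESPACE"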
In the end, I resolved this by moving the drop processor from the main configuration file to the input configuration file.
I am using elastic.co/filebeat:6.3.1 and ELK elastic.co:6.3.0 on Ubuntu as Docker containers. While running Filebeat I am facing this issue:
https://i.stack.imgur.com/9rZjt.png
And my filebeat.yml is
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/java/ABC_LOGS/*/*.log
    #- c:\programdata\elasticsearch\logs\*

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Dashboards =====================================
setup.dashboards.enabled: true

#============================== Kibana =====================================
setup.kibana:
  host: "10.0.0.0:5601"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["10.0.0.0:9200"]
Please help me. Thanks in advance.
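Without the screenshot it is hard to say what the exact error is, but one thing that stands out in the configuration as posted (my observation, not something confirmed by the question) is that the log input is disabled, so nothing from /usr/local/java/ABC_LOGS would be shipped. A minimal sketch of the change:

filebeat.inputs:
  - type: log
    # Must be true, otherwise Filebeat ignores this input and ships nothing.
    enabled: true
    paths:
      - /usr/local/java/ABC_LOGS/*/*.log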