Using conditionals in Logstash pipeline configuration - elasticsearch

I am trying to use Logstash conditionals in the context of a pipeline output configuration.
Based on the presence of the device field in the payload, I'd like to forward the event to the appropriate index name in Elasticsearch:
output {
  elasticsearch {
    hosts => ["10.1.1.5:9200"]
    if [device] ~= \.* {
      index => "%{[device][0]}-%{+YYYY.ww}"
    } else {
      index => "%{[beat][name]}-%{+YYYY.ww}"
    }
  }
}
The above code fails with the following message in the log, indicating a syntax error:
...
"Expected one of #, => at line 14, column 12 (byte 326) after output {\n elasticsearch {\n hosts => [\"10.1.1.5:9200\"]\n if "
...
Can someone please advise?

You should put the conditional around the elasticsearch output, not inside it; plugin blocks cannot contain conditionals. (Also note that the regex-match operator is =~, not ~=, and patterns go between slashes. Since you only need to test for the presence of the field, a plain if [device] is enough.)
output {
  if [device] {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "%{[device][0]}-%{+YYYY.ww}"
    }
  } else {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "%{[beat][name]}-%{+YYYY.ww}"
    }
  }
}
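If the only difference between the two branches is the index name, the duplication can be avoided by computing the name in a filter and referencing it once in the output. A minimal sketch of that variant, assuming an illustrative temporary field [@metadata][target_index]:
filter {
  if [device] {
    mutate { add_field => { "[@metadata][target_index]" => "%{[device][0]}-%{+YYYY.ww}" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "%{[beat][name]}-%{+YYYY.ww}" } }
  }
}
output {
  elasticsearch {
    hosts => ["10.1.1.5:9200"]
    # [@metadata] fields are not written to Elasticsearch, so this stays a temporary variable
    index => "%{[@metadata][target_index]}"
  }
}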

Related

Logstash error when using if statement in pipeline

I am trying to determine if a field exists in a log file and, if so, use the value of that field as part of the index name. If the field does not exist, use a different index name.
input {
  beats {
    port => 5000
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["https://elasticserver.io:9243"]
    user => "user"
    password => "pass"
    retry_on_conflict => "2"
    if [index_append] {
      index = "%{[@metadata][beat]}%{index_append}"
    }
    else {
      index = "%{[@metadata][beat]}"
    }
    "action" => "create"
  }
}
If I remove the if statements in the output section and just use either one of the index options (index = "%{[@metadata][beat]}%{index_append}" or index = "%{[@metadata][beat]}"), the pipeline loads fine, but it doesn't account for whether the field 'index_append' exists or not.
I have tried many combinations, but the logstash logs seem to indicate some sort of syntax issue.
[2021-06-09T17:17:38,658][ERROR][logstash.agent ] Failed to execute action {:id=>:"LogstashPipeline", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Expected one of [ \\t\\r\\n], \"#\", \"=>\" at line 14, column 8 (byte 259) after output {\n elasticsearch {\n hosts => [\"https://elasticserver.io:9243\"]\n user => \"user\"\n password => \"pass\"\n retry_on_conflict => \"2\"\n if ", :backtrace=>["/opt/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/opt/logstash/logstash-core/lib/logstash/pipeline_action/reload.rb:53:in `execute'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:389:in `block in converge_state'"]}
I tried moving the if statements to the filter section, but received the same error in the Logstash logs. I have used similar if statements in other pipelines and have not had these types of issues. I copied the code to VS Code and verified that there were no extra spaces or characters. I'm at a loss.
This pipeline is running on Logstash 7.10.2
Move the conditional to the filter section. Use a field under [@metadata] to store the index name. By default [@metadata] does not get written by the output, so it is useful for storing temporary variables.
if [index_append] {
  mutate { add_field => { "[@metadata][index]" => "%{[@metadata][beat]}%{index_append}" } }
} else {
  mutate { add_field => { "[@metadata][index]" => "%{[@metadata][beat]}" } }
}
Then reference it in the output using
index => "%{[@metadata][index]}"
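For completeness, the output block from the question would then reduce to something like this (host and credentials are the question's own placeholders):
output {
  elasticsearch {
    hosts => ["https://elasticserver.io:9243"]
    user => "user"
    password => "pass"
    retry_on_conflict => "2"
    action => "create"
    # the index name was built by the conditional in the filter section
    index => "%{[@metadata][index]}"
  }
}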

Configuring a logstash.conf file to create multiple indices when given one input containing several sources

My goal is to output filtered data into different indices. In my logstash.conf file, I currently have one input that takes in data from multiple log files, and then I filter the data as follows:
filter {
  if ([source] ~= "examiner.log") {
    json {
      source => "message"
      add_tag => ["EXAMINER"]
    }
  } else if ...
  ...
  }
}
My output looks like:
output {
  if ("EXAMINER" in [tags]) {
    elasticsearch {
      host => ["localhost:9200"]
      index = "examiner"
    }
  } else {
    elasticsearch {
      host => ["localhost:9200"]
      index => "data"
    }
  }
}
The data index gets created, but examiner never does. I'm not sure why the conditional does not seem to be working.
An example of my input data is:
{"#timestamp":"2019-09-27T20:42:12.254Z", "source_host":"WINL12345678", "file":"AppStarter.scala", "method":"start", "level":"INFO", "line_number":"34", "thread_name":"pool-1-thread-1", "#version":"1", "logger_name":"org.http4s.server.AppStarter", "message":"Application is running", "class":"org.http4s.server.AppStarter$class", "mdc":{}}

How to send different logstash event to different output

There are many events whose fields are extracted from the message field in the Logstash filter section, like below:
match => ["message", "%{type1:f1} %{type2:f2} %{type3:f3}"]
The purpose is to send f1, f2, f3 to one output and only f1 and f3 to another output plugin, such that:
output {
  elasticsearch {
    action => "index"
    hosts => "localhost"
    index => "indx1-%{+YYYY-MM}"
    .
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost"
    index => "indx2-%{+YYYY-MM}"
  }
}
The problem is that all events are sent to every output plugin, but I want to control which events go to which output plugin. Is it possible to do this?
I found a solution by using Filebeat to forward data to Logstash.
When running two instances of Filebeat and one instance of Logstash, each Filebeat forwards its input data to the same Logstash but with a different type, like:
document_type: type1
In Logstash, the appropriate filter and output is executed using an if clause:
filter {
  if [type] == "type1" {
  }
  else {
  }
}
output {
  if [type] == "type1" {
    elasticsearch {
      action => "index"
      hosts => "localhost"
      index => "%{type}-%{+YYYY.MM}"
    }
  }
  else {
    elasticsearch {
      action => "index"
      hosts => "localhost"
      index => "%{type}-%{+YYYY.MM}"
    }
  }
}
If you have two distinct matching patterns in the "filter" section, then you can add specific "tags" for each match. Then in the output section use something like this:
if "matchtype1" in [tags] {
elasticsearch {
hosts => "localhost"
index => "indxtype1-%{+YYYY.MM}"
}
}
if "matchtype2" in [tags]{
elasticsearch {
hosts => "localhost"
index => "indxtype2-%{+YYYY.MM}"
}
}
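For reference, a minimal sketch of a filter section that applies those tags (the grok patterns, field names, and tag names here are illustrative, not from the original post):
filter {
  grok {
    # events matching the first pattern get the matchtype1 tag
    match => { "message" => "%{WORD:f1} %{NUMBER:f2} %{GREEDYDATA:f3}" }
    add_tag => ["matchtype1"]
    tag_on_failure => []
  }
  grok {
    # events matching the second pattern get the matchtype2 tag
    match => { "message" => "%{WORD:f1} %{GREEDYDATA:f3}" }
    add_tag => ["matchtype2"]
    tag_on_failure => []
  }
}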

Logstash conditional output to elasticsearch (index per filebeat hostname)

I have several web servers with filebeat installed and I want to have multiple indices per host.
My current configuration looks as follows:
input {
  beats {
    ports => 1337
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}"}
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    if [beat][hostname] == "luna"
    {
      hosts => "10.0.1.1:9200"
      manage_template => true
      index => "lunaindex-%{+YYYY.MM.dd}"
      document_type => "apache"
    }
  }
}
However, the above conf results in
The given configuration is invalid. Reason: Expected one of #, => at
line 22, column 6 (byte 346)
which is where the if statement takes place. Any help?
I would like to have the above in a nested format as
if [beat][hostname] == "lina"
{
index = lina
}
else if [beat][hostname] == "lona"
{
index = lona
}
etc. Any help please?
The thread is old but hopefully somebody will find this useful. Plugin definitions don't allow conditionals inside them, hence the error. The conditional must wrap the entire plugin definition, like below. Also, see the documentation for details.
output {
  if [beat][hostname] == "luna" {
    elasticsearch {
      hosts => "10.0.1.1:9200"
      manage_template => true
      index => "lunaindex-%{+YYYY.MM.dd}"
      document_type => "apache"
    }
  } else {
    elasticsearch {
      # alternate configuration
    }
  }
}
To access any nested field you have to enclose it with %{}.
Try this
%{[beat][hostname]}
See this for more explanations.
UPDATE:
Using %{[beat][hostname]} with == will not work, try
if "lina" in [beat][hostname]{
index = lina
}
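Combined with the earlier point that the conditional has to wrap the whole plugin block, that check would look roughly like this (a sketch; host and index name are taken from the question purely for illustration):
output {
  if "lina" in [beat][hostname] {
    elasticsearch {
      hosts => "10.0.1.1:9200"
      index => "linaindex-%{+YYYY.MM.dd}"
    }
  }
}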
A solution can be:
In each of your Filebeat configuration files, define the document type in the prospector section:
document_type: luna
And in your pipeline conf file, check the type field
if[type]=="luna"
Hope this helps.

Logstash expect # what is wrong

I want to send data from the Kafka topic "test" to the Elasticsearch index "twitter" via Logstash, but my config doesn't work.
The error is:
reason=>"Expected one of #, => at line 1, column 101 (byte 101) after
input { kafka { bootstrap_servers=>\"localhost:9092\"
topics=>\"test\"} filter{} output{ elasticsearch "}
input { kafka { bootstrap_servers=>"localhost:9092" topics=>"test"} filter{} output{ elasticsearch {hosts=>["127.0.0.1:9200"]}}
Seems like you're missing a closing bracket in your input:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "test"
  }
}   # <---- this bracket was missing in yours
filter {}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}
