I hope to find an answer here to a question I've been struggling with since yesterday:
I'm configuring Logstash 1.5.6 with a RabbitMQ input and an Elasticsearch output.
Messages are published to RabbitMQ in bulk format; my Logstash consumes them and writes them all to the default index logstash-YYYY.MM.dd with this configuration:
input {
  rabbitmq {
    host => 'xxx'
    user => 'xxx'
    password => 'xxx'
    queue => 'xxx'
    exchange => "xxx"
    key => 'xxx'
    durable => true
  }
}
output {
  elasticsearch {
    host => "xxx"
    cluster => "elasticsearch"
    flush_size => 10
    bind_port => 9300
    codec => "json"
    protocol => "http"
  }
  stdout { codec => rubydebug }
}
Now what I'm trying to do is send the messages to different Elasticsearch indices.
The messages coming from the AMQP input already carry the index and type parameters (bulk format).
So after reading the documentation:
https://www.elastic.co/guide/en/logstash/1.5/event-dependent-configuration.html#logstash-config-field-references
I tried this:
input {
  rabbitmq {
    host => 'xxx'
    user => 'xxx'
    password => 'xxx'
    queue => 'xxx'
    exchange => "xxx"
    key => 'xxx'
    durable => true
  }
}
output {
  elasticsearch {
    host => "xxx"
    cluster => "elasticsearch"
    flush_size => 10
    bind_port => 9300
    codec => "json"
    protocol => "http"
    index => "%{[index][_index]}"
  }
  stdout { codec => rubydebug }
}
But what Logstash does is create a literal index named %{[index][_index]} and put all the docs in it, instead of reading the _index parameter and routing each doc accordingly!
I also tried the following:
  index => %{index}
  index => '%{index}'
  index => "%{index}"
But none of them works.
Any help?
To summarize, the main question here is: if the RabbitMQ messages have this format:
{"index":{"_index":"indexA","_type":"typeX","_ttl":2592000000}}
{"@timestamp":"2017-03-09T15:55:54.520Z","@version":"1","@fields":{DATA}}
how do I tell Logstash to send the output to the index named "indexA" with type "typeX"?
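For a sprintf reference like %{[index][_index]} to resolve, the event itself must carry an index field; otherwise Logstash emits the literal string as the index name, which is exactly what happens here, since the bulk metadata only exists as unparsed text in the message field. A minimal sketch of what would have to happen first, assuming (unrealistically for bulk format, where metadata and document lines are interleaved) that each metadata line arrived as its own event:

filter {
  # Hypothetical: parse the raw bulk metadata line so that
  # [index][_index] and [index][_type] become real event fields.
  json {
    source => "message"
  }
}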
If your messages in RabbitMQ are already in bulk format, then you don't need the elasticsearch output; a simple http output hitting the _bulk endpoint will do the trick:
output {
  http {
    http_method => "post"
    url => "http://localhost:9200/_bulk"
    format => "message"
    message => "%{message}"
  }
}
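Note that the _bulk endpoint expects newline-delimited JSON and requires the payload to end with a trailing newline, so this approach only works if the message field preserves that framing.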
So everyone, with the help of Val, the solution was:
As he said, since the RabbitMQ messages were already in bulk format, there was no need to use the elasticsearch output; the http output to the _bulk API does the job (silly me).
So I replaced the output with this:
output {
  http {
    http_method => "post"
    url => "http://172.16.1.81:9200/_bulk"
    format => "message"
    message => "%{message}"
  }
  stdout { codec => json_lines }
}
But it still wasn't working. I was using Logstash 1.5.6, and after upgrading to Logstash 2.0.0 (https://www.elastic.co/guide/en/logstash/2.4/_upgrading_using_package_managers.html) it worked with the same configuration.
There it is :)
If you store JSON messages in RabbitMQ, this problem can be solved.
Use index and type as fields in the JSON message and assign those values to the Elasticsearch output plugin:
  elasticsearch {
    index => "%{index}"            # INDEX from the JSON body received from the producer
    document_type => "%{type}"     # TYPE from the JSON body
  }
With this approach, each message can have its own index and type.
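A fuller sketch of the same idea, assuming a json codec on the input so the index and type fields from the message body actually land on the event (hosts and field names here are placeholders):

input {
  rabbitmq {
    host => "xxx"
    queue => "xxx"
    codec => "json"   # parses the JSON body into event fields
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{index}"            # per-message index carried in the body
    document_type => "%{type}"     # per-message type carried in the body
  }
}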
I have configured Logstash 5.5 to use the tcp input to receive JSON messages.
input {
  tcp {
    port => 9001
    codec => json
    type => "test-tcp-1"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}
filter {
  json { source => "message" }
}
The message is received by Logstash successfully, but Elasticsearch does not create an index! Why?
If I use the same configuration with the stdin input plugin, it works fine.
Many thanks.
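A first debugging step worth trying here (a suggestion, not a confirmed fix): with codec => json on the tcp input the event is already parsed on arrival, so the separate json filter on message is redundant and, if a message field happens to exist but is not valid JSON, it can tag events with _jsonparsefailure. A rubydebug stdout shows what actually reaches the output stage:

output {
  # Inspect events before they go to Elasticsearch; check the
  # type field and look for _jsonparsefailure in tags.
  stdout { codec => rubydebug }
}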
I am attempting to use a Logstash indexer to move data from Redis to Elasticsearch.
On the input-to-Redis end, I give a 'key' to one set of logs in the Logstash output:
redis {
  host => "server"
  port => "7379"
  data_type => "list"
  key => "aruba"
}
On the input end, I read each key:
input {
  redis {
    host => "localhost"
    port => "6379"
    data_type => "list"
    type => "redis-input"
    key => "logstash"
    codec => "json"
    threads => 32
    batch_count => 1000
    #timeout => 10
  }
  redis {
    host => "localhost"
    port => "6379"
    data_type => "list"
    type => "redis-input"
    key => "aruba"
    codec => "json"
    threads => 32
    batch_count => 1000
    #timeout => 10
  }
}
and I am attempting to use the key in Logstash to write to an index, i.e. something like aruba-2017.24.10, but the output always goes to the logstash index. I tried
if [redis.key] == "xyz" {
  elasticsearch { index => "xyz-%{time}" }
}
or if [key] == "xyz" ....
also tried
elasticsearch {
  index => "%{key}-%{time}"
}
and elasticsearch { index => "%{redis.key}-%{time}" }
etc. None of it seems to work.
While @sysadmin1138 is right that accessing nested fields is done via [field][subfield] rather than [field.subfield], your problem is that you are trying to access data that is not in your log event.
While in Redis, your log events have a key associated with them, but the key is not part of the event itself; it is merely used to access the event in Redis. When Logstash fetches the event from Redis, it uses that key to specify which events it wants, but the key does not make it to Elasticsearch.
To see this for yourself, try running Logstash with stdout { codec => "rubydebug" } as an output plugin; it will pretty-print your whole log event, letting you see exactly what data is included.
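As a concrete sketch of that debugging step:

output {
  # Pretty-print every event; note there is no [redis][key] field
  # on the event until you add one yourself.
  stdout { codec => "rubydebug" }
}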
To your rescue comes the add_field parameter, which exists for every Logstash input plugin. You can add it to your input:
redis {
  host => "localhost"
  port => "6379"
  data_type => "list"
  type => "redis-input"
  key => "aruba"
  codec => "json"
  threads => 32
  batch_count => 1000
  add_field => {
    "[redis][key]" => "aruba"
  }
}
Then changing your conditional to use [redis][key] will leave your code working.
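A sketch of the matching output, assuming the add_field setting above (the index pattern is a placeholder):

output {
  if [redis][key] == "aruba" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "aruba-%{+YYYY.MM.dd}"   # placeholder index pattern
    }
  }
}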
(Cheers to RELK stacks)
This is likely due to an incorrect field reference in your conditional.
if [redis.key] == "xyz" {
  elasticsearch { index => "xyz-%{time}" }
}
Should be:
if [redis][key] == "xyz" {
  elasticsearch { index => "xyz-%{time}" }
}
I am currently evaluating Logstash for our data ingestion needs. One of the use cases is to read data from an AWS Kinesis stream. I have tried to install the logstash-input-kinesis plugin. When I run it, I do not see Logstash processing any events from the stream. My Logstash works fine with other types of inputs (tcp). There are no errors in the debug logs; it just behaves as if there is nothing to process. My config file is:
input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    application_name => "logstash"
    type => "kinesis"
  }
  tcp {
    port => 10000
    type => "tcp"
  }
}
filter {
  if [type] == "kinesis" {
    json {
      source => "message"
    }
  }
  if [type] == "tcp" {
    grok {
      match => { "message" => "Hello, %{WORD:name}" }
    }
  }
}
output {
  if [type] == "kinesis" {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "elasticpoc"
    }
  }
  if [type] == "tcp" {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "elkpoc"
    }
  }
}
I have not tried the Logstash way, but if you are running on AWS, there is Kinesis Firehose to Elasticsearch ingestion available, as documented at http://docs.aws.amazon.com/firehose/latest/dev/basic-create.html#console-to-es
You can see if that would work as an alternative to Logstash.
We need to provide AWS credentials for accessing the AWS services for this integration to work.
You can find the same here: https://github.com/logstash-plugins/logstash-input-kinesis#authentication
This plugin requires additional access to AWS DynamoDB as a 'checkpointing' database.
You need to use application_name to specify the table name in DynamoDB if you have multiple streams.
https://github.com/logstash-plugins/logstash-input-kinesis
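A sketch of what that looks like, with a hypothetical application name (each stream/pipeline needs its own, since it becomes a separate DynamoDB checkpoint table):

input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    # Hypothetical name; application_name doubles as the DynamoDB
    # table used for checkpointing, so each stream needs its own.
    application_name => "logstash-gws-poc"
    type => "kinesis"
  }
}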
Now I have a question. My Logstash configuration file is as follows:
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    db => 10
    data_type => "list"
    key => "local_tag_del"
  }
}
filter {
}
output {
  elasticsearch {
    action => "delete"
    hosts => ["127.0.0.1:9200"]
    codec => "json"
    index => "mbd-data"
    document_type => "localtag"
    document_id => "%{album_id}"
  }
  file {
    path => "/data/elasticsearch/result.json"
  }
  stdout {}
}
I want Logstash to read IDs from Redis and tell Elasticsearch to delete the corresponding documents.
Excuse me, my English is poor; I hope someone will help me.
Thanks.
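Note that for %{album_id} to resolve, each Redis list entry must be parsed into an event that actually carries an album_id field. A minimal sketch, assuming the entries are JSON documents like {"album_id":"123"} (an assumption about the data):

input {
  redis {
    host => "127.0.0.1"
    port => 6379
    db => 10
    data_type => "list"
    key => "local_tag_del"
    codec => "json"   # parse {"album_id":"..."} into event fields
  }
}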
I can't help you much beyond this, because your problem is spelled out in your error message: Logstash couldn't connect to your Elasticsearch instance.
That usually means one of:
elasticsearch isn't running
elasticsearch isn't bound to localhost
That's nothing to do with your Logstash config. Using Logstash to delete documents is a bit unusual though, so I'm not entirely sure this isn't an XY problem.
Currently I have a Logstash configuration that pushes data to Redis, and an Elastic server that pulls the data using the default index 'logstash'.
I've added another shipper and have successfully managed to move the data using the default index as well. My goal is to move and restore that data on a separate index; what is the best way to achieve that?
This is my current configuration using the default index:
shipper output:
output {
  redis {
    host => "my-host"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
elk input:
input {
  redis {
    host => "my-host"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
Try setting the index field in the output. Give it the name you want and then run that; a separate index will be created for it.
input {
  redis {
    host => "my-host"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "redis-logs"
    cluster => "cluster name"
  }
}
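Note that the cluster setting only exists in older releases of the elasticsearch output; from Logstash 2.x onwards it was removed in favor of HTTP hosts. A hedged equivalent for newer versions:

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]   # HTTP endpoint replaces the cluster setting
    index => "redis-logs"
  }
}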