Separate indexes on Logstash - Elasticsearch

Currently I have a Logstash configuration that pushes data to Redis, and an Elasticsearch server that pulls the data using the default index 'logstash'.
I've added another shipper and successfully managed to move its data using the default index as well. My goal is to move and restore that data into a separate index; what is the best way to achieve this?
This is my current configuration using the default index:
shipper output:
output {
  redis {
    host => "my-host"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
ELK input:
input {
  redis {
    host => "my-host"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}

Set the index field in the output. Give it the name you want and then run that; a separate index will be created for it.
input {
  redis {
    host => "my-host"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "redis-logs"
    cluster => "cluster name"
  }
}
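Note that the cluster option belongs to older Logstash releases; on current versions of the elasticsearch output it has been removed and you point at the cluster with hosts instead. A minimal sketch, assuming a single local Elasticsearch node:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "redis-logs"
  }
}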

Related

Upsert documents in Elasticsearch using custom ID field

I am trying to load/ingest data from some log files that are almost a replica of the data stored in a 3rd-party vendor's DB. The data is pipe-separated "key=value" pairs, and I am able to split it up using the kv filter plugin in Logstash.
Sample data -
1.) TABLE="TRADE"|TradeID="1234"|Qty=100|Price=100.00|BuyOrSell="BUY"|Stock="ABCD Inc."
If we receive a modification to the above record:
2.) TABLE="TRADE"|TradeID="1234"|Qty=120|Price=101.74|BuyOrSell="BUY"|Stock="ABCD Inc."
We need to update the record that was created by the first entry. So I need to use TradeID as the document ID and upsert the records so that there is no duplication of records with the same TradeID.
The logstash.conf is somewhat like below:
input {
  file {
    path => "some path"
  }
}
filter {
  kv {
    source => "message"
    field_split => "\|"
    value_split => "="
  }
}
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    cacert => "path of .cert file"
    ssl => true
    ssl_certificate_verification => true
    index => "trade-index"
    user => "elastic"
    password => ""
  }
}
You need to update your elasticsearch output like below:
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    cacert => "path of .cert file"
    ssl => true
    ssl_certificate_verification => true
    index => "trade-index"
    user => "elastic"
    password => ""
    # add the following to make it work as an upsert
    action => "update"
    document_id => "%{TradeID}"
    doc_as_upsert => true
  }
}
So when Logstash reads the first trade, the document with ID 1234 will not exist and will be upserted (i.e. created). When the second trade is read, the document exists and will be simply updated.
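One caveat worth guarding against: if an event ever arrives without a TradeID (for example a malformed line), the %{TradeID} reference stays unresolved and every such event would be upserted onto the same literal document ID. A minimal sketch of a guard, where the dead-letter file path is only a hypothetical example:
output {
  if [TradeID] {
    elasticsearch {
      hosts => ["https://localhost:9200"]
      index => "trade-index"
      action => "update"
      document_id => "%{TradeID}"
      doc_as_upsert => true
    }
  } else {
    # hypothetical dead-letter file for lines where the kv filter found no TradeID
    file { path => "/var/log/logstash/trades-missing-id.log" }
  }
}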

Having issues creating conditional outputs with logstash using metadata fields

I want to send winlogbeat data to a separate index from my main index. I have configured winlogbeat to send its data to my Logstash server and I can confirm that I have received the data.
This is what I do currently:
output {
  if [@metadata][beat] == "winlogbeat" {
    elasticsearch {
      hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY-MM-dd}"
      user => logstash_internal
      password => password
      stdout { codec => rubydebug }
    }
    else {
      elasticsearch {
        hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
        index => "logstash-%{stuff}-%{+YYYY-MM-dd}"
        user => logstash_internal
        password => password
      }
    }
  }
}
However, I cannot start Logstash using this configuration. If I remove the if statements and only use one elasticsearch output, the one which handles regular Logstash data, it works.
What am I doing wrong here?
You have problems with the brackets in your configuration. To fix your code, please see below:
output {
  if [@metadata][beat] == "winlogbeat" {
    elasticsearch {
      hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY-MM-dd}"
      user => logstash_internal
      password => password
    }
    stdout { codec => rubydebug }
  } else {
    elasticsearch {
      hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
      index => "logstash-%{stuff}-%{+YYYY-MM-dd}"
      user => logstash_internal
      password => password
    }
  }
}
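If you also want the rubydebug output for the non-winlogbeat events, one option is to keep a single stdout plugin outside the conditional. A sketch of the same configuration, just rearranged:
output {
  # print every event, whichever elasticsearch branch it takes below
  stdout { codec => rubydebug }
  if [@metadata][beat] == "winlogbeat" {
    elasticsearch {
      hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY-MM-dd}"
      user => logstash_internal
      password => password
    }
  } else {
    elasticsearch {
      hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
      index => "logstash-%{stuff}-%{+YYYY-MM-dd}"
      user => logstash_internal
      password => password
    }
  }
}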
I hope this sorts out your issue.

Using Redis key as Elasticsearch index name

I am attempting to use a logstash indexer to move data from redis to elasticsearch.
On the output-to-Redis end, I give a 'key' to one set of logs in the Logstash output:
redis
{
  host => "server"
  port => "7379"
  data_type => "list"
  key => "aruba"
}
On the input end, I read each key in the input:
input
{
  redis
  {
    host => "localhost"
    port => "6379"
    data_type => "list"
    type => "redis-input"
    key => "logstash"
    codec => "json"
    threads => 32
    batch_count => 1000
    #timeout => 10
  }
  redis
  {
    host => "localhost"
    port => "6379"
    data_type => "list"
    type => "redis-input"
    key => "aruba"
    codec => "json"
    threads => 32
    batch_count => 1000
    #timeout => 10
  }
}
and I am attempting to use the key in Logstash to write to an index named something like
aruba-2017.24.10, but the output always goes to the logstash index. I tried
if [redis.key] == "xyz"
{
  elasticsearch { index => "xyz-%{time}" }
}
or if [key] == "xyz" ....
also tried
elasticsearch
{
  index => "%{key}-%{time}"
}
and elasticsearch { index => "%{redis.key}-%{time}" }
etc. None of it seems to work.
While @sysadmin1138 is right that accessing nested fields is done via [field][subfield] rather than [field.subfield], your problem is that you are trying to access data that is not in your log event.
While your log events sit in Redis they have a key associated with them, but that key is not part of the event itself; it is merely used to access the events in Redis. When Logstash fetches an event from Redis, it uses that "key" to specify which events it wants, but the key does not make it into Elasticsearch.
To see this for yourself, try running Logstash with stdout { codec => "rubydebug" } as an output plugin; it will pretty-print your whole log event, allowing you to see what data is included.
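A minimal sketch of such a debugging output section:
output {
  # temporary debugging output: pretty-prints each event with all of its fields
  stdout { codec => "rubydebug" }
}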
To your rescue comes the add_field option that exists for every Logstash input plugin. You can add it to your input:
redis
{
  host => "localhost"
  port => "6379"
  data_type => "list"
  type => "redis-input"
  key => "aruba"
  codec => "json"
  threads => 32
  batch_count => 1000
  add_field => {
    "[redis][key]" => "aruba"
  }
}
Then changing your conditional to use [redis][key] will get your code working.
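For example, the output could then look something like this; a sketch that assumes the [redis][key] field added above and uses the @timestamp-based date format for the index name, since %{time} only resolves if your events actually contain a time field:
output {
  if [redis][key] == "aruba" {
    elasticsearch { index => "aruba-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { index => "logstash-%{+YYYY.MM.dd}" }
  }
}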
(Cheers to RELK stacks)
This is likely due to an incorrect field reference in your conditional.
if [redis.key] == "xyz" {
  elasticsearch { index => "xyz-%{time}" }
}
Should be:
if [redis][key] == "xyz" {
  elasticsearch { index => "xyz-%{time}" }
}

I want to delete documents with Logstash, but it throws an exception

I have run into a problem. My Logstash configuration file is as follows:
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    db => 10
    data_type => "list"
    key => "local_tag_del"
  }
}
filter {
}
output {
  elasticsearch {
    action => "delete"
    hosts => ["127.0.0.1:9200"]
    codec => "json"
    index => "mbd-data"
    document_type => "localtag"
    document_id => "%{album_id}"
  }
  file {
    path => "/data/elasticsearch/result.json"
  }
  stdout {}
}
I want to read IDs from Redis with Logstash and have Elasticsearch delete the corresponding documents.
Excuse me, my English is poor; I hope someone can help me.
Thanks.
I can't help you specifically, because your problem is spelled out in your error message: Logstash couldn't connect to your Elasticsearch instance.
That usually means one of:
elasticsearch isn't running
elasticsearch isn't bound to localhost
That has nothing to do with your Logstash config. Using Logstash to delete documents is a bit unusual though, so I'm not entirely sure this isn't an XY problem.

How to move data from one Elasticsearch index to another using the Bulk API

I am new to Elasticsearch. How do I move data from one Elasticsearch index to another using the Bulk API?
I'd suggest using Logstash for this, i.e. you use one elasticsearch input plugin to retrieve the data from your index and another elasticsearch output plugin to push the data to your other index.
The Logstash config file would look like this:
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "source_index"            # the name of your source index
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
    manage_template => false
    index => "target_index"            # the name of your target index
    document_type => "your_doc_type"   # make sure to set the appropriate type
    document_id => "%{id}"
    workers => 5
  }
}
After installing Logstash, you can run it like this:
bin/logstash -f logstash.conf
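Note that the output options shown above (host, port, protocol) are from an older Logstash release; on current versions of the elasticsearch output they have been folded into a single hosts option, so the output would look more like this (a sketch, assuming Logstash 2.x or later and the default document type):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "target_index"
    document_id => "%{id}"
  }
}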
