Logstash: Missing data after migration - elasticsearch

I have been migrating one of the indexes in our self-hosted Elasticsearch to amazon-elasticsearch using Logstash. We have around 1812 documents in our self-hosted Elasticsearch, but in amazon-elasticsearch we have only about 637, so roughly two-thirds of the documents are missing after migration.
Our Logstash config file:
input {
  elasticsearch {
    hosts => ["https://staing-example.com:443"]
    user => "userName"
    password => "password"
    index => "testingindex"
    size => 100
    scroll => "1m"
  }
}
filter {
}
output {
  amazon_es {
    hosts => ["https://example.us-east-1.es.amazonaws.com:443"]
    region => "us-east-1"
    aws_access_key_id => "access_key_id"
    aws_secret_access_key => "access_key_id"
    index => "testingindex"
  }
  stdout {
    codec => rubydebug
  }
}
We have tried some of the other indexes as well, but it still migrates only a fraction of the documents.

Make sure to compare apples to apples by running GET index/_count on your index on both sides.
You might see more or fewer documents depending on where you look (Elasticsearch HEAD plugin, Kibana, Cerebro, etc.) and on whether replicas are taken into account in the count.
In your case you had more replicas in your local environment than in your AWS Elasticsearch service, hence the different count.
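As a concrete sketch, the counts can be compared with curl against both clusters (hostnames, index name, and credentials below are the placeholders from the question):

```shell
# Count documents in the source (self-hosted) cluster
curl -s -u userName:password "https://staing-example.com:443/testingindex/_count?pretty"

# Count documents in the target (AWS) cluster
curl -s "https://example.us-east-1.es.amazonaws.com:443/testingindex/_count?pretty"

# _stats/docs shows primaries vs. total, which makes replica effects visible
curl -s "https://example.us-east-1.es.amazonaws.com:443/testingindex/_stats/docs?pretty"
```

_count only reports primary-shard documents, so if two tools disagree, one of them is likely counting replica copies as well.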

Related

Logstash Elasticsearch plugin. Compare results from two sources

I have two deployed Elasticsearch clusters. The data supposedly should be the same in both clusters. My main aim is to compare the _source field of each Elasticsearch document between the source and target ES clusters.
I created a Logstash config in which I define an Elasticsearch input plugin that runs over each document in the source cluster; next, using an elasticsearch filter, I want to look up the target Elasticsearch cluster, query it for the document by the _id taken from the source cluster, and match the _source fields of both documents.
Could you please help me implement such a config?
input {
  elasticsearch {
    hosts => ["source_cluster:9200"]
    ssl => true
    user => "user"
    password => "password"
    index => "my_index_pattern"
  }
}
filter {
  mutate {
    remove_field => ["@version", "@timestamp"]
  }
  elasticsearch {
    hosts => ["target_cluster:9200"]
    ssl => true
    user => "user"
    password => "password"
    query => ???????
    match _source field ????
  }
}
output {
stdout { codec => rubydebug }
}
Maybe print some results of comparison...
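Not an authoritative answer, but one way the lookup might be sketched with the elasticsearch filter plugin: enable docinfo => true in the input so the source _id is available under @metadata, query the target cluster by that _id, and copy a field back for comparison. The field names below (some_field, [target][some_field]) are illustrative assumptions, not from the original post:

```
filter {
  mutate {
    remove_field => ["@version", "@timestamp"]
  }
  elasticsearch {
    hosts => ["target_cluster:9200"]
    ssl => true
    user => "user"
    password => "password"
    index => "my_index_pattern"
    # look up the same document in the target cluster by the source _id
    query => "_id:%{[@metadata][_id]}"
    # copy a field from the matched target document onto the event
    fields => { "some_field" => "[target][some_field]" }
  }
  # flag events whose source and target values differ
  if [some_field] != [target][some_field] {
    mutate { add_tag => ["mismatch"] }
  }
}
```

The elasticsearch filter copies individual fields rather than the whole _source, so a full-document comparison would need one fields entry per field (or a ruby filter to diff the structures).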

Logstash to Elasticsearch adding new data in the fields instead of overwriting the existing data?

My pipeline is like this: CouchDB -> Logstash -> Elasticsearch. Every time I update a field value in CouchDB, the data in Elasticsearch is overwritten. My requirement is that, when data in a field is updated in CouchDB, a new document is created in Elasticsearch instead of overwriting the existing one.
My current logstash.conf is like this:
input {
  couchdb_changes {
    host => "<ip>"
    port => <port>
    db => "test_database"
    keep_id => false
    keep_revision => true
    initial_sequence => 0
    always_reconnect => true
    #sequence_path => "/usr/share/logstash/config/seqfile"
  }
}
output {
  if [doc][doc_type] == "HR" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "hrindex_new_1"
      document_id => "%{[doc][_id]}"
      user => "elastic"
      password => "changeme"
    }
  }
  if [doc][doc_type] == "SoftwareEngg" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "softwareenggindex_new"
      document_id => "%{[doc][_id]}"
      user => "elastic"
      password => "changeme"
    }
  }
}
How to do this?
You are using the document_id option in your elasticsearch output. This option tells Elasticsearch to index the document using that value as the document id, which must be unique.
document_id => "%{[doc][_id]}"
So, if the field [doc][_id] in your source document has, for example, the value 1000, the _id field in Elasticsearch will have the same value.
When you change something in the source document whose [doc][_id] equals 1000, it will replace the document with _id 1000 in Elasticsearch, because _id is unique.
To achieve what you want, remove the document_id option from your outputs; Elasticsearch will then generate a unique value for the _id field of each document.
elasticsearch {
  hosts => ["http://elasticsearch:9200"]
  index => "softwareenggindex_new"
  user => "elastic"
  password => "changeme"
}

Sending Cloudtrail gzip logs from S3 to ElasticSearch

I am relatively new to the whole ELK setup, so please bear with me.
What I want to do is send the CloudTrail logs that are stored on S3 into a locally hosted (non-AWS, I mean) ELK setup. I am not using Filebeat anywhere in the setup. I believe it isn't mandatory to use it; Logstash can deliver data directly to ES.
Am I right here?
Once the data is in ES, I would simply want to visualize it in Kibana.
What I have tried so far, given that my ELK is up and running and that there is no Filebeat involved in the setup:
using the S3 logstash plugin
contents of /etc/logstash/conf.d/aws_ct_s3.conf
input {
  s3 {
    access_key_id => "access_key_id"
    bucket => "bucket_name_here"
    secret_access_key => "secret_access_key"
    prefix => "AWSLogs/<account_number>/CloudTrail/ap-southeast-1/2019/01/09"
    sincedb_path => "/tmp/s3ctlogs.sincedb"
    region => "us-east-2"
    codec => "json"
    add_field => { "source" => "gzfiles" }
  }
}
output {
  stdout { codec => json }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "attack-%{+YYYY.MM.dd}"
  }
}
When Logstash is started with the above conf, everything works fine. Using the head Google Chrome plugin, I can see that documents are continuously getting added to the specified index. In fact, when I browse it, I can see the data I need. I am able to see the same on the Kibana side too.
The data that each of these gzip files have is of the format:
{
  "Records": [
    dictionary_D1,
    dictionary_D2,
    .
    .
    .
  ]
}
And I want each of these dictionaries from the list above to become a separate event in Kibana. With some Googling around, I understand that I can use the split filter to achieve this. Now my aws_ct_s3.conf looks something like:
input {
  s3 {
    access_key_id => "access_key_id"
    bucket => "bucket_name_here"
    secret_access_key => "secret_access_key"
    prefix => "AWSLogs/<account_number>/CloudTrail/ap-southeast-1/2019/01/09"
    sincedb_path => "/tmp/s3ctlogs.sincedb"
    region => "us-east-2"
    codec => "json"
    add_field => { "source" => "gzfiles" }
  }
}
filter {
  split {
    field => "Records"
  }
}
output {
  stdout { codec => json }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "attack-%{+YYYY.MM.dd}"
  }
}
And with this I am in fact getting the data as I need on Kibana.
Now the problem is
Without the filter in place, the volume of documents being shipped by Logstash from S3 to Elasticsearch was in the GBs, while after applying the filter it has stopped at roughly 5000 documents.
I do not know what I am doing wrong here. Could someone please help?
Current config:
java -XshowSettings:vm => Max Heap Size: 8.9 GB
elasticsearch jvm options => max and min heap size: 6GB
logstash jvm options => max and min heap size: 2GB
ES version - 6.6.0
LS version - 6.6.0
Kibana version - 6.6.0
This is what the current heap usage looks like: (heap usage screenshot omitted)

Can Logstash send data simultaneously to multiple locations along with Elasticsearch?

Normally, in ELK, Logstash parses data and sends it to Elasticsearch.
I want to know whether Logstash can send the same data to different locations in real time.
If it is possible, please let me know how to do it.
Create several outputs that match on type and send to different hosts.
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
      codec => "plain"
      workers => 1
      manage_template => true
      template_name => "logstash"
      template_overwrite => false
      flush_size => 100
      idle_flush_time => 1
    }
  }
}
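The same events can also be fanned out to several destinations at once simply by listing multiple outputs; every event goes to every output in the block. A minimal sketch (the second hostname and the file path are placeholders, not from the original post):

```
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # a second copy of every event goes to another cluster
  elasticsearch {
    hosts => ["other-host:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # and/or to a local file
  file {
    path => "/var/log/logstash/events-%{+YYYY-MM-dd}.log"
  }
}
```

Conditionals like the if [type] == "syslog" block above can wrap any of these outputs to route only a subset of events to a given destination.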

Reindexing in Elasticsearch 1.7

There is a problem with our mappings for elasticsearch 1.7. I am fixing the problem by creating a new index with the correct mappings. I understand that since I am creating a new index I will have to reindex from old index with existing data to the new index I have just created. Problem is I have googled around and can't find a way to reindex from old to new. Seems like the reindex API was introduced in ES 2.3 and not supported for 1.7.
My question is how do I reindex my data from old to new after fixing my mappings. Alternatively, what is the best practice for making mapping changes in ES 1.7?
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html will not work for me because we're on an old version of ES (1.7)
https://www.elastic.co/blog/changing-mapping-with-zero-downtime
Initially went down that path but got stuck, need a way to reindex the old to the new
Late for your use case, but wanted to put it out there for others. This is an excellent step-by-step guide on how to reindex an Elasticsearch index using Logstash version 1.5 while maintaining the integrity of the original data: http://david.pilato.fr/blog/2015/05/20/reindex-elasticsearch-with-logstash/
This is the logstash-simple.conf the author creates:
input {
  # We read from the "old" cluster
  elasticsearch {
    hosts => [ "localhost" ]
    port => "9200"
    index => "index"
    size => 500
    scroll => "5m"
    docinfo => true
  }
}
filter {
  mutate {
    remove_field => [ "@timestamp", "@version" ]
  }
}
output {
  # We write to the "new" cluster
  elasticsearch {
    host => "localhost"
    port => "9200"
    protocol => "http"
    index => "new_index"
    index_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
  # We print dots to see it in action
  stdout {
    codec => "dots"
  }
}
There are a few options for you:
Use Logstash - it's very easy to create a reindex config in Logstash and use that to reindex your documents. For example:
input {
  elasticsearch {
    hosts => [ "localhost" ]
    port => "9200"
    index => "index1"
    size => 1000
    scroll => "5m"
    docinfo => true
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => "9200"
    protocol => "http"
    index => "index2"
    index_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}
The problem with this approach is that it'll be relatively slow, since only a single machine performs the reindexing process.
Another option: use this tool. It'll be faster than Logstash, but you'll have to provide segmentation logic for all your documents to speed up the processing. For example, if you have a numeric field whose values range from 1 to 100, you could segment the queries in the tool into, say, 10 intervals (1-10, 11-20, ... 91-100), so the tool will spawn 10 indexers that work in parallel reindexing your old index.
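The interval segmentation described above can be sketched in a few lines of Python; the field name my_numeric_field and the 1-100 bounds are illustrative assumptions, not from the original post:

```python
def segment_ranges(lo, hi, parts):
    """Split the inclusive numeric range [lo, hi] into `parts` contiguous intervals."""
    step = (hi - lo + 1) // parts
    bounds = []
    start = lo
    for i in range(parts):
        # the last interval absorbs any remainder so the full range is covered
        end = hi if i == parts - 1 else start + step - 1
        bounds.append((start, end))
        start = end + 1
    return bounds

# Each interval becomes a range query that one parallel indexer can scan on its own:
queries = [
    {"query": {"range": {"my_numeric_field": {"gte": s, "lte": e}}}}
    for s, e in segment_ranges(1, 100, 10)
]
```

Each worker then scrolls only its own slice of the index, so the slices must be disjoint and together cover every document.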
