Insert aggregation results into an index - elasticsearch

The goal is to build an Elasticsearch index containing only the most recent document from each group of related documents, in order to track the current state of some monitoring counters and states.
I have crafted a simple Elasticsearch aggregation query:
{
  "size": 0,
  "aggs": {
    "group_by_monitor": {
      "terms": {
        "field": "monitor_name"
      },
      "aggs": {
        "get_latest": {
          "top_hits": {
            "size": 1,
            "sort": [
              {
                "timestamp": {
                  "order": "desc"
                }
              }
            ]
          }
        }
      }
    }
  }
}
It groups related documents into buckets and selects the most recent document for each bucket.
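For illustration, the response to that query then looks roughly like this (values are made up), with one bucket per monitor_name and a single most recent document per bucket:
{
  "aggregations": {
    "group_by_monitor": {
      "buckets": [
        {
          "key": "monitor_a",
          "doc_count": 42,
          "get_latest": {
            "hits": {
              "hits": [
                {
                  "_source": {
                    "monitor_name": "monitor_a",
                    "timestamp": "2016-12-20T10:00:00Z"
                  }
                }
              ]
            }
          }
        }
      ]
    }
  }
}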
Here are the different ideas I had to get the job done:
directly use the aggregation query to push the results into the index, but it does not seem possible: Is it possible to put the results of an ElasticSearch aggregation back into the index?
use the Logstash Elasticsearch input plugin to execute the aggregation query and the Elasticsearch output plugin to push into the index, but it seems the input plugin only looks at the hits field and is unable to handle aggregation results: Aggregation Query possible input ES plugin!
use the Logstash http_poller plugin to get a JSON document, but it does not seem to allow specifying a body for the HTTP request!
use the Logstash exec plugin to execute cURL commands to get the JSON, but this seems quite cumbersome and is my last resort.
use the NEST API to build a basic application that will do the polling, extract the results, clean them and inject the resulting documents into the target index, but I'd like to avoid adding a new tool to maintain.
Is there a way of accomplishing this that is not overly complex?

Edit the logstash.conf file as follows:
input {
  elasticsearch {
    hosts => "localhost"
    index => "source_index_name"
    type => "index_type"
    query => '{Query}'
    size => 500
    scroll => "5m"
    docinfo => true
  }
}
output {
  elasticsearch {
    index => "target_index_name"
    document_id => "%{[@metadata][_id]}"
  }
}
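Note that docinfo => true is what makes this work end to end: the input plugin copies each hit's _index, _type and _id into the event's [@metadata] fields, so the output plugin can reuse the original document id through %{[@metadata][_id]} and overwrite the same target document on every run.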

Related

Daterange + top_hits aggregation (as subaggregation) with Elasticsearch Java API Client 7.17.x

I've been at this for a day and I don't quite understand how to do it! This is the query I want to "recreate" with the new Java API Client (using Spring Boot):
{
  "aggs": {
    "range": {
      "date_range": {
        "field": "timestamp",
        "ranges": [
          { "to": "now-2d" }
        ]
      },
      "aggs": {
        "top_hits": {
          "top_hits": {
            "_source": {
              "includes": [ "Id", "timestamp" ]
            }
          }
        }
      }
    }
  }
}
I tried doing it with DateRangeAggregation.of but I can't seem to get the right results or type. Here's what I have:
SearchResponse<MyDto> response = client.search(b -> b
        .index("test-index")
        .size(0)
        .aggregations("range", a -> a
            .dateRange(DateRangeAggregation.of(d -> d
                .field("timestamp")
                .ranges(r -> r.to(t -> t.expr("now-2d"))))))
        .aggregations("hits", a -> a
            .topHits(h -> h.source(SourceConfig.of(c -> c
                .filter(f -> f.includes(Arrays.asList("Id", "timestamp"))))))),
    MyDto.class
);
I've also tried removing the subaggregation and the query for now, but I don't seem to be on the right track to even get the doc_count from the bucket. I kind of don't get how to work with dateRange() here.
Edit: I played around a bit and was able to at least get the doc_count, though I'm not sure whether this is a good way to do it:
Aggregation agg = Aggregation.of(a -> a
    .dateRange(d -> d
        .field("timestamp")
        .ranges(r -> r.to(FieldDateMath.of(v -> v.expr("now-2d"))))));
SearchResponse<MyDto> response = client.search(b -> b
        .index("test-index")
        .size(0)
        .aggregations("range", agg),
    MyDto.class
);
return response.aggregations().get("range").dateRange().buckets().array().get(0).docCount();
I also fixed the query above; it had an unnecessary extra query that broke the result.
My thought process was wrong. I wanted the documents that were aggregated within this time range, but I misunderstood and thought top_hits would give them to me, and that's not how it works! I made a separate range query that actually returns the documents I needed instead.
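For completeness, here is a minimal, untested sketch, assuming the same test-index, client and MyDto as above, of what the title actually asks for: top_hits nested as a subaggregation of the date_range, so each bucket carries its own latest hits (the agg name latest is arbitrary):
SearchResponse<MyDto> response = client.search(b -> b
        .index("test-index")
        .size(0)
        .aggregations("range", a -> a
            .dateRange(d -> d
                .field("timestamp")
                .ranges(r -> r.to(t -> t.expr("now-2d"))))
            // nest the top_hits under the date_range bucket
            .aggregations("latest", sub -> sub
                .topHits(h -> h.source(s -> s
                    .filter(f -> f.includes("Id", "timestamp")))))),
    MyDto.class
);
// each bucket now exposes its sub-aggregation results
response.aggregations().get("range").dateRange().buckets().array()
    .forEach(bucket -> bucket.aggregations().get("latest").topHits()
        .hits().hits().forEach(hit -> System.out.println(hit.source())));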

No effect of “size” in ‘query’ while reindexing in elasticsearch

I have been using Logstash to migrate one index to another. I recently tried to reindex a certain amount of data from a large dataset in my local environment, so I tried the following configuration for the migration:
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "old_index"
    query => '{ "query": { "match_all": {} }, "size": 10 }'
  }
}
filter {
  mutate {
    remove_field => [
      "@version",
      "@timestamp"
    ]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "new_index"
    document_type => "contact"
    manage_template => false
    document_id => "%{contactId}"
  }
}
But this reindexes all the documents in old_index to new_index, whereas I was expecting just 10 documents to be reindexed into new_index.
Am I missing some concept of using Logstash with Elasticsearch?
The elasticsearch input doesn't run a conventional search, but a scan/scroll search instead. This means that all data will be retrieved from the index, and the size parameter only defines how many documents are fetched during each scroll, not how many are fetched altogether.
Also note that the size parameter inside the query itself has no effect. You need to use the size parameter of the elasticsearch input and not specify it in the query:
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "old_index"
    query => '*'
    size => 10     # <-- size goes here
  }
}
That being said, if you're running ES 2.3 or later, there's a way to achieve what you desire using the Reindex API, like this:
POST /_reindex
{
  "size": 10,
  "source": {
    "index": "old_index"
  },
  "dest": {
    "index": "new_index"
  }
}
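Note that on newer Elasticsearch versions (7.3 and later) the top-level size parameter of _reindex was renamed to max_docs, so the same request becomes:
POST /_reindex
{
  "max_docs": 10,
  "source": {
    "index": "old_index"
  },
  "dest": {
    "index": "new_index"
  }
}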

Is it possible to run an elasticsearch aggregation query in Kibana?

I would like to run the following aggregation query in Kibana:
GET _search
{
  "size": 0,
  "aggs": {
    "group_by_host": {
      "terms": {
        "field": "host",
        "size": 20
      }
    }
  }
}
I can run it in the Dev Tools console (what used to be called Sense), but I would like to run it in Kibana proper. I'm having a hard time figuring it out.
Just create a chart from the Visualize tab.
Then buckets => X Axis (or Split Rows or whatever, based on your chart type) => Terms => choose your field.
Then click the Advanced link and write {"size": 10} there.
Hope that helps!

elasticsearch: define field's order in returned doc

I'm sending queries to Elasticsearch and it responds with the fields of each document in an unknown order.
How can I fix the order in which Elasticsearch returns the fields inside documents?
I mean, I'm sending this query:
{
  "index": "my_index",
  "_source": {
    "includes": ["field1", "field2", "field3", "field14"]
  },
  "size": X,
  "body": {
    "query": {
      // stuff
    }
  }
}
and when it responds, it gives me the fields in the wrong order.
I ultimately want to convert this to CSV and want to fix the CSV headers.
Is there something I can do so I can get something like
doc1: {"field1","field2","field3","field14"}
doc2: {"field1","field2","field3","field14"}
...
in the same order as my "_source"?
Thanks for your help.
A document in Elasticsearch is a JSON hash/map and by definition maps are unordered.
One way around this would be to use Logstash to extract the docs from ES using an elasticsearch input and output them as CSV using a csv output. That way you can guarantee that the fields in the CSV file will have the exact same order as specified. Another benefit is that you don't have to write your own boilerplate code to extract from ES and sink to CSV; Logstash does it all for you for free.
The Logstash configuration would look something like this:
input {
  elasticsearch {
    hosts => "localhost"
    query => '{ "query": { "match_all": {} } }'
    size => 100
    index => "my_index"
  }
}
filter {}
output {
  csv {
    fields => ["field1", "field2", "field3", "field14"]
    path => "/path/to/file.csv"
  }
}

Get elasticsearch indices before specific date

My logstash service sends the logs to elasticsearch as daily indices.
elasticsearch {
  hosts => [ "127.0.0.1:9200" ]
  index => "%{type}-%{+YYYY.MM.dd}"
}
Does Elasticsearch provide an API to look up the indices created before a specific date?
For example, how could I get the indices created before 2015-12-15?
The only time I really care about which indices were created is when I want to close/delete them using Curator. Curator has "age"-type features built in, if that's also your use case.
I think you are looking for the Indices Query; have a look here.
Here is an example:
GET /_search
{
  "query": {
    "indices": {
      "query": {
        "term": { "description": "*" }
      },
      "indices": ["2015-01-*", "2015-12-*"],
      "no_match_query": "none"
    }
  }
}
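Note that the indices query was deprecated in Elasticsearch 5.0 and removed in later versions. On those versions you can target index patterns directly in the request path, or filter on the built-in _index metadata field; a rough equivalent (assuming the same description field) would be:
GET /_search
{
  "query": {
    "bool": {
      "must": { "term": { "description": "*" } },
      "filter": { "wildcard": { "_index": "2015-12-*" } }
    }
  }
}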
Each index has a creation_date setting in its metadata.
Since the number of indices is supposed to be quite small, there's no such feature as 'searching for indices'. You just get their metadata and filter them inside your app. The creation_date is also available via the _cat API.
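For example, the creation date can be listed (and sorted on) with the _cat API; exact column names may vary slightly by version:
GET _cat/indices?v&h=index,creation.date.string&s=creation.date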
