My Logstash service sends logs to Elasticsearch as daily indices.
elasticsearch {
  hosts => [ "127.0.0.1:9200" ]
  index => "%{type}-%{+YYYY.MM.dd}"
}
Does Elasticsearch provide an API to look up the indices created before a specific date?
For example, how could I get the indices created before 2015-12-15?
The only time I really care about when indices were created is when I want to close/delete them using Curator. Curator has age-based features built in, if that's also your use case.
I think you are looking for the Indices Query; have a look here.
Here is an example:
GET /_search
{
  "query": {
    "indices": {
      "query": {
        "term": { "description": "*" }
      },
      "indices": ["2015-01-*", "2015-12-*"],
      "no_match_query": "none"
    }
  }
}
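If you already know which index name patterns you are interested in, you can also skip the indices query entirely and just list the patterns in the request path (a quick sketch; the patterns are the ones from the example above, and match_all is only a placeholder query):
GET /2015-01-*,2015-12-*/_search
{
  "query": {
    "match_all": {}
  }
}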
Each index has a creation_date setting.
Since the number of indices is typically quite small, there is no dedicated "search for indices" feature; you just fetch their metadata and filter it in your application. The creation_date is also available via the _cat API.
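For example, something along these lines should work (a sketch: the column names come from the _cat/indices documentation, sorting with s= requires a reasonably recent release, and the index name is just a placeholder):
# list indices together with their creation dates, oldest first
GET _cat/indices?h=index,creation.date.string&s=creation.date
# or inspect a single index's settings (creation_date is epoch milliseconds)
GET your-index-name/_settings?filter_path=*.settings.index.creation_date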
I have two indices, "indexname" and "indexnamelookup", in my Elasticsearch instance. I have created the index pattern indexname* in Kibana and I am trying to join the two fields "IP" (a field in indexname) and "location.IP" (a field in indexnamelookup).
GET /indexname*/_search?q=*
{
  "query": {
    "multi_match": {
      "query": "",
      "fields": [
        "IP",
        "location.IP"
      ]
    }
  }
}
The above query works fine in Elasticsearch, but it does not work in Kibana. Has anyone else faced a similar situation?
The ?q=* in your query turns it into a match-all query that ignores the request body.
I assume we're talking about Discover in Kibana: The query location.IP : "foo" or IP : "foo" will work.
Alternatively you can use your Elasticsearch query in Kibana as well if you add a filter and then use the Query DSL:
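For example, something like the following could go into the filter's Edit Query DSL box (a sketch; "1.2.3.4" is only a placeholder value, since the empty query string in the original request is part of the problem):
{
  "query": {
    "multi_match": {
      "query": "1.2.3.4",
      "fields": [
        "IP",
        "location.IP"
      ]
    }
  }
}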
I have an index in Elasticsearch that is populated with JSON documents, each associated with a timestamp.
I want to delete data from that index.
curl -XDELETE http://localhost:9200/index_name
The above command deletes the whole index. My requirement is to delete certain data after a time period (for example, after 1 week). Can I automate the deletion process?
I tried deleting with Curator, but I think it deletes whole indices based on their timestamp, not data within an index. Can Curator be used to delete data within an index?
It would be great to know whether any of the following would work:
Can curl be automated to delete data from an index after a period?
Can Curator be automated to delete data from an index after a period?
Is there any other way, like a Python script, to do the job?
References are taken from the official Elasticsearch site.
Thanks a lot in advance.
You can use the Delete By Query API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
Basically it will delete all the documents matching the provided query:
POST twitter/_delete_by_query
{
"query": {
"match": {
"message": "some message"
}
}
}
But the suggested way is to create separate indices for different periods (days, for example) and use Curator to drop them periodically based on age:
...
logs_2019.03.11
logs_2019.03.12
logs_2019.03.13
logs_2019.03.14
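With that layout, expiring old data is just a matter of deleting whole indices once they age out, which is exactly what Curator automates; for example (using one of the hypothetical index names from the list above):
DELETE logs_2019.03.11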
Simple example using Delete By Query API:
POST index_name/_delete_by_query
{
  "query": {
    "bool": {
      "filter": {
        "range": {
          "timestamp": {
            "lte": "2019-06-01 00:00:00.0",
            "format": "yyyy-MM-dd HH:mm:ss.S"
          }
        }
      }
    }
  }
}
This deletes records whose "timestamp" field (the date/time, within the record, at which they occurred) falls before the given date. You can run a search first to get a count of what will be deleted:
GET index_name/_search
{
  "size": 1,
  "query": {
    -- same bool/filter/range query as above --
  }
}
It is also convenient to use relative (offset) dates, e.g.
"lte": "now-30d"
which would delete all records older than 30 days.
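Putting that together, a rolling 30-day purge would look roughly like this (same hypothetical index and field names as above):
POST index_name/_delete_by_query
{
  "query": {
    "range": {
      "timestamp": {
        "lte": "now-30d"
      }
    }
  }
}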
You can always delete single documents by using the HTTP request method DELETE.
To find out which ids you want to delete, you need to query your data, probably using a range filter/query on your timestamp.
As you are interacting with the REST API, you can do this with Python or any other language. There is also a Java client if you prefer a more direct API.
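As a rough sketch (index name, field name, and document id are placeholders; on older versions with mapping types the path would contain the type instead of _doc), you could first find the old documents with a range query and then delete them individually by id:
# find documents older than one week
GET index_name/_search
{
  "query": {
    "range": {
      "timestamp": {
        "lte": "now-7d"
      }
    }
  }
}
# then delete one document by its id
DELETE index_name/_doc/SOME_DOCUMENT_ID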
I'm trying to use the Visualize feature in Kibana to plot a monthly date_histogram graph that counts the number of messages in my system. The message type has a sent_at field that is stored as a number (epoch time).
Although I can do that just fine with an Elasticsearch query
POST /_all/message/_search?size=0
{
"aggs" : {
"monthly_message" : {
"date_histogram" : {
"field" : "sent_at",
"interval" : "month"
}
}
}
}
I ran into a problem in Kibana saying No Compatible Fields: The "myindex" index pattern does not contain any of the following field types: date
Is there a way to get Kibana to use number field as date?
Not to my knowledge. Kibana uses the index mapping to find date fields; if no date field exists, Kibana won't infer one from the other number fields.
What you can do is add another field called sent_at_date to your mapping, then use the Update By Query API to copy the sent_at field into that new field, and finally recreate your index pattern in Kibana.
It goes basically like this:
# 1. add a new field to your mapping
PUT myindex/_mapping/message
{
"properties": {
"sent_at_date": {
"type": "date"
}
}
}
# 2. update all your documents
POST myindex/_update_by_query
{
"script": {
"source": "ctx._source.sent_at_date = ctx._source.sent_at"
}
}
And finally recreate your index pattern in Kibana. You should see a new field called sent_at_date of type date that you can use in Kibana.
I have been using Logstash to migrate one index to another. Recently I tried to reindex a certain amount of data from a large dataset in my local environment, so I used the following configuration for the migration:
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "old_index"
    query => '{"query":{"match_all":{}},"size":10 }'
  }
}
filter {
  mutate {
    remove_field => [
      "@version",
      "@timestamp"
    ]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "new_index"
    document_type => "contact"
    manage_template => false
    document_id => "%{contactId}"
  }
}
But this reindexes all of the documents from old_index into new_index, whereas I was expecting just 10 documents to be reindexed into new_index.
Am I missing some concept about using Logstash with Elasticsearch?
The elasticsearch input doesn't perform a conventional search; it uses a scan/scroll search instead. This means that all data will be retrieved from the index, and the size parameter only defines how much data is fetched during each scroll, not how much data is fetched altogether.
Also note that the size parameter inside the query itself has no effect; you need to use the size setting of the elasticsearch input rather than specifying it in the query.
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "old_index"
    query => '{"query":{"match_all":{}}}'
    size => 10        # <--- size goes here
  }
}
That being said, if you're running ES 2.3 or later, there's a way to achieve what you desire using the Reindex API, like this:
POST /_reindex
{
"size": 10,
"source": {
"index": "old_index"
},
"dest": {
"index": "new_index"
}
}
The goal is to build an Elasticsearch index containing only the most recent document from each group of related documents, in order to track the current state of some monitoring counters and states.
I have crafted a simple Elasticsearch aggregation query:
{
  "size": 0,
  "aggs": {
    "group_by_monitor": {
      "terms": {
        "field": "monitor_name"
      },
      "aggs": {
        "get_latest": {
          "top_hits": {
            "size": 1,
            "sort": [
              {
                "timestamp": {
                  "order": "desc"
                }
              }
            ]
          }
        }
      }
    }
  }
}
It groups related documents into buckets and selects the most recent document in each bucket.
Here are the different ideas I had to get the job done:
directly use the aggregation query to push the results into the index, but it does not seem possible: Is it possible to put the results of an ElasticSearch aggregation back into the index?
use the Logstash Elasticsearch input plugin to execute the aggregation query and the Elasticsearch output plugin to push into the index, but it seems the input plugin only looks at the hits field and cannot handle aggregation results: Aggregation Query possible input ES plugin!
use the Logstash http_poller plugin to get a JSON document, but it does not seem to allow specifying a body for the HTTP request!
use the Logstash exec plugin to execute cURL commands to get the JSON, but this seems quite cumbersome and is my last resort.
use the NEST API to build a basic application that does the polling, extracts the results, cleans them, and injects the resulting documents into the target index, but I'd like to avoid adding a new tool to maintain.
Is there a way of accomplishing this that is not overly complex?
Edit the logstash.conf file as follows:
input {
  elasticsearch {
    hosts => "localhost"
    index => "source_index_name"
    type => "index_type"
    query => '{Query}'
    size => 500
    scroll => "5m"
    docinfo => true
  }
}
output {
  elasticsearch {
    index => "target_index_name"
    document_id => "%{[@metadata][_id]}"
  }
}