I want to query Elasticsearch for the index from a day before the current date in Logstash, using the Elasticsearch input plugin.
I tried the following config for Logstash:
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd-6}"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 500
    scroll => "5m"
    docinfo => true
  }
}
output {
  stdout { codec => rubydebug }
}
Can someone help me with how to do it?
You can use a date math index name within your Elasticsearch query:
Date math index name resolution enables you to search a range of
time-series indices, rather than searching all of your time-series
indices and filtering the results or maintaining aliases. Limiting the
number of indices that are searched reduces the load on the cluster
and improves execution performance. For example, if you are searching
for errors in your daily logs, you can use a date math name template
to restrict the search to the past two days.
Almost all APIs that have an index parameter, support date math in the
index parameter value.
For instance, to search yesterday's index, assuming the indices use the default Logstash index name format logstash-YYYY.MM.dd:
GET /<logstash-{now/d-1d}>/_search
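In the Logstash elasticsearch input this would look roughly like the sketch below. This assumes the plugin passes the date math expression through to the search request; depending on the plugin version you may need the URL-encoded form %3Clogstash-%7Bnow%2Fd-1d%7D%3E instead.
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    # <logstash-{now/d-1d}> resolves to yesterday's daily index
    index => "<logstash-{now/d-1d}>"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 500
    scroll => "5m"
    docinfo => true
  }
}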
Hi, I am trying to create a quarterly index in ES using Logstash. I know how to create a weekly index in Logstash.
Here is my piece of configuration:
output {
  elasticsearch {
    hosts => "localhost"
    index => "logstash-%{+xxxx.ww}"
  }
  stdout {}
}
But how can I create a quarterly index, or how can I get the month into a variable so I can calculate the quarter?
Thanks
Date math currently doesn't support specifying quarters (Q), and an issue is still open to improve upon this.
Ideally it would be nice if we could circumvent this shortcoming with something like now-3M/3M, but multiples of rounding are not supported either.
Until the issue is resolved, one solution would be to use monthly indices and when a quarter has gone, reindex the three previous monthly indices into a single quarter index.
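For instance, once a quarter is over, a _reindex call along these lines could merge the three monthly indices into one quarterly index (the index names here are only placeholders for your own naming scheme):
POST _reindex
{
  "source": { "index": ["logstash-2018.01", "logstash-2018.02", "logstash-2018.03"] },
  "dest": { "index": "logstash-2018-q1" }
}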
Another solution is to compute the quarter beforehand in a Logstash ruby filter and then use it in the elasticsearch output, like this:
filter {
  ruby {
    # Date comes from the Ruby standard library
    init => "require 'date'"
    # builds a string like "2018-1" for the current quarter
    code => "event.set('quarter', Date.today.year.to_s + '-' + (Date.today.month / 3.0).ceil.to_s)"
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "logstash-%{quarter}"
  }
}
I need to read messages (whose content is logs) from RabbitMQ with Logstash and then send them to Elasticsearch so I can build monitoring visualizations in Kibana. So I wrote the input to read from RabbitMQ in Logstash like this:
input {
  rabbitmq {
    queue => "testLogstash"
    host => "localhost"
  }
}
And I wrote the output configuration to store the data in Elasticsearch like this:
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "d13-%{+YYYY.MM.dd}"
  }
}
Both of them are placed in myConf.conf
In the content of each message there is a JSON document that contains fields like this:
{
  "mDate": "MMMM dd YYYY, HH:mm:ss.SSS",
  "name": "test name"
}
But there are two problems. First, there is no date field to pick when creating a new index pattern (Time Filter field name). Second, if I use mDate in place of the default @timestamp, the field is not offered when building time-based graphs. I think the reason for this is the data type of the field: it should be a date, but it is treated as a string.
I tried to convert the value of the field to a date with a mutate filter in the Logstash config, like this:
filter {
  mutate {
    convert => { "mdate" => "date" }
  }
}
Now, two questions arise:
1. Is this the problem? If yes, what is the right solution to fix it?
2. My main need is to use the time when messages were put into the queue, not when Logstash picks them up. What is the best solution?
If you don't specify a value for @timestamp, you should get the current system time when Elasticsearch indexes the document. With that, you should be able to see items in Kibana.
If I understand you correctly, you'd rather use your mDate field for @timestamp. For this, use the date{} filter in Logstash.
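A minimal sketch of such a date filter, assuming mDate really arrives in the layout shown above (Joda-style pattern; adjust it if your actual values differ):
filter {
  date {
    # parse mDate (e.g. "January 05 2019, 14:23:11.123") into @timestamp
    match => [ "mDate", "MMMM dd yyyy, HH:mm:ss.SSS" ]
  }
}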
I have a MySQL database with a table that contains two important fields, title and age_range.
That table stores values like '45;60' for documents aimed at users between 45 and 60 years old, '18;70' for users between 18 and 70 years old, and so on...
Now I would like to run the query 'test' on the field title with the filter '18;50' on the field age_range, and get back all documents matching 'test' whose age range overlaps this interval, including the two examples above.
For reference, I use Logstash to index my data.
How can I achieve this?
Is there any processing to do while indexing my data with Logstash?
Any filter or tokenizer to use in the ES analyzer while indexing?
Thank you in advance
You can split the data into two fields with a grok filter. To ship the data to Elasticsearch, you can use the Logstash jdbc input and the elasticsearch output. You can configure your pipeline like below:
input {
  jdbc {
    # Configuration of the jdbc input, see
    # https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html
  }
}
filter {
  # Split the age_range field into two separate numeric fields
  grok {
    match => { "age_range" => "%{NUMBER:age_range_bottom:int};%{NUMBER:age_range_top:int}" }
  }
}
output {
  elasticsearch {
    # elasticsearch output configuration
  }
}
Analysis depends on how you want to search these fields. If you only need the range filter, the default numeric mapping of the two new fields is enough, but you should do some analysis work on title; for example, you can follow an autocomplete analyzer example to handle partial matching.
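With the range split into two numeric fields, an overlap search for 'test' with the filter '18;50' could then look roughly like the sketch below (the index name my-index is a placeholder, and the field names follow the examples above):
GET /my-index/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "title": "test" } },
        { "range": { "age_range_bottom": { "lte": 50 } } },
        { "range": { "age_range_top": { "gte": 18 } } }
      ]
    }
  }
}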
I have a Logstash configuration that uses the following in the output block in an attempt to mitigate duplicates.
output {
  if [type] == "usage" {
    elasticsearch {
      hosts => ["elastic4:9204"]
      index => "usage-%{+YYYY-MM-dd-HH}"
      document_id => "%{[@metadata][fingerprint]}"
      action => "update"
      doc_as_upsert => true
    }
  }
}
The fingerprint is calculated from a SHA1 hash of two unique fields.
This works when Logstash sees the same doc in the same index, but since the command that generates the input data doesn't have a reliable rate at which different documents appear, Logstash will sometimes insert duplicate docs into a different date-stamped index.
For example, the command that Logstash runs to get the input generally returns the last two hours of data. However, since I can't definitively tell when a doc will appear/disappear, I run the command every fifteen minutes.
This is fine when the duplicates occur within the same hour. However, when the hour or day date stamp rolls over and the document still appears, Elasticsearch/Logstash thinks it's a new doc.
Is there a way to make the upsert work cross index? These would all be the same type of doc, they would simply apply to every index that matches "usage-*"
A new index is an entirely new keyspace and there's no way to tell ES to not index two documents with the same ID in two different indices.
However, you could prevent this by adding an elasticsearch filter to your pipeline which would look up the document in all indices and if it finds one, it could drop the event.
Something like this would do (note that usages would be an alias spanning all usage-* indices):
filter {
  elasticsearch {
    hosts => ["elastic4:9204"]
    index => "usages"
    query => "_id:%{[@metadata][fingerprint]}"
    fields => {"_id" => "other_id"}
  }
  # if the document was found, drop this one
  if [other_id] {
    drop {}
  }
}
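For reference, the [@metadata][fingerprint] field used in both outputs could be produced with a fingerprint filter along these lines (field_a and field_b are placeholders for your two unique fields, and the key is arbitrary):
filter {
  fingerprint {
    # hash the two unique fields together into a stable document ID
    source => ["field_a", "field_b"]
    concatenate_sources => true
    method => "SHA1"
    key => "fingerprint-key"
    target => "[@metadata][fingerprint]"
  }
}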
I still have problems figuring out how to tell Logstash to send documents to a dynamic index based on a document field. Furthermore, this field must be transformed in order to get the "real" index name at the very end.
Given that there is a field "time" (a UNIX timestamp), this field already gets transformed with a "date" filter into a DateTime object for Elasticsearch.
Additionally, it should serve as the index name (YYYYMM). The index should NOT be derived from @timestamp, which is not touched.
Example:
{...,"time":1453412341,...}
shall go to the index 201601.
I use the following config:
filter {
  date {
    match => [ "time", "UNIX" ]
    target => "time"
    timezone => "Europe/Berlin"
  }
}
output {
  elasticsearch {
    index => "%{time}%{+YYYYMM}"
    document_type => "..."
    document_id => "%{ID}"
    hosts => "..."
  }
}
Sadly, it's not working. Any idea how to achieve that?
Thanks a lot!
The "%{+YYYYMM}" says to use the date values from #timestamp. If you want an index named after the YYYYMM in %{time}, you need to make a string out of that date field and then reference that string in the output stanza. There might be a mutate{} that would do it, or drop into ruby{}.
In most installations, you want to set @timestamp to the event's own time. The default of Logstash's own time is not very useful (imagine if your events were delayed by an hour during processing). If you did that, then %{+YYYYMM} would work just fine.
This is because the index name is created based on UTC time by default.