I have just started to deal with Grafana and Elasticsearch. In the docs I see things like @timestamp or @value again and again. Is that a variable that you set somewhere?
Can this be used with any Elasticsearch database? I connected Elasticsearch without Metricbeat… and I only get to the timestamp when I walk over an object, i.e. object.timestamp.
@ is used in Logstash (part of the Elastic stack), as in @timestamp. The @timestamp field is set by default, but you can change that, and another field can be used instead of @timestamp (you need a field that can be used as a time or date for your graph to work). For example, if you have a time field, you can use it instead of @timestamp by just typing 'time' in the time-field box of your query.
I don't know much about Logstash, but since they are both part of the Elastic stack, I assume the @ fields will show up in the Elasticsearch database as well. Hope this helps a little bit.
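For what it's worth, what Grafana effectively asks Elasticsearch for is a time-bucketed aggregation on whichever date field you configure; @timestamp is only the Logstash default. A rough sketch of that kind of query with the Python client, where the index name "my-logs" and the field name "time" are made-up examples:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# bucket the last hour into one-minute intervals on the chosen date field
# ("interval" is the 5.x/6.x parameter name; newer versions use fixed_interval)
resp = es.search(
    index="my-logs",
    body={
        "size": 0,
        "query": {"range": {"time": {"gte": "now-1h", "lte": "now"}}},
        "aggs": {
            "per_minute": {
                "date_histogram": {"field": "time", "interval": "1m"}
            }
        },
    },
)
print(resp["aggregations"]["per_minute"]["buckets"])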
In Kibana (Elasticsearch v6.8) I'm storing documents containing a date field and a LaunchTime field, and I have a scripted field uptime as their difference (in minutes):
(doc['date'].value.millis - doc['LaunchTime'].value.millis) / 1000 / 60
I'm trying to create a monitor (under Alerting) on the max value of this field of the index, but the field uptime doesn't show up in the list of fields I can run a max query on. Its type is number, and in visualisations I can do max/min etc. displays of this field.
Is this a limitation of Kibana alerting - that I can't use a scripted field? Or is there some way I can make it available to use?
I'm afraid it is a limitation of Kibana's scripted fields. See this post about the same subject, which refers to the official scripted-fields documentation. I believe the watchers are handled by ES itself, while scripted fields are handled by Kibana (they can be used in Discover and visualisations because Kibana is handling those too).
But have no fear! You already have the script for the calculation, so you could just add it to Logstash to add the field to your actual documents when you index them. That would let you use it for watchers AND would probably reduce the load at query time, since the value is only calculated once, when you ingest it. Then you could run an update by query with the script to add the field to your existing documents.
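A minimal, untested sketch of that update by query with the Python client. The index name "instances" is made up, and the script assumes "date" and "LaunchTime" are stored in _source as ISO-8601 strings with a timezone (e.g. "2019-05-01T10:00:00Z"); adapt the parsing if your documents look different:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.update_by_query(
    index="instances",
    body={
        # only touch documents that don't have the field yet
        "query": {"bool": {"must_not": {"exists": {"field": "uptime"}}}},
        "script": {
            "lang": "painless",
            # update_by_query scripts work on the raw _source (not doc values),
            # so the two dates are parsed here; the result is in minutes
            "source": (
                "long d = Instant.parse(ctx._source.date).toEpochMilli();"
                "long l = Instant.parse(ctx._source.LaunchTime).toEpochMilli();"
                "ctx._source.uptime = (d - l) / 1000 / 60;"
            ),
        },
    },
)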
If you don't use Logstash, you could look into ES's ingest pipelines, but it's a rather advanced subject and I'm not sure whether it was already implemented in 5.x.
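A sketch of what such a pipeline could look like on 6.8, with the same date-format assumption as above; the pipeline id "compute-uptime" is made up:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.ingest.put_pipeline(
    id="compute-uptime",
    body={
        "description": "add uptime (in minutes) at index time",
        "processors": [
            {
                "script": {
                    "lang": "painless",
                    # ingest scripts address document fields directly on ctx
                    "source": (
                        "long d = Instant.parse(ctx.date).toEpochMilli();"
                        "long l = Instant.parse(ctx.LaunchTime).toEpochMilli();"
                        "ctx.uptime = (d - l) / 1000 / 60;"
                    ),
                }
            }
        ],
    },
)

# documents sent through the pipeline get the field added server-side
es.index(
    index="instances",
    doc_type="_doc",
    pipeline="compute-uptime",
    body={"date": "2019-05-01T12:30:00Z", "LaunchTime": "2019-05-01T10:00:00Z"},
)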
Is there a way to get the date and time that an Elasticsearch document was written?
I am running ES queries via Spark and would prefer NOT to look through all documents that I have already processed. Instead, I would like to read only the documents that were ingested between the last time the program ran and now.
What is the best, most efficient way to do this?
I have looked at:
- Updating each document to add a field containing an array of booleans for whether it has been looked at by each analytic. The negative is waiting for the update to occur.
- The index-per-time-frame method, which would break the current indexes down into smaller ones, e.g. by hour. The negative I see is the number of open file descriptors.
- ??
Elasticsearch version 5.6
I posted the question on the Elasticsearch discussion board and it appears that using an ingest pipeline is the best option.
> I am running ES queries via Spark and would prefer NOT to look through all documents that I have already processed. Instead, I would like to read only the documents that were ingested between the last time the program ran and now.
A workaround could be:
While inserting data into Elasticsearch using Logstash, Logstash appends an @timestamp key to each document, which represents the time (in UTC) at which the document was created; alternatively, we can use an ingest pipeline.
After that we can query based on that timestamp (see the sketch below).
For more on this, please have a look at: Mapping changes
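A rough sketch of the ingest-pipeline variant with the Python client; the pipeline id "add-ingest-time", the field name "ingest_time" and the index name "my-logs" are all made up for illustration:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# a pipeline whose set processor stamps each document with the node's ingest time
es.ingest.put_pipeline(
    id="add-ingest-time",
    body={
        "description": "record when the document was ingested",
        "processors": [
            {"set": {"field": "ingest_time", "value": "{{_ingest.timestamp}}"}}
        ],
    },
)

# documents indexed through the pipeline get the field automatically;
# depending on the ES version you may need to map ingest_time as a date explicitly
es.index(
    index="my-logs",
    doc_type="doc",
    pipeline="add-ingest-time",
    body={"message": "hello"},
)

The field can then be queried with an ordinary range filter, just like @timestamp.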
There is no way to ask ES to insert a timestamp at index time
Elasticsearch doesn't have such functionality.
You need to save a date manually with each document. In that case you will be able to search by date range.
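A minimal sketch of that approach with the Python client, where the index name, the field name "indexed_at" and the last-run bookkeeping are made-up examples:

from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# at write time, stamp each document yourself
doc = {"message": "hello", "indexed_at": datetime.now(timezone.utc).isoformat()}
es.index(index="my-logs", doc_type="doc", body=doc)

# at read time, pick up only what arrived since the previous run
last_run = "2018-01-01T00:00:00+00:00"  # persisted somewhere by your job
resp = es.search(
    index="my-logs",
    body={"query": {"range": {"indexed_at": {"gt": last_run, "lte": "now"}}}},
)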
I am looking for a way to measure the time Logstash takes to output data into Elasticsearch.
- There is the elapsed filter (https://www.elastic.co/guide/en/logstash/current/plugins-filters-elapsed.html), which I think can be used to measure the time taken to process a message through all the configured filters, but not the time taken to output to Elasticsearch.
- I also tried with a batch file with something like
echo starttime = %time%
cd c:\Temp\POC\Mattias\logstash-2.0.0\logstash-2.0.0\bin
logstash agent -f first-pipeline.conf
echo endtime = %time%
The problem with this approach is that Logstash doesn't stop/exit after finishing a given input file.
Any help is highly appreciated!
Thanks and regards,
Priya
The elapsed{} filter is for computing the difference between two events (start/stop pairs, etc).
Logstash sets @timestamp to the current time. If you don't replace it (via the date{} filter), it will represent the time that Logstash received the document.
Elasticsearch had a feature called _timestamp that would set a field by that name to the time on the Elasticsearch server. For some reason, they deprecated that feature in version 2.
As of right now, there is no supported way to get the time at which Elasticsearch indexed the data, so there is no supported way to determine the lag between Logstash and Elasticsearch, or the processing time required by Elasticsearch.
I was hoping that you could add a date field in your mapping and use the null_value to default the value to 'now', but that's not supported. Hopefully, they'll support that and reinstate this very useful feature.
I'm having some issues getting Elasticsearch to interpret an epoch-millis timestamp field. I have some old Bro logs I want to ingest and have them show up in the proper order and spacing. Thanks to Logstash filter to convert "$epoch.$microsec" to "$epoch_millis", I've been able to convert the field holding the Bro timestamp to the proper number of digits. I've also inserted a mapping into Elasticsearch for that field, and it says the type is "date" with the default format. However, when I go and look at the entries, the field still has a little "t" next to it instead of a little clock, and hence I can't use it as the time filter field in Kibana.
Anyone have any thoughts or dealt with this before? Unfortunately it's a standalone system, so I would have to manually enter any of the configs I'm using.
I did try converting my field "ts" back to an integer after using the method described in the link above, so it should be a Logstash integer before hitting the Elasticsearch mapping.
So I ended up just deleting all my mappings in Kibana and Elasticsearch. I then resubmitted everything and this time it worked. There must have been some old junk in there that was messing me up, but now it's working great!
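For anyone landing here later, the relevant part is simply mapping the field as a date with the epoch_millis format before (re)indexing; a sketch with the Python client, where the index name "bro-logs" and the type name "doc" are assumptions:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.create(
    index="bro-logs",
    body={
        "mappings": {
            "doc": {
                "properties": {
                    # "ts" arrives as a 13-digit integer: milliseconds since the epoch
                    "ts": {"type": "date", "format": "epoch_millis"}
                }
            }
        }
    },
)

After a mapping change like this, refreshing the index pattern's field list in Kibana is usually what makes the little clock show up.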
I have imported weblogs into Elasticsearch via Logstash. This has completed successfully.
I have a field in the log file (clientip) that is always populated and another field that is sometimes populated (trueclientip). I want to aggregate based on the coalescing of the two; i.e. if trueclientip is not empty then use it, otherwise use clientip.
How can I do this with the Visualisation in Kibana? Do I need to generate a scripted field or is there another approach?
Thanks.
Define a scripted field with this formula: doc['trueclientip'].value ? doc['trueclientip'].value : doc['clientip'].value and use it in your aggregations.
But there is a downside to this scripted-fields functionality AND the ip type: it seems what you get back from the script is the number itself (which is logical, because scripted fields in Kibana 4 only use Lucene expressions as their language), not the string representation. IPs are internally stored as long numbers in Lucene.
For example, 127.0.0.1 is represented internally as 2130706433. And this is what you will see in Visualize.
It's not ideal, indeed, and it would be good to have a more advanced scripting language available for scripted fields, but a GitHub issue for that already exists.
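If you are on a newer stack (5.x+), Painless makes this kind of coalesce a bit nicer, and you can push it straight into an aggregation instead of a Kibana scripted field. A rough sketch with the Python client, assuming an index called "weblogs" and that both fields are present in the mapping:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

resp = es.search(
    index="weblogs",
    body={
        "size": 0,
        "aggs": {
            "client_ips": {
                "terms": {
                    "script": {
                        "lang": "painless",
                        # prefer trueclientip when the document has one,
                        # otherwise fall back to clientip
                        "source": (
                            "doc['trueclientip'].size() > 0"
                            " ? doc['trueclientip'].value"
                            " : doc['clientip'].value"
                        ),
                    },
                    "size": 20,
                }
            }
        },
    },
)
for bucket in resp["aggregations"]["client_ips"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])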