I'm a newbie learning Elasticsearch. I'm pushing data from Kafka to ES using Kafka Connect. The index just grows and grows into terabytes, and searching it is painful, which made me realize I should create a new index hourly/daily, picking the date from the document.
My document looks like:
{
  "base": {
    "message": "",
    "timestamp": "2019-08-09T13:20:11.877Z",
    "type": "vpc"
  },
  "ecs": {
    "version": "1.0.0"
  }
}
Now, could I use the timestamp and type from the document to form an index name like 'vpc-2019.08.09-1'? That would let me create and direct documents to an index based on type and timestamp.
Taking a sample: suppose we have an alias 'foo' defined as a time-based alias with the index name format foo-yyyy.mm.dd.
We get a document on Jan 10, 2018 to write. The ES client infers that the target index is foo-2018.01.10 and writes the data to it, creating the index if required.
We get another document on Jan 11, 2018. The inferred index is foo-2018.01.11, and the document is written there.
I came across this approach, but it is based on system time, which doesn't help.
Any suggestions?
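One way to get exactly this behavior is the date_index_name ingest processor, which derives the target index from a date field in the document rather than from system time. A minimal sketch, assuming the document shape above (the pipeline name is illustrative, and templating the prefix from base.type is an assumption on my part):
PUT _ingest/pipeline/route-by-doc-time
{
  "processors": [
    {
      "date_index_name": {
        "field": "base.timestamp",
        "index_name_prefix": "{{base.type}}-",
        "date_rounding": "d",
        "index_name_format": "yyyy.MM.dd"
      }
    }
  ]
}
Indexing the sample document through this pipeline should land it in vpc-2019.08.09 regardless of which index name the request carries; from 6.5 on you can also set it as an index's default_pipeline so producers such as Kafka Connect don't need to pass a pipeline parameter.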
I am trying to figure out how to create a monthly rolling index with custom routing (a multi-tenancy scenario), with these requirements:
WRITE flow: Each document has a timestamp, and it should be indexed into the appropriate backing index based on that timestamp, not into the latest index. Write requests also carry a custom routing key (e.g. customerId) so they hit a specific shard.
READ flow: Requests must be routed to all backing indexes. They carry a custom routing key (e.g. customerId), and results must be aggregated and returned.
Index creation: Rolling the index over should be automated, and each index should support the custom routing key (e.g. customerId).
Wondering what options are available?
This very feature, called time series data streams (TSDS), is coming in the upcoming ES 8.5 release.
The big difference between normal data streams and time series data streams is that all backing indexes of a TSDS are sorted by timestamp, and each document is written into the backing index whose time frame covers the document's timestamp, even if that backing index is not the current write index. So if your data source lags (even by a few hours), the data still lands in the right index. Also, all documents belonging to the same dimension (i.e. customerId in your case) end up on the same shard.
Another difference is that the ID of each document is computed as a function of the timestamp and the dimension(s) it contains, which means there can only be a single occurrence of a given timestamp/dimension pair (i.e. no duplicates).
Technically, you can already achieve pretty much the same thing with normal data streams. However, the underlying optimizations for keeping documents of the same dimension on the same shard, and the ability to write documents into older backing indexes, won't be available, since you can only index documents into the current write index.
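To illustrate, a minimal sketch of an index template for such a TSDS, assuming a customerId dimension and a numeric value metric (all names here are illustrative):
PUT _index_template/customer-metrics
{
  "index_patterns": ["metrics-customers*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.mode": "time_series",
      "index.routing_path": ["customerId"]
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "customerId": { "type": "keyword", "time_series_dimension": true },
        "value": { "type": "double", "time_series_metric": "gauge" }
      }
    }
  }
}
Writes then target the data stream name and are placed into the backing index that covers each document's @timestamp, while searches against the data stream name fan out across all backing indexes, which covers the READ flow above.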
I'm trying to decide how to structure the data in Elasticsearch.
I have a system that produces metrics on a daily basis. I would like to put those metrics into ES so I can do some advanced querying/sorting. I only care about the most recent data in there, but the system producing it can also deliver it late.
Currently I can think of two options:
I can have one index with a date field containing the date the metric was created. I am unsure, however, how to write the query so that if multiple days' worth of data are in the index, I filter it down to just the most recent set.
I could also split the data into two indexes (recent and past) and have some process that migrates data from the recent index to the past one. The challenge I see there is downtime while data is being moved into, or added to, the recent index.
Thoughts?
A common approach to solving this problem with Elasticsearch is to store the data in one form that allows historic querying, and again in a second form that allows querying the most recent data. For example, if your metric update looked like:
{
  "type": "OperationsPerSecond",
  "name": "Questions",
  "value": 10
}
Then it can be indexed into a 'current values' index using a composite key constructed from the document (obviously, for this to work you need to be able to construct a composite key from your document!). For example, the identity of this document might be the type and name concatenated. You then leverage upserts via the update API so every write for the same key hits the same document:
POST current_metrics/_update/OperationsPerSecond-Questions
{
  "doc": {
    "type": "OperationsPerSecond",
    "name": "Questions",
    "value": 10
  },
  "doc_as_upsert": true
}
Every time you call this with the same composite key, it updates the existing document rather than creating a new one. This gives you an index with a single record per metric you monitor, and you can query that index to get your most recent values.
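For example, the latest value of a single metric is then just a GET by its composite key, and all current values of one type can be listed with a simple search (assuming default dynamic mappings, hence the .keyword subfield):
GET current_metrics/_doc/OperationsPerSecond-Questions
GET current_metrics/_search
{
  "query": {
    "term": { "type.keyword": "OperationsPerSecond" }
  }
}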
To store your historic data, you change the primary-key strategy; the most straightforward option is probably to use the index API and let Elasticsearch generate an ID for you:
POST all_metrics/_doc/
{
  "type": "OperationsPerSecond",
  "name": "Questions",
  "value": 10
}
This API creates a new document for every request made to it. So as long as your data contains something you can use in an Elasticsearch range query, such as a createdDate field holding a date-time value, you can query your historic data.
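For instance, assuming each historic document carries a createdDate field as suggested, the last seven days of data could be fetched with a range query:
GET all_metrics/_search
{
  "query": {
    "range": {
      "createdDate": {
        "gte": "now-7d/d",
        "lt": "now"
      }
    }
  }
}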
The main thing is: don't worry about duplicating your data for different purposes; Elasticsearch does a good job of compressing this stuff on disk and in memory. Storing data multiple times is called denormalization, and it is a pretty common technique in data warehousing and big data.
Does anyone have experience with entity-centric indexing in Elasticsearch, using Python and Groovy scripts to reindex an event-centric index into entity-centric indexes, where every log message has its own index or the like?
I've got a lot of messages like the following:
Jul 23 09:24:16 msda msda-core[5147]: 1563866656876839.mt
Jul 23 09:24:18 msda msda-core[5210]: 1563866656876839.0.dn
where I have many messages sharing the same ID number, some with the .mt suffix and some with the .dn suffix.
For each message, I always need to find the one with the same ID number and the corresponding .dn suffix, if such a .dn message appears within one hour.
Any idea would be appreciated!
If you are running v7.2 or later, I would recommend using Elasticsearch data frame transforms to create an entity-centric index grouped by the ID numbers: https://www.elastic.co/guide/en/elastic-stack-overview/current/ml-dataframes.html
Use a scripted metric on the min and max timestamps to calculate the duration. There is a good example in the Elastic documentation: https://www.elastic.co/guide/en/elastic-stack-overview/7.3/example-clientips.html
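Roughly, a transform for this case might look like the sketch below. It assumes the ID and timestamp have already been parsed out of the raw message into message_id and @timestamp fields (e.g. by Logstash or an ingest pipeline), and note that on 7.2-7.4 the endpoint is _data_frame/transforms rather than _transform:
PUT _transform/msda-pairs
{
  "source": { "index": "msda-logs" },
  "dest": { "index": "msda-entities" },
  "pivot": {
    "group_by": {
      "message_id": { "terms": { "field": "message_id" } }
    },
    "aggregations": {
      "first_seen": { "min": { "field": "@timestamp" } },
      "last_seen": { "max": { "field": "@timestamp" } },
      "duration_ms": {
        "scripted_metric": {
          "init_script": "state.min = Long.MAX_VALUE; state.max = 0L;",
          "map_script": "long t = doc['@timestamp'].value.toInstant().toEpochMilli(); state.min = Math.min(state.min, t); state.max = Math.max(state.max, t);",
          "combine_script": "return state;",
          "reduce_script": "long lo = Long.MAX_VALUE; long hi = 0L; for (s in states) { lo = Math.min(lo, s.min); hi = Math.max(hi, s.max); } return hi - lo;"
        }
      }
    }
  }
}
The resulting entity index has one document per ID with first_seen, last_seen and the gap between them, so 'did the matching .dn message arrive within an hour' becomes a simple range filter on duration_ms.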
Is there a way to get the date and time at which an Elasticsearch document was written?
I am running ES queries via Spark and would prefer NOT to look through all the documents I have already processed. Instead, I would like to read only the documents that were ingested between the last run of the program and now.
What is the most efficient way to do this?
I have looked at:
Updating each document to add a field with an array of booleans tracking whether it has been looked at by each analytic. The downside is waiting for the update to occur.
The index-per-time-frame method, i.e. breaking the current indexes down into smaller ones, e.g. hourly. The downside I see is the number of open file descriptors.
??
Elasticsearch version 5.6
I posted the question on the Elasticsearch discussion board, and it appears that using an ingest pipeline is the best option.
A workaround could be:
When inserting data into Elasticsearch through Logstash, Logstash appends a @timestamp key to each document, representing the time (in UTC) at which the document was created; alternatively, we can use an ingest pipeline.
After that, we can query based on that timestamp.
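A minimal sketch of such a pipeline (the pipeline and field names are mine; _ingest.timestamp is the time the ingest node received the document):
PUT _ingest/pipeline/stamp-ingest-time
{
  "description": "record when each document was ingested",
  "processors": [
    {
      "set": {
        "field": "ingested_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
Documents indexed with ?pipeline=stamp-ingest-time then carry an ingested_at field the Spark job can range-query (gte: last run time, lt: now); on 5.x the rendered timestamp may need a matching date format in the mapping.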
For more on this, please have a look at:
Mapping changes
There is no way to ask ES to insert a timestamp at index time
Elasticsearch doesn't have such functionality.
You need to manually store a date with each document. Then you will be able to search by date range.
I have a data source that creates a high number of entries which I plan to store in Elasticsearch.
The source creates two entries for the same document in Elasticsearch:
the 'init' part, which records the init time and other details under a random key in ES
the 'finish' part, which contains the main data and updates (merges into) the initially created document under the init's random key.
I will need to use time-based indexes in Elasticsearch, with an alias pointing to the actual index, using the rollover API.
For updates I'll use the update API to merge init and finish.
Question: if the init document with the random key is no longer in the current index (but in an older one that has already rolled over), would an update using its key execute successfully? If not, what is the best practice for performing the update?
After some silence here, I set out to test it myself.
Short answer: after the index is rolled over under an alias, an update operation through the alias targets the new index only, so it will create the document in the new index, resulting in two separate documents.
One way of solving this is to perform a search across the last two (or more, if needed) indexes and figure out which concrete, non-alias index name to use for the update.
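A sketch of that lookup, assuming rolled indexes named foo-000001, foo-000002, ... and documents keyed by the init's random key (the key below is a placeholder):
GET foo-*/_search
{
  "query": {
    "ids": { "values": ["<init-random-key>"] }
  }
}
The _index of the returned hit is the concrete index to target, i.e. POST that-index/_update/<init-random-key> with the 'finish' data.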
The other solution, which I prefer, is to avoid the rollover altogether: calculate the index name from the required date field of the document and create new indexes from the application, using an index template to define the mapping. This way, event sourcing and replaying the documents in order will yield the same indexes.
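A sketch of that variant in 7.x syntax (template, index and field names are illustrative): the template guarantees the mapping for any index the application creates on the fly, and the application derives the index name from the document's date field:
PUT _template/events
{
  "index_patterns": ["events-*"],
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}
POST events-2019.08.09/_update/<init-random-key>
{
  "doc": { "finish_data": "..." },
  "doc_as_upsert": true
}
With doc_as_upsert, the init and finish parts merge into the same document regardless of arrival order, and replaying the events always resolves to the same index names.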