Update dataset with Elasticsearch aggregation result - elasticsearch

I'd like to automate a feature creation process for a large dataset with Elasticsearch.
I'd like to know if it is possible to create a new field in my dataset that will be the result of an aggregation.
I'm currently working on logs from a network and want to implement the moving average (the mean of a field over the past x days) of the field "bytes_in".
After spending time reading the docs and examples, I wasn't able to do so ...

You have two possibilities:
By using the Rollup API, you can create a job that summarizes your data as it is ingested and stores the results in a dedicated index.
A detailed example can be found in this blog article.
By using the Data Frame Transform API, you can pivot your data into a new entity-centric index, aggregate it in various ways, and store the results in a dedicated index, as sketched below.
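For illustration, here is a minimal sketch of the transform option using the current _transform endpoint. It assumes the raw logs live in an index pattern named network-logs-* with an @timestamp date field (these names are assumptions, not from the original question) and pre-computes a per-day average of bytes_in into a dedicated index:
PUT _transform/bytes-in-daily-avg
{
  "source": { "index": "network-logs-*" },
  "dest": { "index": "network-bytes-in-daily" },
  "pivot": {
    "group_by": {
      "day": { "date_histogram": { "field": "@timestamp", "calendar_interval": "1d" } }
    },
    "aggregations": {
      "avg_bytes_in": { "avg": { "field": "bytes_in" } }
    }
  }
}

POST _transform/bytes-in-daily-avg/_start
Once the destination index is populated, a moving average over the past x days can then be computed against those pre-aggregated daily values (for example with a moving_fn pipeline aggregation).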

Related

Is it more efficient to query multiple ElasticSearch indices at once or one big index

I have an ElasticSearch cluster and my system handles events coming from an API.
Each event is a document stored in an index and I create a new index per source (the company calling the API). Sources come and go, so I have new sources every week and most sources become inactive after a few weeks. Each source sends between 100k and 10M new events every day.
Right now my indices are named api-events-sourcename
The documents contain a datetime field and most of my queries look like "fetch the data for that source between those dates".
I frequently use Kibana and I have configured a filter that matches all my indices (api-events-*) at once, and I then add terms to filter a specific source and specific days.
My requests can be slow at times and they tend to slow down the ingestion of new data.
Given that workflow, should I see any performance benefit from creating an index per source and per day, instead of the single index per source that I use today?
Are there other easy tricks to avoid putting too much strain on the cluster?
Thanks!
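For reference, the query shape described above boils down to something like the following sketch against the wildcard pattern; the source and @timestamp field names are assumptions:
GET api-events-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "source": "sourcename" } },
        { "range": { "@timestamp": { "gte": "now-7d/d", "lt": "now/d" } } }
      ]
    }
  }
}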

What is the best way to enrich near real-time data in ElasticSearch with batch data that may come in later?

I have two types of indices in my Elasticsearch cluster. The first contains data that is updated in near real time. The second contains data, updated nightly, that I can use to enrich the first. I am new to Elasticsearch and I'm wondering if there are any good patterns that easily allow me to update the streaming data with the nightly batches.
I've looked at the enrich processor, but that appears to enrich at index time. The enrichment data I need might already be there, or might only show up that night.
My goal is to create a dashboard that uses the enrichment index to help identify which documents in the streaming data I care about, and eventually add more fields for detailed exploration from there. In SQL terms: "count the number of documents where the ID of the stream document exists in the enrichment data", but that is pretty much a JOIN, which I believe I should be avoiding given the large size of both indices.
Enrich processors can be run at index time, but also after documents have already been indexed, using the _update_by_query endpoint.
The idea is this: you index your streaming data in real time. Once your second data set comes in, you create a new index to store it, create an enrich policy (whose execution builds the enrich index) from it, and finally update your first data set with the enrich processor, as sketched below.
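A minimal sketch of that flow; the index, field, policy, and pipeline names are assumptions:
# 1. Create an enrich policy over the nightly batch index
PUT /_enrich/policy/nightly-enrich-policy
{
  "match": {
    "indices": "nightly-batch",
    "match_field": "stream_id",
    "enrich_fields": ["category", "priority"]
  }
}

# 2. Execute the policy to build its internal enrich index
POST /_enrich/policy/nightly-enrich-policy/_execute

# 3. Wrap the policy in an ingest pipeline
PUT /_ingest/pipeline/nightly-enrich
{
  "processors": [
    {
      "enrich": {
        "policy_name": "nightly-enrich-policy",
        "field": "stream_id",
        "target_field": "enriched"
      }
    }
  ]
}

# 4. Back-fill documents that were indexed before the batch arrived
POST /streaming-data/_update_by_query?pipeline=nightly-enrich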

Elasticsearch / Kibana: Subtraction across pre-aggregated time-series data

I am working with Johns Hopkins University CSSE COVID-19 data, published on their GitHub. Some of the metrics they publish in their daily US reports are pre-aggregated sums. I would like to perform basic math against the values within a given field so that I can get a daily tally.
JHU publishes their data daily, so let's assume that the numbers reported reflect a 24-hour period.
Example: In the State of New York, I can see the following values for Last_Update and Recovered, where Recovered is a rolling sum of all cases where people have recovered from infection:
Last_Update,Recovered
2020-08-05,73326
2020-08-04,73279
2020-08-03,73222
2020-08-02,73134
2020-08-01,73055
2020-07-31,72973
Ideally, I would like to create a new field (be it a scripted field, or a new field that is generated via a Logstash Filter processor) called RecoveredToday, where the field value reflects the difference between today's Recovered aggregation and yesterday's Recovered aggregation.
Last_Update,Recovered,RecoveredToday
2020-08-05,73326,47
2020-08-04,73279,57
2020-08-03,73222,88
2020-08-02,73134,79
2020-08-01,73055,82
2020-07-31,72973,...
In the above data set, RecoveredToday is calculated from the value of Recovered on 2020-08-05 minus the value of Recovered on 2020-08-04.
73326 - 73279 = 47
With respect to using a Scripted Field in Kibana, according to this blog article, Scripted Fields can only analyze fields within one given document at a time, and cannot perform calculations against a field across multiple documents.
I do see user #agung-darmanto solved a similar problem on StackOverflow, but the solution calls out specific dates rather than performing rolling calculations. It's also unclear from the code snippet if the results are being inserted into a new field that can subsequently be used to build visualizations.
The approach to use Logstash ruby processing on the fly also presents a problem. Logstash, as far as I know, cannot access an already ingested document ... and if it can, it's probably a pretty ugly superpower to wield.
Goal: There are other fields provided in the JHU CSSE data which are also pre-aggregated. I would like to produce visualizations that reflect trends such as:
Number of new cases per day
Number of new hospitalizations per day
Number of new deaths per day
Using the data as they provide it, I can build visualizations that plateau, where the plateau reflects a reduction in new incidences. I'm trying to produce visualizations that can actually show zero.
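One query-time approach to this day-over-day difference (not discussed in the question above) is the derivative pipeline aggregation over a date_histogram. A minimal sketch, assuming the CSV has been ingested into an index named covid-us-daily with field names matching the headers above and Last_Update mapped as a date:
GET covid-us-daily/_search
{
  "size": 0,
  "aggs": {
    "per_day": {
      "date_histogram": { "field": "Last_Update", "calendar_interval": "1d" },
      "aggs": {
        "recovered": { "max": { "field": "Recovered" } },
        "recovered_today": { "derivative": { "buckets_path": "recovered" } }
      }
    }
  }
}
With the sample rows above, the recovered_today value for the 2020-08-05 bucket would be 73326 - 73279 = 47, matching the RecoveredToday column.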

ElasticSearch: querying most recent snapshot design

I'm trying to decide how to structure the data in ElasticSearch.
I have a system that is producing metrics on a daily basis. I would like to put those metrics into ES so I can do some advanced querying/sorting. I also only care about the most recent data that's in there. The system producing the data could also be late.
Currently I can think of two options:
I can have one index with a date field that contains the date the metric was created. I am unsure, however, how to write the query so that if multiple days' worth of data are in the index, I filter it down to just the most recent set.
I could also try to split the data up into different indexes (recent and past) and have some sort of process that migrates data from the recent index to the past index. I think the challenge with this would be having downtime while the data is being moved and/or added to the recent index.
Thoughts?
A common approach to solving this problem with Elasticsearch is to store the data in one form that allows historic querying, and again in a second form that allows querying the most recent data. For example, if your metric update looked like:
{
  "type": "OperationsPerSecond",
  "name": "Questions",
  "value": 10
}
Then it can be indexed into a current-values index using a composite key constructed from the document (obviously, for this to work you'd need to be able to construct a composite key from your document!). For example, the identity for this document might be the type and name concatenated. You then leverage the update API with upsert semantics so that repeated writes go to the same document:
POST current_metrics/_update/OperationsPerSecond-Questions
{
  "doc": {
    "type": "OperationsPerSecond",
    "name": "Questions",
    "value": 10
  },
  "doc_as_upsert": true
}
Every time you call this API with the same composite key it will update the existing document, rather than create a new document. This will give you an index that only contains a single record per metric you are monitoring, and you can query that index to get your most recent values.
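Reading the latest value back is then just a lookup by the same composite key (a sketch, using the id from the example above):
GET current_metrics/_doc/OperationsPerSecond-Questions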
To store your historic data, you change your primary key strategy; it would probably be most straightforward to use the index API and let Elasticsearch generate a document ID for you.
POST all_metrics/_doc/
{
  "type": "OperationsPerSecond",
  "name": "Questions",
  "value": 10
}
This API will create a new document for every request made to it. So as long as you have something in your data that you can use in a range query, such as a createdDate field containing a date-time value, you should be able to query historic data.
The main thing is: don't worry about duplicating your data for different purposes; Elasticsearch does a good job of compressing this stuff on disk and in memory. Storing data multiple times is called denormalization and is a pretty common technique in data warehousing and big data.
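For example, a historic query over the last week might look like the following sketch; it assumes you do add a createdDate date field to each document, as suggested above:
GET all_metrics/_search
{
  "query": {
    "range": { "createdDate": { "gte": "now-7d/d" } }
  },
  "sort": [
    { "createdDate": { "order": "desc" } }
  ]
}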

Need clarification about usage of mahout with hadoop

I currently have an implementation of a recommender in Mahout using the in-memory recommendation APIs. However, I would like to move to a distributed solution using Hadoop in order to calculate offline recommendations. This is my first time using Hadoop and I'm looking for clarification on a few concepts and API usages.
Currently, my understanding of Hadoop is minimal and I think that the correct approach is the following:
use something like Apache Drill in order to populate HDFS with the user and item data.
use the recommendation job in Mahout to train on the data from HDFS.
transform the resulting data in HDFS into index shards to be used by Solr
use Solr to provide the recommendations to the user base
However, I am looking for clarifications on a couple aspects of this design:
How would I utilize a rescorer in the manner that it is used in the in memory live recommendations?
What is the best manner in which to invoke the recommendations job?
I have other questions besides these two but the answers to these would be a huge help.
You may be talking about the Mahout + Hadoop + Solr recommender. This method handles rescoring in a couple different ways.
The basic recommender can be put together in two ways:
After getting data into HDFS in the form of (user id, item id, preference weight), run the ItemSimilarityJob on the data (use LLR similarity, which is usually best). It will create what is called an indicator matrix. This will be an item-id-by-item-id sparse matrix of values indicating the similarity magnitude between any two items. You must then convert this into values that Solr can index. That means translating the internal Mahout integer IDs into some unique string representation, which is probably what they were at the very beginning. This will look like (item123,item223 item643 item293 item445...) as a CSV, so two Solr fields: the first is an item id, the second is a list of similar items. All ids must be text tokens. Then the query for recommendations is a Solr query made up of item ids that a particular user has shown a preference for, so query = "item223 item344 item445...". Make the query against the field that holds the indicator matrix values. You will get back an ordered list of item IDs.
A much easier way that may work for you is to use a tool in the /examples folder of Mahout 1.0-SNAPSHOT or here: https://github.com/pferrel/solr-recommender. It takes in raw log files with unique strings for user and item ids. It does all the work on Hadoop to output CSVs that can be indexed by Solr directly or loaded into a DB as described above.
The way I did the demo site (https://guide.finderbots.com) is to use my Solr web app integration, putting the indicator matrix into a DB and attaching the similar-item list to my collection of items. So item123 got item223 item643 item293 item445... in its indicator field. After you index the collection, the query is then = "item223 item344 item445..." -- the user's preferred items.
Here are three ways to do rescoring:
Mix in metadata with the query. So you could do query = "item223 item344 item445..." against the indicator field AND "SciFi" against the "genre" field. This gives you blended collaborative filtering and metadata in your query and, as you can imagine, the recs are based on the user's prefs but skewed towards "SciFi". There are lots of other interesting things you can do once you get item+indicators+metadata into an index.
Filter recs by metadata. You can get recs not skewed but filtered, if you want. Use the Solr query = "item223 item344 item445..." against the indicator field AND "SciFi" as a filter against the "genre" field. In this case you get nothing but "SciFi", whereas with #1 you would get mostly "SciFi".
Get your ordered list of recs back and rescore them in any way you'd like based on other things you know about the user, context, or items. Often these can be encoded into a Solr query and done with one query, but reordering and filtering can be done after the recs are returned too. You would have to write that code; it is not built in.
The fun thing is you can mix filters, metadata fields, and user preferences with what Solr calls "boost" values to get all sorts of rescoring. Solr can even use location to query, skew, or filter.
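To make the three rescoring options above concrete, here are hypothetical Solr requests; the core name items and the field names indicators and genre are assumptions, and in #1 the genre clause is left optional so that it skews rather than restricts:
# 1. Blend: recommendation terms plus an optional metadata clause in the same query
curl -G 'http://localhost:8983/solr/items/select' \
  --data-urlencode 'q=indicators:(item223 item344 item445) OR genre:"SciFi"'

# 2. Filter: same recommendation query, restricted to SciFi with a filter query
curl -G 'http://localhost:8983/solr/items/select' \
  --data-urlencode 'q=indicators:(item223 item344 item445)' \
  --data-urlencode 'fq=genre:"SciFi"'

# 3. Boost: skew more strongly towards SciFi with a boosted clause
curl -G 'http://localhost:8983/solr/items/select' \
  --data-urlencode 'q=indicators:(item223 item344 item445) OR genre:"SciFi"^2'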
Note: You don't have to worry about Solr shards necessarily. Solr will index most DBs and HDFS directly but only the index is sharded. You shard an index if you have a very big one, you replicate it if you have lots of queries/second (or for failover). Solr queries are generally very fast so I'd worry about that after you have a functioning system since it's a config thing and shouldn't be affected by the rest of your workflow.
