How to create a month over month new companies bar chart - elasticsearch

Let's say I create an index in Elasticsearch and import data:
PUT test
{
  "mappings": {
    "orders": {
      "properties": {
        "devices": {
          "type": "integer"
        },
        "location": {
          "type": "geo_point"
        },
        "time": {
          "type": "date",
          "format": "epoch_second"
        },
        "company": {
          "type": "keyword"
        }
      }
    }
  }
}
Getting the unique count of companies per month is fairly simple to do in Kibana, which is fine, but not good enough: I need the number of companies that placed their first order in that month. Is this possible in Kibana or Timelion? If not, any idea how I should store the data in Elasticsearch so I can get this number? I am using Kibana 5.1.2.

I have added a property to the index called sequence, which is the sequential number of the order: a first order gets 1, a second order 2, and so on.
The mapping now looks like this:
PUT test
{
  "mappings": {
    "orders": {
      "properties": {
        "devices": {
          "type": "integer"
        },
        "email": {
          "type": "keyword"
        },
        "company": {
          "type": "keyword"
        },
        "location": {
          "type": "geo_point"
        },
        "sequence": {
          "type": "integer"
        },
        "time": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_second"
        }
      }
    }
  }
}
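For illustration, a first order for a company could then be indexed like this (all field values here are made up):
PUT test/orders/1
{
  "company": "acme",
  "email": "buyer@acme.com",
  "devices": 2,
  "location": "52.37,4.89",
  "sequence": 1,
  "time": 1484006400
}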
For this I chose Timelion, which is cool and lets you calculate values across series. The formula for the MoM % of new companies is:
(new companies this month / cumulative companies up to the previous month)
As a Timelion expression this looks like:
.es(q=sequence:1,index=index,timefield=time,metric=cardinality:company)
.divide(.es(index=index,timefield=time,metric=cardinality:company,offset=-1M)
.cusum()).bars()
Note that this is all one query; I only cut it into three pieces for readability. And I added bars() just for fun.
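If you would rather plot the ratio as a percentage, a small variation (an untested sketch, using the same index and fields as above) is to chain Timelion's .multiply() before .bars():
.es(q=sequence:1,index=index,timefield=time,metric=cardinality:company)
.divide(.es(index=index,timefield=time,metric=cardinality:company,offset=-1M)
.cusum()).multiply(100).bars()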

Related

Why do Elasticsearch dynamic templates create explicit fields in the mapping?

The document that I want to index is as follows:
{
  "Under Armour": 0.16667,
  "Skechers": 0.14774,
  "Nike": 0.24404,
  "New Balance": 0.11905,
  "SONOMA Goods for Life": 0.11236
}
The fields under this node are dynamic: as documents get added, new fields (brands) arrive with them.
If I create an index without specifying a mapping, ES eventually says "maximum number of fields (1000) has been reached". Though we can increase this value, it is not good practice.
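For reference, the limit in question can be raised per index via the index.mapping.total_fields.limit setting (the index name below is a placeholder), though as noted this only postpones the problem:
PUT my_index/_settings
{
  "index.mapping.total_fields.limit": 2000
}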
In order to support the above document, I created a mapping as follows and created an index.
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "template1": {
            "match_mapping_type": "double",
            "match": "*",
            "mapping": {
              "type": "float"
            }
          }
        }
      ]
    }
  }
}
When I add the above document to the index and check the mapping again, it looks like this:
{
  "my_index": {
    "mappings": {
      "my_type": {
        "dynamic_templates": [
          {
            "template1": {
              "match": "*",
              "match_mapping_type": "double",
              "mapping": {
                "type": "float"
              }
            }
          }
        ],
        "properties": {
          "New Balance": {
            "type": "float"
          },
          "Nike": {
            "type": "float"
          },
          "SONOMA Goods for Life": {
            "type": "float"
          },
          "Skechers": {
            "type": "float"
          },
          "Under Armour": {
            "type": "float"
          }
        }
      }
    }
  }
}
As you can see, the mapping I created earlier and the mapping after adding a document are different: the new fields have been added explicitly to the mapping. As I keep adding documents, new fields keep getting added, which will again end with the maximum number of fields (1000) being reached.
My questions are: is the mapping above correct for the document shown, and if it is correct, why are new fields still added to the mapping?
According to the posts that I read, a large number of fields in an index is not good practice and may increase resource usage.
In a case like this, where there is an enormous number of brands and new brands keep being introduced, the proper solution is to introduce key-value pairs. (I probably need to do a transformation during ETL.)
{
  "brands": [
    {
      "key": "Under Armour",
      "value": 0.16667
    },
    {
      "key": "Skechers",
      "value": 0.14774
    },
    {
      "key": "Nike",
      "value": 0.24404
    }
  ]
}
When the data is formatted like this, the mapping won't change as new brands arrive.
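A minimal mapping sketch for that structure, assuming a nested type so each key stays paired with its value in queries and aggregations (index and type names are placeholders):
PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "brands": {
          "type": "nested",
          "properties": {
            "key": { "type": "keyword" },
            "value": { "type": "float" }
          }
        }
      }
    }
  }
}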
A good read that I found on this:
https://www.elastic.co/blog/found-beginner-troubleshooting#keyvalue-woes
Thanks @Val for the suggestion.

How to index date ranges with ElasticSearch 5.1

I have documents that I want to index/search with ElasticSearch. These documents may contain multiple dates, and in some cases, the dates are actually date ranges. I'm wondering if someone can help me figure out how to write a query that does the right thing (or how to properly index my document so I can query it).
An example is worth a thousand words. Suppose the document contains two marriage date ranges: 2005-05-05 to 2007-07-07 and 2012-12-12 to 2014-03-03.
If I index each date range in start and end date fields, and write a typical range query, then a search for 2008-01-01 will return this record because one marriage will satisfy one of the inequalities and the other will satisfy the other. I don't know how to get ES to keep the two date ranges separate. Obviously, having marriage1 and marriage2 fields would resolve this particular problem, but in my actual data set I have an unbounded number of dates.
I know that ES 5.2 supports the date_range data type, which I believe would resolve this issue, but I'm stuck with 5.1 because I'm using AWS's managed ES.
Thanks in advance.
You can use nested objects for this purpose.
PUT /records
{
  "mappings": {
    "record": {
      "properties": {
        "marriage": {
          "type": "nested",
          "properties": {
            "start": { "type": "date" },
            "end": { "type": "date" },
            "person1": { "type": "string" },
            "person2": { "type": "string" }
          }
        }
      }
    }
  }
}
PUT /records/record/1
{
  "marriage": [
    { "start": "2005-05-05", "end": "2007-07-07", "person1": "", "person2": "" },
    { "start": "2012-12-12", "end": "2014-03-03", "person1": "", "person2": "" }
  ]
}
POST /records/record/_search
{
  "query": {
    "nested": {
      "path": "marriage",
      "query": {
        "range": {
          "marriage.start": { "gte": "2008-01-01", "lte": "2015-02-03" }
        }
      }
    }
  }
}
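To check whether a single marriage actually covers a given date (the 2008-01-01 case from the question), a sketch combining both bounds inside the nested query could look like this:
POST /records/record/_search
{
  "query": {
    "nested": {
      "path": "marriage",
      "query": {
        "bool": {
          "must": [
            { "range": { "marriage.start": { "lte": "2008-01-01" } } },
            { "range": { "marriage.end": { "gte": "2008-01-01" } } }
          ]
        }
      }
    }
  }
}
Because both conditions must match within the same nested object, the document above would correctly not be returned for 2008-01-01.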

ElasticSearch - issue with sub term aggregation with array fields

I have the following two documents:
{
  "title": "The Avengers",
  "year": 2012,
  "casting": [
    {
      "name": "Robert Downey Jr.",
      "category": "Actor"
    },
    {
      "name": "Chris Evans",
      "category": "Actor"
    }
  ]
}
and:
{
  "title": "The Judge",
  "year": 2014,
  "casting": [
    {
      "name": "Robert Downey Jr.",
      "category": "Producer"
    },
    {
      "name": "Robert Duvall",
      "category": "Actor"
    }
  ]
}
I would like to perform aggregations based on two fields: casting.name and casting.category.
I tried a terms aggregation based on the casting.name field, with a sub-aggregation, another terms aggregation based on the casting.category field.
The problem is that for the "Chris Evans" entry, Elasticsearch creates buckets for ALL categories (Actor, Producer), whereas it should create only one bucket (Actor).
It seems that there is a cartesian product between all casting.category occurrences and all casting.name occurrences.
It behaves like this with array fields (casting), whereas I don't have the problem with simple fields (such as title or year).
I also tried nested aggregations, but maybe not properly: Elasticsearch throws an error saying that casting.category is not a nested field.
Any idea here?
Elasticsearch will flatten the nested objects, so internally you will get:
{
  "title": "The Judge",
  "year": 2014,
  "casting.name": ["Robert Downey Jr.", "Robert Duvall"],
  "casting.category": ["Producer", "Actor"]
}
If you want to keep the relationship you'll need to use either nested objects or a parent-child relationship.
To create a nested mapping you'd need to do something like this:
"mappings": {
"movies": {
"properties": {
"title" : { "type": "string" },
"year" : { "type": "integer" },
"casting": {
"type": "nested",
"properties": {
"name": { "type": "string" },
"category": { "type": "string" }
}
}
}
}
}
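With that mapping in place, the query-side counterpart is a nested aggregation wrapping the two terms aggregations, along these lines (aggregation names are arbitrary; in practice you would also map name and category as not_analyzed so the buckets contain whole names rather than tokens):
{
  "size": 0,
  "aggs": {
    "casting": {
      "nested": { "path": "casting" },
      "aggs": {
        "by_name": {
          "terms": { "field": "casting.name" },
          "aggs": {
            "by_category": {
              "terms": { "field": "casting.category" }
            }
          }
        }
      }
    }
  }
}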

Date_histogram Elasticsearch facet can't find field

I am using the date_histogram facet to find results based on an epoch timestamp. The results are displayed on a histogram, with the date on the x-axis and the count of events on the y-axis. Here is the code I have that doesn't work:
angular.module('controllers', [])
  .controller('FacetsController', function($scope, $http) {
    var payload = {
      query: {
        match: {
          run_id: '9'
        }
      },
      facets: {
        date: {
          date_histogram: {
            field: 'event_timestamp',
            factor: '1000',
            interval: 'second'
          }
        }
      }
    };
    // ... the $http call that sends the payload is omitted in the question
  });
It works if I use
field: '#timestamp'
which is in ISO 8601 format; however, I need it to work with epoch timestamps.
Here is an example of what's in my Elasticsearch; maybe this can lead to some answers:
{"#version":"1",
"#timestamp":"2014-07-04T13:13:35.372Z","type":"automatic",
"installer_version":"0.3.0",
"log_type":"access.log","user_id":"1",
"event_timestamp":"1404479613","run_id":"9"}
},
When I run this, I receive this error:
POST 400 (Bad Request)
Any ideas as to what could be wrong here? I don't understand why the two fields behave so differently when the only difference between them is the format. I researched as best I could and discovered I should be using 'factor', but that didn't seem to solve my problem. I am probably making a silly beginner mistake!
You need to set up the mapping before indexing. Elasticsearch is good at defaults, but it cannot determine on its own whether a provided value is a timestamp, an integer or a string, so it's your job to tell it.
Let me explain by example. Let's say the following document is what you are trying to index:
{
  "#version": "1",
  "#timestamp": "2014-07-04T13:13:35.372Z",
  "type": "automatic",
  "installer_version": "0.3.0",
  "log_type": "access.log",
  "user_id": "1",
  "event_timestamp": "1404474613",
  "run_id": "9"
}
So initially you don't have an index and you index your document by making an HTTP request like so:
POST /test/date_experiments
{
  "#version": "1",
  "#timestamp": "2014-07-04T13:13:35.372Z",
  "type": "automatic",
  "installer_version": "0.3.0",
  "log_type": "access.log",
  "user_id": "1",
  "event_timestamp": "1404474613",
  "run_id": "9"
}
This creates a new index called test and a new doc type in index test called date_experiments.
You can check the mapping of this doc type date_experiments by doing so:
GET /test/date_experiments/_mapping
The result is the mapping that Elasticsearch auto-generated for you:
{
  "test": {
    "date_experiments": {
      "properties": {
        "#timestamp": {
          "type": "date",
          "format": "dateOptionalTime"
        },
        "#version": {
          "type": "string"
        },
        "event_timestamp": {
          "type": "string"
        },
        "installer_version": {
          "type": "string"
        },
        "log_type": {
          "type": "string"
        },
        "run_id": {
          "type": "string"
        },
        "type": {
          "type": "string"
        },
        "user_id": {
          "type": "string"
        }
      }
    }
  }
}
Notice that the type of the event_timestamp field is set to string, which is why your date_histogram is not working. Also notice that the type of the #timestamp field is already date, because you pushed the date in a standard format that made it easy for Elasticsearch to recognize your intention to store a date in that field.
Drop this mapping by sending a DELETE request to /test/date_experiments and let's start from the beginning.
This time, instead of pushing the document first, we will create the mapping according to our requirements so that our event_timestamp field is treated as a date.
Make the following HTTP request:
PUT /test/date_experiments/_mapping
{
  "date_experiments": {
    "properties": {
      "#timestamp": {
        "type": "date"
      },
      "#version": {
        "type": "string"
      },
      "event_timestamp": {
        "type": "date"
      },
      "installer_version": {
        "type": "string"
      },
      "log_type": {
        "type": "string"
      },
      "run_id": {
        "type": "string"
      },
      "type": {
        "type": "string"
      },
      "user_id": {
        "type": "string"
      }
    }
  }
}
Notice that I have changed the type of the event_timestamp field to date. I have not specified a format because Elasticsearch is good at understanding a few standard formats, as in the case of the #timestamp field where you pushed a date. In this case, Elasticsearch should understand that you are pushing a UNIX timestamp, convert it internally, and allow all date operations on it. If it does not, or if the dates you are pushing are not in a standard format, you can specify a date format in the mapping (see the sketch below).
Now you can start indexing your documents and running your date queries and facets the same way as you were doing earlier.
You should read more about mappings and date formats.
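If your timestamps are epoch seconds (as in the question) and Elasticsearch does not pick them up automatically, newer versions (2.0 and later) let you state the format explicitly, roughly like this:
PUT /test/date_experiments/_mapping
{
  "date_experiments": {
    "properties": {
      "event_timestamp": {
        "type": "date",
        "format": "epoch_second"
      }
    }
  }
}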

Elasticsearch sequential dates

I am creating a hotel booking system using Elasticsearch and am trying to find a way to return hotels that have a variable number of sequential dates available (for example, 7 days) across a range of dates.
I am currently storing dates and prices as a child document of the hotel, but I am unsure how to undertake the search, or whether it is even possible with my current setup.
Edit: added mappings
Hotel Mapping
{
  "hotel": {
    "properties": {
      "city": {
        "type": "string"
      },
      "hotelid": {
        "type": "long"
      },
      "lat": {
        "type": "double"
      },
      "long": {
        "type": "double"
      },
      "name": {
        "type": "multi_field",
        "fields": {
          "name": {
            "type": "string"
          },
          "name.exact": {
            "type": "string",
            "index": "not_analyzed",
            "omit_norms": true,
            "index_options": "docs",
            "include_in_all": false
          }
        }
      },
      "star": {
        "type": "double"
      }
    }
  }
}
Date Mapping
{
  "dates": {
    "_parent": {
      "type": "hotel"
    },
    "_routing": {
      "required": true
    },
    "properties": {
      "date": {
        "type": "date",
        "format": "dateOptionalTime"
      },
      "price": {
        "type": "double"
      }
    }
  }
}
I am currently using a date range to select the available dates and then a field query to match the city; the other fields will be used later.
I had to do something similar, and I ended up storing, at index time, every day the hotel (room) was booked (so a list like [2014-02-15, 2014-02-16, 2014-02-17, ...]).
After that it was fairly trivial to write a query to find all hotel rooms that were free during a certain date range.
It still seems like there should be a more elegant solution, but this ended up working great for me.
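As a sketch of that approach (the booked_dates field name is made up), a room that is free for the whole requested stay is one with no booked day inside the window, which a must_not range clause expresses:
POST /hotels/_search
{
  "query": {
    "bool": {
      "must_not": {
        "range": {
          "booked_dates": {
            "gte": "2014-02-15",
            "lte": "2014-02-21"
          }
        }
      }
    }
  }
}
Since a range query against an array field matches if any value falls in the range, the must_not keeps only rooms with zero booked days in the requested window.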
In my project we have a limited set of stay lengths, so I created an object in the mapping and specified the available booking dates for each period (see the sketch after this list), like:
a list of available booking dates for 2-day stays
a list of available booking dates for 3-day stays
....
a list of available booking dates for 10-day stays
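A sketch of what that could look like in the mapping (all field names invented for illustration): each field holds the check-in dates for which that stay length is available, so a 7-day search becomes a simple match on a single field.
{
  "hotel": {
    "properties": {
      "stays": {
        "properties": {
          "2_days": { "type": "date" },
          "3_days": { "type": "date" },
          "10_days": { "type": "date" }
        }
      }
    }
  }
}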
