Elasticsearch searchable synthetic fields

Suppose that a source document (JSON) contains two fields named a and b,
both of type long. I would like to construct a synthetic field (e.g. c)
by concatenating the values of those fields with an underscore and
index it as a keyword.
That is, I am looking for a feature that could be supported with an imaginary, partial mapping like this:
...
"a": { "type": "long" },
"b": { "type": "long" },
"c": {
"type": "keyword"
"expression": "${a}_${b}"
},
...
NOTE: The mapping above was made up just for the sake of the example. It is NOT valid!
So what I am looking for is a feature in Elasticsearch, or a recipe or hint, that could support
this requirement. The field need not be present in _source; it just needs to be searchable.

There are two steps to this: a dynamic mapping and an ingest pipeline.
I'm assuming your field c is non-trivial, so you may want to match that field in a dynamic template using a match pattern and assign the keyword mapping to it:
PUT synthetic
{
  "mappings": {
    "dynamic_templates": [
      {
        "c_like_field": {
          "match_mapping_type": "string",
          "match": "c*",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ],
    "properties": {
      "a": {
        "type": "long"
      },
      "b": {
        "type": "long"
      }
    }
  }
}
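If the field name is fixed and known up front, you could instead skip the template and map c explicitly (a minimal sketch of the same index):
PUT synthetic
{
  "mappings": {
    "properties": {
      "a": { "type": "long" },
      "b": { "type": "long" },
      "c": { "type": "keyword" }
    }
  }
}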
Then you can set up a pipeline which will concatenate your a & b:
PUT _ingest/pipeline/combined_ab
{
  "description": "Concatenates fields a & b",
  "processors": [
    {
      "set": {
        "field": "c",
        "value": "{{_source.a}}_{{_source.b}}"
      }
    }
  ]
}
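Before wiring it up, you can dry-run the pipeline with the simulate API (a quick check; the sample values are arbitrary):
POST _ingest/pipeline/combined_ab/_simulate
{
  "docs": [
    { "_source": { "a": 1, "b": 2 } }
  ]
}
The response should show c set to "1_2" in the transformed document.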
After ingesting a new doc (with the pipeline activated!)
POST synthetic/_doc?pipeline=combined_ab
{
  "a": 531351351351,
  "b": 251531313213
}
we're good to go:
GET synthetic/_search
yields
{
  "a": 531351351351,
  "b": 251531313213,
  "c": "531351351351_251531313213"
}
Verify w/ GET synthetic/_mapping too.
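To confirm that c is searchable, a term query on it should return the document (a sketch; the value is the one produced above):
GET synthetic/_search
{
  "query": {
    "term": {
      "c": "531351351351_251531313213"
    }
  }
}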

Related

Use condition in Elasticsearch mapping

I'd like to know if it's possible to use conditions when defining the mapping of an index, because depending on the values of one field, other fields become required.
For example :
PUT /my-index
{
  "mappings": {
    "properties": {
      "age": { "type": "integer" },
      "email": { "type": "keyword" },
      "name": { "type": "text" },
      "is_external": { "type": "boolean" },
      if [is_external] == true {
        "address": { "type": "text" },
        "date": { "type": "date" }
      }
    }
  }
}
If there's no way to do this, how can I deal with it ?
It doesn't make sense to do this kind of check at the mapping level: your index will ultimately contain documents with both is_external: true and is_external: false, so the mapping must contain the address and date field definitions for the documents where is_external: true, and there can only be one single mapping per index.
If you want to enforce that a document with is_external: true also contains the address and date fields, then you can do this using an ingest pipeline with a drop processor:
...
{
  "drop": {
    "if": "ctx.is_external == true && (ctx.date == null || ctx.address == null)"
  }
}
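Wrapped in a complete pipeline definition, that might look like this (a sketch; the pipeline name is made up):
PUT _ingest/pipeline/enforce_external_fields
{
  "description": "Drops external documents that are missing address or date",
  "processors": [
    {
      "drop": {
        "if": "ctx.is_external == true && (ctx.date == null || ctx.address == null)"
      }
    }
  ]
}
Documents matching the condition are silently dropped rather than indexed.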

How to generate N FlowFiles and set the content of each FlowFile according to the data in Elastic?

In Elasticsearch I have this index and mapping:
PUT /myindex
{
  "mappings": {
    "myentries": {
      "_all": {
        "enabled": false
      },
      "properties": {
        "yid": { "type": "keyword" },
        "days": {
          "properties": {
            "Type1": { "type": "date" },
            "Type2": { "type": "date" }
          }
        },
        "directions": {
          "properties": {
            "name": { "type": "keyword" },
            "recorder": { "type": "keyword" },
            "direction": { "type": "integer" }
          }
        }
      }
    }
  }
}
I want to generate N FlowFiles, one for each combination of the values of recorder and direction in the directions mapping. How can I do this in NiFi? I was thinking of using GenerateFlowFile, but how can I apply this Elasticsearch-related logic?
One possible workaround might be to generate N FlowFiles using GenerateFlowFile, where the Batch Size property could be hardcoded and set to 10 (the number of entries in Elastic). But then I don't know what the next step should be.
GenerateFlowFile is probably not the right tool here, as it doesn't accept incoming connections, so you would not be able to parameterize it with the count. You can use SplitJson, which will split the flowfile into multiple flowfiles given a JSONPath expression that returns an array from the JSON content.
Update
An online JSONPath evaluator is a great tool for checking dynamically what an expression matches. In your example, let's say you received data like the following:
{
  "yid": "nifi",
  "days": [ { "Type1": "09/07/2017" }, { "Type2": "10/07/2017" } ],
  "directions": [
    {
      "name": "San Francisco",
      "recorder": "Samsung",
      "direction": "0"
    },
    {
      "name": "Santa Monica",
      "recorder": "iPhone",
      "direction": "270"
    },
    {
      "name": "San Diego",
      "recorder": "Razr",
      "direction": "180"
    },
    {
      "name": "Santa Clara",
      "recorder": "Android",
      "direction": "0"
    }
  ]
}
The JSONPath expression $.directions[*].direction would return:
[
  "0",
  "270",
  "180",
  "0"
]
This would allow SplitJson to create four flowfiles with the derived content and fragment attributes to correlate them back to the original flowfile.
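If each flowfile should carry the recorder as well, splitting on the whole object may be preferable. The expression $.directions[*] (against the same sample data) would produce four flowfiles, each containing one object such as:
{
  "name": "San Francisco",
  "recorder": "Samsung",
  "direction": "0"
}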
If you actually need to perform permutation logic on the resulting direction & recorder values, you may want to use ExecuteScript with a simple Groovy/Ruby/Python script to do that operation inline and split out the resulting values.
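As an illustration only, an ExecuteScript processor with a Jython body along these lines could reduce the content to the distinct recorder/direction combinations, ready for a downstream SplitJson on $[*] (a sketch, not tested against a live flow; the class name is made up):

import json
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

# Rewrites the flowfile content as a JSON array of the distinct
# recorder/direction combinations found under "directions".
class ExtractCombinations(StreamCallback):
    def process(self, inputStream, outputStream):
        data = json.loads(IOUtils.toString(inputStream, StandardCharsets.UTF_8))
        combos = sorted({(d["recorder"], d["direction"]) for d in data["directions"]})
        out = [{"recorder": r, "direction": dn} for (r, dn) in combos]
        outputStream.write(bytearray(json.dumps(out).encode("utf-8")))

flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, ExtractCombinations())
    session.transfer(flowFile, REL_SUCCESS)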

Why do Elasticsearch dynamic templates create explicit fields in the mapping?

The document that I want to index is as follows
{
  "Under Armour": 0.16667,
  "Skechers": 0.14774,
  "Nike": 0.24404,
  "New Balance": 0.11905,
  "SONOMA Goods for Life": 0.11236
}
The fields under this node are dynamic: as documents are added, various new fields (brands) arrive with them.
If I create an index without specifying a mapping, ES eventually says "maximum number of fields (1000) have been reached". Though we can increase this limit, doing so is not good practice.
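For reference, that limit is the index.mapping.total_fields.limit index setting; raising it is a one-liner (shown against a hypothetical my_index), though as noted it only postpones the problem:
PUT /my_index/_settings
{
  "index.mapping.total_fields.limit": 2000
}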
To support the above document, I created an index with the following mapping.
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "template1": {
            "match_mapping_type": "double",
            "match": "*",
            "mapping": {
              "type": "float"
            }
          }
        }
      ]
    }
  }
}
When I added the above document to the index and checked the mapping again, it looked like this:
{
  "my_index": {
    "mappings": {
      "my_type": {
        "dynamic_templates": [
          {
            "template1": {
              "match": "*",
              "match_mapping_type": "double",
              "mapping": {
                "type": "float"
              }
            }
          }
        ],
        "properties": {
          "New Balance": {
            "type": "float"
          },
          "Nike": {
            "type": "float"
          },
          "SONOMA Goods for Life": {
            "type": "float"
          },
          "Skechers": {
            "type": "float"
          },
          "Under Armour": {
            "type": "float"
          }
        }
      }
    }
  }
}
As you can see, the mapping I created and the mapping after adding a document differ: the document's fields were added to the mapping as explicit (static) fields. As I keep adding documents, more fields will be added to the mapping, and I will eventually hit the 1000-field limit again.
My questions are:
Is the mapping I defined above correct for the document shown?
If it is correct, why are new fields still added to the mapping?
According to the posts I have read, increasing the number of fields in an index is not good practice, as it may increase resource usage.
In a case like this, where there is an enormous number of brands and new brands keep being introduced, the proper solution is to introduce key-value pairs (which probably requires a transformation during ETL):
{
  "brands": [
    {
      "key": "Under Armour",
      "value": 0.16667
    },
    {
      "key": "Skechers",
      "value": 0.14774
    },
    {
      "key": "Nike",
      "value": 0.24404
    }
  ]
}
When the data is formatted like this, the mapping won't change.
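A mapping for that shape could use the nested type so that each key/value pair is matched as a unit in queries (a sketch; index and type names are placeholders):
PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "brands": {
          "type": "nested",
          "properties": {
            "key": { "type": "keyword" },
            "value": { "type": "float" }
          }
        }
      }
    }
  }
}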
A good read that I found on this is
https://www.elastic.co/blog/found-beginner-troubleshooting#keyvalue-woes
Thanks @Val for the suggestion.

Elasticsearch indexing homogeneous objects under dynamic keys

The kind of document we want to index and query contains variable keys grouped under a common root key, as follows:
{
  "articles": {
    "0000000000000000000000000000000000000001": {
      "crawled_at": "2016-05-18T19:26:47Z",
      "language": "en",
      "tags": [
        "a",
        "b",
        "d"
      ]
    },
    "0000000000000000000000000000000000000002": {
      "crawled_at": "2016-05-18T19:26:47Z",
      "language": "en",
      "tags": [
        "b",
        "c",
        "d"
      ]
    }
  },
  "articles_count": 2
}
We want to be able to ask: which documents contain articles with tags "b" and "d", with language "en"?
The reason we don't use a list for articles is that Elasticsearch can efficiently and automatically merge documents with partial updates. The challenge, however, is to index the objects that sit under the variable keys. One possible way we tried is to use dynamic_templates, as follows:
{
  "sources": {
    "dynamic": "strict",
    "dynamic_templates": [
      {
        "article_template": {
          "mapping": {
            "fields": {
              "crawled_at": {
                "format": "dateOptionalTime",
                "type": "date"
              },
              "language": {
                "index": "not_analyzed",
                "type": "string"
              },
              "tags": {
                "index": "not_analyzed",
                "type": "string"
              }
            }
          },
          "path_match": "articles.*"
        }
      }
    ],
    "properties": {
      "articles": {
        "dynamic": false,
        "type": "object"
      },
      "articles_count": {
        "type": "integer"
      }
    }
  }
}
However, this dynamic template fails: when documents are inserted, the following can be found in the logs:
[2016-05-30 17:44:45,424][WARN ][index.codec] [node] [main] no index mapper found for field: [articles.0000000000000000000000000000000000000001.language] returning default postings format
The same happens for the two other fields as well. When I query for the existence of a certain article, or even of articles itself, no document is returned (no error, but empty hits):
curl -LsS -XGET 'localhost:9200/main/sources/_search' -d '{"query":{"exists":{"field":"articles"}}}'
When I query for the existence of articles_count, it returns everything. Is there a minor error in what we are trying to achieve, for example in the schema: the definition of articles as a property, or in the dynamic template? What about the types and dynamic: false? The path seems correct. Maybe it is not possible to define templates for objects under variable keys, but according to the documentation it should be.
Otherwise, what alternatives are possible, ideally without changing the documents?
Notes: we have other types in the same index main that also have fields like language; I don't know whether that could have an influence. The version of ES we are using is 1.7.5 (we cannot upgrade to 2.x for now).

How to specify or target a field from a specific document type in queries or filters in Elasticsearch?

Given:
Documents of two different types, let's say 'product' and 'category', are indexed to the same Elasticsearch index.
Both document types have a field 'tags'.
Problem:
I want to build a query that returns results of both types, but the documents of type 'product' are allowed to have tags 'X' and 'Y', and the documents of type 'category' are only allowed to have tag 'Z'. How can I achieve this? It appears I can't use product.tags and category.tags, since ES would then look for a product/category field inside the documents, which is not what I intend.
Note:
While for the example above there might be some kind of workaround, I'm looking for a general way to target or specify fields of a specific document type when writing queries. I basically want to 'namespace' the field names used in my query so only documents of the type I want to work with are considered.
I think field aliasing would be the best answer for you, but it's not possible.
Instead you can use copy_to, though it probably increases index size:
DELETE /test
PUT /test
{
  "mappings": {
    "product": {
      "properties": {
        "tags": { "type": "string", "copy_to": "ptags" },
        "ptags": { "type": "string" }
      }
    },
    "category": {
      "properties": {
        "tags": { "type": "string", "copy_to": "ctags" },
        "ctags": { "type": "string" }
      }
    }
  }
}
PUT /test/product/1
{ "tags":"X" }
PUT /test/product/2
{ "tags":"Y" }
PUT /test/category/1
{ "tags":"Z" }
And you can query one of the fields, or several of them:
GET /test/product,category/_search
{
  "query": {
    "term": {
      "ptags": {
        "value": "x"
      }
    }
  }
}
GET /test/product,category/_search
{
  "query": {
    "multi_match": {
      "query": "x",
      "fields": [ "ctags", "ptags" ]
    }
  }
}
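To express the original requirement directly (products tagged X or Y, categories tagged Z), the namespaced fields can be combined in a single bool query (a sketch building on the mapping above; the lowercase terms rely on the analyzed string fields):
GET /test/product,category/_search
{
  "query": {
    "bool": {
      "should": [
        { "terms": { "ptags": [ "x", "y" ] } },
        { "term": { "ctags": "z" } }
      ]
    }
  }
}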
