Creating custom fields based on analysis output in elasticsearch - elasticsearch

I have a document where the value is a raw string:
{ "content" : "field1=1 , field2=foo"}
My intention is to query by the field1 and field2 values.
The closest thing I can think of is a custom analyzer that creates tokens based on the comma separator, so that I can then search for exact values like "field1=1" or "field2=foo". Ideally, however, I would like to search by range for field1, by pattern for field2, and so on.
Is there any way to achieve this? I could not find a way to store the result of analysis in a form I can query like that.

How are you ingesting the documents? If you are going via Logstash, you can apply the transformation there using a filter plugin such as kv, which splits key=value pairs into separate event fields.
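If you are indexing straight into Elasticsearch instead, an ingest pipeline with the kv processor (available since 5.x) can do the same split at index time. A minimal sketch, with the pipeline and index names as placeholders:
PUT _ingest/pipeline/split_content
{
  "processors": [
    {
      "kv": {
        "field": "content",
        "field_split": "\\s*,\\s*",
        "value_split": "="
      }
    }
  ]
}

PUT /content/type/1?pipeline=split_content
{ "content": "field1=1 , field2=foo" }
The resulting document then has separate field1 and field2 fields that can be mapped and queried individually.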

I'm having a little difficulty understanding your question. However, I think you are asking whether there's a way to make the type of field1 numeric and the type of field2 searchable?
Hopefully you are running Kibana, so you can use the Dev console to test this out. If you just let Elastic import the data, it will create aggregatable and searchable fields for both field1 and field2, because both are set to string values:
PUT /content_default/type/1 {"field1":"1" , "field2":"foo"}
If you instead omit the quotes around the 1, Elastic will create the field as a long (assuming you haven't already imported a document with a string in the same field), which allows you to search by range. Here I'm creating a new field3 and setting the value to 1; if you query, you should see it's a long:
PUT /content_default/type/2 {"field1":"1" , "field2":"foo", "field3":1}
You can pre-load a template to define types up-front, before loading any data - that way Elastic doesn't have to guess what types your fields should be. With strings you can also define whether you want them to be just keywords, searchable text, or both.
Something like this should do the trick for you:
PUT _template/with_template
{
   "template":"content_with_template",
   "mappings":{
      "content_with_template":{
         "properties":{
            "field2":{
               "analyzer":"simple",
               "type":"text"
            },
            "field1":{
               "type":"keyword"
            },
            "field3":{
               "type":"long"
            }
         }
      }
   }
}
Then put a document into the new 'content_with_template' index like this. At this point it doesn't matter whether field3 is in quotes or not - as long as it parses to a number, it will save:
PUT /content_with_template/type/1
{ "field1":"a1d" , "field2":"foo", "field3":1}
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html

Related

Is it possible to query by field data type in Elasticsearch?

I need to do a query in Elasticsearch by field data type. I have not been successful in creating that query. I want to be able to (1) specify the type I want to search for in the query, i.e. all fields of {"type"="boolean"}, and also (2) get a field and see what its type is.
The reason is to check that a field is designated correctly. Let's say I inserted the following data into this index and these fields, and I now want to see programmatically what the data types of those fields are. How would I query that?
POST /index_name1/_doc/
{
  "field1": "hello_field_2",
  "field2": "123456.54321",
  "field3.field4": false,
  "field3.field5.field10": "POINT(-117.918976 33.812511)",
  "field3.field5.field8": "field_of_dragons",
  "field3.field5.field9": "2022-05-26T07:47:26.133275Z"
}
I have tried:
GET /index_name1/_search
{
  "query": {
    "wildcard": {
      "field3.field4": { "type": "*" }
    }
  }
}
That gives "[wildcard] query does not support [type]".
I've tried many other queries and searched the documentation and threads, but can't find anything that will do this. It has got to be possible, right?
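For reference, field types live in the index mapping rather than in the documents themselves, so the search API cannot filter on them; the mapping APIs are the programmatic way to read types back:
GET /index_name1/_mapping

GET /index_name1/_mapping/field/field3.field4
The second form returns the mapping of a single field, which covers part (2); part (1) still requires filtering the mapping response client-side.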

Type of field for prefix search in Elastic Search

I'm confused about which field type I should use for prefix search; many answers suggest search_as_you_type, but I think autocomplete is not what I'm going for.
I have a UUID field:
id: 34y72ca1-3739-41ff-bbec-f6d17479384c
The following terms should return the doc above:
3
34
34y72ca1
34y72ca1-3739
34y72ca1-3739-41ff-bbec-f6d17479384c
Searching for 3739 should not return it, as the ID doesn't start with 3739. Initially partial matching was what I was going for, but the wildcard field type is not supported by Amazon AWS, so I compromised on prefix search instead of partial search.
I tried the search_as_you_type field, but it doesn't return the result when I use the whole ID. My actual use case is that results are shown when the user presses Enter, rather than live as they type, so it's OK if speed is compromised; I just hope for something that performs well over many rows of data.
Thanks
If you have not explicitly defined any index mapping, then you need to use the id.keyword field instead of the id field for the prefix query to return the appropriate results. This uses the keyword analyzer instead of the standard analyzer:
{
  "query": {
    "prefix": {
      "id.keyword": {
        "value": "34y72ca1"
      }
    }
  }
}
Otherwise, you can modify your index mapping by adding a multi-field for the id field.
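A sketch of what that mapping could look like (7.x typeless syntax; my-index is a placeholder name) - the keyword sub-field is what the prefix query above targets:
PUT /my-index
{
  "mappings": {
    "properties": {
      "id": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}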

Elastic Search - Tokenization and Multi Match query

I need to perform tokenization and a multi-match in a single query in Elasticsearch.
Currently:
1) I am using the analyzer to get the tokens, like below:
String text = // 4 line log data;
List<AnalyzeToken> analyzeTokenList = new ArrayList<AnalyzeToken>();
AnalyzeRequestBuilder analyzeRequestBuilder = this.client.admin().indices().prepareAnalyze();
for (String newIndex : newIndexes) {
    analyzeRequestBuilder.setIndex(newIndex);
    analyzeRequestBuilder.setText(text);
    analyzeRequestBuilder.setAnalyzer(analyzer);
    // the response type is AnalyzeResponse, not the generic Response
    AnalyzeResponse analyzeResponse = analyzeRequestBuilder.get();
    analyzeTokenList.addAll(analyzeResponse.getTokens());
}
then I iterate through the AnalyzeToken list and collect the terms:
List<String> tokens = new ArrayList<String>();
for (AnalyzeToken token : analyzeTokenList) {
    // getTerm() returns a single String, so use add() on the list built above
    tokens.add(token.getTerm().replaceAll("\\s+", " "));
}
then I use the tokens to frame the multi-match query, like below:
// join the tokens back into a single space-separated query string
String query = String.join(" ", tokens);
MultiMatchQueryBuilder multiMatchQueryBuilder = new MultiMatchQueryBuilder(query, "abstract", "title");
Iterable<Document> result = documentRepository.search(multiMatchQueryBuilder);
Based on the result, I am checking whether similar data exists in the database.
Is it possible to combine the analyze step and the multi-match query into a single query?
Any help is appreciated!
EDIT :
Problem statement: say I have 90 entries in one index, in which each group of 10 entries is near-identical (not an exact match, but about a 70% match), so I have 9 groups.
I need to process only one entry in each group, so I went with the following approach (which is not a good way, but it is what I have ended up with for now).
Approach:
Get each entry from the 90 entries in the index.
Tokenize it using the analyzer (this removes the unwanted keywords).
Search in the same index (to check whether the same kind of data is already there) and also filter on the processed flag --> this flag is set after the first log of a group gets processed.
If no similar data (70% match) is flagged as processed yet, I process this log and set its flag to processed.
If similar data already exists with the processed flag, I consider this data already processed and continue with the next one.
So the ideal goal is to process only one entry out of each group of 10.
Thanks,
Harry
Multi-match queries internally use match queries, which are analyzed: they apply the analyzer defined in the field's mapping, or the standard analyzer if none is defined.
From the multi_match query docs:
The multi_match query builds on the match query to allow multi-field queries.
Also accepts analyzer, boost, operator, minimum_should_match, fuzziness, lenient, as explained in match query.
So what you are trying to do is overkill; even if you need different tokens at search time, you can define a search analyzer instead of creating the tokens yourself and then feeding them into a multi-match query.
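In other words, a single request can do both the analysis and the matching. A sketch, with the index name and query text as placeholders, and the analyzer parameter shown only to illustrate the override (drop it to use the mapping's analyzer):
GET /my-index/_search
{
  "query": {
    "multi_match": {
      "query": "the raw 4 line log data",
      "fields": ["abstract", "title"],
      "analyzer": "standard"
    }
  }
}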

How to use the Elasticsearch Java API for dynamic searches?

So I'm trying to use Elasticsearch for dynamic query building. Imagine that I can have a query like:
a = "something" AND b >= "other something" AND (c LIKE "stuff" OR c LIKE "stuff2" OR d BETWEEN "x" AND "y");
or like this:
(c>= 23 OR d<=43) AND (a LIKE "text" OR a LIKE "text2") AND f="text"
Should I use the QueryBuilder or the FilterBuilder, and how do you combine both? The official documentation says that for exact values we should use the filter approach. I assume I should use filters for equality comparisons? What about dates and numbers: should I use a filter or a query?
For the like/equals on the number/number values I tried this:
@Field(type = String, index = FieldIndex.analyzed, pattern = "(\\d+\\/\\d+)|(\\d+\\/)|(\\d+)|(\\/\\d+)")
public String processNumber;
The pattern is meant to handle the structure number + slash + number, but also a bare number and number + slash.
But using either the term filter or the match query I can't restrict hits to the exact structure like 20/2014; if I type 20, I still get hits from the term filter.
The query is the main component when you search for something; it takes ranking into consideration along with features such as stemming and synonyms. A filter, on the other hand, just filters the result set your query produces.
I suggest that if you don't care about ranking you use filters, because they are faster; otherwise, use queries.
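In current Elasticsearch versions both live in a single bool query: scoring clauses go under must/should and non-scoring clauses under filter. A sketch of the second example from the question (field names taken from it; the index name is a placeholder):
GET /my-index/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "a": "text text2" } }
      ],
      "filter": [
        {
          "bool": {
            "should": [
              { "range": { "c": { "gte": 23 } } },
              { "range": { "d": { "lte": 43 } } }
            ]
          }
        },
        { "term": { "f": "text" } }
      ]
    }
  }
}
The Java client mirrors this structure with QueryBuilders.boolQuery().must(...).filter(...).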

How to search fields with '-' characters in elastic search

I am new to Elasticsearch. I have the following document, where the field "eventId" has "-" in its value.
When I try to search with the complete value of eventId, I don't get any results.
Sample Document app/event
{
  "tags": {},
  "eventId": "cc98d57b-c6bc-424c-b54c-df1e3df0d942"
}
I haven't created any explicit settings for my index.
Thanks.
You should check whether the tokenizer splits your value into multiple tokens. Your value may be stored as five tokens: "cc98d57b", "c6bc", "424c", "b54c" and "df1e3df0d942".
You can analyze that with the 'Kopf' Plugin (https://github.com/lmenezes/elasticsearch-kopf).
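The built-in _analyze API shows the same information without a plugin; with the default standard analyzer the UUID is split at the hyphens (request-body syntax as in recent Elasticsearch versions):
GET _analyze
{
  "analyzer": "standard",
  "text": "cc98d57b-c6bc-424c-b54c-df1e3df0d942"
}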
If that is your problem, you should change your field mapping so that the value is not analyzed ("index": "not_analyzed").
For an example how to set that mapping see here: Elasticsearch mapping settings 'not_analyzed' and grouping by field in Java
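For this document's app/event index, such a mapping could look like the sketch below for a freshly created index, in the pre-5.x syntax the linked answer uses (current versions would use "type": "keyword" instead):
PUT /app
{
  "mappings": {
    "event": {
      "properties": {
        "eventId": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}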
After that, you should be able to search for your specific value.
