{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "timestamp": {
              "gte": "now-30m/m"
            }
          }
        },
        {
          "match": {
            "type": "ERROR"
          }
        },
        {
          "wildcard": {
            "Name": "*Rajesh*"
          }
        },
        {
          "wildcard": {
            "Name": "*Shiv*"
          }
        }
      ]
    }
  }
}
I want to search the Name field using wildcards so that it matches either of the two values (Rajesh, Shiv), and I need to look at results from the last 30 minutes. When I use these wildcards, the query returns no results. Replacing wildcard with match works, though, e.g. "match": "Rajesh" or "match": "Shiv". Is there something wrong with my usage of wildcard in this query?
The most probable cause is that you are storing names like Rajesh and Shiv in a text field. Text fields use the standard analyzer by default, which lowercases the generated tokens, so the tokens are indexed as rajesh and shiv. Note the lowercase in the tokens.
The match query runs your search text through the same analyzer, so it also searches with lowercase tokens and you get results, whereas in the wildcard query you specify capital letters, which do not match the tokens in the index.
Change your wildcard queries to *rajesh* and *shiv* and it should work; otherwise, provide the mapping so the issue can be debugged further.
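A minimal sketch of the corrected query, assuming the Name field is a text field analyzed with the standard analyzer (so the indexed tokens are lowercase). Since the goal is to match either name, the two wildcards are moved from must into a should clause with minimum_should_match set to 1:
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "timestamp": {
              "gte": "now-30m/m"
            }
          }
        },
        {
          "match": {
            "type": "ERROR"
          }
        }
      ],
      "should": [
        { "wildcard": { "Name": "*rajesh*" } },
        { "wildcard": { "Name": "*shiv*" } }
      ],
      "minimum_should_match": 1
    }
  }
}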
I have indexed the following document:
"ownDomainValue":"catalogonuevo1.com"
When I perform the following query with the value "catalogonuevo1", the document is found:
{
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "valor_dominio_propio": "catalogonuevo1"
          }
        }
      ]
    }
  },
  "from": 0,
  "size": 1
}
However, when the search value is "catalogonuevo1.com"
{
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "valor_dominio_propio": "catalogonuevo1.com"
          }
        }
      ]
    }
  },
  "from": 0,
  "size": 1
}
it does not return anything. With match queries the opposite happens: they always find a wrong document, such as one with the value "catalogonuevo2.com", which is not what I am looking for, since I need the search to be exact.
It sounds like the problem is that the term query in Elasticsearch does not match the exact value "catalogonuevo1.com" when it is included in the query.
This is most likely because the field is a text field whose analyzer splits the stored value at the "." character, so the index contains the token "catalogonuevo1" rather than the entire string "catalogonuevo1.com" (the term query itself does not analyze its input, so it looks for the whole string as a single term).
You can work around this by using the match_phrase query instead of the term query, since match_phrase requires all of the tokens to appear in order rather than matching any single token.
Additionally, you can use a keyword field to store the domain values; that way the values are not tokenized and an exact match works as expected.
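A sketch of that keyword-field approach, assuming a hypothetical index named dominios: the domain is stored in a keyword subfield, and the term filter targets that subfield so the whole string "catalogonuevo1.com" is compared as a single untokenized value:
PUT /dominios
{
  "mappings": {
    "properties": {
      "valor_dominio_propio": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}

GET /dominios/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "valor_dominio_propio.keyword": "catalogonuevo1.com" } }
      ]
    }
  },
  "from": 0,
  "size": 1
}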
I am using Kibana.
I am trying to put a filter on a field: container_name = "armenian",
but I have other containers with the following names:
armenian_alpha
armenian_beta
armenian_gama
armenian1
armenian2
After putting the filter in place, the search query in Kibana becomes:
{
  "query": {
    "match": {
      "container_name": {
        "query": "armenian",
        "type": "phrase"
      }
    }
  }
}
But the output shows logs for all containers; as far as I can see, the Elasticsearch query is doing pattern matching.
How can I make it an exact match on the provided string and exclude the rest?
You can try a term query. Do note that it is case sensitive by default unless you set case_insensitive to true. Also, if your container_name is a text field rather than a keyword field, add .keyword after the field name; otherwise, omit the .keyword.
Example:
GET /_search
{
  "query": {
    "term": {
      "container_name.keyword": {
        "value": "armenian"
      }
    }
  }
}
Link here: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-term-query.html
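As a variation, assuming Elasticsearch 7.10 or later (where the term query accepts the case_insensitive parameter), a sketch that matches the value regardless of letter case:
GET /_search
{
  "query": {
    "term": {
      "container_name.keyword": {
        "value": "Armenian",
        "case_insensitive": true
      }
    }
  }
}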
I would recommend using a wildcard, either directly in the query or via a wildcard query, as follows:
GET /_search
{
  "query": {
    "match": {
      "container_name": {
        "query": "*armenian",
        "type": "phrase"
      }
    }
  }
}
GET /_search
{
  "query": {
    "wildcard": {
      "container_name": {
        "value": "*armenian"
      }
    }
  }
}
With *armenian you ensure that armenian comes at the end of the value.
"I am searching in a elasticsearch cluster GET request on the basis of sourceID tag with value :- "/A/B/C/UniqueValue.xml" and search query looks like this:-"
{
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "source_id": {
              "value": "/A/B/C/UniqueValue.xml"
            }
          }
        }
      ]
    }
  }
}
"How can i replace "/A/B/C" from any wildcard or any other way as i just have "UniqueValue.xml" as an input for this query. Can some please provide the modified search Query for this requirement? Thanks."
The following search returns documents where the source_id field contains a term that ends with UniqueValue.xml.
{
  "query": {
    "wildcard": {
      "source_id": {
        "value": "*UniqueValue.xml"
      }
    }
  }
}
Note that wildcard queries are expensive. If you need fast suffix search, you could add a multi-field to your mapping which includes a reverse token filter. Then you can use prefix queries on that reversed field.
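A sketch of that multi-field approach, assuming a hypothetical index named docs: a custom analyzer combines the keyword tokenizer with the reverse token filter, so the value is indexed back-to-front, and a prefix query on the reversed subfield then behaves like a suffix search. Because prefix queries are not analyzed, the client has to reverse the suffix itself ("UniqueValue.xml" becomes "lmx.eulaVeuqinU"):
PUT /docs
{
  "settings": {
    "analysis": {
      "analyzer": {
        "reversed": {
          "tokenizer": "keyword",
          "filter": [ "reverse" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "source_id": {
        "type": "keyword",
        "fields": {
          "reversed": { "type": "text", "analyzer": "reversed" }
        }
      }
    }
  }
}

GET /docs/_search
{
  "query": {
    "prefix": {
      "source_id.reversed": "lmx.eulaVeuqinU"
    }
  }
}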
Works as expected:
{
  "query": {
    "query_string": {
      "query": "Hofstetten-Grünau"
    }
  }
}
An added wildcard at the end delivers no results, and I wonder why:
{
  "query": {
    "query_string": {
      "query": "Hofstetten-Grünau*"
    }
  }
}
How can I fix it?
Elasticsearch v5.3.2
This delivers results:
{
  "query": {
    "query_string": {
      "query": "Hofstetten*"
    }
  }
}
I use a single search field. The end user can freely use wildcards as they see fit. A user might type in:
hofstetten grünau
+ort:hofstetten-grünau
+ort:Hofstetten-G*
So using a match query won't work out for me.
I am using Jest (Java annotations) for the mapping, with the default for this field. My index mapping declares nothing special for the field:
{
  "mappings": {
    "_default_": {
      "date_detection": false,
      "dynamic_templates": [{}]
    }
  }
}
Adding the wildcard "*" at the end of your query string is causing the query analyzer to interpret the dash between "Hofstetten" and "Grünau" as a logical NOT operator. So you're actually searching for documents that contain Hofstetten but do NOT contain Grünau.
You can verify this by doing the following variations of your search:
"query": "Hofstetten-XXXXX" #should not return results
"query": "Hofstetten-XXXXX*" #should return results
To fix this I would recommend using a match query instead of a query_string query:
{"query": {"match": { "city": "Hofstetten-Grünau" }}}'
(with whatever your appropriate field name is in place of city).
When I try to search for documents with the following query (the field is indexed with the standard analyzer):
"query": {
"match": {
"Book": "OG/44"
}
}
I get the terms 'OG' and '44', and the result set contains documents where either of these terms is present. What analyzer/tokenizer should I use to get results only when both terms are present?
You can set the operator in the match query (by default it is or):
"query": {
"match": {
"Book": {
"query": "OG/44",
"operator" : "and"
}
}
}
You have two tokens because the standard analyzer tokenized the value on the slash; if you do not want this behaviour, you can escape it.
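To see the two tokens for yourself, the _analyze API shows what the standard analyzer produces for this value (it returns the lowercase tokens og and 44):
GET /_analyze
{
  "analyzer": "standard",
  "text": "OG/44"
}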