How to configure elasticsearch regexp query

I am trying to configure an Elasticsearch request. I use the query DSL and want to find documents containing the word "swagger" in the "message" field.
Here is one of the correct documents I want the query to return:
{
"_index": "apiconnect508",
"_type": "audit",
"_id": "AWF1us1T4ztincEzswAr",
"_score": 1,
"_source": {
"consumerOrgId": null,
"headers": {
"http_accept": "application/json",
"content_type": "application/json",
"request_path": "/apim-5a7c34e0e4b02e66c60edbb2-2018.02/auditevent",
"http_version": "HTTP/1.1",
"http_connection": "keep-alive",
"request_method": "POST",
"http_host": "localhost:9700",
"request_uri": "/apim-5a7c34e0e4b02e66c60edbb2-2018.02/auditevent",
"content_length": "533",
"http_user_agent": "Wink Client v1.1.1"
},
"nlsMessage": {
"resource": "messages",
"replacements": [
"test",
"1.0.0",
"ext_mafashagov#rencredit.ru"
],
"key": "swagger.import.notification"
},
"notificationType": "EVENT",
"eventType": "AUDIT",
"source": null,
"envId": null,
"message": "API test version 1.0.0 was created from a Swagger document by ext_mafashagov#rencredit.ru.",
"userId": "ext_mafashagov#rencredit.ru",
"orgId": "5a7c34e0e4b02e66c60edbb2",
"assetType": "api",
"tags": [
"_geoip_lookup_failure"
],
"gateway_geoip": {},
"datetime": "2018-02-08T14:04:32.731Z",
"#timestamp": "2018-02-08T14:04:32.747Z",
"assetId": "5a7c58f0e4b02e66c60edc53",
"#version": "1",
"host": "127.0.0.1",
"id": "5a7c58f0e4b02e66c60edc55",
"client_geoip": {}
}
}
I am trying to find this JSON with:
POST myAddress/_search
The following query works without the "regexp" clause. How should I configure the regexp part of my query?
{
"query": {
"filtered": {
"filter": {
"bool": {
"must": [
{
"range": {
"#timestamp" : {"gte" : "now-100d"}
}
},
{
"term": {
"_type": "audit"
}
},
{
"regexp" : {
"message": "*wagger*"
}
}
]
}
}
}
},
"sort": {
"TraceDateTime": {
"order": "desc",
"ignore_unmapped": "true"
}
}
}

If the message field is analyzed, a simple match query should work (the match query analyzes its input, so no wildcards are needed):
"match": {
"message": "swagger"
}
However, if the field is not analyzed, the following two queries should work for you. Both are case sensitive, so consider lowercasing the field if you want to keep it not analyzed.
"wildcard":{
"message":"*swagger*"
}
or
"regexp":{
"message":"swagger"
}
Note that regexp patterns are anchored to the entire field value, hence the leading and trailing .* above. Also be careful: wildcard and regexp queries can degrade performance.
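Putting this together with the original request, a sketch of the full query might look like the following (assuming an Elasticsearch 1.x cluster, where the filtered query used in the question is still available):
POST myAddress/_search
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            { "range": { "@timestamp": { "gte": "now-100d" } } },
            { "term": { "_type": "audit" } },
            { "regexp": { "message": ".*swagger.*" } }
          ]
        }
      }
    }
  },
  "sort": {
    "TraceDateTime": { "order": "desc", "ignore_unmapped": "true" }
  }
}
If message is analyzed, the lowercase pattern will match, since the standard analyzer stores terms lowercased.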

Related

Cannot seem to use must and must_not together in an elastic search query

If I run the following query:
{
"query": {
"bool": {
"must": [
{
"multi_match": {
"query": "boxing",
"fuzziness": 2,
"minimum_should_match": 2
}
}
],
"must_not": [
{
"terms_set": {
"allowedCountries": {
"terms": ["gb", "mx"],
"minimum_should_match_script": {
"source": "2"
}
}
}
}
],
"filter": [
{
"range": {
"expireTime": {
"gt": 1674061907954
}
}
},
{
"term": {
"region": {
"value": "row"
}
}
},
{
"term": {
"sourceType": {
"value": "article"
}
}
}
]
}
}
}
against an index with articles that look like:
{
"_index": "content-items-v10",
"_type": "_doc",
"_id": "e7hm75ui4dma1mm4j8q5v7914",
"_score": 4.3724976,
"_source": {
"allowedCountries": ["gb", "ie"],
"body": "Both Joshua Buatsi and Craig Richards join The DAZN Boxing Show ahead of their clash at London's O2 Arena. Matchroom's Eddie Hearn also gives his take on the night, as well as Chantelle Cameron previewing her contest with Victoria Noelia Bustos.",
"competitions": [
{
"id": "8lo6205qyio0fksjx9glqbdhj",
"name": "Buatsi v Richards"
}
],
"contestants": [
{
"id": "7rq59j3eiamxlm12vhxcsgujj",
"name": "Joshua Buatsi"
},
{
"id": "boby9oqe23g6qyuwphrxh8su5",
"name": "Craig Richards"
}
],
"countries": [
{
"id": "7yasa43laq1nb2e6f8bfuvxed",
"name": "World"
},
{
"id": "258l9t5sm55592i08mdpqzr3t",
"name": "United Kingdom"
}
],
"dotsLastUpdateTime": 1673979749396,
"expireTime": 4800000000000,
"fixtureDate": {},
"headline": "Buatsi vs. Richards: Preview",
"id": "e7hm75ui4dma1mm4j8q5v7914",
"importance": 0,
"languageKeys": ["en"],
"languages": ["en"],
"lastUpdateTime": {
"ts": 1653088281000,
"iso8601": "2022-05-20T23:11:21.000Z"
},
"promoImageUrl": null,
"publication": {
"typeId": "1plcw0iyhx9vn1fcanbm2ja3rf",
"typeName": "Shoulder"
},
"publishedTime": {
"ts": 1653088281000,
"iso8601": "2022-05-20T23:11:21.000Z"
},
"region": "row",
"shortHeadline": null,
"sourceType": "article",
"sports": [
{
"id": "2x2oqzx60orpoeugkd754ga17",
"name": "Boxing"
}
],
"teaser": "",
"thumbnailImageUrl": "https://images.daznservices.com/di/library/babcock_canada/45/3e/the-dazn-boxing-show-20052022_xc4jbfqi022l1shq9lu641h9e.png?t=-477976832",
"translations": {}
}
}
I get the following validation error from elasticsearch:
{
"ok": false,
"errors": {
"validation": [
{
"message": "\"query.bool.must_not\" is not allowed",
"path": [
"query",
"bool",
"must_not"
],
"type": "object.unknown",
"context": {
"child": "must_not",
"label": "query.bool.must_not",
"value": [
{
"terms_set": {
"allowedCountries": {
"terms": [
"gb",
"mx"
],
"minimum_should_match_script": {
"source": "2"
}
}
}
}
],
"key": "must_not"
}
}
]
},
"correlationId": "d29e9275-9ab3-4ff8-944d-852b98d4b503"
}
And I cannot figure out what the issue might be! From the elastic docs it should be OK.
I'm using ElasticSearch 7.9.3 running in a local docker container.
I'm hoping someone out there will give me a clue!
Cheers!
I would expect this to just work.
I'm trying to filter out articles that have both of the country codes gb and mx in the field allowedCountries.
I can include them easily enough in the results when I add the terms_set query to the bool.must section of the query.
It works fine; you just need to enclose your query in a top-level query section:
{
"query": { <--- add this
"bool": { <--- your query starts here
"must": [
...
Thank you for responding!
I was helping with a system I did not have full context on. It turns out there was a proxy in the mix whose validation was blocking the must_not clause; with the proxy fixed, it now works.
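Since the rejection came from a validation layer in front of Elasticsearch rather than from Elasticsearch itself, one way to confirm this kind of issue is to POST the same body straight to the cluster, bypassing the proxy. A minimal sketch, assuming the Docker container exposes the default port 9200 (index name taken from the sample hit, clauses trimmed to the ones under suspicion):
POST http://localhost:9200/content-items-v10/_search
{
  "query": {
    "bool": {
      "must_not": [
        {
          "terms_set": {
            "allowedCountries": {
              "terms": ["gb", "mx"],
              "minimum_should_match_script": { "source": "2" }
            }
          }
        }
      ],
      "filter": [
        { "term": { "region": "row" } },
        { "term": { "sourceType": "article" } }
      ]
    }
  }
}
If Elasticsearch accepts this directly, the "object.unknown" error is coming from whatever sits in between.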

How to update a text type field in Elasticsearch to a keyword field, where each word becomes a keyword in a list?

I’m looking to update a field in Elasticsearch from text to keyword type.
I’ve tried changing the type from text to keyword in the mapping and then reindexing, but with this method the entire text value is converted into one big keyword. For example, ‘limited time offer’ is converted into one keyword, rather than being broken up into something like ['limited', 'time', 'offer'].
Is it possible to change a text field into a list of keywords, rather than one big keyword? Also, is there a way to do this with only a mapping change and then reindexing?
You need to create a new index and reindex into it using an ingest pipeline that splits the text into a list of words.
Pipeline
POST _ingest/pipeline/_simulate
{
"pipeline": {
"processors": [
{
"split": {
"field": "items",
"target_field": "new_list",
"separator": " ",
"preserve_trailing": true
}
}
]
},
"docs": [
{
"_index": "index",
"_id": "id",
"_source": {
"items": "limited time offer"
}
}
]
}
Results
{
"docs": [
{
"doc": {
"_index": "index",
"_id": "id",
"_version": "-3",
"_source": {
"items": "limited time offer",
"new_list": [
"limited",
"time",
"offer"
]
},
"_ingest": {
"timestamp": "2022-11-11T14:49:15.9814242Z"
}
}
}
]
}
Steps
1 - Create a new index
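The answer does not show the destination index itself; here is a minimal sketch, assuming the split words should land in a keyword field (the field name new_list matches what the pipeline below writes, and idx_02 is the destination used in step 3):
PUT idx_02
{
  "mappings": {
    "properties": {
      "items": { "type": "text" },
      "new_list": { "type": "keyword" }
    }
  }
}
A keyword field accepts an array of values, so each word in new_list becomes its own keyword, which is what the question asks for.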
2 - Create a pipeline
PUT _ingest/pipeline/split_words_field
{
"processors": [
{
"split": {
"field": "items",
"target_field": "new_list",
"separator": " ",
"preserve_trailing": true
}
}
]
}
3 - Reindex with pipeline
POST _reindex
{
"source": {
"index": "idx_01"
},
"dest": {
"index": "idx_02",
"pipeline": "split_words_field"
}
}
Example:
PUT _ingest/pipeline/split_words_field
{
"processors": [
{
"split": {
"field": "items",
"target_field": "new_list",
"separator": " ",
"preserve_trailing": true
}
}
]
}
POST idx_01/_doc
{
"items": "limited time offer"
}
POST _reindex
{
"source": {
"index": "idx_01"
},
"dest": {
"index": "idx_02",
"pipeline": "split_words_field"
}
}
GET idx_02/_search
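To confirm that each word is now its own keyword, a term query against the new field should match on a single word exactly (a sketch, assuming new_list was mapped as keyword as described above):
GET idx_02/_search
{
  "query": {
    "term": {
      "new_list": "limited"
    }
  }
}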

Elastic Search Wildcard query with space failing 7.11

My data is indexed in Elasticsearch 7.11. This is the mapping I got when I added documents directly to my index:
{"properties":{"name":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}
I haven't added the keyword part myself and have no idea where it came from.
I am running a wildcard query on it, but I am unable to get results for values containing spaces.
{
"query": {
"bool":{
"should":[
{"wildcard": {"name":"*hello world*"}}
]
}
}
}
I have seen many answers related to not_analyzed, and I have tried updating {"index": "true"} in the mapping, but with no luck. How do I make the wildcard search work in this version of Elasticsearch?
I tried adding the wildcard field type:
PUT http://localhost:9001/indexname/_mapping
{
"properties": {
"name": {
"type" :"wildcard"
}
}
}
And got the following response:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "mapper [name] cannot be changed from type [text] to [wildcard]"
}
],
"type": "illegal_argument_exception",
"reason": "mapper [name] cannot be changed from type [text] to [wildcard]"
},
"status": 400
}
Here is a sample document that should match:
{
"_index": "accelerators",
"_type": "_doc",
"_id": "602ec047a70f7f30bcf75dec",
"_score": 1.0,
"_source": {
"acc_id": "602ec047a70f7f30bcf75dec",
"name": "hello world example",
"type": "Accelerator",
"description": "khdkhfk ldsjl klsdkl",
"teamMembers": [
{
"userId": "karthik.r#gmail.com",
"name": "Karthik Ganesh R",
"shortName": "KR",
"isOwner": true
},
{
"userId": "anand.sajan#gmail.com",
"name": "Anand Sajan",
"shortName": "AS",
"isOwner": false
}
],
"sectorObj": [
{
"item_id": 14,
"item_text": "Cross-sector"
}
],
"geographyObj": [
{
"item_id": 4,
"item_text": "Global"
}
],
"technologyObj": [
{
"item_id": 1,
"item_text": "Artificial Intelligence"
}
],
"themeColor": 1,
"mainImage": "assets/images/Graphics/Asset 35.svg",
"features": [
{
"name": "Ideation",
"icon": "Asset 1007.svg"
},
{
"name": "Innovation",
"icon": "Asset 1044.svg"
},
{
"name": "Strategy",
"icon": "Asset 1129.svg"
},
{
"name": "Intuitive",
"icon": "Asset 964.svg"
}
],
"logo": {
"actualFileName": "",
"fileExtension": "",
"fileName": "",
"fileSize": 0,
"fileUrl": ""
},
"customLogo": {
"logoColor": "#B9241C",
"logoText": "EC",
"logoTextColor": "#F6F6FA"
},
"collaborators": [
{
"userId": "muhammed.arif#gmail.com",
"name": "muhammed Arif P T",
"shortName": "MA"
},
{
"userId": "anand.sajan#gmail.com",
"name": "Anand Sajan",
"shortName": "AS"
}
],
"created_date": "2021-02-18T19:30:15.238000Z",
"modified_date": "2021-03-11T11:45:49.583000Z"
}
}
You cannot modify a field mapping once created. However, you can create another sub-field of type wildcard, like this:
PUT http://localhost:9001/indexname/_mapping
{
"properties": {
"name": {
"type": "text",
"fields": {
"wildcard": {
"type" :"wildcard"
},
"keyword": {
"type" :"keyword",
"ignore_above":256
}
}
}
}
}
When the mapping is updated, you need to reindex your data so that the new field gets indexed, like this:
POST http://localhost:9001/indexname/_update_by_query
And then when this finishes, you'll be able to query on this new field like this:
{
"query": {
"bool": {
"should": [
{
"wildcard": {
"name.wildcard": "*hello world*"
}
}
]
}
}
}
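One more thing to keep in mind: wildcard queries are case sensitive by default, so *hello world* will not match a value stored as "Hello World". On 7.10 and later (the question is on 7.11) the wildcard query accepts a case_insensitive flag; a sketch, assuming the name.wildcard sub-field created above:
{
  "query": {
    "bool": {
      "should": [
        {
          "wildcard": {
            "name.wildcard": {
              "value": "*hello world*",
              "case_insensitive": true
            }
          }
        }
      ]
    }
  }
}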

How to search for a nested key in kibana

I have Kibana documents that look like this:
{
"_index": "echo.caspian-test.2020-06-11.idx.2",
"_type": "status",
"_id": "01754abe95fd084495da20646194fdf7",
"_score": 1,
"_source": {
"applicationVersion": "9f80e49dea1c647fa1baf2e70665aba3a74158eb",
"echoClientVersion": "1.5.1",
"echoMetadata": {
"transportType": "echo"
},
"dataCenter": "hdc-digital-non-prod",
"echoLoggerVersion": "EchoLogbackAppender-1.5.1",
"host": "e22ab1e4-9256-438b-5855-ad04",
"type": "INFO",
"message": "AddUpdate process method ends",
"messageDetail": {
"logger": "com.kroger.cxp.app.transformer.processor.AddUpdateTransformerImpl",
"thread": "DispatchThread: [com.ibm.mq.jmqi.remote.impl.RemoteSession[:/1f6e1b6c][connectionId=414D5143514D2E4150504C2E54455354967C7F5F0407B82E]]"
},
"routingKey": "caspian-test",
"timestamp": "1603276805250"
},
"fields": {
"timestamp": [
"2020-10-21T10:40:05.250Z"
]
}
}
I need to search for all the docs having a particular connectionId, which is present in:
"messageDetail": {
"logger": "com.kroger.cxp.app.transformer.processor.AddUpdateTransformerImpl",
"thread": "DispatchThread: [com.ibm.mq.jmqi.remote.impl.RemoteSession[:/1f6e1b6c][connectionId=414D5143514D2E4150504C2E54455354967C7F5F0407B82E]]"
}
How can I do that? I have tried searching for messageDetail.thread=%$CONNECTION_ID% but it didn't work.
You need to add a nested path in your search query to make it work, and your messageDetail field must be of the nested datatype, something like below:
{
"query": {
"nested": {
"path": "messageDetail", --> note this
"query": {
"bool": {
"must": [
{
"match": {
"messageDetail. thread": "CONNECTION_ID"
}
}
]
}
}
}
}
}
Adding a working sample with mapping, search query, and result
Index mapping
{
"mappings": {
"properties": {
"messageDetail": {
"type" : "nested"
}
}
}
}
Index sample doc
{
"applicationVersion": "9f80e49dea1c647fa1baf2e70665aba3a74158eb",
"echoClientVersion": "1.5.1",
"echoMetadata": {
"transportType": "echo"
},
"dataCenter": "hdc-digital-non-prod",
"echoLoggerVersion": "EchoLogbackAppender-1.5.1",
"host": "e22ab1e4-9256-438b-5855-ad04",
"type": "INFO",
"message": "AddUpdate process method ends",
"messageDetail": {
"logger": "com.kroger.cxp.app.transformer.processor.AddUpdateTransformerImpl",
"thread": "DispatchThread: [com.ibm.mq.jmqi.remote.impl.RemoteSession[:/1f6e1b6c][connectionId=414D5143514D2E4150504C2E54455354967C7F5F0407B82E]]"
},
"routingKey": "caspian-test",
"timestamp": "1603276805250"
}
And the search query:
{
"query": {
"nested": {
"path": "messageDetail",
"query": {
"bool": {
"must": [
{
"match": {
"messageDetail.thread": "DispatchThread"
}
}
]
}
}
}
}
}
And the search result:
"hits": [
{
"_index": "nestedmsg",
"_type": "_doc",
"_id": "1",
"_score": 0.2876821,
"_source": {
"applicationVersion": "9f80e49dea1c647fa1baf2e70665aba3a74158eb",
"echoClientVersion": "1.5.1",
"echoMetadata": {
"transportType": "echo"
},
"dataCenter": "hdc-digital-non-prod",
"echoLoggerVersion": "EchoLogbackAppender-1.5.1",
"host": "e22ab1e4-9256-438b-5855-ad04",
"type": "INFO",
"message": "AddUpdate process method ends",
"messageDetail": {
"logger": "com.kroger.cxp.app.transformer.processor.AddUpdateTransformerImpl",
"thread": "DispatchThread: [com.ibm.mq.jmqi.remote.impl.RemoteSession[:/1f6e1b6c][connectionId=414D5143514D2E4150504C2E54455354967C7F5F0407B82E]]"
},
"routingKey": "caspian-test",
"timestamp": "1603276805250"
}
}
]
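If the search has to happen from the Kibana search bar rather than via the query DSL, KQL has a nested-field syntax that can express the same thing. A sketch, assuming messageDetail is mapped as nested as above and thread is an analyzed text field (the standard analyzer splits the connection ID out as its own token, so the hex value can be matched directly):
messageDetail:{ thread: "414D5143514D2E4150504C2E54455354967C7F5F0407B82E" }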

Elasticsearch OR filtered query does not return results

I have the following data set:
{
"_index": "myIndex",
"_type": "myType",
"_id": "220005",
"_score": 1,
"_source": {
"id": "220005",
"name": "Some Name",
"type": "myDataType",
"doc_as_upsert": true
}
}
Doing a direct match query like so:
GET typo3data/destination/_search
{
"query": {
"match": {
"name": "Some Name"
}
},
"size": 500
}
Will return the data just fine:
"hits": {
"total": 1,
"max_score": 3.442347,
"hits": [...
Doing an OR-query however (I am not sure which syntax is correct, the first syntax is taken from elasticsearch docs, the second is a working query taken from another project with the same versions):
GET typo3data/destination/_search
{
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"or": {
"filters": [
{
"term": {
"name": "Some Name"
}
}
]
}
}
}
},
"size": 500
}
or
{
"query":
{
"match_all": {}
},
"filter":
{
"or":
[
{ "term": { "name": "Some Name"} },
{ "term": { "name": "Some Other Name"} }
]
},
"size": 1000
}
Does not return anything.
The mapping for the name field is:
"name": {
"type": "string",
"index": "not_analyzed"
}
Elasticsearch version is 1.4.4.
When indexing "some name" , this is broken into tokens as follows -
"some name" => [ "some" , "name" ]
A normal match query runs the same analysis before matching. If either "some" or "name" is present, that document qualifies as a result:
match query ("some name") => search for term "some" or "name"
The term query, on the other hand, does not analyze or tokenize your input. It looks for the exact token "some name", which is not present in the index:
term query ("some name") => search for term "some name"
Hence you won't see any results.
Things should work fine if you make the field not_analyzed, but then make sure the case also matches.
You can read more about the same here.
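For the OR itself, a terms filter already ORs its values, so on a truly not_analyzed field something along these lines should work (a sketch in the 1.x filtered-query syntax the question already uses; the values must match case exactly):
GET typo3data/destination/_search
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "terms": {
          "name": [ "Some Name", "Some Other Name" ]
        }
      }
    }
  },
  "size": 500
}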
After extending our mapping to include every field we have:
PUT typo3data/_mapping/destination
{
"someType": {
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "string",
"index": "not_analyzed"
},
"parentId": {
"type": "integer"
},
"type": {
"type": "string"
},
"generatedUid": {
"type": "integer"
}
}
}
}
The or-filters started working. So the general answer is: if you run into a problem like this, check your mappings closely; it is better to do too much work on them than too little.
If someone has an explanation for why this happens, I will gladly pass the accepted-answer mark on to them.
