POST http://localhost:9200/test2/drug?pretty
{
  "title": "I can do this"
}
get test2/drug/_search
{
  "query": {
    "match": {
      "title": "cancer"
    }
  }
}
The mappings are:
{
  "test2": {
    "mappings": {
      "drug": {
        "properties": {
          "title": {
            "type": "string"
          }
        }
      }
    }
  }
}
Running the above query returns the document. I want to understand what Elasticsearch is doing behind the scenes. Looking at the output of the default analyzer, "cancer" is not tokenized in a way that produces "can", so why is a document containing the word "can" being returned, and what is causing this match? In other words, what other processing is happening to the search query "cancer"?
Updated
Is there a command I can run on my box that will clear all indexes and everything, so I have a clean slate? I ran DELETE /*, which succeeded, but I'm still getting a match.
The problem with your test, if you are using Sense, is the get request: in Sense it should be GET (capital letters).
The explanation is related to the GET vs. POST HTTP methods.
Behind the scenes, Sense actually converts a GET request to an HTTP POST (since many browsers do not support HTTP GET requests with a request body). This means that, even if you write GET, the actual HTTP request is a POST.
Because Sense's autocomplete forces upper-case letters for request methods, it also uses upper-case letters when deciding whether a request with a body is a GET (and not a lowercase get). If it is, the request is transformed into a POST. If it compares the request method and decides it is not a GET, it sends the request as is, meaning with a get method and with a body. Since the body is ignored, what reaches Elasticsearch is a bare get test2/drug/_search, which is basically a match_all.
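You can also bypass Sense entirely and verify this from the command line; Elasticsearch itself happily accepts a GET with a request body (a sketch, assuming a local node on port 9200):
curl -X GET "localhost:9200/test2/drug/_search?pretty" -H 'Content-Type: application/json' -d '{"query": {"match": {"title": "cancer"}}}'
Sent this way the body is preserved, so the match query actually runs instead of falling back to a match_all.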
I'd guess that you have an NGram filter or tokenizer configured in your index analysis settings. Let's suppose (I hope you'll confirm my hypothesis) that an Edge NGram is configured. You can check it with:
GET test2/_mapping
Then the document is tokenized as: i, c, ca, can, d, do, t, th, thi, this. As a result, in the index, the token can points to the document I can do this.
When you search for cancer, the tokens c, ca, can, canc, cance, cancer are produced by the same analysis chain and then looked up in the index. As a result, your document is found.
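You can see this directly with the _analyze API, run against the field's own analysis chain (a sketch; on ES 5+ the parameters go in a JSON body, on older versions they are query-string parameters):
GET test2/_analyze
{
  "field": "title",
  "text": "cancer"
}
The tokens in the response should include can, which explains the match.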
With the NGram filter, you often need to configure a different analyzer for search than for indexing, for instance:
index_analyzer/analyzer: standard + edge ngram
search_analyzer: standard alone
Then, if you search for can, you'll find documents containing can, cancer, candy... But if you search for cancer, you'll only find documents containing cancer, cancerology... and so on.
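As a sketch in current Elasticsearch syntax, such a configuration could look like this (the analyzer and filter names are mine; on the older version in this question you would set index_analyzer and search_analyzer on the type mapping instead):
PUT test2
{
  "settings": {
    "analysis": {
      "filter": {
        "edge_1_10": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 10
        }
      },
      "analyzer": {
        "autocomplete_index": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "edge_1_10"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete_index",
        "search_analyzer": "standard"
      }
    }
  }
}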
Related
Documents contain a url field with a full url. Users should be able to search for documents containing a given url by supplying a portion of the url string. The search string can be 3-15 characters long. An N-gram token filter with min_gram of 3 and max_gram of 15 would work but generates a large number of tokens for long urls. Is it possible to have ElasticSearch only generate tokens for the first 100 characters of the url field?
For example, the user should be able to search for documents containing the following url using a search string such as 'example.com' or '/foo/bar'.
https://click.example.com/foo/bar/55gft/?qs=1952934d0ee8e2368ec7f7a921e3c6202b39365b9a2d26774c8122b8555ca21fce9d2344fc08a8ba40caede5e6901a112c6e89ead40892109eb8290d70571eab
There are two ways to achieve what you want.
Option 1: Keep using ngrams as you do now, but insert a truncate token filter before the ngram one, to limit the url to its first 100 characters and only ngram the truncated value.
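A sketch of such an analysis chain (all names are placeholders; the index.max_ngram_diff setting is needed on ES 7+ because min_gram and max_gram differ by more than 1):
PUT test
{
  "settings": {
    "index.max_ngram_diff": 12,
    "analysis": {
      "filter": {
        "url_truncate": {
          "type": "truncate",
          "length": 100
        },
        "url_ngrams": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 15
        }
      },
      "analyzer": {
        "url_index_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase", "url_truncate", "url_ngrams"]
        },
        "url_search_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "url": {
        "type": "text",
        "analyzer": "url_index_analyzer",
        "search_analyzer": "url_search_analyzer"
      }
    }
  }
}
The keyword tokenizer on the search side keeps the whole 3-15 character search string as a single token, so it is looked up as one ngram rather than being split apart.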
Option 2: Use the wildcard field type, which has been created exactly for cases like this.
In your index, you should first change the type of the URL field to wildcard:
PUT test
{
  "mappings": {
    "properties": {
      "url": {
        "type": "wildcard"
      }
    }
  }
}
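Note that the type of an existing field cannot be changed in place. If your index already contains data, create a new index with the wildcard mapping and reindex into it (a sketch; the source index name is a placeholder):
POST _reindex
{
  "source": { "index": "test-old" },
  "dest": { "index": "test" }
}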
Then, you can search on that field, using the wildcard query, like this:
POST test/_search
{
  "query": {
    "wildcard": {
      "url": "*foo/bar*"
    }
  }
}
Also, read the related blog post, which shows in detail how the wildcard field type performs.
I have an Elasticsearch setup with an alias that points to many indices. I need to update a single document, but I don't know which index it resides in.
There are two ways I can accomplish this as far as I can see:
_update_by_query:
POST my-alias/_update_by_query
{
  "query": {
    "terms": {
      "_id": ["my-id-to-update"]
    }
  },
  "script": {
    "source": "ctx._source['Field'] = 'new value'"
  }
}
read (which returns the specific index) then write:
GET my-alias/_search
{
  "query": {
    "terms": {
      "_id": ["my-id-to-update"]
    }
  }
}
POST my-index-returned-from-the-get/_update/my-id-to-update
{
  "doc": {
    "Field": "new value"
  }
}
Which method is more performant?
Which method is preferred?
Is there a better way than either of these two?
The performance of both approaches will be about the same, with one difference: the first approach needs only one request, while the second needs two. So the first approach is better, as it cuts your API calls in half.
In my opinion, the first approach is also cleaner and fits better with the concept of aliases in Elasticsearch, because it hides the exact index name from your application: the application doesn't need any clue about which index its documents actually live in.
An important note about updating a document in Elasticsearch: documents are never updated in place. The existing document is flagged as deleted and a new document is created (this is due to the Lucene implementation); the flagged document is only physically removed later, during Lucene segment merging.
You can find a good blog post about segment merging here.
I am building a basic CRUD service with some business logic under the hood, and I'm about to start working on the PUT (update) endpoint. I have already fully written+tested GET (read) and POST (create) for my data object. The data store for my documents is an ElasticSearch instance on AWS.
I have some decisions to make about how I want to architect the PUT, namely, how to determine a valid request. My goal is for POST to be used only for the creation of new assets, and for PUT to only update existing documents. (At the moment I am POSTing to Elasticsearch with /_doc/; the intent is to move to /_create/ as part of this work.)
What I'm a little hung up on is the "right" way to check that a document exists before making the API call to Elastic to update it.
When a user submits a document to PUT, should I first GET it from Elastic by document ID to make sure it already exists? Or should I simply try to "update" the resource and let one be created if it doesn't exist?
Obviously there are trade-offs to each strategy. With the latter, PUTting a document that doesn't exist almost completely negates the need for a POST at all, so I'd be more inclined to go with the former - despite the additional REST call - to maintain the integrity of the basic REST definitions.
Thoughts?
Whether to update a doc (with versioning) or create a new one with some shared ID relating all previous versions depends on your use case -- either of them is 'correct', but there's too little information to advise on that right now.
With regards to the document-exists strategies -- there are essentially 2 types of IDs in ES -- what I call:
internal ids (_id)
external ids (doc_values-provided ids)
Create an index & a doc:
PUT myindex
PUT myindex/_doc/internal_id_1
{
  "external_id": "1"
}
Internal ID check
GET myindex/_doc/internal_id_1
or
GET myindex/_count
{
  "query": {
    "ids": {
      "values": [
        "internal_id_1"
      ]
    }
  }
}
or
GET myindex/_count
{
  "query": {
    "term": {
      "_id": {
        "value": "internal_id_1"
      }
    }
  }
}
External ID check
GET myindex/_count
{
  "query": {
    "term": {
      "external_id": {
        "value": "1"
      }
    }
  }
}
and many others (terms, match for partial matches, and so on).
Note that I've used the _count endpoint instead of _search -- it's slightly faster.
If you intend to check the _version of a given doc before you proceed to update it, replace _count with _search?version=true and the _version attribute will become available:
{
  "_index": "myindex",
  "_type": "_doc",
  "_id": "internal_id_1",
  "_version": 2,    <---
  "_score": 1.0,
  "_source": {
    "external_id": "1"
  }
}
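As an aside, if the goal is simply "create only if absent" or "update only if present", the APIs can enforce that on their own, without a prior existence check. A sketch using the ES 7+ endpoints: the first call fails with a 409 if the document already exists, and the second returns a 404 if it doesn't:
PUT myindex/_create/internal_id_1
{
  "external_id": "1"
}
POST myindex/_update/internal_id_1
{
  "doc": {
    "external_id": "2"
  }
}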
This seemingly simple task is not well-documented in the ElasticSearch documentation:
We have an ElasticSearch instance with an index that has a field called sourceId. What API call would I make to first GET all documents with 100 in the sourceId field (to verify the results before deletion), and then to DELETE those same documents?
You probably need to make two API calls here: the first to view the count of documents, the second to perform the deletion.
The query is the same in both cases; only the endpoints differ. I'm also assuming sourceId is of type keyword.
Query to Verify
POST <your_index_name>/_search
{
  "size": 0,
  "query": {
    "term": {
      "sourceId": "100"
    }
  }
}
Execute the above term query and take note of hits.total in the response.
Remove the "size": 0 from the query if you want to view the full documents in the response.
Once you have verified the details, you can go ahead and perform the deletion using the same query, as shown below; note the different endpoint though.
Query to Delete
POST <your_index_name>/_delete_by_query
{
  "query": {
    "term": {
      "sourceId": "100"
    }
  }
}
Once you execute the delete-by-query, notice the deleted field in the response. It should show you the same number.
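An abridged response looks something like this (the numbers are purely illustrative):
{
  "took": 42,
  "timed_out": false,
  "total": 3,
  "deleted": 3,
  "version_conflicts": 0,
  "failures": []
}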
I've used term queries here, but you can also use a match query or a more complex bool query. Just make sure the query matches exactly the documents you intend to delete.
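For example, a bool query narrowing the deletion further could look like this (the createdAt field is hypothetical):
POST <your_index_name>/_delete_by_query
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "sourceId": "100" } },
        { "range": { "createdAt": { "lt": "now-30d" } } }
      ]
    }
  }
}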
Hope it helps!
To delete all the documents of an index without deleting its mapping and settings:
POST /my_index/_delete_by_query?conflicts=proceed&pretty
{
  "query": {
    "match_all": {}
  }
}
See: https://opster.com/guides/elasticsearch/search-apis/elasticsearch-delete-by-query/
So I've set up the following data set so I can test searching on a field storing multiple values:
post /test/participant
{
  "Synonyms": [ "foo" ]
}
post /test/participant
{
  "Synonyms": [ "bar" ]
}
post /test/participant
{
  "Synonyms": [ "foo", "bar" ]
}
I've tried to get some data back by trying something like:
get /test/participant/_search
{
  "query": {
    "filtered": {
      "filter": {
        "term": { "Synonyms": "foo" }
      }
    }
  }
}
and I was expecting to get back the first and third records (see the order above). However, I keep getting all the records back. I've tried no end of alterations to the query to try to get something sensible (there's not enough space to add them all here), and all I keep getting is every record in the index. Does anyone have an idea how I would query to get back the records with "foo" as a value (the 1st and 3rd)? And is there some subtle point I've been missing here? I'm aware that ElasticSearch does not store the values as an array but as an unordered collection.
I think you are running these queries in Sense, right?
The commands you need are these:
POST /test/participant
{"Synonyms":["foo"]}
POST /test/participant
{"Synonyms":["bar"]}
POST /test/participant
{"Synonyms":["foo","bar"]}
GET /test/participant/_search
{
  "query": {
    "filtered": {
      "filter": {
        "term": {
          "Synonyms": "foo"
        }
      }
    }
  }
}
The explanation is related to the GET vs. POST HTTP methods.
Behind the scenes, Sense actually converts a GET request to an HTTP POST (since many browsers do not support HTTP GET requests with a request body). This means that, even if you write GET, the actual HTTP request is a POST.
Because Sense's autocomplete forces upper-case letters for request methods, it also uses upper-case letters when deciding whether a request with a body is a GET (and not a lowercase get). If it is, the request is transformed into a POST. If it compares the request method and decides it is not a GET, it sends the request as is, meaning with a get method and with a body. Since the body is ignored, what reaches Elasticsearch is a bare get /test/participant/_search, which is basically a match_all and, of course, returns all documents :-).