Isn't it against REST-style approach to pass a request body together with GET request?
For instance, to filter some information in Elasticsearch:
curl localhost:9200/megacorp/employee/_search -d '{
  "query": {
    "filtered": {
      "filter": {
        "range": { "age": { "gt": 30 } }
      },
      "query": {
        "match": { "last_name": "smith" }
      }
    }
  }
}'
Some tools are even designed to prevent sending a request body with a GET request (Postman, for example).
From the RFC:
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
In other words, it's not forbidden, but the behavior is undefined and should be avoided. HTTP clients, servers, and proxies are free to drop the body without violating the standard, so relying on it is simply bad practice.
Further guidance from the HTTPbis working group (the group that maintains HTTP and related standards):
Finally, note that while HTTP allows GET requests to have a body syntactically, this is done only to allow parsers to be generic; as per RFC7231, Section 4.3.1, a body on a GET has no meaning, and will be either ignored or rejected by generic HTTP software.
source
No. It's not.
In REST, using POST for a query does not make sense. POST is supposed to modify state on the server, and a search obviously doesn't.
GET applies here very well.
For example, what would be the difference of running a search with:
GET /_search?q=foo
vs
GET /_search
{
"query": {
"query_string": {
"query" : "foo"
}
}
}
In both cases, you'd like to "GET" back some results. You don't mean to change any state on the server side.
That's why I think GET is totally applicable here, whether you pass the query within the URI or in a request body.
That being said, we are aware that some languages and tools don't allow it, even though the RFC does not say you can't have a body with a GET.
So Elasticsearch also supports POST.
This:
curl -XPOST localhost:9200/megacorp/employee/_search -d '{
  "query": {
    "filtered": {
      "filter": {
        "range": { "age": { "gt": 30 } }
      },
      "query": {
        "match": { "last_name": "smith" }
      }
    }
  }
}'
will work the same way.
You can pass the query as a query-string parameter in an Elasticsearch GET request:
just add source=query_string_body&source_content_type=application/json
The URL will look like the following (in practice the JSON value should be URL-encoded):
http://localhost:9200/index/_search/?source_content_type=application/json&source={"query":{"match_all":{}}}
ref:
https://discuss.elastic.co/t/query-elasticsearch-from-browser-webservice/129697
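Since the whole JSON body rides in the URL, it should be percent-encoded in practice. A minimal sketch with Python's standard library (the host and index name are just the illustrative ones from the example above):

```python
from urllib.parse import urlencode

# The search body that would normally go in the request payload.
query = '{"query":{"match_all":{}}}'

# Percent-encode both parameters so braces, quotes and colons are URL-safe.
params = urlencode({
    "source": query,
    "source_content_type": "application/json",
})

url = "http://localhost:9200/index/_search/?" + params
print(url)
```

The resulting URL carries the same query as the unencoded example, but survives any client or proxy that is strict about reserved characters.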
The simplest example:
GET /_search
{
"from" : 0, "size" : 10,
"query" : {
"term" : { "user" : "kimchy" }
}
}
The same, rewritten without a raw data body, as a Search URI request:
GET /_search?from=0&size=10&q=user:kimchy
Is it possible to rewrite the example similarly for a Search Template, like this:
GET /_search/template
{
"id": "sample_id_script",
"params": {
"gte": "2020-10-15 00:00:00",
"lte": "2020-10-15 23:59:59"
}
}
Yes, it's possible via the source query string parameter! You simply need to inline your JSON body, add the other &source_content_type=application/json query string parameter, and voilà! (In practice, the JSON value should be URL-encoded.)
GET /_search/template?source={"id": "sample_id_script","params": {"gte": "2020-10-15 00:00:00","lte": "2020-10-15 23:59:59"}}&source_content_type=application/json
Please note, though, that it's not the same concept as the example you're showing. In your example, we're hitting the _search endpoint and sending a query (i.e. using q=) expressed in Lucene query string syntax. It's basically the equivalent of what you would send in a query_string query.
The second case is different, because you're sending a search template via the _search/template endpoint. So even though the effect is the same (i.e. sending a payload via the query string), the semantics are different.
I am running Elasticsearch 2.3.4, but the syntax does not seem to have changed in 5.x.
Multiget over curl is working just fine. Here is what my curl looks like:
curl 'localhost:9200/_mget' -d '{
"docs" : [
{
"_index" : "logs-2017-04-30",
"_id" : "e72927c2-751c-4b33-86de-44a494abf78f"
}
]
}'
And when I want to pull the "message" field off that response, I use this request:
curl 'localhost:9200/_mget' -d '{
"docs" : [
{
"_index" : "logs-2017-04-30",
"_id" : "e72927c2-751c-4b33-86de-44a494abf78f",
"fields" : ["message"]
}
]
}'
Both of the above queries return the log and information that I am looking for.
But when I try to translate it to Java like this:
MultiGetRequestBuilder request = client.prepareMultiGet();
request.add("logs-2017-04-30", null, "e72927c2-751c-4b33-86de-44a494abf78f");
MultiGetResponse mGetResponse = request.get();
for (MultiGetItemResponse itemResponse : mGetResponse.getResponses()) {
GetResponse response = itemResponse.getResponse();
logger.debug("Outputing object: " + ToStringBuilder.reflectionToString(response));
}
I appear to be getting null objects back. When I try to grab the message field off the null-looking GetResponse object, nothing is there:
GetField field = response.getField("message"); <--- returns null
What am I doing wrong? A REST call to Elasticsearch proves the log exists, but my Java call is wrong somehow.
The documentation page for the Java multi get completely skips over the extra syntax required to retrieve data beyond the _source field. Just like the REST API, doing a multi get with the minimum information required to locate a log gets very limited information about it. In order to get specific fields from a log in a multi get call through the Java API, you must pass in a MultiGetRequest.Item to the builder. This item needs to have the fields you want specified in it before you execute the request.
Here is the code change (broken into multiple lines for clarity) that results in the fields I want being present when I make the query:
MultiGetRequestBuilder request = client.prepareMultiGet();
MultiGetRequest.Item item = new MultiGetRequest.Item("logs-2017-04-30", "null", "e72927c2-751c-4b33-86de-44a494abf78f");
item.fields("message");
request.add(item);
MultiGetResponse mGetResponse = request.get();
Now I can ask for the field I specified earlier:
GetField field = response.getField("message");
Here I am passing some parameters via GET to limit the query result, and a query_string is passed in the URL, while I am also sending a request body to filter the results.
curl -XGET 'http://localhost:9200/books/fantasy/_search?from=0&size=10&q=%2A' -d '{
"query":{
"filtered":{
"filter":{
"exists":{
"field":"speacial.ean"
}
}
}
}
}'
I just want to check: is this approach okay? Are there any downsides to doing it like this? Or should I avoid passing parameters in the URL when a body is used?
It seems to work, but is it bad practice?
GET requests are not supposed to carry a body (more information on this here). While curl might convert a GET request with a body to a POST (it does so by default when -d is used without an explicit -XGET), many tools will simply drop the body, or it might be sent to Elasticsearch but ignored because you used GET.
When executing this query in my SENSE, I get all the documents instead of just the document matching my query, proving that the body has been ignored:
GET myIndex/_search
{
"query": {
"match": {
"zlob": true
}
}
}
This example shows that you should avoid using GET for requests with a body, because the result will depend on the tool you use for your REST queries.
POST http://localhost:9200/test2/drug?pretty
{
"title": "I can do this"
}
get test2/drug/_search
{
"query" : {
"match": {
"title": "cancer"
}
}
}
The mappings are:
{
"test2": {
"mappings": {
"drug": {
"properties": {
"title": {
"type": "string"
}
}
}
}
}
}
Running the above query returns the document. I want to understand what Elasticsearch is doing behind the scenes. From looking at the output of the default analyzer, it does not tokenize "cancer" in a way that produces "can", so why is a document containing the word "can" returned, and what is causing it to match? In other words, what other processing is happening to the search query "cancer"?
Updated
Is there a command I can run on my box that will clear all indexes and everything so I have a clean slate? I ran DELETE /*, which succeeded, but I am still getting a match.
The problem with your test, if you are using Sense, is the lowercase get in the request: in Sense it should be GET (capital letters).
The explanation is related to GET vs. POST http methods.
Behind the scenes, Sense actually converts a GET request with a body to an HTTP POST (given that many browsers do not support HTTP GET requests with a request body). This means that, even if you write GET, the actual HTTP request is a POST.
Because Sense's autocomplete forces upper-case letters for request methods, it also uses upper-case letters when deciding whether a request with a body is a GET (and not a lowercase get). If it is, that request is transformed into a POST. If the comparison decides it is not a GET, Sense sends the request as is, i.e. with a get method and a body. Since the body is then ignored, what reaches Elasticsearch is effectively test2/drug/_search with no body, which is basically a match_all.
I guess that you configured an NGram filter or tokenizer in your index mappings. Let's suppose (I hope you'll confirm my hypothesis) that an Edge NGram is configured. You can check it with:
GET test2/_mapping
Then the document is tokenized: i,c,ca,can,d,do,t,th,thi,this. As a result, in the index, the token can points to the document I can do this
When you're searching cancer, the tokens c,ca,can,canc,cance,cancer are produced by the same analysis chain, and then looked for in the index. As a result your document is found.
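The hypothesis can be checked without a cluster. Here is a small Python sketch of edge n-gram tokenization, a deliberate simplification of Elasticsearch's edge_ngram filter, assuming min_gram=1, max_gram=10, lowercasing, and whitespace splitting:

```python
def edge_ngrams(text, min_gram=1, max_gram=10):
    """Lowercase, split on whitespace, then emit the prefixes of each word."""
    tokens = []
    for word in text.lower().split():
        for n in range(min_gram, min(max_gram, len(word)) + 1):
            tokens.append(word[:n])
    return tokens

# Index-time tokens for the stored document:
print(edge_ngrams("I can do this"))
# → ['i', 'c', 'ca', 'can', 'd', 'do', 't', 'th', 'thi', 'this']

# Search-time tokens for the query, analyzed with the same chain:
print(edge_ngrams("cancer"))
# → ['c', 'ca', 'can', 'canc', 'cance', 'cancer']
```

The token can appears on both sides, which is exactly why the document matches.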
With the NGram filter, you often need to configure a different analyzer for search than for indexing, for instance:
index_analyzer/analyzer: standard + edge ngram
search_analyzer: standard alone
Then if you search can you'll find documents containing can,cancer,candy... But if you search cancer, you'll only find documents containing cancer,cancerology... and so on.
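As a sketch, such index settings could look like this (the analyzer and filter names are made up for illustration; the string type and syntax match the 1.x/2.x era this thread refers to):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "my_edge_ngram": { "type": "edge_ngram", "min_gram": 1, "max_gram": 10 }
      },
      "analyzer": {
        "index_with_ngrams": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "my_edge_ngram"]
        }
      }
    }
  },
  "mappings": {
    "drug": {
      "properties": {
        "title": {
          "type": "string",
          "analyzer": "index_with_ngrams",
          "search_analyzer": "standard"
        }
      }
    }
  }
}
```

With search_analyzer set to standard, the query "cancer" stays a single token and no longer matches the indexed prefix "can".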
I have a situation where I want to filter the results not when performing a search, but rather a GET, using Elasticsearch. Basically I have a document with a status field indicating that the entity is in a "discarded" state. When performing the GET I need to check the value of this field and exclude the document if its status is indeed "discarded".
I know I can do this using a search with a term query, but what about when using a GET against the index by document ID?
Update: Upon further investigation, it seems the only way to do this is to use percolation or a search. I hope I am wrong if anyone has any suggestions I am all ears.
Just to clarify I am using the Java API.
thanks
Try something like this:
curl http://domain/my_index/_search -d '{
"filter": {
"and": [
{
"ids" : {
"type" : "my_type",
"values" : ["123"]
}
},
{
"term" : {
"discarded" : "false"
}
}
]
}
}'
NOTE: you can also use a missing filter if the discarded field does not exist on some docs.
NOTE 2: I don't think this will be markedly slower than a normal get request either...
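Note that the top-level filter and the and filter shown above belong to the Elasticsearch 1.x query DSL; on 2.x and later the same idea would roughly be expressed as a bool query with filter clauses, something like this sketch (untested):

```json
{
  "query": {
    "bool": {
      "filter": [
        { "ids": { "values": ["123"] } },
        { "term": { "discarded": "false" } }
      ]
    }
  }
}
```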