I have a local ES container running at http://elasticsearch.net:9202, and I see a cluster named elasticsearch-dev with 1 node in it.
When I make a Postman call to it like the following: http://elasticsearch.localnet:9202/elasticsearch-dev/*/_search?q=order_number:102811901637201
I get:
{
"error": {
"root_cause": [
{
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_or_alias",
"resource.id": "elasticsearch-dev",
"index_uuid": "_na_",
"index": "elasticsearch-dev" --------------------------- cluster name is reported back in as index name.
}
],
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_or_alias",
"resource.id": "elasticsearch-dev",
"index_uuid": "_na_",
"index": "elasticsearch-dev"
},
"status": 404
}
But if I query http://elasticsearch.localnet:9202/*/_search?q=order_number:102811901637201 without the cluster name, it works fine (200 response).
Does an ES cluster deployed locally (1 node) have a different URL pattern?
When you are running Elasticsearch on your local system, the URL for the index or search API has the following format:
http://{{hostname}}:{{es-port}}/{{index-name}}/_search
When running Elasticsearch on an AWS EC2 instance, you can use the same pattern:
http://{{public-ip-of-cluster}}:{{es-port}}/{{index-name}}/_search
In both cases the first path segment is an index name (or alias), never the cluster name. That is why the URL containing elasticsearch-dev fails with index_not_found_exception: Elasticsearch treats it as an index name.
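For example, assuming the order documents live in an index named orders (a hypothetical name; substitute your actual index), the search from the question would be:

GET http://elasticsearch.localnet:9202/orders/_search?q=order_number:102811901637201

If you are unsure of the index names on the node, list them first with GET http://elasticsearch.localnet:9202/_cat/indices?v and use one of those names in the path.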
Related
I use the tasks API for the Elasticsearch 6.8 reindex process in my project. Sometimes I face an issue where, on some Elasticsearch installations, this API doesn't work properly. For example, I create a reindex task with the command:
POST _reindex?wait_for_completion=false
{
"source": {
"index": "index1"
},
"dest": {
"index": "index2"
}
}
As a response I get a task id.
But when I try to check the status of this task with the command
GET _tasks/{task-id}
instead of the task status, as in the normal case, I receive the following:
"error": {
"root_cause": [{
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_expression",
"resource.id - Registered at Namecheap.com ": ".tasks",
"index_uuid": "na",
"index": ".tasks"
}
],
"type": "resource_not_found_exception",
"reason": "task [epsVDuiBRO-IJBqCB2zHXQ:974632] isn't running and hasn't stored its results",
"caused_by": {
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_expression",
"resource.id - Registered at Namecheap.com ": ".tasks",
"index_uuid": "na",
"index": ".tasks"
}
},
"status": 404
}
Is there any way to recover the tasks API without reinstalling Elasticsearch? I haven't succeeded in finding one in the documentation.
Thanks in advance.
Problem solved. The issue was related to our own mapping with the index pattern "*". It seems that by default Elasticsearch doesn't create the .tasks index; it creates it only when the first command using the tasks API runs and adds a new task document to that index. In my case, Elasticsearch couldn't add the first .tasks document because some fields of that document conflicted with fields of the same name in our own mapping. The solution is to change the pattern of our mapping, or to explicitly create the mapping for the .tasks index before putting our own mapping.
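As an illustration of the first option, here is a sketch using the legacy index template API available in 6.8. The template name app-template, the pattern app-*, and the status field are assumptions; the point is that the narrowed pattern no longer matches system indices such as .tasks, so their built-in mappings are left alone:

PUT _template/app-template
{
  "index_patterns": ["app-*"],
  "mappings": {
    "_doc": {
      "properties": {
        "status": { "type": "keyword" }
      }
    }
  }
}

With a pattern like "*", the template would also have applied to .tasks the first time it was auto-created, which is exactly the conflict described above.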
Elasticsearch (AWS version 7.1) gives me an error message when inserting WKT data into a geo_point field via the Kibana Console. This happens when trying it with the simple index
PUT my-geopoints
{
"mappings": {
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
from the page https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html, or with my own index that has a geo_point field.
When running in the Kibana Console:
PUT my-geopoints/_doc/5
{
"text": "Geopoint as a WKT POINT primitive",
"location" : "POINT (-71.34 41.12)"
}
The error message I am getting is:
{
"error":{
"root_cause": [
{
"type": "parse_exception",
"reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]"
}
],
"type": "mapper_parsing_exception",
"reason": "failed to parse field [location] of type [geo_point]",
"caused_by":{
"type": "parse_exception",
"reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]"
}
}
},
"status": 400
}
This is now also happening on a bulk load of my data into a separate index that loads WKT geometry data. I can't find anything that points to a reason why. It worked yesterday and this morning, until I tried this tutorial while attempting to figure out why the geo-distance tutorial (https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-distance-query.html) wasn't allowing me to use the same index mapping for geo_shapes as for geo_points. That will be a separate question in itself.
We upgraded from Elasticsearch 7.1 to 7.10 and it fixed this particular issue.
Change the location type in the mappings from "type": "geo_point" to "type": "geo_shape" and try again.
While trying to use the LTR plugin for Elasticsearch on Elastic Cloud by making a PUT request to _ltr to initialize the plugin (ref: https://elasticsearch-learning-to-rank.readthedocs.io/en/latest/index.html), it gives back an error saying:
{
"status": 400,
"error": {
"index_uuid": "_na_",
"index": "_ltr",
"root_cause": [
{
"index_uuid": "_na_",
"index": "_ltr",
"reason": "Invalid index name [_ltr], must not start with '_', '-', or '+'",
"type": "invalid_index_name_exception"
}
],
"type": "invalid_index_name_exception",
"reason": "Invalid index name [_ltr], must not start with '_', '-', or '+'"
}
}
The error is basically about an index naming rule, but when the plugin is installed locally, it works correctly.
The problem occurs only when the plugin is installed on Elastic Cloud. Elastic Cloud gives a confirmation saying that the extension is installed, but the route still doesn't work.
I tried restarting the deployment afterwards, and it is still the same.
ES version: 2.4.1
I was given a username and password for an Elasticsearch cluster (hosted on elastic.co). When I run:
GET /_cat/indices?v
it returns:
{
"error": {
"root_cause": [
{
"type": "index_not_found_exception",
"reason": "no such index",
"index": "_all"
}
],
"type": "index_not_found_exception",
"reason": "no such index",
"index": "_all"
},
"status": 404
}
Is it because of my assigned user role?
UPDATE:
I tried all the endpoints mentioned in the answers to "list all indexes on ElasticSearch server?" and this one worked:
GET /_cluster/health?level=indices
But I am at a loss as to why other queries don't work.
We have a requirement where we need to query across multiple indices, as follows.
We are using Elasticsearch 5.1.1.
http://localhost:9200/index1,index2,index3/type1,type2/_search
query:
{
"query": {
"multi_match": {
"query": "data",
"fields": ["status"]
}
}
}
However, we may not know in advance whether each index is present; we get the following error if any of the above indices is missing.
{
"error": {
"root_cause": [
{
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_or_alias",
"resource.id": "index3",
"index_uuid": "_na_",
"index": "index3"
}
],
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_or_alias",
"resource.id": "index3",
"index_uuid": "_na_",
"index": "index3"
},
"status": 404
}
One obvious way is to check first whether the index is present, but I would like to avoid that extra call.
Note: at least 1 index will always be present.
Is it possible to avoid this exception?
Thanks in advance!!
"ignore_unavailable" is the solution for this. Pass this as a query parameter in search url.
Exa. http://localhost:9200/index1,index2/type/_search?ignore_unavailable
This will not give 404 even if either of the indices are not present
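Putting it together with the multi-index query from the question (index and type names as given there), the full request would look like this:

GET index1,index2,index3/type1,type2/_search?ignore_unavailable=true
{
  "query": {
    "multi_match": {
      "query": "data",
      "fields": ["status"]
    }
  }
}

Missing indices are then silently skipped, and only the indices that exist contribute hits to the response.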