LTR plugin of elasticsearch not working on elastic cloud - elasticsearch

While trying to use the LTR plugin for Elasticsearch on Elastic Cloud by making a PUT request to _ltr to initialize the feature store (ref: https://elasticsearch-learning-to-rank.readthedocs.io/en/latest/index.html), it responds with an error:
{
  "status": 400,
  "error": {
    "index_uuid": "_na_",
    "index": "_ltr",
    "root_cause": [
      {
        "index_uuid": "_na_",
        "index": "_ltr",
        "reason": "Invalid index name [_ltr], must not start with '_', '-', or '+'",
        "type": "invalid_index_name_exception"
      }
    ],
    "type": "invalid_index_name_exception",
    "reason": "Invalid index name [_ltr], must not start with '_', '-', or '+'"
  }
}
The error is just the standard index-naming rule, but when the plugin is installed locally, the same request works correctly.
The problem occurs only when the plugin is installed on Elastic Cloud. Elastic Cloud confirms that the extension is installed, but the route still doesn't work.
Restarting the deployment afterwards made no difference.
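For reference, a minimal sketch of the initialization call from the LTR docs, plus a way to list which plugins each node actually loaded (the error above suggests the plugin's REST handler is not registered, so Elasticsearch falls through to the create-index API and rejects _ltr as an index name):
PUT _ltr

GET _cat/plugins?v
If the handler were registered, PUT _ltr would return an acknowledgment rather than an invalid_index_name_exception.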

Related

Import dashboard from kibana 7.5.1 to kibana 7.4.1

I need to import a dashboard from Kibana 7.5.1 (prod) into Kibana 7.4.1 (test). If I can't do that, I'll need to create a new dashboard in Kibana (test) from scratch. However, Kibana's docs (https://www.elastic.co/guide/en/kibana/current/managing-saved-objects.html) note: "Exported saved objects are not backwards compatible and cannot be imported into an older version of Kibana."
When I call the import API in the console to import the dashboard from 7.5.1 into the 7.4.1 Kibana, it returns a mapper_parsing_exception error. Is there any way to modify the dashboard.ndjson file so it can be imported into an older version of Kibana?
POST /api/saved_objects/_import
{
  "file" : "C:\Users\dashboards-kibana\EKC-Dashboard-Prod.ndjson"
}
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse",
    "caused_by": {
      "type": "json_parse_exception",
      "reason": "Unrecognized character escape 'U' (code 85)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@33da7471; line: 2, column: 17]"
    }
  },
  "status": 400
}
You will need to rebuild it; as the docs say, you cannot import exports from newer versions into older ones.
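As an aside, the json_parse_exception itself is unrelated to the version gap: \U in "C:\Users" is an invalid JSON escape, and the saved objects import API expects the file as multipart/form-data rather than a path inside a JSON body. A minimal sketch, assuming Kibana runs at localhost:5601 and the export file is in the current directory:
curl -X POST "http://localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@EKC-Dashboard-Prod.ndjson
Even with the request fixed, a 7.5.1 export still cannot be imported into 7.4.1, per the docs quoted above.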

elastic search tasks api returns index not found

I use the tasks API with Elasticsearch 6.8 for my project's reindex process. Sometimes, on some Elasticsearch installations, this API doesn't work properly. For example, I create a reindex task with the command:
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "index1"
  },
  "dest": {
    "index": "index2"
  }
}
As a response I get a task id.
But when I try to check the status of this task with
GET _tasks/{task-id}
instead of the task status I receive the following:
"error": {
"root_cause": [{
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_expression",
"resource.id - Registered at Namecheap.com ": ".tasks",
"index_uuid": "na",
"index": ".tasks"
}
],
"type": "resource_not_found_exception",
"reason": "task [epsVDuiBRO-IJBqCB2zHXQ:974632] isn't running and hasn't stored its results",
"caused_by": {
"type": "index_not_found_exception",
"reason": "no such index",
"resource.type": "index_expression",
"resource.id - Registered at Namecheap.com ": ".tasks",
"index_uuid": "na",
"index": ".tasks"
}
},
"status": 404
}
Is there any way to recover the tasks API without reinstalling Elasticsearch? I haven't managed to find anything in the documentation.
Thanks in advance.
Problem solved. The issue was related to our own index template with the index pattern "*". It seems that by default Elasticsearch doesn't create the .tasks index; it creates it only when the first tasks API command runs and stores a task document in it. In my case Elasticsearch couldn't add the first .tasks document because some of its fields conflicted with the same fields in our own mapping. The solution is to change the pattern of our template, or to explicitly create the mapping for the .tasks index before putting our own.
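A minimal sketch of the failure mode described above, in 6.x template syntax; the template name and the conflicting field are hypothetical:
PUT _template/catch_all
{
  "index_patterns": ["*"],
  "mappings": {
    "_doc": {
      "properties": {
        "task": { "type": "keyword" }
      }
    }
  }
}
Because the "*" pattern also matches the internal .tasks index, the first stored task document (whose task field is an object) can no longer be indexed. Narrowing "index_patterns" to something like ["myapp-*"] avoids the clash.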

Index not found when having cluster name in URL path

I have a local ES container running at http://elasticsearch.localnet:9202 and I see a cluster named elasticsearch-dev with 1 node in it.
When I make a Postman call to it like the following: http://elasticsearch.localnet:9202/elasticsearch-dev/*/_search?q=order_number:102811901637201
I get
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "resource.type": "index_or_alias",
        "resource.id": "elasticsearch-dev",
        "index_uuid": "_na_",
        "index": "elasticsearch-dev"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "resource.type": "index_or_alias",
    "resource.id": "elasticsearch-dev",
    "index_uuid": "_na_",
    "index": "elasticsearch-dev"
  },
  "status": 404
}
Note how the cluster name is reported back as the index name.
But if I query http://elasticsearch.localnet:9202/*/_search?q=order_number:102811901637201 without the cluster name, it works fine (200 response).
Does an ES cluster deployed locally (1 node) have a different URL pattern?
The cluster name is never part of the URL path; the first path segment is always interpreted as an index name or alias, which is why elasticsearch-dev is reported back as a missing index. When you run Elasticsearch on your local system, the URL for the index or search APIs has the format
http://{{hostname}}:{{es-port}}/{{index-name}}/_search
When running Elasticsearch on an AWS EC2 instance, the same form applies:
http://{{public-ip-of-cluster}}:{{es-port}}/{{index-name}}/_search
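A quick way to confirm the cluster name without putting it in the path is to hit the root endpoint, which reports it in the response body (response abridged, node name illustrative):
GET http://elasticsearch.localnet:9202/
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch-dev",
  ...
}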

elasticsearch index field of type geo_point won't allow PUT of WKT coordinates

Elasticsearch (AWS, version 7.1) gives me an error message when inserting WKT data into a geo_point field via the Kibana Console. This happens when trying it with the simple index
PUT my-geopoints
{
  "mappings": {
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
  }
}
from the website https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html, as well as with my own index that has a geo_point field.
When running in the Kibana Console:
PUT my-geopoints/_doc/5
{
  "text": "Geopoint as a WKT POINT primitive",
  "location" : "POINT (-71.34 41.12)"
}
The error message I am getting is:
{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse field [location] of type [geo_point]",
    "caused_by": {
      "type": "parse_exception",
      "reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]"
      }
    }
  },
  "status": 400
}
This is now also happening on a bulk load of my data into a separate index that loads WKT geometry data. I can't find anything that points to a reason why. Yesterday and this morning it worked, until I tried this tutorial while trying to figure out why the geo-distance tutorial (https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-distance-query.html) wasn't letting me use the same index mapping for geo_shapes as for geo_points. That will be a separate question in itself.
We upgraded from Elasticsearch 7.1 to 7.10 and it fixed this particular issue.
Change the location type in the mappings from "type": "geo_point" to "type": "geo_shape" and try again.

_cat/indices gives _all not found

ES version: 2.4.1
I was given a username and password for an Elasticsearch cluster (hosted on elastic.co). When I run:
GET /_cat/indices?v
it returns:
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "index": "_all"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "index": "_all"
  },
  "status": 404
}
Is it because of my assigned user role?
UPDATE:
I tried all the endpoints mentioned in the answers to "List all indexes on ElasticSearch server?" and this one worked:
GET /_cluster/health?level=indices
But I am at a loss as to why other queries don't work.
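Quite possibly, yes. A hedged guess: with security enabled, a role without read privileges on any index can cause _cat/indices to resolve _all to nothing and return index_not_found_exception, while cluster-level endpoints like _cluster/health still answer. One thing to try, assuming your role grants access to some pattern (logstash-* here is hypothetical), is to scope the cat request:
GET /_cat/indices/logstash-*?v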
