I'm starting out with Elasticsearch. I installed Elasticsearch and Kibana on my Mac (macOS Sierra) without errors. Now I'm trying to load the demo data to explore ES with this tutorial: https://www.elastic.co/guide/en/kibana/current/tutorial-load-dataset.html
Everything works fine, but I have two problems:
1) I get an error at
PUT /logstash-2015.05.20 {...}
Error message:
{
  "error": {
    "root_cause": [
      {
        "type": "index_already_exists_exception",
        "reason": "index [logstash-2015.05.20/1i-pAxzaTpWscYud0Ufczg] already exists",
        "index_uuid": "1i-pAxzaTpWscYud0Ufczg",
        "index": "logstash-2015.05.20"
      }
    ],
    "type": "index_already_exists_exception",
    "reason": "index [logstash-2015.05.20/1i-pAxzaTpWscYud0Ufczg] already exists",
    "index_uuid": "1i-pAxzaTpWscYud0Ufczg",
    "index": "logstash-2015.05.20"
  },
  "status": 400
}
Can I ignore this message if the index already exists, so that everything is fine?
2) At the next step I get another error:
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
Error message:
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "child \"method\" fails because [\"method\" must be one of [HEAD, GET, POST, PUT, DELETE]]",
  "validation": {
    "source": "query",
    "keys": [
      "method"
    ]
  }
}
Thanks for any help or hints.
Look at the file. It contains one line for the index/mapping and other settings:
{"index":{"_id":"1"}}
and on the next line the document itself:
{"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"gender":"M","address":"880 Holmes Lane","employer":"Pyrami","email":"amberduke@pyrami.com","city":"Brogan","state":"IL"}
These two lines are repeated for each document to be indexed via the _bulk API. Just try to PUT one of the documents into your index:
PUT localhost:9200/bank/account/1
{"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"gender":"M","address":"880 Holmes Lane","employer":"Pyrami","email":"amberduke@pyrami.com","city":"Brogan","state":"IL"}
Are you familiar with the Sense plugin for Chrome? It can be very helpful for Elasticsearch development :-)
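If you do want to send those lines through the _bulk endpoint, the payload must be newline-delimited JSON with a trailing newline. A minimal sketch (the file name bulk.ndjson and the shortened document are just examples):

```shell
# Build a minimal two-line NDJSON bulk payload: the action line, then the
# document line, each terminated by a newline (_bulk requires the trailing one).
printf '%s\n%s\n' \
  '{"index":{"_id":"1"}}' \
  '{"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke"}' \
  > bulk.ndjson

# The @ prefix tells curl to read the request body from the file:
# curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @bulk.ndjson
```

Note that the tutorial uses `--data-binary` rather than `-d`/`--data` on purpose: with `--data`, curl strips newlines when reading from a file, which would break the NDJSON format.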
Related
I have two instances of the same version of Elasticsearch with the same data.
I am now trying to _reindex an index with:
curl -X POST -H 'Content-Type: application/json' 'localhost:9200/_reindex' -d '{"source": {"index": "my_index_v1"}, "dest": { "index": "my_index_v2" }}' | jq
On one of the machines it works correctly and the new index is created. However, on the second machine it ends with:
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index [my_index_v2] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*,.monitoring-*]) doesn't match",
        "index_uuid": "_na_",
        "index": "my_index_v2"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index [my_index_v2] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*,.monitoring-*]) doesn't match",
    "index_uuid": "_na_",
    "index": "my_index_v2"
  },
  "status": 404
}
I checked elasticsearch.yml; the file is identical on both machines and contains the following:
action.auto_create_index: .watches,.triggered_watches,.watcher-history-*,.monitoring-*
I have no idea why this happens.
To be clear, the source index really exists.
EDIT:
working machine
elastic version: 7.17.4
settings.json: https://gist.github.com/knyttl/90f5d0f5de194be534160ab729c5a83b
non-working machine
elastic version: 7.17.4
settings.json: https://gist.github.com/knyttl/fcb94cd5739d3626f4545a9b9c5cceef
From the diff it seems that the working machine contains also:
{
  "persistent": {
    "action": {
      "auto_create_index": "true"
    },
    …
  }
}
So I guess that can be the culprit.
Updating auto_create_index on the second machine to
{
  "persistent": {
    "action": {
      "auto_create_index": "true"
    }
  }
}
instead of
action.auto_create_index: .watches,.triggered_watches,.watcher-history-*,.monitoring-*
could likely fix the issue.
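For reference, a persistent setting like the one above is applied through the cluster settings API rather than elasticsearch.yml. A minimal sketch, assuming the cluster is reachable on localhost:9200:

```
# Sketch: enable automatic index creation cluster-wide via the cluster settings API.
curl -X PUT -H 'Content-Type: application/json' 'localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "action.auto_create_index": "true"
  }
}'
```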
I use the tasks API with the Elasticsearch 6.8 reindex process in my project. Sometimes I face an issue where, on some Elasticsearch installations, this API doesn't work properly. For example, I create a reindex task with the command:
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "index1"
  },
  "dest": {
    "index": "index2"
  }
}
As a response I get a task id.
But when I try to check the status of this task with the command
GET _tasks/{task-id}
instead of the task status, as in the normal case, I receive the following:
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "resource.type": "index_expression",
        "resource.id": ".tasks",
        "index_uuid": "_na_",
        "index": ".tasks"
      }
    ],
    "type": "resource_not_found_exception",
    "reason": "task [epsVDuiBRO-IJBqCB2zHXQ:974632] isn't running and hasn't stored its results",
    "caused_by": {
      "type": "index_not_found_exception",
      "reason": "no such index",
      "resource.type": "index_expression",
      "resource.id": ".tasks",
      "index_uuid": "_na_",
      "index": ".tasks"
    }
  },
  "status": 404
}
Is there any way to recover the tasks API without reinstalling Elasticsearch? I haven't succeeded in finding anything about it in the documentation.
Thanks in advance.
Problem solved. The issue was related to our own mapping with index pattern "*". It seems that by default Elasticsearch doesn't create the .tasks index: it creates it only when the first command using the tasks API is performed, adding a new task document to this index. In my case Elasticsearch couldn't add the first .tasks document because some fields of that document conflicted with the same fields from our own mapping. The solution is to change the pattern of our mapping, or to explicitly create the mapping for the .tasks index before putting our own mapping.
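The first fix could be sketched as follows (my_template and the my-* pattern are hypothetical names; _template is the 6.x template API): narrow the template pattern so it no longer matches system indices such as .tasks.

```
# Hypothetical sketch: a pattern like "my-*" instead of "*" keeps the template
# from being applied to system indices such as .tasks.
curl -X PUT -H 'Content-Type: application/json' 'localhost:9200/_template/my_template' -d '{
  "index_patterns": ["my-*"]
}'
```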
Elasticsearch (AWS version 7.1) gives me an error message when inserting WKT data into a geo_point field via the Kibana Console. This happens when trying it on the simple index
PUT my-geopoints
{
  "mappings": {
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
  }
}
from the website https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html as well as on my own index with a geo_point field.
When running the following in the Kibana Console:
PUT my-geopoints/_doc/5
{
  "text": "Geopoint as a WKT POINT primitive",
  "location": "POINT (-71.34 41.12)"
}
The error message I am getting is:
{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse field [location] of type [geo_point]",
    "caused_by": {
      "type": "parse_exception",
      "reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "unsupported symbol [P] in geohash [POINT (-71.34 41.12)]"
      }
    }
  },
  "status": 400
}
This is now also happening on a bulk load of my data into a separate index that loads WKT geometry data, and I can't find anything that points to a reason why. Yesterday and this morning it worked, until I tried this tutorial while trying to figure out why the geo-distance tutorial (https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-distance-query.html) wasn't letting me use the same index mapping for geo_shapes as for geo_points. That will be a separate question.
We upgraded from Elasticsearch 7.1 to 7.10 and it fixed this particular issue.
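Until such an upgrade is possible, a workaround sketch (assuming the mapping above): older versions still accept the non-WKT geo_point formats, e.g. a "lat,lon" string instead of the WKT POINT:

```
PUT my-geopoints/_doc/5
{
  "text": "Geopoint as a lat,lon string",
  "location": "41.12,-71.34"
}
```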
Change the location type in the mapping from "type": "geo_point" to "type": "geo_shape" and try again.
I am trying to load log records into Elasticsearch (7.3.1) and show the results in Kibana. I am facing the fact that although the records are loaded into Elasticsearch and a curl GET shows them, they are not visible in Kibana.
Most of the time this is because of the timestamp format. In my case the proper timestamp format should be basic_date_time, but the index only has:
# curl -XGET 'localhost:9200/og/_mapping'
{"og":{"mappings":{"properties":{"@timestamp":{"type":"date"},"componentName":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}}}}}
I would like to add the format 'basic_date_time' to the @timestamp properties, but every attempt I make is either not accepted by Elasticsearch or does not change the index field.
I simply fail to find the right command to do the job.
For example, the simplest thing I could think of,
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og/_mapping' -d'
{"mappings":{"properties":{"@timestamp":{"type":"date","format":"basic_date_time"}}}}
'
gives the error
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [mappings : {properties={@timestamp={format=basic_date_time, type=date}}}]"}],"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [mappings : {properties={@timestamp={format=basic_date_time, type=date}}}]"},"status":400}
and trying it via Kibana with
PUT /og
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date", "format": "basic_date_time" }
    }
  }
}
gives
{
  "error": {
    "root_cause": [
      {
        "type": "resource_already_exists_exception",
        "reason": "index [og/NIT2FoNfQpuPT3Povp97bg] already exists",
        "index_uuid": "NIT2FoNfQpuPT3Povp97bg",
        "index": "og"
      }
    ],
    "type": "resource_already_exists_exception",
    "reason": "index [og/NIT2FoNfQpuPT3Povp97bg] already exists",
    "index_uuid": "NIT2FoNfQpuPT3Povp97bg",
    "index": "og"
  },
  "status": 400
}
I am not sure if I should even try this in kibana. But I would be very glad if I could find the right curl command to get the index changed.
Thanks for helping, Ruud
You can do it either via curl like this:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og/_mapping' -d '{
  "properties": {
    "@timestamp": {
      "type": "date",
      "format": "basic_date_time"
    }
  }
}'
Or in Kibana like this:
PUT /og/_mapping
{
  "properties": {
    "@timestamp": {
      "type": "date",
      "format": "basic_date_time"
    }
  }
}
Also worth noting: once an index mapping is created, you can usually not modify it (with very few exceptions). You can create a new index with the correct mapping and reindex your data into it.
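That last route could be sketched in the Kibana console like this (og_v2 is a hypothetical name for the new index):

```
PUT /og_v2
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date", "format": "basic_date_time" }
    }
  }
}

POST _reindex
{
  "source": { "index": "og" },
  "dest": { "index": "og_v2" }
}
```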
ES version: 2.4.1
I was given a username and password for an Elasticsearch cluster (hosted on elastic.co). When I run:
GET /_cat/indices?v
it returns:
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "index": "_all"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "index": "_all"
  },
  "status": 404
}
Is it because of my assigned user role?
UPDATE:
I tried all the endpoints mentioned in the answers to "list all indexes on ElasticSearch server?" and this one worked:
GET /_cluster/health?level=indices
But I am at a loss as to why the other queries don't work.