Error: index_not_found_exception - Elasticsearch

I use the ELK stack to analyze my log files. I tested last week and everything worked well.
Today I tested again, but I get this error when I request "http://localhost:9200/iot_log/_count" (iot_log is my index pattern):
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no
such
index","resource.type":"index_or_alias","resource.id":"iot_log","index_uuid":"na","index":"iot_log"}],"type":"index_not_found_exception","reason":"no such
index","resource.type":"index_or_alias","resource.id":"iot_log","index_uuid":"na","index":"iot_log"},"status":404}
I have searched the forums but have not found a solution. What is the cause of this problem, and how can I correct it?

Make sure the index iot_log exists, and create it if it does not:
curl -X PUT "localhost:9200/iot_log" -H 'Content-Type: application/json' -d'{ "settings" : { "index" : { } }}'

You need to set the action.auto_create_index parameter in your elasticsearch.yml file.
Example:
action.auto_create_index: -l*,+z*
With this kind of configuration, indexes starting with "z" will be created automatically, while indexes starting with "l" will not.
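To illustrate the effect after restarting the node with that setting (the index names z-logs and l-logs here are made up for this sketch), indexing into a missing index that matches +z* auto-creates it, while one matching -l* is rejected with an index_not_found_exception:

# succeeds: z-logs matches +z*, so the index is auto-created
curl -X POST "localhost:9200/z-logs/_doc" -H 'Content-Type: application/json' -d '{"msg": "ok"}'
# fails: l-logs matches -l*, so auto-creation is forbidden
curl -X POST "localhost:9200/l-logs/_doc" -H 'Content-Type: application/json' -d '{"msg": "ok"}'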

The best way to resolve it is with the cluster settings API, as follows.
Allow auto-creation of YourIndexName and index10, disallow any index name matching index1*, and allow any other index matching ind*. The patterns are matched in the order they are given:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
"persistent": {
"action.auto_create_index": "YourIndexName,index10,-index1*,+ind*"
}
}'
Stop any automatic index creation:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
  "persistent": {
    "action.auto_create_index": "false"
  }
}'
Allow any index to be created automatically:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
  "persistent": {
    "action.auto_create_index": "true"
  }
}'
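You can verify which value is currently in effect with a plain GET on the same endpoint, which shows both persistent and transient settings:

curl "localhost:9200/_cluster/settings?pretty"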

In my case, all my data had been deleted from Elasticsearch automatically. After importing the data into Elasticsearch again, my application worked fine.

Related

Elasticsearch: add node back to cluster

I have a 3-node Elasticsearch cluster:
192.168.2.11 - node-01
192.168.2.12 - node-02
192.168.2.13 - node-03
and I removed node-02 from the cluster using this command:
curl -XPUT 192.168.2.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "192.168.2.12"
  }
}'
And OK, all my indexes moved to node-01 and node-03, but how do I return this node to the cluster? I tried this command:
curl -XPUT 192.168.2.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.include._ip": "192.168.2.12"
  }
}'
but this doesn't work:
"node does not match cluster setting [cluster.routing.allocation.include] filters [_ip:\"192.168.2.12\"]"
The node has not been deleted, but you can 'undo' your command by updating the setting you changed to null.
Try updating the settings on either of the running nodes (01 or 03) with:
"transient" :{
"cluster.routing.allocation.exclude._ip" : null
}
and the cluster should rebalance shards across the three nodes.
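As a complete request against one of the live nodes, following the same pattern as your commands above, that would look like:

curl -XPUT 192.168.2.11:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.exclude._ip": null
  }
}'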
Be careful using include._ip: "192.168.2.12", as this might stop routing indices to the other two nodes. Instead, include all three IP addresses if you want to use this, for example:
"transient": {
  "cluster.routing.allocation.include._ip": "192.168.2.11,192.168.2.12,192.168.2.13"
}

Ansible roles YAML errors

I'm trying to execute this:
curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
And when I run it directly from the shell, it gives me the right output:
curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "primaries"
        }
      }
    }
  },
  "transient" : { }
}
and here is my Ansible shell task:
- name: Turn off shard reallocation
  shell: "curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'"
  register: response
  failed_when: response.stdout.find('"acknowledged":true') == -1
and it fails with this error:
ERROR! Syntax Error while loading YAML.
did not find expected key
The offending line appears to be:
- name: Turn off shard reallocation
shell: "curl -XPUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"persistent" : {\"cluster.routing.allocation.enable" : "primaries"}}'"
^ here
Double quotes inside other double quotes must be escaped.
shell: "curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{\"persistent\": {\"cluster.routing.allocation.enable\": \"primaries\"}}'"
In such cases, you can make your life easier and the task more readable by using a YAML folded scalar block:
shell: >-
  curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty
  -H 'Content-Type: application/json'
  -d '{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
Meanwhile, have a look at Matt Schuchard's comment and consider using the uri module instead of curl in shell.
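For reference, a minimal sketch of the same task using the uri module, assuming the same endpoint and setting as above. Since uri fails the task on any non-2xx response by default, the quoting gymnastics and the failed_when grep both go away:

- name: Turn off shard reallocation
  uri:
    url: "http://192.168.1.11:9200/_cluster/settings"
    method: PUT
    body_format: json   # serializes the dict below and sets Content-Type: application/json
    body:
      persistent:
        cluster.routing.allocation.enable: primaries
  register: response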

Adding a snapshot to Elasticsearch on Windows

I have two snapshots that I want to load into Elasticsearch, located at:
C:\Users\name\Downloads\book_backup\agg_example
C:\Users\name\Downloads\book_backup\search_example
which I properly listed in elasticsearch.yml
path.repo: ["C:\\Users\\olulo\\Downloads\\book_backup\\agg_example", "C:\\Users\\olulo\\Downloads\\book_backup\\search_example"]
My Elasticsearch starts fine, and creating a new index works too.
Now I try to register the snapshot repository in my Elasticsearch so I can work on it:
curl -XPUT "http://localhost:9200/movies/" -d '{"type":"fs", "settings":{"location":"C:\\Users\\name\\Downloads\\book_backup\\search_example", "compress":true}}'
It gives me:
curl: (1) Protocol "'http" not supported or disabled in libcurl
curl: (3) [globbing] unmatched brace in column 10
curl: (3) [globbing] unmatched close brace/bracket in column 14
In https://www.elastic.co/guide/en/elasticsearch/reference/5.2/modules-snapshots.html it seems they've added a name at the end of the path in location, so I did:
curl -XPUT "http://localhost:9200/movies/" -d '{"type":"fs", "settings":{"location":"C:\\Users\\name\\Downloads\\book_backup\\search_example\\test", "compress":true}}'
but it still gives me the same error.
Following https://superuser.com/questions/1322567/http-not-supported-or-disabled-in-libcurl, I changed everything to double quotes:
curl -XPUT "http://localhost:9200/_snapshot/javacafe" -d "{"type":"fs", "settings":{"location":"C:\\Users\\olulo\\Downloads\\book_backup\\search_example", "compress":true}}"
giving me :
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}
I tried adding the header as suggested in "Content-Type header [application/x-www-form-urlencoded] is not supported on Elasticsearch":
curl -XPUT "localhost:9200/_snapshot/javacafe" -H 'Content-Type: application/json' -d "{
"type":"fs",
"settings"{"location":"C:\\Users\\olulo\\Downloads\\book_backup\\search_example\\test", "compress":true}
}"
which outputs a similar error with an added statement at the end:
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}curl: (6) Could not resolve host: application
SOLVED: on Windows you need to put a backslash in front of each double quotation mark inside the {}:
curl -XPUT "localhost:9200/_snapshot/javacafe" -H "Content-Type: application/json" -d "{
\"type\":\"fs\",
\"settings\"{\"location\":"C:\\Users\\olulo\\Downloads\\book_backup\\search_example\\test", \"compress\":true}
}"
From my understanding, Windows uses \ so that " is taken literally. If so, why not add backslashes in all curl commands as well?
Try this:
curl -XPUT "localhost:9200/_snapshot/javacafe" -H 'Content-Type: application/json' -d '{
"type":"fs",
"settings" : {
"location":"C:\\Users\\olulo\\Downloads\\book_backup\\search_example\\test",
"compress":true
}
}'
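If the quoting keeps fighting you on Windows, another way is to sidestep it entirely: put the body in a file (repo.json is just an illustrative name here) and let curl read it with @, so the shell never touches the JSON quotes:

curl -XPUT "localhost:9200/_snapshot/javacafe" -H "Content-Type: application/json" -d @repo.json

where repo.json contains:
{
  "type": "fs",
  "settings": {
    "location": "C:\\Users\\olulo\\Downloads\\book_backup\\search_example\\test",
    "compress": true
  }
}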

Elasticsearch error: cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)], flood stage disk watermark exceeded

When trying to post documents to Elasticsearch as normal I'm getting this error:
cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)];
I also see this message on the Elasticsearch logs:
flood stage disk watermark [95%] exceeded ... all indices on this node will be marked read-only
This happens when Elasticsearch thinks the disk is running low on space so it puts itself into read-only mode.
By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.
The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.
For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.
The right solution depends on the context - for example a production environment vs a development environment.
Solution 1: free up disk space
Freeing up enough disk space so that more than 5% of the disk is free will solve this problem. Elasticsearch won't automatically take itself out of read-only mode once enough disk is free though, you'll have to do something like this to unlock the indices:
$ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
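To see how much disk space Elasticsearch itself thinks each node has left, the _cat allocation API is a quick check (same endpoint placeholder as above):

$ curl "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_cat/allocation?v"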
Solution 2: change the flood stage watermark setting
Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else. It can either be set to a lower percentage or to an absolute value. Here's an example of how to change the setting from the docs:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
Again, after doing this you'll have to use the curl command above to unlock the indices, but after that they should not go into read-only mode again.
By default, Elasticsearch goes into read-only mode when you have less than 5% of free disk space. If you see errors similar to this:
Elasticsearch::Transport::Transport::Errors::Forbidden: [403]
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}
Or in /usr/local/var/log/elasticsearch.log you can see logs similar to:
flood stage disk watermark [95%] exceeded on [nCxquc7PTxKvs6hLkfonvg][nCxquc7][/usr/local/var/lib/elasticsearch/nodes/0] free: 15.3gb[4.1%], all indices on this node will be marked read-only
Then you can fix it by running the following commands:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
From: https://techoverflow.net/2019/04/17/how-to-fix-elasticsearch-forbidden-12-index-read-only-allow-delete-api/
This error is usually observed when your machine is low on disk space.
Steps to follow to avoid this error message:
Resetting the read-only index block on the index:
$ curl -X PUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
Response:
{"acknowledged":true}
Updating the low watermark to at least 50 gigabytes free, the high watermark to at least 20 gigabytes free, and the flood stage watermark to 10 gigabytes free, and updating the cluster information every minute:
Request:
curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "50gb",
    "cluster.routing.allocation.disk.watermark.high": "20gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}'
Response:
{
  "acknowledged" : true,
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : {
              "low" : "50gb",
              "flood_stage" : "10gb",
              "high" : "20gb"
            }
          }
        }
      },
      "info" : {
        "update" : {
          "interval" : "1m"
        }
      }
    }
  }
}
After running these two commands, you must run the first command again so that the index does not go back into read-only mode.
Only changing the settings with the following command did not work in my environment:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
I also had to run the Force Merge API command:
curl -X POST "localhost:9200/my-index-000001/_forcemerge?pretty"
ref: Force Merge API
Even if disk usage drops back below 95%, the issue will still persist.
A short-term solution is to raise the Elasticsearch watermark limits above 95%. The following steps are for Windows only.
a. Create a JSON file with the following parameters:
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
b. Name it anything, e.g. json.txt.
c. Type the following command in the command prompt (note the @, which tells curl to read the body from the file):
>curl -X PUT "localhost:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d @json.txt
d. The following output is received:
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : {
              "low" : "90%",
              "flood_stage" : "97%",
              "high" : "95%"
            }
          }
        }
      }
    }
  },
  "transient" : { }
}
e. Create another JSON file with the following parameter:
{
  "index.blocks.read_only_allow_delete": null
}
f. Name it anything, e.g. json1.txt.
g. Type the following command in the command prompt:
>curl -X PUT "localhost:9200/*/_settings?expand_wildcards=all" -H "Content-Type: application/json" -d @json1.txt
h. You should get the following output:
{"acknowledged":true}
i. Restart the ELK stack/Kibana and the issue should be resolved.
You can also delete the read-only setting from Postman.
A nice guide from the Elastic team:
https://www.elastic.co/guide/en/elasticsearch/reference/master/disk-usage-exceeded.html
It worked for me with ELK 7.x

How to unset Elasticsearch routing

I'm using the shrink API, and it requires you to move all shards to a single node. After the shrink operation is completed, I wish to have the shards on the original index reassigned throughout the cluster.
So my question is: how do I reverse this command? I attempted to set _name to "*" but that did not work.
curl -s -XPUT "#{ES_HOST}:9200/#{BULK_INDEX}/_settings?pretty" -d '
{
  "settings": {
    "index.routing.allocation.require._name": "shrink-node-1"
  }
}'
You can try to set it to null instead, but you also need to remove the settings section since you're already hitting the _settings endpoint:
curl -s -XPUT "#{ES_HOST}:9200/#{BULK_INDEX}/_settings?pretty" -d '
{
  "index.routing.allocation.require._name": null
}'
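To confirm the requirement is gone, you can read the settings back (same placeholders as above); the routing.allocation.require block should no longer appear:

curl -s "#{ES_HOST}:9200/#{BULK_INDEX}/_settings?pretty"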
