Elasticsearch add node to cluster - elasticsearch

I have a 3-node Elasticsearch cluster:
192.168.2.11 - node-01
192.168.2.12 - node-02
192.168.2.13 - node-03
and I removed node-02 from the cluster using this command:
curl -XPUT 192.168.2.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "192.168.2.12"
  }
}'
That worked: all my indices moved to node-01 and node-03. But how do I bring this node back into the cluster?
I tried this command:
curl -XPUT 192.168.2.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.include._ip" : "192.168.2.12"
  }
}'
but it doesn't work:
"node does not match cluster setting [cluster.routing.allocation.include] filters [_ip:\"192.168.2.12\"]"

The node has not been deleted; you can 'undo' your command by updating the setting you changed to null.
Try updating the settings on either of the running nodes (01 or 03) with:
"transient" : {
  "cluster.routing.allocation.exclude._ip" : null
}
and the cluster should rebalance shards across the three nodes.
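As a minimal sketch (assuming the cluster is still reachable on 192.168.2.11:9200), the full request would look like:
curl -XPUT 192.168.2.11:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : null
  }
}'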
Be careful using include._ip: "192.168.2.12", as this might stop shards from being routed to the other two nodes; include all three IP addresses instead if you want to use this, for example:
"transient" : {
  "cluster.routing.allocation.include._ip" : "192.168.2.11,192.168.2.12,192.168.2.13"
}
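Once the exclusion is cleared you can watch the shards move back; a quick check (assuming any node answers on port 9200) is:
curl -s "192.168.2.11:9200/_cat/shards?v"
curl -s "192.168.2.11:9200/_cluster/settings?pretty"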

Related

How to clear an Elasticsearch index from all the Elasticsearch nodes in a cluster at once

I use the following curl command to clear indices from the elasticsearch node.
curl -X POST -u user:password "IP:9200/index_name_here/_delete_by_query?conflicts=proceed&pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}
'
But the problem I am facing is that when I clear the index from one node, it does not clear the data from the other connected Elasticsearch nodes, and the data is copied back from the other nodes to the node that was cleared by the above command.
All I want is to clear the index (not delete it), like the above command does, from all the Elasticsearch nodes in the cluster.

Elasticsearch error: cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)], flood stage disk watermark exceeded

When trying to post documents to Elasticsearch as normal I'm getting this error:
cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)];
I also see this message on the Elasticsearch logs:
flood stage disk watermark [95%] exceeded ... all indices on this node will be marked read-only
This happens when Elasticsearch thinks the disk is running low on space so it puts itself into read-only mode.
By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.
The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.
For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.
The right solution depends on the context - for example a production environment vs a development environment.
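Before choosing, it helps to see how full Elasticsearch thinks each node's disk actually is; a quick check (assuming the cluster answers on localhost:9200) is the _cat/allocation API, which reports disk.used, disk.avail and disk.percent per node:
curl -s "localhost:9200/_cat/allocation?v"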
Solution 1: free up disk space
Freeing up enough disk space so that more than 5% of the disk is free will solve this problem. However, Elasticsearch won't automatically take itself out of read-only mode once enough disk is free; you'll have to do something like this to unlock the indices:
$ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
Solution 2: change the flood stage watermark setting
Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else. It can either be set to a lower percentage or to an absolute value. Here's an example of how to change the setting from the docs:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
Again, after doing this you'll have to use the curl command above to unlock the indices, but after that they should not go into read-only mode again.
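To confirm the new watermarks took effect, you can read the cluster settings back (assuming localhost:9200):
curl -s "localhost:9200/_cluster/settings?pretty"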
By default, Elasticsearch goes into read-only mode when you have less than 5% of free disk space. If you see errors similar to this:
Elasticsearch::Transport::Transport::Errors::Forbidden: [403]
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked
by: [FORBIDDEN/12/index read-only / allow delete
(api)];"}],"type":"cluster_block_exception","reason":"blocked by:
[FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}
Or in /usr/local/var/log/elasticsearch.log you can see logs similar to:
flood stage disk watermark [95%] exceeded on
[nCxquc7PTxKvs6hLkfonvg][nCxquc7][/usr/local/var/lib/elasticsearch/nodes/0]
free: 15.3gb[4.1%], all indices on this node will be marked read-only
Then you can fix it by running the following commands:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
From: https://techoverflow.net/2019/04/17/how-to-fix-elasticsearch-forbidden-12-index-read-only-allow-delete-api/
This error is usually observed when your machine is low on disk space.
Steps to be followed to avoid this error message
Resetting the read-only index block on the index:
$ curl -X PUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
Response:
{"acknowledged":true}
Updating the low watermark to 50 gigabytes free, the high watermark to 20 gigabytes free, and the flood stage watermark to 10 gigabytes free, and refreshing the cluster's disk information every minute:
Request:
$ curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "50gb",
    "cluster.routing.allocation.disk.watermark.high": "20gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}'
Response:
{
  "acknowledged" : true,
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : {
              "low" : "50gb",
              "flood_stage" : "10gb",
              "high" : "20gb"
            }
          }
        }
      },
      "info" : {
        "update" : {
          "interval" : "1m"
        }
      }
    }
  }
}
After running these two commands, you must run the first command again so that the index does not go back into read-only mode.
Only changing the settings with the following command did not work in my environment:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
I also had to run the Force Merge API command:
curl -X POST "localhost:9200/my-index-000001/_forcemerge?pretty"
ref: Force Merge API
Even after freeing disk space so that usage drops back below 95%, the issue will persist.
A short-term solution is to raise the watermark limits above 95%. The following steps use the Windows command prompt.
a. Create a JSON file with the following contents:
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
b. Name it anything, e.g. json.txt
c. Type the following command in the command prompt:
>curl -X PUT "localhost:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d @json.txt
d. The following output is received:
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : {
              "low" : "90%",
              "flood_stage" : "97%",
              "high" : "95%"
            }
          }
        }
      }
    }
  },
  "transient" : { }
}
e. Create another JSON file with the following content:
{
  "index.blocks.read_only_allow_delete": null
}
f. Name it anything, e.g. json1.txt
g. Type the following command in the command prompt:
>curl -X PUT "localhost:9200/*/_settings?expand_wildcards=all" -H "Content-Type: application/json" -d @json1.txt
h. You should get the following output:
{"acknowledged":true}
i. Restart the ELK stack/Kibana and the issue should be resolved.
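As a quick sanity check after the restart (assuming Elasticsearch answers on localhost:9200), you can confirm the read-only block is gone by reading the block settings back; if it was removed, index.blocks.read_only_allow_delete should no longer appear in the output:
curl -X GET "localhost:9200/_all/_settings/index.blocks*?pretty"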
You can also delete the read-only setting from Postman.
A nice guide from the ELK team:
https://www.elastic.co/guide/en/elasticsearch/reference/master/disk-usage-exceeded.html
It worked for me with ELK 7.x

Create index-patterns from console with Kibana 6.0 or 7+ (v7.0.1)

I recently upgraded my ElasticStack instance from 5.5 to 6.0, and it seems that some of the breaking changes of this version have harmed my pipeline. I had a script that, depending on the indices inside ElasticSearch, created index-patterns automatically for some groups of similar indices. The problem is that with the new mapping changes of the 6.0 version, I cannot add any new index-pattern from the console. This was the request I used and worked fine in 5.5:
curl -XPOST "http://localhost:9200/.kibana/index-pattern" -H 'Content- Type: application/json' -d'
{
"title" : "index_name",
"timeFieldName" : "execution_time"
}'
This is the response I get now, in 6.0, from ElasticSearch:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [index-pattern, doc]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [index-pattern, doc]"
  },
  "status": 400
}
How could I add index-patterns from the console avoiding this multiple mapping issue?
The URL has changed in version 6.0.0; here is the new URL:
http://localhost:9200/.kibana/doc/doc:index-pattern:my-index-pattern-name
This CURL should work for you:
curl -XPOST "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name" -H 'Content-Type: application/json' -d'
{
"type" : "index-pattern",
"index-pattern" : {
"title": "my-index-pattern-name*",
"timeFieldName": "execution_time"
}
}'
If you are on Kibana 7.0.1 / 7+, then you can use the saved_objects API, for example:
Refer: https://www.elastic.co/guide/en/kibana/master/saved-objects-api.html (Look for Get, Create, Delete etc).
In this case, we'll use: https://www.elastic.co/guide/en/kibana/master/saved-objects-api-create.html
$ curl -X POST -u $user:$pass -H "Content-Type: application/json" -H "kbn-xsrf:true" "${KIBANA_URL}/api/saved_objects/index-pattern/dummy_index_pattern" -d '{ "attributes": { "title":"index_name*", "timeFieldName":"sprint_start_date"}}' -w "\n" | jq
and the output was:
{
  "type": "index-pattern",
  "id": "dummy_index_pattern",
  "attributes": {
    "title": "index_name*",
    "timeFieldName": "sprint_start_date"
  },
  "references": [],
  "migrationVersion": {
    "index-pattern": "6.5.0"
  },
  "updated_at": "2020-02-25T22:56:44.531Z",
  "version": "Wzg5NCwxNV0="
}
Where $KIBANA_URL was set to: http://my-elk-stack.devops.local:5601
If you don't have jq installed, remove | jq from the command (as listed above).
PS: When Kibana's GUI is used to create an index-pattern, Kibana stores the index-pattern ID as an alphanumeric value (e.g. laskl32ukdflsdjflskadf-sdf-sdfsaldkjfhsdf-dsfasdf), which is hard to find and type when doing a GET operation to look up an existing index-pattern with the curl command below.
If you pass an index-pattern name like we did above, then Kibana/Elasticsearch will store the index-pattern's ID under the name you gave in the REST call (e.g. .../api/saved_objects/index-pattern/dummy_index_pattern).
Here, dummy_index_pattern becomes the ID (visible only if you hover your mouse over the index-pattern name in the Kibana GUI), and
its index pattern will be index_name* (i.e. what's listed in the GUI when you click Kibana Home > Gear icon > Index Patterns and look at the index patterns listed on the right side).
NOTE: The timeFieldName is very important. This is the field used for locating time-series events (especially for the TSVB, Time Series Visual Builder, visualization type). By default it uses the @timestamp field, but if you recreate your index from scratch every time (instead of sending only delta information to your target Elasticsearch index from a data source, e.g. JIRA) and send all data in one shot, then @timestamp won't help with the visualization's time-window feature (where you change the time range from, say, the last week to the last hour or the last 6 months). In that case you can set a different field, e.g. sprint_start_date as I used above; then, on the Kibana Discover page, if you select this index-pattern, it'll use the sprint_start_date (type: date) field for events.
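For this to work, the field you point timeFieldName at must be mapped as a date in the underlying index. A minimal sketch of such a mapping (the index name here is hypothetical):
curl -X PUT "localhost:9200/my-jira-index" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "sprint_start_date": { "type": "date" }
    }
  }
}'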
To GET info about the newly created index-pattern, you can refer to https://www.elastic.co/guide/en/kibana/master/saved-objects-api-get.html or run the following, where the last value in the URL path is the ID of the index pattern we created earlier:
curl -X GET "${KIBANA_URL}/api/saved_objects/index-pattern/dummy_index_pattern" | jq
or
otherwise, if you want to perform a GET on an index pattern that was created via Kibana's GUI (under Index Patterns > Create Index Pattern), you'd have to enter something like this:
curl -X GET "${KIBANA_URL}/api/saved_objects/index-pattern/jqlaskl32ukdflsdjflskadf-sdf-sdfsaldkjfhsdf-dsfasdf" | jq
For Kibana 7.7.0 with Open Distro security plugin (amazon/opendistro-for-elasticsearch-kibana:1.8.0 Docker image to be precise), this worked for me:
curl -X POST \
-u USERNAME:PASSWORD \
KIBANA_HOST/api/saved_objects/index-pattern \
-H "kbn-version: 7.7.0" \
-H "kbn-xsrf: true" \
-H "content-type: application/json; charset=utf-8" \
-d '{"attributes":{"title":"INDEX-PATTERN*","timeFieldName":"#timestamp","fields":"[]"}}'
Please note that the kbn-xsrf header is required, though it seems to serve little purpose from a security point of view.
Output was like:
{"type":"index-pattern","id":"UUID","attributes":{"title":"INDEX-PATTERN*","timeFieldName":"#timestamp","fields":"[]"},"references":[],"migrationVersion":{"index-pattern":"7.6.0"},"updated_at":"TIMESTAMP","version":"VERSION"}
I can't tell why migrationVersion.index-pattern is "7.6.0".
For other Kibana versions you should be able to:
Open Kibana UI in browser
Open Developers console, navigate to Network tab
Create index pattern using UI
Open the POST request in the Developers console, take a look at the URL and headers, then rewrite it as a cURL command
Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type.
Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x.
Mapping types will be completely removed in Elasticsearch 7.0.0.
Maybe you are creating an index with more than one doc type in ES 6.0.0.
https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html
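If you are unsure which single type the .kibana index already contains (and therefore which type your documents must reuse), you can inspect its mapping; a quick check, assuming Elasticsearch on localhost:9200:
curl -X GET "localhost:9200/.kibana/_mapping?pretty"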
Create index-pattern in bulk with timestamp:
cat index_svc.txt
my-index1
my-index2
my-index3
my-index4
my-index5
my-index6
cat index_svc.txt | while read index; do
  echo -ne "create index-pattern ${index} \t"
  curl -XPOST "http://10.0.1.44:9200/.kibana/doc/index-pattern:${index}" -H 'Content-Type: application/json' -d "{\"type\":\"index-pattern\",\"index-pattern\":{\"title\":\"${index}2020*\",\"timeFieldName\":\"@timestamp\"}}"
  echo
done

How to unset Elasticsearch routing

I'm using the shrink API, and it requires you to move all shards to a single node. After the shrink operation is completed, I wish to have the shards of the original index reassigned throughout the cluster.
So my question is: how do I reverse this command? I attempted to set _name to "*" but that did not work.
curl -s -XPUT "#{ES_HOST}:9200/#{BULK_INDEX}/_settings?pretty" -d '
{
"settings": {
"index.routing.allocation.require._name": "shrink-node-1"
}
}'
You can try to set it to null instead, but you also need to remove the "settings" section since you're already hitting the _settings endpoint:
curl -s -XPUT "#{ES_HOST}:9200/#{BULK_INDEX}/_settings?pretty" -d '
{
"index.routing.allocation.require._name": null
}'
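To verify the requirement was removed and watch the shards spread back out, a quick check might be (reusing the placeholders from the question):
curl -s "#{ES_HOST}:9200/#{BULK_INDEX}/_settings?pretty"
curl -s "#{ES_HOST}:9200/_cat/shards/#{BULK_INDEX}?v"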

Error: index_not_found_exception

I use the ELK stack to analyze my log files. I tested it last week and everything worked well.
Today I tested it again, but I get this error when I request "http://localhost:9200/iot_log/_count" (iot_log is my index pattern):
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no
such
index","resource.type":"index_or_alias","resource.id":"iot_log","index_uuid":"na","index":"iot_log"}],"type":"index_not_found_exception","reason":"no such
index","resource.type":"index_or_alias","resource.id":"iot_log","index_uuid":"na","index":"iot_log"},"status":404}
I have searched the forums but have not found a solution. What is the cause of this problem, and how can I correct it?
Make sure the index iot_log exists, and create it if it doesn't:
curl -X PUT "localhost:9200/iot_log" -H 'Content-Type: application/json' -d'{ "settings" : { "index" : { } }}'
You need to set the action.auto_create_index parameter in the elasticsearch.yml file.
Example:
action.auto_create_index: -l*,+z*
With this kind of configuration, indexes starting with "z" will be created automatically while indexes starting with "l" will not.
The best way to resolve it is with settings like the following.
Allow auto-creation of indices named YourIndexName or index10, block creation of any index matching index1*, and allow creation of any other index matching ind*. The patterns are matched in the order they are given.
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
"persistent": {
"action.auto_create_index": "YourIndexName,index10,-index1*,+ind*"
}
}'
Stop any automatic index creation:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
"persistent": {
"action.auto_create_index": "false"
}
}'
Allow any index to be created automatically:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
"persistent": {
"action.auto_create_index": "true"
}
}'
In my case, all my data had been deleted from Elasticsearch automatically. After importing the data into Elasticsearch again, my application worked fine.
