We are building a PoC with Elasticsearch, but while doing it we lost data in a clustered environment. We are using ES 2.4.0.
Can anyone tell us what we are missing?
Our scenario is:
Start Elasticsearch on Server-1 and Server-2 with the configurations below;
they form a cluster.
Index a document via Server-1:
curl -XPUT '20.20.20.5:9200/ert/post/1' -d '
{
"user": "easlan",
"postDate": "01-16-2015",
"body": "Adding Data in ElasticSearch Cluster" ,
"title": "ElasticSearch Cluster Test - 1"
}'
Search for the indexed docs via Server-1 or Server-2: the total number of results is 1 (as expected):
curl -XGET '20.20.20.5:9200/ert/post/_search?q=user:easlan&pretty=true'
curl -XGET '20.20.20.6:9200/ert/post/_search?q=user:easlan&pretty=true'
Then shut down Server-1.
Index a new document via Server-2:
curl -XPUT '20.20.20.6:9200/ert/post/2' -d '
{
"user": "easlan",
"postDate": "01-16-2015",
"body": "Adding Data in ElasticSearch Cluster" ,
"title": "ElasticSearch Cluster Test - 2"
}'
Search for the indexed docs via Server-2: the total number of results is 2:
curl -XGET '20.20.20.6:9200/ert/post/_search?q=user:easlan&pretty=true'
Shut down Server-2.
Start Server-1.
Search for the indexed docs via Server-1: the total number of results is 1 (as expected, because Server-2 is down):
curl -XGET '20.20.20.5:9200/ert/post/_search?q=user:easlan&pretty=true'
Then start Server-2 again and search for the indexed docs via Server-1 or Server-2. We expect the total number of results to be 2, but we get only 1. Even if we restart both of them again, the result is still 1:
curl -XGET '20.20.20.5:9200/ert/post/_search?q=user:easlan&pretty=true'
curl -XGET '20.20.20.6:9200/ert/post/_search?q=user:easlan&pretty=true'
Our Configurations:
*** Server-1 ***
cluster.name: ESCluster
node.master: true
node.name: "es1"
node.data: true
network.bind_host: ["127.0.0.1","20.20.20.5"]
network.publish_host: "20.20.20.5"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["20.20.20.5","20.20.20.6"]
discovery.zen.minimum_master_nodes: 1
*** Server-2 ***
cluster.name: ESCluster
node.master: true
node.name: "es2"
node.data: true
network.bind_host: ["127.0.0.1","20.20.20.6"]
network.publish_host: "20.20.20.6"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["20.20.20.5","20.20.20.6"]
discovery.zen.minimum_master_nodes: 1
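For reference, whether both nodes actually rejoin the same cluster after the restarts can be checked with the cluster health and nodes APIs (a diagnostic sketch, not part of the original test steps, using the same addresses as above):
curl -XGET '20.20.20.5:9200/_cluster/health?pretty'
curl -XGET '20.20.20.5:9200/_cat/nodes?v'
If the second command does not list both es1 and es2, the two nodes have formed separate clusters instead of one.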
Related
I have a 3-node Elasticsearch cluster
192.168.2.11 - node-01
192.168.2.12 - node-02
192.168.2.13 - node-03
and I removed node-02 from the cluster using this command:
curl -XPUT 192.168.2.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
"transient" :{
"cluster.routing.allocation.exclude._ip" : "192.168.2.12"
}
}'
and OK, all my indices moved to node-01 and node-03, but how do I bring this node back into the cluster?
I tried this command:
curl -XPUT 192.168.2.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
"transient" :{
"cluster.routing.allocation.include._ip" : "192.168.2.12"
}
}'
but this doesn't work; it fails with:
"node does not match cluster setting [cluster.routing.allocation.include] filters [_ip:\"192.168.2.12\"]"
The node has not been deleted, but you can 'undo' your command by updating the setting you changed back to null.
Try updating the settings on either of the running nodes (01 or 03) with
"transient" :{
"cluster.routing.allocation.exclude._ip" : null
}
and the cluster should rebalance shards across the three nodes.
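As a complete command against node-01, that would look roughly like this (a sketch reusing the _cluster/settings calls and IPs from the question):
curl -XPUT 192.168.2.11:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
"transient" :{
"cluster.routing.allocation.exclude._ip" : null
}
}'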
Be careful using include._ip: "192.168.2.12" on its own, as this might stop indices from being routed to the other two nodes. If you want to use this approach, include all three IP addresses instead, for example:
"transient" :{
"cluster.routing.allocation.include._ip" :"192.168.2.11, 192.168.2.12, 192.168.2.13"
}
I use the following curl command to clear an index on an Elasticsearch node.
curl -X POST -u user:password "IP:9200/index_name_here/_delete_by_query?conflicts=proceed&pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}
'
But the problem I am facing is that when I clear an index from one node, it does not clear the data from all the other connected Elasticsearch nodes, and the data is copied back from the other nodes to the node that was cleared with the above command.
All I want is to clear the index (not delete it), as in the above command, on all the Elasticsearch nodes in the cluster.
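As an aside, a quick way to see what the cluster as a whole still holds for that index is the _cat/count API (a diagnostic sketch reusing the placeholder credentials and index name from the command above):
curl -u user:password "IP:9200/_cat/count/index_name_here?v"
The count returned is cluster-wide, regardless of which node receives the request.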
My Elasticsearch cluster status is red due to low disk space, but when I checked with the query GET /_cat/allocation?v&pretty it shows 6.8gb of free space on both nodes.
Can anyone help me?
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
6 25.5gb 27.3gb 6.8gb 34.2gb 80 x.x.x.x x.x.x.x
6 25.5gb 27.3gb 6.8gb 34.2gb 80 x.x.x.x x.x.x.x
You can increase the disk watermark as mentioned in the docs here
curl -XPUT "localhost:9200/_cluster/settings" -d '{
"transient": {
"cluster.routing.allocation.disk.watermark.low": "1gb",
"cluster.routing.allocation.disk.watermark.high": "500mb",
"cluster.routing.allocation.disk.watermark.flood_stage": "200mb",
"cluster.info.update.interval": "1m"
}'
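You can then confirm that the new thresholds were applied and re-check the cluster status with (a sketch, assuming the same localhost endpoint):
curl -XGET "localhost:9200/_cluster/settings?pretty"
curl -XGET "localhost:9200/_cluster/health?pretty"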
This is my elasticsearch.yml:
cluster.name: cluster
node.name: esn1
path.conf: "/etc/elasticsearch"
path.data: "/var/lib/elasticsearch"
path.logs: "/var/log/elasticsearch"
network.host: 0.0.0.0
http.port: 9201
bootstrap.memory_lock: false
discovery.zen.minimum_master_nodes: 1
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false
I've also installed x-pack:
# sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list
repository-s3
x-pack
Nevertheless:
curl -XPUT 'http://localhost:9200/_xpack/security/user/elastic/_password' -d '
> {
> "password": "L5ngDgtl00?"
> }
> '
No handler found for uri [/_xpack/security/user/elastic/_password] and method [PUT]
Any ideas?
You're almost there, but I guess you're making a mistake in the curl command: the -u elastic option is missing.
See here: https://www.elastic.co/guide/en/x-pack/current/security-getting-started.html
Also, try to reinstall x-pack once by following step 1 in the above link.
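With the missing flag added, the command from the question would look roughly like this (a sketch; curl will prompt for the elastic user's current password):
curl -XPUT -u elastic 'http://localhost:9200/_xpack/security/user/elastic/_password' -H 'Content-Type: application/json' -d '
{
"password": "L5ngDgtl00?"
}'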
I have 2 nodes in an Elasticsearch cluster with 8 CPUs and 16 GB RAM. I have set ES_HEAP_SIZE to 10 GB.
In my yml configuration file on both machines I have set:
index.number_of_shards: 5
index.number_of_replicas: 1
And both machines are set to master/data true. Now the problem is that the 0th shard on node 1 is unassigned after a restart. I tried:
for shard in $(curl -XGET http://localhost:9201/_cat/shards | grep UNASSIGNED | awk '{print $2}'); do
echo "processing $shard"
curl -XPOST 'localhost:9201/_cluster/reroute' -d '{
"commands" : [ {
"allocate" : {
"index" : "inxn",
"shard" : '$shard',
"node" : "node1",
"allow_primary" : true
}
}
]
}'
done
It does not give any error; it says acknowledged: true and shows the shard status as initializing, but when I view the shard it is still not initialized.
Am I doing anything wrong in the settings? Should I make both nodes master/data true and set shards: 5 and replicas: 1 on both machines?
Any help or suggestion would be greatly appreciated.
Thanks
I used a trick to solve the same issue: I renamed the 0 folder under indices on node1 and then forcefully assigned the 0th shard to node1, and it worked for me:
curl -XPOST 'localhost:9201/_cluster/reroute' -d '{
"commands" : [ {
"allocate" : {
"index" : "inxc",
"shard" : 0,
"node" : "node1",
"allow_primary" : true
}
}
]
}'
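If the reroute is accepted, the shard state for that index can be checked afterwards (a sketch using the port and index name from my command above):
curl -XGET 'localhost:9201/_cat/shards/inxc?v'
Shard 0 should show up as INITIALIZING and then STARTED on node1 once recovery finishes.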