I have an Elasticsearch cluster that reports a green status but only one node. From my research, the cluster should be yellow and there should be two separate clusters. Could someone explain why the cluster below is reporting a green status?
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 2,
"active_shards" : 2,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
The cluster is being configured for clustering in elasticsearch.yml, and before those changes it properly reported a yellow status with the same 2 shards per node.
You have two primary shards in your cluster with no replicas, and both shards are assigned to the single data node.
If you increase number_of_replicas to 1 or higher, you will see the cluster turn yellow. At that point you can do two things: 1) add another data node, or 2) change the Elasticsearch settings to force both the primary and replica shards onto one node (not recommended).
The cluster is green because there are 0 unassigned shards - every shard that needs a home has one. This is likely because your indices have number_of_replicas set to 0, so with the 1 active node in your cluster every shard requirement is satisfied. This is generally a bad idea, as it doesn't provide any redundancy.
If you create indices with number_of_replicas set to 1 or higher, you will need at least that many additional nodes active in the cluster (replicas + 1 nodes in total) to be eligible for green status.
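For reference, a minimal sketch with the Python elasticsearch client (assuming a single local node at localhost:9200) showing how raising number_of_replicas on a one-node cluster moves the status from green to yellow:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

print(es.cluster.health()["status"])   # "green" while replicas are 0

# Ask for one replica per primary shard on every index.
es.indices.put_settings(body={"index": {"number_of_replicas": 1}})

# With only one data node the replicas cannot be assigned,
# so the status drops to "yellow" until a second node joins.
print(es.cluster.health()["status"])

Adding a second data node, or setting number_of_replicas back to 0, returns the cluster to green.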
Here is an example:
screen size : 1024 * 768
Each block is treated as a rectangle.
Coordinates: (0, 0) denotes the bottom-left corner while (512, 384) denotes the top-right corner.
block 1 : <0, 0, 512, 384>
block 2 : <0, 384, 512, 1024>
block 3 : <512, 0, 1024, 384>
block 4 : <512, 384, 1024, 768>
The case above is correct, but how can I detect invalid ones like the following?
case 1 :
block 1 : <0, 0, 512, 384>
block 2 : <0, 384, 512, 1024>
block 3 : <512, 0, 1024, 384>
block 4 : <800, 384, 1024, 768>
case 2 :
block 1 : <0, 0, 512, 384>
block 2 : <0, 384, 512, 1024>
block 3 : <512, 0, 1024, 384>
block 4 : <256, 192, 768, 576>
case 3:
block 1 : <0, 0, 1024, 384>
block 2 : <0, 0, 1024, 384>
block 3 : <0, 384, 512, 1024>
case 4 :
block 1 : <0, 0, 1024, 384>
block 2 : <5, 5, 600, 300>
block 3 : <0, 384, 512, 1024>
How can I detect invalid user input, including empty regions or repeated blocks?
This is my idea:
I will first calculate the sum of the areas of the blocks. Before calculating, I remove duplicated blocks,
such as
block 1 : <0, 0, 1024, 384>
block 2 : <0, 0, 1024, 384>
block 3 : <0, 384, 512, 1024>
the sum would be 1024*384 + 512*(1024-384) not 1024*384*2 + 512*(1024-384).
If the sum is not equal to 1024*768, then the user input is invalid.
Otherwise, I will check whether there are any empty regions or repeated blocks.
If there are n blocks, each block can be compared with the rest, but then the time complexity would be O(n^2), which performs poorly.
I wonder if there is a better way to implement this.
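A minimal Python sketch of the area-sum idea described above (assuming each block is given as an (x1, y1, x2, y2) tuple and the screen is 1024 * 768):

SCREEN_AREA = 1024 * 768

def area(block):
    # block is (x1, y1, x2, y2) with (x1, y1) the bottom-left
    # and (x2, y2) the top-right corner.
    x1, y1, x2, y2 = block
    return (x2 - x1) * (y2 - y1)

def area_sum_ok(blocks):
    # Drop exact duplicates first, then compare the summed area with
    # the screen area. A mismatch proves the input is invalid; a match
    # is necessary but not sufficient, because an overlap can hide a
    # gap of the same size, so a pairwise overlap check is still
    # needed afterwards.
    unique_blocks = set(blocks)
    return sum(area(b) for b in unique_blocks) == SCREEN_AREA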
To simplify, consider 2 blocks only. There are 4 cases of overlap in total:
0.
╭───╮
│ 1╭+──╮
└──+┘0 │
   └───┘
top-left vertex of block 0 located within block 1
1.
╭───╮
│ 0╭+──╮
└──+┘1 │
   └───┘
top-left vertex of block 1 located within block 0
2.
   ╭───╮
╭──+╮1 │
│ 0└+──┘
└───┘
top-right vertex of block 0 located within block 1
3.
   ╭───╮
╭──+╮0 │
│ 1└+──┘
└───┘
top-right vertex of block 1 located within block 0
bool locate_within(Point, Rectangle) should be easy to implement,
then apply it to bool overlap(RectangleAlpha, RectangleBeta).
Pseudocode:
n = Rectangle-Count
for i in [0..(n-2)]:
    for k in [(i+1)..(n-1)]:
        if overlap(Rectangle[i], Rectangle[k]):
            return False
return True
Note: True -> validated
Time complexity: O(n^2)
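A rough Python version of this pairwise check (the overlap test below uses the standard axis-interval formulation instead of the four corner tests, since it also handles the case where two rectangles cross without either one containing a corner of the other; blocks are assumed to be (x1, y1, x2, y2) tuples):

def overlap(a, b):
    # Two axis-aligned rectangles overlap with positive area exactly
    # when their x-intervals overlap and their y-intervals overlap.
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def validate(blocks):
    # True -> validated: no two blocks overlap (pairwise O(n^2) check).
    n = len(blocks)
    for i in range(n - 1):
        for k in range(i + 1, n):
            if overlap(blocks[i], blocks[k]):
                return False
    return True

Exact duplicates overlap under this test as well, so repeated blocks (case 3 in the question) are also rejected.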
I use the percolator (Elasticsearch 2.3.3) and I have ~100 term queries. When I percolate 1 document in 1 thread, it takes ~500 ms:
{u'total': 0, u'took': 452, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 0.467885982513
There are 4 CPUs, so I want to percolate in 4 processes. But when I launch them, each one takes ~2000 ms:
{u'total': 0, u'took': 1837, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 1.890885982513
Why?
I use the Python elasticsearch module, version 2.3.0.
I have tried varying the number of shards (from 1 to 12), but the result is the same.
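For context, the percolation calls look roughly like this (a simplified sketch of the setup described above; the index and type names are placeholders, and percolate() is the method exposed by the 2.x Python client):

from multiprocessing import Pool
from elasticsearch import Elasticsearch

def percolate_one(doc):
    # Each worker process creates its own client connection.
    es = Elasticsearch(["http://localhost:9200"])
    # "my_index" / "my_type" are placeholders for the real index
    # holding the registered percolator queries and its mapping type.
    return es.percolate(index="my_index", doc_type="my_type",
                        body={"doc": doc})

if __name__ == "__main__":
    docs = [{"message": "sample text %d" % i} for i in range(4)]
    pool = Pool(processes=4)
    results = pool.map(percolate_one, docs)
    pool.close()
    pool.join()
    for r in results:
        print(r["took"])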
When I try to percolate in 20 threads, Elasticsearch fails with this error:
RemoteTransportException[[test_node01][192.168.69.142:9300][indices:data/read/percolate[s]]];
nested: EsRejectedExecutionException[rejected execution of
org.elasticsearch.transport.TransportService$4#7906da8a on
EsThreadPoolExecutor[percolate, queue capacity = 1000,
org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#31a1c278[Running,
pool size = 16, active threads = 16, queued tasks = 1000, completed tasks = 156823]]];
Caused by: EsRejectedExecutionException[rejected execution of
org.elasticsearch.transport.TransportService$4#7906da8a on
EsThreadPoolExecutor[percolate, queue capacity = 1000,
org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#31a1c278[Running,
pool size = 16, active threads = 16, queued tasks = 1000, completed tasks = 156823]]]
The server has 16 CPUs and 32 GB of RAM.
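The rejection above indicates the percolate thread pool is saturated (16 active threads, a full queue of 1000 tasks). A small sketch, assuming the same Python client, for checking that pool while the load is applied:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Pull per-node thread pool statistics and print the percolate pool:
# active threads, queue length and rejected tasks.
stats = es.nodes.stats(metric="thread_pool")
for node in stats["nodes"].values():
    print(node["name"], node["thread_pool"].get("percolate"))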
I have three nodes working together. On those nodes I have five indices, each with 5 primary shards, and each primary shard has 2 replicas. It looks like this (I cut the view to show only two of the 5 indices):
[Screenshot of the shard allocation per node: http://i59.tinypic.com/2ez1wjt.png]
As you can see on the picture:
- the node 1 has primary shards 0 and 3 (and replicas 1, 2 and 4)
- the node 2 has primary shard 2 (and replicas 0, 1, 3 and 4)
- the node 3 has primary shards 1 and 4 (and replicas 0, 2 and 3)
and this is the case for each index (the 5 of them).
I understand that if I restart my nodes this "organisation" will change, but the "look" of index 1 will still be the same as that of indices 2, 3, 4 and 5. For example, after restarting, I would have:
- the node 1 has primary shards 1 and 2 (and replicas 0, 3 and 4)
- the node 2 has primary shard 3 (and replicas 0, 1, 2 and 4)
- the node 3 has primary shards 0 and 4 (and replicas 1, 2 and 3)
and this would be the case for each index (the 5 of them).
Is there a reason why I always find the same pattern for each of my indices?
Thanks
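To compare the layouts without relying on the screenshot, the allocation can be dumped with the cat shards API; a quick sketch with the Python client (assuming a node at localhost:9200):

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# One row per shard copy: index, shard number, primary or replica
# (p/r), its state and the node it lives on.
print(es.cat.shards(v=True, h="index,shard,prirep,state,node"))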
I have a cluster with one node (running locally). The cluster health is yellow. Now I have added one more node, but shards cannot be allocated on the second node, so the health of my cluster is still yellow. I cannot change this state to green, unlike in this guide: health cluster example.
So how can I change the health state to green?
My cluster:
Cluster health:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "astrung",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 22,
"active_shards" : 22,
"relocating_shards" : 0,
"initializing_shards" : 2,
"unassigned_shards" : 20
}
Shard status:
curl -XGET 'http://localhost:9200/_cat/shards?v'
index shard prirep state docs store ip node
_river 0 p STARTED 2 8.1kb 192.168.1.3 One
_river 0 r UNASSIGNED
megacorp 4 p STARTED 1 3.4kb 192.168.1.3 One
megacorp 4 r UNASSIGNED
megacorp 0 p STARTED 2 6.1kb 192.168.1.3 One
megacorp 0 r UNASSIGNED
megacorp 3 p STARTED 1 2.2kb 192.168.1.3 One
megacorp 3 r UNASSIGNED
megacorp 1 p STARTED 0 115b 192.168.1.3 One
megacorp 1 r UNASSIGNED
megacorp 2 p STARTED 1 2.2kb 192.168.1.3 One
megacorp 2 r UNASSIGNED
mybucket 2 p STARTED 1 2.1kb 192.168.1.3 One
mybucket 2 r UNASSIGNED
mybucket 0 p STARTED 0 115b 192.168.1.3 One
mybucket 0 r UNASSIGNED
mybucket 3 p STARTED 2 5.4kb 192.168.1.3 One
mybucket 3 r UNASSIGNED
mybucket 1 p STARTED 1 2.2kb 192.168.1.3 One
mybucket 1 r UNASSIGNED
mybucket 4 p STARTED 1 2.5kb 192.168.1.3 One
mybucket 4 r UNASSIGNED
.kibana 0 r INITIALIZING 192.168.1.3 Two
.kibana 0 p STARTED 2 8.9kb 192.168.1.3 One
.marvel-kibana 2 p STARTED 0 115b 192.168.1.3 One
.marvel-kibana 2 r UNASSIGNED
.marvel-kibana 0 r INITIALIZING 192.168.1.3 Two
.marvel-kibana 0 p STARTED 1 2.9kb 192.168.1.3 One
.marvel-kibana 3 p STARTED 0 115b 192.168.1.3 One
.marvel-kibana 3 r UNASSIGNED
.marvel-kibana 1 p STARTED 0 115b 192.168.1.3 One
.marvel-kibana 1 r UNASSIGNED
.marvel-kibana 4 p STARTED 0 115b 192.168.1.3 One
.marvel-kibana 4 r UNASSIGNED
user_ids 4 p STARTED 11 5kb 192.168.1.3 One
user_ids 4 r UNASSIGNED
user_ids 0 p STARTED 7 25.1kb 192.168.1.3 One
user_ids 0 r UNASSIGNED
user_ids 3 p STARTED 11 4.9kb 192.168.1.3 One
user_ids 3 r UNASSIGNED
user_ids 1 p STARTED 8 28.7kb 192.168.1.3 One
user_ids 1 r UNASSIGNED
user_ids 2 p STARTED 11 8.5kb 192.168.1.3 One
user_ids 2 r UNASSIGNED
I suggest updating the replication factor of all the indices to 0 and then updating it back to 1.
Here's a curl to get you started:
curl -XPUT 'http://localhost:9200/_settings' -H 'Content-Type: application/json' -d '
{
"index" : {
"number_of_replicas" : 0
}
}'
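If you prefer the Python client, the same reset-and-restore cycle described above looks roughly like this (a sketch; like the curl call, it applies to all indices):

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Step 1: drop all replicas so only primaries remain; the cluster
# should turn green as soon as every primary is assigned.
es.indices.put_settings(body={"index": {"number_of_replicas": 0}})
es.cluster.health(wait_for_status="green")

# Step 2: restore one replica per primary; with the second node
# reachable, the replicas should now be allocated there.
es.indices.put_settings(body={"index": {"number_of_replicas": 1}})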
As #mohitt said above, update number_of_replicas to zero (for local development only; be careful about using this in production).
You can run the following in the Kibana Dev Tools Console:
PUT _settings
{
"index" : {
"number_of_replicas" : 0
}
}
Though recovery normally takes a long time, looking at the number and size of your documents, it should take a very short time to recover.
It looks like you have issues with the nodes contacting each other: check your firewall rules and ensure ports 9200 and 9300 are reachable from each node.
I have 1 ES cluster with 3 nodes, 1 index, 3 shards and 2 replicas per shard.
For some reason, all my primary shards are located on the same node:
Node 1: replica 0, replica 1, replica 2
Node 2: replica 0, replica 1, replica 2
Node 3: primary 0, primary 1, primary 2.
What should I do to rebalance the shards? I want to have 1 primary shard per node, for example:
Node 1: primary 0, replica 1, replica 2
Node 2: replica 0, primary 1, replica 2
Node 3: replica 0, replica 1, primary 2.