why are none of my shards being assigned? (ES 2.3)
Create index:
PUT 'host:9200/entities?pretty' -d '{
"mappings": {
x
},
"settings" : {
"index" : {
"number_of_shards" : 6,
"number_of_replicas" : 1
}
}
}'
Cluster Settings:
GET 'host:9200/_cluster/settings?pretty'
{
"persistent" : {
"cluster" : {
"routing" : {
"allocation" : {
"enable" : "all"
}
}
}
},
"transient" : {
"cluster" : {
"routing" : {
"allocation" : {
"enable" : "all"
}
}
}
}
}
Cluster:
host master
node3 m
node2 m
node1 *
Shards
GET 'host:9200/_cat/shards?v'
index shard prirep state docs store ip node
entities 5 p UNASSIGNED
entities 5 r UNASSIGNED
entities 1 p UNASSIGNED
entities 1 r UNASSIGNED
entities 4 p UNASSIGNED
entities 4 r UNASSIGNED
entities 2 p UNASSIGNED
entities 2 r UNASSIGNED
entities 3 p UNASSIGNED
entities 3 r UNASSIGNED
entities 0 p UNASSIGNED
entities 0 r UNASSIGNED
I'm able to assign shards to nodes directly through the routing API, but that doesn't seem to be the way to go.
If I set up the cluster differently, with 1 master node and 2 data nodes, the problem doesn't occur.
Turns out I misinterpreted the node.master and node.data settings. I thought it had to be either one or the other.
Set all three nodes to node.master: true and node.data: true, and now it's working like a charm.
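For reference, this is roughly what the relevant part of elasticsearch.yml ends up looking like on each of the three nodes (the cluster and node names here are placeholders):
cluster.name: mycluster      # placeholder cluster name
node.name: "node1"           # node2 / node3 on the other two machines
node.master: true            # eligible to be elected master
node.data: true              # also holds shard data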
I'm looking through the D-Bus API for NetworkManager, and there are methods with inputs of type a{sa{sv}}.
I'm still new to D-Bus, but if I'm interpreting the definition of signature specifiers in https://www.freedesktop.org/software/systemd/man/sd_bus_message_read.html# correctly, this is:
A variable-length array
Of named variable arrays
Of named "variants" (which I guess are tagged unions)
What is this for, practically? A name-keyed collection of named settings? I'm seeing it all over the place in this API.
s is std::string.
v is variant.
a{} is std::map.
a{sv} is std::map<std::string, Variant>
Finally: a{sa{sv}} is std::map<std::string, std::map<std::string, Variant>>
A Variant can hold a value of any D-Bus-supported type. If you are using C++, I recommend checking out sdbus-cpp.
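For example, reading such a structure with sdbus-cpp boils down to something like this. Treat it as a sketch: the bus name and interface are NetworkManager's, and the object path /Settings/1 is just an assumed example (real paths come from ListConnections):
#include <sdbus-c++/sdbus-c++.h>
#include <iostream>
#include <map>
#include <string>

int main() {
    // NetworkManager lives on the system bus
    auto connection = sdbus::createSystemBusConnection();

    // Proxy to one connection-settings object (path assumed for illustration)
    auto proxy = sdbus::createProxy(*connection,
                                    "org.freedesktop.NetworkManager",
                                    "/org/freedesktop/NetworkManager/Settings/1");

    // a{sa{sv}} maps onto a nested std::map of sdbus::Variant
    std::map<std::string, std::map<std::string, sdbus::Variant>> settings;
    proxy->callMethod("GetSettings")
         .onInterface("org.freedesktop.NetworkManager.Settings.Connection")
         .storeResultsTo(settings);

    for (const auto& [group, entries] : settings)
        for (const auto& [key, value] : entries)
            std::cout << group << "." << key << "\n";
}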
It turns out this is what I guess should be called "Settings" in NetworkManager. For several methods, instead of building a connection setting by setting, an entire set of settings is added all at once. Here's a tabbed and commented version of the settings of my current connection, queried as an example:
5
    "connection" 5
        "id" s "Profile 1"
        "permissions" as 0 // <empty array of strings>
        "timestamp" t 1661376049
        "type" s "802-3-ethernet"
        "uuid" s <not posting for privacy>
    "802-3-ethernet" 3
        "auto-negotiate" b false
        "mac-address-blacklist" as 0
        "s390-options" a{ss} 0
    "ipv4" 6
        "address-data" aa{sv} 0
        "addresses" aau 0
        "dns-search" as 0
        "method" s "auto"
        "route-data" aa{sv} 0
        "routes" aau 0
    "ipv6" 7
        "addr-gen-mode" i 1
        "address-data" aa{sv} 0
        "addresses" a(ayuay) 0
        "dns-search" as 0
        "method" s "auto"
        "route-data" aa{sv} 0
        "routes" a(ayuayu) 0
    "proxy" 0
I think most of these are defaults so the real settings you might set when creating a connection are probably something like:
4
    "connection" 4
        "id" s "Profile Foo"
        "timestamp" t <whatever, maybe this is autogenerated>
        "type" s "802-3-ethernet"
        "uuid" s <might be auto generated too>
    "802-3-ethernet" 0
    "ipv4" 1
        "method" s "auto"
    "ipv6" 1
        "addr-gen-mode" i 1
        "method" s "auto"
I have a problem with count performance in MongoDB.
I'm using ZF2 and Doctrine ODM with the SoftDelete filter. When I query the collection for the "first time" with db.getCollection('order').count({"deletedAt": null}), it takes about 30 seconds, sometimes even more. Subsequent queries take about 150 ms. After a few minutes the query takes about 30 seconds again. This only happens on collections larger than 700 MB.
The server is an Amazon EC2 t2.medium instance, Mongo 3.0.1.
Maybe it's similar to "MongoDB preload documents into RAM for better performance", but those answers do not solve my problem.
Any ideas what is going on?
Edit: explain output:
{
"executionSuccess" : true,
"nReturned" : 111449,
"executionTimeMillis" : 24966,
"totalKeysExamined" : 0,
"totalDocsExamined" : 111449,
"executionStages" : {
"stage" : "COLLSCAN",
"filter" : {
"$and" : []
},
"nReturned" : 111449,
"executionTimeMillisEstimate" : 281,
"works" : 145111,
"advanced" : 111449,
"needTime" : 1,
"needFetch" : 33660,
"saveState" : 33660,
"restoreState" : 33660,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 111449
},
"allPlansExecution" : []
}
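For reference, executionStats output like the above can be produced with the cursor explain helper, something along these lines:
db.getCollection('order').find({ "deletedAt": null }).explain("executionStats")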
The count goes through each document, which is what creates the performance issue.
Only care about the precise number when it's small: you're interested in knowing whether there are 100 results or 500. But once it goes beyond, let's say, 10000, you can simply tell the user "More than 10000 results found".
db.getCollection('order').find({"deletedAt": null}).limit(10000).count(true)
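A hedged usage sketch of this approach (the true argument makes count() honor the limit, so the scan can stop after 10000 matching documents):
// Cap the count at 10000 so a huge collection is not fully scanned
var capped = db.getCollection('order').find({ "deletedAt": null }).limit(10000).count(true);
// Show an exact number when it is small, otherwise an open-ended message
print(capped >= 10000 ? "More than 10000 results found" : capped + " results found");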
Is it possible to achieve something like this:
region : 2
regions {
1 : us-east-1
2 : eu-west-1
3 : sa-east-1
}
# Desired outcome is `name : eu-west-1` (depending on the region value)
name : ${regions.${region}}
I'm facing a problem similar to the one described at http://elasticsearch-users.115913.n3.nabble.com/New-to-Elastic-Search-I-get-this-exception-org-elasticsearch-action-UnavailableShardsException-td3244381.html.
I have a master node with config:
cluster.name: geocoding
node.name: "node1"
node.master: true
node.data: false
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
and a data node with config:
cluster.name: geocoding
node.name: "node2"
node.master: false
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
I started both the master and data nodes, running on ports 9200 and 9201 respectively.
When I connect to create an index (using the following source), I get an error: org.elasticsearch.action.UnavailableShardsException.
Source code:
ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder();
builder.put("node.client", true);
builder.put("node.data", false);
builder.put("node.name", "node3");
builder.put("cluster.name", "geocoding");
builder.put("discovery.zen.ping.multicast.enabled", "false");
builder.put("discovery.zen.ping.unicast.hosts", "localhost");
builder.build();
Node node = NodeBuilder.nodeBuilder().settings(builder).node();
client = node.client();
print(client.admin().cluster().prepareHealth().setWaitForGreenStatus().execute().actionGet());
It prints:
{
"cluster_name" : "geocoding",
"status" : "red",
"timed_out" : true,
"number_of_nodes" : 3,
"number_of_data_nodes" : 1,
"active_primary_shards" : 4,
"active_shards" : 4,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 16
}
Any suggestions?
Edited:
@AndreiStefan:
curl -XGET localhost:9200/_cat/shards?v
index shard prirep state docs store ip node
india 2 p UNASSIGNED
india 2 r UNASSIGNED
india 0 p UNASSIGNED
india 0 r UNASSIGNED
india 3 p UNASSIGNED
india 3 r UNASSIGNED
india 1 p UNASSIGNED
india 1 r UNASSIGNED
india 4 p UNASSIGNED
india 4 r UNASSIGNED
mytest 4 p UNASSIGNED
mytest 4 r UNASSIGNED
mytest 0 p STARTED 0 115b <my ip> node2
mytest 0 r UNASSIGNED
mytest 3 p STARTED 1 2.6kb <my ip> node2
mytest 3 r UNASSIGNED
mytest 1 p STARTED 0 115b <my ip> node2
mytest 1 r UNASSIGNED
mytest 2 p STARTED 0 115b <my ip> node2
mytest 2 r UNASSIGNED
Exception detail:
org.elasticsearch.action.UnavailableShardsException: [india][4] Primary shard is not active or isn't assigned is a known node. Timeout: [1m], request: index {[x][y][1], source[{"message":"a",}]}
Edited:
Full source:
HashMap<String, String> documentMap = new HashMap<String, String>();
String id = "1";
String indexName = "india";
String indexType = "address";
ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder();
builder.put("node.client", true);
builder.put("node.data", false);
builder.put("node.name", "node3");
builder.put("cluster.name", "geocoding");
builder.put("discovery.zen.ping.multicast.enabled", "false");
builder.put("discovery.zen.ping.unicast.hosts", "localhost");
builder.build();
Node node = NodeBuilder.nodeBuilder().settings(builder).node();
Client client = node.client();
XContentBuilder contentBuilder = XContentFactory.jsonBuilder().startObject();
Iterator<String> keys = documentMap.keySet().iterator();
while (keys.hasNext()) {
String key = keys.next();
contentBuilder.field(key, documentMap.get(key));
}
contentBuilder.endObject();
IndexResponse response = client.prepareIndex(indexName, indexType, id).setSource(contentBuilder).execute().actionGet();
I have an ES cluster which is playing up. At one point I had all primary and replica shards correctly assigned to 4 of my 5 nodes, but in trying to get some onto the 5th node I have once again lost my replica shards. Now my primary shards exist only on 3 nodes.
I am trying to get to the bottom of the issue:
On trying a forced allocation such as:
{
"commands": [
{
"allocate": {
"index": "group7to11poc",
"shard": 7,
"node": "SPOCNODE1"
}
}
]
}
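For context, a command body like this goes to the cluster reroute API; I'm running something along these lines (the host/port and file name here are just placeholders):
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d @allocate.json   # allocate.json contains the body above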
I get the following response. I am having trouble finding out the exact problem!
{
"explanations" : [ {
"command" : "allocate",
"parameters" : {
"index" : "group7to11poc",
"shard" : 7,
"node" : "SPOCNODE5",
"allow_primary" : true
},
"decisions" : [
{ "decider" : "same_shard", "decision" : "YES", "explanation" : "shard is not allocated to same node or host" },
{ "decider" : "filter", "decision" : "NO", "explanation" : "node does not match index include filters [_id:\"4rZYPBOGRMK4y9YG6p7E2w\"]" },
{ "decider" : "replica_after_primary_active", "decision" : "YES", "explanation" : "primary is already active" },
{ "decider" : "throttling", "decision" : "YES", "explanation" : "below shard recovery limit of [2]" },
{ "decider" : "enable", "decision" : "YES", "explanation" : "allocation disabling is ignored" },
{ "decider" : "disable", "decision" : "YES", "explanation" : "allocation disabling is ignored" },
{ "decider" : "awareness", "decision" : "YES", "explanation" : "no allocation awareness enabled" },
{ "decider" : "shards_limit", "decision" : "YES", "explanation" : "total shard limit disabled: [-1] <= 0" },
{ "decider" : "node_version", "decision" : "YES", "explanation" : "target node version [1.3.2] is same or newer than source node version [1.3.2]" },
{ "decider" : "disk_threshold", "decision" : "YES", "explanation" : "disk usages unavailable" },
{ "decider" : "snapshot_in_progress", "decision" : "YES", "explanation" : "shard not primary or relocation disabled" }
]
} ]
}
Finally sorted this. Somehow the index had gotten a filter applied to it which prevented shard allocation and movement.
I removed the filter and the cluster began behaving.
curl -XPUT localhost:9200/test/_settings -d '{
"index.routing.allocation.include._id" : "" }'
This sets the _id include filter to empty. It was previously populated, which prevented the filter from ever being matched!
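To double-check that the filter is really cleared, re-read the index settings; index.routing.allocation.include._id should no longer show a value (using test as the index name, as in the command above; substitute your actual index):
curl -XGET 'localhost:9200/test/_settings?pretty'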