org.elasticsearch.action.UnavailableShardsException - elasticsearch

I'm facing a similar problem to the one described at http://elasticsearch-users.115913.n3.nabble.com/New-to-Elastic-Search-I-get-this-exception-org-elasticsearch-action-UnavailableShardsException-td3244381.html.
I have a master node with config:
cluster.name: geocoding
node.name: "node1"
node.master: true
node.data: false
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
and a data node with config:
cluster.name: geocoding
node.name: "node2"
node.master: false
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
I started both the master and the data node, running on ports 9200 and 9201 respectively.
When I connect to create an index (using the following source), I get org.elasticsearch.action.UnavailableShardsException.
Source code:
ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder();
builder.put("node.client", true);
builder.put("node.data", false);
builder.put("node.name", "node3");
builder.put("cluster.name", "geocoding");
builder.put("discovery.zen.ping.multicast.enabled", "false");
builder.put("discovery.zen.ping.unicast.hosts", "localhost");
builder.build();
Node node = NodeBuilder.nodeBuilder().settings(builder).node();
client = node.client();
print(client.admin().cluster().prepareHealth().setWaitForGreenStatus().execute().actionGet());
It prints:
{
"cluster_name" : "geocoding",
"status" : "red",
"timed_out" : true,
"number_of_nodes" : 3,
"number_of_data_nodes" : 1,
"active_primary_shards" : 4,
"active_shards" : 4,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 16
}
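As a sanity check, the shard totals in this health output are self-consistent if the cluster holds two 5-shard indices that each carry one replica (a quick sketch; the index names come from the _cat/shards listing further down, and the replica count is an assumption despite number_of_replicas: 0 in the config):

```python
# Sanity check on the cluster-health numbers above.
# Assumes two indices ("india" and "mytest"), each with 5 shards and
# 1 replica -- 1 primary + 1 replica = 2 copies per shard.
indices = 2
shards_per_index = 5
copies_per_shard = 2

total_copies = indices * shards_per_index * copies_per_shard
active_shards = 4  # from the health output
unassigned = total_copies - active_shards

print(total_copies, unassigned)  # 20 16
```

The 16 unassigned shards reported above are exactly the remaining copies that no data node has picked up.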
Any suggestions?
Edited:
@AndreiStefan:
curl -XGET localhost:9200/_cat/shards?v
index shard prirep state docs store ip node
india 2 p UNASSIGNED
india 2 r UNASSIGNED
india 0 p UNASSIGNED
india 0 r UNASSIGNED
india 3 p UNASSIGNED
india 3 r UNASSIGNED
india 1 p UNASSIGNED
india 1 r UNASSIGNED
india 4 p UNASSIGNED
india 4 r UNASSIGNED
mytest 4 p UNASSIGNED
mytest 4 r UNASSIGNED
mytest 0 p STARTED 0 115b <my ip> node2
mytest 0 r UNASSIGNED
mytest 3 p STARTED 1 2.6kb <my ip> node2
mytest 3 r UNASSIGNED
mytest 1 p STARTED 0 115b <my ip> node2
mytest 1 r UNASSIGNED
mytest 2 p STARTED 0 115b <my ip> node2
mytest 2 r UNASSIGNED
Exception detail:
org.elasticsearch.action.UnavailableShardsException: [india][4] Primary shard is not active or isn't assigned is a known node. Timeout: [1m], request: index {[x][y][1], source[{"message":"a",}]}
Edited:
Full source:
HashMap<String, String> documentMap = new HashMap<>();
String id = "1";
String indexName = "india";
String indexType = "address";
ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder();
builder.put("node.client", true);
builder.put("node.data", false);
builder.put("node.name", "node3");
builder.put("cluster.name", "geocoding");
builder.put("discovery.zen.ping.multicast.enabled", "false");
builder.put("discovery.zen.ping.unicast.hosts", "localhost");
builder.build();
Node node = NodeBuilder.nodeBuilder().settings(builder).node();
Client client = node.client();
XContentBuilder contentBuilder = XContentFactory.jsonBuilder().startObject();
Iterator<String> keys = documentMap.keySet().iterator();
while (keys.hasNext()) {
String key = keys.next();
contentBuilder.field(key, documentMap.get(key));
}
contentBuilder.endObject();
IndexResponse response = client.prepareIndex(indexName, indexType, id).setSource(contentBuilder).execute().actionGet();

Related

Telegraf unable to pull route table information from Arista MIB

So I'm trying to collect routing stats from some Aristas.
When I run snmpwalk it all seems to work...
snmpwalk -v2c -c pub router.host ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.other = Gauge32: 3
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.connected = Gauge32: 8
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.static = Gauge32: 26
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.ospf = Gauge32: 542
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.bgp = Gauge32: 1623
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.attached = Gauge32: 12
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.internal = Gauge32: 25
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv6.other = Gauge32: 3
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv6.internal = Gauge32: 1
But when I try to pull the stats with telegraf I get different information with missing context...
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=2i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutes=2260i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=8i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=63i 1654976575000000000
According to the MIB documentation
https://www.arista.com/assets/data/docs/MIBS/ARISTA-FIB-STATS-MIB.txt
it uses the IANA-RTPROTO-MIB.txt protocol definitions, but I have no idea how to derive that information, since the data retrieved via telegraf isn't showing me anything. Does anyone know how to deal with this?
First, you might want to enable telegraf to return the index of the returned rows by setting index_as_tag = true inside the inputs.snmp.table.
Then, add the following processors in your config:
# Parse aristaFIBStatsAF and aristaFIBStatsRouteType from index for BGP table
[[processors.regex]]
namepass = ["BGP"]
order = 1
[[processors.regex.tags]]
## Tag to change
key = "index"
## Regular expression to match on a tag value
pattern = "^(\\d+)\\.(\\d+)$"
replacement = "${1}"
## Tag to store the result
result_key = "aristaFIBStatsAF"
[[processors.regex.tags]]
## Tag to change
key = "index"
## Regular expression to match on a tag value
pattern = "^(\\d+)\\.(\\d+)$"
replacement = "${2}"
## Tag to store the result
result_key = "aristaFIBStatsRouteType"
# Rename index to aristaFIBStatsAF for BGP table with single index row
[[processors.rename]]
namepass = ["BGP"]
order = 2
[[processors.rename.replace]]
tag = "index"
dest = "aristaFIBStatsAF"
[processors.rename.tagdrop]
aristaFIBStatsAF = ["*"]
# Translate tag values for BGP table
[[processors.enum]]
namepass = ["BGP"]
order = 3
tagexclude = ["index"]
[[processors.enum.mapping]]
## Name of the tag to map
tag = "aristaFIBStatsAF"
## Table of mappings
[processors.enum.mapping.value_mappings]
0 = "unknown"
1 = "ipv4"
2 = "ipv6"
[[processors.enum.mapping]]
## Name of the tag to map
tag = "aristaFIBStatsRouteType"
## Table of mappings
[processors.enum.mapping.value_mappings]
1 = "other"
2 = "connected"
3 = "static"
8 = "rip"
9 = "isIs"
13 = "ospf"
14 = "bgp"
200 = "ospfv3"
201 = "staticNonPersistent"
202 = "staticNexthopGroup"
203 = "attached"
204 = "vcs"
205 = "internal"
Disclaimer: I did not test this in telegraf, so there might be some typos.
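To see what the two regex processors extract, here is a small Python sketch of the same pattern applied to a hypothetical two-part index value (with index_as_tag = true, the index tag has the shape "<aristaFIBStatsAF>.<aristaFIBStatsRouteType>"):

```python
import re

# Hypothetical index tag value, mirroring the enum mappings above:
# AF 1 = ipv4, route type 14 = bgp.
index_tag = "1.14"
pattern = re.compile(r"^(\d+)\.(\d+)$")

m = pattern.match(index_tag)
af = m.group(1)          # what replacement = "${1}" stores in aristaFIBStatsAF
route_type = m.group(2)  # what replacement = "${2}" stores in aristaFIBStatsRouteType

print(af, route_type)  # 1 14
```

The enum processors then map "1" to "ipv4" and "14" to "bgp", restoring the context that the raw telegraf output was missing.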

What are the system requirements for comfortably running IBM Cloud Private Community Edition (ICP CE)?

I'm trying to run IBM Cloud Private.
I saw Hardware requirements and recommendations, but I'm not sure that the spec is enough for me.
I'd like to run several Cloud Foundry applications, MessageSight, Spinnaker, and so on.
What do you think of this spec?
CPU: 3GHz 10core
Mem: 64GB
HDD: 2TB (SSD)
I wrote a Terraform script for this, together with CentOS 7.6.
The minimum I use is the following config, counting 250GB as the minimum disk size.
This applies to ICP CE 3.1.1.
##### ICP Cluster Components #####
master = {
nodes = "3"
vcpu = "8"
memory = "16384"
docker_disk_size = "250"
thin_provisioned = "true"
thin_provisioned_etcd = "false"
}
proxy = {
nodes = "3"
vcpu = "4"
memory = "8192"
thin_provisioned = "true"
}
worker = {
nodes = "3"
vcpu = "8"
memory = "8192"
thin_provisioned = "true"
}
management = {
nodes = "3"
vcpu = "4"
memory = "16384"
thin_provisioned = "true"
}
va = {
nodes = "2"
vcpu = "4"
memory = "8192"
thin_provisioned = "true"
}
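For scale, a quick tally of that minimum topology (numbers copied from the variables above) shows why it is much larger than the single 10-core/64 GB machine in the question:

```python
# Tally of the minimum ICP CE 3.1.1 topology above: (nodes, vcpu, memory_mb).
groups = {
    "master":     (3, 8, 16384),
    "proxy":      (3, 4, 8192),
    "worker":     (3, 8, 8192),
    "management": (3, 4, 16384),
    "va":         (2, 4, 8192),
}

total_vcpu = sum(n * v for n, v, _m in groups.values())
total_mem_gb = sum(n * m for n, _v, m in groups.values()) // 1024

print(total_vcpu, total_mem_gb)  # 80 160
```

That is 80 vCPUs and 160 GB of memory across 14 nodes, before counting the 250 GB disk per docker volume.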

ElasticSearch all shards remain unassigned (with routing.allocation set to all)

why are none of my shards being assigned? (ES 2.3)
Create index:
PUT 'host:9200/entities?pretty' -d '{
  "mappings": {
    x
  },
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 1
    }
  }
}'
Cluster Settings:
GET 'host:9200/_cluster/settings?pretty'
{
"persistent" : {
"cluster" : {
"routing" : {
"allocation" : {
"enable" : "all"
}
}
}
},
"transient" : {
"cluster" : {
"routing" : {
"allocation" : {
"enable" : "all"
}
}
}
}
}
Cluster:
host master
node3 m
node2 m
node1 *
Shards
GET 'host:9200/_cat/shards?v'
index shard prirep state docs store ip node
entities 5 p UNASSIGNED
entities 5 r UNASSIGNED
entities 1 p UNASSIGNED
entities 1 r UNASSIGNED
entities 4 p UNASSIGNED
entities 4 r UNASSIGNED
entities 2 p UNASSIGNED
entities 2 r UNASSIGNED
entities 3 p UNASSIGNED
entities 3 r UNASSIGNED
entities 0 p UNASSIGNED
entities 0 r UNASSIGNED
I'm able to assign shards directly through the routing API, but that doesn't seem to be the way to go.
If I set up the cluster differently, with 1 master node and 2 data nodes, the problem doesn't occur.
Turns out I misinterpreted the node.master and node.data settings. I thought it had to be either one or the other.
Setting all three nodes to node.master: true and node.data: true made it work like a charm.
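For reference, the fix described above corresponds to these two lines in each node's elasticsearch.yml:

```
node.master: true
node.data: true
```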

How to make all nodes ping & reply in the OMNeT++ simulator (scenario given below)

Can anyone help me with the code for a simple ping app?
I am trying to ping from 4 nodes, but only 2 can communicate in my case.
My omnetpp.ini code is here:
# ping app (host[0] pinged by others)
*.host[0].numPingApps = 0
*.host[*].numPingApps = 4
*.host[1].pingApp[*].destAddr = "host[0]"
*.host[2].pingApp[*].destAddr = "host[0]"
*.host[3].pingApp[*].destAddr = "host[0]"
*.host[4].pingApp[*].destAddr = "host[0]"
*.host[1].pingApp[*].startTime = 3s
*.host[1].pingApp[*].sendInterval = 1s
*.host[1].pingApp[*].srcAddr = "host[1]"
#*.host[1].pingApp[*].destAddr = "host[2]"
*.host[2].pingApp[*].startTime = 2s
*.host[2].pingApp[*].sendInterval = 1s
*.host[2].pingApp[*].srcAddr = "host[2]"
#*.host[2].pingApp[*].destAddr = "host[1]"
*.host[3].pingApp[*].startTime = 1s
*.host[3].pingApp[*].sendInterval = 1s
*.host[3].pingApp[*].srcAddr = "host[3]"
#*.host[3].pingApp[*].destAddr = "host[4]"
Hosts 1, 2, 3, and 4 should each have only one pingApp. Moreover, srcAddr may be omitted, because this field is set automatically by the network layer. So a minimal omnetpp.ini should look like:
*.host[0].numPingApps = 0
*.host[0].pingApp[0].destAddr = ""
*.host[*].numPingApps = 1
*.host[*].pingApp[0].destAddr = "host[0]"

How does Sphinx calculate the weight?

Note:
This is a cross-post; it was first posted on the Sphinx forum, but I got no answer, so I'm posting it here.
First, take a look at an example.
The following is my table (just used for testing):
+----+--------------------------+----------------------+
| Id | title | body |
+----+--------------------------+----------------------+
| 1 | National first hospital | NASA |
| 2 | National second hospital | Space Administration |
| 3 | National govenment | Support the hospital |
+----+--------------------------+----------------------+
I want to search the contents of the title and body fields, so I configured sphinx.conf as shown below:
--------The sphinx config file----------
source mysql
{
type = mysql
sql_host = localhost
sql_user = root
sql_pass =0000
sql_db = testfull
sql_port = 3306 # optional, default is 3306
sql_query_pre = SET NAMES utf8
sql_query = SELECT * FROM test
}
index mysql
{
source = mysql
path = var/data/mysql_old_test
docinfo = extern
mlock = 0
morphology = stem_en, stem_ru, soundex
min_stemming_len = 1
min_word_len = 1
charset_type = utf-8
html_strip = 0
}
indexer
{
mem_limit = 128M
}
searchd
{
listen = 9312
read_timeout = 5
max_children = 30
max_matches = 1000
seamless_rotate = 0
preopen_indexes = 0
unlink_old = 1
pid_file = var/log/searchd_mysql.pid
log = var/log/searchd_mysql.log
query_log = var/log/query_mysql.log
}
------------------
Then I reindexed the database and started the searchd daemon.
On the client side I set the attributes as:
----------Client side config-------------------
sc = new SphinxClient();
///other thing
HashMap<String, Integer> weiMap=new HashMap<String, Integer>();
weiMap.put("title", 100);
weiMap.put("body", 0);
sc.SetFieldWeights(weiMap);
sc.SetMatchMode(SphinxClient.SPH_MATCH_ALL);
sc.SetSortMode(SphinxClient.SPH_SORT_EXTENDED,"#weight DESC");
When I search for "National hospital", I get the following output:
Query 'National hospital' retrieved 3 of 3 matches in 0.0 sec.
Query stats:
'nation' found 3 times in 3 documents
'hospit' found 3 times in 3 documents
Matches:
1. id=3, weight=101
2. id=1, weight=100
3. id=2, weight=100
The match count (three matches) is right; however, the order of the results is not what I wanted.
Obviously the documents with ids 1 and 2 should be the closest matches to the query string ("National hospital"), so in my opinion they should be given the largest weights, but they are ordered in the last positions.
I wonder if there is any way to meet my requirement?
PS:
1) Please do not suggest setting the sort mode to:
sc.SetSortMode(SphinxClient.SPH_SORT_EXTENDED,"#weight ASC");
This may work for this particular example, but it would cause other potential problems.
2) Actually the contents of my table are Chinese; I just used "National Hosp..l" to make an example.
1° You ask for "National hospital", but Sphinx searches for "nation" and "hospit" because of
morphology = stem_en, stem_ru, soundex
2° You give weights
weiMap.put("title", 100);
weiMap.put("body", 0);
to non-existent text fields, since the query is
sql_query = SELECT * FROM test
3° Finally, my simple answer to the main question:
You sort by weight, and the third row has more weight because there are no words between "nation" and "hospit".
