I am getting an ElasticsearchStatusWarning saying that the cluster state is yellow. Upon running the cluster health API, I see the following:
curl -X GET http://localhost:9200/_cluster/health/
{"cluster_name":"my-elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":8,"number_of_data_nodes":3,"active_primary_shards":220,"active_shards":438,"relocating_shards":0,"initializing_shards":2,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":99.54545454545455}
initializing_shards is 2, so I further ran the call below:
curl -X GET http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep INIT

graph_vertex_24_18549 0 r INITIALIZING ALLOCATION_FAILED
curl -X GET http://localhost:9200/_cat/shards/graph_vertex_24_18549
graph_vertex_24_18549 0 p STARTED 8373375 8.4gb IP1 elasticsearch-data-1
graph_vertex_24_18549 0 r INITIALIZING IP2 elasticsearch-data-2
Rerunning the same command a few minutes later shows it is now being initialized on elasticsearch-data-0. See below:
graph_vertex_24_18549 0 p STARTED 8373375 8.4gb IP1 elasticsearch-data-1
graph_vertex_24_18549 0 r INITIALIZING IP0 elasticsearch-data-0
If I rerun it a few minutes later again, I can see it is being initialized on elasticsearch-data-2 once more. But it never reaches STARTED.
curl -X GET http://localhost:9200/_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
147 162.2gb 183.8gb 308.1gb 492gb 37 IP1 IP1 elasticsearch-data-2
146 217.3gb 234.2gb 257.7gb 492gb 47 IP2 IP2 elasticsearch-data-1
147 216.6gb 231.2gb 260.7gb 492gb 47 IP3 IP3 elasticsearch-data-0
curl -X GET http://localhost:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
IP1 7 77 20 4.17 4.57 4.88 mi - elasticsearch-master-2
IP2 72 59 7 2.59 2.38 2.19 i - elasticsearch-5f4bd5b88f-4lvxz
IP3 57 49 3 0.75 1.13 1.09 di - elasticsearch-data-2
IP4 63 57 21 2.69 3.58 4.11 di - elasticsearch-data-0
IP5 5 59 7 2.59 2.38 2.19 mi - elasticsearch-master-0
IP6 69 53 13 4.67 4.60 4.66 di - elasticsearch-data-1
IP7 8 70 14 2.86 3.20 3.09 mi * elasticsearch-master-1
IP8 30 77 20 4.17 4.57 4.88 i - elasticsearch-5f4bd5b88f-wnrl4
curl -s -XGET http://localhost:9200/_cluster/allocation/explain -H 'Content-type: application/json' -d '{
  "index": "graph_vertex_24_18549", "shard": 0, "primary": false }'
{"index":"graph_vertex_24_18549","shard":0,"primary":false,"current_state":"initializing","unassigned_info":{"reason":"ALLOCATION_FAILED","at":"2020-11-04T08:21:45.756Z","failed_allocation_attempts":1,"details":"failed shard on node [1XEXS92jTK-wwanNgQrxsA]: failed to perform indices:data/write/bulk[s] on replica [graph_vertex_24_18549][0], node[1XEXS92jTK-wwanNgQrxsA], [R], s[STARTED], a[id=RnTOlfQuQkOumVuw_NeuTw], failure RemoteTransportException[[elasticsearch-data-2][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [4322682690/4gb], which is larger than the limit of [4005632409/3.7gb], real usage: [3646987112/3.3gb], new bytes reserved: [675695578/644.3mb]]; ","last_allocation_status":"no_attempt"},"current_node":{"id":"o_9jyrmOSca9T12J4bY0Nw","name":"elasticsearch-data-0","transport_address":"IP:9300"},"explanation":"the shard is in the process of initializing on node [elasticsearch-data-0], wait until initialization has completed"}
The thing is, I was earlier getting alerted for unassigned shards due to the same exception as above - "CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [4322682690/4gb], which is larger than the limit of [4005632409/3.7gb]"
But back then the heap was only 2 GB. I increased it to 4 GB, and now I am seeing the same error, but this time for initializing shards instead of unassigned shards.
How can I remediate this?
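A sketch of one retry path I am considering (an assumption on my part, not a confirmed fix; it presumes the breaker only trips transiently once heap pressure drops) is to force a retry of the failed allocation and then re-check the explain output:

# Retry shards whose allocation previously failed
curl -X POST "http://localhost:9200/_cluster/reroute?retry_failed=true"

# Re-check why the replica is (or is not) assigned afterwards
curl -s -XGET "http://localhost:9200/_cluster/allocation/explain?pretty" -H 'Content-type: application/json' -d '{ "index": "graph_vertex_24_18549", "shard": 0, "primary": false }'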
Related
When I run the following command:
GET _cat/allocation?v&s=disk.indices&h=shards,disk.indices,disk.used,disk.available,disk.total,disk.percent
it shows the following output:
shards disk.indices disk.used disk.total disk.percent
160 1.4tb 1.4tb 1.7tb 86
160 1.4tb 1.4tb 1.7tb 87
160 1.5tb 1.5tb 1.7tb 89
160 1.5tb 1.5tb 1.7tb 90
480 7.7tb 3.7tb 20tb 18
480 7.7tb 3.9tb 20tb 19
Can anyone help me understand how the disk.indices in the last two rows can exceed disk.used?
Ideally, disk.used should be greater than or equal to disk.indices.
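One way I have thought of cross-checking this (just a sketch; the data path below is an assumption, yours may differ) is to compare what ES reports per node against what the filesystem on that node reports:

# What ES reports per node
curl -s "localhost:9200/_cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.avail,disk.total"

# On the node itself (path.data is assumed here)
du -sh /var/lib/elasticsearch/nodes/0/indices
df -h /var/lib/elasticsearch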
[!] /usr/bin/curl -f -L -o /var/folders/1x/jmv798095x1fbjc_6128mflh0000gn/T/d20170314-59599-7fjizg/file.zip https://github.com/realm/SwiftLint/releases/download/0.16.1/portable_swiftlint.zip --create-dirs --netrc
curl: (7) Failed to connect to github-cloud.s3.amazonaws.com port 443: Operation timed out
This is the error I get when I execute pod install. My network is behind a SOCKS5 proxy. I can download https://github.com/realm/SwiftLint/releases/download/0.16.1/portable_swiftlint.zip easily. But how can I make pod install succeed?
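One workaround that might apply here (a sketch only, assuming the SOCKS5 proxy listens on 127.0.0.1:1080 - adjust host and port to your setup): /usr/bin/curl honours the ALL_PROXY environment variable, so exporting it in the same shell before running CocoaPods should route the download through the proxy.

# Route curl (and therefore the download shown above) through the SOCKS5 proxy
export ALL_PROXY=socks5://127.0.0.1:1080
pod install

# Alternatively, set the proxy for curl permanently in ~/.curlrc:
# proxy = "socks5://127.0.0.1:1080"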
So, I had just started out with Elasticsearch on my local machine.
I started 5 Elasticsearch node instances (with a simple ./bin/elasticsearch).
curl -s 'localhost:9200/_cat/nodes?v' gives:
host ip heap.percent ram.percent load node.role master name
127.0.0.1 127.0.0.1 5 99 3.13 d m Shirow Ishihara
127.0.0.1 127.0.0.1 7 100 3.13 d m Madame Web
127.0.0.1 127.0.0.1 5 100 3.13 d m Anthropomorpho
127.0.0.1 127.0.0.1 5 100 3.13 d m Paste-Pot Pete
127.0.0.1 127.0.0.1 2 100 3.13 d * Mephisto
My index has 2 primary shards and 5 replicas (total 10 replicas).
I had read that ES automatically scales horizontally and assigns/moves shards to new nodes. However, all 10 replicas are still unassigned, and both primary shards are on the same node.
curl -s 'localhost:9200/_cat/allocation?v' gives:
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
0 0b 105.5gb 6.2gb 111.8gb 94 127.0.0.1 127.0.0.1 Shirow Ishihara
0 0b 105.5gb 6.2gb 111.8gb 94 127.0.0.1 127.0.0.1 Paste-Pot Pete
2 318b 105.5gb 6.2gb 111.8gb 94 127.0.0.1 127.0.0.1 Mephisto
0 0b 105.5gb 6.2gb 111.8gb 94 127.0.0.1 127.0.0.1 Anthropomorpho
0 0b 105.5gb 6.2gb 111.8gb 94 127.0.0.1 127.0.0.1 Madame Web
10 UNASSIGNED
You have too little available disk space, and ES is actually trying to move some shards away. But all your nodes are on the same machine, so there is nowhere else to move them and they stay unassigned. The used disk space is more than 90% of the total disk size, so ES is hitting the high watermark.
Read more about this here.
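If you just want to experiment locally, a minimal sketch (the values are illustrative for a disk that is already 94% full, not a recommendation) is to relax the disk watermarks transiently until you free up space:

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "95%",
    "cluster.routing.allocation.disk.watermark.high": "97%"
  }
}'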
Problems:
More and more data nodes are going into bad health in Cloudera Manager.
Clue1:
There is no task or job running; this is just an idle data node:
top
-bash-4.1$ top
top - 18:27:22 up 4:59, 3 users, load average: 4.55, 3.52, 3.18
Tasks: 139 total, 1 running, 137 sleeping, 1 stopped, 0 zombie
Cpu(s): 14.8%us, 85.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7932720k total, 1243372k used, 6689348k free, 52244k buffers
Swap: 6160376k total, 0k used, 6160376k free, 267228k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13766 root 20 0 2664m 21m 7048 S 85.4 0.3 190:34.75 java
17688 root 20 0 2664m 19m 7048 S 75.5 0.3 1:05.97 java
12765 root 20 0 2859m 21m 7140 S 36.9 0.3 133:25.46 java
2909 mapred 20 0 1894m 113m 14m S 1.0 1.5 2:55.26 java
1850 root 20 0 1469m 62m 4436 S 0.7 0.8 2:54.53 python
1332 root 20 0 50000 3000 2424 S 0.3 0.0 0:12.04 vmtoolsd
2683 hbase 20 0 1927m 152m 18m S 0.3 2.0 0:36.64 java
Clue2:
-bash-4.1$ ps -ef|grep 13766
root 13766 1850 99 16:01 ? 03:12:54 java -classpath /usr/share/cmf/lib/agent-4.6.3.jar com.cloudera.cmon.agent.DnsTest
Clue3:
in cloudera-scm-agent.log,
[30/Aug/2013 16:01:58 +0000] 1850 Monitor-HostMonitor throttling_logger ERROR Timeout with args ['java', '-classpath', '/usr/share/cmf/lib/agent-4.6.3.jar', 'com.cloudera.cmon.agent.DnsTest']
None
[30/Aug/2013 16:01:58 +0000] 1850 Monitor-HostMonitor throttling_logger ERROR Failed to collect java-based DNS names
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/src/cmf/monitor/host/dns_names.py", line 53, in collect
result, stdout, stderr = self._subprocess_with_timeout(args, self._poll_timeout)
File "/usr/lib64/cmf/agent/src/cmf/monitor/host/dns_names.py", line 42, in _subprocess_with_timeout
return SubprocessTimeout().subprocess_with_timeout(args, timeout)
File "/usr/lib64/cmf/agent/src/cmf/monitor/host/subprocess_timeout.py", line 70, in subprocess_with_timeout
raise Exception("timeout with args %s" % args)
Exception: timeout with args ['java', '-classpath', '/usr/share/cmf/lib/agent-4.6.3.jar', 'com.cloudera.cmon.agent.DnsTest']
"cloudera-scm-agent.log" line 30357 of 30357 --100%-- col 1
Background:
If I restart all nodes, then everything is OK, but after half an hour or more, the nodes start going into bad health one by one.
Version: Cloudera Standard 4.6.3 (#192 built by jenkins on 20130812-1221 git: fa61cf8559fbefeb5af7f223fd02164d1a0adfdb)
I added all nodes to /etc/hosts.
The installed CDH is 4.3.1.
In fact, these nodes are VMs with fixed IP addresses.
Any suggestions?
BTW, where can I download the source code of com.cloudera.cmon.agent.DnsTest?
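In case it helps, here is what I plan to check next (my assumption, not verified, is that slow forward/reverse DNS lookups are what make DnsTest time out):

# Run the same command the agent runs (classpath taken from the ps output above) and time it
time java -classpath /usr/share/cmf/lib/agent-4.6.3.jar com.cloudera.cmon.agent.DnsTest

# Check forward and reverse resolution speed for this host
time getent hosts $(hostname -f)
time getent hosts <this-node-ip>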
Percentage of memory used by a process.
Normally prstat -J will give the memory of the process image, RSS (resident set size), etc.
How do I get a list of processes with the percentage of memory used by each process?
I am working on Solaris Unix.
Additionally, what are the regular commands you use for monitoring processes and process performance? They might be very useful to all!
The top command will give you several memory-consumption numbers. htop is much nicer, and will give you percentages, but it isn't installed by default on most systems.
Run top and then press Shift+O; this brings up the sort options. Press n (this may be different on your machine) for memory, then hit Enter. A non-interactive variant is sketched after the example output below.
Example of a memory sort:
top - 08:17:29 up 3 days, 8:54, 6 users, load average: 13.98, 14.01, 11.60
Tasks: 654 total, 2 running, 652 sleeping, 0 stopped, 0 zombie
Cpu(s): 14.7%us, 1.5%sy, 0.0%ni, 59.5%id, 23.5%wa, 0.1%hi, 0.8%si, 0.0%st
Mem: 65851896k total, 49049196k used, 16802700k free, 1074664k buffers
Swap: 50331640k total, 0k used, 50331640k free, 32776940k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21635 oracle 15 0 6750m 636m 51m S 1.6 1.0 62:34.53 oracle
21623 oracle 15 0 6686m 572m 53m S 1.1 0.9 61:16.95 oracle
21633 oracle 16 0 6566m 445m 235m S 3.7 0.7 30:22.60 oracle
21615 oracle 16 0 6550m 428m 220m S 3.7 0.7 29:36.74 oracle
16349 oracle RT 0 431m 284m 41m S 0.5 0.4 2:41.08 ocssd.bin
17891 root RT 0 139m 118m 40m S 0.5 0.2 41:08.19 osysmond
18154 root RT 0 182m 98m 43m S 0.0 0.2 10:02.40 ologgerd
12211 root 15 0 1432m 84m 14m S 0.0 0.1 17:57.80 java
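A non-interactive variant (a sketch assuming a procps-ng top that supports the -o sort flag; older top versions may not have it):

# Batch mode, one iteration, sorted by resident memory percentage
top -b -n 1 -o %MEM | head -20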
Another method on Solaris is to do the following
prstat -s size 1 1
Example prstat output
www004:/# prstat -s size 1 1
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
420 nobody 139M 60M sleep 29 10 1:46:56 0.1% webservd/76
603 nobody 135M 59M sleep 29 10 5:33:18 0.1% webservd/96
339 root 134M 70M sleep 59 0 0:35:38 0.0% java/24
435 iplanet 132M 55M sleep 29 10 1:10:39 0.1% webservd/76
573 nobody 131M 53M sleep 29 10 0:24:32 0.0% webservd/76
588 nobody 130M 53M sleep 29 10 2:40:55 0.1% webservd/86
454 nobody 128M 51M sleep 29 10 0:09:01 0.0% webservd/76
489 iplanet 126M 49M sleep 29 10 0:00:13 0.0% webservd/74
405 root 119M 45M sleep 29 10 0:00:13 0.0% webservd/31
717 root 54M 46M sleep 59 0 2:31:27 0.2% agent/7
Keep in mind this is sorted by SIZE, not RSS; if you need it sorted by RSS, use the rss key:
www004:/# prstat -s rss 1 1
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
339 root 134M 70M sleep 59 0 0:35:39 0.1% java/24
420 nobody 139M 60M sleep 29 10 1:46:57 0.4% webservd/76
603 nobody 135M 59M sleep 29 10 5:33:19 0.5% webservd/96
435 iplanet 132M 55M sleep 29 10 1:10:39 0.0% webservd/76
573 nobody 131M 53M sleep 29 10 0:24:32 0.0% webservd/76
588 nobody 130M 53M sleep 29 10 2:40:55 0.0% webservd/86
454 nobody 128M 51M sleep 29 10 0:09:01 0.0% webservd/76
489 iplanet 126M 49M sleep 29 10 0:00:13 0.0% webservd/74
I'm not sure if ps is standardized, but at least on Linux, ps -o %mem gives the percentage of memory used (you would obviously want to add some other columns as well).
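A minimal sketch on Linux procps (the column and sort names are Linux-specific and may differ on Solaris):

# List processes with their memory percentage, largest first
ps -eo pid,user,%mem,rss,comm --sort=-%mem | head -15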