When trying to deploy the JHipster Console on Kubernetes, the jhipster-elasticsearch-client pod fails to start. The pod is killed with reason OOMKilled and exit code 137.
Increasing the default memory limit from 512Mi to 1Gi did not solve the issue.
The node also has plenty of memory left:
Non-terminated Pods: (9 in total)
  Namespace    Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                            ------------  ----------  ---------------  -------------
  default      gateway-mysql-5c66b69cb6-r84xb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-console-84c54fbd79-k8hjt               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-elasticsearch-client-7cb576d5d7-s48mn  200m (10%)    400m (20%)  512Mi (6%)       1Gi (13%)
  default      jhipster-import-dashboards-s9k2g                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-registry-0                             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-zipkin-6df799f5d8-7fhz9                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  calico-node-hc5p9                               250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-proxy-cgmqj                                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  tiller-deploy-5c688d5f9b-zxnnp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  450m (22%)    400m (20%)  512Mi (6%)       1Gi (13%)
The default installation of Elasticsearch is configured with a 1 GB heap, which already exceeds a 512Mi container limit and leaves no headroom for off-heap memory even at 1Gi, hence the OOMKilled pod. You can lower the Elasticsearch memory requirements by adding an environment variable to your container:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
Related Docs:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
I was trying to use the NEAR Lake Indexer. After completing every step, when I ran
./target/release/near-lake --home ~/.near/mainnet run --endpoint https://my-endpoint --bucket my-bucket --region us-east-1 --stream-while-syncing sync-from-latest
the only output is
INFO stats: # 9820210 Waiting for peers 0 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 0%, Mem: 32.8 MB
Why is it trying to fetch block 9820210 but not the latest one, even though I passed in "sync-from-latest"?
You need to follow the standard nearcore network sync procedures: https://github.com/near/near-lake-indexer#syncing
INFO stats: # 9820210 Waiting for peers 0 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 0%, Mem: 32.8 MB
This means that your node does not have a backup of the network state and is waiting to get synced. Why syncing is not happening is not clear to me; that should be asked on https://github.com/near/nearcore, providing them with a way to reproduce the issue running plain neard instead of the indexer.
Why is it trying to fetch block 9820210 but not the latest one, even though I passed in "sync-from-latest"?
The stats log line is emitted by nearcore and shows the most recently synced block on the node.
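If you want to watch whether the node ever starts syncing, one option (assuming the node's JSON-RPC endpoint is enabled on the default port 3030) is to poll the status endpoint and look at the sync_info section:
curl -s http://127.0.0.1:3030/status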
I am using a 3-node cluster (1 master node and 2 worker nodes) on t2.large EC2 instances.
The "free -m" command gives me the following info:
Master:
             total       used       free     shared    buffers     cached
Mem:          7733       6324       1409          0        221       4555
-/+ buffers/cache:        1547       6186
Swap:         1023          0       1023
Worker Node 1:
             total       used       free     shared    buffers     cached
Mem:          7733       3203       4530          0        185       2166
-/+ buffers/cache:         851       6881
Swap:         1023          0       1023
Worker Node 2:
             total       used       free     shared    buffers     cached
Mem:          7733       3402       4331          0        185       2399
-/+ buffers/cache:         817       6915
Swap:         1023          0       1023
In the yarn-site.xml file, I have the following properties set:
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>7733</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>7733</value>
</property>
In $SPARK_HOME/conf/spark-defaults.conf I am setting spark.executor.cores to 2 and spark.executor.instances to 2.
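For reference, those spark-defaults.conf entries would look roughly like this (the 6g value illustrates the executor memory setting described below and is only an example):
spark.executor.cores      2
spark.executor.instances  2
spark.executor.memory     6g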
When looking at the Spark History Server UI after running my Spark application, both executors (1 and 2) show up in the "Executors" tab along with the driver, and the Cores column on that same page says 2 for both executors.
When I set the executor memory to 5g or lower, my Spark application runs fine with an executor on each worker node. When I set the executor memory to 6g or more, only one worker node runs an executor. Why does this happen? Note: I have tried increasing yarn.nodemanager.resource.memory-mb and it doesn't change this behavior.
I updated my dev Ceph cluster yesterday from Jewel to Luminous. Everything was seemingly okay until I ran "ceph osd require-osd-release luminous". After that, every PG in my cluster shows as unknown. A detailed view of any given PG still shows "active+clean", yet the cluster thinks they are degraded and unclean. Here's what I am seeing:
CRUSH MAP
-1       10.05318 root default
-2        3.71764     host cephfs01
 0        0.09044         osd.0     up 1.00000 1.00000
 1        1.81360         osd.1     up 1.00000 1.00000
 2        1.81360         osd.2     up 1.00000 1.00000
-3        3.62238     host cephfs02
 3   hdd  1.81360         osd.3     up 1.00000 1.00000
 4   hdd  0.90439         osd.4     up 1.00000 1.00000
 5   hdd  0.90439         osd.5     up 1.00000 1.00000
-4        2.71317     host cephfs03
 6   hdd  0.90439         osd.6     up 1.00000 1.00000
 7   hdd  0.90439         osd.7     up 1.00000 1.00000
 8   hdd  0.90439         osd.8     up 1.00000 1.00000
HEALTH
  cluster:
    id:     279e0565-1ab4-46f2-bb27-adcb1461e618
    health: HEALTH_WARN
            Reduced data availability: 1024 pgs inactive
            Degraded data redundancy: 1024 pgs unclean
  services:
    mon: 2 daemons, quorum cephfsmon02,cephfsmon01
    mgr: cephfsmon02(active)
    mds: ceph_library-1/1/1 up {0=cephfsmds01=up:active}
    osd: 9 osds: 9 up, 9 in; 306 remapped pgs
  data:
    pools:   2 pools, 1024 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     100.000% pgs unknown
             1024 unknown
HEALTH_WARN
Reduced data availability: 1024 pgs inactive; Degraded data redundancy: 1024 pgs unclean
PG_AVAILABILITY Reduced data availability: 1024 pgs inactive
pg 1.e6 is stuck inactive for 2239.530584, current state unknown, last acting []
pg 1.e8 is stuck inactive for 2239.530584, current state unknown, last acting []
pg 1.e9 is stuck inactive for 2239.530584, current state unknown, last acting []
It looks like this for every PG in the cluster.
PG DETAIL
"stats": {
    "version": "57'5211",
    "reported_seq": "4527",
    "reported_epoch": "57",
    "state": "active+clean",
I can't run a scrub or repair on the pgs or osds because of this:
ceph osd repair osd.0
failed to instruct osd(s) 0 to repair (not connected)
Any ideas?
The problem was the firewall. I bounced the firewall on each host and the PGs were found immediately. That fits the symptoms: in Luminous, PG states are reported through the new ceph-mgr daemon, so if the mgr cannot reach the OSDs every PG shows up as unknown.
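For reference, a minimal sketch of "bouncing" the firewall and opening the Ceph ports, assuming firewalld-based hosts (service names and port ranges may differ on your distribution):
# on each Ceph host
sudo firewall-cmd --permanent --add-service=ceph-mon   # monitor, port 6789
sudo firewall-cmd --permanent --add-service=ceph       # osd/mgr/mds, ports 6800-7300
sudo firewall-cmd --reload
sudo systemctl restart firewalld                       # the actual "bounce"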
We're running SonarQube 5.1.2 on an AWS node. After a short period of use, typically a day or two, the Sonar web server becomes unresponsive and spikes the server's CPUs:
top - 01:59:47 up 2 days, 3:43, 1 user, load average: 1.89, 1.76, 1.11
Tasks: 93 total, 1 running, 92 sleeping, 0 stopped, 0 zombie
Cpu(s): 94.5%us, 0.0%sy, 0.0%ni, 5.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7514056k total, 2828772k used, 4685284k free, 155372k buffers
Swap: 0k total, 0k used, 0k free, 872440k cached
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
 2328 root  20   0 3260m 1.1g  19m S 188.3 15.5 62:51.79 java
   11 root  20   0     0    0    0 S   0.3  0.0  0:07.90 events/0
 2284 root  20   0 3426m 407m  19m S   0.3  5.5  9:51.04 java
    1 root  20   0 19356 1536 1224 S   0.0  0.0  0:00.23 init
The 188% CPU load is coming from the WebServer process:
$ ps -eF|grep "root *2328"
root 2328 2262 2 834562 1162384 0 Mar01 ? 01:06:24 /usr/java/jre1.8.0_25/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -cp ./lib/common/*:./lib/server/*:/opt/sonar/lib/jdbc/mysql/mysql-connector-java-5.1.34.jar org.sonar.server.app.WebServer /tmp/sq-process615754070383971531properties
We initially thought we were running on far too small a node and recently upgraded to an m3.large instance, but we're seeing the same problem (except now it spikes 2 CPUs instead of one).
The only interesting info in the log is this:
2016.03.04 01:52:38 WARN web[o.e.transport] [sonar-1456875684135] Received response for a request that has timed out, sent [39974ms] ago, timed out [25635ms] ago, action [cluster:monitor/nodes/info], node [[#transport#-1][xxxxxxxx-build02-us-west-2b][inet[/127.0.0.1:9001]]], id [43817]
2016.03.04 01:53:19 INFO web[o.e.client.transport] [sonar-1456875684135] failed to get node info for [#transport#-1][xxxxxxxx-build02-us-west-2b][inet[/127.0.0.1:9001]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[/127.0.0.1:9001]][cluster:monitor/nodes/info] request_id [43817] timed out after [14339ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366) ~[elasticsearch-1.4.4.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_25]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_25]
    at java.lang.Thread.run(Unknown Source) [na:1.8.0_25]
Does anyone know what might be going on here, or have ideas on how to diagnose this problem further?
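One generic way to see where that CPU is going (a sketch, assuming a JDK with jstack is available on the box and that 2328 is still the WebServer PID) is to find the hottest threads and match them against a thread dump:
top -H -p 2328                     # per-thread CPU usage inside the WebServer process
printf '%x\n' <hot-thread-pid>     # convert the busiest thread's PID to hex
jstack -l 2328 > webserver.tdump   # look the hex id up in the nid= field of the dump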
UPDATE
You need to copy hdfs-site.xml into hbase/conf so HBase picks up the correct target replication factor; otherwise it uses the default of 3.
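A minimal sketch of that, assuming HADOOP_HOME and HBASE_HOME point at the Hadoop 1.0.3 and HBase 0.94.1 installs:
cp $HADOOP_HOME/conf/hdfs-site.xml $HBASE_HOME/conf/
# or symlink it so the two copies cannot drift apart:
# ln -s $HADOOP_HOME/conf/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml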
That fixes the message, but my namenode is always in safe mode after every restart.
fsck now reports no errors, no under-replicated blocks, etc.
I see no logs after:
2012-10-17 13:15:13,278 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2012-10-17 13:15:14,228 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2012-10-17 13:15:14,238 INFO org.apache.hadoop.hdfs.StateChange: BLOCK NameSystem.processReport: from 127.0.0.1:50010, blocks: 20, processing time: 0 msecs
Any suggestions?
I have dfs.replication set to 1.
HBase is in distributed mode.
The first write goes through, but after a restart the namenode always reports the blocks as under-replicated.
Output from hadoop fsck /hbase:
/hbase/tb1/.tableinfo.0000000003: Under replicated blk_-6315989673141511716_1029. Target Replicas is 3 but found 1 replica(s).
.
/hbase/tb1/83859abf0f46016485814a5941b16de5/.oldlogs/hlog.1350414672838: Under replicated blk_-7364606700173135939_1027. Target Replicas is 3 but found 1 replica(s).
.
/hbase/tb1/83859abf0f46016485814a5941b16de5/.regioninfo: Under replicated blk_788178851601564156_1027. Target Replicas is 3 but found 1 replica(s).
Status: HEALTHY
Total size: 8731 B
Total dirs: 34
Total files: 25 (Files currently being written: 1)
Total blocks (validated): 25 (avg. block size 349 B) (Total open file blocks (not validated): 1)
Minimally replicated blocks: 25 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 25 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 1
Average block replication: 1.0
Corrupt blocks: 0
Missing replicas: 50 (200.0 %)
Number of data-nodes: 1
Number of racks: 1
FSCK ended at Tue Oct 16 13:23:55 PDT 2012 in 0 milliseconds
Why does it say the target replica count is 3 when the default replication factor is clearly 1?
Any advice would be appreciated.
My versions are Hadoop 1.0.3 and HBase 0.94.1.
Thanks!
To force HDFS to exit safe mode, run:
hadoop dfsadmin -safemode leave
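If you only want to check whether the namenode is still in safe mode (rather than forcing it out), the same admin tool reports the current state:
hadoop dfsadmin -safemode get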