JVM fails with error='Cannot allocate memory' (errno=12) [duplicate] - memory-management

My code crashes with this error message:
Executing "/usr/bin/java com.utils.BotFilter"
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x0000000357c80000, 2712666112, 0) failed;
error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 2712666112 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-29955/hs_error.log
Here is the content of the generated hs_error.log file:
https://pastebin.com/yqF2Yy4P
This line from the crash log seems interesting to me:
Memory: 4k page, physical 98823196k(691424k free), swap 1048572k(0k free)
Does it mean that the machine has memory but is running out of swap space?
Here is the meminfo section from the crash log, but I don't really know how to interpret it. For example, what is the difference between MemFree and MemAvailable? How much memory is this process taking?
/proc/meminfo:
MemTotal: 98823196 kB
MemFree: 691424 kB
MemAvailable: 2204348 kB
Buffers: 145568 kB
Cached: 2799624 kB
SwapCached: 304368 kB
Active: 81524540 kB
Inactive: 14120408 kB
Active(anon): 80936988 kB
Inactive(anon): 13139448 kB
Active(file): 587552 kB
Inactive(file): 980960 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 0 kB
Dirty: 1332 kB
Writeback: 0 kB
AnonPages: 92395828 kB
Mapped: 120980 kB
Shmem: 1376052 kB
Slab: 594476 kB
SReclaimable: 282296 kB
SUnreclaim: 312180 kB
KernelStack: 317648 kB
PageTables: 238412 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 50460168 kB
Committed_AS: 114163748 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 314408 kB
VmallocChunk: 34308158464 kB
HardwareCorrupted: 0 kB
AnonHugePages: 50071552 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 116924 kB
DirectMap2M: 5115904 kB
DirectMap1G: 95420416 kB

Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
If you have many context WARs deployed on your Tomcat instance, try reducing their number. (For the heap, stack, and code-cache flags above, see the example command just after this list.)
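A minimal sketch of what a trimmed-down launch command could look like, assuming the same main class as in the question; the flag values are illustrative assumptions, not recommendations for your workload:
java -Xms512m -Xmx1g -Xss512k -XX:ReservedCodeCacheSize=128m com.utils.BotFilter
Lowering -Xms/-Xmx usually has the largest effect, since the initial heap (-Xms) is committed as soon as the JVM starts.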

As Scary Wombat mentions, the JVM is trying to allocate 2712666112 bytes (about 2.7 GB) of memory, while you only have 691424 kB (about 0.7 GB) of free physical memory and nothing free in swap.
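To answer the "how much memory is this process taking" part: while the process is still alive you can look at its resident and swapped size directly. The PID below is only taken from the /tmp/jvm-29955 directory name and is hypothetical:
# resident, peak and swapped size of a running process (PID is hypothetical)
grep -E 'VmPeak|VmRSS|VmSwap' /proc/29955/status
ps -o pid,rss,vsz,cmd -p 29955
As for MemFree vs MemAvailable: MemFree is memory that is completely unused, while MemAvailable is the kernel's estimate of how much could be handed to new allocations without swapping (it includes reclaimable page cache), which makes it the more useful number here.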

Another possibility (which I encountered just now) would be bad settings for "overcommit memory" on linux.
In my situation, /proc/sys/vm/overcommit_memory was set to "2" and /proc/sys/vm/overcommit_ratio to "50", meaning "never overcommit, and only allow commits up to swap plus 50% of RAM".
That's a pretty deceptive problem, since there can be a lot of memory available, but allocations still fail for apparently no reason.
The settings can be changed to the default (overcommit in a sensible way) for now (until a restart):
echo 0 >/proc/sys/vm/overcommit_memory
... or permanently:
echo "vm.overcommit_memory=0 >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf # apply it immediately
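Before changing anything, it is worth confirming what the box is currently set to; either of these works:
cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
sysctl vm.overcommit_memory vm.overcommit_ratio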
Note: this can also partly be diagnosed by looking at the output of /proc/meminfo:
...
CommitLimit: 45329388 kB
Committed_AS: 44818080 kB
...
In the example in the question, Committed_AS is far above CommitLimit, which already tells you that the kernel is allowing overcommit there (overcommit_memory is 0 or 1), so those allocations are failing because memory really is exhausted. Here, by contrast, the two values are close together, meaning the limit is strictly enforced (overcommit_memory=2) and allocations start failing as soon as they would push Committed_AS over CommitLimit.
An excellent detailed explanation of these settings and their effect (as well as when it makes sense to modify them) can be found in this Pivotal blog entry. (TL;DR: messing with overcommit is useful if you don't want critical processes to use swap.)
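Under vm.overcommit_memory=2 the kernel enforces roughly CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100 (hugepage reservations aside), so you can sanity-check the CommitLimit you see in /proc/meminfo yourself. A small sketch:
# rough recomputation of CommitLimit from the ratio (ignores hugepage reservations)
ratio=$(cat /proc/sys/vm/overcommit_ratio)
awk -v r="$ratio" '/^MemTotal/ {m=$2} /^SwapTotal/ {s=$2} END {printf "expected CommitLimit ~= %d kB\n", s + m * r / 100}' /proc/meminfo
If the computed value matches CommitLimit and Committed_AS is pressed up against it, strict overcommit is what is refusing your allocations.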

Related

sonarqube error java insufficient memory

I am trying to set up SonarQube on an EC2 instance (Amazon Linux AMI, t2.micro) using the following versions: SonarQube 6.0, Java java-1.8.0-openjdk, MySQL Ver 14.14 Distrib 5.6.39 for Linux (x86_64) using EditLine wrapper.
After running the sonar start command:
sudo ./sonar.sh start
SonarQube does not start. Checking the logs shows the message below:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.05.16 19:30:50 INFO app[o.s.a.AppFileSystem] Cleaning or
creating temp directory /opt/sonarqube/temp
2018.05.16 19:30:50 INFO app[o.s.p.m.JavaProcessLauncher] Launch
process[es]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-
7.b10.37.amzn1.x86_64/jre/bin/java -Djava.awt.headless=true -Xmx1G -
Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -
XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -
XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -
Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-1.8.0-
openjdk-1.8.0.171-7.b10.37.amzn1.x86_64/jre/lib/management-agent.jar -
cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer
/opt/sonarqube/temp/sq-process620905092992598791properties
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x00000000c5330000, 181207040, 0) failed;
error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 181207040 bytes for
committing reserved memory.
# An error report file with more information is saved as:
# /opt/sonarqube/hs_err_pid30955.log
<-- Wrapper Stopped
Here is the memory info:
/proc/meminfo:
MemTotal: 1011176 kB
MemFree: 78024 kB
MemAvailable: 55140 kB
Buffers: 8064 kB
Cached: 72360 kB
SwapCached: 0 kB
Active: 860160 kB
Inactive: 25868 kB
Active(anon): 805628 kB
Inactive(anon): 48 kB
Active(file): 54532 kB
Inactive(file): 25820 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 108 kB
Writeback: 0 kB
AnonPages: 805628 kB
Mapped: 30700 kB
Shmem: 56 kB
Slab: 28412 kB
SReclaimable: 16632 kB
SUnreclaim: 11780 kB
KernelStack: 3328 kB
PageTables: 6108 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505588 kB
Committed_AS: 1348288 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 47104 kB
DirectMap2M: 1001472 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core)
family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3,
ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, clmul, erms, lzcnt, tsc,
bmi1, bmi2
Have you tried increasing the maximum allowed heap size for your SonarQube application?
You can do so by editing the sonar.properties file, found in your SQ installation folder.
You can follow this guide in order to configure your SQ max heap size.
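A sketch of where those heap settings live, assuming the standard SonarQube 6.x property names (check your own sonar.properties, since the exact keys can differ between versions); the values are placeholders to adjust to what your instance can actually hold:
# conf/sonar.properties (values are placeholders)
sonar.web.javaOpts=-Xmx512m -Xms128m
sonar.search.javaOpts=-Xmx512m -Xms256m -Xss256k
sonar.ce.javaOpts=-Xmx512m -Xms128m
Note that the process failing in your log is the SearchServer (Elasticsearch), which is the one governed by sonar.search.javaOpts; restart SonarQube after editing the file.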

Insufficient memory for the JRE in Spark on CentOS7 within VMWare

I am new to Spark and am trying to run it on my Hadoop node (CentOS 7), which runs in a VMware VM with 2 GB RAM, a 20 GB disk, and 1 CPU.
I am receiving this error message:
[root@xie1 spark]# bin/spark-shell
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 716177408, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 716177408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/spark/hs_err_pid79417.log
After some googling, I checked the memory as shown below:
[root@xie1 spark]# cat /proc/meminfo
MemTotal: 1868688 kB
MemFree: 76428 kB
MemAvailable: 80840 kB
Buffers: 68 kB
Cached: 92172 kB
SwapCached: 189260 kB
Active: 1158888 kB
Inactive: 426036 kB
Active(anon): 1108308 kB
Inactive(anon): 389960 kB
Active(file): 50580 kB
Inactive(file): 36076 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 2097148 kB
SwapFree: 282684 kB
Dirty: 76 kB
Writeback: 0 kB
AnonPages: 1303360 kB
Mapped: 42176 kB
Shmem: 5596 kB
Slab: 95580 kB
SReclaimable: 34792 kB
SUnreclaim: 60788 kB
KernelStack: 19616 kB
PageTables: 32960 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3031492 kB
Committed_AS: 6290644 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 173440 kB
VmallocChunk: 34359561216 kB
HardwareCorrupted: 0 kB
AnonHugePages: 319488 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 102272 kB
DirectMap2M: 1994752 kB
DirectMap1G: 0 kB
What is the meaning of "VmallocTotal: 34359738367 kB"? It exceeds the disk size I allocated to the VM, which is 20 GB. Is VmallocTotal something that can be adjusted? I am guessing Spark is asking for more resources than my current VM allocation provides; what can I do, and how do I do it?
Thank you very much.

jsf pages render <h:dataTable> slowly with larger number of html components

My TomEE JSF project renders as expected when running locally but renders extremely slowly when deployed remotely.
My remotely deployed JSF pages render slowly when a relatively large number of components (relative to just displaying the template page) result from a call to the data sources. Rendering times run into minutes when there are more than just a few rows in the table.
Any ideas of where to look to isolate this problem?
Edit 2: I have deployed to multiple remote server environments. Rendering time of query results in <h:dataTable> seems to depend on available bandwidth. Traveling to the physical location of the remote server, logging onto it directly, and accessing the application via localhost in any browser yields nearly instantaneous rendering of the query results in the <h:dataTable> tag. Can it be that JSF is generating such a huge HTML page that bandwidth becomes the constraint?
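One quick way to test the bandwidth theory is to measure how large the rendered page actually is and how long it takes to transfer, for example with curl (the URL is a placeholder for one of the slow pages):
curl -s -o /dev/null -w 'size: %{size_download} bytes  time: %{time_total}s\n' http://remote-host:8080/myapp/page.xhtml
If the response is several megabytes of generated markup, a low-bandwidth link would explain minute-long render times even though the server produces the response quickly.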
Edit 1: The first thing I thought was that the remote server was swapping memory from RAM to disk. It may be, but that led me to check the memory allocated to the JVM by TomEE in catalina.sh. See the catalina.sh CATALINA_OPTS below.
I have found Stack Overflow posts about logging being turned on/off for JSF, but no reference to where this setting lives. My logging.properties files seem to be identical remotely and locally, both TomEE's and the JVM's.
I also found posts about Mojarra's version of the API having exponential time complexity, but this project uses Apache MyFaces.
The problem seems to be some type of TomEE or JVM configuration issue. When I run the application on the locally installed version of TomEE, the same pages render in milliseconds.
I am using Apache MyFaces 2.1.13. From pom.xml
<dependency>
<groupId>org.apache.myfaces.core</groupId>
<artifactId>myfaces-api</artifactId>
<version>2.1.13</version>
</dependency>
<dependency>
<groupId>org.apache.myfaces.core</groupId>
<artifactId>myfaces-impl</artifactId>
<version>2.1.13</version>
</dependency>
I'll gladly post the entire pom.xml but logic tells me that if the project renders pages at a different rate depending on the TomEE container that it is deployed in, then the issue is elsewhere.
Remote TomEE Configuration:
Tomcat Version Apache Tomcat (TomEE)/7.0.53 (1.6.0.2)
JVM Version 1.7.0_55-mockbuild_2014_04_16_07_52-b00
JVM Vendor Oracle Corporation
OS Name Linux
OS Version 2.6.18-371.8.1.el5
OS Architecture amd64
Local Tomee Configuration:
Tomcat Version Apache Tomcat (TomEE)/7.0.53 (1.6.0.2)
JVM Version 1.7.0_67-b01
JVM Vendor Oracle Corporation
OS Name Mac OS X
OS Version 10.9.5
OS Architecture x86_64
BalusC mentioned memory, which is a good thought. From catalina.sh:
grep "CATALINA_OPTS=" -n /usr/share/apache-tomee-webprofile-1.6.0.2/bin/catalina.sh
272: CATALINA_OPTS="$CATALINA_OPTS $JPDA_OPTS -Xms1024m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=356m -XX:PermSize=256m -XX:MaxPermSize=356m"
And from system info:
# cat /proc/meminfo
MemTotal: 6969972 kB
MemFree: 128928 kB
Buffers: 349588 kB
Cached: 4895748 kB
SwapCached: 92 kB
Active: 1633928 kB
Inactive: 4416324 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 6969972 kB
LowFree: 128928 kB
SwapTotal: 10027000 kB
SwapFree: 10025000 kB
Dirty: 144 kB
Writeback: 0 kB
AnonPages: 804828 kB
Mapped: 52836 kB
Slab: 752492 kB
PageTables: 8928 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 13511984 kB
Committed_AS: 1360596 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 267944 kB
VmallocChunk: 34359470039 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB

Tuning Ruby / Rails to work on systems with less memory

I'm trying to run an RoR app on an Amazon Micro instance (the one that comes in the free tier). However, I'm unable to successfully complete rake assets:precompile because it apparently runs out of RAM and the system kills the process.
First, how can I be sure that this is a low memory issue?
Second, irrespective of the answer to the first question, are there parameters I can pass to the Ruby interpreter to make it consume less RAM, even at the cost of overall app performance? Any GC tuning possible? Anything at all?
Note: Similar to Making ruby on rails take up less memory
PS: I've added a file-based swap area to the system as well. Here's the output of cat /proc/meminfo if that helps:
MemTotal: 604072 kB
MemFree: 343624 kB
Buffers: 4476 kB
Cached: 31568 kB
SwapCached: 33052 kB
Active: 17540 kB
Inactive: 199588 kB
Active(anon): 11408 kB
Inactive(anon): 172644 kB
Active(file): 6132 kB
Inactive(file): 26944 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 292840 kB
SwapFree: 165652 kB
Dirty: 80 kB
Writeback: 0 kB
AnonPages: 149640 kB
Mapped: 6620 kB
Shmem: 2964 kB
Slab: 23744 kB
SReclaimable: 14044 kB
SUnreclaim: 9700 kB
KernelStack: 2056 kB
PageTables: 6776 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 594876 kB
Committed_AS: 883644 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 5200 kB
VmallocChunk: 34359732767 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 637952 kB
DirectMap2M: 0 kB
Put config.assets.initialize_on_precompile = false in application.rb to avoid initializing the app and the database connection when you precompile assets. That may help.
Another option is to precompile locally and then deploy the compiled assets. More info here: http://guides.rubyonrails.org/asset_pipeline.html#precompiling-assets
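A rough sketch of that workflow (the host and paths are placeholders): run the precompile on your development machine, where memory is not a constraint, and only copy the results to the Micro instance.
# on the development machine
RAILS_ENV=production bundle exec rake assets:precompile
# ship the generated files to the instance (target path is hypothetical)
rsync -az public/assets/ user@micro-instance:/var/www/myapp/public/assets/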
Second question first - I have run Rails apps on Micro instances, and do so now. So long as your concurrency is very low (one or two active users, tops, and not super-active either) you will be OK. Also note that Amazon will arbitrarily throttle down your effective CPU whenever it wants if you try to slam the CPU too hard (that's just how they do Micro instances). No GC tweaks or anything like that are necessary; the default settings are fine. I was using Passenger, an older version, and made sure it was spinning up only one process spawner. Stock config. Especially if big chunks of your app are images or static files, your main web server will be serving most of that content, not Rails.
As for your first question - I just checked out a large(ish) Rails app, fat_free_crm, on a freshly spun-up Micro instance. I was just looking for something big.
I timed a run of assets:precompile and it did complete, after a very long time: about 2 minutes 31 seconds.
I think you might still need more swap space. I would try a gig to start with. If you still can't precompile your assets after that, you've got some other problem.
# create a 1 GB swap file (1k block size x 1M blocks), format it, and enable it
dd if=/dev/zero of=/swapfile bs=1k count=1M
mkswap /swapfile
swapon -f /swapfile

RAM split between lowmem and highmem

I have compared /proc/meminfo on a Galaxy S2 (ARM Exynos 4) device running Android Gingerbread and Ice Cream Sandwich (CyanogenMod CM9). I noticed that the kernel splits the memory differently between low memory and high memory:
For ICS/CM9 (3.0 kernel):
cat /proc/meminfo:
MemTotal: 843624 kB
MemFree: 68720 kB
Buffers: 1532 kB
Cached: 115720 kB
SwapCached: 0 kB
Active: 487780 kB
Inactive: 64524 kB
Active(anon): 436316 kB
Inactive(anon): 1764 kB
Active(file): 51464 kB
Inactive(file): 62760 kB
Unevictable: 748 kB
Mlocked: 0 kB
**HighTotal: 278528 kB**
HighFree: 23780 kB
**LowTotal: 565096 kB**
LowFree: 44940 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 435848 kB
Mapped: 45364 kB
Shmem: 2276 kB
Slab: 37996 kB
SReclaimable: 10028 kB
SUnreclaim: 27968 kB
KernelStack: 10064 kB
PageTables: 16688 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 421812 kB
Committed_AS: 8549052 kB
VmallocTotal: 188416 kB
VmallocUsed: 104480 kB
VmallocChunk: 26500 kB
For GB (2.6 kernel):
cat /proc/meminfo:
MemTotal: 856360 kB
MemFree: 22264 kB
Buffers: 57000 kB
Cached: 337320 kB
SwapCached: 0 kB
Active: 339064 kB
Inactive: 379148 kB
Active(anon): 212928 kB
Inactive(anon): 112964 kB
Active(file): 126136 kB
Inactive(file): 266184 kB
Unevictable: 396 kB
Mlocked: 0 kB
**HighTotal: 462848 kB**
HighFree: 1392 kB
**LowTotal: 393512 kB**
LowFree: 20872 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 324312 kB
Mapped: 97092 kB
Shmem: 1580 kB
Slab: 29160 kB
SReclaimable: 13924 kB
SUnreclaim: 15236 kB
KernelStack: 8352 kB
PageTables: 23828 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 428180 kB
Committed_AS: 4001404 kB
VmallocTotal: 196608 kB
VmallocUsed: 104804 kB
VmallocChunk: 57092 kB
I have noticed that on the 3.0 kernel memory pressure is evident and the out of memory killer is invoked frequently.
I have two questions regarding this:
Is it possible that in the 3.0 layout (less highmem, more lowmem) applications have less available memory? Could that explain the high memory pressure?
Is it possible to change the layout in the 3.0 kernel in order to make it more similar to the 2.6 layout (i.e. more highmem less lowmem)?
As far as I recall, the split between high and low memory is a compile-time parameter of the kernel, so it should be possible to set it differently (at compile time). I do not know why so much is given to the high memory region in your examples. On x86 with 1 GB of physical RAM it is about 896 MB for low memory and 128 MB for high memory.
It would seem that Android needs more high memory than a typical 32-bit x86 desktop. I do not know which feature(s) of the Android ecosystem bring such requirements, so hopefully somebody else can tell you that.
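If you want to see how a particular kernel build was configured, the user/kernel split is a build-time option (CONFIG_VMSPLIT_* and CONFIG_PAGE_OFFSET on ARM); whether it is visible at runtime depends on the build:
# only works if the kernel was built with CONFIG_IKCONFIG_PROC
zcat /proc/config.gz | grep -E 'VMSPLIT|PAGE_OFFSET'
Roughly speaking, a 3G/1G split (CONFIG_VMSPLIT_3G) caps how much RAM can be direct-mapped as lowmem, so the remainder becomes highmem; a 2G/2G split raises that cap.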
You could investigate the memory zones to see what differs between Android ICS and GB. Simply do a cat /proc/zoneinfo. You can find some background information on these zones in this article, although note that it was written for the x86 architecture.
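For example, a quick way to compare the zone layout on the two builds (field names vary a bit with kernel version):
grep -E '^Node|spanned|present' /proc/zoneinfo
The Normal zone corresponds to lowmem and the HighMem zone to highmem, so the present page counts should roughly line up with the LowTotal/HighTotal figures above.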
