I have an interesting scenario.
The application has a higher Virtual Bytes value than I would expect. On the other hand, Private Bytes is at a reasonable value.
This is a Java-based application which also loads, via JNI, a .NET component into the same process. It is not the Java heap that accounts for the Virtual Bytes, as I limit the heap via the -Xmx parameter.
Is there a way I could analyze the consumption of Virtual Bytes using WinDbg?
For instance, if the code opens a shared memory with another process - can I see it? Can I sum all those shared memory segments?
This is a production environment, so I am somewhat limited.
Thanks
Saar
In a user-mode debugging session you can use the !address command, e.g. !address -f:FileMap or !address -summary:
0:018> !address -summary
Failed to map Heaps (error 80004005)
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Free 211 7ff`f38e3000 ( 8.000 Tb) 100.00%
Image 577 0`05cec000 ( 92.922 Mb) 46.68% 0.00%
MemoryMappedFile 60 0`0375a000 ( 55.352 Mb) 27.81% 0.00%
<unclassified> 115 0`0289e000 ( 40.617 Mb) 20.41% 0.00%
Stack 60 0`00a00000 ( 10.000 Mb) 5.02% 0.00%
TEB 20 0`00028000 ( 160.000 kb) 0.08% 0.00%
PEB 1 0`00001000 ( 4.000 kb) 0.00% 0.00%
--- Type Summary (for busy) ------ RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_IMAGE 578 0`05ced000 ( 92.926 Mb) 46.68% 0.00%
MEM_MAPPED 60 0`0375a000 ( 55.352 Mb) 27.81% 0.00%
MEM_PRIVATE 195 0`032c6000 ( 50.773 Mb) 25.51% 0.00%
--- State Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_FREE 211 7ff`f38e3000 ( 8.000 Tb) 100.00%
MEM_COMMIT 782 0`08ae4000 ( 138.891 Mb) 69.78% 0.00%
MEM_RESERVE 51 0`03c29000 ( 60.160 Mb) 30.22% 0.00%
--- Protect Summary (for commit) - RgnCount ----------- Total Size -------- %ofBusy %ofTotal
PAGE_READONLY 336 0`050ca000 ( 80.789 Mb) 40.59% 0.00%
PAGE_EXECUTE_READ 104 0`02785000 ( 39.520 Mb) 19.85% 0.00%
PAGE_READWRITE 262 0`010db000 ( 16.855 Mb) 8.47% 0.00%
PAGE_WRITECOPY 59 0`0017d000 ( 1.488 Mb) 0.75% 0.00%
PAGE_READWRITE|PAGE_GUARD 20 0`0003c000 ( 240.000 kb) 0.12% 0.00%
PAGE_EXECUTE_READWRITE 1 0`00001000 ( 4.000 kb) 0.00% 0.00%
--- Largest Region by Usage ----------- Base Address -------- Region Size ----------
Free 0`ff8b5000 7fd`ed39b000 ( 7.992 Tb)
Image 7fe`fe39a000 0`0089e000 ( 8.617 Mb)
MemoryMappedFile 0`007b1000 0`012df000 ( 18.871 Mb)
<unclassified> 0`7f0e0000 0`00f00000 ( 15.000 Mb)
Stack 0`06740000 0`00079000 ( 484.000 kb)
TEB 7ff`fff94000 0`00002000 ( 8.000 kb)
PEB 7ff`fffd9000 0`00001000 ( 4.000 kb)
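To answer the shared-memory question: you can dump just the mapped-file regions with !address -f:FileMap, capture the output to a file (e.g. with .logopen), and sum the region sizes offline. A minimal sketch of that post-processing, assuming the data rows begin with a backtick-formatted base address and that the region size is the third column (the exact column layout can vary between WinDbg versions, so treat this as a starting point):

```python
import re

HEX_RE = re.compile(r"^[0-9a-f]+`[0-9a-f]+$")  # WinDbg 64-bit hex, e.g. 0`0375a000

def windbg_hex(s: str) -> int:
    """Convert a backtick-separated WinDbg hex value to an int."""
    return int(s.replace("`", ""), 16)

def sum_region_sizes(log_text: str) -> int:
    """Sum the region-size column (assumed to be the third) of
    !address -f:FileMap output rows; header/summary lines are skipped."""
    total = 0
    for line in log_text.splitlines():
        cols = line.split()
        if len(cols) >= 3 and HEX_RE.match(cols[0]) and HEX_RE.match(cols[2]):
            total += windbg_hex(cols[2])
    return total

# hypothetical rows shaped like !address output, not taken from a real session
sample = """\
0`007b1000 0`01a90000 0`012df000 MEM_MAPPED MEM_COMMIT PAGE_READONLY MappedFile
0`70000000 0`70010000 0`00010000 MEM_MAPPED MEM_COMMIT PAGE_READWRITE MappedFile
"""
print(sum_region_sizes(sample) / (1024 * 1024))  # total mapped size in MB
```

The same approach works for any usage class !address can filter on, so you can compare, say, FileMap totals against the MemoryMappedFile line of !address -summary.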
Virtual Bytes represents the process's use of the virtual address space, and doesn't necessarily represent memory usage, not even virtual memory usage. If the process is 32-bit, don't worry about this statistic unless it reaches the best part of a gigabyte or more, and if the process is 64-bit, don't worry about it, full stop.
Mark Russinovich’s blog entry Pushing the Limits of Windows: Virtual Memory provides more detail on this.
The statistic you probably want to look at instead is Page File Bytes. Private Bytes and Working Set may also be of interest. These are described under Process Object in the Technet documentation on the Windows Server 2003 Performance Counters Reference.
My code crashes with this error message
Executing "/usr/bin/java com.utils.BotFilter"
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x0000000357c80000, 2712666112, 0) failed;
error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 2712666112 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-29955/hs_error.log
Here is the content of the generated hs_error.log file:
https://pastebin.com/yqF2Yy4P
This line from crash log seems interesting to me:
Memory: 4k page, physical 98823196k(691424k free), swap 1048572k(0k free)
Does it mean that the machine has memory but is running out of swap space?
Here is the meminfo section from the crash log, but I don't really know how to interpret it. For instance, what is the difference between MemFree and MemAvailable? How much memory is this process taking?
/proc/meminfo:
MemTotal: 98823196 kB
MemFree: 691424 kB
MemAvailable: 2204348 kB
Buffers: 145568 kB
Cached: 2799624 kB
SwapCached: 304368 kB
Active: 81524540 kB
Inactive: 14120408 kB
Active(anon): 80936988 kB
Inactive(anon): 13139448 kB
Active(file): 587552 kB
Inactive(file): 980960 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 0 kB
Dirty: 1332 kB
Writeback: 0 kB
AnonPages: 92395828 kB
Mapped: 120980 kB
Shmem: 1376052 kB
Slab: 594476 kB
SReclaimable: 282296 kB
SUnreclaim: 312180 kB
KernelStack: 317648 kB
PageTables: 238412 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 50460168 kB
Committed_AS: 114163748 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 314408 kB
VmallocChunk: 34308158464 kB
HardwareCorrupted: 0 kB
AnonHugePages: 50071552 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 116924 kB
DirectMap2M: 5115904 kB
DirectMap1G: 95420416 kB
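On the MemFree vs. MemAvailable question: MemFree counts pages that are completely unused, while MemAvailable is the kernel's estimate of how much memory a new workload could allocate without heavy swapping (roughly free pages plus reclaimable page cache). The crash is consistent with these numbers, as a back-of-envelope check shows (the field names are standard /proc/meminfo fields; the arithmetic is only an estimate, since the kernel's admission logic is more involved):

```python
def parse_meminfo(text: str) -> dict:
    """Parse /proc/meminfo-style text into a {field: kB} dict."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts and parts[0].isdigit():
            fields[key.strip()] = int(parts[0])
    return fields

# the figures from the crash log above
info = parse_meminfo("MemAvailable: 2204348 kB\nSwapFree: 0 kB")
needed_kb = 2712666112 // 1024                     # the failed 2.7 GB commit
headroom_kb = info["MemAvailable"] + info["SwapFree"]
print(needed_kb, headroom_kb, needed_kb > headroom_kb)
```

The requested ~2.6 million kB exceeds the ~2.2 million kB the kernel thought it could make available, with swap already exhausted, which matches errno=12 (ENOMEM).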
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
If you have many context WARs deployed on your Tomcat, try reducing their number
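The heap-, thread-, and stack-related suggestions above can be sanity-checked with a back-of-envelope estimate of the JVM's native footprint. A minimal sketch; all the default sizes below are illustrative assumptions, not values taken from the log:

```python
def jvm_footprint_mb(heap_mb: int, threads: int, stack_kb: int = 1024,
                     code_cache_mb: int = 240, metaspace_mb: int = 256) -> float:
    """Very rough upper bound: heap + per-thread stacks + code cache + metaspace.
    Real JVMs add GC bookkeeping, direct buffers, and other native allocations
    on top, so treat the result as a floor, not a ceiling."""
    return heap_mb + threads * stack_kb / 1024 + code_cache_mb + metaspace_mb

# e.g. a 2 GB heap with 200 threads at a 1 MB default stack size
print(jvm_footprint_mb(2048, 200))  # roughly 2744 MB before other native overhead
```

Plugging in your own -Xmx, thread count, and -Xss shows which of the knobs above buys the most headroom.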
As Scary Wombat mentions, the JVM is trying to allocate 2712666112 bytes (about 2.7 GB) of memory, and you only have 691424 kB (about 0.69 GB) of free physical memory and nothing available in swap.
Another possibility (which I encountered just now) would be bad settings for "overcommit memory" on linux.
In my situation, /proc/sys/vm/overcommit_memory was set to "2" and /proc/sys/vm/overcommit_ratio to "50", meaning "never overcommit, and cap total commit at swap plus 50% of RAM".
That's a pretty deceptive problem, since there can be a lot of memory available, but allocations still fail for apparently no reason.
The settings can be changed to the default (overcommit in a sensible way) for now (until a restart):
echo 0 >/proc/sys/vm/overcommit_memory
... or permanently:
echo "vm.overcommit_memory=0" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf # apply it immediately
Note: this can also partly be diagnosed by looking at the output of /proc/meminfo:
...
CommitLimit: 45329388 kB
Committed_AS: 44818080 kB
...
In the example in the question, Committed_AS is much higher than CommitLimit, indicating (together with the fact that allocations fail) that overcommit is enabled, while here both values are close together, meaning that the limit is strictly enforced.
An excellent detailed explanation of these settings and their effect (as well as when it makes sense to modify them) can be found in this Pivotal blog entry. (tl;dr: messing with overcommit is useful if you don't want critical processes to use swap.)
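For reference, the kernel computes CommitLimit as roughly swap plus overcommit_ratio percent of RAM (the real formula also subtracts hugepage reservations, ignored in this sketch). Using the numbers from the question's meminfo, a ratio of 50 reproduces its reported CommitLimit almost exactly:

```python
def commit_limit_kb(mem_total_kb: int, swap_total_kb: int, ratio: int) -> int:
    """Approximate kernel CommitLimit: SwapTotal + MemTotal * overcommit_ratio / 100
    (ignores the hugepage adjustment present in the real kernel formula)."""
    return swap_total_kb + mem_total_kb * ratio // 100

# the question's machine: MemTotal 98823196 kB, SwapTotal 1048572 kB
print(commit_limit_kb(98823196, 1048572, 50))  # ~50460170 kB; meminfo reports 50460168 kB
```

Note that CommitLimit is reported in meminfo regardless of the overcommit mode; it is only enforced when vm.overcommit_memory is 2.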
I am trying to set up SonarQube on an EC2 t2.micro instance running the Amazon Linux AMI, using sonarqube 6.0, java-1.8.0-openjdk, and mysql Ver 14.14 Distrib 5.6.39 for Linux (x86_64) using the EditLine wrapper.
After the sonar start command:
sudo ./sonar.sh start
Sonar does not start. Checking the logs gives the message below:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.05.16 19:30:50 INFO app[o.s.a.AppFileSystem] Cleaning or
creating temp directory /opt/sonarqube/temp
2018.05.16 19:30:50 INFO app[o.s.p.m.JavaProcessLauncher] Launch
process[es]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-
7.b10.37.amzn1.x86_64/jre/bin/java -Djava.awt.headless=true -Xmx1G -
Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -
XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -
XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -
Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-1.8.0-
openjdk-1.8.0.171-7.b10.37.amzn1.x86_64/jre/lib/management-agent.jar -
cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer
/opt/sonarqube/temp/sq-process620905092992598791properties
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x00000000c5330000, 181207040, 0) failed;
error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 181207040 bytes for
committing reserved memory.
# An error report file with more information is saved as:
# /opt/sonarqube/hs_err_pid30955.log
<-- Wrapper Stopped
Below is the memory info:
/proc/meminfo:
MemTotal: 1011176 kB
MemFree: 78024 kB
MemAvailable: 55140 kB
Buffers: 8064 kB
Cached: 72360 kB
SwapCached: 0 kB
Active: 860160 kB
Inactive: 25868 kB
Active(anon): 805628 kB
Inactive(anon): 48 kB
Active(file): 54532 kB
Inactive(file): 25820 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 108 kB
Writeback: 0 kB
AnonPages: 805628 kB
Mapped: 30700 kB
Shmem: 56 kB
Slab: 28412 kB
SReclaimable: 16632 kB
SUnreclaim: 11780 kB
KernelStack: 3328 kB
PageTables: 6108 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505588 kB
Committed_AS: 1348288 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 47104 kB
DirectMap2M: 1001472 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core)
family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3,
ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, clmul, erms, lzcnt, tsc,
bmi1, bmi2
Have you tried increasing the maximum allowed heap size for your SonarQube application?
You can do so by editing the sonar.properties file, found in your SQ installation folder.
You can follow this guide to configure your SQ max heap size.
Our microservice stack has now crept up to 15 small services for business logic like auth, messaging, billing, etc. It's now getting to the point where a docker-compose up uses more RAM than our devs have on their laptops.
It's not a crazy amount, about 4GB, but I regularly feel the pinch on my 8GB machine (thanks, Chrome).
There are app-level optimisations that we can do, and are doing, sure, but eventually we are going to need an alternative strategy.
I see two obvious options:
Use a big cloudy dev machine, perhaps provisioned with docker-machine and AWS.
Spin up some shared services in a dev cloud, like Postgres and Redis.
These aren't very satisfactory: in (1), local files aren't synced, making local dev a nightmare, and in (2), we can break each other's environments.
Help!
Appendix I: output from docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
0ea1779dbb66 32.53% 137.9 MB / 8.186 GB 1.68% 46 kB / 29.4 kB 42 MB / 0 B
12e93d81027c 0.70% 376.1 MB / 8.186 GB 4.59% 297.7 kB / 243 kB 0 B / 1.921 MB
25f7be321716 34.40% 131.1 MB / 8.186 GB 1.60% 38.42 kB / 23.91 kB 39.64 MB / 0 B
26220cab1ded 0.00% 7.274 MB / 8.186 GB 0.09% 19.82 kB / 648 B 6.645 MB / 0 B
2db7ba96dc16 1.22% 51.29 MB / 8.186 GB 0.63% 10.41 kB / 578 B 28.79 MB / 0 B
3296e274be54 0.00% 4.854 MB / 8.186 GB 0.06% 20.07 kB / 1.862 kB 4.069 MB / 0 B
35911ee375fa 0.27% 12.87 MB / 8.186 GB 0.16% 29.16 kB / 6.861 kB 7.137 MB / 0 B
49eccc517040 37.31% 65.76 MB / 8.186 GB 0.80% 31.53 kB / 18.49 kB 36.27 MB / 0 B
6f23f114c44e 31.08% 86.5 MB / 8.186 GB 1.06% 37.25 kB / 29.28 kB 34.66 MB / 0 B
7a0731639e31 30.64% 66.21 MB / 8.186 GB 0.81% 31.1 kB / 19.39 kB 35.6 MB / 0 B
7ec2d73d3d97 0.00% 10.63 MB / 8.186 GB 0.13% 8.685 kB / 834 B 10.4 MB / 12.29 kB
855fd2c80bea 1.10% 46.88 MB / 8.186 GB 0.57% 23.39 kB / 2.423 kB 29.64 MB / 0 B
9993de237b9c 40.37% 170 MB / 8.186 GB 2.08% 19.75 kB / 1.461 kB 52.71 MB / 12.29 kB
a162fbf77c29 24.84% 128.6 MB / 8.186 GB 1.57% 59.82 kB / 54.46 kB 37.81 MB / 0 B
a7bf8b64d516 43.91% 106.1 MB / 8.186 GB 1.30% 46.33 kB / 31.36 kB 35 MB / 0 B
aae18e01b8bb 0.99% 44.16 MB / 8.186 GB 0.54% 7.066 kB / 578 B 28.12 MB / 0 B
bff9c9ee646d 35.43% 71.65 MB / 8.186 GB 0.88% 63.3 kB / 68.06 kB 45.53 MB / 0 B
ca86faedbd59 38.09% 104.9 MB / 8.186 GB 1.28% 31.84 kB / 18.71 kB 36.66 MB / 0 B
d666a1f3be5c 0.00% 9.286 MB / 8.186 GB 0.11% 19.51 kB / 648 B 6.621 MB / 0 B
ef2fa1bc6452 0.00% 7.254 MB / 8.186 GB 0.09% 19.88 kB / 648 B 6.645 MB / 0 B
f20529b47684 0.88% 41.66 MB / 8.186 GB 0.51% 12.45 kB / 648 B 23.96 MB / 0 B
We have been struggling with this issue as well, and still don't really have an ideal solution. However, we have two ideas that we are currently debating.
Run a "Dev" environment in the cloud, which is constantly updated with the master/latest version of every image as it is built. Then each individual project can proxy to that environment in their docker-compose.yml file... so they are running THEIR service locally, but all the dependencies are remote. An important part of this (from your question) is that you have shared dependencies like databases. This should never be the case... never integrate across the database. Each service should store its own data.
Each service is responsible for building a "mock" version of their app that can be used for local dev and medium level integration tests. The mock versions shouldn't have dependencies, and should enable someone to only need a single layer from their service (the 3 or 4 mocks, instead of the 3 or 4 real services each with 3 or 4 of their own and so on).
I'm trying to run an RoR app on an Amazon micro instance (the one that comes in the free tier). However, I'm unable to successfully complete rake assets:precompile because it supposedly runs out of RAM and the system kills the process.
First, how can I be sure that this is a low memory issue?
Second, irrespective of the answer to the first question, are there some parameters that I can pass to the Ruby interpreter to make it consume less RAM -- even if at the cost of overall app performance? Any GC tuning possible? Anything at all?
Note: Similar to Making ruby on rails take up less memory
PS: I've added a file-based swap area to the system as well. Here's the output of cat /proc/meminfo if that helps:
MemTotal: 604072 kB
MemFree: 343624 kB
Buffers: 4476 kB
Cached: 31568 kB
SwapCached: 33052 kB
Active: 17540 kB
Inactive: 199588 kB
Active(anon): 11408 kB
Inactive(anon): 172644 kB
Active(file): 6132 kB
Inactive(file): 26944 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 292840 kB
SwapFree: 165652 kB
Dirty: 80 kB
Writeback: 0 kB
AnonPages: 149640 kB
Mapped: 6620 kB
Shmem: 2964 kB
Slab: 23744 kB
SReclaimable: 14044 kB
SUnreclaim: 9700 kB
KernelStack: 2056 kB
PageTables: 6776 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 594876 kB
Committed_AS: 883644 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 5200 kB
VmallocChunk: 34359732767 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 637952 kB
DirectMap2M: 0 kB
Put config.assets.initialize_on_precompile = false in application.rb to avoid initializing the app and the database connection when you precompile assets. That may help.
Another option is to precompile locally and then deploy the compiled assets. More info here http://guides.rubyonrails.org/asset_pipeline.html#precompiling-assets
Second question first - I have run Rails apps on Micro instances, and do so now. So long as your concurrency is very low (one or two active users, tops. And not super-active either) you will be ok. Also note that Amazon will arbitrarily throttle down your effective CPU whenever it wants if you try to slam the CPU too hard (that's just how they do Micro instances). No GC tweaks or anything like that are necessary, just default settings are fine. I was using Passenger, an older version, and made sure it was spinning up only one process spawner. Stock config. Especially if big chunks of your app are images or static files, your main web server will be serving most of that content, and not Rails.
For your first question - I just checked out a large(ish) Rails app, fat_free_crm, on a freshly spun-up micro instance. I was just looking for something big.
I timed a run of assets:precompile, and it did complete, after a fairly long time: 2 minutes 31 seconds.
I think you might still need more swap space. I would try a gig to start with. If you still can't precompile your assets after that, you've got some other problem.
dd if=/dev/zero of=/swapfile bs=1k count=1M   # writes a 1 GB file of zeros
chmod 600 /swapfile                           # swapon warns about world-readable swap files
mkswap /swapfile
swapon -f /swapfile
I have compared /proc/meminfo for a Galaxy S2 (ARM Exynos 4) device running Android Gingerbread and Ice Cream Sandwich (CyanogenMod CM9). I noticed that the kernel splits the memory differently between low memory and high memory:
For ICS/CM9 (3.0 kernel):
cat /proc/meminfo:
MemTotal: 843624 kB
MemFree: 68720 kB
Buffers: 1532 kB
Cached: 115720 kB
SwapCached: 0 kB
Active: 487780 kB
Inactive: 64524 kB
Active(anon): 436316 kB
Inactive(anon): 1764 kB
Active(file): 51464 kB
Inactive(file): 62760 kB
Unevictable: 748 kB
Mlocked: 0 kB
**HighTotal: 278528 kB**
HighFree: 23780 kB
**LowTotal: 565096 kB**
LowFree: 44940 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 435848 kB
Mapped: 45364 kB
Shmem: 2276 kB
Slab: 37996 kB
SReclaimable: 10028 kB
SUnreclaim: 27968 kB
KernelStack: 10064 kB
PageTables: 16688 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 421812 kB
Committed_AS: 8549052 kB
VmallocTotal: 188416 kB
VmallocUsed: 104480 kB
VmallocChunk: 26500 kB
For GB (2.6 kernel):
cat /proc/meminfo:
MemTotal: 856360 kB
MemFree: 22264 kB
Buffers: 57000 kB
Cached: 337320 kB
SwapCached: 0 kB
Active: 339064 kB
Inactive: 379148 kB
Active(anon): 212928 kB
Inactive(anon): 112964 kB
Active(file): 126136 kB
Inactive(file): 266184 kB
Unevictable: 396 kB
Mlocked: 0 kB
**HighTotal: 462848 kB**
HighFree: 1392 kB
**LowTotal: 393512 kB**
LowFree: 20872 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 324312 kB
Mapped: 97092 kB
Shmem: 1580 kB
Slab: 29160 kB
SReclaimable: 13924 kB
SUnreclaim: 15236 kB
KernelStack: 8352 kB
PageTables: 23828 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 428180 kB
Committed_AS: 4001404 kB
VmallocTotal: 196608 kB
VmallocUsed: 104804 kB
VmallocChunk: 57092 kB
I have noticed that on the 3.0 kernel memory pressure is evident and the out of memory killer is invoked frequently.
I have two questions regarding this:
Is it possible that in the 3.0 layout (less highmem, more lowmem) applications have less available memory? Could that explain the high memory pressure?
Is it possible to change the layout in the 3.0 kernel to make it more similar to the 2.6 layout (i.e. more highmem, less lowmem)?
As far as I recall, the split between high and low memory is a compile-time parameter of the kernel, so it should be possible to set it differently (at compile time). I do not know why so much is given to the high memory region in your examples. On x86 with 1 GB of physical RAM, it is about 896 MB for low memory and 128 MB for high memory.
It would seem that Android needs more high memory than a typical 32-bit x86 desktop. I do not know which feature(s) of the Android ecosystem bring such requirements, so hopefully somebody else can tell you that.
You could try to investigate the memory zones to see what differs between Android ICS and GB. Simply do a cat /proc/zoneinfo. You can find some background information on these zones in this article, although take care that it was written for the x86 arch.
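To compare the zone layouts from cat /proc/zoneinfo, you can total the "present" pages per zone and multiply by the 4 kB page size; the per-zone totals should roughly match the LowTotal/HighTotal split in meminfo (zone accounting and meminfo differ by a handful of reserved pages, so expect approximate agreement). A minimal sketch over sample text shaped like zoneinfo output (the page counts below are hypothetical, chosen to mirror the ICS device above):

```python
def zone_totals_kb(zoneinfo_text: str, page_kb: int = 4) -> dict:
    """Sum 'present' pages per zone from /proc/zoneinfo-style text,
    returning {zone_name: kB}."""
    zones, current = {}, None
    for raw in zoneinfo_text.splitlines():
        line = raw.strip()
        if line.startswith("Node") and "zone" in line:
            current = line.split("zone")[-1].strip()   # e.g. "Normal", "HighMem"
        elif line.startswith("present") and current:
            zones[current] = zones.get(current, 0) + int(line.split()[1]) * page_kb
    return zones

sample = """\
Node 0, zone   Normal
        present  141274
Node 0, zone  HighMem
        present  69632
"""
print(zone_totals_kb(sample))  # {'Normal': 565096, 'HighMem': 278528}
```

Run against both kernels, this makes the Normal (lowmem) vs. HighMem shift between the 2.6 and 3.0 builds directly visible.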