Not enough RAM to run the whole docker-compose stack

Our microservice stack has crept up to 15 small services handling business logic: auth, messaging, billing, and so on. It's getting to the point where a docker-compose up uses more RAM than our devs have on their laptops.
It's not a crazy amount, about 4 GB, but I regularly feel the pinch on my 8 GB machine (thanks, Chrome).
There are app-level optimisations that we can make, and are making, but eventually we are going to need an alternative strategy.
I see two obvious options:
Use a big cloud dev machine, perhaps provisioned with docker-machine on AWS.
Spin up shared services, such as Postgres and Redis, in a shared dev cloud.
Neither is very satisfactory: with (1), local files aren't synced, making local dev a nightmare, and with (2) we can break each other's environments.
Help!
Appendix I: output from docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
0ea1779dbb66 32.53% 137.9 MB / 8.186 GB 1.68% 46 kB / 29.4 kB 42 MB / 0 B
12e93d81027c 0.70% 376.1 MB / 8.186 GB 4.59% 297.7 kB / 243 kB 0 B / 1.921 MB
25f7be321716 34.40% 131.1 MB / 8.186 GB 1.60% 38.42 kB / 23.91 kB 39.64 MB / 0 B
26220cab1ded 0.00% 7.274 MB / 8.186 GB 0.09% 19.82 kB / 648 B 6.645 MB / 0 B
2db7ba96dc16 1.22% 51.29 MB / 8.186 GB 0.63% 10.41 kB / 578 B 28.79 MB / 0 B
3296e274be54 0.00% 4.854 MB / 8.186 GB 0.06% 20.07 kB / 1.862 kB 4.069 MB / 0 B
35911ee375fa 0.27% 12.87 MB / 8.186 GB 0.16% 29.16 kB / 6.861 kB 7.137 MB / 0 B
49eccc517040 37.31% 65.76 MB / 8.186 GB 0.80% 31.53 kB / 18.49 kB 36.27 MB / 0 B
6f23f114c44e 31.08% 86.5 MB / 8.186 GB 1.06% 37.25 kB / 29.28 kB 34.66 MB / 0 B
7a0731639e31 30.64% 66.21 MB / 8.186 GB 0.81% 31.1 kB / 19.39 kB 35.6 MB / 0 B
7ec2d73d3d97 0.00% 10.63 MB / 8.186 GB 0.13% 8.685 kB / 834 B 10.4 MB / 12.29 kB
855fd2c80bea 1.10% 46.88 MB / 8.186 GB 0.57% 23.39 kB / 2.423 kB 29.64 MB / 0 B
9993de237b9c 40.37% 170 MB / 8.186 GB 2.08% 19.75 kB / 1.461 kB 52.71 MB / 12.29 kB
a162fbf77c29 24.84% 128.6 MB / 8.186 GB 1.57% 59.82 kB / 54.46 kB 37.81 MB / 0 B
a7bf8b64d516 43.91% 106.1 MB / 8.186 GB 1.30% 46.33 kB / 31.36 kB 35 MB / 0 B
aae18e01b8bb 0.99% 44.16 MB / 8.186 GB 0.54% 7.066 kB / 578 B 28.12 MB / 0 B
bff9c9ee646d 35.43% 71.65 MB / 8.186 GB 0.88% 63.3 kB / 68.06 kB 45.53 MB / 0 B
ca86faedbd59 38.09% 104.9 MB / 8.186 GB 1.28% 31.84 kB / 18.71 kB 36.66 MB / 0 B
d666a1f3be5c 0.00% 9.286 MB / 8.186 GB 0.11% 19.51 kB / 648 B 6.621 MB / 0 B
ef2fa1bc6452 0.00% 7.254 MB / 8.186 GB 0.09% 19.88 kB / 648 B 6.645 MB / 0 B
f20529b47684 0.88% 41.66 MB / 8.186 GB 0.51% 12.45 kB / 648 B 23.96 MB / 0 B

We have been struggling with this issue as well, and still don't really have an ideal solution. However, we have two ideas that we are currently debating.
Run a "Dev" environment in the cloud, which is constantly updated with the master/latest version of every image as it is built. Then each individual project can proxy to that environment in their docker-compose.yml file... so they are running THEIR service locally, but all the dependencies are remote. An important part of this (from your question) is that you have shared dependencies like databases. This should never be the case... never integrate across the database. Each service should store its own data.
Each service is responsible for building a "mock" version of their app that can be used for local dev and medium level integration tests. The mock versions shouldn't have dependencies, and should enable someone to only need a single layer from their service (the 3 or 4 mocks, instead of the 3 or 4 real services each with 3 or 4 of their own and so on).
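A minimal sketch of what that proxying could look like in a compose file, assuming a hypothetical billing service and a shared dev host at dev.example.internal (all hostnames, variable names and ports are placeholders, not from the original answer):

# docker-compose.override.yml (illustrative sketch)
# run only YOUR service locally; its dependencies point at the shared dev environment
version: "2"
services:
  billing:
    build: .
    environment:
      AUTH_URL: https://auth.dev.example.internal
      MESSAGING_URL: https://messaging.dev.example.internal
      # each service talks to its OWN database, never a shared one
      DATABASE_URL: postgres://billing@billing-db.dev.example.internal:5432/billing
    ports:
      - "8080:8080"

Only the one service being worked on is built and run locally; everything else is reached over the network in the shared environment.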

Related

JVM issue with failed; error='Cannot allocate memory' (errno=12) [duplicate]

My code crashes with this error message
Executing "/usr/bin/java com.utils.BotFilter"
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x0000000357c80000, 2712666112, 0) failed;
error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 2712666112 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-29955/hs_error.log
Here is the content of the generated hs_error.log file:
https://pastebin.com/yqF2Yy4P
This line from the crash log seems interesting to me:
Memory: 4k page, physical 98823196k(691424k free), swap 1048572k(0k free)
Does it mean that the machine has memory but is running out of swap space?
Here is meminfo from the crash log, but I don't really know how to interpret it. For example, what is the difference between MemFree and MemAvailable? And how much memory is this process taking?
/proc/meminfo:
MemTotal: 98823196 kB
MemFree: 691424 kB
MemAvailable: 2204348 kB
Buffers: 145568 kB
Cached: 2799624 kB
SwapCached: 304368 kB
Active: 81524540 kB
Inactive: 14120408 kB
Active(anon): 80936988 kB
Inactive(anon): 13139448 kB
Active(file): 587552 kB
Inactive(file): 980960 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 0 kB
Dirty: 1332 kB
Writeback: 0 kB
AnonPages: 92395828 kB
Mapped: 120980 kB
Shmem: 1376052 kB
Slab: 594476 kB
SReclaimable: 282296 kB
SUnreclaim: 312180 kB
KernelStack: 317648 kB
PageTables: 238412 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 50460168 kB
Committed_AS: 114163748 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 314408 kB
VmallocChunk: 34308158464 kB
HardwareCorrupted: 0 kB
AnonHugePages: 50071552 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 116924 kB
DirectMap2M: 5115904 kB
DirectMap1G: 95420416 kB
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
If you have many WAR contexts deployed on your Tomcat, try reducing them (a sketch of the heap/stack flags from this list follows below)
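A minimal sketch of the heap- and stack-related options from the list above, assuming the BotFilter class from the question is launched directly (the values are illustrative, not tuned recommendations):

# smaller heap, smaller per-thread stacks, bounded code cache
java -Xms256m -Xmx1g -Xss512k -XX:ReservedCodeCacheSize=128m com.utils.BotFilter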
As Scary Wombat mentions, the JVM is trying to allocate 2712666112 bytes (about 2.7 GB) of memory, while you only have 691424 kB (about 0.69 GB) of free physical memory and nothing available in swap.
Another possibility (which I encountered just now) would be bad settings for "overcommit memory" on Linux.
In my situation, /proc/sys/vm/overcommit_memory was set to "2" and /proc/sys/vm/overcommit_ratio to "50", meaning "never overcommit, and only allow commits up to swap plus 50% of RAM".
That's a pretty deceptive problem, since there can be a lot of memory available, yet allocations still fail for no apparent reason.
The settings can be changed back to the default (heuristic overcommit) for now (until a restart):
echo 0 >/proc/sys/vm/overcommit_memory
... or permanently:
echo "vm.overcommit_memory=0 >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf # apply it immediately
Note: this can also partly be diagnosed by looking at the output of /proc/meminfo:
...
CommitLimit: 45329388 kB
Committed_AS: 44818080 kB
...
In the example in the question, Committed_AS is much higher than CommitLimit, which (together with the fact that allocations fail) indicates that overcommit is enabled; here, by contrast, both values are close together, meaning the limit is being strictly enforced.
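For reference, the kernel derives CommitLimit roughly as (standard formula, ignoring hugepages for simplicity):

CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100

Plugging in the values from the question's crash log (SwapTotal 1048572 kB, MemTotal 98823196 kB, default ratio 50) gives 1048572 + 98823196 * 50 / 100 ≈ 50460170 kB, which matches the CommitLimit of 50460168 kB shown there; the limit is only actually enforced when overcommit_memory is 2.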
An excellent detailed explanation of these settings and their effect (as well as when it makes sense to modify them) can be found in this Pivotal blog entry. (Tl;dr: messing with overcommit is useful if you don't want critical processes to use swap.)

Why does Ceph turn status to Err when there is still available storage space?

I built a 3-node Ceph cluster recently. Each node has seven 1 TB HDDs for OSDs, so in total I have 21 TB of storage space for Ceph.
However, when I ran a workload that keeps writing data to Ceph, it turned to Err status and no data can be written to it any more.
The output of ceph -s is:
cluster:
id: 06ed9d57-c68e-4899-91a6-d72125614a94
health: HEALTH_ERR
1 full osd(s)
4 nearfull osd(s)
7 pool(s) full
services:
mon: 1 daemons, quorum host3
mgr: admin(active), standbys: 06ed9d57-c68e-4899-91a6-d72125614a94
osd: 21 osds: 21 up, 21 in
rgw: 4 daemons active
data:
pools: 7 pools, 1748 pgs
objects: 2.03M objects, 7.34TiB
usage: 14.7TiB used, 4.37TiB / 19.1TiB avail
pgs: 1748 active+clean
As I understand it, since there is still 4.37 TiB of space left, Ceph should take care of balancing the workload itself so that no OSD reaches full or nearfull status. But it doesn't work as I expected: 1 full osd and 4 nearfull osd(s) show up, and the health is HEALTH_ERR.
I can't access Ceph with hdfs or s3cmd anymore, so here are my questions:
1. Is there any explanation for the current issue?
2. How can I recover from it? By deleting data on the Ceph nodes directly with ceph-admin and relaunching Ceph?
I didn't get an answer for 3 days, but I made some progress; let me share my findings here.
1. It's normal for different OSDs to have a usage gap. If you list OSDs with ceph osd df, you will find that different OSDs have different usage ratios.
2. To recover from this issue (by "the issue" I mean the cluster seizing up because an OSD is full), follow the steps below; they are mostly from Red Hat. The commands are collected in a sketch after the steps.
Get Ceph cluster health info with ceph health detail. It's not strictly necessary, but it gives you the ID of the full OSD.
Use ceph osd dump | grep full_ratio to get the current full_ratio. Do not use the statement listed at the above link; it's obsolete. The output looks like:
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
Set the OSD full ratio a little higher with ceph osd set-full-ratio <ratio>. Generally, we set the ratio to 0.97.
Now the cluster status will change from HEALTH_ERR to HEALTH_WARN or HEALTH_OK. Remove some data that can be released.
Change the OSD full ratio back to the previous value. It shouldn't stay at 0.97, because that's a little risky.
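The commands from the steps above, collected in one place (0.97 is the temporary ratio from step 3; 0.95 is the previous default shown by ceph osd dump, restored at the end):

ceph health detail                 # optional: identify the full OSD
ceph osd dump | grep full_ratio    # check the current ratios
ceph osd set-full-ratio 0.97       # temporarily raise the full ratio
# ... remove or migrate data that can be released ...
ceph osd set-full-ratio 0.95       # restore the previous ratio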
I hope this thread is helpful to someone who runs into the same issue. For details about OSD configuration, please refer to the Ceph documentation.
Ceph requires free disk space to move storage chunks, called pgs, between different disks. As this free space is so critical to the underlying functionality, Ceph will go into HEALTH_WARN once any OSD reaches the near_full ratio (generally 85% full), and will stop write operations on the cluster by entering HEALTH_ERR state once an OSD reaches the full_ratio.
However, unless your cluster is perfectly balanced across all OSDs there is likely much more capacity available, as OSDs are typically unevenly utilized. To check overall utilization and available capacity you can run ceph osd df.
Example output:
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
2 hdd 2.72849 1.00000 2.7 TiB 2.0 TiB 2.0 TiB 72 MiB 3.6 GiB 742 GiB 73.44 1.06 406 up
5 hdd 2.72849 1.00000 2.7 TiB 2.0 TiB 2.0 TiB 119 MiB 3.3 GiB 726 GiB 74.00 1.06 414 up
12 hdd 2.72849 1.00000 2.7 TiB 2.2 TiB 2.2 TiB 72 MiB 3.7 GiB 579 GiB 79.26 1.14 407 up
14 hdd 2.72849 1.00000 2.7 TiB 2.3 TiB 2.3 TiB 80 MiB 3.6 GiB 477 GiB 82.92 1.19 367 up
8 ssd 0.10840 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 up
1 hdd 2.72849 1.00000 2.7 TiB 1.7 TiB 1.7 TiB 27 MiB 2.9 GiB 1006 GiB 64.01 0.92 253 up
4 hdd 2.72849 1.00000 2.7 TiB 1.7 TiB 1.7 TiB 79 MiB 2.9 GiB 1018 GiB 63.55 0.91 259 up
10 hdd 2.72849 1.00000 2.7 TiB 1.9 TiB 1.9 TiB 70 MiB 3.0 GiB 887 GiB 68.24 0.98 256 up
13 hdd 2.72849 1.00000 2.7 TiB 1.8 TiB 1.8 TiB 80 MiB 3.0 GiB 971 GiB 65.24 0.94 277 up
15 hdd 2.72849 1.00000 2.7 TiB 2.0 TiB 2.0 TiB 58 MiB 3.1 GiB 793 GiB 71.63 1.03 283 up
17 hdd 2.72849 1.00000 2.7 TiB 1.6 TiB 1.6 TiB 113 MiB 2.8 GiB 1.1 TiB 59.78 0.86 259 up
19 hdd 2.72849 1.00000 2.7 TiB 1.6 TiB 1.6 TiB 100 MiB 2.7 GiB 1.2 TiB 56.98 0.82 265 up
7 ssd 0.10840 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 up
0 hdd 2.72849 1.00000 2.7 TiB 2.0 TiB 2.0 TiB 105 MiB 3.0 GiB 734 GiB 73.72 1.06 337 up
3 hdd 2.72849 1.00000 2.7 TiB 2.0 TiB 2.0 TiB 98 MiB 3.0 GiB 781 GiB 72.04 1.04 354 up
9 hdd 2.72849 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 up
11 hdd 2.72849 1.00000 2.7 TiB 1.9 TiB 1.9 TiB 76 MiB 3.0 GiB 817 GiB 70.74 1.02 342 up
16 hdd 2.72849 1.00000 2.7 TiB 1.8 TiB 1.8 TiB 98 MiB 2.7 GiB 984 GiB 64.80 0.93 317 up
18 hdd 2.72849 1.00000 2.7 TiB 2.0 TiB 2.0 TiB 79 MiB 3.0 GiB 792 GiB 71.65 1.03 324 up
6 ssd 0.10840 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 up
TOTAL 47 TiB 30 TiB 30 TiB 1.3 GiB 53 GiB 16 TiB 69.50
MIN/MAX VAR: 0.82/1.19 STDDEV: 6.64
As you can see in the above output, utilization varies from 56.98% (OSD 19) to 82.92% (OSD 14), which is a significant variance.
As only a single OSD is full, and only 4 of your 21 OSDs are nearfull, you likely have a significant amount of storage still available in your cluster, which means that it is time to perform a rebalance operation. This can be done manually by reweighting OSDs, or you can have Ceph do a best-effort rebalance by running ceph osd reweight-by-utilization. Once the rebalance is complete (i.e. no objects are misplaced in ceph status), you can check the variation again (using ceph osd df) and trigger another rebalance if required.
If you are on Luminous or newer, you can enable the balancer plugin to handle OSD reweighting automatically.
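A rough sketch of those steps as commands (the balancer module and mode names are from Luminous-era Ceph; adjust to your release):

ceph osd df                          # check the utilization spread across OSDs
ceph osd reweight-by-utilization     # best-effort automatic reweight
ceph status                          # wait until no objects are misplaced

# Luminous or newer: let the balancer module keep things even automatically
ceph mgr module enable balancer
ceph balancer mode crush-compat
ceph balancer on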

SonarQube error: java insufficient memory

I am trying to set up SonarQube on an EC2 instance (Amazon Linux AMI, t2.micro) using SonarQube 6.0, java-1.8.0-openjdk, and MySQL (Ver 14.14 Distrib 5.6.39, for Linux (x86_64) using EditLine wrapper).
After the sonar start command:
sudo ./sonar.sh start
Sonar does not start. Checking the logs gives the message below.
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.05.16 19:30:50 INFO app[o.s.a.AppFileSystem] Cleaning or
creating temp directory /opt/sonarqube/temp
2018.05.16 19:30:50 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[es]:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.37.amzn1.x86_64/jre/bin/java
-Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
-Djava.io.tmpdir=/opt/sonarqube/temp
-javaagent:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.37.amzn1.x86_64/jre/lib/management-agent.jar
-cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer
/opt/sonarqube/temp/sq-process620905092992598791properties
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x00000000c5330000, 181207040, 0) failed;
error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 181207040 bytes for
committing reserved memory.
# An error report file with more information is saved as:
# /opt/sonarqube/hs_err_pid30955.log
<-- Wrapper Stopped
Below is the memory info:
/proc/meminfo:
MemTotal: 1011176 kB
MemFree: 78024 kB
MemAvailable: 55140 kB
Buffers: 8064 kB
Cached: 72360 kB
SwapCached: 0 kB
Active: 860160 kB
Inactive: 25868 kB
Active(anon): 805628 kB
Inactive(anon): 48 kB
Active(file): 54532 kB
Inactive(file): 25820 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 108 kB
Writeback: 0 kB
AnonPages: 805628 kB
Mapped: 30700 kB
Shmem: 56 kB
Slab: 28412 kB
SReclaimable: 16632 kB
SUnreclaim: 11780 kB
KernelStack: 3328 kB
PageTables: 6108 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505588 kB
Committed_AS: 1348288 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 47104 kB
DirectMap2M: 1001472 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core)
family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3,
ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, clmul, erms, lzcnt, tsc,
bmi1, bmi2
Have you tried to increase the maximum allowed heap size for your SonarQube application?
You can do so by editing the sonar.properties file, found in your SQ installation folder.
You can follow this guide in order to configure your SQ max heap size.
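For reference, the heap settings live in conf/sonar.properties as per-process JVM options (the values below are only placeholders; on a 1 GB t2.micro you would keep them small and most likely also add swap):

# conf/sonar.properties - JVM options for the web, compute engine and search processes
sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xmx512m -Xms512m -XX:+HeapDumpOnOutOfMemoryError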

Tuning Ruby / Rails to work on systems with less memory

I'm trying to run an RoR app on an Amazon micro instance (the one that comes in the free tier). However, I'm unable to successfully complete rake assets:precompile because it apparently runs out of RAM and the system kills the process.
First, how can I be sure that this is a low memory issue?
Second, irrespective of the answer to the first question, are there some parameters that I can pass to the Ruby interpreter to make it consume less RAM -- even if at the cost of overall app performance? Any GC tuning possible? Anything at all?
Note: Similar to Making ruby on rails take up less memory
PS: I've added a file-based swap area to the system as well. Here's the output of cat /proc/meminfo if that helps:
MemTotal: 604072 kB
MemFree: 343624 kB
Buffers: 4476 kB
Cached: 31568 kB
SwapCached: 33052 kB
Active: 17540 kB
Inactive: 199588 kB
Active(anon): 11408 kB
Inactive(anon): 172644 kB
Active(file): 6132 kB
Inactive(file): 26944 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 292840 kB
SwapFree: 165652 kB
Dirty: 80 kB
Writeback: 0 kB
AnonPages: 149640 kB
Mapped: 6620 kB
Shmem: 2964 kB
Slab: 23744 kB
SReclaimable: 14044 kB
SUnreclaim: 9700 kB
KernelStack: 2056 kB
PageTables: 6776 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 594876 kB
Committed_AS: 883644 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 5200 kB
VmallocChunk: 34359732767 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 637952 kB
DirectMap2M: 0 kB
Put config.assets.initialize_on_precompile = false in application.rb to avoid initializing the app and the database connection when you precompile assets. That may help.
Another option is to precompile locally and then deploy the compiled assets. More info here http://guides.rubyonrails.org/asset_pipeline.html#precompiling-assets
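A rough sketch of the precompile-locally option, assuming a hypothetical app directory at /var/www/myapp on the server (the host and paths are placeholders):

# on the development machine: build assets with production settings
RAILS_ENV=production bundle exec rake assets:precompile

# then copy the generated files to the server
rsync -az public/assets/ deploy@myserver:/var/www/myapp/public/assets/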
Second question first - I have run Rails apps on Micro instances, and do so now. So long as your concurrency is very low (one or two active users, tops, and not super-active either) you will be OK. Also note that Amazon will arbitrarily throttle down your effective CPU whenever it wants if you try to slam the CPU too hard (that's just how they do Micro instances). No GC tweaks or anything like that are necessary; the default settings are fine. I was using Passenger, an older version, and made sure it was spinning up only one process spawner. Stock config. Especially if big chunks of your app are images or static files, your main web server will be serving most of that content, not Rails.
As for your first question - I just checked out a large(ish) Rails app, fat_free_crm, on a freshly spun-up micro instance. I was just looking for something big.
I timed a run of assets:precompile and it did complete, after a fairly long time: about 2 minutes 31 seconds.
I think you might still need more swap space. I would try a gig to start with. If you still can't precompile your assets after that, you've got some other problem.
dd if=/dev/zero of=/swapfile bs=1k count=1M   # create a 1 GiB file of zeroes
mkswap /swapfile                              # format it as swap space
swapon -f /swapfile                           # enable it immediately

RAM split between lowmem and highmem

I have compared /proc/meminfo for a Galaxy S2 (ARM Exynos 4) device running Android Gingerbread and Ice Cream Sandwich (CyanogenMod CM9). I noticed that the kernel splits the memory differently between low memory and high memory:
For ICS/CM9 (3.0 kernel):
cat /proc/meminfo:
MemTotal: 843624 kB
MemFree: 68720 kB
Buffers: 1532 kB
Cached: 115720 kB
SwapCached: 0 kB
Active: 487780 kB
Inactive: 64524 kB
Active(anon): 436316 kB
Inactive(anon): 1764 kB
Active(file): 51464 kB
Inactive(file): 62760 kB
Unevictable: 748 kB
Mlocked: 0 kB
**HighTotal: 278528 kB**
HighFree: 23780 kB
**LowTotal: 565096 kB**
LowFree: 44940 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 435848 kB
Mapped: 45364 kB
Shmem: 2276 kB
Slab: 37996 kB
SReclaimable: 10028 kB
SUnreclaim: 27968 kB
KernelStack: 10064 kB
PageTables: 16688 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 421812 kB
Committed_AS: 8549052 kB
VmallocTotal: 188416 kB
VmallocUsed: 104480 kB
VmallocChunk: 26500 kB
For GB (2.6 kernel):
cat /proc/meminfo:
MemTotal: 856360 kB
MemFree: 22264 kB
Buffers: 57000 kB
Cached: 337320 kB
SwapCached: 0 kB
Active: 339064 kB
Inactive: 379148 kB
Active(anon): 212928 kB
Inactive(anon): 112964 kB
Active(file): 126136 kB
Inactive(file): 266184 kB
Unevictable: 396 kB
Mlocked: 0 kB
**HighTotal: 462848 kB**
HighFree: 1392 kB
**LowTotal: 393512 kB**
LowFree: 20872 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 324312 kB
Mapped: 97092 kB
Shmem: 1580 kB
Slab: 29160 kB
SReclaimable: 13924 kB
SUnreclaim: 15236 kB
KernelStack: 8352 kB
PageTables: 23828 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 428180 kB
Committed_AS: 4001404 kB
VmallocTotal: 196608 kB
VmallocUsed: 104804 kB
VmallocChunk: 57092 kB
I have noticed that on the 3.0 kernel memory pressure is evident and the out of memory killer is invoked frequently.
I have two questions regarding this:
Is it possible that in the 3.0 layout (less highmem, more lowmem) applications have less available memory? Could that explain the high memory pressure?
Is it possible to change the layout in the 3.0 kernel in order to make it more similar to the 2.6 layout (i.e. more highmem less lowmem)?
As far as I recall, the split between high and low memory is a compile-time parameter of the kernel, so it should be possible to set it differently (at compile time). I do not know why so much is given to the high memory region in your examples. On x86 with 1 GB of physical RAM it is about 896 MB of low memory and 128 MB of high memory.
It would seem that Android needs more high memory than a typical 32-bit x86 desktop; I do not know which feature(s) of the Android ecosystem would bring such requirements, so hopefully somebody else can tell you that.
You could investigate the memory zones to see what the difference is between Android ICS and GB. Simply do a cat /proc/zoneinfo. You can find some background information on these zones in this article, although note that it was written for the x86 architecture.
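A quick way to compare the zones on the two kernels, if you want to dig further (the exact fields vary a little between 2.6 and 3.0 kernels):

# list each memory zone and the pages present in it
# (counts are in 4 kB pages, so multiply by 4 to get kB)
grep -E 'Node|present' /proc/zoneinfo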
