JSF pages render <h:dataTable> slowly with a large number of HTML components - performance

My TomEE JSF project renders as expected when running locally but extremely slowly when deployed remotely.
The remotely deployed JSF pages render slowly whenever a relatively large number of components (relative to just displaying the template page) result from a call to the data sources. Rendering times are measured in minutes when there are more than just a few rows in the table.
Any ideas on where to look to isolate this problem?
Edit 2: I have deployed on multiple remote server environments. Rendering time of query results in <h:dataTable> seems to depend on available bandwidth. Traveling to the physical location of the remote server, logging onto it directly, and accessing the application via localhost in any browser yields nearly instantaneous rendering of the query results in the <h:dataTable>. Could it be that JSF is generating such a large HTML response that bandwidth becomes the constraint?
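One way to test that theory is to check the size of the rendered response in the browser's network tools. If the response really is huge, enabling GZIP compression on the TomEE HTTP connector in server.xml should cut transfer size considerably on low-bandwidth links. A sketch using standard Tomcat 7 connector attributes (the port and MIME list are illustrative, not taken from this deployment):
<Connector port="8080" protocol="HTTP/1.1"
           compression="on"
           compressionMinSize="2048"
           compressableMimeType="text/html,text/xml,text/css,application/javascript" />
Also worth checking: if javax.faces.STATE_SAVING_METHOD has been overridden to client (the spec default is server), the serialized view state is embedded in every response and can inflate large <h:dataTable> pages considerably.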
Edit 1: The first thing I thought was that the remote server was swapping memory from RAM to disk. It may be, but that led me to verify the memory allocated to the JVM by TomEE in catalina.sh. See the CATALINA_OPTS from catalina.sh below.
I have found StackOverflow posts about logging being turned on/off for JSF, but no reference to where this setting lives. My logging.properties files appear to be identical on the remote and local machines, both TomEE's and the JVM's.
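If those posts were referring to the JSF project stage, which controls development-time logging and diagnostics, it is set as a context parameter in web.xml. A sketch of the standard JSF 2.x parameter, on the assumption that this is the setting meant:
<context-param>
    <param-name>javax.faces.PROJECT_STAGE</param-name>
    <param-value>Production</param-value>
</context-param>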
I also found posts about Mojarra's implementation of the API having exponential time complexity, but this project uses Apache MyFaces.
The problem seems to be some type of TomEE or JVM configuration issue: when I run the application on the locally installed TomEE, the same pages render in milliseconds.
I am using Apache MyFaces 2.1.13. From pom.xml:
<dependency>
    <groupId>org.apache.myfaces.core</groupId>
    <artifactId>myfaces-api</artifactId>
    <version>2.1.13</version>
</dependency>
<dependency>
    <groupId>org.apache.myfaces.core</groupId>
    <artifactId>myfaces-impl</artifactId>
    <version>2.1.13</version>
</dependency>
I'll gladly post the entire pom.xml, but logic suggests that if the project renders pages at different rates depending on the TomEE container it is deployed in, then the issue lies elsewhere.
Remote TomEE Configuration:
Tomcat Version Apache Tomcat (TomEE)/7.0.53 (1.6.0.2)
JVM Version 1.7.0_55-mockbuild_2014_04_16_07_52-b00
JVM Vendor Oracle Corporation
OS Name Linux
OS Version 2.6.18-371.8.1.el5
OS Architecture amd64
Local TomEE Configuration:
Tomcat Version Apache Tomcat (TomEE)/7.0.53 (1.6.0.2)
JVM Version 1.7.0_67-b01
JVM Vendor Oracle Corporation
OS Name Mac OS X
OS Version 10.9.5
OS Architecture x86_64
BalusC mentioned memory, which is a good thought. From catalina.sh:
grep "CATALINA_OPTS=" -n /usr/share/apache-tomee-webprofile-1.6.0.2/bin/catalina.sh
272: CATALINA_OPTS="$CATALINA_OPTS $JPDA_OPTS -Xms1024m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=356m -XX:PermSize=256m -XX:MaxPermSize=356m"
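To confirm those flags actually reach the running remote JVM rather than falling back to defaults, the standard JDK tools can be used on the server (a quick check; <pid> is the TomEE process id):
jps -lvm
jinfo -flags <pid>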
And from system info:
# cat /proc/meminfo
MemTotal: 6969972 kB
MemFree: 128928 kB
Buffers: 349588 kB
Cached: 4895748 kB
SwapCached: 92 kB
Active: 1633928 kB
Inactive: 4416324 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 6969972 kB
LowFree: 128928 kB
SwapTotal: 10027000 kB
SwapFree: 10025000 kB
Dirty: 144 kB
Writeback: 0 kB
AnonPages: 804828 kB
Mapped: 52836 kB
Slab: 752492 kB
PageTables: 8928 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 13511984 kB
Committed_AS: 1360596 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 267944 kB
VmallocChunk: 34359470039 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB

Related

Why is `mule:deploy` also downloading dependencies, can it be stopped and only deploy?

We are using Azure CI/CD and we are deploying to CloudHub and/or on-premise using the Maven plugin. But each deploy also downloads some dependencies, which takes a lot of time to complete; only after the downloads does it actually deploy to Mule.
Can we somehow skip this download and only deploy, so the step takes less time to complete? We are using version 3.5.4 of the Mule Maven plugin.
The command executed would be:
[command]/usr/bin/mvn -f /azp/_work/r1/a/_<APP_NAME> CI/<APP_NAME>/pom.xml -Dmule.artifact=/azp/_work/r1/a/_<APP_NAME> CI/<APP_NAME>/package/<APP_NAME>-<VERSION>-mule-application.jar -Dmulesoft.username=<USER> -Dmulesoft.password=*** -Dmulesoft.application.name=<APP_NAME> -Dmulesoft.environment=<ENV> -Dtarget.type=server -Dtarget.name=<TARGET> -Drevision=<VERSION> -Danypoint.platform.client_id=*** -Danypoint.platform.client_secret=*** -Dmule.env=<ENV> -Danypoint.platform.base_uri=https://anypoint.mulesoft.com/ -Danypoint.platform.analytics_base_uri=https://analytics-ingest.anypoint.mulesoft.com/ -Dmule.key=<KEY> mule:deploy
The output of the command will first start some downloads and afterwards the actual build:
2022-05-13T09:54:30.6391145Z [INFO] Scanning for projects...
2022-05-13T09:54:31.1181089Z Downloading from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-maven-plugin/3.5.4/mule-maven-plugin-3.5.4.pom
2022-05-13T09:54:32.0477300Z Progress (1): 3.8/5.8 kB
2022-05-13T09:54:32.1507343Z Progress (1): 5.8 kB
2022-05-13T09:54:32.1565163Z
2022-05-13T09:54:32.1567637Z Downloaded from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-maven-plugin/3.5.4/mule-maven-plugin-3.5.4.pom (5.8 kB at 5.5 kB/s)
2022-05-13T09:54:32.1842212Z Downloading from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-artifact-tools/3.5.4/mule-artifact-tools-3.5.4.pom
2022-05-13T09:54:32.2768935Z Progress (1): 3.8/26 kB
2022-05-13T09:54:32.2788361Z Progress (1): 7.8/26 kB
2022-05-13T09:54:32.3599724Z Progress (1): 12/26 kB
2022-05-13T09:54:32.3660918Z Progress (1): 16/26 kB
2022-05-13T09:54:32.3687672Z Progress (1): 20/26 kB
2022-05-13T09:54:32.3708468Z Progress (1): 24/26 kB
2022-05-13T09:54:32.4659300Z Progress (1): 26 kB
2022-05-13T09:54:32.4660955Z
2022-05-13T09:54:32.4663598Z Downloaded from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-artifact-tools/3.5.4/mule-artifact-tools-3.5.4.pom (26 kB at 92 kB/s)
2022-05-13T09:54:32.4924427Z Downloading from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-packager/3.5.4/mule-packager-3.5.4.pom
2022-05-13T09:54:32.5865328Z Progress (1): 3.8/4.3 kB
2022-05-13T09:54:32.6821126Z Progress (1): 4.3 kB
2022-05-13T09:54:32.6822832Z
2022-05-13T09:54:32.6827017Z Downloaded from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-packager/3.5.4/mule-packager-3.5.4.pom (4.3 kB at 23 kB/s)
2022-05-13T09:54:32.6975910Z Downloading from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-classloader-model/3.5.4/mule-classloader-model-3.5.4.pom
2022-05-13T09:54:32.8797068Z Progress (1): 2.2 kB
2022-05-13T09:54:32.8815459Z
2022-05-13T09:54:32.8817163Z Downloaded from mulesoft-releases: https://repository.mulesoft.org/releases/org/mule/tools/maven/mule-classloader-model/3.5.4/mule-classloader-model-3.5.4.pom (2.2 kB at 12 kB/s)
2022-05-13T09:54:32.8918604Z Downloading from mulesoft-releases: https://repository.mulesoft.org/releases/org/apache/commons/commons-lang3/3.10/commons-lang3-3.10.pom
2022-05-13T09:54:33.0002885Z Downloading from central: https://repo.maven.apache.org/maven2/org/apache/commons/commons-lang3/3.10/commons-lang3-3.10.pom
2022-05-13T09:54:33.0736365Z Progress (1): 2.7/31 kB
2022-05-13T09:54:33.0737447Z Progress (1): 5.5/31 kB
2022-05-13T09:54:33.0737921Z Progress (1): 8.2/31 kB
2022-05-13T09:54:33.0738379Z Progress (1): 11/31 kB
2022-05-13T09:54:33.0738807Z Progress (1): 14/31 kB
2022-05-13T09:54:33.0739230Z Progress (1): 16/31 kB
2022-05-13T09:54:33.0739672Z Progress (1): 19/31 kB
2022-05-13T09:54:33.0740133Z Progress (1): 21/31 kB
2022-05-13T09:54:33.0740679Z Progress (1): 24/31 kB
<CONTENT REMOVED>
2022-05-13T09:56:03.4534742Z Downloaded from central: https://repo.maven.apache.org/maven2/javax/annotation/jsr250-api/1.0/jsr250-api-1.0.jar (5.8 kB at 3.2 kB/s)
2022-05-13T09:56:03.4536115Z Progress (3): 379 kB | 4.3 kB | 49/55 kB
2022-05-13T09:56:03.4536568Z Progress (3): 379 kB | 4.3 kB | 53/55 kB
2022-05-13T09:56:03.4544561Z Progress (3): 379 kB | 4.3 kB | 55 kB
2022-05-13T09:56:03.4545335Z
2022-05-13T09:56:03.4546832Z Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-component-annotations/1.7.1/plexus-component-annotations-1.7.1.jar (4.3 kB at 2.4 kB/s)
2022-05-13T09:56:03.4614094Z Downloaded from central: https://repo.maven.apache.org/maven2/org/eclipse/sisu/org.eclipse.sisu.inject/0.3.3/org.eclipse.sisu.inject-0.3.3.jar (379 kB at 208 kB/s)
2022-05-13T09:56:03.4659777Z Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/wagon/wagon-provider-api/2.12/wagon-provider-api-2.12.jar (55 kB at 30 kB/s)
2022-05-13T09:56:03.9320327Z [WARNING]
2022-05-13T09:56:03.9323175Z [WARNING] Some problems were encountered while building the effective model for com.mycompany:tst-<APP_NAME>:mule-application:<VERSION>
2022-05-13T09:56:03.9325652Z [WARNING] 'artifactId' contains an expression but should be a constant. @ com.mycompany:${mulesoft.application.name}:${revision}, /azp/_work/r1/a/_<APP_NAME> CI/<APP_NAME>/pom.xml, line 5, column 14
2022-05-13T09:56:03.9326790Z [WARNING]
2022-05-13T09:56:03.9327725Z [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
2022-05-13T09:56:03.9328472Z [WARNING]
2022-05-13T09:56:03.9329927Z [WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
2022-05-13T09:56:03.9330768Z [WARNING]
2022-05-13T09:56:03.9597427Z [INFO]
2022-05-13T09:56:03.9621642Z [INFO] ---------------------< com.mycompany:tst-<APP_NAME> >----------------------
2022-05-13T09:56:03.9623310Z [INFO] Building tst-<APP_NAME>-app <VERSION>
2022-05-13T09:56:03.9624407Z [INFO] --------------------------[ mule-application ]--------------------------
2022-05-13T09:56:03.9660310Z [INFO]
2022-05-13T09:56:03.9661604Z [INFO] --- mule-maven-plugin:3.5.4:deploy (default-cli) @ tst-<APP_NAME> ---
2022-05-13T09:56:08.8821619Z [INFO] Deploying artifact tst-<APP_NAME>
2022-05-13T09:56:12.1026652Z [INFO] Found application tst-<APP_NAME> on server <TARGET>. Redeploying application...
2022-05-13T09:56:55.0711750Z [INFO] Checking application: tst-<APP_NAME> has started
2022-05-13T09:58:55.9768435Z [INFO] Artifact tst-<APP_NAME> deployed
2022-05-13T09:58:55.9786077Z [INFO] ------------------------------------------------------------------------
2022-05-13T09:58:55.9819571Z [INFO] BUILD SUCCESS
2022-05-13T09:58:55.9822166Z [INFO] ------------------------------------------------------------------------
2022-05-13T09:58:55.9845817Z [INFO] Total time: 04:25 min
2022-05-13T09:58:55.9852969Z [INFO] Finished at: 2022-05-13T09:58:55Z
2022-05-13T09:58:55.9855473Z [INFO] ------------------------------------------------------------------------
2022-05-13T09:58:56.0327156Z Code analysis is disabled outside of the build environment. Could not find a value for: build.artifactStagingDirectory
2022-05-13T09:58:56.0411723Z ##[section]Finishing: Maven deploy to Mulesoft
That's how Maven works: the plugin and its dependencies are resolved before the deploy goal runs. If you just want to deploy an already built Mule application, you can look into using the Anypoint CLI for the deployment step only. However, the Anypoint CLI will not build the application for you.
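Alternatively, repeat runs can avoid most of the downloads by caching the local Maven repository between pipeline runs. A sketch based on the documented Azure Pipelines Cache@2 task (the variable name and key are illustrative; the trailing "..." stands for the existing -D flags from the command above):
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository

steps:
- task: Cache@2
  displayName: Cache Maven local repository
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'
    path: $(MAVEN_CACHE_FOLDER)
- script: mvn -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER) ... mule:deploy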

JVM issue with failed; error='Cannot allocate memory' (errno=12) [duplicate]

My code crashes with this error message:
Executing "/usr/bin/java com.utils.BotFilter"
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x0000000357c80000, 2712666112, 0) failed;
error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 2712666112 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-29955/hs_error.log
Here is the content of the generated hs_error.log file:
https://pastebin.com/yqF2Yy4P
This line from the crash log seems interesting to me:
Memory: 4k page, physical 98823196k(691424k free), swap 1048572k(0k free)
Does it mean that the machine has memory but is running out of swap space?
Here is the meminfo section from the crash log, but I don't really know how to interpret it. For example, what is the difference between MemFree and MemAvailable? And how much memory is this process taking?
/proc/meminfo:
MemTotal: 98823196 kB
MemFree: 691424 kB
MemAvailable: 2204348 kB
Buffers: 145568 kB
Cached: 2799624 kB
SwapCached: 304368 kB
Active: 81524540 kB
Inactive: 14120408 kB
Active(anon): 80936988 kB
Inactive(anon): 13139448 kB
Active(file): 587552 kB
Inactive(file): 980960 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 0 kB
Dirty: 1332 kB
Writeback: 0 kB
AnonPages: 92395828 kB
Mapped: 120980 kB
Shmem: 1376052 kB
Slab: 594476 kB
SReclaimable: 282296 kB
SUnreclaim: 312180 kB
KernelStack: 317648 kB
PageTables: 238412 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 50460168 kB
Committed_AS: 114163748 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 314408 kB
VmallocChunk: 34308158464 kB
HardwareCorrupted: 0 kB
AnonHugePages: 50071552 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 116924 kB
DirectMap2M: 5115904 kB
DirectMap1G: 95420416 kB
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
In case you have many context WARs deployed on your Tomcat, try reducing them
As Scary Wombat mentions, the JVM is trying to allocate 2712666112 bytes (about 2.7 GB) of memory, while you only have 691424 kB (about 0.69 GB) of free physical memory and nothing available in swap.
Another possibility (which I encountered just now) would be bad settings for "overcommit memory" on linux.
In my situation, /proc/sys/vm/overcommit_memory was set to "2" and /proc/sys/vm/overcommit_ratio to "50" , meaning "don't ever overcommit and only allow allocation of 50% of the available RAM+Swap".
That's a pretty deceptive problem, since there can be a lot of memory available, but allocations still fail for apparently no reason.
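To see which mode a machine is currently in, a quick check (standard procfs/sysctl reads, not from the original answer):
sysctl vm.overcommit_memory vm.overcommit_ratio
# equivalently:
cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio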
The settings can be changed to the default (overcommit in a sensible way) for now (until a restart):
echo 0 >/proc/sys/vm/overcommit_memory
... or permanently:
echo "vm.overcommit_memory=0 >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf # apply it immediately
Note: this can also partly be diagnosed by looking at the output of /proc/meminfo:
...
CommitLimit: 45329388 kB
Committed_AS: 44818080 kB
...
In the example in the question, Committed_AS is much higher than CommitLimit, indicating (together with the fact that allocations fail) that overcommit is enabled, while here both values are close together, meaning that the limit is strictly enforced.
An excellent detailed explanation of these settings and their effects (as well as when it makes sense to modify them) can be found in this Pivotal blog entry. (Tl;dr: messing with overcommit is useful if you don't want critical processes to use swap.)

sonarqube error java insufficient memory

I am trying to set up SonarQube on an EC2 t2.micro instance running the Amazon Linux AMI, using SonarQube 6.0, java-1.8.0-openjdk, and MySQL Ver 14.14 Distrib 5.6.39 for Linux (x86_64) using the EditLine wrapper.
After the start command:
sudo ./sonar.sh start
SonarQube does not start. The logs show the message below.
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.05.16 19:30:50 INFO app[o.s.a.AppFileSystem] Cleaning or
creating temp directory /opt/sonarqube/temp
2018.05.16 19:30:50 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.37.amzn1.x86_64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.37.amzn1.x86_64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process620905092992598791properties
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 181207040, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 181207040 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/sonarqube/hs_err_pid30955.log
<-- Wrapper Stopped
Below Memory info:
/proc/meminfo:
MemTotal: 1011176 kB
MemFree: 78024 kB
MemAvailable: 55140 kB
Buffers: 8064 kB
Cached: 72360 kB
SwapCached: 0 kB
Active: 860160 kB
Inactive: 25868 kB
Active(anon): 805628 kB
Inactive(anon): 48 kB
Active(file): 54532 kB
Inactive(file): 25820 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 108 kB
Writeback: 0 kB
AnonPages: 805628 kB
Mapped: 30700 kB
Shmem: 56 kB
Slab: 28412 kB
SReclaimable: 16632 kB
SUnreclaim: 11780 kB
KernelStack: 3328 kB
PageTables: 6108 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505588 kB
Committed_AS: 1348288 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 47104 kB
DirectMap2M: 1001472 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core) family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, clmul, erms, lzcnt, tsc, bmi1, bmi2
Have you tried to increase the maximum allowed heap size for your SonarQube application?
You can do so by editing the sonar.properties file, found in your SQ installation folder.
You can follow this guide to configure your SQ max heap size.
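For reference, a sketch of the conf/sonar.properties keys that control the heap of each SonarQube process (values are illustrative; note that the instance above has only 1 GB of RAM and no swap, and Committed_AS already far exceeds CommitLimit, so lowering these or adding swap may matter as much as raising them):
sonar.web.javaOpts=-Xmx512m -Xms128m
sonar.ce.javaOpts=-Xmx512m -Xms128m
sonar.search.javaOpts=-Xmx512m -Xms128m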

Tuning Ruby / Rails to work on systems with less memory

I'm trying to run an RoR app on an Amazon micro instance (the one that comes in the free tier). However, I'm unable to successfully complete rake assets:precompile because it supposedly runs out of RAM and the system kills the process.
First, how can I be sure that this is a low memory issue?
Second, irrespective of the answer to the first question, are there some parameters that I can pass to the Ruby interpreter to make it consume less RAM -- even if at the cost of overall app performance? Any GC tuning possible? Anything at all?
Note: Similar to Making ruby on rails take up less memory
PS: I've added a file-based swap area to the system as well. Here's the output of cat /proc/meminfo if that helps:
MemTotal: 604072 kB
MemFree: 343624 kB
Buffers: 4476 kB
Cached: 31568 kB
SwapCached: 33052 kB
Active: 17540 kB
Inactive: 199588 kB
Active(anon): 11408 kB
Inactive(anon): 172644 kB
Active(file): 6132 kB
Inactive(file): 26944 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 292840 kB
SwapFree: 165652 kB
Dirty: 80 kB
Writeback: 0 kB
AnonPages: 149640 kB
Mapped: 6620 kB
Shmem: 2964 kB
Slab: 23744 kB
SReclaimable: 14044 kB
SUnreclaim: 9700 kB
KernelStack: 2056 kB
PageTables: 6776 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 594876 kB
Committed_AS: 883644 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 5200 kB
VmallocChunk: 34359732767 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 637952 kB
DirectMap2M: 0 kB
Put config.assets.initialize_on_precompile = false in application.rb to avoid initializing the app and the database connection when you precompile assets. That may help.
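In context, that setting looks like the following (a sketch for a Rails 3.1/3.2 app; YourApp is a placeholder for the real application module):
# config/application.rb
module YourApp
  class Application < Rails::Application
    # skip initializing the full app and DB connection during assets:precompile
    config.assets.initialize_on_precompile = false
  end
end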
Another option is to precompile locally and then deploy the compiled assets. More info here http://guides.rubyonrails.org/asset_pipeline.html#precompiling-assets
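A sketch of that local workflow, assuming the compiled assets are committed and shipped with the deploy (adapt to your process):
RAILS_ENV=production bundle exec rake assets:precompile
git add public/assets
git commit -m "Precompile assets locally"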
Second question first - I have run Rails apps on Micro instances, and do so now. So long as your concurrency is very low (one or two active users, tops. And not super-active either) you will be ok. Also note that Amazon will arbitrarily throttle down your effective CPU whenever it wants if you try to slam the CPU too hard (that's just how they do Micro instances). No GC tweaks or anything like that are necessary, just default settings are fine. I was using Passenger, an older version, and made sure it was spinning up only one process spawner. Stock config. Especially if big chunks of your app are images or static files, your main web server will be serving most of that content, and not Rails.
For your first question - I just checked out a large(ish) Rails app, fat_free_crm, on a freshly spun-up micro instance. I was just looking for something big.
I timed a run of assets:precompile and it did complete, after a fairly long time: 2 minutes 31 seconds.
I think you might still need more swap space. I would try a gig to start with. If you still can't precompile your assets after that, you've got some other problem.
# create a 1 GiB file of zeros (1M blocks of 1 kB), format it as swap, and enable it
dd if=/dev/zero of=/swapfile bs=1k count=1M
mkswap /swapfile
swapon -f /swapfile
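To make the swap file survive a reboot (not part of the original answer; standard /etc/fstab convention):
/swapfile  none  swap  sw  0  0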

RAM split between lowmem and highmem

I have compared /proc/meminfo on a Galaxy S2 (ARM Exynos4) device running Android Gingerbread and Ice Cream Sandwich (CyanogenMod CM9). I noticed that the kernel splits the memory differently between low memory and high memory:
For ICS/CM9 (3.0 kernel):
cat /proc/meminfo:
MemTotal: 843624 kB
MemFree: 68720 kB
Buffers: 1532 kB
Cached: 115720 kB
SwapCached: 0 kB
Active: 487780 kB
Inactive: 64524 kB
Active(anon): 436316 kB
Inactive(anon): 1764 kB
Active(file): 51464 kB
Inactive(file): 62760 kB
Unevictable: 748 kB
Mlocked: 0 kB
**HighTotal: 278528 kB**
HighFree: 23780 kB
**LowTotal: 565096 kB**
LowFree: 44940 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 435848 kB
Mapped: 45364 kB
Shmem: 2276 kB
Slab: 37996 kB
SReclaimable: 10028 kB
SUnreclaim: 27968 kB
KernelStack: 10064 kB
PageTables: 16688 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 421812 kB
Committed_AS: 8549052 kB
VmallocTotal: 188416 kB
VmallocUsed: 104480 kB
VmallocChunk: 26500 kB
For GB (2.6 kernel):
cat /proc/meminfo:
MemTotal: 856360 kB
MemFree: 22264 kB
Buffers: 57000 kB
Cached: 337320 kB
SwapCached: 0 kB
Active: 339064 kB
Inactive: 379148 kB
Active(anon): 212928 kB
Inactive(anon): 112964 kB
Active(file): 126136 kB
Inactive(file): 266184 kB
Unevictable: 396 kB
Mlocked: 0 kB
**HighTotal: 462848 kB**
HighFree: 1392 kB
**LowTotal: 393512 kB**
LowFree: 20872 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 324312 kB
Mapped: 97092 kB
Shmem: 1580 kB
Slab: 29160 kB
SReclaimable: 13924 kB
SUnreclaim: 15236 kB
KernelStack: 8352 kB
PageTables: 23828 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 428180 kB
Committed_AS: 4001404 kB
VmallocTotal: 196608 kB
VmallocUsed: 104804 kB
VmallocChunk: 57092 kB
I have noticed that on the 3.0 kernel memory pressure is evident and the out-of-memory killer is invoked frequently.
I have two questions regarding this:
Is it possible that in the 3.0 layout (less highmem, more lowmem) applications have less available memory? Could that explain the high memory pressure?
Is it possible to change the layout in the 3.0 kernel in order to make it more similar to the 2.6 layout (i.e. more highmem less lowmem)?
As far as I recall, the split between high and low memory is a compile-time parameter of the kernel, so it should be possible to set it differently (at compile time). I do not know why so much is given to the high memory region in your examples. On x86 with 1 GB of physical RAM, it is about 896 MB of low memory and 128 MB of high memory.
It would seem that Android needs more high memory than a typical 32-bit x86 desktop. I do not know which feature(s) of the Android ecosystem would bring such requirements, so hopefully somebody else can tell you that.
You could try to investigate the memory zones to see what the difference is between Android ICS and GB. Simply do a cat /proc/zoneinfo. You can find some background information on these zones in this article, although take care that it describes the x86 architecture.
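For reference, a sketch of where that compile-time knob lives (option names vary by architecture and kernel version; the values here are illustrative, not a recommendation). Enlarging the vmalloc area shrinks lowmem and grows highmem, which is the direction asked about:
# kernel .config: user/kernel address split, which bounds lowmem
CONFIG_VMSPLIT_3G=y
# on ARM, the lowmem/highmem boundary can also be moved via the boot
# command line, since vmalloc space is carved out of the kernel mapping
vmalloc=256M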
