YUI Compressor in low memory environment

Is there a way to reduce the memory required by the YUI compressor or is there another compressor able to run via command line in "low" memory environments?
My hosting provider limits the amount of memory and virtual memory I can use from the shell. Currently the limits look like: ulimit -m 200000 -v 200000. The -v argument is the one that seems to have a real effect. I get one of the following two results when trying to run the YUI Compressor in this environment:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
or
Exception java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
The difference is due to using the JVM arguments -Xms18m -Xmx18m for the second one. I can reproduce this effect on my local Linux box with the following:
( ulimit -v 200000; java -Xmx18m -jar yui-compressor-2.4.2.jar -o foo-min.css foo.css )
I'm looking to build both the JavaScript and the CSS on the hosting provider immediately after an update of the source code, to push to the live site.
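Since ulimit -v caps the whole address space rather than just the heap, -Xmx has to be set well below the limit to leave room for thread stacks, permgen, and the JIT code cache. A rough sketch of sizing the heap from the limit (the halving factor is an assumption, not a measured value):

```shell
# Derive a conservative -Xmx from the virtual-memory limit.
# Assumption: give the heap about half the budget and leave the
# rest for stacks, permgen, and the JIT code cache.
vlimit_kb=200000                       # mirrors `ulimit -v 200000`
heap_mb=$(( vlimit_kb / 1024 / 2 ))    # 200000 KB ~ 195 MB, so ~97 MB heap
echo "java -Xmx${heap_mb}m -Xss512k -jar yui-compressor-2.4.2.jar -o foo-min.css foo.css"
```

Shrinking the per-thread stack with -Xss as well can matter here, because each thread's stack also counts against the -v budget.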

I was able to get the YUI Compressor to execute in the restricted memory space by using the Small Footprint Runtime Environment from Sun.
$ java -version
java version "1.5.0_10-eval"
Java(TM) 2 Runtime Environment, Standard Edition for Embedded (build 1.5.0_10-eval-b02, headless)
Java HotSpot(TM) Client VM (build 1.5.0_10-eval-b02, mixed mode)
Evaluation version, 90 days remain in evaluation period
The only problem I see is that it's an evaluation version, but with this version I didn't have to monkey around with the -Xmx or -XX:MaxPermSize options at all.

Related

Sonarqube scanner - Java HotSpot(TM) 64-Bit Server VM warning The paging file is too small

I am trying to run the Sonar scanner on a Docker virtual machine as part of my private Azure DevOps build server, and am getting an error about the pagefile not having enough memory to complete the analysis. My Docker image runs the Windows Server Core 2019 base image with JDK 11.0.13 installed and SonarQube scanner 5.0.0. The server also has the following environment variables set to try to increase the Java VM size:
JAVA_OPTS="-Xms1024m -Xmx4608m"
SONAR_SCANNER_OPTS="-Xmx4608m"
My image is running with 5GB RAM, and monitoring the container shows that there is plenty of memory still available. I have noticed that the first time I run the scan after starting the container it runs fine, but each subsequent attempt gets the error:
##[error]Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000789c00000, 703594496, 0) failed; error='The paging file is too small for this operation to complete' (DOS error/errno=1455)
Can someone please help me understand why it is failing to allocate around 700MB when there is more than 2GB of RAM available?
The versions of everything are:
Azure DevOps agents: 2.194.0
JDK: 11.0.13
Sonarqube scanner extension: 5.0.0
Docker: 20.10.7
Docker base image: dotnet/framework/sdk:4.8-gbt-windowsservercore-ltsc2019
The issue turned out to be related to the JavaXmlSensor detecting some very large XML test files in the test project; it was trying to load them into memory and analyse them, causing the out-of-memory error.
To fix it I added **/*.xml to the sonar.exclusions and also added the same list of exclusions to the sonar.test.exclusions setting.
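The two settings described above might look like this in a sonar-project.properties file (the property keys are SonarQube's own; excluding every XML file this broadly is simply the blunt fix that worked here):

```properties
# Exclude XML files from both main and test analysis so the
# JavaXmlSensor never tries to load the huge test fixtures.
sonar.exclusions=**/*.xml
sonar.test.exclusions=**/*.xml
```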

Cassandra - Improperly specified VM option 'ThreadPriorityPolicy=42'

During a single-node installation, when I try to start Cassandra and check the nodetool status, the error message below appears:
ubuntu@ip-172-31-6-128:~/apache-cassandra-3.11.4/bin$ ./cassandra -R
ubuntu@ip-172-31-6-128:~/apache-cassandra-3.11.4/bin$ [0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:./../logs/gc.log instead.
intx ThreadPriorityPolicy=42 is outside the allowed range [ 0 ... 1 ]
Improperly specified VM option 'ThreadPriorityPolicy=42'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
This is happening because 42 is no longer an accepted value for ThreadPriorityPolicy in the version of Java you are using; newer JVMs only allow values in the range 0 to 1. You can see this by checking the version from the command prompt:
$ java -version
openjdk version "1.8.0_275"
OpenJDK Runtime Environment (IcedTea 3.17.1) (Alpine 8.275.01-r0)
OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
Note that Cassandra 3.x will only function with Java 8. You are likely seeing this error because the Java 8 options specified in cassandra-env.sh are not valid with your version of Java. Install the latest Java 8, or run Cassandra with Docker.
Edit:
Based on this: Cassandra start error with ThreadPriorityPolicy=42
Try setting ThreadPriorityPolicy=1.
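As a minimal sketch of that change: in Cassandra 3.11.x the flag usually lives in conf/jvm.options (older packagings define it in cassandra-env.sh instead), and the fix is a one-line value change:

```
# conf/jvm.options (illustrative excerpt)
# before: -XX:ThreadPriorityPolicy=42
-XX:ThreadPriorityPolicy=1
```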

JVM running out of memory in Bamboo

I'm facing an issue. While running a build on an on-demand Bamboo server in AWS, I'm getting an error, and the log says:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory.......failed; error='Cannot allocate memory' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map XXXXX bytes for committing reserved memory."
Does anyone know how I can allocate more memory to Bamboo, since it is hosted in AWS? (I do not have much experience with either.)
Thank you.
Did you ever solve this? I would start by checking the memory usage (free -m) and then try running the build outside of Bamboo to see if that works as expected.
You can also update the setenv.sh file in the Bamboo bin directory to add memory options. Update JAVA_OPTS with reasonable values that make sense for your build projects, e.g. -Xmx768m -Xms512m.
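As a sketch, the setenv.sh change might look like the following (JAVA_OPTS is the variable Bamboo's setenv.sh already uses; the sizes are the example values from above, not recommendations):

```shell
# <bamboo-install>/bin/setenv.sh (illustrative fragment)
JAVA_OPTS="-Xms512m -Xmx768m ${JAVA_OPTS}"
export JAVA_OPTS
```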

New ActiveMQ Installation Runs Out Of Memory After 30 Minutes

We have a fresh installation of ActiveMQ 5.9.1 running on Red Hat Linux. With no outside connections, no queues, and only the default topic on the system, the process runs out of memory (1GB allocated at startup with "-Xms1G -Xmx1G") after about 30 minutes, even with absolutely no activity. I initially ran into this problem with version 5.10.0, and downgraded to 5.9.1 to see if maybe it was something introduced in the new build.
Literally, all I did was:
tar xzf apache-activemq-5.9.1-bin.tar.gz
mv apache-activemq-5.9.1-bin activemq
cd activemq
bin/activemq start
Using "top", I noted that it started with about 150MB of real memory used and continued to creep upward. Once top showed it at 1.1GB, there were several heapdump, core, javacore, and trace files in the base directory. The javacore files all state:
Dump Event "systhrow" (00040000) Detail "java/lang/OutOfMemoryError" "Java heap space" received
Has anyone else encountered this? How did you fix it?
UPDATE 2014-08-22
"java -version" yields:
java version "1.7.0"
Java(TM) SE Runtime Environment (build pxa6470sr6fp1-20140108_01(SR6 FP1))
IBM J9 VM (build 2.6, JRE 1.7.0 Linux amd64-64 Compressed References 20140106_181350 (JIT enabled, AOT enabled)
J9VM - R26_Java726_SR6_20140106_1601_B181350
JIT - r11.b05_20131003_47443.02
GC - R26_Java726_SR6_20140106_1601_B181350_CMPRSS
J9CL - 20140106_181350)
JCL - 20140103_01 based on Oracle 7u51-b11
I'm starting to think the IBM JVM may be the problem.
EDIT 2014-08-29
Replaced the IBM JVM with the standard Oracle JVM and updated ActiveMQ to 5.10.0, but still have the problem. No connections to the server, one queue with no messages on it. Using the Eclipse Memory Analyzer, the Leak Suspects Report shows 197 instances of org.apache.activemq.broker.jmx.ManagedTransportConnection consuming approx. 500MB of memory out of 529MB total. Not sure what this means or how to fix it.
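Given that the leak suspects are JMX-managed connection objects, one diagnostic worth trying is to disable JMX registration on the broker and see whether the growth stops. A sketch of that change in conf/activemq.xml (useJmx is a real broker attribute; the rest of the element is illustrative, and this is a test of the theory, not a confirmed fix):

```xml
<!-- conf/activemq.xml: turn off JMX MBean registration for connections -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost" useJmx="false">
  <!-- existing broker configuration unchanged -->
</broker>
```

If the heap stays flat with useJmx="false", the next question is what keeps opening connections (a load balancer health check or port scanner hitting the transport port can create one ManagedTransportConnection per probe).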

Running a play framework app in Amazon EC2 micro instance

I have a really basic play! app which simply handles a couple of normal GET and POST requests and talks to a MySQL database, nothing fancy.
I ran play dist and transferred the zip file to my EC2 instance. After unzipping it, going to the bin folder and running ./myapp, I get a message:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory ... error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
I'm running Play version 2.2.1 and this instance has about 512MB of RAM, with the 64-bit version of the Oracle JDK. Is this not enough to run a play! app, or am I missing something?
Thanks.
Play Framework 2.3 now has a nifty little feature.
$ /path/to/bin/<project-name> -mem 512 -J-server
Should get the job done.
Read http://www.playframework.com/documentation/2.3-SNAPSHOT/ProductionConfiguration
Specifying additional JVM arguments
You can specify any JVM arguments to the start script. Otherwise the default JVM settings will be used:
$ /path/to/bin/<project-name> -J-Xms128M -J-Xmx512m -J-server
As a convenience you can also set memory min, max, permgen and the reserved code cache size in one go; a formula is used to
determine these values given the supplied parameter (which represents maximum memory):
$ /path/to/bin/<project-name> -mem 512 -J-server
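For 512 MB, the formula mentioned in the quoted docs derives explicit -J flags, so the convenience form is roughly equivalent to spelling them out by hand (a sketch; the exact values the formula produces depend on the Play version, and the heap flags shown are assumptions):

```shell
# Roughly what `-mem 512 -J-server` stands in for (values illustrative):
/path/to/bin/<project-name> -J-Xms512m -J-Xmx512m -J-server
```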
Using play 2.2.1 I had to run play dist to generate the zip file. Then I copied that to the aws instance.
Once there, I extracted the zip and changed the executable file:
from:
local mem=${1:-1024}
to:
local mem=${1:-512}
That did it for me. I got the idea from here but I didn't want to just delete the logic they had there, so I just reduced the default value.
Also please note that on an AWS EC2 micro instance:
$ java -version
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.14) (amazon-65.1.11.14.57.amzn1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
So you have to use the same Java JDK when running play dist.
EDIT:
I updated java to openjdk 7 and was able to run the sample play applications without any errors.
