Running out of RAM during daemon process - gradle

I am trying to make a Minecraft mod. I have set gradle.properties to use 3 GB of RAM. With that setting it is able to load the daemon process, but it immediately displays:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at https://docs.gradle.org/2.14/userguide/gradle_daemon.html
Please read the following process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for 3072000KB object heap
It is running out of RAM, and I have no idea why, as I have 16 GB installed.
I tried closing all other programs but got the same problem. Next I changed gradle.properties to use 1 GB and then 2 GB of RAM, and instead got an error saying it ran out of memory (I googled this error, and the common solution was to give it more RAM):
* What went wrong:
Execution failed for task ':decompileMc'.
> Process 'command 'C:\Program Files (x86)\Java\jdk1.8.0_211\bin\java.exe'' finished with non-zero exit value 1
Francisco Mateo recommended I update my Gradle version; sadly, that didn't help.
I tried changing the distributionUrl in gradle-wrapper.properties to the new version of Gradle I am using, so it now reads distributionUrl=https\://services.gradle.org/distributions/gradle-5.5.1-bin.zip
I also tried running IntelliJ as administrator. Still no luck.
Not a clue what is happening here; help appreciated. If it makes any difference, the error occurs at 10:08 of https://www.youtube.com/watch?v=RZ66HdNkank
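For reference, the memory setting in question lives in gradle.properties and typically looks like this (a sketch; -Xmx3000m is inferred from the 3072000KB figure in the error, since 3000 MB = 3,072,000 KB):
# gradle.properties
org.gradle.jvmargs=-Xmx3000m
One detail worth noting: the java.exe path in the error output sits under Program Files (x86), which is a 32-bit JDK, and a 32-bit JVM on Windows generally cannot reserve a contiguous 3 GB heap no matter how much RAM is installed.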

java.lang.OutOfMemoryError: Java heap space in IntelliJ, even after changing memory settings

This error has plagued me for a whole week. I really don't know what happened; I got it after compiling the source, and I have not yet written any custom code there, as the source is fresh from the repo. I asked my friend to run the source and it worked on his laptop.
What I have tried to fix this problem:
File -> Invalidate Caches
Help -> Change Memory Settings (all sizes were tried) (current size: 2048)
IntelliJ -> Preferences -> Compiler -> Shared Build Process Heap Size (all sizes were tried) (current size: 6048)
And I am still stuck at that error.
Finally, I ran ./gradlew build --stacktrace --info to see the log, and I had no idea what it showed.
You can assign more heap:
for your IDE --> Help | Change Memory Settings
for your current program --> Run | Edit Configurations...
for all programs --> set an environment variable: SET _JAVA_OPTIONS=-Xms512m -Xmx2048m
or go to File | Settings | Build, Execution, Deployment | Compiler and set "User-local build process VM options" to something like 2048.
Verify the heap size with: How to view the current heap size that an application is using?
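To confirm what a given JVM actually received, here is a minimal Java sketch (the class name HeapCheck is arbitrary) that prints the value corresponding to -Xmx:
// HeapCheck.java: print the maximum heap available to this JVM
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory(); // upper bound set by -Xmx
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
Running it under different settings (for example with _JAVA_OPTIONS set as above) shows whether the options are actually being picked up.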
Thanks for all your suggestions, guys. I've done it all and got the same result.
Finally, I escaped this bug by closing IntelliJ and running ./gradlew build in the terminal. I don't know what really happened, but now it is all working perfectly.
I also want to ask, since I am new to Mac: does partitioning the disk on a MacBook affect its performance?

dockerd failed to start daemon: error initializing graphdriver: driver not supported

I've been running a few containers (approximately a dozen) for a while now. I've approached whatever the hard limit is on container/image sizes in the past, and had to clean these up to keep it from barfing all over everything, and recently the same has happened again.
I have identified several containers and images I can safely remove to reduce its footprint. But just as I was getting ready to do so, Docker crashed on me. And when I attempt to restart it, it crashes with the error message:
Fatal Error
Docker daemon failed to start
[timestamp] dockerd failed to start daemon: error initializing graphdriver: driver not supported
Thus, I can't use any of the command-line tools to remove these images/containers.
Since there are running containers that I don't dare delete at this point, this is a little difficult to resolve. Is there a way to start Docker (on the Mac) that doesn't actually start any of the containers, so that maybe I can avoid this error?
Is the error message even related to my problem? I'm on Docker 2.3.0.4 if it matters.
You could switch to the overlay2 storage driver instead of the driver currently configured.
You can follow the documentation below to switch:
https://docs.docker.com/storage/storagedriver/overlayfs-driver/
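On Docker Desktop for Mac the storage driver is set in the daemon configuration (Preferences -> Docker Engine in recent versions, the Daemon tab in older ones). A minimal sketch, assuming no other custom daemon settings:
{
  "storage-driver": "overlay2"
}
Docker needs a restart afterwards. Note also that images and containers are stored per storage driver, so anything created under the old driver will not be visible until you switch back.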

Gluon Mobile not able to port to iOS

I'm on the home stretch of this app I'm working on, and I can't seem to get it to port to iOS. When trying to build my application for iOS, I get an OutOfMemory exception over and over again.
Then I tried to build a basic Gluon Mobile application and port it to iOS and I get this:
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:createDefaultIOSLauncher UP-TO-DATE
:compileIosJava UP-TO-DATE
:processIosResources UP-TO-DATE
:iosClasses UP-TO-DATE
:iosExtractNativeLibs UP-TO-DATE
:createIpa
RoboVM has detected that you are running on a slow HDD. Please consider mounting a RAM disk.
To create a 2GB RAM disk, run this in your terminal:
SIZE=2048 ; diskutil erasevolume HFS+ 'RoboVM RAM Disk' `hdiutil attach -nomount ram://$((SIZE * 2048))`
See http://docs.robovm.com/ for more info
RoboVM has detected that you are running on a slow HDD. Please consider mounting a RAM disk.
To create a 2GB RAM disk, run this in your terminal:
SIZE=2048 ; diskutil erasevolume HFS+ 'RoboVM RAM Disk' `hdiutil attach -nomount ram://$((SIZE * 2048))`
See http://docs.robovm.com/ for more info
Root pattern javax.annotations.**.* matches no classes
Root pattern javax.inject.**.* matches no classes
RoboVM has detected that you are running on a slow HDD. Please consider mounting a RAM disk.
To create a 2GB RAM disk, run this in your terminal:
SIZE=2048 ; diskutil erasevolume HFS+ 'RoboVM RAM Disk' `hdiutil attach -nomount ram://$((SIZE * 2048))`
See http://docs.robovm.com/ for more info
Root pattern javax.annotations.**.* matches no classes
Root pattern javax.inject.**.* matches no classes
Warning: javax.xml.bind.annotation.XmlRootElement is a phantom class!
Warning: java.nio.file.StandardOpenOption is a phantom class!
Warning: java.nio.file.FileSystem is a phantom class!
Warning: java.nio.file.OpenOption is a phantom class!
Warning: java.nio.file.FileSystems is a phantom class!
Warning: com.oracle.jrockit.jfr.TimedEvent is a phantom class!
Warning: com.oracle.jrockit.jfr.EventToken is a phantom class!
Warning: com.oracle.jrockit.jfr.ValueDefinition is a phantom class!
Warning: com.oracle.jrockit.jfr.EventDefinition is a phantom class!
Warning: com.oracle.jrockit.jfr.Producer is a phantom class!
Warning: com.oracle.jrockit.jfr.FlightRecorder is a phantom class!
Daemon stopping because JVM tenured space is exhausted
Daemon stopping because JVM tenured space is exhausted
My iMac has 8 GB of RAM and a 2.7 GHz i5.
I have also attempted mounting a RoboVM RAM disk, with no success. Please help!
Usually, iOS deployment requires a lot of memory, and it is good practice to increase the default maximum memory allocation pool of the JVM heap, up to 2 GB.
When running from your IDE, you can set this default value in the Gradle preferences.
For instance, on NetBeans, go to Preferences -> Miscellaneous -> Gradle, Scripts & Tasks, and add -Xmx2048m as Gradle JVM arguments.
Another option is to set a gradle property in your gradle.properties file (the one with the ANDROID_HOME property, under <user>/.gradle):
org.gradle.jvmargs=-Xmx2048m
This property file is applied whether you run from the IDE or from the command line, so it is the more appropriate option.
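A minimal sketch of that file (the ANDROID_HOME path is a placeholder for whatever is already set there):
# <user>/.gradle/gradle.properties
ANDROID_HOME=/path/to/android/sdk
org.gradle.jvmargs=-Xmx2048m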
A typical situation where an out-of-memory error occurs is the first time the RoboVM compiler is launched. Luckily, all the compiled classes are cached, so restarting the task simply resumes the process.
Also, if the process fails, it can sometimes help to stop all the daemon processes with gradle --stop and ./gradlew --stop, and then start the task again.
If the process ends successfully, even if there are warning messages, just check on your iOS device that the app was installed and runs fine. Note that you could find memory issues there as well, but that is a different problem.
Edit
When running long tasks, it is always convenient to use --info for a more verbose output about the process.
Also, the process can be run from the console (from NetBeans, right-click on the build.gradle file and select Tools -> Open in Terminal).
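For instance, reusing the task name from the log above (a sketch; run from the project root):
./gradlew createIpa --info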

Failure with running TorqueBox on Ubuntu Quantal

I'm trying to set up TorqueBox inside Vagrant on Ubuntu Quantal. I've deployed my app into TorqueBox, but when I try to run bin/standalone.sh, it hangs for a long time after "Setting up Bundler" and then simply says "Killed".
I'm at a complete loss as to how to debug this.
I followed this guide for the installation of TorqueBox: http://torquebox.org/documentation/2.3.0/production-setup.html
Here's the full log: https://gist.github.com/elabs-dev/5411966
Is there a dump file in $TORQUEBOX_HOME/jboss/standalone/bin? If so, it could indicate that the JVM is crashing.
Otherwise, it could be that there is insufficient memory available to deploy whatever you're deploying. How large is your app?
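A bare "Killed" on Linux usually means the kernel's out-of-memory killer terminated the process. A quick way to check on Ubuntu (a sketch; exact log locations can vary):
dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/syslog
If the OOM killer shows up there, giving the Vagrant VM more memory, or lowering the JVM heap that TorqueBox requests, is the usual fix.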

require_once: cannot allocate memory

I have a pretty "unconventional" setup that looks like this:
I have a VMware virtual machine running Ubuntu Desktop 12.10 with networking set to bridged. Installed inside the VM are nginx, php-fpm, php-cli, and MySQL. PHP is 5.4.10.
I have a folder on the host (Windows 7) containing all the files for the application I am working on. This folder is then made available as a network share.
The virtual machine then mounts the Windows share via Samba into a directory. This directory is then served by nginx and php-fpm. So far, everything has worked fine.
I have a script that I run via the PHP CLI to help me build and process some files. Previously, this script worked fine on my Windows host. However, it throws cannot allocate memory errors when I run it in the Ubuntu VM. The weird thing is that it is sporadic; it does not happen every time.
user@ubuntu:~$ sudo /usr/local/php/bin/php -f /www/app/process.php
Warning: require_once(/www/app/somecomponent.php): failed to open stream: Cannot allocate memory in /www/app/loader.php on line 130
Fatal error: require_once(): Failed opening required '/www/app/somecomponent.php' (include_path='.:/usr/local/php/lib/php') in /www/app/loader.php on line 130
I have checked and confirmed the following:
/www/app/somecomponent.php definitely exists.
The permissions for /www and all files and subdirectories inside are set to execute and read+write for owner, group, and others.
I have turned off APC after reading this question to see if APC is the cause, but the problem still persists after doing so.
php-cli is using the same php.ini as php-fpm, which is located in /etc/php/php.ini.
memory_limit in php.ini is set to 128M (the default), which seems like plenty for the app.
Even after increasing the memory limit to 256M, the error still occurs.
The VM has 2GB of memory.
I have been googling to find out what causes cannot allocate memory errors, but have found nothing useful. What could be causing this problem?
It turns out this was a problem with my Windows share. Perhaps because Windows 7 is a client OS, it is not tuned to serve large numbers of files frequently (which is what is happening in my case).
To fix it, set the following registry values:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache to 1
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size to 3
I have been running my setup with these new settings for about a week and have not encountered any memory allocation errors since making the change.
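For convenience, the same change can be made from an elevated command prompt (a sketch; a reboot, or at least a restart of the Server service, is typically needed before the LanmanServer change takes effect):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f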
