I got this error while building SpiderMonkey in a Cygwin terminal; the error is
"***virtual memory exhausted stop"
How can I solve this error?
Or, simply add a swap file to increase the available memory; hope that helps:
http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/
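For reference, on a typical Linux box that guide boils down to something like the following (the 1 GB size and the /swapfile1 path are just example values; since the original build is under Cygwin, the Windows equivalent would be enlarging the page file instead):
# create and enable a 1 GB swap file (size and path are arbitrary examples)
sudo dd if=/dev/zero of=/swapfile1 bs=1M count=1024
sudo chmod 600 /swapfile1
sudo mkswap /swapfile1
sudo swapon /swapfile1
# confirm the new swap space is active
swapon -s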
As a potential quick fix: You can reduce the memory usage by doing
make -j 1
which tells make to run only one job (one compiler process) at a time.
I've seen a few people encounter this heap size problem, which seems to be the issue in my case:
2> Could not reserve enough space for 1048576KB object heap (TaskId:336)
I tried manually setting it to 1G:
Got the same error, realised the space required is actually greater than 1G (it's about 1.04GB), so I set it to 2G. But this just escalated the error:
1> Could not reserve enough space for 2097152KB object heap (TaskId:305)
I thought I'd go nuclear and just set it to 10G, but then I got a different error saying it failed to create the Java VM.
In all honesty, I don't actually know what these mean; I'm just following along based on research of other SO and Xamarin Forums posts. Can anyone explain to me why I'm seeing these errors and how I can fix them?
Notes based on other questions: It's on debug, not release, and I don't have ProGuard ticked.
Steps to fix:
Select 64-bit Java SDK (as per instructions)
Set heap size to 5G (as per screenshot in question; a command-line equivalent is sketched below)
Built and ran successfully after this.
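If you'd rather do it from the command line, the IDE heap-size field maps (as far as I know) to the JavaMaximumHeapSize MSBuild property, so something along these lines should be equivalent (the project file name is a placeholder):
# hypothetical example: build with a 5 GB Java heap; requires a 64-bit JDK
msbuild YourApp.Android.csproj /p:Configuration=Debug /p:JavaMaximumHeapSize=5G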
I saw a few posts here and there talking about this issue, and the only two options seemed to be either to rebuild the kernel with the appropriate memory split or to buy a package from Eltechs.
Since we are talking about open source software, I believe there should be people who have got Wine (installed from jessie-backports) working on an RPi3 without buying some extra patch.
However, every time I build a kernel with the required memory split option and try executing winecfg, it gives me this kind of error asking me to build the kernel with a different memory split option, so I'm going in circles here.
Warning: memory above 0x80000000 doesn't seem to be accessible.
Wine requires a 3G/1G user/kernel memory split to work properly.
wine: failed to map the shared user data: c0000018
So first it asked me to rebuild the kernel with a 2G/2G memory split; then, after I'd done that, it asked me to build the kernel with 1G/3G, and then with 3G/1G again.
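For what it's worth, "rebuild the kernel with a 3G/1G split" concretely means selecting CONFIG_VMSPLIT_3G when configuring it. On the standard Raspberry Pi kernel tree that looks roughly like this (the defconfig name assumes a 32-bit Pi 2/3 build; adjust to your setup):
# fetch the Raspberry Pi kernel sources and start from the Pi 2/3 defconfig
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
make bcm2709_defconfig
# pick the split under: Kernel Features -> Memory split -> 3G/1G user/kernel split
# (this sets CONFIG_VMSPLIT_3G=y in .config)
make menuconfig
# build the kernel image, modules and device tree blobs as usual
make -j4 zImage modules dtbs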
Can we, the open source people, sort this issue out once and forever and run x86 apps on RPi3? :)
A possible cause of a CL_OUT_OF_RESOURCES error is that the card is being used to run a display (Ref). I have found, however, that I continue to get this error after disconnecting the display and it persists until I restart. Is there a command that will make the OpenCL resources available again?
CL_OUT_OF_RESOURCES is a common error with the nVIDIA driver, and can be caused by:
Really running out of resources (rare)
Reading an array that was used by a kernel that read/wrote out of bounds (typical)
Any other strange error that does not have an appropriate error code.
You are probably facing the second one, so I would check the kernels.
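If you want something more systematic than reading the kernels by eye, one option (assuming it is available for your platform) is to run the host program under Oclgrind, which simulates the kernels on the CPU and reports out-of-bounds accesses (the binary name is a placeholder):
# run the application against Oclgrind's simulated OpenCL device;
# out-of-bounds reads/writes inside kernels are reported as they happen
oclgrind ./your_opencl_app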
EDIT: Since you said it happens until restart, maybe you should check whether you are properly deleting all the OpenCL objects. Events are very tricky, and it's easy to leak some OpenCL memory through them.
How much memory are you trying to allocate, and how much does the card have on board? A video card driving a display has a certain amount of memory set aside for some operations. The driver may simply be reserving this memory and not caring that the display is gone until the driver is restarted.
On that note, it is possible to restart the video driver in Windows using devcon. On Linux, you could try an
lsmod | grep nvidia
and once you know the module name, perhaps an
rmmod
or
modprobe -r
I don't know if this will work on OSX.
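Putting that together, a driver reload on Linux would look something like this (module names vary by driver version, and the unload will fail if X or a compute process still holds the device):
# see which nvidia modules are currently loaded
lsmod | grep nvidia
# unload them; newer drivers also load nvidia_uvm/nvidia_drm/nvidia_modeset,
# which have to be removed before the core module
sudo rmmod nvidia_uvm
sudo rmmod nvidia
# load the driver again
sudo modprobe nvidia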
I am using Kubuntu 10.10 with a 4-core CPU. When I use 'make -j2' to build a C++ project, two cores' CPU usage goes to 100%, the desktop environment becomes unresponsive, and the build makes no progress.
Version info:
GNU make version 3.81
gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)
How can I resolve this problem? Thanks.
There's not really enough information here to give you a definitive answer. First it's not clear if this happens only when you run with -j2. What if you run without parallelism (no -j)? When you say "2 core's CPU usage [goes to] 100%", what is happening on those CPUs? If you run "top" in another terminal and then start your build, what is showing in top?
Alternatively, if you run "make -d -j2" what program(s) is make running right before the CPU goes to 100%?
The fact that the desktop is unresponsive as well hints at some other problem, rather than CPU usage, since you have 4 cores and only 2 are busy. Maybe something is chewing up all your RAM? Does the system come back after a while (indicating that the OOM killer got involved and stomped something)?
If none of that helps, you can run make under strace, something like "strace -f make -j2" and see if you can figure out what is going on. This will generate a metric ton or two of output but if, when the CPU is pegged, you see something running over and over and over you might get a hint.
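Concretely, the sort of thing I'd run, in a second terminal or instead of the plain build:
# watch what is actually eating CPU and memory while the build runs
top
# have make explain what it is deciding and which commands it launches
make -d -j2 2>&1 | tee make-debug.log
# trace every process make spawns; look for a command repeating endlessly
strace -f -o make-trace.log make -j2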
Basically I can see these possibilities:
It's not make at all, but rather whatever command make is running that's just bringing your system down. You imply it's just compiling C++ code so that seems unlikely unless there's a bug somewhere.
Make is recursing infinitely. Make will rebuild its own makefile, plus any included makefiles, then re-exec itself. If you are not careful defining rules for rebuilding included makefiles, make can decide they're always out of date and rebuild/re-exec forever.
Something else
Hopefully the above hints will set you on a path to discovering what's going on.
Are you sure the project is prepared for parallel compilation? Maybe the prerequisites aren't correctly ordered.
If you build the project with just "make", does the compilation finish? If it gets to the end, it's a target dependency problem.
Anyone know likely avenues of investigation for kernel launch failures that disappear when run under cuda-gdb? Memory assignments are within spec, launches fail on the same run of the same kernel every time, and (so far) it hasn't failed within the debugger.
Oh Great SO Gurus, What now?
cuda-gdb spills all shared memory and registers to local memory. So when something runs OK when built for debugging and fails otherwise, it usually means an out-of-bounds shared memory access. cuda-memcheck might help, depending on what sort of card you are using. Fermi is better than older cards in that respect.
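A rough sketch of how I'd check that (file and binary names are placeholders; -lineinfo just makes the reports point at source lines):
# rebuild with line information so memcheck reports map back to source
nvcc -lineinfo -o myapp myapp.cu
# run under cuda-memcheck to catch out-of-bounds global and shared memory accesses
cuda-memcheck ./myapp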
EDIT:
Casting my mind back to the bad old days, I remember having an ornery GT9500 which used to throw similar NV13 errors and have random code failures when running very memory intensive kernels with a lot of shared memory activity. Never when debugging. I put it down to bad hardware and moved on to a GT200, never to see a similar error since. One possibility might be bad hardware. Is this a G92 (9800GT or similar)?
cuda-gdb can make some of the CUDA operations synchronous.
Are you reading from memory after it has been initialized?
Are you using streams?
Are you launching more than one kernel?
Where and how does it fail?
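One cheap experiment along these lines: force synchronous launches outside the debugger and see whether the failure point moves (the binary name is a placeholder):
# make every kernel launch synchronous, similar to what cuda-gdb effectively does,
# so the error surfaces at the launch that actually caused it
CUDA_LAUNCH_BLOCKING=1 ./myapp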