Maven application install error: Not enough space on device, virtual disk

I have a Maven application with hundreds of tests, some of which access a PostgreSQL database. When I try to install it with mvn clean install, around 40 tests fail with the error message PSQL ERROR: could not write to file "base/26718/22294": No space left on device (the numbers in quotes vary between tests, of course, but the rest of the message is identical). Each time I attempt the install, the system also shows the warning The volume "Filesystem root" has only 737.0 MB disk space remaining during the process (again, the number varies between attempts).
What I suspect is that since I'm running this on a virtual machine with dynamically allocated storage, the hypervisor allocates just enough space for what is currently on the machine and somehow doesn't account for the files the build writes, so it throws an error.
When I examine the disk space it reports 22 GB, but the virtual machine is supposed to have 100 GB.
I tried downloading and then deleting various larger files such as pictures and videos, in the hope that some 'empty space' would be left over, but the result is the same; only the count of failed tests changes. The tests pass on GitLab.
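If the filesystem reports 22 GB while the virtual disk was created as 100 GB, one likely explanation is that the guest partition and filesystem were never grown to fill the disk; dynamic allocation only affects how much space the image takes on the host, not what the guest sees. A minimal diagnostic sketch, assuming a Linux guest (device names are placeholders to adjust for your layout, and growpart comes from the cloud-guest-utils package):
# compare the block device size with the mounted filesystem
lsblk          # does the disk show ~100G while the root partition shows ~22G?
df -h /        # what the root filesystem itself reports
# if the partition is smaller than the disk, grow it and the filesystem
# (take a backup first)
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1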

Slow guest performance after live snapshot via virsh (QEMU/KVM)

I came across a weird problem for which I cannot find a solution elsewhere. Maybe you can help me.
I have a system running Ubuntu 20 LTS that hosts six guests (four Ubuntu 20 LTS and two Windows Server 2019), and they run quite fast up to the point where I take live snapshots. The guests run on QEMU/KVM with QCOW2 files, and I use virsh to manage these virtual systems.
I take the live snapshots (without the RAM state) of the guests with the following command:
virsh snapshot-create-as $VM --no-metadata $timestamp --disk-only --atomic
This almost immediately snapshots all the virtual disks of a particular guest and creates new delta files to which the differences are written. For all guests and all disks, I then have the following structure:
base <- snapshot <- live_delta_file
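(As a side note not in the original post: a backing chain like this can be verified with qemu-img; the file name below is a placeholder.)
qemu-img info --backing-chain live_delta_file.qcow2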
After copying away the snapshots, I commit them to their base files with the following command:
virsh blockcommit $currentVM $disk --base $path_to_base --top $path_to_snapshot --verbose --wait
After that, I delete the snapshots and all of this works without producing any errors.
However, after taking the snapshots, and while all the guests are still running without any errors, each VM is horribly slow for any command in the shell. Furthermore, I can see via top on the host that the RAM usage of each guest has dropped dramatically (e.g. from 25 GB to 2.5 GB for the Windows Server 2019 with GUI).
It seems that all the cached data was removed from RAM, which of course strongly reduces performance. However, taking the snapshots (without the --quiesce parameter) should not lead to this behavior, should it? After a reboot of all the guests, everything is fast again (while nothing was changed with respect to the snapshot structure).
Do you have an idea which configuration or situation can lead to such a behavior?
Thank you in advance!
----- EDIT -----
It seems that the actual problem is copying away the files via scp/rsync after the snapshots are taken, because one of these programs (rsync?) eats up all the memory on the host, leading to parts of the guests' RAM being swapped to disk.
Even after the copy process has finished, the copied data seems to remain in the host's page cache, and the guests keep using parts of the host's swap space.
This of course explains the bad performance of the guests. It can be fixed by clearing the page cache and the swap space with the following commands:
sync; echo 1 > /proc/sys/vm/drop_caches
swapoff -a; swapon -a
But be careful: clearing the swap space can take several hours, during which the operation of the guests is paused. Either do it at night when they are not in use, or solve the problem at its root, i.e. at the rsync/scp step.
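One way to attack it at the root (my suggestion, not from the original post) is to keep the copy from flooding the page cache in the first place. A sketch, assuming the nocache wrapper (from the nocache package) is available and the paths are placeholders:
# copy the snapshots without letting them pile up in the page cache,
# and rate-limit rsync so it does not saturate the host
# (unit suffixes for --bwlimit need rsync >= 3.1)
nocache rsync --bwlimit=100m /var/lib/libvirt/images/snapshots/ backup-host:/backup/
# scp alternative: -l limits bandwidth in Kbit/s
scp -l 800000 snapshot.qcow2 backup-host:/backup/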
I recognize these symptoms from my own setup.
I solved it by making the caching and swapping less aggressive, like so.
Maybe it can help you too.
(from /etc/sysctl.conf)
# Make the kernel less swappy
vm.swappiness = 5
# Make the kernel free cached dentries and inodes sooner
vm.vfs_cache_pressure = 200
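(These settings take effect after reloading sysctl, or on the next boot:)
sudo sysctl -p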

Running Redis on Windows as a service

I have followed all the suggestions I can find.
I am running the current version of Redis on Windows 2008.
It runs fine from the command line.
I can install the service, but it doesn't run.
I do...
redis-server --service-install redis.windows.conf
and get "redis successfully installed as a service"
Then I try to start the service doing...
redis-server --service-start redis.windows.conf --loglevel verbose
and get Redis service failed to start
I have made sure the .NET Framework 4.5.2 is installed, I have tried with the firewall off, and I have played with the security settings on the folder.
Anyone have any ideas?
(Merry Christmas all)
Start the Redis server from the command line instead of as a service and it will display a more useful error message. If you are just using the default configuration, it is most likely a problem with the maxmemory/maxheap configuration.
C:\redis>redis-server.exe redis.windows.conf
[1576] 04 Feb 10:32:54.172 #
The Windows version of Redis allocates a memory mapped heap for sharing with
the forked process used for persistence operations. In order to share this
memory, Windows allocates from the system paging file a portion equal to the
size of the Redis heap. At this time there is insufficient contiguous free
space available in the system paging file for this operation (Windows error
0x5AF). To work around this you may either increase the size of the system
paging file, or decrease the size of the Redis heap with the --maxheap flag.
Sometimes a reboot will defragment the system paging file sufficiently for
this operation to complete successfully.
Please see the documentation included with the binary distributions for more
details on the --maxheap flag.
Redis can not continue. Exiting.
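A sketch of the second workaround the message suggests (the 512mb value is illustrative, not from the original answer; the exact size format is described in the documentation bundled with the binaries): cap the Redis heap with --maxheap, either when running from the command line or when installing the service:
C:\redis>redis-server.exe redis.windows.conf --maxheap 512mb
C:\redis>redis-server.exe --service-install redis.windows.conf --maxheap 512mb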
In my case the default command-line config did not have logging enabled while the service one did, and there was nowhere it complained about that.
Try creating the directory ./Logs.
Old question, but I came across it while trying to get a Win7 x64 install working using the Redis-x64-2.8.2101 binaries. I couldn't get it to start despite fiddling with various options: no meaningful error was given when run with the config, and only an apparently spurious disk-space error when run natively.
There appears to be a related issue on GitHub, linked here for future benefit: https://github.com/MSOpenTech/redis/issues/267

5.6 GB not enough for Cloudera?

I am running Cloudera Hadoop on my laptop in an Oracle VirtualBox VM.
I have given it 5.6 GB of my 8 GB of RAM, and six of my eight cores as well.
And still I am not able to keep it up and running.
Even without load, services will not stay up, and when I try a query, at least Hive goes down within 20 minutes. Sometimes they go down like dominoes, one after another.
More memory seemed to help somewhat: with 3 GB and all services enabled, Hue was blinking red whenever it managed to come up at all. And after a reboot it would take 30-60 minutes before I managed to get the system up enough to even try running anything on it.
There have been two sensible notes (that I have managed to find):
- A warning about swapping.
- A crash note when the system used 26 GB of virtual memory, which was not enough.
My dataset is less than one megabyte, so it is hard to understand why the system would go up to dozens of gigabytes; but whatever the reason was, it has passed: the system now runs more steadily around the 5.6 GB I have given it, after closing down a few services (see my answer to myself).
Still, it is only more stable, not stable. Right after writing this I got a swapping warning and Hive went down again. What could be the reason for more-or-less all Hadoop services going down when the VM starts to swap?
I don't have enough reputation to post the picture here, but when Hive went down again the system was swapping 13 pages/second and using 5.9 GB of its 5.6 GB. So basically my system starts crashing more-or-less right after it starts to swap: "428 pages were swapped to disk in the previous 15 minute(s)".
I have used the default installation options as far as the hard drive is concerned.
The only addition is a shared folder between Windows and the VM. It works somewhat strangely, locking files all the time, so I use it like FTP, only for passing files from one system to another. I can go days without using it and the services still crash, so that is not the cause either.
Now that the system is mostly up, services still crash about twice a day. Service Monitor and Hive are about even in their crash frequency; after those come Activity Monitor and Event Server, which always seem to crash together. I believe YARN crashes as well, but it gets back up on its own. Last time Hive crashed first, and it was followed by Service Monitor, Hive (a second time), Activity Monitor and Event Server.
Since swap lives on disk, perhaps the problem is with the disk:
# cat /etc/fstab
# swapoff -a
# badblocks -v /dev/VolGroup/lv_swap
Checking blocks 0 to 8388607
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
# badblocks -vw /dev/VolGroup/lv_swap
Checking for bad blocks in read-write mode
From block 0 to 8388607
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
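(One caveat not in the original post: the read-write badblocks test overwrites the device, wiping the swap signature, so the swap area has to be re-created before it can be enabled again:)
# mkswap /dev/VolGroup/lv_swap
# swapon -a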
So nothing wrong with swap disk and I have not noticed any disk error anywhere else either.
Note that you could also check the file system from the Windows side. But I expect that if you let Windows "fix" your Linux file system, you have a good chance of destroying your Linux install, so I did my checks somewhat pessimistically; as far as I know, the commands above are safe to run against a disabled swap device.
About half of the services kept going down, so giving more specifics would be a long story.
I succeeded in getting the system more stable by shutting down flume, hbase, impala, ks_indexer, oozie, spark and sqoop, and by giving more memory to some of the remaining services that complained they had not been given enough.
I also fixed a couple of things on the Windows side; I am not sure which of them helped:
- MsMpEng.exe kept my hard drive busy. I didn't have permission to kill it, but I decreased its priority to the lowest possible.
- CcmExec.exe got into a loop reading my DVD forever. I solved this by taking the DVD out of the drive; later on I killed the process tree to keep it from interfering for a while.
I found these using the Windows Resource Monitor.
The QuickStart VM requires 4 GB: http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html You should use that.
I am not clear whether you are using the QuickStart VM though. It's set up to run just the essential services and tuned to conserve memory rather than exploit lots of memory.
It sounds like you are running your own installation, on one virtual machine, on your Windows machine. You may be running an entire cluster's worth of services on one desktop machine. Each of these services has master, worker processes, monitoring processes, etc. You don't need most of them.
You have also probably left the memory settings at defaults suited to a server-class machine with 16+ GB of RAM. Remember that these services usually run across many machines, not all on one.
Finally, you're clearly swapping, and that makes things incredibly slow. Remember this is all through a VM too!
Bottom line, use the QuickStart VM if you really want a 1-machine cluster tuned correctly. If you want a real cluster or more services, you need more hardware.
Also consider: cloudera.com/live contains a full CDH 5.1 cluster + sample data, running on demand on AWS. Of course, the advantage of the VM is that you can BYOD, but if you're simply looking for a hands-on Hadoop experience, Live is a great option.

Error occurred during initialization of VM Could not reserve enough space for object heap errorlevel=1

On launching Jmeter, getting below error:
Error occurred during initialization of VM Could not reserve enough space for object heap errorlevel=1.
Please help me resolve this. I am using a laptop with 4 GB RAM, a 64-bit OS, and an Intel Core i5.
There could be a lot of processes running on your laptop; that is why you get this error.
Stop all the unwanted programs.
Check the HEAP size setting in the jmeter.bat file and reduce it (128m, 512m, 1024m, ...).
You need to do the following:
Go to the JMETER_HOME/bin folder.
Open jmeter.bat (assuming you are using Windows).
Find "set HEAP=-Xms512m -Xmx512m" and change the heap size accordingly.
Save, close, and relaunch JMeter.
You might need more heap for simulating more load.
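For example, a sketch of the edit (the values are illustrative; pick sizes that fit the free RAM on the machine):
rem in JMETER_HOME/bin/jmeter.bat
set HEAP=-Xms256m -Xmx1024m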

jRuby Zip out of Memory

We have a small utility, written in JRuby, that finds unused items on our server, zips them up, and then moves them. When we run it on the actual servers that need cleaning up, it runs out of memory before it can complete the operation. The Java heap is as high as we can get it while remaining stable on 32-bit, around 1800m max heap size, and we can't move to 64-bit at this time. Our main application runs on the same servers, and we would like to avoid shutting it down. The zips the system creates are 800 MB plus. Is there any way to do this without holding the entire zip file in memory?
Can you execute zip via the command line?
You may also want to look at pbzip2; you will still need tar to archive multiple files, though.
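A sketch of that pipeline (the paths are placeholders), which streams the data through small buffers instead of building the archive in the JVM heap, and which JRuby could invoke via system or backticks:
# tar streams the files; pbzip2 compresses in parallel on the way out
tar -cf - /path/to/unused/items | pbzip2 -c -p4 > archive.tar.bz2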
