Slow guest performance after live snapshot via virsh (QEMU/KVM) - performance

I came across a weird problem for which I cannot find a solution elsewhere. Maybe you can help me.
I have a system running Ubuntu 20.04 LTS which hosts six guests (four Ubuntu 20.04 LTS and two Windows Server 2019), and they run quite fast until the point where I take live snapshots. The guests run on QEMU/KVM with QCOW2 disk files, and I use virsh to manage them.
I take the live snapshots (without the RAM state) of the guests with the following command:
virsh snapshot-create-as $VM --no-metadata $timestamp --disk-only --atomic
This almost immediately snapshots all virtual disks of a particular guest and creates new delta files to which the subsequent changes are written. For every guest and every disk I then have the following structure:
base <- snapshot <- live_delta_file
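For reference, the backing chain can be verified from the host at any time; the image path below is just an example, and -U lets qemu-img read an image that is currently in use by a running guest:
virsh domblklist $VM
qemu-img info -U --backing-chain /var/lib/libvirt/images/example_guest.qcow2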
After copying away the snapshots, I commit them to their base files with the following command:
virsh blockcommit $currentVM $disk --base $path_to_base --top $path_to_snapshot --verbose --wait
After that, I delete the snapshots and all of this works without producing any errors.
However, after taking the snapshots, and while all the guests are still running without any errors, each VM is horribly slow for any command in the shell. Furthermore, I can see via top on the host that the RAM usage of each guest has dropped dramatically (e.g. from 25 GB to 2.5 GB for the Windows Server 2019 guest with GUI).
It seems that all the cached data was removed from RAM, which of course strongly reduces performance. However, taking the snapshots (without the --quiesce parameter) should not lead to this behavior, should it? After a reboot of all the guests, everything works quite fast again (while nothing was changed with respect to the snapshot structure).
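One way to check whether the guests' memory itself has been swapped out on the host (rather than only caches being dropped) is to look at the VmSwap value of each QEMU process; this is just a sketch and assumes the processes are named qemu-system-x86_64:
for pid in $(pidof qemu-system-x86_64); do echo "PID $pid:"; grep VmSwap /proc/$pid/status; done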
Do you have an idea which configuration or situation can lead to such a behavior?
Thank you in advance!
----- EDIT -----
It seems that the actual problem is copying the files away via scp/rsync after the snapshots were taken, because one of these programs (rsync?) eats up all the memory on the host, which leads to parts of the guests' RAM being swapped to disk.
Even after the copy process has finished, the copied data seems to remain in the host's page cache, and the guests keep using parts of the host's swap space.
This of course explains the bad performance of the guests. It can be fixed by clearing the page cache and the swap space by using the following commands:
sync; echo 1 > /proc/sys/vm/drop_caches
swapoff -a; swapon -a
But be careful: clearing the swap space can take several hours and effectively pauses the guests while it runs. Either do it at night when the guests are not used, or solve the problem at its root, i.e. at the rsync/scp step.
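As a sketch of what fixing it at the rsync/scp step could look like, the copy can be throttled and run at idle I/O priority so it does not push the guests' memory out; the bandwidth limit and paths below are only examples:
ionice -c3 nice -n 19 rsync --bwlimit=50000 /var/lib/libvirt/images/*-snapshot.qcow2 backup-host:/backups/
# optionally prefix rsync with nocache (a separate package) to keep the copied data out of the host's page cache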

I recognize your experience.
I solved it by making the caching and swapping less aggressive, like so.
Maybe it can help you too.
(from /etc/sysctl.conf)
# Make the kernel less swappy
vm.swappiness = 5
# Make the kernel free cached dentries and inodes sooner
vm.vfs_cache_pressure = 200
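If it helps, the same values can be applied at runtime without a reboot:
# apply immediately
sysctl -w vm.swappiness=5
sysctl -w vm.vfs_cache_pressure=200
# or reload everything from /etc/sysctl.conf
sysctl -p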

Related

Master Node becomes very slow

Recently the master node of my ICP 2.1.0.1 environment (built on OpenStack VMs) became very slow. Actually it is the Linux VM itself that is slow, not only the ICP product; even simple, general Linux commands (ls, pwd, cd, etc.) are slow.
The thing is, I wasn't even using this environment: no workload, the system was completely idle.
I used top to monitor CPU usage but didn't find any long-running processes consuming much time.
How does that happen?
Note that this same issue has occurred at least twice. I just set up the environment and left it there.
Although the root cause has not yet been identified, simply restarting the cluster solved the issue:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_cluster/restart_cluster.html
Restarting Docker takes several hours to finish, but afterwards the node becomes fast again.
I'm not sure whether the same issue will happen again in the future; I'm monitoring it.
Note that I didn't reboot the Linux VM, because after rebooting Linux last time some Docker containers could not start up successfully, which led me to check the following:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/troubleshoot/restart_master_console.html
Since the ICP 2.1.0 release some new features have been added, so you need to ensure your VM hardware configuration matches the hardware requirements listed at the link below. Thanks.
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/supported_system_config/hardware_reqs.html

Docker Run Time Statistics ( Benchmarks)

I know there are lots of Docker experts around, but I have spent considerable time trying to find facts and figures about Docker's runtime performance and unfortunately could not get any concrete answers. Let me start by telling you my system's configuration:
(a) Running CentOS 6.5 on a machine with 48 GB RAM, a 1 TB disk and 12 CPU cores.
(b) I have built a Docker image that is almost 6.5 GB in size.
Below are my questions; if someone can answer them, it will benefit other readers too:
(a) With the given configuration, how many containers can I run in parallel without breaking any functionality?
(b) Assume I have two images of 3.5 GB each: is it better to run multiple smaller images, or do we get better performance from one big image?
(c) What is the best filesystem option to use with Docker?
EDIT: more information
(d) Actually I'm trying to put many compilers inside a container in order to give users the ability to compile their code online. This tool is under development and will replace my existing website compileonlone.com. Things are going fine; I have built two images with a few compilers in each. I'm able to run around 250 containers successfully, and after that I start getting "too many open files" errors. At 250 containers my RAM usage reaches somewhere around 40 GB and CPU utilization is around 50%.
The main problem I'm facing is removal of old containers. A user comes, compiles his code and then goes away, so I need to remove those containers after a certain period of time. But when I try to remove such stopped containers using docker rm -v, it slows down the main Docker process (the docker -d daemon listening at /var/run/docker.sock) and it almost hangs. I'm not sure whether there is another way to clean up these containers or whether I have hit a bug. Here are the details of my Docker setup:
# docker info
Containers: 1016
Images: 41
Storage Driver: devicemapper
Pool Name: docker-0:20-258-pool
Pool Blocksize: 64 Kb
Data file: /var/lib/docker/devicemapper/devicemapper/data
Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 17820.7 Mb
Data Space Total: 102400.0 Mb
Metadata Space Used: 102.4 Mb
Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.17.2-1.el6.elrepo.x86_64
Operating System: <unknown>
WARNING: No swap limit support
If someone can help me find the fastest way to delete old containers, that would be great. Simple shell scripts and the like are not working. I have already tried something like:
#docker rm -v $(docker ps -a |grep Exited | awk '{print $1}')
but it completely slows down the main Docker process, which is unable to create new containers while this removal is running.
Thanks for taking the time to answer these questions; it will help me as well as many others in moving ahead with Docker.
a): A container is like a process. This question is like asking "how many processes can I run in parallel". It is not answerable without knowing what the processes are doing. Please add this information to your question.
b) Both 3.5GB and 6.5GB are very large for a Docker image. Best practice is to put one application in one container: if you have an application that size, then great. If not, maybe you have put your application's data into the image. This is not a good idea because the layered filesystem is slower than a regular filesystem, and you won't be wanting any of the features of layering or snapshotting on your transactional data.
The documentation on managing data explains how to mount a regular disk so it is accessible from your containers.
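As a minimal sketch of that approach (the host path and image name here are made up for illustration), the data lives on the host and is bind-mounted into the container at run time:
docker run -d -v /srv/compile-data:/data my-compiler-image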
Edit, after more information was supplied
d) Using up RAM implies the containers are still running. If there is some way within the logic of your site to know when a container is no longer needed you can docker kill it, then docker rm to remove the disk storage. Or docker rm -f does those two operations in one.
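For removing the stopped ones in bulk, something like the following may also help, assuming a Docker version whose docker ps supports status filters:
# list only exited containers and remove them together with their volumes
docker ps -a -q -f status=exited | xargs -r docker rm -v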
After a lot of R&D and discussion with many experts, I found a solution to delete containers at lightning speed. It's simple: run your Docker daemon with the dm.blkdiscard=false option, as follows.
docker -d --storage-opt dm.blkdiscard=false
By the way, here is what I have developed; it needs to create and delete containers at high speed:
http://codingground.tutorialspoint.com
Hope this will help many others.

5.6 GB not enough for Cloudera?

I am running Cloudera Hadoop on my laptop and Oracle VirtualBox VM.
I have given it 5.6 GB of my 8 GB of RAM, and six of my eight cores as well.
And still I am not able to keep it up and running.
Even without load, the services would not stay up and running, and when I try a query, at least Hive goes down within 20 minutes. Sometimes they go down like dominoes, one after another.
More memory seemed to help somewhat: with 3 GB and all services, Hue was blinking with red warnings whenever it managed to come up at all. And after rebooting, it would take 30-60 minutes before I managed to get the system up enough to even try running anything on it.
There have been two sensible notes (that I have managed to find):
- A warning about swapping.
- A crash note saying the system used 26 GB of virtual memory, which was not enough.
My dataset is less than one megabyte, so it is hard to understand why the system would go up to dozens of gigabytes. But whatever the reason was, it has passed: the system now runs more steadily within the 5.6 GB I have given it, after shutting down a few services (see my answer to myself below).
Still, it is only somewhat more stable. Just now I got a swapping warning and Hive went down again. What could be the reason for more or less all Hadoop services going down when the VM starts to swap?
I don't have enough reputation to post the picture here, but when Hive went down again it was swapping 13 pages/second and using 5.9 GB out of the 5.6 GB available. So basically my system starts crashing more or less right after it starts to swap. "428 pages were swapped to disk in the previous 15 minute(s)"
I have used the default installation options as far as the hard drive is concerned.
The only addition is a shared folder between Windows and the VM. That works somewhat strangely, locking files all the time, so I use it just like FTP and only for passing files from one system to another. Thus I can go days without using it, but the services still crash, so that is not the cause either.
Now that the system is mostly up, services still crash about twice a day: Service Monitor and Hive are quite even in their crash frequency. After those come Activity Monitor and Event Server, which always appear to crash together. I believe YARN crashes as well, but it comes back up on its own. Last time Hive crashed first, and it was then followed by Service Monitor, Hive (a second time), Activity Monitor and Event Server.
As swap lives on disk, perhaps the problem is with the disk:
# cat /etc/fstab
# swapoff -a
# badblocks -v /dev/VolGroup/lv_swap
Checking blocks 0 to 8388607
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
# badblocks -vw /dev/VolGroup/lv_swap
Checking for bad blocks in read-write mode
From block 0 to 8388607
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
So there is nothing wrong with the swap disk, and I have not noticed any disk errors anywhere else either.
Note that you could also check the file system from the Windows side. But I expect that if you let Windows "fix" your Linux file system, you have a good chance of destroying your Linux installation, so I did my checks somewhat conservatively from within Linux; AFAIK the commands above are safe to execute.
About half of the services kept going down, so giving more specifics would be a long story.
I succeeded in getting the system more stable by shutting down flume, hbase, impala, ks_indexer, oozie, spark and sqoop, and by giving more memory to some of the remaining services that complained they had not been given enough.
I also fixed a couple of things on the Windows side; I am not sure which of these helped:
- MsMpEng.exe kept my hard drive busy. I didn't have permission to kill it, but I decreased its priority to the lowest possible.
- CcmExec.exe got into a loop on my DVD and kept reading it forever. I solved this by taking the DVD out of the drive. Later on I killed the process tree to keep it from interfering for a while.
I found these using Windows resource manager.
The VM requires 4GB: http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html You should use that.
I am not clear whether you are using the QuickStart VM though. It's set up to run just the essential services and tuned to conserve memory rather than exploit lots of memory.
It sounds like you are running your own installation, on one virtual machine, on your Windows machine. You may be running an entire cluster's worth of services on one desktop machine. Each of these services has master processes, worker processes, monitoring processes, etc. You don't need most of them.
You have also probably left the memory settings at defaults suitable for a server-class machine with 16+ GB of RAM. Remember that these services usually run across many machines, not all on one.
Finally, you're clearly swapping, and that makes things incredibly slow. Remember this is all through a VM too!
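A quick way to confirm that from inside the VM, using only standard Linux tools, is to watch the si/so columns of vmstat and the swap line of free:
vmstat 5
free -m
Sustained non-zero si/so values mean the VM is actively swapping.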
Bottom line, use the QuickStart VM if you really want a 1-machine cluster tuned correctly. If you want a real cluster or more services, you need more hardware.
Also consider: cloudera.com/live contains a full CDH 5.1 cluster + sample data, running on demand on AWS. Of course, the advantage of the VM is that you can BYOD, but if you're simply looking for a hands-on Hadoop experience, Live is a great option.

Is there a way to cap the file size of slony log shipping files?

I am working with a SuSE machine (cat /etc/issue: SUSE Linux Enterprise Server 11 SP1 (i586)) running Postgresql 8.1.3 and the Slony-I replication system (slon version 1.1.5). We have a working replication setup going between two databases on this server, which is generating log shipping files to be sent to the remote machines we are tasked to maintain. As of this morning, we ran into a problem with this.
For a while now, we've had strange memory problems on this machine - the oom-killer seems to be striking even when there is plenty of free memory left. That has set the stage for our current issue to occur - we ran a massive update on our system last night, while replication was turned off. Now, as things currently stand, we cannot replicate the changes out - slony is attempting to compile all the changes into a single massive log file, and after about half an hour or so of running, it trips over the oom-killer issue, which appears to restart the replication package. Since it is constantly trying to rebuild that same package, it never gets anywhere.
My first question is this: Is there a way to cap the size of Slony log shipping files, so that it writes out no more than 'X' bytes (or K, or Meg, etc.) and after going over that size, closes the current log shipping file and starts a new one? We've been able to hit about four megs in size before the oom-killer hits with fair regularity, so if I could cap it there, I could at least start generating the smaller files and hopefully eventually get through this.
My second question, I guess, is this: Does anyone have a better solution for this issue than the one I'm asking about? It's quite possible I'm getting tunnel vision looking at the problem, and all I really need is -a- solution, not necessarily -my- solution.

How do I improve Windows Subversion client update performance?

How do I improve Subversion client update performance? It appears to be disk bound on the client.
Details:
CollabNet Windows client version 1.6.2 (r37639)
Windows XP SP2
3 GB RAM with PF Usage around 1 GB and System Cache of 1.1 GB.
Disk has write caching enabled
Update takes 7-15 minutes (when very little to update).
Checkout has 36,083 directories/files (from svn list)
Repository has 58,750 revisions.
Checkout takes about 2.7 GB
Perf monitor shows % Disk Write time stays near 90% during update.
Max Disk Read Bytes/sec got up to 12.8M and write got up to 5.2M
CPU, paging file usage, and network usage are all low.
Watching the server performance seems to show that it isn't a bottleneck.
I'm especially interested in answers besides getting a faster disk (especially configuration changes).
Updates from some of the suggestions:
I need the whole thing so sparse directories won't work.
Another client (TortoiseSVN) takes 7 minutes also
TortoiseSVN icon overlays have been configured so they don't cause the problem.
Anti-virus is configured to skip that directory, so it isn't causing the problem.
I experience exactly the same thing. We recently replaced Perforce with svn, but if we cannot overcome the performance problems on Windows we must consider another tool.
Using svn 1.6.6, Win XP and Vista clients. RedHat server.
My observations match yours:
Huge disk-write activity.
Antivirus not a bottleneck.
No matter which svn clients are used.
No server or network bottleneck.
Complementary info
Operations are more than 3 times faster on:
Linux (Ubuntu).
Linux (Ubuntu) running on VirtualBox at Win Vista host.
Win XP running on VMWare at RedHat host.
Do you need every bit of the repository on your working copy? If you truly only care about particular portions of the tree, look into Subversion's Sparse Directories (a.k.a. "Sparse Checkouts") feature. It allows you to manipulate your working copy so it only contains those directories of interest.
Just as an example, you might use this to prune documentation, installer-related files, etc. Depending on what you truly need on your local machine, embracing this approach could make a serious dent in your wait times.
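A rough sketch of how that looks on the command line (the repository URL and directory names are placeholders):
svn checkout --depth immediates http://svn.example.com/repo/trunk wc
cd wc
svn update --set-depth infinity src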
Try svn client version 1.5.x. It helped me on my Vista laptop. The 1.6.x versions are extremely slow.
This is more likely to be your network and the amount of data moved as well as your client. Are you using Tortoise? I find it to be a bit slow myself when moving that much data!
Are you using TortoiseSVN? If so, the Icon Overlays do slow down operations. If you go to TortoiseSVN Settings/Icon Overlays there are several settings you can tweak to control the level to which you want to use the Overlays, including turning them off completely. See if that affects your performance.
Do you run a virus checker that uses on-access scanning? That can really make it crawl. If so, turn it off and see if that helps. Most scanners will have a way to exclude specific directories if that helps.
Nobody seems to be pointing out the one reason that I often consider a design flaw. Subversion creates a second "pristine" copy of the checkout for offline operations. If you're checking out 4G of files, it's actually writing 8G to disk.
Compare a checkout to an export. That will show you the massive difference when writing those second copies.
There's nothing you can do about that.
Upgrade to svn 1.7
From Discussion of Slow Performance of SVN Update:
The update process in svn 1.6 goes something like this:
search the entire working copy, to see what's there at the moment, and locking it so no one changes the answer during the next steps
tell that to the server
receive from the server whatever new stuff you need, applying the changes to the files as you go
recurse over the entire working copy again, unlocking it
If there are many directories and files, steps 1 and 4 can take up a lot of time. This would be consistent with your observation of long delays with no network traffic.
The working copy format was changed in svn 1.7. All metadata is now stored in an SQLite database in the root folder of the working copy, and there is no longer any need to perform steps 1 and 4, which consumed most of the time during svn update.
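If you want to keep an existing working copy, a 1.7+ client can convert it in place with the svn upgrade subcommand (the path is just an example):
svn upgrade C:\work\my-checkout
svn update C:\work\my-checkout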
