~/.git on OS X weighing more than 27 GB

I work on a MacBook Air with 128 GB of storage. After a month of usage, it began reporting that the disk was full quite often. When I ran OmniDiskSweeper to check for potential data hogs, I found that the ~/.git folder takes up 27.4 GB of precious disk space. I tried to check what sort of data it is, but any git operation on the folder returns
fatal: bad default revision 'HEAD'
I ran git gc --aggressive --prune and deleted more than 250 duplicates, but this has not affected the disk space usage in the slightest. I have only one git (1.8.5.2) installed, at /usr/bin/git. Is there a way to compress or delete the folder without wrecking the whole system?
Please advise.
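Before deleting anything, it may help to see what the folder actually contains. A rough inspection along these lines should work, assuming the stray repository really is ~/.git (the second command may still fail if the repository is as broken as the error suggests):
# Show the largest items inside ~/.git (sizes in KB, largest first)
du -sk ~/.git/* | sort -rn | head -n 10
# Ask git for object statistics without cd'ing into the folder
git --git-dir="$HOME/.git" count-objects -v
If the bulk sits in objects/, some tool most likely ran git init in your home directory at some point and has been storing data there ever since.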

Related

Slow guest performance after live snapshot via virsh (QEMU/KVM)

I came across a weird problem for which I cannot find a solution elsewhere. Maybe you can help me.
I have a system running Ubuntu 20.04 LTS which hosts six guests (four Ubuntu 20.04 LTS and two Windows Server 2019), and they run quite fast up to the point where I take live snapshots. I'm running the guests on QEMU/KVM with QCOW2 files, and I'm using virsh to manage these virtual systems.
I take the live snapshots (without the RAM state) of the guests with the following command:
virsh snapshot-create-as $VM --no-metadata $timestamp --disk-only --atomic
This almost immediately snapshots all the virtual disks of a particular guest and creates new delta files to which the differences are written. I then have, for all guests and for all disks, the following structure:
base <- snapshot <- live_delta_file
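For what it's worth, the backing chain of such an overlay can be inspected with qemu-img; the file name below is just a placeholder:
qemu-img info --backing-chain /var/lib/libvirt/images/vm01-delta.qcow2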
After copying away the snapshots, I commit them to their base files with the following command:
virsh blockcommit $currentVM $disk --base $path_to_base --top $path_to_snapshot --verbose --wait
After that, I delete the snapshots and all of this works without producing any errors.
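For reference, the disk target names to use for $disk (e.g. vda or sda) can be listed per guest with:
virsh domblklist $currentVM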
However, after taking the snapshots, and while all the guests are still running without any errors, each VM is horribly slow with respect to any command in the shell. Furthermore, I can see via top on the host that the RAM usage of each guest has dropped dramatically (e.g., for the Windows Server 2019 with GUI, from 25 GB to 2.5 GB).
It seems that all the cached data was removed from RAM, which of course strongly reduces performance. However, taking the snapshots (without the --quiesce parameter) should not lead to this behavior, should it? After a reboot of all the guests, everything again works quite fast (while nothing was changed with respect to the snapshot structure).
Do you have an idea which configuration or situation could lead to such behavior?
Thank you in advance!
----- EDIT -----
It seems that the actual problem is copying away the files via scp/rsync after the snapshots were taken, because one of these programs (rsync?) eats up all the memory on the host, leading to parts of the guests' RAM being swapped to disk.
Even after the copy process has finished, the copied data seems to remain in the host's page cache, and the guests keep using parts of the host's swap space.
This of course explains the bad performance of the guests. It can be fixed by clearing the page cache and the swap space using the following commands:
sync; echo 1 > /proc/sys/vm/drop_caches
swapoff -a; swapon -a
But be careful: clearing the swap space can take several hours, during which the operation of the guests is paused. Either it should be done at night when they are not used, or the problem should be solved at its root, i.e., at the rsync/scp step.
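One way to tackle it at the root, assuming the copy job really is the culprit, is to throttle rsync and run it at idle I/O priority. The paths and the 50 MB/s limit below are only examples:
# --bwlimit is given in KB/s: 51200 KB/s = 50 MB/s
ionice -c3 rsync -a --bwlimit=51200 /var/lib/libvirt/images/snapshots/ backup-host:/backup/vms/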
I recognize this from my own experience.
I solved it by making the caching and swapping less aggressive, like so.
Maybe it can help you too.
(from /etc/sysctl.conf)
# Make the kernel less swappy
vm.swappiness = 5
# Make the kernel free cached dentries and inodes sooner
vm.vfs_cache_pressure = 200
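To apply the settings without a reboot, something like this should do:
sudo sysctl -p /etc/sysctl.conf
# Verify the new values
sysctl vm.swappiness vm.vfs_cache_pressure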

Impossible to update to macOS Catalina

I tried to update my Mojave to Catalina in order to update Xcode... but it's impossible. I have 27 GB of free space, but every time I get this error:
an error occurred while installing the selected updates
And after each error I have 8 GB less, so if I keep trying I will eventually have no free space left. I don't know how to delete this failed download from my system storage.
I found that the 8 GB are added in /private/var/folders (never touch these folders manually). Just reboot into safe mode; this will erase the temporary files...
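If you want to see which of those cache directories actually grew before rebooting, a read-only check along these lines should be safe:
sudo du -sk /private/var/folders/* | sort -rn | head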
As found here:
A clean install uses up around 20 GB of storage space. In addition, you need to allow for space for your user data, applications, and future updates. As if that weren’t enough, you should keep at least 10 to 15 percent of the startup drive free to ensure adequate performance. I normally suggest a good deal more free space than that, but here we’re just talking about a minimum to ensure you can install and use macOS Catalina.
So the solution may be to free up 30 GB of space, just to be sure.
You can try this utility to better identify some files to delete.

TortoiseSVN is very slow and uses a huge amount of memory

For some days now, TortoiseSVN has been using lots of memory when I want to commit; it also takes 10-20 minutes before the changed files appear.
In normal use it doesn't use much memory, only when committing or comparing changed files.
As you can see, the memory usage is not normal.
I have already reinstalled the newest version (1.8.10), but it made no difference.
Does anyone have any clue?
(The directory I am working in is 2 GB. This includes the temp data, which is excluded from SVN, and I am working on Windows 7 x64.)
Here is a screenshot of the icon overlay settings I use.
I had the same issue since I updated to TortoiseSVN 1.8.10: excessive amounts of memory were used, and each refresh of the view would increase this amount even further.
The new version 1.8.11 appears to have resolved the issue.

How do I improve Windows Subversion client update performance?

How do I improve Subversion client update performance? It appears to be disk bound on the client.
Details:
CollabNet Windows client version 1.6.2 (r37639)
Windows XP SP2
3 GB RAM with PF Usage around 1 GB and System Cache of 1.1 GB.
Disk has write caching enabled
Update takes 7-15 minutes (even when there is very little to update).
Checkout has 36,083 directories/files (from svn list)
Repository has 58,750 revisions.
Checkout takes about 2.7 GB
Perf monitor shows % Disk Write time stays near 90% during update.
Max Disk Read Bytes/sec got up to 12.8 MB/s and write got up to 5.2 MB/s
CPU, paging file usage, and network usage are all low.
Watching the server performance seems to show that it isn't a bottleneck.
I'm especially interested in answers besides getting a faster disk (especially configuration changes).
Updates from some of the suggestions:
I need the whole thing so sparse directories won't work.
Another client (TortoiseSVN) also takes 7 minutes.
TortoiseSVN icon overlays have been configured so they don't cause the problem.
Anti-virus is configured to skip that directory, so it isn't causing the problem.
I experience exactly the same thing. We recently replaced Perforce with SVN, but if we cannot overcome the performance problems on Windows we must consider another tool.
Using svn 1.6.6, Win XP and Vista clients. RedHat server.
My observations match yours:
Huge disk-write activity.
Antivirus not a bottleneck.
No matter which SVN clients are used.
No server or network bottleneck.
Complementary info:
Operations are more than 3 times faster on:
Linux (Ubuntu).
Linux (Ubuntu) running on VirtualBox at Win Vista host.
Win XP running on VMWare at RedHat host.
Do you need every bit of the repository on your working copy? If you truly only care about particular portions of the tree, look into Subversion's Sparse Directories (a.k.a. "Sparse Checkouts") feature. It allows you to manipulate your working copy so it only contains those directories of interest.
Just as an example, you might use this to prune documentation, installer-related files, etc. Depending on what you truly need on your local machine, embracing this approach could make a serious dent in your wait times.
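A minimal sparse-checkout session might look like this; the URL and the src directory are placeholders:
# Check out only the top-level directory, with no contents
svn checkout --depth empty http://svn.example.com/repo/trunk wc
cd wc
# Pull in just the subtrees you actually work on
svn update --set-depth infinity src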
Try SVN client version 1.5.x. It helped me on my Vista laptop. Versions 1.6.x are extremely slow.
This is more likely to be your network and the amount of data moved as well as your client. Are you using Tortoise? I find it to be a bit slow myself when moving that much data!
Are you using TortoiseSVN? If so, the Icon Overlays do slow down operations. If you go to TortoiseSVN Settings/Icon Overlays there are several settings you can tweak to control the level to which you want to use the Overlays, including turning them off completely. See if that affects your performance.
Do you run a virus checker that uses on-access scanning? That can really make it crawl. If so, turn it off and see if that helps. Most scanners will have a way to exclude specific directories if that helps.
Nobody seems to be pointing out the one reason that I often consider a design flaw. Subversion creates a second "pristine" copy of the checkout for offline operations. If you're checking out 4 GB of files, it's actually writing 8 GB to disk.
Compare a checkout to an export. That will show you the massive difference when writing those second copies.
There's nothing you can do about that.
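The comparison is easy to reproduce yourself (the URL is a placeholder):
svn checkout http://svn.example.com/repo/trunk wc-with-pristines
svn export http://svn.example.com/repo/trunk plain-export
du -sk wc-with-pristines plain-export
The checkout should come out at roughly twice the size of the export; the difference is the pristine copies in the administrative area.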
Upgrade to svn 1.7
From Discussion of Slow Performance of SVN Update:
The update process in svn 1.6 goes something like this:
search the entire working copy to see what's there at the moment, and lock it so no one changes the answer during the next steps
tell that to the server
receive from the server whatever new stuff you need, applying the changes to the files as you go
recurse over the entire working copy again, unlocking it
If there are many directories and files, steps 1 and 4 can take up a lot of time. This would be consistent with your observation of long delays with no network traffic.
The working copy format was changed in SVN 1.7. Now all meta-information is stored in an SQLite database in the root folder of the working copy, and there is no longer any need to perform steps 1 and 4, which consumed most of the time during svn update.
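An existing working copy does not need to be checked out again after installing a 1.7 client; it can be converted in place (the path is a placeholder):
svn upgrade /path/to/working-copy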

SVN/TortoiseSVN painfully slow

I'm experiencing painfully slow operations with one of our SVN repositories/projects.
For example, it's taking 5-10 minutes to revert the changes in one small file (10 KB), or about 40-60 minutes to check out the 100 MB project.
There are about 30 other projects on the same server, some vastly bigger than this one, and none of them perform like this.
One thing to note is that this is a Magento project. It's not very large in terms of disk space, but it has 23k files and 11k folders, and I have read that SVN performs badly when there are lots of little files; is this true? And is there anything I can do to speed things up?
The Subversion working copy performs quite badly when there's a huge number of directories, as in your case. For write operations (even local-only ones) the working copy has to be locked, which means that a lock file is created in every directory (that's 11k file creates), then the action executes, and then those 11k files are deleted again.
Subversion 1.7 is moving to a different working copy format that should resolve these problems. Until then, there are a few tricks you might try to speed things up, like excluding the working copy from your virus scanner, disabling file monitors on the directory (like TortoiseSvnCache), and trying to reduce the total number of directories (perhaps by checking out a few separate working copies).
There is a known issue with the use of the recycle bin with revert which causes slow reverting. Emptying your recycle bin and setting TortoiseSVN not to use it during revert operations both speed up this operation (see http://www.nabble.com/Revert-is-too-slow-td18222196.html).
This has definitely sped up my revert operations.
I experienced extreme slowness with Subversion on Windows after changing my password. I had to delete all directories and files from %APPDATA%\Subversion\auth.
Now SVN is fast as a hare. My slowness occurred via both TortoiseSVN and the command line.
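For anyone trying the same fix: clearing the cached credentials amounts to deleting that directory, for example from a Windows command prompt:
rmdir /s /q "%APPDATA%\Subversion\auth"
You will simply be prompted for your password again on the next server operation.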
SVN is slow if you use NFS (Network File System) for the working copy. This could be your problem.
We faced a similar issue; the problem was TortoiseSVN (version 1.9.7). For example, the repository browser took about 10 minutes to initialize.
We turned off the Show Locks feature and everything was fixed!
Right-click on a folder and select TortoiseSVN > Settings, then General > Dialog 3, and deselect Show Locks.
Also some good hints can be found at http://tigris-scm.10930.n7.nabble.com/Workaround-for-slow-RepositoryBrowser-on-large-repositories-td92324.html
Reverting changes in SVN is a local operation which shouldn't go to the server at all. So it sounds as though the problem is in your working copy of the project.
Try running 'svn cleanup' in the working copy; you may also want to check if you have problems with the hard drive or filesystem.
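For example, pointing the command at the working copy root (the path is a placeholder):
svn cleanup /path/to/working-copy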
Our SVN was running painfully slow through TortoiseSVN, Eclipse and command line. Commits and exports were slow. Our Zend Framework-based PHP projects would take an age to update and popping in a small commit of about three files would take 5-10 minutes.
Our SVN virtual machine (CentOS) only had 700 MB of RAM, which seemed reasonable for a CLI-only Linux box running just Subversion via Apache, and it had been running fine for about a year. We've only got about 20 projects and only three developers.
I've upped it to 1.5 GB of RAM and things are running much faster now, back to our old speeds.
I also suffered a large slowdown after upgrading to TortoiseSVN 1.7.3.
Then I discovered I had a separate install of SVN 1.6.5. I uninstalled both and reinstalled TortoiseSVN and now things are much better. First update of the day in TortoiseSVN is still slow (1-2 minutes), but fast after that.
I have some projects which use the Eclipse IDE. If you capture the Eclipse project directories you get hundreds and hundreds of tiny files, which have the same effect on my projects as you're seeing on yours.
I think that when you check files out SVN does so one at a time which means that projects with huge numbers of files are always going to be slow and there's not much you can do about it (aside from avoiding frequent whole-repository operations).
Making changes to a single file shouldn't be slow though.
You may try the suggestions in another post on Stack Overflow about slow SVN. It could also be due to using a BDB database.
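If the repository does turn out to use the old Berkeley DB backend, one possible migration path is a dump/load cycle into a fresh FSFS repository on the server; the paths are placeholders, and the repository should be taken offline first:
svnadmin create --fs-type fsfs /srv/svn/new-repo
svnadmin dump /srv/svn/old-repo | svnadmin load /srv/svn/new-repo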

Resources