I often run into disk space issues when building Docker images (like the JS error "ENOSPC: no space left on device"). I am used to running docker system prune to clear up some space, but it was needed a bit too often for my liking, and I realised maybe something was not working as expected.
After running docker system prune I have the following docker system df output:
> docker system df
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          102     0        41.95GB   41.95GB (100%)
Containers      0       0        0B        0B
Local Volumes   53      0        5.254GB   5.254GB (100%)
Build Cache     383     0        0B        0B
It seems like there are still 42GB of disk space used by images (or does this refer to some sort of "reserved space" for Docker?). Anyway, if those 42GB are held up somehow, it could very well explain why my disk is getting so full so often.
I am on macOS, and with the above docker system df output, when I open the Docker app > Resources, I see:
Disk image size:
120 GB (81.3 GB used)
Am I missing something?
As #Oo.oO mentioned in his comment, you probably have images that aren't considered "dangling" but that you still don't use for anything and that no container is using. As noted in this answer explaining the difference between a dangling image and an unused one: a dangling image is, plainly put, a previous version of the same image, which will now usually show as:
<none> <none>
while an unused image is just that: unused. It doesn't fall under the "dangling" classification, and that's why your images aren't being deleted. Running the following command will solve the issue:
docker system prune -a
as noted in the help output for the command:
-a, --all Remove all unused images not just dangling ones
Though if you only want to delete unused images (as this is your main use case), you can use the following command instead:
docker image prune -a
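Note that local volumes are not removed by default (your docker system df output still shows 5.254GB reclaimable there). If you are sure no container needs them, that space can be reclaimed as well; a quick sketch (destructive, so review the list first):
docker volume ls                    # review what is there before deleting anything
docker volume prune                 # remove all unused local volumes
docker system prune -a --volumes    # or prune images and volumes in one go (recent Docker versions)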
I am on Windows and using diff to compare two text files. It was working successfully for small files, but when I start comparing a 2GB file with another 2GB file, it shows me:
diff: C:/inetpub/wwwroot/webclient/database_sequences/est_mouse_2.txt: Permission denied
My code:
$text_files_path = "C:/inetpub/wwwroot/webclient/database_sequences"; // path inferred from the error message above
$OldDatabaseFile = "est_mouse_1";
$NewDatabaseFile = "est_mouse_2";
shell_exec("C:\\cygwin64\\bin\\bash.exe --login -c 'diff $text_files_path/$OldDatabaseFile.txt $text_files_path/$NewDatabaseFile.txt > $text_files_path/TempDiff_$OldDatabaseFile$NewDatabaseFile.txt 2>&1'");
est_mouse_1.txt and est_mouse_2.txt were created by me, and I checked the file and folder permissions: full control. All the other text files I compared are in the same folder, and they were compared successfully.
Any idea?
You are using Cygwin for this operation. Cygwin's heap is extensible; however, it starts out at a fixed size, and attempts to extend it may run into memory which has been previously allocated by Windows.
Heap memory can be allocated up to the size of the biggest available free block in the process's virtual memory (VM). On 64-bit systems this results in a 4GB VM for a process started from that executable. I think that's why you can't compare two 2GB files. I agree that the error is pretty strange, but it does show that your access to memory is limited. Please see the Cygwin user guide for more info.
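If you only need to know whether (and where) the files differ, tools that stream the data instead of loading it all into memory should sidestep the limit; a sketch using utilities that ship with Cygwin:
cmp est_mouse_1.txt est_mouse_2.txt       # byte-by-byte comparison, streams both files
md5sum est_mouse_1.txt est_mouse_2.txt    # or just compare checksums if a yes/no answer is enough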
When I run the command:
rsync -aviuP /src /trgt
The command seems to miss some files that are incorrect at the target destination, e.g. a 25GB file on the source ends up as a corrupt 4GB file at the target. I ran the command 3 times to be safe.
When I start the sync, I have to stop it with Ctrl+C every now and then, as I need the drives to work faster for some other task, but I thought the -P flag was meant to make that kosher.
Am I missing something here? The problem is happening relatively frequently and I can't seem to find any answers on the web.
Thanks in advance.
Avoid interrupting the sync process. Instead, limit the maximum bandwidth used by rsync with the --bwlimit option:
rsync --bwlimit=1000 -aviuP /src /trgt
In this example the maximum bandwidth used would be limited to roughly 1MB/s (1000KB/s).
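If some target files may already be corrupt from earlier interrupted runs, a one-off pass with --checksum will find and re-transfer them; -u alone can skip a damaged file whose timestamp on the receiver looks newer than the source. A sketch (slow, since it reads every file on both sides):
rsync -aviP --checksum /src /trgt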
Good evening everyone! I have been working on this for some time, but can't figure it out. I am simply trying to get the working boot code of a bootloader installed on an attached medium!!! I have tried grub legacy, lilo, and grub2... The host system has its drive listed as /dev/sda* and the target attached medium is listed as /dev/sdb* and is mounted to /mnt/target.
With grub legacy, I was attempting to work with another medium (/dev/sdc*, /mnt/source) that already had it installed, and tried dirty hacks like:
dd if=/mnt/source/boot/grub/stage1 of=/dev/sdb bs=446 count=1
dd if=/mnt/source/boot/grub/stage2 of=/dev/sdb bs=512 seek=1
This will actually boot into a grub interface where you can enter things like:
root (hd0,0)
setup (hd0)
I get no error messages, but grub will boot to garbage on the screen and then stop.
With lilo, I actually had the package installed and tried to set it up (after creating a lilo.conf):
default=Test1
timeout=10
compact
prompt
lba32
backup=/mnt/target/boot/lilo/MBR.hda.990428
map=/mnt/target/boot/lilo/map
install=/mnt/target/boot/lilo/boot.b

image=/mnt/target/boot/vmlinuz
    label=Test1
    append="quiet ... settime"
    initrd=/mnt/target/boot/ramdisks/working.gz
I then executed the following from the prompt:
$ lilo -C /mnt/target/boot/lilo/lilo.conf -b /dev/sdb
Warning: /dev/sdb is not on the first disk
Fatal: Sorry, don't know how to handle device 0x0701
With grub2, I tried something like:
grub-mkconfig -o /mnt/target/boot/grub/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.0-12-generic
Found initrd image: /boot/initrd.img-3.11.0-12-generic
Found memtest86+ image: /boot/memtest86+.bin
No volume groups found
done
I couldn't even get the above to generate a grub.cfg correctly or in the right spot, so I gave up on this one... The entries listed above are for the host system, not the target system.
I can provide any additional information that you guys need to help resolve this problem.
-UPDATE-
After working with the media a bit longer, I decided to run an 'fdisk -l' and was presented with the following info:
Partition 1 has different physical/logical beginnings (non-Linux?):
phys(0,32,33) logical(0,37,14)
Partition 1 has different physical/logical endings:
phys(62,53,55) logical(336,27,19)
I should also note that when I try to mount the partition I always get a message that states:
EXT4-fs (sdb1): couldn't mount as ext3 due to feature incompatibilities
Not sure if that is just specific to busybox, or if it is related to the fdisk output. Anyhow, I don't know whether the fdisk info indicates a problem with the disk geometry that could be causing all these bootloaders to fail.
The first-stage boot sector code for grub legacy is in "stage1"; for grub(2) it is in "boot.img". The first-stage code contains the address of the next stage to be loaded on the same disk.
On some other disk, the address of the next stage to be loaded could be (and probably is) different.
I think using chroot and grub-install would be a better way to go.
See Grub2/Installing.
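A minimal sketch of that approach, assuming the target root filesystem is mounted at /mnt/target and has the grub2 packages installed:
mount --bind /dev  /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys  /mnt/target/sys
chroot /mnt/target grub-install /dev/sdb
chroot /mnt/target update-grub    # regenerates grub.cfg inside the target, not on the host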
As for the disk/partition structure:
dd if=/mnt/source/boot/grub/stage2 of=/dev/sdb bs=512 seek=1
blindly overwrites the sectors following the MBR of sdb, so it may have clobbered whatever was stored there (an embedded bootloader stage, or even the start of the first partition).
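To check whether the partition table in sector 0 survived, you can dump the relevant bytes of the MBR (a sketch; needs root):
xxd -s 446 -l 66 /dev/sdb    # bytes 446-509: the four partition entries; bytes 510-511 must read 55 aa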
I am running boot2docker on Mac. I have started a container, mounting a volume from my Mac into the container using the -v flag.
The problem is that all files with specially encoded characters in their names simply don't appear in the volume from within the container. The left part of the screenshot is ls from my Mac, and the right is ls from within the container.
It seems to me that files with such encoding, in this case "Ätt-Arlech-75x75.png", are simply ignored when mounted. How can that be explained, and avoided?
I'm using Cloud9 (railstutorial.org) and noticed that the disk space used by my workspace is quickly growing toward the disk quota.
Is there a way to clean up the workspace and thereby reduce the disk space used?
The workspace is currently 817MB (see below, using quota -s). I downloaded it to look at the size of the directories, and I don't understand it. The directory containing my project is only 170 MB in size and the .9 folder is only 3 MB. So together that doesn't come near the 817 MB... And the disk space used keeps growing even though I don't think I'm making any major changes to the content of my project.
Size Used Avail Use%
1.1G 817M 222M 79%
Has it perhaps got to do with the .9 folder? For example, I've manually deleted several sub-projects, but in the .9 folder these projects still exist, including their files. I also wonder whether different versions of gems remain installed in the .9 folder... so that if you update a gem, it keeps both versions.
I'm not sure how this folder, or Cloud9 storage in general, works, but my question is: how do I clean up disk space (without having to remove anything from my project)? Is there perhaps some clean-up function? I could of course create a new workspace and upload my project there, but perhaps there's an alternative that keeps the current workspace.
The du-c9 command lists all the files contributing to your quota. You can reclaim disk space by deleting files listed by this command.
For a user-friendly interface, you may want to install ncdu to see the size of all your folders. First, free some space for the install. A common way to do this is by removing your tmp folder:
rm -rf /tmp/*
Then install ncdu:
sudo apt-get install ncdu
Then run ncdu and navigate through your folders to see which ones are using up the most space:
ncdu ~
Reference: https://docs.c9.io/discuss/557ecf787eafa719001d1af8
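If you'd rather not install anything, a plain du pipeline gives a similar, if rougher, overview of your biggest directories; a sketch:
du -sh ~/* 2>/dev/null | sort -h | tail -n 20    # 20 largest entries in your home directory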
For me, the answers above unfortunately did not work (the first produced an incomprehensibly long list, so long that I ran out of scroll space in the shell, and the second produced a strange list; see the end of this answer).
What did work was the following:
1) From this support FAQ article: du -hx / -t 50000000
2) Identify the culprit from the easy-to-read, easy-to-understand list: in my case 1.1G /home/ubuntu/.local/share/heroku/tmp
3) From the examples in this article: rm -r /home/ubuntu/.local/share/heroku/tmp
Strange list:
1 ./.bundle
1 ./.git
1 ./README.md
1 ./Project_5
2 ./.c9
2 ./Project_1
3 ./Project_2
17 ./Project_3
28 ./Project_4
50 .
If you want to dig into more detail about which files are taking up your workspace disk, try this command: sudo du -h -t 50M / --exclude=/nix --exclude=/mnt --exclude=/proc
This will list everything larger than 50MB on your Linux server, and then you can remove anything you don't need with a command like:
sudo rm -rf /fileThatNeedsToDelete/*
On AWS Cloud9, the command df -hT /dev/xvda1 worked for me:
[ec2-user ~]$ df -hT /dev/xvda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvda1 xfs 8.0G 1.2G 6.9G 15% /
More info here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-describing-volumes.html