How to keep control over disk-size - cloud9-ide

I'm using Cloud9 (railstutorial.org) and noticed that the disk space used by my workspace is quickly growing toward the disk quota.
Is there a way to clean up the workspace and thereby reduce the disk space used?
The workspace is currently 817MB (see below, using quota -s). I downloaded it to look at the size of the directories, and I don't understand it: the directory containing my project is only 170 MB in size and the .9 folder is only 3 MB, so together that doesn't come near the 817 MB... And the disk space used keeps growing even though I don't think I'm making any major changes to the content of my project.
Size Used Avail Use%
1.1G 817M 222M 79%
Does it perhaps have to do with the .9 folder? For example, I've manually deleted several sub-projects, but in the .9 folder these projects still exist, including their files. I also wonder whether different versions of gems remain installed in the .9 folder, so that if you update a gem, both versions stay installed.
I'm not sure how this folder or Cloud9 storage in general works, but my question is: how can I clean up disk space (without having to remove anything from my project)? Is there perhaps some clean-up function? I could of course create a new workspace and upload my project there, but perhaps there's an alternative that keeps the current workspace.

The du-c9 command lists all the files contributing to your quota. You can reclaim disk space by deleting files listed by this command.
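If that list is hard to digest, a rough sketch with plain du and sort (GNU coreutils assumed, as on the standard Cloud9 Ubuntu image) shows the 30 biggest items under your home directory; and since the question mentions old gem versions piling up, gem cleanup (standard RubyGems, assuming it is installed as in the railstutorial setup) removes superseded versions of installed gems:
du -ah ~ 2>/dev/null | sort -h | tail -n 30   # 30 biggest files/folders under your home
gem cleanup                                   # remove old versions of installed gems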

For a user-friendly interface, you may want to install ncdu to see the size of all your folders. First, free some space for the install. A common way to do this is by removing your tmp folder:
rm -rf /tmp/*
Then install ncdu:
sudo apt-get install ncdu
Then run ncdu and navigate through your folders to see which ones are using up the most space:
ncdu ~
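Two standard ncdu details that may help: the -x flag keeps it from crossing filesystem boundaries, and pressing d inside ncdu deletes the selected item, so you can reclaim space without leaving the interface:
ncdu -x ~   # stay on the filesystem that your home directory lives on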
Reference: https://docs.c9.io/discuss/557ecf787eafa719001d1af8

For me, the answers above unfortunately did not work (the first produced an incomprehensibly long list, so long that I ran out of scroll space in the shell, and the second produced a strange list; see the end of this answer).
What did work was the following:
1) From this support faq article: du -hx / -t 50000000
2) Identify the culprit from the easy to read, easy to understand list: in my case 1.1G /home/ubuntu/.local/share/heroku/tmp
3) From the examples of this article: rm -r /home/ubuntu/.local/share/heroku/tmp
Strange list:
1 ./.bundle
1 ./.git
1 ./README.md
1 ./Project_5
2 ./.c9
2 ./Project_1
3 ./Project_2
17 ./Project_3
28 ./Project_4
50 .
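If you go this route, it may be worth confirming the size of the culprit before deleting it and re-checking your usage afterwards; a small sketch using the path found in step 2 and the quota -s command from the question:
du -sh /home/ubuntu/.local/share/heroku/tmp   # confirm how much the folder actually holds
quota -s                                      # re-check workspace usage after deleting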

If you want to dig into more detail about which files are affecting your workspace disk, try this command: sudo du -h -t 50M / --exclude=/nix --exclude=/mnt --exclude=/proc
This will list everything on your Linux server larger than 50 MB, and you can then remove any file or folder with this command:
sudo rm -rf /fileThatNeedsToDelete/*
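On a Debian/Ubuntu-based workspace there are also a couple of caches that are usually safe to clear before reaching for rm -rf; a hedged sketch using standard apt commands, nothing workspace-specific:
sudo apt-get clean        # drop cached .deb package files
sudo apt-get autoremove   # remove packages that are no longer needed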

On AWS Cloud9, this command worked for me: df -hT /dev/xvda1
[ec2-user ~]$ df -hT /dev/xvda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvda1 xfs 8.0G 1.2G 6.9G 15% /
more info here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-describing-volumes.html
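If the volume itself is simply too small, AWS also lets you enlarge the EBS volume from the console and then grow the partition and filesystem from inside the instance. A sketch assuming the root device and XFS filesystem shown in the df output above (adjust the device name if yours differs):
sudo growpart /dev/xvda 1   # extend partition 1 into the newly added space
sudo xfs_growfs -d /        # grow the XFS filesystem mounted at /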

Related

Docker system prune leaves a lot (42GB) of image data

I often run into disk space issues when building Docker images (like the JS error "ENOSPC: no space left on device"). I'm used to running docker system prune to clear up some space, but it was needed a bit too often for my liking, and I realised maybe something was not working as expected.
After running docker system prune, I have the following docker system df output:
> docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 102 0 41.95GB 41.95GB (100%)
Containers 0 0 0B 0B
Local Volumes 53 0 5.254GB 5.254GB (100%)
Build Cache 383 0 0B 0B
It seems like there are still 42GB of disk space used by images (or does this refer to some sort of "reserved space" for Docker?). Anyway, if those 42GB are held up somehow, it could very much explain why my disk is getting so full so often.
I am on macOS, and with the above docker system df, when I open my Docker app > Resources I see:
Disk image size:
120 GB (81.3 GB used)
Am I missing something?
As #Oo.oO mentioned in his comment, you probably have images that aren't considered "dangling" but that you still don't use for anything and that no container is using. As noted in this answer explaining the difference between a dangling image and an unused one, a dangling image is, plainly put, a previous version of the same image, which will now probably show as:
<none> <none>
while an unused image is just that: unused. It doesn't fall under the "dangling" classification, and that's why your images aren't being deleted. Running the following command will solve the issue:
docker system prune -a
as noted in the help menu for the command
-a, --all Remove all unused images not just dangling ones
Though if you only want to delete unused images (as this is your main use case), you can use the following command instead:
docker image prune -a
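Since the docker system df output above also shows 5.25GB of reclaimable local volumes, a possible follow-up, if you are sure nothing still needs those volumes (both flags are standard Docker CLI):
docker system df -v                # per-image and per-volume breakdown before deleting anything
docker system prune -a --volumes   # also removes unused local volumes, not just images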

How can I safely move Elasticsearch indices to another mount in Linux?

I have a number of indices which are actually causing some space issues at the moment on my Ubuntu machine. The indices keep growing on a daily basis.
So I thought of moving them to another mount directory which apparently has more space. How can I do this safely?
And I have to make sure that the existing ES indices and the Kibana graphs will be safe after doing the move.
What I did: I followed this SO answer and somehow moved my Elasticsearch data directory to the directory I needed (/data/es_data), but after I did that, I couldn't view my existing indices, nor the Kibana graphs and dashboards which I had created.
Am I doing something wrong? Any help could be appreciated.
FWIW, if it were me, I would stop elasticsearch & kibana (& logstash if this is the only elasticsearch node in the cluster), then move the old data dir to a new location out of the way:
sudo mv /var/lib/elasticsearch /var/lib/elasticsearch-old
Then set up the new volume (which should be at least 15% larger than the size of the indexes you have on disk, as elasticsearch won't create new indexes on a disk with less than 15% free space) with a file system, find out its UUID, and get ready to mount it:
sudo fdisk /dev/sdX # New volume, use all the space
sudo mkfs.ext4 /dev/sdX1
ls -la /dev/disk/by-uuid/ | grep sdX1 # Grep for the bare device name (the links point at ../../sdX1), or skip the grep and look manually
Then add the following to your /etc/fstab, replacing <RESPONSE> with the UUID from the previous command:
UUID=<RESPONSE> /var/lib/elasticsearch ext4 defaults 0 0
Make the new directory (the old one has been moved aside), mount the new volume on it, chown it (I assume the owner should be elasticsearch, but you can confirm by checking ownership of the old folder), and then copy the content across from the old one:
sudo mkdir /var/lib/elasticsearch
sudo mount /var/lib/elasticsearch   # uses the fstab entry added above
sudo chown -R elasticsearch: /var/lib/elasticsearch
sudo cp -rp /var/lib/elasticsearch-old/* /var/lib/elasticsearch
Once everything has finished copying across, you should be able to start elasticsearch back up. It should find the indexes, since from its point of view they haven't moved, so the config doesn't need updating.
Once you're happy that everything is working you can delete /var/lib/elasticsearch-old and reclaim your space. Failing that you can revert to the old data and it should continue to work.
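Once elasticsearch is back up, a quick way to confirm the indices came across is the cat APIs (assuming Elasticsearch is listening on the default localhost:9200):
curl -s 'localhost:9200/_cat/indices?v'     # every index, its health and store size
curl -s 'localhost:9200/_cat/allocation?v'  # disk used and available per node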

Symlink not being created

I'm running Fedora on a laptop with a small SSD and large HDD. I've got the OS installed on the SSD and my data on the HDD.
All my files are located at /run/media/kennedy/data/Kennedy
What I had before (and want again) is a symlink from /home/kennedy to that location. That way I'm not messing with the actual /home, but when I am in /home as a normal user, all my things are easily accessed and stored with plenty of space. Right now /home/kennedy has the standard directories: desktop, documents, downloads, and so forth. No files worth worrying about.
So I opened a shell, logged in as su, and entered
ln -s /home/kennedy /run/media/kennedy/data/Kennedy
expecting that when I cd /home/kennedy and ls, I would see all my lovelies. Instead, I see the standard folders and nothing more. Whisky Tango Foxtrot, over.
edit to add: I'm pretty sure the permissions are right, but only pretty sure. How do I check and correct that (if off)?
You have to reverse the arguments:
ln -s /run/media/kennedy/data/Kennedy /home/kennedy
This will:
run successfully if /home/kennedy doesn't exist (kennedy would be the new symlink)
fail if /home/kennedy exists and is not a directory (a symlink or a regular file); you need to add the -f flag in that case: ln -sf ...
if /home/kennedy is a directory, then the symlink will be created as /home/kennedy/kennedy
See this related post: How to symlink a file in Linux?
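To answer the edit about permissions, and since /home/kennedy already exists as a real directory, a rough sketch of one way to check things and put the symlink in place (assuming the data should be owned by the user kennedy and that nothing inside the current /home/kennedy needs keeping; the backup name is just an example):
ls -ld /run/media/kennedy/data/Kennedy                     # check owner and permissions of the data
sudo chown -R kennedy: /run/media/kennedy/data/Kennedy     # fix ownership if it is not kennedy's
sudo mv /home/kennedy /home/kennedy.bak                    # move the existing home dir out of the way
sudo ln -s /run/media/kennedy/data/Kennedy /home/kennedy   # now the symlink can take its place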
You have the command backwards, it should be:
ln -s /run/media/kennedy/data/Kennedy kennedy
Invoke the command while you are in your /home directory, then you should be set.

diff: text file permission denied

I am on Windows and using diff to compare two text files. It was working successfully for small files but, when I start comparing a 2GB file with another 2GB file, it shows me:
diff: C:/inetpub/wwwroot/webclient/database_sequences/est_mouse_2.txt: Permission denied
My code:
$OldDatabaseFile = "est_mouse_1";
$NewDatabaseFile = "est_mouse_2";
shell_exec("C:\\cygwin64\\bin\\bash.exe --login -c 'diff $text_files_path/$OldDatabaseFile.txt $text_files_path/$NewDatabaseFile.txt > $text_files_path/TempDiff_$OldDatabaseFile$NewDatabaseFile.txt 2>&1'");
est_mouse_1.txt and est_mouse_2.txt were created by me, and I checked the file and folder permissions: full control. All the other text files which I compared are in the same folder, and they were compared successfully.
Any idea?
You are using Cygwin for this operation, and Cygwin's heap is extensible. However, it does start out at a fixed size, and attempts to extend it may run into memory which has been previously allocated by Windows.
Heap memory can be allocated up to the size of the biggest available free block in the process's virtual memory (VM). On 64-bit systems this results in a 4GB VM for a process started from that executable. I think that's why you can't compare two 2GB files. I agree that the error is pretty strange, but it does suggest that your access to memory is limited. Please see the Cygwin user guide for more info.
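If the goal is only to find out whether the two dumps differ at all, rather than to produce a full diff, cmp reads the files as streams and needs far less memory than diff; a sketch using the same Cygwin bash and the path from the error message:
cd /cygdrive/c/inetpub/wwwroot/webclient/database_sequences
cmp --silent est_mouse_1.txt est_mouse_2.txt && echo "files are identical" || echo "files differ"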

Decompressed gziped file disappears?

I have an Amazon EC2 instance running CentOS. Unfortunately I don't have a GUI. I tried setting up X11 forwarding, but apparently it works differently with Ubuntu than it does with CentOS. But that's not the point. I downloaded a pretty large .gz file (8.7GB) and extracted it using the following command:
gzip -d [filename] &
it took nearly an hour to decompress, and using ls -l I could see that the uncompressed result was going to be nearly 30GB. Anyway, the process finished, and when I ls again it is nowhere to be found. I tried ls -a as well, but still nothing. Any thoughts on this?
This sounds like gzip silently failing when it runs out of space. How large is your instance's EBS volume / local disk that you're unzipping onto? (Run df -h and figure out which device you're unzipping on.)
Additionally, you could try running gzip in verbose mode to catch any errors it might not be showing. I don't have a CentOS machine handy, but you might be able to use gzip -l [filename] to figure out whether your file is too big for the target directory.
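A sketch of the checks that would have caught this before decompressing (gzip -l and df are standard; the -k flag to keep the original .gz needs gzip 1.6 or newer, which an older CentOS may not have, and gzip -l's uncompressed size is stored modulo 4GB, so treat it as a rough hint for archives this large):
gzip -l [filename]     # compressed vs. uncompressed size
df -h .                # free space on the filesystem you are decompressing into
gzip -dkv [filename]   # decompress verbosely, keeping the .gz in case something goes wrong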
