Remote SSH: Disk quota exceeded - bash

I use VS Code to connect to a supercomputer via Remote SSH.
I ran a Selenium request from a Jupyter notebook that took too long and eventually failed. After that, the error bash: cannot create temp file for here-document: Disk quota exceeded started appearing whenever I tried to tab-complete a file/folder name in the terminal.
These are the outputs of df and quota commands:
>> df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 189G 0 189G 0% /dev
tmpfs 189G 2.6G 186G 2% /dev/shm
tmpfs 189G 17M 189G 1% /run
tmpfs 189G 0 189G 0% /sys/fs/cgroup
/dev/mapper/vg_loginnode1-lv_root 1.0T 44G 981G 5% /
/dev/sda2 1014M 315M 700M 31% /boot
/dev/sda1 100M 12M 89M 12% /boot/efi
/dev/mapper/vg_loginnode1-lv_tmp 1.0T 26G 999G 3% /tmp
/dev/mapper/vg_loginnode1-lv_vartmp 200G 379M 200G 1% /var/tmp
tmpfs 38G 12K 38G 1% /run/user/42
home 51T 19T 32T 37% /home
proj 4.4P 3.6P 801T 83% /proj
sw7 21T 4.9T 16T 25% /software
tmpfs 38G 0 38G 0% /run/user/10754
>> df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 49329535 638 49328897 1% /dev
tmpfs 49339830 1050 49338780 1% /dev/shm
tmpfs 49339830 5418 49334412 1% /run
tmpfs 49339830 16 49339814 1% /sys/fs/cgroup
/dev/mapper/vg_loginnode1-lv_root 107374144 262482 107111662 1% /
/dev/sda2 524288 27 524261 1% /boot
/dev/sda1 0 0 0 - /boot/efi
/dev/mapper/vg_loginnode1-lv_tmp 107374144 33180 107340964 1% /tmp
/dev/mapper/vg_loginnode1-lv_vartmp 104857600 518 104857082 1% /var/tmp
tmpfs 49339830 9 49339821 1% /run/user/42
home 157286400 84893563 72392837 54% /home
proj 500000000 343005480 156994520 69% /proj
sw7 104857600 33867547 70990053 33% /software
tmpfs 49339830 10 49339820 1% /run/user/10754
>> quota -u
Disk quotas for user ***** (uid 10754):
Filesystem blocks quota limit grace files quota limit grace
/dev/mapper/vg_loginnode1-lv_tmp
10485760* 10485760 10485760 22639 0 0
Obviously, I am exceeding the quota. As I said at the beginning, this is a supercomputer, so I have limited to no access to the system folders (/tmp, etc.). Any ideas on how to solve this without cleaning /tmp are more than welcome.

I hit the same issue. I cannot find any files that I own in /tmp, so I don't know why the system says my tmp quota is exceeded.
I found a workaround: change the location of bash's temporary files.
$ mkdir -p ~/tmp
$ export TMPDIR=~/tmp
$ echo "export TMPDIR=~/tmp" >> ~/.bashrc
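If you want to check what is actually counting against the /tmp quota, you can ask find for files owned by your user (a quick sketch; files left behind by other processes, e.g. crashed jobs, may not be visible to you):

```shell
# List files under /tmp owned by the current user and total their size
find /tmp -user "$(whoami)" -type f -exec du -ch {} + 2>/dev/null | tail -1
```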

Related

Ubuntu 20.04 server install didn't use whole NVMe drive

I installed Ubuntu 20.04 LTS onto a fresh Samsung 250GB NVMe drive and used all the defaults during installation. Everything seemed to go fine, but I now see it is only using ~100GB of the drive. How do I extend the partition?
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.1G 1.5M 3.1G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 98G 26G 68G 28% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/nvme0n1p2 1.5G 111M 1.3G 8% /boot
/dev/loop0 56M 56M 0 100% /snap/core18/2128
/dev/nvme0n1p1 1.1G 5.3M 1.1G 1% /boot/efi
/dev/loop1 62M 62M 0 100% /snap/core20/1328
/dev/loop2 71M 71M 0 100% /snap/lxd/21029
/dev/loop3 68M 68M 0 100% /snap/lxd/21835
/dev/loop4 56M 56M 0 100% /snap/core18/2284
/dev/loop5 44M 44M 0 100% /snap/snapd/149
$ sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
loop0 squashfs 55.4M /snap/core18/2128
loop1 squashfs 61.9M /snap/core20/1328
loop2 squashfs 70.3M /snap/lxd/21029
loop3 squashfs 67.2M /snap/lxd/21835
loop4 squashfs 55.5M /snap/core18/2284
loop5 squashfs 43.6M /snap/snapd/14978
nvme0n1 232.9G
├─nvme0n1p1 vfat 1.1G /boot/efi
├─nvme0n1p2 ext4 1.5G /boot
└─nvme0n1p3 LVM2_member 230.3G
└─ubuntu--vg-ubuntu--lv ext4 100G /
Running these commands did the trick:
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
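As an aside, lvresize can also grow the filesystem in the same step via -r (--resizefs), so the two commands above can be collapsed into one; you can then confirm the result with df:

```shell
# Extend the LV into all free extents and resize the ext4 filesystem with it
sudo lvresize -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
# The root filesystem should now report roughly the full drive size
df -h /
```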

error: file write error: No space left on device

I am attempting to update my git repo by running "git add ." within Visual Studio Code on my Mac. I have done this many times without issues, but this time it gives the error "error: file write error: No space left on device" and "error: unable to create temporary file: No space left on device". Also, when trying to clone another repo from GitHub onto my device, it says "fatal: could not create work tree dir 'cs1550-project1-domm2': No space left on device". I have 12.96 GB left of 256 GB. I do not know how I am out of space or how to free it. I just need to update my GitHub repo for class.
This is running df -h in the VS Code terminal:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 31G 0 31G 0% /dev
tmpfs 6.2G 5.2M 6.2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 196G 22G 165G 12% /
tmpfs 31G 0 31G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 31G 0 31G 0% /sys/fs/cgroup
/dev/sda2 976M 310M 600M 35% /boot
/dev/loop0 56M 56M 0 100% /snap/core18/2253
/dev/loop2 62M 62M 0 100% /snap/core20/1270
/dev/loop1 56M 56M 0 100% /snap/core18/2284
/dev/loop3 44M 44M 0 100% /snap/snapd/14295
/dev/mapper/ubuntu--vg-lv_u 1.6T 1.6T 0 100% /u
/dev/loop6 44M 44M 0 100% /snap/snapd/14549
/dev/loop5 68M 68M 0 100% /snap/lxd/21835
/dev/loop4 62M 62M 0 100% /snap/core20/1328
/dev/loop7 68M 68M 0 100% /snap/lxd/21803
AFS 2.0T 0 2.0T 0% /afs
tmpfs 6.2G 0 6.2G 0% /run/user/16778380
tmpfs 6.2G 0 6.2G 0% /run/user/16778801
tmpfs 6.2G 0 6.2G 0% /run/user/16778557
tmpfs 6.2G 0 6.2G 0% /run/user/16777582
tmpfs 6.2G 0 6.2G 0% /run/user/16778716
tmpfs 6.2G 0 6.2G 0% /run/user/16778813
tmpfs 6.2G 0 6.2G 0% /run/user/16777367
tmpfs 6.2G 0 6.2G 0% /run/user/16778536
tmpfs 6.2G 0 6.2G 0% /run/user/16778347
tmpfs 6.2G 0 6.2G 0% /run/user/16778708
tmpfs 6.2G 0 6.2G 0% /run/user/16778462
tmpfs 6.2G 0 6.2G 0% /run/user/16778799
tmpfs 6.2G 0 6.2G 0% /run/user/16778330
tmpfs 6.2G 0 6.2G 0% /run/user/16777512
tmpfs 6.2G 0 6.2G 0% /run/user/16778756
tmpfs 6.2G 0 6.2G 0% /run/user/16778555
tmpfs 6.2G 0 6.2G 0% /run/user/16778783
tmpfs 6.2G 0 6.2G 0% /run/user/16778747
tmpfs 6.2G 0 6.2G 0% /run/user/16778712
tmpfs 6.2G 0 6.2G 0% /run/user/16778329
tmpfs 6.2G 0 6.2G 0% /run/user/16777696
tmpfs 6.2G 0 6.2G 0% /run/user/16778494
tmpfs 6.2G 0 6.2G 0% /run/user/16778816
tmpfs 6.2G 0 6.2G 0% /run/user/16778752
tmpfs 6.2G 0 6.2G 0% /run/user/16778706
tmpfs 6.2G 0 6.2G 0% /run/user/16778823
tmpfs 6.2G 0 6.2G 0% /run/user/16778556
tmpfs 6.2G 0 6.2G 0% /run/user/16778766
tmpfs 6.2G 0 6.2G 0% /run/user/16778343
tmpfs 6.2G 0 6.2G 0% /run/user/16778828
tmpfs 6.2G 0 6.2G 0% /run/user/16778538
tmpfs 6.2G 0 6.2G 0% /run/user/16777510
tmpfs 6.2G 0 6.2G 0% /run/user/16778809
tmpfs 6.2G 0 6.2G 0% /run/user/16778353
tmpfs 6.2G 0 6.2G 0% /run/user/16777596
tmpfs 6.2G 0 6.2G 0% /run/user/16778342
tmpfs 6.2G 0 6.2G 0% /run/user/16778822
tmpfs 6.2G 0 6.2G 0% /run/user/16778540
tmpfs 6.2G 0 6.2G 0% /run/user/16778542
This is running it through my Mac terminal:
df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1s5s1 233Gi 14Gi 12Gi 55% 553788 2447547532 0% /
devfs 196Ki 196Ki 0Bi 100% 679 0 100% /dev
/dev/disk1s4 233Gi 3.0Gi 12Gi 21% 4 2448101316 0% /System/Volumes/VM
/dev/disk1s2 233Gi 320Mi 12Gi 3% 1226 2448100094 0% /System/Volumes/Preboot
/dev/disk1s6 233Gi 24Mi 12Gi 1% 17 2448101303 0% /System/Volumes/Update
/dev/disk1s1 233Gi 203Gi 12Gi 95% 576260 2447525060 0% /System/Volumes/Data
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /System/Volumes/Data/home
df -i
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1s5s1 489620264 29961272 24759664 55% 553788 2447547532 0% /
devfs 392 392 0 100% 679 0 100% /dev
/dev/disk1s4 489620264 6294912 24759664 21% 4 2448101316 0% /System/Volumes/VM
/dev/disk1s2 489620264 654696 24759664 3% 1226 2448100094 0% /System/Volumes/Preboot
/dev/disk1s6 489620264 49528 24759664 1% 17 2448101303 0% /System/Volumes/Update
/dev/disk1s1 489620264 426413408 24759664 95% 576260 2447525060 0% /System/Volumes/Data
map auto_home 0 0 0 100% 0 0 100% /System/Volumes/Data/home
It depends on where you are trying to do the git add .
As mentioned here:
The /snap mounts come from software packages installed with Snap.
These use loop devices and are usually not writable.
You will get some sort of "No space on device" error when trying to write to any of these locations, which is why df -h shows those mounts as 100% in use.
But in your case, assuming the repository is under /dev/...:
devfs 392 392 0 100% 679 0 100% /dev
That means you have run out of inodes.
Look for files to delete:
for x in /* ; do echo "$x" ; find "$x" | wc -l ; done
The goal is to free up inodes or increase the number of inodes.

How can I add multiple Ansible variables onto a single line in a csv file?

I have this playbook that gathers information about a server using commands like df, free, and sar. Sorry, I am very new to Ansible.
Here is the playbook I am using.
- name:
  hosts: all
  tasks:
    - name:
      shell: df -h
      register: mount
    - name:
      shell: free -h
      register: ram
    - name:
      shell: echo "{{ mount.stdout }}" "{{ ram.stdout }}" >> prechecks.csv
      delegate_to: localhost
The outcome I get is this:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 10M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda3 18G 11G 6.8G 63% /
/dev/sda1 295M 288M 7.2M 98% /boot
tmpfs 372M 1.2M 370M 1% /run/user/42
tmpfs 372M 5.7M 366M 2% /run/user/0
tmpfs 372M 4.0K 372M 1% /run/user/1001 total used free shared buff/cache available
Mem: 3.6Gi 1.5Gi 904Mi 18Mi 1.2Gi 1.9Gi
Swap: 2.0Gi 0B 2.0Gi
As you can see, the memory information ends up appended below the filesystem output. Is there a way I can get them like this?
The way I want them to be:
Filesystem Size Used Avail Use% Mounted on total used free shared buff/cache available
devtmpfs 1.8G 0 1.8G 0% /dev Mem: 3.6Gi 1.5Gi 904Mi 18Mi 1.2Gi 1.9Gi
tmpfs 1.9G 0 1.9G 0% /dev/shm Swap: 2.0Gi 0B
tmpfs 1.9G 10M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda3 18G 11G 6.8G 63% /
/dev/sda1 295M 288M 7.2M 98% /boot
tmpfs 372M 1.2M 370M 1% /run/user/42
tmpfs 372M 5.7M 366M 2% /run/user/0
tmpfs 372M 4.0K 372M 1% /run/user/1001
You can try using the paste command. Here is what results from combining the outputs you provided in your question using paste:
Filesystem Size Used Avail Use% Mounted on total used free shared buff/cache available
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm Mem: 3.6Gi 1.5Gi 904Mi 18Mi 1.2Gi 1.9Gi
tmpfs 1.9G 10M 1.9G 1% /run Swap: 2.0Gi 0B 2.0Gi
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda3 18G 11G 6.8G 63% /
/dev/sda1 295M 288M 7.2M 98% /boot
tmpfs 372M 1.2M 370M 1% /run/user/42
tmpfs 372M 5.7M 366M 2% /run/user/0
tmpfs 372M 4.0K 372M 1% /run/user/1001
There is an empty line included in the free command output in your source data, but your desired output does NOT include that empty line. You can remove the empty line by using sed '/^$/d' < <(free -h) instead of free -h. Output after removing the empty line:
$ paste df.out mp.out
Filesystem Size Used Avail Use% Mounted on total used free shared buff/cache available
devtmpfs 1.8G 0 1.8G 0% /dev Mem: 3.6Gi 1.5Gi 904Mi 18Mi 1.2Gi 1.9Gi
tmpfs 1.9G 0 1.9G 0% /dev/shm Swap: 2.0Gi 0B 2.0Gi
tmpfs 1.9G 10M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda3 18G 11G 6.8G 63% /
/dev/sda1 295M 288M 7.2M 98% /boot
tmpfs 372M 1.2M 370M 1% /run/user/42
tmpfs 372M 5.7M 366M 2% /run/user/0
tmpfs 372M 4.0K 372M 1% /run/user/1001
contents of df.out:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 10M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda3 18G 11G 6.8G 63% /
/dev/sda1 295M 288M 7.2M 98% /boot
tmpfs 372M 1.2M 370M 1% /run/user/42
tmpfs 372M 5.7M 366M 2% /run/user/0
tmpfs 372M 4.0K 372M 1% /run/user/1001
contents of mp.out
total used free shared buff/cache available
Mem: 3.6Gi 1.5Gi 904Mi 18Mi 1.2Gi 1.9Gi
Swap: 2.0Gi 0B 2.0Gi
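Putting it together in the shell, the whole pipeline can be done in one line with process substitution, without the intermediate df.out/mp.out files (a sketch; prechecks.csv is the target file from the question):

```shell
# Combine df output and the blank-line-stripped free output side by side
paste <(df -h) <(free -h | sed '/^$/d') >> prechecks.csv
```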

Bash: Multiple npm install in the background give error, 'No space left on device'

I am setting up Docker on a Google Cloud compute machine with 1 vCPU and 3.75 GB RAM.
If I simply run docker-compose up --build, it works, but the process is sequential and slow. So I am using this bash script to build the images in the background and skip the usual sequential process.
command=$1
shift
jobsList=""
taskList[0]=""
i=0
# Strip everything around the job number reported by `jobs`, leaving just the id
function getJobId(){
    echo "$*" | sed 's/^[^0-9]*//' | sed 's/[^0-9].*$//'
}
for task in "$@"
do
    echo "Command is $command $task"
    docker-compose $command $task &> "${task}.text" &
    lastJob=$(getJobId "$(jobs %%)")
    jobsList="$jobsList $lastJob"
    echo "jobsList is $jobsList"
    taskList[$i]="$command $task"
    i=$((i + 1))
done
i=0
for job in $jobsList
do
    wait "%$job"
    echo "${taskList[$i]} completed with status $?"
    i=$((i + 1))
done
and I use it in the following manner:
availableServices=$(docker-compose config --services)
while IFS='' read -r line || [[ -n "$line" ]]
do
services+=$(echo "$line ")
done <<<"$availableServices"
./runInParallel.sh build $services
I string together the available services from docker-compose.yml and pass them to my script.
But the issue is eventually all the processes fail with the following error:
npm WARN tar ENOSPC: no space left on device, write
Unhandled rejection Error: ENOSPC: no space left on device, write
I checked inodes, and on /dev/sda1 only 44% were used.
Here's my output for the command df -h:
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 892K 369M 1% /run
/dev/sda1 9.6G 9.1G 455M 96% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/loop0 55M 55M 0 100% /snap/google-cloud-sdk/64
/dev/loop2 55M 55M 0 100% /snap/google-cloud-sdk/62
/dev/loop1 55M 55M 0 100% /snap/google-cloud-sdk/63
/dev/loop3 79M 79M 0 100% /snap/go/3095
/dev/loop5 89M 89M 0 100% /snap/core/5897
/dev/loop4 90M 90M 0 100% /snap/core/6130
/dev/loop6 90M 90M 0 100% /snap/core/6034
/dev/sda15 105M 3.6M 101M 4% /boot/efi
tmpfs 370M 0 370M 0% /run/user/1001
and here's the output for df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 469499 385 469114 1% /dev
tmpfs 472727 592 472135 1% /run
/dev/sda1 1290240 636907 653333 50% /
tmpfs 472727 1 472726 1% /dev/shm
tmpfs 472727 8 472719 1% /run/lock
tmpfs 472727 18 472709 1% /sys/fs/cgroup
/dev/loop0 20782 20782 0 100% /snap/google-cloud-sdk/64
/dev/loop2 20680 20680 0 100% /snap/google-cloud-sdk/62
/dev/loop1 20738 20738 0 100% /snap/google-cloud-sdk/63
/dev/loop3 9417 9417 0 100% /snap/go/3095
/dev/loop5 12808 12808 0 100% /snap/core/5897
/dev/loop4 12810 12810 0 100% /snap/core/6130
/dev/loop6 12810 12810 0 100% /snap/core/6034
/dev/sda15 0 0 0 - /boot/efi
tmpfs 472727 10 472717 1% /run/user/1001
From your df -h output, the root filesystem (/dev/sda1) has only 455MB of free space.
Whenever you run docker build, the docker client (CLI) sends the entire contents of the Dockerfile's directory (the build context) to the docker daemon, which builds the image.
So, for example, if you have three services each with a 300MB build context, you can build them sequentially within 455MB of free space, but to build them all at the same time you need 300MB x 3 of free space for the docker daemon to cache and build the images.
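To see how much data each parallel build would send to the daemon, you can measure each service's build context first (a rough sketch, assuming each service's Dockerfile sits in its own subdirectory; adding a .dockerignore that excludes things like node_modules shrinks both the upload and the temporary space needed):

```shell
# Approximate size of each build context the CLI would upload to the daemon
for dir in */; do
    du -sh "$dir"
done
```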

Memory error in Rails console on AWS

I logged into our production instance on AWS, and tried to go into Rails console:
bundle exec rails c production
But I'm getting the following error
There was an error while trying to load the gem 'mini_magick' (Bundler::GemRequireError)
Gem Load Error is: Cannot allocate memory - animate
When I run free I see there's no swap:
free
total used free shared buffers cached
Mem: 7659512 7515728 143784 408 1724 45604
-/+ buffers/cache: 7468400 191112
Swap: 0 0 0
df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 3824796 12 3824784 1% /dev
tmpfs 765952 376 765576 1% /run
/dev/xvda1 15341728 11289944 3323732 78% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 3829756 0 3829756 0% /run/shm
none 102400 0 102400 0% /run/user
/dev/xvdf 10190136 6750744 2898720 70% /mnt
Not sure what's causing this or how to resolve it. Any help is appreciated.
Thanks!
You can increase the EC2 instance's memory (move to a larger instance type) or add swap space to the instance.
First, check the current memory and swap situation:
grep Mem /proc/meminfo
grep Swap /proc/meminfo
free
uname -a
Then create and enable a 512MB swap file:
# Set swap file to /swapfile1
sudo dd if=/dev/zero of=/swapfile1 bs=1M count=512
sudo chmod 600 /swapfile1
sudo mkswap /swapfile1
sudo swapon /swapfile1
Verify that the swap is active:
swapon -s
free
grep Swap /proc/meminfo
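One caveat with the recipe above: a swap file enabled with swapon only lasts until the next reboot. To make it persistent, register it in /etc/fstab (assumes the /swapfile1 path from the commands above):

```shell
# Enable the swap file on every boot by adding an /etc/fstab entry
echo '/swapfile1 none swap sw 0 0' | sudo tee -a /etc/fstab
```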
