vagrant - no space left on device error - vagrant

I am new to Vagrant. I am following the instructions at http://gettingstartedwithdjango.com/en/lessons/introduction-and-launch/
I am getting the following error when running the "sudo ./postinstall.sh" script:
+ apt-get -y clean
+ rm -f /var/lib/dhcp3/*
+ rm /etc/udev/rules.d/70-persistent-net.rules
rm: cannot remove `/etc/udev/rules.d/70-persistent-net.rules': Is a directory
+ mkdir /etc/udev/rules.d/70-persistent-net.rules
mkdir: cannot create directory `/etc/udev/rules.d/70-persistent-net.rules': File exists
+ rm -rf /dev/.udev/
+ rm /lib/udev/rules.d/75-persistent-net-generator.rules
rm: cannot remove `/lib/udev/rules.d/75-persistent-net-generator.rules': No such file or directory
+ rm -f /home/vagrant/{*.iso,postinstall*.sh}
+ dd if=/dev/zero of=/EMPTY bs=1M
dd: writing `/EMPTY': No space left on device
78504+0 records in
78503+0 records out
82316406784 bytes (82 GB) copied, 105.122 s, 783 MB/s
+ rm -f /EMPTY
+ exit
But I seem to have enough space:
vagrant@precise64:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/precise64-root 79G 2.3G 73G 3% /
udev 174M 0 174M 0% /dev
tmpfs 74M 272K 73M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 183M 0 183M 0% /run/shm
/dev/sda1 228M 25M 192M 12% /boot
/vagrant 220G 91G 130G 42% /vagrant
/tmp/vagrant-chef-1/chef-solo-1/cookbooks 220G 91G 130G 42% /tmp/vagrant-chef-1/chef-solo-1/cookbooks
/tmp/vagrant-chef-1/chef-solo-2/cookbooks 220G 91G 130G 42% /tmp/vagrant-chef-1/chef-solo-2/cookbooks
Can somebody please help? Thank you.

It's supposed to do this :) It's making your virtual disk as small as possible, since it is thinly provisioned.
Filling the disk with a file of zeros clears the unused blocks, so the file representing the VM's disk ends up only as large as the actual data on it.

The problem resides in the following statement:
dd if=/dev/zero of=/EMPTY bs=1M
If you don't specify count=<some value>, the dd command will continue until the end of the device is reached. So with the above command you're trying to create a file called EMPTY under / that spans the whole partition. Hence the error.
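For illustration, a bounded variant of the same command (the path and size here are arbitrary examples, not part of the original postinstall script) stops after a fixed number of blocks instead of filling the disk:

```shell
# Write exactly 4 MiB of zeros (4 blocks of 1 MiB), then stop.
dd if=/dev/zero of=/tmp/EMPTY bs=1M count=4 2>/dev/null
ls -l /tmp/EMPTY
rm -f /tmp/EMPTY
```

Without count=, dd only stops when a write fails, which is exactly the "No space left on device" behavior the cleanup script relies on.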

Related

How to mount NVMe EBS volumes of different sizes on desired mount points using shell

After adding volumes to an EC2 instance using Ansible, how can I mount these devices by size to the desired mount points, using a shell script that I will pass to user_data?
[ec2-user@xxx ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:4 0 200G 0 disk
├─nvme0n1p1 259:5 0 1M 0 part
└─nvme0n1p2 259:6 0 200G 0 part /
nvme1n1 259:0 0 70G 0 disk
nvme2n1 259:1 0 70G 0 disk
nvme3n1 259:3 0 70G 0 disk
nvme4n1 259:2 0 20G 0 disk
This is what I wrote initially, but I realized the NAME and SIZE are not always the same for NVMe devices:
#!/bin/bash
VOLUMES=(nvme1n1 nvme2n1 nvme3n1 nvme4n1)
PATHS=(/abc/sfw /abc/hadoop /abc/log /kafka/data/sda)
for index in ${!VOLUMES[*]}; do
    sudo mkfs -t xfs /dev/"${VOLUMES[$index]}"
    sudo mkdir -p "${PATHS[$index]}"
    sudo mount /dev/"${VOLUMES[$index]}" "${PATHS[$index]}"
    echo "Mounted ${VOLUMES[$index]} in ${PATHS[$index]}"
done
I am creating these volumes with Ansible and want the 20G volume mounted on /edw/logs, but the 20G randomly lands on any device (nvme1n1, nvme2n1, nvme3n1, or nvme4n1).
How should I write/modify my script?
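One possible approach (a sketch, not a tested production script: the helper name find_device_by_size is an invention here, and the byte sizes are taken from the lsblk output above) is to select devices by size rather than by name:

```shell
#!/bin/bash
# Print the first unmounted device of the requested size in bytes,
# given "NAME SIZE MOUNTPOINT" lines on stdin (as produced by
# `lsblk -bdn -o NAME,SIZE,MOUNTPOINT`).
find_device_by_size() {
    awk -v sz="$1" '$2 == sz && $3 == "" { print $1; exit }'
}

# 20G volume -> /edw/logs (size mirrors the question's lsblk output)
dev=$(lsblk -bdn -o NAME,SIZE,MOUNTPOINT 2>/dev/null \
      | find_device_by_size $((20 * 1024 ** 3)))
if [ -n "$dev" ]; then
    sudo mkfs -t xfs "/dev/$dev"
    sudo mkdir -p /edw/logs
    sudo mount "/dev/$dev" /edw/logs
    echo "Mounted $dev on /edw/logs"
fi
```

The same lookup can be repeated for each size/mount-point pair. For identically sized volumes (the three 70G disks) size alone cannot distinguish them, so you would need another discriminator, such as the EBS volume ID symlinks under /dev/disk/by-id.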

/tmp size fluctuating on Heroku

I noticed when checking the size of the tmp folder (df -h /tmp) that it's quite large, ~70 GB, and fluctuates by a few GB every minute. I don't have anything in my application saving to the /tmp folder and am wondering what is going on. What is the "mask" file that appears under /tmp? Also, I was under the impression that the /tmp folder clears all its contents when a dyno is restarted, since it's ephemeral, so why is so much storage taken?
Below is the output from Heroku Bash, I ran df -h /tmp seconds apart.
~ $ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/evg0-evol0 376G 75G 282G 21% /tmp
~ $ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/evg0-evol0 376G 77G 280G 22% /tmp
~ $ cd /tmp
/tmp $ dir
mask

Bash: Multiple npm install in the background give error, 'No space left on device'

I am setting up docker on a google cloud compute machine, 1 vCPU and 3.75 GB ram.
If I simply run docker-compose up --build, it does work, but the process is sequential and slow. So I am using this bash script to build the images in the background and skip the usual sequential process.
command=$1
shift
jobsList=""
taskList[0]=""
i=0
# Replaces all the fluff with nothing, and we get our job id
function getJobId(){
    echo "$(echo "$*" | sed s/^[^0-9]*// | sed s/[^0-9].*$//)"
}
for task in "$@"
do
    echo "Command is $command $task"
    docker-compose $command $task &> ${task}.text &
    lastJob=`getJobId $(jobs %%)`
    jobsList="$jobsList $lastJob"
    echo "jobsList is $jobsList"
    taskList[$i]="$command $task"
    i=$(($i + 1))
done
i=0
for job in $jobsList
do
    wait %$job
    echo "${taskList[$i]} completed with status $?"
    i=$(($i + 1))
done
and I use it in the following manner:
availableServices=$(docker-compose config --services)
while IFS='' read -r line || [[ -n "$line" ]]
do
services+=$(echo "$line ")
done <<<"$availableServices"
./runInParallel.sh build $services
I string together available services in docker-compose.yml, and pass it to my script.
But the issue is eventually all the processes fail with the following error:
npm WARN tar ENOSPC: no space left on device, write
Unhandled rejection Error: ENOSPC: no space left on device, write
I checked inodes, and on /dev/sda1 only 44% were used.
Here's my output for the command df -h:
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 892K 369M 1% /run
/dev/sda1 9.6G 9.1G 455M 96% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/loop0 55M 55M 0 100% /snap/google-cloud-sdk/64
/dev/loop2 55M 55M 0 100% /snap/google-cloud-sdk/62
/dev/loop1 55M 55M 0 100% /snap/google-cloud-sdk/63
/dev/loop3 79M 79M 0 100% /snap/go/3095
/dev/loop5 89M 89M 0 100% /snap/core/5897
/dev/loop4 90M 90M 0 100% /snap/core/6130
/dev/loop6 90M 90M 0 100% /snap/core/6034
/dev/sda15 105M 3.6M 101M 4% /boot/efi
tmpfs 370M 0 370M 0% /run/user/1001
and here's the output for df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 469499 385 469114 1% /dev
tmpfs 472727 592 472135 1% /run
/dev/sda1 1290240 636907 653333 50% /
tmpfs 472727 1 472726 1% /dev/shm
tmpfs 472727 8 472719 1% /run/lock
tmpfs 472727 18 472709 1% /sys/fs/cgroup
/dev/loop0 20782 20782 0 100% /snap/google-cloud-sdk/64
/dev/loop2 20680 20680 0 100% /snap/google-cloud-sdk/62
/dev/loop1 20738 20738 0 100% /snap/google-cloud-sdk/63
/dev/loop3 9417 9417 0 100% /snap/go/3095
/dev/loop5 12808 12808 0 100% /snap/core/5897
/dev/loop4 12810 12810 0 100% /snap/core/6130
/dev/loop6 12810 12810 0 100% /snap/core/6034
/dev/sda15 0 0 0 - /boot/efi
tmpfs 472727 10 472717 1% /run/user/1001
From your df -h output, the root filesystem (/dev/sda1) has only 455MB of free space.
Whenever you run docker build, the docker client (CLI) sends the entire contents of the Dockerfile's directory (the build context) to the docker daemon, which builds the image.
So, for example, if you have three services each with a 300MB build context, you can build them sequentially with 455MB of free space, but to build them all at the same time you need 300MB * 3 of free space for the docker daemon to cache and build the images.
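One way to shrink what each parallel build consumes (a sketch; the ignore patterns are typical examples, not taken from the question) is to exclude bulky paths from the build context with a .dockerignore file:

```shell
# Keep large directories out of the build context so the CLI doesn't
# send them to the daemon on every `docker build`.
cat > .dockerignore <<'EOF'
node_modules
.git
*.log
EOF

# Rough upper bound on the context size the daemon will receive
du -sh .
```

With node_modules and VCS history excluded, each service's context is typically a few MB instead of hundreds, making parallel builds far less likely to exhaust the free space on /.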

Memory error in Rails console on AWS

I logged into our production instance on AWS, and tried to go into Rails console:
bundle exec rails c production
But I'm getting the following error
There was an error while trying to load the gem 'mini_magick' (Bundler::GemRequireError)
Gem Load Error is: Cannot allocate memory - animate
When I run free I see there's no swap:
free
total used free shared buffers cached
Mem: 7659512 7515728 143784 408 1724 45604
-/+ buffers/cache: 7468400 191112
Swap: 0 0 0
df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 3824796 12 3824784 1% /dev
tmpfs 765952 376 765576 1% /run
/dev/xvda1 15341728 11289944 3323732 78% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 3829756 0 3829756 0% /run/shm
none 102400 0 102400 0% /run/user
/dev/xvdf 10190136 6750744 2898720 70% /mnt
Not sure what's causing this or how to resolve it. Any help is appreciated.
Thanks!
You can increase the EC2 instance's memory, or add swap space to it. First check the current state:
grep Mem /proc/meminfo
grep Swap /proc/meminfo
free
uname -a
Then create, activate, and verify a 512MB swap file at /swapfile1:
# Set up swap file at /swapfile1
sudo dd if=/dev/zero of=/swapfile1 bs=1M count=512
sudo chmod 600 /swapfile1
sudo mkswap /swapfile1
sudo swapon /swapfile1
# Verify
swapon -s
free
grep Swap /proc/meminfo

Use parted into a Chef recipe to build a partition

Just starting with Chef, I'm trying to convert my old bash provisioning scripts to something more modern and reliable, using Chef.
The first script is the one I used to build a partition and mount it to /opt.
This is the script: https://github.com/theclue/db2-vagrant/blob/master/provision_for_mount_disk.sh
#!/bin/bash
yum install -y parted
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary ext4 0% 100%
sleep 3
# The -m switch tells mkfs to reserve only 1% of the blocks for the super-user
mkfs.ext4 -m 1 /dev/sdb1
e2label /dev/sdb1 "opt"
######### mount sdb1 to /opt ##############
chmod 777 /opt
mount /dev/sdb1 /opt
chmod 777 /opt
echo '/dev/sdb1 /opt ext4 defaults 0 0' >> /etc/fstab
I found a parted recipe here, but it doesn't seem to support all the parameters I need (0% and 100%, to name two), and anyway I've no idea how to do the formatting/mounting block.
Any idea?
