I noticed when checking the size of the /tmp folder (df -h /tmp) that it's quite large (~70 GB) and fluctuates by a few GB every minute. My application doesn't save anything to /tmp, so I'm wondering what is going on. What is the "mask" file that appears under /tmp? Also, I was under the impression that /tmp is ephemeral and all of its contents are cleared when a dyno restarts, so why is so much storage in use?
Below is the output from Heroku Bash; I ran df -h /tmp a few seconds apart.
~ $ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/evg0-evol0 376G 75G 282G 21% /tmp
~ $ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/evg0-evol0 376G 77G 280G 22% /tmp
~ $ cd /tmp
/tmp $ dir
mask
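One way to see how much of that space is actually yours (a diagnostic sketch; it assumes df is reporting on the whole underlying filesystem rather than just this dyno's files, which the /dev/mapper device name above suggests): compare df against du, which measures only the directory's real contents.
# df reports usage for the entire backing filesystem
df -h /tmp
# du measures only what actually lives under /tmp in this dyno;
# if this is tiny while df shows ~70 GB, the space isn't your application's
du -sh /tmp 2>/dev/null
du -ah /tmp 2>/dev/null | sort -rh | head -n 20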
Related
After adding volumes to an EC2 instance using Ansible, how can I mount these devices by size to the desired mount points, using a shell script that I will pass via user_data?
[ec2-user@xxx ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:4 0 200G 0 disk
├─nvme0n1p1 259:5 0 1M 0 part
└─nvme0n1p2 259:6 0 200G 0 part /
nvme1n1 259:0 0 70G 0 disk
nvme2n1 259:1 0 70G 0 disk
nvme3n1 259:3 0 70G 0 disk
nvme4n1 259:2 0 20G 0 disk
This is what I wrote initially, but I realized the NAME and SIZE are not always the same for the NVMe devices:
#!/bin/bash
VOLUMES=(nvme1n1 nvme2n1 nvme3n1 nvme4n1)
PATHS=(/abc/sfw /abc/hadoop /abc/log /kafka/data/sda)
for index in "${!VOLUMES[@]}"; do
    sudo mkfs -t xfs /dev/"${VOLUMES[$index]}"
    sudo mkdir -p "${PATHS[$index]}"
    sudo mount /dev/"${VOLUMES[$index]}" "${PATHS[$index]}"
    echo "Mounted ${VOLUMES[$index]} on ${PATHS[$index]}"
done
I am creating these volumes with Ansible and want the 20G volume mounted on /edw/logs, but the 20G volume randomly shows up as any of the devices (nvme1n1, nvme2n1, nvme3n1, or nvme4n1).
How should I write/modify my script?
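One way to make this robust (a sketch under assumptions, not a definitive fix): select devices by their size in bytes, as reported by lsblk -b, instead of by their names. The byte values below correspond to the 20G and 70G volumes in the lsblk output above, and the sketch assumes the three 70G volumes are interchangeable while only the 20G one needs a fixed mount point; adjust the paths to your real layout.
#!/bin/bash
# Assign mount points by device size rather than by (unstable) device name.
SEVENTY_G_PATHS=(/abc/sfw /abc/hadoop /abc/log)   # interchangeable 70G mounts
i=0
# -d: whole disks only, -n: no header, -b: sizes in bytes
while read -r name size; do
    case "$size" in
        21474836480)   # 20 GiB: the volume that must land on /edw/logs
            path=/edw/logs ;;
        75161927680)   # 70 GiB: take the next free path from the list
            path=${SEVENTY_G_PATHS[$((i++))]} ;;
        *)             # anything else (e.g. the 200G root disk): skip it
            continue ;;
    esac
    sudo mkfs -t xfs "/dev/$name"
    sudo mkdir -p "$path"
    sudo mount "/dev/$name" "$path"
    echo "Mounted $name ($size bytes) on $path"
done < <(lsblk -dnb -o NAME,SIZE)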
Just starting with Chef, I'm trying to convert my old bash provisioning scripts to something more modern and reliable.
The first script is the one I used to build a partition and mount it on /opt.
This is the script: https://github.com/theclue/db2-vagrant/blob/master/provision_for_mount_disk.sh
#!/bin/bash
yum install -y parted
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary ext4 0% 100%
sleep 3
# the -m switch tells mkfs to reserve only 1% of the blocks for the super-user
mkfs.ext4 -m 1 /dev/sdb1
e2label /dev/sdb1 "opt"
######### mount sdb1 to /opt ##############
chmod 777 /opt
mount /dev/sdb1 /opt
chmod 777 /opt
echo '/dev/sdb1 /opt ext4 defaults 0 0' >> /etc/fstab
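Incidentally, since the script already labels the filesystem with e2label, the fstab entry could mount by label instead of by device name (a small optional tweak, not part of the original script), which keeps working even if the disk is ever renumbered:
# mount by the "opt" label set above instead of the raw device name
echo 'LABEL=opt /opt ext4 defaults 0 0' >> /etc/fstab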
I found a parted recipe here, but it doesn't seem to support all the parameters I need (0% and 100%, to name two), and in any case I have no idea how to write the formatting/mounting block.
Any ideas?
I wanted to limit luakit to a maximum of 150 MB of virtual memory. Here is my shell script:
#!/bin/bash
# limit virtual memory to 150 MB (ulimit -v takes KiB, so 150 * 1024)
ulimit -H -v 153600
while true; do
    startx /usr/bin/luakit -U -- -s 0 dpms
done
But when memory usage rises above 150 MB (the VIRT column in htop), nothing happens.
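A quick diagnostic (a sketch, not a fix): check whether the limit actually reached the running luakit process by reading /proc/<pid>/limits. If "Max address space" shows unlimited there, the limit is being lost somewhere between this script and the final process.
#!/bin/bash
# Confirm the ulimit was inherited by luakit.
pid=$(pgrep -x luakit | head -n 1)
if [ -n "$pid" ]; then
    grep "Max address space" "/proc/$pid/limits"
else
    echo "luakit is not running" >&2
fi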
I am new to Vagrant. I am following the instructions at http://gettingstartedwithdjango.com/en/lessons/introduction-and-launch/
I am getting the following error when running the sudo ./postinstall.sh script:
+ apt-get -y clean
+ rm -f /var/lib/dhcp3/*
+ rm /etc/udev/rules.d/70-persistent-net.rules
rm: cannot remove `/etc/udev/rules.d/70-persistent-net.rules': Is a directory
+ mkdir /etc/udev/rules.d/70-persistent-net.rules
mkdir: cannot create directory `/etc/udev/rules.d/70-persistent-net.rules': File exists
+ rm -rf /dev/.udev/
+ rm /lib/udev/rules.d/75-persistent-net-generator.rules
rm: cannot remove `/lib/udev/rules.d/75-persistent-net-generator.rules': No such file or directory
+ rm -f /home/vagrant/{*.iso,postinstall*.sh}
+ dd if=/dev/zero of=/EMPTY bs=1M
dd: writing `/EMPTY': No space left on device
78504+0 records in
78503+0 records out
82316406784 bytes (82 GB) copied, 105.122 s, 783 MB/s
+ rm -f /EMPTY
+ exit
But I seem to have enough space:
vagrant@precise64:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/precise64-root 79G 2.3G 73G 3% /
udev 174M 0 174M 0% /dev
tmpfs 74M 272K 73M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 183M 0 183M 0% /run/shm
/dev/sda1 228M 25M 192M 12% /boot
/vagrant 220G 91G 130G 42% /vagrant
/tmp/vagrant-chef-1/chef-solo-1/cookbooks 220G 91G 130G 42% /tmp/vagrant-chef-1/chef-solo-1/cookbooks
/tmp/vagrant-chef-1/chef-solo-2/cookbooks 220G 91G 130G 42% /tmp/vagrant-chef-1/chef-solo-2/cookbooks
Can somebody please help? Thank you.
It's supposed to do this :) It's making your virtual disk as small as possible, since it is thinly provisioned.
Writing a file full of zeros until the disk is full clears the unused blocks, so the file representing the VM's disk can shrink down to the size of the actual data on the disk.
The message comes from the following statement:
dd if=/dev/zero of=/EMPTY bs=1M
If you don't specify count=<some value>, dd continues until it hits the end of the device. So the command above creates a file called EMPTY under / that spans the whole partition, hence the "No space left on device" error.
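For reference, a hedged sketch of both patterns: the deliberate unbounded zero-fill (where the "No space left on device" message is expected and harmless), and a bounded write with count= for when you actually want a file of a fixed size.
#!/bin/bash
# Deliberate zero-fill for shrinking a thin-provisioned disk: dd is
# expected to stop with "No space left on device"; remove the file after.
dd if=/dev/zero of=/EMPTY bs=1M || true
sync
rm -f /EMPTY
# Bounded variant: stop after 1 GiB instead of filling the disk.
dd if=/dev/zero of=/tmp/zeros bs=1M count=1024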
I have two nodes, and as an experiment I installed GlusterFS, created a volume, and successfully mounted it on each node. But if I create a file on node1, it doesn't show up on node2; the two nodes behave as if they are separate.
node1
10.101.140.10:/nova-gluster-vol
2.0G 820M 1.2G 41% /mnt
node2
10.101.140.10:/nova-gluster-vol
2.0G 33M 2.0G 2% /mnt
volume heal info split-brain
$ sudo gluster volume heal nova-gluster-vol info split-brain
Gathering Heal info on volume nova-gluster-vol has been successful
Brick 10.101.140.10:/brick1/sdb
Number of entries: 0
Brick 10.101.140.20:/brick1/sdb
Number of entries: 0
test
node1
$ echo "TEST" > /mnt/node1
$ ls -l /mnt/node1
-rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1
node2 (the file isn't there, even though /mnt is supposed to be the shared mount)
$ ls -l /mnt/node1
ls: cannot access /mnt/node1: No such file or directory
What am I missing?
Opening the brick port in iptables solved my problem:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152 -j ACCEPT
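More generally (a hedged sketch; the exact brick ports depend on the GlusterFS version and the number of bricks per node), both the management daemon ports and one port per brick need to be reachable between the nodes:
#!/bin/bash
# glusterd management ports
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
# brick ports: one per brick, starting at 49152 in GlusterFS 3.4+
# (widen the range to match the number of bricks on the node)
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49155 -j ACCEPT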