I know that the maximum slug size allowed is 200 MB. But what is the maximum disk space you can use per instance? Say I'm downloading a couple of files when the node process is running.
heroku run bash
~ $ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
- 620G 6.1G 583G 2% /tmp
You have approximately 620 GB in the /tmp folder.
Other folders don't really matter, as they're read-only anyway.
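If the node process needs to download large files, it may be worth checking the free space on /tmp before starting; here is a minimal hedged sketch (the 1 GB threshold is just an illustrative value):
# available space on /tmp in 1K blocks (POSIX output format keeps it on one line)
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt 1048576 ]; then
    echo "less than ~1 GB free on /tmp, skipping download" >&2
    exit 1
fi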
When I ran df on one of my instances just now (Cedar-14), I got 304G total and 240G available. Still a lot, but less than the answer above reports, so it's worth checking your specific instance.
~ $ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/evg0-evol0 304G 49G 240G 17% /tmp
I think the total size for the whole server is 620 GB. You can run df -k . on a free account; I got 394267100 1K-blocks, which is roughly 400 GB (df -k reports 1-kilobyte blocks, not bytes).
I had to create a random file of 10 GB, which I can do using dd or fallocate, but the size shown by du -sh is twice what I created:
$ dd bs=1MB count=10000 if=/dev/zero of=foo
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB, 9.3 GiB) copied, 4.78419 s, 2.1 GB/s
$ du -sh foo
19G foo
$ ls -sh foo
19G foo
$ fallocate -l 10G bar
$ du -sh bar
20G bar
$ ls -sh bar
20G bar
Can someone please explain this apparent discrepancy to me?
The Wikipedia article on GPFS mentions:
The system stores data on standard block storage volumes, but includes an internal RAID layer that can virtualize those volumes for redundancy and parallel access much like a RAID block storage system.
I conclude that there is at least one non-visible duplicate of every file, so each file actually uses twice as much space as its visible content; the underlying RAID layer imposes the double usage.
That would explain it: I have created a similarly massive file for other purposes, also using dd, on an ext4 filesystem, and there the OS reports a file size matching what dd created, as intended (no RAID in effect on my drive).
The fact that you indicate stat does report the correct file size, matching what dd wrote, is consistent with the explanation above.
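To separate the file's logical (apparent) size from the space the filesystem actually allocates for it, a hedged check using GNU stat and du (foo is the file created in the question; the format sequences assume GNU coreutils):
# logical size in bytes vs. blocks actually allocated by the filesystem
stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' foo
du -sh --apparent-size foo   # reports the logical size (about 10 GB here)
du -sh foo                   # reports the space charged on disk (19G on this GPFS mount)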
I've been running into a problem.
I have a folder inside /tmp called schemajs.
This folder comes from a project in Jenkins.
Running the command du -ahx in it shows this:
15G .
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517089398135/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517089398135
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517087611935/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517087611935
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517085797988/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517085797988
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517084059192/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517084059192
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517082197124/schemajs-2.1.1-linux-x86_64
The dot (the directory itself) shows 15G, although there don't seem to be enough files in the folder to justify this amount of space.
The command ls -lha shows several entries of 4.0K, with a total of 32M.
If I use vim .., it shows more than 7500 files.
What could be happening in this particular case?
Platform: CentOS 6.9
4.0K is the size of a directory, and from the du output your /tmp directory is full of directories (named, for instance, schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517089398135). Inside those directories are the files occupying the disk space: it is always the same 66 MB tree named
schemajs-2.1.1-linux-x86_64
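If those timestamped extract directories are just leftovers from the Jenkins job (which the naming suggests, but verify before deleting anything), a hedged cleanup sketch that removes the ones not touched for a day; the path assumes the extract directories live directly under /tmp/schemajs as in the du output:
# list what would be removed first
find /tmp/schemajs -maxdepth 1 -type d -name 'schemajs-*-extract-*' -mtime +1
# then actually remove them
find /tmp/schemajs -maxdepth 1 -type d -name 'schemajs-*-extract-*' -mtime +1 -exec rm -rf {} +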
Today I started getting errors on simple operations, like creating small files in vim; bash completion started to complain as well.
Here is the result of df -h:
vagrant@machine:/vagrant$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 38G 249M 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 12K 2.0G 1% /dev
tmpfs 396M 396K 395M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 148K 876K 15% /tmp
192.168.50.1:/Users/nha/repo/assets 233G 141G 93G 61% /var/www/assets
vagrant 233G 141G 93G 61% /vagrant
So apparently / doesn't have any space left? Isn't that weird, since I have free space on the other filesystems (or am I misreading something)?
How do I get more space on my VM?
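Before growing the disk, it may help to confirm what is actually filling /; a minimal hedged check run inside the VM:
# largest directories on the root filesystem only (-x does not cross mount points)
sudo du -xh / 2>/dev/null | sort -rh | head -n 20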
Even though you see free space on the shared host filesystems, the VM's own disk is limited in size. There are a couple of steps required in order to increase it:
first, vagrant halt to close your VM
resize disk
VBoxManage clonehd box-disk1.vmdk box-disk1.vdi --format vdi
VBoxManage modifyhd box-disk1.vdi --resize 50000
start VirtualBox and change the VM's configuration to attach the new disk
use fdisk to partition the new space
You need to create a new partition covering the new space, so first start the VM and log in as the super user:
vagrant up && vagrant ssh
su -
The commands (as illustrated on my instance) are:
[root@oracle ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 52.4 GB, 52428800000 bytes
255 heads, 63 sectors/track, 6374 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00041a53
Device Boot Start End Blocks Id System
/dev/sda1 * 1 39 307200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 39 2611 20663296 8e Linux LVM
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (2611-6374, default 2611):
Using default value 2611
Last cylinder, +cylinders or +size{K,M,G} (2611-6374, default 6374):
Using default value 6374
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@oracle ~]#
Note: you might need to use a device other than /dev/sda depending on your configuration, and the logical volume paths below (/dev/linux/...) should match your own volume group and the volume you want to grow.
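As the fdisk warning above says, the kernel keeps using the old partition table until a reboot or a re-read; a hedged way to re-read it without rebooting (partprobe may not be installed on every box, in which case a reboot works too):
partprobe /dev/sda    # or: reboot, if partprobe reports the device as busy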
create the LVM physical volume on the new partition and extend the volume group (again logged in as the super user, su -)
su -
[root@oracle ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 linux lvm2 a-- 19.70g 0
[root@oracle ~]# pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created
[root@oracle ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 linux lvm2 a-- 19.70g 0
/dev/sda3 lvm2 a-- 28.83g 28.83g
[root@oracle ~]# vgextend linux /dev/sda3
Volume group "linux" successfully extended
[root@oracle ~]# lvextend -l +100%FREE /dev/linux/home
[root@oracle ~]# resize2fs /dev/linux/home
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/linux/home is mounted on /home; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/linux/home to 7347200 (4k) blocks.
The filesystem on /dev/linux/home is now 7347200 blocks long.
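A hedged way to confirm that the extension worked (the paths match the example output above; substitute your own volume group and mount point):
df -h /home           # the resized filesystem should now show the additional space
lvs                   # logical volume sizes after lvextend
vgs                   # free space left in the volume group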
You can increase the space in your box without losing data or creating new partitions.
Halt your VM;
Go to /home_dir/VirtualBox VMs
Convert the disk file from .vmdk to .vdi, then use the commands from the answer above to increase its size.
Convert the file back to .vmdk and rename it as needed.
Attach an extended disk to your VM.
VBoxManage storageattach <your_box_name> --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium new_extended_file.vmdk
In your VirtualBox application go to Your_VM -> Settings -> Storage. Click on the controller and choose 'add new disk' below. Choose from existing disks the one you have just expanded.
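To double-check from the host that the disk really has the new capacity before booting the VM, a hedged sketch (new_extended_file.vmdk is the placeholder name used above):
VBoxManage showmediuminfo disk new_extended_file.vmdk   # look at the Capacity line
VBoxManage list hdds                                    # all registered disks and their sizes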
Here are step-by-step instructions for expanding the space in your vagrant box or virtual machine.
The easiest way to increase the size of the vagrant box is with the vagrant-disksize plugin.
In your vagrant root folder, run vagrant plugin install vagrant-disksize
Then add the new size to the Vagrantfile:
Vagrant.configure('2') do |config|
...
config.disksize.size = '60GB'
end
Then vagrant halt and vagrant up.
vagrant reload will not work.
I have read that the plugin has issues shrinking disk size if you overshoot.
EDIT:
On Mac, this plugin also resized the partition within the Guest OS (Ubuntu in my case).
On Windows, Vagrant reserves the space on the host OS (it enlarges the disk), but you can't use the space until resizing the partition from within the Guest OS.
I used GParted, but other solutions look simpler, such as: https://nguyenhoa93.github.io/Increase-VM-Partition
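As a command-line alternative to GParted inside an Ubuntu guest, a hedged sketch (it assumes the root filesystem is ext4 directly on /dev/sda1 and that the cloud-guest-utils package providing growpart is available; adjust device names for your box):
sudo apt-get install -y cloud-guest-utils   # provides growpart
sudo growpart /dev/sda 1                    # grow partition 1 to fill the enlarged disk
sudo resize2fs /dev/sda1                    # grow the ext4 filesystem online
df -h /                                     # verify the new size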
I sometimes have to destroy the machine and build it up again, which in my case frees up quite a lot of space. You can do that by running:
vagrant destroy
vagrant up
Please note this will result in database data being lost.
I have a Hadoop cluster that we assume is performing pretty "badly". The nodes are pretty beefy: 24 cores, 60+ GB of RAM, etc. We are wondering whether some basic Linux/Hadoop default configuration is preventing Hadoop from fully utilizing our hardware.
There is a post here that describes a few possibilities that I think might apply.
I tried logging in to the namenode as root, as hdfs, and as myself, and looked at the output of lsof as well as the ulimit settings. Here is the output; can anyone help me understand why the setting doesn't match the number of open files?
For example, when I logged in as root, the lsof output looked like this:
[root@box ~]# lsof | awk '{print $3}' | sort | uniq -c | sort -nr
7256 cloudera-scm
3910 root
2173 oracle
1886 hbase
1575 hue
1180 hive
801 mapred
470 oozie
427 yarn
418 hdfs
244 oragrid
241 zookeeper
94 postfix
87 httpfs
...
But when I check out the ulimit output, it looks like this:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 806018
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I assumed there should be no more than 1024 files opened by one user; however, the lsof output shows 7000+ files opened by a single user. Can anyone help explain what is going on here?
Correct me if I have misunderstood the relation between ulimit and lsof.
Many thanks!
You need to check the limits of the process itself. They may be different from those of your shell session:
Ex:
[root@ADWEB_HAPROXY3 ~]# cat /proc/$(pidof haproxy)/limits | grep open
Max open files 65536 65536 files
[root@ADWEB_HAPROXY3 ~]# ulimit -n
4096
In my case haproxy has a directive in its config file to change the maximum number of open files; there should be something similar for Hadoop as well.
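The same per-process check should work for the Hadoop daemons; a hedged sketch (the NameNode class name is only an example, substitute whichever daemon you care about). Note also that ulimit -n is a per-process limit, not a per-user one, and lsof lists things that are not file descriptors (memory-mapped libraries, current directories, and so on), so its counts are not directly comparable to the limit:
# effective open-file limit of the running NameNode process (run as root or the daemon user)
pid=$(pgrep -f 'org.apache.hadoop.hdfs.server.namenode.NameNode' | head -n 1)
grep 'Max open files' /proc/"$pid"/limits
# number of file descriptors it actually holds, which is what the limit constrains
ls /proc/"$pid"/fd | wc -l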
I had a very similar issue, which caused one of the cluster's YARN Timeline servers to stop after reaching the magical 1024-file limit and crashing with "too many open files" errors.
After some investigation it turned out that it had serious issues dealing with too many files in the Timeline server's LevelDB. For some reason YARN ignored the yarn.timeline-service.entity-group-fs-store.retain-seconds setting (by default it's set to 7 days, i.e. 604800 seconds). We had LevelDB files dating back more than a month.
What seriously helped was applying a fix described in here: https://community.hortonworks.com/articles/48735/application-timeline-server-manage-the-size-of-the.html
Basically, there are a couple of options I tried:
Shrink the TTL (time to live) settings. First enable TTL:
<property>
<description>Enable age off of timeline store data.</description>
<name>yarn.timeline-service.ttl-enable</name>
<value>true</value>
</property>
Then set yarn.timeline-service.ttl-ms (set it to some low value for a period of time):
<property>
<description>Time to live for timeline store data in milliseconds.</description>
<name>yarn.timeline-service.ttl-ms</name>
<value>604800000</value>
</property>
The second option, as described, is to stop the Timeline server, delete the whole LevelDB, and restart the server. This will start the ATS database from scratch. It works fine if the other options failed.
To do this, find the database location from yarn.timeline-service.leveldb-timeline-store.path, back it up, and remove all subfolders from it. This operation requires root access to the server where the Timeline server runs.
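A hedged sketch of that second option (the path /hadoop/yarn/timeline is only an example; use whatever yarn.timeline-service.leveldb-timeline-store.path is set to on your cluster, and stop the Timeline server with your own distro's tooling first):
ts_path=/hadoop/yarn/timeline                        # example value, check your config
tar czf /root/ats-leveldb-backup.tar.gz "$ts_path"   # back up the LevelDB data
rm -rf "$ts_path"/*                                  # wipe it
# restart the Timeline server afterwards; it recreates the database from scratch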
Hope it helps.
I have very little space left on /, but at the same time there is plenty on the /mnt volume. How can I use /mnt and move all my stuff there?
# df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 2064208 1947044 12308 100% /
/dev/sda2 153899044 192212 145889208 1% /mnt
none 873880 0 873880 0% /dev/shm
Also, what is the /mnt volume (/dev/sda2) for? Is it an EBS volume? Do I get charged for using it if I move my data/binaries over?
Another solution I am looking at is resizing the default / volume (/dev/sda1) to a bigger size. Then the question would be: is that possible, legitimate, and free of charge?
I wrote an article for you on how to resize the root EBS volume:
http://alestic.com/2010/02/ec2-resize-running-ebs-root
I don't recommend using /mnt ephemeral storage except for temporary, unimportant files. The content of ephemeral storage is lost when an instance is stopped or fails.
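If you want to confirm from inside the instance that /mnt is ephemeral (instance-store) rather than EBS, a hedged sketch against the EC2 instance metadata service:
# block device mappings known to the metadata service
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
# an "ephemeral0" entry pointing at the device mounted on /mnt indicates instance storage
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0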