Running out of inodes on a docker volume - macos

I have the following docker-compose.yml file:
web:
  build: .
  ports:
    - "4200:4200"
    - "35729:35729"
  volumes:
    - ..:/code
    - ../home:/home/dev
which maps the two volumes above. When I log in to my VM and run df -i, I see:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 1218224 509534 708690 42% /
tmpfs 256337 18 256319 1% /dev
shm 256337 1 256336 1% /dev/shm
tmpfs 256337 11 256326 1% /sys/fs/cgroup
none 1000 0 1000 0% /code
none 1000 0 1000 0% /home/dev
/dev/sda1 1218224 509534 708690 42% /etc/resolv.conf
/dev/sda1 1218224 509534 708690 42% /etc/hostname
/dev/sda1 1218224 509534 708690 42% /etc/hosts
tmpfs 256337 18 256319 1% /proc/kcore
tmpfs 256337 18 256319 1% /proc/timer_stats
As you can see, /code and /home/dev (my two volumes) only have 1000 inodes, so when I run my build process and it ends up creating a ton of files, I get an error that I don't have enough inodes.
Host = OS X
Guest = CentOS 6.5
Using VirtualBox
My question is: how do I assign more inodes to my data volumes /code and /home/dev above?

I'm looking for a similar solution and so far, I've found this post to have some useful information in it:
how to free inode usage
Also, according to this post there doesn't seem to be a dynamic way of allocating inodes:
how can I increase the number of inodes ...
And finally, there is this line in the documentation:
it is not possible to expand the number of inodes on a filesystem after it is created
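Since the quoted line means the inode count is fixed when a filesystem is created, the only lever is at mkfs time. A minimal sketch of how that count is chosen with mkfs.ext4 (the device name /dev/sdX and the numbers are placeholders, not values specific to this Docker setup):
# set an explicit inode count when the filesystem is created
mkfs.ext4 -N 2000000 /dev/sdX
# or allocate one inode per 4 KiB of space instead of the default ratio
mkfs.ext4 -i 4096 /dev/sdX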

Related

Splitting the output of a command and processing the split data further in Ansible

I have a situation as below.
I have to run a command that fetches data related to two nodes (let us say NODE-1 and NODE-2; please note that there is no way to fetch the data separately for NODE-1 and NODE-2). The sample data is as shown below.
#################### NODE-1 ####################
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 8.0K 12G 1% /dev
root 4.0G 2.4G 1.7G 59% /
tmpfs 17G 796K 17G 1% /dev/user1
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/user1
tmpfs 2.4G 0 2.4G 0% /run/user1/0
/dev/sda1 3.9G 158M 3.5G 5% /boot
/dev/sda2 3.9G 158M 3.5G 5% /var/log
/dev/sda3 9.8G 1.1G 8.2G 12% /var/opt
none 64M 0 64M 0% /var/opt/group1
none 64M 0 64M 0% /var/opt/group2
#################### NODE-2 ####################
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 8.0K 12G 1% /dev
root 4.0G 2.4G 1.7G 59% /
tmpfs 17G 796K 17G 1% /dev/user1
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/user1
tmpfs 2.4G 0 2.4G 0% /run/user1/0
/dev/sda1 3.9G 158M 3.5G 5% /boot
/dev/sda2 3.9G 158M 3.5G 5% /var/log
/dev/sda3 9.8G 1.1G 8.2G 12% /var/opt
none 64M 0 64M 0% /var/opt/group1
none 64M 0 64M 0% /var/opt/group2
It is required to process the data of these two nodes separately and draw and print conclusions individually based on their respective values.
I am unable to find a way to separate the data of these two nodes and save them as separate variables or files to process further.
PS: The number of fields (rows) in the data of the nodes is not constant.
Can I get a way (at least a hint) to do it using Ansible?
I suggest you use a custom filter (splitregex): put the file below
in a folder named filter_plugins (at the same level as your playbook).
#!/usr/bin/python
import re

class FilterModule(object):
    def filters(self):
        return {
            'splitregex': self.splitregex
        }

    def splitregex(self, l1, pattern):
        # split on the header lines and drop the empty strings the split leaves behind;
        # returning a real list keeps this Python 3 safe (filter() alone would be lazy)
        return [part for part in re.split(pattern, l1, flags=re.MULTILINE) if part]
You use it in this playbook (the file datas.txt contains your initial result):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: read file
      shell: cat datas.txt
      register: cc

    - name: show item
      debug:
        msg: "{{ item }}"
      loop: "{{ parts }}"
      vars:
        parts: "{{ cc.stdout | splitregex('^#+ NODE-[0-9]+ #+$') }}"
parts is a list containing your initial file cut along the pattern.
You could use '^#+ NODE-[0-9]+ #+$\n' if you want to remove the \n at the beginning of each part.
If you are using roles:
for a role named test (for example), create a folder roles/test/filter_plugins and put the Python file in it.
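If you also want to keep each node's block as its own variable for further processing (as the question asks), one option is to build a dictionary keyed by node. A minimal sketch, assuming the parts come back in NODE-1, NODE-2 order and using node_data purely as an illustrative fact name:
    - name: store each part under its own key
      set_fact:
        node_data: "{{ node_data | default({}) | combine({'NODE-' ~ (idx + 1): item.splitlines()}) }}"
      loop: "{{ parts }}"
      loop_control:
        index_var: idx
node_data['NODE-1'] and node_data['NODE-2'] then each hold a list of df lines that can be evaluated independently.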

Resize Amazon EC2 volume without AMI

I have a server on AWS EC2 on the default free tier. How can I increase the size of the volume without using an AMI?
Here are the steps that will help you resize an EC2 volume without an AMI (snapshot).
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, and IOPS. You can change any or all of these settings in a single action. Set new configuration values as follows:
To modify the type, choose a value for Volume Type.
To modify the size, enter an allowed integer value for Size.
If you chose Provisioned IOPS (IO1) as your volume type, enter an allowed integer value for IOPS.
After you have specified all of the modifications to apply, choose Modify, Yes.
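If you prefer the command line, the same change can be made with the AWS CLI; a minimal sketch (the volume ID and the 20 GiB target size are placeholders):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20
# watch the modification state until it reports completed
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0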
Modifying volume size has no practical effect until you also extend the volume's file system to make use of the new storage capacity.
After that you have to run these commands in the EC2 terminal:
ubuntu@ip-192-168-1-26:~$ sudo su
root@ip-192-168-1-26:/home/ubuntu# df -h
Filesystem Size Used Avail Use% Mounted on
udev 487M 0 487M 0% /dev
tmpfs 100M 12M 88M 12% /run
/dev/xvda1 7.8G 5.5G 2.0G 74% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/999
tmpfs 100M 0 100M 0% /run/user/1000
root@ip-192-168-1-26:/home/ubuntu# sudo file -s /dev/xvd*
/dev/xvda: DOS/MBR boot sector
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=e6d1a865-817b-456f-99e7-118135343487, volume name "cloudimg-rootfs" (needs journal recovery) (extents) (large files) (huge files)
root@ip-192-168-1-26:/home/ubuntu# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
root@ip-192-168-1-26:/home/ubuntu# sudo growpart /dev/xvda 1
CHANGED: partition=1 start=16065 old: size=16761118 end=16777183 new: size=33538334,end=33554399
root@ip-192-168-1-26:/home/ubuntu# sudo resize2fs /dev/xvda1
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/xvda1 is now 4192291 (4k) blocks long.
These commands will resize the EC2 volume.
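On newer Nitro-based instance types the device names differ, and an XFS root filesystem is grown with a different tool; a hedged sketch of the equivalent steps under those assumptions:
# Nitro instances expose the root volume as /dev/nvme0n1 (partition 1 = /dev/nvme0n1p1)
sudo growpart /dev/nvme0n1 1
# for an XFS root filesystem, grow it via its mount point instead of resize2fs
sudo xfs_growfs /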

EC2 r3.xlarge storage space does not correspond to the documentation

I'm using Hadoop YARN on EC2 with r3.xlarge instances; I launched the instances from an AMI using the spark-ec2 scripts.
On https://aws.amazon.com/ec2/instance-types/, the specifications of r3.xlarge are the following:
vCPU: 4
Mem: 30.5 GiB
Storage: 1 x 80 GB
The memory is right; the free command gives me this result:
[root@ip-xxx-xx-xx-xxx ~]$ free -g
total used free shared buffers cached
Mem: 29 2 27 0 0 1
But the storage does not correspond to what is indicated.
[root@ip-xxx-xx-xx-xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.9G 783M 91% /
devtmpfs 15G 64K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
Is it normal to have only ~40 GB and not the 80 GB specified in the documentation? Or is this because I launched the instance from an AMI?
The two tmpfs directories aren't where your missing 80 GB is. This looks like a Debian/Ubuntu distro. I can reproduce something similar to your df:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
none 15G 0 15G 0% /run/shm
Note /dev/xvda1. That's your boot partition, which is on EBS. Your 80 GB SSD is actually at /dev/xvdb. You need to make use of it:
mkdir -p /mnt/ssd && mkfs.ext4 /dev/xvdb \
&& echo '/dev/xvdb /mnt/ssd auto defaults,nobootwait 0 0' >> /etc/fstab \
&& mount /mnt/ssd
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
/dev/xvdb 74G 52M 70G 1% /mnt/ssd
Congrats! You are now the proud owner of an 80 GB mount. Okay, not quite 80 GB. Let's get 80 GB:
$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 17G 13k 17G 1% /dev
tmpfs 3.3G 336k 3.3G 1% /run
/dev/xvda1 8.4G 828M 7.1G 11% /
/dev/xvdb 80G 55M 76G 1% /mnt/ssd
Your filesystem is probably on EBS, not the instance storage that comes with r3.xlarge. This is the default for most AMIs. Note the size of the EBS volume is not part of the image. You can choose it when you create the instance.
Instance store is available on the larger instance types as shown here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
AMI images have two options for the root storage device. The most common are EBS images, which use EBS for the root device. Since EBS isn't locked to specific hardware, these instances are much more flexible.
The other option is an AMI with an instance store root storage device. However, you lose the ability to stop the instance without terminating, change the instance type, resize the storage device, and manage the storage separately from the instance itself.
Instance store AMIs are often tagged with S3. For example: amzn-ami-hvm-2016.03.0.x86_64-s3 (ami-152bc275).
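If you want to check what the AMI actually mapped onto your running instance, the block-device mappings can be read from the instance metadata service; a minimal sketch (this is the standard IMDSv1 endpoint, no extra tooling assumed):
# typically lists entries such as ami, root, and ephemeral0 for the instance-store disk
curl http://169.254.169.254/latest/meta-data/block-device-mapping/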

EC2 error: cannot create temp file for here-document: Read-only file system

Looks like my Ubuntu 14.04 EC2 made the fs read-only.
cd /var/ (pressing tab for autocomplete)
cannot create temp file for here-document: Read-only file system
But I have plenty of free space and memory is not full either:
Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Wed Feb 3 14:40:58 UTC 2016
System load: 0.0 Processes: 126
Usage of /: 14.9% of 11.67GB Users logged in: 0
Memory usage: 19% IP address for eth0: 172.31.15.38
Swap usage: 0%
df -hi:
/dev/xvda1 768K 85K 684K 12% /
none 251K 2 251K 1% /sys/fs/cgroup
udev 249K 387 249K 1% /dev
tmpfs 251K 309 250K 1% /run
none 251K 1 251K 1% /run/lock
none 251K 1 251K 1% /run/shm
none 251K 2 251K 1% /run/user
free:
total used free shared buffers cached
Mem: 2048484 1199420 849064 6248 180300 635596
-/+ buffers/cache: 383524 1664960
du -sch /tmp*
9.9M /tmp
9.9M total
What's the solution here? How can I fix the fs without losing my data?
Should I run:
mount -o remount,rw /
or should I reboot?
Thanks in advance!
Do you have btrfs filesystems? If so, when there's not enough space for more snapshots, the OS changes their properties to read-only (including /tmp). For me, the solution was to delete snapshots and disable snapper.
Use the following commands:
snapper list #shows a numbered list of snapshots
snapper delete nmbr #deletes snapshot number nmbr (retry after a reboot if it doesn't work at first)
Also, disable automatic snapshots by deleting corresponding files under /etc/cron.hourly, /etc/cron.daily, and so on.
I got the same issue and fixed it as below:
There was a bad argument given by someone in /etc/fstab.
Wrong entry in /etc/fstab:
[ec2-user@XXXXXXXXXXX ~]$ cat /etc/fstab
#
UUID=XXXXXX / xfs defaults,noation 1 1
and the corrected entry is as below:
[ec2-user@XXXXXXXXXXX ~]$ cat /etc/fstab
#
UUID=XXXXXX / xfs defaults,noatime 1 1
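Once the entry is corrected, the root filesystem can be put back into read-write mode without rebooting (a reboot works as well); a minimal sketch:
# remount the root filesystem read-write after fixing /etc/fstab
sudo mount -o remount,rw /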
Maybe you don't have write permission to the /tmp/ directory.
Check the permissions; they should look like this:
ls -ld /tmp
drwxrwxrwt 10 root root 4096 Jun 5 11:32 /tmp
You can fix the permissions with:
chmod a+rwxt /tmp

Can't decompress csv: "No space left on device", but using EC2 m3.2xlarge?

I'm attempting to decompress a CSV file on my EC2 instance. The instance should definitely be large enough, so I guess it has to do with partitioning, but I am new to that stuff and don't really understand the posts I've found here and here, or whether they apply to me. (I'm not using Hadoop, nor do I have a full "/tmp" folder.) The .csv.gz file is 1.6 GB and should be about 14 GB decompressed. Executing gzip -d data.csv.gz, I get the error gzip: data.csv: No space left on device, and df -h shows:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 2.8G 5.0G 36% /
devtmpfs 15G 56K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
Thanks for your help!
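The df output shows the root filesystem is only 7.8 GB, so a ~14 GB decompressed file cannot fit there regardless of the instance size. A minimal sketch of one workaround, assuming the instance-store volume has been formatted and mounted somewhere larger (for example /mnt/ssd, as in the r3.xlarge answer above):
# stream-decompress onto the larger mount instead of the 7.8 GB root volume
gzip -dc data.csv.gz > /mnt/ssd/data.csv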
