cannot expand partition - amazon-ec2

I am trying to expand /dev/xvda1 to 25 GB. Does anyone know how to do that?
[ec2-user@ip-XX-XXX-XX-XX ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  7.8G   47M 100% /
tmpfs           849M     0  849M   0% /dev/shm
[ec2-user@ip-XX-XXX-XX-XX ~]$ lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda1 202:1    0   25G  0 disk /
xvda3 202:3    0  896M  0 disk
When I try sudo umount /, it says that the device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1).)

Try:
sudo resize2fs /dev/xvda1
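Why resize2fs and no unmounting: lsblk already reports the partition at 25G while df reports the filesystem at 7.9G, so only the filesystem needs growing, and ext4 supports growing online while mounted. A sketch of that check, with the sizes hard-coded from the output above:

```shell
# Compare the partition size (from lsblk) with the filesystem size (from df)
# to see which layer still needs growing. Values taken from the question.
part_size="25G"   # lsblk SIZE for xvda1: the partition is already 25G
fs_size="7.9G"    # df -h Size for /dev/xvda1: the filesystem is still ~8G
if [ "$part_size" != "$fs_size" ]; then
  echo "filesystem lags partition: run 'sudo resize2fs /dev/xvda1' online"
fi
```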

Related

How to implement A/B dual copy scheme partitioning?

I am trying to implement an A+B dual copy scheme partition layout on my Avenger96 board. I am using the Yocto build system and a .wks file to create the partitions. My .wks file:
part fsbl1 --source rawcopy --sourceparams="file=u-boot-spl.stm32" --part-name "fsbl1" --ondisk mmcblk --align 1 --size 256k
part fsbl2 --source rawcopy --sourceparams="file=u-boot-spl.stm32" --part-name "fsbl2" --ondisk mmcblk --align 1 --size 256k
part ssbl --source rawcopy --sourceparams="file=u-boot.itb" --part-name "ssbl" --ondisk mmcblk --align 1 --size 2M
part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root_A --part-name "rootfs_A" --align 4096 --use-uuid --active --size 3G
part /rootfsB --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root_B --part-name "rootfs_B" --align 4096 --use-uuid --size 3G
bootloader --ptable gpt
I am able to build the .wic.xz image and copy it to an SD card, and it creates the two rootfs partitions.
But when I boot from this SD card, I can see that both partitions are mounted. For example, df -h shows /dev/root (which refers to the current active partition, /dev/mmcblk0p4) as well as /dev/mmcblk0p5 mounted at /rootfsB:
Filesystem Size Used Available Use% Mounted on
/dev/root 5.0G 1.1G 3.6G 23% /
devtmpfs 469.7M 0 469.7M 0% /dev
tmpfs 502.2M 0 502.2M 0% /dev/shm
tmpfs 502.2M 9.6M 492.5M 2% /run
tmpfs 502.2M 0 502.2M 0% /sys/fs/cgroup
tmpfs 502.2M 0 502.2M 0% /tmp
tmpfs 502.2M 16.0K 502.1M 0% /var/volatile
/dev/mmcblk0p5 5.0G 1.1G 3.6G 23% /rootfsB
tmpfs 100.4M 0 100.4M 0% /run/user/0
And boot messages also show:
Mounting /rootfsB...
Starting Start psplash boot splash screen...
[ OK ] Mounted /rootfsB.
Running mount from Linux user space also shows:
/dev/mmcblk0p4 on / type ext4 (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=480932k,nr_inodes=120233,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /var/volatile type tmpfs (rw,relatime)
/dev/mmcblk0p5 on /rootfsB type ext4 (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=102840k,mode=700)
Is this the expected behavior for A+B type partitioning?
Can anyone please let me know what could be the issue and how to resolve it?
Your help will be much appreciated.
Thanks in advance.
P.S: Please let me know if any info is missing here.
This is actually what I would expect. You want to boot from the "active" (A) partition, but you also want to be able to update the "passive" (B) partition, so it has to be accessible. After you have updated the passive (B) partition, you typically tell the bootloader to try booting from it. If that works, (B) becomes the active partition and (A) the passive one.
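A minimal sketch of that switch logic, assuming a hypothetical bootloader variable (`boot_slot` is an invented name, read and written with U-Boot's fw_printenv/fw_setenv tools in a real setup):

```shell
# Toggle the active slot after a successful update of the passive one.
# In a real system `active` would come from: fw_printenv -n boot_slot
active="A"                                   # hypothetical current value
if [ "$active" = "A" ]; then next="B"; else next="A"; fi
echo "active slot: $active -> $next"
# Then persist it, e.g.: fw_setenv boot_slot "$next"  (needs U-Boot env tools;
# not run here since it requires the board's environment partition)
```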

Resize Amazon EC2 volume without AMI

I have a server on aws-ec2 with the default free tier. How can I increase the size of the volume without using an AMI?
Here are the steps to resize an EC2 volume without an AMI (snapshots).
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, and IOPS. You can change any or all of these settings in a single action. Set new configuration values as follows:
To modify the type, choose a value for Volume Type.
To modify the size, enter an allowed integer value for Size.
If you chose Provisioned IOPS (IO1) as your volume type, enter an allowed integer value for IOPS.
After you have specified all of the modifications to apply, choose Modify, Yes.
Modifying volume size has no practical effect until you also extend the volume's file system to make use of the new storage capacity.
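The same console steps can be scripted with the AWS CLI. This sketch only prints the command instead of executing it, since running it needs credentials; the volume ID below is a placeholder, not a real volume:

```shell
# Build the CLI equivalent of Actions -> Modify Volume (dry sketch).
VOLUME_ID="vol-0123456789abcdef0"   # placeholder ID
NEW_SIZE_GIB=25
cmd="aws ec2 modify-volume --volume-id $VOLUME_ID --size $NEW_SIZE_GIB"
echo "$cmd"   # printed rather than executed
```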
After that, run these commands on the EC2 instance:
ubuntu@ip-192-168-1-26:~$ sudo su
root@ip-192-168-1-26:/home/ubuntu# df -h
Filesystem Size Used Avail Use% Mounted on
udev 487M 0 487M 0% /dev
tmpfs 100M 12M 88M 12% /run
/dev/xvda1 7.8G 5.5G 2.0G 74% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/999
tmpfs 100M 0 100M 0% /run/user/1000
root@ip-192-168-1-26:/home/ubuntu# sudo file -s /dev/xvd*
/dev/xvda: DOS/MBR boot sector
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=e6d1a865-817b-456f-99e7-118135343487, volume name "cloudimg-rootfs" (needs journal recovery) (extents) (large files) (huge files)
root@ip-192-168-1-26:/home/ubuntu# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
root@ip-192-168-1-26:/home/ubuntu# sudo growpart /dev/xvda 1
CHANGED: partition=1 start=16065 old: size=16761118 end=16777183 new: size=33538334,end=33554399
root@ip-192-168-1-26:/home/ubuntu# sudo resize2fs /dev/xvda1
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/xvda1 is now 4192291 (4k) blocks long.
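A quick sanity check on those numbers (sectors are 512 bytes; the ext4 blocks here are 4 KiB): the grown partition and the grown filesystem should come out nearly equal.

```shell
# Verify the growpart and resize2fs figures are consistent with each other.
part_bytes=$((33538334 * 512))    # new partition size in sectors, from growpart
fs_bytes=$((4192291 * 4096))      # new filesystem size in 4k blocks, from resize2fs
echo "partition=$part_bytes bytes, filesystem=$fs_bytes bytes"
# Both land at ~16 GiB; the filesystem is a few KiB smaller, as expected.
```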
These commands will resize the EC2 volume's filesystem to use the new capacity.

EC2 instance store volumes issue

I have created a c3.2xlarge EC2 instance with the instance store specified as 2 x 80 GB (160 GB). But when I run df -H, this is what I see, and there is not as much storage as specified:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 62k 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
/dev/xvda1 8.4G 1.1G 7.2G 14% /
I need an EC2 instance to have at least 80 gigs of storage, which instance should I choose?
Thanks for the points in the comments.
The problem was that I used the EC2 Management Console and didn't add the volumes when I created the cluster. I terminated that cluster, created a new one, and on the Storage page chose Add New Volume with the volume type Instance Store 0.
[]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 75G 0 disk /media/ephemeral0
Now the 80 Gig volume is there.

EC2 r3.xlarge storage space not correspond to the documentation

I'm using Hadoop YARN on EC2 over r3.xlarge instances, I launched the instances from an AMI using spark-ec2 scripts.
On https://aws.amazon.com/ec2/instance-types/, the specifications of r3.xlarge are the following:
vCPU: 4
Mem: 30.5 GiB
Storage: 1 x 80 GB
The memory is good; the free command gives me this result:
[root@ip-xxx-xx-xx-xxx ~]$ free -g
total used free shared buffers cached
Mem: 29 2 27 0 0 1
But the storage does not correspond to the indicated one:
[root@ip-xxx-xx-xx-xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.9G 783M 91% /
devtmpfs 15G 64K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
Is it normal to have only ~40 GB and not 80 GB as specified in the documentation? Or is this because I launched the instance from an AMI?
The two tmpfs directories aren't where your missing 80 GB is. This looks like a Debian/Ubuntu distro. I can reproduce something similar to your df:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
none 15G 0 15G 0% /run/shm
Note /dev/xvda1. That's your boot partition, which is on EBS. Your 80 GB SSD is actually at /dev/xvdb. You need to make use of it:
mkdir -p /mnt/ssd && mkfs.ext4 /dev/xvdb \
&& echo '/dev/xvdb /mnt/ssd auto defaults,nobootwait 0 0' >> /etc/fstab \
&& mount /mnt/ssd
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
/dev/xvdb 74G 52M 70G 1% /mnt/ssd
Congrats! You are now the proud owner of an 80 GB mount. Okay, not quite 80 GB. Let's get 80 GB with df -H, which reports decimal units:
$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 17G 13k 17G 1% /dev
tmpfs 3.3G 336k 3.3G 1% /run
/dev/xvda1 8.4G 828M 7.1G 11% /
/dev/xvdb 80G 55M 76G 1% /mnt/ssd
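The gap between 74G and 80G is purely units: df -h reports binary GiB, while df -H and the instance-type page use decimal GB. The conversion:

```shell
# 80 GB (decimal, as advertised) expressed in GiB (binary, as df -h shows)
bytes=$((80 * 1000 * 1000 * 1000))
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # prints "74 GiB"
```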
Your filesystem is probably on EBS, not the instance storage that comes with r3.xlarge. This is the default for most AMIs. Note the size of the EBS volume is not part of the image. You can choose it when you create the instance.
Instance store is available on the larger instance types as shown here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
AMI images have two options for the root storage device. The most common are EBS images, which use EBS for the root device. Since EBS isn't locked to specific hardware, these instances are much more flexible.
The other option is an AMI with an instance store root storage device. However, you lose the ability to stop the instance without terminating, change the instance type, resize the storage device, and manage the storage separately from the instance itself.
Instance store AMIs are often tagged with S3. For example: amzn-ami-hvm-2016.03.0.x86_64-s3 (ami-152bc275).

Can't decompress csv: "No space left on device", but using EC2 m3.2xlarge?

I'm attempting to decompress a csv file on my EC2 instance. The instance should definitely be large enough so I guess it has to do with partitioning, but I am new to that stuff and don't really understand the posts I've found here and here, or whether they apply to me. (I'm not using Hadoop nor do I have a full "/tmp" folder). The .csv.gz file is 1.6 GB and it should be 14 GB decompressed. Executing gzip -d data.csv.gz, I get the error gzip: data.csv: No space left on device, and df -h shows:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 2.8G 5.0G 36% /
devtmpfs 15G 56K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
Thanks for your help!
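One pre-check that would have caught this before gzip failed: compare the expected uncompressed size (gzip -l reports it) against the free space on the target filesystem. A sketch with the question's numbers hard-coded:

```shell
# Will a 14 GB file fit in the 5.0 GB available on /? (values from df -h above)
needed_gb=14   # uncompressed size; 'gzip -l data.csv.gz' gives the exact bytes
avail_gb=5     # Avail for / from df -h
if [ "$needed_gb" -gt "$avail_gb" ]; then
  echo "not enough space: decompress onto a larger volume instead"
fi
```

For example, with a bigger volume mounted elsewhere, `gzip -dc data.csv.gz > /path/to/bigger/volume/data.csv` writes the output there while leaving the archive in place.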