How do I reduce my EBS volume capacity without losing data? [closed] - amazon-ec2

I would like to reduce EBS volume capacity without losing data.
I would like to change the capacity from 1 TB to 200 GB.
Please provide detailed steps on how to do it.

The approach I take to decreasing an EBS root volume is as follows:
Stop (not terminate) the target instance and detach the root EBS volume (e.g. /dev/xvda1). Alternatively, snapshot the root volume (or use an existing snapshot) and create a new EBS volume from that.
Note: the volume you use from the step above will be altered - so you may want to take a snapshot if you didn't already.
Create a new EBS volume that is the desired size (e.g. /dev/xvdg)
Launch an instance, and attach both EBS volumes to it
Check the file system (of the original root volume): (e.g.) e2fsck -f /dev/xvda1
Maximally shrink the original root volume: (e.g. ext2/3/4) resize2fs -M -p /dev/xvda1
Copy the data over with dd:
Choose a chunk size (I like 16MB)
Calculate the number of chunks, using the block count from the resize2fs output: blocks*4/(chunk_size_in_mb*1024) - round up a bit for safety (see the worked example after these steps)
Copy the data: (e.g.) dd if=/dev/xvda1 ibs=16M of=/dev/xvdg obs=16M count=80
Resize the filesystem on the new (smaller) EBS volume: (e.g.) resize2fs -p /dev/xvdg
Check the file system (of the new volume): (e.g.) e2fsck -f /dev/xvdg
Detach your new EBS root volume, and attach it to your original instance
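To make the chunk arithmetic concrete, here is a sketch of the whole shrink-and-copy sequence, assuming the original volume is attached as /dev/xvda1 and the new, smaller volume as /dev/xvdg (both device names are examples). If resize2fs -M reports the shrunken filesystem at 1,310,720 blocks of 4K, that is 1310720*4/(16*1024) = 320 chunks of 16 MB, so a count of around 330 leaves a safety margin:
e2fsck -f /dev/xvda1
resize2fs -M -p /dev/xvda1            # note the final block count it prints
dd if=/dev/xvda1 ibs=16M of=/dev/xvdg obs=16M count=330
resize2fs -p /dev/xvdg
e2fsck -f /dev/xvdg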

The answer from ezhilrean is OK, but there is an easier way.
Let's say you have an instance with your /var partition on /dev/sdf1 and you want to reduce this from 300GB to 200GB (presuming that there is < 200GB of data on /var)
Create a new volume in the same AZ as the original volume
Attach it to the instance as /dev/sdg
Login to instance with root permissions
fdisk /dev/sdg
n (for New)
p (for Primary)
Accept defaults for other fdisk options
w (for Write)
fdisk will then exit. You now need to make a file system on the new partition:
mkfs.ext4 /dev/sdg1 (presuming that ext4 was used on the existing partition)
Next, mount your new partition at a temporary mount point:
mkdir /new
mount /dev/sdg1 /new
Now, copy your data
cd /var
cp -ax * /new/
Update your /etc/fstab to use the new partition for /var
/dev/sdg1 /var ext4 defaults 0 0
Reboot
init 6
If you need your /var partition to have the identifier /dev/sdf1, you can stop the instance, detach both EBS volumes, and re-attach the new smaller one as /dev/sdf. Remember to change /etc/fstab before you do this.
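A condensed sketch of the same sequence, assuming the new volume appears as /dev/sdg and /var currently lives on /dev/sdf1 (adjust the device names to whatever your instance actually reports):
fdisk /dev/sdg                 # n, p, accept defaults, w
mkfs.ext4 /dev/sdg1
mkdir /new
mount /dev/sdg1 /new
cd /var && cp -ax * /new/      # copy the data, preserving ownership and permissions
# then point /etc/fstab at the new partition:  /dev/sdg1  /var  ext4  defaults  0 0
init 6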

Related

How to proceed with a data node with corrupt disk file system

I would really appreciate help on the correct course of action. The setup is 3 ELK nodes which have all roles.
No shard replication is done. Node 3 experienced a failure on the disk which contains the data folder. An old copy (about a month old) of that folder exists, and I know it would not be sufficient to simply copy that data back in.
My question is, what is the correct course of action at this point which would return the stack to normal operation mode:
install a new disk and just launch the node? By a stroke of luck, that was our least important data.
install the new disk and copy the old data and see if it can recover that data?
Also, would doing option 1 work in combination with launching an experimental node that mounts the old data folder, restoring whatever data is recoverable, and re-indexing it remotely into the original cluster?
Another option is to try to use the bin/elasticsearch-shard tool to see if you can repair part of the data.
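As a hedged sketch only (the exact syntax depends on your Elasticsearch version, and the index name and shard id below are placeholders), the tool is run on the affected node while Elasticsearch is stopped, roughly like this:
bin/elasticsearch-shard remove-corrupted-data --index my-index --shard-id 0
Expect it to drop whatever part of the shard is unreadable, so some documents in that shard may be lost even if the repair succeeds.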

Anaconda Installer (Fedora/Cent/RH/Qubes) - CLI Disk Prep Prior to Install

I'm looking to have root on a RAID BtrFS built on a number of LUKS disks. I typically do this on Debian or Ubuntu by preparing my disks beforehand, then running the install onto those disks. At the end, I need to pivot into the new system to modify crypttab and fstab.
I'm trying the same thing with Qubes, which uses the Anaconda installer. When I get to the GUI partitioner, the BtrFS appears under the "Unknown" dropdown, but if I try to set the mount point to "/" and then "Update Settings", it errors with "You must create a new filesystem on the root device." (But there is already one there.) If I use "+" instead, I am told "Not enough free space for thin provisioning." The installer is clearly confused about how much space is available: "Available space 992.5 KiB", "Total space 238.47 GiB". In fact, there is 932.35 GiB in the RAID'ed BtrFS.
If I just open the LUKS devices but put no FS in them, then all the /dev/mapper/luks* devices appear in the partitioner under the "Unknown" dropdown; but after choosing "New mount points will use the following partitioning scheme: Btrfs", none of the devices allow me to associate a mount point. It's greyed out, or, if I try to use "+" and test it with a single disk, it comes back with the error "Not enough disks for single." (But I have multiple LUKS disks there!)
Trying without any prior formatting, neither LUKS nor Btrfs, I find that the partitioner can't handle bare disks. It wants a partition table (which these disks don't have).
Does anyone have a way through this?
Edit: It appears there are serious issues with this installer.
The answer to all of this appears to be: "Don't try to fight the Anaconda, as you will lose." Despite having access to a root terminal (Control-Alt-F1 reaches a tmux session; Control-b 2 reaches a terminal with root privileges), you must return to the graphical installer, which is too limited to allow any headway, particularly with BtrFS disks. Anaconda sees BtrFS not as a filesystem but as a device, and this makes the problems insurmountable.
The solution is to do a dummy install and then modify all the disks, editing crypttab, fstab, and /etc/default/grub as needed. Then pivot in and run dracut -f, along with grub2-mkconfig if needed and, if necessary, grub2-install.
One advantage of BtrFS in this process is that it's possible to avoid having to use a live DVD or Anaconda's rescue shell to make changes to a system "at rest" and afterward pivot in to run dracut et al. You'd just use btrfs device add to add a device to the root filesystem, then btrfs device remove the original. Then make the relevant changes to the original partitions, afterwards reversing the add/remove. So it's possible to make changes by moving back and forth from one disk to the other.
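A minimal sketch of that shuffle, assuming the root filesystem is mounted at / and that /dev/mapper/luks-new and /dev/mapper/luks-old are placeholder device names:
btrfs device add /dev/mapper/luks-new /
btrfs device remove /dev/mapper/luks-old /    # data migrates off the old device
# ...rework the old device's partitions, then reverse the move:
btrfs device add /dev/mapper/luks-old /
btrfs device remove /dev/mapper/luks-new /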

Expanding root partition on AWS EC2

I created a public VPC and then added a bunch of nodes to it so that I can use it for a spark cluster. Unfortunately, all of them have a partition setup that looks like the following:
ec2-user@sparkslave1: lsblk
/dev/xvda    100G
/dev/xvda1   5.7G   /
I set up a cloud manager on top of these machines, and all of the nodes only have 1G left for HDFS. How do I extend the partition so that it takes up all of the 100G?
I tried creating /dev/xvda2, then created a volume group and added all of /dev/xvda* to it, but /dev/xvda1 doesn't get added because it's mounted. I cannot boot from a live CD in this case; it's on AWS. I also tried resize2fs, but it says that the root partition already takes up all of the available blocks, so it cannot be resized. How do I solve this problem, and how do I avoid it in the future?
Thanks!
I don't think you can just resize the running root volume. This is how you'd go about increasing the root size:
create a snapshot of your current root volume
create a new volume from this snapshot of the size that you want (100G?)
stop the instance
detach the old small volume
attach the new bigger volume
start instance
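If you prefer to script those steps, a hedged sketch with the AWS CLI looks roughly like this (every ID, the size, the device name, and the availability zone are placeholders to replace with your own values):
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 100 --availability-zone us-east-1a
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0
Note that a bigger volume alone doesn't help until the partition and filesystem on it are also grown, which the next answer touches on.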
I had the same problem before, but can't remember the solution. Did you try running
e2resize /dev/xvda1
This applies when you're using ext3, which is usually the default. The e2resize command will "grow" the ext3 filesystem to use the remaining free space.
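In this particular case the partition itself is only 5.7G, so the filesystem has nothing to grow into until the partition is enlarged first. A hedged sketch of one common way to do that, assuming the cloud-utils growpart tool is available and the device names match the lsblk output above (on modern systems resize2fs plays the role of e2resize):
sudo growpart /dev/xvda 1      # grow partition 1 of /dev/xvda to fill the disk
sudo resize2fs /dev/xvda1      # grow the ext3/ext4 filesystem to fill the partition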

How to copy files and directories to EBS from an S3 bucket mounted with s3fs

I have an S3 bucket mounted as a volume on an EC2 instance with S3FS. Using PHP, I've created a directory structure there that now holds more than 20 GB.
At the moment S3FS is exhausting the instance's memory and uploading is really slow, so I want to move all the files to an EBS volume attached to the same instance.
I've tried S3CMD, but there are some incompatibilities, since S3FS creates zero-sized objects in the bucket with the same names as directories.
I also tried writing a script to recursively copy the structure, skipping those zero-sized objects.
Neither worked.
Has anyone tried to do this? Thanks in advance for your help.
@hernangarcia Don't make things complicated: use recursive wget, that is, wget -r followed by the URL of the bucket endpoint. You can download all the contents to an EBS volume. Also, my suggestion is not to store all those files (around 20 GB) on the root volume of the instance; instead, attach another volume and store the files there, and use a high-IOPS volume for it so that operations will be faster.
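A hedged sketch of that suggestion, assuming the bucket contents are publicly reachable over HTTP, the new EBS volume is mounted at /mnt/ebs, and my-bucket is a placeholder name:
cd /mnt/ebs
wget -r http://my-bucket.s3.amazonaws.com/     # only works if the objects are publicly accessible
aws s3 sync s3://my-bucket /mnt/ebs            # alternative: copy straight from S3 with the AWS CLI, bypassing s3fs
The aws s3 sync line is not from the original answer; it is an alternative that skips the HTTP listing problem entirely, at the cost of needing the AWS CLI and credentials on the instance.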

Deleted file recovery program using C/C++ [closed]

I want to write a program that can recover deleted files from a hard drive (FAT32/NTFS partitions on Windows). I don't know where to start. What should be the starting point? What should I read to pursue this? Which system-level structures should I study?
It's entirely a matter of the filesystem layout, how a "file" actually looks on disk, and what remains when a file is deleted. As such, pretty much all you need to understand is the filesystem spec (for each and every filesystem you want to support), and how to get direct block-level access to the HD data. It might be possible to reuse some code from existing filesystem drivers, but it will need to be modified to process structures that, from the point of view of the filesystem, are gone.
NTFS technical reference
NTFS.com
FAT32 spec
You should first know how file deletion is done in FAT32/NTFS, and how other undelete software works.
Undelete software understands the internals of the system used to store files on a disk (the file system) and uses this knowledge to locate the disk space that was occupied by a deleted file. Because another file may have used some or all of this disk space, there is no guarantee that a deleted file can be recovered, or, if it is, that it won't have suffered some corruption. But because the space isn't re-used straight away, there is a very good chance that you will recover the deleted file 100% intact. People who use deleted file recovery software are often amazed to find that it locates files that were deleted months or even years ago. The best undelete programs give you an indication of the chances of recovering a file intact and even provide file viewers so you can check the contents before recovery.
Here's a good read (but not so technical): http://www.tech-pro.net/how-to-recover-deleted-files.html
This is not as difficult as you think. You need to understand how files are stored in FAT32 and NTFS. I recommend you use WinHex, an application used for digital forensics, to check that your address calculations are correct.
For example, NTFS uses master file table (MFT) records to store a file's data in clusters. unlink deletes a file in C, but if you look at the source code, all it does is remove the entry from the table and update the records. Use an app like WinHex to read the information in the master file record. Here is some useful info:
Master boot record - sector 0
Hex 0x55AA marks the end of the MBR. The next structure to look at is the MFT.
The file name is in the MFT record header.
There is a flag to denote folder or file (not sure where).
There is also a flag that tells whether the file record is still in use or has been marked deleted. You will need to change this flag if you want to recover a deleted file.
You need the cluster size and the number of clusters, as well as the cluster number where your data starts, to calculate the start address if you want to access data from the master file table.
Not sure about FAT32, but just use the same approach. There is a useful 21-minute YouTube video which explains how to use WinHex to access deleted file data on NTFS. I'm not sure of the exact video, but just search for "winhex digital forensics recover deleted file". Once you watch it, things will become much clearer.
Good luck.
I just watched the 21-minute YouTube video on how to recover files deleted on NTFS using WinHex. Don't forget the resident flag, which denotes whether the file is resident or not; it tells you whether the file's contents are stored out in clusters or directly in the MFT data section when the file is small. This may be required if you want to access the deleted data. The video is perfect to start with, as it contains all the byte offsets needed to reach most of the required information relative to the beginning of the file record. It even shows you how to do the address calculation for the start of the cluster. You will need to access the table in binary form using a pointer, adding offsets to that pointer to reach the required information. The only way to find a particular file is to go through the whole table and do a binary comparison of the filename, byte for byte. Some fields are little-endian, so make sure you use WinHex to check your address calculations.
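As a small, hedged illustration of the "sector 0 ends with 0x55AA" point above: on a Linux machine with raw access to the disk, or to a forensic image of it (/dev/sda and disk.img below are placeholders), you can dump the first 512-byte sector and confirm the signature yourself before writing any recovery code:
sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd | tail -n 1     # last two bytes should read 55 aa
dd if=disk.img bs=512 count=1 2>/dev/null | xxd | tail -n 1          # same check against an image file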

Resources