
In Linux, fsck fails on a GPT external hard disk
fsck fails to check a GPT-partitioned external HDD.
What should I do? I cannot fsck the filesystem of that disk!
How can I check my filesystem?
What am I doing wrong?
Below is some information on my external HDD.
elias@eliasc:~$ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 931.5 GiB, 1000170586112 bytes, 1953458176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 66BAEFE2-F3F9-491C-B40F-C964F28AE483
Device Start End Sectors Size Type
/dev/sdc1 2048 1953456127 1953454080 931.5G Microsoft basic data
elias@eliasc:~$ sudo fsck /dev/sdc
fsck from util-linux 2.31.1
e2fsck 1.44.1 (24-Mar-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/sdc
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
Found a gpt partition table in /dev/sdc
sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): i
Using 1
Partition GUID code: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 (Microsoft basic data)
Partition unique GUID: 593BA7FF-C46F-4A0E-BAAF-FF505C0425F8
First sector: 2048 (at 1024.0 KiB)
Last sector: 1953456127 (at 931.5 GiB)
Partition size: 1953454080 sectors (931.5 GiB)
Attribute flags: 0000000000000000
Partition name: 'MyPassport'
Command (? for help): v
No problems found. 4029 free sectors (2.0 MiB) available in 2
segments, the largest of which is 2015 (1007.5 KiB) in size.

As of 15 Jul 2020, there is no option to check the filesystem integrity of a hard disk formatted as NTFS under Linux.
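Before going the Windows route, it is worth confirming what is actually on that partition. The e2fsck error above appears partly because fsck was pointed at the whole disk (/dev/sdc) rather than the partition, and partly because the only partition is NTFS ("Microsoft basic data"), which e2fsck cannot check anyway. A minimal check, assuming the same device names as above:
# confirm the filesystem type on the partition, not the whole disk
sudo blkid /dev/sdc1
# or show the filesystem column for the whole disk
lsblk -f /dev/sdc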
What I did:
I downloaded the free trial of Windows Enterprise as an ISO.
sudo apt-get remove virtualbox virtualbox-dkms virtualbox*
Install the latest VirtualBox from here.
Download the matching version of the VirtualBox extension pack.
Add your user to the disk group, to avoid the hard-disk "access denied" error:
sudo usermod -a -G disk $USER
sudo usermod -a -G vboxusers $USER
Run sudo /sbin/vboxconfig
Run VirtualBox and add the VirtualBox extension pack at File -> Preferences -> Extensions.
Restart your computer.
Create a raw VMDK link to your physical NTFS hard disk (USB or not) using:
VBoxManage internalcommands createrawvmdk -filename "</path/to/file>.vmdk" -rawdisk /dev/sdX
Create Machine -> New -> Windows 10 (64-bit) (or whatever matches) -> Create a Virtual Disk
Attach the downloaded Windows Enterprise free trial ISO image.
In your newly created virtual Windows machine -> Preferences -> Storage -> attach the VMDK image of your NTFS HDD.
This may fail. Don't worry, it failed for me too. I mention it because it may work for you.
Go to the virtual Windows machine -> Preferences -> USB, check USB 3, and add your NTFS HDD.
Run your virtual "Windows Enterprise Free Trial" machine.
Click Continue, then Repair your computer (at the bottom left), Troubleshooting, Run Command Prompt.
Switch to your disk by typing e.g. C:
Check that it is your disk somehow, e.g. with dir.
Run chkdsk /f once you are on your disk.
This process fixed my faulty NTFS filesystem. I hope it helps you too.
If you find an easier solution, purely under Linux, please post it.
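For reference, the host-side preparation above boils down to roughly the following (a sketch; the disk is assumed to be /dev/sdc as in the question, and the VMDK path is just an example):
# add the current user to the required groups (log out and back in afterwards)
sudo usermod -a -G disk $USER
sudo usermod -a -G vboxusers $USER
# rebuild the VirtualBox kernel modules
sudo /sbin/vboxconfig
# create a raw VMDK that points at the physical NTFS disk
VBoxManage internalcommands createrawvmdk -filename ~/mypassport.vmdk -rawdisk /dev/sdc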

Related

Can't mount a permanently installed USB flash drive to the mount point of my choosing with Raspbian 10

I want my Raspberry Pi 4B, running Raspbian 10, to boot up with a permanently inserted USB flash drive mounted to /srv/www. The flash drive will never be removed. I formatted the flash drive with an ext4 filesystem. I can manually mount the drive to /srv/www and perform normal file operations.
When I add an entry to /etc/fstab like this:
/dev/sda1 /srv/www ext4 0 0
or like this:
UUID=651003ce-5261-4b00-9940-6207625a5334 /srv/www ext4 0 0
the mount does not succeed when the system boots. I've been at this for hours, trying various suggestions for configuring systemd, and I see a variety of errors in the system logs such as "/dev/sda does not contain a filesystem", but fsck tells me it does. Before I spend more hours, is what I'm trying to do possible, and where am I going wrong?
Unable to find an fstab-related solution, I solved the problem by placing these two lines in /etc/rc.local:
sleep 5s
mount /dev/sda1 /srv/www
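For what it's worth, the fstab entries quoted above are missing the options field; fstab expects six fields (device, mount point, type, options, dump, pass). A sketch of a complete entry, using the same UUID, might look like this (untested here; the rc.local workaround above is what actually got used):
UUID=651003ce-5261-4b00-9940-6207625a5334 /srv/www ext4 defaults,nofail 0 2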

Should I detach EBS Volume in Root Device after attaching new EBS Volume in Amazon EC2 Instance?

My Amazon EC2 instance only has 8 GB of EBS volume sda1, and this volume is near full capacity.
Then I attached a new 21 GB EBS volume, sdf, to this EC2 instance.
When I use df -h to check this usage, this is what I get:
Filesystem Size Used Avail Use% Mounted on
/dev/xvdf 7.9G 5.3G 2.6G 67% /
tmpfs 298M 0 298M 0% /dev/shm
Then I used resize2fs /dev/xvdf to resize it; this is the df -h output:
Filesystem Size Used Avail Use% Mounted on
/dev/xvdf 21G 5.3G 16G 26% /
tmpfs 298M 0 298M 0% /dev/shm
Should I detach the first EBS volume sda1?
Why is sda1 not showing in df -h?
Updated Results:
$ ls /dev/xvd*
/dev/xvda1 /dev/xvdf
No. You need to mount the volume before it will show up in the df command.
Also, a little education: the 8 GB drive is your root drive. Try not to put anything there other than application installs, etc.
Creating and mounting a new volume like you want takes the following steps:
Create the volume in the AWS Management Console.
Attach the volume in the AWS Management Console.
Decide what type of file system you want it to be, I typically use XFS.
yum install xfsprogs, or apt-get it or whatever
mkfs.xfs /dev/NEWVOLUME (note: Amazon will tell you it's attached at sdf or whatever, when sometimes it's really attached at xvdf or something)
Warm up the volume. This is a little-known secret: all the space on the volume has been assigned to that volume but not yet allocated, so reading every block of the volume will warm it up and make it perform much faster. This can take a while for big volumes. The command is: dd if=/dev/<device> of=/dev/null
Make a dir to mount it to: mkdir /logs (or whatever)
mount /dev/NEWVOLUME /logs
Done. Now run your df -h and you will see it.
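A condensed sketch of those steps, assuming the new volume appears as /dev/xvdf and XFS is the chosen filesystem (adjust the device name and mount point to your setup):
# create the filesystem on the new volume
sudo mkfs.xfs /dev/xvdf
# optionally pre-warm a snapshot-restored volume by reading every block (slow)
sudo dd if=/dev/xvdf of=/dev/null bs=1M
# create the mount point and mount it
sudo mkdir /logs
sudo mount /dev/xvdf /logs
df -h /logs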

EC2 Can't resize volume after increasing size

I have followed the steps for resizing an EC2 volume
Stopped the instance
Took a snapshot of the current volume
Created a new volume out of the previous snapshot with a bigger size in the same region
Detached the old volume from the instance
Attached the new volume to the instance at the same mount point
Old volume was 5GB and the one I created is 100GB
Now, when I restart the instance and run df -h, I still see this:
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 4.7G 3.5G 1021M 78% /
tmpfs 296M 0 296M 0% /dev/shm
This is what I get when running
sudo resize2fs /dev/xvde1
The filesystem is already 1247037 blocks long. Nothing to do!
If I run cat /proc/partitions I see
202 64 104857600 xvde
202 65 4988151 xvde1
202 66 249007 xvde2
From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.
How can I use the new volume, or unmount xvde1 and mount xvde instead?
I cannot understand what I am doing wrong.
I also tried sudo xfs_growfs /dev/xvde1:
xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem
By the way, this is a Linux box with CentOS 6.2 x86_64.
There's no need to stop the instance and detach the EBS volume to resize it anymore!
On 13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"
The process works even if the volume to extend is the root volume of a running instance!
Say we want to increase the boot drive of Ubuntu from 8G up to 16G "on the fly".
step-1) Log in to the AWS web console -> EBS -> right-click on the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button
step-2) ssh into the instance and resize the partition:
let's list block devices attached to our box:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
As you can see /dev/xvda1 is still 8 GiB partition on a 16 GiB device and there are no other partitions on the volume.
Let's use "growpart" to resize 8G partition up to 16G:
# install "cloud-guest-utils" if it is not installed already
apt install cloud-guest-utils
# resize partition
growpart /dev/xvda 1
Let's check the result (you can see /dev/xvda1 is now 16G):
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 16G 0 part /
Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when changing the boot drive.
step-3) resize the file system to grow all the way to fully use the new partition space
# Check before resizing ("Avail" shows 1.1G):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.3G 1.1G 86% /
# resize filesystem
resize2fs /dev/xvda1
# Check after resizing ("Avail" now shows 8.7G!-):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 16G 6.3G 8.7G 42% /
So we have zero downtime and lots of new space to use.
Enjoy!
Update: Use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
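To tell which of the two applies, you can check the filesystem type first (a quick sketch; device name as in the example above):
# show the filesystem type of the root volume
df -T /
# or list filesystems per block device
lsblk -f /dev/xvda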
Thank you Wilman, your commands worked correctly. A small improvement needs to be considered if we are increasing EBS volumes to larger sizes:
Stop the instance
Create a snapshot from the volume
Create a new volume based on the snapshot increasing the size
Check and remember the current volume's mount point (i.e. /dev/sda1)
Detach current volume
Attach the recently created volume to the instance, setting the exact mount point
Restart the instance
Access via SSH to the instance and run fdisk /dev/xvde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u')
Hit p to show current partitions
Hit d to delete current partitions (if there are more than one, you have to delete one at a time) NOTE: Don't worry, data is not lost
Hit n to create a new partition
Hit p to set it as primary
Hit 1 to set the first cylinder
Set the desired new space (if empty the whole space is reserved)
Hit a to make it bootable
Hit 1 and w to write changes
Reboot instance OR use partprobe (from the parted package) to tell the kernel about the new partition table
Log via SSH and run resize2fs /dev/xvde1
Finally check the new space running df -h
Perfect comment by jperelli above.
I faced the same issue today. AWS documentation does not clearly mention growpart. I figured it out the hard way, and indeed the two commands worked perfectly on M4.large and M4.xlarge instances with Ubuntu:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
[SOLVED]
This is what had to be done:
Stop the instance
Create a snapshot from the volume
Create a new volume based on the snapshot increasing the size
Check and remember the current volume's mount point (i.e. /dev/sda1)
Detach current volume
Attach the recently created volume to the instance, setting the exact mount point
Restart the instance
Access via SSH to the instance and run fdisk /dev/xvde
Hit p to show current partitions
Hit d to delete current partitions (if there are more than one, you have to delete one at a time) NOTE: Don't worry, data is not lost
Hit n to create a new partition
Hit p to set it as primary
Hit 1 to set the first cylinder
Set the desired new space (if empty the whole space is reserved)
Hit a to make it bootable
Hit 1 and w to write changes
Reboot instance
Log via SSH and run resize2fs /dev/xvde1
Finally check the new space running df -h
This is it
Good luck!
This will work for an XFS file system; just run this command:
xfs_growfs /
Log in to the AWS web console -> EBS -> right-click on the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button
growpart /dev/xvda 1
resize2fs /dev/xvda1
This is a cut-to-the-chase version of Dmitry Shevkoplyas' answer. AWS documentation does not show the growpart command. This works OK for the Ubuntu AMI.
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
The above two commands saved me time on AWS Ubuntu EC2 instances.
Once you modify the size of your EBS,
List the block devices
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 10G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 10G 0 part /
Expand the partition
Suppose you want to extend the second partition mounted on /,
sudo growpart /dev/nvme0n1 2
If all the space in the root volume is used up and you are basically unable to access /tmp, i.e. you get the error message "Unable to growpart because no space left":
temporarily mount a /tmp volume: sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
unmount after the complete resize is done: sudo umount -l /tmp
Verify the new size
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 20G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 20G 0 part /
Resize the file-system
For XFS (use the mount point as argument)
sudo xfs_growfs /
For EXT4 (use the partition name as argument)
sudo resize2fs /dev/nvme0n1p2
Just in case anyone is here for GCP (Google Cloud Platform), try this:
sudo growpart /dev/sdb 1
sudo resize2fs /dev/sdb1
In case anyone ran into this issue with 100% usage and no space even to run the growpart command (because it creates a file in /tmp):
Here is a command I found that works even while the EBS volume is in use, and even if you have no space left on your EC2 instance and are at 100%:
/sbin/parted ---pretend-input-tty /dev/xvda resizepart 1 yes 100%
See this site:
https://www.elastic.co/blog/autoresize-ebs-root-volume-on-aws-amis
Did you make a partition on this volume? If you did, you will need to grow the partition first.
Thanks, @Dimitry, it worked like a charm with a small change to match my file system.
source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#recognize-expanded-volume-linux
Then use the following command, substituting the mount point of the filesystem (XFS file systems must be mounted to resize them):
[ec2-user ~]$ sudo xfs_growfs -d /mnt
meta-data=/dev/xvdf isize=256 agcount=4, agsize=65536 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 262144 to 26214400
Note
If you receive an xfsctl failed: Cannot allocate memory error, you may need to update the Linux kernel on your instance. For more information, refer to your specific operating system documentation.
If you receive a The filesystem is already nnnnnnn blocks long. Nothing to do! error, see Expanding a Linux Partition.
The bootable flag (a) didn't work in my case (EC2, CentOS 6.5), so I had to re-create the volume from the snapshot.
After repeating all the steps EXCEPT the bootable flag, everything worked flawlessly, so I was able to resize2fs afterwards.
Thank you!
I don't have enough rep to comment above, but also note, per the comments above, that you can corrupt your instance if you start at 1. If you hit 'u' after starting fdisk, before you list your partitions with 'p', it will in fact give you the correct start number so you don't corrupt your volumes. For the CentOS 6.5 AMI, as mentioned above, 2048 was correct for me.
Put a space between the name and the number, e.g.:
sudo growpart /dev/xvda 1
Note that there is a space between the device name and the partition number.
To extend the partition on each volume, use the following growpart
commands. Note that there is a space between the device name and the
partition number.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
I faced a similar issue with an Ubuntu system in EC2.
First, check the filesystem:
lsblk
Then, after increasing the volume size from the console, I ran the command below:
sudo growpart /dev/nvme0n1 1
This will show the change in the lsblk output.
Then I could extend the FS with:
sudo resize2fs /dev/nvme0n1p1
Finally, verify it with the df -h command.

How to check for usb device with if statement in bash

I'm attempting to create an automated bash script that fills up a file with urandom on the unit's flash storage. I can manually use all of the commands to make this happen, but I'm trying to create a script and having difficulty figuring out how to check for the USB device. I know that it will be either sda1 or sdb1, but I'm not sure if the code below is sufficient...? Thanks! Below is the code:
if /dev/sda1
then
mount -t vfat /dev/sda1 /media/usbkey
else
mount -t vfat /dev/sdb1 /media/usbkey
fi
The way I script mountable drives is to first put a file on the drive, e.g. "Iamthemountabledrive.txt", then check for the existence of that file. If it isn't there, then I mount the drive. I use this technique to make sure an audio server is mounted for a 5-station radio network, checking every minute in case there is a network interrupt event.
testfile="/dev/usbdrive/Iamthedrive.txt"
if [ -e "$testfile" ]
then
echo "drive is mounted."
fi
You can mount by label or UUID and hence reduce the complexity of your script. For example if your flash storage has label MYLABEL (you can set and display VFAT labels using mtools' mlabel):
$ sudo mount LABEL=MYLABEL /media/usbkey
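Another option, closer to the original script, is to test whether the block device node actually exists before mounting (a sketch, keeping the question's mount point):
#!/bin/bash
# mount whichever USB partition is present
if [ -b /dev/sda1 ]; then
    mount -t vfat /dev/sda1 /media/usbkey
elif [ -b /dev/sdb1 ]; then
    mount -t vfat /dev/sdb1 /media/usbkey
else
    echo "no USB device found" >&2
    exit 1
fi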

Amazon EC2 and EBS disk space problem

I am having a problem reconciling the space available on my EBS volume. According to the AWS console the volume is 50GB and is attached to an instance.
If I ssh to this instance and do a df -h, I get the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 13G 3.0G 81% /
udev 858M 76K 858M 1% /dev
none 858M 0 858M 0% /dev/shm
none 858M 72K 858M 1% /var/run
none 858M 0 858M 0% /var/lock
none 858M 0 858M 0% /lib/init/rw
I am pretty new to AWS. I interpret this as "there is a device attached and it has 15 GB capacity. What's more, you're nearly out of space!"
Can anyone point out the cause of the apparent discrepancy between the space advertised in the console and what is displayed on the instance?
Many thanks in advance
S
Yes, the issue is simple. The volume is only associated with the instance, but not mounted.
Check on the AWS console which drive it is mounted as - most likely /dev/sdf.
Then (on ubuntu):
sudo mkfs.ext3 /dev/sdf
sudo mkdir /ebs
sudo mount /dev/sdf /ebs
The first line formats the volume - using the ext3 file system type. This is pretty standard -- but depending on your usage (e.g. app server, database server, ...) you could also select another one like ext4 or xfs.
The second command creates a mount point and the third mounts it into it. This means that effectively, the new volume will be at /ebs. It should also show up in df now.
Last but not least, maybe also add an entry to /etc/fstab to make it reboot-proof.
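For example, a sketch of such an fstab entry, reusing the device and mount point from above (the options shown are just a common choice, not mandatory):
/dev/sdf /ebs ext3 defaults,nofail 0 2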
Perhaps the original 15 GB Volume was cloned into a 50 GB volume but then not resized?
Please see this tutorial on how to clone and resize: How to increase disk space on existing AWS EC2 Linux (Ubuntu) Instance without losing data
Hope that helps.
Here is the simple way...
Assuming that you are using a linux AMI, in your case you have an easy method for increasing the size of the file system:
1) Stop the instance
2) Detach the root volume
3) Snapshot the volume
4) Create a new volume from the snapshot using the new size
5) Attach the new volume to the instance on the same place where the original one was
6) Start the instance, stop all services except ssh and set the root filesystem read only
7) Enlarge the filesystem (using for example resize2fs) and or the partition if needed
8) Reboot
As an alternative you can also launch a new instance and map the instance storage or you can create a new ami combining the two previous steps.
Only Rebooting the instance solved my problem
Earlier:
/dev/xvda1 8256952 7837552 0 100% /
udev 299044 8 299036 1% /dev
tmpfs 121892 164 121728 1% /run
none 5120 0 5120 0% /run/lock
none 304724 0 304724 0% /run/shm
Now
/dev/xvda1 8256952 1062780 6774744 14% /
udev 299044 8 299036 1% /dev
tmpfs 121892 160 121732 1% /run
none 5120 0 5120 0% /run/lock
none 304724 0 304724 0% /run/shm
The remainder of your space is mounted by default at /mnt.
See Resizing the Root Disk on a Running EBS Boot EC2 Instance
It is because "After you increase the size of an EBS volume, you must use file system–specific commands to extend the file system to the larger size. You can resize the file system as soon as the volume enters the optimizing state", without bouncing the instance.
I was just facing the same issue today, and I was able to resolve it.
Figure out the type of your file system:
$ cat /etc/fstab
Follow this AWS doc, which precisely documents the steps to extend the Linux partition/FS after resizing a volume of an EC2 instance:
Extending a Linux File System After Resizing a Volume
On Ubuntu, to extend the filesystem:
To find the block device:
blkid
In my case the type is TYPE="ext4".
To resize the disk volume:
sudo resize2fs /dev/xvdf
