Reattach a volume to an EC2 instance - amazon-ec2

I cannot reattach a volume to an instance.
I have already tried naming it /dev/xvda, /dev/sda1, or /dev/sda (the last doesn't even attach), but when I look at the instance status the volume doesn't appear under Root device, only under Block devices.
the instance: i-037681c5cce05ec5d
the volume: vol-07d60c2e66bcf2a36
When I try to start the instance, this error appears:
Invalid value 'i-037681c5cce05ec5d' for instanceId. Instance does not have a volume attached at root (xvda)
The instance properties show:
Root device: -
Block devices: /dev/xvda (or /dev/sda1, depending on which name I used)
Neither works.

I had the same problem; the cause is that you are using /dev/xvda when attaching the volume again. Instead, you must use xvda alone. That solves the problem.
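For example, the reattachment can be done with the AWS CLI like this (a sketch with placeholder IDs; the instance must be stopped first, and note the bare device name with no /dev/ prefix):

aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device xvda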

Related

Resize VirtualBox Ubuntu VM storage not taking effect

I followed these instructions to resize my VirtualBox Ubuntu VM on Mac:
http://osxdaily.com/2015/04/07/how-to-resize-a-virtualbox-vdi-or-vhd-file-on-mac-os-x/
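That guide resizes the virtual disk with VBoxManage; the resize step presumably looked something like this (a sketch reconstructed from the capacity shown below, run with the VM shut down):

VBoxManage modifyhd ~/VirtualBox\ VMs/P4_Runtime/P4_Runtime.vdi --resize 25000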
This is after the change:
*****-M-D2KA:$ VBoxManage showhdinfo ~/VirtualBox\ VMs/P4_Runtime/P4_Runtime.vdi
UUID: ce0ccd77-f265-46cd-9679-e25e64f1c992
Parent UUID: base
State: locked read
Type: normal (base)
Location: /Users/*****/VirtualBox VMs/P4_Runtime/P4_Runtime.vdi
Storage format: VDI
Format variant: dynamic default
Capacity: 25000 MBytes
Size on disk: 9967 MBytes
Encryption: disabled
In use by VMs: P4_Runtime (UUID: 5ea52b11-997f-45d8-b7d6-effa37a3b649) [Snapshot 1 (UUID: 409c1035-2134-4532-a931-a29018d33dc6)]
Child UUIDs: 540ae750-5307-44ef-a313-95134ae353b7
165fe99e-490d-4dd9-9602-00e3aaa8f82c
But for some reason, it does not seem to take effect. This is the "df -k" output in the VM, where I get a "No space left on device" error:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda 10253588 9713020 0 100% /
What am I missing?
I found out what I missed: I used gparted to resize the partition.
Resizing the virtual disk doesn't change the size of the partition /dev/sda. You can run lsblk inside the guest to see the additional space. To get the extra space available in the guest OS, you may:
Use something like gparted, as mentioned here. Instructions for doing that on a VHD can be found here. Note that this might not be the easiest route, but you may be forced into it if you don't plan to move some of your mount points (for example, if you're not ready to move /home/ to a new partition).
Or create a new partition; again, instructions on how to do so are available here. I would prefer this option over the first.
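If you'd rather stay on the command line, a minimal sketch of the first route (assuming the root filesystem is ext4 and actually lives on partition 1 of /dev/sda; check with lsblk first; growpart ships in the cloud-guest-utils package):

lsblk                      # confirm the disk now shows the extra space
sudo growpart /dev/sda 1   # grow partition 1 to fill the disk
sudo resize2fs /dev/sda1   # grow the ext4 filesystem to match
df -h /                    # verify the new size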

Spot instance fails to mount EFS volume

I can launch an on-demand EC2 instance and mount my EFS volume with complete success.
However, when I try this with a spot instance using exactly the same configuration, I find this in the system log...
mount.nfs4: Failed to resolve server fs-9273b65b.efs.eu-west-1.amazonaws.com: Name or service not known
If I log into the instance and check /etc/fstab, the mount is there, and if I then execute "sudo mount -a" the volume mounts without a hitch.
It would appear that at the instant the cloud-config script runs to mount that volume, the name does not resolve, but a few minutes later it can be resolved without error.
Any guidance greatly appreciated.
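A common workaround for this kind of boot-time DNS race is to retry in the user-data script until the name resolves before mounting; a sketch (the 30 x 10 s retry budget is arbitrary):

# Wait until the EFS DNS name resolves, then mount everything in fstab
for i in $(seq 1 30); do
  getent hosts fs-9273b65b.efs.eu-west-1.amazonaws.com && break
  sleep 10
done
mount -a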

Docker container can't see a serial port device

I'm trying to run a Docker container with access to a serial port on the host.
Here is what I did:
I used a Mac
Installed drivers on the host (http://www.prolific.com.tw/US/ShowProduct.aspx?p_id=229&pcid=41)
Plugged in the device
Ran ls /dev/t*, which returned /dev/tty.usbserial - so that worked
Ran the container with docker run -it --privileged -v /dev:/dev node:4.4.0 /bin/bash, then ran ls /dev/t* inside the container, which didn't return the /dev/tty.usbserial device...
I played a lot with different variations of parameters, but I haven't found the working one :)
Also, the --device flag is not suitable for me, since the device might be reconnected and the name could differ from /dev/tty.usbserial.
You can check if the script described in "Notification of new USB devices in docker container" (from Guido Diepen -- gdiepen) can help.
He too runs his container with the --privileged argument to allow it to access the devices. And he mounts the host directory /dev/bus/usb to the /dev/bus/usb directory within the container with the argument -v /dev/bus/usb:/dev/bus/usb when starting said container.
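Put together, the launch command looks something like this (a sketch; node:4.4.0 is simply the image from the question):

docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb node:4.4.0 /bin/bash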
The script uses both inotifywait and lsusb to wait for devices to be (un)plugged and check if it was the device we are interested in.
The inotifywait call keeps listening for inode create/delete events under the /dev/bus/usb directory and executes commands whenever an inode corresponding to a relevant device has just been created.
See also, once you have detected a plugged-in USB device, How to get Bus and Device relationship for a /dev/ttyUSB (not related to Docker, but still relevant).
As pointed out by @pgayvallet on GitHub:
As the daemon runs inside a VM in Docker Desktop, it is not possible to actually share a Mac host device with the container inside the VM, and this will most definitely never be possible.

Error: Instance does not have a volume attached at root (/dev/sda1)

I am getting an error while starting an EC2 instance after attaching a volume:
I have defined device_name as "/dev/sda1", but it is still picking up "/dev/sdf".
Here is my code:
ec2_vol:
  instance: "{{ instance_id }}"
  id: "{{ ec2_vol.volume_id }}"
  device_name: "/dev/sda1"
  region: "{{ aws_region }}"
/dev/sda1 won't work anymore (at least in my case, where I replaced a gp2 root volume with a magnetic one); I had to specify xvda directly (without /dev/) in the Device field.
So, in the Device field: xvda
That's it.
(That said, it seems to be a transient issue related only to Debian instances: this happened in the Oregon region, while in Ireland you still have to specify /dev/sda1 as usual.)
You need to detach the volume and then reattach it to the EC2 instance. Also, while attaching the volume, make sure you specify the Device as /dev/sda1.
Please follow the step-by-step procedure here:
Link
It seems there have been changes in AWS. In May 2020 I had an instance with a 10 GB EBS volume attached as the root disk, as "/dev/sda1".
While it was stopped I misclicked and detached the root disk, then immediately reattached it as /dev/sda1. Subsequently the boot failed with this error:
"Instance does not have a volume attached at root (xvda)"
The solution was right there in the error: I had to detach the root EBS volume and reattach it as just xvda, that is, without any /dev/ prefix and without a partition number. At that point the instance was willing to boot.
You can also use the AWS CLI:
aws ec2 attach-volume --volume-id vol-abcde12345678901e --instance-id i-acdef123456789012 --device /dev/sda1
https://docs.aws.amazon.com/cli/latest/reference/ec2/attach-volume.html#examples
It seems like this works differently for literally everyone, so let me add my two cents and describe what worked for me.
I had attached my root volume as (literally) sda1, because that is the format described on the "attach volume" page. However, as the error message above suggests, EC2 is looking for a device name that is literally /dev/sda1.
Detaching and re-attaching my root volume as /dev/sda1 (vs. sda1 or xvda as described in the example) solved my problem.
When I tried to start the instance, it reported that the volume was not attached at root (/dev/xvda1). So I went to Volumes and re-attached it to that instance as /dev/xvda1 to make it work.
Go to the Volumes section (under Elastic Block Store), right-click on your volume, choose Attach, select your instance, then type /dev/sda1 in the text field.
EC2 is looking for literally the same device name, so simply detaching the volume and reattaching it with the expected name (shown in the storage section of the instance details) worked for me.

Why does my system change my device from /dev/sdj to /dev/xvdj?

I have an Amazon EC2 instance. I set up a volume and attached it at /dev/sdj. I edited my fstab file to include the line
/dev/sdj /home/ec2-user/mydirectory xfs noatime 0 0
Then I mounted it (sudo mount /home/ec2-user/mydirectory)
However, running the "mount" command says the following:
/dev/xvdj on /home/ec2-user/mydirectory type xfs (rw,noatime)
What? Why is it /dev/xvdj instead of /dev/sdj?
The devices are named /dev/xvdX rather than /dev/sdX as of Ubuntu 11.04. This was a kernel change: the kernel name for Xen block devices is 'xvd'. Previously, Ubuntu carried a patch to rename those devices to sdX, but that patch became problematic.
https://askubuntu.com/a/47909
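Given that renaming, it's safer to reference the filesystem by UUID in /etc/fstab rather than by device node; a sketch (run blkid to get the actual UUID):

sudo blkid /dev/xvdj     # prints the filesystem UUID
# Then in /etc/fstab, replace the device path with the UUID, e.g.:
# UUID=<uuid-from-blkid>  /home/ec2-user/mydirectory  xfs  noatime  0  0
sudo mount -a            # verify the entry still mounts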
