I have 4 instances in the ap-mumbai-1 region, each with 1 block volume attached. I have configured a cross-region backup policy to ship the backups to the ap-osaka-1 region.
But in the ap-osaka-1 region I am unable to identify the source block volumes, because the backups are named like "Auto-backup for 2022-04-13 18:30:00 via policy: CrossRegionPolicy-Osaka_from_AP_MUMBAI_1".
When I open the details I can see the Original Boot Volume OCID listed. But is there any way to keep the source volume name in the backup, so we can identify the source volume when we need to restore it and attach it to a new instance?
We need this in order to prepare the Terraform scripts that activate DR in the ap-osaka-1 region from the volume backups.
Unless we can properly identify the volumes in the ap-osaka-1 region, we can't write the proper Terraform script or activate DR manually.
Please suggest an alternative, simple process to build VMs in the DR region from the volume backups.
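For reference, the mapping can be done manually by resolving the OCID shown in the backup details back to its display name in the source region. A minimal sketch with the OCI CLI (the OCIDs are placeholders and the JSON field names are my assumption from the standard backup/volume output; for block volumes the equivalents are oci bv backup get and oci bv volume get):

# In ap-osaka-1: read the backup and extract the source boot volume OCID
oci bv boot-volume-backup get \
  --region ap-osaka-1 \
  --boot-volume-backup-id ocid1.bootvolumebackup.oc1.ap-osaka-1.example \
  --query 'data."boot-volume-id"' --raw-output

# In ap-mumbai-1: resolve that OCID to the volume's display name
oci bv boot-volume get \
  --region ap-mumbai-1 \
  --boot-volume-id ocid1.bootvolume.oc1.ap-mumbai-1.example \
  --query 'data."display-name"' --raw-output

Doing this lookup per volume is exactly the manual step we are trying to avoid in the Terraform-based DR flow.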
Let me explain the problem and context. This is a server for a database solution. The database was created with Docker, and a volume was added to the server. The whole Docker installation path was then moved to that volume (for security and easier backup maintenance). Then, for monitoring, I added a Metricbeat agent to capture data such as disk usage, and that is where the problem occurs.
I am looking for a specific mount (it is a volume mount). When I run df -aTh | grep "/dev" in the terminal to show the filesystems, it shows the output in the attached screenshot:
Then in metricbeat.yaml I have this configuration for the system module:
- module: system
  period: 30s
  metricsets: ["filesystem"]
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|etc|host|hostfs)($|/)'
Notice that in the last line I omitted "dev", because I want to capture the volume mount "/dev/sda" highlighted in the screenshot. But when I go to Discover in Kibana, that device is not shown, and I don't know why; it should be.
Thanks for reading and for your help :). All of this is for monitoring and displaying the data in Grafana, but I can't find the filesystem "/dev/sda" for the disk dashboard...
From the documentation about the setting filesystem.ignore_types:
A list of filesystem types to ignore. Metrics will not be collected from filesystems matching these types. This setting also affects the fsstats metricset. If this option is not set, metricbeat ignores all types for virtual devices in systems where this information is available (e.g. all types marked as nodev in /proc/filesystems in Linux systems). This can be set to an empty list ([]) to make filebeat report all filesystems, regardless of type.
If you check the file /proc/filesystems you can see which filesystem types are marked as "nodev". Is it possible that ext4 is marked as nodev there?
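For example, a quick check from a shell on the server:

# list the filesystem types the kernel flags as virtual (nodev)
grep nodev /proc/filesystems

# ext4 should normally appear without the nodev flag
grep ext4 /proc/filesystems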
Can you try setting filesystem.ignore_types: [] to see if the filesystem is now collected?
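If you do try it, the setting goes in the same system module block shown above. After editing, something like this (assuming a systemd-managed install) validates and reloads the configuration:

# sanity-check the edited configuration
sudo metricbeat test config

# restart the service so the change takes effect
sudo systemctl restart metricbeat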
I have Windows build agents for Jenkins running in EC2, and I would like to make use of the ephemeral disks that come with the "d" instance types (a c5ad.4xlarge, for example, gives 2 x 300 GB NVMe) to take advantage of the high IO available on those disks.
Since they are build agents, the ephemeral nature of the drives is fine. I need something that will detect, provision and mount those disks as a drive in Windows, basically regardless of size and number. I can do this easily in Linux (LVM, software RAID, etc.), but although there is a guide from 2014 here for achieving this, it doesn't seem to work on Windows Server 2019 and the latest instances.
That same post references new cmdlets added in Server 2012 R2, but those do not support converting the disks to dynamic (a key step in striping them, done by diskpart in the original post's code), so they cannot be used to do what is required directly.
Are there any other options to make this work dynamically, ideally with PowerShell (or similar), that can be passed to the Jenkins agent at boot time as part of its config?
Windows now has Storage Pools, and these can be used to do what is needed here. This code successfully detected multiple disks, added them to a striped pool, used the maximum size available, and mounted the new volume on drive letter "E":
# get a list of the disks that can be pooled
$PhysicalDisks = (Get-PhysicalDisk -CanPool $true)
# only take action if there actually are disks
if ($PhysicalDisks) {
# create storage pool using the discovered disks, called ephemeral in the standard subsystem
New-StoragePool -FriendlyName ephemeral -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $PhysicalDisks
# Create a virtual disk, striped (simple resiliency in its terms), use all space
New-VirtualDisk -StoragePoolFriendlyName "ephemeral" -FriendlyName "stripedephemeral" -ResiliencySettingName Simple -UseMaximumSize
# initialise the disk
Get-VirtualDisk -FriendlyName 'stripedephemeral' | Initialize-Disk -PartitionStyle GPT -PassThru
# create a partition using all available space; the disk number is hard-coded and may differ on other instance types (formatting will pop up a prompt if run interactively, which is not a problem when running as user data via the Jenkins config)
New-Partition -DiskNumber 3 -DriveLetter 'E' -UseMaximumSize
# format as NTFS to make it useable
Format-Volume -DriveLetter E -FileSystem NTFS -Confirm:$false
# this creates a folder on the drive to use as the workspace by the agent
New-Item -ItemType Directory -Force -Path E:\jenkins\workspace
}
There are some assumptions here about the number of disks, which will vary based on the instance type, but generally it will take any ephemeral disks it finds, stripe across them if there is more than one, and then use the entire size available once the volume has been created and formatted. This can all be wrapped in <powershell></powershell> and added to the user data section of the Jenkins agent config so that it runs at boot.
I'd like to specify the snapshot ID that will be used to create the root device image for an EC2 instance created with CloudFormation. How do I do that?
I could only find a way to create a volume from a snapshot, but no way to use it as the instance's root device.
If you want to use an EBS snapshot as the basis of the root disk (EBS volume) for an instance, you need to first register the snapshot as an AMI (e.g., using ec2-register).
Make sure to specify the correct architecture and kernel (AKI) when you register the snapshot as an AMI.
Alternatively, instead of taking a snapshot and registering it as separate steps, you could use the ec2-create-image command/API/console function to perform the snapshot and registration in a single step. This also takes care of picking the right architecture, kernel, and other parameters.
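For reference, a rough sketch of the two approaches with the modern AWS CLI (the snapshot ID, device name and instance ID below are placeholders):

# Register an existing snapshot as an AMI (the equivalent of ec2-register)
aws ec2 register-image \
  --name "restored-root-ami" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/xvda \
  --block-device-mappings 'DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}'

# Or snapshot and register in one step from a running instance (the equivalent of ec2-create-image)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "restored-root-ami"

The AMI ID either command returns is what you then reference as the ImageId in the CloudFormation template.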
Once you have an AMI, you can tell CloudFormation to use that AMI when running a new instance.
I concur. This has nothing to do with CloudFormation, but I just did this following a crippling do-release-upgrade. It's just a matter of creating an image from the snapshot and, in my case, making sure to change the virtualization type to "hardware-assisted virtualization" (HVM). Then you can just launch the resulting image (AMI).
1) I had an instance on which sudo commands stopped working because of some mistakes I made, so I had to create a new instance.
2) I want to use the old EBS volume with the new instance and stop the old instance.
3) I created a new instance (a new EBS volume was created automatically with it).
4) I created a snapshot of the old volume and attached it to the new instance.
5) So two EBS volumes are now attached to the new instance.
6) When I log in to the new instance using SSH, I don't see the old data anywhere.
7) I want all the old data available on the new instance.
My question is: how can I use the old volume with the new instance?
Please help me; I have been trying for the last 10 hours continuously :(
What you need to do is mount the old volume on the new instance. Go to the Amazon EC2 control panel, and click "Volumes" (under Elastic Block Store). Look at the attachment information for the old EBS volume. This will be something like <instance id> (<instance name>):/dev/sdg
Make a note of the path given here, so that'd be /dev/sdg in the example above. Then use SSH and connect to your new instance, and type mkdir /mnt/oldvolume and then mount /dev/sdg /mnt/oldvolume (or whatever the path given in the control panel was). Your files should now be available under /mnt/oldvolume. If this does not solve your problem, please post again with the output of your df command after doing all of this.
So, to recap, to use an EBS volume on an instance, you need to attach it to that instance using the control panel (or API tools), and then mount it on the instance itself.
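A condensed version of those steps from a shell on the new instance (the device name is an example; on newer instance types the volume may show up as /dev/xvdg or as an NVMe device, so check lsblk first):

# confirm the device name the old volume was attached as
lsblk
# create a mount point and mount the old volume (adjust the device name to match lsblk)
sudo mkdir -p /mnt/oldvolume
sudo mount /dev/xvdg /mnt/oldvolume
# verify the old data is visible
df -h /mnt/oldvolume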
Following the instructions at http://aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1 I created an instance with MySQL's databases running on an EBS volume.
I've been installing other software on the instance's filesystem (not the EBS volume) and would like to be able to save the whole thing as an AMI.
In ElasticFox, both AMI commands were greyed out.
Is it not possible to do this?
I am not so familiar with ElasticFox, but in general you cannot create an AMI of an instance-store-backed EC2 instance directly; you need the ec2-ami-tools to create one. I have written a script that I used to create an AMI. Feel free to use it.
Copy the following script:
https://github.com/rakesh-sankar/Tools/blob/master/AmazonAWS/AMI/CreateAMI.sh
Make sure you update the following before use:
- Image name / short name
- Path to the private key
- Path to the certificate
- S3 user ID (in general, this is your AWS account ID)
- Bucket name
- Path to the Java home directory
Make the script executable, then run it:
chmod +x createAMI.sh
./createAMI.sh
It should create an AMI image under your account and register it with the name you have given.
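For context, a script like that typically wraps the classic ec2-ami-tools flow. A rough sketch, assuming the tools are installed and that the key, certificate, account ID and bucket name below are placeholders:

# Bundle the running instance's root filesystem into an image bundle under /mnt
ec2-bundle-vol -k /path/to/pk.pem -c /path/to/cert.pem -u 123456789012 -r x86_64 -d /mnt

# Upload the bundle to S3
ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY

# Register the uploaded bundle as an AMI
ec2-register my-ami-bucket/image.manifest.xml -n my-instance-store-ami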