I would like to move my AWS instance to another region.
I plan to do it through an AMI.
My question is: if I move it to a different region, will it affect any configuration, or will the configuration stay as it is?
Thanks in advance :)
You can create an AMI in the current region, copy the AMI to the new region, and then launch it there. All of your EC2 OS-level configuration will remain intact. However, you need to make sure your new VPC/security group/DNS configuration matches the corresponding resources in the old region (a rough CLI sketch of the copy follows the checklist below):
1) Make sure your new VPC has the same configuration as the old one
2) Make sure your new subnet has the same configuration as the old one
3) Make sure your new route table has the same configuration as the old one
4) Make sure your new NACL has the same configuration as the old one
5) Make sure your new security group has the same configuration as the old one
6) If you are accessing the instance using domain names, make sure the Route 53 DNS zone now points to the new instance
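For reference, a minimal CLI sketch of the AMI copy (the region names, instance/AMI IDs, subnet, security group and key name below are placeholders):

# Create an AMI from the running instance in the source region
aws ec2 create-image --region eu-west-1 --instance-id i-0123456789abcdef0 --name "my-server"

# Copy that AMI into the destination region
aws ec2 copy-image --region us-east-1 --source-region eu-west-1 --source-image-id ami-0aaaaaaaaaaaaaaaa --name "my-server-copy"

# Launch from the copied AMI, using the subnet/security group you recreated there
# (key pairs are also per-region, so the key must exist in the new region)
aws ec2 run-instances --region us-east-1 --image-id ami-0bbbbbbbbbbbbbbbb --instance-type t3.medium --subnet-id subnet-0cccccccccccccccc --security-group-ids sg-0ddddddddddddddddd --key-name my-key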
I have some Windows Server 2016 instances on GCE (for Jenkins agents).
I'm wondering what the best practice is when it comes to the computer name.
Currently, when I want to create a new node, I clone an instance (create images from disks + create template + create instance from template).
On this clone, I change the computer name (in Windows) so that it has the same name as on GCE. Is that useful? Recommended? Bad? Needed?
I know that the name of the Jenkins node needs to be the same as the name of the GCE instance (to be picked up easily). However, I don't think the Windows computer name matters.
So, should I pick an identical generic name for all of them? A prefix plus a randomly generated name? Or continue with the instance=computer=node name?
The node name that I use in Jenkins is always retrieved from env.NODE_NAME (when needed), so that should not break any pipeline. I'm not sure though, as I may be missing something (internal to Jenkins).
Bonus question: After cloning, I have to do some modifications on the clone for Perforce (p4) to work.
I temporarily set some env variables
I duplicate the workspace: p4 client -t prefix-buildX-suffix prefix-buildY-suffix
I set up the stream (not sure if it's doable in one step)
Then regenerate the list of files: p4 sync -k <root_folder_to_be_generated>/...#YYYY/MM/DD
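Put together, the Perforce part looks roughly like this (the stream path, the sync path/revision and the buildX/buildY workspace names are placeholders for my real ones):

# Temporarily point p4 at the new workspace for this clone
p4 set P4CLIENT=prefix-buildY-suffix

# Duplicate the existing workspace spec, using the old one as a template
p4 client -t prefix-buildX-suffix prefix-buildY-suffix

# Switch the new workspace to the stream (done as a separate step here)
p4 client -s -S //depot/main-stream prefix-buildY-suffix

# Rebuild the have-list without transferring the files again
p4 sync -k //depot/main-stream/root_folder/...#head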
So, here also there's a name prefix-buildY-suffix which is the same as the one from the instance=computer=node (buildY). It may be a separate question, but as it's still from the same context, I'm putting it here: should I recreate a new workspace all the time? Knowing that it's on several machines, I'd say yes. Otherwise, I "imagine" that p4 would have contradictory information about the state of this workspace. So, here also, I currently need to customize the name. So, even if I make the Windows computer name generic, I would still need to customize the p4 workspace name, wouldn't I?
Jenkins must have the same computer name as the one on the network.
So, all three names must be identical.
I've created an autoscaling group on EC2 and it's working just fine. Servers scale up and down depending on load. I'd like to have a little more info on the management side and am wondering if there's a way to get the autoscaling group to dynamically add names to the instances that it boots up. I'm referring to adding a Tag with key=Name and value=autogeneratedid.
For example, if I had an autoscaling group called test-group, servers would boot up with the following names:
test-group-1
test-group-2
test-group-3
...
I'd like to find and enumerate them in the EC2 Management Console, but right now they're just showing up with empty names (the Name tag isn't explicitly set on the instances).
Any ideas?
In order to get the tags to be set on the instances, make sure you are setting the PropagateAtLaunch flag ("p=1") for the tag in the Auto Scaling Group.
You'll want to read this section in Amazon's documentation:
http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/ASTagging.html
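With the current unified AWS CLI, the equivalent would look something like this (the group name and tag value are just examples):

# Add a Name tag to the Auto Scaling group and propagate it to new instances at launch
aws autoscaling create-or-update-tags --tags "ResourceId=test-group,ResourceType=auto-scaling-group,Key=Name,Value=test-group,PropagateAtLaunch=true"

# Verify the tag (and its propagation flag) on the group
aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=test-group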
As for having Amazon dynamically add parameters to the tag value, I'm not aware of any such feature.
I'd like to specify the snapshot ID that would be used to create the root device image for an EC2 instance created with CloudFormation. How do I do that?
I could only find a way to make a volume from a snapshot, but no way to use it in the instance.
If you want to use an EBS snapshot as the basis of the root disk (EBS volume) for an instance, you need to first register the snapshot as an AMI (e.g., using ec2-register).
Make sure to specify the correct architecture and kernel (AKI) when you register the snapshot as an AMI.
Alternatively, instead of taking a snapshot and registering it as separate steps, you could use the ec2-create-image command/API/console function to perform the snapshot and registration in a single step. This also takes care of picking the right architecture, kernel, and other parameters.
Once you have an AMI, you can tell CloudFormation to use that AMI when running a new instance.
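As a rough sketch with the AWS CLI (the snapshot ID, instance ID, image name and device name are placeholders; the architecture and virtualization type have to match whatever is actually on the snapshot):

# Register the snapshot as an AMI, using it as the root device
aws ec2 register-image --name "restored-root" --architecture x86_64 --virtualization-type hvm --root-device-name /dev/xvda --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}"

# Or, from a running instance, do the snapshot and registration in one step
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "restored-root"

Either command returns an AMI ID, and that ID is what goes into the CloudFormation template (the ImageId property of the AWS::EC2::Instance resource).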
I concur. This has nothing to do with CloudFormation, but I just did this following a crippling 'do-release-upgrade'. It's just a matter of creating an image from the snapshot and, in my case, making sure to change the virtualization type to "hardware assisted virtualization" (HVM). Then you can just launch the resulting image (AMI).
1) I had an instance on which sudo commands were not working due to some mistakes made on that instance,
so I had to create a new instance.
2) I want to use the old EBS volume with the new instance and stop the old instance.
3) I created a new instance (a new EBS volume is created automatically with a new instance).
4) I created a snapshot of the old volume and attached it to the new instance.
5) So two EBS volumes are attached to the new instance.
6) When I log in to the new instance using SSH, I don't see the old data anywhere.
7) I want all the old data on the new instance.
My question is:
How can I use the old volume with the new instance?
Please help me. I have been trying for the last 10 hours straight :(
What you need to do is mount the old volume on the new instance. Go to the Amazon EC2 control panel, and click "Volumes" (under Elastic Block Store). Look at the attachment information for the old EBS volume. This will be something like <instance id> (<instance name>):/dev/sdg
Make a note of the path given here, so that'd be /dev/sdg in the example above. Then use SSH and connect to your new instance, and type mkdir /mnt/oldvolume and then mount /dev/sdg /mnt/oldvolume (or whatever the path given in the control panel was). Your files should now be available under /mnt/oldvolume. If this does not solve your problem, please post again with the output of your df command after doing all of this.
So, to recap, to use an EBS volume on an instance, you need to attach it to that instance using the control panel (or API tools), and then mount it on the instance itself.
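For example, once the volume shows as attached (the device name is whatever the console reported; on newer instances the old volume may appear as /dev/xvdg or /dev/nvme1n1 rather than /dev/sdg, so check with lsblk first):

# List the block devices the instance actually sees
lsblk

# Create a mount point and mount the old volume's filesystem
sudo mkdir /mnt/oldvolume
sudo mount /dev/xvdg /mnt/oldvolume

# Verify the mount and browse the old data
df -h
ls /mnt/oldvolume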
I do the following using the AWS web console:
Attach EBS volume-A to instance-A. Make some changes to the data on volume-A and detach it.
Launch new instance-B (in the same zone as instance-A)
Try to attach volume-A to the new instance-B. But the new instance does not appear in the instance list during the attach-volume process (dialog box).
If I try the same attach using command line EC2 API (volume-A and instance-B), it works fine!
Do you know if this is a bug in the AWS web console, or am I doing something wrong in the console? I tried a page refresh in step 3, but it still would not list the new instance.
In order to attach, the volume and the instance have to be in the same availability zone. So if you are going to attach a volume to an instance, check the instance's zone (for example, by looking at the zone of the volume already attached to it). If they don't match, create a new instance in the same zone as the volume you need to attach.
The volume and the instance have to be in the same region AND the same zone.
If you have a volume in us-east-1a and the instance in us-east-1b, you would need to move the volume to us-east-1b (by snapshotting it and creating a new volume from that snapshot in us-east-1b) to make it work.
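You can confirm the two zones from the command line as well, for example (IDs are placeholders):

aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 --query "Volumes[0].AvailabilityZone"
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[0].Instances[0].Placement.AvailabilityZone"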
I faced this problem yesterday and the day before as well. It looks like a problem with Amazon's cache; not sure why.
To get things back to normal, I had to sign out and sign back in to make sure everything was right. In any case, it's always good to work with the CLI; it works more reliably.
Although the user interface may not list the instance ID, you can attempt to add the volume anyway. If it's genuinely impossible (rather than a cache issue) you will get an error message.
Paste in the instance ID (i-xxxxxxx) manually, then type the device name (e.g. /dev/sdf) and click Attach.
For the benefit of others: some instance types do not support encrypted volumes, which may be why the instance doesn't appear in the list. I get the following error:
Error attaching volume: 'vol-12341234' is encrypted and 't2.medium' does not support encrypted volumes.
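If you want to rule this out before picking an instance type, you can check the volume's encryption flag directly, e.g.:

aws ec2 describe-volumes --volume-ids vol-12341234 --query "Volumes[0].Encrypted"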