EC2 AMI instances shared password management

My goal is to launch 200 instances of a Windows node from the same AMI in AWS. These nodes come up and connect to my head node. Now, every launch of a new node creates a new password for that node. This is hard to manage, especially if I want to do group remote maintenance.
I was thinking, maybe I can make all instances of a specific AMI have the same password, but how do I do that? Should I modify the sysprep config file C:\Program Files\Amazon\Ec2ConfigService\sysprep2008.xml, or should I disable the set-password option in the EC2Config tool and then create an AMI?
If it's the config file, what exactly should I put in sysprep2008.xml?

Related

Accidentally Deleted Local Key Pair

I am running an AWS EC2 VM for a school project. I accidentally deleted the local key pair on my computer and then emptied the recycle bin on my Mac. I don't see a way to re-download the key pair.
There are important things running on the VM that I need.
Is it possible to re-download the local key pair?
I can't even seem to assign a new key pair to that instance.
There are two ways to recover access.
AWS Systems Manager (SSM) automation
To recover access to your Linux instance using AWS Systems Manager (SSM) automation, run the AWSSupport-ResetAccess automation document. For more information, see Reset Passwords and SSH Keys on Amazon EC2 Instances.
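For example, the automation can be started from the AWS CLI (the instance ID and region below are placeholders):
# kick off the AWSSupport-ResetAccess automation against the affected instance
aws ssm start-automation-execution --document-name "AWSSupport-ResetAccess" --parameters "InstanceId=i-0123456789abcdef0" --region us-east-1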
Manually recover access
To manually recover access to your Linux instance, create a new key pair to replace the lost key pair. For more information, see Connecting to Your Linux Instance If You Lose Your Private Key.
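A rough sketch of the manual route, run from a second (rescue) instance after stopping the lost instance and attaching its root volume (the device name, mount point, and user name are assumptions that vary by AMI):
# on the rescue instance, with the broken instance's root volume attached as /dev/xvdf
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
# append the public half of a freshly created key pair for the login user
cat new-key.pub | sudo tee -a /mnt/rescue/home/ec2-user/.ssh/authorized_keys
sudo umount /mnt/rescue
# detach the volume, reattach it to the original instance as its root device, and start it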

Automating a new node into Ansible configuration

I am trying to automate bootstrapping a new node, for instance an EC2 instance, into Ansible as a slave node. I have seen some solutions like copying public keys using user-data. Can anyone suggest a more solid approach, with an example of how to achieve this? Thanks in advance.
Ansible has two types of nodes:
Master / Control machine: the node from which Ansible is invoked
Client / Remote machine: the nodes on which Ansible operates
Ansible's primary mode of transport is SSH, over which it applies the playbook to the remote machine. For Ansible to SSH into the remote machine, SSH has to already be set up between the machines, preferably with private/public key authentication.
When it comes to EC2:
1) Every node has a default key with which the instance is launched, and Ansible could use this key to SSH in, but this is considered insecure / not a best practice.
2) A key has to be present on the client node with which Ansible can authenticate successfully, and the most preferred way is to pull the keys via user-data from a restricted S3 bucket, as sketched below.
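As a sketch, the user-data for a new client node could pull the control machine's public key at first boot (the bucket, key path, and user are placeholders; the instance needs an IAM role that allows s3:GetObject on the bucket):
#!/bin/bash
# fetch the Ansible control machine's public key from a restricted bucket
aws s3 cp s3://my-restricted-bucket/ansible-control.pub /tmp/ansible-control.pub
# authorize it for the user Ansible will connect as
mkdir -p /home/ec2-user/.ssh
cat /tmp/ansible-control.pub >> /home/ec2-user/.ssh/authorized_keys
chown -R ec2-user:ec2-user /home/ec2-user/.ssh
chmod 600 /home/ec2-user/.ssh/authorized_keys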

AWS EC2 Instance Hacked

One of my EC2 instances was hacked a few days ago.
I tried logging in via SSH to the server, but I couldn't connect. I am the only one with access to the private key, and I keep it in a safe place.
Luckily, I had a backup of everything and was able to move the web app to a new instance quite fast.
My concern right now is that I don't know how my instance was hacked in the first place.
Why can't I log in via SSH using my private key? I would assume that the private key stored on the server can't be (easily) deleted.
Is there a way I can find out how the hacker gained access to the instance? Perhaps a log file that would point me in the right direction.
Should I attach the EBS volume in question to a new instance and see what's on it, or what are my options in this case?
Right now, it seems I have no access at all to the hacked instance.
Thank you!
@Krishna Kumar R is correct about the hacker probably changing the ssh keys.
Next steps:
Security concerns (do these now!):
Stop the instance, but don't terminate yet
Revoke/expire any sensitive credentials that were stored on the instance, including passwords and keys for other sites and services. Everything stored on that instance should be considered compromised.
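For example, an IAM access key that lived on the box can be deactivated immediately from the AWS CLI (the key ID and user name are placeholders):
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive --user-name app-user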
Post-mortem
Take an EBS snapshot of the instance's root volume (assuming that's where logs are stored)
Make a new volume from the snapshot and attach to a (non-production) instance
Mount it and start reading logs. If this is a Linux host and you have port 22 open in the firewall, I'd start with /<mount-point>/var/log/auth.log
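A sketch of those last two steps, assuming the new volume shows up on the investigation instance as /dev/xvdf (device names vary):
# on the non-production instance, after attaching the volume built from the snapshot
sudo mkdir -p /mnt/investigation
sudo mount /dev/xvdf1 /mnt/investigation
# look for accepted or failed logins around the time of the compromise
sudo grep -E 'Accepted|Failed' /mnt/investigation/var/log/auth.log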
They might have logged into your machine via password. In the sshd config, check the value of PasswordAuthentication. If it is set to yes, then users can log in to the instance remotely via password. Check /var/log/secure for any remote logins. It will show all logins (password- or key-based).
If someone logged in as 'root', they could have modified the ssh keys.
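For example, roughly (paths assume a RHEL/Amazon Linux style host; run against the mounted volume instead if the instance itself is unreachable):
# is password-based SSH login enabled?
sudo grep -i '^PasswordAuthentication' /etc/ssh/sshd_config
# list successful logins, whether password- or key-based
sudo grep 'Accepted' /var/log/secure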
The fact that you are unable to login to the machine does not mean that it has been "hacked". It could be due to a configuration change on the instance, or the instance might have changed IP address after a stop/start.
Do a search on Stack Overflow for standard solutions to problems connecting to an instance and see if you can connect (e.g. recheck the IP address, check the security group, turn on ssh -v debugging, check network connectivity & VPC settings, view the Get System Log, etc.).
Worst case, yes, you could:
Stop the instance
Detach the EBS volume
Attach the EBS volume to another EC2 instance
Access the content of the EBS volume
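Those steps, sketched with the AWS CLI (all IDs and the device name below are placeholders):
aws ec2 stop-instances --instance-ids i-0aaaa1111bbbb2222c   # stop the affected instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0      # detach its EBS volume
# attach the volume to a separate, healthy instance for inspection
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0dddd3333eeee4444f --device /dev/sdf
# then mount the device on that instance and read its contents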

Deploy Application to AWS EC2 Instance using terraform

I need to deploy my Java application to an AWS EC2 instance using Terraform. The catch here is that we should not use a *.pem file to deploy the application.
I tried creating an ELB and associating instances using Terraform. I am able to deploy the application over SSH with a pem file to the EC2 instances' private IPs. But we shouldn't use *.pem or *.ppk files, as they won't be allowed on production servers.
I tried using Chef with Terraform, but that also requires a *.pem file to connect to the AWS instances.
Please let me know detailed steps/suggestions for how to deploy the application using Terraform without using a pem file.
If you can't make any changes to your instance after creating it (including deploying the application) then you will need to bake any and all changes into the AMI that Terraform deploys.
You might want to look into using Packer to create AMIs with your intended configuration and then use Terraform to deploy these AMIs.
For reference, this strategy is known as "immutable infrastructure" so you might want to do some further reading into this area.
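A minimal sketch of that workflow, assuming a Packer template named webapp.pkr.hcl that bakes the application into the image and a Terraform variable ami_id (both names are hypothetical):
# bake an AMI with the Java application already installed
packer build webapp.pkr.hcl
# deploy infrastructure from that AMI (paste the AMI ID that packer prints when it finishes)
terraform apply -var="ami_id=ami-0123456789abcdef0"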
If instead it's simply that SSH connectivity is not allowed and you can make changes over other ports, then you should be able to use an AMI that has a Chef client, Puppet agent or Salt minion on it (there may well be other tools that work over a non-SSH protocol/port, but this restriction rules out Ansible) and then use any of those tools to continue configuring your instance. Obviously you could find a suitable AMI in the AMI marketplace or, once again, use Packer to set up the relevant configuration management client.

automatically start apache on instance launch

I have an ec2 instance serving a webpage with apache. I created an autoscaling group using an AMI of this instance in the launch config. Once CPU went over 80% and the autoscale policy ran, a new instance was created. But the CPU of my original instance continued to rise and the CPU of my new instance remained at 0%.
The new instance was not serving the web page. I am guessing this is because apache was not started with the launch of the image. I tried to ssh into the new instance to run "service httpd start" but I got the following error:
ssh: Could not resolve hostname http://ec2-xxx-xx-xxx-xxx.compute-1.amazonaws.com:
nodename nor servname provided, or not known
Why could I not ssh in? How do I configure autoscaling to automatically start apache on launch?
It would appear that you are attempting to ssh to a host with http:// in the hostname. Remove that and ssh should work.
Assuming that you created an AMI to use in AutoScaling, you would need to ensure that you ran chkconfig httpd on on the source instance before creating the new AMI for AutoScaling.
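For example, on the source instance before creating the image (a sysvinit-style Amazon Linux is assumed):
sudo chkconfig httpd on    # start Apache automatically at every boot
sudo service httpd start   # and start it now
# now create the AMI from this instance for the AutoScaling launch config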
In order for you to connect to an EC2 instance you need two things:
The Security Group associated with your instance has an inbound rule that allows SSH communication.
Make sure you have the private key generated for the instance. Note: This is only needed if you chose to use a key in the first place.
If those two things are correct, then you can connect to your instance like this:
ssh -i "PATH_TO_YOUR_KEY.pem" ec2-user#ec2-xxx-xx-xxx-xxx.compute-1.amazonaws.com
For the other point, that is, making sure Apache starts on launch, you can do two things:
As @atbell mentioned in a previous answer, you can make sure that chkconfig YOUR_SERVICE on has been run on the AMI used to start your instance.
You can add a command as user data to your LaunchConfiguration so it runs it as soon as the instance is started:
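#!/bin/bash
# minimal user-data sketch: start the service at first boot
# (YOUR_SERVICE is the answer's placeholder; for this question it would be httpd)
service YOUR_SERVICE start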
What this will do is run service YOUR_SERVICE start as soon as the instance can respond to commands. So, whenever your AutoScaling group creates another instance, your service will surely be started. Note that the commands added to the user data field of the LaunchConfiguration are, by default, executed with root privileges.
