How to replace the key pair used to access AWS EC2 machines.
We have 3 machines (machineA, machineB, machineC) all using the same key pair for SSH access. How can we replace it with a new key pair?
Here are the steps to use a new key pair to access an EC2 machine:
Generate a public-private key pair from within your instance. Sample command to generate the key pair on Amazon Linux 2:
ssh-keygen -t rsa
Append the newly generated public key to the ~/.ssh/authorized_keys file:
cat newkeypair.pub >> ~/.ssh/authorized_keys
Now you will need to download the private key. One way is to attach a role with AmazonS3FullAccess to the EC2 instance and upload the private key to a bucket:
aws s3 cp newkeypair s3://my-bucket
After downloading the private key to your local machine, change its permissions and connect to your EC2 instance:
chmod 400 newkeypair
ssh -i newkeypair ec2-user@instance-public-ip
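For completeness, the matching download step on your local machine would look like this (a sketch, reusing the my-bucket name from above):
aws s3 cp s3://my-bucket/newkeypair .
Once the new key works, you would also remove the old public key line from ~/.ssh/authorized_keys on each machine to complete the replacement.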
Related
When I SSH into an AWS EC2 Linux instance, the user is ec2-user by default. I then need to set AWS credentials by writing to ~/.aws/credentials, but I get permission denied. I suspect that if I use sudo, the credentials file will be owned by the root user, and as a result my API server can't read from it.
What's the correct approach to set up AWS credentials there?
The 'correct' way to set up the credentials is to assign a role to the EC2 instance when you create it (or assign one after you create it). That role can be created and assigned to the EC2 instance via the AWS console; there is no need to SSH in and create the credentials there.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console | AWS Security Blog
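If you prefer the command line, attaching an existing instance profile looks roughly like this (the instance ID and profile name below are placeholders, not values from the question):
aws ec2 associate-iam-instance-profile --instance-id i-1234567890abcdef0 --iam-instance-profile Name=my-instance-profile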
You can create the credentials file locally, then upload it to your EC2 instance.
Create the credentials file locally:
$ vim credentials
Upload it to your EC2 instance:
$ scp /path/credentials username@servername:/path
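Note that the file must end up under ~/.aws/ and be owned by the login user, which is what avoids the permission problem described above. A minimal sketch, assuming the upload landed in the home directory:
$ mkdir -p ~/.aws
$ mv ~/credentials ~/.aws/credentials
$ chmod 600 ~/.aws/credentials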
How can we transfer files from an on-prem production server to an AWS EC2 instance in a PLE server using password-less SCP, with the same service account ID on both the source and destination servers?
The easiest option is to set up key-based authentication on your AWS machine. I assume you already have key-based SSH access to the server; in that case you can simply execute the command below:
scp -i <path_to_private_key> <source_file_path> username@PublicIP:/tmp/
path_to_private_key is the private key used for SSH to your AWS machine
source_file_path is the file to be copied
username is the SSH username used to SSH to your AWS machine
PublicIP is the IP of your AWS machine
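If the service account has no key pair yet, one way to make access password-less is to generate a key on the on-prem server and install its public half on the EC2 side (a sketch; it reuses the existing -i key once to do the install):
# on the on-prem server, as the service account
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# install the new public key on the EC2 instance
cat ~/.ssh/id_rsa.pub | ssh -i <path_to_private_key> username@PublicIP 'cat >> ~/.ssh/authorized_keys'
# afterwards scp works without -i
scp <source_file_path> username@PublicIP:/tmp/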
I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can SSH into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server; the playbook task is:
service: name=apache2 state=started
In my security group in the AWS console, I allowed all incoming SSH traffic on port 22, and Ansible tries to SSH through port 22, so that should not be the problem. Is there some crucial idea behind SSHing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you set:
private_key_file = path of private key (server-private-key)
On the host machine, don't change the default authorized_keys file. A better way is to create a user, create a .ssh directory for that user, and inside it create a file called authorized_keys and paste your server public key there:
~/.ssh/authorized_keys
Try: ansible-playbook yourplaybookname.yml --connection=local
(Note that Ansible defaults to SSH.)
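As a quick connectivity check before running the playbook, you can ping the group from the inventory shown in the question (a sketch; it assumes the hosts file sits in the current directory):
ansible instance -m ping -i hosts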
When you create a Key Pair in the AWS console and then create an EC2 instance, is that key automatically added to the instance?
Or do you have to add it when creating the EC2 instance?
While launching an EC2 instance from the AWS Console, you will be prompted to choose a Key Pair. The public key of the chosen Key Pair will be added to the authorized_keys file of the default login user.
If launching an EC2 instance using the AWS CLI:
aws ec2 run-instances --image-id ami-id --key-name name_of_keypair --other-options
Python Boto3:
import boto3

ec2client = boto3.client('ec2')
ec2client.run_instances(ImageId='ami-id',
                        KeyName='name_of_keypair',
                        ....)
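To confirm the public key really was added, you can query the instance metadata service from inside the instance (a sketch using the IMDSv1 endpoint; instances that enforce IMDSv2 need a session token first):
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key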
I have a Jenkins box. I have SSHed into it, and from there I want to access one of the EC2 instances in AWS. I tried ssh -i "mykeyname.pem" ec2-user@DNSname but it throws the error "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)".
I have the PEM file of the EC2 instance I want to connect to. Is there any way I can SSH into the instance?
There are two possible reasons.
The default user name is not "ec2-user"
Check the image the instance you are connecting to is based on. If its default user is not "ec2-user", change the user name in your SSH command (for example, Ubuntu AMIs use "ubuntu").
Your key pair is incorrect
An EC2 instance can only be accessed with the key pair it was launched with, so check the name of the key pair you are using.
FYI: Connecting to Your Linux Instance Using SSH
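When debugging this kind of failure, a verbose SSH run shows which key is offered and why it is rejected (a sketch, using the names from the question):
ssh -v -i mykeyname.pem ec2-user@DNSname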