I am new to Ansible and have started learning and working on Ansible playbooks, especially for network automation. In part of our hosting infrastructure, in order to log in to any device a default script runs to SSH into the device, something like goto . There is no need to give any username and password; it logs directly into the device.
How can we include this customization in an Ansible playbook without using any username or password?
Ansible supports using SSH keys.
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the authorized_keys file on those systems.
Refer to the documentation here.
Also, it is a good idea to read the 'getting started' page.
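As a concrete sketch (the host name and user below are placeholders, and it assumes OpenSSH's ssh-copy-id is available):

ssh-copy-id admin@device01.example.com   # put your public key in the node's authorized_keys
ssh admin@device01.example.com           # confirm passwordless login works

and then point Ansible at the same user in the inventory:

[devices]
device01.example.com ansible_user=admin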
You will still need to supply a username that the SSH key belongs to:
Guide on setting up an SSH key for a Linux user: Here
Once the SSH key is configured and copied over to your Ansible server:
Edit the sudoers file on the slave node and set NOPASSWD for the user; that way your user won't be prompted for a password when running sudo commands: Reference Here
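As a sketch, the sudoers entry usually looks like this (the user name deploy is a placeholder; edit the file with visudo):

deploy ALL=(ALL) NOPASSWD: ALL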
We are midway through implementing Ansible CI for app deployment. For connecting to the remote host from the control host, we used passwordless SSH authentication (by adding the SSH key to authorized_keys).
But with recent changes, the Unix team no longer allows this in higher environments as a matter of corporate Unix policy, so we have to use passwords instead.
The user that Ansible runs as and connects to the remote machine with is a sudo user and does not have a password of its own.
So in this case, how do we connect from the control host to the remote host without the SSH key?
While running the Ansible playbook you get an option to provide the user to SSH as, using --user. The same configuration can also be achieved in the inventory file:
ansible_user=<user_name>
For the password you can use Ansible Vault.
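For instance (a sketch, not the poster's setup: the group name, user, and file layout are assumptions), you can keep the SSH password in a vaulted variables file and let the inventory group pick it up:

# group_vars/appservers/vault.yml  (encrypt with: ansible-vault encrypt group_vars/appservers/vault.yml)
ansible_user: deploy
ansible_password: "s3cret-ssh-password"

# then supply the vault password at run time:
# ansible-playbook site.yml --ask-vault-pass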
I am editing the answer to add that you can use a user other than the one Ansible is installed with. You can create a new user with password or passwordless authentication set up.
Hope this helps.
Cheers,
Yash
I'm using a CloudFormation template in YAML with embedded cloud-init UserData to set the hostname, install packages, and so on. I've found that once I include the write_files directive it breaks the default SSH key on the EC2 instance, i.e. it seems to interfere with whatever process AWS uses to manage authorized_keys; in the EC2 logs I can see fingerprints of random keys being generated, not the expected keypair.
#cloud-config
hostname: ${InstanceHostname}
fqdn: ${InstanceHostname}.${PublicDomainName}
manage_etc_hosts: true
package_update: true
package_upgrade: true
packages:
- build-essential
- git
write_files:
  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: "ubuntu:ubuntu"
    content: |
      Host github.com
        IdentityFile ~/.ssh/git
      Host *.int.${PublicDomainName}
        IdentityFile ~/.ssh/default
        User ubuntu

power_state:
  timeout: 120
  message: Rebooting to ensure hostname has stuck correctly
  mode: reboot
Removing the write_files block works fine; leave it in and I cannot SSH to the host due to an SSH key mismatch.
So is it due to writing a file into ~/.ssh? Maybe ~/.ssh/authorized_keys gets deleted, or the permissions on the directory are changed?
Appending to ~/.ssh/authorized_keys with runcmd works fine, but I'd like to use the proper write_files method for larger files.
For AWS EC2 Linux instances, the SSH public key from your keypair is stored in ~/.ssh/authorized_keys. If you overwrite it with something else, make sure that you understand the implications.
The correct procedure is to append public keys from keypairs to authorized_keys AND set the correct file permissions.
If you are setting up a set of keypairs in authorized_keys, this is OK also. Make sure that you are formatting the file correctly with public keys and setting the file permissions correctly.
The file permissions should be 644 so that the SSH server can read them.
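If you are managing the file yourself, a minimal sketch of the append-and-fix-permissions approach (the extra key path is a placeholder):

cat /tmp/extra-key.pub >> /home/ubuntu/.ssh/authorized_keys
chown -R ubuntu:ubuntu /home/ubuntu/.ssh
chmod 700 /home/ubuntu/.ssh
chmod 644 /home/ubuntu/.ssh/authorized_keys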
Another possible issue is that when you change authorized_keys you may also need to restart the SSH server, but I see that you are rebooting the server, which removes that problem.
Ubuntu example:
sudo service ssh restart
I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can SSH into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server (the playbook task is service: name=apache2 state=started). In my security group in the AWS console, I allowed all incoming SSH traffic on port 22, and Ansible tries to SSH through port 22, so that should not be the problem. Is there some crucial idea behind SSHing into instances that I didn't catch on to? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you have:
private_key_file = path of the private key (server private key)
And on the host machine, don't change the default authorized_keys file; a better way is to create one user, create a .ssh directory for that user, and inside it create a file called authorized_keys and paste your server public key into it:
~/.ssh/authorized_keys
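For example, a minimal ansible.cfg sketch using the key path and user from the question:

[defaults]
private_key_file = /home/josh/Ansible/Amazon/AWS.pem
remote_user = ubuntu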
Try: ansible-playbook yourplaybookname.yml --connection=local
Ansible defaults to SSH.
I have a Jenkins box; I have SSHed into it, and from there I want to access one of the EC2 instances in AWS. I tried ssh -i "mykeyname.pem" ec2-user@DNSname but it throws the error "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)".
I have the PEM file of the EC2 instance I want to connect to. But is there any way I can SSH into the instance?
There are two possible reasons.
1. The default user name is not "ec2-user"
Check which image the instance you are connecting to uses.
If it doesn't use "ec2-user", change the user name in your ssh command.
2. Your key pair is incorrect
Once you have created the EC2 instance with the correct key pair, you can access it with commands like the one above.
Please check the name of the key pair you are using.
FYI
Connecting to Your Linux Instance Using SSH
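For reference, the usual shape of the command depends on the AMI's default user (the host name below is a placeholder):

ssh -i mykeyname.pem ec2-user@<public-dns>   # Amazon Linux AMIs
ssh -i mykeyname.pem ubuntu@<public-dns>     # Ubuntu AMIs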
I am trying to add a known host to the known_hosts file using Ansible.
vagrant#jedi:/vagrant$ ansible web -m known_hosts -a "name=web state=present"
paramiko: The authenticity of host 'web' can't be established.
The ssh-rsa key fingerprint is afb8cf4885468badb1a7b8afc16ac211.
Are you sure you want to continue connecting (yes/no)?
I keep getting prompted as above.
I thought this module took care of this? Otherwise I should do a keyscan of web and add that to the known_hosts file.
What am I doing wrong or misunderstanding?
@udondan explained it very well. Just another note:
This is the default behavior of Ansible: it will always check the host key. If Ansible has never connected to that host, this prompt will appear. That is why you aren't able to call the known_hosts module without accepting the key first.
If this is not desirable you can set host_key_checking=False in ansible.cfg.
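In ansible.cfg that looks like:

[defaults]
host_key_checking = False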
MORE SECURE APPROACH
The better approach is to set the environment variable with export ANSIBLE_HOST_KEY_CHECKING=False while you're deploying new servers, then remove it afterwards with unset ANSIBLE_HOST_KEY_CHECKING.
Host key checking is an important security feature.
Learn more about host_key_checking.
The module takes care of it. But there are two problems:
Since you're running the task on the target host, Ansible will first try to connect to the host before it is able to run your task.
Also, since the task runs on the target host, you would add the fingerprint to the known_hosts file on that machine, not locally.
You would need to run the task on the local machine, not on the target machine(s).
ansible localhost -m known_hosts -a "name=web state=present"
Otherwise I should do a keyscan of web and add that to the known_hosts file.
I think you need to do that anyway, since the known_hosts module expects you to pass the key. It does not auto-detect the fingerprint.
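A minimal playbook sketch of that keyscan-plus-known_hosts approach, run against localhost (the host name web comes from the question; it assumes ssh-keyscan is available on the control machine):

- hosts: localhost
  connection: local
  tasks:
    - name: scan web's RSA host key
      command: ssh-keyscan -t rsa web
      register: keyscan
      changed_when: false

    - name: add web to the local known_hosts file
      known_hosts:
        name: web
        key: "{{ keyscan.stdout }}"
        state: present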