Ansible passwordless access between nodes

I want to give every node passwordless SSH access to every other node.
For each node:
Get its public SSH key
Add that key to the authorized_keys file of all other nodes
Below is what I tried, but it's not working as expected.
- name: Get ssh public key from all the nodes for some_user user
  shell: cat ~/.ssh/id_rsa.pub
  register: ssh_pub_key
  become: yes
  become_user: some_user
  changed_when: "ssh_pub_key.rc != 0"
  always_run: yes

- set_fact:
    auth_keys: "{{ ssh_pub_key.stdout | join() }}"

- debug: var=auth_keys

- name: Add public key to all other nodes for some_user user
  authorized_key:
    user: some_user
    key: "{{ ssh_pub_key.stdout }}"

There is no need to collect the SSH key from every node and distribute it to every other node; that is bad practice.
Use SSH agent forwarding instead.
All you need is one key (create it on a central server or use an existing one) and simply push its public part to your nodes with Ansible.
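A minimal sketch of that push step, assuming the key pair lives on the control node as ~/.ssh/id_rsa(.pub) and the remote account is some_user, as in the question:

- name: Push the control node's public key to some_user on all nodes
  authorized_key:
    user: some_user
    state: present
    # key path is an assumption; adjust it to wherever your single key pair lives
    key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"
  become: yes

Run against a group containing all nodes, this distributes one public key everywhere instead of collecting n keys for n hosts.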

Clone the repository to your ansible-enabled host:
git clone https://github.com/ilias-sp/ansible-setup-passwordless-ssh.git
Alternatively, you can download ansible_setup_passwordless_ssh.yml and the hosts file from the repository.
Run:
ansible-playbook -i hosts ansible_setup_passwordless_ssh.yml
By running this playbook, these things happen to your hosts:
Localhost: An SSH key is generated and placed under the .ssh folder. Its file name is configurable; the default is ansible_rsa. This SSH key is added to the ~/.ssh/config file so the SSH client uses it when connecting to the remote hosts.
Remote hosts: The generated SSH key is propagated to the list of remote hosts you configured in the hosts inventory file and added to their ~/.ssh/authorized_keys files. This is done with the ssh-copy-id Linux utility, which is meant for exactly this job; the sshpass utility is used so the playbook can run without prompting for the user password.
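For reference, the commands those utilities run look roughly like this (user, host, and password are placeholders; the key name is the repository's default mentioned above):
sshpass -p 'remote_password' ssh-copy-id -i ~/.ssh/ansible_rsa.pub remote_user@remote_host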
Reference : https://github.com/ilias-sp/ansible-setup-passwordless-ssh

Related

How to run playbook on my application server using sudo?

I am writing a playbook for my application Tomcat node. It will copy, deploy, and stop/start Tomcats.
I have a hop box serverA, another hop box serverB, and Tomcat node tomcatC. Manually, using PuTTY, I use the steps below to get onto the Tomcat node:
Log in to serverA using userId1
ssh to serverB using userId2
ssh to tomcatC using userId1
sudo to the tomcat user.
I am also able to ssh directly to tomcatC from serverA, and my Ansible master is also serverA, from where I am running the playbooks.
How do I run my playbook for this? Below is the playbook I am using as of now, but it's not working.
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass
And my-V3.yml looks like below:
- hosts: '{{ target_env }}'
  #serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
I am now getting this error:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have Ansible call shell scripts from your jump box to run the commands you need. Your problem, though, looks like you don't have the correct SSH key applied.
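On the "Missing become password" error itself: become_user: tomcat goes through sudo, so Ansible needs the sudo password unless passwordless sudo is configured for userId1 on tomcatC. A hedged fix, reusing the command from the question, is to add --ask-become-pass (or -K):
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass --ask-become-pass
Since serverA (the Ansible master) can already ssh directly to tomcatC, no hop through serverB should be needed for the playbook run.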

How to switch out of root acount during server set up?

I need to automate the deployment of some remote Debian servers. These servers come with only the root account. I wish to make it such that the only time I ever need to log in as root is during the setup process. Subsequent logins will be done using a regular user account, which will be created during setup.
However, during the setup process I need to set PermitRootLogin no and PasswordAuthentication no in /etc/ssh/sshd_config. Then I will be doing a service sshd restart. This will break the SSH connection, because Ansible logged into the server using the root account.
My question is: how do I make Ansible ssh in as root, create a regular user account, set PermitRootLogin no and PasswordAuthentication no, then ssh into the server as the regular user and do the remaining setup tasks?
It is entirely possible that my setup process is flawed. I would appreciate suggestions.
You can actually manage the entire setup process with Ansible, without requiring manual configuration prerequisites.
Interestingly, you can change ansible_user and ansible_password on the fly, using set_fact. Remaining tasks executed after set_fact will be executed using the new credentials:
- name: "Switch remote user on the fly"
hosts: my_new_hosts
vars:
reg_ansible_user: "regular_user"
reg_ansible_password: "secret_pw"
gather_facts: false
become: false
tasks:
- name: "(before) check login user"
command: whoami
register: result_before
- debug: msg="(before) whoami={{ result_before.stdout }}"
- name: "change ansible_user and ansible_password"
set_fact:
ansible_user: "{{ reg_ansible_user }}"
ansible_password: "{{ reg_ansible_password }}"
- name: "(after) check login user"
command: whoami
register: result_after
- debug: msg="(after) whoami={{ result_after.stdout }}"
Furthermore, you don't have to fully restart sshd to cause configuration changes to take effect, and existing SSH connections will stay open. Per sshd(8) manpage:
sshd rereads its configuration file when it receives a hangup signal, SIGHUP....
So, your setup playbook could be something like:
log in initially with the root account
create the regular user and set a password or configure authorized_keys
configure sudoers to allow the regular user to execute commands as root
use set_fact to switch to that account for the rest of the playbook (remember to use become: true on tasks after this one, since you have switched from root to the regular user; you might even try executing a test sudo command before locking root out)
change the sshd configuration (see the sketch after this list)
execute kill -HUP <sshd_pid>
verify by setting ansible_user back to root and fail if the login still works
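A minimal sketch of the sshd steps; the module arguments below are assumptions in the spirit of the list above, not the author's exact tasks:

- name: Disallow root login and password authentication
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^#?{{ item.key }}"
    line: "{{ item.key }} {{ item.value }}"
  with_items:
    - { key: "PermitRootLogin", value: "no" }
    - { key: "PasswordAuthentication", value: "no" }
  become: true
  register: sshd_config_result

- name: Reload sshd config without dropping existing connections (SIGHUP)
  # the pid file path is the usual default; adjust it for your distribution if needed
  shell: kill -HUP "$(cat /var/run/sshd.pid)"
  become: true
  when: sshd_config_result | changed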
You probably just want to make a standard user account and add it to sudoers. You can then run Ansible as the standard user, and if you need a command to run as root, you just prefix the command with sudo.
I wrote an article about setting up a deploy user:
http://www.altmake.com/2013/03/06/secure-lamp-setup-on-amazon-linux-ami/
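A hedged sketch of that approach (the user name and sudoers content are placeholders, not taken from the linked article):

- name: Create a deploy user
  user:
    name: deploy
    shell: /bin/bash
  become: true

- name: Allow the deploy user to run commands via sudo
  copy:
    dest: /etc/sudoers.d/deploy
    content: "deploy ALL=(ALL) ALL\n"
    mode: "0440"
    # validate the file with visudo before installing it
    validate: "visudo -cf %s"
  become: true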

Ansible execute remote ssh task

I have tried connecting to a remote "custom" Linux VM and copied my public SSH key to it, yet I'm unable to get Ansible to ping / connect to it, as I keep getting "remote host unreachable". I think it's because it's running a custom version of Linux and SSH auth might be behaving differently.
The following remote ssh command works
# ssh -t user@server_ip "show vd"
password: mypassword
I'm trying to convert the above command to an Ansible playbook:
---
- hosts: server_ip
  gather_facts: no
  remote_user: root
  tasks:
    - name: check devise status
      shell: ssh -i server_ip "show vd"
      register: output

    - expect:
        command: sh "{{ output.stdout }}"
        responses:
          (?i)password: user
I'm unable to get it to work, and I'm not sure if this is the right way of doing it; your input is highly appreciated.
Whenever you have ssh connection problems you should add the -vvvv parameter to your ansible-playbook command line. That way it will give detailed information about the ssh connection and errors.
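For example (the inventory and playbook file names are placeholders):
ansible-playbook -i hosts playbook.yml -vvvv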

How to copy file from host2 to host3 assuming ansible was run on host1

I have Ansible installed on host1. There is a file on host2 which I need to copy to host3 with Ansible. I am using RHEL.
I have the following YAML running against host2, but it gets stuck at the copy file task.
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
              regexp=^#?AllowAgentForwarding
              line="AllowAgentForwarding yes"
              follow=yes
              backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "RHEL: Restart sshd"
  shell: /etc/init.d/sshd restart
  sudo: yes
  when:
    - changed_sshd_config | changed

- name: Copy file from host2 to host3
  shell: rsync -r -v -e ssh /root/samplefile.gz root@host3:/root
Can anyone explain to me what is missing here? If you can, please provide detailed steps mentioning the correct hosts.
The problem most likely is that you cannot log in from host2 to host3, and Ansible hangs at this task because rsync is waiting for you to enter the SSH password. Are you able to log in manually from host2 to host3?
I answered the same question before in How communicate two remote machine through ansible (can't flag as duplicate because no answer was accepted...)
The following is a slightly modified version of the linked answer. The tasks were written for CentOS, so it should work on RHEL.
For this to work, you will either need to have your private SSH key deployed to host2, or, preferably, enable SSH agent forwarding, for example in your .ssh/config:
Host host2
    ForwardAgent yes
Additionally sshd on host2 would need to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
              regexp=^#?AllowAgentForwarding
              line="AllowAgentForwarding yes"
              follow=yes
              backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "Restart sshd"
  shell: systemctl restart sshd.service
  sudo: yes
  when: changed_sshd_config | changed
You might need to configure this in a separate playbook, because Ansible might keep an open SSH connection to the host, and after activating agent forwarding you will probably need to re-connect.
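Before running the copy task, a quick manual check that forwarding works can save time. This is only an illustrative check, run from host1 with your key loaded into the agent:
ssh -A host2 'ssh -o BatchMode=yes root@host3 true && echo forwarding works'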

How communicate two remote machine through ansible

I am running an Ansible playbook from system 1 which runs tasks on system 2 to take a backup; after that, I want to copy the backup file from system 2 to system 3.
I am trying to automate the command below, where /bck1/test is on system 2 and /opt/backup is on system 3:
rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
You can run the raw rsync command with the shell module.
tasks:
  - shell: rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
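Note that this task has to run on system 2, the host that holds /bck1/test, so the play targeting matters. A minimal wrapper might look like this (the host alias system2 is an assumption about your inventory):

- hosts: system2
  tasks:
    - name: Push the backup from system 2 to system 3
      shell: rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup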
For this to work, you will either need to have your private SSH key deployed to system 2, or, preferably, enable SSH agent forwarding, for example in your .ssh/config:
Host host2
    ForwardAgent yes
Additionally sshd on system 2 would need to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
              regexp=^#?AllowAgentForwarding
              line="AllowAgentForwarding yes"
              follow=yes
              backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "Debian: Restart sshd"
  shell: invoke-rc.d ssh restart
  sudo: yes
  when:
    - ansible_distribution in [ "Debian", "Ubuntu" ]
    - changed_sshd_config | changed

- name: "CentOS 7: Restart sshd"
  shell: systemctl restart sshd.service
  sudo: yes
  when:
    - ansible_distribution == "CentOS"
    - ansible_distribution_major_version == "7"
    - changed_sshd_config | changed
There are two separate tasks for restarting sshd, one for Debian and one for CentOS 7. Pick what you need, or adapt it to your system.
You might need to configure this in a separate playbook, because Ansible will keep an open SSH connection to the host, and after activating agent forwarding you will most probably need to re-connect.
PS: It's not the best idea to allow ssh login for user root, but that is another topic. :)
