How to communicate between two remote machines through Ansible

I am running an Ansible playbook from system 1 which runs tasks on system 2 to take a backup, and after that I want to copy the backup file from system 2 to system 3.
I am doing this task to automate the command below, where /bck1/test is on system 2 and /opt/backup is on system 3:
rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup

You can run the raw rsync command with the shell module.
tasks:
  - shell: rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
For this to work, you will either need to have your private ssh key deployed to system 2, or, preferably, enable ssh agent forwarding, for example in your .ssh/config:
Host host2
  ForwardAgent yes
Additionally sshd on system 2 would need to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
              regexp=^#?AllowAgentForwarding
              line="AllowAgentForwarding yes"
              follow=yes
              backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "Debian: Restart sshd"
  shell: invoke-rc.d ssh restart
  sudo: yes
  when:
    - ansible_distribution in [ "Debian", "Ubuntu" ]
    - changed_sshd_config | changed

- name: "CentOS 7: Restart sshd"
  shell: systemctl restart sshd.service
  sudo: yes
  when:
    - ansible_distribution == "CentOS"
    - ansible_distribution_major_version == "7"
    - changed_sshd_config | changed
There are two separate tasks for restarting sshd, one for Debian and one for CentOS 7. Pick what you need, or adapt them to your system.
You might need to configure this in a separate playbook, because Ansible will keep an open ssh connection to the host, and after activating agent forwarding you will most probably need to re-connect.
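To put the pieces together, a minimal play targeting system 2 could look like the sketch below (host names, paths and the rsync command are taken from the question; the actual backup task is omitted):
---
# Runs against system 2; rsync then pushes the backup files on to system 3.
# Requires agent forwarding (or a key deployed to system 2) as described above.
- hosts: host2
  tasks:
    - name: Copy backup files from system 2 to system 3
      shell: rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup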
PS: It's not the best idea to allow ssh login for user root, but that is another topic. :)

Related

How to run playbook on my application server using sudo?

I am writing a playbook for my application Tomcat node. It will copy, deploy and stop/start Tomcats.
I have a hop box serverA, another hop box serverB, and a Tomcat node tomcatC. Manually, using PuTTY, I use the steps below to get onto the Tomcat node:
Login to serverA using userId1
ssh to serverB using userId2
ssh to tomcatC using userId1
sudo to tomcat user.
Also, I am able to ssh directly to tomcatC from serverA, and my Ansible master is also serverA, from where I am running the playbooks.
How do I run my playbook for this? Below is the playbook I am using as of now, but it's not working:
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass
and my-V3.yml looks like this:
- hosts: '{{ target_env }}'
  # serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
I am getting this error now:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have shell scripts run the commands you need, and have Ansible call them from the jump box you have. Your problem looks like you don't have the correct ssh key applied, though.
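For what it's worth, the "Missing become password" error itself usually just means Ansible was given no password for privilege escalation. Assuming userId1 can sudo with a password on tomcatC, adding --ask-become-pass (-K) to the command line should get past that particular error:
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass --ask-become-pass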

Ansible passwordless access between nodes

I want to set up passwordless ssh access from every node to all other nodes.
For each node:
Get its public ssh key
Add that key to the authorized_keys file of all other nodes
Below is what I tried, but it's not working as expected.
- name: Get ssh public key from all the nodes for some_user user
  shell: cat ~/.ssh/id_rsa.pub
  register: ssh_pub_key
  become: yes
  become_user: some_user
  changed_when: "ssh_pub_key.rc != 0"
  always_run: yes

- set_fact:
    auth_keys: "{{ ssh_pub_key.stdout | join() }}"

- debug: var=auth_keys

- name: Add public key to all other nodes for some_user user
  authorized_key:
    user: some_user
    key: "{{ ssh_pub_key.stdout }}"
There is no need to collect ssh keys from every node and distribute them to every other node; that is bad practice.
Use ssh agent forwarding instead.
All you need is one key (create it on the central server or use an existing one) and simply push its public part to your nodes with Ansible.
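A minimal sketch of that approach, assuming the key pair already exists on the Ansible control node as ~/.ssh/ansible_rsa (the file name is just an example):
- name: Push the central public key to all nodes for some_user user
  authorized_key:
    user: some_user
    key: "{{ lookup('file', '~/.ssh/ansible_rsa.pub') }}"
Run this against all nodes and every node will accept logins with that one key.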
Clone the repository to your ansible-enabled host:
git clone https://github.com/ilias-sp/ansible-setup-passwordless-ssh.git
Alternatively, you can download the ansible_setup_passwordless_ssh.yml and hosts from this repository.
run:
ansible-playbook -i hosts ansible_setup_passwordless_ssh.yml
By running this playbook, these things happen to your hosts:
Localhost: An SSH key is generated and placed under .ssh folder. Its file name is configurable, default is ansible_rsa. This SSH key is added to the ~/.ssh/config file for SSH client to utilize it when connecting to remote hosts.
Remote hosts: The generated SSH key is propagated to the list of remote hosts you configured in the hosts inventory file and added to their ~/.ssh/authorized_keys file. This is done using the ssh-copy-id Linux utility, which is meant for this job. The sshpass Linux utility is used so that the script can run without prompting for the user password each time.
Reference : https://github.com/ilias-sp/ansible-setup-passwordless-ssh
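For reference, the manual equivalent of what the playbook automates is roughly the following (key file name, user and host are placeholders):
# generate a dedicated key on the control node
ssh-keygen -t rsa -f ~/.ssh/ansible_rsa -N ""
# push the public key to one managed host; sshpass feeds the password to ssh-copy-id
sshpass -p 'THE_USER_PASSWORD' ssh-copy-id -i ~/.ssh/ansible_rsa.pub some_user@node1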

Ansible execute remote ssh task

I have tried connecting to a remote "custom" Linux VM and copied my ssh public key to it, yet I'm unable to get Ansible to ping / connect to it, as I keep getting "remote host unreachable". I think it's because it's running a custom version of Linux and ssh auth might be behaving differently.
The following remote ssh command works:
# ssh -t user@server_ip "show vd"
password: mypassword
I'm trying to convert the above command to an Ansible playbook:
---
- hosts: server_ip
  gather_facts: no
  remote_user: root
  tasks:
    - name: check device status
      shell: ssh -i server_ip "show vd"
      register: output
    - expect:
        command: sh "{{ output.stdout }}"
        responses:
          (?i)password: user
I'm unable to get it to work, and I'm not sure if this is the right way of doing it; your input is highly appreciated.
Whenever you have ssh connection problems you should add the -vvvv parameter to your ansible-playbook command line. That way it will give detailed information about the ssh connection and errors.
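For example (inventory and playbook names are placeholders here):
ansible-playbook -i hosts playbook.yml -vvvv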

How to copy file from host2 to host3 assuming ansible was run on host1

I have Ansible installed on host1. There is a file on host2 which I need to copy to host3 with Ansible. I am using RHEL.
I have the following YAML running on host2, but it's getting stuck at the copy file task.
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
              regexp=^#?AllowAgentForwarding
              line="AllowAgentForwarding yes"
              follow=yes
              backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "RHEL: Restart sshd"
  shell: /etc/init.d/sshd restart
  sudo: yes
  when:
    - changed_sshd_config | changed

- name: Copy file from host2 to host3
  shell: rsync -r -v -e ssh /root/samplefile.gz root@host3:/root
Can anyone explain to me what is missing here? If you can, please provide detailed steps mentioning the correct hosts.
The problem most likely is that you cannot log in from host2 to host3, and Ansible is hanging at this task because rsync is waiting for you to enter the ssh password. Are you able to log in manually from host2 to host3?
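A quick way to verify this manually from host1, assuming you intend to rely on agent forwarding (-A forwards your ssh agent for that session):
ssh -A root@host2
rsync -r -v -e ssh /root/samplefile.gz root@host3:/root
If that rsync prompts for a password, the Ansible task will hang in exactly the same way.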
I answered the same question before in How to communicate between two remote machines through Ansible (can't flag as duplicate because no answer was accepted...)
The following is a slightly modified version of the linked answer. The tasks were written for CentOS, so it should work on RHEL.
For this to work, you will either need to have your private ssh key deployed to host2, or, preferably, enable ssh agent forwarding, for example in your .ssh/config:
Host host2
  ForwardAgent yes
Additionally sshd on host2 would need to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
              regexp=^#?AllowAgentForwarding
              line="AllowAgentForwarding yes"
              follow=yes
              backup=yes
  sudo: yes
  register: changed_sshd_config

- name: Restart sshd
  shell: systemctl restart sshd.service
  sudo: yes
  when: changed_sshd_config | changed
You might need to configure this in a separate playbook, because Ansible might keep an open ssh connection to the host, and after activating agent forwarding you will probably need to re-connect.
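As a side note, the same host2-to-host3 copy can also be written with the synchronize module (Ansible's wrapper around rsync) combined with delegate_to, which avoids the raw shell call. It still requires working ssh authentication from host2 to host3, e.g. via the agent forwarding described above. A rough sketch:
- hosts: host3
  tasks:
    - name: Copy file from host2 to host3
      synchronize:
        src: /root/samplefile.gz
        dest: /root/
      delegate_to: host2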

Create and use group without restart

I have a task that creates a group.
- name: add user to docker group
  user: name=USERNAME groups=docker append=yes
  sudo: true
In another playbook I need to run a command that relies on the new group permissions. Unfortunately, this does not work because the new group is only picked up after I log out and log in again.
I have tried some stuff like:
su -l USERNAME
or
newgrp docker; newgrp
But nothing worked. Is there any chance to force Ansible to reconnect to the host and do a re-login? A reboot would be the last option.
You can use an (ansible.builtin.)meta: reset_connection task:
- name: add user to docker group
  ansible.builtin.user:
    name: USERNAME
    groups: docker
    append: yes

- name: reset ssh connection to allow user changes to affect ansible user
  ansible.builtin.meta: reset_connection
Note that you cannot use a conditional to run the reset only when the ansible.builtin.user task reported a change, since the "reset_connection task does not support when conditional", see #27565.
The reset_connection meta task was added in Ansible 2.3, but remained a bit buggy up to (but not including) v2.5.8, see #27520.
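A simple way to confirm the reset had the desired effect is to check the groups of the Ansible user in a follow-up task (purely illustrative):
- name: Show the groups the ansible user now belongs to
  ansible.builtin.command: id -nG
  register: my_groups
  changed_when: false

- ansible.builtin.debug:
    var: my_groups.stdout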
For Ansible 2 I created a Galaxy role: https://galaxy.ansible.com/udondan/ssh-reconnect/
Usage:
- name: add user to docker group
  user: name=USERNAME groups=docker append=yes
  sudo: true
  notify:
    - Kill all ssh connections
If you immediately need the new group you can either call the module yourself:
- name: Kill own ssh connections
  ssh-reconnect: all=True
Or alternatively fire the handlers when required
- meta: flush_handlers
For Ansible < 1.9 see this answer:
Do you use ssh control sockets? If you have ControlMaster activated in your ssh config, this would explain the behavior. Ansible re-connects for every task, so the user should have the correct role assigned on the next task. Though when you use ssh session sharing, Ansible would of course re-use the open ssh connection and therefore result in not logging in again.
You can deactivate the session sharing in your ansible.cfg:
[ssh_connection]
ssh_args= -S "none"
Since session sharing is a good thing to speed up Ansible plays, there is an alternative. Run a task which kills all ssh connections for your current user.
- name: add user to docker group
  user: name=USERNAME groups=docker append=yes
  sudo: true
  register: user_task

- name: Kill open ssh sessions
  shell: "ps -ef | grep sshd | grep `whoami` | awk '{print \"kill -9\", $2}' | sh"
  when: user_task | changed
  failed_when: false
This will force Ansible to re-login at the next task.
Another option I've found would be to use async: to queue up killing sshd in the background, without relying on an open connection. It feels incredibly hacky, but it seems to work reliably in both Ansible 1.9 and 2.0.
- name: Kill SSH
  shell: sleep 1; pkill -u {{ ansible_ssh_user }} sshd
  async: 3
  poll: 2
Pause for 1 second, then kill sshd. Start checking for the job to be finished after 2 seconds, maximum allowed time is 3 seconds. In my limited testing, it seems to solve the problem of refreshing the current user's groups with only a minimal delay.
Try removing the control socket folder during the play; it works on my side (I don't know if it's the cleanest solution). Oddly, meta: reset_connection is not working with Ansible 2.4.
- name: reset ssh connection
  local_action:
    module: file
    path: "~/.ansible/cp"
    state: absent
