Ansible: execute remote ssh task

I have tried connecting to a remote "custom" Linux VM and copied my SSH public key to it, yet I'm unable to get Ansible to ping / connect to it; I keep getting "remote host unreachable". I think it's because it's running a custom version of Linux and SSH auth might be behaving differently.
The following remote ssh command works
# ssh -t user@server_ip "show vd"
password: mypassword
I'm trying to convert this above command to an Ansible playbook
---
- hosts: server_ip
  gather_facts: no
  remote_user: root
  tasks:
    - name: check device status
      shell: ssh -i server_ip "show vd"
      register: output
    - expect:
        command: sh "{{ output.stdout }}"
        responses:
          (?i)password: user
I'm unable to get it to work, and I'm not sure if this is the right way of doing it; your input is highly appreciated.

Whenever you have SSH connection problems, you should add the -vvvv parameter to your ansible-playbook command line. That way it will print detailed information about the SSH connection and any errors.
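For example (the inventory and playbook names here are placeholders):
ansible-playbook -vvvv -i inventory my-playbook.yml
At -vvvv, Ansible enables connection-level debugging, which shows the exact ssh command it runs and the full ssh error output.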

Related

Using Ansible Playbook run an interactive script and pick the response correctly

I need to run an interactive script to get the application client installed on my servers using an Ansible playbook. During the installation it asks for an IP address, port number, server name, username, and password.
- name: Install application client
  hosts: all
  tasks:
    # Run the script
    - name: Execute the user interactive script
      command: /home/ansible/install.sh
The script then prompts for the following responses:
Enter IP: **1.2.3.4**
Enter Port: **440**
Enter Server Name: **AppServerName**
Connectivity Succeeded
Enter Username: **UserName**
Enter Password: **xxxx**
I would like to know how we can predefine these responses in the playbook itself and have them supplied when it prompts for them.
Thanks,
Jean Thomas
Adding this as an answer. As the shell script you are trying to run "expects" some responses, we need to supply those responses using Linux expect.
Let's say we have a simple shell script, test.sh, like the one below. It reads an IP address and a port, then runs the nc command:
#!/bin/bash
# Read an IP address and a port from stdin, then test connectivity with nc
echo "IP address:"
read ip_addr
echo "Port:"
read port
nc -vz $ip_addr $port
To run this script from Ansible with expect, we would have a simple playbook like the one below:
- hosts: localhost
  vars:
    send_ip_addr: "1.2.3.4"
    send_port: "22"
  tasks:
    - shell: |
        spawn ./test.sh
        expect "IP address:"
        send -- "{{ send_ip_addr }}\n"
        expect "Port:"
        send -- "{{ send_port }}\n"
        expect eof
      args:
        executable: /usr/bin/expect
Linux expect is a scripting language in its own right, and what we have above is a simple .exp script embedded in the Ansible shell task. I think we can only set the timeout at the beginning. See the manpage for all supported options.
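For example, to give each expect up to 30 seconds (the value here is an arbitrary choice), the timeout line goes at the top of the embedded script, before spawn:
set timeout 30
spawn ./test.sh
# ... the rest of the expect/send lines as above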
There is also a useful autoexpect command that will create a script.exp script for us. Example:
autoexpect test.sh

How to run playbook on my application server using sudo?

I am writing a playbook for my application Tomcat node. It will copy, deploy, and stop/start Tomcat.
I have a hop box serverA, another hop box serverB, and a Tomcat node tomcatC. Manually, using PuTTY, I use the steps below to get onto the Tomcat node:
Login to serverA using userId1
ssh to serverB using userId2
ssh to tomcatC using userId1
sudo to tomcat user.
Also, I am able to ssh directly to tomcatC from serverA, and my Ansible master is also serverA, which is where I run the playbooks.
How do I run my playbook for this? Below is the playbook I am using as of now, but it's not working.
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass
And my-V3.yml looks like below:
- hosts: '{{ target_env }}'
  # serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
I am now getting this error:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have shell scripts, called by Ansible from your jump box, run the commands you need. Your problem looks like you don't have the correct ssh key applied, though.
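That said, the "Missing become password" error above indicates Ansible needs a password for the privilege escalation itself; a likely fix (an assumption based on the error text, not part of the original answer) is to add --ask-become-pass to the command line:
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass --ask-become-pass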

Running a command in an ansible-playbook from the ansible host using variables from the current ansible process

Having hit a brick wall troubleshooting why one shell script hangs when I try to run it via Ansible on the remote host, I've discovered that it executes successfully if I run it in an ssh session from the Ansible host.
I now want to build that into a playbook as follows:
- name: Run script
  local_action: shell ssh $TARGET "/home/ansibler/script.sh"
I just need to know how to access the $TARGET that this playbook is running on from the selected/limited inventory so I can concatenate it into that local_action.
Is there an easy way to access that?
Try with ansible_host:
- name: Run script
  local_action: 'shell ssh {{ ansible_host }} "/home/ansibler/script.sh"'
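On Ansible 2.0 and later, an equivalent and arguably more readable form (a sketch, assuming the controller can ssh to the target as the current user) delegates the task to localhost:
- name: Run script
  shell: ssh {{ ansible_host }} "/home/ansibler/script.sh"
  delegate_to: localhost
On older inventories the variable may be named ansible_ssh_host instead of ansible_host.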

Ansible passwordless access between nodes

The goal is to give every node passwordless SSH access to all other nodes.
For each node:
Get public ssh key
Add that key to authorized_keys files of all other nodes
Below is what I tried, but it's not working as expected.
- name: Get ssh public key from all the nodes for some_user user
  shell: cat ~/.ssh/id_rsa.pub
  register: ssh_pub_key
  become: yes
  become_user: some_user
  changed_when: "ssh_pub_key.rc != 0"
  always_run: yes

- set_fact:
    auth_keys: "{{ ssh_pub_key.stdout | join() }}"

- debug: var=auth_keys

- name: Add public key to all other nodes for some_user user
  authorized_key:
    user: some_user
    key: "{{ ssh_pub_key.stdout }}"
There is no need to collect every node's ssh key and distribute them all to every other node; that is bad practice.
Use ssh agent forwarding instead.
All you need is one key (create it on a central server or use an existing one) and simply push its public half (via Ansible) to your nodes.
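A minimal sketch of that push, assuming the key pair already exists on the control node as ~/.ssh/id_rsa / ~/.ssh/id_rsa.pub (the path and user name are assumptions):
- hosts: all
  tasks:
    - name: Push the control node's public key to every node
      authorized_key:
        user: some_user
        state: present
        # lookup() reads the file on the control node, not on the target
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"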
Clone the repository to your ansible-enabled host:
git clone https://github.com/ilias-sp/ansible-setup-passwordless-ssh.git
Alternatively, you can download the ansible_setup_passwordless_ssh.yml and hosts from this repository.
Then run:
ansible-playbook -i hosts ansible_setup_passwordless_ssh.yml
By running this playbook, these things happen to your hosts:
Localhost: An SSH key is generated and placed under .ssh folder. Its file name is configurable, default is ansible_rsa. This SSH key is added to the ~/.ssh/config file for SSH client to utilize it when connecting to remote hosts.
Remote hosts: The generated SSH key is propagated to the list of remote hosts you configured in the hosts inventory file and added to their ~/.ssh/authorized_keys files. This is done using the ssh-copy-id Linux utility, which is meant for exactly this job; the sshpass Linux utility is used so the script can run without prompting for the user password.
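Under the hood, that propagation step amounts to something like the following for each remote host (a sketch; the key path, user, and hostname are assumptions):
# Non-interactively install the generated public key on a remote host
sshpass -p 'remote_password' ssh-copy-id -i ~/.ssh/ansible_rsa.pub user@remote-host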
Reference : https://github.com/ilias-sp/ansible-setup-passwordless-ssh

How to copy file from host2 to host3 assuming ansible was run on host1

I have Ansible installed on host1. There is a file on host2 which I need to copy to host3 using Ansible. I am using RHEL.
I have the following YAML running on host2, but it gets stuck at the copy-file task.
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
    regexp=^#?AllowAgentForwarding
    line="AllowAgentForwarding yes"
    follow=yes
    backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "RHEL: Restart sshd"
  shell: /etc/init.d/sshd restart
  sudo: yes
  when:
    - changed_sshd_config | changed

- name: Copy file from host2 to host3
  shell: rsync -r -v -e ssh /root/samplefile.gz root@host3:/root
Can anyone explain to me what is missing here? If you can, please provide detailed steps mentioning the correct hosts.
The problem most likely is that you cannot log in from host2 to host3, and Ansible is hanging at this task because rsync is waiting for you to enter the ssh password. Are you able to log in manually from host2 to host3?
I answered the same question before in How communicate two remote machine through ansible (can't flag as duplicate because no answer was accepted...)
The following is a slightly modified version of the linked answer. The tasks were written for CentOS, so it should work on RHEL.
For this to work, you will either need to have your private ssh key deployed to host2, or, preferably, enable ssh agent forwarding, for example in your .ssh/config:
Host host2
    ForwardAgent yes
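You can verify that forwarding works before involving Ansible (assuming your key is already loaded into your local agent with ssh-add):
ssh-add -l               # locally: the agent should list your key
ssh host2 'ssh-add -l'   # on host2: the forwarded agent should list the same key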
Additionally sshd on host2 would need to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile: dest=/etc/ssh/sshd_config
    regexp=^#?AllowAgentForwarding
    line="AllowAgentForwarding yes"
    follow=yes
    backup=yes
  sudo: yes
  register: changed_sshd_config

- name: "Restart sshd"
  shell: systemctl restart sshd.service
  sudo: yes
  when: changed_sshd_config | changed
You might need to configure this in a separate playbook, because Ansible may keep an open ssh connection to the host, and after activating agent forwarding you will probably need to reconnect.
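With forwarding in place, the copy task from the question should then work unchanged (a sketch, assuming the key loaded in your local agent is authorized for root on host3):
- name: Copy file from host2 to host3
  shell: rsync -r -v -e ssh /root/samplefile.gz root@host3:/root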
