Ansible 2.4.2 - Can I use delegate_to with password?

I want to execute a task from my Ansible role on another host. Can I do it with delegate_to, or should I use another way?
- name: Create user on remote host
  user:
    name: "{{ item.1 }}"
    state: present
  delegate_to: "user@{{ item.0.host }}"
  run_once: true
  become: True
  with_subelements:
    - "{{ user_ssh_keys_slave_hosts }}"
    - users
  when: item.1 is defined
Goal: I want to create a user on a remote host using login:password and the SSH argument '-o StrictHostKeyChecking=no'.
In my inventory file I use the following vars:
ansible_connection: ssh
ansible_user: user
ansible_ssh_pass: pass
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

SSH is configured via ~/.ssh/config. There you can say which authentication type should be used, and there you should put your SSH options. The following defines that the connection to a host should be made with a different user name:
Host item-0-host
    User user
    StrictHostKeyChecking no
    RSAAuthentication no
    HostName name-of-the-item-0-host
This way you have two host names: the original host name, which you can use to manage the host itself, and a new host name, which you can use for the user management.
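For illustration, a minimal sketch of how the alias could then be used from a task, assuming the ~/.ssh/config entry above is in place (the alias name item-0-host is hypothetical):

- name: Create user on remote host
  user:
    name: "{{ item.1 }}"
    state: present
  # the SSH alias already carries the user name and the SSH options,
  # so delegate_to only needs the alias itself
  delegate_to: item-0-host
  become: true

The connection user, StrictHostKeyChecking and the real host name all come from the SSH config, so nothing user-specific has to be embedded in delegate_to.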
I am not sure, but I think you cannot specify a user name in the value of the delegate_to option.
Btw: do you know the user module of Ansible?

Thanks for your answer. I resolved my question.
I set ansible_ssh_common_args='-o StrictHostKeyChecking=no' in the inventory file.
And
- name: Set authorized key on remote host
  authorized_key:
    user: "{{ item.1 }}"
    state: present
    key: "{{ user_ssh_public_key_fingerprint.stdout }}"
  delegate_to: "{{ item.0.host }}"
  run_once: true
  become: True
  with_subelements:
    - "{{ user_ssh_keys_slave_hosts }}"
    - users
  when: item.1 is defined
user_ssh_keys_slave_group: "{{ groups['slave_hosts'] | default(['']) }}"
user_ssh_keys_slave_hosts: "{%- set ssh_user = [] %}{%- for host in user_ssh_keys_slave_group %}
  {{- ssh_user.append(dict(host=host, ssh_user=hostvars[host]['slave_hosts'] | default([]))) }}
  {%- endfor %}
  {{- ssh_user -}}"
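For context, with_subelements with the subelement key users expects each entry of user_ssh_keys_slave_hosts to be a dict that contains a users list; item.0 is then the outer dict (providing item.0.host) and item.1 is one user name. A purely hypothetical structure (host and user names made up) would look like:

user_ssh_keys_slave_hosts:
  - host: slave-01
    users:
      - alice
      - bob
  - host: slave-02
    users:
      - carol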

Related

Ansible shell - How to connect to hosts with custom credentials

My goal is to execute a shell command on different hosts with credentials given directly from the playbook. Until now I have tried two things.
Attempt one:
- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
  args:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
Attempt 2:
- name: set default credentials
  set_fact:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"

- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
The username in the variable {{ cred_ios_r_user }} is 'User1', but when I look at the output from Ansible, it tells me it used the default SSH user named "SSH_User".
What do I need to change so that Ansible uses the given credentials?
The first thing you should check is the remote_user attribute of the task:
- name: DigitalOcean | Disallow root SSH access
  remote_user: root
  lineinfile: dest=/etc/ssh/sshd_config
              regexp="^PermitRootLogin"
              line="PermitRootLogin no"
              state=present
  notify: Restart ssh
The next thing is SSH keys. By default your connections use your own private key, but you can override that with extra args on the ansible-playbook command line, with variables in the inventory file, or with job vars:
vars:
  ansible_ssh_user: "root"
  ansible_ssh_private_key_file: "{{ role_path }}/files/ansible.key"
So, if you want, you can set custom keys that you have in vars or files:
- name: DigitalOcean | Add Pub key
  remote_user: root
  authorized_key:
    user: "{{ do_user }}"
    key: "{{ lookup('file', do_key_public) }}"
    state: present
You can find more information in my auto-droplet-digitalocean-role.
Your vars should be... vars available for your task, not arguments passed to the shell module. So in a nutshell:
- name: test
  vars:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  shell:
    cmd: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
Meanwhile, this looks like bad practice. You would normally use add_host to build an in-memory inventory with the correct hosts and credentials, and then run your task using the natural host-batch play loop. Something like:
- name: manage hosts
  hosts: localhost
  gather_facts: false

  tasks:
    - name: add hosts to inventory
      add_host:
        name: "{{ item.split(';')[0] }}"
        groups:
          - my_custom_group
        ansible_connection: network_cli
        ansible_network_os: ios
        ansible_user: "{{ cred_ios_r_user }}"
        ansible_password: "{{ cred_ios_r_pass }}"
      with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"

- name: run my command on all added hosts
  hosts: my_custom_group
  gather_facts: false

  tasks:
    - name: test command
      shell: whoami
Alternatively, you can create a full inventory from scratch from your CSV file and use it directly, or use/develop an inventory plugin that will read your CSV file directly (like in this example).
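As an illustration of the static-inventory option, a minimal sketch of a YAML inventory generated from deviceList.txt, assuming each line starts with the device name before the first semicolon (the group name, host names and credential values here are all made up):

# hypothetical inventory file, e.g. inventories/ios_devices.yml
all:
  children:
    ios_devices:
      hosts:
        router01:
        router02:
      vars:
        ansible_connection: network_cli
        ansible_network_os: ios
        ansible_user: User1
        ansible_password: ChangeMe123

In practice you would keep the credentials out of the inventory file itself, for example in group_vars protected by ansible-vault.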

How to get inventory_hostname and ansible_ssh_host info by running a playbook on localhost

I want to write the hostname and ssh_host of the hosts under a specific group in an inventory file to a file on my localhost.
I have tried this:
my_role
set_fact:
  hosts: "{{ hosts_list | default({}) | combine({inventory_hostname: ansible_ssh_host}) }}"
when: "'my_group' in group_names"
Playbook
become: no
gather_facts: no
roles:
- my_role
But now if I try to write this to a file in another task, it will create a file for each host in the inventory file.
Can you please suggest a better way to create a file containing a dictionary with inventory_hostname and ansible_ssh_host as key and value respectively, by running the playbook on localhost?
An option would be to use lineinfile and delegate_to localhost.
- hosts: test_01
  tasks:
    - set_fact:
        my_inventory_hostname: "{{ ansible_host }}"

    - tempfile:
      delegate_to: localhost
      register: result

    - lineinfile:
        path: "{{ result.path }}"
        line: "inventory_hostname: {{ my_inventory_hostname }}"
      delegate_to: localhost

    - debug:
        msg: "inventory_hostname: {{ ansible_host }}
          stored in {{ result.path }}"
Note: There is no need to quote the conditions. Conditions are expanded by default.
when: my_group in group_names
Thanks a lot for your answer, @Vladimir Botka.
I came up with another requirement, to write the hosts belonging to a certain group to a file. The code below can help with that (adding to @Vladimir's solution).
- hosts: all
  become: no
  gather_facts: no

  tasks:
    - set_fact:
        my_inventory_hostname: "{{ ansible_host }}"

    - lineinfile:
        path: temp/hosts
        line: "inventory_hostname: {{ my_inventory_hostname }}"
      when: inventory_hostname in groups['my_group']
      delegate_to: localhost

    - debug:
        msg: "inventory_hostname: {{ ansible_host }}"
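If a single file containing the whole dictionary (inventory_hostname as key, ansible_ssh_host as value) is really what is needed, a minimal sketch along the same lines could build the dict on localhost from hostvars and write it once. The output path is hypothetical, and ansible_host can be swapped for ansible_ssh_host depending on which variable the inventory defines:

- hosts: localhost
  gather_facts: no
  tasks:
    - name: build a dict of inventory_hostname -> ansible_host for my_group
      set_fact:
        host_map: "{{ host_map | default({}) | combine({item: hostvars[item].ansible_host}) }}"
      loop: "{{ groups['my_group'] }}"

    - name: write the dictionary to a file on localhost
      copy:
        content: "{{ host_map | to_nice_yaml }}"
        dest: temp/host_map.yml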

For each individual user, I want to use their /home/ directory for Ansible credentials

I'm working in a multi-user environment. I want to have some accountability of who ran which plays, so we can't have a common group_vars/all that contains Ansible login credentials. I also don't want to force my users to pass Ansible credentials as extra vars on the command line.
Ideally I'd like each user to have a specific file in their /home/user_name/ directory where they set their Ansible credentials (I'll use ansible_info.yml so we have something specific to talk about). This would give me security, as other users would not be able to access another user's home directory.
Within /home/user_name/, would I have to get each user to create an "all" file (as Ansible by default looks for an "all" file)? Or could I use a different name for this file (ansible_info.yml)? The contents of the all/ansible_info.yml file would be:
ansible_user: end_user_ssh_username
ansible_ssh_pass: end_user_ssh_username_password
ansible_become: yes
ansible_become_user: service_account_name
ansible_become_pass: service_account_name_password
To grab the user name of whoever SSH'ed onto the server, I can use any of the three commands below:
command: whoami
{{ lookup('env','USER') }}
command: id -un
After one of the above commands has run, I register the result:
register: logged_in_user
Then I'm a little stuck on what to do next for supplying the rest of the ansible credentials.
Do I use vars_files?
vars_files:
  - /home/{{ logged_in_user }}/ansible_info.yml
Or in the play, do I need to state all of the Ansible credentials as variables that refer to the /home/{{ logged_in_user }}/ansible_info.yml file?
Or, using "lookup", do I create an individual file for each Ansible credential that I'm looking for, as shown below:
vars:
  ansible_user: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_user.yml') }}"
  ansible_ssh_pass: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_ssh_pass') }}"
  ansible_become_user: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_become_user.yml') }}"
  ansible_become_pass: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_become_pass') }}"
Here is what I wrote to get this to pull data from my files. Next I'll see if I can authenticate with it.
Basically what was messing me up were the extra characters from "stdout", so I just made an extra variable to put the results into, and it worked.
- hosts: localhost
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: whoami_results

    - name: show the output of the home_directory results
      vars:
        whoami_results_stdout: "{{ whoami_results.stdout }}"
        home_directory: /home/{{ whoami_results_stdout }}
        ansible_user: "{{ lookup('file', '{{ home_directory }}/ansible_user.yml') }}"
        ansible_ssh_pass: "{{ lookup('file', '{{ home_directory }}/ansible_ssh_pass.yml') }}"
        ansible_become: "{{ lookup('file', '{{ home_directory }}/ansible_become.yml') }}"
        ansible_become_user: "{{ lookup('file', '{{ home_directory }}/ansible_become_user.yml') }}"
        ansible_become_pass: "{{ lookup('file', '{{ home_directory }}/ansible_become_pass.yml') }}"
      debug:
        var: home_directory
I tweaked this again and found a much better solution. I made the contents of group_vars/all the following:
ansible_user: "{{ lookup('file', '~/ansible_user') }}"
ansible_ssh_pass: "{{ lookup('file', '~/ansible_ssh_pass') }}"
ansible_become: "{{ lookup('file', '~/ansible_become') }}"
ansible_become_user: "{{ lookup('file', '~/ansible_become_user') }}"
ansible_become_pass: "{{ lookup('file', '~/ansible_become_pass') }}"
Then each end user just keeps the corresponding files in their home directory. The end users will need to update these files when they have to update their Active Directory password, though (a minor issue). The files in their home directories also work with ansible-vault.
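For illustration, a hypothetical set of those per-user files (names and values made up). Each file contains just the raw value on a single line, since lookup('file', ...) returns the file's contents verbatim rather than parsing YAML:

# contents of /home/jsmith/ansible_user
jsmith

# contents of /home/jsmith/ansible_ssh_pass
Sup3rS3cret!

# contents of /home/jsmith/ansible_become
yes

# contents of /home/jsmith/ansible_become_user
svc_deploy

# contents of /home/jsmith/ansible_become_pass
S3rviceS3cret!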

How to switch Ansible playbook to another host when calling second playbook

I have two playbooks. My first playbook iterates over the list of ESXi servers getting the list of all VMs, and then passes that list to the second playbook, which should iterate over the IPs of those VMs. Instead it is still trying to execute on the last ESXi server. I need to switch hosts to the VM IP that I'm currently passing to the second playbook, but I don't know how. Anybody?
First playbook:
- name: get VM list from ESXi
  hosts: all
  tasks:
    - name: get facts
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts

    - name: Debugging data
      debug:
        msg: "IP of {{ item.key }} is {{ item.value.ip_address }} and is {{ item.value.power_state }}"
      with_dict: "{{ esx_facts.virtual_machines }}"

    - name: Passing data to include file
      include: includeFile.yml ip_address="{{ item.value.ip_address }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
My second playbook:
- name: <<Check the IP received
  debug:
    msg: "Received IP: {{ ip_address }}"

- name: <<Get custom facts
  vmware_vm_facts:
    hostname: "{{ ip_address }}"
    username: root
    password: passw
    validate_certs: False
  delegate_to: localhost
  register: custom_facts
I do receive the correct VM ip_address, but vmware_vm_facts is still trying to run on the ESXi server instead...
If you can't afford to set up a dynamic inventory for the VMs, your playbook should have two plays: one for collecting the VMs' IPs and another for the tasks on those VMs, like this (pseudocode):
---
- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - vmware_vm_facts:
        ... params_here ...
      register: vmfacts

    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"

- hosts: myvms
  tasks:
    - apt:
        name: "*"
        state: latest
Within the first play we collect facts from each hypervisor and populate the myvms group of the in-memory inventory. Within the second play we run the apt module for every host in the myvms group.
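Filling in the pseudocode with the connection details already used in the question, the first play could look roughly like the sketch below (the group name hypervisors is an assumption, and the credentials vars are the ones from the question):

- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - name: collect the VM list from each ESXi host
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
        validate_certs: False
      register: vmfacts

    - name: add every discovered VM to an in-memory group
      add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"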

Ansible authorized key module unable to read public key

I'm trying to use Ansible (version 2.1.2.0) to create named SSH access across our network of servers. Running Ansible from a jump box, I'm creating a set of users and generating a private/public key pair for each with the user module:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: usergroup
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items: "{{ list_of_usernames }}"
From there I would like to copy each public key across to the authorized_keys file on every remote server. I'm using the authorized_key module to fetch and copy the user's generated public key as the authorized key on the remote servers:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
What I'm finding is that the lookup fails, as it is unable to access the .pub file within the user's .ssh folder. It runs happily, however, if I run Ansible as the root user (which for me is not an option I'm willing to take).
fatal: [<ipAddress>]: FAILED! => {"failed": true, "msg": "{{ lookup('file', '/home/testuser/.ssh/id_rsa.pub') }}: the file_name '/home/testuser/.ssh/id_rsa.pub' does not exist, or is not readable"}
Is there a better way to do this other than running ansible as root?
Edit: Sorry I had forgotten to mention that I'm running with become: yes
Edit #2: Below is a condensed playbook of what I'm trying to run. In this instance it adds the authorized_key to localhost, but it shows the error I'm facing:
---
- hosts: all
  become: yes
  vars:
    active_users: ['user1', 'user2']

  tasks:
    - group:
        name: "users"

    - user:
        name: "{{ item }}"
        shell: /bin/bash
        group: users
        generate_ssh_key: yes
      with_items: "{{ active_users }}"

    - authorized_key:
        user: "{{ item }}"
        state: present
        key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
      become: yes
      become_user: "{{ item }}"
      with_items:
        - "{{ active_users }}"
OK, the problem is with the lookup plugin.
It is executed on the Ansible control host with the permissions of the user that runs ansible-playbook, and become: yes doesn't elevate plugins' permissions.
To overcome this, capture the result of the user task and use its output in further tasks:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: docker
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items:
    - ansible5
    - ansible6
  register: new_users
  become: yes

- debug: msg="user {{ item.item }} pubkey {{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
Although you will need to delegate some of these tasks, the idea is the same.
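For example, when the user task and the authorized_key task target the same hosts (as in the condensed playbook above), the registered results can feed authorized_key directly, so no file lookup (and therefore no permission problem) is needed. A minimal sketch:

- authorized_key:
    user: "{{ item.item }}"
    state: present
    # public key as returned by the user task, no lookup on disk required
    key: "{{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
  become: yes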
On most Linux/Unix machines only two accounts have access to /home/testuser/.ssh/id_rsa.pub: root and testuser. So if you would like to read those files, you need to be either root or testuser.
You can use privilege escalation using become. You said you don't want to run Ansible as root, which is perfectly fine. You haven't mentioned whether the user you are running Ansible with has sudo access or not. If it does, and using sudo is fine for you, then you can simply do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
By default this will run these commands as root using sudo, while keeping the rest of the tasks run by the non-root user. This is also the preferred way of doing runs with Ansible.
If you don't want this, and you want to keep your user as root-free as possible, then you need to run the command as testuser (and any other user you want to modify). This means you still need to patch up the sudoers file to allow your Ansible user to switch to any of these users (which can also open up a few security issues - albeit not as many as being able to run anything as root), after which you can do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
  become_user: "{{ item }}"
There are some caveats to this approach, so you might want to read the full Privilege Escalation page in Ansible's documentation, especially the section on Unprivileged Users, before you try this.
