I'm trying to create a playbook that has to complete the following tasks:
retrieve hostnames and releases from a file (file)
connect to those hostnames one by one
retrieve the contents of another file (another_file) on each host, which will give us the environment (dev, qa, prod)
So my first task is to retrieve the names of the hosts I need to connect to:
- name: retrieve nodes
  shell: cat file | grep ";;" | grep foo | grep -v "bar" | sort | uniq
  register: nodes

- name: adding nodes
  add_host:
    name: "{{ item }}"
    groups: "servers"
  with_items: "{{ nodes.stdout_lines }}"
The next play connects to the hosts:
- hosts: "{{ nodes }}"
  become: yes
  become_user: user1
  become_method: sudo
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing

    - debug: msg="{{ thing['content'] | b64decode }}"
But it doesn't work; when I add verbosity I still see the task being executed as the user running the playbook (local_user):
<node> ESTABLISH SSH CONNECTION FOR USER: local_user
What am I doing wrong?
Ansible version is 2.7.1 on RHEL 6.1.
UPDATE
I've also tried remote_user: user1:
- hosts: "{{ nodes }}"
  remote_user: user1
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing

    - debug: msg="{{ thing['content'] | b64decode }}"
No luck so far; same error.
You need to set remote_user, which controls which username is used when connecting to the server. Only after the connection is established is become_user applied.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#hosts-and-users
Using become_user is like running the sudo command on the remote machine: you want to connect to your machines as a specific user and, once the connection is established, issue the commands with sudo.
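For example, something along these lines (a sketch adapted from the play in your question; it assumes the servers group created by your add_host task and that user1 is the account you want to connect as):

- hosts: servers
  remote_user: user1      # user used for the SSH connection itself
  become: yes             # escalate after connecting, if reading the file requires it
  become_method: sudo
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing

    - debug: msg="{{ thing['content'] | b64decode }}"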
This question has some answers that should help you.
I have a task based on a shell command that needs to run on the local computer. As part of the command, I need to pass the IP addresses of some of the groups in my inventory file.
Inventory file:
[server1]
130.1.1.1
130.1.1.2
[server2]
130.1.1.3
130.1.1.4
[server3]
130.1.1.5
130.1.1.6
I need to run the following command from the local computer against the IPs that are part of the server2 and server3 groups:
ssh-copy-id user@<IP>
# <IP> should be 130.1.1.3, 130.1.1.4, 130.1.1.5, 130.1.1.6
Playbook (where I'm missing the IP part):
- hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: Generate ssh key for root user
      shell: "ssh-copy-id user@{{ item }}"
      run_once: True
      with_items:
        - server2 group
        - server3 group
In a nutshell:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      shell: "ssh-copy-id user@{{ inventory_hostname }}"
      delegate_to: localhost
Note that I kept your exact shell command, which is actually bad practice since there is a dedicated Ansible module to manage that. So this could be translated to something like:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      authorized_key:
        user: user
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
Our security group asked for all the information we gather from the hosts we manage with Ansible Tower. I want to run the setup command and put the output into a file in a folder I can run ansible-cmdb against. I need to do this in a playbook because we have disabled root login on the hosts and only allow public/private key authentication for the Tower user. The private key is stored in the database, so I cannot run the setup command from the CLI and impersonate the Tower user.
EDIT: I am adding my code so it can be tested elsewhere.
gather_facts.yml
---
- hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Check for temporary dir make it if it does not exist
      file:
        path: /tmp/ansible
        state: directory
        mode: 0755

    - name: Gather Facts into a file
      copy:
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
        dest: /tmp/ansible/{{ inventory_hostname }}
cmdb_gather.yml
---
- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: Fetch fact gather files
      fetch:
        src: /tmp/ansible/{{ inventory_hostname }}
        dest: /depot/out/
        flat: yes
CLI:
ansible -i devinventory -m setup --tree out/ all
This would basically look like this (written on the spot, not tested):
- name: make the equivalent of "ansible somehosts -m setup --tree /some/dir"
  hosts: my_hosts
  vars:
    treebase: /path/to/tree/base
  tasks:
    - name: Make sure we have a folder for tree base
      file:
        path: "{{ treebase }}"
        state: directory
      delegate_to: localhost
      run_once: true

    - name: Dump facts host by host in our treebase
      copy:
        dest: "{{ treebase }}/{{ inventory_hostname }}"
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
      delegate_to: localhost
OK, I figured it out. While @Zeitounator had a correct answer, I had the wrong idea. In checking the documentation for ansible-cmdb I noticed that the application can use the fact cache. When I included a custom ansible.cfg in the project folder and added:
[defaults]
fact_caching=jsonfile
fact_caching_connection = /depot/out
ansible-cmdb was able to parse the output correctly using:
ansible-cmdb -f /depot/out > ansible_hosts.html
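With that config in place, a play as minimal as this should be enough to populate the cache before running ansible-cmdb (a sketch, not tested here; gathering facts is what makes the jsonfile cache plugin write one JSON file per host into /depot/out):

---
- hosts: all
  gather_facts: true
  become: true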
This is the code of my Ansible script:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  ansible_become_pass: "{{ pass }}"
  tasks:
    - name: Creates directory to keep files on the server
      file: path=/home/{{ user }}/fabric_shell state=directory

    - name: Move sh file to remote
      copy:
        src: /home/pankaj/my_ansible_scripts/normal_script/installation/install.sh
        dest: /home/{{ user }}/fabric_shell/install.sh

    - name: Execute the script
      command: sh /home/{{ user }}/fabric_shell/install.sh
      become: yes
I am running the Ansible playbook using the command:
ansible-playbook send_run_shell.yml --extra-vars "user=sakshi host=192.168.0.238 pass=Welcome01"
But I don't know why I am getting this error:
ERROR! 'ansible_become_pass' is not a valid attribute for a Play
The error appears to have been in '/home/pankaj/go/src/shell_code/send_run_shell.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- hosts: "{{ host }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
Please advise what I am doing wrong.
Thanks in advance.
ansible_become_pass is a connection parameter which you can set as a variable:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  vars:
    ansible_become_pass: "{{ pass }}"
  tasks:
    # ...
That said, you can move remote_user to a variable too (refer to the full list of connection parameters), save it in a separate host_vars or group_vars file, and encrypt it with Ansible Vault.
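For example (a sketch with hypothetical file contents, reusing the values from your command line; the file name must match the inventory host), a host_vars/192.168.0.238.yml could hold the connection parameters:

---
ansible_user: sakshi
ansible_become_pass: Welcome01

and then be encrypted in place with ansible-vault encrypt host_vars/192.168.0.238.yml, so the connection user and become password come from that file instead of sitting on the command line in plain text.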
Take a look at this thread and the Ansible docs. I propose using become_user in this way:
- hosts: all
  tasks:
    - include_tasks: task/java_tomcat_install.yml
      when: activity == 'Install'
      become: yes
      become_user: "{{ aplication_user }}"
Try not to use pass=Welcome01:
When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged but password authentication can also be used where needed by supplying the option --ask-pass. If using sudo features and when sudo requires a password, also supply --ask-become-pass (previously --ask-sudo-pass which has been deprecated).
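In other words, you could drop pass=Welcome01 from the extra vars (and the ansible_become_pass line that references it) and let Ansible prompt for the sudo password instead, something like:

ansible-playbook send_run_shell.yml --extra-vars "user=sakshi host=192.168.0.238" --ask-become-pass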
I want to run an Ansible playbook on multiple hosts and register the output to a variable. Using this variable, I then want to copy the output to a single file. The issue is that in the end the file contains the output of only one host. How can I add the output of all the hosts to the file one after the other? I don't want to use serial: 1 as it slows down execution considerably when there are many hosts.
-
  hosts: all
  remote_user: cisco
  connection: local
  gather_facts: no
  vars_files:
    - group_vars/passwords.yml
  tasks:
    - name: Show command collection
      ntc_show_command:
        connection: ssh
        template_dir: /ntc-ansible/ntc-templates/templates
        platform: cisco_ios
        host: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
        command: "{{commands}}"
      register: result

    - local_action:
        copy content="{{result.response}}" dest='/home/user/show_cmd_ouput.txt'
The result variable will be registered as a fact on each host the ntc_show_command task ran on, so you should access the values through the hostvars dictionary.
- local_action:
    module: copy
    content: "{{ groups['all'] | map('extract', hostvars, 'result') | map(attribute='response') | list }}"
    dest: /home/user/show_cmd_ouput.txt
  run_once: true
You also need run_once because otherwise the action would still run as many times as there are hosts in the group.
My Ansible playbook deploys to both database and webservers and I need to use some shared variables between them. The answer from this question almost gives me what I need:
---
- hosts: all
  tasks:
    - set_fact: my_global_var='hello'

- hosts: db
  tasks:
    - debug: msg={{my_global_var}}

- hosts: web
  tasks:
    - debug: msg={{my_global_var}}
However, in my case the variable is a password that is generated randomly by the playbook on each run and then has to be shared:
---
- hosts: all
  tasks:
    - name: Generate new password
      shell: "tr -dc _[:alnum:] < /dev/urandom | head -c${1:-20}"
      register: new_password

    - name: Set password as fact
      set_fact:
        my_global_var: "{{ new_password.stdout }}"

- hosts: db
  tasks:
    - debug: msg={{my_global_var}}

- hosts: web
  tasks:
    - debug: msg={{my_global_var}}
The above example doesn't work, as the password is re-generated and ends up completely different for each host in the all group (unless you coincidentally use the same machine/hostname for your db and web servers).
Ideally I don't want someone to have to remember to pass a good random password on the command line using --extra-vars; it should be generated and handled by the playbook.
Is there any suggested mechanism in Ansible for creating variables within a playbook and having it accessible to all hosts within that playbook?
You may want to generate the password on localhost and then copy it to every other host:
---
- hosts: localhost
  tasks:
    - name: Generate new password
      shell: "tr -dc _[:alnum:] < /dev/urandom | head -c${1:-20}"
      register: new_password

- hosts: all
  tasks:
    - name: Set password as fact
      set_fact:
        my_global_var: "{{ hostvars['localhost'].new_password.stdout }}"

- hosts: db
  tasks:
    - debug: msg={{my_global_var}}

- hosts: web
  tasks:
    - debug: msg={{my_global_var}}
I know this is an old question, but I settled on an alternative method that combines the two answers provided here and this issue, using an implicit localhost reference and doing everything within the same play. I think it's a bit more elegant. Tested with 2.8.4.
This is the working solution in my context, where I wanted a common timestamped backup directory across all my hosts, for later restore:
---
tasks:
  - name: Set local fact for the universal backup string
    set_fact:
      thisHostTimestamp: "{{ ansible_date_time.iso8601 }}"
    delegate_to: localhost
    delegate_facts: true

  - name: Distribute backup datestring to all hosts in group
    set_fact:
      backupsTimeString: "{{ hostvars['localhost']['thisHostTimestamp'] }}"
I believe this should translate to the OP's original example like this, but I have not tested it:
---
- hosts: all
  tasks:
    - name: Generate new password
      shell: "tr -dc _[:alnum:] < /dev/urandom | head -c${1:-20}"
      register: new_password
      delegate_to: localhost
      delegate_facts: true

    - name: Set password as fact
      set_fact:
        my_global_var: "{{ hostvars['localhost'].new_password.stdout }}"