I want to encrypt my host credentials in a central secrets.yml file.
How can I tell Ansible to use these variables?
I tried with this setup:
host_vars/test.yml
ansible_user: {{ test_user }}
ansible_become_pass: {{ test_pass }}
secrets.yml
# Credentials Test Server #
test_user: user
test_pass: password
inventory.yml
all:
  children:
    test:
      hosts:
        10.10.10.10:
playbook.yml
---
- name: Update Server
  hosts: test
  become: yes
  vars_files:
    - secrets.yml
  tasks:
    - name: Update
      ansible.builtin.apt:
        update_cache: yes
For execution I use this command:
ansible-playbook -i inventory.yml secure_linux.yml --ask-vault-pass
During execution I get this Error Message:
fatal: [10.10.10.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: root@10.10.10.10: Permission denied (publickey,password).", "unreachable": true}
For those credentials to be used by all hosts, use the group_vars/all directory. So you will have the file group_vars/all/secrets.yml, which you will encrypt with ansible-vault.
ansible_user: user
ansible_password: password
You do not need a host_vars file.
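A minimal sketch of that layout, assuming the playbook sits next to the group_vars directory; the first two variables are the ones from this answer, and ansible_become_pass is carried over from the question:
group_vars/all/secrets.yml
ansible_user: user
ansible_password: password
ansible_become_pass: password
Encrypt the file and run the playbook with the vault password prompt:
ansible-vault encrypt group_vars/all/secrets.yml
ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass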
The solution was to:
give the host_vars file the right name (10.10.10.10.yml)
add ansible_password as a variable
put quotation marks around the Jinja2 expressions, e.g. "{{ test_user }}", as shown below
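Put together, a corrected host_vars file would look roughly like this (a sketch; the variable names come from secrets.yml above):
host_vars/10.10.10.10.yml
ansible_user: "{{ test_user }}"
ansible_password: "{{ test_pass }}"
ansible_become_pass: "{{ test_pass }}"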
I have to pass the host on which the Ansible command will be executed through extra vars.
I don't know in advance which hosts the tasks will be applied to, and therefore the hosts are currently missing from my inventory file.
If I understood the article "How to pass extra variables to an Ansible playbook" correctly, overriding hosts is only possible with groups of hosts that have already been composed.
From the post Ansible issuing warning about localhost I gathered that referencing the hosts to be managed in an Ansible inventory is a must; however, I still have doubts about it, since the use of extra vars was not mentioned in that question.
So my question is: what can I do in order to make this playbook work?
- hosts: "{{ host }}"
tasks:
- name: KLIST COMMAND
command: klist
register: klist_result
- name: TEST COMMAND
ansible.builtin.shell: echo hi > /tmp/test_result.txt
... referencing hosts to be managed in an Ansible inventory is a must
Yes, that's the case. Regarding your question
What can I do in order to make this playbook work? (annot. without a "valid" inventory file)
you could try the following workaround.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - add_host:
        hostname: "{{ target_hosts }}"
        group: dynamic

- hosts: dynamic
  become: true
  gather_facts: true
  tasks:
    - name: Show hostname
      shell:
        cmd: "hostname && who am i"
      register: result

    - name: Show result
      debug:
        var: result
A call with
ansible-playbook hosts.yml --extra-vars="target_hosts=test.example.com"
results in the following execution:
TASK [add_host] ***********
changed: [localhost]
PLAY [dynamic] ************
TASK [Show hostname] ******
changed: [test.example.com]
In any case it is recommended to check how to build your inventory.
Further Documentation
add_host module – Add a host (and alternatively a group) to the ansible-playbook in-memory inventory
I have a task based on a shell command that needs to run on the local computer. As part of the command, I need to insert the IP addresses of some of the groups I have in my inventory file.
Inventory file :
[server1]
130.1.1.1
130.1.1.2
[server2]
130.1.1.3
130.1.1.4
[server3]
130.1.1.5
130.1.1.6
I need to run the following command from the local computer against the IPs that are part of the server2 and server3 groups:
ssh-copy-id user@<IP>
# <IP> should be 130.1.1.3, 130.1.1.4, 130.1.1.5, 130.1.1.6
Playbook - where I'm missing the IP part:
- hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: Generate ssh key for root user
      shell: "ssh-copy-id user@{{ item }}"
      run_once: True
      with_items:
        - server2 group
        - server3 group
In a nutshell:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      shell: "ssh-copy-id user@{{ inventory_hostname }}"
      delegate_to: localhost
Note that I kept your exact shell command, which is actually bad practice since there is a dedicated Ansible module to manage that. So this could be translated to something like:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      authorized_key:
        user: user
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
I am using Ansible 2.1.0.0
I try to use become_user with a variable in a task, but I receive the following message:
fatal: [host]: FAILED! => {"failed": true, "msg": "'ansible_user' is undefined"}
The task executing this is
- name: Config git user name
  git_config: name=user.name scope=global value={{ ansible_host }}
  become: Yes
  become_user: "{{ansible_user}}"
And the playbook has the following line to define the remote user:
- name: Foo
  hosts: foo
  vars:
    http_port: 80
  remote_user: admin
I've seen this response which seems to be the same problem, but this does not work for me.
I have also seen a set_fact solution, but I would like to use the remote_user var if possible, so that no extra lines must be added when a playbook already has remote_user set.
Does anyone know how to do this or what I am doing wrong?
What about that:
- name: Foo
  hosts: foo
  vars:
    http_port: 80
    my_user: admin
  remote_user: "{{my_user}}"
then:
- name: Config git user name
  git_config: name=user.name scope=global value={{ ansible_host }}
  become: Yes
  become_user: "{{my_user}}"
I think I found it:
become_user: "{{ansible_ssh_user}}"
In fact, remote_user: admin is another way of defining the variable ansible_ssh_user. I don't know why remote_user is not accessible as a variable, but what I do know is that when you set remote_user, it changes the variable ansible_ssh_user.
Not sure if it's a clean solution, but it works.
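For reference, a minimal sketch of how that slots into the original play (play and task names taken from the question):
- name: Foo
  hosts: foo
  # remote_user is what populates ansible_ssh_user here
  remote_user: admin
  tasks:
    - name: Config git user name
      git_config: name=user.name scope=global value={{ ansible_host }}
      become: yes
      become_user: "{{ ansible_ssh_user }}"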
I had a similar problem trying to use {{ ansible_ssh_user }}:
fatal: [xxx]: FAILED! => {"msg": "The field 'become_user' has an invalid value, which includes an undefined variable. The error was: 'ansible_user' is undefined"}
I fixed this error using this approach:
- name: Backups - Start backups service
  shell:
    cmd: systemctl --user enable backups.service && systemctl --user restart backups.service
    executable: /bin/bash
  become: true
  become_method: sudo
  become_user: "{{ lookup('env','USER') }}"
I hope this helps.
This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
hosts: web
gather_facts: false
roles:
- { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override as a one time off thing, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either the command line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
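The vars-file route would look something like this (override.yml is a hypothetical filename; the @ prefix tells --extra-vars to read variables from a file):
override.yml
variable_host: droplets
ansible-playbook server.yml --extra-vars "@override.yml"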
For anyone who might come looking for a solution.
Playbook
- hosts: '{{ host }}'
  tasks:
    - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deployment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name in the extra vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user to specify a limit on the command line, using -l or --limit
  hosts: 'all'
  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts
This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit.
NOTE: Single quotes MUST be used to prevent bash interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts. (version 2.3.2.0)
You could have
- hosts: all (or group)
  tasks:
    - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see which hosts this configuration would be applied to.
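For example, the two can be combined into a dry check that only prints the matched hosts (the pattern is a placeholder):
ansible-playbook playbook.yml -l some_more_strict_host_or_pattern --list-hosts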
Another solution is to use the special variable ansible_limit, which contains the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, then Ansible issues a warning, but does nothing since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To perform that, you need a playbook with two plays:
the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory
the second play runs on this known group
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
    - add_host:
        name: "{{ working_host }}"
        groups: working_group
      changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
    - debug:
        msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron is forced to provide a single host or group, etc. I like the logic from the comment from @wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
  - name: Fail script if required variable_host parameter is missing
    fail:
      msg: "You have to add the --extra-vars='variable_host='"
    when: (variable_host is not defined) or (variable_host == "")
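A run then looks roughly like this (the group name web is taken from the earlier inventory example):
ansible-playbook server.yml --extra-vars "variable_host=web"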
Just came across this googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that's associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to your localhost (the Ansible master) and call the ansible-playbook command from there.
I am using Ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts param is evaluated. So you can set the host in a vars file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
  become: yes
  vars_files:
    - vars/project.yml
  tasks:
    ...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        inventory_hosts: []
    - set_fact:
        inventory_hosts: "{{inventory_hosts + [item]}}"
      with_items: "{{hostvars.keys()|list}}"
    - meta: end_play
      when: "(play_hosts|length) == (inventory_hosts|length)"
    - debug:
        msg: "About to execute tasks/roles for {{inventory_hostname}}"
This worked for me, as I am using Azure DevOps to deploy an application through CI/CD pipelines. I had to make the hosts value (in the yml file) more dynamic so that in the release pipeline I can set its value, for example:
--extra-vars "host=$(target_host)"
My ansible playbook looks like this
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
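The release pipeline step then calls something along these lines (the inventory filename is an assumption; target_host is the pipeline variable mentioned above):
ansible-playbook -i inventory playbook.yml --extra-vars "host=$(target_host)"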