In Ansible, is it possible to get the value of the argument to the --limit option within a playbook? What I want to do is something like this:
---
- hosts: all
  remote_user: root
  tasks:
    - name: The value of the --limit argument
      debug:
        msg: "argument of --limit is {{ ansible-limit-arg }}"
Then, when I run the command:
$ ansible-playbook getLimitArg.yaml --limit webhosts
I'll get this output:
argument of --limit is webhosts
Of course, I made up the variable name "ansible-limit-arg", but is there a valid way of doing this? I could specify "webhosts" twice, the second time with --extra-vars, but that seems a roundabout way of doing it.
Since Ansible 2.5 you can access the value using a new magic variable ansible_limit, so:
- debug:
    var: ansible_limit
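For reference, a minimal sketch of the asker's playbook using this variable (assuming Ansible >= 2.5; the default filter is just a guard for runs without --limit):
---
- hosts: all
  gather_facts: false
  tasks:
    - name: The value of the --limit argument
      debug:
        msg: "argument of --limit is {{ ansible_limit | default('not set') }}"
Running ansible-playbook getLimitArg.yaml --limit webhosts then prints "argument of --limit is webhosts".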
Have you considered using the {{ ansible_play_batch }} built-in variable?
- hosts: all
  become: false
  gather_facts: false
  tasks:
    - name: The value of the --limit argument
      debug:
        msg: "argument of --limit is {{ ansible_play_batch }}"
      delegate_to: localhost
It won't tell you exactly what was entered as the argument but it will tell you how Ansible interpreted the --limit arg.
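For example, if the limit is a pattern rather than a single group, ansible_play_batch shows the hosts Ansible actually matched, not the literal pattern (the hostnames below are purely illustrative):
ansible-playbook getLimitArg.yaml --limit 'web*'
Here the debug msg would contain something like ['web1', 'web2'] instead of the string 'web*'.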
You can't do this without an additional plugin/module. If you really need this, write an action plugin and access the options via cli.options (see example).
P.S. If you are trying to use --limit to run playbooks against different environments, don't do it; you can accidentally blow away your whole infrastructure – use different inventories instead.
Here is a small code block to achieve the same:
- block:
    - shell: 'echo {{ inventory_hostname }} >> /tmp/hosts'
    - shell: cat /tmp/hosts
      register: servers
    - file:
        path: /tmp/hosts
        state: absent
  delegate_to: 127.0.0.1
- debug: var=servers.stdout_lines
Then use the stdout_lines output as you wish, like mine:
- add_host:
    name: "{{ item }}"
    groups: cluster
    ansible_user: "ubuntu"
  with_items:
    - "{{ servers.stdout_lines }}"
Related
This is the code of my Ansible script:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  ansible_become_pass: "{{ pass }}"
  tasks:
    - name: Creates directory to keep files on the server
      file: path=/home/{{ user }}/fabric_shell state=directory
    - name: Move sh file to remote
      copy:
        src: /home/pankaj/my_ansible_scripts/normal_script/installation/install.sh
        dest: /home/{{ user }}/fabric_shell/install.sh
    - name: Execute the script
      command: sh /home/{{ user }}/fabric_shell/install.sh
      become: yes
I am running the playbook using the command:
ansible-playbook send_run_shell.yml --extra-vars "user=sakshi host=192.168.0.238 pass=Welcome01"
But I don't know why I am getting this error:
ERROR! 'ansible_become_pass' is not a valid attribute for a Play
The error appears to have been in '/home/pankaj/go/src/shell_code/send_run_shell.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- hosts: "{{ host }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
Please guide me on what I am doing wrong.
Thanks in advance.
ansible_become_pass is a connection parameter which you can set as a variable:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  vars:
    ansible_become_pass: "{{ pass }}"
  tasks:
    # ...
That said, you can move remote_user to variables too (refer to the whole list of connection parameters), save them to a separate host_vars or group_vars file, and encrypt it with Ansible Vault.
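For example, a group_vars file could hold both connection parameters (the group name webservers is just a placeholder here), and the whole file can then be encrypted with ansible-vault encrypt group_vars/webservers.yml:
# group_vars/webservers.yml (hypothetical group name)
ansible_user: sakshi
ansible_become_pass: Welcome01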
Take a look at this thread and the Ansible page. I propose using become_user this way:
- hosts: all
  tasks:
    - include_tasks: task/java_tomcat_install.yml
      when: activity == 'Install'
      become: yes
      become_user: "{{ aplication_user }}"
Try not to use pass=Welcome01:
When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged but password authentication can also be used where needed by supplying the option --ask-pass. If using sudo features and when sudo requires a password, also supply --ask-become-pass (previously --ask-sudo-pass which has been deprecated).
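Applied to the asker's command, that would look something like this (dropping pass from the extra vars and prompting for it instead):
ansible-playbook send_run_shell.yml --extra-vars "user=sakshi host=192.168.0.238" --ask-become-pass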
How can I run a playbook only on the first host in the group?
I am expecting something like this:
---
- name: playbook that only run on first host in the group
  hosts: "{{ groups[group_name] | first }}"
  tasks:
    - debug:
        msg: "on {{ inventory_hostname }}"
But this doesn't work; it gives the error:
'groups' is undefined
How can I make it work?
You can use:
hosts: group_name[0]
Inventory hosts values (specified in the hosts directive) are processed with a custom parser, which does not allow Jinja2 expressions like the regular template engine does.
Read about Patterns.
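Put together, a minimal sketch (with a hypothetical group name webhosts) would be:
---
- name: run only on the first host of the group
  hosts: webhosts[0]   # index-based pattern, handled by the inventory parser rather than Jinja2
  tasks:
    - debug:
        msg: "on {{ inventory_hostname }}"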
I'd like to pass a variable to an included Ansible playbook as follows:
---
- hosts: localhost
  connection: local
  vars:
    my_group: foo

- include: site.yml hosts={{ my_group }}
Then, in site.yml...
---
- hosts: "{{ hosts }}"
...
Unfortunately, I get an error saying that my_group is undefined in site.yml. Ansible docs do say that:
Note that you cannot do variable substitution when including one playbook inside another.
Is this my case? Is there a way around it?
You can use this syntax, but my_group has to be defined at the global level; right now it's local to the first play – it's even clear from the indentation.
You can confirm this by running your playbook with --extra-vars my_group=foo.
But generally what you seem to want to achieve is done using in-memory inventory files and add_host module. Take this as an example:
- hosts: localhost
  gather_facts: no
  vars:
    target_host: foo
    some_other_variable: bar
  tasks:
    - add_host:
        name: "{{ target_host }}"
        groups: dynamically_created_hosts
        some_other_variable: "{{ some_other_variable }}"

- include: site.yml
with site.yml:
---
- hosts: dynamically_created_hosts
  tasks:
    - debug:
        var: some_other_variable
I added some_other_variable to answer the question from your comment "how do I make a variable globally available from inside a play". It's not global, but it's passed to another play as a "hostvar".
From what I see (and I can't explain why) in Ansible >=2.1.1.0, there must be an inventory file specified for the dynamic in-memory inventory to work. In older versions it worked with Ansible executed ad hoc, without an inventory file, but now you must run ansible-playbook with -i inventory_file or have an inventory file defined through ansible.cfg.
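In other words, even a trivial static inventory satisfies that requirement. A sketch, with arbitrary file and playbook names:
# hosts.ini - any non-empty inventory will do; only localhost is needed for the first play
localhost ansible_connection=local
Then run it with ansible-playbook -i hosts.ini playbook.yml.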
This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
  hosts: web
  gather_facts: false
  roles:
    - { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override it as a one-off, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either command-line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
For anyone who comes looking for a solution:
Playbook:
- hosts: '{{ host }}'
  tasks:
    - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deplyment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name via --extra-vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user to specify a limit on the command line, using -l or --limit
  hosts: 'all'
  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts
This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit
NOTE: Single quotes MUST be used to prevent bash interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts (version 2.3.2.0).
You could have
- hosts: all  # or a group
  tasks:
    - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see on which hosts this configuration would be applied.
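For example, to preview which hosts a given limit would select without executing anything (the pattern is illustrative):
ansible-playbook playbook.yml -l 'web*' --list-hosts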
Another solution is to use the special variable ansible_limit, which is the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, Ansible issues a warning but does nothing, since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
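A fuller sketch of a play built around this pattern (the debug task is just illustrative):
---
- hosts: "{{ ansible_limit | default(omit) }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "running on {{ inventory_hostname }}"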
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To do that, you need a playbook with two plays:
- the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory
- the second play runs on this known group
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
    - add_host:
        name: "{{ working_host }}"
        groups: working_group
      changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
    - debug:
        msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron is forced to provide a single host or group, etc. I like the logic from the comment by @wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
  - name: Fail script if required variable_host parameter is missing
    fail:
      msg: "You have to add the --extra-vars='variable_host='"
    when: (variable_host is not defined) or (variable_host == "")
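The inventory then only needs the group declared with no members, e.g.:
# inventory sketch: empty_group exists but contains no hosts
[empty_group]

[web]
x.x.x.x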
Just came across this googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that's associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to your localhost (the Ansible control machine) and call the ansible-playbook command.
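As a rough sketch of that idea (the playbook name and limit value here are hypothetical):
- hosts: web
  tasks:
    - name: Run another playbook from the control node
      command: ansible-playbook server.yml --limit droplets
      delegate_to: localhost
      run_once: true
run_once keeps the command from firing once per host in the play.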
I am using Ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts param is evaluated. So you can set the host in a vars.yml file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
  become: yes
  vars_files:
    - vars/project.yml
  tasks:
    ...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        inventory_hosts: []
    - set_fact:
        inventory_hosts: "{{ inventory_hosts + [item] }}"
      with_items: "{{ hostvars.keys() | list }}"
    - meta: end_play
      when: "(play_hosts | length) == (inventory_hosts | length)"
    - debug:
        msg: "About to execute tasks/roles for {{ inventory_hostname }}"
This worked for me as I am using Azure DevOps to deploy an application using CI/CD pipelines. I had to make the hosts value (in the yml file) more dynamic so that in the release pipeline I can set its value, for example:
--extra-vars "host=$(target_host)"
My Ansible playbook looks like this:
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
I cannot get this seemingly simple example to work in Ansible 1.8.3. The variable interpolation does not kick in for the task name. All examples I have seen suggest this should work. Given that the variable is defined in the vars section, I expected the task name to print the value of the variable. Why doesn't this work?
Even the example from the Ansible documentation seems to not print the variable value.
---
- hosts: 127.0.0.1
  gather_facts: no
  vars:
    vhost: "foo"
  tasks:
    - name: create a virtual host file for {{ vhost }}
      debug: msg="{{ vhost }}"
This results in the following output:
PLAY [127.0.0.1] **************************************************************

TASK: [create a virtual host file for {{ vhost }}] ****************************
ok: [127.0.0.1] => {
    "msg": "foo"
}

PLAY RECAP ********************************************************************
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
Update
This works with 1.7.2 but does not work with 1.8.3. So either this is a bug or a feature.
Variables are not resolved inside the name. Only inside the actual tasks/conditions etc. will the placeholders be resolved. I guess this is by design. Imagine you have a with_items loop and use {{ item }} in the name. The task's name will only be printed once, but the {{ item }} would change in every iteration.
I see the examples, even the one in the docs you linked to, use variables in the name. But that doesn't mean the result will be what you expected. The docs are community managed. It might be that someone just put that line there without testing it, or maybe it used to work like that in a previous version of Ansible and the docs have not been updated since. (I've only been using Ansible for about a year.) But even though it doesn't work like we wish it would, I'm still using variables in my names, just to indicate that the task is based on dynamic parameters. Maybe the examples were written with the same intention.
An interesting observation I recently made (Ansible 1.9.4) is that default values are written out in the task name.
- name: create a virtual host file for {{ vhost | default("foo") }}
When executed, Ansible would show the task title as:
TASK: [create a virtual host file for foo]
This way you can avoid ugly task names in the output.
Explanation
Whether the variable gets interpolated depends on where it has been declared.
Imagine you have two hosts: A and B.
If the variable foo has only per-host values, then when Ansible runs the play, it cannot decide which value to use.
On the other hand, if it has a global value (global in the sense of host invariance), there is no confusion about which value to use.
Source: https://github.com/ansible/ansible/issues/3103#issuecomment-18835432
Hands on playbook
ansible_user is an inventory variable
greeting is an invariant variable
- name: Test variable substitution in names
  hosts: localhost
  connection: local
  vars:
    greeting: Hello
  tasks:
    - name: Sorry {{ ansible_user }}
      debug:
        msg: this won't work
    - name: You say '{{ greeting }}'
      debug:
        var: ansible_user
I experienced the same problem today in one of my Ansible roles and noticed something interesting: when I use the set_fact module before using the vars in the task name, they actually get translated to their correct values.
In this example I wanted to set the password for a remote user:
Notice that I use the vars test_user and user_password that I set as facts before.
- name: Prepare to set user password
  set_fact:
    user_password: "{{ linux_pass }}"
    user_salt: "s0m3s4lt"
    test_user: "{{ ansible_user }}"

- name: "Changing password for user {{ test_user }} to {{ user_password }}"
  user:
    name: "{{ ansible_user }}"
    password: "{{ user_password | password_hash('sha512', user_salt) }}"
    state: present
    shell: /bin/bash
    update_password: always
This gives me the following output:
TASK [install : Changing password for user linux to LiNuXuSeRPaSs#]
So this solved my problem.
It might be ugly, but you can somewhat work around it with something like this:
- name: create a virtual host file
  debug:
    msg: "Some command result"
  loop: "{{ [vhost] }}"
or
- name: create a virtual host file
  debug:
    msg: "Some command result"
  loop_control:
    label: "{{ vhost }}"
  loop: [1]
I wouldn't do this in general, but it shows how you can use loop items or the label to surface information beyond the command result.
Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html