ansible: is there a method to avoid skipped messages from specific roles?

I'm not looking for the ansible.cfg skippy method, but for a method I can use in specific roles only to avoid skipped messages on stdout while running my playbooks.
(I cannot use solutions based on grep rejection, because my ansible commands are not launched from a shell or similar but by some applications.) I can only work on yml files.
How can I do that?
test file
---
- hosts: localhost
  vars:
    mypath: /tmp/file
  tasks:
    - stat: path={{ mypath }}
      register: foo
    - debug: var=foo
    - name: do something with file if exists
      command: cat {{ mypath }}
      when: foo.stat.exists
Here it fails to avoid the skip message. The application only runs "ansible-playbook filename" and cannot be modified to include any pipes, contrary to the example below:
-bash-4.4$ ansible-playbook filetest.yml | grep -i skip
skipping: [localhost]
If it's not possible, is it at least possible for a specific playbook?
I repeat: I cannot use cfg file solutions at all, only environment variables or an "inside yml" syntax.
Thanks.

In the ansible.cfg put:
stdout_callback = skippy
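Since the question rules out ansible.cfg, note that recent 2.x releases expose the same settings as environment variables, which matches the "environment variables only" constraint. A minimal sketch (set these in whatever environment the application uses to launch ansible-playbook):

# Equivalent of stdout_callback = skippy via the environment
export ANSIBLE_STDOUT_CALLBACK=skippy
# Or keep the default callback but hide skipped hosts entirely
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=false
ansible-playbook filetest.yml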

Related

Grep vars from a file with ansible to use later in a task with a for loop

I am relatively new to ansible.
I have a parameters.yml with something like this:
parameters:
  stuff: bullshit
  otherstuff: otherbullshit
  domain_locale: domain.locale.test
  domain_differentlocale1: domain.differentlocale1.test
  domain_differentlocale2: domain.differentlocale2.test
  domain_differentlocale3: domain.differentlocale3.test
  uninteresting_stuff: uninteresting_bullshit
This file is in another directory, but I could also symlink it to defaults/main.yml. It doesn't matter.
What I would like to accomplish is to somehow grep these domain.*.test values within an ansible task and use them as vars in another task. That other task would write them directly after 127.0.0.1 localhost, on the same line, in /etc/hosts.
I am not even sure which module to use in ansible. include_vars has no additional grep as far as I can see. set_fact does not read from a different file (can I combine it with a shell command? how?). lineinfile does not have a source and dest, so it is also limited to one file.
I also thought this post would help me: Ansible read multiple variables with same name from vars_file
but it does not have the grep part that I need.
Here you go:
---
- hosts: localhost
  gather_facts: no
  vars:
    parameters:
      stuff: bullshit
      otherstuff: otherbullshit
      domain_locale: domain.locale.test
      domain_differentlocale1: domain.differentlocale1.test
      domain_differentlocale2: domain.differentlocale2.test
      domain_differentlocale3: domain.differentlocale3.test
      uninteresting_stuff: uninteresting_bullshit
  tasks:
    - lineinfile:
        dest: /tmp/test.txt
        regexp: ^127.0.0.1
        line: "127.0.0.1 localhost {{ parameters.values() | select('match','domain.*test') | list | join(' ') }}"
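Since the question says parameters.yml actually lives in a separate file, roughly the same filter can be applied after loading it with include_vars. A minimal sketch, assuming the file sits at files/parameters.yml next to the playbook (that path is hypothetical):

---
- hosts: localhost
  gather_facts: no
  tasks:
    # Load the external parameters.yml so its top-level 'parameters' dict becomes available
    - include_vars:
        file: files/parameters.yml
    # Append every domain.*.test value after "127.0.0.1 localhost" in /etc/hosts
    - lineinfile:
        dest: /etc/hosts
        regexp: '^127\.0\.0\.1'
        line: "127.0.0.1 localhost {{ parameters.values() | select('match','domain.*test') | list | join(' ') }}"
      become: yes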

Is it possible to specify groups from two different inventory files for an Ansible playbook?

My playbook (/home/user/Ansible/dist/playbooks/test.yml):
- hosts: regional_clients
  tasks:
    - shell: /export/home/user/ansible_scripts/test.sh
      register: shellout
    - debug: var=shellout

- hosts: web_clients
  tasks:
    - shell: /var/www/html/webstart/release/ansible_scripts/test.sh
      register: shellout
    - debug: var=shellout
    - command: echo catalina.sh start
      register: output
    - debug: var=output
The [regional_clients] group is specified in /home/user/Ansible/webproj/hosts and the [web_clients] group is specified in /home/user/Ansible/regions/hosts.
Is there a way I could make the above work? Currently, running the playbook will fail since neither [regional_clients] nor [web_clients] is defined in the default inventory file /home/user/Ansible/dist/hosts.
Yes, you can write a simple shell script:
#!/bin/sh
cat /home/user/Ansible/webproj/hosts /home/user/Ansible/regions/hosts
and call it as a dynamic inventory in Ansible:
ansible-playbook -i my_script test.yml
This question, however, looks to me like a problem with your organisation, not a technical one. If your environment is so complex and maintained by different parties, then use some kind of configuration database (and a dynamic inventory in Ansible which would retrieve the data), instead of individual files in user's paths.
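With more recent Ansible versions (2.4 and later, if I recall correctly), the wrapper script can also be skipped entirely, because -i may be given more than once:

# Combine both inventories directly on the command line
ansible-playbook -i /home/user/Ansible/webproj/hosts \
                 -i /home/user/Ansible/regions/hosts \
                 /home/user/Ansible/dist/playbooks/test.yml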

Dynamic file name in vars_files

The following is a simple playbook which tries to dynamically load variables:
site.yml
---
- hosts: localhost
  vars_files:
    - "{{ name }}.yml"
  tasks:
    - debug: var={{ foo }}
Variable foo is defined in this file:
vars/myvars.yml
---
foo: "Hello"
Then the playbook is run like this:
ansible-playbook test.yml -e "name=myvars"
However this results in this error:
ERROR! vars file {{ name }}.yml was not found
From what I understood from several code snippets, this should be possible and should import the variables from myvars.yml. When trying with Ansible 1.7.x it indeed seemed to work (although I hit a different issue, the file name was resolved correctly).
Has this behaviour changed (perhaps support for dynamic variable file names was removed)? Is there a different way to achieve this behaviour (I could use include_vars tasks, however they are not quite suitable)?
EDIT: To make sure my playbook structure is correct, here is a github repository: https://github.com/jcechace/ansible_sinppets/tree/master/dynamic_vars
Just change your site.yml like this:
- hosts: localhost
  vars_files:
    - "vars/{{ name }}.yml"
  tasks:
    - debug: var=foo
Then run the command like this:
ansible-playbook site.yml -e "name=myvars" -c local
Hope this will help you.
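For reference, the layout this assumes, relative to where site.yml is run:

.
├── site.yml
└── vars/
    └── myvars.yml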

In Ansible, how can I set the log file name dynamically?

I'm currently developing an ansible script to build and deploy a java project.
So, I can set the log_path like below:
log_path=/var/log/ansible.log
But it is hard to look up the build history.
Is it possible to append a datetime to the log file name?
for example,
ansible.20150326145515.log
I don't believe there is a built-in way to generate the date on the fly like that but one option you have is to use a lookup which can shell out to date. Example:
log_path="/var/log/ansible.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.log"
Here is an option using ANSIBLE_LOG_PATH environment variable thanks to Bash shell alias:
alias ansible="ANSIBLE_LOG_PATH=ansible-\`date +%Y%m%d%H%M%S\`.log ansible"
Feel free to use an absolute path if you prefer.
I found it.
Just add a task to copy (or mv) the log locally:
- name: Copy ansible.log
  connection: local
  command: mv ./logs/ansible.log ./logs/ansible.{{ lookup('pipe', 'date %Y%M%d%H%M%S') }}.log
  run_once: true
Thanks to @jarv.
How about this:
- shell: date +%Y%m%d%H%M%S
  register: timestamp
- debug: msg="foo.{{ timestamp.stdout }}.log"
Output:
TASK [command] *****************************************************************
changed: [blabla.example.com]
TASK [debug] *******************************************************************
ok: [blabla.example.com] => {
"msg": "foo.20160922233847.log"
}
According to the nice folks at the #ansible freenode IRC, this can be accomplished with a custom callback plugin.
I haven't done it yet because I can't install the Ansible Python library on this machine. Specifically, Windows 7 can't have directory names > 260 chars in length, and pip tries to make lengthy temporary paths. But if someone gets around to it, please post it here.
A small improvement on @ickhyun-kwon's answer:
- name: "common/_ansible_log_path.yml: rename ansible.log"
  connection: local
  shell: |
    mkdir -vp {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ ;
    mv -vf {{ inventory_dir }}/logs/ansible.log {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ansible.{{ svn_deploy.release }}.{{ lookup('pipe', 'date +%Y-%m-%d-%H%M') }}.log
  args:
    executable: /bin/bash
    chdir: "{{ inventory_dir }}"
  run_once: True
  ignore_errors: True
This has separate log directories per svn release, and ensures the log directory actually exists before the mv command.
Ansible interprets ./ as the current playbook directory, which may or may not be the root of your ansible repository, whereas mine live in ./playbooks/$project/$role.yml. For me {{ inventory_dir }}/logs/ happens to correspond to the ~/ansible/log/ directory, though alternative layout configurations do not guarantee this.
I am unsure of the correct way to formally extract the absolute ansible.cfg::log_path value.
Also, the date format for month is +%m, not %M, which is minutes.
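One way to inspect the effective value (assuming Ansible 2.4 or later, where the ansible-config command exists) is to dump the resolved configuration:

# Print the log_path Ansible actually resolved from ansible.cfg / environment
ansible-config dump | grep -i log_path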
I have faced a similar problem while trying to set dynamic log paths for various playbooks.
A simple solution seems to be to pass the log filename dynamically via the ANSIBLE_LOG_PATH environment variable. See https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In this particular case, just export the environment variable when running the intended playbook in your terminal:
export ANSIBLE_LOG_PATH=ansible.`date +%s`.log; ansible-playbook test.yml
Otherwise, if the intended filename cannot be generated by the terminal, you can always use a runner playbook which runs the intended playbook from within:
---
- hosts:
    - localhost
  gather_facts: false
  ignore_errors: yes
  tasks:
    - name: set dynamic variables
      set_fact:
        task_name: dynamic_log_test
        log_dir: /path/to/log_directory/
    - name: Change the working directory and run the ansible-playbook as shell command
      shell: "export ANSIBLE_LOG_PATH={{ log_dir }}log_{{ task_name|lower }}.txt; ansible-playbook test.yml"
      register: shell_result
This should log the result of test.yml to /path/to/log_directory/log_dynamic_log_test.txt
Hope you find this helpful!

Safely limiting Ansible playbooks to a single machine?

I'm using Ansible for some simple user management tasks with a small group of computers. Currently, I have my playbooks set to hosts: all and my hosts file is just a single group with all machines listed:
# file: hosts
[office]
imac-1.local
imac-2.local
imac-3.local
I've found myself frequently having to target a single machine. The ansible-playbook command can limit plays like this:
ansible-playbook --limit imac-2.local user.yml
But that seems kind of fragile, especially for a potentially destructive playbook. Leaving out the limit flag means the playbook would be run everywhere. Since these tools only get used occasionally, it seems worth taking steps to foolproof the playbooks so we don't accidentally nuke something months from now.
Is there a best practice for limiting playbook runs to a single machine? Ideally the playbooks should be harmless if some important detail was left out.
Turns out it is possible to enter a host name directly into the playbook, so running the playbook with hosts: imac-2.local will work fine. But it's kind of clunky.
A better solution might be defining the playbook's hosts using a variable, then passing in a specific host address via --extra-vars:
# file: user.yml (playbook)
---
- hosts: '{{ target }}'
  user: ...
Running the playbook:
ansible-playbook user.yml --extra-vars "target=imac-2.local"
If {{ target }} isn't defined, the playbook does nothing. A group from the hosts file can also be passed through if need be. Overall, this seems like a much safer way to construct a potentially destructive playbook.
Playbook targeting a single host:
$ ansible-playbook user.yml --extra-vars "target=imac-2.local" --list-hosts
playbook: user.yml
play #1 (imac-2.local): host count=1
imac-2.local
Playbook with a group of hosts:
$ ansible-playbook user.yml --extra-vars "target=office" --list-hosts
playbook: user.yml
play #1 (office): host count=3
imac-1.local
imac-2.local
imac-3.local
Forgetting to define hosts is safe!
$ ansible-playbook user.yml --list-hosts
playbook: user.yml
play #1 ({{target}}): host count=0
There's also a cute little trick that lets you specify a single host on the command line (or multiple hosts, I guess), without an intermediary inventory:
ansible-playbook -i "imac-1.local," user.yml
Note the comma (,) at the end; this signals that it's a list, not a file.
Now, this won't protect you if you accidentally pass a real inventory file in, so it may not be a good solution to this specific problem. But it's a handy trick to know!
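The same trick works for more than one host; a quick sketch using the host names from the question:

# Two ad-hoc hosts, no inventory file involved (the trailing comma keeps it a list)
ansible-playbook -i "imac-1.local,imac-2.local," user.yml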
This approach will exit if more than a single host is provided by checking the play_hosts variable. The fail module is used to exit if the single host condition is not met. The examples below use a hosts file with two hosts alice and bob.
user.yml (playbook)
---
- hosts: all
  tasks:
    - name: Check for single host
      fail: msg="Single host check failed."
      when: play_hosts | length != 1
    - debug: msg='I got executed!'
Run playbook with no host filters
$ ansible-playbook user.yml
PLAY [all] ****************************************************************
TASK: [Check for single host] *********************************************
failed: [alice] => {"failed": true}
msg: Single host check failed.
failed: [bob] => {"failed": true}
msg: Single host check failed.
FATAL: all hosts have already failed -- aborting
Run playbook on single host
$ ansible-playbook user.yml --limit=alice
PLAY [all] ****************************************************************
TASK: [Check for single host] *********************************************
skipping: [alice]
TASK: [debug msg='I got executed!'] ***************************************
ok: [alice] => {
"msg": "I got executed!"
}
There's IMHO a more convenient way.
You can indeed interactively prompt the user for the machine(s) to apply the play to by using vars_prompt:
---
- hosts: "{{ setupHosts }}"
  vars_prompt:
    - name: "setupHosts"
      prompt: "Which hosts would you like to setup?"
      private: false
  tasks:
    - shell: echo
A slightly different solution is to use the special variable ansible_limit which is the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
No need to define an extra variable here, just run the playbook with the --limit flag.
ansible-playbook --limit imac-2.local user.yml
To expand on joemailer's answer, if you want to have the pattern-matching ability to match any subset of remote machines (just as the ansible command does), but still want to make it very difficult to accidentally run the playbook on all machines, this is what I've come up with:
Same playbook as in the other answer:
# file: user.yml (playbook)
---
- hosts: '{{ target }}'
  user: ...
Let's have the following hosts:
imac-10.local
imac-11.local
imac-22.local
Now, to run the command on all devices, you have to explicitly set the target variable to "all":
ansible-playbook user.yml --extra-vars "target=all"
And to limit it to a specific pattern, you can set target=pattern_here
or, alternatively, you can leave target=all and append the --limit argument, e.g.:
--limit imac-1*
i.e.
ansible-playbook user.yml --extra-vars "target=all" --limit imac-1* --list-hosts
which results in:
playbook: user.yml
play #1 (office): host count=2
imac-10.local
imac-11.local
I really don't understand why all the answers are so complicated; the way to do it is simply:
ansible-playbook user.yml -i hosts/hosts --limit imac-2.local --check
The check mode allows you to run in dry-run mode, without making any change.
AWS users using the EC2 External Inventory Script can simply filter by instance id:
ansible-playbook sample-playbook.yml --limit i-c98d5a71 --list-hosts
This works because the inventory script creates default groups.
Since version 1.7 Ansible has the run_once option. The documentation for it also discusses various other techniques.
We have some generic playbooks that are usable by a large number of teams. We also have environment specific inventory files, that contain multiple group declarations.
To force someone calling a playbook to specify a group to run against, we seed a dummy entry at the top of the inventory:
[ansible-dummy-group]
dummy-server
We then include the following check as a first step in the shared playbook:
- hosts: all
  gather_facts: False
  run_once: true
  tasks:
    - fail:
        msg: "Please specify a group to run this playbook against"
      when: '"dummy-server" in ansible_play_batch'
If the dummy-server shows up in the list of hosts this playbook is scheduled to run against (ansible_play_batch), then the caller didn't specify a group and the playbook execution will fail.
This approach runs the playbooks on the target server itself.
It is a bit trickier if you want to use a local connection, but it should be OK if you use a variable for the hosts setting and create a special entry for localhost in the hosts file.
In (all) playbooks have the hosts: line set to:
- hosts: "{{ target | default('no_hosts')}}"
In the inventory hosts file add an entry for the localhost which sets the connection to be local:
[localhost]
127.0.0.1 ansible_connection=local
Then on the command line run commands explicitly setting the target - for example:
$ ansible-playbook --extra-vars "target=localhost" test.yml
This will also work when using ansible-pull:
$ ansible-pull -U <git-repo-here> -d ~/ansible --extra-vars "target=localhost" test.yml
If you forget to set the variable on the command line the command will error safely (as long as you've not created a hosts group called 'no_hosts'!) with a warning of:
skipping: no hosts matched
And as mentioned above you can target a single machine (as long as it is in your hosts file) with:
$ ansible-playbook --extra-vars "target=server.domain" test.yml
or a group with something like:
$ ansible-playbook --extra-vars "target=web-servers" test.yml
I have a wrapper script called provision that forces you to choose the target, so I don't have to handle it elsewhere.
For those who are curious, I use ENV vars for options that my Vagrantfile uses (adding the corresponding ansible arg for cloud systems) and let the rest of the ansible args pass through. Where I am creating and provisioning more than 10 servers at a time, I include an auto retry on failed servers (as long as progress is being made; I found that when creating 100 or so servers at a time, a few would often fail the first time around).
echo 'Usage: [VAR=value] bin/provision [options] dev|all|TARGET|vagrant'
echo ' bootstrap - Bootstrap servers ssh port and initial security provisioning'
echo ' dev - Provision localhost for development and control'
echo ' TARGET - specify specific host or group of hosts'
echo ' all - provision all servers'
echo ' vagrant - Provision local vagrant machine (environment vars only)'
echo
echo 'Environment VARS'
echo ' BOOTSTRAP - use cloud providers default user settings if set'
echo ' TAGS - if TAGS env variable is set, then only tasks with these tags are run'
echo ' SKIP_TAGS - only run plays and tasks whose tags do not match these values'
echo ' START_AT_TASK - start the playbook at the task matching this name'
echo
ansible-playbook --help | sed -e '1d
s#=/etc/ansible/hosts# set by bin/provision argument#
/-k/s/$/ (use for fresh systems)/
/--tags/s/$/ (use TAGS var instead)/
/--skip-tags/s/$/ (use SKIP_TAGS var instead)/
/--start-at-task/s/$/ (use START_AT_TASK var instead)/
'
I would suggest using --limit <hostname or ip>
