I would like to know if
- shell: >
    hostname;
    whoami;
and
- shell: "{{item}}"
with_items: ['hostname', 'whoami']
are equivalent? In the second example, Ansible will always use the same SSH connection for both commands (hostname, whoami)?
It seems to me that it does not:

- shell: "{{ item }}"
  with_items: ['export miavar=PIPPO', 'echo $miavar']
(item=export miavar=PIPPO) => {"changed": true, "cmd": "export miavar=PIPPO", "stdout": ""}
(item=echo $miavar) => {"changed": true, "cmd": "echo $miavar", "stdout": ""}
--ansible 2.1.1.0
Riccardo
Ansible runs each loop iteration as a separate task run, so you end up with different SSH sessions.
There are some exceptions, described by the ANSIBLE_SQUASH_ACTIONS variable:
"apk, apt, dnf, package, pacman, pkgng, yum, zypper"
These modules are smart enough to squash all items into a single task call.
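For example, with one of the modules above a package loop is collapsed into a single module call. A sketch (old-style squashing; it was later deprecated and removed, so on modern Ansible you would pass the list directly to name):

- yum:
    name: "{{ item }}"
    state: present
  with_items:
    - httpd
    - mod_ssl
  # squashed: one yum transaction installs both packages,
  # instead of one task run (and one connection) per item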
Just put the list of commands in a variable and substitute the variable where you need to execute them, joining the items so they run as one command (and therefore in one shell session over one connection):

vars:
  shell_cmd:
    - hostname
    - whoami

tasks:
  - shell: "{{ shell_cmd | join(' && ') }}"

Enjoy!
As mentioned in the previous answer, with_items makes Ansible run separate loop iterations. One more benefit of this is debuggability, especially if a lot of commands are chained under one shell task. For example, Ansible will internally run

- shell: "{{ item }}"
  with_items: ['hostname', 'whoami']

as the equivalent of:

- shell: 'hostname'
- shell: 'whoami'

Since it is broken into two separate tasks, if one of them fails, Ansible will point to the exact failing task (command) instead of the whole chain in the alternative.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html
I want to create a named pipe with ansible on Linux/Debian. In a shell script I would use the mkfifo command to create a pipe. Now I'm wondering if there's a builtin module approach, but I can't find anything in the builtin modules of ansible.
Googling the question also doesn't yield any meaningful results.
My approach would be to just execute a shell command with ansible that runs mkfifo, for example:

- name: Create named pipe
  shell: "mkfifo testpipe"

However, to be precise, I need to check whether the pipe already exists, and if so, check its file type, and so on and so on...
I bet there is a convenient way but I just can't find it.
Thank you very much for your help
Edit:
I just did it this way now. I bet there are some cases I haven't caught.
- name: Check for existing pipe
  shell: "test -p {{ pipe_file }}"
  register: pipe_file_test
  become: true
  changed_when: false
  failed_when: false  # test -p exits non-zero when the path is not a pipe

- name: Delete pipe_file if it's not a pipe
  file:
    name: "{{ pipe_file }}"
    state: absent
  when: pipe_file_test.rc != 0
  become: true

- name: Create pipe if necessary
  shell: "mkfifo {{ pipe_file }}"
  when: pipe_file_test.rc != 0
  become: true
command/shell modules have some crude support for idempotency:
- name: Create named pipe
  command:
    cmd: mkfifo /tmp/testpipe
    creates: /tmp/testpipe
If /tmp/testpipe already exists, the command is not run and Ansible reports the task as 'ok' rather than 'changed'.
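Note that creates only checks for existence, not file type. If you also need the type check from the question, here is a sketch combining stat (whose result includes an isfifo flag) with creates, assuming pipe_file is defined as in the question's edit:

- name: Inspect the pipe path
  stat:
    path: "{{ pipe_file }}"
  register: pipe_stat

- name: Remove anything at the path that is not a FIFO
  file:
    path: "{{ pipe_file }}"
    state: absent
  when: pipe_stat.stat.exists and not pipe_stat.stat.isfifo

- name: Create the named pipe
  command:
    cmd: mkfifo {{ pipe_file }}
    creates: "{{ pipe_file }}"
  # creates is evaluated at run time, so mkfifo still runs
  # when the previous task has just removed a non-FIFO file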
I've been searching for quite some time and tried many variants and similar answers without success. Hopefully this is something simple I am missing.
Ansible 2.9.6
I am creating many playbooks that all share a large set of custom Roles for my clients. I want to keep all logic out of the playbooks, and place that logic in the roles themselves to allow maximum re-usability. Basically, I just want to add boilerplate playbooks for simple "Role" runs via tags, with some vars overrides here and there.
My problem is that I can't seem to make some roles idempotent by using conditions - the conditionals don't work. The error I get is:
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'homebrew_bash.rc != 0' failed.
The error was: error while evaluating conditional (homebrew_bash.rc != 0): 'homebrew_bash' is undefined
The error appears to be in '/Users/eric/code/client1/provisioning/roles/bash/tasks/main.yaml': line 12, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Change shell (chsh) for macOS to homebrew managed version
^ here
"}
Below is the boilerplate code for my playbooks:
# ./playbook.yaml
---
- name: Provisioning
  hosts: localhost
  connection: local

  pre_tasks:
    - include_vars: "{{ item }}"
      with_fileglob:
        - "{{ playbook_dir }}/vars/global.yaml"
      tags: always

  tasks:
    - name: Use homebrew bash binary
      include_role:
        name: bash
      tags: bash
The above is truncated quite a bit, but the only things missing are additional var files and a whole bunch of include_roles.
Below is my role file in its entirety, though. It is largely untested because of the error I keep getting.
# ./roles/bash/tasks/main.yaml
---
- name: Check /etc/shells contains "/usr/local/bin/bash"
  command: grep -Fxq "/usr/local/bin/bash" /etc/shells
  register: homebrew_bash
  ignore_errors: True
  changed_when:
    - homebrew_bash.rc != 0

- name: Check that homebrew installed /usr/local/bin/bash
  stat:
    path: /usr/local/bin/bash
  register: homebrew_bash_binary

- name: Change shell (chsh) for macOS to homebrew managed version
  tags: bash, chsh
  shell: chsh -s /usr/local/bin/bash
  become: yes
  when:
    - homebrew_bash.rc != 0
    - homebrew_bash_binary.stat.exists == True
PS: I do plan on abstracting those hardcoded paths into roles/bash/defaults/, but I need it working first.
PS2: If there is a better way to use a contains filter (instead of the grep hack), I'm all ears.
I've tried:
- making a separate tasks/chsh.yaml and using include: to call that file within the role. When I do this, I get an odd error telling me the variable is undefined in tasks/chsh.yaml, even though I am checking for the variable in tasks/main.yaml! That doesn't seem right.
- using quotes in various places in the conditions
- commenting out each condition: both give the same error, just different names.
Again, I am trying to keep this logic in the roles only - not in the playbook.
Thanks!
Figured it out: I was missing the "tags" on the tasks that register the variables used in the conditionals. When the play runs with --tags bash or --tags chsh, any task without a matching tag is skipped, so homebrew_bash and homebrew_bash_binary were never registered.
# ./roles/bash/tasks/main.yaml
---
- name: Check /etc/shells contains "/usr/local/bin/bash"
  tags: bash, chsh
  command: grep -Fxq "/usr/local/bin/bash" /etc/shells
  register: homebrew_bash
  ignore_errors: True
  changed_when:
    - homebrew_bash.rc != 0

- name: Check that homebrew installed /usr/local/bin/bash
  tags: bash, chsh
  stat:
    path: /usr/local/bin/bash
  register: homebrew_bash_binary

- name: Change shell (chsh) for macOS to homebrew managed version
  tags: bash, chsh
  shell: chsh -s /usr/local/bin/bash
  become: yes
  when:
    - homebrew_bash.rc != 0
    - homebrew_bash_binary.stat.exists == True
Running --list-tags against my playbook, deploy.yml, I get the following:
play #1 (localhost): Extract Variable Data From Jenkins Call TAGS: [main,extract]
TASK TAGS: [extract, main]
play #2 (localhost): Pull Repo to Localhost TAGS: [main,pullRepo]
TASK TAGS: [main, pullRepo]
play #3 (gms): Deploy GMS TAGS: [main,gmsDeploy]
TASK TAGS: [gmsDeploy, main]
play #4 (localhost): Execute SQL Scripts TAGS: [main,sql]
TASK TAGS: [main, sql]
play #5 (localhost): Configure Hosts for Copying TAGS: [main,config,tibco]
TASK TAGS: [config, main, tibco]
play #6 (copy_group): Copy Files between servers TAGS: [main,tibco]
TASK TAGS: [main, tibco]
play #7 (localhost): Deploy TIBCO TAGS: [main,tibco]
TASK TAGS: [main, tibco]
play #8 (localhost): Validate Deployment with GMS Heartbeat TAGS: [validate,main]
TASK TAGS: [main, validate]
play #9 (tibco): Clean Up TAGS: [main,cleanup]
TASK TAGS: [cleanup, main]
However, when I run ansible-playbook deploy.yml --tags "pullRepo" in an attempt to execute only the 2nd play, it still attempts to execute the include tasks in play #4. I think the includes (the act of including, not the tasks being included) are the issue, because the tasks before the includes in play #4 don't execute. I was hoping the issue was that the included tasks weren't tagged; unfortunately, adding the sql tag to the included tasks still shows attempts to execute them.
I know I can avoid running them with --skip-tags, but I shouldn't have to. Any suggestions why this may be happening?
I would post the playbook, but it's hundreds of lines containing proprietary information and I can't seem to replicate the issue with an MCVE.
NOTE: I'm not using any roles in the playbook, so tags being applied to the tasks due to roles is not a factor. All tags are either on the entire plays or the tasks within the plays (primarily the former).
Pretty close to an MCVE:
---
#:PLAY 2 - PULL REPO (LOCALLY)
- name: Pull Repo to Localhost
  hosts: localhost
  any_errors_fatal: true
  tags:
    - pullRepo
    - main
  gather_facts: no

  tasks:
    # Save the repository to the control machine (this machine) for distribution
    - name: Pulling Git Repo for Ansible...
      git:
        repo: 'https://github.com/ansible/ansible.git'
        dest: '/home/dholt2/ansi'
        accept_hostkey: yes
        force: yes
        recursive: no

# Execute the sql files in the 'Oracle' directory after
## checking if the directory exists
#:PLAY 4 - TEST AND EXECUTE SQL (LOCALLY)
- name: Test & Execute SQL Scripts
  hosts: localhost
  any_errors_fatal: true
  tags:
    - sql
  gather_facts: no

  tasks:
    # Check if the 'Oracle' directory exists. Save the
    ## output to deploy 'Oracle/*' if it does exist.
    - name: Check for presence of the Oracle directory
      stat:
        path: '/home/dholt2/testing/Oracle'
      register: st3

    # Get a list of all sql scripts (using 'ls -v') to run ONLY if the 'Oracle'
    ## directory exists; exclude the rollback script; -v ensures natural ordering of the scripts
    - name: Capture All Scripts To Run
      shell: 'ls -v /home/dholt2/testing/Oracle -I rollback.sql'
      register: f1
      when: st3.stat.isdir is defined

    # Test that the deployment scripts run without error
    - name: Testing SQL Deployment Scripts...
      include: testDeploySql.yml
      any_errors_fatal: true
      loop_control:
        loop_var: deploy_item
      with_items: "{{ f1.stdout_lines }}"
      register: sqlDepl
      when: st3.stat.isdir is defined
The output from executing ansible-playbook deploy2.yml --tags "pullRepo" -vvv:
PLAYBOOK: deploy2.yml *********************************************************************************************************************************************************************************************************************************************************
2 plays in deploy2.yml
PLAY [Pull Repo to Localhost] *************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Pulling Git Repo for Ansible...] ********************************************************************************************************************************************************************************************************************************************
task path: /home/dholt2/test/deploy2.yml:17
Using module file /usr/lib/python2.6/site-packages/ansible/modules/source_control/git.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dholt2
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412 `" && echo ansible-tmp-1506540242.05-264367109987412="` echo /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpVWukIT TO /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/git.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/ /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/git.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2.6 /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/git.py; rm -rf "/home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "after": "5c3bbd4620c4a94ece7741ecfe5514a1bd06422b",
    "before": null,
    "changed": true,
    "invocation": {
        "module_args": {
            "accept_hostkey": true,
            "bare": false,
            "clone": true,
            "depth": null,
            "dest": "/home/dholt2/ansi",
            "executable": null,
            "force": true,
            "key_file": null,
            "recursive": false,
            "reference": null,
            "refspec": null,
            "remote": "origin",
            "repo": "https://github.com/ansible/ansible.git",
            "ssh_opts": null,
            "track_submodules": false,
            "umask": null,
            "update": true,
            "verify_commit": false,
            "version": "HEAD"
        }
    }
}
META: ran handlers
META: ran handlers
PLAY [Test & Execute SQL Scripts] *********************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Testing SQL Deployment Scripts...] **************************************************************************************************************************************************************************************************************************************
task path: /home/dholt2/test/deploy2.yml:54
fatal: [localhost]: FAILED! => {
    "failed": true,
    "msg": "'f1' is undefined"
}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
I created the following directory structure and files for this example:
/home/dholt2/testing/Oracle
Inside of the directory, I created the files a.sql, b.sql, c.sql, and rollback.sql.
Just as it should, Ansible skips the first task in play #4 (the one that checks for the Oracle directory) and the following task that lists the files, yet it still tries to execute the include, even though the include should only run if the previous tasks had succeeded. The result is that the variable the include loops over doesn't exist, so it throws an error, even though the when directive should have prevented this: the first task never ran to show that the directory exists, and the previous task never produced the list of files.
EDIT 1 on 09/28/2017: this task was also in the original playbook; it's the rollback for when the first sql scripts fail:
# Test that the rollback script runs without error
- name: Testing SQL Rollback Script...
  any_errors_fatal: true
  include: testRollbackSql.yml
  register: sqlRoll
  when: st3.stat.isdir is defined
EDIT 2 on 10/16/2017: I keep running into a similar problem. I'm often using a with_together where one of the items is always populated with values but the other may not be (like f1 above). Because of this, the answer below suggesting the default() filter doesn't work on its own, since there's always something for the include to loop over. Fortunately, I found this official issue on Ansible's Github; it discusses why this is happening and how to get around it.
Essentially, you should still have the default() filter, but you need to add static: no to the task as well.
This backs up Konstantin's statement about why Ansible may be processing these includes even though their tags were not specified.
From the Github issue:
...this is an issue with static vs dynamic includes, inheritance and that when executes inside the with loops.
The issue is that with both static includes and when/with interactions Ansible does NOT know if it will skip the task.
I see...
include is not a usual task; it's a kind of statement, and this statement behaves differently when used in a static or a dynamic manner.
To eliminate this ambiguity, import_tasks and include_tasks were introduced in Ansible 2.4.
Please see recent question on serverfault: https://serverfault.com/questions/875247/whats-the-difference-between-include-tasks-and-import-tasks/875292#875292
When you specify a tag, Ansible still goes through all tasks under the hood and silently skips those without a corresponding tag. But when it hits a dynamic include (like in your case), it must process the include statement, because the included file may contain tasks marked with that tag.
But you have an undefined variable within your include statement, so it fails. This is expected behaviour.
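Applied to the play above, a sketch of the workaround from the question's second edit: keep the default() filter so there is always a (possibly empty) list to loop over, and add static: no to force a dynamic include (on Ansible >= 2.4, include_tasks gives the same dynamic behaviour):

- name: Testing SQL Deployment Scripts...
  include: testDeploySql.yml
  static: no                                          # force a dynamic include
  loop_control:
    loop_var: deploy_item
  with_items: "{{ f1.stdout_lines | default([]) }}"   # empty list when f1 was skipped
  register: sqlDepl
  when: st3.stat.isdir is defined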
I have a playbook where I'm running a local_action task (which overwrites the ansible_host variable with "localhost"), but during that task I want to access the data that was in ansible_host before the local_action overwrote it.
This is what I've tried so far, but I get the error "'register' is not a valid option in debug"
playbook.yml
- debug:
    var: ansible_host
    register: run_host

- name: Install OBS
  local_action: command ssh ansibler@{{ run_host }} "sudo /home/ansibler/obs/bin/install.sh"
  become_user: bsmith
The local_action task should ssh to localhost and then run the ssh command given. The reason I'm doing it that way instead of just running the command task against the ansible_host is because I couldn't find any other workaround to make this particular script work. It would hang indefinitely if I ran it with the command task.
Is there another way to achieve what I'm wanting to do?
That's what the set_fact module is for:
- set_fact:
    run_host: "{{ ansible_host }}"
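With that in place, the task from the question can use run_host directly; unlike registering the result of debug (which yields a whole result dictionary), set_fact stores the plain string. A sketch reusing the question's command:

- name: Install OBS
  local_action: command ssh ansibler@{{ run_host }} "sudo /home/ansibler/obs/bin/install.sh"
  become_user: bsmith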
I'm currently developing an Ansible script to build and deploy a Java project.
I can set the log_path like below:
log_path=/var/log/ansible.log
but it is hard to look up the build history.
Is it possible to append a datetime to the log file name?
for example,
ansible.20150326145515.log
I don't believe there is a built-in way to generate the date on the fly like that, but one option is to use a lookup, which can shell out to date. Example:
log_path="/var/log/ansible.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.log"
Here is an option using the ANSIBLE_LOG_PATH environment variable, thanks to a Bash shell alias:
alias ansible="ANSIBLE_LOG_PATH=ansible-\`date +%Y%m%d%H%M%S\`.log ansible"
Feel free to use an absolute path if you prefer.
I found it. Just add a task to copy (or mv) the log locally:
- name: Copy ansible.log
  connection: local
  command: mv ./logs/ansible.log ./logs/ansible.{{ lookup('pipe', 'date +%Y%M%d%H%M%S') }}.log
  run_once: true
thanks to @jarv
How about this:
- shell: date +%Y%m%d%H%M%S
  register: timestamp

- debug: msg="foo.{{ timestamp.stdout }}.log"
Output:
TASK [command] *****************************************************************
changed: [blabla.example.com]
TASK [debug] *******************************************************************
ok: [blabla.example.com] => {
"msg": "foo.20160922233847.log"
}
According to the nice folks at the #ansible freenode IRC, this can be accomplished with a custom callback plugin.
I haven't done it yet because I can't install the Ansible Python library on this machine. Specifically, Windows 7 can't handle paths longer than 260 characters, and pip tries to create lengthy temporary paths. But if someone gets around to it, please post it here.
Small improvement on @ickhyun-kwon's answer:
- name: "common/_ansible_log_path.yml: rename ansible.log"
connection: local
shell: |
mkdir -vp {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ ;
mv -vf {{ inventory_dir }}/logs/ansible.log {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ansible.{{ svn_deploy.release }}.{{ lookup('pipe', 'date +%Y-%m-%d-%H%M') }}.log args:
executable: /bin/bash
chdir: "{{ inventory_dir }}"
run_once: True
ignore_errors: True
This keeps separate log directories per svn release and ensures the log directory actually exists before the mv command runs.
Ansible interprets ./ as the current playbook directory, which may or may not be the root of your ansible repository, whereas mine live in ./playbooks/$project/$role.yml. For me {{ inventory_dir }}/logs/ happens to correspond to the ~/ansible/log/ directory, though alternative layout configurations do not guarantee this.
I am unsure of the correct way to formally extract the absolute ansible.cfg::log_path value.
Also, in the date command the month is +%m and not %M, which is minutes.
I have faced a similar problem while trying to set dynamic log paths for various playbooks.
A simple solution seems to be to pass the log filename dynamically to the ANSIBLE_LOG_PATH environment variable. Check out https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In this particular case just export the environment variable when running the intended playbook on your terminal:
export ANSIBLE_LOG_PATH=ansible.`date +%s`.log; ansible-playbook test.yml
Otherwise, if the intended filename cannot be generated by the terminal, you can always use a runner playbook which runs the intended playbook from within:
---
- hosts:
    - localhost
  gather_facts: false
  ignore_errors: yes

  tasks:
    - name: set dynamic variables
      set_fact:
        task_name: dynamic_log_test
        log_dir: /path/to/log_directory/

    - name: Change the working directory and run the ansible-playbook as shell command
      shell: "export ANSIBLE_LOG_PATH={{ log_dir }}log_{{ task_name|lower }}.txt; ansible-playbook test.yml"
      register: shell_result
This should log the result of test.yml to /path/to/log_directory/log_dynamic_log_test.txt
Hope you find this helpful!