Ansible not changing into directory to run command(s) - ansible

I'm using Ansible 2.8.4-1.el7 to automate some tasks. Since I'm just starting to learn the tool, I'm going through the official documentation most of the time (more specifically, I'm following these examples).
I'm just trying to execute a script that manages the Tomcat instance(s):
- name: Stop service
  command: "{{ script_manager }} stop"
  args:
    chdir: "{{ path_base }}/bin/"
I'm setting those variables beforehand:
path_base: /some/path/tomcat9_dev/
script_manager: tomcat9_dev
...but I'm still getting this error message:
TASK [tomcat : Stop service] ***************************************************
fatal: [tld.server.name]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "cmd": "tomcat9_dev stop", "msg": "[Errno 2] No such file or directory", "rc": 2}
It looks like it's not changing into that directory first to execute the command. Any clues?
I know the location exists, I can SSH into the server(s) and cd into it.
UPDATE: If I use "{{ path_base }}/bin/{{ script_manager }} stop" instead, it works fine. In any case, is there a better approach?

The current directory is not part of the PATH on *nix systems, so changing into the correct location is not enough.
If you really want to call your script like this, you have to add {{ path_base }}/bin/ to your user's PATH.
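For example, a sketch of that idea using the per-task environment keyword (assuming facts are gathered so ansible_env.PATH is defined):

- name: Stop service
  command: "{{ script_manager }} stop"
  environment:
    # prepend the script directory so the bare command name resolves via PATH
    PATH: "{{ path_base }}/bin:{{ ansible_env.PATH }}"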
Meanwhile, you can take advantage of chdir by calling your script relative to the current dir:
- name: Stop service
  command: "./{{ script_manager }} stop"
  args:
    chdir: "{{ path_base }}/bin/"
Furthermore, you are IMO on the wrong track: Tomcat should be a registered service on the machine and you should manage it with the relevant modules: either the generic service module or one of the specific ones (systemd, runit, sysvinit...).
Example with service:
- name: Make sure tomcat_dev is stopped
  service:
    name: tomcat_dev
    state: stopped
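If the host runs systemd and a unit file exists for the instance, the systemd module works the same way (the unit name below is an assumption):

- name: Make sure tomcat_dev is stopped
  systemd:
    name: tomcat_dev   # assumes a tomcat_dev.service unit is installed
    state: stopped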

Related

Execute role in play based on a failure [duplicate]

I'm trying to spin up an AWS deployment environment in Ansible, and I want to make it so that if something fails along the way, Ansible tears down everything on AWS that has been spun up so far. I can't figure out how to get Ansible to throw an error within the role.
For example:
<main.yml>
- hosts: localhost
  connection: local
  roles:
    - make_ec2_role
    - make_rds_role
    - make_s3_role
2. Then I want it to run some code based on that error here.
<make_rds_role>
- name: "Make it"
  rds:
    params: etc  # <-- 1. Let's say it fails in the middle here
I've tried:
- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
As well as other things within the documentation, but what I really want is a way to use something like block and rescue, but as far as I can tell that only works within the same playbook and on plays, not roles. Does anyone have a good way to do this?
Wrap the tasks inside your roles in a block/rescue.
Make sure the rescue block has at least one task – this way Ansible will not mark the host as failed.
Like this:
- block:
    - name: task 1
      ... # something bad may happen here
    - name: task N
  rescue:
    - assert: # we need a dummy task here to prevent our host from being failed
        that: ansible_failed_task is defined
Recent versions of Ansible register ansible_failed_task and ansible_failed_result when a rescue block is hit.
So you can do some post_tasks in your main.yml playbook like this:
post_tasks:
  - debug:
      msg: "Failed task: {{ ansible_failed_task }}, failed result: {{ ansible_failed_result }}"
    when: ansible_failed_task is defined
But be warned that this trick will NOT prevent other roles from executing.
So in your example, if make_rds_role fails, Ansible will still apply make_s3_role and run your post_tasks afterwards.
If you need to prevent that, add a check for the ansible_failed_task fact (or a fact of your own) at the beginning of each role, as in the sketch below.
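One way to implement that check (a sketch; the provisioning_failed fact name is purely illustrative): set a marker fact in each rescue section, then bail out at the top of every later role.

# in each role's rescue section, record the failure
rescue:
  - set_fact:
      provisioning_failed: true   # hypothetical marker fact

# first task of every later role: stop if an earlier role failed
- name: Abort because an earlier role failed
  fail:
    msg: "An earlier role failed, skipping this role"
  when: provisioning_failed | default(false) | bool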

Ansible - Unspecified Tags Still Executing

Running --list-tags against my playbook, deploy.yml, I get the following:
play #1 (localhost): Extract Variable Data From Jenkins Call TAGS: [main,extract]
TASK TAGS: [extract, main]
play #2 (localhost): Pull Repo to Localhost TAGS: [main,pullRepo]
TASK TAGS: [main, pullRepo]
play #3 (gms): Deploy GMS TAGS: [main,gmsDeploy]
TASK TAGS: [gmsDeploy, main]
play #4 (localhost): Execute SQL Scripts TAGS: [main,sql]
TASK TAGS: [main, sql]
play #5 (localhost): Configure Hosts for Copying TAGS: [main,config,tibco]
TASK TAGS: [config, main, tibco]
play #6 (copy_group): Copy Files between servers TAGS: [main,tibco]
TASK TAGS: [main, tibco]
play #7 (localhost): Deploy TIBCO TAGS: [main,tibco]
TASK TAGS: [main, tibco]
play #8 (localhost): Validate Deployment with GMS Heartbeat TAGS: [validate,main]
TASK TAGS: [main, validate]
play #9 (tibco): Clean Up TAGS: [main,cleanup]
TASK TAGS: [cleanup, main]
However, when I run ansible-playbook deploy.yml --tags "pullRepo" in an attempt to execute only the 2nd play, it still attempts to execute the include tasks in play #4. I think the includes (the act of including, not the tasks being included) are the issue, because the tasks before the includes in play #4 don't execute. I was hoping the issue was that the included tasks weren't tagged; unfortunately, adding the sql tag to the included tasks still shows attempts to execute them.
I know I can avoid running them with --skip-tags, but I shouldn't have to. Any suggestions why this may be happening?
I would post the playbook, but it's hundreds of lines containing proprietary information and I can't seem to replicate the issue with an MCVE.
NOTE: I'm not using any roles in the playbook, so tags being applied to the tasks due to roles is not a factor. All tags are either on the entire plays or the tasks within the plays (primarily the former).
Pretty close to an MCVE:
---
#:PLAY 2 - PULL REPO (LOCALLY)
- name: Pull Repo to Localhost
  hosts: localhost
  any_errors_fatal: true
  tags:
    - pullRepo
    - main
  gather_facts: no
  tasks:
    # Save the repository to the control machine (this machine) for distribution
    - name: Pulling Git Repo for Ansible...
      git:
        repo: 'https://github.com/ansible/ansible.git'
        dest: '/home/dholt2/ansi'
        accept_hostkey: yes
        force: yes
        recursive: no

# Execute the sql files in the 'Oracle' directory after
## checking if the directory exists
#:PLAY 4 - TEST AND EXECUTE SQL (LOCALLY)
- name: Test & Execute SQL Scripts
  hosts: localhost
  any_errors_fatal: true
  tags:
    - sql
  gather_facts: no
  tasks:
    # Check if the 'Oracle' directory exists. Save the
    ## output to deploy 'Oracle/*' if it does exist.
    - name: Check for presence of the Oracle directory
      stat:
        path: '/home/dholt2/testing/Oracle'
      register: st3

    # Get a list of all sql scripts (using 'ls -v') to run ONLY if the 'Oracle'
    ## directory exists; exclude the rollback script; -v ensures natural ordering of the scripts
    - name: Capture All Scripts To Run
      shell: 'ls -v /home/dholt2/testing/Oracle -I rollback.sql'
      register: f1
      when: st3.stat.isdir is defined

    # Test that the deployment scripts run without error
    - name: Testing SQL Deployment Scripts...
      include: testDeploySql.yml
      any_errors_fatal: true
      loop_control:
        loop_var: deploy_item
      with_items: "{{ f1.stdout_lines }}"
      register: sqlDepl
      when: st3.stat.isdir is defined
The output from executing ansible-playbook deploy2.yml --tags "pullRepo" -vvv:
PLAYBOOK: deploy2.yml *********************************************************************************************************************************************************************************************************************************************************
2 plays in deploy2.yml
PLAY [Pull Repo to Localhost] *************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Pulling Git Repo for Ansible...] ********************************************************************************************************************************************************************************************************************************************
task path: /home/dholt2/test/deploy2.yml:17
Using module file /usr/lib/python2.6/site-packages/ansible/modules/source_control/git.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dholt2
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412 `" && echo ansible-tmp-1506540242.05-264367109987412="` echo /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpVWukIT TO /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/git.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/ /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/git.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2.6 /home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/git.py; rm -rf "/home/dholt2/.ansible/tmp/ansible-tmp-1506540242.05-264367109987412/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "after": "5c3bbd4620c4a94ece7741ecfe5514a1bd06422b",
    "before": null,
    "changed": true,
    "invocation": {
        "module_args": {
            "accept_hostkey": true,
            "bare": false,
            "clone": true,
            "depth": null,
            "dest": "/home/dholt2/ansi",
            "executable": null,
            "force": true,
            "key_file": null,
            "recursive": false,
            "reference": null,
            "refspec": null,
            "remote": "origin",
            "repo": "https://github.com/ansible/ansible.git",
            "ssh_opts": null,
            "track_submodules": false,
            "umask": null,
            "update": true,
            "verify_commit": false,
            "version": "HEAD"
        }
    }
}
META: ran handlers
META: ran handlers
PLAY [Test & Execute SQL Scripts] *********************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Testing SQL Deployment Scripts...] **************************************************************************************************************************************************************************************************************************************
task path: /home/dholt2/test/deploy2.yml:54
fatal: [localhost]: FAILED! => {
    "failed": true,
    "msg": "'f1' is undefined"
}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
I created the following directory structure and files for this example:
/home/dholt2/testing/Oracle
Inside the directory, I created the files a.sql, b.sql, c.sql, and rollback.sql.
Just as it should, it skips over the first task in play #4 that searches for the Oracle directory and ignores the following task that gets the list of files, yet it tries to execute the include even though it should only run if the previous task succeeded. The result is that the variable the include task is trying to use doesn't exist, so it throws an error, even though the when condition shouldn't even have passed: the first task never ran to show that the directory exists, and the previous task never produced the list of files.
EDIT 1 on 09/28/2017: this task was also in the original playbook, it's the rollback for if the first sql scripts fail:
# Test that the rollback script runs without error
- name: Testing SQL Rollback Script...
  any_errors_fatal: true
  include: testRollbackSql.yml
  register: sqlRoll
  when: st3.stat.isdir is defined
EDIT 2 on 10/16/2017: I keep running into a similar problem. I'm often using a with_together where one of the items is always populated with values, but the other may not be (like f1 above). Because of this, the answer below to use the default() filter doesn't work, since there's always something for the include to loop over. Fortunately, I found this official issue on Ansible's GitHub. It discusses why this is happening and how to get around it.
Essentially, you should still have the default() filter, but you need to add static: no to the task as well.
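Applied to the include from the MCVE above, that would look roughly like this (a sketch, not the exact original task):

- name: Testing SQL Deployment Scripts...
  include: testDeploySql.yml
  static: no                                           # force a dynamic include
  loop_control:
    loop_var: deploy_item
  with_items: "{{ f1.stdout_lines | default([]) }}"    # empty loop when f1 was skipped
  when: st3.stat.isdir is defined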
This backs up Konstantin's statement about why Ansible may be running the tags even though they're not specified.
From the Github issue:
...this is an issue with static vs dynamic includes, inheritance and that when executes inside the with loops.
The issue is that with both static includes and when/with interactions Ansible does NOT know if it will skip the task.
I see.
include is not a usual task; it's a kind of statement, and this statement behaves differently when used in a static vs. a dynamic manner.
To eliminate this ambiguity, import_tasks and include_tasks were introduced in Ansible 2.4.
Please see recent question on serverfault: https://serverfault.com/questions/875247/whats-the-difference-between-include-tasks-and-import-tasks/875292#875292
When you specify a tag, Ansible still goes through all tasks under the hood and silently skips those without a corresponding tag. But when it hits a dynamic include (like in your case), it must process the include statement, because the included file may contain tasks marked with that tag.
But you have an undefined variable within your include statement, so it fails. This is expected behaviour.
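A rough illustration of the difference, assuming Ansible 2.4+ (task file and variable names reuse the MCVE above):

# static import: processed at parse time; tags and when are applied to every imported task
- import_tasks: testDeploySql.yml
  tags: sql

# dynamic include: evaluated at run time, so loops and previously registered variables work,
# but Ansible must still reach the include statement itself even when other tags are selected
- include_tasks: testDeploySql.yml
  with_items: "{{ f1.stdout_lines | default([]) }}"
  loop_control:
    loop_var: deploy_item
  when: st3.stat.isdir is defined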

Ansible shell and with_items

I would like to know if
- shell: >
    hostname;
    whoami;
and
- shell: "{{item}}"
with_items: ['hostname', 'whoami']
are equivalent. In the second example, will Ansible always use the same SSH connection for both commands (hostname, whoami)?
It seems to me that it is false...
- shell: "{{item}}"
with_items: ['export miavar=PIPPO', 'echo $miavar']
(item=export miavar=PIPPO) => {"changed": true, "cmd": "export miavar=PIPPO", "stdout": ""}
(item=echo $miavar) => {"changed": true, "cmd": "echo $miavar", "stdout": ""}
--ansible 2.1.1.0
Riccardo
Ansible runs each loop iteration as a separate run, so you end up with different SSH sessions.
There are some exceptions, described in the ANSIBLE_SQUASH_ACTIONS variable:
"apk, apt, dnf, package, pacman, pkgng, yum, zypper"
These modules are smart enough to squash all items into a single task call.
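If the point of the loop was to share shell state (like the exported variable above), a single multi-line shell task keeps everything in one session; a minimal sketch:

- shell: |
    export miavar=PIPPO
    echo $miavar        # same shell invocation, so this prints PIPPO
  register: out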
Just add a list of commands to a variable and substitute the variable in the place where you need to execute them.
vars:
  shell_cmd:
    - "hostname &&"
    - whoami
tasks:
  - shell: "{{shell_cmd}}"
Enjoy!
As mentioned in the previous answer, with_items makes Ansible run separate loop iterations. One more benefit of this is debuggability, especially if a lot of commands are chained under one shell task. For example, Ansible will internally run this:
- shell: "{{item}}"
with_items: ['hostname', 'whoami']
is equivalent to:
- shell: 'hostname'
- shell: 'whoami'
Since it's broken into two separate tasks, if one of them fails, Ansible will point to the exact failing task (command) instead of the whole chain in the alternative.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html

How do I get the Ansible expect module to properly wait for pid file creation with with_items

I'm trying to start a bunch of services on a node with a service startup shell script we use. It seems like the services do not fully start up because Ansible doesn't wait for the script to finish running (part of it starts a thin webserver in the background). I want the with_items loop to wait until the pid file is in place before starting the second service.
- name: startup all the services
  hosts: all
  gather_facts: no
  tasks:
    - expect:
        command: /bin/bash -c "/home/vagrant/app-src/app_global/bin/server_tool server_daemon {{ item }}"
        creates: "/home/vagrant/app-src/{{ item }}/tmp/pids/thin.pid"
      with_items:
        - srvc1
        - srvc2
I want the with_items loop to work with both the command and the thin.pid file it creates.
But it doesn't seem to do anything when I run it.
🍺 vagrant provision
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PLAY [startup all the services] *******************************************
PLAY RECAP ********************************************************************
If I understand your intentions correctly, you shouldn't be using the Expect module at all. It is for automating programs that require interactive input (see: Expect).
To start services sequentially and suspend processing of the playbook until the pid file is created, you can (currently) split your playbook into two files and use the include module with a with_items attribute:
Main playbook:
- name: startup all the services
  hosts: all
  gather_facts: no
  tasks:
    - include: start_daemon.yml srvcname={{ item }}
      with_items:
        - srvc1
        - srvc2
Sub-playbook start_daemon.yml:
- shell: "/home/vagrant/app-src/app_global/bin/server_tool server_daemon {{ srvcname }}"
args:
creates: "/home/vagrant/app-src/{{ srvcname }}/tmp/pids/thin.pid"
- name: Waiting for {{ srvcname }} to start
wait_for: path=/home/vagrant/app-src/{{ srvcname }}/tmp/pids/thin.pid state=present
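If a daemon sometimes needs longer to write its pid file, wait_for also accepts delay and timeout parameters; a variant with purely illustrative values:

- name: Waiting for {{ srvcname }} to start
  wait_for:
    path: "/home/vagrant/app-src/{{ srvcname }}/tmp/pids/thin.pid"
    state: present
    delay: 5        # wait a few seconds before the first check
    timeout: 120    # give up after two minutes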
Remarks:
I think you don't need to specify /bin/bash for the command module (however, it might depend on the configuration). If for some reason server_tool requires a shell environment, use the shell module (as I suggested above).
With name: in the wait_for task you'll get on-screen info about which service Ansible is currently waiting for.
For the future: a natural way to do it would be to use the block module with with_items. This feature has been requested, but as of today it is not implemented.

In Ansible, how can I set the log file name dynamically

I'm currently developing an Ansible script to build and deploy a Java project.
I can set the log_path like below:
log_path=/var/log/ansible.log
but it is hard to look up the build history.
Is it possible to append the datetime to the log file name?
For example:
ansible.20150326145515.log
I don't believe there is a built-in way to generate the date on the fly like that, but one option is to use a lookup, which can shell out to date. Example:
log_path="/var/log/ansible.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.log"
Here is an option using the ANSIBLE_LOG_PATH environment variable, thanks to a Bash shell alias:
alias ansible="ANSIBLE_LOG_PATH=ansible-\`date +%Y%m%d%H%M%S\`.log ansible"
Feel free to use an absolute path if you prefer.
I found it.
Just add a task to copy (or mv) the log locally:
- name: Copy ansible.log
  connection: local
  command: mv ./logs/ansible.log ./logs/ansible.{{ lookup('pipe', 'date %Y%M%d%H%M%S') }}.log
  run_once: true
thanks to #jarv
How about this:
- shell: date +%Y%m%d%H%M%S
  register: timestamp

- debug: msg="foo.{{timestamp.stdout}}.log"
Output:
TASK [command] *****************************************************************
changed: [blabla.example.com]
TASK [debug] *******************************************************************
ok: [blabla.example.com] => {
"msg": "foo.20160922233847.log"
}
According to the nice folks at the #ansible freenode IRC, this can be accomplished with a custom callback plugin.
I haven't done it yet because I can't install the Ansible Python library on this machine. Specifically, Windows 7 can't have directory names > 260 chars in length, and pip tries to make lengthy temporary paths. But if someone gets around to it, please post it here.
Small improvement on ickhyun-kwon's answer:
- name: "common/_ansible_log_path.yml: rename ansible.log"
connection: local
shell: |
mkdir -vp {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ ;
mv -vf {{ inventory_dir }}/logs/ansible.log {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ansible.{{ svn_deploy.release }}.{{ lookup('pipe', 'date +%Y-%m-%d-%H%M') }}.log args:
executable: /bin/bash
chdir: "{{ inventory_dir }}"
run_once: True
ignore_errors: True
This keeps separate log directories per svn release and ensures the log directory actually exists before the mv command runs.
Ansible interprets ./ as the current playbook directory, which may or may not be the root of your ansible repository, whereas mine live in ./playbooks/$project/$role.yml. For me {{ inventory_dir }}/logs/ happens to correspond to the ~/ansible/log/ directory, though alternative layout configurations do not guarantee this.
I am unsure of the correct way to formally extract the absolute ansible.cfg::log_path variable.
Also, the date directive for the month is +%m, not %M, which is minutes.
I have faced a similar problem while trying to set dynamic log paths for various playbooks.
A simple solution seems to be to pass the log filename dynamically via the ANSIBLE_LOG_PATH environment variable. See https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In this particular case, just export the environment variable when running the intended playbook in your terminal:
export ANSIBLE_LOG_PATH=ansible.`date +%s`.log; ansible-playbook test.yml
Otherwise, if the intended filename cannot be generated by the terminal, you can always use a runner playbook which runs the intended playbook from within:
---
- hosts:
    - localhost
  gather_facts: false
  ignore_errors: yes
  tasks:
    - name: set dynamic variables
      set_fact:
        task_name: dynamic_log_test
        log_dir: /path/to/log_directory/

    - name: Change the working directory and run the ansible-playbook as shell command
      shell: "export ANSIBLE_LOG_PATH={{ log_dir }}log_{{ task_name|lower }}.txt; ansible-playbook test.yml"
      register: shell_result
This should log the result of test.yml to /path/to/log_directory/log_dynamic_log_test.txt
Hope you find this helpful!
