I have a function in a shell script and I want to use it in my Ansible playbook.
My shell script:
wait_for_apt_locks () {
  while sudo fuser /var/{lib/cache/apt/archives}/{lock,lock-frontend} >/dev/null 2>&1; do
    echo "Waiting for apt locks"
    sleep 1
  done
}
I want to use this function wait_for_apt_locks in my playbook wait.yml.
Is the below the right way to use it?
- name: source /tmp/functions.sh
  shell: |
    source /tmp/functions.sh
    wait_for_apt_locks
I don't have any suggestions regarding loading bash functions - if your example works, then it works!
But do note that Ansible has a wait_for module used for monitoring ports or files.
Perhaps this is a more Ansible-native solution for dealing with lock files.
From https://docs.ansible.com/ansible/latest/modules/wait_for_module.html#examples:
- name: Wait until the lock file is removed
  wait_for:
    path: /var/lock/file.lock
    state: absent
If you are creating the shell script for the sole purpose of calling it here, you can skip the script file and simply execute the logic inline. At the end of the day, if it works it works.
- name: Run a shell command inline
  become: true
  shell: |
    while fuser {lib/cache/apt/archives}/{lock,lock-frontend} >/dev/null 2>&1; do
      echo "Waiting for apt locks"
      sleep 1
    done
  ignore_errors: no
  args:
    chdir: /var
I am trying to emulate this behavior with an Ansible raw command, but I could not find any feature that achieves this:
ssh user@host.com <<EOF
command
exit
EOF
You are simply sending the script:
command
exit
To the remote host. The <<EOF and EOF parts are parsed by your local shell and aren't part of the command. The equivalent Ansible task would be:
- raw: |
    command
    exit
In most cases (if the remote target is running a semi-standard shell), you won't need the exit either; the script will exit after the last command completes.
You don't need to send multiline commands via ssh yourself; Ansible already connects over SSH when you set the ansible_connection variable, e.g. in your inventory file:
[my_host_group]
host.com
[my_host_group:vars]
ansible_connection=ssh
ansible_become_user=root
ansible_ssh_user=user
Then execute the task with the shell module:
- name: Executing multiline command on host.com under user
  ansible.builtin.shell: command
  delegate_to: "{{ groups['my_host_group'][0] }}"
  become: False
Or just use the ansible.builtin.command module instead of ansible.builtin.shell if your command is simple and not multi-line.
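For example, a simple single-line invocation with the command module could look like this; the uptime command is just a placeholder:
- name: Executing a simple command on host.com under user
  ansible.builtin.command: /usr/bin/uptime
  delegate_to: "{{ groups['my_host_group'][0] }}"
  become: False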
You don't need an exit at the end of your script either (unless you want to change the exit code and return it to Ansible). A failed_when condition is your friend:
- name: Executing multiline command on host.com under user
  ansible.builtin.shell: command
  delegate_to: "{{ groups['my_host_group'][0] }}"
  register: your_script_results
  ignore_errors: True
  become: False

- name: Print an exit code on script error
  ansible.builtin.debug:
    msg: "Script failed with {{ your_script_results.rc }} exit code"
  when: your_script_results.failed
Before running a patching playbook, I ran the playbook with the "--check" option as a dry run. However, one of the tasks in the playbook, which checks whether the host needs a reboot, doesn't register the "reboot_hint" variable as intended:
- name: check for reboot
  shell: needs-restarting -r
  register: reboot_hint
  failed_when: reboot_hint.rc > 1

- name: debug, show the reboot hint variable
  debug:
    var:
      - reboot_hint.rc
      - reboot_hint
I get the message "VARIABLE IS NOT DEFINED!" for the run.
What could be causing this? I expect a return value of "1" or "0". I do get that when I go to the command line and run "needs-restarting -r" followed by "echo $?":
No core libraries or services have been updated.
Reboot is probably not necessary.
> echo $?
0
You're running --check and thus the shell command doesn't run. Because the shell command doesn't run there's nothing to register.
You can read more about this in https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#enabling-or-disabling-check-mode-for-tasks
A simple fix is adding check_mode: no, e.g.
- name: check for reboot
  check_mode: no
  shell: needs-restarting -r
  register: reboot_hint
  failed_when: reboot_hint.rc > 1
This forces the task to run even when check-mode is enabled.
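For completeness, here is a sketch of how the registered variable could then trigger a reboot; the reboot task itself is an assumption and not part of the original question (needs-restarting -r exits with 1 when a reboot is required):
- name: reboot if required
  reboot:
  when: reboot_hint.rc == 1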
What is the difference between raw, shell and command in an Ansible playbook? And when to use which?
command: executes a remote command on the target host. The command is not processed through a shell, so shell operators such as pipes and redirects will not work.
It can be used to launch scripts (.sh) or to execute simple commands. For example:
- name: Cat a file
  command: cat somefile.txt

- name: Execute a script
  command: somescript.sh param1 param2
shell: executes a remote command on the target host through a shell (/bin/sh by default).
It can be used when you want to execute more complex commands, such as commands concatenated with pipes. For example:
- name: Look for something in a file
  shell: cat somefile.txt | grep something
raw: executes low-level commands directly over the remote connection, bypassing Ansible's module subsystem; it is meant for hosts where the interpreter is missing, and a common use case is installing Python itself. This module should not be used in other cases, where command and shell are the better choice.
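For example, bootstrapping Python on an apt-based target could look like this; the package name and update step are assumptions for Debian/Ubuntu:
- name: Bootstrap Python so that regular modules can run afterwards
  become: true
  raw: apt-get update && apt-get install -y python3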
Since I was stumbling over the same question, I wanted to share my findings here too.
The command and shell modules, as well as gather_facts (annot.: setup.py), depend on a properly installed Python interpreter on the remote node(s). If that requirement isn't fulfilled, one may experience errors where it isn't possible to execute
python <ansiblePython.py>
In a Debian 10 (Buster) minimal installation, for example, python3 was installed but the symlink to python was missing.
To initialize the system correctly before applying all other roles, I've used an approach with the raw module
ansible/initSrv/main.yml
- hosts: "{{ target_hosts }}"
  gather_facts: no  # is necessary because setup.py depends on Python too
  pre_tasks:
    - name: "Make sure remote system is initialized correctly"
      raw: 'ln -s /usr/bin/python3 /usr/bin/python'
      register: set_symlink
      failed_when: set_symlink.rc != 0 and set_symlink.rc != 1
which is doing something like
/bin/sh -c 'ln -s /usr/bin/python3 /usr/bin/python'
on the remote system.
Further Documentation
raw module – Executes a low-down and dirty command
A common case is installing python on a system without python installed by default.
... but not only restricted to that
Playbook Keyword - pre_tasks
A list of tasks to execute before roles.
Set the order of task execution in Ansible
I'm trying to run the following Ansible playbook to start the "nexus" service on a remote server at the path "nexux/bin", but it fails:
- hosts: nexus
  become: yes
  become_user: nexus
  become_method: sudo
  tasks:
    - name: changing dir and starting nexus service
      shell:
        chdir: nexux/bin
        executable: ./nexus start
Can someone troubleshoot here to deduce the root cause?
As the Ansible output very clearly told you, in that syntax you did not provide a command. The executable: is designed to be the shell executable, not the "run this thing" argument. It is very clear in the examples section of the fine manual:
- shell: cd /opt/nexus/bin && ./nexus start
If you want to use the chdir: option, you must put it under args:, a sibling YAML key to shell:, like so:
- shell: echo hello world
  args:
    chdir: /opt/nexus/bin
    # I'm omitting the "executable:" key here, because you for sure
    # do not want to do that, but if you did, then fine, put it here
Having said all of that, as the docs also indicate, what you really want is to use command: because you are not making use of any special shell characters (redirects, pipes, && phrases, etc), so:
- command: ./nexus start
  args:
    chdir: /opt/nexus/bin
Try using the shell module; I also recommend running it with nohup and sending the output to a file:
- shell: |
    cd /opt/nexus/bin
    nohup ./nexus start > /tmp/nexus.log 2>&1 &
I'm trying to follow this solution to use the shell module and ssh-keyscan to add the key of a newly created EC2 instance to my known_hosts file.
After trying to do this multiple ways as listed in that question, I eventually ran just the ssh-keyscan command using the shell module, without the append. I am getting no output from this task:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }}
  args:
    executable: /bin/bash
  with_items: "{{ ec2.instances }}"
  register: keyscan

- debug: var=keyscan
Debug here shows nothing in stdout and stdout_lines, and nothing in stderr and stderr_lines.
Note: I tried running this with bash as the executable, as shown above, after reading that the shell module defaults to /bin/sh, which is the dash shell on my Linux Mint VirtualBox. But it's the same regardless.
I have tested the shell command with the following task and I see the proper output in stdout and stdout_lines:
- name: test the shell
shell: echo hello
args:
executable: /bin/bash
register: hello
- debug: var=hello
What is going on here? Running ssh-keyscan in a terminal (not through Ansible) works as expected.
EDIT: Looking at the raw_params output from debug shows ssh-keyscan -H x.x.x.x and copying and pasting this into the terminal works as expected.
The answer is that it doesn't work the first time. While researching another method I stumbled across the retries keyword in Ansible, which allows a task to be retried. I tried this, and on attempt number 2 of the retry loop it works.
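For reference, here is a sketch of what the retried task could look like, assuming the same ec2.instances loop as above; the retry count, delay, and until condition are my own choices rather than part of the original answer:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }}
  args:
    executable: /bin/bash
  with_items: "{{ ec2.instances }}"
  register: keyscan
  retries: 5
  delay: 1
  until: keyscan.stdout | length > 0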