I'm asking myself why Ansible doesn't source the ~/.profile file before executing the template module on a host.
The remote host's ~/.profile:
export ENV_VAR=/usr/users/toto
A single Ansible task:
- template: src=file1.template dest={{ ansible_env.ENV_VAR }}/file1
Ansible fails with:
fatal: [distant-host] => One or more undefined variables: 'dict object' has no attribute 'ENV_VAR'
Ansible does not run remote tasks (command, shell, ...) in an interactive or login shell. It's the same as when you execute a command remotely via 'ssh user@host "which python"'.
Sourcing ~/.bashrc often won't work, because the Ansible shell is not interactive and the default ~/.bashrc implementation ignores non-interactive shells (check its first lines).
The best solution I found for executing commands as a user, as if after an interactive SSH login, is:
- hosts: all
  tasks:
    - name: source user profile file
      #become: yes
      #become_user: my_user # in case you want to become a different user (make sure the acl package is installed)
      shell: bash -ilc 'which python' # example command which prints the python in use
      register: which_python
    - debug:
        var: which_python
bash -i means an interactive shell, so .bashrc won't be ignored.
-l means a login shell, which sources the full user profile (/etc/profile and ~/.bash_profile, or ~/.profile - see the bash manual page for more details).
Explanation of my example: my ~/.bashrc sets a specific Python from an Anaconda installation under that user.
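Applied to the original question, you could capture the variable in one task and reuse it in the next (a sketch; it assumes the login profile prints nothing else to stdout, and reuses file1.template from the question above):

- name: read ENV_VAR from the user's login profile
  shell: bash -lc 'echo $ENV_VAR'
  register: env_var

- name: render the template into the directory ENV_VAR points to
  template:
    src: file1.template
    dest: "{{ env_var.stdout }}/file1"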
Ansible does not run tasks in an interactive shell on the remote host. Michael DeHaan answered this question on GitHub some time ago:
The uber-basic description is ansible isn't really doing things through the shell, it's transferring modules and executing scripts that it transfers, not using a login shell.
i.e. Why does an SSH remote command get fewer environment variables than when run manually?
It's not a continuous shell environment basically, nor is it logging in and typing commands and things.
You should see the same result (undefined variable) by running this:
ssh <host> 'echo $ENV_VAR'
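You can see the same effect directly on the command line (quoting so the variable is expanded on the remote side; the second form works because the login shell sources ~/.profile):

ssh <host> 'echo $ENV_VAR'               # prints nothing: non-interactive, non-login shell
ssh <host> "bash -lc 'echo \$ENV_VAR'"   # prints /usr/users/toto once ~/.profile is sourced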
In a lot of places I've used the structure below:
- name: Task Name
  shell: ". /path/to/profile; command"
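A complete task built on this structure might look like the sketch below (the profile path is the generic $HOME/.profile and the echoed variable is just an illustration):

- name: Run a command with the user's profile loaded
  shell: ". $HOME/.profile; echo $ENV_VAR"
  register: env_var_out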
When Ansible escalates privileges via sudo, it doesn't invoke the sudo user's login shell.
We need to change the way sudo is called, e.g. by invoking it with the -i and -H flags:
"sudo_flags=-H" in your ansible.cfg file.
If you can run as root, you can use runuser.
- shell: runuser -l '{{ install_user }}' -c "{{ cmd }}"
This effectively runs the command as install_user in a fresh login shell, as if you had used su - *install_user* (which loads the profile, though it might be .bash_profile and not .profile...) and then executed *cmd*.
I'd try not to run everything as root just so you can run it as someone else, though...
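Still, a task-level sketch of the runuser approach could look like this (install_user and cmd are placeholder variables, as above, and become is needed so runuser can switch users):

- name: Run a command in the user's login shell via runuser
  become: true
  shell: runuser -l '{{ install_user }}' -c '{{ cmd }}'
  vars:
    install_user: toto          # hypothetical user
    cmd: 'echo $ENV_VAR'        # hypothetical command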
If you can modify the configuration of your target host and don't want to change your Ansible YAML code, you can try this:
add the variable ENV_VAR=/usr/users/toto to the /etc/environment file rather than to ~/.profile.
shell: "bash -l scala -version"
Using bash -l allows Ansible to load the corresponding bash_profile.
Note: bash -i (interactive shell) won't allow Ansible to run other tasks.
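For reference, a full task along those lines could look like this sketch (I pass the command string through -c; scala is just the example command from above):

- name: Check the scala version in a login shell
  shell: bash -lc 'scala -version'
  register: scala_version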
add the variable ENV_VAR=/usr/users/toto to the /etc/environment file rather than to ~/.profile.
You really can use /etc/environment, but only if the variable has a static value. If the value is derived from a command or another variable, it doesn't work. For example, if we put this line into /etc/environment:
XDG_RUNTIME_DIR=/run/user/$(id -u)
Ansible sees literally XDG_RUNTIME_DIR=/run/user/$(id -u), not XDG_RUNTIME_DIR=/run/user/1012.
And if we put this line to ~/.bash_profile or ~/.bashrc:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
The user sees XDG_RUNTIME_DIR=/run/user/1012 (if the user's id is 1012) when working manually, but Ansible doesn't get the XDG_RUNTIME_DIR variable at all.
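If you do need such a dynamic value in a play, one workaround (a sketch; the task names and example command are illustrative) is to compute it at runtime and pass it explicitly:

- name: Get the remote user's uid
  command: id -u
  register: remote_uid

- name: Run something that needs XDG_RUNTIME_DIR
  command: systemctl --user status
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ remote_uid.stdout }}"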
Related
What is the difference between raw, shell and command in the ansible playbook? And when to use which?
command: executes a remote command on the target host without passing it through a shell, so shell variables, pipes and redirects are not processed.
It can be used to launch scripts (.sh) or to execute simple commands. For example:
- name: Cat a file
  command: cat somefile.txt

- name: Execute a script
  command: somescript.sh param1 param2
shell: executes a remote command on the target host, opening a new shell (/bin/sh).
It can be used when you want to execute more complex commands, such as commands concatenated with pipes (see also the short command-vs-shell comparison after this list). For example:
- name: Look for something in a file
  shell: cat somefile.txt | grep something
raw: executes low-level commands when the Python interpreter is missing on the target host; a common use case is installing Python itself. This module should not be used in other cases (where command and shell are preferable).
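A quick way to see the command/shell difference in practice is the sketch below: the command module passes $HOME to echo as a literal argument, while the shell module lets /bin/sh expand it.

- name: No shell processing, prints the literal string $HOME
  command: echo $HOME

- name: Shell processing, prints e.g. /home/someuser
  shell: echo $HOME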
Since I stumbled over the same question, I want to share my findings here too.
The command and shell modules, as well as gather_facts (i.e. setup.py), depend on a properly installed Python interpreter on the remote node(s). If that requirement isn't fulfilled, one may experience errors where it isn't possible to execute
python <ansiblePython.py>
In a Debian 10 (Buster) minimal installation, for example, python3 was installed but the symlink to python was missing.
To initialize the system correctly before applying all other roles, I used an approach with the raw module:
ansible/initSrv/main.yml
- hosts: "{{ target_hosts }}"
  gather_facts: no # necessary because setup.py depends on Python too
  pre_tasks:
    - name: "Make sure remote system is initialized correctly"
      raw: 'ln -s /usr/bin/python3 /usr/bin/python'
      register: set_symlink
      failed_when: set_symlink.rc != 0 and set_symlink.rc != 1
which is doing something like
/bin/sh -c 'ln -s /usr/bin/python3 /usr/bin/python'
on the remote system.
Further Documentation
raw module – Executes a low-down and dirty command
A common case is installing python on a system without python installed by default.
... but not only restricted to that
Playbook Keyword - pre_tasks
A list of tasks to execute before roles.
Set the order of task execution in Ansible
I need to execute a vendor-provided csh script that creates a veritable plethora of environment variables in order to run an application from a playbook.
I have built an ad-hoc script that uses screen to inject the commands I need to run inside of a csh session. This successfully allows me to run the application; but again, from an ad-hoc script and not a playbook.
### start the fsc
# launch a screen session with csh
ansible 10.1.1.103 -m shell -a "su - testdev -c 'screen -dmS testdev_fsc csh'" -b
# run vendor provided env variables script
ansible 10.1.1.103 -m shell -a "su - testdev -c 'screen -S testdev_fsc -X stuff '/export/home/testdev/tcdata/tc_cshvars^M''" -b
# execute the application
ansible 10.1.1.103 -m shell -a "su - testdev -c 'screen -S testdev_fsc -X stuff '/export/home/testdev/ccbin/fsc.sh^M''" -b
In the end, I'd like to be able to create a playbook snippet that allows the above to run/execute.
So basically you want to become user testdev and run two commands in a row in the same csh session. The following example playbook should do the job.
---
# Playbook example
- name: Play to run my script
  hosts: my_inventory_group
  remote_user: my_remote_connection_user
  tasks:
    - name: run my script
      become: true
      become_user: testdev
      shell: |-
        ./tcdata/tc_cshvars
        ./ccbin/fsc.sh
      args:
        chdir: /export/home/testdev/
        executable: /bin/csh
|- is the YAML literal block scalar marker with the strip chomping indicator (the trailing newline is removed). The whole following block is interpreted as a string with newlines preserved. Passed to the shell module, each line will be executed (as if entered in your own shell). Double-check the path of your csh executable: you must provide an absolute path.
If you have several tasks in your playbook and need to run all of them as testdev, you can move become and become_user to the play level rather than the task level, as shown in the sketch below.
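With become moved to the play level, the same playbook would look roughly like this:

---
- name: Play to run my script
  hosts: my_inventory_group
  remote_user: my_remote_connection_user
  become: true
  become_user: testdev
  tasks:
    - name: run my script
      shell: |-
        ./tcdata/tc_cshvars
        ./ccbin/fsc.sh
      args:
        chdir: /export/home/testdev/
        executable: /bin/csh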
I would like to run a sub shell command in an ad-hoc Ansible command.
Here is what I want to do :
sudo ansible myservers -m shell -a "touch /var/tmp/$(uname -n)"
It creates the remote file, but with the name of the local host; it doesn't execute the uname command on the remote servers.
I found the solution :
sudo ansible myservers -m shell -a '/bin/bash -c "toto=`uname -n` ; touch /var/tmp/\$toto.json;"'
It seems that I have to start a shell explicitly to execute sub-shell commands, but it works.
The problem comes from the double quotes in your command.
They mean that the content between the double quotes (including your $(...)) will be interpreted by the shell that executes this command (on your Ansible control node), so you get the control node's name.
If you replace the double quotes with single quotes, the shell of the control node will not interpret the content and will pass it "as-is" to the Ansible hosts. Afterwards, these hosts will interpret the $(...), so you'll get the target host's name.
See http://www.gnu.org/software/bash/manual/html_node/Single-Quotes.html and https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html
So it could be fixed with :
ansible myservers -m shell -a 'touch /var/tmp/$(uname -n)'
(I don't see why you had to use sudo)
Or (if the name of your host in your inventory is good enough), you could use :
ansible myservers -m shell -a 'touch /var/tmp/{{ inventory_hostname }}'
By the way, if you end up using this command in a playbook, another solution is to use Ansible facts: ansible_hostname or ansible_fqdn, for example.
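Inside a playbook, the fact-based variant could be a sketch like this (using the file module rather than shell):

- name: Create a file named after the target host
  file:
    path: "/var/tmp/{{ ansible_hostname }}"
    state: touch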
I am writing an Ansible playbook to automate a series of sudo commands on various hosts. When I execute these commands individually in PuTTY, I have no permission problems, as I have been granted proper access. However, when I attempt to create a playbook to do the same thing, I am told
user is not allowed to execute ... on host_name
For example, if I run $ sudo ls /root/, I have no problem, and, once I enter my password, I can see the contents of /root/.
In the case of my Ansible playbook ...
---
- hosts: servers
  tasks:
    - name: ls /root/
      shell: ls /root/
      become: true
      become_method: sudo
...I then get the error mentioned above.
Any ideas why this would be the case? It seems to be telling me I don't have permission to run a command that I otherwise could run in an individual PuTTY terminal.
[ ] automate a series of sudo commands on various hosts. When I execute these commands individually [ ]
Any ideas why this would be the case?
Sounds like you configured specific commands in the sudoers file (unfortunately you did not provide enough details; fortunately you asked for "ideas", not the real cause).
The Ansible shell module does not run the command you specify prepended with sudo - it runs the whole shell session with sudo, so the command doesn't match what you configured in sudoers.
Either allow all commands to be run with elevated privileges for the Ansible user, or use the raw module instead of shell.
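For illustration, the "allow all commands" route means a sudoers entry roughly like the one below (user name is hypothetical; adapt to your own policy). The reason a restricted entry fails is that Ansible asks sudo to run something like /bin/sh -c '<generated module invocation>' rather than ls itself.

# /etc/sudoers.d/ansible (hypothetical)
someuser ALL=(ALL) ALL          # or NOPASSWD: ALL for passwordless escalation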
I am experimenting with whether I can check the version of bundle on localhost using ansible-playbook local.yml, as shown in local.yml below.
local.yml
---
- hosts: local
  remote_user: someuser
  tasks:
    - name: Check bundle version
      shell: "{{ ansible_user_shell }} -l -c 'bundle --version'"
      args:
        chdir: "/path/to/rails/dir"
Inventory file is as follows:
hosts
[local]
127.0.0.1
[local:vars]
ansible_ssh_user=someuser
However, I got an error saying:
stderr: zsh:1: command not found: bundle
I have no idea why I am getting this error, because I confirmed that bundle is installed on localhost. I also found that the shell module does not use a login shell, so environment variables in .zshrc are not loaded; that's why I ran zsh with the -l (login shell) option. But it's not working. Is there anything I am missing?
I figured out the problem by myself. The problem was the configuration of zsh. I thought .zshrc was executed on every login. This is inaccurate: .zshrc is only loaded by interactive shells. In the above case, the command is NOT run in an interactive shell, so .zshrc was not loaded.
To load .zshrc every time I use login shell, I created .zprofile which is loaded on login shell as follows:
# include .zshrc if it exists
if [ -f "$HOME/.zshrc" ]; then
    . "$HOME/.zshrc"
fi
Another solution might be to add the -i (interactive shell) option :)
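The -i variant would change the task from the question to something like this sketch (an interactive shell does load .zshrc, but beware of anything in .zshrc that expects a terminal):

- name: Check bundle version
  shell: "{{ ansible_user_shell }} -i -c 'bundle --version'"
  args:
    chdir: "/path/to/rails/dir"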