Where would Ansible run a command if no directory is specified? - ansible

I am encountering a problem where I run the same tasks on two remote nodes and the commands are executed in different directories on each.
If I run pwd through Ansible on each remote host before this command, they return different paths, for example /usr and /usr/src. If I log into the remote hosts manually, I land in /usr/src on both (as specified in their configuration files).
Can anyone explain why this is happening? What directory does Ansible use if you run a command without specifying a chdir?

I would expect this difference to happen because, when logging in manually, you have a .bashrc that cds you into the right folder on one of those two hosts, whereas Ansible does not source the .bashrc file.
By default, ssh, and therefore Ansible, logs you into the $HOME folder of the user you tell Ansible to connect with, which you can also look up in /etc/passwd.
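For example, you can check which directory that is for a given remote user with getent (the user name and output below are only illustrative):

getent passwd some_user
# some_user:x:1001:1001::/home/some_user:/bin/bash
# the sixth field is the $HOME a plain Ansible command starts in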
Another reason I could see for this would be that you use one user to log into the node but then become another one.
inventory.yml
all:
  hosts:
    some.example.com:
      ansible_user: some_user
playbook.yml
---
- hosts: all
  tasks:
    - command: pwd # still you will be in /home/some_user
      become: yes
      become_user: some_other_user
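If you want the working directory to be predictable on both nodes rather than depending on $HOME or a .bashrc, you can set it explicitly with chdir; a minimal sketch using the path from the question:

- hosts: all
  tasks:
    - name: Run pwd from a known directory
      command: pwd
      args:
        chdir: /usr/src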

Related

What's the difference between ansible 'raw', 'shell' and 'command'?

What is the difference between raw, shell and command in the ansible playbook? And when to use which?
command: executes a command on the target host without processing it through a shell, so shell features like pipes, redirects and environment variable expansion are not available.
It can be used to launch scripts (.sh) or to execute simple commands. For example:
- name: Cat a file
  command: cat somefile.txt
- name: Execute a script
  command: somescript.sh param1 param2
shell: executes a command on the target host through a shell (/bin/sh by default).
It can be used if you want to execute more complex commands, for example commands concatenated with pipes. For example:
- name: Look for something in a file
  shell: cat somefile.txt | grep something
raw: executes a command directly over SSH without Ansible's module machinery, so it works even when the Python interpreter is missing on the target host; a common use case is installing Python itself. This module should not be used in other cases (where command and shell are suggested).
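To make the command/shell distinction concrete, here is a small contrast sketch (the file name is illustrative; ignore_errors is only there so the play continues past the failing task). With command, the pipe and the words after it are passed to cat as literal arguments, so cat exits non-zero; with shell, the whole line is handed to /bin/sh -c, which interprets the pipe.

- name: Fails - command does not interpret the pipe
  command: cat somefile.txt | grep something
  ignore_errors: yes
- name: Works - shell interprets the pipe
  shell: cat somefile.txt | grep something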
Since I was stumbling over the same question, I wanted to share my findings here too.
The command and shell modules, as well as gather_facts (annot.: setup.py), depend on a properly installed Python interpreter on the remote node(s). If that requirement isn't fulfilled, one may run into errors because it isn't possible to execute
python <ansiblePython.py>
In a Debian 10 (Buster) minimal installation, for example, python3 was installed but the symlink to python was missing.
To initialize the system correctly before applying all other roles, I've used an approach with the raw module:
ansible/initSrv/main.yml
- hosts: "{{ target_hosts }}"
  gather_facts: no # is necessary because setup.py depends on Python too
  pre_tasks:
    - name: "Make sure remote system is initialized correctly"
      raw: 'ln -s /usr/bin/python3 /usr/bin/python'
      register: set_symlink
      failed_when: set_symlink.rc != 0 and set_symlink.rc != 1
which is doing something like
/bin/sh -c 'ln -s /usr/bin/python3 /usr/bin/python'
on the remote system.
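The same bootstrap can also be done ad hoc from the control machine before the first playbook run; a sketch (the inventory group name new_hosts is illustrative):

ansible new_hosts -m raw -a 'test -e /usr/bin/python || ln -s /usr/bin/python3 /usr/bin/python' --become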
Further Documentation
raw module – Executes a low-down and dirty command
A common case is installing python on a system without python installed by default.
... but it is not restricted to that
Playbook Keyword - pre_tasks
A list of tasks to execute before roles.
Set the order of task execution in Ansible

Can't run sudo command in Ansible playbook

I am writing an Ansible playbook to automate a series of sudo commands on various hosts. When I execute these commands individually in PuTTY, I have no permission problems, as I have been granted proper access. However, when I attempt to create a playbook to do the same thing, I am told
user is not allowed to execute ... on host_name
For example, if I do $ sudo ls /root/, I have no problem; once I enter my password, I can see the contents of /root/.
In the case of my Ansible playbook ...
---
- hosts: servers
  tasks:
    - name: ls /root/
      shell: ls /root/
      become: true
      become_method: sudo
...I then get the error mentioned above.
Any ideas why this would be the case? It seems to be telling me I don't have permission to run a command that I otherwise could run in an individual PuTTY terminal.
[...] automate a series of sudo commands on various hosts. When I execute these commands individually [...]
Any ideas why this would be the case?
Sounds like you configured specific commands in the sudoers file (unfortunately you did not provide enough details, fortunately you asked for "ideas" not the real cause).
The Ansible shell module does not run the command you specify prefixed with sudo - it runs the whole module execution (a shell session) with sudo, so the command sudo sees does not match what you configured in sudoers.
Either allow all commands to be run with elevated privileges for the Ansible user, or use the raw module instead of shell.
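To illustrate the mismatch, a hedged sketch; the sudoers entry, user name and temporary file path are illustrative, not what Ansible will literally generate:

# sudoers entry that only permits one specific command:
#   deploy ALL=(root) NOPASSWD: /bin/ls /root/
#
# roughly what sudo is actually asked to run for a shell/command task:
#   sudo -H -S -n /bin/sh -c 'echo BECOME-SUCCESS-xyz ; /usr/bin/python /home/deploy/.ansible/tmp/.../AnsiballZ_command.py'
#
# "/bin/sh -c ..." does not match "/bin/ls /root/", so sudoers refuses it.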

Stuck on debugging an ansible task running remote command that freezes

I'm setting up an Ansible role to install Ahsay Offsite Backup Server.
After downloading and extracting the compressed file containing the software, I need to run the install script. I've determined that the step that fails to complete is an early check in the script that verifies the current user has appropriate permissions.
When I run the playbook, the final task never finishes.
The role
- name: Check if OBS install files have already been downloaded
  stat:
    path: /tmp/obs/version.txt
  register: stat_result
- name: Ensures /tmp/obs exists
  file: path=/tmp/obs state=directory
- name: Download and extract OBS install files
  unarchive:
    src: https://ahsay-dn.ahsay.com/v6/obsr/62900/obsr-nix.tar.gz
    dest: /tmp/obs
    remote_src: true
    validate_certs: no
  when: stat_result.stat.exists == false
- name: Install OBS
  command: bash -lc "/tmp/obs/bin/install.sh > /tmp/install_output.log"
The playbook is configured so that all tasks run with become (sudo).
If I run the command in a shell on the remote host, it executes successfully.
I've hit similar issues before where commands fail because (in the case of rvm) they need the bash_profile to be loaded first to pull in a bunch of environment variables. The fix for that was, as I've done above, to wrap the command in bash -lc "...", but that hasn't helped this time.
I'd love any suggestions of how I could continue troubleshooting this one.
You are checking for file presence before ensuring the folder exists.
Some applications require a tty, and when they are not attached to one they stop and ask a question you never see.
To really debug while the command is "stuck", connect to the offending machine and try to analyze what the script is doing: look in its /proc/${PID} folder (if you're on Linux), maybe attach to it via strace -p ${PID}, and maybe dup its stderr to see whether it prints something that makes sense to you.
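For example, from a second SSH session on the stuck host (replace <PID> with whatever ps reports for the hung script):

ps aux | grep install.sh      # find the PID of the stuck script
ls -l /proc/<PID>/fd          # which files/pipes/ttys it holds open
cat /proc/<PID>/wchan         # the kernel wait channel it is sleeping in
strace -p <PID>               # e.g. a blocking read(0, ...) means it is waiting for input on stdin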
Also, you don't really have to use command; you can use the shell module and specify its args to make sure the command runs from a specific folder, like so:
- name: Install OBS
  shell: |
    ./bin/install.sh \
      1> /tmp/install.output.log \
      2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs
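If the installer does turn out to be waiting for a confirmation on stdin (a guess based on the tty remark above, not something verified against this installer), feeding it answers non-interactively is one option:

- name: Install OBS non-interactively
  shell: yes "" | ./bin/install.sh 1> /tmp/install.output.log 2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs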

remote tmp directory not set for ansible script execution

I need to install a binary on remote servers. These are the steps I am performing:
1. Copy/scp the binary to the remote server
2. Run the installer in silent mode
Step #1 copies the binaries to /tmp; on the remote hosts /tmp has very little space and scp fails once /tmp is full. I understand that by default Ansible copies its scripts/files to the /tmp directory and removes them once the activity is done. Since /tmp is so small, I need to use the user's home directory for copying the binaries.
Below is ansible.cfg:
remote_user = testaccount
host_key_checking = False
scp_if_ssh=True
remote_tmp = $HOME/.ansible/tmp
Below is the playbook:
- name: deploy binaries
hosts: test
strategy: free
become_method: 'sudo'
tasks:
- name: transfer
copy: src=./files/weblogic.jar dest=$HOME mode=0777
register: transfer
- debug: var=transfer.stdout
playbook execution:
ansible-playbook --inventory="hosts" --ask-pass --ask-become-pass --extra-vars="ansible_ssh_user=<unixaccount>" copybinaries.yml
Even with the above config the binaries are not copied to the user's home. I have made sure the $HOME/.ansible/tmp directory exists and have even hard-coded it as /home/testaccount/.ansible/tmp.
Do any other configs need to be overridden in ansible.cfg?
Although you still have not included a coherent MCVE, I can only guess:
either you are running the task with a become-an-unprivileged-user option;
or your inventory file, configuration file, execution call, and playbook contain unnecessary and contradictory settings, making Ansible run in a more or less nondeterministic way.
Ansible uses the system temp directory for tasks that become a different unprivileged user. This line in the Ansible source determines that.
You can only specify /tmp (default) or a subdirectory of /var/tmp for such tasks (with remote_tmp). See the comment in the Ansible code:
# When system is specified we have to create this in a directory where
# other users can read and access the temp directory. This is because
# we use system to create tmp dirs for unprivileged users who are
# sudo'ing to a second unprivileged user. The only dirctories where
# that is standard are the tmp dirs, /tmp and /var/tmp. So we only
# allow one of those two locations if system=True. However, users
# might want to have some say over which of /tmp or /var/tmp is used
# (because /tmp may be a tmpfs and want to conserve RAM or persist the
# tmp files beyond a reboot. So we check if the user set REMOTE_TMP
# to somewhere in or below /var/tmp and if so use /var/tmp. If
# anything else we use /tmp (because /tmp is specified by POSIX nad
# /var/tmp is not).
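So if the become-to-unprivileged-user case applies here, the only way to move those temporary files off /tmp is to point remote_tmp somewhere under /var/tmp; a sketch of the ansible.cfg line (the subdirectory name is illustrative):

remote_tmp = /var/tmp/.ansible-tmp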

Ansible doesn't load ~/.profile

I'm asking myself why Ansible doesn't source the ~/.profile file before executing the template module on a host.
The remote host's ~/.profile:
export ENV_VAR=/usr/users/toto
A single Ansible task:
- template: src=file1.template dest={{ ansible_env.ENV_VAR }}/file1
Ansible fail with:
fatal: [distant-host] => One or more undefined variables: 'dict object' has no attribute 'ENV_VAR'
Ansible does not run remote tasks (command, shell, ...) in an interactive or login shell. It's the same as when you execute a command remotely via ssh user@host "which python".
Sourcing ~/.bashrc often won't work, because the Ansible shell is not interactive and the default ~/.bashrc implementation ignores non-interactive shells (check its first lines).
The best solution I found for executing commands as the user, as if after an interactive ssh login, is:
- hosts: all
  tasks:
    - name: source user profile file
      #become: yes
      #become_user: my_user # in case you want to become a different user (make sure the acl package is installed)
      shell: bash -ilc 'which python' # example command which prints the user-specific python path
      register: which_python
    - debug:
        var: which_python
bash: '-i' means interactive shell, so .bashrc won't be ignored;
'-l' means login shell, which sources the full user profile (/etc/profile and ~/.bash_profile or ~/.profile - see the bash manual page for more details).
Explanation of my example: my ~/.bashrc sets a specific python from anaconda installed under that user.
Ansible is not running tasks in an interactive shell on the remote host. Michael DeHaan has answered this question on github some time ago:
The uber-basic description is ansible isn't really doing things through the shell, it's transferring modules and executing scripts that it transfers, not using a login shell.
i.e. Why does an SSH remote command get fewer environment variables than when run manually?
It's not a continuous shell environment basically, nor is it logging in and typing commands and things.
You should see the same result (undefined variable) by running this:
ssh <host> echo $ENV_VAR
In a lot of places I've used the structure below:
- name: Task Name
  shell: ". /path/to/profile; command"
When Ansible escalates privileges to sudo, it doesn't invoke the login shell of the sudo user.
We need to change the way sudo is called, e.g. by invoking it with the -i and -H flags:
set sudo_flags=-H in your ansible.cfg file.
If you can run as root, you can use runuser.
- shell: runuser -l '{{ install_user }}' -c "{{ cmd }}"
This effectively runs the command as install_user in a fresh login shell, as if you had used su - install_user (which loads the profile, though it might be .bash_profile and not .profile...) and then executed cmd.
I'd try not to run everything as root just so you can run it as someone else, though...
If you can modify the configuration of your target host and don't want to change your Ansible YAML code, you can try this:
Add the variable ENV_VAR=/usr/users/toto to the /etc/environment file rather than to ~/.profile.
shell: "bash -l scala -version"
by using bash -l will allow ansible to load corresponding bash_profile.
bash: '-i' (interactive shell) won't allow the ansible to run other task.
Add the variable ENV_VAR=/usr/users/toto to the /etc/environment file rather than to ~/.profile.
You really can use /etc/environment, but only for variables with static values. A variable whose value is derived from a command or from another variable doesn't work there. For example, if we put this line into /etc/environment:
XDG_RUNTIME_DIR=/run/user/$(id -u)
Ansible sees literally XDG_RUNTIME_DIR=/run/user/$(id -u), not XDG_RUNTIME_DIR=/run/user/1012.
And if we put this line into ~/.bash_profile or ~/.bashrc:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
The user sees XDG_RUNTIME_DIR=/run/user/1012 (if the user's id is 1012) when working manually, but Ansible doesn't get the XDG_RUNTIME_DIR variable at all.
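A hedged workaround sketch for that case: compute the dynamic part inside the play and pass it through the task-level environment keyword instead of relying on a login shell (the systemctl command is just an illustrative consumer of the variable):

- name: Determine the UID of the connecting user
  command: id -u
  register: uid_result
- name: Run something that needs XDG_RUNTIME_DIR
  command: systemctl --user status
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ uid_result.stdout }}"

The gathered fact ansible_user_uid could replace the extra id -u task when fact gathering is enabled.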
