remote tmp directory not set for ansible script execution

I need to install a binary on remote servers. These are the tasks I am performing:
1. Copy/scp the binary to the remote server
2. Run the installer in silent mode
Step #1 copies the binaries to /tmp, and on the remote hosts /tmp has very little space, so the scp fails once /tmp is full. I understand that by default Ansible copies scripts/files to /tmp and removes them once the activity is done. Since /tmp is so small, I need to use the user's home directory for the binaries instead.
Below is my ansible.cfg:

[defaults]
remote_user = testaccount
host_key_checking = False
remote_tmp = $HOME/.ansible/tmp

[ssh_connection]
scp_if_ssh = True
Below is the playbook:

- name: deploy binaries
  hosts: test
  strategy: free
  become_method: 'sudo'
  tasks:
    - name: transfer
      copy: src=./files/weblogic.jar dest=$HOME mode=0777
      register: transfer
    - debug: var=transfer.stdout
playbook execution:
ansible-playbook --inventory="hosts" --ask-pass --ask-become-pass --extra-vars="ansible_ssh_user=<unixaccount>" copybinaries.yml
Even with the above config the binaries are not copied to the user's home. I have made sure the $HOME/.ansible/tmp directory exists, and I even hard-coded the path as /home/testaccount/.ansible/tmp.
Do any other settings need to be overridden in ansible.cfg?

Although you still have not included a coherent MCVE, I can guess:
either you are running the task with a become-an-unprivileged-user option;
or your inventory file, configuration file, execution call, and playbook contain unnecessary, contradictory settings that make Ansible behave in a more or less nondeterministic way.
Ansible uses the system temp directory for tasks run as a different user; this line in the Ansible source determines it.
For such tasks you can only specify /tmp (the default) or a subdirectory of /var/tmp (with remote_tmp). See the comment in the Ansible code:
# When system is specified we have to create this in a directory where
# other users can read and access the temp directory. This is because
# we use system to create tmp dirs for unprivileged users who are
# sudo'ing to a second unprivileged user. The only directories where
# that is standard are the tmp dirs, /tmp and /var/tmp. So we only
# allow one of those two locations if system=True. However, users
# might want to have some say over which of /tmp or /var/tmp is used
# (because /tmp may be a tmpfs and they want to conserve RAM, or they
# want to persist the tmp files beyond a reboot). So we check if the
# user set REMOTE_TMP to somewhere in or below /var/tmp and if so use
# /var/tmp. If anything else we use /tmp (because /tmp is specified by
# POSIX and /var/tmp is not).
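
In practice, if the tasks must become an unprivileged user, the workaround is to point remote_tmp at a subdirectory of /var/tmp rather than $HOME. A minimal ansible.cfg sketch (the exact path is an example, not a requirement):

[defaults]
remote_user = testaccount
# for unprivileged-to-unprivileged become, only /tmp or a path under /var/tmp is honored
remote_tmp = /var/tmp/.ansible/tmp

[ssh_connection]
scp_if_ssh = True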

Related

Where would Ansible run a command if no directory is specified?

I am encountering a problem where I run the same tasks on two remote nodes and the directories those commands are executed in are different.
If I run pwd through Ansible on each remote host before this command, they return different paths, for example /usr and /usr/src. If I log into the remote hosts manually, I land in /usr/src on both (as specified in their configuration files).
Can anyone explain why this is happening? What directory does Ansible use if you run a command without specifying a chdir?
I would expect this difference to happen because, when logging in manually, one of those two hosts has a .bashrc that cds you into the right folder, whereas Ansible does not source .bashrc.
By default, ssh (and therefore Ansible) logs you into the $HOME directory of the user you tell Ansible to connect with, which you can also look up in /etc/passwd.
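You can check that directory directly on the host, for example (assuming getent is available):

getent passwd some_user | cut -d: -f6   # the sixth passwd field is the user's home directory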
Another reason I could see for this would be that you use one user to log into the node but then become another one:
inventory.yml

all:
  hosts:
    some.example.com:
      ansible_user: some_user

playbook.yml

---
- hosts: all
  tasks:
    - command: pwd   # you will still be in /home/some_user
      become: yes
      become_user: some_other_user
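
If you need a command to run from a specific directory regardless of login shells, it is safer to set chdir explicitly. A minimal sketch (the path is an example):

- hosts: all
  tasks:
    - command: pwd
      args:
        chdir: /usr/src   # pin the working directory instead of relying on $HOME or .bashrc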

Stuck on debugging an ansible task running remote command that freezes

I'm setting up an Ansible role to install Ahsay Offsite Backup Server.
After downloading and extracting the compressed file containing the software, I need to run the install script. I've determined that the step failing to run is early in the script, where it checks that the current user has appropriate permissions.
When I run the playbook, the final task never finishes.
The role:

- name: Check if OBS install files have already been downloaded
  stat:
    path: /tmp/obs/version.txt
  register: stat_result

- name: Ensures /tmp/obs exists
  file: path=/tmp/obs state=directory

- name: Download and extract OBS install files
  unarchive:
    src: https://ahsay-dn.ahsay.com/v6/obsr/62900/obsr-nix.tar.gz
    dest: /tmp/obs
    remote_src: true
    validate_certs: no
  when: stat_result.stat.exists == false

- name: Install OBS
  command: bash -lc "/tmp/obs/bin/install.sh > /tmp/install_output.log"
The playbook configuration is for all tasks to become sudo.
If I run the command in a shell on the remote host, it executes successfully.
I've hit similar issues before where commands fail because (in the case of rvm) they require the bash profile to load and pull in a bunch of environment variables first. The fix then was what I've done above, wrapping the command in bash -lc "...", but that hasn't helped this time.
I'd love any suggestions of how I could continue troubleshooting this one.
You are checking for the file's presence before ensuring the folder exists.
Some applications require a tty, and when they are not attached to one they stop to ask a question.
To really debug while the command is "stuck", connect to the offending machine and try to analyze what the script is doing: look in its /proc/${PID} folder (if you're on Linux), maybe attach to it via strace -p ${PID}, and maybe dup its stderr to see whether it prints something that makes sense to you.
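A rough sketch of that inspection on the remote host (Linux assumed; the pgrep pattern is an example):

PID=$(pgrep -f install.sh)       # find the hung script's process ID
ls -l /proc/${PID}/fd            # which files, pipes, and ttys it holds open
grep State /proc/${PID}/status   # S (sleeping) vs D (I/O wait) vs T (stopped) hints at the hang
strace -p ${PID}                 # attach and watch its system calls live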
Also, you don't really have to use command; you can use the shell module and specify its args to make sure the command runs from a specific folder, like so:
- name: Install OBS
  shell: |
    ./bin/install.sh \
      1> /tmp/install.output.log \
      2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs

How to copy file to local directory using Ansible?

I'm trying to set up a playbook that will configure my development system. I'd like to copy the hosts file from my playbook's "files" directory to the /etc directory on my system. Currently I'm doing the following:
# main.yml
- hosts: all
  tasks:
    - copy: src=files/hosts
            dest=/etc/hosts
            owner=root
            group=wheel
            mode=0644
            backup=true
      become: true

# inventory
localhost ansible_connection=local
When I run the playbook I'm getting this error:
fatal: [localhost]: FAILED! => {... "msg": Failed to get information on remote file (/etc/hosts): MODULE FAILURE"}
I believe this is because copy is meant to copy a file to a remote file system. So how do you copy a file to your local management system? I did a Google search and everything talks about the former; I didn't see this addressed in the Ansible docs.
Your task is ok.
You should add --ask-sudo-pass to the ansible-playbook call.
If you run with -vvv you can see that the command starts with sudo -H -S -n -u root /bin/sh -c echo BECOME-SUCCESS-somerandomstring (followed by a call to the Python script). If you execute it yourself, you'll get a sudo: a password is required message. Ansible quite unhelpfully replaces this error message with its own: Failed to get information on remote file (/etc/hosts): MODULE FAILURE.
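For example, with the file names used above:

ansible-playbook -i inventory main.yml --ask-sudo-pass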

Ansible Synchronize not able to create directory as root

I'm working on building an Ansible playbook and I'm using Vagrant as a test platform before I apply the playbook to a remote server.
I'm having issues getting Synchronize to work. I have some files that I need to move up to the server as part of the deployment.
Here's my playbook. I put the shell: whoami in there to make sure commands were running as root.
---
- hosts: all
  sudo: yes
  tasks:
    - name: who am I
      shell: whoami

    - name: Sync up www folder
      synchronize: src=www dest=/var
When I run this I get this:
failed: [default] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh -i /Users/dan/.vagrant.d/insecure_private_key -o StrictHostKeyChecking=no -o Port=2222' --out-format='<<CHANGED>>%i %n%L' www vagrant@127.0.0.1:/var", "failed": true, "rc": 23}
msg: rsync: recv_generator: mkdir "/var/www" failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1236) [sender=3.1.1]
FATAL: all hosts have already failed -- aborting
If I'm supplying sudo: yes, shouldn't all commands be run as root, including synchronize?
The Ansible synchronize module page has some big hairy warnings:
The remote user for the dest path will always be the remote_user, not the sudo_user.
There's a suggestion to wrap rsync with sudo like this:
# Synchronize using an alternate rsync command
synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rsync"
There's also a suggestion to use verbosity to debug what's really going on; in this case, that means adding -vvv or even -vvvv to your ansible-playbook command line.
Finally, this is a great time to use proper permissions, especially for non-system files like a www dir. This will solve your problem in the process.
# don't use recurse here unless you are confident how it works with directories.
- file: dest=/var/www state=directory owner=www-data group=www-data mode=0755
- synchronize: src=www dest=/var
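
Applied to the playbook above, the rsync_path suggestion would look roughly like this (a sketch; it assumes the connecting user may run sudo rsync on the remote host without a password prompt, since rsync cannot answer one mid-transfer):

- name: Sync up www folder as root on the remote end
  synchronize: src=www dest=/var rsync_path="sudo rsync"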

Sudo does not work with Ansible AWX

I have set sudo_user in /etc/ansible/ansible.cfg.
In my playbook I have set remote_user, sudo_user, and sudo: yes (I also tried sudo: True) at the same level as hosts:.
I then use a role that does:

- shell: cp -f src /usr/local/bin/dest
  sudo: yes
and get
stderr: cp: cannot create regular file `/usr/local/bin/dest': Permission denied
The credentials in AWX are set correctly - I am able to manually log in as the desired user on the remote machine and copy the file with sudo cp. I have no idea what I'm doing wrong.
What's your sudo user? Your base playbook should look something like this:
# Base System: this is my base playbook
- name: Base Playbook
  hosts: all
  user: myuser
  sudo: True
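
As an aside, Ansible 1.9 and later replaced the sudo keywords with become; the equivalent sketch would be:

- name: Base Playbook
  hosts: all
  remote_user: myuser
  become: yes          # replaces sudo: True
  become_method: sudo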
