Ansible cannot unarchive using 'root' username

I have the classic unarchive example in my playbook as follows:
- name: Extract foo.tgz into /var/lib/foo
  ansible.builtin.unarchive:
    src: foo.tgz
    dest: /tmp
I get the error:
"Commands \"gtar\" and \"tar\" not found. Command \"unzip\" not found."
I did have a look at the answer here. However, in my case the issue is specific to the 'root' user; with the user john, the unarchive succeeds.
The following test succeeds for both root and john:
ansible all -i hosts -m command -a "which tar" -l hostname --user [root,john]
... results in
hostname | CHANGED | rc=0 >> /usr/local/bin/tar
... and successfully finds the 'tar' binary.
What might be the issue?
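One thing worth checking, given that `which tar` resolves to /usr/local/bin/tar: a non-interactive root session may not have /usr/local/bin on its PATH, while john's does. As a sketch (the fallback PATH value here is an assumption about your hosts, not taken from the question), you can make the search path explicit for the task:

```yaml
- name: Extract foo.tgz into /var/lib/foo
  ansible.builtin.unarchive:
    src: foo.tgz
    dest: /tmp
  environment:
    # prepend /usr/local/bin in case root's non-interactive PATH omits it
    PATH: "/usr/local/bin:{{ ansible_env.PATH | default('/usr/sbin:/usr/bin:/sbin:/bin') }}"
```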

Related

ansible tower cannot find name for the group id error for synchronize module

I am using the synchronize module to copy a directory from NFS to a local path. The user (SVC12345) which runs the playbook from Ansible Tower is not present in /etc/passwd or /etc/group.
When the synchronize task is invoked, it fails with the error below:
"msg": "Warning: Permanently added 'hostname,1.2.3.4' to the list of known hosts\r\n/usr/bin/id: cannot find name for group ID 12345\nrsync: change_dir \"/app/nfs_share_path/DIR1\" failed: No such file or directory"
"rc": "23"
"cmd": "sshpass -d4 /bin/rsync --delay-updates -F --compress --dry-run --archive --rsh='/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo rsync' --out-format='<<CHANGED>>%i %n%L' /app/nfs_share_path/DIR1 SVC12345@hostname:/app/path/util"
My ansible task
- name: Test
  become: yes
  become_user: local_user
  synchronize:
    src: /app/nfs_share_path/DIR1   # shared directory
    dest: /app/path/util
    owner: yes
    group: yes
I expect this task to be executed as "local_user" (since I have specified become_user); instead it performs the task as the SVC12345 user.

How to copy content provided in command prompt with special chars in a file using Ansible?

I need to run a script on multiple nodes using Ansible. The script is provided on the command line and will change between runs; it also contains special characters.
I tried to copy the content into a file using --extra-vars, but because there is a single quote ' in the script, it fails.
The content of the script:
---
- hosts: all
  become: true
  tasks:
    - name: copy extra variables to a file
      copy:
        content: |
          {{ command }}
        dest: /tmp/commands.sh
      delegate_to: localhost
    - name: copy commands.sh to nodes
      copy:
        src: /tmp/commands.sh
        dest: /tmp/
        mode: 0755
    - name: run the commands in the nodes
      shell: /bin/bash /tmp/commands.sh
      register: command_output
    - name: print output
      debug:
        msg: "{{ command_output.stdout_lines }}"
Let's say I am giving the command as below:
ansible-playbook -i hosts.ini store-in-file-and-run-in-nodes.yaml -e "command='
date
ps -ef | grep httpd
echo "That's all folks"
'
"
Here, due to the single quote in echo "That's all folks", the command fails. It works fine if I escape the single quote. It looks easy to escape a single ' as per this script, but my original script has multiple ' as well as other special characters.
The commands I provide on the command line should be stored as-is ...
Please help me find a solution.
Thanks
According to your description, it should be possible to provide the content of command.sh upfront, in a file, instead of in an extra variable, and create the file from it. Passing it as a variable seems to be an unnecessary task, adding complexity instead of reducing it.
cat > command.sh <<EOF
date
ps -ef | grep ssh
echo "That's all folks"
EOF
The command.sh can then simply be copied over and executed. For this, use the script module, which runs a local script on a remote node after transferring it.
---
- hosts: test
  become: true
  gather_facts: false
  tasks:
    - name: Exec 'command.sh' on Remote Node
      script: command.sh
      register: result
    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
resulting in an output of
TASK [Exec 'command.sh' on Remote Node] *******************************
changed: [test.example.com]
TASK [Show result] ****************************************************
ok: [test.example.com] =>
msg: |-
Fri Aug 12 15:56:37 CEST 2022
root 709 1 0 Aug11 ? 00:00:00 /usr/sbin/sshd -D
root 168296 709 24 15:56 ? 00:00:00 sshd: user [priv]
user 168298 168296 1 15:56 ? 00:00:00 sshd: user@pts/0
root 168485 168482 0 15:56 pts/0 00:00:00 grep ssh
That's all folks
Another thing you should consider is that you currently (try to) allow users to execute unvalidated code on remote machines. Additionally, and depending on what you try to achieve, your use case can probably be condensed down to simple Ansible ad-hoc commands.
ansible test -m shell -a 'date; ps -ef | grep ssh; echo "End of task"; echo " "'
The description
"I need to provide the script from command line and it will be changing."
sounds more like an anti-pattern for Ansible. It should be possible to identify common administrative and usual operational tasks beforehand, describe them in a simple playbook, and allow users to execute only that. Otherwise you could have a look at parallel or cluster shells.

Galaxy role. The module file was not found in configured module paths. Additionally, core modules are missing

This is from a galaxy role (ashwin_sid.gaia_fw1) that I'm trying to implement.
Ansible version is 2.8.4
As part of the playbook it logs in and runs a show command. The output is then supposed to go to "BACKUP", but it throws this error: "The module file was not found in configured module paths. Additionally, core modules are missing".
This is the playbook:
serial: 1
gather_facts: no
tasks:
  - name: BACKUP
    import_role:
      name: ashwin_sid.gaia_fw1
      tasks_from: backup
I think this is where it breaks, where it references this file:
- name: create dir
  local_action: file path=={{ logdir | default('../BACKUP') }}/{{ r0.stdout }} state=directory
This is the task with the error in verbose mode.
TASK [ashwin_sid.gaia_fw1 : create dir] ****************************************************************************************************************************************************************
task path: /app/sandbox/playbooks/ashwin_sid.gaia_fw1/tasks/backup.yml:23
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: xxxxx
<localhost> EXEC /bin/sh -c 'echo ~xxxxx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/xxxxx/.ansible/tmp/ansible-tmp-1569528903.45-71335581192935 `" && echo ansible-tmp-1569528903.45-71335581192935="` echo /home/xxxxx/.ansible/tmp/ansible-tmp-1569528903.45-71335581192935 `" ) && sleep 0'
fatal: [lab_B]: FAILED! => {
"msg": "The module file was not found in configured module paths. Additionally, core modules are missing. If this is a checkout, run 'git pull --rebase' to correct this problem."
}
I'm not sure what other information to provide.
I've created the "BACKUP" directory, and I don't think it's a permissions issue. It logs in fine and I think it runs the command; it just can't write?
You have an extra = in your playbook:
"local_action: file path=={{"
should be:
"local_action: file path={{"
After removing that extra =, it should work for you.
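As a side note, the same task in YAML dict form is less error-prone than the key=value shorthand, where a doubled = like path== slips through easily. A sketch using the file module with delegate_to (equivalent to local_action):

```yaml
# equivalent to the local_action shorthand above, with a single '='
- name: create dir
  file:
    path: "{{ logdir | default('../BACKUP') }}/{{ r0.stdout }}"
    state: directory
  delegate_to: localhost
```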

Chown Not Permitted, Has full Sudo Access?

I have an Ansible role which I'm trying to migrate to macOS.
The following step fails:
- name: get go user details
  command: sh -c 'echo $HOME'
  become: true
  become_user: "{{ go_user }}"
  become_flags: -Hi
  changed_when: false
  register: go_user_check
The error emitted is:
TASK [default : get go user details] *******************************************
fatal: [127.0.0.1]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: /var/tmp/ansible-tmp-1533765311.28-97292171123102/: Operation not permitted\nchown: /var/tmp/ansible-tmp-1533765311.28-97292171123102/command.py: Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
This same step works just fine on all the Linux distributions
I'm testing against. When I execute sudo -Hiu username sh -c 'echo $HOME', I get what I would expect: /Users/travis.
When I execute the following:
ansible -i 127.0.0.1, -c local --become --become-user travis \
-m command -a 'sh -c "echo $HOME"' all
I get exactly what I'd expect, /Users/travis.
I'm reading the documentation linked to by Ansible but I'm not seeing a workaround.
Is there a different way I should be executing this on OSX?
I was able to get it to proceed with the following modification:
diff --git a/tasks/discover/go.yml b/tasks/discover/go.yml
index 090ce7d..f0c2b89 100644
--- a/tasks/discover/go.yml
+++ b/tasks/discover/go.yml
@@ -3,9 +3,10 @@
   command: sh -c 'echo $HOME'
   become: true
   become_user: "{{ go_user }}"
-  become_flags: -Hi
   changed_when: false
   register: go_user_check
+  vars:
+    ansible_ssh_pipelining: true
 - name: set go user home
   set_fact: go_user_home="{{ go_user_check.stdout_lines[0] }}"
Pipelining appears to dodge the file modification that is failing. Unfortunately, I don't have the answer as to why it's failing, so other answers are still welcome as I'd like to get to what is actually happening.
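For what it's worth, the same pipelining behavior can also be enabled globally in ansible.cfg instead of per-task (a sketch; whether you want it for every task is a judgment call, since pipelining has its own sudo/requiretty caveats):

```ini
# ansible.cfg
[ssh_connection]
pipelining = True
```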
There appear to be a lot of caveats to using Ansible on macOS, so I will probably have to have my tasks diverge completely based on whether it's Linux or macOS.

Ansible update user password

ansible 192.168.1.115 -s -m shell -a "echo -e 'oldpassword\nnewpassword\nnewpassword' | passwd myuser" -u myuser --ask-sudo-pass
I would like to update an existing user with a new password. I tried this command, but it doesn't work.
Appreciate any tips!
You can leverage the user module to quickly change the password for the desired account. Ansible doesn't allow you to pass a cleartext password to the user module, so you have to install a password hashing library to be leveraged by Python.
To install the library:
sudo -H pip install passlib
Then simply execute your command:
ansible 192.168.1.115 -s -m user -a "name=root update_password=always password={{ yourpassword | password_hash('sha512') }}" -u myuser --ask-sudo-pass
Hope that helps.
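If you want to sanity-check locally what kind of hash password_hash('sha512') produces, openssl can generate the same SHA-512 crypt scheme (an alternative to passlib for this check; assumes openssl 1.1.1 or newer, which added the -6 option):

```shell
# Generate a SHA-512 crypt hash suitable for the user module's password field.
# The fixed salt 'random_salt' is only for illustration; omit -salt to get a random one.
openssl passwd -6 -salt random_salt 'YourPassword'
# Output starts with $6$random_salt$ — the $6$ prefix marks the SHA-512 crypt scheme.
```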
Create your shadow password (Linux) with:
python -c 'import crypt; print(crypt.crypt("YourPassword", "$6$random_salt"))'
Create
update_pass.yml
and execute your ansible-playbook as a sudoer (bash):
ansible-playbook update_pass.yml --become --become-method='sudo' --ask-become-pass
Update password for a list of hosts using dynamic variables:
In your inventory file set a variable (pass) as the following:
ip_1 ansible_user=xxxxxx ansible_ssh_pass=xxxx ansible_sudo_pass=xxx pass='aaaa'
ip_2 ansible_user=xxxxxx ansible_ssh_pass=xxxx ansible_sudo_pass=xxx pass='bbbb'
Now in the playbook we make a backup of the shadow file and set a cron task to restore it in case something goes wrong; then we update the password:
- hosts: your_hosts
  gather_facts: no
  tasks:
    - name: backup shadow file
      copy:
        src: /etc/shadow
        dest: /tmp/shadow
        remote_src: yes   # copy on the remote host, not from the controller
      become: yes
    - name: set cron for backup
      cron:
        name: restore shadow
        hour: 'AT LEAST GIVE YOURSELF ONE HOUR TO BE ABLE TO CALL THIS OFF'
        minute: '*'
        job: "yes | cp /tmp/shadow /etc/"
      become: yes
    - name: generate hash pass
      delegate_to: localhost
      command: python -c "from passlib.hash import sha512_crypt; print(sha512_crypt.encrypt('{{ pass }}'))"
      register: hash
    - debug:
        var: hash.stdout
    - name: update password
      user:
        name: xxxxxx
        password: '{{ hash.stdout }}'
      become: yes
Now we create a new playbook to call off the cron task. We use the new password for authentication; if authentication fails, cron will remain active and restore the old password.
hosts file:
ip_1 ansible_user=xxxxxx ansible_ssh_pass=aaaa ansible_sudo_pass=aaaa
ip_2 ansible_user=xxxxxx ansible_ssh_pass=bbbb ansible_sudo_pass=bbbb
the playbook:
- hosts: your_hosts
  gather_facts: no
  tasks:
    - name: cancel cron task
      cron:
        name: restore shadow
        state: absent
!!Remember:
The pass variable contains your password, so you may consider using vault.
Give yourself enough time when setting the cron for backup to be able to call it off (second playbook).
In the worst case, cron will restore the original password.
You need to have passlib installed on your Ansible server.