run a particular command on all remote servers as a particular user? - ansible

I am trying to run a specific Ansible task as a different user than the one running the playbook. On my local box I have the playbook below. I am logged in as the david user, and I want to run the command /tek/ghy/bin/ss.sh start on all remote servers as the goldy user only.
My .yml file looks like this:
---
- name: start server
  hosts: one_box
  serial: "{{ num_serial }}"
  tasks:
    - name: start server
      command: /tek/ghy/bin/ss.sh start
      become: true
      become_user: goldy
Below is how I am running it:
david@machineA:~$ ansible-playbook -e 'host_key_checking=False' -e 'num_serial=1' start_box.yml -u david --ask-pass --sudo -U goldy --ask-become-pass
[DEPRECATION WARNING]: The sudo command line option has been deprecated in favor of the "become" command line arguments. This feature will be removed in version 2.6. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.
SSH password:
SUDO password[defaults to SSH password]:
PLAY [start server] ***************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
fatal: [remote_machineA]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of ‘/tmp/ansible-tmp-1527357815.74-165519966271795/’: Operation not permitted\nchown: changing ownership of ‘/tmp/ansible-tmp-1527357815.74-165519966271795/setup.py’: Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
What am I doing wrong here? I am running ansible 2.4.3.0.

Based on a Google search, you could be affected by this issue.
Try upgrading Ansible. Your code works on 2.5.2 (I replaced the command with a simple id on the remote server instead of /tek/ghy/bin/ss.sh start, and I used the same shell command and arguments as you provided).
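For reference, the test playbook behind the run below looks roughly like this (a sketch reconstructed from the output; the register/debug step and the http_offline test user are my substitutions, not part of the original question):

---
- name: start server
  hosts: one_box
  serial: "{{ num_serial }}"
  tasks:
    - name: start server
      command: id                 # stand-in for /tek/ghy/bin/ss.sh start
      become: true
      become_user: http_offline   # local test user instead of goldy
      register: command_output

    - debug:
        var: command_output

Here is the run: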
[ilias@optima-ansible tmp]$ ansible-playbook -e 'host_key_checking=False' -e 'num_serial=1' lala.yml -u ilias --ask-pass --sudo -U http_offline --ask-become-pass
[DEPRECATION WARNING]: The sudo command line option has been deprecated in favor of the "become" command line arguments. This feature will be removed in version 2.6. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
SSH password:
SUDO password[defaults to SSH password]:
PLAY [start server] *************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************************************
ok: [greenhat]
TASK [start server] *************************************************************************************************************************************************************************************************
changed: [greenhat]
TASK [debug] ********************************************************************************************************************************************************************************************************
ok: [greenhat] => {
    "command_output": {
        "changed": true,
        "cmd": [
            "id"
        ],
        "delta": "0:00:00.004484",
        "end": "2018-05-26 21:26:28.531838",
        "failed": false,
        "rc": 0,
        "start": "2018-05-26 21:26:28.527354",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "uid=1002(http_offline) gid=1002(http_offline) groups=1002(http_offline),984(docker)",
        "stdout_lines": [
            "uid=1002(http_offline) gid=1002(http_offline) groups=1002(http_offline),984(docker)"
        ]
    }
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
greenhat : ok=3 changed=1 unreachable=0 failed=0
[ilias@optima-ansible tmp]$ ansible --version
ansible 2.5.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ilias/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, May 16 2018, 17:50:09) [GCC 8.1.1 20180502 (Red Hat 8.1.1-1)]
[ilias@optima-ansible tmp]$
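Note that this run still uses the deprecated --sudo -U flags that the warning complains about. The equivalent become-style command line would be roughly this (a sketch, not verified against your environment):

ansible-playbook -e 'host_key_checking=False' -e 'num_serial=1' start_box.yml -u david --ask-pass --become --become-user goldy --ask-become-pass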

Related

Executing python script on remote server using ansible Error

I am logged in as root@x.x.x.12 with Ansible 2.8.3 on RHEL 8.
I wish to copy a few files to root@x.x.x.13 (RHEL 8) and then execute a Python script.
I am able to copy the files successfully using Ansible. I have also copied the SSH keys, so the connection is now password-less.
But during execution of the script I get:
'fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}'
Please note that I am a novice to Ansible.
I guess there are some permission issues.
Please help me out if possible.
Thanks in anticipation.
**yaml_file**
-
  name: Copy_all_ansible_files_to_servers
  hosts: copy_Servers
  become: true
  become_user: root
  tasks:
    -
      name: copy_to_all
      copy:
        src: /home/testuser/ansible_project/{{item}}
        dest: /root/ansible_copy/{{item}}
        owner: root
        group: root
        mode: u=rxw,g=rxw,o=rxw
      with_items:
        - write_file.py
        - sink.txt
        - ansible_playbook_task.yaml
        - copy_codes_2.yaml
      notify:
        - Run_date_command
    -
      name: Run_python_script
      script: /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
      args:
        #chdir: '{{ role_path }}'
        executable: /usr/bin/python3.6
**inventory_file**
web_node1 ansible_host=x.x.x.13

[control]
thisPc ansible_connection=local

#Groups
[copy_Servers]
web_node1
Command: ansible-playbook copy_codes_2.yaml -i inventory.dat =>
PLAY [Copy_all_ansible_files_to_servers] *******************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [web_node1]
TASK [copy_to_all] *****************************************************************************************************************************************************************************************
ok: [web_node1] => (item=write_file.py)
ok: [web_node1] => (item=sink.txt)
ok: [web_node1] => (item=ansible_playbook_task.yaml)
ok: [web_node1] => (item=copy_codes_2.yaml)
TASK [Run_python_script] ***********************************************************************************************************************************************************************************
fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
PLAY RECAP *************************************************************************************************************************************************************************************************
web_node1 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The script task will actually copy the file from the Ansible controller to the remote server before running it. Thus, when it complains about not being able to find or access the script, it's because it's trying to copy /root/ansible_copy/write_file.py from the controller to the server.
If you don't really need the script to remain on the server after you execute it, you could remove the script from the copy task and change the script task to have the src point at /home/testuser/ansible_project/write_file.py.
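For example, the script task would then look roughly like this (a sketch; the src path is taken from your copy task and assumed to be where the file lives on the controller):

-
  name: Run_python_script
  script: /home/testuser/ansible_project/write_file.py > /root/ansible_copy/sink.txt
  args:
    executable: /usr/bin/python3.6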
Alternatively, instead of using the script command, you can manually run the script after transferring it using:
- name: run the write_file.py after it has already been transferred
  shell: python3.6 /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
(Note: the shell module is used here rather than command, because the command module does not process the > redirection; you may also need to provide the full path to your python3.6 executable.)

You need to be root to execute - ansible

I have a lab setup with an Ansible controller + node and am exploring a few areas.
So far I have set up a user account named ansible on both machines and enabled SSH key-based authentication.
I have also set up sudo permissions for the user on both machines.
When I try to run the playbook below, it works on the local machine but fails on the other node.
--- #Install Telnet
- hosts: all
  name: Install Telnet
  become: true
  become_user: ansible
  become_method: sudo
  tasks:
    - yum:
        name: telnet
        state: latest
The output is as follows:
[ansible@host1 playbooks]$ ansible-playbook telnetDeployYUM.yml
PLAY [Install Telnet] ***********************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [192.168.64.6]
ok: [192.168.64.5]
TASK [yum] **********************************************************************************************************************************************************************************
ok: [192.168.64.5]
fatal: [192.168.64.6]: FAILED! => {"changed": true, "msg": "You need to be root to perform this command.\n", "obsoletes": {"grub2": {"dist": "x86_64", "repo": "@anaconda", "version": "1:2.02-0.64.el7.centos"}, "grub2-tools": {"dist": "x86_64", "repo": "@anaconda", "version": "1:2.02-0.64.el7.centos"}}, "rc": 1, "results": ["Loaded plugins: fastestmirror\n"]}
to retry, use: --limit @/home/ansible/playbooks/telnetDeployYUM.retry
PLAY RECAP **********************************************************************************************************************************************************************************
192.168.64.5 : ok=2 changed=0 unreachable=0 failed=0
192.168.64.6 : ok=1 changed=0 unreachable=0 failed=1
[ansible@host1 playbooks]$
I am also able to manually run sudo yum on the failed target as the ansible user,
so I believe sudo is set up correctly:
[ansible@host2 root]$ sudo whoami
root
Can experts share some insights on what I am missing with respect to the failed machine? Thanks.
The playbook below should work fine:
- hosts: all
  name: Install Telnet
  become: yes
  tasks:
    - yum:
        name: telnet
        state: latest
The ansible user (or whichever user Ansible connects as) should be in the sudoers file.
You are changing your user to ansible, which is not required.
Run with -vvvv to see what Ansible is doing.
Have you set up ansible in sudoers for passwordless privilege escalation?
You are getting a message that it is waiting for the escalation prompt. That means that when you run with become, the escalation fails because sudo needs a password. Make sure your test user is in /etc/sudoers AND is marked so that it does NOT need to enter a password when running sudo commands; the entry for that user should carry the NOPASSWD: tag.
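A minimal sketch of granting that passwordless sudo from a playbook (the drop-in file name and the assumption that the remote user is literally named ansible are mine, not from the question):

- hosts: all
  become: true
  tasks:
    - name: allow passwordless sudo for the ansible user
      copy:
        content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
        dest: /etc/sudoers.d/ansible
        owner: root
        group: root
        mode: "0440"
        validate: "visudo -cf %s"

You would of course need working root access once to put this in place, for example via --ask-become-pass or by editing sudoers manually on the node.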

Unable to escalate privileges for a task in ansible even after using become

I am trying to automate a scenario using ansible.
- name: Copy NRPE Upgrade script
  template: src=nagiosclient.sh.j2 dest=/var/tmp/nagiosclient.sh

- name: Add Execute permissions of the script
  file: dest=/var/tmp/nagiosclient.sh mode=a+x

- name: Execute the NRPE script
  script: /var/tmp/nagiosclient.sh
  become: true
  tags: test
This is an excerpt of my playbook. It successfully runs the copy and add-execute-permissions tasks.
But when I try to run the execute task, it fails.
Because Ansible is trying to log in as the 'gparasha' user, the path /var/tmp is unavailable to this user, as expected.
But even after adding become: true in the task as shown above,
and even after using --become on the ansible-playbook command,
i.e. "ansible-playbook -i hosts tltd.yml --become --tags test",
I am getting a permission denied error.
Can anyone suggest what is wrong here and how to rectify it?
gparasha-macOS:TLTD gparasha$ ansible-playbook -i hosts tltd.yml --become --tags test
PLAY [Run tasks on Author] **************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************************
ok: [13.229.22.58]
fatal: [34.198.174.78]: UNREACHABLE! => {"changed": false, "msg": "Authentication failure.", "unreachable": true}
TASK [author : Execute the NRPE script] *************************************************************************************************************************************************
fatal: [13.229.22.58]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find or access '/var/tmp/nagiosclient.sh'"}
[WARNING]: Could not create retry file '/opt/ansible/TLTD/tltd.retry'. [Errno 13] Permission denied: u'/opt/ansible/TLTD/tltd.retry'
PLAY RECAP ******************************************************************************************************************************************************************************
13.229.22.58 : ok=1 changed=0 unreachable=0 failed=1
34.198.174.78 : ok=0 changed=0 unreachable=1 failed=0
It doesn't matter whether you use become or not, because the script module reads the script file from the control machine, transfers it to the target, and executes it there (with become privileges in your case).
The error comes from the fact that the script does not exist at /var/tmp/nagiosclient.sh on the control machine.
If you want to execute the copy that already exists on the target, you should use the shell module and run /var/tmp/nagiosclient.sh.
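In that case the last task would look roughly like this (a sketch; the task name and tag are copied from the excerpt above):

- name: Execute the NRPE script
  shell: /var/tmp/nagiosclient.sh
  become: true
  tags: test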
Moreover, the permission-denied message is completely unrelated: it is only a warning that a retry file could not be created, also on the control machine.

Cloud9 and ansible

When trying to run Ansible on Cloud9, some of my tasks have:
sudo_user: emr-user
HOSTS file:
[development]
localhost ansible_connection=local ansible_ssh_user=ubuntu
Running with:
ansible-playbook -i hosts site.yml --limit=development
it keeps failing on this task with:
failed: [localhost] => {"failed": true, "parsed": false}
[sudo via ansible, key=zacflhyhixxhiajrlmtitjxgpxqimnmn] password:
I believe it is related to the fact that Cloud9 runs on a password-less ubuntu root setup.
I was able to bypass it by using sudo su and then running:
ansible-playbook -i hosts site.yml --limit=development
But it doesn't feel right. Any other ideas?

Ansible yum: All packages providing ... are up to date

OK, I'm trying to learn ansible and am running into a problem doing a very basic operation.
Playbook:
---
- hosts: fedtest
  tasks:
    - name: Install httpd package
      yum: name=httpd state=latest
      sudo: yes
    - name: Starting http service
      service: name=http state=started
      sudo: yes
ansible.cfg:
[defaults]
hostfile = /home/abcd/proj/ans/hosts
remote_user = abcd
private_key_file = /home/abcd/proj/ans/.ssh/ans.priv
Ok, I run the command:
$ ansible-playbook setup_apache.yml
PLAY [fedtest]
****************************************************************
GATHERING FACTS
***************************************************************
ok: [fedtest]
TASK: [Install httpd package]
***********************************************
failed: [fedtest] => {"failed": true, "parsed": false}
BECOME-SUCCESS-ajlxizkspxrhyrqauuvywgrtojtutomb
{"msg": "", "changed": false, "results": ["All packages providing httpd are up to date"], "rc": 0}
6.719u 1.760s 0:11.33 74.7% 0+0k 0+592io 0pf+0w
OpenSSH_6.6.1, OpenSSL 1.0.1k-fips 8 Jan 2015
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to fedserwizard closed.
FATAL: all hosts have already failed -- aborting
PLAY RECAP
********************************************************************
to retry, use: --limit @/home/abcd/setup_apache.retry
fedtest : ok=1 changed=0 unreachable=0 failed=1
Exit 2
I did run the ansible-playbook command with -vvvv, and it looks like it fails to execute the shell command that echoes the BECOME-SUCCESS string so that the playbook can continue instead of erroring out. I've tried these operations on several systems, both source and destination, and I still get the same result.
What type of problem do I need to correct?
After a lot of experimenting, I noticed that the login shell on the client (receiving) side of Ansible apparently had to be /bin/bash and NOT /bin/tcsh, which is what I had.
Interestingly, according to the verbose output, /bin/sh was being explicitly called, so it was extremely troublesome that the login shell could cause an SSH-side issue at all.
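One way to make that change from Ansible itself would be roughly this (a sketch; the user name abcd comes from the ansible.cfg above, and using the user module for this is my illustration, not part of the original answer):

- hosts: fedtest
  become: yes
  tasks:
    - name: make sure the remote user's login shell is bash, not tcsh
      user:
        name: abcd
        shell: /bin/bash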
