Playbook failing execution due to permission denied - ansible

Here is the inventory content:
[osm]
osm_host ansible_port=22 ansible_host=10.20.20.11 ansible_user=ubuntu ansible_ssh_private_key_file=/path/to/key/key
And here is the playbook content:
- hosts: osm
  user: ubuntu
  become: yes
  tasks:
    - name: Download the OSM installer
      get_url: url=https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh dest=/tmp/install_osm.sh
    - name: Execute the OSM installer
      shell: /tmp/install_osm.sh
When I run ansible-playbook -i inventory play.yaml, I get the following error:
PLAY [osm] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [osm_host]

TASK [Download the OSM installer] **********************************************
ok: [osm_host]

TASK [Execute the OSM installer] ***********************************************
fatal: [osm_host]: FAILED! => {"changed": true, "cmd": "/tmp/install_osm.sh", "delta": "0:00:00.001919", "end": "2020-09-04 19:26:46.510381", "msg": "non-zero return code", "rc": 126, "start": "2020-09-04 19:26:46.508462", "stderr": "/bin/sh: 1: /tmp/install_osm.sh: Permission denied", "stderr_lines": ["/bin/sh: 1: /tmp/install_osm.sh: Permission denied"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
osm_host : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I tried to use true and yes for the become clause, but nothing changed. What am I missing?

You have to make sure that the root user has execute permission on the downloaded OSM installer. When you use become: yes without become_user, the default become user is root, so root must be able to execute your script.
Try the get_url task like this:
- hosts: osm
  user: ubuntu
  become: yes
  tasks:
    - name: Download the OSM installer
      get_url:
        url: https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh
        dest: /tmp/install_osm.sh
        mode: "0555"
    - name: Execute the OSM installer
      shell: /tmp/install_osm.sh
Play with the mode param of the get_url module.
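If you would rather not change the file mode at all, another option (a minimal sketch, not part of the original answer) is to invoke the installer through sh, which only needs read permission on the file:
- name: Execute the OSM installer
  # sh reads the file itself, so the execute bit is not required
  shell: sh /tmp/install_osm.sh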

Related

Using ansible to reboot a remote host that does not have python installed

My organization has tasked me with finding a way to use Ansible to automate rebooting some of our CCTV cameras, as we already use it for a lot of our other infrastructure.
The cameras (Axis) run armv7l GNU/Linux with some proprietary software built on top; however, Python is not installed, and after doing quite a bit of research and reaching out to the vendor, there is no "official" way to install Python without something else breaking.
That being said, I have looked around and come across two Ansible modules that could potentially do this: raw and script. All that needs to be done is to reboot these cameras.
However, I am now completely lost in finding a solution to my issue. Below is my current playbook and output.
- name: cctv restart playbook
  hosts: all
  gather_facts: no
  tasks:
    - name: restart cctv
      raw:
        cmd: reboot
The output when I run this playbook is:
PLAY [cctv restart playbook] ***************************************************************************************************************
TASK [restart cctv] ************************************************************************************************************************
fatal: [192.168.10.130]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "Shared connection to 192.168.10.130 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.10.130 closed."], "stdout": "sh: None: not found\r\n", "stdout_lines": ["sh: None: not found"]}
PLAY RECAP *********************************************************************************************************************************
192.168.10.130 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Please let me know what needs to be done to fix this, or if it is just not possible.
While the comments on your question are partly correct, the immediate source of your error message appears to be a syntax issue. The raw module does not support a cmd parameter; even targeting a regular Linux system, a playbook like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: restart cctv
      raw:
        cmd: "date"
results in the same error:
fatal: [node0]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.\r\nShared connection to localhost closed.\r\n", "stderr_lines": ["Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.", "Shared connection to localhost closed."], "stdout": "bash: None: command not found\r\n", "stdout_lines": ["bash: None: command not found"]}
Which, stripped of all the extraneous bits, reads:
bash: None: command not found
(This is at least true for Ansible core 2.14.1, which is what I'm running, and that matches the documentation for the raw module.)
You need to write your task like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: restart cctv
      raw: "date"
As @Zeitounator said in their comment, for this to work your remote device needs at least a minimal Linux-like environment with an sh command. Assuming you have that, you would still expect to see an error when attempting to run the reboot command, because it causes the connection to drop. That would look something like this:
fatal: [node0]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to node0.virt closed.", "unreachable": true}
Since you know that's going to result in an error, you can tell Ansible to ignore the failure. For example:
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: restart cctv
      raw: "reboot"
      ignore_unreachable: true
Running this playbook results in:
PLAY [all] **********************************************************************************************
TASK [restart cctv] *************************************************************************************
fatal: [node0]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to node0.virt closed.", "unreachable": true}
...ignoring
PLAY RECAP **********************************************************************************************
node0 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
(And the target system reboots.)
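If you also want the play to wait until a camera comes back, one option (a sketch, assuming the cameras expose SSH on port 22) is to poll the port from the controller with the wait_for module, which needs no Python on the device:
- name: wait for the camera to come back after the reboot
  # runs on the controller, so the Python-less camera is never contacted directly
  wait_for:
    host: "{{ ansible_host | default(inventory_hostname) }}"
    port: 22        # assumption: the cameras accept SSH connections
    delay: 15       # give the device time to actually go down first
    timeout: 300
  delegate_to: localhost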

Deploying test playbook with Docker to Ubuntu image

Hi, does anyone know why this error came up? I couldn't find a solution on Google.

[root@prdx-ansible docker_ansible]# ansible-playbook playbook.yml -i inventory.txt
PLAY [Deploy web app] *******************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [target1]
ok: [target2]
TASK [Install all dependencies] *********************************************************************************************************************************************
[WARNING]: Updating cache and auto-installing missing dependency: python3-apt
fatal: [target1]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-mark manual python python-setuptools python-dev build-essential python-pip python-mysqldb' failed: E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "rc": 100, "stderr": "E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "stderr_lines": ["E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)", "E: The package lists or status file could not be parsed or opened."], "stdout": "", "stdout_lines": []}
fatal: [target2]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-mark manual python python-setuptools python-dev build-essential python-pip python-mysqldb' failed: E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "rc": 100, "stderr": "E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "stderr_lines": ["E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)", "E: The package lists or status file could not be parsed or opened."], "stdout": "", "stdout_lines": []}
PLAY RECAP ******************************************************************************************************************************************************************
target1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
target2 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Here is the playbook:
[root@prdx-ansible docker_ansible]# cat playbook.yml
- name: Deploy web app
  hosts: target1,target2
  tasks:
    - name: Install all dependencies
      package:
        name: ['python', 'python-setuptools', 'python-dev', 'build-essential', 'python-pip', 'python-mysqldb']
        state: present
    - name: Install MySQL database
      apt: name={{ item }} state=installed
      with_items:
        - mysql-server
        - mysql-client
    - name: Start the database service
      service:
        name: mysql
        state: statred
        enabled: yes
    - name: Create database
      mysql_db: name=emploee_db state=present
    - name: Create DB user
      mysql_user:
        name: db_user
        password: Passw0rd
        priv: '*.*:ALL'
        state: present
        host: '%'
    - name: Install Flask
      pip:
        name: "{{ item }}"
        state: present
      with_items:
Couldn't create temporary file to work with [..] mkstemp (28: No space left on device)
Your target server's disk seems to be full. Check with df -h how much space is left. You may have to run apt-get clean and similar commands to free up some space.
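If you prefer to do the check and cleanup through Ansible itself, a minimal sketch (reusing the target1/target2 hosts from your playbook) could look like this:
- name: Check free disk space and clean the apt cache
  hosts: target1,target2
  tasks:
    - name: show free space on the root filesystem
      command: df -h /
      register: df_out
      changed_when: false   # read-only check, so never report "changed"
    - name: print the df output
      debug:
        var: df_out.stdout_lines
    - name: remove downloaded package archives to free space
      command: apt-get clean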
[root@prdx-ansible docker_ansible]# ansible-playbook playbook.yml -i inventory.txt
PLAY [Deploy web app] *******************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [target1]
ok: [target2]
TASK [Install all dependencies] *********************************************************************************************************************************************
[WARNING]: Updating cache and auto-installing missing dependency: python3-apt
changed: [target2]
changed: [target1]
TASK [Install MySQL database] ***********************************************************************************************************************************************
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying `name:
"{{ item }}"`, please use `name: ['mysql-server', 'mysql-client']` and remove the loop. This feature will be removed in version 2.11. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying `name:
"{{ item }}"`, please use `name: ['mysql-server', 'mysql-client']` and remove the loop. This feature will be removed in version 2.11. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
changed: [target2] => (item=[u'mysql-server', u'mysql-client'])
changed: [target1] => (item=[u'mysql-server', u'mysql-client'])
TASK [Start the database service] *******************************************************************************************************************************************
fatal: [target2]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.17.03 closed.\r\n", "module_stdout": "/bin/sh: 1: sudo: not found\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
fatal: [target1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.17.02 closed.\r\n", "module_stdout": "/bin/sh: 1: sudo: not found\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP ******************************************************************************************************************************************************************
target1 : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
target2 : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Now I get this error!

Executing python script on remote server using ansible Error

I am logged in as root@x.x.x.12 with Ansible 2.8.3 on RHEL 8.
I wish to copy a few files to root@x.x.x.13 (RHEL 8) and then execute a Python script.
I am able to copy the files successfully using Ansible. I have even copied the keys, so SSH no longer asks for a password.
But during execution of the script:
'fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}'
Please note that I am a novice with Ansible.
I guess there is some permission issue.
Please help me out if possible.
Thanks in anticipation.
yaml_file:
- name: Copy_all_ansible_files_to_servers
  hosts: copy_Servers
  become: true
  become_user: root
  tasks:
    - name: copy_to_all
      copy:
        src: /home/testuser/ansible_project/{{item}}
        dest: /root/ansible_copy/{{item}}
        owner: root
        group: root
        mode: u=rxw,g=rxw,o=rxw
      with_items:
        - write_file.py
        - sink.txt
        - ansible_playbook_task.yaml
        - copy_codes_2.yaml
      notify:
        - Run_date_command
    - name: Run_python_script
      script: /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
      args:
        #chdir: '{{ role_path }}'
        executable: /usr/bin/python3.6
inventory_file:
web_node1 ansible_host=x.x.x.13
[control]
thisPc ansible_connection=local
#Groups
[copy_Servers]
web_node1
Command: ansible-playbook copy_codes_2.yaml -i inventory.dat =>
PLAY [Copy_all_ansible_files_to_servers] *******************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [web_node1]
TASK [copy_to_all] *****************************************************************************************************************************************************************************************
ok: [web_node1] => (item=write_file.py)
ok: [web_node1] => (item=sink.txt)
ok: [web_node1] => (item=ansible_playbook_task.yaml)
ok: [web_node1] => (item=copy_codes_2.yaml)
TASK [Run_python_script] ***********************************************************************************************************************************************************************************
fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
PLAY RECAP *************************************************************************************************************************************************************************************************
web_node1 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The script module will actually copy the file from the Ansible controller to the remote server before running it. Thus, when it complains about not being able to find or access the script, it is because it is trying to copy /root/ansible_copy/write_file.py (a path on the controller) to the server.
If you don't really need the script to remain on the server after you execute it, you could remove the script from the copy task and change the script task to have the src point at /home/testuser/ansible_project/write_file.py.
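That would look something like this (a sketch using the paths from your playbook; the output redirection is omitted here):
- name: Run_python_script
  # the script module takes a controller-side path and copies the file over before running it
  script: /home/testuser/ansible_project/write_file.py
  args:
    executable: /usr/bin/python3.6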
Alternatively, instead of using the script module, you can run the script yourself after it has been transferred, using:
- name: run the write_file.py after it has already been transferred
  shell: python3.6 /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
(Note: you may need to provide the full path to your python3.6 executable)

Not able to execute service restart, copy file from non root user in ansible

---
- hosts: all
  become_user: ansible
  become: yes
  become_method: sudo
  tasks:
    - name: Restart the sshd service
      service: name=sshd state=restarted

### sudoers file entry for user on host ####
ansible ALL=(ALL) NOPASSWD:ALL
PLAY [all] ***************************************************************************
TASK [Gathering Facts] ***************************************************************
ok: [host2.domain.local]
ok: [host1.domain.local]
TASK [Restart the ssh service] *******************************************************
fatal: [host2.domain.local]: FAILED! => {"changed": false, "msg": "Unable to restart service sshd: Failed to restart sshd.service: Interactive authentication required.\n"}
fatal: [host1.domain.local]: FAILED! => {"changed": false, "msg": "Unable to restart service sshd: Failed to restart sshd.service: Interactive authentication required.\n"}
to retry, use: --limit @/root/1stplay.retry
PLAY RECAP ***************************************************************************
host1.domain.local : ok=1 changed=0 unreachable=0 failed=1
host2.domain.local : ok=1 changed=0 unreachable=0 failed=1
Lose the line:
become_user: ansible
Presumably you are logging into the target machine as ansible and want to become root, not ansible? If you do not specify become_user, root is used by default.
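For reference, a minimal sketch of the play with that line removed:
---
- hosts: all
  become: yes            # escalates to root, the default become_user
  become_method: sudo
  tasks:
    - name: Restart the sshd service
      service:
        name: sshd
        state: restarted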

Error while running simple ansible playbook

The playbook is as below:
[ansible@ansible2 outline]$ cat webserver.yaml
--- #Create an YAML from an outline
- hosts: web
  connection: ssh
  remote_user: ansible
  become: yes
  become_method: sudo
  gather_facts: yes
  vars:
    test: raju
  vars_files:
    - /home/ansible/playbooks/conf/copyright.yaml
  vars_prompt:
    - name: web_domain
      prompt: WEB DOMAIN
  tasks:
    - name: install apache web server
      yum: pkg=httpd state=latest
      notify: start the service
    - name: check service
      command: service httpd status
      register: result
    - debug: var=result
  handlers:
    - name: start the service
      service: name=httpd state=restarted
[ansible@ansible2 outline]$
And the error is as below:
[ansible@ansible2 outline]$ ansible-playbook webserver.yaml
WEB DOMAIN:
PLAY [web] *********************************************************************
TASK [setup] *******************************************************************
ok: [web2.bharathkumarraju.com]
TASK [install apache web server] ***********************************************
changed: [web2.bharathkumarraju.com]
TASK [check service] ***********************************************************
fatal: [web2.bharathkumarraju.com]: FAILED! => {"changed": true, "cmd": ["service", "httpd", "status"], "delta": "0:00:00.039489", "end": "2016-10-30 04:53:51.833760", "failed": true, "rc": 3, "start": "2016-10-30 04:53:51.794271", "stderr": "", "stdout": "httpd is stopped", "stdout_lines": ["httpd is stopped"], "warnings": ["Consider using service module rather than running service"]}
NO MORE HOSTS LEFT *************************************************************
RUNNING HANDLER [start the service] ********************************************
to retry, use: --limit @/home/ansible/outline/webserver.retry
PLAY RECAP *********************************************************************
web2.bharathkumarraju.com : ok=2 changed=1 unreachable=0 failed=1
The exit code of service httpd status is non-zero because the service was not started. Handlers are always run at the end of the play, not at the moment they are notified.
One solution is to put ignore_errors: true on the check-service task. Another solution would be to remove the handler and the notify, and put a service task after the yum task:
- service: name=httpd state=started enabled=yes
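Spelled out as full tasks, that second option might look like this (a sketch, not the original playbook):
tasks:
  - name: install apache web server
    yum:
      name: httpd
      state: latest
  - name: make sure httpd is running and enabled on boot
    service:
      name: httpd
      state: started
      enabled: yes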
Try checking the service's status using /etc/init.d:
- name: check service
  stat: path=/etc/init.d/httpd
  register: result
