Using Ansible to reboot a remote host that does not have Python installed

My organization has tasked me with finding a way to use Ansible to automate rebooting some of our CCTV cameras, as we currently use it for a lot of our other infrastructure.
The cameras (Axis) are running armv7l GNU/Linux with some proprietary software built on top; however, Python is not installed, and after doing quite a bit of research and reaching out to the vendor, there is no "official" way of installing Python without something else breaking.
That being said, I have looked around and come across two Ansible modules that could potentially do this: raw and script. All that needs to be done is to reboot these cameras.
However, I am now completely lost in finding a solution to my issue. Below are my current playbook and output.
- name: cctv restart playbook
  hosts: all
  gather_facts: no
  tasks:
    - name: restart cctv
      raw:
        cmd: reboot
The output when I run this playbook is:
PLAY [cctv restart playbook] ***************************************************************************************************************
TASK [restart cctv] ************************************************************************************************************************
fatal: [192.168.10.130]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "Shared connection to 192.168.10.130 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.10.130 closed."], "stdout": "sh: None: not found\r\n", "stdout_lines": ["sh: None: not found"]}
PLAY RECAP *********************************************************************************************************************************
192.168.10.130 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Please let me know what needs to be done to fix this, or if it is just not possible.

While the comments on your question are partly correct, the immediate source of your error message appears to be a syntax issue. The raw module does not support a cmd parameter; even targeting a regular Linux system, a playbook like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: restart cctv
      raw:
        cmd: "date"
Results in the same error:
fatal: [node0]: FAILED! => {"changed": true, "msg":
"non-zero return code", "rc": 127, "stderr": "Warning:
Permanently added 'localhost' (ED25519) to the list of
known hosts.\r\nShared connection to localhost
closed.\r\n", "stderr_lines": ["Warning: Permanently added
'localhost' (ED25519) to the list of known hosts.",
"Shared connection to localhost closed."], "stdout":
"bash: None: command not found\r\n", "stdout_lines":
["bash: None: command not found"]}
Which, stripped of all the extraneous bits, reads:
bash: None: command not found
(This is at least true for Ansible core 2.14.1, which is what I'm running, and that matches the documentation for the raw module.)
You need to write your task like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: restart cctv
      raw: "date"
As @Zeitounator said in their comment, for this to work your remote device needs to provide at least a minimal Linux-like environment with an sh command. Assuming that you have this, you would still expect to see an error when attempting to run the reboot command, because it causes the connection to drop. That would look something like this:
fatal: [node0]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh: Shared connection
to node0.virt closed.", "unreachable": true}
Since you know that's going to result in an error, you can tell Ansible to ignore the failure. For example:
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: restart cctv
      raw: "reboot"
      ignore_unreachable: true
Running this playbook results in:
PLAY [all] **********************************************************************************************
TASK [restart cctv] *************************************************************************************
fatal: [node0]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to node0.virt closed.", "unreachable": true}
...ignoring
PLAY RECAP **********************************************************************************************
node0 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
(And the target system reboots.)
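If you also need the play to wait until a camera is reachable again after the reboot, one option (a sketch only, assuming the cameras accept SSH on port 22) is to delegate a wait_for check to the control node, so the check runs where Python is available and nothing extra is needed on the target:
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: restart cctv
      raw: "reboot"
      ignore_unreachable: true
    - name: wait for the camera to come back up
      # runs on the control node; the delay gives the camera time to actually go down
      ansible.builtin.wait_for:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: 22
        delay: 15
        timeout: 300
      delegate_to: localhost
Adjust the port, delay, and timeout to whatever your cameras actually need.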

Related

Playbook failing execution due to permission denied

Here is the inventory content:
[osm]
osm_host ansible_port=22 ansible_host=10.20.20.11 ansible_user=ubuntu ansible_ssh_private_key_file=/path/to/key/key
And here is the playbook content:
- hosts: osm
  user: ubuntu
  become: yes
  tasks:
    - name: Download the OSM installer
      get_url: url=https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh dest=/tmp/install_osm.sh
    - name: Execute the OSM installer
      shell: /tmp/install_osm.sh
When I run ansible-playbook -i inventory play.yaml, I get the following error:
PLAY [osm]
TASK [Gathering Facts]
********************************************************* ok: [osm_host]
TASK [Download the OSM installer]
********************************************** ok: [osm_host]
TASK [Execute the OSM installer]
*********************************************** fatal: [osm_host]: FAILED! => {"changed": true, "cmd": "/tmp/install_osm.sh", "delta":
"0:00:00.001919", "end": "2020-09-04 19:26:46.510381", "msg":
"non-zero return code", "rc": 126, "start": "2020-09-04
19:26:46.508462", "stderr": "/bin/sh: 1: /tmp/install_osm.sh:
Permission denied", "stderr_lines": ["/bin/sh: 1: /tmp/install_osm.sh:
Permission denied"], "stdout": "", "stdout_lines": []}
PLAY RECAP
********************************************************************* osm_host : ok=2 changed=0 unreachable=0
failed=1 skipped=0 rescued=0 ignored=0
I tried to use true and yes for the become clause, but nothing changed. What am I missing?
You have to be sure that the root user has execute permission on the downloaded OSM script. When you use become: yes without become_user, the default user is root, so you need to be sure that root can execute your script.
Try get_url like this:
- hosts: osm
  user: ubuntu
  become: yes
  tasks:
    - name: Download the OSM installer
      get_url:
        url: https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh
        dest: /tmp/install_osm.sh
        mode: "0555"
    - name: Execute the OSM installer
      shell: /tmp/install_osm.sh
Play with the mode param of the get_url module.
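If the script has already been downloaded without the execute bit, a hedged alternative (not part of the original answer) is to fix the permissions with the file module before running it; the path matches the /tmp/install_osm.sh used above:
- name: Ensure the installer is executable
  file:
    path: /tmp/install_osm.sh
    mode: "0755"
- name: Execute the OSM installer
  shell: /tmp/install_osm.sh
Invoking the script as sh /tmp/install_osm.sh would also sidestep the execute bit entirely, since the shell only needs read access to the file.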

Deploying test playbook with Docker to Ubuntu image. Hi, does anyone know why this error came up? I couldn't find a solution on Google.

[root@prdx-ansible docker_ansible]# ansible-playbook playbook.yml -i inventory.txt
PLAY [Deploy web app] *******************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [target1]
ok: [target2]
TASK [Install all dependencies] *********************************************************************************************************************************************
[WARNING]: Updating cache and auto-installing missing dependency: python3-apt
fatal: [target1]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-mark manual python python-setuptools python-dev build-essential python-pip python-mysqldb' failed: E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "rc": 100, "stderr": "E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "stderr_lines": ["E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)", "E: The package lists or status file could not be parsed or opened."], "stdout": "", "stdout_lines": []}
fatal: [target2]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-mark manual python python-setuptools python-dev build-essential python-pip python-mysqldb' failed: E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "rc": 100, "stderr": "E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)\nE: The package lists or status file could not be parsed or opened.\n", "stderr_lines": ["E: Couldn't create temporary file to work with /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_xenial_InRelease - mkstemp (28: No space left on device)", "E: The package lists or status file could not be parsed or opened."], "stdout": "", "stdout_lines": []}
PLAY RECAP ******************************************************************************************************************************************************************
target1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
target2 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Here is the playbook:
[root@prdx-ansible docker_ansible]# cat playbook.yml
- name: Deploy web app
  hosts: target1,target2
  tasks:
    - name: Install all dependencies
      package:
        name: ['python', 'python-setuptools', 'python-dev', 'build-essential', 'python-pip', 'python-mysqldb']
        state: present
    - name: Install MySQL database
      apt: name={{ item }} state=installed
      with_items:
        - mysql-server
        - mysql-client
    - name: Start the database service
      service:
        name: mysql
        state: statred
        enabled: yes
    - name: Create database
      mysql_db: name=emploee_db state=present
    - name: Create DB user
      mysql_user:
        name: db_user
        password: Passw0rd
        priv: '*.*:ALL'
        state: present
        host: '%'
    - name: Install Flask
      pip:
        name: "{{ item }}"
        state: present
      with_items:
Couldn't create temporary file to work with [..] mkstemp (28: No space left on device)
Your target server's disk appears to be full. Check with df -h how much space you have left. You may have to run apt-get clean and similar commands to free up some space.
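As a rough sketch (assuming the apt cache is what filled the disk, and reusing the hosts from the playbook above), the check and cleanup could also be done from Ansible itself:
- hosts: target1,target2
  gather_facts: false
  tasks:
    - name: Show free disk space
      command: df -h
      register: disk_space
      changed_when: false
    - name: Print the df output
      debug:
        var: disk_space.stdout_lines
    - name: Clean the apt package cache
      command: apt-get clean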
[root@prdx-ansible docker_ansible]# ansible-playbook playbook.yml -i inventory.txt
PLAY [Deploy web app] *******************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [target1]
ok: [target2]
TASK [Install all dependencies] *********************************************************************************************************************************************
[WARNING]: Updating cache and auto-installing missing dependency: python3-apt
changed: [target2]
changed: [target1]
TASK [Install MySQL database] ***********************************************************************************************************************************************
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying `name:
"{{ item }}"`, please use `name: ['mysql-server', 'mysql-client']` and remove the loop. This feature will be removed in version 2.11. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying `name:
"{{ item }}"`, please use `name: ['mysql-server', 'mysql-client']` and remove the loop. This feature will be removed in version 2.11. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
changed: [target2] => (item=[u'mysql-server', u'mysql-client'])
changed: [target1] => (item=[u'mysql-server', u'mysql-client'])
TASK [Start the database service] *******************************************************************************************************************************************
fatal: [target2]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.17.03 closed.\r\n", "module_stdout": "/bin/sh: 1: sudo: not found\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
fatal: [target1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.17.02 closed.\r\n", "module_stdout": "/bin/sh: 1: sudo: not found\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP ******************************************************************************************************************************************************************
target1 : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
target2 : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
NOW I GOT THIS ERROR!!

How to authenticate hosts with Ansible?

My hosts file
[all]
192.168.77.10
192.168.77.11
192.1680.77.12
And here is my playbook.yml
---
- hosts: all
  tasks:
    - name: Add the Google signing key
      apt_key: url=https://packages.cloud.google.com/apt/doc/apt-key.gpg state=present
    - name: Add the k8s APT repo
      apt_repository: repo='deb http://apt.kubernetes.io/ kubernetes-xenial main' state=present
    - name: Install packages
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - vim
          - htop
          - tmux
          - docker.io
          - kubelet
          - kubeadm
          - kubectl
          - kubernetes-cni
When I run
ansible-playbook -i hosts playbook.yml
an unexpected authentication problem occurs.
The authenticity of host '192.168.77.11 (192.168.77.11)' can't be established.
ECDSA key fingerprint is SHA256:mgX/oadP2cL6g33u7xzrEblvga9CGfpW13K2YUdeKsE.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '192.168.77.10 (192.168.77.10)' can't be established.
ECDSA key fingerprint is SHA256:ayWHzp/yquIuQxw7MKGR0+NbtrzHY86Z8PdIPv7r6og.
Are you sure you want to continue connecting (yes/no)? fatal: [192.1680.77.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 192.1680.77.12: Name or service not known\r\n", "unreachable": true}
^C [ERROR]: User interrupted execution
I am following an example from a DevOps book and reproduced the original code. My OS is Ubuntu 18.04.
telnet hosts
telnet: could not resolve hosts/telnet: Temporary failure in name resolution
VM ls output
vagrant#ubuntu-bionic:~$ ls
hosts playbook.retry playbook.yml
I edited /etc/ansible/ansible.cfg by adding the host_key_checking = False option.
It still does not work:
fatal: [192.1680.77.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 192.1680.77.12: Name or service not known\r\n", "unreachable": true}
fatal: [192.168.77.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '192.168.77.10' (ECDSA) to the list of known hosts.\r\nvagrant#192.168.77.10: Permission denied (publickey).\r\n", "unreachable": true}
fatal: [192.168.77.11]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '192.168.77.11' (ECDSA) to the list of known hosts.\r\nvagrant#192.168.77.11: Permission denied (publickey).\r\n", "unreachable": true}
to retry, use: --limit #/home/vagrant/playbook.retry
PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.77.10 : ok=0 changed=0 unreachable=1 failed=0
192.168.77.11 : ok=0 changed=0 unreachable=1 failed=0
192.1680.77.12 : ok=0 changed=0 unreachable=1 failed=0
How do I resolve this issue?
You have several options. One is of course to SSH to the hosts and add them to the known_hosts file on your Ansible server. Another option is to set the environment variable ANSIBLE_HOST_KEY_CHECKING to false. A third option is to use the ansible.cfg config file:
[defaults]
host_key_checking = False
See the official documentation.
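For the environment-variable option, a typical one-off invocation (assuming the same inventory and playbook file names used above) would be:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts playbook.yml
This only disables host key checking for that single run, whereas the ansible.cfg setting applies to every run picked up by that config file.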

Unable to escalate privileges for a task in ansible even after using become

I am trying to automate a scenario using ansible.
- name: Copy NRPE Upgrade script
  template: src=nagiosclient.sh.j2 dest=/var/tmp/nagiosclient.sh
- name: Add Execute permissions of the script
  file: dest=/var/tmp/nagiosclient.sh mode=a+x
- name: Execute the NRPE script
  script: /var/tmp/nagiosclient.sh
  become: true
  tags: test
This is an excerpt of my playbook. The copy and add-execute-permissions tasks run successfully, but when I try to run the execute task, it fails.
Because Ansible is trying to log in as the 'gparasha' user, the /var/tmp path is unavailable to this user, as expected.
But even after adding become: true to the task as shown above, and even after using --become on the ansible-playbook command line, i.e. "ansible-playbook -i hosts tltd.yml --become --tags test", I am getting a permission denied error.
Can anyone suggest what is wrong here and how to rectify it?
gparasha-macOS:TLTD gparasha$ ansible-playbook -i hosts tltd.yml --become --tags test
PLAY [Run tasks on Author] **************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************************
ok: [13.229.22.58]
fatal: [34.198.174.78]: UNREACHABLE! => {"changed": false, "msg": "Authentication failure.", "unreachable": true}
TASK [author : Execute the NRPE script] *************************************************************************************************************************************************
fatal: [13.229.22.58]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find or access '/var/tmp/nagiosclient.sh'"}
[WARNING]: Could not create retry file '/opt/ansible/TLTD/tltd.retry'. [Errno 13] Permission denied: u'/opt/ansible/TLTD/tltd.retry'
PLAY RECAP ******************************************************************************************************************************************************************************
13.229.22.58 : ok=1 changed=0 unreachable=0 failed=1
34.198.174.78 : ok=0 changed=0 unreachable=1 failed=0
It doesn't matter whether you use become or not, because the script module reads the script file from the control machine, transfers it to the target, and executes it there (with become privileges, in your case).
The error comes from the fact that the script does not exist at /var/tmp/nagiosclient.sh on the control machine.
If you want to execute it on the target, you should use the shell module and run /var/tmp/nagiosclient.sh.
Moreover, the permission-denied message is completely unrelated: it is a warning that a retry file could not be created, also on the control machine.
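For illustration, a sketch of the corrected third task (assuming the template task above has already placed the script at /var/tmp/nagiosclient.sh on the target):
- name: Execute the NRPE script
  shell: /var/tmp/nagiosclient.sh
  become: true
  tags: test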

Ansible yum: All packages providing ... are up to date

OK, I'm trying to learn ansible and am running into a problem doing a very basic operation.
Playbook:
---
- hosts: fedtest
  tasks:
    - name: Install httpd package
      yum: name=httpd state=latest
      sudo: yes
    - name: Starting http service
      service: name=http state=started
      sudo: yes
ansible.cfg:
[defaults]
hostfile = /home/abcd/proj/ans/hosts
remote_user = abcd
private_key_file = /home/abcd/proj/ans/.ssh/ans.priv
Ok, I run the command:
$ ansible-playbook setup_apache.yml
PLAY [fedtest]
****************************************************************
GATHERING FACTS
***************************************************************
ok: [fedtest]
TASK: [Install httpd package]
***********************************************
failed: [fedtest] => {"failed": true, "parsed": false}
BECOME-SUCCESS-ajlxizkspxrhyrqauuvywgrtojtutomb
{"msg": "", "changed": false, "results": ["All packages providing httpd are up to date"], "rc": 0}
6.719u 1.760s 0:11.33 74.7% 0+0k 0+592io 0pf+0w
OpenSSH_6.6.1, OpenSSL 1.0.1k-fips 8 Jan 2015
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to fedserwizard closed.
FATAL: all hosts have already failed -- aborting
PLAY RECAP
********************************************************************
to retry, use: --limit #/home/abcd/setup_apache.retry
fedtest : ok=1 changed=0 unreachable=0 failed=1
Exit 2
I did run ansible-playbook with -vvvv, and it looks like it is failing to execute the shell command that echoes the BECOME-SUCCESS string, so the playbook errors out instead of continuing. I've tried these operations on several systems, both source and destination, and still get the same result.
What type of problem do I need to correct?
After a lot of experimenting, I noticed that the login shell on the client (receiving) end of the Ansible connection apparently had to be /bin/bash and NOT /bin/tcsh, which is what I had.
Interestingly, as far as I could tell from the verbose output, /bin/sh was being called explicitly, yet the login shell still caused an SSH-level issue that was extremely troublesome to track down.
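If switching the remote account's login shell to bash is the fix you settle on, here is a sketch using the user module (the account name abcd comes from the remote_user setting in the ansible.cfg above; run it from a connection that already works, or change the shell manually with chsh):
- hosts: fedtest
  become: yes
  tasks:
    - name: Ensure the remote account uses bash as its login shell
      # abcd is the remote_user from ansible.cfg shown earlier
      user:
        name: abcd
        shell: /bin/bash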
