OK, I'm trying to learn ansible and am running into a problem doing a very basic operation.
Playbook:
---
- hosts: fedtest
  tasks:
    - name: Install httpd package
      yum: name=httpd state=latest
      sudo: yes
    - name: Starting http service
      service: name=http state=started
      sudo: yes
ansible.cfg:
[defaults]
hostfile = /home/abcd/proj/ans/hosts
remote_user = abcd
private_key_file = /home/abcd/proj/ans/.ssh/ans.priv
Ok, I run the command:
$ ansible-playbook setup_apache.yml
PLAY [fedtest] ****************************************************************
GATHERING FACTS ***************************************************************
ok: [fedtest]
TASK: [Install httpd package] *************************************************
failed: [fedtest] => {"failed": true, "parsed": false}
BECOME-SUCCESS-ajlxizkspxrhyrqauuvywgrtojtutomb
{"msg": "", "changed": false, "results": ["All packages providing httpd are up to date"], "rc": 0}
6.719u 1.760s 0:11.33 74.7% 0+0k 0+592io 0pf+0w
OpenSSH_6.6.1, OpenSSL 1.0.1k-fips 8 Jan 2015
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to fedserwizard closed.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/abcd/setup_apache.retry
fedtest : ok=1 changed=0 unreachable=0 failed=1
Exit 2
I did run the ansible-playbook command with -vvvv, and it looks like it is failing to execute the shell command that echoes the BECOME-SUCCESS string, so the playbook errors out instead of continuing. I've tried these operations on several systems, both source and destination, and still get the same result.
What type of problem do I need to correct?
After a lot of experimenting, I noticed that the login shell of the client (receiving) end apparently had to be /bin/bash and NOT /bin/tcsh, which is what I had.
Interestingly, as far as I could tell from the verbose output, /bin/sh was being explicitly called; that the login shell could still cause an SSH-level failure was extremely troublesome to track down.
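If changing the login shell on the target is not an option, a possible alternative (a hedged sketch, assuming the csh shell plugin is available in your Ansible version and that your hosts file defines the fedtest group with the fedserwizard host, as the output suggests) is to tell Ansible which shell family the remote account uses via the ansible_shell_type inventory variable:

[fedtest]
fedserwizard

[fedtest:vars]
ansible_shell_type=csh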
My organization has tasked me with finding a way to use Ansible to automate rebooting some of our CCTV cameras, as we currently use it for a lot of our other infrastructure.
The cameras (Axis) are running armv7l GNU/Linux with some proprietary stuff built on top; however, Python is not installed, and after doing quite a bit of research and reaching out to the vendor, there is no "official" way of installing Python without something else breaking.
That being said, I have looked around and have come across two Ansible modules that could potentially do this, raw and script. All that needs to be done is to reboot these cameras.
However, I am now completely lost in finding a solution to my issue. Below is my current playbook and output.
- name: cctv restart playbook
  hosts: all
  gather_facts: no
  tasks:
    - name: restart cctv
      raw:
        cmd: reboot
The output from when I run this playbook is
PLAY [cctv restart playbook] ***************************************************************************************************************
TASK [restart cctv] ************************************************************************************************************************
fatal: [192.168.10.130]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "Shared connection to 192.168.10.130 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.10.130 closed."], "stdout": "sh: None: not found\r\n", "stdout_lines": ["sh: None: not found"]}
PLAY RECAP *********************************************************************************************************************************
192.168.10.130 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Please let me know what needs to be done to fix this, or if it is just not possible.
While the comments on your question are partly correct, the immediate source of your error message appears to be a syntax issue. The raw command does not support a cmd parameter; even targeting a regular Linux system, a playbook like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: restart cctv
      raw:
        cmd: "date"
Results in the same error:
fatal: [node0]: FAILED! => {"changed": true, "msg":
"non-zero return code", "rc": 127, "stderr": "Warning:
Permanently added 'localhost' (ED25519) to the list of
known hosts.\r\nShared connection to localhost
closed.\r\n", "stderr_lines": ["Warning: Permanently added
'localhost' (ED25519) to the list of known hosts.",
"Shared connection to localhost closed."], "stdout":
"bash: None: command not found\r\n", "stdout_lines":
["bash: None: command not found"]}
Which, stripped of all the extraneous bits, reads:
bash: None: command not found
(This is at least true for Ansible core 2.14.1, which is what I'm running, and that matches the documentation for the raw module.)
You need to write your task like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: restart cctv
      raw: "date"
As @Zeitounator said in their comment, for this to work your remote device needs to provide at least a minimal Linux-like environment with an sh command. Assuming that you have this, you would still expect to see an error when attempting to run the reboot command, because that causes the connection to drop. That would look something like this:
fatal: [node0]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh: Shared connection
to node0.virt closed.", "unreachable": true}
Since you know that's going to result in an error, you can tell Ansible to ignore the failure. For example:
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: restart cctv
      raw: "reboot"
      ignore_unreachable: true
Running this playbook results in:
PLAY [all] **********************************************************************************************
TASK [restart cctv] *************************************************************************************
fatal: [node0]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to node0.virt closed.", "unreachable": true}
...ignoring
PLAY RECAP **********************************************************************************************
node0 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
(And the target system reboots.)
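Since the cameras have no Python, any follow-up check has to run on the controller. As a rough, untested sketch (assuming the cameras answer SSH on port 22 once they are back up), you could append a task like this to the play above to wait for the reboot to finish:

    - name: wait for the camera to come back up
      wait_for:
        host: "{{ inventory_hostname }}"
        port: 22
        delay: 30
        timeout: 300
      delegate_to: localhost
      become: false   # run on the controller without privilege escalation

The wait_for task is delegated to localhost, so the lack of Python on the camera does not matter for this step.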
I have a lab setup with an Ansible controller + node and am exploring a few areas.
So far I have set up a user account named ansible on both machines and enabled SSH key-based authentication.
I have also set up sudo permissions for the user on both machines.
When I try to run the playbook below, it works on the local machine and fails on the other node.
--- #Install Telnet
- hosts: all
  name: Install Telnet
  become: true
  become_user: ansible
  become_method: sudo
  tasks:
    - yum:
        name: telnet
        state: latest
Output is as follows
[ansible@host1 playbooks]$ ansible-playbook telnetDeployYUM.yml
PLAY [Install Telnet] ***********************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [192.168.64.6]
ok: [192.168.64.5]
TASK [yum] **********************************************************************************************************************************************************************************
ok: [192.168.64.5]
fatal: [192.168.64.6]: FAILED! => {"changed": true, "msg": "You need to be root to perform this command.\n", "obsoletes": {"grub2": {"dist": "x86_64", "repo": "#anaconda", "version": "1:2.02-0.64.el7.centos"}, "grub2-tools": {"dist": "x86_64", "repo": "#anaconda", "version": "1:2.02-0.64.el7.centos"}}, "rc": 1, "results": ["Loaded plugins: fastestmirror\n"]}
to retry, use: --limit @/home/ansible/playbooks/telnetDeployYUM.retry
PLAY RECAP **********************************************************************************************************************************************************************************
192.168.64.5 : ok=2 changed=0 unreachable=0 failed=0
192.168.64.6 : ok=1 changed=0 unreachable=0 failed=1
[ansible@host1 playbooks]$
I am also able to manually run sudo yum on the failed target as the ansible user.
I believe the sudo setup is correct:
[ansible@host2 root]$ sudo whoami
root
Can the experts share some insight on what I am missing with respect to my failed machine? Thanks.
The playbook below should work fine:
- hosts: all
  name: Install Telnet
  become: yes
  tasks:
    - yum:
        name: telnet
        state: latest
The ansible user, or whichever user Ansible is executed as, should be in the sudoers file.
You are changing your user to ansible, which is not required.
Run with -vvvv to see what ansible is doing.
Have you set up ansible in sudoers for passwordless privilege escalation?
You are getting a message that it is waiting for the "escalation prompt". That means that when you run with become, the escalation fails because it needs a password. Make sure your test user is in /etc/sudoers AND is marked to NOT need to enter a password when running sudo commands; the entry for that user should include NOPASSWD: on that line.
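For example, a minimal sudoers entry for a user named ansible would look like this (edit it with visudo, or drop it into a file under /etc/sudoers.d/):

ansible ALL=(ALL) NOPASSWD: ALL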
Could someone please help me write an Ansible inventory file to connect to Bitbucket, clone a file, and place it on the Ansible machine?
Playbook
---
- hosts: bitbucketURL
  tasks:
    - git:
        repo: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/browse/hello.txt
        dest: /home/xxx/demo/output/
Inventory file
[bitbucketURL]
p-bitbucket.com:5999
[bitbucketURL:vars]
ansible_connection=winrm
ansible_user=xxx
ansible_pass=<passwd>
I am getting an error while using this playbook and inventory file:
-bash-4.2$ ansible-playbook -i inv demo_draft1.yml
PLAY [bitbucketURL] *****************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************
fatal: [p-bitbucket.nl.eu.abnamro.com]: UNREACHABLE! => {"changed": false, "msg": "ssl: auth method ssl requires a password", "unreachable": true}
to retry, use: --limit @/home/c55016a/demo/demo_draft1.retry
PLAY RECAP **************************************************************************************************************************************************
p-bitbucket.nl.eu.abnamro.com : ok=0 changed=0 unreachable=1 failed=0
Please help me write a proper inventory file with correct parameters
You need no inventory at all. All you need to do is to set the play to execute on localhost:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - git:
        repo: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/browse/hello.txt
        dest: /home/xxx/demo/output/
That said, the URL should point to a Git repository, not to a single file (if hello.txt is a single file).
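For illustration only, with a hypothetical clone URL (the real one has to be taken from the repository's clone dialog in Bitbucket), cloning the whole repository and then using hello.txt from the working copy might look like this:

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - git:
        # hypothetical clone URL, not the "browse" URL from the question
        repo: https://p-bitbucket.com:5999/scm/vit/sample-playbooks.git
        dest: /home/xxx/demo/output/sample-playbooks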
I am trying to run a specific Ansible task as a different user than the one running the playbook. On my local box I have the playbook below; I am logged in as the david user, and I want to run the command /tek/ghy/bin/ss.sh start on all remote servers as the goldy user only.
My .yml file looks like this:
---
- name: start server
  hosts: one_box
  serial: "{{ num_serial }}"
  tasks:
    - name: start server
      command: /tek/ghy/bin/ss.sh start
      become: true
      become_user: goldy
Below is how I am running it:
david@machineA:~$ ansible-playbook -e 'host_key_checking=False' -e 'num_serial=1' start_box.yml -u david --ask-pass --sudo -U goldy --ask-become-pass
[DEPRECATION WARNING]: The sudo command line option has been deprecated in favor of the "become" command line arguments. This feature will be removed in version 2.6. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.
SSH password:
SUDO password[defaults to SSH password]:
PLAY [start server] ***************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
fatal: [remote_machineA]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of ‘/tmp/ansible-tmp-1527357815.74-165519966271795/’: Operation not permitted\nchown: changing ownership of ‘/tmp/ansible-tmp-1527357815.74-165519966271795/setup.py’: Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
What am I doing wrong here? I am running Ansible 2.4.3.0.
From a Google search, you could be affected by this issue.
Try to upgrade Ansible; your code works on 2.5.2. (I replaced the command with a simple id on the remote server instead of /tek/ghy/bin/ss.sh start, and I used the same shell command and arguments as you provided.)
[ilias@optima-ansible tmp]$ ansible-playbook -e 'host_key_checking=False' -e 'num_serial=1' lala.yml -u ilias --ask-pass --sudo -U http_offline --ask-become-pass
[DEPRECATION WARNING]: The sudo command line option has been deprecated in favor of the "become" command line arguments. This feature will be removed in version 2.6. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
SSH password:
SUDO password[defaults to SSH password]:
PLAY [start server] *************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************************************
ok: [greenhat]
TASK [start server] *************************************************************************************************************************************************************************************************
changed: [greenhat]
TASK [debug] ********************************************************************************************************************************************************************************************************
ok: [greenhat] => {
"command_output": {
"changed": true,
"cmd": [
"id"
],
"delta": "0:00:00.004484",
"end": "2018-05-26 21:26:28.531838",
"failed": false,
"rc": 0,
"start": "2018-05-26 21:26:28.527354",
"stderr": "",
"stderr_lines": [],
"stdout": "uid=1002(http_offline) gid=1002(http_offline) groups=1002(http_offline),984(docker)",
"stdout_lines": [
"uid=1002(http_offline) gid=1002(http_offline) groups=1002(http_offline),984(docker)"
]
}
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
greenhat : ok=3 changed=1 unreachable=0 failed=0
[ilias@optima-ansible tmp]$ ansible --version
ansible 2.5.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ilias/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, May 16 2018, 17:50:09) [GCC 8.1.1 20180502 (Red Hat 8.1.1-1)]
[ilias@optima-ansible tmp]$
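If upgrading is not an option, the documentation page referenced in the error message also describes workarounds for becoming an unprivileged user. One common workaround (a hedged sketch, assuming a yum-based target; nothing here is specific to your ss.sh setup) is to install the acl package on the remote hosts so Ansible can use setfacl on its temporary files:

---
- hosts: one_box
  become: true   # escalate to root for the package install itself
  tasks:
    - name: install acl so Ansible can grant the unprivileged become_user access to its temp files
      yum:        # assumes a yum-based distro on the target
        name: acl
        state: present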
Hello guys, I made a simple playbook to practice with Ansible, but I have a problem: when I try to run the playbook (ansible-playbook -i hosts.ini playbook.yml) to configure an EC2 instance, the output returns:
> fatal: [XX.XXX.XXX.XXX]: FAILED! => {
> "changed": false,
> "failed": true,
> "invocation": {
> "module_name": "setup"
> },
> "module_stderr": "Shared connection to XXX.XXX.XXX.XXX closed.\r\n",
> "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
> "msg": "MODULE FAILURE" } to retry, use: --limit #/home/douglas/Ansible/ansible_praticing/projeto2.retry
>
> PLAY RECAP *********************************************************************
> XX.XXX.XXX.XXX : ok=0 changed=0 unreachable=0 failed=1
When I try to connect to the instance via ssh -i ~/.ssh/key.pem ubuntu@public.ip it works well, but the provisioning does not.
My playbook:
- hosts: projeto
  sudo: True
  remote_user: ubuntu
  vars_files:
    - vars.yml
  tasks:
    - name: "Update"
      apt: update_cache=yes
    - name: "Install the Ansible"
      apt: name=ansible state=latest
    - name: "Install the mysql"
      apt:
      args:
        name: mysql-server
        state: latest
    - name: "Install the Nginx"
      apt:
      args:
        name: nginx
        state: latest
My hosts.ini is also OK (with the public IP of the AWS EC2 instance), and I put the public key (~/.ssh/id_rsa.pem of the local machine) into the ~/.ssh/authorized_keys file inside the instance.
Last week (Friday) this playbook was working well.
What am I doing wrong?
Maybe my answer is too late, but I faced the same problem today. I have an Ubuntu 16.04 instance running on EC2. I think that since it has Python 3 (Python 3.5) as its default Python installation, Ansible is not able to find the Python interpreter it expects (/usr/bin/python). I got around this issue by changing the Ansible Python interpreter to Python 3.
I added ansible_python_interpreter=/usr/bin/python3 to my inventory file and did not have to change the playbook.
Reference - http://docs.ansible.com/ansible/latest/python_3_support.html
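For example, the relevant inventory entry might look like this (keeping the host placeholder from the question):

[projeto]
XX.XXX.XXX.XXX ansible_python_interpreter=/usr/bin/python3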