I'm trying to use Ansible over SSH to interact with Windows machines.
I have successfully installed OpenSSH on a Windows machine, which means I can connect from Linux to Windows with:
ssh username@ipAddress
I've tried several versions of Ansible (2.6, 2.7.12, 2.7.14, 2.8.5 and 2.8.6), and I always check first that I can ping another Linux machine with this line (it works):
ansible linux -m ping
Here is my hosts file:
[windows]
192.***.***.***
[linux]
192.***.***.***
[all:vars]
ansible_connection=ssh
ansible_user=root
[windows:vars]
ansible_ssh_pass=*******
remote_tmp=C:\Users\root\AppData\Local\Temp\
become_method=runas
Here is the error with verbose output:
[root@oel76-template ~]# ansible windows -m win_ping -vvv
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 08:19:52) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39.0.1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
META: ran handlers
<192.***.***.***> ESTABLISH SSH CONNECTION FOR USER: root
<192.***.***.***> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/91df1ca379 192.168.46.99 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `" && echo ansible-tmp-1571839448.66-279092717123794="` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `" ) && sleep 0'"'"''
<192.***.***.***> (1, '', 'The system cannot find the path specified.\r\n')
<192.***.***.***> Failed to connect to the host via ssh: The system cannot find the path specified.
192.***.***.*** | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `\" && echo ansible-tmp-1571839448.66-279092717123794=\"` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `\" ), exited with result 1",
"unreachable": true
}
I don't know what I'm doing wrong. I also tried changing remote_tmp in ansible.cfg, but that made no difference.
The current value is remote_tmp=C:/Users/root/AppData/Local/Temp
Any ideas?
To use SSH as the connection to a Windows host (starting from Ansible 2.8), set the following variables in the inventory:
ansible_connection=ssh
ansible_shell_type=cmd or ansible_shell_type=powershell (set one of the two, not both)
Finally, the inventory file:
[windows]
192.***.***.***
[all:vars]
ansible_connection=ssh
ansible_user=root
[windows:vars]
ansible_password='*******'
ansible_shell_type=cmd
Note on the ansible_password variable: use single quotes if the password contains special characters.
OK, solved. The problem was
ansible_ssh_pass=*****
The correct syntax is
ansible_password=*****
Related
I am running ansible-playbook with
ansible-playbook -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml -c ssh
This throws the error fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ", "unreachable": true}
However, it works fine after adding the connection flag -c paramiko.
ansible-playbook -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml -c paramiko
Question: what could be the reasons the default connection (OpenSSH) does not work while paramiko does? How can I debug it and make OpenSSH work too?
I would like to understand the reasons. Please let me know if you need more information other than
local: ubuntu 20.04
remote: CentOS 7
ansible [core 2.13.6]
When running with verbose mode
ansible-playbook -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml -c ssh -vvv
The output is
/usr/lib/python3/dist-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
ansible-playbook [core 2.13.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/john/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-playbook
python version = 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0]
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/john/test/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory as it did not pass its verify_file() method
script declined parsing /home/john/test/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory as it did not pass its verify_file() method
auto declined parsing /home/john/test/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory as it did not pass its verify_file() method
Parsed /home/john/test/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *************************************************************************************************
1 plays in playbook.yml
PLAY [all] *************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
task path: /home/john/test/playbook.yml:2
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o Port=2222 -o 'IdentityFile="/home/john/test/.vagrant/machines/default/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/home/john/.ansible/cp/055b8f4af0"' 127.0.0.1 '/bin/sh -c '"'"'echo ~vagrant && sleep 0'"'"''
<127.0.0.1> (255, b'/home/vagrant\n', b'')
fatal: [default]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
PLAY RECAP *************************************************************************************************************
default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
The command that Ansible is running is returning 255 as the return code, for some reason:
<127.0.0.1> (255, b'/home/vagrant\n', b'')
OpenSSH uses this return code for connection errors but does not prevent remote processes from returning it, and Ansible can't tell the difference between a 255 that is a genuine connection error and whatever happened here.
Paramiko is a Python library and raises errors using native Python error handling, so it doesn't have the same issue.
The only way to get Ansible's OpenSSH plugin working is to figure out why '/bin/sh -c '"'"'echo ~vagrant && sleep 0'"'"' is returning 255 on your target host, and fix that issue.
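To illustrate the ambiguity, any remote command can itself exit with status 255, which is the same status OpenSSH reports for a connection failure; a local sketch, assuming a POSIX shell:

```shell
# A remote process exiting 255 looks identical, by exit status alone, to an
# OpenSSH connection error, which also yields 255 on the client side.
/bin/sh -c 'echo ~vagrant && exit 255'
echo "exit status: $?"   # prints: exit status: 255
```

This is why Ansible's ssh connection plugin has to treat any 255 as "unreachable": it has no reliable way to tell the two cases apart.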
Ansible command:
ansible all -m module-name -o -e "ansible_user=username ansible_password=password"
Giving following error :
host-ip | FAILED! => {"msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}
Install sshpass:
apt-get update
apt-get install sshpass
If not, this error can sometimes be solved by exporting an environment variable:
export ANSIBLE_HOST_KEY_CHECKING=False
If that doesn't help either, try creating a file ansible.cfg in your current folder with the following contents:
[defaults]
host_key_checking = false
I have written a simple play to install pip and expect on my clients using Ansible. However, execution gets stuck in the TASK part.
My code:
---
- hosts: mygroup
  tasks:
    - name: Install packages
      yum: name= {{ item }} state=installed
      with_items:
        - pip
        - expect
Debug output (only the TASK part where execution is stuck):
TASK [Install packages] ********************************************************
task path: /home/netman/lab7/prsh1271_play.yaml:4
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/packaging/os/yum.py
<192.168.1.2> ESTABLISH SSH CONNECTION FOR USER: None
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/packaging/os/yum.py
<172.16.1.2> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.2> SSH: EXEC sshpass -d12 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=10 -o ControlPath=/home/netman/.ansible/cp/61004433e3 192.168.1.2 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<172.16.1.2> SSH: EXEC sshpass -d12 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=10 -o ControlPath=/home/netman/.ansible/cp/3e78e2ce1a 172.16.1.2 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Please help me resolve this.
Package installation requires root or a root-like user. If your user is not in the sudoers file, add it and try again.
Also, re-run the playbook with -vvvv for verbose logging and inspect the verbose logs, which will help with debugging.
You could add become: true so the play runs as the root user.
So you would have:
---
- hosts: mygroup
  become: true
  tasks:
    - name: Install packages
      yum: name={{ item }} state=installed
      with_items:
        - pip
        - expect
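On more recent Ansible versions, the same play is usually written with the dictionary form of the module arguments and a package list instead of with_items; a sketch of that style:

```yaml
---
- hosts: mygroup
  become: true
  tasks:
    - name: Install packages
      yum:
        name:
          - pip
          - expect
        state: present
```

The yum module accepts a list for name, so the loop is no longer needed.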
The playbook might be stuck because the command you run in the stuck task issues an input prompt, which you don't see when you run the playbook.
Since no input is ever supplied to the prompt, it just sits there and waits forever.
The solution (if this is indeed the problem):
Change your tasks such that you provide any necessary inputs directly in your Ansible tasks, thus avoiding input prompts.
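As one way to do this, Ansible's expect module can supply the answer to an interactive prompt from the task itself; a sketch, where the command path and prompt text are hypothetical (the module needs pexpect installed on the target):

```yaml
- name: Run a command that would otherwise wait on a prompt
  expect:
    command: /opt/setup.sh          # hypothetical interactive command
    responses:
      'Continue \(y/n\)\?': 'y'     # regex matching the prompt -> answer sent
```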
For example, if the following variable is defined on the Ansible host:
export TEST=new_dir
How can that variable be used in an ad-hoc -m raw command:
ansible -m raw -a 'mkdir /home/user/$TEST'
so that the command run on the guest machine is mkdir /home/user/new_dir?
With the help of the env lookup:
ansible -m raw -a 'mkdir /home/user/{{ lookup("env","TEST") }}'
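The env lookup works because Ansible templates module arguments on the controller, where TEST is defined. Alternatively, using double quotes lets the local shell expand the variable before Ansible ever sees it; a quick sketch of the quoting difference, assuming a POSIX shell on the controller:

```shell
export TEST=new_dir
# single quotes: the local shell passes $TEST through literally
echo 'mkdir /home/user/$TEST'    # prints: mkdir /home/user/$TEST
# double quotes: the local shell expands $TEST before the command runs
echo "mkdir /home/user/$TEST"    # prints: mkdir /home/user/new_dir
```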
I need to run playbooks on Vagrant boxes and on AWS when I set up the environment with CloudFormation.
In the Vagrantfile I use ansible-local and everything works fine:
- name: Setup Unified Catalog Webserver
  hosts: 127.0.0.1
  connection: local
  become: yes
  become_user: root
  roles:
    - generic
However, when I create an instance in AWS, the Ansible playbook fails with the error:
sudo: sorry, you must have a tty to run sudo
This happens because it is run as root and it doesn't have a tty. But I don't know how to fix it without changing /etc/sudoers to allow !requiretty.
Are there any flags I can set in ansible.cfg or in my CloudFormation template?
"#!/bin/bash\n", "\n", "
echo 'Installing Git'\n","
yum --nogpgcheck -y install git ansible htop nano wget\n",
"wget https://s3.eu-central-1.amazonaws.com/XXX -O /root/.ssh/id_rsa\n",
"chmod 600 /root/.ssh/id_rsa\n",
"ssh-keyscan 172.31.7.235 >> /root/.ssh/known_hosts\n",
"git clone git@172.31.7.235:something/repo.git /root/repo\n",
"ansible-playbook /root/env/ansible/test.yml\n
I was able to fix this by setting the transport = paramiko configuration in ansible.cfg.
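For reference, the corresponding ansible.cfg fragment would look like this (transport is the option name under [defaults] in the Ansible versions discussed here):

```ini
[defaults]
transport = paramiko
```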
I have found the following solutions for myself:
1. Change requiretty in /etc/sudoers with sed, run the playbooks, and change it back.
"#!/bin/bash\n", "\n", "
echo 'Installing Git'\n","
yum --nogpgcheck -y install git ansible htop nano wget\n",
"wget https://s3.eu-central-1.amazonaws.com/xx/ansible -O /root/.ssh/id_rsa\n",
"chmod 600 /root/.ssh/id_rsa\n",
"ssh-keyscan 172.31.9.231 >> /root/.ssh/known_hosts\n",
"git clone git@172.31.5.254:somerepo/dev.git /root/dev\n",
"sed -i 's/Defaults requiretty/Defaults !requiretty/g' /etc/sudoers\n",
"\n",
"ansible-playbook /root/dev/env/ansible/uk.yml\n",
"\n",
"sed -i 's/Defaults !requiretty/Defaults requiretty/g' /etc/sudoers\n"
OR
2. In the Ansible playbook, specify a variable:
- name: Setup
  hosts: 127.0.0.1
  connection: local
  sudo: "{{ require_sudo }}"
  roles:
    - generic
In the AWS CloudFormation template, the run command would be
"ansible-playbook -e require_sudo=False /root/dev/env/ansible/uk.yml\n"
And for Vagrant it can be specified in ansible.cfg:
require_sudo=True
Also, the CF template can detect who is running the playbook and pass the variable accordingly:
ansible-playbook -e$(id -u |egrep '^0$' > /dev/null && require_sudo=False || require_sudo=True; echo "require_sudo=$require_sudo") /apps/ansible/uk.yml
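The uid test in that one-liner can be checked on its own; a small sketch, where the detect helper name is illustrative and not part of Ansible:

```shell
# Emulate the detection: require_sudo=False when the uid is 0 (root),
# require_sudo=True otherwise, mirroring the egrep check in the one-liner.
detect() {
  echo "$1" | egrep '^0$' > /dev/null && echo "require_sudo=False" \
                                      || echo "require_sudo=True"
}
detect 0      # prints: require_sudo=False
detect 1000   # prints: require_sudo=True
```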
If you need connection: paramiko in just one playbook rather than as a global configuration in ansible.cfg, you can add connection: paramiko to the playbook, for example:
- name: Run checks after deployments
hosts: all
# https://github.com/paramiko/paramiko/issues/1369
connection: paramiko
gather_facts: True