Vagrant/Ansible: "The following settings don't exist: inventory_file" on OS X Mountain Lion

I've pulled down a git repo and run vagrant up, but I'm getting this error message:
The following settings don't exist: inventory_file
I've installed VirtualBox, Vagrant and Ansible on OS X Mountain Lion, but I can't get anything to work.
Also, when I run ansible all -m ping -vvvv I get:
<192.168.0.62> ESTABLISH CONNECTION FOR USER: Grant
<192.168.0.62> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/Grant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', '192.168.0.62', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && chmod a+rx $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && echo $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544'"]
192.168.0.62 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_5.9p1, OpenSSL 0.9.8y 5 Feb 2013
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/Grant/.ansible/cp/ansible-ssh-192.168.0.62-22-Grant" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.0.62 [192.168.0.62] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: connect to address 192.168.0.62 port 22: Operation timed out
ssh: connect to host 192.168.0.62 port 22: Operation timed out
Any ideas on what is going on would be appreciated :)

For the inventory_file issue, try changing the Vagrantfile to use inventory_path instead. I think this subtle rename happened with Vagrant 1.3.x. If you don't want to modify the Vagrantfile, try downgrading to Vagrant 1.2.x.
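As a rough sketch (assuming your playbook and inventory live under provisioning/; adjust the paths to match your repo), the Ansible provisioner block inside Vagrant.configure would look something like:
config.vm.provision "ansible" do |ansible|
  ansible.playbook       = "provisioning/playbook.yml"
  ansible.inventory_path = "provisioning/inventory"   # this option was inventory_file in Vagrant 1.2.x
end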
When running:
ansible all -m ping -vvvv
This will use your current user and look in the default location for the Ansible hosts inventory (/etc/ansible/hosts).
To get it working with a Vagrant-defined VM you need to connect as the vagrant user, specify the SSH key to use for the connection, and point at the right hosts inventory, e.g. (replace provisioning/inventory below with wherever the inventory actually lives):
ansible all \
  -i provisioning/inventory \
  -m ping \
  -u vagrant \
  --private-key ~/.vagrant.d/insecure_private_key

It has been repeated all over the place that you should use ~/.vagrant.d/insecure_private_key, but I found that mine was actually using .vagrant/machines/default/virtualbox/private_key, relative to the directory containing the Vagrantfile. Vagrant apparently changed its key generation to be per-machine rather than user-wide, but the documentation does not reflect that yet.
So for the whole command, it would be:
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml
You can check which one is in use by running vagrant ssh-config and looking at the IdentityFile value.
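For reference, the output looks roughly like this (host name, port and paths vary per project):
$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile /path/to/project/.vagrant/machines/default/virtualbox/private_key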

Instead of passing the inventory file, private key and SSH user every time, you can put them into an Ansible config file. See my more detailed answer here: https://stackoverflow.com/a/25316963/502457
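As a minimal sketch (option names are for Ansible 1.9+/2.x, and the paths are the Vagrant defaults used above, so adjust as needed), an ansible.cfg next to the Vagrantfile could look like:
[defaults]
inventory         = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
remote_user       = vagrant
private_key_file  = ~/.vagrant.d/insecure_private_key
# optional: avoids interactive host-key prompts for throwaway VMs
host_key_checking = False
With that in place, plain ansible all -m ping and ansible-playbook playbook.yml work without the extra flags.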

$ ansible all -i inventory -m ping -u vagrant --private-key ~/.vagrant.d/insecure_private_key
ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
ansible_ssh_user=vagrant | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
testserver | success >> {
"changed": false,
"ping": "pong"
}
$ cat inventory
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key
The testserver entry works; the two FAILED lines above appear because ansible_ssh_user and ansible_ssh_private_key_file are on their own lines in the inventory, so Ansible treats them as extra host names. Keep those variables on the same line as the host (or move them under a [group:vars] section).
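For comparison, the same inventory with everything on one line (same values as above) would be:
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/Users/dxiao/.vagrant.d/insecure_private_key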

Related

SSH shows the wrong IP address when SSH with port forward

My use case is that I have to access AWS EC2 instances through a jumpbox.
Here is my SSH config.
Host awsjumpbox
User sshuser
HostName jumpboxhostname
IdentityFile /Users/myusername/.ssh/id_rsa
LocalForward 8022 10.0.168.43:22
It works when I use scp to copy files to the EC2 instance.
myusername % scp -r -i ~/aws/aws-keypair.pem -P 8022 * ec2-user@localhost:testdir
The authenticity of host '[localhost]:8022 ([::1]:8022)' can't be established.
ECDSA key fingerprint is SHA256:rrwr62yjP2cgUTT9SowdlrIwGi4jMMwt5x4Aj6E4Y3Y.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:8022' (ECDSA) to the list of known hosts.
/etc/profile.d/lang.sh: line 19: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
README.md 100% 1064 24.3KB/s 00:00
However, when I execute the ssh command, it tries to reach a strange IP address:
myusername % ssh -i ~/aws/aws-keypair.pem -P 8022 ec2-user@localhost
ssh: connect to host 0.0.31.86 port 22: No route to host
What is the cause of this issue? How do I fix it?
Thank you.
Don't use LocalForward; reverse the flow instead.
Use ProxyCommand or ProxyJump. This lets SSH open a session through your bastion server transparently.
E.g. your configuration should be something along the lines of:
Host 10.0.168.43
User root
ProxyCommand ssh -W %h:%p sshuser@awsjumpbox
...
or
Host 10.0.168.43
User root
ProxyJump sshuser@awsjumpbox
...
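With one of those blocks in ~/.ssh/config, a session to the private address goes through the jumpbox transparently; e.g. (a sketch reusing the key and user from the question, with the command-line user overriding the User set in the Host block):
ssh -i ~/aws/aws-keypair.pem ec2-user@10.0.168.43
scp -r -i ~/aws/aws-keypair.pem * ec2-user@10.0.168.43:testdir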

Ansible for Windows: Cannot access Windows machine via WinRM

I have installed Ansible on my Linux (14.04) workstation, along with Python 2.7.6, Ansible 2.3.0 and pywinrm 0.2.1. I want to use Ansible to configure my Windows VMs. I have a Windows VM (Win2k12r2) in Azure. I have opened all ports in the Azure network security group, and I have opened the WinRM ports (5985 and 5986) in the Windows Firewall.
I also ran the script located at https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1, in my Windows VM to ensure that WinRM is enabled.
I am able to RDP into the Windows VM, so I know that its public network interface is working.
As per the ansible docs (http://docs.ansible.com/ansible/intro_windows.html) I have an inventory file inventories/test1
[windows]
<azure_ip_address>
And I have a file group_vars/windows.yml
ansible_user: <my_user_id>
ansible_password: <azure_vm_password>
ansible_port: 5986
ansible_connection: winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation: ignore
ansible_winrm_scheme: https
When I run the command:
ansible windows -i inventories/test1 -m win_ping --ask-vault-pass -vvvvv
I get the following response:
No config file found; using defaults
Vault password:
Loading callback plugin minimal of type stdout, v2.0 from /home/jgodse/ansible/lib/ansible/plugins/callback/__init__.pyc
Using module file /home/jgodse/ansible/lib/ansible/modules/core/windows/win_ping.ps1
<azure_vm_ip_address> ESTABLISH SSH CONNECTION FOR USER: None
<azure_vm_ip_address> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<azure_vm_ip_address> SSH: ansible_password/ansible_ssh_pass not set: (-o) (KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<azure_vm_ip_address> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<azure_vm_ip_address> SSH: PlayContext set ssh_common_args: ()
<azure_vm_ip_address> SSH: PlayContext set ssh_extra_args: ()
<azure_vm_ip_address> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/jgodse/.ansible/cp/ansible-ssh-%h-%p-%r)
<azure_vm_ip_address> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/jgodse/.ansible/cp/ansible-ssh-%h-%p-%r <azure_vm_ip_address> '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477490434.84-228998637685624 `" && echo ansible-tmp-1477490434.84-228998637685624="` echo $HOME/.ansible/tmp/ansible-tmp-1477490434.84-228998637685624 `" ) && sleep 0'"'"''
<azure_vm_ip_address> | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/jgodse/.ansible/cp/ansible-ssh-<azure_vm_ip_address>-22-jgodse\" does not exist\r\ndebug2: ssh_connect: needpriv 0\r\ndebug1: Connecting to <azure_vm_ip_address> [<azure_vm_ip_address>] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address <azure_vm_ip_address> port 22: Connection timed out\r\nssh: connect to host <azure_vm_ip_address> port 22: Connection timed out\r\n",
"unreachable": true
}
I tried telnetting to the machine as follows:
$ telnet <azure_vm_ip_address> 5986
Trying <azure_vm_ip_address>...
Connected to <azure_vm_ip_address>.
Escape character is '^]'.
This tells me that telnet worked to 5986, and therefore my firewall rules were OK.
Do I have to do something to tell Ansible that I'm trying to connect to a Windows VM using WinRM? Or am I missing something to help my Ansible workstation connect to my Windows VM via WinRM?
It turns out that putting the connection variables in group_vars/windows.yml didn't work. I got rid of that file completely, and edited inventories/test1 to look like this:
[windows]
<azure_ip_address>
[windows:vars]
ansible_user=<my_user_id>
ansible_password=<azure_vm_password>
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
Running the command with this inventory worked, and win_ping completed successfully.
I then tried encrypting inventories/test1, and I got:
[WARNING]: No hosts matched, nothing to do
And win_ping didn't run.
My guess at this point is that the variables needed for connection initialization have to live in a plain-text inventory file: Ansible parses the inventory to find hosts before any vault decryption happens, so an encrypted inventory matches no hosts.
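If the goal is simply to keep the password out of the plain-text inventory, one hedged alternative (a sketch I have not verified against this exact setup) is to leave the connection variables in the inventory but move only the password into a vault-encrypted vars file passed at run time:
# inventories/test1 stays plain, minus the password line:
[windows]
<azure_ip_address>

[windows:vars]
ansible_user=<my_user_id>
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

# secrets.yml, encrypted with: ansible-vault encrypt secrets.yml
ansible_password: <azure_vm_password>

# run with:
ansible windows -i inventories/test1 -m win_ping --ask-vault-pass -e @secrets.yml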

Ansible cannot connect to host via specified private key

The following command works from my OS X terminal:
ssh vagrant@192.168.50.100 -i .vagrant/machines/centos100/virtualbox/private_key
I have an inventory file called "hosts" with the following:
192.168.50.100 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key
However when I run the following:
ansible -i hosts all -m ping -v
I get:
192.168.50.100 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
I was using Ansible 1.8.2, where ansible_user was still called ansible_ssh_user.
The correct hosts file is:
192.168.50.100 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key
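For what it's worth, on Ansible 2.x the newer variable name works and the private-key variable is unchanged, so the equivalent line would be:
192.168.50.100 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key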

Ansible doesn't connect to server - Connection timed out

I'm trying to connect to a server using Ansible. I have installed Ubuntu Server with OpenSSH, added my public key to the server, and connecting over plain ssh works:
ssh ubuntu@92.168.0.14
So I created an Ansible inventory file called hosts:
[dbservers]
192.16.0.14 ansible_ssh_port=22 ansible_ssh_user=ubuntu
And next I try to run the command:
ansible all -i hosts -m ping -vvvv
But when I run it I get an error:
<192.16.0.14> ESTABLISH CONNECTION FOR USER: ubuntu
<192.16.0.14> REMOTE_MODULE ping
<192.16.0.14> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/karol/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'IdentityFile=/home/karol/.ssh/id_rsa', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=ubuntu', '-o', 'ConnectTimeout=10', '192.16.0.14', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1431369235.84-48071922815331 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1431369235.84-48071922815331 && echo $HOME/.ansible/tmp/ansible-tmp-1431369235.84-48071922815331'"]
192.16.0.14 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/karol/.ansible/cp/ansible-ssh-192.16.0.14-22-ubuntu" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.16.0.14 [192.16.0.14] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: connect to address 192.16.0.14 port 22: Connection timed out
ssh: connect to host 192.16.0.14 port 22: Connection timed out
I use the same identity file, so why can't Ansible connect to the server?
Maybe the trouble is here:
ssh ubuntu@92.168.0.14
[dbservers]
192.16.0.14 ansible_ssh_port=22 ansible_ssh_user=ubuntu
These are two different IP addresses with different first octets: 92.168.0.14 in the working ssh command versus 192.16.0.14 in the inventory.
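Whichever address is the real one (most likely 192.168.0.14, though that is only a guess based on the two typos), make sure the ssh test and the inventory use the same value, e.g.:
[dbservers]
192.168.0.14 ansible_ssh_port=22 ansible_ssh_user=ubuntu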

ansible-playbook -> ControlPath too long

I'm just trying out a playbook that a colleague set up and that I needed to modify. The first problem I hit running it on my Mac was:
ERROR: Unable to find an inventory file, specify one with -i ?
This was easily solved by adding -i verif to the command, but then the following error occurred:
loadgen-verif-app1.internal.machines | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/andreas.joelsson/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
ControlPath too long
This is true for all 8 machines (loadgen-verif-app[1-8].internal.machines)
After some debugging suggesting the path could be too long, I tried the following command, with the same result:
ansible nukes -m ping -i verif -vvvv
Then I thought it was an ssh issue, but executing the command through ssh works:
ssh loadgen-verif-app1.internal.machines ping loadgen-verif-app2.internal.machines
And now I am stumped, because the ping command works on some machines outside the range listed above; their hostnames are shorter than the loadgen-verif-appX.internal.machines ones, if that matters. But then the plain ssh command shouldn't have worked either, I guess.
I have some ssh config settings for the targets as well, but they are no different from the ones for the hosts that did work with the ping command.
Host loadgen1
HostName loadgen-verif-app1.internal.machines
I'm also stumped because it works for my colleague, who is on a Mac as well, so I'm not sure what setting I'm missing. He doesn't need to provide -i verif either, which may also be related.
edit 2014-12-17:
Have tried modifying the ansible setting control_path according to http://docs.ansible.com/intro_configuration.html#control-path
We are running the same version of ansible
We are running the same version of OpenSSH.
We have the same ssh configs as far as we can tell.
Have been looking at the Host * entry I found in /etc/ssh_config and removed it, without progress, as suggested by e.g. https://help.openshift.com/hc/en-us/articles/202186044-Unable-to-git-clone-an-application-when-SSH-session-sharing-is-in-use-ControlPath-too-long-
edit 2015-01-08:
SE-C02N76PGG5RP:verif_provisioning andreas.joelsson$ ansible loadgen-verif-app1.internal.machines -m ping -i verif -vvvv
<loadgen-verif-app1.internal.machines> ESTABLISH CONNECTION FOR USER: andreas.joelsson
<loadgen-verif-app1.internal.machines> REMOTE_MODULE ping
<loadgen-verif-app1.internal.machines> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/andreas.joelsson/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 loadgen-verif-app1.internal.machines /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1420723708.99-33622628424665 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1420723708.99-33622628424665 && echo $HOME/.ansible/tmp/ansible-tmp-1420723708.99-33622628424665'
loadgen-verif-app1.internal.machines | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/andreas.joelsson/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
ControlPath too long
edit 2015-02-12:
SE-C02N76PGG5RP:verif_provisioning andreas.joelsson$ ansible nukes -m ping -i verif
loadgen-verif-app4.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app5.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app3.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app1.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app2.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app8.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app6.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
loadgen-verif-app7.internal.machines | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
And with the working one:
SE-C02N76PGG5RP:verif_provisioning andreas.joelsson$ ansible duke -m ping -i verif
steve-verif-app1.internal.machines | success >> {
"changed": false,
"ping": "pong"
}
The solution to this error is described in the Ansible documentation (the control_path setting linked above).
I was getting this error when I tried to connect to EC2 instances, and modifying the configuration described below solved my problem.
I am assuming that you have installed Ansible on the Mac using pip. So, please follow these steps:
create the /etc/ansible directory
sudo mkdir /etc/ansible
change the permission of it
sudo chown $(whoami):staff /etc/ansible
download the ansible.cfg file from here and place it inside the /etc/ansible directory
edit/uncomment the following lines
[ssh_connection]
control_path = %(directory)s/%%h-%%r
edit the ~/.ssh/config file:
Host *
GSSAPIAuthentication no
EXTRA STEP:
brew install https://raw.github.com/eugeneoden/homebrew/eca9de1/Library/Formula/sshpass.rb
OK, I made the changes in ansible.cfg and did the extra steps, but that did not work for me. The only way I got it working was to export ANSIBLE_SSH_CONTROL_PATH.
This is because, I think, Ansible always picks the default path, even after the change in ansible.cfg:
1.9.4 git:(master) pwd
/usr/local/Cellar/ansible/1.9.4
➜ 1.9.4 git:(master) ag ANSIBLE_SSH_CONTROL
libexec/lib/python2.7/site-packages/ansible/constants.py
187:ANSIBLE_SSH_CONTROL_PATH = get_config(p, 'ssh_connection', 'control_path', 'ANSIBLE_SSH_CONTROL_PATH', "%(directory)s/ansible-ssh-%%h-%%p-%%r")
Output without exporting ANSIBLE_SSH_CONTROL_PATH:
ControlPath="/Users/vinitkhandagle/.ansible/cp/ansible-ssh-%h-%p-%r"
Exported the variable as:
export ANSIBLE_SSH_CONTROL_PATH='%(directory)s/%%h-%%r'
Control path changes accordingly:
ControlPath="/Users/vinitkhandagle/.ansible/cp/%h-%r"
Adding a note to @techraf's comment above: I encountered the same issue when using Ansible in combination with Molecule in a virtual environment. Even though ansible.cfg had been read, ANSIBLE_SSH_CONTROL_PATH was overwritten, so in that case the workaround was to set it as an environment variable outside the configuration file itself, since that path is shorter than the default:
export ANSIBLE_SSH_CONTROL_PATH='%(directory)s/tmp'
