I am trying to configure Ansible for dynamic inventory. Following the Ansible documentation, I typed the following on my OS X laptop's command line:
GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup hostname | success >> {"ansible_facts": {"ansible_all_ipv4_addresses": ["x.x.x.x"],
and I am getting:
-bash: success: command not found
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
What should I type instead?
Try running just the command:
GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup
If this works, you will get a stream of JSON output (from the Ansible setup module) showing system information about any GCE hosts you have set up. The hostname | success >> {...} part in the documentation is sample output, not something to type – bash piped into a nonexistent command called success, which is why you got success: command not found.
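For context, GCE_INI_PATH points gce.py at its configuration file. A sketch of what ~/.gce.ini typically contained for the gce.py script of that era – all values here are placeholders, and the exact key names may differ between script versions:

```ini
# ~/.gce.ini — illustrative sketch, all values are placeholders
[gce]
gce_service_account_email_address = service-account@your-project.iam.gserviceaccount.com
gce_service_account_pem_file_path = /path/to/key.pem
gce_project_id = your-project-id
```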
Related
I'm trying to create a local config file for my Ansible setup, but when I run the list-hosts command it's unable to locate it.
The -i switch is for inventory; why are you feeding a config file into it?
Either use ansible -i dev or don't specify -i at all to use the configured value from ansible.cfg.
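For instance, a minimal ansible.cfg sketch that lets you drop -i entirely – assuming your inventory file is named dev and sits next to the config (both names are illustrative; note that old 1.x releases spelled this key hostfile):

```ini
# ansible.cfg — hypothetical minimal example
[defaults]
inventory = ./dev
```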
SYNOPSIS
I was getting the following error when trying to run both the standard ansible all -m ping and ansible all -a "pwd" (the second command was just in case the ping module was the issue):
..."module_stdout": " File \"/tmp/ansible_C2IrV6/ansible_module_command.py\", line 183\r\n out = b''\r\n ^\r\nSyntaxError: invalid syntax\r\n",...
My issue was that I was somehow running an unreleased Ansible version (2.4.0) due to installation via pip. This was conflicting with my yum installation (2.3.1.0), compounded by an incompatibility with my current Python version (2.6.6).
My solution was to uninstall both versions to ensure I no longer had ansible on my system. From there, I used yum to reinstall ansible to a version that I knew was compatible (2.3.1.0). I have also read that it's possible to use pip to specify the version:
pip install ansible==<version>
There are more details on installing different versions here.
ORIGINAL POST
I've seen many instances where people seem to have my exact issue, but it always ends up being something slightly different. Regardless, I attempted those solutions, to no avail.
I'm running Ansible 2.4.0 on what I believe is RHEL6:
$ uname -a
Linux <server address> 2.6.32-642.11.1.el6.x86_64 #1 SMP Wed Oct 26 10:25:23 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
The host I'm communicating with is running RHEL5.
When I run this command:
$ sudo ansible all -a "pwd" -vvvv
I get the following result:
Verbose Ansible Output
Extracting the ssh command from the above output:
$ ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o 'ControlPath=~/.ansible/cp' user@hostDestination '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
Result:
Extracted Ansible SSH Command Output
Based on the result above, it seems like the command makes a successful connection, so I don't see why Ansible is giving me the Permission Denied error.
EDIT: Thanks to @KonstantinSuvorov, I was able to pin down my issue more precisely – I seem to have attributed a module failure to the command's Permission Denied result, because I got the best response using sudo. See his post for why that was an issue.
Some Additional Information
On the server with Ansible installed, I have super user privileges. On the destination host I do not.
Normal SSHing into the destination host works perfectly fine and vice versa.
The following is what we see upon successfully logging in to any of our servers here at work, so don't be alarmed when it isn't familiar:
***************************************************************
* *
* Do not attempt to log on unless you are an authorized user. *
* *
* You must have a valid Network account. *
* *
***************************************************************
If it's necessary, I'll provide my ansible.cfg when I get back to the office tomorrow.
Forgive me if this belongs in SuperUser, ServerFault, or somewhere else.
UPDATE
Running the command without sudo:
ansible all -a "/bin/pwd" -vvvv
Result:
Verbose Ansible Command Without Sudo
UPDATE 2
After placing ssh_args= under [ssh_connection] in my ansible.cfg:
ansible all -a "pwd"
Result:
hostDestination | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "\n***************************************************************\n* *\n* Do not attempt to log on unless you are an authorized user. *\n* *\n* You must have a valid Network account. *\n* *\n***************************************************************\n\nConnection to hostDestination closed.\r\n",
"module_stdout": " File \"/tmp/ansible_C2IrV6/ansible_module_command.py\", line 183\r\n out = b''\r\n ^\r\nSyntaxError: invalid syntax\r\n",
"msg": "MODULE FAILURE",
"rc": 0
}
NOTE: Running the command with Ansible's raw module is successful:
ansible all -m raw -a "pwd"
Result:
hostDestination | SUCCESS | rc=0 >>
/home/user
***************************************************************
* *
* Do not attempt to log on unless you are an authorized user. *
* *
* You must have a valid Network account. *
* *
***************************************************************
Shared connection to hostDestination closed.
When I look at the verbose output (-vvvv) of the regular command I see the following modules being used:
Using module_utils file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/module_utils/basic.py
Using module_utils file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/module_utils/_text.py
Using module_utils file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/module_utils/parsing/__init__.py
Using module_utils file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/module_utils/pycompat24.py
Using module_utils file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/module_utils/six/__init__.py
Using module file /usr/lib/python2.6/site-packages/ansible-2.4.0-py2.6.egg/ansible/modules/commands/command.py
I feel like one or more modules may be the issue seeing as the last line in the output above is similar to the following line in the failed command's output:
... /tmp/ansible_C2IrV6/ansible_module_command.py\ ...
It looks as if the Python interpreter is having an issue with the syntax of the empty bytes literal (b'') in this file. Unfortunately, Ansible deletes the file immediately after running the command – preventing me from looking at line 183 to make my own assessment.
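For illustration, the construct the traceback points at can be reproduced in isolation. b'' is a bytes literal, which only became valid syntax in Python 2.6; an older interpreter on the remote host (RHEL 5 ships Python 2.4) fails at parse time with exactly this SyntaxError, before the module body ever runs. A minimal sketch on a modern Python:

```python
# The kind of line the module failed on, in isolation.
# b'' is a bytes literal: valid syntax from Python 2.6 onward,
# but a parse-time SyntaxError on older interpreters such as
# the Python 2.4 shipped with RHEL 5.
out = b''
print(type(out).__name__)  # bytes on Python 3
```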
You run ansible with sudo, so it runs under the root account and tries to use SSH keys from the root account (which are absent?).
When you try the ssh command yourself, you run it under your current user account and use a different access key.
I believe you have no reason to use sudo here.
I'm a beginner with Ansible, trying to write a string to a file with an ad-hoc command, and I'm playing around with the replace module. The file I'm trying to write to is /etc/motd.
ansible replace --sudo /etc/motd "This server is managed by Ansible"
Any help would be appreciated thanks!
Have a look at the lineinfile module usage and the general syntax for ad-hoc commands.
What you are looking for is:
ansible target_node -b -m lineinfile -a 'dest=/etc/motd line="This server is managed by Ansible"'
in extended form:
ansible target_node --become --module-name=lineinfile --args='dest=/etc/motd line="This server is managed by Ansible"'
Explanation:
target_node is the hostname or group name as defined in the Ansible inventory file
--become (-b) instructs Ansible to use sudo
--module-name (-m) specifies the module to run (lineinfile here)
--args (-a) passes arguments to the module (these vary depending on the module)
dest points to the destination file
line instructs Ansible to ensure a particular line is in the file
If you would like to replace the whole contents of /etc/motd, you should use the copy module:
ansible target_node -b -m copy -a 'dest=/etc/motd content="This server is managed by Ansible"'
Notice one of the arguments is changed accordingly.
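For reference, the same two operations sketched as a playbook – the target_node group name carries over from the ad-hoc examples above; in practice you would keep only one of the two tasks, since the copy task would overwrite what lineinfile added:

```yaml
# motd.yml — playbook sketch of the ad-hoc commands above
- hosts: target_node
  become: true
  tasks:
    - name: Ensure a line is present in /etc/motd
      lineinfile:
        dest: /etc/motd
        line: "This server is managed by Ansible"

    - name: Replace the whole contents of /etc/motd
      copy:
        dest: /etc/motd
        content: "This server is managed by Ansible"
```

Run it with ansible-playbook motd.yml -i your_inventory.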
I'm probing a fresh Arch Linux installation on a Raspberry Pi 2 like so:
ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
This reads to me as: run the setup module against the hosts in the arch group, connecting as the user "alarm" and asking for that user's password. However, the user that eventually attempts to connect is "root".
Here's the debug response:
Loaded callback minimal of type stdout, v2.0
<192.168.1.18> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.1.18
192.168.1.18 | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! Authentication failed.",
"unreachable": true
}
The inventory looks like this:
[arch]
192.168.1.18
Some things that may or may not be relevant are the following:
ssh logins via root are not permitted
sudo is not installed
default user and pass are "alarm" : "alarm"
no ssh key being copied to the machine hence the paramiko connection attempt
What is NOT ignored and leads to a successful connection is adding ansible_user=alarm to the IP line in the inventory file.
EDIT
Found this interesting passage in the official docs: http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable which states:
Another important thing to consider (for all versions) is that connection specific variables override config, command line and play specific options and directives. For example:
ansible_user will override -u and remote_user.
The original question seems to remain, though: without any mention of ansible_user in the inventory, why is root being used instead of the user explicitly given via -u?
EDIT_END
Is this expected behaviour?
Thanks
You don't need to specify paramiko as the connection type; Ansible will figure that part out. You may have a group_vars directory with an ansible_user or ansible_ssh_user variable defined for this host, which could be overriding the alarm user.
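For example, a hypothetical group_vars file like the following would silently override the -u alarm flag, since connection variables from inventory take precedence over command-line options (the filename and value here are illustrative):

```yaml
# group_vars/arch.yml — hypothetical; would force root despite -u alarm
ansible_user: root
```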
I was able to replicate your test on Ansible 2.0.0.2 without any issues against a Raspberry Pi 2 running Raspbian:
➜ ansible ansible -i PI2 arch -m setup -u alarm -vvvv -k
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback minimal of type stdout, v2.0
<192.168.1.84> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.84
CONNECTION: pid 78534 waiting for lock on 9
CONNECTION: pid 78534 acquired lock on 9
paramiko: The authenticity of host '192.168.1.84' can't be established.
The ssh-rsa key fingerprint is 54e12e8153e0319f450934d606dca7df.
Are you sure you want to continue connecting (yes/no)?
yes
CONNECTION: pid 78534 released lock on 9
<192.168.1.84> EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" )
<192.168.1.84> PUT /var/folders/39/t0dm88q50dbcshd5nc5m5c640000gn/T/tmp5DqywL TO /home/alarm/.ansible/tmp/ansible- tmp-1454502995.07-263327298967895/setup
<192.168.1.84> EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/alarm/.ansible/ tmp/ansible-tmp-1454502995.07-263327298967895/setup; rm -rf "/home/alarm/.ansible/tmp/ansible-tmp-1454502995. 07-263327298967895/" > /dev/null 2>&1
192.168.1.84 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.168.1.84"
],
I virtualenv'd myself an Ansible 1.9.4 sandbox, copied over the ansible.cfg and the inventory, and ran the command again. It kinda worked as expected:
⤷ ansible --version
ansible 1.9.4
configured module search path = None
(ANS19TEST)~/Documents/Code/VENVS/ANS19TEST
⤷ ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
SSH password:
<192.168.1.18> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.18
<192.168.1.18> REMOTE_MODULE setup
From where I'm standing I'd say this is a bug. Maybe somebody can confirm?! This goes to the bugtracker then...
Cheers
EDIT
For brevity I omitted an important part of my inventory file, which ultimately is responsible for the behavior. It looks like this:
[hypriot]
192.168.1.18 ansible_user=root
[arch]
192.168.1.18
Quote from the Ansible bugtracker:
The names used in the inventory are keys in a dictionary, so everything you put in there as host-specific variables will be merged into one big dictionary. That means that in some conflicting cases variables are superseded by other values.
You can prevent this by using different names for the same host (e.g. using IP address and hostname, or an alias or DNS-alias) and in that case you can still do what you like to do.
So my inventory looks like this now:
[hypriot]
hypriot_local ansible_host=192.168.1.18 ansible_user=root
[archlinux]
arch_local ansible_host=192.168.1.18
This works fine. The corresponding issue on the Ansible tracker is here: https://github.com/ansible/ansible/issues/14268
The following command works from my OSX terminal:
ssh vagrant@192.168.50.100 -i .vagrant/machines/centos100/virtualbox/private_key
I have an inventory file called "hosts" with the following:
192.168.50.100 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key
However when I run the following:
ansible -i hosts all -m ping -v
I get:
192.168.50.100 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
I was using Ansible 1.8.2, where ansible_user was still ansible_ssh_user.
The correct hosts file is:
192.168.50.100 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/centos100/virtualbox/private_key