I'm trying to retrieve some information from a Cisco switch via the snmp_facts module (yes, pysnmp is installed on my Ansible host). I keep getting this error:
TASK [snmp_facts] ********************************************************************************
fatal: [10.1.1.1]: FAILED! => changed=false
  msg: Missing required pysnmp module (check docs)
This is the command I am running:
ansible 192.168.1.11 -m snmp_facts -a 'community=blah host={{ inventory_hostname }} version=v2c' -k
In playbooks I wrote earlier I used delegate_to: localhost, but I haven't had any success here; it doesn't look like a valid option for an ad-hoc command.
pysnmp is installed on my ansible host
If that's true, you'll need to have Ansible run that module using the Python that contains pysnmp, not the one that is running Ansible (as they can be, and very often are, different).
It's close to what @larsks said:
ansible all -c local -i localhost, \
  -e ansible_python_interpreter=/the/path/to/the/pysnmp/python ...
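Put together, a full invocation might look like this (a sketch only; the interpreter path is a placeholder, while the community string and switch address come from the question):
ansible all -c local -i localhost, \
  -e ansible_python_interpreter=/path/to/pysnmp/bin/python \
  -m snmp_facts -a 'community=blah host=192.168.1.11 version=v2c'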
I have the following logic that I would like to implement with Ansible:
Before updating some operating system packages, I want to check some other remote dependencies, which involves querying some endpoints and deciding whether the next version is good or not.
The script new_version_available returns 0 if there is something new and 1 if there isn't.
To avoid installing unnecessary packages in production, or opening unnecessary ports in my firewall in the DMZ, I would like to run this script locally on my host and, if it succeeds, run the next task remotely.
tasks:
  - name: Check if there is new version available
    command: "{{ playbook_dir }}/new_version_available"
    delegate_to: 127.0.0.1
    register: new_version_available
    ignore_errors: False

  - name: Install our package
    command:
      cmd: '/usr/bin/our_installer update'
      warn: False
    when: new_version_available is succeeded
Which gives me the following error:
fatal: [localhost -> 127.0.0.1]: FAILED! => {"changed": false, "cmd": "/home/foo/ansible-deploy-bar/new_version_available", "msg": "[Errno 2] No such file or directory", "rc": 2}
That means my command cannot be found; however, my script exists and I have permission to access it.
My development environment, where I'm testing the playbook, runs in a virtual machine behind NAT, which forwards the guest's port 22 to port 2222 on my host, so to log in to my VM I run ssh root@localhost -p 2222. My inventory looks like:
foo:
  hosts:
    localhost:2222
My Question is:
What would be the Ansible way to achieve what I want, i.e. run some command locally, store the result in a register, and use it as a condition in a task? Or should I run the command and pass the result to Ansible as an environment variable?
I'm using this documentation as support https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
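For reference, the pattern being asked about is usually written like this (a minimal sketch using the names from the question; failed_when: false is one way to keep rc=1, meaning "nothing new", from aborting the play):
tasks:
  - name: Check if there is a new version available
    command: "{{ playbook_dir }}/new_version_available"
    delegate_to: localhost        # run on the control node, not on the VM
    register: new_version_available
    failed_when: false            # rc=1 just means "nothing new"

  - name: Install our package
    command: /usr/bin/our_installer update
    when: new_version_available.rc == 0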
I've been trying to get Ansible to use Python 3 on remote targets in order to run playbooks against them; however, simply running a playbook against a target with Python 3 installed fails with the error message:
"/bin/sh: 1: /usr/bin/python: not found\r\n"
Answers I've found online only seem to discuss configuring Ansible on the control host to use Python 3, rather than the remote. Is it possible to configure the remote to use Python 3 rather than 2?
You can set the ansible_python_interpreter variable to tell Ansible which version of Python to use. You can set this globally, as C. Dodds has suggested in their answer, but it generally makes more sense to set this as a per-host inventory variable. E.g., using a YAML inventory:
all:
  hosts:
    myhost:
      ansible_python_interpreter: /usr/bin/python3
Or using an ini-style inventory:
myhost ansible_python_interpreter=/usr/bin/python3
And of course you can set this per-hostgroup if you have several hosts that require the same configuration.
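For example, with an ini-style inventory (the webservers group name is just illustrative):
[webservers:vars]
ansible_python_interpreter=/usr/bin/python3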
This is discussed in the Ansible documentation.
Adding the argument "-e 'ansible_python_interpreter=/usr/bin/python3'" was the solution to this:
ansible-playbook sample-playbook.yml -e 'ansible_python_interpreter=/usr/bin/python3'
Just starting out with Ansible. I configured the hosts file like this:
[webserver]
<remote-server-ip> ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
When I run:
ansible all -m ping
I get:
<remote-server-ip> | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Couldn't read packet: Connection reset by peer\r\n",
    "unreachable": true
}
I can connect with no issues if I run:
ssh -i <full-path-to-private-ssh-key> <user>@<remote-server-ip>
Notes:
There is no password on the SSH key.
The project is located at ~/my_project_name.
I also tried using ansible_connection=local, and while ansible all -m ping appeared to work, in reality all it does is allow me to execute tasks that modify the host machine Ansible is running on.
The ansible.cfg file has not been modified, though it is in a different directory: /etc/ansible/ansible.cfg.
Ansible by default tries to connect to localhost through SSH. For localhost, set ansible_connection to local in your hosts file, as shown below:
<remote-server-ip> ansible_connection=local ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
Refer to this documentation for more details.
Hope this helps!
I think I've seen this before; can you try adding the below to the hosts file and see if that works?
ansible_connection=ssh ansible_port=22
I figured out that this is an issue with the version of Ansible I was using (2.3.1). Using version 2.2.0.0 works with no problems.
I want to get a list of installed services and their versions on Debian EC2 instances.
I can't work out how to get the list of packages that dpkg --list shows; I want to collect this list through Ansible across my little server farm.
The easiest would be to simply run a shell task:
- shell: dpkg --list
  register: packages
Now you have the result stored in packages.stdout_lines.
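For example, a follow-up task to print it (a minimal sketch):
- debug:
    var: packages.stdout_lines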
If you only want the package names, run something like this:
dpkg --get-selections | grep -v "deinstall" | cut -f1
To run the task on the Ansible control host you need to delegate the task:
- shell: dpkg --list
  register: packages
  delegate_to: localhost
Now the command is executed on the control host (localhost) and the result is stored in packages.stdout_lines.
---
- hosts: hostblockname
  tasks:
    - name: Get Packages List
      shell: dpkg --list > packageslist
      register: packages

    - fetch: src=/root/packageslist dest=/root/packagesdirectory/
I added the above playbook, which served my purpose. There may be room for optimization, but it gets the job done for me.
I wanted to get a list of all installed packages, in a proper format, on all cloud instances, and then collect those lists into files on my Ansible server.
This playbook first generated list of installed packages on remote instances and then fetched those files back to main Ansible host.
The command to run playbook was:
ansible-playbook -i hostslistfile myplaybook.yml
myplaybook.yml is as above.
hostslistfile is simple file which is as below:
[hostblockname]
192.168.0.144:22
production (inventory file):
#main ansible_host=54.293.2785.210 ansible_port=22 ansible_ssh_user=ubuntu
54.293.2785.210 ansible_ssh_user=ubuntu
Running an ad-hoc command works: ansible all -i production -a "hostname"
But when I uncomment the first line and comment the second:
ansible main -i production -a "hostname" -vvvv
Gives the following error:
main | FAILED => SSH Error: ssh: Could not resolve hostname main: Name or service not known
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Why is this not working?
ansible_host is the new (>=2.0) syntax.
Before that it was simply ansible_ssh_host, but this has been deprecated in the more recent versions of Ansible (>=2.0):
Ansible 2.0 has deprecated the “ssh” from ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port to become ansible_user, ansible_host, and ansible_port. If you are using a version of Ansible prior to 2.0, you should continue using the older style variables (ansible_ssh_*). These shorter variables are ignored, without warning, in older versions of Ansible.
If you're using an earlier version of Ansible then ansible_ssh_host should work for you.
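So under Ansible >= 2.0, the inventory line from the question, written consistently in the new style, would be:
main ansible_host=54.293.2785.210 ansible_port=22 ansible_user=ubuntu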