Run local command with Ansible and share variable in the remote context

I have the following logic that I would like to implement with Ansible:
Before updating some operating system packages, I want to check some other remote dependencies, which involves querying some endpoints and deciding whether the next version is good or not.
The script new_version_available returns 0 if there is something new and 1 if there isn't.
To avoid installing unnecessary packages in production, or opening unnecessary ports in my firewall in the DMZ, I would like to run this script locally on my host and, if it succeeds, run the next task remotely.
tasks:
  - name: Check if there is new version available
    command: "{{ playbook_dir }}/new_version_available"
    delegate_to: 127.0.0.1
    register: new_version_available
    ignore_errors: False
  - name: Install our package
    command:
      cmd: '/usr/bin/our_installer update'
      warn: False
    when: new_version_available is succeeded
Which gives me the following error:
fatal: [localhost -> 127.0.0.1]: FAILED! => {"changed": false, "cmd": "/home/foo/ansible-deploy-bar/new_version_available", "msg": "[Errno 2] No such file or directory", "rc": 2}
That means that my command cannot be found; however, my script exists and I have permission to access it.
My development environment, where I'm testing the playbook, runs in a virtual machine behind NAT, with the guest port 22 forwarded to port 2222 on my host, so to log in to my VM I do ssh root@localhost -p 2222. My inventory looks like:
foo:
  hosts:
    localhost:2222
My question is:
What would be the Ansible way to achieve what I want, i.e. run some command locally, pass the result to a register, and use it as a condition in a task? Or should I run the command and pass the result to Ansible as an environment variable?
I'm using this documentation as support https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
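For reference, a minimal sketch of the pattern the question describes, assuming the script sits next to the playbook on the control node. Forcing ansible_connection: local on the delegated task avoids the SSH hop that an inventory entry like localhost:2222 would otherwise introduce, and ignore_errors: true lets a non-zero exit skip the install instead of aborting the play:
- name: Check for a new version on the control node
  command: "{{ playbook_dir }}/new_version_available"
  delegate_to: localhost
  vars:
    ansible_connection: local  # run locally even if the inventory maps localhost to SSH
  register: new_version_available
  ignore_errors: true          # rc=1 should skip the install, not fail the play

- name: Install our package
  command: '/usr/bin/our_installer update'
  when: new_version_available is succeeded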

Related

Ansible module failure. "/bin/sh: sudo: command not found" although it is available on machine I am running against

I have a Gitlab pipeline that uses a gitlab runner to deploy from. From the runner, I run ansible to reach out and configure one of our servers.
In my pipeline step where I run ansible-playbook, I have the following setup:
deploy:
  image: registry.com/ansible
  stage: deploy
  script:
    - ansible-playbook server.yml --inventory=hosts.yml
This reaches out to my host and begins the deploy, but hits a snag on the first task that has a "become: yes" statement in it. It fails with the following error:
TASK [mytask : taskOne] ************
task path: my/file/location/path.yml
fatal: [server01]: FAILED! => {
    "changed": false,
    "module_stderr": "/bin/sh: sudo: command not found\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
I can log in to my server (server01) and run sudo without issues. Any thoughts on what could be causing this? Thanks.
My guess is that you are not using the same user in the GitLab pipeline as for your sudo test on the machine. In that case you should become that user on the host and try the sudo command there to troubleshoot. It is probably a matter of PATH rather than the sudoers configuration (a common problem).
As a workaround (it will not solve the sudo problem) you could try to use an alternate become_method like su, more detail in the doc.
- name: Run a command with become su
  command: somecommand
  become: yes
  become_method: su
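If su works for every task on that host, it may be less repetitive to set the become method once at the inventory level rather than per task; a sketch, assuming a YAML inventory (hosts.yml) with server01 defined directly under all:
all:
  hosts:
    server01:
      ansible_become_method: su  # applies to every become task on this host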

Ansible delegation from the command line

I'm trying to retrieve some information from a Cisco switch via the snmp_facts module (yes, pysnmp is installed on my Ansible host). I keep getting this error:
TASK [snmp_facts] ********************************************************************************
fatal: [10.1.1.1]: FAILED! => changed=false
  msg: Missing required pysnmp module (check docs)
This is the command I am running:
ansible 192.168.1.11 -m snmp_facts -a 'community=blah host={{ inventory_hostname }} version=v2c' -k
In playbooks I wrote earlier I used delegate_to: localhost, but I haven't been successful with it here; it doesn't look like a valid option on the command line.
pysnmp is installed on my ansible host
If that's true, you'll need to have ansible run that module using the Python that contains pysnmp, not the one that is running ansible (as they can be, and very often are, different).
It's close to what @larsks said:
ansible -c local -i localhost, \
    -e ansible_python_interpreter=/the/path/to/the/pysnmp/python ...
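The same idea in playbook form, as a sketch; /opt/pysnmp-venv/bin/python is a hypothetical interpreter path that actually has pysnmp installed:
- name: Gather SNMP facts, running the module on the control node
  snmp_facts:
    host: "{{ inventory_hostname }}"
    community: blah
    version: v2c
  delegate_to: localhost
  vars:
    ansible_connection: local
    ansible_python_interpreter: /opt/pysnmp-venv/bin/python  # hypothetical path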

Run Same Ansible Playbook for Different Local Users

I would like to use an Ansible playbook to set up identical configurations for two different users on my localhost (i.e., admin and brian). Currently, the "common" role installs programs that are accessible by both users. In addition, I have settings that are user specific (e.g., desktop wallpaper). When I run my playbook, the user-specific settings are updated for one user but not the other. For example, if I run my playbook, the wallpaper for brian is changed but the wallpaper for admin is left untouched. I am aware of become_user, but do not want to use it for every task that I run. Is it possible to define the hosts file or playbook in such a way that I can simply specify which users on localhost I want the playbook to run against?
I have tried the approach from "Is there anyway to run multiple Ansible playbooks as multiple users more efficiently?" at the per-role level, but am getting the following error:
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/usr/bin/python2: can't open file '/home/brian/.ansible/tmp/ansible-tmp-1525409723.54-208533437554058/apt.py': [Errno 13] Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 2}
site.yml

---
- name: ansible master playbook
  hosts: localhost
  connection: local
  roles:
    - role: common

roles/common/tasks/main.yml

---
- import_tasks: gsettings.yml

roles/common/tasks/gsettings.yml

---
- name: Use 12 hr. clock format
  dconf:
    key: "/org/gnome/desktop/interface/clock-format"
    value: "'12h'"
In Ansible you have the option to launch the playbook as:
ansible-playbook playbooks/playbook.yml --user user
Please note that specifying a user can sometimes conflict with a user defined in /etc/ansible/hosts.
(From Ansible documentation)
My solution was to simply log into each user on my local machine and run my ansible playbooks locally. An underlying issue with using the dconf module to change gsettings appears to be that the D-Bus session address for the other user is not set, so the gsettings for the other user do not stick. See the related questions below.
https://askubuntu.com/questions/655238/as-root-i-can-use-su-to-make-dconf-changes-for-another-user-how-do-i-actually
http://docs.ansible.com/ansible/latest/modules/dconf_module.html
Access another user's D-Bus session
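If per-task privilege switching is acceptable after all, the dconf task can be looped over both users; a sketch, assuming become is permitted on localhost and that the dconf module can discover each user's D-Bus session address (it needs psutil on the target for that, so check the module docs):
- name: Use 12 hr. clock format for each user
  dconf:
    key: "/org/gnome/desktop/interface/clock-format"
    value: "'12h'"
  become: yes
  become_user: "{{ item }}"  # the module looks up this user's D-Bus session
  loop:
    - admin
    - brian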

Ansible test hosts fails

Just starting out with Ansible. I configured the hosts file like this:
[webserver]
<remote-server-ip> ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
When I run:
ansible all -m ping
I get:
<remote-server-ip> | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Couldn't read packet: Connection reset by peer\r\n",
    "unreachable": true
}
I can connect with no issues if I run:
ssh -i <full-path-to-private-ssh-key> <user>@<remote-server-ip>
Notes:
There is no password on the SSH key.
The project is located at ~/my_project_name.
I also tried using ansible_connection=local, and while ansible all -m ping appeared to work, in reality all it does is allow me to execute tasks that modify the host machine Ansible is running on.
The ansible.cfg file has not been modified, though it is in a different directory: /etc/ansible/ansible.cfg.
Ansible by default tries to connect to localhost through ssh. For localhost, set ansible_connection to local in your hosts file, as shown below.
<remote-server-ip> ansible_connection=local ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
Refer to the documentation for more details.
Hope this helps!
I think I saw this earlier; can you try adding the below in the hosts file and see if that works?
ansible_connection=ssh ansible_port=22
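Combined with the variables already in the question, the full inventory line would look something like this (the placeholders are the question's own):
[webserver]
<remote-server-ip> ansible_connection=ssh ansible_port=22 ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>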
I figured out that this is an issue with the version of Ansible I was using (2.3.1). Using version 2.2.0.0 works with no problems.

Ansible execute command locally and then on remote server

I am trying to start a server using the Ansible shell module with ipmitools and then change its configuration once it is up.
The server with Ansible installed also has ipmitools.
On the server with Ansible, I need to execute ipmitools to start the target server and then execute playbooks against it.
Is there a way to execute local ipmi commands on the server running Ansible to start the target server, and then execute all playbooks over ssh on the target server?
You can run any command locally by providing the delegate_to parameter.
- shell: ipmitools ...
  delegate_to: localhost
If ansible complains about connecting to localhost via ssh, you need to add an entry in your inventory like this:
localhost ansible_connection=local
or in host_vars/localhost:
ansible_connection: local
See behavioral parameters.
Next, you're going to need to wait until the server is booted and accessible through ssh. Here is an article from Ansible covering this topic, and this is the task they have listed:
- name: Wait for Server to Restart
  local_action:
    wait_for
    host={{ inventory_hostname }}
    port=22
    delay=15
    timeout=300
  sudo: false
If that doesn't work (since it is an older article and I think I previously had issues with this solution) you can look into the answers of this SO question.
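For what it's worth, that snippet uses pre-2.0 syntax (local_action with key=value arguments, sudo: instead of become:); the equivalent in current syntax would be something like:
- name: Wait for server to restart
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    delay: 15
    timeout: 300
  delegate_to: localhost  # run the check from the control node
  become: false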
