I am a newbie in the Ansible world. I have already created some playbooks and I am getting more familiar with this technology by the day.
In my playbooks I have always used the yum module to install and manage packages, but recently I found out about another module, package, that claims to be OS independent.
Hence my question: what is the difference between them?
In particular, if I create a role and a playbook that I know will be executed in a RHEL environment (where yum is the default package manager), what advantage do I get from using package rather than yum?
Thanks in advance for your help.
Ansible's package module autodetects your OS's default package manager (e.g. yum, apt) from the gathered facts.
The fact that stores it is ansible_pkg_mgr.
Here is a command to check it:
ansible localhost -m setup | grep ansible_pkg_mgr
If you are using multiple operating systems in your environment, then instead of specifying a package manager you should use package over yum or apt.
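For example, a task like the following will use yum on RHEL and apt on Debian without any changes (a minimal sketch; git is only an illustrative package whose name happens to be the same on both families):

- name: Install git with whatever package manager the host uses
  package:
    name: git
    state: present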
The Ansible package module is more generic, but it looks like you still have to handle differences in package names. From the package module documentation:
# This uses a variable as this changes per distribution.
- name: remove the apache package
  package:
    name: "{{ apache }}"
    state: absent
In this case the package name is:
RHEL - httpd
Debian/Ubuntu - apache2
so the {{ apache }} variable must be set according to the OS.
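One way to set that variable (a minimal sketch; the inline conditional on ansible_os_family is just one option, group_vars or include_vars work equally well):

- name: remove the apache package
  vars:
    apache: "{{ 'httpd' if ansible_os_family == 'RedHat' else 'apache2' }}"
  package:
    name: "{{ apache }}"
    state: absent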
I'm new to Ansible and I'm trying to create a package to deploy to a Windows client running Chocolatey. I have all the WinRM connections working between my Ansible server and my Windows client, but I am struggling to understand how to define and create packages.
As an example:
I want to install Notepad++ on the Windows client. I do not want it to connect to the internet to download the installer executable. Instead, I want the Ansible server to push the exe to the client, then have the client execute it locally.
Can anyone explain and/or provide an example of a playbook to handle this? I know this is more easily achievable on Windows via other products like SCCM, but for these purposes Ansible is required.
The Ansible playbook task you would use looks like this:
- name: Install notepadplusplus.install
  win_chocolatey:
    name: notepadplusplus.install
    version: '8.4.5'
    source: https://YourInternalNuGetV2Repo
    state: present
You would then host the Chocolatey package on an internal NuGet V2 repository.
I think the part that's missing here is that you don't have a package repository for Chocolatey to pull from. If you want to deploy a package with Chocolatey, it needs to get it from somewhere; the Ansible playbooks don't allow you to create packages directly and push them to machines, they mostly just allow you to set up Chocolatey and run Chocolatey commands.
If you want to build a Chocolatey package directly on the Ansible server, the Ansible modules for Chocolatey specifically don't have that functionality built in. You could potentially use other Ansible modules to construct the necessary script and zip files for the Chocolatey package, bundle in a targeted installer .exe, and upload it to the client. I'm not sure exactly how you'd do that; Ansible is generally for the deployment itself more so than for packaging things for deployment.
Then you could instruct the client to install it by first adding the local folder that the package was uploaded to as a Chocolatey source:
- win_chocolatey_source:
    name: local
    state: present
    source: C:\packages_folder

- win_chocolatey:
    name: package_name
    source: local
    state: latest
Instead, I want the Ansible server to push the exe to the client, then have the client execute it locally.
If that is all you want, then you don't need Chocolatey. Use win_copy to copy the EXE over from the server to the client and use something like win_command to execute it.
There are some caveats. You will need the command-line arguments to make the installer run silently and headless, and you'll need to test it all, as some installers return immediately (so control returns to your playbook) even though they are still installing.
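A minimal sketch of that approach (the file names, destination path and the /S silent switch are assumptions; check your installer's actual silent-install flags):

- name: Push the installer to the client
  win_copy:
    src: files/npp.installer.exe
    dest: C:\Temp\npp.installer.exe

- name: Run the installer silently
  win_command: C:\Temp\npp.installer.exe /S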
If you need to use Chocolatey then the other answers here are what you are looking for.
I wanted to uninstall the OMS agent on our Linux machines. Unfortunately, we have different OMS agent versions assigned to each machine. I hard-coded the version in my Ansible script:

command: sudo {{ file_path }}/omsagent-1.13.9-0.universal.x64.sh --purge

It only works for machines with that same OMS agent version; otherwise it fails.
I tried adding a wildcard, but I get an error stating that the command was not found:

stderr: "sudo: /home/filename/omsagent-*: command not found"

when I change my previous command to:

command: sudo {{ file_path }}/omsagent-*.universal.x64.sh --purge
Since I do not have this specific agent in place, I can't provide a fully tested working example, but here is some guidance.
According to the package documentation and All bundle operations, the bundle has an option
--version-check    Check versions already installed to see if upgradable.
which should report the installed version. Furthermore, any installed agent has a directory with a service control script
/opt/microsoft/omsagent/bin/service_control ...
and probably others, like scxadmin --version. By executing one or the other it should be possible to gather the correct installed version of the agent.
- name: Gather installed OMS agent version
  become: true
  become_method: sudo
  shell:
    cmd: /opt/microsoft/omsagent/bin/service_control status | grep <whatever is necessary to get the version string only>
  register: VERSION
  changed_when: false
  check_mode: false
Please note that instead of using sudo within the command, you should use become. Since it is a version-reporting task only, you should also set changed_when and check_mode.
After the correct version is gathered, you can use it like:
- name: Purge installed OMS agent version
  become: true
  become_method: sudo
  shell:
    cmd: "omsagent-{{ VERSION.stdout }}.universal.x64.sh --purge"
Is there any reason why the option --upgrade or --force can't be used?
You may also have a look into How to troubleshoot issues with the Log Analytics agent for Linux, there is a standalone versionless purge script available.
I managed to create playbooks to back up an existing running WordPress server by installing a backup server on a Debian VM, so I was using the APT package manager in Ansible.
Now I would like to be able to use the same playbooks to install the backup on an Alpine Linux server at the same time.
Is there a more generic way than using the APT or APK modules?
If not, what would you recommend?
Regards,
FB
Yes, and it's called the package module; see https://docs.ansible.com/ansible/2.9/modules/package_module.html
Package names, however, might differ from distro to distro, and you will still have to provide distro-specific instructions. Quoting the docs:
Package names also vary with package manager; this module will not "translate" them per distro. For example libyaml-dev, libyaml-devel.
The usual way to handle this is to create distro-specific subtasks for the different OS families, or distro-specific variables which are included under some condition.
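For example, with one variables file per OS family (a minimal sketch; the file names Debian.yml / Alpine.yml and the backup_packages variable are assumptions):

- name: Load distro-specific variable names
  include_vars: "{{ ansible_os_family }}.yml"

- name: Install the backup packages with the host's default package manager
  package:
    name: "{{ backup_packages }}"
    state: present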
In an Ansible (ver. 2.10) playbook I need to invoke the dpkg-reconfigure openssh-server command to recreate the SSH server keys.
- name: Create new SSH host's keys
  shell: dpkg-reconfigure openssh-server
  notify: restart sshd
The problem is that dpkg-reconfigure openssh-server opens a dialog box, and the script gets stuck...
Looking into the Ansible documentation, it seems that dpkg-reconfigure can be managed by the debconf module.
Code example for the locales package:
- name: Set default locale to fr_FR.UTF-8
  debconf:
    name: locales
    question: locales/default_environment_locale
    value: fr_FR.UTF-8
    vtype: select
The question from openssh-server debconf module is: What do you want to do about modified configuration file sshd_config? and the answer would be: keep the local version currently installed.
How could I manage it using ansible debconf module?
This is not a debconf issue. The file is marked as a config file by Debian packaging and dpkg is handling it at a generic level. dpkg --configure has --force-confold but dpkg-reconfigure does not.
https://wiki.debian.org/ConfigPackages may also be useful.
TL;DR: Yes, use the Ansible debconf module, but mv the /var/lib/dpkg/info/<package>.config file aside whilst reconfiguring.
The rest of this is for others searching for insight into debconf and Ansible's debconf module.
I spent some time digging into this and have submitted some docs to the Ansible debconf module which I've edited a bit for this answer.
Reconfiguring packages in Debian using debconf is not straightforward!
The Ansible debconf module does not reconfigure packages, it just updates the debconf database. An additional playbook step is needed (typically via notify if debconf makes a change) to reconfigure the package and apply the changes.
Now debconf is primarily used for pre-seeding configuration prior to installation.
So, whilst dpkg-reconfigure does use debconf data, it is not always authoritative and you may need to check how your package is handled.
dpkg-reconfigure is a 3-phase process. It invokes the control scripts from the /var/lib/dpkg/info directory with the following arguments:
<package>.prerm reconfigure <version>
<package>.config reconfigure <version>
<package>.postinst control <version>
The main issue is that the <package>.config reconfigure step for many packages will first reset the debconf database (overriding changes made by the Ansible module) by checking the on-disk configuration. If this is the case for your package then dpkg-reconfigure will effectively ignore changes made by this debconf module.
However, although dpkg-reconfigure finally invokes:
/var/lib/dpkg/info/<package>.postinst configure <version>
to actually configure the package; using this turns out not to be that simple. The script is expected to be run from a "debconf frontend" and uses IPC to respond to the _db_cmd statements in the script.
To see this in more detail:
export DPKG_MAINTSCRIPT_PACKAGE=<package>
export DPKG_MAINTSCRIPT_NAME=<script path>
export DEBIAN_HAS_FRONTEND=1
and run the script. I was trying to set up unattended-upgrades, so I ran:
sh -x /var/lib/dpkg/info/unattended-upgrades.postinst configure 1.11.2
This then halts waiting for a response from the frontend.
Running
/usr/share/debconf/frontend /var/lib/dpkg/info/unattended-upgrades.postinst configure 1.11.2
works... but has the exact same problem as dpkg-reconfigure - it resets the debconf database :(
This is because running
/var/lib/dpkg/info/unattended-upgrades.postinst configure 1.11.2
sources /usr/share/debconf/confmodule which exec()s /usr/share/debconf/frontend which forces the <package>.config configure phase to take place.
This is done based on the existence (i.e. using the shell test [ -e ]) of the .config file and cannot be avoided.
The solution is to mv /var/lib/dpkg/info/<package>.config out of the way whilst dpkg-reconfigure (or other related debconf code) runs.
Note that the Debian programmer's manual says the config script's sole purpose is to populate debconf and it must not affect other files; so doing this in a playbook is (to my understanding) compliant with Debian policy: http://www.fifi.org/doc/debconf-doc/tutorial.html#AEN113
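A minimal sketch of that workaround as playbook tasks, using unattended-upgrades as in the example above (the debconf question and value are illustrative assumptions, check your package's actual questions):

- name: Pre-seed the answer in the debconf database
  debconf:
    name: unattended-upgrades
    question: unattended-upgrades/enable_auto_updates
    value: 'true'
    vtype: boolean
  register: seeded

- name: Move the .config script aside so it cannot reset the debconf database
  command: mv /var/lib/dpkg/info/unattended-upgrades.config /var/lib/dpkg/info/unattended-upgrades.config.bak
  when: seeded.changed

- name: Reconfigure the package non-interactively
  command: dpkg-reconfigure -f noninteractive unattended-upgrades
  when: seeded.changed

- name: Put the .config script back
  command: mv /var/lib/dpkg/info/unattended-upgrades.config.bak /var/lib/dpkg/info/unattended-upgrades.config
  when: seeded.changed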
HTH
I also had this problem and I want to share my workaround.
In my case, I want non-root users to be able to sniff network traffic with wireshark/tshark. The package in question is wireshark-common, which I normally reconfigure to make dumpcap setuid.
In my playbook I do the following: first, use debconf to modify the configuration, and then run the dpkg-reconfigure command in non-interactive mode, but only if the configuration has changed.
- name: wireshark setuid
  ansible.builtin.debconf:
    name: wireshark-common
    question: wireshark-common/install-setuid
    value: yes
    vtype: boolean
  register: reconfigure_changed

- name: make debconf changes active
  ansible.builtin.command:
    cmd: "dpkg-reconfigure wireshark-common"
  environment:
    DEBIAN_FRONTEND: noninteractive
  when: reconfigure_changed.changed
Hope this is helpful.
I am looking for a solution myself but I haven't found one yet to achieve that with debconf and Ansible.
The problem is that debconf has no "selection" for the sshd_config question.
When you look at debconf and preseed (Debian unattended installation), there is simply no argument where you can specify to keep the current sshd_config.
For example, the currently active debconf settings:
sudo debconf-show openssh-server
openssh-server/permit-root-login: false
openssh-server/password-authentication: false
These are the questions available to the Ansible debconf module.
What we are looking for is something like the following, but that is not possible:
- debconf:
    name: openssh-server
    question: openssh-server/keep-current-sshd-config
    value: true
Unfortunately, we have to find a workaround.
In my case, I wanted to reconfigure openssh-server on my Raspberry Pis.
Luckily, there is a systemd unit on Raspberry Pi OS, /lib/systemd/system/regenerate_ssh_host_keys.service, that does what the name says.
To make use of it, just delete the ssh_host_* files and reboot the machine.
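A minimal sketch of those two steps as tasks (this assumes the Raspberry Pi OS service mentioned above is present on the host):

- name: Remove the existing SSH host keys
  become: true
  shell: rm -f /etc/ssh/ssh_host_*

- name: Reboot so regenerate_ssh_host_keys.service recreates them
  become: true
  reboot: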
If you need this on different hosts, you will need to find another workaround; maybe import new ssh_host key files via Ansible, or build a small script.
Is there a simple way to get back what precisely an Ansible script (playbook) is trying to do?
E.g. I have:
- name: Install required packages
  yum: name={{ item }} state=present
  with_items:
    - nmp
  become: True
And I want to get:
sudo yum install nmp
That said, I want to know what OS-level commands Ansible is running.
Basically, I need the reverse process to: Ansible and Playbook. How to convert shell commands into yaml syntax?
Summarizing here the central points from comments above [1][2].
Ansible rarely runs external commands; it's mostly Python code and Python libraries being used to control hosts. Therefore, Ansible's "translation" isn't necessarily a matter of converting yum (Ansible module) usage into yum (CLI) invocations.
While you may, in some cases, be able to depend on the output by parsing for command sequences in the output of ansible-playbook -vvv foo.yml, the implementation would be flaky at best.
I'd encourage you to keep discussing on this thread what you're trying to accomplish. It's likely there is already a solution, and someone can point you to a tool that already exists.