Install rpm after copy, with ansible

I have an ansible playbook which will copy a file into a location on a remote server. It works fine. In this case, the file is an rpm. This is the way it works:
---
- hosts: my_host
  tasks:
    - name: mkdir /tmp/RPMS
      file: path=/tmp/RPMS state=directory
    - name: copy RPMs to /tmp/RPMS
      copy:
        src: "{{ item }}"
        dest: /tmp/RPMS
        mode: 0755
      with_items:
        [any_rpm-x86_64.rpm]
      register: rpms_copied
Now, with the file successfully on the remote server, I need to add some logic that will install the rpm sitting in /tmp/RPMS. I have run many different versions of the task below (this code is appended to the block above):
- name: install rpm from file
  yum:
    name: /tmp/RPMS/any_rpm-x86_64.rpm
    state: present
  become: true
I don't know if the formatting is incorrect, or if this is not the way. Can anyone advise as to how I can get the rpm in the directory /tmp/RPMS installed using a few new lines in the existing playbook?
Thanks.

I did not find this anywhere else, and it genuinely took me all of my working day to get to this point. For anyone else struggling:
- name: Install my package from a file on server
  shell: rpm -ivh /tmp/RPMS/*.rpm
  async: 1800
  poll: 0
  become_method: sudo
  become: yes
  become_user: root
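For comparison, here is a minimal sketch of the same install using the yum module instead of shell (the rpm path matches the copy task above; disable_gpg_check is an assumption, only needed if the package is unsigned):
- name: Install rpm from the copied file
  yum:
    name: /tmp/RPMS/any_rpm-x86_64.rpm
    state: present
    disable_gpg_check: yes   # assumption: only needed if the rpm is not signed with a trusted key
  become: yes
The yum module is also idempotent, so re-running the play will not fail when the package is already installed, unlike a bare rpm -ivh.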

Related

ansible-playbooks - install a list of apt packages from a file

I'm trying to create an Ansible playbook that will read the contents of a file and use those contents to install packages on a target machine.
In simpler terms, I want this command converted to an Ansible playbook:
cat ./meta/install-list/apt | xargs apt install -y
./meta/install-list/apt
neofetch
tmux
git
./ansible/playbooks/apt.yaml
- hosts: all
  become: true
  tasks:
    - name: Extract APT packages to install
      command: cat ../../meta/install-list/apt
      register: _pkgs
      delegate_to: localhost
      run_once: true
    - name: Install APT packages
      apt:
        name: "{{ _pkgs.stdout_lines }}"
        state: latest
./ansible.cfg
[defaults]
inventory = ./ansible/inventory/hosts.yaml
./ansible/inventory/hosts.yaml
---
all:
  children:
    group-machines:
      hosts:
        target-machine.local
Command to run playbook
ansible-playbook --ask-become-pass ./ansible/playbooks/apt.yaml --limit group-machine
When running the command, it gets stuck on the "Extract APT packages to install" task.
NOTE:
The files mentioned above exist only on the machine that is running the command. If possible, I'd like to avoid copying files to the target machines before running the playbook's tasks.
PS: new to Ansible
I don't see anything in your "Extract APT packages to install" task that should cause it to get stuck... but you don't need that task in any case; you can combine your two tasks into a single task like this:
- hosts: all
  become: true
  tasks:
    - name: Install APT packages
      apt:
        name: "{{ packages }}"
        state: latest
      vars:
        packages: "{{ lookup('file', '../../meta/install-list/apt').splitlines() }}"
Here we're using a file lookup to read the contents of a file. Lookups always run on the local (control) host.
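If you want to double-check what the lookup returns before installing anything, a quick sketch using a debug task (same relative path as above; run_once is my addition to avoid repeating the output per host):
- name: Show what the file lookup reads (evaluated on the control host)
  debug:
    msg: "{{ lookup('file', '../../meta/install-list/apt').splitlines() }}"
  run_once: true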
Note that you could write the above like this as well...
- hosts: all
  become: true
  tasks:
    - name: Install APT packages
      apt:
        name: "{{ lookup('file', '../../meta/install-list/apt').splitlines() }}"
        state: latest
...but I like to keep longer jinja expressions in vars in order to keep the rest of the arguments more readable.
The answer above by @Zeitounator is more than enough. But if you reformat your original package-list file as below
packages:
- neofetch
- tmux
- git
then you can simply run a playbook like the one below:
- hosts: all
  become: true
  vars_files: ../../meta/install-list/apt
  tasks:
    - name: Install APT packages
      apt:
        name: "{{ packages }}"
        state: latest
Now suppose you don't want to do that formatting; the playbook below will also do the trick. It's cleaner and more scalable in my opinion.
---
- name: Show the packages list
  hosts: localhost
  become: true
  tasks:
    - name: View the packages list file
      shell: cat ../../meta/install-list/apt
      register: output
    - name: Install the package
      apt:
        name: "{{ output.stdout_lines }}"
        state: latest

ansible yum remove packages on first run, not on all after

I'm attempting to uninstall a list of packages from our RHEL servers. However, I need to account for servers where some of the packages on the uninstall list are needed by the application. A good example of this is httpd, which is on our uninstall list but is a dependency of the application running on the server. Basically I'm managing two states with one playbook.
So here is the list of packages to remove, which is in the role's defaults/main.yml
packagesRemove:
- telnet
- nfs
- nfs-server
- nfs-utils
- named
- httpd
- rsync
- postfix
- autofs
- cups
- smb
- squid
Currently, I'm doing something basic to uninstall the packages on the first run.
- name: Check for packageRemove file
  stat:
    path: /root/packageRemove.txt
  register: stat_result
- name: remove packages not needed
  yum:
    name: "{{ packagesRemove }}"
    autoremove: yes
  register: packageRemove_output
  when: not stat_result.stat.exists
- name: create packageRemove file
  template:
    src: output.txt.j2
    dest: /root/packageRemove.txt
    owner: root
    group: root
    mode: 0600
  when: not stat_result.stat.exists
Basically, if the /root/packageRemove.txt file exists, these tasks just get skipped. How can I make this more dynamic and remove the need for the /root/packageRemove.txt file? I would like to turn the packages that are needed into some sort of inventory variables.
Right now, I just have the following to gather a list of packages installed on the server.
- name: gather installed packages
  dnf:
    list: installed
  no_log: true
  register: yum_packages
- name: make installed packages a list
  set_fact:
    installed_packages: "{{ yum_packages.results | map(attribute='name') | list }}"
This is now where I'm stumped, and I'm not quite sure what my next step should be or if I'm on the right track. Any help would be great.
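One possible direction, sketched under the assumption of a hypothetical per-host or per-group inventory variable packages_keep listing the packages the application on that host still needs:
- name: remove packages not needed, keeping what this host requires
  yum:
    name: "{{ packagesRemove | difference(packages_keep | default([])) }}"   # packages_keep is a hypothetical inventory variable
    state: absent
    autoremove: yes
Hosts that define packages_keep (for example packages_keep: [httpd]) would then retain those packages, and since the yum module is idempotent the /root/packageRemove.txt guard file would no longer be needed.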

Ansible: 'shell' not executing

I'm trying to run a shell command to install a package downloaded from my local Artifactory repository, as I don't have access to download it straight from the internet.
When I run the command directly on the node as such
rpm -ivh kubectl-1.1.1.x86_64.rpm --nodigest --nofiledigest
It works perfectly.
But then put in Ansible playbook as such
- name: Install Kubectl
  shell: rpm -ivh kubectl-1.1.1.x86_64.rpm --nodigest --nofiledigest
Nothing happens.
It doesn't error; it just doesn't install.
I've tried the command and ansible.builtin.shell modules as well, but nothing works.
Is there a way to do this please?
There are different topics in your question.
Regarding
to install a package downloaded from my local Artifactory repository, as I don't have access to download it straight from the internet.
you can use different approaches.
1. Direct download
- name: Make sure package becomes installed from internal repository
  yum:
    name: https://{{ REPOSITORY_URL }}/artifactory/kube/kubectl-{{ KUBE_VERSION }}.x86_64.rpm
    state: present
2. Configure local repository
The next one is to provide a .repo template file like
[KUBE]
name = Kubectl - $basearch
baseurl = https://{{ REPOSITORY_URL }}/artifactory/kube/
username = {{ API_USER }}
password = {{ API_KEY }}
sslverify = 1
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-KUBE
and to perform
- name: Make sure package becomes installed from internal repository
  yum:
    name: kubectl
    state: present
This is possible because JFrog Artifactory can provide local RPM repositories if configured correctly. For more information, see the Artifactory documentation, since this is almost entirely a matter of proper configuration.
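A brief sketch of how such a template could be rolled out before the install task above (the source filename kube.repo.j2 and the destination name are assumptions):
- name: Deploy internal KUBE repository definition
  template:
    src: kube.repo.j2               # assumption: the .repo template shown above, stored alongside the role/playbook
    dest: /etc/yum.repos.d/kube.repo
    owner: root
    group: root
    mode: "0644"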
Regarding
Nothing happens. It doesn't error.. It just doesn't install.
you can use several tasks to split up your steps, make them idempotent, and get better insight into how they are working.
3. shell, rpm and debug
- name: Make sure destination folder for package download (/opt/packages) exists
  file:
    path: "/opt/packages/"
    state: directory
- name: Download RPM to remote hosts
  get_url:
    url: "https://{{ REPOSITORY_URL }}/artifactory/kube/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
    dest: "/opt/packages/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
- name: Check package content
  shell:
    cmd: "rpm -qlp /opt/packages/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
  register: rpm_qlp
- name: STDOUT rpm_qlp
  debug:
    msg: "{{ rpm_qlp.stdout.split('\n')[:-1] }}"
- name: "Install RPM using 'rpm -ivh'"
  shell:
    cmd: "rpm -ivh /opt/packages/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
  register: rpm_ivh
- name: STDOUT rpm_ivh
  debug:
    msg: "{{ rpm_ivh.stdout.split('\n')[:-1] }}"
Depending on the RPM package, environment, and configuration, everything may just work fine.
Try using the command module and registering the output. I use it to install oci8 via pecl for an Oracle database on Linux.

how could I install a snap package via Ansible on an air-gapped system

Installing snap packages via Ansible on systems that are connected to the internet is rather simple, e.g.:
- name: Install microk8s
  become: yes
  snap:
    name: microk8s
    classic: yes
    channel: "{{ microk8s_version }}"
Now I need to do the same on a set of nodes that are air-gapped (no direct connection to the internet).
I can do a 'snap download' for the required packages and move them to the target machine(s).
But then how do I do this in Ansible? Is there any support for this? Or do I have to use the shell/command module?
thx
I have not tested this, but this method works with other modules.
- name: install microk8s, file on local disk
  become: yes
  snap:
    name: /path/to/file
Using the hint from @Kevin C, I was able to solve the problem with the following playbook:
- name: copy microk8s snap to remote
  copy:
    src: "{{ item }}"
    dest: "~/microk8s/"
    remote_src: no
  with_fileglob:
    - "../files/microk8s/*"
- name: snap ack the new package
  become: yes
  shell: |
    snap ack ~/microk8s/microk8s_1910.assert
    snap ack ~/microk8s/core_10583.assert
- name: install microk8s, file on local disk
  become: yes
  snap:
    name: "~/microk8s/core_10583.snap"
- name: install microk8s, file on local disk
  become: yes
  snap:
    name: "~/microk8s/microk8s_1910.snap"
    classic: yes
I hope this helps others also.
Would be nice to see this documented.
If you can get the packages onto the Ansible server, something like the following code will copy the file to the target(s).
- name: copy files/local/microk8s.deb
  copy:
    src: "files/local/microk8s.deb"
    dest: "~/microk8s.deb"
    remote_src: no
Where files/ is at the same level as the playbook.

Ansible playbook for remote copy and script execution

I wish to write a playbook in Ansible which will first transfer my package to remote hosts and then run a script. In detail, let's say I have an apache package on my local machine and need to scp/rsync it to remote nodes A & B. Then I have a script to install the package on both A & B, check whether it was installed properly, and then scrutinize the config file, etc. This script should run only if the transfer is successful.
I have written the playbook below, which should meet the above requirement. Please confirm whether it needs further improvement. Thanks in advance!
Playbook:
---
- hosts: droplets
  remote_user: root
  tasks:
    - name: Copy package to target machines
      synchronize: src=/home/luckee/apache.rpm dest=/var/tmp/
    - name: Run installation and verification script
      script: /home/luckee/apache_install.sh
      register: result
    - name: Show result
      debug: msg="{{ result.stdout }}"
...
This way the installation script will only run if the copy task reported a change (i.e. it was actually executed in the process) and exited successfully:
---
- hosts: droplets
  remote_user: root
  tasks:
    - name: Copy package to target machines
      synchronize: src=/home/luckee/apache.rpm dest=/var/tmp/
      register: result_copy
    - name: Run installation and verification script
      script: /home/luckee/apache_install.sh
      register: result_run
      when: result_copy.changed
    - name: Show result
      debug: msg="{{ result_run.stdout }}"
...
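If the script's main job is the rpm install itself, a hedged alternative is to let the yum module handle that step after the copy (the path is derived from the synchronize task above; any verification would still need its own task or script):
- name: Install apache from the copied rpm
  yum:
    name: /var/tmp/apache.rpm   # dest of the synchronize task above plus the rpm filename
    state: present
  when: result_copy.changed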
