Multiple Collections folder - Ansible - ansible

I have run into a slight snag and would appreciate some advice/help.
In my company we use Ansible Tower running ansible 2.9. Tower is being supported and run by a separate team. They manage all the collections and modules which are provided with v2.9. We use tower to create automations mainly interacting with VMWare vCentre. During development there have been times where the 2.9 version of some modules have bugs or just don't have the functionality that the community modules have. Therefore till now we have been creating a library folder and a collections folder at the top level of our project and just adding modules which we need in there and it's been working absolutely fine.
Recently however we have had the need to set up local environments to be able to make development more efficient. I have done this using vagrant and installed RHEL 8.4 with the below
    ansible [core 2.13.1]
      config file = /etc/ansible/ansible.cfg
      configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
      ansible python module location = /home/vagrant/.local/lib/python3.10/site-packages/ansible
      ansible collection location = /home/vagrant/.ansible/collections:/usr/share/ansible/collections
      executable location = /home/vagrant/.local/bin/ansible
      python version = 3.10.5 (main, Jun 14 2022, 14:27:52) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
      jinja version = 3.1.2
      libyaml = True
Now everything has been working fine until I ran a project where I had something to do with vCenter. It threw me an error that stated that ansible could not resolve the module/action. I thought this was due to not having the modules installed so I used the ansible galaxy command to install community-vmware collection. This also worked fine. However because I have a folder called collections at my project level anything that is not part of ansible core and part of any collections, ansible looks in the project level collections folder not the one that ansible is pointing to. Due to this none of my roles are able to execute.
So my questions are:
1. Is it possible to have a collections folder at the project level and one at the system level and for ansible to look at both to find my module anywhere?
2. Is there any way of getting the same modules that I have in ansible tower into my vagrant box so that we are developing with the same things as we would be if I was running this in tower only?
Apologies if I have missed anything out and if so please let me know and I will do my best to provide the info needed.
Thank you all in advance

Related

Getting a python warning when running playbook EC2 inventory

I am really new to Ansible and I hate getting warnings when I run a playbook. This environment is being used for my education.
Environment:
- AWS EC2
- 4 Ubuntu 20
- 3 Amazon Linux2 hosts
Inventory: using the dynamic inventory script
Playbook: just runs a simple ping against all hosts. I wanted to test the inventory.
Warning:
[WARNING]: Platform linux on host XXXXXX.amazonaws.com is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.11/reference_appendices/interpreter_discovery.html for more information.
Things I have tried:
- updated all symlinks on hosts to point to the python3 version
- adding the line "ansible_python_interpreter = /usr/bin/python" to "/etc/ansible/ansible.cfg"
I am relying on that cfg file.
I would like to know how to solve this. Since I am not running a static inventory, I didn't think that I could specify an interpreter on a per host or group of hosts. While the playbook runs, it seems that something is not configured correctly and I would like to get that sorted. This is only present on the Amazon Linux instances. The Ubuntu instances are fine.
Michael
Thank you. I did find another route that worked, though I am sure that what you suggest would also work.
I was using the wrong configuration entry. I was using
    ansible_python_interpreter = /usr/bin/python
when I should have been using
    interpreter_python = /usr/bin/python
On each host I made sure that the /usr/bin/python symlink was pointing at the correct version.
According to the documentation:
- for individual hosts and groups, use the ansible_python_interpreter inventory variable
- globally, use the interpreter_python key in the [defaults] section of ansible.cfg
Regards, Michael.
You can edit your ansible.cfg and set auto_silent mode:
interpreter_python=auto_silent
Check reference here:
https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html

Upgrade Ansible Tower - Minor upgrade

Anyone got a proper instruction set to upgrade Ansible Tower 3.4 to 3.6 ?
(Ansible 2.5, Database - postgres 9.6)
Found Ansible Doc but not in details.
Thanks
EDIT: The original question pertained to upgrading AWX. It's been edited and now pertains to upgrading Ansible Tower. My answer below only applies to upgrading AWX.
If you used the docker-compose installation method and pointed postgres_data_dir to a persistent directory on the host, upgrading AWX is straightforward. I deployed AWX 2.0.0 in 2018 and have upgraded it to every subsequent release (currently running 9.1.0) without issue. Below is my upgrade method which preserves all data including secrets between upgrades and does not rely on using the tower cli / awx cli tool.
AWX path assumptions:
Existing installation: /opt/awx
New release: /tmp/awx
AWX inventory file assumptions:
use_docker_compose=true
postgres_data_dir=/opt/postgres
docker_compose_dir=/var/lib/awx
Manual upgrade process:
1. Backup your AWX host before continuing! Consider backing up your postgres database as well.
2. Download the new release of AWX and unpack it to /tmp/awx
3. Ensure that the patch package is installed on the host.
4. Create a patch file containing the differences between the new and existing inventory files:
    diff -u /tmp/awx/installer/inventory /opt/awx/installer/inventory > /tmp/awx_inv_patch
5. Patch the new inventory file with the differences:
    patch /tmp/awx/installer/inventory < /tmp/awx_inv_patch
6. Verify that the files now match:
    diff -s /tmp/awx/installer/inventory /opt/awx/installer/inventory
7. Copy the new release directory over the existing one:
    cp -Rp /tmp/awx/* /opt/awx/
8. Edit /var/lib/awx/docker-compose.yml and change the version numbers after image: ansible/awx_web: and image: ansible/awx_task: to match the new version of AWX that you're upgrading to.
9. Stop the current AWX containers:
    cd /var/lib/awx
    docker-compose stop
10. Run the installer:
    cd /opt/awx/installer
    ansible-playbook -i inventory install.yml
11. AWX starts the upgrade process, which usually completes within a couple minutes. I'll typically monitor the upgrade progress with docker logs -f awx_web until I see RESULT 2 / OKREADY appear.
12. If everything is working as intended, I shut the containers down, pull and then recreate them using docker-compose:
    cd /var/lib/awx
    docker-compose stop
    docker-compose pull && docker-compose up --force-recreate -d
13. If everything is still working as intended, I delete /tmp/awx and /tmp/awx_inv_patch.
Upgrades in AWX are not supported by ansible/redhat. Only the commercial Tower licence allows access to scripts and procedures to do this.
From the awx project FAQ
Q: Can I upgrade from one version of AWX to another?
A: Direct in-place upgrades between AWX versions are not supported. It is possible to migrate data between different versions of AWX using the tower-cli tool. To migrate between different instances of AWX, please follow the instructions at https://github.com/ansible/awx/blob/devel/DATA_MIGRATION.md.
The reference link on github AWX project will teach you how to export your current data with tower-cli and reimport it in the new version you install. Note that all credentials are exported with blank secrets so you will have to update them with the passwords/secrets once imported.

No ansible roles were found in Satellite 6.4 - system roles installed

I have an RHN Satellite 6.4 Server and have installed rhel-system-roles as per documentation.
yum install rhel-system-roles
But when I click inside the GUI on Configure->Ansible->Roles I get the error:
no ansible roles were found in satellite.
I have also copied some roles from the system-roles folder to /etc/ansible/roles/ and also made a test_role folder there but still cannot import or see them inside the GUI.
I have restarted the server. Can this be why I do not have a host that is connected ok without errors inside hosts?
Thanks in advance.

Ansible provisioning without internet access

I know that you can set up a proxy in Ansible to provision behind a corporate network:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
like this:
    environment:
      http_proxy: http://proxy.example.com:8080
Unfortunately in my case there is no access to the internet from the server at all. Downloading roles locally and putting them under the /roles folder seems to solve the role issue, but roles still download packages from the internet when using:
    package:
      name: package-name
      state: present
I guess there is no way to make a dry/pre run so Ansible downloads all the packages, then push that into a repo and run Ansible provision using locally downloaded packages?
This isn't really a question about Ansible, as all Ansible is doing is running the relevant package management system on the target host (i.e. yum, dnf or apt or whatever). So it is a question of what solution the specific package management tool provides, for this case.
There are a variety of solutions and for example in the Centos/RHEL world you can:
Create a basic mirror
Install a complete enterprise management system
There is another class of tool generally called an artefact repository. These started out life as tools to store binaries built from code, but have added a bunch of features to act as a proxy and cache packages from a wide variety of sources (OS Packages, PIP, NodeJS, Docker, etc). Two examples that have limited free offerings:
Nexus
Artifactory
They of course still need to collect those packages from a source, so at some point those are going to have to be downloaded to be placed within these systems.
Like clockworknet pointed out this is more related to the RHEL package handling. Setting up local mirror somewhere inside the closed network can provide a solution in this situation. More info on "How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?": https://access.redhat.com/solutions/23016
My solution:
1. Install Sonatype Nexus 3
2. Create one or more yum proxy repositories
   https://help.sonatype.com/repomanager3/formats/yum-repositories
3. Use Ansible to add these proxies via yum_repository
   https://docs.ansible.com/ansible/latest/modules/yum_repository_module.html
    yum_repository:
      name: proxy-repo
      description: internal proxy repo
      baseurl: https://your-nexus.server/url-to-repo
Note: did that for APT and works fine, would expect the same for yum
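
A fuller version of the internal-proxy approach from the answers above, as an added example that is not part of any of the posts: it checks that the proxy serves repository metadata, registers it with package signature and TLS verification on, and installs with every other repository disabled so nothing tries to reach the internet. The repo URL and package name are the placeholders already used on this page; the GPG key path is an assumption (the stock key location on RHEL hosts).

```yaml
# Added example; URL, repo id and package name are placeholders from this page.
- name: Provision packages from an internal yum proxy only
  hosts: all
  become: true
  vars:
    internal_repo_url: https://your-nexus.server/url-to-repo
    # Assumption: RHEL hosts. Proxied packages keep the upstream vendor signature,
    # so they are verified with the distribution key already present on the host.
    upstream_gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  tasks:
    - name: Fail early if the proxy does not serve repository metadata
      ansible.builtin.uri:
        url: "{{ internal_repo_url }}/repodata/repomd.xml"
        method: GET
        validate_certs: true
        status_code: 200

    - name: Register the proxy repository with signature and TLS checks enabled
      ansible.builtin.yum_repository:
        name: proxy-repo
        description: internal proxy repo
        baseurl: "{{ internal_repo_url }}"
        gpgcheck: true
        gpgkey: "{{ upstream_gpgkey }}"
        sslverify: true
        enabled: true

    - name: Install using only the internal repository
      ansible.builtin.yum:
        name: package-name
        state: present
        disablerepo: "*"
        enablerepo: proxy-repo
```

Keeping gpgcheck on matters here because the proxy becomes the single source of packages for every host; a cached package that fails signature verification is rejected on the target rather than installed.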

Using Ansible for ScaleIO provisioning

I am using this playbook to install a 3 node ScaleIO cluster on CentOS 7.
https://github.com/sperreault/ansible-scaleio
In the EMC documentation they specify that a CSV file needs to be uploaded to the IM to complete installation, I am not sure though how I can automate that part within this playbook. Has anyone got any practical experience of doing so?
This playbook is used to install ScaleIO manually, not by IM,
so you do not need to prepare a CSV file.
