So I've read over the other questions that are similar to this one, and while they have accepted answers, I wasn't quite able to get there. I am running AWX on Fedora 35. I installed it using the following guide.
https://computingforgeeks.com/install-and-configure-ansible-awx-on-centos/
After I finished this guide I also ran
yum install Ansible
After that I did what the Juniper website said to do, which is
ansible-galaxy install collection juniper.device
sudo ansible-galaxy install Juniper.junos
When I attempt to run my template I get the following.
The other question/answer I read earlier (and the Ansible site) stated that I need to create a requirements.yml file. I'm not sure where my issue comes into play, but I can't get it to work; part of the problem is that I don't know where the requirements file is supposed to go. I put it in /var/lib/awx/projects/_8__getconfigs/collections and /var/lib/awx/projects/_8__getconfigs/roles.
The contents of the files are very simple (and probably wrong).
Roles
---
roles:
- name: Juniper.junos
Collections
---
collections:
- name: juniper.devices
Lastly, I was talking with someone earlier and they mentioned privileges, which could make sense: there is no awx account on my system. If anyone could help me it would be greatly appreciated. Thank you in advance!
You can do that using Execution Environments in AWX.
1. Create a Dockerfile from awx-ee image containing the collections:
FROM quay.io/ansible/awx-ee:latest
RUN ansible-galaxy collection install gluster.gluster \
&& ansible-galaxy collection install community.general
Build the Image: docker build -t $ImageName .
Log in to your Docker repository: docker login -u $DockerHubUser
Tag the image: docker image tag $ImageName $DockerHubUser/$ImageName:latest
Push the image to Hub: docker image push $DockerHubUser/$ImageName:latest
2. Add Execution Environment to AWX:
The image location, including the container registry, image name, and version tag
3. That's it:
I've already tested this on a fresh AWX instance where no collections are installed.
You don't have to refer to the collection in a requirements.yml file.
Whenever a new Galaxy Collection is needed, it should be added to the Dockerfile and pushed to Hub.
You can even install normal Linux packages in the docker image if needed.
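For the Juniper content from the question, a minimal sketch of the same approach might look like this (the image and repository names are placeholders I made up, not anything AWX requires):

# hypothetical Dockerfile: add the juniper.device collection (and the legacy Juniper.junos role, if you still need it)
FROM quay.io/ansible/awx-ee:latest
RUN ansible-galaxy collection install juniper.device \
 && ansible-galaxy role install Juniper.junos

Then build, tag and push it as above, e.g. docker build -t my-awx-ee . followed by docker image tag my-awx-ee mydockeruser/my-awx-ee:latest and docker image push mydockeruser/my-awx-ee:latest, and point the new Execution Environment in AWX at mydockeruser/my-awx-ee:latest.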
I have added an index server in my ~/.pypirc file as:
[distutils]
index-servers = example
[example]
repository: https://example.com/simple/
username: someplaintextusername
password: someplaintextpw
However, I can't install a package that is definitely on the example index server. Now I want to check whether pip actually notices that server from the .pypirc file.
Can I make pip list all available index servers?
edit: For the problem I'm trying to solve, it seems as if ~/.config/pip/pip.conf is the file I should edit. But my question is still the same.
pip's own config list command should get you at least some of this info:
path/to/pythonX.Y -m pip config list
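As a side note, pip itself doesn't read ~/.pypirc (that file is used by upload tools such as twine); pip takes its index settings from pip.conf or the command line. A rough sketch of pointing pip at the extra index from the question and then verifying what pip sees (whether you want extra-index-url or index-url depends on whether PyPI should still be consulted):

# add the extra index to the user-level pip config, then list the effective settings
path/to/pythonX.Y -m pip config set global.extra-index-url https://example.com/simple/
path/to/pythonX.Y -m pip config list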
Does anyone have a proper set of instructions for upgrading Ansible Tower 3.4 to 3.6?
(Ansible 2.5, Database - postgres 9.6)
I found the Ansible docs, but they don't go into much detail.
Thanks
EDIT: The original question pertained to upgrading AWX. It's been edited and now pertains to upgrading Ansible Tower. My answer below only applies to upgrading AWX.
If you used the docker-compose installation method and pointed postgres_data_dir to a persistent directory on the host, upgrading AWX is straightforward. I deployed AWX 2.0.0 in 2018 and have upgraded it to every subsequent release (currently running 9.1.0) without issue. Below is my upgrade method which preserves all data including secrets between upgrades and does not rely on using the tower cli / awx cli tool.
AWX path assumptions:
Existing installation: /opt/awx
New release: /tmp/awx
AWX inventory file assumptions:
use_docker_compose=true
postgres_data_dir=/opt/postgres
docker_compose_dir=/var/lib/awx
Manual upgrade process:
Backup your AWX host before continuing! Consider backing up your postgres database as well.
Download the new release of AWX and unpack it to /tmp/awx
Ensure that the patch package is installed on the host.
Create a patch file containing the differences between the new and
existing inventory files:
diff -u /tmp/awx/installer/inventory /opt/awx/installer/inventory > /tmp/awx_inv_patch
Patch the new inventory file with the differences:
patch /tmp/awx/installer/inventory < /tmp/awx_inv_patch
Verify that the files now match:
diff -s /tmp/awx/installer/inventory /opt/awx/installer/inventory
Copy the new release directory over the existing one:
cp -Rp /tmp/awx/* /opt/awx/
Edit /var/lib/awx/docker-compose.yml and change the version numbers
after image: ansible/awx_web: and image: ansible/awx_task: to match the
new version of AWX that you're upgrading to.
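For illustration, the relevant lines might end up looking like this (a hypothetical excerpt only; each image line sits under its own service in the real file, and the tag depends on the release you're moving to):

# excerpt of /var/lib/awx/docker-compose.yml after editing, assuming an upgrade to 9.1.0
    image: ansible/awx_web:9.1.0
    image: ansible/awx_task:9.1.0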
Stop the current AWX containers:
cd /var/lib/awx
docker-compose stop
Run the installer:
cd /opt/awx/installer
ansible-playbook -i inventory install.yml
AWX starts the upgrade process, which usually completes within a couple of minutes. I'll typically monitor the upgrade progress with docker logs -f awx_web until I see RESULT 2 / OKREADY appear.
If everything is working as intended, I shut the containers down, pull and then recreate them using docker-compose:
cd /var/lib/awx
docker-compose stop
docker-compose pull && docker-compose up --force-recreate -d
If everything is still working as intended, I delete /tmp/awx and /tmp/awx_inv_patch.
Upgrades of AWX are not supported by Ansible/Red Hat. Only the commercial Tower licence gives you access to the scripts and procedures to do this.
From the AWX project FAQ:
Q: Can I upgrade from one version of AWX to another?
A: Direct in-place upgrades between AWX versions are not supported. It is possible to migrate data between different versions of AWX using the tower-cli tool. To migrate between different instances of AWX, please follow the instructions at https://github.com/ansible/awx/blob/devel/DATA_MIGRATION.md.
The linked page in the AWX project on GitHub explains how to export your current data with tower-cli and re-import it into the new version you install. Note that all credentials are exported with blank secrets, so you will have to update them with the passwords/secrets once imported.
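For reference, the export/import described in DATA_MIGRATION.md boils down to something like this (a sketch only; check the linked document for the exact steps and options):

# on the old instance: export all assets to a file
tower-cli receive --all > assets.json
# on the new instance: import them again, then re-enter the blanked-out secrets
tower-cli send assets.json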
I am trying to build my work-in-progress Hugo website locally. It works fine with GitLab CI.
I installed docker and the gitlab runner service.
Then, using the guide here, I figured out that I am supposed to run gitlab-runner exec docker pages.
But that results in:
WARNING: Since GitLab Runner 10.0 this command is marked as DEPRECATED and will be removed in one of upcoming releases
Running with gitlab-runner 10.5.0 (80b03db9)
Using Docker executor with image rocker/tidyverse:latest ...
Pulling docker image rocker/tidyverse:latest ...
Using docker image sha256:f9a62417cb9b800a07695f86027801d8dfa34552c621738a80f5fed649c1bc80 for rocker/tidyverse:latest ...
ERROR: Job failed (system failure): Error response from daemon: invalid volume specification: '/host_mnt/c/builds/project-0/Users/jan/Desktop/gits/stanstrup-web:C:\Users\jan\Desktop\gits\stanstrup-web:ro'
FATAL: Error response from daemon: invalid volume specification: '/host_mnt/c/builds/project-0/Users/jan/Desktop/gits/stanstrup-web:C:\Users\jan\Desktop\gits\stanstrup-web:ro'
I also tried registering it as other guides show but I end up with the same issue.
Others have had some issues:
https://gitlab.com/gitlab-org/gitlab-runner/issues/1775 where it was said it was fixed...
https://github.com/moby/moby/issues/12751 suggests that you can set COMPOSE_CONVERT_WINDOWS_PATHS=1. I tried setting that as an environment variable but it didn't help.
More discussion of how to escape the path correctly: https://github.com/docker/compose/issues/3285
More discussion suggesting COMPOSE_CONVERT_WINDOWS_PATHS=1 would work: https://github.com/docker/toolbox/issues/607
Am I supposed to set something in .gitlab-ci.yml? Should volumes be set there? In which case how/where?
The .gitlab-ci.yml says:
image: rocker/tidyverse:latest
before_script:
  - apt-get update && apt-get -y install default-jdk pandoc r-base r-cran-rjava curl netcdf-bin libnetcdf-dev libxml2-dev libssl-dev
  - R CMD javareconf
  - Rscript .gitlab-ci.R
pages:
  script:
    - R -e "blogdown::build_site()"
  artifacts:
    paths:
      - public
  only:
    - master
Looks like you hit the colon separator bug in Docker for Windows, which lots of tools have to work around; GitLab has noticed it.
Until the fix comes out, the simplest workaround would be for you to try doing this in a Linux VM on your Windows box.
You can get prebuilt GitLab VM images from Bitnami here.
Otherwise you could check out and run the gitlab-runner source branch with the fix; however, it shows some conflicts and might have other bugs.
I've specified a dependency for my role by declaring it inside meta/main.yml.
---
dependencies:
  - role: angstwad.docker_ubuntu
    src: https://github.com/angstwad/docker.ubuntu
    scm: git
    version: v2.3.0
However when I try to execute Ansible playbook I'm seeing error message:
ERROR! the role 'angstwad.docker_ubuntu' was not found in /home/.../ansible/roles:/etc/ansible/roles:/home/.../ansible/roles:/home/.../ansible
The error appears to have been in '/home/.../roles/docker-node/meta/main.yml': line 5, column 5, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- role: angstwad.docker_ubuntu
^ here
From the error message I conclude that the external role angstwad.docker_ubuntu hasn't been imported, even though it has been explicitly mentioned in docker-node/meta/main.yml.
Can I specify external dependencies from Ansible Galaxy even if my role itself has not been uploaded to Ansible Galaxy?
Or do I need to explicitly import them using ansible-galaxy?
From Ansible documentation:
Roles can also be dependent on other roles, and when you install a
role that has dependencies, those dependencies will automatically be
installed.
You specify role dependencies in the meta/main.yml file by providing a
list of roles. If the source of a role is Galaxy, you can simply
specify the role in the format username.role_name. The more complex
format used in requirements.yml is also supported, allowing you to
provide src, scm, version and name.
By following suggestions from another SO question (How to automatically install Ansible Galaxy roles?), it sounds like downloading external dependencies needs to be done explicitly, either manually or with a workaround.
If the role that has this dependency is itself being installed via ansible-galaxy, then revise meta/main.yml to look like this:
---
dependencies:
  - src: https://github.com/angstwad/docker.ubuntu.git
    scm: git
    version: v2.3.0
Thus, when installing the primary role the docker.ubuntu role will automatically be installed in ~/.ansible/roles/docker.ubuntu and it will run when the playbook is executed.
If the role that has this dependency will NOT be installed via ansible-galaxy, that is, if the dependency is the only thing being installed via the ansible-galaxy command, then docker.ubuntu will have to be installed separately, e.g.
ansible-galaxy install git+https://github.com/angstwad/docker.ubuntu.git
or
ansible-galaxy install git+https://github.com/angstwad/docker.ubuntu.git -p /path_to/my_playbook/roles
This will place the role in your search path (again, ~/.ansible/roles/docker.ubuntu if not specified via -p). In this scenario, revise meta/main.yml to this:
---
dependencies:
  - role: docker.ubuntu
Note that when pulling your role from github, it lands as docker.ubuntu vs docker_ubuntu. See https://galaxy.ansible.com/docs/using/installing.html#dependencies for more. Hope this helps.
As per the documentation you linked:
When dependencies are encountered by ansible-galaxy, it will automatically install each dependency to the roles_path.
Downloading of dependencies is handled by ansible-galaxy only; the src meta information doesn't affect ansible-playbook, which just looks inside your roles directory, and if it doesn't see a corresponding folder, it fails. As in the answer to the question you linked, you can indeed add a first task, or an include, to your playbook that runs ansible-galaxy before each run (see the sketch below).
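A minimal sketch of that idea, assuming a requirements.yml sits next to the playbook and roles should land in ./roles:

# hypothetical bootstrap play that runs before the plays using the roles
- hosts: localhost
  connection: local
  tasks:
    - name: Install role dependencies from Ansible Galaxy
      command: ansible-galaxy install -r requirements.yml -p ./roles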
I'm working with Ansible using ansible-pull (running from cron).
Can I install an Ansible role from Ansible Galaxy without logging in to all the computers (just by adding a command to my Ansible playbook)?
If I understand you correctly, you're trying to download and install roles from Ansible Galaxy from the command line, in a hands-off manner, possibly repeatedly (via cron). If this is the case, here's how you can do it.
# download the roles
ansible-galaxy install --ignore-errors f500.elasticsearch groover.packerio
# run ansible-playbook to install the roles downloaded from Ansible Galaxy
ansible-playbook -i localhost, -c local <(echo -e '- hosts: localhost\n roles:\n - { role: f500.elasticsearch, elasticsearch_cluster_name: "my elasticsearch cluster" }\n - { role: groover.packerio, packerio_version: 0.6.1 }\n')
Explanation / FYI:
To download roles from Ansible Galaxy, use ansible-galaxy, not ansible-pull. For details, see the manual. You can download multiple roles at once.
If the role had been downloaded previously, repeated attempts at downloading using ansible-galaxy install will result in an error. If you wish to call this command repeatedly (e.g. from cron), use --ignore-errors (skip the role and move on to the next item) or --force (force overwrite) to work around this.
When running ansible-playbook, we can avoid having to create an inventory file using -i localhost, (the comma at the end signals that we're providing a list, not a file).
-c local (same as --connection=local) means that we won't be connecting remotely but will execute commands on the localhost.
<() functionality is process substitution. The output of the command appears as a file, so we can feed a "virtual playbook file" into the ansible-playbook command without saving the playbook to the disk first (e.g., playbookname.yml).
As shown, it's possible to embed role variables, such as packerio_version: 0.6.1 and apply multiple roles in a single command.
Note that whitespace is significant in playbooks (they are YAML files). Just as in Python code, be careful about indentation. It's easy to make typos in long lines with echo -e and \n (newlines).
You can run updates of roles from Ansible Galaxy and ansible-playbook separately.
With a bit of magic, you don't have to create inventory files or playbooks (this can be useful sometimes). Installing Galaxy roles remotely via push is less hacky / cleaner, but if you prefer to use cron and pulling, then this can help.
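For readability, the playbook embedded in the command above is equivalent to this file content (saved, say, as galaxy_roles.yml, a name I'm making up):

- hosts: localhost
  roles:
    - { role: f500.elasticsearch, elasticsearch_cluster_name: "my elasticsearch cluster" }
    - { role: groover.packerio, packerio_version: 0.6.1 }

You would then run it with ansible-playbook -i localhost, -c local galaxy_roles.yml instead of the process substitution.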
I usually add roles from Galaxy as submodules in my own repository; that way I have control over when I update them, and ansible-pull will automatically fetch them, removing the need to run ansible-galaxy.
E.g.:
mkdir roles
git submodule add https://github.com/groover/ansible-role-packerio roles/groover.packerio
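The playbook in the repository then just references the role by the directory name it was checked out under, for example (assuming the playbook sits at the repository root, next to the roles/ directory):

# hypothetical playbook in the repo pulled by ansible-pull; the submodule path
# roles/groover.packerio makes the role resolvable by name
- hosts: localhost
  roles:
    - groover.packerio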
Yes you can.
# install Ansible Galaxy requirements via the pull playbook itself
- hosts: localhost
  tasks:
    - command: ansible-galaxy install -r requirements.yml