I'm using a source-controlled project in Ansible Tower that plugs into GitHub/Azure DevOps.
I'm looking for (but can't find) a variable that represents the commit ID/SHA hash of the playbook.yml I'm running, so I can log it to the VM I'm building and go back and audit it later.
If the job is launched from an Azure DevOps pipeline, $(Build.SourceVersion) will have the commit ID; see the Azure DevOps predefined-variables docs.
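As a rough sketch of the logging side: a task like the one below could stamp the commit onto the built VM. It assumes the SHA is handed to the playbook as an extra var (e.g. from $(Build.SourceVersion)); Tower/AWX also injects the project's SCM revision as the `tower_project_revision`/`awx_project_revision` extra variables on jobs run from a project, which is worth verifying on your version.

```yaml
# Sketch: record the commit the playbook ran from on the built VM.
# commit_sha is a hypothetical extra var you would pass in yourself;
# awx_project_revision is the revision Tower/AWX injects for project jobs
# (verify the variable name on your version).
- name: Record the playbook commit on the VM
  copy:
    content: "built from commit {{ commit_sha | default(awx_project_revision) }}\n"
    dest: /etc/build-commit
```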
On my AWX system I've configured collections in requirements.yml and all is fine.
Now I need to add another collection provided by Automation Hub, which means the source is in another place. I've read the document "Downloading a collection from Automation Hub", but unfortunately it doesn't work. I've created an ansible.cfg in my role directory, but perhaps that is not the right way.
In AWX it isn't possible to configure this through the WebUI like in Ansible Tower.
Does anyone have any idea how to resolve this?
Where does AWX expect the ansible.cfg to be defined, and is it possible to configure more than one?
Best regards,
H.
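For reference, the Galaxy documentation describes pointing `ansible-galaxy` at Automation Hub via a `[galaxy]` section in ansible.cfg; a sketch is below. The URLs and token are placeholders for your own Automation Hub account, and in AWX the ansible.cfg generally needs to sit at the project root (not inside a role directory) to be picked up.

```ini
# ansible.cfg -- sketch; place at the project root.
# URLs and token below are placeholders for your Automation Hub account.
[galaxy]
server_list = automation_hub, release_galaxy

[galaxy_server.automation_hub]
url = https://cloud.redhat.com/api/automation-hub/
auth_url = https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token = <your-automation-hub-token>

[galaxy_server.release_galaxy]
url = https://galaxy.ansible.com/
```

With that in place, collections listed in collections/requirements.yml are resolved against the servers in `server_list` order.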
I am looking for a way to trigger a git pull or a refresh from source control within Ansible Tower. The situation is that I have added a playbook in source control; however, I cannot see it within Ansible Tower.
Is there a way to trigger a refresh or a git pull?
Thanks in advance.
I just realised that once the job template is executed, going back to the template shows any newly added playbooks.
It looks like executing a job template forces a git pull or a refresh behind the scenes.
To see an updated list of playbooks available to your project, you need to refresh the project. This also happens when you run a job template, which is why you sort of solved it by running the job.
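The refresh can also be driven from a playbook through the Tower/AWX REST API, which exposes a project-update endpoint. A hedged sketch (host, project id, and token are placeholders for your install):

```yaml
# Sketch: launch a project SCM refresh (git pull) through the Tower/AWX API.
# Host, project id, and OAuth token are placeholders.
- name: Trigger a git pull for the project
  uri:
    url: "https://tower.example.com/api/v2/projects/42/update/"
    method: POST
    headers:
      Authorization: "Bearer <your-oauth-token>"
    status_code: 202   # the API returns 202 Accepted when the update launches
```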
I'm setting up an Ansible server and I have a basic question.
What is the best practice for setting up the first Ansible server itself (installing specific versions of Python, Ansible, etc.)?
An Ansible server is used to set up other servers (Ansible and non-Ansible),
but the first/root Ansible server can't be helped by any other Ansible server.
I'm writing a shell script just for the first one, but I feel like I'm back in the early 2000s.
You can get all the information you need to set up Ansible from the official resources:
the Ansible quick-start video
the "How Ansible works" overview
the Ansible download/installation page
I struggled with the same issue. I solved it in the following way:
Set up the first server manually (bear with me!) as a bare Ansible control server.
Create a second server with only the OS, no Ansible yet.
Write scripts on the first server to build the second server into a fully specced Ansible control server. I did need an extra shell script that installs the required Galaxy roles; you can use Ansible to have those roles installed automatically on the second server.
On the second server, pull the scripts (your Ansible scripts are in version control, right?) and use them to keep the first server up to date.
Switch regularly between using the first and second server as the Ansible control server.
Yes, this does add overhead (an extra server, the switching). But this way you make sure that if both servers die, you only need to rebuild a simple first server with a bare Ansible install, which can then build itself, or a second server, back up.
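The manual bootstrap of the first server can itself be a small playbook run locally instead of a shell script. A sketch, assuming a distro with python3-pip packaged (names and the pinned version are placeholders):

```yaml
# bootstrap.yml -- run with: ansible-playbook -c local -i localhost, bootstrap.yml
# Sketch only; the package name and Ansible version are assumptions
# to adjust for your distro and requirements.
- hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Install pip
      package:
        name: python3-pip
        state: present

    - name: Install a pinned Ansible version
      pip:
        name: ansible==2.9.27
```

This keeps even the bootstrap step declarative, so the "first" and "second" servers are built from the same source.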
We are working on converting our project to Ansible. Due to the scale of the project, we will need a large number of roles (30+). Where we're running into problems is how to store and manage these roles. Things we have considered:
1) A GitHub repo per role -> This is unrealistic. We don't want to manage 30+ git repositories simply for the purpose of maintaining our roles.
2) Ansible Galaxy -> This would be valuable if we could have a local instance of Ansible Galaxy, but the central instance won't work for us.
3) We can simply store the roles in a flat directory, but then we lose the ability to version them. There is also the matter of how to automatically push our Ansible role directories into the correct directory on the Ansible controller host.
Is there a solution I'm missing?
I would suggest keeping the roles in a single git repo.
For the automatic push to the Ansible controller, you could create a standalone playbook that uses the git module to retrieve the appropriate version of the roles. This could then be run on a regular basis (e.g. scheduled via cron).
Alternatively, you could add the git retrieval to your existing playbooks, and then it would check/update the roles prior to executing them.
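A sketch of that standalone sync playbook (the repo URL, destination path, and tag are placeholders):

```yaml
# Sketch: pull a pinned version of the single roles repo onto the controller.
# Repo URL, dest path, and version tag are placeholders.
- hosts: ansible_controller
  tasks:
    - name: Check out the pinned version of the roles repo
      git:
        repo: "https://github.com/example/ansible-roles.git"
        dest: /etc/ansible/roles
        version: v1.2.0
```

Pinning `version` to a tag gives you the versioning you'd lose with a flat directory, while keeping everything in one repo.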
I know Puppet can be used to keep a server in a consistent state. So, for instance, if someone else (perfectly legally) created a new user "bob", Puppet would spot that this is not what the specification says it should be and then delete user "bob".
Is there a similar way to do this in Ansible?
By default Ansible is designed to work in "push" mode, i.e. you actively send instructions to servers to do something.
However, Ansible also has ansible-pull command. I'm quoting from http://docs.ansible.com/playbooks_intro.html#ansible-pull
Ansible-Pull
Should you want to invert the architecture of Ansible, so that nodes
check in to a central location, instead of pushing configuration out
to them, you can.
Ansible-pull is a small script that will checkout a repo of
configuration instructions from git, and then run ansible-playbook
against that content.
Assuming you load balance your checkout location, ansible-pull scales
essentially infinitely.
Run ansible-pull --help for details.
There’s also a clever playbook available to configure
ansible-pull via a crontab from push mode.