Ansible: how to differentiate targets based on the cloud provider

I'm trying to write logic (an Ansible playbook) where different roles are called based on conditions.
For example, the AWS role should be invoked if the condition matches for AWS, and likewise for GCP, Azure and VMware.
I tried checking through ansible_facts, but not much useful information is available there.
Please help: which condition can I use?
Thanks

The cloud provider would be a property of the inventory and not the host itself.
Identification of the cloud provider can be done while the inventory is being built.
As such, during inventory build you can use additional variables/groups in your hosts file to pass such information, as shown in the example below:
[aws_hosts]
host1
host2
[aws_hosts:vars]
cloud_provider=aws
Then use the hostvars magic variable within the script to identify the cloud provider.
hostvars[host1].cloud_provider
More information can be found here.
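With the cloud_provider variable attached to each group, the role selection itself can be expressed with simple when conditions. A minimal sketch (the role names aws_setup, gcp_setup, azure_setup and vmware_setup are placeholders), assuming cloud_provider is set for every host as above:

- hosts: all
  tasks:
    # each host picks up cloud_provider from its inventory group
    - include_role:
        name: aws_setup
      when: cloud_provider == 'aws'
    - include_role:
        name: gcp_setup
      when: cloud_provider == 'gcp'
    - include_role:
        name: azure_setup
      when: cloud_provider == 'azure'
    - include_role:
        name: vmware_setup
      when: cloud_provider == 'vmware'

Because the variable is defined at group level, each host evaluates its own value directly, so no hostvars lookup is needed inside the play itself.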

Related

Automation Hub collections in AWX

On my AWX system I've configured the collections in requirements.yml and all is fine.
Now I need to add another collection provided by Automation Hub, which means the source is in another place. I've read the document Downloading a collection from Automation Hub, but unfortunately it doesn't work. I've created an ansible.cfg in my role directory, but perhaps that is not the right way.
With AWX it is not possible to configure this through the WebUI like in Ansible Tower.
Does anyone have an idea how to resolve this?
Where is the ansible.cfg defined for AWX, and is it possible to configure more than one?
Best regards,
H.
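One way this is commonly handled (a sketch only, not verified against this particular AWX setup; the namespace, collection name and URL below are placeholders) is to keep requirements.yml as the single list of collections and point the individual collection at the Automation Hub server with the source key, while the Automation Hub API token is supplied on the AWX side, for example as a Galaxy credential attached to the organization, rather than in an ansible.cfg inside the project:

collections:
  # served by the default Galaxy server
  - name: community.general
  # served by Automation Hub; 'source' overrides the server for this entry only
  - name: my_namespace.my_collection
    source: https://console.redhat.com/api/automation-hub/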

How to store and retrieve secret in AWX for inventory script?

I have a dynamic inventory script based on a MariaDB database. It works as expected when I call it from Ansible, and now I want to use it with AWX as an inventory source from a project. But I can't figure out how to store my database credentials and how to retrieve them from the script.
Currently I use a JSON file with my credentials, but I don't want to store this data in the Git repository. So I'm stuck at that point: I have a script that works, but I can't use it in AWX because I didn't find any way to retrieve secrets from the inventory script.
Do you have any advice for me?
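One pattern that works for this (a sketch, assuming a reasonably recent AWX; the field and environment variable names below are made up) is a custom credential type whose injector exposes the database credentials as environment variables, a credential of that type attached to the inventory source, and the inventory script reading its environment instead of the JSON file.

Input configuration of the custom credential type:

fields:
  - id: db_user
    type: string
    label: Database user
  - id: db_password
    type: string
    label: Database password
    secret: true
required:
  - db_user
  - db_password

Injector configuration:

env:
  MARIADB_USER: '{{ db_user }}'
  MARIADB_PASSWORD: '{{ db_password }}'

The script can then read MARIADB_USER and MARIADB_PASSWORD from its environment, and the secret itself only ever lives in the AWX credential store.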

ansible deploy to multi aws accounts using codebuild

The Ansible playbook I'm running via AWS CodeBuild only deploys to the same account. Instead of using a separate build for each account, I'd like to use only one build and manage multi-account deployment via the Ansible inventory. How can I set up the static inventory to add yml files for every other AWS account or environment it will be deploying to? That is, the inventory classifies those accounts into dev, stg and prod environments.
I know a bit about how this should be structured: create a yml file in the inventory folder named after the account, and also create a matching file in the group_vars subfolder without the yml extension. But I do not know the details of the file contents. Can you please explain this to me?
On the other side, the CodeBuild environment variables provide a few account names, the environment, and the role it should assume in those accounts to deploy. My question is: how should the inventory structure and file contents be set up for this to work?
If you want to act on resources in a different account, the general idea in AWS is to "assume" a role in that account and run API calls as normal. Ansible has a module, 'sts_assume_role', which helps to assume a role. I found the following blog article that may give you some pointers. Whether you run the ansible command on your laptop or in CodeBuild, the idea is the same:
http://www.drivenbydevops.io/aws-ansible-and-assumed-roles/
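To make the structure part concrete, here is a rough sketch (the file names, variable names and role ARN are illustrative assumptions, not a required layout): one inventory file per environment, a matching group_vars file with the account details, and a play that assumes the role before touching anything in that account. The module is shown with its community.aws FQCN.

inventory/dev.yml:

all:
  children:
    dev:
      hosts:
        localhost:
          ansible_connection: local

group_vars/dev:

aws_account_id: "111111111111"
deploy_role_arn: "arn:aws:iam::111111111111:role/ansible-deploy"

Playbook excerpt:

- hosts: dev
  tasks:
    # obtain temporary credentials for the target account
    - community.aws.sts_assume_role:
        role_arn: "{{ deploy_role_arn }}"
        role_session_name: "codebuild-deploy"
      register: assumed
    # assumed.sts_creds.access_key / secret_key / session_token can then be
    # passed to the AWS modules (or exported as AWS_* environment variables)
    # for the rest of the deployment

The stg and prod environments would get their own inventory and group_vars files with their account IDs and role ARNs, and the CodeBuild job would select the environment with -i inventory/dev.yml (or stg/prod) based on its environment variables.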

Deployment with Ansible from Gitlab CI, dealing with passwords

I'm trying to achieve a "password-free" deployment workflow using GitLab CI and Ansible.
Some steps do require a password (I'm already using SSH keys whenever I can), so I've stored those passwords inside an Ansible Vault. Next, I would just need to provide the Vault password when running the playbook.
But how could I integrate this nicely with GitLab CI?
Could I register a gitlab-ci job (or are jobs suitable for builds only?) which just runs the playbook, providing the vault password somehow? Can this be achieved without a password lying around in plain text?
Also, I would be really happy if someone could point me to some material that shows how to deploy builds using Ansible. As you can notice, I've definitely found nothing about that.
You can set an environment variable in GitLab CI to hold the Ansible Vault password. In my example I called it $ANSIBLE_VAULT_PASSWORD.
Here is the example for .gitlab-ci.yml:
deploy:
  only:
    - master
  script:
    - echo $ANSIBLE_VAULT_PASSWORD > .vault_password.txt
    # deploy.yml here is a placeholder for the playbook; ansible/staging.yml is the (YAML) inventory
    - ansible-playbook -i ansible/staging.yml deploy.yml --vault-password-file .vault_password.txt
Hope this trick helps you out.
I'm not super familiar with gitlab ci, or ansible vault for that matter, but one strategy that I prefer for this kind of situation is to create a single, isolated, secure, and durable place where your password or other secrets exist. A private s3 bucket, for example. Then, give your build box or build cluster explicit access to that secure place. Of course, you'll then want to make sure your build box or cluster are also locked down, such as within a vpc that isn't publicly accessible and can only be accessed via vpn or other very secure means.
The idea is to give the machines that need your password explicit knowledge of where to get it and, separately, the permission and access they need to get it. The former does not have to be a secret (so it can exist in source control), but the latter is virtually impossible to attain without compromising the entire system, at which point you're already boned anyway.
So, more specifically, the machine that runs ansible could be inside the secure cluster. It knows how to access the password. It has permission to do so. So it can simply get the password, store it as a variable, and use it to access secure resources as it runs. You'll want to be careful not to leak the password in the process (such as by piping Ansible logs containing it anywhere outside the cluster). If you want to kick off the ansible script from outside the cluster, then you would VPN in to run the playbook on the remote machine.
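To make that second suggestion concrete, here is a minimal sketch of the corresponding .gitlab-ci.yml, assuming the runner has IAM access to a private S3 bucket (the bucket name, paths and playbook name are hypothetical):

deploy:
  only:
    - master
  script:
    # fetch the vault password from the locked-down bucket the runner can reach
    - aws s3 cp s3://example-secrets-bucket/ansible/vault_password.txt .vault_password.txt
    - ansible-playbook -i ansible/staging.yml deploy.yml --vault-password-file .vault_password.txt
    # remove the password file so it does not linger in the workspace
    - rm -f .vault_password.txt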

Is it ok to use Ansible for deployment of apps instead of Makefiles

I have recently started using Ansible for configuration management of Linux servers.
My habit is that if I learn one tool then I try to use it as much as possible.
Initially, for my PHP web apps I had a long Makefile which would download and install packages, make php.ini changes, extract zip files, copy files between folders, etc., to deploy my application in an automated way.
Now I am thinking of converting that Makefile deployment to Ansible, because then I can arrange separate yml files for separate areas rather than one big Makefile for the whole project.
I want to know whether it is a good idea to use Ansible for that, or whether the Makefile is good enough.
Sure, Ansible is great for that. You can separate all your different steps into different playbooks, each defined in its own YAML file.
You can define common tasks and then include them in your specific playbooks.
You can also make use of Ansible roles to create complete sets of playbooks depending on the role of the server. For example, one set of servers could have the webservers role and another set the databases role.
You can find more info on roles here: http://docs.ansible.com/playbooks_roles.html
There are also a few modules out there on the web that you can use to get started, and you can also use Ansible Galaxy to import roles.
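As a rough idea of how the Makefile steps could translate (a sketch only; the package names, paths and php.ini values are made-up placeholders):

- hosts: webservers
  become: true
  tasks:
    - name: Install the packages the Makefile used to download
      apt:
        name:
          - php
          - unzip
        state: present

    - name: Make a php.ini change
      lineinfile:
        path: /etc/php.ini
        regexp: '^upload_max_filesize'
        line: 'upload_max_filesize = 20M'

    - name: Extract the application archive
      unarchive:
        src: files/app.zip
        dest: /var/www/html

    - name: Copy files between folders
      copy:
        src: files/config.php
        dest: /var/www/html/config.php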
Of course, you can accomplish the same by breaking down your Makefile but maybe you want to learn a new tool.
Hope it helps.
