I'm trying to achieve a "password-free" deployment workflow using GitLab CI and Ansible.
Some steps do require a password (I'm already using SSH keys wherever I can), so I've stored those passwords in an Ansible Vault. Next, I would just need to provide the Vault password when running the playbook.
But how could I integrate this nicely with GitLab CI?
Can I register a GitLab CI job (or are jobs suitable for builds only?) that just runs the playbook, providing the vault password somehow? Can this be achieved without a password lying around in plain text?
Also, I would be really happy if someone could point me to some material that shows how to deploy builds using Ansible. As you can tell, I've found next to nothing about that.
You can set an environment variable in GitLab CI that holds the Ansible Vault password. In my example I called it $ANSIBLE_VAULT_PASSWORD.
Here is an example .gitlab-ci.yml:
    deploy:
      only:
        - master
      script:
        - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_password.txt
        - ansible-playbook ansible/staging.yml --vault-password-file .vault_password.txt
Hope this trick helps you out.
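One small addition that isn't part of the snippet above (my own suggestion, so treat it as an assumption): mark $ANSIBLE_VAULT_PASSWORD as a protected/masked variable in the GitLab CI/CD settings, and remove the decrypted password file when the job finishes so it never lingers in the workspace or ends up in an artifact. An after_script runs even when the job fails, so it could look like this:

    deploy:
      only:
        - master
      script:
        - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_password.txt
        - ansible-playbook ansible/staging.yml --vault-password-file .vault_password.txt
      after_script:
        # runs regardless of job status, so the plaintext file is always cleaned up
        - rm -f .vault_password.txt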
I'm not super familiar with gitlab ci, or ansible vault for that matter, but one strategy that I prefer for this kind of situation is to create a single, isolated, secure, and durable place where your password or other secrets exist. A private s3 bucket, for example. Then, give your build box or build cluster explicit access to that secure place. Of course, you'll then want to make sure your build box or cluster are also locked down, such as within a vpc that isn't publicly accessible and can only be accessed via vpn or other very secure means.
The idea is to give the machines that need your password explicit knowledge of where to get it AND, separately, the permission and access they need to get it. The former does not have to be a secret (so it can exist in source control), but the latter is virtually impossible to attain without compromising the entire system, at which point you're already boned anyway.
So, more specifically, the machine that runs Ansible could be inside the secure cluster. It knows how to access the password, and it has permission to do so. So it can simply get the password, store it in a variable, and use it to access secure resources as it runs. You'll want to be careful not to leak the password in the process (like piping Ansible logs containing the password to somewhere outside the cluster, or even anywhere at all). If you want to kick off the Ansible script from outside the cluster, you would VPN in to run the playbook on the remote machine.
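To make that concrete with the GitLab CI example from the first answer: if the vault password lives in a private S3 bucket that only the runner's IAM role can read, the job can fetch it at run time instead of keeping it in a CI variable at all. This is only a sketch; the bucket name, object key, and paths are invented for illustration:

    deploy:
      only:
        - master
      script:
        # the runner's IAM role / instance profile needs s3:GetObject on this key
        - aws s3 cp s3://example-secrets-bucket/ansible/vault_password.txt .vault_password.txt
        - ansible-playbook ansible/staging.yml --vault-password-file .vault_password.txt
      after_script:
        # make sure the plaintext password does not outlive the job
        - rm -f .vault_password.txt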
The Ansible playbook I'm running via AWS CodeBuild only deploys to the same account. Instead of using a separate build for each account, I'd like to use only one build and manage multi-account deployment via the Ansible inventory. How can I set up the static inventory to add yml files for every other AWS account or environment it will be deploying to? That is, the inventory classifies those accounts into dev, stg & prod environments.
I know a bit about how this should be structured: create a yml file in the inventory folder named after the account, and also create a matching file in the group_vars subfolder without the yml extension. But I do not know the details of the file contents. Can you please explain this to me?
On the other side, the CodeBuild environment variables provide a few account names, the environment, and the role it should assume in those accounts to deploy. My question is how the inventory structure and file contents should be set up for this to work.
If you want to act on resources in a different account, the general idea in AWS is to "assume" a role in that account and then run API calls as normal. I see that Ansible has an 'sts_assume_role' module which helps to assume a role. I found the following blog article that may give you some pointers. Whether you run the ansible command from your laptop or from CodeBuild, the idea is the same:
http://www.drivenbydevops.io/aws-ansible-and-assumed-roles/
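For what it's worth, here is a rough sketch of how such an inventory layout and the assume-role step could fit together. Every name, account ID, and ARN below is a placeholder invented for illustration; none of it comes from the question, the answer, or the linked article:

    # inventory/dev.yml - one static inventory file per environment (stg.yml and prod.yml look the same)
    all:
      children:
        dev:
          hosts:
            localhost:
              ansible_connection: local

    # inventory/group_vars/dev - per-environment variables, e.g. which role to assume in which account
    assume_role_arn: "arn:aws:iam::111111111111:role/deploy"
    aws_region: "us-east-1"

    # in the playbook: assume the role, then hand the temporary credentials to the AWS modules
    - name: Assume the deployment role for the target account
      community.aws.sts_assume_role:
        role_arn: "{{ assume_role_arn }}"
        role_session_name: "ansible-deploy"
      register: assumed

    - name: Example AWS call using the assumed-role credentials
      amazon.aws.ec2_instance_info:
        region: "{{ aws_region }}"
        aws_access_key: "{{ assumed.sts_creds.access_key }}"
        aws_secret_key: "{{ assumed.sts_creds.secret_key }}"
        security_token: "{{ assumed.sts_creds.session_token }}"
      register: instances

A single CodeBuild project can then run the same playbook with -i inventory/dev.yml, -i inventory/stg.yml or -i inventory/prod.yml, driven by its environment variables.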
I manage a large-scale MapR-based infrastructure running in on-prem data centers. As part of a configuration management enhancement we have written several playbooks and keep everything in GitHub. Now I don't want anyone to download/clone those repos locally to the Ansible client nodes and run them from there. Is there a way I can run playbooks with Ansible without downloading them to the local machine? So basically what I want is a script/playbook where I pass which playbook to run; it should download that playbook and run it locally.
You're looking for a web interface from which users can simply run your tasks, while in the background it executes Ansible.
There are many ways to achieve what you need, but most likely you're looking for one of these:
AWX project - the official Ansible web interface
Jenkins or Rundeck - heavier tools in which you can create your own "jobs" for users to interact with, build CI/CD flows, and schedule cron tasks to run whenever you need
You can also look into workflow automation tools such as Airflow
There are alternatives to everything mentioned above, so be sure to check them all out when deciding what you need.
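Also, if the literal requirement is just "pass a playbook name, have it fetched from GitHub and run locally", ansible-pull (quoted in more detail further down this page) already does roughly that, and a tiny wrapper playbook can do the same. The repository URL and paths here are placeholders, not your real repo:

    # run-remote.yml - hypothetical wrapper: ansible-playbook run-remote.yml -e playbook=site.yml
    - hosts: localhost
      connection: local
      tasks:
        - name: Check out the playbook repository into a temporary location
          ansible.builtin.git:
            repo: "https://github.com/example/mapr-playbooks.git"
            dest: /tmp/mapr-playbooks
            version: master

        - name: Run the requested playbook from that checkout
          ansible.builtin.command:
            cmd: "ansible-playbook /tmp/mapr-playbooks/{{ playbook }}"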
I'm setting up an Ansible server and I have a basic question.
What is the best practice for setting up the first Ansible server itself? (Installing specific versions of Python, Ansible, etc.)
An Ansible server is used to set up other servers (non-Ansible and Ansible alike), but the first/root Ansible server can't be helped by any other Ansible server.
I'm writing a shell script just for that first one, but I feel like I'm back in the early 2000s.
You can get all the information you require to set up Ansible at the links below:
Watch the Ansible quick start video
How Ansible works
Download Ansible
I struggled with the same issue. I solved it in the following way:
Set up the first server manually (bear with me!) as a bare Ansible control server.
Create a second server with only the OS, no Ansible yet.
Write scripts on the first server to build up the second server into a fully specced Ansible control server. I did need an extra (shell) script that installs the required Galaxy roles; you can use Ansible to have those roles automatically installed on the second server.
On the second server, pull the scripts (your Ansible scripts are in version control, right?) and use them to keep the first server up to date.
Switch regularly between using the first and the second server as the Ansible control server.
Yes, this does indeed add overhead (an extra server, the switching). But this way you make sure that even if both servers die, you only need to rebuild one simple server with a bare Ansible, which can then build up itself or a second server.
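As a rough illustration of the "build up the second server" scripts mentioned above, the playbook run from the existing control server could look something like this; the package name, repository URL, and paths are placeholders, not from the answer:

    # control-server.yml - hypothetical sketch, run from the existing control server against the new one
    - hosts: new_control_server
      become: true
      tasks:
        - name: Install Ansible from the distribution packages
          ansible.builtin.package:
            name: ansible
            state: present

        - name: Check out the playbook repository (it is in version control, right?)
          ansible.builtin.git:
            repo: "https://example.com/ops/ansible-playbooks.git"
            dest: /opt/ansible-playbooks
            version: master

        - name: Install the Galaxy roles the playbooks depend on
          ansible.builtin.command:
            cmd: ansible-galaxy install -r /opt/ansible-playbooks/requirements.yml
            creates: /etc/ansible/roles   # crude idempotency guard; point it at your actual roles path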
I would like to have a setup where my EC2 instances sometimes get terminated and new nodes come up with the same hostname. My Puppet server is supposed to keep the old certificates and instantly push the configs via the required modules.
Is this a viable solution? In this case, do I need to keep the clients' SSL certs and push them to the EC2 instances via user-data? What would be the best alternatives?
I can't really think of many arguments against certificate re-use, other than that puppet cert clean "$certname" on the CA master is so simple that I can't see much reason to re-use certificate names in the first place.
That said, I'd recommend building some kind of pipeline that includes certificate cleaning upon destruction of your EC2 instances. This is fairly easy to do in a Terraform workflow with the AWS Terraform provider and a local-exec provisioner. For each aws_instance you create, you'd also add a local-exec provisioner and specify that it only executes at destruction time: when = "destroy".
If you're re-using hostnames for convenience, maybe it would be wiser to rely on DNS to point to the new hosts instead of relying on the hostnames themselves, and stop worrying about puppet cert clean.
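The answer above frames this as a Terraform local-exec flow; since the rest of this page is Ansible-centric, here is the same "clean the certificate whenever an instance is destroyed" idea sketched as an Ansible teardown play instead. The variable names and the CA hostname are placeholders:

    # teardown.yml - hypothetical sketch, run by whatever pipeline destroys the node
    - hosts: localhost
      connection: local
      tasks:
        - name: Terminate the EC2 instance
          amazon.aws.ec2_instance:
            instance_ids: ["{{ instance_id }}"]
            state: absent

        - name: Clean the node's certificate on the Puppet CA master
          ansible.builtin.command:
            cmd: "puppet cert clean {{ node_fqdn }}"
          delegate_to: puppet-ca.example.com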
I know Puppet can be used to keep a server in a consistent state. So, for instance, if someone else (perfectly legally) created a new user "bob", Puppet would spot that this is not what the specification says and then delete user "bob".
Is there a similar way to do this in Ansible?
By default Ansible is designed to work in "push" mode, i.e. you actively send instructions to servers to do something.
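So, for the user "bob" example from the question: a task that you push to your servers can enforce that desired state on every run. A minimal sketch:

    - hosts: all
      become: true
      tasks:
        - name: Make sure the manually created 'bob' account does not exist
          ansible.builtin.user:
            name: bob
            state: absent
            remove: true   # also delete bob's home directory

That still happens in push mode, though - you have to run the play against the servers yourself.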
However, Ansible also has an ansible-pull command. I'm quoting from http://docs.ansible.com/playbooks_intro.html#ansible-pull
Ansible-Pull

Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing configuration out to them, you can.

Ansible-pull is a small script that will checkout a repo of configuration instructions from git, and then run ansible-playbook against that content.

Assuming you load balance your checkout location, ansible-pull scales essentially infinitely.

Run ansible-pull --help for details.

There's also a clever playbook available to configure ansible-pull via a crontab from push mode.
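The crontab idea from that last line could look roughly like this - a small play, pushed once in the normal way, that schedules ansible-pull on every node. The repository URL, schedule, and paths are placeholders, not taken from the linked documentation:

    - hosts: all
      become: true
      tasks:
        - name: Run ansible-pull from cron every 30 minutes
          ansible.builtin.cron:
            name: "ansible-pull"
            minute: "*/30"
            job: "ansible-pull -U https://github.com/example/config.git -d /var/lib/ansible/local local.yml"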