I would like to have a setup where my EC2 instances are occasionally terminated and new nodes come up with the same hostname. My Puppet server is supposed to keep the old certificates and immediately push the configuration via the required modules.
Is this a viable solution? In this case, do I need to keep the clients' SSL certificates and push them to the EC2 instances via user-data? What would be the best alternatives?
I can't really think of many arguments against certificate re-use, other than that puppet cert clean "$certname" on the CA master is so simple that there is little reason to re-use certificate names in the first place.
That said, I'd recommend building some kind of pipeline that includes certificate cleaning upon destruction of your EC2 instances. This is fairly easy to do in a Terraform workflow with the AWS Terraform provider and the local-exec provisioner. For each aws_instance you create, you'd also create a local-exec provisioner and specify that it only executes at destruction time: when = "destroy".
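A minimal sketch of that pattern, assuming Terraform 0.12+ syntax and that the machine running Terraform can SSH to the CA master; the AMI, instance type, certname, and puppet-ca host below are placeholders:

variable "certname" {
  # Hypothetical certname re-used by replacement instances
  default = "app-node-01.example.com"
}

resource "aws_instance" "node" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
  tags = {
    Name = var.certname
  }
}

# Destroy-time cleanup: when this resource is destroyed, clean the matching
# certificate on the CA master so a replacement node with the same name can
# register without a certificate clash.
resource "null_resource" "puppet_cert_clean" {
  triggers = {
    certname = var.certname
  }

  provisioner "local-exec" {
    when    = destroy
    command = "ssh puppet-ca 'puppet cert clean ${self.triggers.certname}'"
  }
}

Destroy-time provisioners may only reference the resource's own attributes via self, which is why the certname is copied into triggers.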
If you're re-using hostnames purely for convenience, it might be wiser to rely on DNS records that point at the new hosts rather than on the hostnames themselves, and stop worrying about puppet cert clean altogether.
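If you go the DNS route, the replacement instance (or your pipeline) can simply upsert a record when it comes up. This is a sketch only; the hosted zone ID, record name, and IP address are placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app01.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "198.51.100.10" }]
      }
    }]
  }'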
Summary: We have a concern based on one piece of documentation regarding Gitlab.com CI for private projects
Note: This is in reference to Gitlab.com (and not a self-hosted gitlab)
Concern: We came across this link, https://docs.gitlab.com/ee/ci/runners/#be-careful-with-sensitive-information
My interpretation: it's not advisable to build private projects on the default Gitlab CI shared runners.
Is that interpretation valid, and to what extent is it a concern?
What do you think will be the best practice for this?
Question:
Is it fine to use Gitlab.com Shared Runners for CI in Private Projects?
Our solution, IF and only IF we need an alternative (a POC for this was successfully implemented):
We created an EC2 instance (a private box)
Installed Gitlab Runner on the box (see the registration sketch after this list)
Connected the EC2 instance to Gitlab
Disabled shared runners in the project settings
On a CI run, the job is successfully sent to our EC2 instance
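For reference, registering such a runner on the box boils down to one command; the registration token, description, executor, and image here are placeholders rather than the exact values we used:

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --description "private-ec2-runner" \
  --executor "docker" \
  --docker-image "alpine:latest"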
https://gitlab.com/gitlab-org/gitlab/-/issues/215677
Short answer:
My interpretation was wrong. Gitlab.com shared runners are fully safe here; the quoted doc is NOT about this use case.
Read Response from Gitlab.com: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/25468#note_333854812
Quote of response:
The Shared Runners on GitLab.com are isolated VM's that are provisioned for each CI job and removed after job execution. This is documented here.
The documentation that you reference is actually referring to the situation where as a user you are now setting up and managing your own Runners. This is actually what you have done in the Our Solution section. So the security concern is that on your EC2 instance, if the Runner is configured to use the Shell executor for example, then any user in your organization that can execute CI jobs on the Runner on that EC2 instance is now able to execute a script which has full access to the filesystem on the EC2 instance.
So this is why on GitLab.com we always create new isolated VM's for each job.
I have a situation where I want to test an AWS EC2 server using the Test Kitchen framework. We are using CloudFormation, not Chef, for our infrastructure creation. I want to use the kitchen verify functionality by writing test cases, but I can't use Chef recipes for infrastructure creation.
Is there any way I can just run kitchen verify against existing EC2 infrastructure created by CloudFormation? How do I point it at an existing server that was not created with kitchen converge?
Appreciate your help!
KitchenCI is only a tool (a powerful one, no doubt! :-)) that connects other tools/drivers (provisioners, verifiers, etc.).
Since you do not use it to provision your test infrastructure, it makes little to no sense to use it for verification. Instead, I would suggest researching whether your preferred verifier (you didn't mention which one you are using) can be used standalone. For example, you can run InSpec without Kitchen (look for the backend/host flags).
There is a driver plugin for CloudFormation, which includes its own pass-through provisioner. But I've never used it, and using standalone InSpec or Serverspec is probably easier :)
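As an illustration, a standalone InSpec run against an already-provisioned instance might look like this; the profile path, SSH user, host, and key file are placeholders:

inspec exec ./test/integration/default \
  --target ssh://ec2-user@203.0.113.10 \
  --key-files ~/.ssh/my-key.pem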
I'm trying to achieve a "password-free" deployment workflow using Gitlab CI and Ansible.
Some steps do require a password (I'm already using SSH keys wherever I can), so I've stored those passwords inside an Ansible Vault. Next, I would just need to provide the Vault password when running the playbook.
But how could I integrate this nicely with Gitlab CI?
Can I register a gitlab-ci job (or are jobs suitable for builds only?) that just runs the playbook, providing the vault password somehow? Can this be achieved without a password lying around in plain text?
Also, I would be really happy if someone could point me to some material that shows how to deploy builds using Ansible. As you can tell, I've definitely found nothing about that.
You can set an environment variable in GitLab CI that holds the Ansible Vault password. In my example I called it $ANSIBLE_VAULT_PASSWORD.
Here is an example .gitlab-ci.yml:
deploy:
  only:
    - master
  script:
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_password.txt
    # ansible/staging.yml is used as the inventory here; ansible/site.yml is a placeholder playbook path
    - ansible-playbook -i ansible/staging.yml ansible/site.yml --vault-password-file .vault_password.txt
Hope this trick helps you out.
I'm not super familiar with gitlab ci, or ansible vault for that matter, but one strategy that I prefer for this kind of situation is to create a single, isolated, secure, and durable place where your password or other secrets exist. A private s3 bucket, for example. Then, give your build box or build cluster explicit access to that secure place. Of course, you'll then want to make sure your build box or cluster are also locked down, such as within a vpc that isn't publicly accessible and can only be accessed via vpn or other very secure means.
The idea is to give the machines that need your password explicit knowledge of where to get it AND, separately, the permission and access they need to get it. The former does not have to be a secret (so it can exist in source control), but the latter is virtually impossible to attain without compromising the entire system, at which point you're already boned anyway.
So, more specifically, the machine that runs ansible could be inside the secure cluster. It knows how to access the password. It has permission to do so. So, it can simply get the pw, store as a variable, and use it to access secure resources as it runs. You'll want to be careful not to leak the password in the process (like piping ansible logs with the pw to somewhere outside the cluster, or even anywhere perhaps). If you want to kick off the ansible script from outside the cluster, then you would vpn in to run the ansible playbook on the remote machine.
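A rough sketch of that flow from the build machine, assuming it has an IAM role allowing s3:GetObject on the object; the bucket name, object key, inventory, and playbook paths are made up:

aws s3 cp s3://my-private-secrets-bucket/ansible/vault_password.txt .vault_password.txt
chmod 600 .vault_password.txt
ansible-playbook -i inventory/staging site.yml --vault-password-file .vault_password.txt
rm -f .vault_password.txt   # don't leave the password on disk after the run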
I am using Enterprise Chef. There is just one validation key per organisation. That means that once I download it to my workstation, other DevOps people on the team can't have it, so they can't bootstrap nodes from their workstations. If I automate the bootstrapping process, for instance by putting the key on S3, then I also have to think about keeping the validation key on my workstation and the copy in S3 in sync (the same goes for everyone else on the team).
So the question is:
What are the best practices for distributing this single validation key across nodes and workstations?
My best bet on this:
Use chef on your workstations to manage the distribution of the validation.pem file.
Another way is to put it in a shared place (a CIFS or NFS share) for your whole team.
According to this blog post this will become unneeded with chef 12.2.
For the record, the validation key is only necessary for the node to register itself and create its own client.pem file on the first run; it should (must, if you're security-aware) be deleted from the node after this first run.
The chef_client cookbook can take care of cleaning up the validator key and will help manage node configuration.
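For instance, a bootstrap that registers the node and immediately removes the validator could look like this; the hostname, SSH user, and node name are placeholders, and delete_validation refers to the recipe shipped with the community chef-client cookbook:

knife bootstrap node1.example.com \
  --ssh-user ec2-user \
  --sudo \
  --node-name node1 \
  --run-list 'recipe[chef-client::delete_validation],recipe[chef-client]'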
My fantasy is to be able to spin up a standard AMI, load a tiny script and end up with a properly configured server instance.
Part of this is that I would like to have a PRIVATE yum repo in S3 that would contain some proprietary code.
It seems that S3 wants you either to be public or to use Amazon's own special flavor of authentication.
Is there any way that I can use standard HTTPS + either Basic or Digest auth with S3? I'm talking about direct references to S3, not going through a web-server to get to S3.
If the answer is 'no', has anyone thought about adding AWS Auth support to yum?
The code in cgbystrom's git repo is an expression of intent rather than working code.
I've made a fork and gotten things working, at least for us, and would love for someone else to take over.
https://github.com/rmela/yum-s3-plugin
I'm not aware that you can use non-proprietary authentication with S3; however, we accomplish a similar goal by mounting an EBS volume to our instances once they fire up. You can then access the EBS volume as if it were part of the local file system.
We can make changes to EBS as needed to keep it up to date (often updating it hourly). Each new instance that mounts the EBS volume gets the data current as of the mount time.
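Something along these lines, run from the instance at boot, is enough to attach and mount the volume; the volume ID, device name, and mount point are placeholders, and the device may surface under a different name depending on the instance type:

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id "$INSTANCE_ID" \
  --device /dev/sdf
sudo mkdir -p /mnt/repo
sudo mount /dev/xvdf /mnt/repo   # e.g. /dev/nvme1n1 on Nitro instance types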
You can certainly use Amazon S3 to host a private Yum repository. Instead of fiddling with authentication, you could try a different route: limit access to your private S3 bucket by IP address. This is entirely supported; see the S3 documentation.
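A bucket policy of roughly this shape does it; the bucket name and CIDR range are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowYumClientsByIp",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-private-yum-repo/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}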
A second option is to use a Yum plug-in that provides the necessary authentication. Seems like someone already started working on such a plug-in: https://github.com/cgbystrom/yum-s3-plugin.