Is there a way to run molecule tests with external dependencies? - ansible

I have several roles that run actions against a remote database, executing statements for user and privilege creation.
I have seen Molecule used to test playbooks that run against a single host, but I am unsure how you could set up a second container (for the database) in the same network as the Molecule container, similar to a docker-compose setup. I have not been able to find a setup like this in the documentation.
Is there a recommended way to run molecule tests with external dependencies? Or should I just use docker-compose or similar to run my tests?

There is a 'prepare' stage in Molecule specifically for that. You need to separate two questions:
Where is the external resource (the database) run?
Why and how is it configured?
Those are separate concerns, and mixing them together is a bad idea.
For the first question there are different answers:
It already exists (out of the blue, configured by other people). Use non-managed hosts in molecule.yml.
We are OK with running it on the same host as the one our code runs on. Shovel the installation into the 'prepare' stage.
We want it on a separate server. Put an additional host in platforms in a different group and configure it in the prepare stage (see the sketch below).
If you find your driver is not good enough, you can always opt for the 'delegated' driver. In that case you need to write playbooks for the create/destroy of hosts. It's relatively easy. The main trick is to use the 'platforms' variable to get information about the content of molecule.yml's platforms section.
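For illustration, here is a minimal sketch of the third option with the Docker driver. The image names, the postgres example, and the group names 'app' and 'db' are assumptions made for the sketch, not something Molecule prescribes:

# molecule/default/molecule.yml (sketch)
driver:
  name: docker
platforms:
  - name: app-instance
    image: quay.io/centos/centos:stream9   # whatever image your role normally targets
    pre_build_image: true
    groups:
      - app
    networks:
      - name: molecule-net
  - name: database
    image: postgres:15                      # the external dependency
    pre_build_image: true
    command: "postgres"                     # keep the database entrypoint running
    env:
      POSTGRES_PASSWORD: molecule
    groups:
      - db
    networks:
      - name: molecule-net

# molecule/default/prepare.yml (sketch) -- runs before converge
- hosts: app
  gather_facts: false
  tasks:
    - name: Wait for the database container to accept connections
      ansible.builtin.wait_for:
        host: database        # containers on one Docker network resolve each other by name
        port: 5432
        timeout: 60

Your converge playbook can then target the app group and reach the database by its container name.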

Related

How can I run a task that requires multiple containers?

This is my first question so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an ant target. Before running that ant target we must spin up, configure and partially populate (the ant target does some of the population) several containers including:
application server
ldap instance
postgres instance
It looks as if each task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a task that would allow me to specify a pod template with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift client or kubernetes actions, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require me to build out status awareness, error checking, retry logic, etc. that is likely already part of the pipeline logic, and then parse the output of the ant run to determine whether all of the tests were successful. This is my first foray into Tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
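As a minimal sketch, assuming a PostgreSQL sidecar and a build image that has Ant installed (the names and images below are illustrative, not taken from the question):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: unit-tests
spec:
  sidecars:
    - name: postgres
      image: postgres:15
      env:
        - name: POSTGRES_PASSWORD
          value: test
  steps:
    - name: run-ant-tests
      image: registry.example.com/ant-build:latest   # assumption: an image with Ant available
      script: |
        # steps and sidecars share the pod, so the database is reachable on localhost:5432
        ant test

The ldap and application-server containers from the question can be added as further entries under sidecars in the same way.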

Ansible to run playbooks from github

I manage a MapR-based large-scale infrastructure running in on-prem data centers. As part of a configuration management enhancement we have written several playbooks and keep everything in GitHub. Now I don't want anyone to download/clone those repos locally to the Ansible client nodes and run them from there. Is there a way to run playbooks with Ansible without downloading them to the local machine? So basically what I want is a script/playbook where I pass which playbook to run, and it downloads that playbook and runs it locally.
You're looking for a web interface where users can simply run your tasks while Ansible executes in the background.
There are many ways to achieve what you need, but most likely you're looking for one of these:
AWX project - the official Ansible web interface
Jenkins or Rundeck - heavier software in which you can create your own "jobs" for users to interact with, build CI/CD flows, and schedule cron tasks to run whenever you need
You can also look into workflow automation tools such as Airflow
There are alternatives to everything mentioned here, so be sure to evaluate the options when deciding what you need.

CI based on docker-compose?

I am currently building a little application that requires some massively annoying software to be installed and running in the background. To ease the pain of developing, I wrote a set of docker-compose files that run the necessary daemons, create some jobs, and throw in some test data.
Now I'd like to run this in a CI-like manner. I currently have Jenkins check all the different repositories and execute a shell script that calls docker-compose up --abort-on-container-exit. That gets the job done, but it seems like a hack, and I'm not such a huge fan of Jenkins.
What I want to ask is: is there a more beautiful way of doing this? Specifically, is there a CI that will
watch a set of git repositories,
re-execute docker-compose (possibly multiple times with different sets of parameters), and
nicely collect and split the logs and tell me which container exactly failed how?
(Optionally) is not some cloud service but installable on my local server?
If the answer to this is "write a Jenkins module", then fine, so be it.
I'm aware that there are options like gitlab-ci, but I'd like to keep the CI script in a fashion that can also be easily executed during development, before pushing to a repo.

Is it ok to use Ansible for deployment of apps instead of Makefiles

I have recently started using Ansible for configuration management of Linux servers.
My habit is that if I learn one tool, I try to use it as much as possible.
Initially, for my PHP web apps, I had a long Makefile which downloaded and installed packages, made php.ini changes, extracted zip files, copied files between folders, etc., to deploy my application in an automated way.
Now I am thinking of converting that Makefile deployment to Ansible, because then I can arrange separate yml files for separate areas rather than one big Makefile for the whole project.
I want to know whether it is a good idea to use Ansible for that, or whether a Makefile is good enough.
Sure, Ansible is great for that. You can separate all your different steps into different playbooks, each defined in its own YAML file.
You can define common tasks and then include them in your specific playbooks.
You can also make use of Ansible roles to create complete sets of playbooks depending on the role of the server. For example, one set of servers could have the webservers role and another set the databases role.
You can find more info on roles here: http://docs.ansible.com/playbooks_roles.html
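For example, a minimal top-level playbook using roles might look like the sketch below; the role and group names are made up for illustration:

# site.yml (sketch)
- hosts: webservers
  roles:
    - php_app      # e.g. install packages, adjust php.ini, unpack the release archive
- hosts: databases
  roles:
    - postgres

Each role then keeps its own tasks, templates, and files, which replaces one big Makefile with small reusable pieces.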
There are also a few modules out on the web that you can use to get started, and you can use Ansible Galaxy to import roles.
Of course, you can accomplish the same by breaking down your Makefile, but maybe you want to learn a new tool.
Hope it helps.

How can Puppet fit into a Continuous Delivery tool chain?

I'm investigating Puppet as our future deployment and provisioning tool in our shop, but now I'm stuck at how to make a clever Continuous Integration/Delivery tool chain with deployment through Puppet.
In any of our environments (dev, test, qa, demo, prod) we have a range of components. We need to be able to deploy each component separately and possibly even concurrently.
I'd like a way to initiate (through script) a deploy of a single component package (=Puppet module) and gather the output and success status of that.
Simply waiting for a scheduled agent pull, or doing a 'puppet agent --test' on each node in the environment, isn't good enough, because it may pick up other pending changes (I don't know if another component is also in the process of being deployed).
In my tool chain I would like the deployment output and status from component A and component B to be recorded separately and not mixed up.
So my question is: Can I use puppet to deploy one single named package (module) at a time?
And if not, where did I take a wrong turn when I drove down this path?
I realise a master-less Puppet set-up with modules and manifests replicated to each node perhaps could do it, but IMHO a master-less Puppet set-up kind of defeats the purpose of Puppet.
PS: I think what I'm trying to achieve is called 'Directed Orchestration' in Damon Edwards' very enlightening video at Integrating DevOps tools into a Service Delivery Platform (at timestamp around 22:30).
So my question is: Can I use puppet to deploy one single named package (module) at a time?
Yes, you can, via puppet apply. First you need to create a module directory and a module that will contain your manifests, e.g.:
/scratch/user/puppet/local/ # This is your modulepath for local deployment
# The following contains the manifests for a module named "localmod"
/scratch/user/puppet/local/localmod/manifests/init.pp
# example content of init.pp
class localmod {
notify{"I am in in local module....":}
}
On that local machine you can test this module via puppet apply:
puppet apply -v --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # Get the exit status of the above command
I watched the video at the point you referenced. There are two types of automation you can do.
Application build/deploy automation, which can be achieved via maven/ant (Build) and ant/capistrano/chrome/bash/msdeploy (Deploy) or as termed on that slide "Installer".
System/Infrastructure automation can be achieved via Chef/Puppet/CFEngine.
This question seems to be ... "How do I do application builds using Puppet (implied as a system automation tool)?"
So quite simply, oval tool in round hole. (I didn't say square)
At my company, we use Jenkins and the Build Pipeline Integration plugin to build massive multi component projects. As an example, a Java app will use ant in a build job, the next chained job will be a "deploy to dev" job which uses Capistrano to deploy the application, then the next job in the chain is "Configure Dev" which calls Chef to update the system configurations in the DEV environment. Chef is used to configure the application. Each of these jobs can be set to run automatically and sequentially.
a master-less Puppet set-up kind of defeats the purpose of Puppet.
Only if you discount:
The rich DSL Puppet has to offer
The many peer-reviewed community modules
Otherwise, something like this gives you remote directed orchestration.
# update manifests etc. (version control is the source of truth)
ssh user@host git pull
# run puppet locally against the pulled code (the manifest path here is an example)
ssh user@host sudo puppet apply manifests/site.pp
