I use Molecule for end-to-end testing of Ansible roles that interact with Kubernetes clusters.
Because spinning up a Kubernetes cluster (with the additional features I need for my tests) to get a clean environment for each test is very time-consuming (up to 45 minutes), I have pre-provisioned a few clusters and created an API that tells me which clusters are available for testing.
In my Molecule tests I use the delegated driver with a local connection, so the tests run on my local machine or the CI runner.
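Roughly, the scenario is configured like this (a minimal sketch; the platform name is a placeholder and exact option names may vary between Molecule versions):

```yaml
driver:
  name: delegated
  options:
    managed: false
    ansible_connection_options:
      ansible_connection: local   # run everything on the machine executing Molecule
platforms:
  - name: instance                # placeholder name
provisioner:
  name: ansible
```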
Now I need to call the API, get the information on the cluster the tests will run against, and inject that information into each of the Molecule steps.
First I thought about getting the connection info during the prepare step and making it global somehow (e.g. by defining it as a fact or a host_var), so the converge, verify and cleanup steps can access it.
After some research and an attempt at a proof of concept, I suspect that this is not possible: each Molecule step is a new invocation of ansible-playbook, so information can't be passed around.
Am I completely missing possibilities that Molecule offers?
Are there any suggestions on how to reach my goal?
Related
I have several roles that run actions on a remote database, executing statements for user and privilege creation.
I have seen Molecule used to test playbooks that run against a single host, but I am unsure how to set up a second container (for the database) in the same network as the Molecule container, similar to a docker-compose setup, and I have not been able to find a setup like this in the documentation.
Is there a recommended way to run Molecule tests with external dependencies? Or should I just use docker-compose or something similar to run my tests?
There is a 'prepare' stage in Molecule specifically for that. You need to separate two questions:
Where is the external resource (the database) run?
Why and how is it configured?
Those are very separate concerns, and mixing them together is a bad idea.
For question 1 there are different answers:
It already exists (out of the blue, configured by other people). Use non-managed hosts in molecule.yml.
We are OK with running it on the same host as the one running our code. Shovel the installation into the 'prepare' stage.
We want it on a separate server. Put an additional host into platforms in a different group and configure it in the prepare stage (see the sketch after this list).
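For example (a hypothetical molecule.yml fragment for the Docker driver; names and images are placeholders), the database gets its own platform and group:

```yaml
platforms:
  - name: app-instance
    image: quay.io/centos/centos:stream8    # placeholder: instance the role runs against
    groups:
      - app
  - name: db-instance
    image: docker.io/library/postgres:13    # placeholder: the external database
    groups:
      - db
```

prepare.yml can then target hosts: db to set up the database while leaving the instance that runs the role untouched.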
If you find your driver is not good enough, you can always opt for the 'delegated' driver. In this case you need to write the create/destroy playbooks for the hosts yourself. It's relatively easy; the main trick is to use the 'platforms' variable to get at the contents of molecule.yml's platforms section.
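The parsed molecule.yml is exposed to the create/destroy playbooks (as molecule_yml in recent Molecule versions), so a minimal create.yml sketch could look like this, with the real provisioning task (cloud module, API call, ...) replacing the debug task:

```yaml
- name: Create
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Iterate over the platforms defined in molecule.yml
      ansible.builtin.debug:
        msg: "Would create {{ item.name }} (groups: {{ item.groups | default([]) }})"
      loop: "{{ molecule_yml.platforms }}"
```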
For our Airflow projects (running Airflow 2.0.1) I have implemented some general tests that verify DAG validity, check whether each DAG has an owner/email, check the parsing time, and so on.
Now I am trying to set up a CI/CD pipeline that runs these tests and then pushes the DAGs to the Cloud Composer bucket. These tests will obviously fail, however, if I use any Airflow Connection or Variable, as these have not been created on the runners yet.
What I do not want to do is use mocking, as I would have to specify each connection/variable, which is a bit too much work for general tests. How do you deal with connections/variables for testing on different environments (development/testing/production)?
Since you're using Airflow 2, you can use the stable REST API to create or update variables in the desired environment. You just need to call that API from the CI/CD pipeline.
Check the official documentation for creating a variable and updating a variable.
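As an illustration, a sketch of such a pipeline step (the job layout is a made-up CI config and the variable names are placeholders; the /api/v1/variables endpoint is part of the Airflow 2 stable REST API, and basic auth is assumed as the API auth backend):

```yaml
create-airflow-variables:
  stage: prepare-environment   # hypothetical stage name
  script:
    # Create a variable; use PATCH /api/v1/variables/{key} to update an existing one.
    - >
      curl --fail -X POST "${AIRFLOW_BASE_URL}/api/v1/variables"
      --user "${AIRFLOW_API_USER}:${AIRFLOW_API_PASSWORD}"
      -H "Content-Type: application/json"
      -d '{"key": "my_variable", "value": "my_value"}'
```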
This is my first question, so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an ant target. Before running that ant target we must spin up, configure and partially populate (the ant target does some of the population) several containers, including:
application server
LDAP instance
Postgres instance
It looks as if each Task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a Task that would let me specify a pod template, with my commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift client or kubernetes actions tasks, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require me to build out status awareness, error checking, retry logic, etc. that is likely already part of the pipeline logic, and then parse the output of the ant run to determine whether all of the tests were successful. This is my first foray into Tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
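A hedged sketch of what that could look like (image names, credentials, and the ant target are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ant-unit-tests
spec:
  sidecars:
    - name: postgres
      image: postgres:13                           # placeholder image/tag
      env:
        - name: POSTGRES_PASSWORD
          value: unit-test-password                # placeholder credential
    - name: ldap
      image: osixia/openldap:1.5.0                 # placeholder image/tag
  steps:
    - name: run-ant
      image: registry.example.com/ant-build:latest # placeholder build image
      script: |
        ant unit-tests                             # placeholder ant target
```

Since sidecars and steps run in the same pod, the step can reach the database and LDAP containers on localhost; the application server could be started as a third sidecar in the same list.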
Regarding the Cypress Dashboard and runs
We are running smoke and end-to-end tests post-deployment and the tests are run against multiple environments
Is it somehow possible to combine the results for all the environments into a single run in the Dashboard?
Current results as an example:
We run the smoke tests against 8 environments that have different configurations
7 of these environments are ok and are marked as a success
1 environment fails
If the run for the failed environment isn't the latest one to be run, the run shown in the Dashboard is green and we sometimes don't notice through the Dashboard that something failed
Technically they are all run from the same commit ID
Is there any command-line parameter that will combine this so that the Dashboard will look at this as the same run but just against different environments similar to how it does with browsers?
I've been going through the documentation and issues on GitHub but can't find anything related to this
Yes, you can construct a unique ID and pass it to cypress run with --ci-build-id: https://docs.cypress.io/guides/guides/command-line.html#cypress-run-ci-build-id-lt-id-gt
There is an example in the Cypress Real World App (a payment application that demonstrates real-world usage of Cypress testing methods, patterns, and workflows) in its CircleCI configuration. Note that it uses the CircleCI Orb syntax, which maps to the --ci-build-id command-line parameter.
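Without the Orb, each environment's job can pass the same build ID together with a distinct group name (the job layout below is a made-up CI config; --record, --ci-build-id and --group are documented cypress run flags, and the record key is assumed to be provided via CYPRESS_RECORD_KEY):

```yaml
smoke-env-a:
  script:
    - npx cypress run --record --ci-build-id "$COMMIT_SHA" --group "smoke-env-a"

smoke-env-b:
  script:
    - npx cypress run --record --ci-build-id "$COMMIT_SHA" --group "smoke-env-b"
```

Runs that share a --ci-build-id are combined into a single run in the Dashboard, with each --group listed separately, so a failure in any environment is reflected in that run.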
I'm used to having a single entity check out, build, test, and deploy code on every commit (whether for a staging server or a production server). Now that we have started looking into Ansible, I'm beginning to think that responsibilities are split across these tools.
Basically I'm asking: is it Ansible's responsibility to handle compiling and testing the code before deployment, or should it grab artifacts from a CI server such as Bamboo and trust that the artifact is ready for deployment?
I'm not sure about the idea of using Ansible to do the compiling; I would rather do that inside the CI server, since it has facilities built just for that. As for testing, it depends on the type of tests: if they are unit tests, they should run right after the build (preferably inside the CI server again) and either fail or pass the build.
But if the tests are of an integration/functional nature (where they verify that the service actually works in the environment as we expect), then they should definitely be part of the post_tasks of the playbook (sketched below), and if they don't pass you should mark the deployment as failed and act accordingly. This of course implies having a safe way to do that before the service is exposed to production traffic, so if the tests do not pass you can safely roll the thing back.
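A minimal sketch of that idea, assuming an HTTP service (the host group, role name, port and health endpoint are placeholders):

```yaml
- name: Deploy and verify the service
  hosts: app_servers                     # placeholder group
  roles:
    - deploy_my_service                  # placeholder role
  post_tasks:
    - name: Fail the deployment if the health endpoint does not answer
      ansible.builtin.uri:
        url: "http://localhost:8080/health"
        status_code: 200
      register: health_check
      until: health_check.status == 200
      retries: 5
      delay: 10
```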
Nope, it is not Ansible's responsibility to handle compiling and testing the code before deployment.
Yes, it should grab artifacts from a CI server such as Bamboo and trust that the artifact is ready for deployment.
Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.
https://www.ansible.com/how-ansible-works