Combine Cypress results for multiple environments as single run in Dashboard

Regarding the Cypress Dashboard and runs
We are running smoke and end-to-end tests post-deployment and the tests are run against multiple environments
Is it somehow possible to combine the results for all the environments into a single run in the Dashboard?
Current results as an example:
We run the smoke tests against 8 environments that have different configurations
7 of these environments are ok and are marked as a success
1 environment fails
If the run for the failed environment isn't the latest one to be run, the run in the Dashboard shows green and we sometimes don't notice through the Dashboard that something failed.
Technically they are all being run from the same commit id.
Is there any command-line parameter that will combine this so that the Dashboard treats it as the same run, just against different environments, similar to how it does with browsers?
I've been going through the documentation and the issues on GitHub but can't find anything related to this.

Yes, you can construct a unique id and pass it to cypress run with the --ci-build-id flag: https://docs.cypress.io/guides/guides/command-line.html#cypress-run-ci-build-id-lt-id-gt
There is an example of this in the Circle CI configuration of the Cypress Real World App, a payment application that demonstrates real-world usage of Cypress testing methods, patterns, and workflows. Note that it uses the Circle CI Orb syntax, which maps to the --ci-build-id command-line parameter.
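For illustration, here is a minimal sketch of the idea in a CI workflow. GitHub Actions syntax is assumed purely as an example (the same idea maps to the Circle CI Orb option above), and the trigger, environment names and id format are placeholders. Every environment job records with the same --ci-build-id, so the Dashboard combines them into one run, while --group keeps each environment's results visible separately within that run:

```yaml
name: smoke
on: push   # use whatever post-deployment trigger fits your pipeline
jobs:
  smoke:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [env-1, env-2, env-3]   # one entry per target environment
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      # The same --ci-build-id for every environment produces one combined run
      # in the Dashboard; --group labels each environment's results inside it.
      - run: >
          npx cypress run --record
          --group ${{ matrix.environment }}
          --ci-build-id ${{ github.sha }}-${{ github.run_id }}
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
```

The essential part is only that the id is derived from values shared by all environment runs (for example the commit SHA plus a pipeline execution id) and passed to every cypress run invocation.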

Related

Is there a way to run Cypress tests against a web application deployed on AWS in different environments?

I can give the URL in cy.visit to point to the deployed application, but I am not clear on the setup, as I don't have CI/CD yet. The goal is to be able to run this test on a button click, without having to check out the workspace and build the application.
Yes, you could test your application directly on a staging/preproduction environment on AWS without a CI/CD pipeline.
However, I would not recommend running Cypress against a production environment, because for E2E testing to be efficient you will probably create/delete/edit many things, which may not be suitable for a prod env.
Finally, it depends on what you try to achieve with your tests. In general, you will use Cypress to ensure your product works as expected after adding a new feature, for example. You always want to test against the latest version of your code.
On the setup, the E2E tests can be packaged in the project and run manually until you have a CI to execute them automatically.
https://docs.cypress.io/guides/guides/command-line#cypress-run

Best way to test code with Airflow variables and connections on different environments

For our Airflow projects (running Airflow 2.0.1) I have implemented some general tests that verify the DAG validity, check that each DAG has an owner/email, check the parsing time and so on.
Now I am trying to set up a CI/CD pipeline that runs these tests and then pushes the DAGs to the Cloud Composer bucket. These tests, however, will obviously fail if I use any Airflow Connection or Variable, as these have not been created yet on the runners.
What I do not want to do is use mocking, as I would have to specify each connection/variable, which is a bit too much work for general tests. How do you deal with connections/variables for testing on different environments (development/testing/production)?
Since you're using Airflow 2 you can use the stable API to create or update variables in the desired environment. You just need to call that API from the CI/CD pipeline.
Check the official documentation for creating a variable and updating a variable.
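For example, a hedged sketch of such a pipeline step, assuming GitLab CI; the host, credentials and the variable key/value are placeholders, and it assumes an API auth backend that accepts basic auth (Cloud Composer normally fronts the API with Google IAM authentication instead):

```yaml
push-airflow-variables:
  stage: deploy
  image: curlimages/curl:latest
  script:
    # POST creates a variable; use PATCH /api/v1/variables/<key> to update an existing one.
    - >
      curl --fail -X POST "https://$AIRFLOW_HOST/api/v1/variables"
      --user "$AIRFLOW_USER:$AIRFLOW_PASSWORD"
      -H "Content-Type: application/json"
      -d '{"key": "my_variable", "value": "my_value"}'
```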

Cypress running in parallel on AWS recording to old run

I am running Cypress in a pipeline that has 2 build actions (that use the same CodeBuild project) for running in parallel. After I manually initiate the pipeline, it passes with a success. The problem is that the test suite isn't run. It lists the link to the Cypress Dashboard, but it links to an older run and not a new one. I am using --ci-build-id $CODEBUILD_INITIATOR per the Cypress documentation. Does anyone have an idea why I am not getting unique runs every time the pipeline is run?
This problem was due to the buildspec not being set up correctly to run machines in parallel: the batch build configuration was not set in CodeBuild. Quite simple really.
Originally the pipeline just ran two instances of the same CodeBuild project. The first few times it was run by initiating the pipeline manually it worked and reported fine through the Cypress Dashboard. Changing the buildspec to use a build-list solved this problem. I was too hung up on following the build-matrix example given in the Cypress documentation.
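For reference, a minimal sketch of what a batch buildspec with a build-list can look like; the identifiers, variables and the exact cypress command are illustrative, not the actual configuration from this pipeline:

```yaml
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: cypress_worker_1
      env:
        variables:
          WORKER: "1"
    - identifier: cypress_worker_2
      env:
        variables:
          WORKER: "2"
phases:
  build:
    commands:
      # Both batch builds record against the same ci-build-id so the Dashboard
      # can balance the specs between them as one run (CYPRESS_RECORD_KEY must
      # be available to the build, e.g. from Secrets Manager).
      - npx cypress run --record --parallel --ci-build-id "$CODEBUILD_INITIATOR"
```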

How can I run a task that requires multiple containers?

This is my first question so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an ant target. Before running that ant target we must spin up, configure and partially populate (the ant target does some of the population) several containers including:
application server
ldap instance
postgres instance
It looks as if each task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a task that would allow me to specify a pod template with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift client or kubernetes actions but I was hoping for something a bit more elegant. I feel like using either of those tasks would require that I build out status awareness, error checking, retry logic, etc that is likely already part of the pipeline logic and then parse the output of the ant run to determine if all of the tests were successful. This is my first foray into tekton so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
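A minimal sketch of a Task with sidecars (the images, credentials and the ant invocation are placeholders to adapt to your project):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-unit-tests
spec:
  sidecars:
    # Sidecars run in the same pod as the steps, so they are reachable on localhost.
    - name: postgres
      image: postgres:13                                # placeholder image/tag
      env:
        - name: POSTGRES_PASSWORD
          value: test                                   # placeholder credential
    - name: ldap
      image: osixia/openldap:1.5.0                      # placeholder image/tag
  steps:
    - name: ant-unit-tests
      image: registry.example.com/ant-builder:latest    # placeholder build image
      script: |
        # the step only starts once all sidecars are running
        ant unit-tests
```

If a container needs more than just being started (for example postgres actually accepting connections), a readiness probe on the sidecar or a short wait loop at the top of the step helps.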

Acceptance Test and CI

We have 3 types of tests, unit, functional and acceptance.
The first 2 can be run with PHPUnit or other tools built on top of it, like Codeception. So in CI the deploy script will run all these tests, and if one fails the build will fail and the merge request will be cancelled.
But in the CI deploy script, how do we run the acceptance tests? These tests need to be run in a browser against an already deployed build. Is there a workaround for that? Maybe run the acceptance tests after the build succeeds?
But then reverting will be a pain.
You can parallelize the test jobs, as is advised for every CI/CD pipeline.
But in CI deploy script how to run acceptance tests?
For this you will need dedicated test infrastructure, such as browsers available on the server. After the build step is successful, run all the test steps.
The parallel jobs can be set up like this:
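A minimal sketch, assuming GitLab CI (jobs in the same stage run in parallel); the commands are placeholders for the actual phpunit/codeception invocations and for however the build gets deployed:

```yaml
stages:
  - build
  - test

build:
  stage: build
  script:
    - composer install                 # placeholder build/deploy step

unit_tests:
  stage: test
  script:
    - vendor/bin/phpunit

functional_tests:
  stage: test
  script:
    - vendor/bin/codecept run functional

acceptance_tests:
  stage: test
  # Needs a runner with a browser (or a selenium/chrome service) and runs
  # against the already deployed build.
  script:
    - vendor/bin/codecept run acceptance
```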

Resources