I am trying to run my RSpec tests in parallel, but doing so causes data conflicts: I have a delete_all call that gives me a clean slate on every test run, and it causes tests running in parallel to lose the data they have created. What would be the ideal way to resolve this?
Separate Databases.
Not sure how you are running your tests in parallel. Parallel tests should each have their own database; otherwise, yes, you'll get massive data conflicts.
If your parallel test runners are sharing the same database and you call delete_all on one of your runners, it will cause the other runners to fail because their test databases (which are all the same) will now be empty.
Gems like parallel_tests use separate databases for each test runner by suffixing the test database name with the runner number, e.g. test_database1, test_database2, etc.
So you should use one of these pre-built libraries that takes care of this for you; but if you're rolling your own, make sure each runner gets its own database to avoid conflicts.
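For illustration, here is a minimal sketch of the kind of per-runner database configuration that parallel_tests relies on, assuming a standard Rails config/database.yml (the application name is made up):

    # config/database.yml -- sketch; parallel_tests exports TEST_ENV_NUMBER to
    # each runner ("" for the first one, "2", "3", ... for the rest), so every
    # runner connects to its own database.
    test:
      adapter: postgresql
      database: myapp_test<%= ENV['TEST_ENV_NUMBER'] %>

With that in place you would create the extra databases once with rake parallel:create and then run the suite with rake parallel:spec.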
I have several roles that run actions on a remote database, executing statements for user and privilege creation.
I have seen Molecule used to test playbooks that run against a single host, but I am unsure how you could set up a second container running the database in the same network as the Molecule container (similar to a docker-compose setup). I have not been able to find a setup like this in the documentation.
Is there a recommended way to run molecule tests with external dependencies? Or should I just use docker-compose or similar to run my tests?
There is a 'prepare' stage in Molecule specifically for that. You need to separate two questions:
Where is the external resource (the database) run?
Why and how is it configured?
Those are very separate things, and mixing them together is a bad idea.
For the first question there are different answers:
It already exists (out of the blue, configured by other people). Use non-managed hosts in molecule.yml.
We are OK with running it on the same host as the one we run our code on. Shovel the installation into the 'prepare' stage.
We want it to be on a separate server. Put an additional host in platforms, in a different group, and configure it in the prepare stage.
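For the third variant, a minimal sketch of what the platforms section could look like with the Docker driver; the image names and group names are illustrative assumptions, not something taken from the question:

    # molecule/default/molecule.yml (sketch)
    driver:
      name: docker
    platforms:
      - name: instance          # host the role under test runs on
        image: quay.io/centos/centos:stream9
        groups:
          - app
      - name: database          # the external dependency, in its own group
        image: quay.io/centos/centos:stream9
        groups:
          - db
    provisioner:
      name: ansible
      playbooks:
        prepare: prepare.yml    # install and configure the database on the 'db' group here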
If you find your driver is not good enough, you can always opt for the 'delegated' driver. In this case you need to write playbooks for create/destroy of hosts. It's relatively easy. The main trick is to use the 'platforms' variable to get information about the content of molecule.yml's platforms section.
This is my first question so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an Ant target. Before running that Ant target we must spin up, configure, and partially populate (the Ant target does some of the population) several containers, including:
application server
ldap instance
postgres instance
It looks as if each Task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a Task that would allow me to specify a pod template, with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift-client or kubernetes-actions tasks, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require that I build out status awareness, error checking, retry logic, etc. that is likely already part of the pipeline logic, and then parse the output of the Ant run to determine whether all of the tests were successful. This is my first foray into Tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
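A minimal sketch of what such a Task could look like; the image names, the sidecar list and the Ant target are illustrative assumptions, not the asker's actual setup:

    # Sketch of a Tekton Task: the step starts only once the sidecars are running.
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: ant-unit-tests
    spec:
      sidecars:
        - name: postgres
          image: docker.io/library/postgres:15
          env:
            - name: POSTGRES_PASSWORD
              value: test
        - name: ldap
          image: docker.io/osixia/openldap:1.5.0
      steps:
        - name: run-tests
          # assumption: an image with a JDK and Ant preinstalled
          image: registry.example.com/build/ant:latest
          script: |
            ant unit-tests   # hypothetical Ant target name

Any population that has to happen before the tests run could go into an earlier step of the same Task, since steps run in order while the sidecars keep running alongside them.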
Regarding the Cypress Dashboard and runs
We are running smoke and end-to-end tests post-deployment, and the tests are run against multiple environments.
Is it somehow possible to combine the results for all the environments into a single run in the Dashboard?
Current results as an example:
We run the smoke tests against 8 environments that have different configurations
7 of these environments are ok and are marked as a success
1 environment fails
If the run for the failed environment isn't the latest to be run, the run in the Dashboard shows green, and we sometimes don't notice through the Dashboard that something failed.
Technically they are all being run from the same commit id
Is there any command-line parameter that will combine these so that the Dashboard treats them as the same run, just against different environments, similar to how it does with browsers?
I've been going through the documentation and issues on GitHub but can't find anything related to this.
Yes, you can construct a unique id and pass it to cypress run via --ci-build-id: https://docs.cypress.io/guides/guides/command-line.html#cypress-run-ci-build-id-lt-id-gt
There is an example in the Circle CI configuration of the Cypress Real World App, a payment application that demonstrates real-world usage of Cypress testing methods, patterns, and workflows. Note that this uses the Circle CI Orb syntax, which maps to the --ci-build-id command-line parameter.
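For illustration, a sketch of what this could look like in a generic YAML CI configuration; the job names, the group labels and the COMMIT_SHA variable are assumptions, and --group is what labels each environment inside the shared run:

    # Both jobs report into the same Dashboard run because they share a
    # --ci-build-id derived from the commit; each gets its own --group label.
    smoke-env-a:
      script:
        - npx cypress run --record --ci-build-id "$COMMIT_SHA" --group env-a
    smoke-env-b:
      script:
        - npx cypress run --record --ci-build-id "$COMMIT_SHA" --group env-b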
I use Molecule for end-to-end testing of Ansible roles that interact with Kubernetes clusters.
Because spinning up a Kubernetes cluster (with the additional features I need for my tests) to get a clean environment for each test is very time-consuming (up to 45 minutes), I have pre-provisioned a few clusters and created an API which tells me which clusters are available for testing purposes.
In my molecule tests I use the delegated driver with a local connection, so the tests are being run on my local machine or the CI runner.
Now I need to call the API, get information on the cluster the tests will be run against, and inject that information into each of the Molecule steps.
First I thought about getting the connection info during the prepare step and making it global somehow (e.g. by defining it as a fact or a host_var), so the converge, verify and cleanup steps can access it.
After research and trying to build a proof of concept, I suspect that this is not possible: each Molecule step is a new invocation of ansible-playbook, so information can't be passed around.
Am I completely missing possibilities Molecule is offering?
Are there any suggestions on how to reach my goal?
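For concreteness, one shape the hand-off described above could take; everything here is hypothetical (the API URL, the file location, the variable handling) and it is a sketch rather than a confirmed solution: the prepare playbook queries the reservation API and writes the response to a vars file in the scenario directory, which converge, verify and cleanup could then load via vars_files or include_vars.

    # prepare.yml (sketch, delegated driver with a local connection)
    - hosts: localhost
      connection: local
      gather_facts: false
      tasks:
        - name: Ask the cluster-reservation API for an available cluster
          ansible.builtin.uri:
            url: "https://clusters.example.com/api/reserve"   # hypothetical API
            method: POST
            return_content: true
          register: reservation

        - name: Persist the connection info for the later Molecule steps
          ansible.builtin.copy:
            content: "{{ reservation.json | to_nice_yaml }}"
            dest: "{{ playbook_dir }}/cluster_vars.yml"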
I am searching for a way to do different migrations in production and development.
I want to create a Spring web application with Maven.
In development I want to update the database schema AND load test data.
In production, when a new version of the application is deployed, I want to only change the schema and not load test data.
My first idea was to save the schema update and insert statements into different folders.
I think everybody has solved this problem and can help me. Thank you very much.
Basically, you have two options:
You could use different locations for your migrations in your flyway.locations property, i.e.:
For test:
    flyway.locations=sql/structure,sql/test
For production:
    flyway.locations=sql/structure
That way, you include your test data in the sql/test folder. You would have to take care with numbering, of course.
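For example, if the application happens to be a Spring Boot one (the question only mentions Spring and Maven, so this is an assumption), the switch between the two location sets could live in profile-specific configuration along these lines:

    # application.yml (sketch, assumes Spring Boot 2.4+ and its Flyway integration)
    spring:
      flyway:
        locations: classpath:sql/structure                       # default: schema only
    ---
    spring:
      config:
        activate:
          on-profile: dev
      flyway:
        locations: classpath:sql/structure,classpath:sql/test    # dev: schema plus test data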
The second option (the one I prefer) is to not include test data in your migrations at all.
Rather, create your test data any way you want and create an SQL dump of this data, which you keep separate from your migrations.
This works best if you have a separate database (instance, schema, whatever) containing your pristine test data, to which you apply each migration as part of your build process. This build job could then create a dump that always matches the current migration.
When preparing your test machine, you first apply your migrations, then you load the contents of the matching dump.
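As a rough sketch, preparing the test machine could then boil down to two steps; the job layout, the use of the Flyway Maven plugin and the dump path are assumptions:

    # Generic CI job (sketch): apply the migrations first, then load the dump.
    prepare-test-db:
      script:
        - mvn flyway:migrate                  # apply the structure migrations
        - psql -f target/testdata/dump.sql    # hypothetical dump produced by the build job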
I think this is a lot cleaner than the first version, especially because your test data can be prepared using other tools (your application) and does not have to be hand-coded.