Hibernate: DB not reliably rolled back at end of unit tests in spite of @Transactional (Spring)

We have a large application that uses Spring for application setup, initialisation and "wiring" and Hibernate as the persistence framework. For that application we have a couple of unit tests which are causing us headaches because they repeatedly run "red" when executed on our Jenkins build server.
These unit tests execute and verify some rather complex and lengthy core operations of our application, so we considered it too complex and too much effort to mock the DB. Instead these tests run against a real DB. Before the tests are executed we create the required objects (the "pre-conditions"). Then we run a test and verify the creation of certain objects, their status, values, etc. All plain vanilla...
Since we run multiple tests in sequence which all need the same starting point, these tests derive from a common parent class that carries an @Transactional annotation. The purpose of that is that the DB is always rolled back after each unit test so that the subsequent test can start from the same baseline.
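For illustration, a minimal sketch of such a base class, assuming Spring's JUnit 4 test support; the class name, the configuration class and the setup method are placeholders, not taken from the actual project:

```java
import org.junit.Before;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.transaction.annotation.Transactional;

// Placeholder names throughout; TestConfig stands for whatever Spring
// configuration the real project uses.
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = TestConfig.class)
@Transactional // Spring's test framework wraps each test method in a transaction and rolls it back afterwards
public abstract class AbstractPersistenceTest {

    @Before
    public void createBaseline() {
        // create the common "pre-condition" objects each test starts from
    }
}
```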
That approach works perfectly and reliably when the unit tests are executed "locally" (i.e. running "mvn verify" on a developer's workstation). However, when we execute the very same tests on our Jenkins, then - not always, but very often - these tests fail because too many objects are found after a test, or due to constraint violations because certain objects already exist that shouldn't be there yet.
As we found out by adding lots of log statements (because it's otherwise impossible to observe code running on Jenkins), the reason for these failures is that the DB is occasionally not properly rolled back after a prior test. There are leftovers from the previous test(s) in the DB, and these then cause issues during subsequent tests.
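One kind of log statement that can help pin this down (purely an illustrative sketch, not the logging from the question; class and logger names are placeholders) is to record for each test whether Spring actually opened a test-managed transaction:

```java
import org.junit.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.test.context.transaction.TestTransaction;
import org.springframework.transaction.support.TransactionSynchronizationManager;

// Illustrative only: logs before each test whether a test-managed transaction is active.
public abstract class TransactionStateLoggingTest {

    private static final Logger LOG = LoggerFactory.getLogger(TransactionStateLoggingTest.class);

    @Before
    public void logTransactionState() {
        LOG.info("test-managed transaction active={}, actual transaction active={}",
                TestTransaction.isActive(),
                TransactionSynchronizationManager.isActualTransactionActive());
    }
}
```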
What's puzzling us most is:
Why are these tests failing ONLY when we execute them on Jenkins, but never when we run the very same tests locally? We are using the absolutely identical Maven command line and code here, as well as the same Java version, Maven version, etc.
We are by now sure that this has nothing to do with the tests being executed in parallel, as we initially suspected. We disabled all the options the Maven Surefire plugin offers for running tests in parallel. Our log statements also clearly show that the tests are perfectly serialized, but again and again objects "pile up", i.e. after each test method, objects that were supposed to have been removed/rolled back at the end of the test are still there, and their number increases with each test.
We also observed a certain "randomness" to this effect. Often the Jenkins builds run fine for several commits and then suddenly (even without any code change, just by retriggering a new build of the same branch) start to run red. The DB, however, is re-initialized before each build & test run, so that cannot be the source of this effect.
Does anyone have an idea what could cause this? Why do the DB rollbacks that are supposed to be triggered by the @org.springframework.transaction.annotation.Transactional annotation work reliably on our laptops but not on our build server? Has anyone had similar experiences and findings?

Related

How do I avoid Liquibase running multiple times during JUnit test cases

I have an application whose database I manage with Liquibase. While looking at my packaging process during build jobs, I noticed that Liquibase is being run multiple times. I'm trying to optimize my application code so that the build finishes faster. I've already implemented parallel test case execution, which reduced the time from 18 minutes to 10 minutes, which is good, but I would still like to make it faster. I noticed that Liquibase runs multiple times to set up an H2 database. How can I optimize the test cases so that Liquibase is run only once and all test cases run against that database?

How to retry only failed tests in the CI job run on Gitlab?

Our automation tests run in a GitLab CI environment. We have a regression suite of around 80 tests.
If a test fails due to some intermittent issue, the CI job fails, and since the next stage depends on the regression stage, the pipeline gets blocked.
We retry the job to rerun the regression suite, expecting it to pass this time, but some other test fails instead.
So, my question is:
Is there any capability by which, on retrying the failed CI job, only the failed tests run (not the whole suite)?
You can use the retry keyword when you specify the parameters for a job, to define how many times the job can be automatically retried: https://docs.gitlab.com/ee/ci/yaml/#configuration-parameters
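A minimal sketch of what this looks like in .gitlab-ci.yml (the job name and script are placeholders); note that this retries the whole job automatically, not just the failed tests:

```yaml
regression:
  stage: test
  script:
    - ./run_regression_suite.sh   # placeholder for the actual suite command
  retry:
    max: 2                        # retry the job automatically up to two times
    when: script_failure          # only retry when the script itself failed
```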
[Retry Only Failed Scenarios]
Yes, but it depends. Let me explain. I'll outline the pseudo-steps that can be performed to retry only failed scenarios. The steps are specific to pytest but can be adapted to other test runners.
1. Execute the test scenarios with --last-failed. At first, all 80 scenarios will be executed.
2. The test runner creates a metadata file containing the list of failed tests. For example, pytest creates a folder .pytest_cache containing a lastfailed file with the list of failed scenarios.
3. We now have to add the .pytest_cache folder to the GitLab cache with key=<gitlab-pipeline-id> (see the sketch after these steps).
4. The user sees that there are 5 failures and reruns the failed job.
5. When the job is retried, it will see that the .pytest_cache folder now exists in the GitLab cache and will copy it into the test-running directory. (This shouldn't fail if the cache doesn't exist, so the first execution is handled.)
6. The same test cases are executed with the same --last-failed parameter, so only the tests that failed earlier are run.
7. In the rerun, only the 5 failed test cases will be executed.
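A rough .gitlab-ci.yml sketch of these steps, under the assumptions below (pytest as the test runner, the pipeline id as the cache key); the job name and test command are placeholders:

```yaml
regression:
  stage: test
  cache:
    key: "$CI_PIPELINE_ID"   # one cache per pipeline, shared between the original run and retries
    paths:
      - .pytest_cache/       # pytest keeps its "lastfailed" metadata here
  script:
    # --last-failed reruns only the tests that failed in the previous run;
    # --last-failed-no-failures=all makes the very first run (no cache yet) execute everything
    - pytest --last-failed --last-failed-no-failures=all
```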
Assumptions:
The test runner you are using creates a metadata file, as pytest does.
POC required:
I have not done a POC for this, but in theory it looks possible. The only doubt I have is how GitLab parses the results. Ideally, in the final result all 80 scenarios should pass. If it doesn't work out this way, then we would need 2 jobs, execute tests -> [manual] execute failed tests, to get 2 parsed results. I am sure that with 2 stages it will definitely work.
You can use a Retry Analyzer. This will definitely help you.

Gradle: Clean up resources after build failure

I execute a test suite through Gradle for the build, and it spins up a lot of processes on different ports. Also, failFast is set to true for my test task. So the following happens when I execute my suite:
Suite starts up and spins up processes/servers listening to different ports
Tests in the suite are executed
When one or more tests fail, the suite execution is halted and the build is marked as failed
Now, when the failing tests are fixed and the build is eventually run again, step 1 (described above) fails with the message that the port is already in use. Also, I am using the forkEvery parameter, meaning the previous run might have left more than one JVM running.
Is there any way to clean everything up (in terms of processes, not physical files) when a build fails in Gradle?
You can add a custom TestListener that stops the processes/servers from (1).
You can reference Spring Boot's FailureRecordingTestListener: https://github.com/spring-projects/spring-boot/blob/master/buildSrc/src/main/java/org/springframework/boot/build/testing/TestFailuresPlugin.java#L57..L95
The basic idea here is that in the afterSuite method you stop whatever processes were started/created in (1). Note that within the TestListener you don't have access to the test instances where those processes were started, so you'll need to figure out how to stop them without having a reference to the original class that set them up.
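A hedged sketch of such a listener; the stopAllTestServers() helper is hypothetical and stands for whatever shutdown mechanism your suite provides (e.g. terminating processes recorded in a PID file or bound to known ports):

```java
import org.gradle.api.tasks.testing.TestDescriptor;
import org.gradle.api.tasks.testing.TestListener;
import org.gradle.api.tasks.testing.TestResult;

// Sketch only: cleans up external processes once the whole test run has finished,
// whether the tests passed or failed.
public class ProcessCleanupListener implements TestListener {

    @Override
    public void beforeSuite(TestDescriptor suite) {
        // nothing to do
    }

    @Override
    public void afterSuite(TestDescriptor suite, TestResult result) {
        // the root suite is the only descriptor without a parent; clean up exactly once there
        if (suite.getParent() == null) {
            stopAllTestServers();
        }
    }

    @Override
    public void beforeTest(TestDescriptor testDescriptor) {
        // nothing to do
    }

    @Override
    public void afterTest(TestDescriptor testDescriptor, TestResult result) {
        // nothing to do
    }

    private void stopAllTestServers() {
        // hypothetical: e.g. read recorded PIDs or ports and terminate the processes
    }
}
```

The listener can then be registered on the test task with test.addTestListener(new ProcessCleanupListener()), for example from buildSrc, as in the Spring Boot plugin linked above.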

Integration test execution should wait until server is ready

I have written Selenium tests which should be executed during the build process of a web application. I am using the maven-failsafe-plugin to execute the integration tests and the tomcat7-maven-plugin to start up a Tomcat server in the pre-integration-test phase; after the tests have run, it is stopped in the post-integration-test phase. This works fine.
The problem is that the Tomcat server caches some data at startup to improve search speed. Some of my tests rely on that data, so the integration tests should wait until the server has finished caching it.
How can I make that happen?
I added a progress bar to show the loading progress. Once loading is complete, the progress bar is no longer rendered and the data table is rendered instead. This way I can add the following line of code to the tests which depend on the data table being loaded:
longWait.until(ExpectedConditions.presenceOfElementLocated(By.id("dataTablePanel")));
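For context, a sketch of how such a longWait could be set up; the five-minute timeout is an assumption, and Selenium 4's Duration-based WebDriverWait constructor is used:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Sketch only: blocks until the data table has been rendered, i.e. the server
// has finished building its cache, or fails after the (assumed) timeout.
public final class DataLoadWait {

    private DataLoadWait() {
    }

    public static void awaitDataLoaded(WebDriver driver) {
        WebDriverWait longWait = new WebDriverWait(driver, Duration.ofMinutes(5));
        longWait.until(ExpectedConditions.presenceOfElementLocated(By.id("dataTablePanel")));
    }
}
```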
Additionally, I am using org.junit.runners.Suite as a runner so that I can specify the order in which my test classes are executed. That way I can run the tests which do not rely on the data first, and then the ones which need it. To ensure that the data is present without checking it in every test case, I created a test class which only checks for the presence of the data and is executed before all test cases which depend on it.
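A minimal sketch of that suite setup with JUnit 4; the class names are placeholders for the test classes described above:

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Placeholder class names; the classes run in the order they are listed.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        TestsWithoutCachedData.class,    // run first, no dependency on the cached data
        CachedDataPresentTest.class,     // verifies the data table is available
        TestsDependingOnCachedData.class
})
public class IntegrationTestSuite {
}
```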

Sonar: Execution time history of single test

TXTFIT = test execution time for individual test
Hello,
I'm using Sonar to analyze my Maven Java project. I'm testing with JUnit and generating reports on the test execution time with the Maven Surefire plugin.
In Sonar I can see the test execution time and drill down to see how long each individual test took. In the Time Machine I can only compare the overall test execution time between two releases.
What I want is to see how the TXTFIT changed from the last version.
For example:
In version 1.0 of my software, htmlParserTest() takes 1 sec to complete. In version 1.1 I add a whole bunch of tests (so the overall execution time is going to be way longer), but htmlParserTest() suddenly takes 2 secs. I want to be notified: "Hey mate, htmlParserTest() takes twice as long as it used to. You should take a look at it."
What I'm currently struggling to find out:
How exactly does the TXTFIT get from the Surefire XML report into Sonar?
I'm currently looking at AbstractSurefireParser.java
but I'm not sure if that's actually the default Surefire plugin.
That was 5-year-old stuff, though. I'm currently checking out this. I still have no idea where Sonar gets the TXTFIT from, or how and where it connects them to the source files.
Can I find the TXTFIT in the Sonar DB?
I'm looking at the local DB of my test Sonar instance with DBVisualizer and I don't really know where to look. The SNAPSHOT_DATA table doesn't look human-readable.
Are the TXTFIT even saved in the DB?
Depending on this, I either have to write a Sensor that actually saves them or a widget that simply shows them on the dashboard.
Any help is very much appreciated!
The web service api/tests/* introduced in version 5.2 allows you to get this information. Example: http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c
