Integration test execution should wait until server is ready - maven

I have written Selenium tests which should be executed during the build process of a web application. I am using the maven-failsafe-plugin to execute the integration tests and the tomcat7-maven-plugin to start up a Tomcat server in the pre-integration-test phase; after the tests have run, the server is stopped in the post-integration-test phase. This works fine.
The problem is that the Tomcat server caches some data on startup to improve search speed. Some of my tests rely on that data, so the integration tests should wait for the server to finish caching it.
How can I make that happen?

I added a progress bar to show the loading progress. Once loading is complete, the progress bar is no longer rendered and the data table is rendered instead. This way, in every test which depends on the data table being loaded, I can add this line of code:
longWait.until(ExpectedConditions.presenceOfElementLocated(By.id("dataTablePanel")));
Additionally, I am using org.junit.runners.Suite as a runner so that I can specify the order in which my test classes are executed. That way I can run the tests which do not rely on the data first, followed by the ones which need it. To ensure the data is present without having to check it in every test case, I created a test class which only checks for the presence of the data and is executed before all test cases that depend on it.
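For illustration, a minimal JUnit 4 suite sketch of that ordering (the test class names here are placeholders, not from the original project):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// The Suite runner fixes the execution order: data-independent tests run first,
// then a single readiness check, then the tests that need the cached data.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    DataIndependentIT.class, // placeholder: tests that do not need the cached data
    DataLoadedCheckIT.class, // placeholder: one test containing the wait shown above
    DataDependentIT.class    // placeholder: tests that rely on the cached data
})
public class IntegrationSuiteIT {
}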

Related

Hibernate: DB not reliably rolled back at end of UnitTests in spite of @Transactional

We have a large application using Spring for application setup, initialisation and "wiring", and Hibernate as the persistence framework. For that application we have a couple of unit tests which cause us headaches because they again and again run "red" when executed on our Jenkins build server.
These UnitTests execute and verify some rather complex and lengthy core operations of our application, and thus we considered it too complex and too much effort to mock the DB. Instead these UTs run against a real DB. Before the UTs are executed we create the required objects (the "pre-conditions"). Then we run a test and verify the creation of certain objects, their status and values, etc. All plain vanilla...
Since we run multiple tests in sequence which all need the same starting point these tests derive from a common parent class that has an @Transactional annotation. The purpose of that is that the DB is always rolled back after each unit-test so that the subsequent test can start from the same baseline.
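For illustration, a minimal sketch of such a base class (the JUnit 4 runner and context location here are assumptions, not taken from the original project):

import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

// Each test method runs inside a transaction that the Spring TestContext
// framework rolls back after the method finishes, restoring the baseline.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // assumed config location
@Transactional
public abstract class AbstractTransactionalDbTest {
}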
That approach is working perfectly and reliably when executing the unit-tests "locally" (i.e. running a "mvn verify" on a developer's workstation). However, when we execute the very same tests on our Jenkins, then - not always but very often - these tests fail because there are too many objects being found after a test or due to constraint violations because certain objects already exist that shouldn't yet be there.
As we found out by adding lots of log-statements (because it's otherwise impossible to observe code running on Jenkins), the reason for these failures is that the DB is occasionally not properly rolled back after a prior test. Thus there are left-overs from the previous test(s) in the DB, and these then cause issues during subsequent tests.
What's puzzling us most is:
Why are these tests failing ONLY when we execute them on Jenkins, but never when we run the very same tests locally? We are using the absolutely identical Maven command line and code here, and also the same Java version, Maven version, etc.
We are by now sure that this has nothing to do with UTs being executed in parallel, as we initially suspected. We disabled all options to run UTs in parallel that the Maven Surefire plugin offers. Our log-statements also clearly show that the tests are perfectly serialized, but again and again objects "pile up": objects that were supposed to have been removed/rolled back at the end of a test-method are still there, and their number increases with each test.
We also observed a certain "randomness" with this effect. Often, the Jenkins builds run fine for several commits and then suddenly (even without any code change, just by retriggering a new build of the same branch) start to run red. The DB, however, is re-initialized before each build & test-run, so that cannot be the source of this effect.
Any idea anyone what could cause this? Why do the DB rollbacks that are supposed to be triggered by the @org.springframework.transaction.annotation.Transactional annotation work reliably on our laptops but not on our build server? Any similar experiences and findings on that, anyone?

Gradle : Cleanup resources after build failure

I execute a test suite through Gradle for the build, and it spins up a lot of processes listening on different ports. Also, failFast is set to true for my test task. So the following happens when I execute my suite:
Suite starts up and spins up processes/servers listening to different ports
Tests in the suite are executed
When one or more tests fail, the suite execution is halted and the build is marked as failed
Now, when the failing tests are fixed and the build is run again, step 1 (described above) fails with the message that the port is already in use. Also, I am using the forkEvery parameter, meaning the previous tests might have left more than one JVM running.
Is there any way to clean everything up (in terms of processes, not physical files) when a build fails through Gradle?
You can add a custom TestListener that stops the processes/servers from (1)
You can reference Spring Boot's FailureRecordingTestListener: https://github.com/spring-projects/spring-boot/blob/master/buildSrc/src/main/java/org/springframework/boot/build/testing/TestFailuresPlugin.java#L57..L95
The basic idea here is that in the afterSuite method you would stop whatever processes were started/created in (1). Note that within the TestListener you don't have access to the test instances where those processes were started, so you'll need to figure out how to stop them without a reference to the original class that may have defined them.
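A minimal sketch of that idea as a buildSrc class (how the spawned processes are tracked and stopped is an assumption, represented here by a supplied Runnable):

import org.gradle.api.tasks.testing.TestDescriptor;
import org.gradle.api.tasks.testing.TestListener;
import org.gradle.api.tasks.testing.TestResult;

public class CleanupTestListener implements TestListener {
    private final Runnable stopProcesses; // assumed hook that kills the spawned servers

    public CleanupTestListener(Runnable stopProcesses) {
        this.stopProcesses = stopProcesses;
    }

    @Override public void beforeSuite(TestDescriptor suite) {}
    @Override public void beforeTest(TestDescriptor testDescriptor) {}
    @Override public void afterTest(TestDescriptor testDescriptor, TestResult result) {}

    @Override
    public void afterSuite(TestDescriptor suite, TestResult result) {
        // The root suite has no parent, so the cleanup runs exactly once per
        // test task - whether the task passed, failed, or was halted by failFast.
        if (suite.getParent() == null) {
            stopProcesses.run();
        }
    }
}

You would then register it on the test task in build.gradle, e.g. via addTestListener(...).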

How to start and stop protractor driver for each test

I want to run my Protractor tests in the LambdaTest cloud. To update the test name for each test, I had to restart the browser for each test (it block). I tried two options:
browser.restart() - this creates the same session again so the test name is not updated properly.
I used restartBrowserBetweenTests: true - this restarts the browser as expected, but an extra blank test is generated because the browser is initialized/invoked in config.ts while I pass the URL from the script file using browser.get().
When I run multiple sessions in parallel, the number of extra tests multiplies.
Is there a way to manage browser initialization and quitting from the beforeEach and afterEach blocks instead of from the config file?

selenium grid with cucumber

I am trying to set up Selenium Grid to achieve parallel execution of my tests. First, I'll explain my current scenario:
I have my fully functional test suite written in cucumber with watir webdriver
I need to execute all my tests in multiple environments.
I created a setup for selenium hub and node
I can run my tests on a single node through hub
My goal is to run my tests on multiple VMs simultaneously.
I am missing the part where I configure my tests to run in parallel. There are some examples of grid setup on the web, but as I am using a different framework I couldn't relate them to my scenario.
Thanks in advance
I was able to do this by leveraging Jenkins with Selenium Grid... Why Jenkins? a) Jenkins is a build tool and is built to run parallel jobs by default; I leverage that ability to send tests to Selenium Grid in parallel, and the Grid manages the flow from that point on. b) Jenkins is part of many development build processes; devs could make calls to your QA Jenkins to kick off tests as they commit/build. c) Jenkins provides a nice UI to see passes/failures of your tests (as well as sending email notifications of failures). And d) Jenkins has a great Cucumber reporting plugin.
You could avoid Jenkins, but you'll need to send your cucumber features in parallel to the Grid. If you just run cucumber, it will farm jobs to the grid, but they will run sequentially. You would need something to kick off each feature async.
Below is my full setup. Jenkins is basically being used here to start multiple simultaneous Cucumber jobs. The details are below:
Grid Setup
I had 10 VMs. I took VM1 as my primary. It was a Windows Server box, so I put the Selenium Grid standalone JAR on it and wrote a batch file like so:
@echo off
"C:\[Add your Java Path Here]\java.exe" -jar "C:\[Add your Selenium Grid Jar Path]\selenium-server-standalone-2.31.0.jar" -role hub
Then I used a Windows scheduled task to auto-run that batch file in case of a VM restart.
I made each of the other VMs part of the grid by registering them with VM1 (the hub):
java -jar selenium-server-standalone-2.31.0.jar -role node -hub http://[the server name of your Selenium Hub]:4444/grid/register -browser browserName=chrome,maxInstances=5
Cucumber Set UP
In Cucumber, I set up an env.rb file in the features/support folder. This allows me to specify command line arguments before the tests run, and what happens when they stop. I added code that sets a value for which browser to use, as well as for using the Grid...
Browser & Environment Config
In the env.rb file I add:
def browser_name
  (ENV['BROWSER'] ||= 'firefox').downcase.to_sym
end

def environment
  (ENV['ENVI'] ||= 'int').downcase.to_sym
end
Grid Config
Then I add:
Before do |scenario|
  p "Starting #{scenario}"
  if environment == :int
    @browser = Watir::Browser.new(:remote, :url => "http://[Your Selenium Grid Hub]:4444/wd/hub", :desired_capabilities => browser_name)
    # Optional: in the case of setting your default start page
    # @browser.goto "http://[your start page of your test site]:8080"
  elsif environment == :local
    @browser = Watir::Browser.new browser_name
    @browser.goto "http://[some other environment]:8080"
  end
end

# Not shown in the original post: an After hook along these lines would close
# the browser when each scenario stops.
After do
  @browser.close if @browser
end
Now you can pass arguments like cucumber features/login.feature BROWSER=firefox ENVI=int and it will farm all the work to the Grid HUB, which should pass it to the Grid NODES it's connected to, matching browser support (i.e. send the tests from login.feature to a Firefox-compatible node - maybe not all your nodes have Firefox; if they all do, then it will go to any one of them).
At this point, you get one job at a time going through. So how do you run more than one?
You would have a script that kicks off all feature files (or sans-Cucumber, your tests) through that same browser profile to use the Grid HUB. If you use a script it needs to make these calls asynchronously - so all the features/tests are sent to the Grid at the same time and the Grid manages the jobs.
How I did that was with...
Jenkins
Jenkins is used for builds/deploys of code - but in my case I use it to trigger QA jobs. Jenkins is a Java JAR that you just run, i.e. java -jar jenkins.jar. It launches a local UI on some port, and you can start adding jobs.
From a high level
I built Jenkins jobs and had a parent job that ran all jobs - sending them to the Grid. The Selenium Grid Hub would then manage the flow of the jobs.
I wanted to be able to kick off individual feature tests, individual feature tests by browser, and all tests by browser. To do that I started with individual jobs for each feature by browser.
Details
In Jenkins I created "A New Job", chose "Free style software component", and filled out the description with "[my feature name] [browser name]", i.e. Login Tests via IE.
Down in the Build section of this Jenkins job, I chose to use a batch command. Since it's a Windows box, I picked "Windows Batch command" and input something like this:
bundle exec cucumber BROWSER=ie ENVI=int features/login.feature --format json -o login-results/login.json
Everything from --format on makes use of a Cucumber reports plugin for Jenkins. It produces nice-looking, graph-driven reports on the feature tests that passed/failed. You don't need it; it's optional.
If you "build" the Jenkins job, it will execute that Windows batch command, which does the following:
Starts the Job
Runs a cucumber command to use a specific browser (IE in this case)
Runs all the tests of that feature file (i.e. login.feature could have 20 tests)
All tests run through the grid, which farms them off to nodes
It's not yet running jobs in parallel though.
Running Jobs in Parallel
Now that Jenkins can kick off the tests by feature and browser via the Grid, we can finally run jobs in parallel - we just need more jobs. So create a few more jobs... like:
A Jenkins job for registration.feature, and a Jenkins job for subscription.feature. Each will have its own Windows batch command, like:
bundle exec cucumber BROWSER=ie ENVI=int features/registration.feature --format json -o registration/registration.json
Or
bundle exec cucumber BROWSER=ff ENVI=int features/registration.feature --format json -o registration/registrationff.json
That last one is a duplicate of the registration test, just calling a different browser.
Jenkins by default limits how many simultaneous jobs you can run... I think it's 10. You can change that in the Jenkins config. I changed mine to 40.
So now, you can click on your first Jenkins job:
Login Test via IE
and click BUILD
As it starts, click "BUILD" on your second job:
Registration Test via IE
and the same for your other jobs...
Jenkins will start each job in parallel, farming them out to the Grid HUB, which sends the jobs to the appropriate nodes! All in parallel. (As noted above, you may need to raise Jenkins' default limit on simultaneous jobs to suit your needs.)
Jenkins will also update the UI with pass/fail and keeps details of the failures in the logs of each job. You can see the history of the failures... and another Jenkins instance (i.e. the dev one) can make REST calls to yours to auto-trigger these tests as they build.
Launching All Tests From One Job
Rather than manually running your individual jobs, you can build a parent job. In Jenkins the parent job would be a new job of the same type, "Free style software component." Go down to the very bottom and it should have a field called "Build Other Projects". In that field you can put each project to trigger: Login, Registration, etc.
Now when you "Build" this parent job, you'll see Jenkins kick off all the jobs/projects at the same time. Basically your entire set of tests can start with one click - all being sent to Selenium Grid, which then manages the flow.
Group Tests
In Jenkins I create tabs for my tests... IE Tests, FF Tests, Chrome Tests, etc.
In each tab I put the appropriate features.
Then I create a new Parent job to kick off all jobs of a type like:
All IE Tests
All FF Tests
etc.
Conclusion
You can probably avoid Jenkins if you like. You just need something to kick off the jobs in parallel; for me, Jenkins did that. You could use a script that runs async jobs (kicking off your tests).
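For illustration, a minimal sketch of such a script (the feature list, the browser/environment values, and the availability of bundle on the PATH are assumptions, not from the original post):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ParallelFeatureKickoff {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical feature list; the point is that every cucumber process
        // is started without waiting for the previous one to finish.
        List<String> features = List.of(
                "features/login.feature",
                "features/registration.feature",
                "features/subscription.feature");
        List<Process> running = new ArrayList<>();
        for (String feature : features) {
            ProcessBuilder pb = new ProcessBuilder(
                    "bundle", "exec", "cucumber", "BROWSER=ie", "ENVI=int", feature);
            pb.inheritIO();          // stream each cucumber run's output to this console
            running.add(pb.start()); // start() returns immediately, so the runs are async
        }
        for (Process p : running) {
            p.waitFor(); // the Grid HUB manages the parallelism from here on
        }
    }
}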
Hopefully something in this post is useful to your needs.
I have this documented at my site, along with some pictures of the Jenkins flow...
URLs
My tutorial on setting up Cucumber and Selenium Grid:
http://sdet.us/selenium-grid-with-watir-and-cucumber-browser-automation/
My tutorial on setting up Cucumber Reports with Jenkins:
http://sdet.us/jenkins-and-cucumber-reports/
Look at MBUnit, as it can run tests in parallel; this should help. Note that it will only run tests in parallel within one assembly and will not coordinate across multiple assemblies.

Cannot run more than one VSTS LoadTest at a time

I have a VSTS project with a list of 30 LoadTest tests that I want to run sequentially. All tests are independent of each other.
When I try to run all the tests, it starts with the first test and executes it perfectly, but once the first test is finished it automatically marks the rest of the tests as completed without executing them.
Do I have to configure any option to run all of them together? Am I missing something?
Note: when the first test is finished it also asks me if I want to view the "detailed results from the load test".
Any advice/comment is welcomed...
Thanks,
albert
UPDATE (16/07/2010)
More info... I'm trying to run the load tests as shown in the image at freeimagehosting.net/image.php?69cc93fa7b.gif. After the first load test is finished, the rest of them are just marked as completed.
The load tests are individual tests, and I have never seen a way to execute them simultaneously. You create individual web tests that you then put into a load test to run.
