Is there a way to improve the performance of the Grails command-line tasks? For example, the test-app task takes some time until all dependencies are checked, classes are compiled, etc. Even a simple task like create-domain-class takes a few seconds to run.
You can use the "grails interactive" shell for running unit tests, generating basic artifacts and so on. This way you don't have to pay the startup cost for every simple command.
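For example, a session looks roughly like this (the exact prompt and available commands depend on your Grails version, and the domain class name is just an example):

grails interactive
grails> test-app
grails> create-domain-class com.example.Book
grails> exit

The JVM and the resolved dependencies stay warm between commands, which is where most of the savings come from.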
Note that this didn't really work for integration tests last time I tried.
I am building a Go provider for Pruner (a CLI that runs only the tests that ran through the lines you changed, saving you time).
For that, I need to be able to see per-test coverage. Not just a full coverage report after all tests have run, but I need a way to know which tests ran through what line.
Is that possible in Go?
I tried using -func, but it just gives me the method names of the original code, not the test code. In other words, I can't know what code each individual test runs through.
I need a way to know which tests ran through what line.
Is that possible in Go?
It's not supported by the tools. But you can do it. It's just very inefficient.
The way to do this is to run:
go test -cover -run=TheName/OfSome/SpecificTest
Then run this for each test in your suite.
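A rough sketch of automating that, assuming bash and a single package at ./mypkg (both placeholders):

# list the test names; the trailing "ok <package> <time>" summary line is filtered out
for t in $(go test -list '^Test' ./mypkg | grep -v '^ok'); do
    # anchor the regex so exactly one test runs, and write one profile per test
    go test -run "^${t}$" -coverprofile "cover_${t}.out" ./mypkg
done

Each cover_*.out profile then tells you which lines that single test executed (go tool cover -func or -html can render it).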
Naturally, this will make your tests much more cumbersome to manage, and incredibly slow.
So I would consider whether this is truly a requirement for your use case.
Go is optimized, from the ground up, to compile quickly. If your Go project has grown so large that running all the tests is too slow, you may want to consider other alternatives. Some suggestions:
Run more tests in parallel, so the total runtime is reduced.
Take advantage of Short mode, and only run short tests by default, saving long-running tests for special cases (see the sketch after this list).
If you really need to run only a subset of tests, do it on a per-package basis, not on a per-test basis.
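To illustrate Short mode: a long test can exclude itself when -short is passed (the package and test names here are made up):

package myapp_test

import "testing"

func TestFullCustomerJourney(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping long-running test in -short mode")
    }
    // ...slow end-to-end assertions would go here...
}

Then go test -short ./... runs the quick subset, while a plain go test still runs everything.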
I am pretty new to GitLab. I've set up pipelines and stages via .gitlab-ci.yml and they seem to work but I've just discovered that some of my assumptions were wrong.
I have a large, multi-project Gradle setup, producing many artifacts. We are in the process of setting up GitLab and I really wanted to make use of the GitLab UI to show the progress of the build. The idea was to nicely indicate to developers and reviewers how far the build got before it failed, something like:
1. Got its dependencies
2. Compiled main code, YAY!
3. Compiled test code, yippee!
4. Passed unit tests, we rock!
5. Passed integration tests, awesome!
6. Passed various static code analysis tests. We're almost good to go!
7. Generated documentation - can we ship it?
I've set up each of these as an individual job in its own stage, (incorrectly) assuming that Gradle would be able to do its incremental-build magic and that this would be almost as quick as running it all as a single step.
Then I noticed that each stage causes what seems to be a Docker container reinitialization. This also means that the Gradle daemon has to restart and has no knowledge of the past, so it has to fetch all the dependencies again. I think I could cache these, but it seems they would be cached separately for each job. My assumption that serialized jobs would execute inside the same container instance was proven wrong: because the output of earlier jobs isn't available to later ones, each subsequent job generally has to repeat what the jobs before it already did, significantly increasing the build time.
I think I understand that I could declare artifacts for each job and make them available to dependent jobs that way, but that does not eliminate all of the overhead and adds some of its own - copying the artifacts out to "somewhere" and then back, while also hitting the limits on how much I can pass along. In fact, my unit test job is now failing and I can't see why because of the log size limit, but it seems to have to do only with artifacts (the report), as the unit tests pass nicely when I run them outside GitLab.
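For reference, the kind of caching and artifact passing I mean looks roughly like this (stage names, paths and the Gradle tasks are illustrative):

stages:
  - compile
  - test

variables:
  # keep Gradle's caches inside the project directory so the runner can cache them
  GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .gradle/caches/
    - .gradle/wrapper/

compile:
  stage: compile
  script:
    - ./gradlew compileJava compileTestJava
  artifacts:
    paths:
      - build/
    expire_in: 1 hour

unit-test:
  stage: test
  script:
    - ./gradlew test
  # the build/ directory from the compile job is downloaded automatically,
  # because jobs fetch artifacts from all earlier stages by default

Even with this in place, the container startup and the artifact upload/download still add overhead to every job.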
I also think I understand that the idea behind jobs was to be able to run them in parallel on separate runners. That is a very fine feature and I probably can use them for later stages, but not for (1)-(5) as they heavily rely on a lot of output of at least some of the previous jobs.
I could merge (1)-(5) into a single job (and a single stage) for performance reasons, but then there is no indication in the UI (that I know of) as to how far the build got ... and the logs would be even longer and nastier to figure out even if the log limit got lifted.
Do any of you have any suggestions as to what I am missing / should do here?
After further research, I found that this is not possible (yet). Jobs are meant to be units of (potentially) concurrent execution and can only communicate by copying artifacts, obviously.
What I would be interested in is steps smaller than jobs that would be indicated in the UI and that could publish their artifacts as they (the steps) complete, before the entire job is done. This would eliminate the 1-2 minutes of job startup overhead that I am facing now.
Background
I am currently in the process of writing test scripts in Java with TestNG, Maven, Selenium and Jenkins. I plan to write hundreds of scripts. At the moment I have about 80 scripts written, and only 8 of them have been uploaded to Bitbucket. Note that each script can have anywhere between 5-25 tests in it depending on complexity; for example, the 8 scripts currently on the server run 100 tests.
Problem
The issue I can see arising here very quickly is the sheer number of test scripts running. Jenkins runs the entire Maven project that sits on Bitbucket. Currently, with only 8 scripts, Jenkins takes a total of 20 minutes to run. By the time I have the more complex ones uploaded this could take hours, or even days with all of the scripts I plan on adding.
Research
So far I've looked around for some way to break up the testing process, so for instance I would have separate Maven projects in my Bitbucket repository for different areas. Then I would have several different builds on Jenkins, one for each area of the site. I'm not sure how that would work, though, since Jenkins seems to just go in and read all of the tests in my repo.
I'm almost certain that having all of the tests in a single build is bad practice, but I just can't find information on how to handle a huge test suite. I'm hoping someone with real experience can clarify this for me.
Software
Added as a side note in case anyone wants to know what I'm using:
Maven: 3.3.3
Java: 1.7.0_79
Selenium: 2.46 & 2.47 (currently 2.47)
Jenkins: 1.622
Conclusion
I believe there must be a way of breaking the test suite up without having separate Bitbucket repos for each section.
One possible solution is using Selenium Grid. This will allow you to distribute tests over several VMs. It's fairly well documented.
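As a rough sketch, pointing an existing test at a grid hub instead of a local browser looks something like this with the Selenium 2.x API (the hub URL and the page are placeholders):

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {
    public static void main(String[] args) throws Exception {
        // the hub forwards the session to whichever registered node offers a matching browser
        DesiredCapabilities capabilities = DesiredCapabilities.firefox();
        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities);
        try {
            driver.get("http://example.com");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

Your TestNG tests themselves stay the same; only the driver construction changes, so this pairs naturally with splitting the Jenkins builds.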
We have a typical web application stack. There are 120 Selenium (WebDriver) tests that are executed against the application, which takes around 1 hour. We execute them as part of our build chain "compile > unit test > integration test > GUI tests". The GUI tests take up a lot of time and we are wondering how to structure them better. Currently they are happy-case and unhappy-case tests. They are quite stable, i.e. they won't fail because of programmer errors.
We want to get the build times down, and the biggest part is the GUI tests. We want to do this based on "customer journeys", i.e. specify (together with the business people) some typical use cases and test those (happy path) instead of testing too much.
How do you structure your GUI tests? Here are some ideas that came to my mind:
Only execute happy path tests
Do a "customer journey test", i.e. several happy path tests in one ("clicking through the pages")
Only take the "top 10" specified by the business (mission critical)
Top 10 + "all the rest" as a nightly build (run once)
I would appreciate your ideas.
Thanks,
Marcel
The nighttime is a perfect time for Selenium tests - you just have to remember to put a "Don't turn me off!" sticky note on your computer :).
Also, there is always Selenium Grid for when the night becomes too short to run all the tests. With Grid, you can run your tests on several machines in parallel!
We have several test suites that are applicable to different situations. Before a major release (to test, to pre-live, to production), everything runs. Usually (on a daily or even hourly basis on rush days) only the "The Quickened Normal Path of a User Through the Application" suite runs. And if somebody "fixes" a large bug, then the tests related to that part of application are run.
An hour seems absolutely fine to me.
One suggestion could be to decide which of the tests count as smoke tests and are required to run every night - that is, tests that show the core functionality of your web application is still intact and working. Other, more detailed tests can be run at different times (once every few days?).
With that said, ours take around 2 hours - the only problem comes when one test has failed, you fix it, commit it, but then have to wait a very long time to verify it is fixed on the CI server.
TeamCity allows you to run builds in parallel on the same machine, so GUI tests should not be in the build chain along with unit and integration tests. UI tests should have a separate database and a separate build so they don't waste the time of developers, manual testers or any other stakeholders. TeamCity will gather all the statistics, send emails on build failures, and so on.
The next step is parallelization. As Slanec said, you can use Grid (several machines are not required) with MbUnit (C#) or TestNG (Java). With the help of Grid you can cut test execution time by, say, a factor of 10, so it would take only 6(!) minutes to run all your tests.
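On the TestNG side, parallelism is typically switched on in the suite file; a minimal sketch (suite name, thread count and class names are illustrative):

<suite name="gui-suite" parallel="tests" thread-count="4">
  <test name="checkout">
    <classes>
      <class name="com.example.gui.CheckoutTest"/>
    </classes>
  </test>
  <test name="search">
    <classes>
      <class name="com.example.gui.SearchTest"/>
    </classes>
  </test>
</suite>

With parallel="tests", each <test> block runs in its own thread, and each thread can drive its own browser session on the Grid.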
You can also combine some of your tests into bigger ones (but this will make it harder to find the reason for a failure and make the tests more difficult to maintain).
After these steps, GUI tests can be executed after each source commit and provide fast feedback on the state of the application.
Great question, great answers.
An extra consideration is that you could prioritize your 120 GUI tests: run them in an order such that the most important ones, or those most likely to fail, run first.
This won't help to get the build times down, but it will help to get useful feedback from a build faster.
This prioritization (your top 10) need not be fixed, but can change per release / iteration / completed story / day, etc.
For example, you may want to run the newest gui tests first. Or those that were changed most recently. Or the ones covering most of the code that was most recently changed.
As far as I know, there is no ready-made tooling that supports this out of the box, although there is quite some (academic) research going on in the area of test case prioritization.
We have a huge project with many submodules. A full build currently takes over 30 minutes.
I wonder how this time is distributed over the different plugins/goals, e.g. tests, static analysis (FindBugs, PMD, Checkstyle, etc.).
Would it be possible to time the build to see where (in both dimensions: modules and goals) most of the time is spent?
The maven-buildtime-extension is a maven plugin that can be used to see the times of each goal:
https://github.com/timgifford/maven-buildtime-extension
If you run the build in a CI server like TeamCity or Jenkins (formerly Hudson), it will give you timestamps for every step in the build process and you should be able to use these values to determine which goals/projects are taking the most time.
I don't think there is any way built into Maven to do this. In fact, in the related question artbristol posted, there is a link to a Maven feature request for this functionality. Unfortunately, that issue is unresolved and I don't know if it will ever be added.
The other potential solution is to write your own plugin which would provide this build metadata for you.
I don't think there is a way to determine the timing of particular goals. What you can do is run particular goals separately to see how long they take. So instead of doing a "mvn install", which runs all of your tests, Checkstyle, etc., just do "mvn checkstyle:checkstyle" to see how long that takes for a particular module.
Having everything run every time is nice when it's done by an automated server (Continuum/Jenkins/Hudson), but when you are building locally, sometimes it's better to be able to just compile. One thing you can do is have the static analysis goals run ONLY when you pass in a certain parameter or activate a certain profile. Another option is to only have them run when maven.test.skip=false.
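For example, a profile along these lines keeps the analysis out of plain local builds (the profile id, property name and plugin bindings are illustrative):

<profiles>
  <profile>
    <id>static-analysis</id>
    <activation>
      <property>
        <name>analysis</name>
        <value>true</value>
      </property>
    </activation>
    <build>
      <plugins>
        <!-- bind the checkstyle/findbugs/pmd executions here -->
      </plugins>
    </build>
  </profile>
</profiles>

Then mvn install -Danalysis=true (or -Pstatic-analysis) runs the full treatment, while a plain mvn install stays lean.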
If you are using a continuous build, try having the static analysis only done every 4 hours, or daily.