Running JProfiler for Grails on IntelliJ - performance

I am trying to run JProfiler for a Grails application. I would really appreciate any suggestions on the following:
1) Since I don't have an explicit class with a main() method in a Grails application, I am assuming attaching to a running JVM is my only option. Is that true? Is there a way I could attach JProfiler before the Grails application starts?
2) After attaching to a running JVM, what does JProfiler need in order to profile the controller/service/src/domain files? Do I have to execute the test cases? In my case they are REST controllers, so do I have to run the requests for all possible scenarios?
3) Is it possible to have JProfiler profile the code without me running the test cases, since I may not be able to cover all scenarios?

Since I don't have an explicit class with a main() method in a Grails application
A JVM profiler does not depend on a main method that you write yourself. The only thing you have to be able to do is pass the -agentpath VM parameter to the JVM. The exact parameter is given by Session->Integration wizard->New Remote Integration in JProfiler, and for Grails >= 3.1.5 it has to be added to the GRAILS_FORK_OPTS environment variable.
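On Linux, for example, the resulting setting might look roughly like the line below; the library path and port are only illustrative, the exact value comes from the integration wizard:
export GRAILS_FORK_OPTS="-agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"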
The IntelliJ IDEA integration can profile Grails run configurations directly, so you don't have to do any of the above.
Using the attach API is also possible, but it has a higher overhead when connecting and prevents some profiling capabilities from being enabled.
do I have to run the requests for all possible scenarios
The profiler profiles the entire JVM, so whatever use case you run while profiling will show up in the profiler.
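For example, with the profiler attached you could drive a single REST endpoint yourself and then inspect the recorded call tree for that request; the URL below is purely hypothetical:
curl http://localhost:8080/myapp/api/orders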

Related

start the Spring Boot application once, before Cucumber tests run

I am writing some BDD tests using Cucumber for my Spring Boot application (v2.2.1), and it works OK.
However, I am facing a performance issue, because the application gets started/stopped for every scenario in the feature file: I am using an in-memory DB with Liquibase, so for each scenario this setup gets executed (it takes a few seconds).
Sure, it's currently guaranteed that my scenarios are very well isolated. Maybe in some cases I will want this behaviour, but right now most of my feature files would benefit from a one-time setup: since each scenario sets up the different records it needs in the in-memory DB (with no overlap), I could theoretically execute my scenarios in parallel against a single running Spring Boot application.
I saw https://blog.codecentric.de/en/2017/02/integration-testing-strategies-spring-boot-microservices-part-2/ , but it requires building the application first, then starting it from the jar.
Isn't there a way to do the same, but with the application started once from the Cucumber runner? Any example somewhere?
Thanks to @mpkorstanje's link, I was able to find the issue: while trying to replicate the suggestion in my project, I discovered that one of the configs that was scanned had a @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS) annotation. So that was the issue. Now I need to look at a workaround like what is suggested here: @DirtiesContext tears context down after every cucumber test scenario, not class

Forked execution of @SpringBootTest tests with the Maven Failsafe plugin - does it work out of the box?

Consider you have a Spring Boot application, and within this application also a bunch of integration tests, which are annotated with @SpringBootTest and run with the SpringRunner class.
They are invoked by the maven failsafe plugin, which by default does not parallelise tests in any way. The tests all run fine without any issues.
What changes if you use failsafe's feature of forkCount - can you expect the test execution to work out of the box? Do you need to adjust some code? What do you need to look out for that could potentially not allow these integration tests to run in a forked, "parallel" environment via this plugin?
From my understanding, the Failsafe plugin will create forkCount-many JVMs and in each of them some of the integration tests are executed. That sounds like there is nothing to do: you don't need to make anything thread-safe, you don't need to turn singleton beans into thread-scoped beans or anything, as having multiple JVMs should already create multiple instances of these beans.
Sorry if the question appears weird, I tried researching this question but I could not find an answer.
From the Maven doc:
The parameter forkCount defines the maximum number of JVM processes that will be spawned concurrently to execute the tests.
This means that the tests will run in their own processes and therefore you will have separate Spring Boot instances. So you really don't have to care about thread safety.
But you have to care about memory consumption.
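As a quick sketch, forkCount (and the heap given to each forked JVM) can also be set from the command line; the values below are only examples:
mvn verify -DforkCount=4 -DreuseForks=true -DargLine="-Xmx512m"
forkCount also accepts values like 1C, meaning one fork per available CPU core.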

Running jacoco report where integration tests are in one code base and source code is in another code base

I recently started working on creating jacoco reports for maven projects including unit and integration tests and they seem to work out correctly.
Now I have encountered a different scenario which I am not sure how to approach.
I have one workspace which consists of the integration test cases - application A - but the source code does not live in the same workspace/code base. The source code that actually runs when these integration tests are invoked is in a different workspace/code base - application B. The tests invoke it through REST API calls against localhost URLs; the JBoss server is started for application B so that the localhost context is up.
The aim is to invoke the integration tests from application A, which in turn exercise the source code in application B, and to generate the JaCoCo report of the code coverage for application B.
I am not actually sure how to achieve this.
Can someone provide some input?
Thanks.
If I understand you correctly, you actually have 2 different processes in your scenario:
The "client" process that runs the integration tests and for which jacoco can be easily applied, but it's not what you need
The "server" process that runs the actual JBoss server and executes the actual code.
The client process contacts the server via HTTP.
In this case, I'm afraid JaCoCo won't be able to provide coverage for you if you're running the tests from Maven/Gradle, because JaCoCo only instruments bytecode on the JVM it is running in. So you have to be "creative" here :)
I'll list some possible approaches here.
Disclaimer: I haven't tried them myself (I didn't work with JBoss/Java EE), but maybe you'll be able to at least borrow some ideas.
The first approach would be running the tests together with the application somehow, like it's done for example in Spring tests (I'm not sure whether JBoss provides similar capabilities).
The idea is simple:
You run the integration test, it runs JBoss "embedded" in the same JVM, and you can inject beans / EJB session beans into the test (like autowiring with Spring).
The advantage of such a method is that you'll be able to just use the JaCoCo Maven plugin and it will instrument everything for you.
I don't know how easy achieving this architecture will be technically. I know that recent JBoss versions support an embedded mode, so maybe you'll find this link to be a useful foundation.
Another direction is to take a look at the Arquillian project. It has a JaCoCo extension that will probably help, but I've never tried it.
And the last approach I can think of is running the JBoss server with the JaCoCo agent directly, instead of relying on the build system to run JaCoCo for you.
The idea here is to stream the coverage results for the server code into a file or TCP endpoint. So you run JBoss with -javaagent:[yourpath/]jacocoagent.jar and it streams the results wherever you need them. After the tests you gather these results and prepare a report. You can find more information about this approach here.
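As a sketch, for a standalone JBoss you could append the agent to the server's JVM options (for example in standalone.conf; the paths below are placeholders), so that coverage data is written to an exec file when the server shuts down:
JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/jacocoagent.jar=destfile=/tmp/jacoco-it.exec,output=file,append=false"
After running the integration tests and stopping the server, the resulting jacoco-it.exec file can then be fed to JaCoCo's report generation in application B's build to produce the coverage report.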

Spring Boot project published to a production environment: choose war (standalone Tomcat) or jar (embedded Tomcat)?

For my latest project I used Spring Boot, and I am preparing to deploy it to a production environment. I want to know which way of running the application has better performance, or whether they perform the same:
generate a war package and put it in a stand-alone tomcat
generate a jar package and use embedded tomcat
In addition, when publishing to the production environment, should I remove the devtools dependency?
This is a broad question. The answer is it depends on your requirements.
Personally, I prefer standalone applications with Spring Boot today. One app, one JVM. It gives you more flexibility and reliability in regard to deployments and runtime behaviour. Spring Boot 1.3.0.RELEASE comes with init scripts which allow you to run your Spring Boot application as a daemon on a Linux server. For instance, you can integrate the rpm-maven-plugin into your build pipeline in order to package and publish your application as an RPM for deployment, or you can dockerize your application easily.
With a classic deployment into a servlet container like Tomcat you will be facing various memory leaks after redeployment for example with logging frameworks, badly managed thread local objects, JDBC drivers and a lot more.
Either you spend time fixing all of those memory leaks inside your application and the frameworks you use, or you just restart the servlet container after a deployment. Running your application as a standalone version, you don't care about those memory leaks because you are forced to restart it in order to bring your new version up.
In the past, several webapps ran inside one servlet container. This could lead to performance degradation for all webapps because every webapp has its own memory, CPU and GC characteristics which may interfere with each other. Furthermore, resources like thread pools were shared among all webapps.
In fact, a standalone application is not safe from performance degradation due to high load on the server, but it does not interfere with others in respect to memory utilization or GC. Keep in mind that performance or GC tuning is much simpler if you can focus on the characteristics of just one application. It gets complicated as soon as you need to find a common denominator for several webapps in one servlet container.
In the end, your decision may depend on your work environment. If you are building an application in a corporation where software is running and maintained by operations, it is more likely that you are forced to build a war. If you have the freedom to choose your deployment target, then I recommend a standalone application.
In order to remove devtools from a production build, you can set the excludeDevtools build property to completely remove the JAR. The property is supported by both the Maven and Gradle plugins.
See the Spring Boot documentation.

How to speed up grails test execution

While developing a Grails 1.0.5 app I'm appalled at how slow the grails test-app command is. Even though the actual tests take just ~10 seconds, the whole execution adds up to
real 1m26.953s
user 0m53.955s
sys 0m1.860s
This includes grails bootstrapping, loading plugins, compiling all the code, etc.
Any hints on how to speed up the grails test-app execution would be greatly appreciated.
You can use interactive mode to speed up your test runs.
Just run
grails interactive
Then type
test-app
The first time will be the same as usual but each time after that will be dramatically faster. There are currently some issues with interactive mode (like running out of memory after a few runs) but I still find it worth it.
There aren't any hard and fast rules for speeding it up, and the performance issues that you're seeing might be specific to your app.
If your bootstrapping is taking ~75 seconds, that sounds pretty long. I'd take a close look at whatever you have in your Bootstrap.groovy file to see if that can be slimmed down.
Do you have any extra plugins that you might not need (or that could have a major performance penalty)?
This might not be a possibility for you right now, but the speed improvements in grails 1.1.1/groovy 1.6.3 over grails 1.0.5/groovy 1.5.7 are fairly significant.
Another thing that really helps me when testing is to specify only integration tests or only unit tests if I'm working on one or the other:
grails test-app -unit
grails test-app -integration
You can also specify a particular test class (without the "Tests" suffix) to run a single test, which can really help with TDD (e.g. for the "MyServiceTests" integration test):
grails test-app -integration MyService
In grails 1.1.1, bootstrapping with 5 plugins and ~40 domain classes takes me less than 20 seconds.
If you're still using Groovy 1.5.x you could probably shave off a few seconds by upgrading to Groovy 1.6.
Please see my answer here. A plugin relying on a poorly defined Maven artifact can cause Grails to go and look for a newer version every time:
Grails very slow to resolve certain dependencies
You can choose to run unit and integration tests in parallel as well - see this article
Increasing the java memory/JVM options can definitely speed things up. The amount of memory you can give depends on your equipment.
If you are running grails from the command line, set the GRAILS_OPTS environment variable. Add something like this to ~/.bash_profile
export GRAILS_OPTS="-Xms3000M -Xmx3000M -XX:PermSize=256m -XX:MaxPermSize=512m"
If you use GGTS (Eclipse), you'll need to add this to the VM arguments of the run configuration.
There are also a few JVM settings that can be modified to increase the speed:
-XX:+UseCodeCacheFlushing
-XX:MaxInlineLevel=15
-noverify (disables bytecode verification)
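For example, these can be appended to the same GRAILS_OPTS variable shown above (a sketch, not a tuned configuration):
export GRAILS_OPTS="$GRAILS_OPTS -XX:+UseCodeCacheFlushing -XX:MaxInlineLevel=15 -noverify"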
Grails now comes with http://grails.org/plugin/testing installed. This mocks the domain stuff, so you can do some testing of domain classes as unit tests. They run pretty fast.
