Asynchronous task execution using Spring in a container-managed environment

I want to run a few tasks asynchronously in a web application. My question is which Spring implementation of task executors I should use in a container-managed environment.
I referred to this chapter in the Spring documentation and found a few options.
One option I considered is WorkManagerTaskExecutor. It is very simple and works seamlessly with the IBM WebSphere server I'm currently using, but it is specific to IBM WebSphere and Oracle WebLogic servers. I don't want to tie my code to one particular implementation, because in some test and local regions we use a Jetty container, and this implementation causes problems when running the code in Jetty.
Other options like SimpleThreadPoolTaskExecutor do not seem to be the best fit for leveraging thread pooling in a container-managed environment, and I don't want to create new threads myself.
Could you please suggest how I should go about this? Any pointers to a sample implementation would be a great help.

As usual, it depends. If you rely on the container's thread management and want to be able to configure thread pools through its admin interface, or if your application is not the only app inside the container, or you use specific features like setting thread pool priorities for EJB or JMS, you should add support for the WorkManagerTaskExecutor and make it configurable. If not, you can use whatever you want, because in the end threads are just threads. Since Spring is an IoC container you can do this. To use the same app everywhere I wouldn't suggest changing the XML config per app version. Rather:
Use profiles to set the executor type and return the proper bean type from your Java config. If you use Jetty you should have configuration for the thread pool sizes so you can tune it.
Use Spring Boot-style auto-configuration, which usually relies on the classes available on the classpath (@ConditionalOnClass). If your WebLogic- or WebSphere-specific classes are available, or any other container-specific hint such as an environment variable is present, you can create the WorkManagerTaskExecutor.
With both of these you can deploy the same war everywhere.
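A minimal sketch of the profile-based approach, assuming profile names like "websphere", "jetty" and "local" and a CommonJ work manager bound at wm/default (all of these are assumptions, adjust to your servers):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Profile;
    import org.springframework.core.task.TaskExecutor;
    import org.springframework.scheduling.commonj.WorkManagerTaskExecutor;
    import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

    @Configuration
    public class TaskExecutorConfig {

        // WebSphere/WebLogic: delegate to the container-managed work manager.
        @Bean
        @Profile("websphere")
        public TaskExecutor containerTaskExecutor() {
            WorkManagerTaskExecutor executor = new WorkManagerTaskExecutor();
            executor.setWorkManagerName("wm/default"); // assumed JNDI name, configure per server
            return executor;
        }

        // Jetty / local: a plain Spring-managed thread pool, sizes are just examples.
        @Bean
        @Profile({"jetty", "local"})
        public TaskExecutor threadPoolTaskExecutor() {
            ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
            executor.setCorePoolSize(4);
            executor.setMaxPoolSize(16);
            executor.setQueueCapacity(100);
            return executor;
        }
    }

Activate the matching profile per environment (e.g. via spring.profiles.active) and the rest of the code just injects a TaskExecutor without caring which implementation is behind it.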

Related

How do I tell Camunda (in Modeller) v7.15.0 that Java Delegates are remotely deployed?

I have Camunda 7.15.0 deployed in GCP Cloud Run. I have a Java 11 application through which I wish to access the Camunda RuntimeService in order to start a process instance, but this requires some community extension so I went with Camunda REST API. Now I want to add Java Delegates coded in the above Java application, how do I tell Camunda (Modeller) that Java Delegates are remote?
The Java Delegates only work if you embed Camunda in your application as a library.
In your case you need to work with the External Task pattern.
Here is a blog post that explains this pattern: https://medium.com/#dashedsouvik/camunda-external-task-pattern-fd84a29d9d3e
(I assume you are asking about Camunda 7.x)
If the code is supposed to run in a different JVM, change the model to use implementation type "external". Then change the JavaDelegate. You do not need to implement this interface anymore. Instead, for Camunda 7 use one of the client libraries:
https://github.com/camunda-community-hub/awesome-camunda-external-clients
The libraries take care of the remote communication to the engine. You only need to change the delegate code to be wrapped similar to this: https://github.com/rob2universe/c7-rest-task-worker/blob/main/src/main/java/com/camunda/example/worker/RESTWorker.java
Also see:
https://docs.camunda.org/manual/7.17/user-guide/ext-client/
https://blog.bernd-ruecker.com/moving-from-embedded-to-remote-workflow-engines-8472992cc371
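As a rough illustration, a worker built with the Java external task client from that list could look like the sketch below (the engine URL, topic name and process variable are made up; match them to your Cloud Run deployment and to the topic you set on the external task in the Modeler):

    import org.camunda.bpm.client.ExternalTaskClient;

    public class InvoiceWorker {

        public static void main(String[] args) {
            ExternalTaskClient client = ExternalTaskClient.create()
                    .baseUrl("https://your-camunda-host/engine-rest") // REST endpoint of the remote engine
                    .asyncResponseTimeout(10000)                      // long polling
                    .build();

            client.subscribe("invoice-topic")        // topic configured on the external task
                    .lockDuration(20000)
                    .handler((externalTask, externalTaskService) -> {
                        // The logic that used to live in the JavaDelegate goes here.
                        String invoiceId = externalTask.getVariable("invoiceId");
                        // ... do the work ...
                        externalTaskService.complete(externalTask);
                    })
                    .open();
        }
    }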

Update for JavaEE application

Our application is built on Spring Boot; the app is packaged into a war file and run with java -jar xx.war -Dspring.profile=xxx. Generally the latest war package is served by a static web server like nginx.
Now we want to know if we can add auto-update to the application.
I have googled this, and people suggested using an application server that supports hot deployment; however, we use Spring Boot as described above.
I have thought about starting a new thread once my application starts, then checking for updates and downloading the latest package. But I have to terminate the current application to start the new one since they use the same port, and if I close the current app, the update thread is terminated too.
So how do you handle this problem?
In my opinion that should be managed by some higher-order dev-ops orchestration system, not by the app or its container. The decision to replace an app should be made at the dev-ops level, not at the app level.
One major advantage of Spring Boot is the inversion of the traditional application-inside-web-container model into a web-container-inside-app model. As such, the web container is usually (and as best practice with Spring Boot) built into the app itself. Hence it is fully self-contained and, crucially, immutable. It therefore should not be the role of the app or its embedded container to replace either part of or all of itself.
Of course you can do whatever you like, but you might find that the solution is not easy because it is not conventional to do it this way.

Share spring container between test application and embedded tomcat

We are using cucumber-jvm to write an integration test layer in our application. One of the challenges we are finding is managing the database between the tests and the web application.
A typical scenario is that we want to persist some entities in a Given step of a scenario, then perform some actions on the user interface that may, in turn, persist more entities. At the end, we want to clean the database. Because the cucumber-jvm tests are in one JVM and the web application is running in another JVM, we cannot share a transaction (at least not in a way of which I am aware), so the database must be cleaned manually.
My initial thought was to use an Embedded Tomcat server running off of an embedded in-memory database (HSQLDB) in the same JVM as the cucumber-jvm test. This way we might be able to share a single spring container, and by extension a single transaction, from which all objects could be retrieved.
During my initial tests it looks like Spring gets loaded and configured twice: once when the test starts and the cucumber.xml is read, and a second time when the embedded tomcat starts and the web application reads its applicationContext.xml. These appear to be in two completely separate containers because if I try to resolve an object in one container that is specified in the other container then it doesn't resolve. If I duplicate my configuration then I get errors about duplicate beans with the same id.
Is there a way that I can tell Spring to use the same container for both my test application and the embedded tomcat?
I'm using Spring 3.2.2.GA and Embedded Tomcat 7.0.39 (latest versions of both libraries).
Am I crazy? Do I need to provide more technical details? Apologies if I use some incorrect terminology.
Thanks
p.s. If my problem seems familiar to you and you can suggest an alternative solution to the one I am trying, please let me know!
Jeff,
It is normal that Spring is loaded twice. There are two places where a Spring context is created:
In the servlet container listener org.springframework.web.context.ContextLoaderListener that is configured in web.xml. This one reads its configuration from the file set by the context-param contextConfigLocation.
In the implementation of ObjectFactory provided by cucumber-spring plugin cucumber.runtime.java.spring.SpringFactory. This one reads its configuration from cucumber.xml.
The two Spring contexts are totally different and their instances are kept in two different places: as a servlet context attribute for the former, and by the JavaBackend for the latter.
When starting the embedded Tomcat, it is possible to get access to the servlet context and thus set the Spring context used by Tomcat ourselves, using the one from Cucumber. But Spring has a special class called WebApplicationContext for contexts used in a servlet container, while the Cucumber SpringFactory creates its context through ClassPathXmlApplicationContext. So unless there is a way to specify the type of application context from the XML config, we will have to provide an ObjectFactory that produces a WebApplicationContext.
What we can do is have two web.xml files: one for normal use and one for the tests. For the tests, we use our own version of the ContextLoaderListener.
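A sketch of what such a test-only listener could look like (the class name and wiring are hypothetical: the test would build a web application context from cucumber.xml, hand it to this listener, and the test web.xml would register this listener instead of the stock ContextLoaderListener):

    import javax.servlet.ServletContext;
    import org.springframework.web.context.ConfigurableWebApplicationContext;
    import org.springframework.web.context.ContextLoaderListener;
    import org.springframework.web.context.WebApplicationContext;

    public class SharedContextLoaderListener extends ContextLoaderListener {

        // Set by the test before the embedded Tomcat is started.
        private static ConfigurableWebApplicationContext sharedContext;

        public static void setSharedContext(ConfigurableWebApplicationContext context) {
            sharedContext = context;
        }

        @Override
        protected WebApplicationContext createWebApplicationContext(ServletContext sc) {
            // Reuse the context created on the test side instead of building a new one.
            return sharedContext != null ? sharedContext : super.createWebApplicationContext(sc);
        }
    }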

glassfish 3.1.2 monitoring EJB container, bean-methods

The GlassFish application server provides a nice monitoring REST interface.
To use it you can enable several monitorable items in the admin console, for example the EJB container. The documentation says you can retrieve EJB statistics for every deployed application.
If you request a URL like localhost:4848/monitoring/domain1/server/applications/APPNAME/EJBNAME you will get statistics for a given EJB of the application.
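For instance, a minimal Java client for that endpoint might look like this (APPNAME and EJBNAME are the placeholders from above; the Accept header asks the REST interface for JSON rather than the HTML view):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MonitoringClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:4848/monitoring/domain1/server/applications/APPNAME/EJBNAME");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json"); // request JSON output
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }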
Further, there is a possibility to look more deeply into each bean method of the EJB, for example the execution time, about which the documentation says:
"Time, in milliseconds, spent executing the method for the last successful/unsuccessful attempt to run the operation. This is collected for stateless and stateful session beans and entity beans if monitoring is enabled on the EJB container."
The problem now is that monitoring is enabled on the EJB container (level set to HIGH), but nothing is sampled for any bean method of any EJB in any deployed application.
Is there something special to do in the bean and/or in GlassFish?
Thanks in advance for help,
Chris
EDIT:
Ok, I noticed something more about that behaviour:
In the server log you get a log message for each deployed EJB like that:
INFO: EJB5181:Portable JNDI names for EJB DataFetcher // ...
If I set the ejb-container monitoring level to HIGH (which is what I want to do), I get the following warning for each deployed EJB, regardless of which app I deploy:
WARNING: MNTG0201:Flashlight listener registration failed for listener class : com.sun.ejb.monitoring.stats.StatelessSessionBeanStatsProvider , will retry later
I googled the warning but none of the results really helped me enable EJB monitoring...
This seems to be a Bug in Glassfish.
EJB Monitoring is currently not working in 3.1.2.
JIRA issue is already raised: http://java.net/jira/browse/GLASSFISH-19677
There is nothing "special" to do.
http://docs.oracle.com/cd/E18930_01/html/821-2431/abeea.html
For me it seems as if you probably enabled the monitoring option on the wrong configuration. Please double check.
To get rid of this message you can disable monitoring for the EJB container in the admin console:
Monitor Data ---> Configure Monitoring ---> set the EJB container level to OFF

Different EARs using common services. Should I use remote calls, or package them as local?

I have multiple EJB3 EARs deployed on a JBoss server. One of them is an application containing common services exposed as remote. Now all the other EARs use those services via remote calls, and it seems to be really painful for performance.
What can I do to overcome this? Can I make those services @Local and package this jar into every single application so they can be used via @Local instead of @Remote?
According to the Java EE tutorial, the client of a @Local bean "must run in the same JVM as the enterprise bean it accesses."
So you should have no problem using local calls between different deployed applications on the same server.
Are you sure this is the cause of your performance problems, though?
I agree with you: you cannot inject an EJB session bean from another EAR in the same JVM using a local interface.
If you have two EARs you must use the remote interface, injecting it with:
@EJB(lookup = "JNDI_BEAN_NAME")
I disagree. See this thread as it provides an excellent description of why you shouldn't: http://www.coderanch.com/t/79249/Websphere/Local-EJB-calls-separate-ear
The only way, normally, to make local calls between EARs is to make cross-classloader calls. Not pretty.
You can get around this by having a single classloader per server (versus scoped per EAR). Also a bad idea for security/isolation reasons.
You should use remote calls between EARs. Some Java EE implementations optimize these calls to be more efficient when calling within the same JVM.
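For illustration, a remote call between EARs could look like the sketch below (all names are hypothetical, including the portable JNDI name in the lookup; the real one is printed in the server log at deploy time):

    import javax.ejb.EJB;
    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    // Remote business interface, packaged in a small shared API jar that every EAR bundles.
    @Remote
    interface PricingService {
        double calculatePrice(String productId);
    }

    // Client bean in another EAR, looking the service up by its portable JNDI name.
    @Stateless
    public class OrderBean {

        @EJB(lookup = "java:global/common-services/PricingServiceBean!com.example.PricingService") // illustrative name
        private PricingService pricingService;

        public double priceFor(String productId) {
            // Remote call; many containers optimize same-JVM remote calls,
            // but pass-by-value semantics still apply.
            return pricingService.calculatePrice(productId);
        }
    }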
