Can I use ExecutorService in an EJB? - ejb-3.0

I have a scenario in which the results of various students are generated within one EJB call by looping over the student list. I was thinking of creating threads to process each student using an ExecutorService within the EJB call. Currently I just look up my EJB once.

I think this post should answer your question:
EJB's and Threading
In general, an EJB should not spawn new threads or do 'handcrafted' asynchronous execution.

In EE 7+ servers, you should just use JSR 236, which lets your application access executors/pools that are managed by the application server.
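For illustration, a minimal sketch (assuming a Java EE 7+ container; ResultGenerationBean, Student, and processStudent are hypothetical names for the scenario in the question) of injecting the default JSR 236 executor instead of creating an ExecutorService by hand:

import java.util.List;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

@Stateless
public class ResultGenerationBean {

    // The default container-managed executor defined by JSR 236
    @Resource
    private ManagedExecutorService executor;

    public void generateResults(List<Student> students) {
        for (Student student : students) {
            // Each task runs on a thread owned and managed by the container
            executor.submit(() -> processStudent(student));
        }
    }

    private void processStudent(Student student) {
        // compute and store the result for one student
    }
}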
Otherwise, in theory, the EJB spec does not allow EJBs to create their own ExecutorService, which would create/manage its own threads:
The enterprise bean must not attempt to manage threads. The enterprise
bean must not attempt to start, stop, suspend, or resume a thread, or
to change a thread’s priority or name. The enterprise bean must not
attempt to manage thread groups.
These functions are reserved for the EJB container. Allowing the
enterprise bean to manage threads would decrease the container's
ability to properly manage the runtime environment.
In practice, it might work if you have complete control over the server running your application (you know which other applications are running and how many threads/pools they're creating in order to avoid overloading the system), and you limit the actions taken in those threads (for example, java:comp lookups won't work, transactional behavior might be limited, etc.).

Related

How to debug a Spring Boot application not starting

Spring lists SO as the only place to ask questions on their community page, which is why I ask this rather generic question here. It may not be the best fit for SO, but, according to Spring's community overview page, there's no other adequate place to ask such questions.
I have a Spring Boot application built on Spring Cloud Gateway (version 2) which also uses an embedded Hazelcast cluster. It runs in multiple instances, which communicate via Hazelcast. Everything works fine, except under heavy load: if one instance fails, restarting it is no longer possible.
When the instance is restarted while the cluster of instances is under heavy load, it starts creating and wiring beans up to some point, after which it does nothing Spring-related anymore. Past that point, Hazelcast-generated messages are still visible in the log (with root log level DEBUG), but nothing generated by Spring or the application itself.
In order to restart that one instance that failed, I need to stop the load generation, wait some 10-15 minutes, then restart the failed instance. Then the new/restarted instance starts up rather quickly, with no problems at all.
The load consists of HTTP requests which get proxied to another application, and is of such a nature that it generates a lot of read accesses to Hazelcast's distributed storage, but very few writes.
My problem: I have no idea how to debug this. Since the HTTP endpoint never becomes available, there's no way I can query metrics or other Actuator information.
So my question is: what tools or mechanisms can I employ to debug this problem? That is, how can I find out exactly how the boot sequence differs when the other instances of the Hazelcast cluster are under heavy load versus when there is no load at all in the cluster? Once I have this information, the problem is narrowed down enough for me to investigate it further on my own.
I didn't find a way to debug the problem, but I had an idea of what might be causing it, tried it, and it turned out to be the fix.
My application was running as a Kubernetes deployment. A few beans inside the application were relying on a usable CP subsystem during their initialization. Spring's bean initialization process is by necessity sequential and blocking, to account for inter-bean dependencies.
I hypothesized that under heavy load, for whatever reason, the initialization of those beans was blocking forever. As a first experiment, to see whether that was at least part of the problem, I made that initialization code asynchronous, so that Spring could finish bean wiring, even though the instance could not perform useful work until the async part had finished too.
To my surprise, that fully fixed the problem. Spring finished bean wiring, the HZ-dependent initialization also finished rather quickly when executed asynchronously, even under high load, and the instance became usable soon after being started.
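A rough sketch of the kind of change described above (the class, bean, and lock names are hypothetical, and the real application's CP subsystem calls will differ): the CP-dependent work is moved off the bean-initialization path so Spring can finish wiring:

import java.util.concurrent.CompletableFuture;
import javax.annotation.PostConstruct;
import org.springframework.stereotype.Component;
import com.hazelcast.core.HazelcastInstance;

@Component
public class CpDependentInitializer {

    private final HazelcastInstance hazelcast;
    private volatile boolean ready;

    public CpDependentInitializer(HazelcastInstance hazelcast) {
        this.hazelcast = hazelcast;
    }

    @PostConstruct
    void init() {
        // Do the potentially blocking CP subsystem access asynchronously
        // instead of inside bean initialization, where it would block startup.
        CompletableFuture.runAsync(() -> {
            hazelcast.getCPSubsystem().getLock("startup-lock"); // may block under load
            ready = true;
        });
    }

    public boolean isReady() {
        return ready;
    }
}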
I didn't have the time to dig deeper to find out what the precise failure mechanism was. What I believe might have been the problem is the interaction between HZ and K8s: K8s-based discovery works through a K8s Service, and a pod/instance isn't added to that Service until it becomes healthy. If a bean inside the application blocks initialization, the instance is never added to the Service, so discovery never finds the new/restarted instance. I don't know what effect this might have on the HZ cluster's inner workings.

How does the shared engine look up the required resources for a process step invocation?

I am using a shared process engine on WebSphere and I want to understand how the engine looks up the required resources (custom code shipped with my process application) for a process step invocation. Is a thread context switch applied?
The shared process engine can be used by multiple applications, one of these applications being the Camunda webapplication.
Whenever the process engine "does something" within a process instance, such as executing a service task, it performs a Thread Context Switch. This Thread Context Switch is performed to the application which deployed the BPMN process that the engine is currently executing. This is necessary for the process engine to be able to use resources locally available within that application.
Examples of these kinds of resources:
The application's classloader, in order to instantiate Java Delegates
The application's CDI Bean Manager, in order to be able to invoke CDI Beans.
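For illustration only, a Java Delegate such as the following hypothetical class, shipped inside the process application, is the kind of code the engine needs the application's classloader for:

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Runs on a thread whose context has been switched to this application,
// so the engine can load and instantiate the class via the application's classloader.
public class CalculateResultDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        Long studentId = (Long) execution.getVariable("studentId");
        execution.setVariable("result", compute(studentId));
    }

    private String compute(Long studentId) {
        return "PASS"; // hypothetical computation
    }
}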
How does this "Thread Context Switch" work technically?
The process engine executes a callback method on an EJB which has to be included within the application. This is why you include the camunda-ejb-client.jar. Relevant information: the process engine invokes the local business interface of that EJB.
As a result, the Thread Context Switch is performed with EJB local invocation semantics. Whatever Websphere puts in place for an EJB local invocation will work and whatever Websphere does not put in place for an EJB local invocation will not work.
The behavior is exactly the same as it would be if you put the code from your Java Delegates into an EJB with a local business interface and invoked that EJB from another application.

When to use a Local EJB Interface

As per Oracle docs here
Local Clients: A local client has these characteristics.
It must run in the same application as the enterprise bean it accesses.
It can be a web component or another enterprise bean.
To the local client, the location of the enterprise bean it accesses is not transparent.
As for "It must run in the same application as the enterprise bean it accesses": when it says 'same application', does it mean the EJB client and the EJB bean must be part of the same JAR file, or the same EAR file? If they are part of the same JAR file, why even use an EJB in the first place? We could just import the EJB bean in the client and use it like a utility class.
It means the same EAR.
Regardless, the only reason to ever use EJB is because you want to delegate responsibility to the container (transactions, security, interceptors, resource injection, asynchronous methods, timers, etc.). There's nothing to stop you from implementing all the qualities of service yourself (e.g., Spring did it), but by using EJB, you don't have to worry about getting all the details right, and (in theory) you make it easier for many people to develop an application because they share a common understanding.
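As a minimal sketch (the GradeService names are hypothetical), a local business interface in the same EAR lets the container add those qualities of service around calls, which a plain utility class would not get:

// GradeService.java -- local business interface
import javax.ejb.Local;

@Local
public interface GradeService {
    void recordGrade(long studentId, int grade);
}

// GradeServiceBean.java -- the bean implementation
import javax.ejb.Stateless;

@Stateless
public class GradeServiceBean implements GradeService {
    @Override
    public void recordGrade(long studentId, int grade) {
        // runs inside a container-managed transaction by default
    }
}

// GradeServlet.java -- a web component in the same application injecting the local view
import javax.ejb.EJB;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;

@WebServlet("/grades")
public class GradeServlet extends HttpServlet {
    @EJB
    private GradeService gradeService; // local interface, no remoting overhead
}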

OSGi service vs. Singleton?

I am a beginner with OSGi and I am wondering if someone can enlighten me about the difference between creating an OSGi service and using the singleton pattern. For example, suppose I have a Core bundle which provides IService, and multiple bundles that need to access it. I can:
register a service in the Core bundle, which the plugins can access
provide a singleton class, which provides the service
Using an OSGi service seems quite cumbersome; and since the plugins have to depend on Core anyway (to get the interface), what's the advantage of using an OSGi service?
Services are the connections between independent modules. Having modules depend on services (with their specification packages) can significantly reduce coupling between modules and thus provide much of the benefits of modularity.
I think the singleton pattern is used in two different ways: either you just want a single object shared between a set of users (e.g. a Log Service), or you really can have only one instance (e.g. there is only one piece of hardware). In general, I see that most people in the enterprise software world mean the former. However, experience shows that when projects grow, singletons become less of a true singleton and more of a shared object, or at least something that appears to be a shared object.
The nice thing in OSGi is that you can model both, the clients of the "singleton" are oblivious to it, and it requires no central configuration. The reason is that OSGi puts the modules in charge: registering a service is a local decision, as is listening to a service.
The power of services is not in their dynamics (the dynamics are cool though, especially during development); the essence of services is that they provide full local control inside the module without central configuration. Once you understand how powerful this is, there is no way back :-)
Last, OSGi services are not cumbersome, not since we have Declarative Services (DS) with annotations. Registering a service is now much simpler than creating a Spring bean: no XML, no central configuration:
// A component registered as an ISingleton service
@Component
public class MyImpl implements ISingleton {
    void doSingle() { ... }
}

// A component that uses the ISingleton component
@Component
public class MyConsumer {
    @Reference
    void setISingleton(ISingleton is) { ... }
}
... And the dynamics come largely for free ...
Short answer: if you don't -- and won't -- need the benefit of an OSGi service (e.g., dynamically-managed service implementations and service searches), then you don't need an OSGi service.
But there is more to consider here than whether or not the service would be cumbersome. Heck, OSGi itself can be considered cumbersome. Will another bundle need to provide an implementation of that class? Maybe not. Will the Core bundle ever shut down or otherwise be unable to provide an implementation on demand? Maybe.
To determine if a service is right for the class in question, read the run-down of the specific benefits of a service on the OSGi Alliance's What Is OSGi page. They have a very good explanation of how your singleton class may become more cumbersome than a service.
Good luck.
My proof of concept on the OSGi threading model led me to believe that, from a service consumer's point of view, every service is a singleton, since only one service object gets registered in the OSGi service registry (though you can override this behavior). So as far as programming is concerned, the behavior of a singleton class and an OSGi service is the same: your class-level variables are shared among the various service consumer calls.
I would say an OSGi service is Singleton++.
But there are also differences.
OSGi gives you a separate classloader for each bundle, which is not possible with plain singletons: all singleton classes are loaded by a single classloader, so we can't have two classes with the same fully qualified name, whereas this is possible in OSGi.
In certain situations we must ensure that a class is loaded only once (building a Hibernate SessionFactory, HDFC service initialization, heavy POJO creation that is required only once). In a Java EE scenario your singleton class sometimes gets loaded twice by two different classloaders, which results in the static block executing twice; unnecessary work.
Such classloader problems are easily handled by OSGi (as you are a beginner, I suspect classloading itself will be a challenge for you over the next few days).
Another great feature provided by OSGi is updating a bundle.
Suppose you changed the code in your singleton class. To deploy this updated class in your running application, you essentially need to restart the system so that every classloader picks up the new version of the singleton. This is not required in OSGi; just update the bundle.
I would say: if you are designing larger, enterprise-scale applications, or if you need to write code for limited hardware (low memory, low computing power), then go for OSGi; it is best at those extreme ends. For everything else, your normal Java coding will work perfectly well.
You can manage the life cycle of a service (deploy a new version, run multiple versions concurrently, etc.), but you can't manage the life cycle of a singleton without restarting the JVM (and even with a restart, only one version can be available at any point in time).

Spring + Thread safe singletons

I'm working on a project where we use Mule and Spring. In the Spring context we create beans that provide the services. All the beans are essentially thread-safe singletons. Is this a popular/recommended way of writing services?
By default a bean in Spring is a singleton, and what you describe is a very common scenario.
It might be problematic performance-wise if many threads compete for the same service: if the bean is made thread-safe through synchronization, access from different threads is effectively serialized.
In our RESTful service we set up our entry points on a
@com.sun.jersey.spi.resource.PerRequest
basis and
@org.springframework.context.annotation.Scope("request")
which keeps our throughput up, but we monitor to ensure that GC is good enough not to bloat the app.
Singleton is the default scope in Spring, and stateless singleton beans are effectively thread-safe; this performs quite well, and we use them in all our web apps.
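As a minimal sketch (ResultService and computeScore are hypothetical names), a singleton bean that keeps no mutable instance state can be shared across threads without synchronization, whereas a stateful singleton would need locking (serializing callers) or a narrower scope such as request:

import java.util.List;
import org.springframework.stereotype.Service;

// Singleton by default; safe to call concurrently because it holds no mutable state.
@Service
public class ResultService {

    public int computeScore(List<Integer> marks) {
        return marks.stream().mapToInt(Integer::intValue).sum();
    }
}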
