JSF + Stateless EJB performance

I have a JSF 2.0 application running on GlassFish v3. It has EJBs that serve database data via JPA for the main application's sessions. Even on a non-IDE app server, the EJB calls are very slow: between some pages, the user has to wait over 10 seconds to get to the next page.
The EJB runs on the same application server, and only the Local interface is used. The EJB is injected via the @EJB annotation.
Any clues?
Thanks in advance,
Daniel
EDIT: See my answer below for the solution.

It's hard to tell without profiling the code and/or unit-testing every part of the business logic to see exactly which step takes that much time. My initial suspicion would be DB performance. Maybe the table contains millions of records and is poorly indexed, which causes a simple SELECT to take ages. Maybe the network bandwidth is poor, which causes the data transfer to take more time.
At this point, with the little information given so far, it could be anything. I'd just profile it.

When debugging, the app is stuck at the EJB call for several seconds; after that it jumps inside the EJB method and runs fine.
You need to provide more details:
Do you use Local or Remote interfaces? Is the client (the webapp) on a remote machine?
How do you access the EJBs? Are they injected? Do you perform a JNDI lookup?
What did you measure? (either during profiling or using System.nanoTime() at various points, as in the sketch below)
Do the measures really show that most of the time is spent in the invocation itself?
Answering these questions should help to identify where to look and possible causes.
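A minimal timing sketch for the measurement question above, assuming a JSF managed bean with an injected local EJB (PersonBean, PersonService, Person and findAll() are placeholder names, not from the original post): compare the time measured around the call in the web tier with a similar measurement taken inside the EJB method to see where the time is actually spent:
import java.util.List;

import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;

@ManagedBean
@RequestScoped
public class PersonBean {

    @EJB
    private PersonService personService; // local business interface

    public List<Person> getPeople() {
        long start = System.nanoTime();
        List<Person> people = personService.findAll(); // the suspect EJB call
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println("personService.findAll() took " + elapsedMs + " ms");
        return people;
    }
}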

The problem was that previously both a Local and a Remote interface had been implemented, and only the Remote interface was used, although there was no need for that. Both interfaces had the same methods, which is something to avoid according to the NetBeans warning message I got:
When a session bean has remote as well as local business interface, there should not be any method common to both the interfaces.
In more detail:
The invocation semantics of a remote business method is very different from that of a local business method. For this reason, when a session bean has a remote as well as a local business method, there should not be any method common to both the interfaces. The example below is an incorrect use case:
@Remote public interface I1 { void foo(); }
@Local public interface I2 { void foo(); }
@Stateless public class Foo implements I1, I2 { ... }
So the solution was to remove the Remote interface, and set the application logic to use the Local interface.
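For illustration, a minimal sketch of the resulting setup, using placeholder names (PersonService, PersonServiceBean, Person): a single @Local business interface implemented by the stateless bean, with JPA access through an injected EntityManager:
// PersonService.java
import java.util.List;

import javax.ejb.Local;

@Local
public interface PersonService {
    List<Person> findAll();
}

// PersonServiceBean.java
import java.util.List;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class PersonServiceBean implements PersonService {

    @PersistenceContext
    private EntityManager em;

    @Override
    public List<Person> findAll() {
        return em.createQuery("SELECT p FROM Person p", Person.class).getResultList();
    }
}
The web tier then injects PersonService with @EJB and never sees a Remote view.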

Related

Migrating from EJB2 to EJB3 EntityBean performance

Our project is designed in EJB 2.0.
We are not using any kind of EJB persistence methods in the BMP EntityBeans. In the SessionBeans we get a reference to the EntityHome object via the getEJBXXXXHome() method and then call home.findByPrimaryKey("") to get the EJB reference. Then we call the actual methods for the CRUD operations. In the CRUD operation methods our people have used plain JDBC API calls.
Now we are migrating to EJB3. As part of the migration from EJB 2.0 to EJB3 I am converting all my BMP EntityBeans to normal Java classes, i.e. there are no more entity beans. If the EJB container maintained a pool for the entity beans earlier, it won't be there now. It works normally when I test a single transaction on my local machine.
My concern is: will it affect the performance for multiple threads in production?
After changing the code, every call now creates one entity object. If 60k calls are made in just one hour, will that affect my server? How was this handled previously in EJB 2.0? Is there any way to handle it in the changed code (i.e. for the normal Java classes, as the entity bean concept no longer exists)?
Generally speaking, the overhead of object creation/collection is going to be lower than the overhead of whatever the EJB container was doing for your entities previously. I suspect a larger concern than object-creation overhead is round-trips to the database. Depending on your EJB container configuration, it's likely the container was optimizing the JDBC SQL and possibly caching the retrieved data (unrelated to object caching). You should design your application to minimize calls to the database and ensure you don't execute unnecessary queries.
Ultimately, I suspect only you are going to be able to assess the performance of your application on your application server on your hardware. I recommend following good programming practices to avoid egregious overhead, profile the result, and optimize from there rather than worrying about the performance up-front.
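For illustration, a hedged sketch of the "plain Java class with JDBC" shape described in the question (CustomerDao, Customer and the table/column names are assumptions, not your actual schema): the DataSource is obtained once and reused, and each call is a single query:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

public class CustomerDao {

    private final DataSource dataSource; // obtained once (e.g. from JNDI) and reused

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Rough equivalent of the old findByPrimaryKey(): one query, no per-call bean pooling needed.
    public Customer findByPrimaryKey(String id) throws SQLException {
        String sql = "SELECT id, name FROM customer WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? new Customer(rs.getString("id"), rs.getString("name")) : null;
            }
        }
    }

    public static class Customer {
        public final String id;
        public final String name;

        public Customer(String id, String name) {
            this.id = id;
            this.name = name;
        }
    }
}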

OSGI service vs. Singleton?

I am a beginner with OSGi and I am wondering if someone can enlighten me about the difference between creating an OSGi service vs. the singleton pattern. For example, suppose I have a Core bundle which provides IService, and multiple bundles that need to access it. I can:
register a service in the Core bundle, which the plugins can access
provide a singleton class, which provides the service
Using an OSGi service seems to be quite cumbersome; and since the plugins have to depend on Core anyway (to get the interface), what's the advantage of using an OSGi service?
Services are the connections between independent modules. Having modules depend on services (with their specification packages) can significantly reduce coupling between modules and thus provide much of the benefits of modularity.
I think the singleton pattern is used in two different ways: you just want a single object to be shared between a set of users (e.g. a Log Service), or you really can have only one instance (e.g. there is only one piece of hardware). In general, I see that most people in the enterprise software world talk about the former. However, experience shows that when projects grow, singletons become less of a true singleton and more of a shared object, or at least something that appears to be a shared object.
The nice thing in OSGi is that you can model both, and the clients of the "singleton" are oblivious to it, nor does it require any central configuration. The reason is that OSGi relies on the modules being in charge: registering a service is a local decision, as is listening to a service.
The power of services is not in their dynamics (they are cool though, especially during development); the essence of services is that they provide full local control inside the module without central configuration. Once you understand how powerful this is, there is no way back :-)
Lastly, OSGi services are not cumbersome, not since we have Declarative Services (DS) with annotations. Registering a service is now much simpler than creating a Spring bean: no XML, no central configuration:
// A component registered as an ISingleton service
@Component
public class MyImpl implements ISingleton {
    void doSingle() { ... }
}

// A component that uses the ISingleton component
@Component
public class MyConsumer {
    @Reference
    void setISingleton(ISingleton is) { ... }
}
... And the dynamics come largely for free ...
Short answer: if you don't -- and won't -- need the benefit of an OSGi service (e.g., dynamically-managed service implementations and service searches), then you don't need an OSGi service.
But there is more to consider here than whether or not the service would be cumbersome. Heck, OSGi itself can be considered cumbersome. Will another bundle need to provide an implementation of that class? Maybe not. Will the Core bundle ever shut down or otherwise be unable to provide an implementation on demand? Maybe.
To determine if a service is right for the class in question, read the run-down of the specific benefits of a service on the OSGi Alliance's What Is OSGi page. They have a very good explanation of how your singleton class may become more cumbersome than a service.
Good luck.
The proof of concept for my OSGi threading model led me to believe that every service is a singleton for its service consumers, as only one service object gets registered in the OSGi service registry (but you can override this behavior). So as far as programming is concerned, the behavior of a singleton class and an OSGi service is the same: your class-level variables are shared among the various service consumer calls.
I would say an OSGi service is a Singleton++.
But there are also differences.
OSGi gives you a separate classloader for each bundle, which is not possible with a plain singleton: all singleton classes are loaded by a single classloader. We can't have two classes with the same (fully qualified) name with a singleton, but this is possible in OSGi.
In certain situations we must be sure that a class is loaded only once (creating a Hibernate session factory, hdfc service initialization, POJO creation involving heavy initialization that is required only once). Now, if you are in a Java EE scenario, your singleton class sometimes gets loaded twice by two different classloaders, which results in the static block being executed twice; unnecessary work.
Such classloader problems are handled easily by OSGi (as you are a beginner, I suspect classloading itself will be a challenge for you over the next few days).
Another great feature provided by OSGi is updating a bundle.
Suppose you changed the code in your singleton class and now need to deploy the updated class in your running application. You essentially need to restart the system so that every classloader picks up the new version of the singleton. This is not required in OSGi: just update the bundle.
I would say that if you're designing for larger, enterprise-scale applications, or you need to write code for limited hardware (low memory, low computing power), go for OSGi; it is best at the extreme ends. For everything else, normal Java coding will work perfectly.
You can manage the life cycle of a service (deploy a new version, run multiple versions concurrently, etc.), but you can't manage the life cycle of a singleton without restarting the JVM (and even with a restart you can have only one version available at any point in time).

Different EARs using common services. Should I use remote calls, or package them as local?

I have multiple EJB3 EARs deployed on a JBoss server. One of them is an application containing common services exposed as remote. All the other EARs use those services via remote interfaces, and it seems to be really painful for performance.
What can I do to overcome this? Can I make those services @Local and package the jar into every single application so they can be used via @Local instead of @Remote?
According to the Java EE tutorial, the client of a @Local bean "must run in the same JVM as the enterprise bean it accesses."
So you should have no problem using local calls between different deployed applications on the same server.
Are you sure this is the cause of your performance problems, though?
I agree with you: you cannot inject an EJB session bean from another EAR, even in the same JVM, using a local interface.
If you have two EARs you must use the remote interface, injected with:
@EJB(lookup = "JNDI_BEAN_NAME")
I disagree. See this thread as it provides an excellent description of why you shouldn't: http://www.coderanch.com/t/79249/Websphere/Local-EJB-calls-separate-ear
The only way, normally, to make local calls between EARs is to make cross-classloader calls. Not pretty.
You can get around this by having a single classloader per server (versus one scoped per EAR). That is also a bad idea for security/isolation reasons.
You should use remote calls between EARs. Some Java EE implementations optimize these calls to be more efficient when the call stays within the same JVM.
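For illustration, a hedged sketch of the remote setup (all names, including the portable JNDI name, are assumptions; adjust the application/module parts to your deployment): the business interface lives in a shared API jar, the bean in the common-services EAR, and client EARs inject it by JNDI name:
// CommonService.java — in the shared API jar
package com.example;

import javax.ejb.Remote;

@Remote
public interface CommonService {
    String ping();
}

// CommonServiceBean.java — in the common-services EAR
package com.example;

import javax.ejb.Stateless;

@Stateless
public class CommonServiceBean implements CommonService {
    @Override
    public String ping() {
        return "pong";
    }
}

// ClientBean.java — in another EAR on the same server
package com.example.client;

import javax.ejb.EJB;
import javax.ejb.Stateless;

import com.example.CommonService;

@Stateless
public class ClientBean {

    // Remote injection by portable JNDI name; many containers short-circuit the call
    // when caller and callee end up in the same JVM.
    @EJB(lookup = "java:global/common-services/services-ejb/CommonServiceBean!com.example.CommonService")
    private CommonService commonService;

    public String check() {
        return commonService.ping();
    }
}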

Example use-cases for using Dependency Injection with the Play Framework

I am a big fan of Dependency Injection and the Play Framework, but am having trouble seeing how the two could be exploited together.
There are modules for Spring and Guice, but the way that Play works makes it hard for me to see how DI could be beneficial beyond some quite simple cases.
A good example of this is that Play expects JPA work to be done by static methods associated with the entity in question:
@Entity
public class Person extends Model {
    public static void delete(long id) {
        Person p = em().find(Person.class, id);
        em().remove(p);
    }
    // etc.
}
So there is no need for a PersonManager to be injected into controllers in the way it might for a Spring J2EE application. Instead a controller just calls Person.delete(x).
Obviously, DI is beneficial when there are interfaces with external systems, as the concrete implementation can be mocked for testing etc., but I don't see much benefit for a self-contained Play application.
Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?
I believe from this sentence you wrote:
"Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?"
that before answering the DI question I should note something: transactions are managed automatically by Play. If you check the model documentation you will see that a transaction is automatically created at the beginning of a request, and committed at the end. You can roll it back via JPA or it will be rolled back if an exception is raised.
I mention this because from the wording of your sentence I'm not sure if you are aware of this.
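To make that concrete, a hedged Play 1.x sketch (Accounts and Account are hypothetical; Account is assumed to extend play.db.jpa.Model with a public BigDecimal balance field): the transaction opened for the request commits at the end of the action unless an exception is thrown or JPA.setRollbackOnly() is called:
package controllers;

import java.math.BigDecimal;

import models.Account;
import play.db.jpa.JPA;
import play.mvc.Controller;

public class Accounts extends Controller {

    public static void transfer(long fromId, long toId, BigDecimal amount) {
        Account from = Account.findById(fromId);
        Account to = Account.findById(toId);
        from.balance = from.balance.subtract(amount);
        to.balance = to.balance.add(amount);
        from.save();
        to.save();

        if (from.balance.signum() < 0) {
            // Veto the commit: both updates above are rolled back with the request's transaction.
            JPA.setRollbackOnly();
            flash.error("Insufficient funds");
        }
        renderText("done");
    }
}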
Now, on DI itself, in my (not-so-extensive) experience with DI, I've seen it used mainly to:
Load the ORM (Hibernate) factory/manager
Load Service classes/DAOs into another class to work with them
Sure, there are more scenarios, but these probably cover most of the real usage. Now:
The first one is irrelevant to Play as you get access to your JPA object and transaction automatically
The second one is largely irrelevant too, as you mainly work with static methods in controllers. You may have some helper classes that need to be instantiated, and some may even belong to a hierarchy (common interface), so DI would be beneficial. But you could just as well create your own factory class (see the sketch below) and get rid of the DI jars.
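For instance, a minimal sketch of such a hand-rolled factory (ReportRenderer, PdfReportRenderer and ReportRendererFactory are hypothetical names used only for illustration):
public interface ReportRenderer {
    byte[] render(String reportName);
}

class PdfReportRenderer implements ReportRenderer {
    @Override
    public byte[] render(String reportName) {
        // Real rendering omitted; this is only a stand-in implementation.
        return new byte[0];
    }
}

final class ReportRendererFactory {

    private ReportRendererFactory() {
    }

    // Callers depend only on the interface; swap the concrete class here
    // (or read it from configuration) without touching any controller code.
    static ReportRenderer get() {
        return new PdfReportRenderer();
    }
}
A controller would then call ReportRendererFactory.get().render("monthly") and stay decoupled from the concrete implementation, without any DI container.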
There is another matter to consider here: I'm not so sure about Guice, but Spring is not only DI; it also provides a lot of extra functionality that depends on the DI module. So maybe you don't want to use DI in Play, but you want to take advantage of the Spring tools, and they will use DI, albeit indirectly (via XML configuration).
The problem, in my humble opinion, with Play's static initialization approach is that it makes testing harder. Once you tackle the HTTP-versus-object-orientation problem with static members and objects that carry the HTTP message data (request and response), you trade away loose coupling with the rest of your project classes in exchange for not having to create new instances for each request.
A good example of a different design is servlets: they also extend a base class, but they approach the problem with a single instance (by default; there are configurations that enable more instances).
I believe a mix of the two approaches might be better: having a singleton of each controller would give the same characteristics as a fully static class and would allow dependency injection of some kinds of objects, though not objects with request or session scope, since the controller would then need to be created for every new request. Moreover, it would improve testability by inverting the control of dependency injection, thus allowing arbitrary injection points.
Dependencies would be injected by the container or by a test, probably using mocks for the heavy stuff that most likely has already been tested before.
In my view, this static model pushes developers away from testing controllers, because extending FunctionalTest starts the application server, and you pay the price of heavy objects like repositories, services, crawlers, HTTP clients, etc. I don't want to wait for a lot of objects to be bootstrapped just to check whether some code was executed in the controller; tests should be quick and clear, so that developers love them as their programming assistant/guide.
DI is not the ultimate solution to use everywhere... Don't use DI just because you have it at hand... In Play, you don't need DI to develop controllers/models, etc., but sometimes it can be a nice design: IMO, you could use it if you have a service with a well-known interface that you would like to develop and test outside Play, and even test your Play project with just a dummy service in order NOT to depend on the full service implementation. In that case DI can be interesting: you plug the service loosely into Play. In fact, this is the original use case for DI, AFAIK...
I just wrote a blog post about setting up a Play Framework application with Google Guice. http://geeks.aretotally.in/dependency-injection-with-play-framework-and-google-guice
I see some benefits, especially when a component of your application requires different behavior based on a certain context or something like that. But I definitely believe people should be selective about what goes into a DI context.
It shows again that you should only use dependency injection if you really get a benefit from it. If you have complex services it's useful, but in many cases it's not. Read the chapter about models in the Play documentation.
To give you an example of where you can use DI with Play: perhaps you must perform a complex calculation, or you create a PDF with a report engine. There I think DI can be useful, especially for testing. There I think the guice-module and spring-module are useful and can help you.
Niels
As of a year and some change later, Play 2.1 now has support for dependency injection in controllers. Here's their demo project using Spring 3, which lays it out pretty clearly.
Edit: here's another example using Guice and Scala, if that's your poison.

Can remote stateless session bean references be cached in EJB3?

I am calling a remote stateless session bean from a J2SE application and would like to cache the reference to the session bean in order to reduce the cost of the lookup. Is this ok?
In EJB2 the ServiceLocator pattern was commonly used to cache lookups of remote resources, but EJB3 doesn't have the separate EJB Home objects (which were usually cached) and Remote objects.
Googling around, a common answer to this is to use EJB3 injection, but since I am doing a call to a remote EJB server from a J2SE client, I can't use injection.
Yes, they can be cached. But I don't know whether the behavior is defined for what happens when you hold a cached reference and the server is rebooted underneath it. You can test that scenario, but the behavior may vary with the container.
If the server goes away, your references become invalid.
As for caching during the normal lifecycle, this should be fine. I've done this for years, both in EJB2 and EJB3, and never had an issue. Generally I just have a static 'LookupServices' class that looks up the reference, or returns the existing one if it's already there, and stores it in a map.
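A minimal sketch of that lookup-and-cache approach for a standalone (J2SE) client (LookupServices is a hypothetical helper; the JNDI names and remote interfaces are whatever your server dictates):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class LookupServices {

    private static final ConcurrentMap<String, Object> CACHE = new ConcurrentHashMap<String, Object>();

    private LookupServices() {
    }

    // Usage (example names): MyRemoteService svc = LookupServices.lookup("SomeJndiName");
    @SuppressWarnings("unchecked")
    public static <T> T lookup(String jndiName) {
        Object proxy = CACHE.get(jndiName);
        if (proxy == null) {
            try {
                proxy = new InitialContext().lookup(jndiName);
            } catch (NamingException e) {
                throw new IllegalStateException("Lookup failed for " + jndiName, e);
            }
            Object existing = CACHE.putIfAbsent(jndiName, proxy);
            if (existing != null) {
                proxy = existing;
            }
        }
        return (T) proxy;
    }

    // If the server was restarted, cached proxies may be stale: drop them and look up again.
    public static void invalidate(String jndiName) {
        CACHE.remove(jndiName);
    }
}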
