Can remote stateless session bean references be cached in EJB3?

I am calling a remote stateless session bean from a J2SE application and would like to cache the reference to the session bean in order to reduce the cost of the lookup. Is this ok?
In EJB2, the ServiceLocator pattern was commonly used to cache lookups to remote resources, but EJB3 doesn't have separate EJB Home objects (which were usually cached) and Remote objects.
Googling around, a common answer is to use EJB3 injection, but since I am calling a remote EJB server from a J2SE client, I can't use injection.

Yes, they can be cached. However, I don't know whether the behavior is defined for the case where you hold a cached reference and the server is rebooted underneath it. You can test that scenario, but the behavior may vary with the container.

If the server goes away, your references become invalid.
As for caching during the normal lifecycle, this should be fine. I've done this for years, in both EJB2 and EJB3, and never had an issue. Generally I just have a static 'LookupServices' class that looks up the reference (the home, in EJB2 terms), or returns the existing one if it's already there, and stores it in a map.
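A minimal sketch of that caching pattern (the class name and JNDI name are illustrative; in a real J2SE client the supplier would wrap a `new InitialContext().lookup(jndiName)` call):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical 'LookupServices' helper: cache remote proxies by JNDI name
// so the (relatively expensive) lookup happens only once per name.
final class LookupServices {
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    // In a real client the Supplier would wrap new InitialContext().lookup(jndiName);
    // it is parameterized here so the caching behavior is easy to exercise.
    @SuppressWarnings("unchecked")
    static <T> T lookup(String jndiName, Supplier<T> lookupOnce) {
        return (T) CACHE.computeIfAbsent(jndiName, name -> lookupOnce.get());
    }

    // Drop a cached reference, e.g. after a remote call failed because the
    // server was rebooted and the cached stub became invalid.
    static void invalidate(String jndiName) {
        CACHE.remove(jndiName);
    }
}
```

The `invalidate` method covers the reboot scenario from the answer above: on a remote failure, evict the stale stub and let the next call re-run the lookup.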

Stateful Session Bean lifecycle

I have a stateful session bean that initializes a synchronized list; I add products to the list and check it, and it works (all during the same session).
Is it normal that when I undeploy my application and then deploy it again, I lose all the data saved in my bean?
It is the desired behavior; just imagine what would happen to an existing SFSB if I changed the collection type from a list to a map and redeployed the app.
Actually, the hot deploy feature is outside the scope of the EJB specification, so session beans may behave differently depending on which application server you use. For example, in WebLogic 8, after a redeploy of any type of app module, all SFSB references are lost (the stubs are discarded).
Personally, I prefer to restart the production server after a hot deploy, as there may always be some memory leaks (caused by previous classloaders).

When to use a Local EJB Interface

As per Oracle docs here
Local Clients: A local client has these characteristics.
It must run in the same application as the enterprise bean it accesses.
It can be a web component or another enterprise bean.
To the local client, the location of the enterprise bean it accesses is not transparent.
As for: "It must run in the same application as the enterprise bean it accesses."
When it says 'same application', does it mean the EJB client and the EJB bean must be part of the same jar file? Or the same EAR file? If they are part of the same jar file, why even use an EJB in the first place? We could just import the EJB bean class in the client and use it like a utility class.
It means the same EAR.
Regardless, the only reason to ever use EJB is because you want to delegate responsibility to the container (transactions, security, interceptors, resource injection, asynchronous methods, timers, etc.). There's nothing to stop you from implementing all the qualities of service yourself (e.g., Spring did it), but by using EJB, you don't have to worry about getting all the details right, and (in theory) you make it easier for many people to develop an application because they share a common understanding.

How to share bean INSTANCE across war in SPRING?

I want to share a singleton bean across multiple WARs. I know about sharing an ApplicationContext using the parentContextKey attribute (example: http://blog.springsource.org/2007/06/11/using-a-shared-parent-application-context-in-a-multi-war-spring-application/).
But that way, an instance of the bean is created multiple times (for 2 WARs, 2 instances). I want only 1 instance across the 2 WARs.
Put another way: if I set some value in a POJO, it should be accessible in the other WAR.
The reason I need this is that there are some beans (like a Hibernate SessionFactory or a DataSource, which are expensive) that are created multiple times (n instances for n WARs), whereas I want to use the same instance instead of creating one per WAR.
Can anyone provide me a solution for this?
You could achieve this by binding the objects into the global JNDI tree. That means that both WARs would have references to an object looked up in JNDI.
Hibernate allows you to use the hibernate.session_factory_name property (this may well be a good starting point). Data sources should already be looked up from JNDI.
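As a sketch of that starting point, hibernate.session_factory_name tells Hibernate to bind the SessionFactory into JNDI when it is built, so another deployment can look it up instead of building its own (the JNDI name below is illustrative, and the exact global-tree syntax is server-dependent):

```properties
# hibernate.properties (or the equivalent <property> entry in hibernate.cfg.xml)
# Binds the built SessionFactory into JNDI under this name; other deployments
# can then obtain it with a plain JNDI lookup instead of building a second one.
hibernate.session_factory_name=hibernate/SessionFactory
```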
One thing: I would not class a session factory or a data source as expensive, so you may well be saving a minuscule amount of memory in exchange for a lot of additional complexity. I would ask myself whether this is worth the additional maintenance headaches.
Spring provides a way to expose any bean (service), and these beans can be accessed from any other web application or standalone application.
Please refer to Remoting and Web Services using Spring for more details.

JSF + Stateless EJB performance

I have a JSF 2.0 application running on GlassFish v3. It has EJBs that serve database data via JPA for the main application's sessions. Even on a non-IDE app server, the EJB calls are very slow: between some pages, the user has to wait over 10 seconds to get to the next page.
The EJB runs on the same application server, and only the Local interface is used. The EJB is injected via the @EJB annotation.
Any clues?
Thanks in advance,
Daniel
EDIT: See my answer for the solution.
It's hard to tell without profiling the code and/or unit-testing every part of the business logic to see which step exactly takes that much time. My initial suspicion would be DB performance. Maybe the table contains millions of records and is poorly indexed, which causes a simple SELECT to take ages. Maybe the network bandwidth is poor, which makes transferring the data take more time.
At this point, with the as far given little information, it can be everything. I'd just profile it.
In the debugger, the app is stuck at the EJB call for several seconds; after that it jumps inside the EJB method, which runs fine.
You need to provide more details:
Do you use Local or Remote interfaces? Is the client (the webapp) on a remote machine?
How do you access the EJBs? Are they injected? Do you perform a JNDI lookup?
What did you measure? (either during profiling or using System.nanoTime() at various points)
Do the measures really show that most of the time is spent in the invocation itself?
Answering these questions should help to identify where to look and possible causes.
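To act on the System.nanoTime() suggestion above, a tiny hypothetical helper can bracket any suspect call and report the elapsed wall-clock time:

```java
// Hypothetical micro-measurement helper: bracket a suspect invocation with
// System.nanoTime() and return the elapsed wall-clock time in milliseconds.
final class CallTimer {
    static long timeMillis(Runnable call) {
        long t0 = System.nanoTime();
        call.run(); // e.g. the injected EJB's business method
        return (System.nanoTime() - t0) / 1_000_000;
    }
}
```

Sprinkling such measurements around the EJB invocation, the JPA query, and the JSF rendering phases quickly shows where the 10 seconds actually go.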
The problem was that both Local and Remote interfaces had been implemented, but only the Remote interface was used, even though there was no need for it. Both interfaces had the same methods, which is something to avoid according to the NetBeans warning message I got:
When a session bean has remote as well as local business interface, there should not be any method common to both the interfaces.
In more detail:
The invocation semantics of a remote business method are very different from those of a local business method. For this reason, when a session bean has a remote as well as a local business interface, there should not be any method common to both interfaces. The example below is an incorrect use case:
@Remote public interface I1 { void foo(); }
@Local public interface I2 { void foo(); }
@Stateless public class Foo implements I1, I2 { ... }
So the solution was to remove the Remote interface, and set the application logic to use the Local interface.
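A plain-Java sketch of why the shared method is a problem (the EJB annotations are moved into comments so the snippet compiles standalone; all names are illustrative): one implementation body ends up backing both views, even though their invocation semantics differ.

```java
interface RemoteView { String foo(); } // would carry @Remote: pass-by-value call semantics
interface LocalView  { String foo(); } // would carry @Local: pass-by-reference call semantics

// A single bean class implements both views, so one foo() body serves two
// interfaces with very different semantics. That ambiguity is exactly what
// the NetBeans warning discourages, and why the fix was to keep only the
// Local interface for a co-located client.
class FooBean implements RemoteView, LocalView {
    @Override public String foo() { return "same body serves both views"; }
}
```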

Unity performance considerations with container controlled lifetime - Is there any reflection lag with multiple Resolve<T>() calls?

Are there any reflection performance considerations when repeatedly calling container.Resolve&lt;T&gt;() after a resolution has already been established?
I'm using it in an MVC controller to resolve my data service, so it will be called on every HTTP request. I'm storing the container instance in Application state, and I'm using container controlled lifetime so it maintains a singleton instance of my resolved class. My assumption is that while the container is alive and has created a new instance of the service, it will not need to use reflection on subsequent calls to resolve it.
I'm considering keeping a reference to the resolved class instead if performance of Resolve<T>() is an issue. But with the singleton lifetime setup, it seems like I would be duplicating something that's already built-in.
While not directly answering your question, Torkel Ödegaard's IoC container benchmarks suggest that you will not see a major performance hit related to resolving dependencies.
