I'm building a JEE6 application with performance and scalability at the forefront of my mind.
Business logic and the JPA2 facade are held in stateless session beans (EJB 3.1). As of right now, the SLSBs implement only @Remote interfaces. When a bean needs to access another bean, it does so via RMI.
My reasoning behind this is the assumption that, once the application runs on a bunch of clustered application servers, the RMI-part allows the execution to be distributed across the whole cluster automagically.
Is that a correct assumption?
I'm fine with dealing with the downsides of that (entities becoming detached from their EntityManager session, pass-by-value semantics), at least I think so. But I am wondering if constant remote invocation isn't adding more load than necessary.
The EJB specification doesn't specify how clustering should be achieved, so this will depend on the particular implementation used. Actually, the EJB specifications are deliberately written not to make assumptions about the deployment: they don't mandate any support for clustering, but are written in a way that makes it possible (and a lot of restrictions in the EJB model stem from potential clustering issues, e.g. access to the file system). The implementer is then free to support clustering or not, and still comply with the spec.
In Glassfish, the reference to the remote EJB does the distribution itself. See my answer here for more information. Each request could potentially be dispatched to a different node. That's probably the way most implementations work. So I would say your assumption is correct.
I do hope, however, that they optimize the case where one EJB calls another EJB, and try to dispatch the invocation on the same node whenever possible. That will depend on whether the deployment is homogeneous or not (all nodes have the same beans, or not). Again, the spec is a bit vague regarding such points. But I guess that most deployments are homogeneous in practice: the same EAR is deployed on all nodes.
Regarding the performance overhead of remote vs. local calls, I did some measurements once (on Glassfish). See my answer here. Inter-EJB calls within the same EAR through the remote interface were about 3x slower than local calls. That sounds big, but we are speaking of milliseconds, so the relative overhead depends on what the methods really do. I don't know the performance of other app servers.
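If that remote-call overhead does matter for bean-to-bean calls, one option is to expose both a local and a remote view on the same SLSB, so co-located callers skip RMI entirely while remote clients keep the distributed view. A minimal sketch, with invented names and assuming EJB 3.1 and the javax.ejb API (in a real project each type would be public and in its own file):

```java
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Hypothetical business interfaces, collapsed into one file only to keep the sketch short.
@Local
interface OrderServiceLocal {
    void placeOrder(long customerId, long productId);
}

@Remote
interface OrderServiceRemote {
    void placeOrder(long customerId, long productId);
}

// One stateless bean exposes both views: co-located callers inject the local view
// (pass-by-reference, no RMI); remote clients use the remote view.
@Stateless
public class OrderServiceBean implements OrderServiceLocal, OrderServiceRemote {
    @Override
    public void placeOrder(long customerId, long productId) {
        // business logic / JPA access would go here
    }
}
```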
Hope it helps.
problem statement
Let's say I want to implement a simple key-value storage, which provides basic CRUD operations. The only requirement is that all modifications should persist between application launches. I'm not concerned about the particular persistence mechanism, as it may change during development, or even be different depending on the platform.
the question
How should I approach testing such a library / class / module (whichever it becomes in the end)?
All the answers I found were focused on testing a particular database solution, but that's what I want to avoid. I'm only concerned about the fact that changes are being persisted, not about how they're persisted.
Solutions I considered:
extract the actual persistence implementation, test each implementation separately, and only test whether the abstraction layer calls the correct methods
The problem I see with this approach is the large amount of code for a very simple task, and the fact that I'm still concerned about the particular persistence mechanism in the end. This may complicate development, especially for something as simple as a key-value storage.
write a test that actually launches the application multiple times and checks whether the data changes are persisted between launches
That would test exactly what I need, but such a test would probably be expensive, and not necessarily easy to write.
Test only whether the methods work within a single process, and do not test persistence at all
Well, the whole point of the library is persistence. Say, some user settings, or a game save. Testing whether it works in memory doesn't test the real problem.
Maybe some other solution I didn't think of?
I've read through most of the topics related to persistence and TDD here, and on other sites, but all I could find focuses on specific persistence mechanism.
somewhat related, but doesn't touch on the testing subject:
How many levels of abstraction do I need in the data persistence layer?
A persistence layer is a port of your application. As such, your approach should be to write an adapter to that port, and test that adapter using an integration test.
The test itself doesn't need to spin up the adapter more than once - testing that things actually persist would be testing the persistence mechanism itself, and that's not your concern. At some point, you'll have to assume things work, and they have their own tests to make sure that's correct. A typical write-and-then-read-back test would do for most cases.
Once the adapter exists, you can use several techniques for writing other tests - mock the adapter when it is a collaborator for other units, or use an in-memory implementation of the adapter (using a contract test to see that the in-memory implementation is the same as the other one).
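To make that concrete, here is a rough sketch of what the port, a throwaway in-memory adapter, and a contract test could look like (JUnit 5 assumed, all names invented). A file- or database-backed adapter would get its own subclass of the same contract test, run as an integration test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import org.junit.jupiter.api.Test;

// The "port": only what the application needs from persistence, nothing more.
interface KeyValueStore {
    void put(String key, String value);
    Optional<String> get(String key);
    void remove(String key);
}

// A trivial in-memory adapter, handy as a stand-in when testing other units.
class InMemoryKeyValueStore implements KeyValueStore {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public Optional<String> get(String key)   { return Optional.ofNullable(data.get(key)); }
    public void remove(String key)            { data.remove(key); }
}

// The contract test: every adapter (in-memory, file-based, SQLite, ...) extends it,
// so all implementations are held to the same behaviour.
abstract class KeyValueStoreContract {
    protected abstract KeyValueStore createStore();

    @Test
    void writtenValueCanBeReadBack() {
        KeyValueStore store = createStore();
        store.put("volume", "11");
        assertEquals(Optional.of("11"), store.get("volume"));
    }
}

class InMemoryKeyValueStoreTest extends KeyValueStoreContract {
    @Override
    protected KeyValueStore createStore() { return new InMemoryKeyValueStore(); }
}
```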
Is it sensible to use Spring in the server side of an in memory data grid based application?
My gut feeling tells me that it is nonsense in a low latency high performance system. A colleague of mine is insisting on including Spring in it. What are the pros and cons of such inclusion?
My position is that Spring is OK to be used in the client, but it is too heavy for the server: it brings too many dependencies and is one more leaky abstraction to think of.
Data Grid systems are memory and I/O intensive in general. Using Spring does not affect that (you may argue that Spring creates a lot of beans but with proper Garbage Collection tuning this is not a problem).
On the other hand, using Spring (or any other DI framework) helps you structure and test your code.
So if you are implementing some sort of server based on a data grid, pay attention to properly tuning GC and the socket settings in your OS (memory buffers and socket buffer sizes). Those will give you much more benefit than cutting out DI.
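To illustrate the "structure and test" point: with plain constructor injection, the grid access sits behind an interface that a unit test can stub out, and whether Spring or any other container does the wiring is irrelevant to the hot path. A minimal sketch with invented names:

```java
// Hypothetical abstraction over whatever grid product is in use.
interface GridClient {
    byte[] read(String key);
    void write(String key, byte[] value);
}

// Plain constructor injection: Spring wires the real client in production,
// while a unit test can pass a stub or mock instead. No framework types leak in.
class QuoteService {
    private final GridClient grid;

    QuoteService(GridClient grid) {
        this.grid = grid;
    }

    byte[] latestQuote(String symbol) {
        return grid.read("quote:" + symbol);
    }
}
```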
First, I'm surprised by the "leaky abstraction" comment. I've never heard anyone criticize Spring for this. In fact, it's just the opposite. Spring removes the implementation details of infrastructure such as data grids from your application code and provides a consistent and familiar programming model, allowing you to focus on business logic. Spring does a lot to enhance configuration and access to data grids, especially Gemfire, and generally does not create any runtime overhead per se. During initialization of a Spring application, Spring uses tools like reflection and AOP internally which may increase the start up time of an application, but this has no impact on runtime performance. Spring has been proven in many high-throughput, low-latency production applications. In extreme cases, things like network latency and serialization, concerns external to Spring, are normally the biggest factors affecting performance.
"Spring brings in too many dependencies" is a common complaint, but is a fallacy. I would say Spring brings in the exact right amount of dependencies for what it needs to do. Additionally, Spring Boot starters and the platform BOM do a lot to simplify dependency management so you don't need to worry about version incompatibilities or explicitly declaring common dependencies. I'll have to side with your colleague on this one.
Our project is designed in EJB 2.0.
We are not using any kind of EJB persistence methods in the BMP entity beans. In the session beans we get a reference to the entity home object using a getEJBXXXXHome() method, and then call home.findByPrimaryKey("") to get the EJB reference. Then we call the actual methods for CRUD operations. Inside those CRUD methods our people have used plain JDBC API calls.
Now we are migrating to EJB3. As part of the migration from EJB 2.0 to EJB3, I am converting all my BMP entity beans to normal Java classes, i.e. there are no more entity beans. If the EJB container maintained a pool for the entity beans earlier, it won't be there now. It works normally when I test it on my local machine for one transaction.
My concern is: will it affect the performance for multiple threads in production?
After changing the code, every call now creates one entity object. If 60k calls are made in just one hour, will that affect my server? How was this handled previously in EJB 2.0? Is there any way to handle it in the changed code (i.e. for normal Java classes, since there is no longer an entity-bean pool)?
Generally speaking, the overhead of object creation/collection is going to be lower than the overhead of whatever the EJB container was doing for your entities previously. I suspect a larger concern than object creation overhead is round-trips to the database. Depending on your EJB container configuration, it's likely the container was optimizing the JDBC SQL and possibly caching the retrieved data (unrelated to object caching). You should likely design your application to minimize calls to the database and ensure you don't execute unnecessary queries.
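For example (purely hypothetical names, plain JDBC assumed), the classes that replaced your entity beans can hold on to data they have already read within one request or transaction instead of re-querying it, which recovers some of what the container's entity caching used to do:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical replacement for a former BMP entity bean. The per-request cache only
// illustrates avoiding repeated identical SELECTs now that the container's entity
// caching is gone; it is not a substitute for profiling your real workload.
class AccountDao {
    private final Connection connection;               // obtained from a pooled DataSource
    private final Map<Long, String> requestCache = new HashMap<>();

    AccountDao(Connection connection) {
        this.connection = connection;
    }

    String findOwnerName(long accountId) throws SQLException {
        String cached = requestCache.get(accountId);
        if (cached != null) {
            return cached;                              // same row already read in this request
        }
        try (PreparedStatement ps =
                 connection.prepareStatement("SELECT owner_name FROM account WHERE id = ?")) {
            ps.setLong(1, accountId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                String name = rs.getString(1);
                requestCache.put(accountId, name);
                return name;
            }
        }
    }
}
```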
Ultimately, I suspect only you are going to be able to assess the performance of your application on your application server on your hardware. I recommend following good programming practices to avoid egregious overhead, profile the result, and optimize from there rather than worrying about the performance up-front.
When people mention that Spring is a lightweight container compared to other frameworks, what do they mean? That it occupies less memory in the system, or that it does not have the start/stop operations that we have for EJB containers and doesn't need a special container?
What makes Spring a lightweight container?
Whether it is "lightweight" or "heavyweight", it is all about comparison. We consider Spring to be lightweight when we are comparing to normal J2EE container. It is lightweight in the sense of extra memory footprint for the facilities provided (e.g. Transaction Control, Life Cycle, Component dependency management)
However, there are sometimes other criteria for comparing the "weight" of a container, e.g. intrusiveness in design and implementation, facilities provided, etc.
Ironically, Spring is sometimes treated as a heavyweight container when compared to other POJO-based containers, like Guice and Plexus.
Spring calls itself 'lightweight' because you don't need all of Spring to use part of it. For example, you can use Spring JDBC without Spring MVC.
Spring provides various modules for different purposes; you can just inject dependencies according to your required module. That is, you don't need to download or inject all dependencies or all JARs to use a particular module.
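For example, with only spring-jdbc (plus its core dependencies) and an H2 driver on the classpath, something like this runs as a plain main() program, with no Spring MVC and no application server; the table and values are made up for illustration:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class JdbcOnlyExample {
    public static void main(String[] args) {
        // In-memory H2 database, just to keep the example self-contained.
        DriverManagerDataSource dataSource = new DriverManagerDataSource(
                "jdbc:h2:mem:demo", "sa", "");

        JdbcTemplate jdbc = new JdbcTemplate(dataSource);
        jdbc.execute("CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR(50))");
        jdbc.update("INSERT INTO person VALUES (?, ?)", 1, "Ada");

        Integer count = jdbc.queryForObject("SELECT COUNT(*) FROM person", Integer.class);
        System.out.println("rows: " + count);
    }
}
```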
If you want to run a Java EE application, you can't just create a small application that will run on its own. You will need a Java EE application server to run your application, such as Glassfish, WebLogic or WebSphere. Most application servers are big and complex pieces of software that are not trivial to install or configure.
You don't need such a thing with Spring. You can use Spring dependency injection, for example, in any small, standalone program.
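A minimal sketch of that, with invented bean names and assuming only spring-context on the classpath; the Greeter is a plain POJO with no framework interfaces to implement:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class StandaloneDiExample {

    // A plain POJO: no framework interfaces to implement, no special container needed.
    static class Greeter {
        String greet(String name) { return "Hello, " + name; }
    }

    @Configuration
    static class AppConfig {
        @Bean
        Greeter greeter() { return new Greeter(); }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class);
        System.out.println(ctx.getBean(Greeter.class).greet("world"));
        ctx.close();
    }
}
```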
I think "lightweight" is mostly a buzz-word. It's meaning is highly subjective and based on context. It can mean "low memory footprint", it can be low execution overhead, low start-up overhead. People also use it to differentiate between some perceived level of complexity and/or learning-curve. In any case, it's assuredly relative as there is no defined point on any scale where "light" becomes "heavy" in terms of "weight".
I personally think it's a dangerous word since it has no real, quantifiable meaning. It's something people throw into architecture proposals to beef up the "pro" section of a certain framework they want to use anyway. If you see or hear it being used in any such situation, it's a perfect opportunity to ask "what does that mean?". If you get an angry or frustrated response (combined with rolling of eyes and shaking of head), it means that the person has decided on a certain architecture, but hasn't managed to formulate coherent or objective reasons for it.
EDIT: not sure I would categorize Spring as a "container" either, but that's a similar apples-and-oranges discussion. I'd call it a framework.
Spring is lightweight because other J2EE containers, especially EJB 2.1, require more configuration, can involve a lot of do-nothing boilerplate code, and have a complex directory structure for packaging applications; overall, they take extra memory. Spring, on the other hand, minimizes all of these things, so it is lightweight.
I think one can also say that Spring is lightweight because it uses POJOs (plain old Java objects). A POJO class is not required to implement or extend technology-specific APIs (interfaces or classes), and is not bound to any technology-specific API.
What are the advantages and disadvantages of the Session Façade Core J2EE Pattern?
What are the assumptions behind it?
Are these assumptions valid in a particular environment?
Session Facade is a fantastic pattern - it is really a specific version of the Business Facade pattern. The idea is to tie up business functionality into discrete bundles - such as TransferMoney(), Withdraw(), Deposit()... So that your UI code is accessing things in terms of business operations instead of low level data access or other details that it shouldn't have to be concerned with.
Specifically with the Session Facade - you use a Session EJB to act as the business facade - which is nice because then you can take advantage of all the J2EE services (authentication/authorization, transactions, etc)...
Hope that helps...
The main advantage of the Session Facade pattern is that you can divide up a J2EE application into logical groups by business functionality. A Session Facade will be called by a POJO from the UI (i.e. a Business Delegate), and have references to appropriate Data Access Objects. E.g. a PersonSessionFacade would be called by the PersonBusinessDelegate and then it could call the PersonDAO. The methods on the PersonSessionFacade will, at the very least, follow the CRUD pattern (Create, Retrieve, Update and Delete).
Typically, most Session Facades are implemented as stateless session EJBs. Or, if you're in Spring land using AOP for transactions, you can create a service POJO whose methods can be the join points for your transaction manager.
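A bare-bones sketch of that facade in EJB 3 style (the Person and PersonDAO stand-ins below exist only to keep the example self-contained; in practice they would be your real domain class and DAO):

```java
import javax.ejb.Stateless;
import javax.inject.Inject;

// Minimal stand-ins so the sketch compiles on its own.
class Person { long id; String name; }

interface PersonDAO {
    void create(Person p);
    Person findById(long id);
    void update(Person p);
    void delete(long id);
}

// The PersonSessionFacade described above: a stateless session bean exposing
// coarse-grained CRUD operations and delegating to the DAO. Each business
// method runs in a container-managed transaction by default.
@Stateless
public class PersonSessionFacade {

    @Inject
    private PersonDAO personDAO;

    public void createPerson(Person p)    { personDAO.create(p); }
    public Person retrievePerson(long id) { return personDAO.findById(id); }
    public void updatePerson(Person p)    { personDAO.update(p); }
    public void deletePerson(long id)     { personDAO.delete(id); }
}
```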
Another advantage of the SessionFacade pattern is that any J2EE developer with a modicum of experience will immediately understand you.
Disadvantages of the SessionFacade pattern: it assumes a specific enterprise architecture that is constrained by the limits of the J2EE 1.4 specification (see Rod Johnson's books for these criticisms). The most damaging disadvantage is that it is more complicated than necessary. In most enterprise web applications, you'll need a servlet container, and most of the stress in a web application will be at the tier that handles HttpRequests or database access. Consequently, it doesn't seem worthwhile to deploy the servlet container in a separate process space from the EJB container. I.e. remote calls to EJBs create more pain than gain.
Rod Johnson claims that the main reason you'd want to use a Session Facade is if you're doing container-managed transactions - which aren't necessary with more modern frameworks (like Spring).
He says that if you have business logic - put it in the POJO. (Which I agree with - I think it's a more object-oriented approach - rather than implementing a session EJB.)
http://forum.springframework.org/showthread.php?t=18155
Happy to hear contrasting arguments.
It seems that whenever you talk about anything J2EE related - there are always a whole bunch of assumptions behind the scenes - which people assume one way or the other - which then leads to confusion. (I probably could have made the question clearer too.)
Assuming (a) we want to use container-managed transactions in a strict sense through the EJB specification - then
Session facades are a good idea - because they abstract away the low-level database transactions to be able to provide higher level application transaction management.
Assuming (b) that you mean the general architectural concept of the session façade - then
Decoupling services and consumers and providing a friendly interface over the top of this is a good idea. Computer science has solved lots of problems by 'adding an additional layer of indirection'.
Rod Johnson writes "SLSBs with remote interfaces provide a very good solution for distributed applications built over RMI. However, this is a minority requirement. Experience has shown that we don't want to use distributed architecture unless forced to by requirements. We can still service remote clients if necessary by implementing a remoting façade on top of a good co-located object model." (Johnson, R "J2EE Development without EJB" p119.)
Assuming (c) that you consider the EJB specification (and in particular the session façade component) to be a blight on the landscape of good design then:
Rod Johnson writes
"In general, there are not many reasons you would use a local SLSB at all in a Spring application, as Spring provides more capable declarative transaction management than EJB, and CMT is normally the main motivation for using local SLSBs. So you might not need th EJB layer at all. " http://forum.springframework.org/showthread.php?t=18155
In an environment where performance and scalability of the web server are the primary concerns - and cost is an issue - the session facade architecture looks less attractive; it can be simpler to talk directly to the database (although this is more about tiering).