Spring caching implementation - spring-boot

I am exploring the Spring caching facility and have a few questions about it.
First, should caching be applied at the service method level or the DAO method level, where the service method calls the DAO method?
Second, how should I keep cached data from going stale?

IMO, the answer to both questions is "it depends".
Technically, as far as Spring is concerned, cache annotations applied on both the service and the DAO will work; I don't think there is any difference, so it boils down to the concrete use case.
For example, if you "logically" plan to provide a cacheable abstraction of what should be calculated as the result of some computational process done on the server, you had better go with caching at the service level.
If, on the other hand, you have a DAO method that looks like Something getSomethingById(id) and you would like to avoid relatively expensive calls to the underlying database, you can provide a cache at the level of the DAO. Having said that, it probably won't be useful to apply caching to methods like List<Something> fetchAll() or List<Something> fetchAllByFilter(). If you're working with JPA (implemented with Hibernate), it has its own cache abstraction; that's beyond the scope of the question, but it's something you should be aware of.
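For instance, here is a minimal sketch of the DAO-level option (the Something entity, cache name, and update method are hypothetical), assuming Spring's cache abstraction is enabled with @EnableCaching:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Repository;

@Repository
public class SomethingDao {

    @PersistenceContext
    private EntityManager em;

    // The database is hit only on a cache miss; afterwards the entry is served from the cache.
    @Cacheable(cacheNames = "somethings")
    public Something getSomethingById(long id) {
        return em.find(Something.class, id);
    }

    // Evicting on writes keeps the cached copy from going stale.
    @CacheEvict(cacheNames = "somethings", key = "#something.id")
    public Something update(Something something) {
        return em.merge(something);
    }
}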
There are plenty of tutorials available on the internet; some illustrate the service-based approach, some annotate the DAO's methods, but these are only simple examples. In the real world you'll have to make the decision yourself.
Now for the second question. In general, caching makes sense if your data doesn't change much, so first off, if it changes often, caching is probably not appropriate/relevant for the use case.
Other than that, there are many techniques:
Cache data eviction (usually time-based). See this tutorial, and the sketch after this list.
Some kind of messaging system that sends a message about a cache entry change. This one is especially useful if you have a distributed application and keep the cache in-memory only. When a message arrives you might opt for "cache replication" or for wiping the cache entirely, so that it will eventually be refilled with fresh data.
Using distributed cache technologies like Hazelcast or Redis instead of in-memory caching, so that the consistency of the cached data is guaranteed by the caching provider.
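As a sketch of the first, time-based technique, here is what eviction could look like with Spring's cache abstraction and Caffeine as the provider (the cache name, TTL, and size are made-up values):

import java.time.Duration;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager("somethings");
        // Entries are dropped 10 minutes after being written, which bounds how stale they can get.
        manager.setCaffeine(Caffeine.newBuilder()
                .expireAfterWrite(Duration.ofMinutes(10))
                .maximumSize(1_000));
        return manager;
    }
}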
I would also like to recommend this tutorial - the speaker talks about different aspects of caching implementation, and I think it's really relevant to your question.

Related

Is it problematic that Spring Data REST exposes entities via REST resources without using DTOs?

In my limited experience, I've been told repeatedly that you should not pass entities around to the front end or via REST, but should instead use a DTO.
Doesn't Spring Data REST do exactly this? I've looked briefly into projections, but those seem to just limit the data that is being returned, and a POST method still expects an entity as a parameter when saving to the database. Am I missing something here, or am I (and my coworkers) incorrect in that you should never pass around an entity?
tl;dr
No. DTOs are just one means to decouple the server side domain model from the representation exposed in HTTP resources. You can also use other means of decoupling, which is what Spring Data REST does.
Details
Yes, Spring Data REST inspects the domain model you have on the server side to reason about what the representations for the resources it exposes will look like. However, it applies a couple of crucial concepts that mitigate the problems a naive exposure of domain objects would bring.
Spring Data REST looks for aggregates and by default shapes the representations accordingly.
The fundamental problem with the naive "I throw my domain objects in front of Jackson" approach is that, from the plain entity model, it's very hard to reason about sensible representation boundaries. Entity models derived from database tables in particular tend to connect virtually everything to everything. This stems from the fact that important domain concepts like aggregates are simply not present in most persistence technologies (read: especially in relational databases).
However, I'd argue that in this case "don't expose your domain model" acts on the symptoms rather than the core of the problem. If you design your domain model properly, there's a huge overlap between what's beneficial in the domain model and what a good representation looks like to effectively drive that model through state changes. A couple of simple rules:
For every relationship to another entity, ask yourself: couldn't this rather be an id reference? By using an object reference you pull a lot of the semantics of the other side of the relationship into your entity. Getting this wrong usually leads to entities referring to entities referring to entities, which is a problem on a deeper level. On the representation level this allows you to cut off data, cater to consistency scopes, etc. (see the sketch after this list).
Avoid bi-directional relationships, as they're notoriously hard to get right on the update side of things.
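A brief sketch of the first rule, using a hypothetical Order/Customer model - the aggregate holds an id reference instead of pulling the whole related entity in:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "orders") // "order" is a reserved word in SQL
public class Order {

    @Id
    @GeneratedValue
    private Long id;

    // An id reference instead of "@ManyToOne Customer customer": the Customer
    // aggregate stays on its side of the boundary, and the representation can
    // render this as a plain id or a link.
    private Long customerId;

    private String status;
}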
Spring Data REST does quite a few things to actually transfer those entity relationships into the proper mechanisms on the HTTP level: links in general and more importantly links to dedicated resources managing those relationships. It does so by inspecting the repositories declared for entities and basically replaces an otherwise necessary inlining of the related entity with a link to an association resource that allows you to manage that relationship explicitly.
That approach usually plays nicely with the consistency guarantees described by DDD aggregates on the HTTP level. PUT requests don't span multiple aggregates by default, which is a good thing as it implies a scope of consistency of the resource matching the concepts of your domain.
There's no point in forcing users into DTOs if that DTO just duplicates the fields of the domain object.
You can introduce as many DTOs for your domain objects as you like. In most of the cases, the fields captured in the domain object will reflect into the representation in some way. I have yet to see the entity Customer containing a firstname, lastname and emailAddress property, and those being completely irrelevant in the representation.
The introduction of DTOs by no means guarantees decoupling. I've seen way too many projects where they were introduced for cargo-culting reasons, simply duplicated all fields of the entity backing them, and thereby just caused additional effort because every new field had to be added to the DTOs as well. But hey, decoupling! Not. ¯\_(ツ)_/¯
That said, there are of course situations where you'd want to slightly tweak the representation of those properties, especially if you use strongly typed value objects, e.g. an EmailAddress (good!), but still want to render them as a plain String in JSON. But by no means is that a problem: Spring Data REST uses Jackson under the covers, which offers you a wide variety of means to tweak the representation - annotations, mixins to keep the annotations outside your domain types, custom serializers, etc. So there is a mapping layer in between.
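For illustration, a minimal sketch of the custom-serializer route (the EmailAddress value type is hypothetical):

import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;

public class EmailAddressSerializer extends JsonSerializer<EmailAddress> {

    @Override
    public void serialize(EmailAddress value, JsonGenerator gen,
            SerializerProvider serializers) throws IOException {
        // Render the strongly typed value object as a plain JSON string.
        gen.writeString(value.toString());
    }
}

You would attach it with @JsonSerialize(using = EmailAddressSerializer.class) on the property, or keep the annotation out of the domain type entirely via a Jackson mixin.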
Not using DTOs by default is not a bad thing per se. Just imagine the outcry by users about the amount of boilerplate necessary if we required DTOs to be written for everything! A DTO is just one means to an end. If that end can be achieved in a different way (and it usually can), why insist on DTOs?
Just don't use Spring Data REST where it doesn't fit your requirements.
Continuing with the customization efforts, it's worth noting that Spring Data REST exists to cover exactly those parts of the API that follow the basic REST API implementation patterns it implements. And that functionality is in place to give you more time to think about:
How to shape your domain model
Which parts of your API are better expressed through hypermedia driven interactions.
Here's a slide from the talk I gave at SpringOne Platform 2016 that summarizes the situation.
The complete slide deck can be found here. There's also a recording of the talk available on InfoQ.
Spring Data REST exists for you to be able to focus on the underlined circles. By no means do we think you can build a really great API solely by switching Spring Data REST on. We just want to reduce the amount of boilerplate so you have more time to think about the interesting bits.
Just like Spring Data in general reduces the amount of boilerplate code to be written for standard persistence operations. Nobody would argue you can build a real-world app from only CRUD operations. But by taking the effort out of the boring bits, we allow you to think more intensively about the real domain challenges (and you should actually do that :)).
You can be very selective in overriding certain resources to completely take control of their behavior, including manually mapping the domain types to DTOs if you want. You can also place custom functionality next to what Spring Data REST provides and just hook the two together. Be selective about what you use.
A sample
You can find a slightly advanced example of what I described in Spring RESTBucks, a Spring (Data REST) based implementation of the RESTBucks example in the RESTful Web Services book. It uses Spring Data REST to manage Order instances but tweaks its handling to introduce custom requirements and completely implement the payment part of the story manually.
Spring Data REST enables a very fast way to prototype and create a REST API based on a database structure. We're talking about minutes vs. days compared with other programming technologies.
The price you pay for that, is that your REST API is tightly coupled to your database structure. Sometimes, that's a big problem. Sometimes it's not. It depends basically on the quality of your database design and your ability to change it to suit the API user needs.
In short, I consider Spring Data REST as a tool that can save you a lot of time under certain special circumstances. Not as a silver bullet that can be applied to any problem.
We used to use DTOs with the full traditional layering (database, DTO, repository, service, controller, ...) for every entity in our projects, hoping the DTOs would some day save our lives :)
So for a simple City entity with id, name, country, state we did as below:
City table with id, name, country, ... columns
CityDTO with id, name, country, ... properties (exactly the same as the database)
CityRepository with a findCity(id), ...
CityService with findCity(id) { CityRepository.findCity(id) }
CityController with findCity(id) { ConvertToJson(CityService.findCity(id)) }
Too much boilerplate code just to expose city information to the client. Since this is a simple entity, no business logic happens in these layers at all; the objects just pass through.
A change to the City entity started at the database and rippled through all layers (for example, adding a location property, because in the end the location should be exposed to the user as JSON). Adding a findByNameAndCountryAllIgnoringCase method required changing every layer, since each layer needed the new method.
With Spring Data REST (together with Spring Data, of course), this becomes beyond simple:
public interface CityRepository extends CrudRepository<City, Long> {
    City findByNameAndCountryAllIgnoringCase(String name, String country);
}
The City entity is exposed to the client with minimal code, and you still have control over how the city is exposed. Validation, security, object mapping ... it is all there, so you can tweak everything.
For example, if I want to keep the client unaware of a property name change in the City entity (layer separation), I can use the custom object mapping described at https://docs.spring.io/spring-data/rest/docs/3.0.2.RELEASE/reference/html/#customizing-sdr.custom-jackson-deserialization
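As a small illustration of that kind of decoupling, plain Jackson annotations already go a long way; here the entity property is renamed internally (a hypothetical rename of state to region), while clients keep seeing the old JSON field name:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import com.fasterxml.jackson.annotation.JsonProperty;

@Entity
public class City {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String country;

    // Renamed inside the entity, but still exposed to clients as "state".
    @JsonProperty("state")
    private String region;
}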
To summarize
We use Spring Data REST as much as possible; in complicated use cases we can still fall back to traditional layering and let the service and controller do some business logic.
A client/server release publishes at least two artifacts. This already decouples the client from the server. When the server's API changes, applications do not immediately change; even if the applications consume the JSON directly, they continue to consume the legacy API.
So, the decoupling is already there. The important thing is to think about the various ways a server's API is likely to evolve after it is released.
I primarily work with projects which use DTOs and numerous rigid layers of boilerplate between the server's SQL and the consuming application. Rigid coupling is just as likely in these applications. Often, changing anything in the DB schema requires us to implement a new set of endpoints. Then, we support both sets of endpoints along with the accompanying boilerplate in each layer (Client, DTO, POJO, DTO <-> POJO conversions, Controller, Service, Repository, DAO, JDBC <-> POJO conversion, and SQL).
I'll admit that there is a cost to dynamic code (like spring-data-rest) when doing anything not supported by the framework. For example, our servers need to support a lot of batch insert/update operations. If we only need that custom behavior in a single case, it's certainly easier to implement it without spring-data-rest. In fact, it may be too easy. Those single cases tend to multiply. As the number of DTOs and accompanying code grows, the inconsistencies eventually become extremely burdensome to maintain. In some non-dynamic server implementations, we have hundreds of DTOs and POJOs that are likely no longer used by anything. But, we are forced to continue supporting them as their number grows each month.
With spring-data-rest, we pay the cost of customization early. With our multi-layer hard-coded implementations, we pay it later. Which one is preferred depends on a lot of factors (including the team's knowledge and the expected lifetime of the project). Both types of project can collapse under their own weight. But, over time, I've become more comfortable with implementations (like spring-data-rest without DTOs) that are more dynamic. This is especially true when the project lacks good specifications. Over time, such a project can easily drown in the inconsistencies buried within its sea of boilerplate.
Reading the Spring documentation, I don't see that Spring Data REST exposes entities; you are the one doing it.
Spring Data projects intend to ease the process of accessing different data sources, but you are the one deciding which layer to expose via Spring Data REST.
Reorganizing your project will help to solve your issue.
Every @Repository that you create with Spring Data is really more of a DAO, in the design sense, than a repository. Each one is tightly coupled to a particular data source you want to reach out to, say JPA, Mongo, Redis, Cassandra, ...
Those layers are meant to return entity representations or projections.
However, if you look at the Repository pattern from a design perspective, you should have a higher layer of abstraction above those specific DAOs, where your app uses the DAOs to get info from as many different sources as it needs and builds business-specific objects for your app (those might look more like your DTOs).
That is probably the layer you want to expose on your Spring Data Rest.
NOTE: I see an answer recommending returning entity instances just because they have the same properties as the DTO. This is normally bad practice, and in particular it is a bad idea in Spring and many other frameworks, because they do not return your actual classes; they return proxy wrappers so they can work some magic like lazy loading of values and the like.

Caching (Ehcache) - Hibernate 2nd level cache and Spring

In my web application (Spring 3.1, Hibernate 4), I am using Ehcache for the Hibernate 2nd level cache and Spring @Cache. I would like to know where to use the Hibernate cache and where to use the Spring cache.
For example, I have a few domain classes (views in the database) which I use as lookup values on screen. I can cache them using the Hibernate 2nd level cache as well as Spring @Cache.
So, if I cache these domain objects in my service layer using Spring @Cache, I would receive these objects without hitting the persistence layer (Hibernate HQL query) at all, once cached. Is this the right approach?
Depends on your layer architecture.
Assume you have three services (or three methods within the same service) that all return a collection of Customer entities, i.e. domain objects. If you cache at the service layer, there's a fair chance the same representation of a single database record will live in the cache multiple times. They are multiple objects of essentially the same information. Why? Because the results of Service.getWhateverCustomers(String) and Service.getWhateverCustomers(String, Integer) are stored under two different cache keys.
If you cache at the entity level using the JPA @Cacheable annotation, on the other hand, your Customer entity is cached no matter from which service or service method you call the code that actually retrieves the entity. Of course, the rules about when the JPA provider can/does cache an entity apply; read up on them if you're not familiar with them.
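A minimal sketch of the entity-level variant (assuming shared-cache-mode is set to ENABLE_SELECTIVE in persistence.xml); unlike service-level caching, which keys entries per method signature, this caches one copy per database record:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Cacheable // opt this entity into the provider's 2nd level cache
public class Customer {

    @Id
    private Long id;

    private String name;
}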
Hope this gives you an idea which path to follow. Post follow-up comments if you have more questions and I'll edit this answer.
The right approach is:
Ask yourself if you even need to mess with the complexity of caching. Is your app failing to perform up to requirements?
Only if the answer to the previous question is "yes", profile your app to find out where the performance problem(s) is/are.
Determine the appropriate way to solve a performance problem identified in step 2. It may or may not involve caching to prevent costly operations. If it does involve caching, then where to cache and which cache to use should be abundantly clear because you'll know exactly what you're trying to prevent from happening.
The moral of the story is that you don't cache because it's cool. You cache for performance. And you only optimize code when it's proven necessary.

JPA2 Entities Caching

As it stands, I am using a JSF request-scoped bean to do all my CRUD operations. As I'm sure you most likely know, Tomcat doesn't provide container-managed persistence, so in my CRUD request bean I am using an EntityManagerFactory to get hold of an entity manager. Now, the validity of my choice to use a request-scoped bean for this task is probably open for discussion (again), but I've been trying to put it in the context of what I've read in the articles you gave me links to, specifically the first and second one. From what I gather, EclipseLink uses a Level 2 cache by default, which stores cached entities. The EclipseLink Examples - JPA Caching page says that:
The shared cache exists for the duration of the persistence unit (EntityManagerFactory, or server)
Now, doesn't that make my cached entities live only for a fraction of time during the call to the CRUD request bean, because the moment the bean is destroyed, and with it the EntityManagerFactory, so is the cache? Also, the last part of the above sentence, "EntityManagerFactory, or server", confuses me: what precisely is meant by "or server" in this context, and how does one control it? If I use the @Cache annotation and set an appropriate expiry attribute, will that do the job and keep the entities stored in the server's L2 cache, regardless of whether my EntityManagerFactory has been destroyed?
I understand there is a lot to consider and each application has specific requirements. From my point of view, configuring the L2 cache is probably the most desirable (if not the only, on Tomcat) option to get things optimized. Quoting from your first link:
The advantages of L2 caching are:
avoids database access for already loaded entities
faster for reading frequently accessed unmodified entities
The disadvantages of L2 caching are:
memory consumption for large amount of objects
stale data for updated objects
concurrency for write (optimistic lock exception, or pessimistic lock)
bad scalability for frequent or concurrently updated entities
You should configure L2 caching for entities that are:
read often
modified infrequently
not critical if stale
Almost all of the above points apply to my app. At the heart of it, amongst other things, is constant and relentless reading of entities and displaying them on the website (the app will serve as a portal for listing properties). There's also a small shopping cart being built into the application, but the products sold are not tangible items that come as stock; they are services. In this case stale entities are no problem, and neither, I think, is concurrency, as the products (here, services) will never be written to. So the entities will be read often and modified infrequently (and those modified are not part of the cart anyway, and even those are modified rarely), and therefore it's not critical if they're stale. Finally, the first two points seem to be exactly what I need, namely avoiding database access for already loaded entities and fast reading of frequently accessed unmodified entities. But there is one point among the disadvantages that still concerns me a bit: memory consumption for a large amount of objects. Isn't that similar to my original problem?
My current understanding is that there are two options, only one of which applies to my situation:
To delegate the job of longer-term caching to the persistence layer, I need access to the PersistenceContext, so I'd create a session-scoped bean and set PersistenceContextType.EXTENDED. (This option doesn't apply to me; I have no access to the PersistenceContext.)
Configure the L2 @Cache annotation on entities, or, as in option 1 above, create a session-scoped bean that will handle long-term caching. But isn't this just going back to my original problem?
I'd really like to hear your opinion and see what you think could be a reasonable way to approach this, or perhaps how you have approached it in your previous projects. Oh, and one more thing, just to confirm: when annotating an entity with @Cache, will all linked entities be cached along with it, so that I don't have to annotate all of them?
Again, all comments and pointers are much appreciated.
Thanks for your answer. When you say:
"In Tomcat you would be best to have some static manager that holds onto the EntityManagerFactory for the duration of the server."
does that mean I could, for example, declare and initialize a static EntityManagerFactory field in an application-scoped bean, to be used later by all the beans throughout the life of the application?
EclipseLink uses a shared cache by default. This is shared for all EntityManagers accessed from an EntityManagerFactory. You do not need to do anything to enable caching.
In general, you do not want to be creating a new EntityManagerFactory per request, only a new EntityManager. Creating a new EntityManagerFactory is quite expensive, so not a good idea, even ignoring caching (it has its own connection pool, must initialize the meta-data, etc.).
In Tomcat you would be best to have some static manager that holds onto the EntityManagerFactory for the duration of the server. Either never close it, or close it when a Servlet is destroyed.
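A minimal sketch of such a static manager (the persistence unit name is hypothetical); close() would be called when the webapp shuts down, e.g. from a ServletContextListener:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public final class PersistenceManager {

    // One factory, and thus one shared L2 cache, for the lifetime of the server.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("myPersistenceUnit");

    private PersistenceManager() {
    }

    // Hand out cheap, short-lived EntityManagers, e.g. one per request.
    public static EntityManager createEntityManager() {
        return EMF.createEntityManager();
    }

    public static void close() {
        EMF.close();
    }
}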

Why use service layer?

I looked at the example on http://solitarygeek.com/java/developing-a-simple-java-application-with-spring/comment-page-1#comment-1639
I'm trying to figure out why the service layer is needed in the first place in the example he provides. If you took it out, then in your client, you could just do:
UserDao userDao = new UserDaoImpl();
Iterator users = userDao.getUsers();
while (…) {
    …
}
It seems like the service layer is simply a wrapper around the DAO. Can someone give me a case where things could get messy if the service layer were removed? I just don’t see the point in having the service layer to begin with.
Having the service layer be a wrapper around the DAO is a common anti-pattern. In the example you give it is certainly not very useful. Using a service layer means you get several benefits:
you get to make a clear distinction between web type activity best done in the controller and generic business logic that is not web-related. You can test service-related business logic separately from controller logic.
you get to specify transaction behavior, so if you have calls to multiple data access objects you can specify that they occur within the same transaction. In your example there's an initial call to a DAO followed by a loop, which could presumably contain more DAO calls. Keeping those calls within one transaction means that the database does less work (it doesn't have to create a new transaction for every call to a DAO), but more importantly it means the data retrieved will be more consistent.
you can nest services so that if one has different transactional behavior (requires its own transaction) you can enforce that.
you can use the postCommit interceptor to do notification stuff like sending emails, so that doesn't junk up the controller.
Typically I have services that encompass use cases for a single type of user, each method on the service is a single action (work to be done in a single request-response cycle) that that user would be performing, and unlike your example there is typically more than a simple data access object call going on in there.
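To make the transaction point concrete, here is a hedged sketch (the DAO types and methods are hypothetical) where two data-access calls share one transaction:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    private final UserDao userDao;
    private final AuditDao auditDao;

    public UserService(UserDao userDao, AuditDao auditDao) {
        this.userDao = userDao;
        this.auditDao = auditDao;
    }

    // Both DAO calls commit or roll back together, so the data stays consistent
    // and the database opens only one transaction instead of two.
    @Transactional
    public void deactivateUser(long userId) {
        userDao.deactivate(userId);
        auditDao.recordDeactivation(userId);
    }
}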
Take a look at the following article:
http://www.martinfowler.com/bliki/AnemicDomainModel.html
It all depends on where you want to put your logic - in your services or your domain objects.
The service layer approach is appropriate if you have a complex architecture and require different interfaces to your DAOs and data. It's also good for providing coarse-grained methods for clients to call, which call out to multiple DAOs to get data.
However, in most cases what you want is a simple architecture, so skip the service layer and look at a domain model approach. Domain-Driven Design by Eric Evans and the InfoQ article here expand on this:
http://www.infoq.com/articles/ddd-in-practice
Using a service layer is a well-accepted design pattern in the Java community. Yes, you could use the DAO implementation straight away, but what if you want to apply some business rules?
Say you want to perform some checks before allowing a user to log into the system. Where would you put that logic? The service layer is also the place for transaction demarcation.
It's generally good to keep your DAO layer clean and lean. I suggest you read the article "Don't repeat the DAO". If you follow the principles in that article, you won't be writing any implementations for your DAOs.
Also, kindly note that the scope of that blog post was to help beginners with Spring. Spring is so powerful that you can bend it to suit your needs, with powerful concepts like AOP etc.
Regards,
James

Performance in JavaEE 6 Applications (Glassfish v3) - Logging, DI, Database-Operations, EJBs, Managed Beans

The important technologies I use are: GlassFish v3, JSF 2.0, JPA 2.0, EclipseLink 2.0.2, log4j 1.2.16, commons-logging 1.1.1.
My problem is that some parts of the application are pretty slow. I analysed this with the NetBeans 6.8 profiling capabilities.
I. Logging - I use log4j and Apache commons-logging to write logs to a log file and to the console. The logs also appear in GlassFish's server log. I use loggers as follows:
private static Log logger = LogFactory.getLog(X.class);
...
if (logger.isDebugEnabled()) {
    ...
    logger.debug("Log...");
}
The problem is that sometimes such short statements take a lot of time (about 800 ms). When I switch to java.util.logging it's not as bad, but still very slow (around 200 ms). What's the problem?
I need some logging... UPDATE - The problem with the slow logging was solved after switching from NetBeans 6.8 to NetBeans 6.9.1. NetBeans 6.8 is possibly very slow when logs are printed to its console?! So it had nothing to do with log4j or commons-logging.
II. DB Operations:
The first time I call the find method of the following EJB it takes 2.4 s! Additional calls take only a few ms. So why does the first operation take that long? Is it (only because of) the connection establishment, or does it have something to do with the dependency injections of the XFacade, and when are these injections performed?
@Stateless
@PermitAll
public class XFacade {

    @PersistenceContext(unitName = "de.x.persistenceUnit")
    private EntityManager em;
    // Other DI's
    ...

    public List<News> find(int maxResults) {
        return em.createQuery(
                "SELECT n FROM News n ORDER BY n.published DESC")
                .setMaxResults(maxResults)
                .getResultList();
    }
}
III. Dependency Injection, JNDI Lookup: Is there a difference between DI (such as @EJB) and InitialContext lookups concerning performance? Is there a difference (performance-wise) between injecting local, remote and no-interface EJBs?
IV. Managed Beans - I use many session-scoped beans, because the view scope seems to be very buggy and request scope is not always practical. Is there an alternative? These beans are not slow, but server-side memory is occupied for the whole session, and when a user logs out it takes some time!
V. EJBs - I don't use MDBs, only session beans and singleton beans. Often they inject other beans with the @EJB annotation. One singleton bean uses @Schedule annotations to start repeated operations. An interesting thing I found is that since EJB 3.1 you can use the @Asynchronous annotation to make session bean methods asynchronous. What should I generally consider when implementing EJBs with respect to performance?
Maybe someone could give me some general and/or specific tips to increase the performance of Java EE applications, especially concerning the above issues. Thanks!
To start with, you should benchmark your application in a real environment using load-testing tools; you can't really draw valid conclusions from the behavior you observe in your IDE. On top of that, don't forget that profiling actually alters performance.
I. Logging (...) The problem with the slow logging was solved after switching from NetBeans 6.8 to NetBeans 6.9.1
That's the first proof that you can't trust the behavior inside your IDE.
II. DB Operations: The first time I call the find method of the following EJB it takes 2.4 s! Additional calls take only a few ms. So why does the first operation take that long?
Maybe because some GlassFish services are lazily loaded, maybe because the stateless session beans (SLSB) have to be instantiated, maybe because the EntityManagerFactory has to be created. What did the profiling say? What do you see when activating app server logging? And what's the problem, since subsequent calls are OK?
III. Dependency Injection, JNDI Lookup: Is there a difference between DI (such as @EJB) and InitialContext lookups concerning performance?
JNDI lookups are expensive, and it was a de facto practice to use some caching in the good old service locator. I thus don't expect the performance to be worse when using DI (I actually expect the container to be good at it). And honestly, it has never been a concern for me.
When you work on performance optimization, the typical workflow is: 1) detect a slow operation, 2) find the bottleneck, 3) work on it, 4) if the operation is still not fast enough, go back to 2). In my experience, the bottleneck is 90% of the time in the DAL. If your bottleneck is DI, you have no performance problem IMO. In other words, I think you're worrying too much and you're very close to "premature optimization".
IV. Managed Beans - I use many session-scoped beans, because the view scope seems to be very buggy and request scope is not always practical. Is there an alternative? These beans are not slow, but server-side memory is occupied for the whole session, and when a user logs out it takes some time!
I don't see any question :) So I don't have anything to say. Update (answering a comment): using a conversation scope might indeed be less expensive. But as always, measure.
V. EJBs - I don't use MDBs, only session beans and singleton beans. Often they inject other beans with the @EJB annotation. One singleton bean uses @Schedule annotations to start repeated operations. An interesting thing I found is that since EJB 3.1 you can use the @Asynchronous annotation to make session bean methods asynchronous. What should I generally consider when implementing EJBs with respect to performance?
SLSBs (and MDBs) perform very well in general. But here are some points to keep in mind when using SLSBs:
Prefer local over remote interfaces if your clients and EJBs are collocated, to avoid the overhead of remote calls.
Use stateful session beans (SFSB) only if necessary (they are harder to use, managing state has a performance cost, and they don't scale very well by nature).
Avoid chaining EJBs too much (especially if they are remote); prefer coarse-grained methods over multiple fine-grained method invocations.
Tune the SLSB pool (so that you have just enough beans to serve your concurrent clients, but not too many, to avoid wasting resources).
Use transactions appropriately (e.g. use NOT_SUPPORTED for read-only methods; see the sketch after this list).
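As a sketch of that last point, reusing the XFacade from the question, a read-only finder can opt out of transaction management entirely:

import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class XFacade {

    @PersistenceContext(unitName = "de.x.persistenceUnit")
    private EntityManager em;

    // A pure read: suspending transaction management skips the enlistment overhead.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public List<News> find(int maxResults) {
        return em.createQuery(
                "SELECT n FROM News n ORDER BY n.published DESC", News.class)
                .setMaxResults(maxResults)
                .getResultList();
    }
}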
I would also suggest not using a feature unless you really need it and there is a real problem to solve (e.g. @Asynchronous).
Maybe someone could give me some general and/or specific tips to increase the performance of Java EE applications, especially concerning the above issues. Thanks!
Focus on your data access code, I'm pretty sure this represents 80% of the execution time of most business flows.
And as I already hinted, are you sure that you actually have a performance problem? I'm not convinced. But if you really do, measure performance outside your IDE.
I agree with Pascal Thivent in that "premature optimization" is bad. Design your code well, and then worry about performance.
Also, you can optimize for performance or memory-conservation. Pick which one is causing you headaches in the morning, and optimize for that.
I actually really like NetBeans' profiler; it's one of the best free ones, IMHO.
I would recommend:
Profiling for performance, but limiting the scope to either just your own packages (e.g., de.x) or excluding Java core classes. This is so your application isn't bogged down with profiling, which would give you misleading performance numbers. Basically, try to get the "Overhead" as low as possible, as indicated by the bar at the bottom.
Browse around your web app for a while and see what is taking up most of your CPU time. Also note that some objects are lazily loaded, so the first time you access a piece of code it may be slow. You're also using a JIT compiler, so code may be slow the first (or second...) time you run it, but it will get faster as you use that piece of code more often. In short, use the web app for a while. You can "record" a web session using Selenium and "play" it back. This will give you better performance metrics and show where your application really is slow, instead of measuring "warm-up time."
If there is a specific piece of code or class that is causing you trouble, use profiling points to see exactly what is causing that piece of code to be slow.
If you have a specific piece of code that's causing you problems, feel free to post about that.
