C3P0 rawStatementOperation usage - java-8

I am using C3P0 connection pooling in my project.
I came across the method below in C3P0ProxyStatement:
C3P0ProxyStatement pStmt = (C3P0ProxyStatement) stmt;
pStmt.rawStatementOperation(..,..);
Please help me with the following doubts:
What is the use of rawStatementOperation in c3p0?
Why does this method take a reflection API Method as a parameter?
Will using it impact performance?

This API is rarely used these days; most people prefer the JDBC4-standard unwrap(...) to get access to native Statements and Connections.
Yes, this c3p0-specific API is reflective (and it is a bit safer than unwrapping, as c3p0 will track and try to clean up some JDBC resources that might be returned). The cost of a reflective method call is high relative to an ordinary method call, but negligible relative to the cost of database operations. It won't meaningfully impact performance.
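To make the reflective shape of the API concrete, here is a minimal sketch, assuming c3p0's documented C3P0ProxyStatement.RAW_STATEMENT token and a pooled PreparedStatement obtained from a c3p0 DataSource; the commented-out part shows the JDBC4 unwrap(...) alternative against a hypothetical vendor interface.

import java.lang.reflect.Method;
import java.sql.PreparedStatement;
import com.mchange.v2.c3p0.C3P0ProxyStatement;

public class RawStatementExample {

    // Invokes Object#toString() on the underlying vendor Statement through
    // c3p0's reflective escape hatch. The RAW_STATEMENT token tells c3p0 to
    // substitute the unproxied Statement as the reflection target.
    static String describeRawStatement(PreparedStatement stmt) throws Exception {
        C3P0ProxyStatement proxy = (C3P0ProxyStatement) stmt;
        Method toString = Object.class.getMethod("toString");
        return (String) proxy.rawStatementOperation(toString, C3P0ProxyStatement.RAW_STATEMENT, new Object[0]);
    }

    // The JDBC4 alternative: unwrap the proxy to a vendor interface
    // (VendorStatement is a placeholder, e.g. oracle.jdbc.OracleStatement).
    // VendorStatement raw = stmt.unwrap(VendorStatement.class);
}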

Related

Spring caching implementation

I am exploring the Spring caching facility, and I have a few queries regarding it.
First, should it be applied at the service-method level or the DAO-method level, where the service method calls the DAO method?
Second, how should I avoid the cached data getting stale?
IMO, the answer to both questions is "it depends".
Technically, as far as Spring is concerned, cache annotations work on both the service and the DAO; I don't think there is any difference, so it boils down to the concrete use case.
For example, if you "logically" plan to provide a cacheable abstraction of what should be calculated as a result of some computational process done on the server, you'd better go with caching at the service level.
If, on the other hand, you have a DAO method that looks like Something getSomethingById(id), and you would like to avoid relatively expensive calls to the underlying database, you can provide a cache at the level of the DAO. Having said that, it probably won't be useful to apply caching if you have methods like List<Something> fetchAll() or List<Something> fetchAllByFilter(). If you're working with JPA (implemented with Hibernate), it has its own cache abstraction, but that's kind of beyond the scope of the question; just something you should be aware of...
There are plenty of tutorials available on the internet; some illustrate the service-based approach, some go for annotations on the DAO's methods, but again, these are only simple examples. In the real world you'll have to make a decision.
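As a rough illustration of the service-level option, here is a minimal sketch using Spring's @Cacheable; SomethingService, SomethingDao and the "somethings" cache name are assumptions made up for the example, not taken from your code.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical collaborators, shown only so the sketch is self-contained.
class Something { long id; }
interface SomethingDao { Something getSomethingById(long id); }

// Caching at the service level: the result is cached by id, so repeated calls
// skip the DAO and the database entirely.
// (Requires @EnableCaching and a configured CacheManager somewhere in the app.)
@Service
public class SomethingService {

    private final SomethingDao somethingDao;

    public SomethingService(SomethingDao somethingDao) {
        this.somethingDao = somethingDao;
    }

    @Cacheable(value = "somethings", key = "#id")
    public Something getSomethingById(long id) {
        return somethingDao.getSomethingById(id);
    }
}

The very same annotation could sit on the DAO method instead; the mechanics are identical, only the layer at which results are reused changes.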
Now, regarding the second question: in general, caching makes sense if your data doesn't change much, so first off, if it changes often, caching is probably not appropriate/relevant for the use case.
Other than that, there are many techniques (a small sketch follows the list below):
Cache data eviction (usually time based). See this tutorial
Some kind of messaging system that sends a message about a cache entry changing. This is especially useful if you have a distributed application that keeps the cache in-memory only. When a message arrives you might opt for "cache replication" or for wiping the cache completely, so that it eventually gets refilled with fresh data
Using distributed cache technologies like Hazelcast or Redis as opposed to in-memory caching, so that the consistency of the cached data is guaranteed by the caching provider
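A hedged sketch of the eviction side, reusing the hypothetical "somethings" cache from the example above; the schedule interval and method signatures are purely illustrative, and real TTL support (e.g. Caffeine's expireAfterWrite or Redis key TTLs) depends on the cache provider you plug in.

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

// Two common ways to keep the cache from going stale:
// 1) evict the affected entry whenever the data is written,
// 2) periodically clear everything (a crude stand-in for provider-level TTLs).
// Requires @EnableScheduling for the @Scheduled method to fire.
@Service
public class SomethingCacheMaintenance {

    @CacheEvict(value = "somethings", key = "#id")
    public void update(long id, Something newValue) {
        // ... persist the change; the cached entry for this id is then dropped
    }

    @Scheduled(fixedRate = 600000) // every 10 minutes, purely illustrative
    @CacheEvict(value = "somethings", allEntries = true)
    public void evictAll() {
        // intentionally empty: the annotation does the work
    }
}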
I would also like to recommend this tutorial - the speaker talks about different aspects of caching implementation, and I think it's really relevant to your question.

EJB weblogic.ejb20.cache.CacheFullException

I am working on an application using EJB 1.2. It was previously running fine, but for the past few days I have been getting the following exception:
Exception in ejbLoad:: weblogic.ejb20.cache.CacheFullException: size=85783, target=5000, incr=1
    at weblogic.ejb20.cache.EntityCache$SizeTracker.shrinkNext(JI)Lweblogic.ejb20.cache.EntityCache$MRUElement;(EntityCache.java:438)
    at weblogic.ejb20.cache.EntityCache.put(Ljavax.transaction.Transaction;Lweblogic.ejb20.cache.CacheKey;Ljavax.ejb.EntityBean;Lweblogic.ejb20.interfaces.CachingManager;)V(EntityCache.java:141)
    at weblogic.ejb20.manager.DBManager.getReadyBean(Ljavax.transaction.Transaction;Ljava.lang.Object;)Ljavax.ejb.EntityBean;(DBManager.java:332)
    at weblogic.ejb20.manager.DBManager.preInvoke(Lweblogic.ejb20.internal.InvocationWrapper;)Ljavax.ejb.EnterpriseBean;(DBManager.java:249)
    at weblogic.ejb20.internal.BaseEJBLocalObject.preInvoke(Lweblogic.ejb20.internal.InvocationWrapper;)Lweblogic.ejb20.internal.InvocationWrapper;(BaseEJBLocalObject.java:228)
    at weblogic.ejb20.internal.EntityEJBLocalObject.preInvoke(Lweblogic.ejb20.internal.MethodDescriptor;Lweblogic.security.service.ContextHandler;)Lweblogic.ejb20.internal.InvocationWrapper;(EntityEJBLocalObject.java:72)
    at com.nextjet.enterprise.locationcode.locationcode.LocationCode_v2epgs_ELOImpl.getLocationCodeData()Lcom.nextjet.enterprise.locationcode.LocationCodeData;(LocationCode_v2epgs_ELOImpl.java:28)
    at com.nextjet.enterprise.locationcode.locationcodemanager.LocationCodeManagerBean.loadShippingAddress(Ljava.lang.Long;Ljava.lang.String;)Lcom.nextjet.enterprise.locationcode.LocationCodeView;(LocationCodeManagerBean.java:538)
    at com.nextjet.enterprise.locationcode.locationcodemanager.LocationCodeManagerBean.doSearchShippingAddresses(Ljava.lang.String;)Lcom.nextjet.enterprise.locationcode.LocationCodeSearchResult;(LocationCodeManagerBean.java:514)
    at com.nextjet.enterprise.locationcode.locationcodemanager.LocationCodeManagerBean.lookupAccountShipping.....
For now I am changing the value of <max-beans-in-cache> in weblogic-ejb-jar.xml.
I am changing the above value to <max-beans-in-cache>100000</max-beans-in-cache>.
Is this the only solution for this kind of exception, or could there be a data-related issue on the database side?
100000 is quite a high value for max-beans-in-cache, and from the log it seems the application tried to put up to 85783 instances of the EJB into the cache.
I would suggest some refactoring in your code.
Your code is doing
com.nextjet.enterprise.locationcode.locationcode.LocationCode_v2epgs_ELOImpl
.getLocationCodeData()
Is this mainly a read operation? Or are you doing simultaneous writes and reads?
You could refactor this in two ways to reduce the EJB overhead if it is mainly doing read operations.
1) Read Oracle's recommendation on tuning the EJB settings and database options for concurrency, especially the Read-Mostly pattern
http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/entity.html#ChoosingaConcurrencyStrategy
2) If you are mainly doing reads, then don't use entity EJBs at all. Use the Fast Lane Reader pattern, which fetches the data for SELECTs with direct JDBC calls, while writes keep going through the EJBs as at present. That way, max-beans-in-cache can be reduced.
A very detailed example is given on the Sun Design Patterns site
http://java.sun.com/blueprints/patterns/FastLaneReader.html
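To make the idea concrete, here is a minimal, hedged Fast Lane Reader-style sketch: a plain JDBC read path that bypasses the entity-bean cache entirely, while writes keep going through the EJBs. The DataSource wiring, table/column names and the query are assumptions for illustration, not taken from your application.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class LocationCodeFastLaneReader {

    private final DataSource dataSource; // e.g. looked up from JNDI at startup

    public LocationCodeFastLaneReader(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Read-only search that never touches the entity-bean cache.
    public List<String> searchShippingAddresses(String filter) throws SQLException {
        String sql = "SELECT address FROM location_code WHERE address LIKE ?"; // illustrative query
        List<String> result = new ArrayList<String>();
        Connection con = dataSource.getConnection();
        try {
            PreparedStatement ps = con.prepareStatement(sql);
            try {
                ps.setString(1, "%" + filter + "%");
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    result.add(rs.getString("address"));
                }
            } finally {
                ps.close();
            }
        } finally {
            con.close();
        }
        return result;
    }
}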

Is there any disadvantage to using Hibernate's Query.setTimeout() method?

We are planning to implement setTimeout() for all our Hibernate queries.
In our application, a few of the queries take a long time. We would like to specify an explicit timeout for these queries.
Are there any issues with this approach? Or is there a better way to set a timeout?
I understand that there may be opportunities to tune the queries on the server side. At the moment, however, we are looking exclusively for a client-side solution.
The setTimeout() method you linked to is not a timeout on a query. It's a timeout on the whole transaction. I fail to see how it's a client-side solution.
I think you're looking for Query.setTimeout().
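To illustrate the difference, here is a hedged sketch contrasting the two: org.hibernate.Transaction#setTimeout bounds the whole unit of work, while org.hibernate.Query#setTimeout (backed by JDBC's Statement#setQueryTimeout) bounds a single query. The session handling, the Order entity and the HQL are illustrative assumptions.

import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.Transaction;

public class TimeoutExample {

    public List<?> runWithTimeouts(Session session) {
        Transaction tx = session.getTransaction();
        tx.setTimeout(30);              // seconds; must be set before begin()
        tx.begin();
        try {
            Query query = session.createQuery("from Order o where o.status = :status")
                                  .setParameter("status", "OPEN")
                                  .setTimeout(5);  // seconds; applies to this query only
            List<?> results = query.list();
            tx.commit();
            return results;
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }
    }
}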

Entity Framework and ObjectContext n-tier architecture

I have an n-tier application based on pretty classic separate layers: User Interface, Services (WCF), Business Logic and Data Access.
The database (SQL Server) is obviously queried through Entity Framework. The problem is basically that every call starts from the user interface and goes through all the layers, but in doing so I need to create a new ObjectContext each time for every operation, and that makes performance very bad because every time I need to reload the metadata and recompile the query.
The most suggested pattern would be the one below, and it is what I'm actually doing: creating and passing a new context through the business-layer methods each time the service receives a call:
public BusinessObject GetQuery() {
    using (MyObjectContext context = new MyObjectContext()) {
        // ...do something
    }
}
For easy queries I don't see any particular delay and it works fine, but for complex and heavy queries it turns a 2-second query into something like 15 seconds per call.
I could make the ObjectContext static and that would solve the performance issue, but it seems to be discouraged by everyone, also because I wouldn't be able to access the context from different threads at the same time, and multiple concurrent calls raise an exception. I could make it thread-safe, but keeping the same ObjectContext around for a long time makes it bigger and bigger (and slower), because of the entities it keeps tracking with every query it executes.
I think the architecture I have is the most common one, so what is the best and most widely accepted way to implement and use the ObjectContext?
Thank you,
Marco
In a web context, it's best to use a stateless approach and create an ObjectContext for each request.
The cost of ObjectContext construction is minimal. The metadata is loaded from a global cache, so only the first call will have to load it.
A static context is definitely not a good idea. The ObjectContext is not thread-safe, and this will lead to problems when using it in a WCF service with multiple concurrent calls. Making it thread-safe will cost performance, and reusing it across multiple requests can cause subtle errors.
Check this info: How to decide on a lifetime for your ObjectContext
Working with a static object context is not a good idea. A static context will be shared by all users of the web application, meaning that when one user makes modifications to the context, such as calling SaveChanges, all other users of the context will be affected (this would be a problem if, say, they have added or updated data in the context but have not yet called SaveChanges). The best practice when working with an object context is to keep it alive for the duration of the request and use it to perform any atomic business operations. You may want to check out the Unit of Work and Repository patterns:
uow
uow and repository in EF
If you feel you are having performance issues with your queries and there is a possibility that you will reuse a query, I would recommend using precompiled LINQ queries. You can check out the links below for more info:
precompiled linq julie lermann
precompiled linq
What you show is the typical pattern for using a context - per request, similar to using a database connection.
What makes you think the bad performance is related to recreating the context? This is very, very likely not the case. How did you measure this impact?
If you have such performance-critical code that this overhead truly matters, you should not use Entity Framework, since there will always be some overhead, even if it should be very small in the general case. I would start by focusing on your data model and the underlying data store, which will have a much larger impact on your query performance. Have you optimized your queries? Did you put indexes everywhere you need them? Can you de-normalize the data to remove joins?

Performance in JavaEE 6 Applications (Glassfish v3) - Logging, DI, Database-Operations, EJBs, Managed Beans

The important technologies I use are: GlassFish v3, JSF 2.0, JPA 2.0, EclipseLink 2.0.2, log4j 1.2.16, and commons-logging 1.1.1.
My problem is that some parts of the application are pretty slow. I analysed this with the NetBeans 6.8 profiling capabilities.
I. Logging - I use log4j and Apache commons-logging to write logs to a log file and to the console. The logs also appear in GlassFish's server log. I use loggers as follows:
private static Log logger = LogFactory.getLog(X.class);
...
if (logger.isDebugEnabled()) {
    ...
    logger.debug("Log...");
}
The problem is that sometimes such short statements take a lot of time (about 800 ms). When I switch to java.util.logging it's not as bad, but still very slow (in the 200 ms range). What's the problem?
I need some logging... UPDATE - The problem with the slow logging was solved after switching from NetBeans 6.8 to NetBeans 6.9.1. Is NetBeans 6.8 possibly very slow when logs are printed to its console?! So it had nothing to do with log4j or commons-logging.
II. DB Operation:
The first time I call the find method of the following EJB it takes 2.4 s! Additional calls take only a few ms. So why does the first operation take that long? Is this (only) because of connection establishment, or does it have something to do with the dependency injections of the XFacade, and when are these injections performed?
@Stateless
@PermitAll
public class XFacade {

    @PersistenceContext(unitName = "de.x.persistenceUnit")
    private EntityManager em;
    // Other DI's
    ...

    public List<News> find(int maxResults) {
        return em.createQuery("SELECT n FROM News n ORDER BY n.published DESC")
                 .setMaxResults(maxResults)
                 .getResultList();
    }
}
III. Dependency Injection, JNDI Lookup: Is there a difference between DI (@EJB ...) and InitialContext lookups concerning performance? Is there a difference (performance-wise) between injecting local, remote and no-interface EJBs?
IV. Managed Beans - I use many session-scoped beans, because the view scope seems to be very buggy and the request scope is not always practical. Is there an alternative? These beans are not slow in themselves, but they put pressure on server-side memory for the whole session, and when a user logs out it takes some time!
V. EJBs - I don't use MDBs, only session beans and singleton beans. They often inject other beans with the @EJB annotation. One singleton bean uses @Schedule annotations to start repeated operations. An interesting thing I found is that since EJB 3.1 you can use the @Asynchronous annotation to make session bean methods asynchronous. What should I generally consider when implementing EJBs with respect to performance?
Maybe someone could give me some general and/or specific tips to increase the performance of Java EE applications, especially concerning the above issues. Thanks!
To start with, you should benchmark your application in a real environment using load-testing tools; you can't really draw valid conclusions from the behavior you observe in your IDE. On top of that, don't forget that profiling itself alters performance.
I. Logging (...) The problem with the slow logging was solved after switching from Netbeans 6.8 to Netbeans 6.9.1
That's the first proof that you can't trust the behavior inside your IDE.
II. DB Operation: The first time I call the find method of the following EJB it takes 2.4 s! Additional calls take only a few ms. So why does the first operation take that long?
Maybe because some GlassFish services are loaded lazily, maybe because the stateless session beans (SLSB) have to be instantiated, maybe because the EntityManagerFactory has to be created. What did the profiling say? What do you see when activating app server logging? And what's the problem, since subsequent calls are fine?
III. Dependency Injection, JNDI Lookup: Is there a difference between DI (@EJB ...) and InitialContext lookups concerning performance?
JNDI lookups are expensive, and it used to be a de facto practice to cache them in the good old Service Locator. I thus don't expect performance to be worse when using DI (I actually expect the container to be good at it). And honestly, it has never been a concern for me.
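For illustration only, here is a hedged sketch of the two wiring styles side by side; the portable "java:module/XFacade" JNDI name is an assumption about how the bean from section II would be registered.

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.naming.InitialContext;
import javax.naming.NamingException;

@Stateless
public class NewsService {

    // Container-managed injection: resolved when the bean instance is created.
    @EJB
    private XFacade injectedFacade;

    // Manual lookup: pays a JNDI round trip unless you cache the result,
    // which is exactly what the classic Service Locator pattern did.
    private XFacade lookupFacade() {
        try {
            return (XFacade) new InitialContext().lookup("java:module/XFacade");
        } catch (NamingException e) {
            throw new IllegalStateException("XFacade lookup failed", e);
        }
    }
}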
When you work on performance optimization, the typical workflow is: 1) detect a slow operation, 2) find the bottleneck, 3) work on it, 4) if the operation is still not fast enough, go back to 2). In my experience, the bottleneck is 90% of the time in the DAL. If your bottleneck is DI, you don't have a performance problem IMO. In other words, I think you're worrying too much and you're very close to "premature optimization".
IV. Managed Beans - I use many session-scoped beans, because the view scope seems to be very buggy and the request scope is not always practical. Is there an alternative? These beans are not slow in themselves, but they put pressure on server-side memory for the whole session, and when a user logs out it takes some time!
I don't see any question :) So I don't have anything to say. Update (answering a comment): Using a conversation scope might indeed be less expensive. But as always, measure.
V. EJBs - I don't use MDBs, only session beans and singleton beans. They often inject other beans with the @EJB annotation. One singleton bean uses @Schedule annotations to start repeated operations. An interesting thing I found is that since EJB 3.1 you can use the @Asynchronous annotation to make session bean methods asynchronous. What should I generally consider when implementing EJBs with respect to performance?
SLSBs (and MDBs) perform very well in general. But here are some points to keep in mind when using SLSBs:
Prefer local over remote interfaces if your clients and EJBs are collocated, to avoid the overhead of remote calls.
Use stateful session beans (SFSB) only if necessary (they are harder to use, managing state has a performance cost, and they don't scale very well by nature).
Avoid chaining EJBs too much (especially if they are remote); prefer coarse-grained methods over multiple fine-grained method invocations.
Tune the SLSB pool (so that you have just enough beans to serve your concurrent clients, but not too many, to avoid wasting resources).
Use transactions appropriately (e.g. use NOT_SUPPORTED for read-only methods); a small sketch of the local-interface and transaction points follows this list.
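A hedged sketch tying the local-interface and transaction points together, reusing the News entity and persistence unit from the question; the NewsReaderLocal interface and the bean name are assumptions added for the example.

import java.util.List;
import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Local
interface NewsReaderLocal {
    List<News> findLatest(int maxResults);
}

@Stateless
public class NewsReaderBean implements NewsReaderLocal {

    @PersistenceContext(unitName = "de.x.persistenceUnit")
    private EntityManager em;

    // Read-only finder: running outside a JTA transaction skips its bookkeeping;
    // the returned entities are detached, which is fine for pure reads.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public List<News> findLatest(int maxResults) {
        return em.createQuery("SELECT n FROM News n ORDER BY n.published DESC", News.class)
                 .setMaxResults(maxResults)
                 .getResultList();
    }
}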
I would also suggest not using a feature unless you really need it and there is a real problem to solve (e.g. @Asynchronous).
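For completeness, this is roughly what the EJB 3.1 @Asynchronous feature mentioned in the question looks like; the bean and method names are made up, and as said above, only reach for it when you actually need work off the caller's thread.

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ReportBean {

    // The container runs this method on a separate thread; the caller gets a
    // Future back immediately instead of blocking on the long-running work.
    @Asynchronous
    public Future<String> generateReport(long reportId) {
        // ... long-running work happens here, off the caller's thread
        return new AsyncResult<String>("report-" + reportId);
    }
}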
Maybe someone could give me some general and/or specific tips to increase the performance of javaee applications, especially concerning the above issues. Thanks!
Focus on your data access code; I'm pretty sure it represents 80% of the execution time of most business flows.
And as I already hinted, are you sure that you actually have a performance problem? I'm not convinced. But if you really do, measure performance outside your IDE.
I agree with Pascal Thivent in that "premature optimization" is bad. Design your code well, and then worry about performance.
Also, you can optimize for performance or for memory conservation. Pick the one that is causing you headaches in the morning, and optimize for that.
I actually really like NetBeans' profiler; it's one of the best free ones, IMHO.
I would recommend:
Profiling for performance, but limiting the scope to either just your own packages (e.g., de.x) or excluding the Java core classes. This is so your application isn't bogged down with profiling, which would give you misleading performance numbers. Basically, try to get the "Overhead" as low as possible, as indicated by the bar at the bottom.
Browse around your web app for a while and see what is taking up most of your CPU time. Also take note of the fact that some objects are lazily loaded, so the first time you access a piece of code it may be slow. Also, you're using a JIT compiler, so code may be slow the first (or second...) time you run it, but it will get faster as you use that piece of code more often. In short, use the web app for a while. You can "record" a web session using Selenium and "play" it back. This will give you better performance metrics and show where your application really is slow, instead of measuring "warm-up time".
If there is a specific piece of code or class that is causing you trouble, use profiling points to see exactly what is making that piece of code slow.
If you have a specific piece of code that's causing you problems, feel free to post about that.
