How to utilize jeemanagement-1.1 performance statistics in OpenLiberty / WebSphere

We're migrating a legacy application from WebSphere traditional to OpenLiberty.
Up to now we have been extracting performance data based on the jeemanagement specification, which says:
JSR77.6.1 Performance Data Framework
The Performance Data Framework consists of the StatisticsProvider
model, which any managed object may implement, the Stats interfaces,
which specify standard performance attribute semantics for each
managed object type, and the Statistic interfaces which provide
specific interfaces for representing the common performance data
types.
Unfortunately only our own performance implementations are still working; all the JMX beans that used to provide statistics no longer do so on Open Liberty.
E.g. the JVM JMX Bean used to have a stats attribute that could be queried and returned a Stats object, with various Statistics.
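For reference, this is roughly the kind of lookup that works for us on WebSphere traditional (only a sketch; the connector URL and the ObjectName pattern are placeholders, not our real configuration, and the javax.management.j2ee.statistics classes come from the JSR 77 management API jar):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.j2ee.statistics.CountStatistic;
import javax.management.j2ee.statistics.Statistic;
import javax.management.j2ee.statistics.Stats;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class Jsr77StatsDump {
    public static void main(String[] args) throws Exception {
        // Placeholder connector URL - replace with the server's real JMX endpoint.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Placeholder ObjectName pattern for the JVM managed object.
            for (ObjectName name : mbsc.queryNames(new ObjectName("*:type=JVM,*"), null)) {
                // JSR 77: an MBean that implements StatisticsProvider exposes a "stats" attribute.
                Stats stats = (Stats) mbsc.getAttribute(name, "stats");
                for (Statistic statistic : stats.getStatistics()) {
                    if (statistic instanceof CountStatistic) {
                        System.out.println(statistic.getName() + " = " + ((CountStatistic) statistic).getCount());
                    } else {
                        System.out.println(statistic.getName() + " (" + statistic.getUnit() + ")");
                    }
                }
            }
        }
    }
}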
On OpenLiberty there is a JVMStats JMX bean, but it has only plain attributes and no CountStatistic or anything similar, the way the specification requires.
It seems we are back to square one, and the whole performance monitoring based on the old specification appears to have been dropped.
Is that true or did we miss something?
Unfortunately the docs for the feature say nothing about being able to query those statistics, only that the feature provides access to the specified interfaces. That would mean the API is just there but dead, while the old PMI stuff can still be enabled.
Any suggestions as to whether we might have missed a configuration option to enable that performance framework again would be appreciated.
We need to keep the application working on both the new and the old application server for quite some time, until the migration to mpMonitoring is possible.

I recently had a similar problem and investigated.
I found if you set , the Perf MBean appears (although the existing query recommended for WAS won't find it, as the type attribute is no longer set in the name).
WebSphere:type=Perf
However it is not the same as in WAS. If you compare the available API to WAS 9:
https://www.ibm.com/docs/api/v1/content/SSEQTJ_9.0.5/com.ibm.websphere.javadoc.doc/web/mbeanDocs/Perf.html
There are only around 10 methods available, all of the methods that took ObjectName have been removed. So you cannot lookup stats by ObjectName any more.
> WebSphere:type=Perf
Attribute: StatisticSet of Type: java.lang.String
Attribute: CustomSetString of Type: java.lang.String
Operation: java.lang.String queryAllStatsAsString()
Operation: [Lcom.ibm.websphere.pmi.stat.StatLevelSpec; getInstrumentationLevel(p1:com.ibm.websphere.pmi.stat.StatDescriptor p2:java.lang.Boolean)
Operation: void appendCustomSetString(p1:java.lang.String p2:java.lang.Boolean)
Operation: [Lcom.ibm.websphere.pmi.PmiModuleConfig; getConfigs(p1:java.util.Locale)
Operation: [Lcom.ibm.websphere.pmi.stat.StatDescriptor; listStatMembers(p1:com.ibm.websphere.pmi.stat.StatDescriptor p2:java.lang.Boolean)
Operation: void setInstrumentationLevel(p1:[Lcom.ibm.websphere.pmi.stat.StatLevelSpec; p2:java.lang.Boolean)
Operation: [Lcom.ibm.websphere.pmi.stat.WSStats; getStatsArray(p1:[Lcom.ibm.websphere.pmi.stat.StatDescriptor; p2:java.lang.Boolean)
Operation: void setCustomSetString(p1:java.lang.String p2:java.lang.Boolean)
Operation: com.ibm.websphere.pmi.PmiModuleConfig getConfig(p1:java.lang.String)
There is a new method getStatsAsString which returns the list of available stats, which seems much less than what WAS had. setCustomSetString seems to have no effect either.
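If it helps, this is roughly how I poked at the remaining operations (just a sketch: the connector URL is a placeholder for however you reach Liberty's JMX server, and the ObjectName is the one quoted above - adjust it to whatever your MBean browser actually shows, since the type key differs from WAS):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LibertyPerfDump {
    public static void main(String[] args) throws Exception {
        // Placeholder URL - use whatever JMX connector you have configured for Liberty.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName perf = new ObjectName("WebSphere:type=Perf");
            // queryAllStatsAsString() takes no arguments and returns a String dump of the stats.
            String allStats = (String) mbsc.invoke(perf, "queryAllStatsAsString", new Object[0], new String[0]);
            System.out.println(allStats);
        }
    }
}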
Overall the takeaway is that there is something there, but it is a far cry from what was available in WAS, and the Liberty Perf bean is not API compatible with the WAS Perf bean.
I raised https://github.com/OpenLiberty/open-liberty/issues/22483 to see if the OL folks can help.

Related

CacheLoader is not getting called while trying to find an entity using GemfireRepository

CacheLoader is not getting called while trying to find an entity using GemfireRepository.
As a workaround, I am using Region<K,V> directly for the lookup, which does call the CacheLoader. So I wanted to know whether there is any restriction in the Spring Data Repository that prevents the CacheLoader from being called when an entry is not present in the cache.
And is there any other alternative? I have one more scenario where my cache key is a combination of id1 and id2, and I want to get all entries based on id1; if no entry is present in the cache, it should call the CacheLoader to load all entries from the Cassandra store.
There are no limitations nor restrictions in SDG when using the SD Repository abstraction (and SDG's Repository extension) that would prevent a CacheLoader from being invoked so long as the CacheLoader was properly registered on the target Region. Once control is handed over to GemFire/Geode to complete the data access operation (CRUD), it is out of SDG's hands.
However, you should know that GemFire/Geode only invokes CacheLoaders on gets (i.e. Region.get(key) operations), never on (OQL) queries. OQL queries are invoked from derived query methods or custom, user-defined query methods using @Query annotated methods declared in the application Repository interface.
NOTE: See Apache Geode CacheLoader Javadoc and User Guide for more details.
For a simple CrudRepository.findById(key) call, the call stack follows from...
SimpleGemfireRepository.findById(key)
GemfireTemplate.get(key)
And then, Region.get(key).
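To make the distinction concrete, here is a minimal sketch of a loader (using the Apache Geode package names; the GemFire equivalents are analogous, and the key/value types and backing-store call are made up for the example). A Region.get(key), and therefore findById(key), falls through to it on a cache miss, while derived/OQL query methods never will:

import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

public class ExampleCacheLoader implements CacheLoader<Long, String> {

    @Override
    public String load(LoaderHelper<Long, String> helper) throws CacheLoaderException {
        Long key = helper.getKey();
        // Fetch the value from the backing store (e.g. Cassandra) on a cache miss;
        // returning null means "not found" and Region.get(key) then returns null.
        return findInBackingStore(key);
    }

    private String findInBackingStore(Long key) {
        return null; // placeholder for the real data access call
    }

    @Override
    public void close() {
        // no resources to release in this sketch
    }
}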
By way of example, and to illustrate this behavior, I added the o.s.d.g.repository.sample.RepositoryDataAccessOnRegionUsingCacheLoaderIntegrationTests to the SDG test suite as part of DATAGEODE-308. You can provide additional feedback in this JIRA ticket, if necessary.
Cheers!

EJB weblogic.ejb20.cache.CacheFullException

I am working on an application using EJB 1.2. It was running fine previously, but for the past few days I have been getting the following exception:
Exception in ejbLoad:: weblogic.ejb20.cache.CacheFullException: size=85783, target=5000, incr=1
    at weblogic.ejb20.cache.EntityCache$SizeTracker.shrinkNext(JI)Lweblogic.ejb20.cache.EntityCache$MRUElement;(EntityCache.java:438)
    at weblogic.ejb20.cache.EntityCache.put(Ljavax.transaction.Transaction;Lweblogic.ejb20.cache.CacheKey;Ljavax.ejb.EntityBean;Lweblogic.ejb20.interfaces.CachingManager;)V(EntityCache.java:141)
    at weblogic.ejb20.manager.DBManager.getReadyBean(Ljavax.transaction.Transaction;Ljava.lang.Object;)Ljavax.ejb.EntityBean;(DBManager.java:332)
    at weblogic.ejb20.manager.DBManager.preInvoke(Lweblogic.ejb20.internal.InvocationWrapper;)Ljavax.ejb.EnterpriseBean;(DBManager.java:249)
    at weblogic.ejb20.internal.BaseEJBLocalObject.preInvoke(Lweblogic.ejb20.internal.InvocationWrapper;)Lweblogic.ejb20.internal.InvocationWrapper;(BaseEJBLocalObject.java:228)
    at weblogic.ejb20.internal.EntityEJBLocalObject.preInvoke(Lweblogic.ejb20.internal.MethodDescriptor;Lweblogic.security.service.ContextHandler;)Lweblogic.ejb20.internal.InvocationWrapper;(EntityEJBLocalObject.java:72)
    at com.nextjet.enterprise.locationcode.locationcode.LocationCode_v2epgs_ELOImpl.getLocationCodeData()Lcom.nextjet.enterprise.locationcode.LocationCodeData;(LocationCode_v2epgs_ELOImpl.java:28)
    at com.nextjet.enterprise.locationcode.locationcodemanager.LocationCodeManagerBean.loadShippingAddress(Ljava.lang.Long;Ljava.lang.String;)Lcom.nextjet.enterprise.locationcode.LocationCodeView;(LocationCodeManagerBean.java:538)
    at com.nextjet.enterprise.locationcode.locationcodemanager.LocationCodeManagerBean.doSearchShippingAddresses(Ljava.lang.String;)Lcom.nextjet.enterprise.locationcode.LocationCodeSearchResult;(LocationCodeManagerBean.java:514)
    at com.nextjet.enterprise.locationcode.locationcodemanager.LocationCodeManagerBean.lookupAccountShipping.....
For now I am changing the value of <max-beans-in-cache> in weblogic-ejb-jar.xml.
I am changing it to <max-beans-in-cache>100000</max-beans-in-cache>.
Is this the only solution for this kind of exception, or could there be a data-related issue on the database side?
100000 is quite a high value for max-beans-in-cache, and from the log it seems the application tried to make a call for up to 85783 instances of the EJB.
I would suggest some refactoring in your code.
Your code is doing
com.nextjet.enterprise.locationcode.locationcode.LocationCode_v2epgs_ELOImpl
.getLocationCodeData()
Is this mainly a read operation? Or are you doing simultaneous writes and reads?
You could refactor this in two ways to reduce the EJB overhead if it is mainly doing read operations.
1) Read Oracle's recommendation on tuning the EJB settings and database options for concurrency, especially the Read-Mostly pattern
http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/entity.html#ChoosingaConcurrencyStrategy
2) If you are mainly doing reads, then don't use entity EJBs at all. Use the FastLaneReader pattern, which uses a direct JDBC call to fetch the data for SELECTs, while writes keep going through the EJBs as at present. In this way, max-beans-in-cache can be reduced (a rough sketch follows the link below).
A very detailed example is given on the Sun Design Patterns site
http://java.sun.com/blueprints/patterns/FastLaneReader.html
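As a rough illustration of option 2 (a sketch only - the DataSource JNDI name, table and column names are made up), the read path goes straight to JDBC and never pulls entity beans into the cache:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class LocationCodeFastLaneReader {

    // Hypothetical JNDI name - point this at the DataSource your EJBs already use.
    private static final String DATA_SOURCE_JNDI = "jdbc/AppDataSource";

    public List<String> findShippingAddresses(String accountId) throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup(DATA_SOURCE_JNDI);
        List<String> addresses = new ArrayList<String>();
        Connection con = ds.getConnection();
        try {
            // Plain SELECT for the read-mostly path; writes still go through the entity EJBs.
            PreparedStatement ps = con.prepareStatement(
                    "SELECT address FROM location_code WHERE account_id = ?");
            ps.setString(1, accountId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                addresses.add(rs.getString("address"));
            }
            rs.close();
            ps.close();
        } finally {
            con.close();
        }
        return addresses;
    }
}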

ASP.NET MVC - Repository pattern with Entity Framework

When you develop an ASP.NET application using the repository pattern, do each of your methods create a new entity container instance (context) with a using block for each method, or do you create a class-level/private instance of the container for use by any of the repository methods until the repository itself is disposed? Other than what I note below, what are the advantages/disadvantages? Is there a way to combine the benefits of each of these that I'm just not seeing? Does your repository implement IDisposable, allowing you to create using blocks for instances of your repo?
Multiple containers (vs. single)
Advantages:
Preventing connections from being auto-closed/disposed (will be closed at the end of the using block).
Helps force you to only pull into memory what you need for a particular view/viewmodel, and in fewer round-trips (you will get a connection error for anything you attempt to lazy load).
Disadvantages:
Access of child entities within the Controller/View is limited to what you called with Include()
For pages like a dashboard index that shows information gathered from many tables (many different repository method calls), we will add the overhead of creating and disposing many entity containers.
If you are instantiating your context in your repository, then you should always do it locally, and wrap it in a using statement.
If you're using Dependency Injection to inject the context, then let your DI container handle calling dispose on the context when the request is done.
Don't instantiate your context directly as a class member, since this will not dispose of the context's resources until garbage collection occurs. If you do, then you will need to implement IDisposable to dispose the context, and make sure that whatever is using your repository properly disposes of your repository.
I, personally, put my context at the class level in my repository. My primary reason for doing so is because a distinct advantage of the repository pattern is that I can easily swap repositories and take advantage of a different backend. Remember - the purpose of the repository pattern is that you provide an interface that provides back data to some client. If you ever switch your data source, or just want to provide a new data source on the fly (via dependency injection), you've created a much more difficult problem if you do this on a per-method level.
Microsoft's MSDN site has good information on the repository pattern. Hopefully this helps clarify some things.
I disagree with all four points:
Preventing connections from being auto-closed/disposed (will be closed
at the end of the using block).
In my opinion it doesn't matter whether you dispose the context at method level, repository instance level or request level. You do of course have to dispose the context at the end of a single request: either by wrapping the repository method in a using statement, by implementing IDisposable on the repository class (as you proposed) and wrapping the repository instance in a using statement in the controller action, by instantiating the repository in the controller constructor and disposing it in the Dispose override of the controller class, or by instantiating the context when the request begins and disposing it when the request ends (some Dependency Injection containers will help with this work). Why should the context be "auto-disposed"? In a desktop application it is possible and common to have a context per window/view which might be open for hours.
Helps force you to only pull into memory what you need for a
particular view/viewmodel, and in less round-trips (you will get a
connection error for anything you attempt to lazy load).
Honestly I would enforce this by disabling lazy loading altogether. I don't see any benefit of lazy loading in a web application where the client is disconnected from the server anyway. In your controller actions you always know what you need to load and can use eager or explicit loading. To avoid memory overhead and improve performance, you can always disable change tracking for GET requests because EF can't track changes on a client's web page anyway.
Access of child entities within the Controller/View is limited to what
you called with Include()
That is more an advantage than a disadvantage, because you don't have the unwanted surprises of lazy loading. If you need to populate child entities later in the controller actions, depending on some condition, you could load them through additional repository methods (LoadNavigationProperty or something) with the same or even a new context.
For pages like a dashboard index that shows information gathered from
many tables (many different repository method calls), we will add the
overhead of creating and disposing many entity containers.
Creating contexts - and I don't think we are talking about hundreds or thousands of instances - is a cheap operation. I would call this a very theoretical overhead which doesn't play a role in practice.
I've used both approaches you mentioned in web applications and also the third option, namely to create a single context per request and inject this same context into every repository/service I need in a controller action. They all three worked for me.
Of course if you use multiple contexts you have to be careful to do all the work in the same unit of work to avoid attaching entities to multiple contexts, which will lead to well-known exceptions. It's usually not a problem to avoid these situations, but it requires a bit more attention, especially when processing POST requests.
I lately use contexts per request, because it is easier and I just don't see the benefit of having very narrow contexts, and I see no reason to use more than one single unit of work for the whole request processing. If I needed multiple contexts - for whatever reason - I could always create specialized methods which act with their own context instead of the "default context" of the request.

JPA2 Entities Caching

As it stands I am using a JSF request scoped bean to do all my CRUD operations. As I'm sure you most likely know, Tomcat doesn't provide container managed persistence, so in my CRUD request bean I am using an EntityManagerFactory to get hold of an entity manager. Now, about the validity of my choice to use a request scoped bean for this task, it's probably open for a discussion (again), but I've been trying to put it in the context of what I've read in the articles you gave me links to, specifically the first and second one. From what I gather, EclipseLink uses a Level 2 cache by default, which stores cached entities. On the EclipseLink Examples - JPA Caching website it says that:
The shared cache exists for the duration of the persistence unit (EntityManagerFactory, or server)
Now doesn't that make my cached entities live only for the fraction of time during the call that is made to the CRUD request bean, because the moment the bean is destroyed, and with it the EntityManagerFactory, then so is the cache? Also the last part of the above sentence, "EntityManagerFactory, or server", gets me confused: what precisely is meant by "or server" in this context, and how does one control it? If I use the @Cache annotation and set an appropriate expiry attribute, will that do the job and keep the entities stored in the server's L2 cache, regardless of whether my EntityManagerFactory has been destroyed?
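For reference, this is the kind of configuration I have in mind (a sketch based on my reading of the EclipseLink @Cache documentation; the entity and the ten-minute expiry are only examples):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Cache;

// Example entity; expiry is in milliseconds, so 600000 = 10 minutes (value chosen arbitrarily).
@Entity
@Cache(expiry = 600000)
public class Property {

    @Id
    private Long id;

    private String title;

    // getters/setters omitted
}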
I understand there are a lot of considerations and each application has specific requirements. From my point of view, configuring the L2 cache is probably the most desirable (if not the only, on Tomcat) option to get things optimized. Quoting from your first link:
The advantages of L2 caching are:
avoids database access for already loaded entities
faster for reading frequently accessed unmodified entities
The disadvantages of L2 caching are:
memory consumption for large amount of objects
stale data for updated objects
concurrency for write (optimistic lock exception, or pessimistic lock)
bad scalability for frequent or concurrently updated entities
You should configure L2 caching for entities that are:
read often
modified infrequently
not critical if stale
Almost all of the above points apply to my app. At the heart of it, amongst other things, is constant and relentless reading of entities and displaying them on the website (the app will serve as a portal for listing properties). There's also a small shopping cart being built in the application, but the products sold are not tangible items that come as stock but services. In this case stale entities are no problem, and neither, I think, is concurrency, as the products (here services) will never be written to. So the entities will be read often, and they will be modified infrequently (and those that are modified are not part of the cart anyway, and even those are modified rarely), and therefore it is not critical if they are stale. Finally, the first two points seem to be exactly what I need, namely avoidance of database access for already loaded entities and fast reading of frequently accessed unmodified entities. But there is one point in the disadvantages which still concerns me a bit: memory consumption for a large amount of objects. Isn't it similar to my original problem?
My current understanding is that there are two options, only one of which applies to my situation:
To be able to delegate the job of longer-term caching to the persistence layer, I would need to have access to the PersistenceContext, create a session scoped bean and set PersistenceContextType.EXTENDED. (This option doesn't apply to me: no access to PersistenceContext.)
Configure the L2 @Cache annotation on entities, or like in option 1 above create a session scoped bean that will handle long term caching. But aren't these just going back to my original problem?
I'd really like to hear your opinion and see what you think could be a reasonable way to approach this, or perhaps how you have been approaching it in your previous projects. Oh, and one more thing, just to confirm: when annotating an entity with @Cache, will all linked entities be cached along with it, so I don't have to annotate all of them?
Again all the comments and pointers much appreciated.
Thanks for your answer. When you say
"In Tomcat you would be best to have some static manager that holds onto the EntityManagerFactory for the duration of the server."
does that mean I could, for example, declare and initialize a static EntityManagerFactory field in an application scoped bean, to be used later by all the beans throughout the life of the application?
EclipseLink uses a shared cache by default. This is shared for all EntityManagers accessed from an EntityManagerFactory. You do not need to do anything to enable caching.
In general, you do not want to be creating a new EntityManagerFactory per request, only a new EntityManager. Creating a new EntityManagerFactory is quite expensive, so not a good idea, even ignoring caching (it has its own connection pool, must initialize the meta-data, etc.).
In Tomcat you would be best to have some static manager that holds onto the EntityManagerFactory for the duration of the server. Either never close it, or close it when a Servlet is destroyed.
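A minimal sketch of such a static manager (the class and persistence unit names are only examples; the unit name must match your persistence.xml):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Holds a single EntityManagerFactory for the life of the web application.
public final class PersistenceManager {

    // "my-pu" is a placeholder persistence unit name.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("my-pu");

    private PersistenceManager() {
    }

    // Cheap: create a new EntityManager per request/operation and close it when done.
    public static EntityManager createEntityManager() {
        return EMF.createEntityManager();
    }

    // Call this when the application shuts down, e.g. from a ServletContextListener.
    public static void close() {
        EMF.close();
    }
}

An application scoped bean that creates the factory once, as you suggested, would serve the same purpose; the important part is only that the EntityManagerFactory is created once and reused.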

Performance in JavaEE 6 Applications (Glassfish v3) - Logging, DI, Database-Operations, EJBs, Managed Beans

The important technologies I use are: GlassFish v3, JSF 2.0, JPA 2.0, EclipseLink 2.0.2, log4j 1.2.16, commons-logging 1.1.1.
My problem is that some parts of the application are pretty slow. I analysed this with the NetBeans 6.8 profiling capabilities.
I. Logging - I use Log4j and Apache Commons Logging to write logs to a log file and to the console. The logs also appear in GlassFish's server log. I use loggers as follows:
private static Log logger = LogFactory.getLog(X.class);
...
if (logger.isDebugEnabled()) {
    ...
    logger.debug("Log...");
}
The problem is that sometimes such short statements take a lot of time (about 800 ms). When I switch to java.util.logging it's not as bad, but still very slow (in the 200 ms range). What's the problem?
I need some logging... UPDATE: The problem with the slow logging was solved after switching from NetBeans 6.8 to NetBeans 6.9.1. NetBeans 6.8 is possibly very slow when logs are printed to its console?! So it had nothing to do with Log4j or Commons Logging.
II. DB Operation:
The first time I call the find method of the following EJB it takes 2.4 s! Additional calls take only a few ms. So why does the first operation take that long? Is this (only) because of the connection establishment, or does it have something to do with the dependency injections of the XFacade, and when are these injections performed?
@Stateless
@PermitAll
public class XFacade {

    @PersistenceContext(unitName = "de.x.persistenceUnit")
    private EntityManager em;

    // Other DI's
    ...

    public List<News> find(int maxResults) {
        return em.createQuery("SELECT n FROM News n ORDER BY n.published DESC")
                 .setMaxResults(maxResults)
                 .getResultList();
    }
}
III. Dependency Injection, JNDI Lookup: Is there a difference between DI (like @EJB ...) and InitialContext lookups concerning performance? Is there a difference (performance-wise) between injecting local, remote and no-interface EJBs?
IV. Managed Beans - I use many session scoped beans, because the view scope seems to be very buggy and request scope is not always practical. Is there an alternative? These beans are not slow, but the server-side memory is stressed during a whole session, and when a user logs out it takes some time!
V. EJBs - I don't use MDBs, only session beans and singleton beans. They often inject other beans with the @EJB annotation. One singleton bean uses @Schedule annotations to start repeated operations. An interesting thing I found is that since EJB 3.1 you can use the @Asynchronous annotation to make session bean methods asynchronous. What should I generally consider when implementing EJBs concerning performance?
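For example, as far as I understand it, the annotation is used roughly like this (only a sketch of my understanding, not code from our application):

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ReportBean {

    // The container returns to the caller immediately and runs the method on a separate thread.
    @Asynchronous
    public Future<byte[]> generateReport(long reportId) {
        byte[] data = renderReport(reportId); // placeholder for the long-running work
        return new AsyncResult<byte[]>(data);
    }

    private byte[] renderReport(long reportId) {
        return new byte[0];
    }
}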
Maybe someone could give me some general and/or specific tips to increase the performance of javaee applications, especially concerning the above issues. Thanks!
To start with, you should benchmark your application in a real environment using load-testing tools; you can't really draw valid conclusions from the behavior you observe in your IDE. On top of that, don't forget that profiling actually alters performance.
I. Logging (...) The problem with the slow logging was solved after switching from NetBeans 6.8 to NetBeans 6.9.1.
That's the first proof that you can't trust the behavior inside your IDE.
II. DB Operation: The first time I call the find method of the following EJB it takes 2.4 s! Additional calls take only a few ms. So why does the first operation take that long?
Maybe because some GlassFish services are (lazily) loaded, maybe because the stateless session beans (SLSB) have to be instantiated, maybe because the EntityManagerFactory has to be created. What did the profiling say? What do you see when activating app server logging? And what's the problem, since subsequent calls are OK?
III. Dependency Injection, JNDI Lookup: Is there a difference between DI (like @EJB ...) and InitialContext lookups concerning performance?
JNDI lookups are expensive, and it was a de facto practice to use some caching in the good old service locator. I thus don't expect performance to be worse when using DI (I actually expect the container to be good at it). And honestly, it has never been a concern for me.
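(For the record, that old pattern was roughly this, in a very simplified sketch - each JNDI name is looked up once and the result cached:)

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Simplified service locator: caches JNDI lookups so each name is resolved only once.
public class ServiceLocator {

    private static final Map<String, Object> CACHE = new ConcurrentHashMap<String, Object>();

    public static Object lookup(String jndiName) throws NamingException {
        Object service = CACHE.get(jndiName);
        if (service == null) {
            service = new InitialContext().lookup(jndiName);
            CACHE.put(jndiName, service);
        }
        return service;
    }
}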
When you work on performance optimization, the typical workflow is 1) detect a slow operation, 2) find the bottleneck, 3) work on it, 4) if the operation is still not fast enough, go back to 2). In my experience, the bottleneck is 90% of the time in the DAL. If your bottleneck is DI, you have no performance problem IMO. In other words, I think you're worrying too much and you're very close to "premature optimization".
IV. Managed Beans - I use many session scoped beans, because the view scope seems to be very buggy and request scope is not always practical. Is there an alternative? These beans are not slow, but the server-side memory is stressed during a whole session, and when a user logs out it takes some time!
I don't see any question :) So I don't have anything to say. Update (answering a comment): Using a conversation scope might be indeed less expensive. But as always, measure.
V. EJBs - I don't use MDBs, only session beans and singleton beans. They often inject other beans with the @EJB annotation. One singleton bean uses @Schedule annotations to start repeated operations. An interesting thing I found is that since EJB 3.1 you can use the @Asynchronous annotation to make session bean methods asynchronous. What should I generally consider when implementing EJBs concerning performance?
SLSBs (and MDBs) perform very well in general. But here are some points to keep in mind when using SLSBs:
Prefer Local over Remote interfaces if your clients and EJBs are collocated to avoid the overhead of remote calls.
Use Stateful Session Beans (SFSB) only if necessary (they are harder to use, managing state has a performance cost and they don't scale very well by nature).
Avoid chaining EJBs too much (especially if they are remote); prefer coarse-grained methods over multiple fine-grained method invocations.
Tune the SLSB pool (so that you have just enough beans to serve your concurrent clients, but not too many, to avoid wasting resources).
Use transactions appropriately (e.g. use NOT_SUPPORTED for read-only methods; see the sketch after this list).
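As an illustration of the last point (a sketch only; the bean name is made up, the query is the one from your XFacade):

import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class NewsReadFacade {

    @PersistenceContext(unitName = "de.x.persistenceUnit")
    private EntityManager em;

    // Read-only finder: NOT_SUPPORTED suspends any transaction instead of starting/joining one.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public List<News> findLatest(int maxResults) {
        return em.createQuery("SELECT n FROM News n ORDER BY n.published DESC", News.class)
                 .setMaxResults(maxResults)
                 .getResultList();
    }
}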
I would also suggest not using something unless you really need the feature and there is a real problem to solve (e.g. @Asynchronous).
Maybe someone could give me some general and/or specific tips to increase the performance of javaee applications, especially concerning the above issues. Thanks!
Focus on your data access code; I'm pretty sure it represents 80% of the execution time of most business flows.
And as I already hinted, are you sure that you actually have a performance problem? I'm not convinced. But if you really are, measure performance outside your IDE.
I agree with Pascal Thivent in that "premature optimization" is bad. Design your code well, and then worry about performance.
Also, you can optimize for performance or memory-conservation. Pick which one is causing you headaches in the morning, and optimize for that.
I actually really like NetBeans' profiler; it's one of the best free ones, IMHO.
I would recommend:
Profiling for performance, but limiting the scope to either just your own packages (e.g., de.x) or excluding Java core classes. This is so your application isn't bogged down with profiling and giving you misleading performance numbers. Basically, try to get the "Overhead" as low as possible, as indicated by the bar at the bottom.
Browse around your web app for a while and see what is taking up most of your CPU time. Also take note of the fact that some objects are lazily loaded, so the first time you access pieces of code they may be slow. Also, you're using a JIT compiler, so code may be slow the first (or second...) time you run it, but it will get faster as you use that piece of code more often. In short, use the web app for a while. You can "record" a web session using Selenium and "play" it back. This will allow you to get better performance metrics and see where your application really is slow instead of measuring "warm up time."
If there is a specific piece of code or class that is causing you trouble, use profiling points to see exactly what is causing that piece of code to be slow.
If you have a specific piece of code that's causing you problems, feel free to post about that.

Resources