JMeter - Throughput increases after ehcache clear - performance

While executing a JMeter test on the staging environment, we ran the ehcache clear command, which removed all site cache. Since the ehcache got cleared, we were expecting the performance and throughput to go down for some time. Instead, the number of transactions per second (throughput) increased drastically.
What can be the explanation for this?

It could be a bug or a wrong implementation of ehcache; you can check a detailed account of how ehcache was dissected:
...database connections were kept open. Which meant that the database
started to slow down. This meant that other activity started to take
longer as well...
and in summary:
for a non-distributed cache, it performs well enough as long as you
configure it okay.
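For what it's worth, "configuring it okay" usually means bounding the cache and giving entries a finite lifetime in ehcache.xml. A minimal sketch (the cache name and numbers are illustrative, not from the original setup):

<ehcache>
    <!-- bound the heap footprint and let entries expire instead of living forever -->
    <cache name="siteCache"
           maxEntriesLocalHeap="10000"
           timeToLiveSeconds="300"
           eternal="false"/>
</ehcache>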
Also check these guidelines, which conclude in an interesting way:
we learned that we do not need a cache. In fact, in most cases where
people introduce a cache it is not really needed. ...
Our guidelines for using a cache are as follows:
You do not need a cache.
Really, you don’t.
If you still have a performance issue, can you
solve it at the source? What is slow? Why is it slow? Can you
architect it differently to not be slow? Can you prepare data to be
read-optimized?

If it's not due to a slow EhCache configuration, I suppose the explanation could be that:
You don't have a Response Assertion in your test plan, so the result is based only on the response code, which might be 200 even though the page returned was not the one you requested (see the sketch after the link below).
The pages served after the clear may be lighter (error pages? default pages?) and so do not require as much work to generate or transfer, which would inflate throughput.
See:
http://www.ubik-ingenierie.com/blog/best-practice-using-jmeter-assertions/
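To illustrate what a Response Assertion guards against, here is a minimal sketch in plain Java - the URL and the page marker are assumptions, not taken from the original test plan. It checks the body content the way an assertion would, rather than trusting the status code alone:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseAssertionSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create("https://staging.example.com/home")).build(),
                HttpResponse.BodyHandlers.ofString());
        // A bare status-code check would pass even for an error or default page.
        boolean realPage = resp.statusCode() == 200
                && resp.body().contains("<title>Home</title>"); // expected marker: an assumption
        System.out.println(realPage ? "PASS" : "FAIL: 200 OK but not the requested page");
    }
}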

Related

Dynacache - Caching everything

I have taken over an application that serves around 180 TPS. The responses are always SOAP XML responses with a size of around 24,000 bytes. We have been told that we have a dynacache, and I can see that we have a cachespec.xml. But I am unable to understand how many entries it holds currently and what its max limit is.
How can I check this? I have tried DynamicCacheAccessor.getDistributedMap().size() but this always returns 0.
We have a lot of data inconsistencies because of internal Java HashMap caching layers. What are your thoughts on increasing dynacache and eliminating the internal caching? How much server memory might this consume?
Thanks in advance
The DynamicCacheAccessor accesses the default servlet cache instance, baseCache. If size() always returns zero then your cachespec.xml is configured to use a different cache instance.
Look for a directive in the cachespec.xml:
<cache-instance name="cache_instance_name"></cache-instance> to determine what cache instance you are using.
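If cachespec.xml does name a separate instance, a minimal sketch for reading its size could look like the following. The JNDI name services/cache/soapCache is hypothetical, and this assumes the instance is bound in JNDI as a DistributedMap (as object cache instances are):

// Sketch: look up the named cache instance instead of the default baseCache.
import javax.naming.InitialContext;
import com.ibm.websphere.cache.DistributedMap;

public class NamedCacheSize {
    public static int currentEntries() throws Exception {
        InitialContext ic = new InitialContext();
        // JNDI name must match the <cache-instance> in cachespec.xml (hypothetical here)
        DistributedMap map = (DistributedMap) ic.lookup("services/cache/soapCache");
        return map.size(); // DistributedMap extends java.util.Map
    }
}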
Also install the Cache Monitor from the installableApps directory; see the Monitoring and CacheMonitor documentation. The Cache Monitor is an invaluable tool when developing or maintaining an app that uses servlet caching.
If you are using Liberty, install the webCacheMonitor-1.0 feature.
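On Liberty that is a one-line addition to server.xml (shown here alongside whatever features you already load):

<featureManager>
    <!-- existing features ... -->
    <feature>webCacheMonitor-1.0</feature>
</featureManager>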

Liferay 6.2 Session autoextend Disadvantages

I found that it's possible to automatically extend Liferay's session so that the session doesn't expire until you close your browser. Are there any limitations or disadvantages to such an approach? Any performance degradation or load issues?
As with any abstract question about hypothetical performance impact (or premature optimization), this question is basically unanswerable - but here are some criteria:
Naturally, pinging the server in order to extend a session will incur some extra load - if that results in a performance decrease, you'll most likely have a highly congested installation in the first place. If your server is bored all day, the extra ping won't bring it down.
You may or may not have custom applications running in your installation that store data in the user's session. If those store a few bytes (like Liferay does, e.g. the currently logged-in user's information), there's probably no degradation. If you store 1 MB of information per session (in your own custom apps - Liferay doesn't do this), things might differ. Just multiply your session storage size by the number of concurrent users that you expect; e.g. 1 MB per session × 2,000 concurrent sessions ≈ 2 GB of heap for session state alone. In case this use of memory indicates a problem: make your custom apps use the session less - it's bad style anyway.
Will your particular installation suffer from any degradation? Measure. There's no way around this.
From a system maintenance point of view: If you're running a cluster and want to take individual machines out of the load balancer: Artificially extending sessions might indicate that a machine still has sessions open, even though they're mostly on unattended browsers - you'll get inflated numbers and it takes longer to bring machines down when you need to wait for the session count to come close to zero.

IIS7 Performance Issues for Web-services

We are experiencing slow processing of requests under heavy load. When looking at the currently running requests during these bursts I can see many requests to our web-service code.
The number of requests is not that large, but they appear to be stuck in a preprocessing state.
We are running an IIS7 app pool in classic mode due to the need to support some legacy code.
Other requests continue to be processed but these stuck requests gradually seem to fill up the available threads leading to slow processing of other pages.
Does anyone have any idea where these requests are getting stuck?
There appears to be no resource issue with the DB, and the request states suggest this is all preprocessing.
We have run load tests on the code involved on local machines and cannot replicate the issue.
Another possible factor is we are making use of MVC and UrlRouting.
Many thanks for any help.
Some issues unfortunately only happen on production servers, as load tests can never fully simulate real-world users.
You can try to capture hang dumps when performance is bad, and then analyze them (on your own or open a support case via http://support.microsoft.com to work with Microsoft support).
You might have hit the famous thread pool bottleneck, http://support.microsoft.com/kb/821268. Dump analysis can easily identify the culprit and help locate a solution.
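For reference, KB 821268 revolves around tuning a handful of machine.config values. A sketch of the settings it discusses - the KB's recommended starting points are per-CPU formulas (minFreeThreads = 88 * N, minLocalRequestFreeThreads = 76 * N, maxconnection = 12 * N, where N is the number of CPUs), so treat the numbers below as placeholders to adapt, not drop-in values:

<!-- machine.config (classic/ISAPI pipeline); values shown for a single CPU -->
<processModel maxWorkerThreads="100" maxIoThreads="100" />
<httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
<connectionManagement>
    <add address="*" maxconnection="12" />
</connectionManagement>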
Why not move them into their own AppPool to separate them from the Classic ASP app? You'll then have more options to tune.

MVC3 memory management

What is the best way to check memory usage in an ASP.NET MVC3 application?
I have been told by my hosting provider to recycle the IIS application pool every so often to improve the speed of the site. Is this 'recommended practice'? Surely I shouldn't need to restart my application every so often? I'd much rather find out whether it is an issue with memory usage in my application and correct it. So any tips & best practices you use would be quite helpful too.
The application is based on ASP.NET MVC3, C# and EF Code First. Any guidance, links appreciated.
EDIT:
I found this page after I posted, which is quite useful. But I'd still like to hear any other views.
ASP.NET MVC and EF Code First Memory Usage
Thank you
I have a site that never recycles (until the machine is rebooted weekly)
Your application generally should keep performing fine. If it doesn't, there is some leak.
This can occur because
Cache never expires
Session storage keeps growing and never times out
ObjectContexts are never disposed and are kept in the session, etc.
Objects that should be disposed aren't
Objects that are created via a dependency injection container aren't setup to release after each request, and thus potentially have internal collections that keep growing.
There are more causes - but these are a few main ones.
So the real answer is: there is no best practice - it depends on your app.
If you are worried about current sessions during a restart, keep in mind that a restart can be quick, current requests are (sometimes) allowed to finish, and forms authentication tokens will survive the restart; sessions, however, will not, unless you configure an out-of-process state server.
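For completeness, moving session state out of process is a small web.config change. A sketch - the endpoint and timeout are placeholders, and the "ASP.NET State Service" must be running on the target machine:

<!-- web.config: keep sessions outside the worker process so they survive recycles -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424"
              timeout="20" />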
If your memory usage keeps growing, then set up a restart schedule; otherwise restart once a week or never - or set it to recycle once memory goes to XYZ. ASP.NET will also restart the process automatically once a certain threshold is reached, based on what the hoster has set up for memoryLimit:
http://msdn.microsoft.com/en-us/library/7w2sway1.aspx
By default IIS recycles the application pool automatically at a fixed interval (1740 minutes, i.e. 29 hours), but that is surely set by the host, no matter how little or how much memory the process is using. The recycling trigger can be a time interval or the process hitting a certain memory usage limit. I'm sure any shared host has both of them set.
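If your host lets you inspect or change those triggers yourself, appcmd can do it. A sketch, assuming an app pool named "MyAppPool" (the name and the limits are hypothetical):

rem recycle every 29 hours (d.hh:mm:ss format)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:1.05:00:00
rem also recycle if private memory exceeds 1 GB (value is in KB)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:1048576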
About memory usage, you can use the GC.GetTotalMemory method which will give you an approximate usage. Even when using Perfmon the readings aren't very accurate but it gives you an idea.
//global.asax.cs
protected void Application_EndRequest(object sender, EventArgs e)
{
    // Only annotate HTML pages, not CSS/JS/image responses.
    var ctype = Context.Response.ContentType;
    if (ctype == null || !ctype.Contains("text/html")) return;

    // GetTotalMemory(false) reports the managed heap size without forcing a collection.
    Context.Response.Write(string.Format("<p>Memory usage: {0}</p>", GC.GetTotalMemory(false)));
}
Be aware that you'll see the usage increasing until the GC kicks in, at which point the usage will drop to a more 'realistic' value.
If you have the money, I recommend a specialized tool such as a memory profiler.
Other things you can do to at least be ready if the application has memory or performance problems:
Proper layering of the application means you can refactor the more inefficient parts without affecting the others.
The Repository pattern will be very helpful, because you can start with EF, find out that EF uses too much memory (like in the link you've found), and then switch the repository implementation to PetaPoco or Dapper.net.
In general an ORM is a fairly heavy library; if the application doesn't need ORM features but just a quick way to work with a db, use a micro-ORM like those mentioned above from the beginning.
Always dispose objects implementing IDisposable.
When dealing with large db record sets, use pagination. It's good for both server resource usage and user experience.
Apply the YAGNI (You Ain't Gonna Need It) principle as much as possible; this somehow implies a bit of TDD :)

Strange performance using JPA, am I missing something?

We have a JPA -> Hibernate -> Oracle setup where we are only able to crank up to 22 transactions per second (two reads and one write per transaction). CPU, disk, and network are not bottlenecks.
Is there something I am missing? I wonder if there could be some sort of Oracle-imposed limit that the DBAs have applied?
Network is not the problem: when I do raw reads on the table, I can do 2,000 reads per second. The problem is clearly the writes.
CPU is not the problem on the app server; the CPU is basically idling.
Disk is not the problem on the app server; the data is completely loaded into memory before the processing starts.
Might be worth comparing performance with a different client technology (or even just a simple test using SQL*Plus) to see if you can beat this performance anyway - it may simply be an under-resourced or misconfigured database.
I'd also compare the results for SQL*Plus running directly on the d/b server with it running locally on whatever machine your Java code is running on (where it is communicating over SQL*Net). This would confirm whether the problem is below your Java tier.
To be honest, there are so many layers between your JPA code and the database itself that diagnosing the cause is going to be fun... I recall one mysterious d/b performance problem that turned out to be a misconfigured network card - the DBAs were rightly insistent that the database wasn't showing any bottlenecks.
It sounds like the application is doing a transaction in a bit less than 0.05 seconds. If you extract the SELECT and UPDATE statements from the app and run them by themselves, using SQL*Plus or some other tool, how long do they take, and do their times add up to something near 0.05 seconds (see the sketch below)? Where does the data come from that is used in the queries, and which eventually gets used in the UPDATE? It's entirely possible that the slowdown is not in the database but somewhere else in the app, such as the data acquisition phase. Perhaps a profiler could be used to find out where the app is spending its time.
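One way to time the raw statements from the app machine, without the JPA/Hibernate layers in between, is a bare JDBC loop. The connection string, credentials, and SQL below are placeholders, and the Oracle JDBC driver must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RawSqlTiming {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string and credentials
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521/ORCL", "user", "pass")) {
            con.setAutoCommit(false);
            int iterations = 100;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE some_table SET some_col = ? WHERE id = ?")) { // placeholder SQL
                    ps.setInt(1, i);
                    ps.setInt(2, 1);
                    ps.executeUpdate();
                }
                con.commit(); // one commit per transaction, as the app does
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%d txns in %d ms (%.1f TPS)%n",
                    iterations, elapsedMs, iterations * 1000.0 / elapsedMs);
        }
    }
}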
Share and enjoy.
