Spring Framework's InvocableHandlerMethod.getMethodArgumentValues eating a lot of CPU

I have a web application written in Java with Spring 4.0, deployed on Tomcat (on Red Hat Linux). On profiling our webapp with JProfiler we found that most of the time is spent inside the Spring Framework, and this is causing a slowdown in our APIs. For example, consider the data below, which shows that out of 614 seconds, 609 seconds are spent inside Spring, and this is for 105 API calls, which works out to roughly 6 seconds per API call.
So I wanted to know if there is any configuration in Spring that can avoid this overhead.
EDIT: Adding some more data that I got from JProfiler:
91.0% - 614 s org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest
90.2% - 609 s org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues
55.9% - 377 s org.springframework.validation.DataBinder.convertIfNecessary
34.2% - 231 s org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.resolveName
0.8% - 5,709 ms org.springframework.web.method.support.InvocableHandlerMethod.invoke
EDIT:
On drilling down further I found that, out of this 90.2%, 88% of the time is spent in the two methods below:
org.springframework.util.ConcurrentReferenceHashMap.put
org.springframework.util.ConcurrentReferenceHashMap.get
and they are being called from
org.springframework.core.ResolvableType.forType
Has anybody else observed this on Linux with a Spring app?
FYI, my controller method has 23 query parameters and 9 of them are List<>. Will this create any problem? Am I not supposed to have this many query parameters (@RequestParam)?

park does not perform a busy wait. It actually doesn't even know the condition the thread is waiting for; that's up to the caller. However, you can still see a lot of CPU consumption if park is called very often, e.g. when unpark has been called but, after re-checking the wait condition, park is called again. Then the fixed overhead of park accumulates.
So the situation here seems to be that you have heavy contention on a particular lock. From the stack trace you have posted, I would guess that the ConcurrentReferenceHashMap has been configured with a concurrency level far smaller than your actual number of threads.
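To illustrate what "concurrency level" means here: Spring's ConcurrentReferenceHashMap takes the level (the number of internal segments, 16 by default) in one of its constructors, as in the sketch below. Note that the cache used by ResolvableType.forType lives inside spring-core and cannot be tuned from application code; this only demonstrates the parameter being referred to.

import org.springframework.util.ConcurrentReferenceHashMap;

public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        // Default settings: 16 segments, so at most 16 writers proceed without contending.
        ConcurrentReferenceHashMap<String, Object> defaultMap =
                new ConcurrentReferenceHashMap<String, Object>();
        // Sized for a pool of a couple of hundred request threads.
        ConcurrentReferenceHashMap<String, Object> widerMap =
                new ConcurrentReferenceHashMap<String, Object>(256, 256);
        defaultMap.put("a", Boolean.TRUE);
        widerMap.put("a", Boolean.TRUE);
    }
}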

Seems like a custom argument resolver did the trick. After creating one as described in
Spring HandlerMethodArgumentResolver not executing
the JProfiler timings are fine, and my custom argument resolver is not eating as much CPU as HandlerMethodArgumentResolverComposite.resolveArgument was doing.
That extra 90.2% of the time has now dropped to just 2% (tens of seconds).
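For reference, a minimal sketch of the kind of resolver described above. The SearchFilter type, its fields and the parameter names are hypothetical stand-ins for the 23 query parameters in the question; the actual resolver from the linked answer may look different.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.springframework.core.MethodParameter;
import org.springframework.web.bind.support.WebDataBinderFactory;
import org.springframework.web.context.request.NativeWebRequest;
import org.springframework.web.method.support.HandlerMethodArgumentResolver;
import org.springframework.web.method.support.ModelAndViewContainer;

// Hypothetical holder for the controller's query parameters.
class SearchFilter {
    private String name;
    private List<String> ids = Collections.emptyList();
    public void setName(String name) { this.name = name; }
    public void setIds(List<String> ids) { this.ids = ids; }
    public String getName() { return name; }
    public List<String> getIds() { return ids; }
}

public class SearchFilterArgumentResolver implements HandlerMethodArgumentResolver {

    @Override
    public boolean supportsParameter(MethodParameter parameter) {
        // Only handle controller parameters of the SearchFilter type.
        return SearchFilter.class.equals(parameter.getParameterType());
    }

    @Override
    public Object resolveArgument(MethodParameter parameter,
                                  ModelAndViewContainer mavContainer,
                                  NativeWebRequest webRequest,
                                  WebDataBinderFactory binderFactory) {
        // Read the raw query parameters once and populate the object directly,
        // avoiding the per-parameter DataBinder conversion seen in the profile.
        SearchFilter filter = new SearchFilter();
        filter.setName(webRequest.getParameter("name"));
        String[] ids = webRequest.getParameterValues("ids");
        filter.setIds(ids != null ? Arrays.asList(ids) : Collections.<String>emptyList());
        // ... repeat for the remaining parameters ...
        return filter;
    }
}

In Spring 4.x such a resolver can be registered by overriding addArgumentResolvers(...) in a WebMvcConfigurerAdapter (or via <mvc:argument-resolvers> in XML config).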

Related

Heap size (RAM consumption) does not decrease after executing a save query (Spring Boot)

I noticed that after inserting 400,000 rows into MySQL using Spring Boot, the heap size and RAM consumption go up but never drop back down afterwards.
If a similar request is made a second time, the heap grows by an additional 5-10%.
What could the problem be?
Why is the heap not cleared after executing a save query?
Is there a problem with the garbage collector? If yes, how can it be fixed?
P.S.: I tried some of the solutions provided in this article, but they did not help:
https://medium.com/quick-code/what-a-recurring-outofmemory-error-taught-me-11f2061063a1
Edited:
I tried to select 1.2 million records in order to fill up the heap.
After 20-25 minutes had passed, the heap started to decrease. Why is this happening? Isn't the garbage collector supposed to clear the heap faster? This way, if other requests come in during those 25 minutes, the server will just crash.
Edited 2:
It seems that when trying the same thing on EC2, the garbage collector does not work at all. It has already happened 3 times that the server simply runs out of memory and crashes.
Does anyone know the cause?
I ran 1 @Async thread which in turn ran another 2 @Async threads called from different beans. They finished execution, and after that the heap never went down.
I do not have any @Cacheable methods or anything similar. I am just getting some data from tables, processing it and updating it. I have no input streams, etc.

H2 database: Does a 60 second write delay have adverse effects on db health?

We're currently using H2 version 199 in embedded mode with default nio file protocol and MVStore storage system. The write_delay parameter is set to 60 seconds.
We run a batch insert/update/delete of about 30,000 statements within 2 seconds (in one transaction), followed by another batch of a couple of hundred statements only 30 seconds later (in a second transaction). The next attempt to open a DB connection (only 2 minutes later) shows that the DB is corrupt:
File corrupted while reading record: null. Possible solution: use the recovery tool [90030-199]
Since the transactions occur within a minute, we wonder whether the write_delay of 60 seconds might be contributing to the issue.
Changing write_delay to 60 s (from the default of 0.5 s) will definitely increase your risk of lost transactions, and I do not see a good reason for doing it. It should not cause database corruption, though. More likely some thread interruptions do that, since you are running a web server and who knows what else in the same JVM. Using the async file store might help in that area, and yes, it is stable enough (how much worse can it get for your app than a database corruption, anyway?).
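For reference, a small sketch of the two settings discussed here. The database path and credentials are placeholders; WRITE_DELAY is in milliseconds (500 ms being the H2 default), and the async: prefix selects the asynchronous file store mentioned above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2WriteDelayExample {
    public static void main(String[] args) throws Exception {
        // Open the embedded database with the async file store and the default write delay.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:async:./data/appdb;WRITE_DELAY=500", "sa", "")) {
            try (Statement st = conn.createStatement()) {
                // The delay can also be changed at runtime for the current database.
                st.execute("SET WRITE_DELAY 500");
            }
        }
    }
}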

JMeter response times much larger than the requests' latencies

JMeter machines with versions: 2.13 r13365067, 2.11.20140918
Java: OpenJDK 1.7.0_79
OS: Debian 8.1
I'm having a problem where some HTTP requests seem to take far too long to process on a load injector that isn't really under load.
Examples from result files from tests with 20 vUs (with caching, on the weaker load injector, JMeter v2.11) and 40 vUs (without caching, on the much higher-specced load injector, JMeter v2.13):
<time_stamp>,3257,<request_name>,200,<thread_name>,true,28537,20,20,437
<time_stamp>,5158,<request_name>,304,<thread_name>,true,138,40,40,0
Memory is at 75% in the first case, and below 50% in the second. CPU doesn't seem to spike (measured in 1 sec intervals) and goes up to 20% max in both examples.
Checked the JVM's garbage collection, and it doesn't seem like the GC is at its limits at the time of the requests either (actually at no point during the test).
I noticed this in the case where I had caching enabled (via the Cache Manager with "Use Cache-Control/Expires headers..." checked) and, as in the second example above, got the unrealistic response time of 5158 ms.
This only happens at some steps during an iteration and to more than one thread, but not all.
It seems like JMeter is somehow taking too long to process the result, but I can't see that my load injectors are under heavy enough load to cause processing times of several seconds.
Clearly this is messing up the performance statistics, so I would like to know how this is happening.
Hope someone can help.
EDIT:
#First example: Case where ResponseTime >> Latency > 0; happens on both JMeter machines (JMeter v2.11, JMeter v2.13).
#Second example: Case where ResponseTime >> Latency = 0; happens only on the machine with JMeter v2.13.
2nd EDIT:
Turns out it doesn't matter what JMeter version I run (or on which node).
Regex'd my result file:
Of the same requested resources, cached (latency = 0), about 10% took one second or several seconds with the header check enabled; without the header check it is 6%.
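To reproduce that filtering, a rough sketch. The column positions are taken from the sample lines above (elapsed time is the second field, latency the last); adjust them if your jmeter.save.saveservice column set differs.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class JtlFilter {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8);
        long cached = 0;
        long slowCached = 0;
        for (String line : lines) {
            String[] f = line.split(",");
            if (f.length < 2) {
                continue;
            }
            try {
                long elapsed = Long.parseLong(f[1]);
                long latency = Long.parseLong(f[f.length - 1]);
                if (latency == 0) {            // sample served from the Cache Manager
                    cached++;
                    if (elapsed >= 1000) {     // "took one second or several seconds"
                        slowCached++;
                    }
                }
            } catch (NumberFormatException e) {
                // Skip the header line or any non-sample rows.
            }
        }
        System.out.printf("%d of %d cached samples took >= 1 s%n", slowCached, cached);
    }
}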
You should run the same JMeter version on all nodes. If this doesn't solve the problem, monitor your JMeter instance's resource utilisation with JConsole.

infinispan hot rod delay

We are using Infinispan Hot Rod in our application.
Sometimes retrieval from the cache takes more time. This is not happening consistently. Most of the time it takes 6 ms, but at times it takes very long (200 ms).
The size of the object retrieved from the cache is around 200 bytes.
We tested both Infinispan 5.2.1 and JDG 6.3.2.
Did anybody face this issue?
Thanks
Lives
Remember that you're running Java, which means the garbage collector can fire at any time; that will cost you 200 ms if you're very lucky, several seconds if you're not, and up to minutes if you have a large heap and poorly tuned GC settings.
As retrieval from a distributed cache requires an RPC to another node, where that RPC is then handled, thread scheduling also plays a vital role, and in a busy system it's no surprise to have the thread waiting.
From the Infinispan perspective, there's nothing the retrieval should wait for. The request gets translated into an RPC to the remote node, where it's handled by the same thread that received the message. The request does not wait for any locks.
In JGroups, there may be some delay involved. The message can get lost on the network, or discarded on the receiver if it cannot handle the load, and then it is resent. Also, the UFC protocol makes sure that the sender's speed matches what the receiver can handle.
Anything built on top of non-realtime Java works with best effort, and sometimes sh!t happens. 200 ms is still a good response time.
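One way to check the GC theory is to time the gets yourself and line the outliers up against a GC log (-verbose:gc). A rough sketch using the fluent Hot Rod client configuration (available in more recent clients and JDG 6.x; the 5.2.x client uses a Properties-based setup instead, and the host, port, cache and key names below are placeholders):

import java.util.concurrent.TimeUnit;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodLatencyProbe {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        RemoteCache<String, byte[]> cache = manager.getCache("testCache");

        for (int i = 0; i < 10000; i++) {
            long start = System.nanoTime();
            cache.get("key-" + (i % 100));
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            if (elapsedMs > 50) {
                // Log the wall-clock time of each outlier so it can be matched to a GC pause.
                System.out.println(System.currentTimeMillis() + " slow get: " + elapsedMs + " ms");
            }
        }
        manager.stop();
    }
}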

WP7 Max HTTPWebRequests

This is kind of a two-part question.
1) Is there a max number of HttpWebRequests that can be run at the same time in WP7?
I'm going to create a ScheduledTaskAgent to run a PeriodicTask. There will be 2 different REST service calls: the first will get a list of IDs for records that need to be downloaded; the second will be used to download those records one at a time. I don't know how many records there will be; my guesstimate would be around 50.
2) Would making all the individual record requests at once be a bad idea (assuming that it's possible), or should I wait for a request to finish before starting another?
Having just spent a week and a half working at getting a BackgroundAgent to stay within its memory limits, I would suggest doing them one at a time.
You lose about half your memory to system libraries and the like; your first web request will take nearly another 20%, but it seems to reuse that memory on subsequent requests.
If you need to store the results into a local database, it is going to take a good chunk more. I have found a CompiledQuery uses less memory, which means holding a single instance of your context.
Between each call I would suggest doing a GC.Collect(); I even add a short Thread.Sleep() just to be sure the process has some time to tidy things up.
Another thing I do is track how much memory I am using and attempt to exit gracefully when I get to around 97 or 98%.
You can not use the debugger to test memory limits as the debug memory is much higher and the limits are not enforced. However, for comparative testing between versions of your code, the debugger does produce very similar result on subsequent runs over the same code.
You can track your memory usage with Microsoft.Phone.Info.DeviceStatus.ApplicationCurrentMemoryUsage and Microsoft.Phone.Info.DeviceStatus.ApplicationMemoryUsageLimit
I write a status log into IsolatedStorage so I can see the result of runs on the phone, and use ScheduledActionService.LaunchForTest() to kick them off. I then use ShellToast notifications to let me know when the task runs and also when it completes; that way I can launch my app to read the status log without interrupting it.
Tyler,
My 2 cents here.
I don't believe there is any restriction on how many HttpWebRequests you can spin up. These, however, have to be async, of course, and may be served from the browser stack. Most modern browsers, including IE9, handle over 5 concurrent requests to the same domain, but you are not guaranteed a request handle immediately. However, it should not matter if you are willing to wait on a separate thread, dump your content onto the request pipe, and wait for the response on yet another thread. This post (here) has a nice walkthrough of why we need to do this.
Nothing wrong with this approach either, IMO. You're just going to have to wait until all the requests have their respective pipelines and then wait for the responses.
Thanks!
1) Your memory limit in a PeriodicTask or ResourceIntensiveTask is 5 MB, so you definitely should control your requests really carefully. I don't think there is a limit in the code.
2) You have only 5 MB, so if you start all your requests at the same time it will terminate immediately.
3) I think you should rather use a ResourceIntensiveTask, because a PeriodicTask should only run for 15 seconds.
Good guide for Multitasking features in Mango: http://blogs.infosupport.com/blogs/alexb/archive/2011/05/26/multi-tasking-in-windows-phone-7-1.aspx
I seem to remember (but can't find the reference right now) that the maximum number of requests that the OS can make at once is 7. You should avoid making this many at once though as it will stop other/system apps from being able to make requests.