Improve caching performance in Mule

I am using Anypoint Studio 6.1 and Mule 3.8.1, and I'm seeing performance problems that appear to come down to the Cache scope.
The cache is a managed store (so I can invalidate the cache when new data is loaded) and has the following values:
Max Entries: 1000
Entry TTL: 84600
Expiration Interval: 84600
The response returns approx 200 JSON records.
Is there any way to improve this and get a faster response?
Thanks

Expiration Interval is the frequency with which the object store checks for expired cached response events. It can be set anywhere from one second up to hours; depending on the message rate you expect, try different values and measure your application's performance.
Also, try an in-memory object store for your caching strategy. It keeps responses in system memory, so it is a little faster, but you have to be careful with its size to avoid OutOfMemoryError.
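For intuition only (this is a hand-rolled Java sketch, not Mule's object store API): an in-memory cache is fast because a hit is a plain map lookup with no I/O, and settings like maxEntries and TTL exist purely to bound how much heap the cache can pin.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only -- NOT Mule's object store API. Shows why an in-memory
// store is fast (a hit is one map lookup) and why maxEntries/TTL matter
// (they bound how much heap the cache can pin).
public class TtlCache<K, V> {

    private static final class Entry<T> {
        final T value;
        final long expiresAtMillis;
        Entry(T value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final int maxEntries;

    public TtlCache(long ttlMillis, int maxEntries) {
        this.ttlMillis = ttlMillis;
        this.maxEntries = maxEntries;
    }

    public void put(K key, V value) {
        if (store.size() >= maxEntries) {
            evictExpired();                 // best-effort cleanup before adding
        }
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null || e.expiresAtMillis < System.currentTimeMillis()) {
            store.remove(key);              // expired: treat as a miss
            return null;
        }
        return e.value;                     // hit: no I/O, just a map read
    }

    private void evictExpired() {
        long now = System.currentTimeMillis();
        store.values().removeIf(en -> en.expiresAtMillis < now);
    }
}
```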

Related

Heap size (RAM consumption) does not decrease after executing a save query (Spring Boot)

I noticed that after inserting 400,000 rows into MySQL using Spring Boot, the heap size and RAM consumption go up but never drop back down.
If a similar request is made a second time, the heap rises by an additional 5-10%.
What could the problem be? Why is the heap not cleared after executing a save query? Is there a problem with the garbage collector, and if so, how can it be fixed?
P.S.: I tried some of the solutions provided in this article, but they did not help:
https://medium.com/quick-code/what-a-recurring-outofmemory-error-taught-me-11f2061063a1
Edited:
I tried to select 1.2 million records in order to fill up the heap.
After 20-25 minutes had passed, the heap started to decrease. Why is this happening? Isn't the garbage collector supposed to clear the heap faster? At this rate, if other requests come in during those 25 minutes, the server will simply crash.
Edited 2:
It seems that when trying the same thing on EC2, the garbage collector does not work at all. It has already happened 3 times that the server simply runs out of memory and crashes.
Does anyone know the cause?
I ran one @Async method which internally called two other @Async methods on different beans. They finished executing, and after that the heap never went down.
I do not have any @Cacheable methods or anything similar. I am just reading some data from tables, processing it, and updating it. There are no InputStreams, etc.
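No code is shown in the question, so this is only a guess, but the classic cause of exactly this symptom with Spring Data JPA/Hibernate is the persistence context keeping a live reference to every entity saved in the transaction. A sketch of the usual mitigation, with hypothetical names (RowBatchWriter, Row):

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical batch writer. The JPA first-level cache keeps every managed
// entity alive until the persistence context is cleared or closed, so saving
// 400,000 rows in one transaction can pin hundreds of MB of heap.
@Service
public class RowBatchWriter {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void saveAll(List<Row> rows) {   // Row is a hypothetical @Entity
        int batchSize = 1_000;              // align with hibernate.jdbc.batch_size
        for (int i = 0; i < rows.size(); i++) {
            em.persist(rows.get(i));
            if ((i + 1) % batchSize == 0) {
                em.flush();                 // push pending INSERTs to MySQL
                em.clear();                 // detach entities so the GC can reclaim them
            }
        }
    }
}
```

Note also that even after a GC, the JVM rarely returns committed heap memory to the operating system, so RAM as seen from outside the process can stay high even when most of the heap is free.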

JMeter - handle large responses

I am testing REST APIs, each of which returns a 10 MB response body.
During the load test, JMeter throws an OutOfMemory exception at 20 threads.
I concluded that this is because these APIs have huge response bodies; when other APIs with fairly small response bodies are tested, I am able to scale up to 500 threads.
I have tried all the options usually suggested as hacks to avoid the OutOfMemory exception:
Running in non-GUI mode
All listeners off, generating the JTL from the command line
Disabled all assertions, relying on application logs
Groovy used as the scripting language in JSR223, used to emulate pacing between requests
Heap size increased to 12 GB
JMeter 5.4.1 and a JDK used
Distributed load testing from multiple JMeter host machines also seems to have the problem: even when I reduced the number of threads to 10 for the same APIs, the OutOfMemory exception still came up.
How can huge response bodies to requests made from JMeter be handled effectively?
If you don't need to read or assert the response, you can cut both memory and disk usage: check "Save response as MD5 hash" in your HTTP Request's Advanced tab.
From the JMeter documentation: "Save response as MD5 hash? If this is selected, then the response is not stored in the sample result. Instead, the 32 character MD5 hash of the data is calculated and stored. This is intended for testing large amounts of data."
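For intuition about why this keeps memory flat, here is a plain-Java sketch (not JMeter code; the URL is a placeholder, and Java 17+ is assumed for HexFormat) that digests a large response as a stream instead of buffering it, so only a 16-byte hash survives per request:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.util.HexFormat;

// Streams a large response body through an MD5 digest: heap usage stays
// constant no matter how big the body is, because it is never buffered whole.
public class Md5Probe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("https://example.com/big-response")) // placeholder URL
                .build();
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (InputStream body = client
                     .send(request, HttpResponse.BodyHandlers.ofInputStream())
                     .body();
             DigestInputStream in = new DigestInputStream(body, md5)) {
            in.transferTo(OutputStream.nullOutputStream());   // read and discard
        }
        System.out.println(HexFormat.of().formatHex(md5.digest())); // 32 hex chars
    }
}
```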

H2 database: Does a 60 second write delay have adverse effects on db health?

We're currently using H2 1.4.199 (build 199) in embedded mode with the default NIO file protocol and the MVStore storage engine. The WRITE_DELAY parameter is set to 60 seconds.
We run a batch insert/update/delete of about 30,000 statements within 2 seconds (in one transaction), followed by another batch of a couple of hundred statements only 30 seconds later (in a second transaction). The next attempt to open a DB connection (only 2 minutes later) shows that the DB is corrupt:
File corrupted while reading record: null. Possible solution: use the recovery tool [90030-199]
Since the transactions occur within a minute, we wonder whether the write_delay of 60 seconds might be contributing to the issue.
Changing WRITE_DELAY to 60 s (from the default of 0.5 s) will definitely increase your risk of lost transactions, and I do not see a good reason for doing it. It should not cause database corruption, though. More likely some thread interruption does that, since you are running a web server, and who knows what else, in the same JVM. Using the async file store might help in that area, and yes, it is stable enough (how much worse can it get for your app than a database corruption, anyway?).
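A minimal sketch of that suggestion in JDBC, assuming H2 1.4.199 (the database path is a placeholder): the async: file-system prefix routes file I/O through a background thread, so interrupting an application thread no longer closes the database's NIO file channel mid-write, and SET WRITE_DELAY restores the default delay:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch, assuming H2 1.4.199. The "async:" prefix performs file I/O on a
// background thread, shielding the file channel from Thread.interrupt() on
// application threads. WRITE_DELAY is specified in milliseconds.
public class H2AsyncExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:h2:async:./data/app";      // "./data/app" is a placeholder path
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("SET WRITE_DELAY 500");        // back to H2's default of 0.5 s
        }
    }
}
```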

Postman - Very Slow - Memory Allocation

I'm new to Postman. I have an array of 8,000 URLs that I'm testing for 200s. I'm currently testing using the application itself (its front end and also the console).
It seems really slow and hangs a lot. The array of URLs is only 5 MB, and testing for 200s in a loop currently takes about 30 minutes.
I'm thinking it may be a memory issue. Is there any way to check or increase the memory allocation for Postman?
Thanks!

Infinispan Hot Rod delay

We are using Infinispan Hot Rod in our application.
Sometimes retrieval from the cache takes more time; this does not happen consistently. Most of the time it takes 6 msec, but at times it takes very long (200 msec).
The size of the object retrieved from the cache is around 200 bytes.
We tested in both Infinispan 5.2.1 and JDG 6.3.2.
Did anybody face this issue?
Thanks,
Lives
Remember that you're running Java, which means the garbage collector can fire at any time; that will cost you 200 ms if you're very lucky, several seconds if you're not, and up to minutes if you have a large heap and poorly tuned GC settings.
Since retrieval from a distributed cache requires an RPC to another node, which must handle that RPC, thread scheduling also plays a vital role, and in a busy system it's no surprise for the thread to end up waiting.
From the Infinispan perspective, there's nothing the retrieval should wait for. The request gets translated into an RPC to the remote node, where it's handled by the same thread that received the message. The request does not wait for any locks.
In JGroups, there may be some delay involved: the message can get lost on the network, or discarded by the receiver if it cannot handle the load, and then it has to be resent. Also, the UFC (flow control) protocol makes sure the sender's speed is matched to what the receiver can handle.
Anything built on top of non-realtime Java works on a best-effort basis, and sometimes sh!t happens. 200 ms is still a good response time.
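To tell the steady-state latency apart from rare outliers (GC pauses, scheduling), it can help to measure from the client side. A sketch using the Hot Rod client's fluent configuration API (host, port, cache name, and thresholds are placeholders; this API is from Infinispan client versions newer than 5.2.1):

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Sketch: time a batch of Hot Rod reads to separate the steady-state latency
// (~6 ms in the question) from rare outliers caused by GC or scheduling.
public class HotRodLatencyProbe {
    public static void main(String[] args) {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.addServer().host("127.0.0.1").port(11222);    // placeholder server
        RemoteCacheManager rcm = new RemoteCacheManager(cb.build());
        try {
            RemoteCache<String, byte[]> cache = rcm.getCache("myCache"); // placeholder name
            long worstMs = 0;
            for (int i = 0; i < 10_000; i++) {
                long t0 = System.nanoTime();
                cache.get("key-" + (i % 200));
                long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
                worstMs = Math.max(worstMs, elapsedMs);
                if (elapsedMs > 50) {                    // log only the outliers
                    System.out.println("slow get: " + elapsedMs + " ms");
                }
            }
            System.out.println("worst case: " + worstMs + " ms");
        } finally {
            rcm.stop();
        }
    }
}
```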
