Dynacache - Caching everything

I have taken over an application that serves around 180 TPS. The responses are always SOAP XML responses of around 24,000 bytes. We have been told that we use DynaCache, and I can see that we have a cachespec.xml, but I am unable to work out how many entries it currently holds or what its maximum size is.
How can I check this? I have tried DynamicCacheAccessor.getDistributedMap().size(), but this always returns 0.
We have a lot of data inconsistencies because of internal Java HashMap caching layers. What are your thoughts on expanding DynaCache and eliminating the internal caching? How much server memory might this consume?
Thanks in advance

The DynamicCacheAccessor accesses the default servlet cache instance, baseCache. If size() always returns zero, then your cachespec.xml is configured to use a different cache instance.
Look for a cache-instance directive in cachespec.xml:
<cache-instance name="cache_instance_name"></cache-instance> to determine which cache instance you are using.
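For illustration, a cachespec.xml that routes entries to a named instance looks roughly like this (the instance name, servlet path, and request parameter below are made-up placeholders, not values from your application):

```xml
<cache>
  <!-- Entries nested inside a cache-instance element go into that named
       cache, not into the default baseCache that DynamicCacheAccessor reads. -->
  <cache-instance name="services/cache/soapResponseCache">
    <cache-entry>
      <class>servlet</class>
      <name>/MySoapEndpoint</name>
      <cache-id>
        <component id="operation" type="parameter">
          <required>true</required>
        </component>
        <timeout>300</timeout>
        <priority>1</priority>
      </cache-id>
    </cache-entry>
  </cache-instance>
</cache>
```

To inspect such an instance programmatically, you would look it up by its JNDI name rather than going through DynamicCacheAccessor, which only sees baseCache.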
Also install the Cache Monitor from the installableApps directory; see
Monitoring and
CacheMonitor. The Cache Monitor is an invaluable tool when developing or maintaining an app that uses servlet caching.
On Liberty, install the webCacheMonitor-1.0 feature.

Related

Spring Boot: what maximum throughput can a REST API GET endpoint support?

I was doing a project that needs to support a cluster of 30k nodes, and all of those nodes periodically call the API to get data.
I want to achieve the maximum number of concurrent GET operations per second, and since these are GET operations, they must be served synchronously.
My local PC has 32 GB RAM and 8 cores, the Spring Boot version is 2.6.6, and the configuration is:
server.tomcat.max-connections=10000
server.tomcat.threads.max=800
I use JMeter for concurrency testing, and the throughput is around 1k/s with an average response time of 2 seconds.
Is there any way to make it support more requests per second?
Hard to say without details on the web service, the implementation of what it actually does, and where the bottleneck actually is (threads, connections, CPU, memory or something else), but as a general recommendation, using non-blocking APIs would help. The stack should then be fully non-blocking to actually make a real difference.
I mean that just adding WebFlux while keeping a blocking DB driver would not improve things much.
Furthermore, any improvement in execution time helps, so check whether you can optimize the code, and maybe have a look at going native (which will come "built in" with Boot 3.x, by the way).
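The "fully non-blocking" caveat can be sketched in plain Java, without WebFlux (the class and method names below are made up for illustration): moving a blocking call onto another pool frees the request thread, but the blocking work has only relocated, which is exactly the WebFlux-plus-blocking-driver trap.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NonBlockingSketch {

    // Blocking style: the Tomcat worker thread is parked for the whole
    // backend call, so throughput is capped by threads.max / latency.
    public static String handleBlocking() {
        return slowBackendCall();
    }

    // "Half" non-blocking style: the request thread is freed, but the
    // blocking backend call has merely moved to another pool. A real
    // gain needs a backend driver that is itself non-blocking.
    public static CompletableFuture<String> handleAsync(Executor ioPool) {
        return CompletableFuture.supplyAsync(NonBlockingSketch::slowBackendCall, ioPool);
    }

    // Stand-in for a JDBC query or remote call taking ~50 ms.
    public static String slowBackendCall() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "data";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService ioPool = Executors.newFixedThreadPool(4);
        System.out.println(handleAsync(ioPool).get());
        ioPool.shutdown();
    }
}
```

With a truly non-blocking driver, slowBackendCall would return a future immediately instead of sleeping, and no thread anywhere would sit parked per request.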

JMeter - Throughput increases after Ehcache clear

While executing a JMeter test on the staging environment, we ran the Ehcache clear command, which removed the entire site cache. Since the cache was cleared, we expected performance and throughput to drop for a while. Instead, the number of transactions per second (throughput) increased drastically.
What can explain this?
It could be a bug or a wrong Ehcache configuration; you can check a detailed write-up on how Ehcache was dissected:
...database connections were kept open. Which meant that the database
started to slow down. This meant that other activity started to take
longer as well...
and in summary:
for a non-distributed cache, it performs well enough as long as you
configure it okay.
Also check the guidelines, which conclude in an interesting way:
we learned that we do not need a cache. In fact, in most cases where
people introduce a cache it is not really needed. ...
Our guidelines for using a cache are as follows:
You do not need a cache.
Really, you don’t.
If you still have a performance issue, can you
solve it at the source? What is slow? Why is it slow? Can you
architect it differently to not be slow? Can you prepare data to be
read-optimized?
If it's not due to a slow Ehcache configuration, I suppose the explanation could be that:
You don't have a response assertion in your test plan, so results are judged only on the response code, which might be 200 even though the page returned was not the one you requested
The pages served after the clear may be lighter (error pages? default pages?) and may not require as much processing
See:
http://www.ubik-ingenierie.com/blog/best-practice-using-jmeter-assertions/

Azure Redis Memory Usage (w3wp.exe dump) Insight

Can anyone tell me whether the type of behavior outlined in the memory dump from Visual Studio is normal? For instance, does StackExchange.Redis.PhysicalConnection really run that high on inclusive size (bytes), or is that abnormally high?
Basically, we are experiencing slowness with our web head after converting our code to run on Azure Redis instead of Session (we now serialize and deserialize as needed and store the results in the Redis cache), and overall performance is horrible.
The requests complete, but they can take a while. Is that due to the single-threaded nature of Redis? We are using the configuration outlined as best practice by the Azure Redis team here: https://stackoverflow.com/a/28821220
What else can we look at to improve performance? The current performance is not acceptable as a viable replacement for the session-based implementation (ASP.NET WebForms / SQL Server / Azure IaaS) we currently have.
PS - Serialization and deserialization do cause a hit; we understand that IIS spoiled us with its own special memory pool for non-serialized DataSets and such, but there is no way that should cause the 300-500% increase in page load times we are seeing now.
Thoughts appreciated!
@Tim Wieman
How large are your cached objects?
They can range in size; some are DataSets stored in Redis.
What type of objects are they?
Most objects are custom objects w/variable number of properties, some even contain collections.
What serializer are you using?
We are using Newtonsoft for anything that doesn't require RowState, and the required binary serializer for the DataSets that do need RowState.
All serialization, and subsequent deserialization, is done in code before calling the Redis database's StringGet or StringSet.
It appears the memory usage was in fact extremely high: we were erroneously creating thousands of connections to Redis instead of a single shared connection instance.
The multiple connections were not getting cleaned up by the GC before the CPU would hit 98% and the server would become unresponsive.
We adjusted our code to ensure a single connection instance to Azure Redis is used for all Redis calls, and have tested thoroughly.
It appears to be resolved: Azure Redis is no longer eating up memory or CPU resources.
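For anyone hitting the same leak: the fix boils down to the standard lazy-singleton pattern, i.e. one shared connection per process. A minimal Java sketch of the idiom (the Connection class below is a placeholder, since the original code is C# using StackExchange.Redis):

```java
public class RedisConnectionHolder {

    // Stand-in for an expensive client object such as StackExchange.Redis's
    // ConnectionMultiplexer; this placeholder is NOT a real Redis client.
    public static class Connection {
        Connection() {
            // expensive work would happen here: sockets, handshake, buffers
        }
    }

    // Initialization-on-demand holder idiom: the JVM loads Holder (and
    // constructs INSTANCE) exactly once, thread-safely, on first use.
    private static class Holder {
        static final Connection INSTANCE = new Connection();
    }

    public static Connection get() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every caller sees the same shared connection object.
        System.out.println(get() == get());
    }
}
```

Every request path then calls the accessor instead of constructing a new connection, so the thousands of leaked connections never appear.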

How to configure JMeter to discard downloaded files?

First of all, I already had a look at several quite similar questions, but I wasn't able to find a solution.
My script performs a load test: it calls several different URLs (HTTP GET) to download the content behind them.
After 120 requests the memory usage increases to 2 GB, and after 500 requests to 5-6 GB.
I have already increased the -Xmx size in the hope that this would solve the problem, but it doesn't.
Is there any way to configure JMeter not to save the files coming back in responses, or to discard the downloaded files immediately?
Is it maybe a JRE setting?
Or is there no way to solve this growing memory problem?
Br,
Kabba
Try disabling the View Results Tree listener in the script, as it records all results for you to inspect.
The JMeter documentation specifically mentions this:
18.3.6 View Results Tree
View Results Tree MUST NOT BE USED during load test as it consumes a
lot of resources (memory and CPU). Use it only for either functional
testing or during Test Plan debugging and Validation.
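Beyond removing the listener, JMeter can also be told not to retain response bodies in sample results at all. These are standard JMeter save-service properties you can set in user.properties (the values shown are a suggestion for load testing, not the defaults):

```properties
# Do not keep response bodies in sample results (the usual memory hog)
jmeter.save.saveservice.response_data=false
# Do not keep sampler request data either
jmeter.save.saveservice.samplerData=false
# Keep bodies only for failed samplers, for debugging
jmeter.save.saveservice.response_data.on_error=true
```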
I don't think that it is something you can do via JMeter settings. As per the JMeter Performance and Tuning Tips guide (make sure that you follow all the recommendations):
Default JMeter java configuration comes with 512 MB and very little GC tuning.
First ensure you set -Xmx option value to a reasonable value regarding your test requirements.
Then change MaxNewSize option in jmeter file to respect the original ratio between MaxNewSize and -Xmx.
Finally try tuning GC options only if you master this domain.
So you can try different garbage collection options on JMeter's side. See the How to Tune Java Garbage Collection guide to ramp up on the domain, or consult a Java developer if one is around.
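On the -Xmx point above: JMeter reads the JVM_ARGS environment variable, so the heap can be raised without editing the jmeter startup script (the sizes and file names below are examples; pick values that fit your machine and test plan, and prefer non-GUI mode for load tests):

```shell
# Run the test plan non-GUI with a larger heap; JVM_ARGS overrides
# the defaults set inside the jmeter startup script.
JVM_ARGS="-Xms1g -Xmx4g" ./jmeter -n -t plan.jmx -l results.jtl
```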

WebApi - Redis cache vs Output cache

I have been studying Redis (no hands-on experience at all, just theory), and after doing some research I found out that it is also used as a cache, e.g. by Stack Overflow itself.
My question is: if I have an ASP.NET Web API service and I use output caching at the Web API level to cache responses, I am basically storing key/value pairs (request/response) in the server's memory to deliver cached responses.
Now, as Redis is an in-memory database, how would substituting Web API's output caching with a Redis cache help me?
Is there any advantage?
I tried to go through this answer, redis-cache-vs-using-memory-directly, but I guess I didn't get the key line in the answer:
"Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required."
I am basically storing kind of key/value (request/response) in server's memory to deliver cached responses.
This means that after a server restart, the server has to rebuild the cache. That won't be the case with Redis. So one advantage of Redis over a homemade in-memory solution is persistence (only if that matters to you, and you had not planned to write persistence yourself).
Then, instead of coding your own expiry mechanism, you can use the Redis EXPIRE or EXPIREAT commands, or even simply specify the expiry time when putting the API output string into the cache with SETEX.
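To see what SETEX saves you from writing, here is a minimal sketch of the homemade expiring cache you would otherwise have to maintain (names are made up; lazy eviction on read only, no background sweeper, not production code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hand-rolled stand-in for what Redis SETEX / EXPIRE gives you for free.
public class TtlCache {

    private static final class Entry {
        final String value;
        final long expiresAtMillis;

        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> map = new ConcurrentHashMap<>();

    // Rough equivalent of: SETEX key ttlSeconds value
    public void setex(String key, long ttlSeconds, String value) {
        map.put(key, new Entry(value, System.currentTimeMillis() + ttlSeconds * 1000));
    }

    // Returns null when the key is absent or has expired.
    public String get(String key) {
        Entry e = map.get(key);
        if (e == null) {
            return null;
        }
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            map.remove(key); // evict lazily on read
            return null;
        }
        return e.value;
    }
}
```

With Redis, both the expiry bookkeeping and the eviction are handled server-side, and the cached entries survive your process restarting.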
if you need your application to scale on several nodes sharing the same data
What it means is that if you have multiple instances of the same API server, putting the cache into Redis allows those servers to share the same cache, thus reducing, for instance, memory consumption (one cache instead of three in-memory caches), and so on.
