How to set maximum concurrency requests in camel cache - caching

I'm using the Camel cache component to avoid repeated external calls for the same type of request. The problem is that when many requests hit the cache component at the same time, all the request threads become unresponsive and the server appears to hang.
Is there any way to set a maximum number of concurrent requests on the cache component?

Somehow I managed to resolve this issue.
The issue was not caused by the number of hits on the cache component; it was caused by a deadlock inside the cache component. So there is no need to set a maximum concurrency limit on the cache component.
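One way to avoid this class of deadlock, sketched below, is to back the cache with `ConcurrentHashMap.computeIfAbsent`, which computes each missing entry exactly once without any external locking (`fetchFromRemote` is a hypothetical stand-in for the real external call):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RequestCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger remoteCalls = new AtomicInteger();

    // Hypothetical stand-in for the expensive external call.
    private String fetchFromRemote(String key) {
        remoteCalls.incrementAndGet();
        return "response-for-" + key;
    }

    public String get(String key) {
        // computeIfAbsent runs the loader at most once per missing key,
        // even under concurrent access, with no explicit locking.
        // Caveat: the loader must not touch this same map, or it can
        // deadlock -- the kind of problem described above.
        return cache.computeIfAbsent(key, this::fetchFromRemote);
    }
}
```

Repeated lookups for the same key then hit the map rather than the external service, and no concurrency cap is needed.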
Thanks

Related

Dynacache - Caching everything

I have taken over an application that serves around 180 TPS. The responses are always SOAP XML responses of around 24,000 bytes. We have been told that we have dynacache, and I can see that we have a cachespec.xml, but I am unable to tell how many entries it currently holds or what its maximum limit is.
How can I check this? I have tried DynamicCacheAccessor.getDistributedMap().size(), but this always returns 0.
We have a lot of data inconsistencies because of internal Java HashMap caching layers. What are your thoughts on increasing dynacache and eliminating the internal caching? How much server memory might this consume?
Thanks in advance
The DynamicCacheAccessor accesses the default servlet cache instance, baseCache. If size() always returns zero then your cachespec.xml is configured to use a different cache instance.
Look for a directive in the cachespec.xml:
<cache-instance name="cache_instance_name"></cache-instance> to determine what cache instance you are using.
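For context, a minimal sketch of where that directive sits in cachespec.xml (the instance name, servlet path, and parameter id are placeholders, not values from your configuration):

```xml
<?xml version="1.0"?>
<!DOCTYPE cache SYSTEM "cachespec.dtd">
<cache>
  <!-- Entries outside a cache-instance element go to the default
       instance (baseCache), which DynamicCacheAccessor reads. -->
  <cache-instance name="services/cache/instance_one">
    <cache-entry>
      <class>servlet</class>
      <name>/myService</name>
      <cache-id>
        <component id="id" type="parameter">
          <required>true</required>
        </component>
        <timeout>3600</timeout>
      </cache-id>
    </cache-entry>
  </cache-instance>
</cache>
```

If your entries are nested inside a named cache-instance like this, size() on the default instance will report 0 even while the named instance is full.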
Also install the Cache Monitor from the installableApps directory. See
Monitoring and
CacheMonitor. The Cache Monitor is an invaluable tool when developing/maintaining an app using servlet caching.
On Liberty, install the webCacheMonitor-1.0 feature.

javax.servlet.ServletException: Session Object Internals: id : overflowed-session

I have a Portlet deployed on IBM WebSphere Portal server, and at busy times, when there are a lot of users, the Portal server shows "this portlet is unavailable" when you hit its URL.
In the logs the following exception is showing...
ServletWrappe E SRVE0068E: Could not invoke the service() method on servlet MyCystomPortlet. Exception thrown : javax.servlet.ServletException:
Session Object Internals:
id : overflowed-session
After doing some research on Google, I believe what is happening is that there are too many concurrent sessions. First of all, can someone confirm that this understanding is correct?
Secondly, I believe there are settings in WebSphere for this: the maximum in-memory session count. At the moment it is set to 1000. I would like to increase it to 1500, but I am unsure how to work out whether that is too high and risks the server falling over. Can someone please advise?
Lastly is reducing session timeouts in my portlet another effective way to try and fix this?
Thanks
Shortening the session timeout will help if users are abandoning sessions without logging out; it is usually worth reducing it from the default 30 minutes.
You can increase the maximum number of sessions held in memory, but then you should also increase the maximum heap size. Make sure your operating system has enough memory to handle the larger heap, because if the system starts to swap you will get very poor performance.
Change this only for the failing application (you can override session settings per application); do not change the global web container settings, as they apply to all applications by default.
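As a rough sizing check before raising the limit, the extra heap needed can be estimated as additional sessions times average retained session size. The 50 KB figure below is an assumption for illustration; measure your real average with a heap dump:

```java
public class SessionHeapEstimate {

    // Extra heap = additional sessions * average retained size per session.
    public static long extraHeapBytes(int currentMax, int newMax, long avgSessionBytes) {
        return (long) (newMax - currentMax) * avgSessionBytes;
    }

    public static void main(String[] args) {
        long avgSessionBytes = 50 * 1024L; // assumed 50 KB per session; measure yours
        long extra = extraHeapBytes(1000, 1500, avgSessionBytes);
        System.out.println("Extra heap needed: " + (extra / (1024 * 1024)) + " MB");
    }
}
```

A 500-session increase at 50 KB each is only on the order of tens of megabytes, but large session objects change that picture quickly.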

Totally Disable JMeter Cache

I am stress-testing a few JSON APIs. However, at one point the "Trend" of increasing response times plateaus: it climbs to a very high point and then drops to a response time that never changes.
I have tried unticking both JMeter Cache Manager options and setting the maximum cache size to one. I also added an HTTP header setting Cache-Control max-age to zero.
How do I totally disable caching in JMeter?
Note:
I searched for relevant posts, but what comes up is guidance on making JMeter act like a browser. I am trying to do the exact opposite.
There is no client-side caching in JMeter unless you add an HTTP Cache Manager.
Your issue might be an overwhelmed server responding with either request rejections or wrong responses that take a few seconds to compute.
Or you might be hitting a bandwidth limiter or a firewall.
Check by adding assertions that your responses are correct.
If everything is OK, then you may simply be seeing a server-side cache: response times degrade until the cache is filled up.

Azure Cache Preview cache getting reset

I have a default cache that is fairly small and static. It contains just string keys and string values.
Since I won't be using anywhere near the allowed amount of memory, I'd like to just preload all of the objects into the cache on startup and have them never expire. I added a log message on start indicating that the cache was loaded.
Right now the project is still in development, so the cache isn't being hit often (other than by web spiders/crawlers/scripts). The problem I'm seeing is that every hour to every few hours, I see the log message that my cache was loaded. I'd expect it to load once and then not reload until I force it to.
Is there any way to keep the cache "alive" so that it doesn't have to frequently reload? Is it like an IIS worker process that dies out after some amount of inactivity?
FYI I have the cache configured for Expiry Policy: Never, Time: 0min, Eviction: Disabled. Also the way I check if the cache is still alive is that on load I add a special object to the cache. Then I check to see if that object exists and if it doesn't I assume the cache needs to be reloaded.
For anyone else who stumbles across this: I ended up creating a scheduled task that hits the cache every 5 minutes. Since then, I haven't had any issues with it reloading. Not sure if this is the best answer, but it worked for me.
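The keep-alive idea can be sketched as a periodic touch on a fixed schedule. This is a generic in-process analogue of the scheduled task described above, not Azure-specific code; `touchCache` is a hypothetical stand-in for whatever cheap read (e.g. fetching the sentinel key) keeps the cache warm:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheKeepAlive {
    final AtomicInteger touches = new AtomicInteger();

    // Hypothetical stand-in for a cheap cache read, e.g. the sentinel key check.
    void touchCache() {
        touches.incrementAndGet();
    }

    ScheduledExecutorService start(long period, TimeUnit unit) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // Touch the cache on a fixed schedule so it is never idle long
        // enough to be recycled or evicted.
        ses.scheduleAtFixedRate(this::touchCache, 0, period, unit);
        return ses;
    }
}
```

In production the period would be the 5 minutes mentioned above; the touch only needs to be frequent enough to beat the idle-recycle interval.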

Azure cache failing with multiple concurrent requests

Everything with my co-located cache works fine as long as there is one request at a time. But when I hit my service with several concurrent requests, my cache doesn't seem to work.
Preliminary analysis led me to this - https://azure.microsoft.com/en-us/documentation/articles/cache-dotnet-how-to-use-service/
Apparently, I would have to use maxConnectionsToServer to allow multiple concurrent connections to cache. But the document also talks about a useLegacyProtocol parameter which has to be set to false to enable connection pooling.
I have the following questions:
My service would be getting a few hundred concurrent requests. Would this be a good setting for such a scenario?
<dataCacheClient name="default" maxConnectionsToServer="100" useLegacyProtocol="false">
This is my understanding of the behavior I would get with this configuration - Each time a request comes in, an attempt would be made to retrieve a connection from the pool. If there is no available connection, a new connection would be created if there are less than 100 connections currently, else the request would fail. Please confirm if this is correct.
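The behavior described above (reuse an idle connection, create a new one up to the cap, otherwise fail) can be illustrated with a generic bounded pool. This is a sketch of the semantics being asked about, not the actual Azure Cache client:

```java
import java.util.concurrent.Semaphore;

public class BoundedConnectionPool {
    private final Semaphore permits;

    public BoundedConnectionPool(int maxConnections) {
        // One permit per allowed connection, like maxConnectionsToServer.
        this.permits = new Semaphore(maxConnections);
    }

    // Returns true if a connection slot was obtained (an idle connection
    // reused or a new one created); false if all slots are already in use.
    public boolean tryAcquire() {
        return permits.tryAcquire();
    }

    // Returning a connection to the pool frees its slot for the next caller.
    public void release() {
        permits.release();
    }
}
```

Whether a real client fails immediately or queues when the cap is hit is exactly the detail worth confirming against the documentation.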
The above documentation says that one connection would be used per instance of DataCacheFactory. I have a cache manager class which manages all interactions with the cache. This is a singleton class. It creates a DataCacheFactory object and uses it to get a handle to the cache during its instantiation. My service would have 2 instances, so it looks like I would need only 2 connections to the server. Is this correct? Do I even need connection pooling?
What is the maximum value maxConnectionsToServer can accept and what would be an ideal value for the given scenario?
I also see a boolean parameter named "ConnectionPool". This looks complementary to "useLegacyProtocol". Is this not redundant? How is setting useLegacyProtocol="false" different from connectionPool="true"? I am confused about whether and how to use this parameter.
Are maxConnectionsToServer and ConnectionPool parameters related in any way? What does it mean when I have maxConnectionsToServer set to 5 and ConnectionPool=true?
