In my Spring Boot application, the cache is currently refreshed on a miss. The cache key contains a requestKey + versionId mapping, so when the versionId changes the cache starts to miss; on a miss, a get query is performed first and the cache is then updated. This extra call and cache update adds latency.
We need a mechanism where, instead of waiting for a cache miss to update the cache, we schedule periodic updates of the cache in the background to reduce the latency caused by cache misses.
This time the cache key will only be the requestKey.
I was thinking of creating a separate process or thread that periodically checks whether the version has been updated and refreshes the cache if it has changed. This process could run independently of the main thread and not impact requests.
My question is: is this the best way to achieve this?
Which cache library can I use? It needs to be thread safe.
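A minimal sketch of one way to do this, assuming Caffeine (a thread-safe caching library) for the cache and Spring's @Scheduled for the periodic version check; the refresh interval, version lookup, and loader methods below are placeholders rather than anything from the original setup:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class VersionAwareCache {

    // Caffeine caches are thread safe; the key is now just the requestKey.
    private final Cache<String, String> cache = Caffeine.newBuilder()
            .maximumSize(10_000)
            .build();

    private volatile String lastSeenVersion = "";

    // Runs on Spring's scheduler thread pool, independently of request
    // threads (requires @EnableScheduling on a configuration class).
    @Scheduled(fixedDelay = 30_000)
    public void refreshIfVersionChanged() {
        String currentVersion = fetchCurrentVersion();
        if (!currentVersion.equals(lastSeenVersion)) {
            // Re-load every cached entry under the new version, so request
            // threads keep hitting warm entries instead of missing.
            cache.asMap().replaceAll((requestKey, oldValue) ->
                    loadFromSource(requestKey, currentVersion));
            lastSeenVersion = currentVersion;
        }
    }

    public String get(String requestKey) {
        // Fallback load for keys never seen before; normal traffic is a hit.
        return cache.get(requestKey, key -> loadFromSource(key, lastSeenVersion));
    }

    // Hypothetical placeholders for the real version lookup and get query.
    private String fetchCurrentVersion() {
        return "v1";
    }

    private String loadFromSource(String requestKey, String version) {
        return "value-for-" + requestKey + "@" + version;
    }
}
```

If slightly stale values per key are acceptable, Caffeine's refreshAfterWrite is a simpler alternative: it reloads entries asynchronously in the background while continuing to serve the old value, so hits never pay the reload latency.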
I noticed that after inserting 400,000 rows into MySQL using Spring Boot, the heap size and RAM consumption go up but never drop back down afterwards.
If a similar request is made a second time, the heap grows by an additional 5-10%.
What could the problem be?
Why is the heap not cleared after executing a save query?
Is there a problem with the garbage collector? If yes, how can it be fixed?
P.S.: I tried some of the solutions provided in this article, but they did not help:
https://medium.com/quick-code/what-a-recurring-outofmemory-error-taught-me-11f2061063a1
Edit 1:
I tried to select 1.2 million records in order to fill the heap.
After 20-25 minutes had passed, the heap started to decrease. Why is this happening? Isn't the garbage collector supposed to clear the heap faster? At this rate, if other requests arrive during those 25 minutes, the server will just crash.
Edit 2:
It seems that when I try the same thing on EC2, the garbage collector does not run at all. It has already happened three times that the server simply runs out of memory and crashes.
Does anyone know the cause?
I ran one @Async thread which in turn called another two @Async threads from different beans. They finished execution, and after that the heap never went down.
I do not have any @Cacheable methods or similar. I am just fetching some data from tables, processing it, and updating it. There are no input streams, etc.
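The insert code isn't shown in the post, but if the rows go through JPA/Hibernate, one common cause of this heap pattern is the persistence context keeping every inserted entity managed for the whole transaction. A hedged sketch of the usual mitigation, periodically flushing and clearing during the batch (MyEntity and the batch size are hypothetical):

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical entity standing in for whatever is being inserted.
@Entity
class MyEntity {
    @Id
    @GeneratedValue
    Long id;
}

@Service
public class BatchInsertService {

    @PersistenceContext
    private EntityManager entityManager;

    private static final int BATCH_SIZE = 500; // arbitrary illustration value

    @Transactional
    public void insertAll(List<MyEntity> rows) {
        for (int i = 0; i < rows.size(); i++) {
            entityManager.persist(rows.get(i));
            if (i > 0 && i % BATCH_SIZE == 0) {
                // Push the pending inserts to MySQL, then detach the entities
                // so the persistence context does not keep hundreds of
                // thousands of managed objects reachable on the heap.
                entityManager.flush();
                entityManager.clear();
            }
        }
    }
}
```

Note also that even after the objects become garbage, the JVM typically keeps the enlarged heap committed rather than returning memory to the OS, which would explain RAM consumption that rises and never drops.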
I am using Anypoint 6.1 and Mule 3.8.1, and I'm finding performance problems that appear to come down to the Cache scope.
The cache is a managed store (so I can invalidate the cache when new data is loaded) and has the following values:
Max Entries: 1000
Entry TTL: 84600
Expiration Interval: 84600
The response returns approx 200 JSON records.
Is there any way to improve this and make the response faster?
Thanks
Expiration Interval is the frequency with which the object store checks for expired cached response events. It can be set anywhere from 1 second up to hours; depending on the message rate you are expecting, you can try different values to test the performance of your application.
Also, try an in-memory object store for your caching strategy. It saves responses in system memory, so it is a little faster, but you have to be careful with its usage to avoid OutOfMemory errors.
We are using Infinispan Hot Rod in our application.
Sometimes retrieval from the cache takes more time. This does not happen consistently: most of the time it takes 6 msec, but at times it takes much longer (200 msec).
The size of the object retrieved from the cache is around 200 bytes.
We tested on both Infinispan 5.2.1 and JDG 6.3.2.
Did anybody face this issue?
Thanks
Lives
Remember that you're running Java, which means the garbage collector can fire at any time. That will cost you 200 ms if you're very lucky, several seconds if you're not, and up to minutes if you have a large heap and poorly tuned GC settings.
Since retrieval from a distributed cache requires an RPC to another node, where that RPC is then handled, thread scheduling also plays a vital role, and on a busy system it's no surprise to find the thread waiting.
From the Infinispan perspective, there's nothing the retrieval should wait for. The request gets translated into an RPC to the remote node, where it's handled by the same thread that received the message. The request does not wait for any locks.
In JGroups, there may be some delay involved. The message can get lost on the network or discarded by the receiver if it cannot handle the load, and is then resent. Also, the UFC protocol ensures that the sender's speed matches what the receiver can handle.
Anything built on top of non-realtime Java works on a best-effort basis, and sometimes sh!t happens. 200 ms is still a good response time.
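One way to confirm the GC theory is to log latency outliers from the client side and correlate their timestamps with GC logs (enable those with -Xlog:gc* on JDK 9+, or -XX:+PrintGCDetails on older JVMs). A sketch, assuming a reasonably recent Hot Rod client API and placeholder server address and cache name:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class GetLatencyProbe {
    public static void main(String[] args) {
        // Assumed server address and cache name; adjust for your cluster.
        RemoteCacheManager manager = new RemoteCacheManager(
                new ConfigurationBuilder()
                        .addServer().host("localhost").port(11222)
                        .build());
        RemoteCache<String, byte[]> cache = manager.getCache("testCache");

        for (int i = 0; i < 100_000; i++) {
            long start = System.nanoTime();
            cache.get("key-" + (i % 100));
            long micros = (System.nanoTime() - start) / 1_000;
            // Print only the outliers; if their timestamps line up with GC
            // pauses in the GC log, the JVM rather than Infinispan is the cause.
            if (micros > 50_000) {
                System.out.printf("slow get: %d us at iteration %d%n", micros, i);
            }
        }
        manager.stop();
    }
}
```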
I am using Redis and saving data to disk at a certain time interval. Normally Redis read and write times are on the order of 0.2 milliseconds, but I see a few peaks on the order of 30 milliseconds. I read that Redis forks a background process to write data to disk; does that forking happen on the same thread that serves the read and write requests (Redis uses a single thread to serve all requests)?
If this is true, I want a solution such that persistence does not increase latency for read and write requests.
If you issue a BGSAVE, the background save will fork. The OS needs to have an idle CPU core available, of course, for this not to impact the Redis server's main thread. If you configure save in redis.conf, a BGSAVE is basically what happens. I would configure it off and issue BGSAVE manually while troubleshooting.
If you issue a SAVE, saving will be synchronous, and other clients will have to wait.
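To troubleshoot as suggested above, you can turn automatic snapshots off and trigger BGSAVE by hand while timing reads. A sketch using the Jedis client (host, port, and key are placeholders):

```java
import redis.clients.jedis.Jedis;

public class RedisSaveLatencyCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed host/port
            // Disable automatic RDB snapshots (same effect as `save ""` in
            // redis.conf) so snapshots happen only when we trigger them.
            jedis.configSet("save", "");

            // Manual background save: Redis forks a child process; the
            // parent keeps serving requests, but the fork() call itself
            // briefly pauses the main thread on large datasets.
            jedis.bgsave();

            // Measure read latency while the child writes the RDB file.
            for (int i = 0; i < 10_000; i++) {
                long start = System.nanoTime();
                jedis.get("some-key"); // hypothetical key
                long micros = (System.nanoTime() - start) / 1_000;
                if (micros > 5_000) {
                    System.out.printf("slow read: %d us%n", micros);
                }
            }
        }
    }
}
```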
See also here. You might want to skip RDB snapshotting altogether and rely on AOF.
Also see my remark on sensitive data: SO comment. There are many ways to make sure your data is safe; disk persistence is only one of them.
Hope this helps, TW
I have implemented an AgentX subagent using mib2c.create-dataset.conf (with caching enabled).
In my snmpd.conf: agentXTimeout 15
In the testtable.h file I have changed the cache value as below...
#define testTABLE_TIMEOUT 60
According to my understanding, it reloads the data every 60 seconds.
Now my issue is that if the table holds more than a certain amount of data, it takes a while to load.
If I run SNMPWALK in the meantime, it gives me "no response from the host". If I walk the whole table and testTABLE_TIMEOUT expires partway through, the walk stops in the middle with the same error (no response from the host).
Please tell me how to solve this. My table holds a large amount of data that changes frequently.
I read somewhere:
(When the agent receives a request for something in this table and the cache is older than the defined timeout (12s > 10s), it does reload the data. This is the expected behaviour.
However, the agent does not automatically release the local cache (i.e. call the 'free' routine) as soon as the timeout has expired.
Instead, this is handled by a regular "garbage collection" run (once a minute), which frees any stale caches.
In the meantime, a request that tries to use that cache will spot that it has expired and reload the data.)
Is there any connection between these two? I can't quite follow it. How do I resolve my problem?
Unfortunately, if your data set is very large and takes a long time to load, then you simply have to suffer the slow load and slow response. You can try loading the data on a regular basis using snmp_alarm or something similar so that it's immediately available when a request comes in, but that doesn't really solve the problem either, since a request could still arrive right after the alarm is triggered and the agent would still take a long time to respond.
So... the best thing to do is to optimize your load routine as much as possible, and possibly increase the timeout that the manager uses. For snmpwalk, for example, you might add -t 30 to the command-line arguments, and I bet everything will suddenly work just fine.