I am trying to understand Laravel's session handler and can't find anything about it online. At the moment, in session.php I have:
'lifetime' => 10,
I have the session driver set to file. So from what I have read, this sets the idle timeout of the session to 10 minutes.
So what does idle mean in this case? I am assuming it means that if no request is sent to the server within 10 minutes, the session will expire. Is this correct?
Also, how can it tell whether a request has been sent within the last 10 minutes? I have taken a look at the session file in storage, and I do not see any timestamp.
So how exactly does all of this work?
Thanks
Yes, you are correct: if you don't send any request within the lifetime config value, the session will be destroyed.
The Illuminate\Session\FileSessionHandler class has a gc() function. It is a garbage-collection function that has a probability of being called on every request; you can control the odds with the session.lottery config value. This function destroys each session file whose modification timestamp is older than now - lifetime.
You can find the Illuminate\Session\FileSessionHandler class in the file vendor/laravel/framework/src/Illuminate/Session/FileSessionHandler.php if you want to take a look at the source code.
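For reference, the two config/session.php settings involved look like this (the lottery values shown are Laravel's defaults, i.e. a 2-in-100 chance per request that garbage collection runs):

'lifetime' => 10,       // idle timeout, in minutes
'lottery'  => [2, 100], // odds that gc() runs on any given request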
I am trying to implement session management, where we store a JWT token in Redis. Now I want to remove the key if the object idle time is more than 8 hours. Please help.
There is no good reason that comes to mind for using OBJECT IDLETIME instead of the much simpler pattern of issuing a GET followed by an EXPIRE, apart from the very small memory overhead that key expiry adds.
Recommended way: GET and EXPIRE

1. GET the key you want.
2. Issue EXPIRE <key> 28800 (28800 seconds = 8 hours).

Way using OBJECT IDLETIME, DEL and some application logic:

1. GET the key you want.
2. Call OBJECT IDLETIME <key>.
3. Check in your application code whether the idle time is greater than 8 hours.
4. If condition 3 is met, issue a DEL command.
The second way is more cumbersome and introduces network latency, since you need three round trips to your Redis server, whereas the first solution takes just one round trip if you use a pipeline, or at worst two round trips with no application-side logic.
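For illustration, a minimal Jedis sketch of the recommended GET + EXPIRE pattern, done in a single pipeline round trip (the key name and connection details are placeholders; 28800 seconds = 8 hours):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class TokenTouch {
    public static void main(String[] args) {
        String sessionKey = "jwt:session:12345"; // placeholder key name
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Pipeline pipeline = jedis.pipelined();
            Response<String> token = pipeline.get(sessionKey); // fetch the JWT
            pipeline.expire(sessionKey, 8 * 60 * 60);          // reset the TTL to 8 hours
            pipeline.sync();                                   // one round trip to Redis
            System.out.println("token = " + token.get());
        }
    }
}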
This is what I did using Jedis. I am fetching 1000 records at a time. You can add a loop to fetch all records in a batch.
import java.util.List;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

Jedis jedis = new Jedis("addURLHere");
ScanParams scanParams = new ScanParams().count(1000);
ScanResult<String> scanResult = jedis.scan(ScanParams.SCAN_POINTER_START, scanParams);
List<String> result = scanResult.getResult();
result.stream().forEach((key) -> {
    if (jedis.objectIdletime(key) > 8 * 60 * 60) { // idle for more than 8 hours
        // your functionality here
    }
});
Recently I have been using Redis to cache tokens for OpenStack Keystone. The functionality is fine, but some expired cache data is still sitting in Redis.
My Keystone config:
[cache]
enabled=true
backend=dogpile.cache.redis
backend_argument=url:redis://127.0.0.1:6379
[token]
provider = uuid
caching=true
cache_time= 3600
driver = kvs
expiration = 3600
But some expired data is still in Redis: the data is past its expiration time, yet it remains there because its TTL is -1.
My question:
How can I change the settings so that this rubbish data is no longer created?
Is there a graceful way to clean it up?
I tried using the command 'keystone-manage token_flush', but after reading the code I realized this command only cleans up the expired tokens in MySQL.
I hope this question is still relevant.
I'm trying to do the same thing as you, and for now the only option I have found that works is the redis_expiration_time argument of the dogpile.cache.redis backend.
Check out the dogpile.cache.redis backend API or source code:
http://dogpilecache.readthedocs.io/en/latest/api.html#dogpile.cache.backends.redis.RedisBackend.params.redis_expiration_time
The only problem with this argument is that it does not let you choose a different TTL for different categories; for example, you might want tokens cached for 10 minutes and the catalog for 24 hours. The other parameters in keystone.conf just don't work in my experience (expiration_time and cache_time on each category)... Anyway, this problem isn't relevant if you are using Redis to store only Keystone tokens.
[cache]
enabled=true
backend=dogpile.cache.redis
backend_argument=url:redis://127.0.0.1:6379
# Add this line
backend_argument=redis_expiration_time:[TTL]
Just replace [TTL] with the TTL you want, and you'll start noticing keys with a TTL in Redis; after a while you will see that they are gone.
About the second question:
This is maybe not the best answer you'll see, but you can use the OBJECT IDLETIME [key] command in redis-cli to see how long a specific key has gone unused (note that even a GET resets the idle time). You can delete the keys whose idle time is greater than your token revocation window using a simple script, as sketched below.
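A rough sketch of such a cleanup script using the Jedis client (the threshold, scan page size, and connection details are assumptions; adjust them to your token revocation window):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class IdleKeyCleanup {
    public static void main(String[] args) {
        long maxIdleSeconds = 3600; // assumed revocation window of 1 hour
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String cursor = ScanParams.SCAN_POINTER_START;
            ScanParams params = new ScanParams().count(500);
            do {
                // SCAN iterates the keyspace without blocking Redis like KEYS would
                ScanResult<String> page = jedis.scan(cursor, params);
                for (String key : page.getResult()) {
                    Long idleSeconds = jedis.objectIdletime(key);
                    if (idleSeconds != null && idleSeconds > maxIdleSeconds) {
                        jedis.del(key); // key has not been touched within the window
                    }
                }
                cursor = page.getCursor();
            } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
        }
    }
}

Depending on your Jedis version the cursor accessor may be named differently (e.g. getStringCursor()).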
Remember that the data in Redis isn't persistent, meaning you can always use FLUSHALL and your OpenStack and Keystone will keep working as usual, though of course the first authentications will take longer.
I am using Mule 3.5.0 and trying to implement the Cache Strategy. The cache is supposed to be hit by APIs for grabbing a Sugar CRM OAuth Token. Multiple endpoints are hitting this cache.
My requirement is to keep only one active element in the cache, which serves the active token to every API call for 5 minutes. When the TTL expires, the cache should grab another token and cache it for subsequent calls.
The problem is that when multiple inbound endpoints hit the cache, old values are also being spit out by the cache. Is all I need to do to change maxEntries to 1? Or is there a better way of achieving this?
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
<in-memory-store name="sugar-cache-in-memory" maxEntries="500" entryTTL="300000" expirationInterval="300000"/>
</ee:object-store-caching-strategy>
<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
<ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
..............................
..............................
..............................
<logger message="------------------------ Direct Call for Token----------------------" level="INFO" doc:name="Logger"/>
<DATAMAPPER to set #payload.access_token />
</ee:cache>
<set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>
The problem was that the first element inside the ee:cache scope was a Set Payload; I had to take it outside the Cache Scope (see the sketch below). Sorry.
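For illustration, a sketch of what that change looks like in the flow from the question (the elided processors stay elided; the point is only that the Set Payload no longer sits inside the cache scope):

<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache">
    <!-- Set Payload moved here, before/outside the Cache Scope -->
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        ..............................
        <!-- only the processors that actually fetch the token remain inside -->
    </ee:cache>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>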
I have just upgraded Tomcat from version 7.0.52 to 8.0.14.
I am getting this for lots of static image files:
org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/base/1325/WA6144-150x112.jpg] to the cache because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
I haven't specified any particular resource settings, and I didn't get this for 7.0.52.
I have found mention of this happening at startup in a bug report that was supposedly fixed. For me this is happening not at startup but constantly when the resource is requested.
Anybody else having this issue?
I'm trying to at least just disable the cache, but I cannot find an example of how to specify not to use it. The attributes have gone from the Context element in Tomcat version 8. I have tried adding a Resource entry but cannot get the config right.
<Resource name="file"
cachingAllowed="false"
className="org.apache.catalina.webresources.FileResourceSet"
/>
Thanks.
I had the same issue when upgrading from Tomcat 7 to 8: a continuous large flood of log warnings about cache.
1. Short Answer
Add this within the Context xml element of your $CATALINA_BASE/conf/context.xml:
<!-- The default value is 10240 kbytes, even when not added to context.xml.
So increase it high enough, until the problem disappears, for example set it to
a value 5 times as high: 51200. -->
<Resources cacheMaxSize="51200" />
The default is 10240 (10 MB), so set a size higher than that, then tune for optimum settings until the warnings disappear.
Note that the warnings may come back under higher traffic situations.
1.1 The cause (short explanation)
The problem is caused by Tomcat being unable to reach its target cache size because the cache entries are younger than their TTL. Tomcat didn't have enough cache entries that it could expire, because they were too fresh, so it couldn't free enough cache and therefore output these warnings.
The problem didn't appear in Tomcat 7 because Tomcat 7 simply didn't output warnings in this situation. (Causing you and me to use poor cache settings without being notified.)
The problem appears when receiving a relatively large number of HTTP requests for resources (usually static) in a relatively short time period compared to the size and TTL of the cache. If the cache reaches its maximum (10 MB by default) with more than 95% of its size occupied by fresh cache entries (fresh meaning less than 5 seconds in the cache), then you will get a warning message for each webResource that Tomcat tries to load into the cache.
1.2 Optional info
Use JMX if you need to tune cacheMaxSize on a running server without rebooting it.
The quickest fix would be to completely disable cache: <Resources cachingAllowed="false" />, but that's suboptimal, so increase cacheMaxSize as I just described.
2. Long Answer
2.1 Background information
A webResource is a file or directory in a web application. For performance reasons, Tomcat can cache webResources. The maximum size of the static resource cache (all resources in total) is by default 10240 kB (10 MB). A webResource is loaded into the cache when it is requested (for example, when loading a static image); it is then called a cache entry.
Every cache entry has a TTL (time to live), which is the time that the cache entry is allowed to stay in the cache. When the TTL expires, the cache entry is eligible to be removed from the cache. The default value of the cacheTTL is 5000 milliseconds (5 seconds).
There is more to tell about caching, but that is irrelevant for the problem.
2.2 The cause
The following code from the Cache class shows the caching policy in detail:
// Content will not be cached but we still need metadata size
long delta = cacheEntry.getSize();
size.addAndGet(delta);

if (size.get() > maxSize) {
    // Process resources unordered for speed. Trades cache
    // efficiency (younger entries may be evicted before older
    // ones) for speed since this is on the critical path for
    // request processing
    long targetSize =
            maxSize * (100 - TARGET_FREE_PERCENT_GET) / 100;
    long newSize = evict(
            targetSize, resourceCache.values().iterator());
    if (newSize > maxSize) {
        // Unable to create sufficient space for this resource
        // Remove it from the cache
        removeCacheEntry(path);
        log.warn(sm.getString("cache.addFail", path));
    }
}
When loading a webResource, the code calculates the new size of the cache. If the calculated size is larger than the maximum size, then one or more cached entries have to be removed, otherwise the new size would exceed the maximum. So the code calculates a "targetSize", which is the size the cache wants to stay under (as an optimum); this is by default 95% of the maximum. In order to reach this targetSize, entries have to be removed/evicted from the cache. This is done using the following code:
private long evict(long targetSize, Iterator<CachedResource> iter) {
    long now = System.currentTimeMillis();
    long newSize = size.get();

    while (newSize > targetSize && iter.hasNext()) {
        CachedResource resource = iter.next();

        // Don't expire anything that has been checked within the TTL
        if (resource.getNextCheck() > now) {
            continue;
        }

        // Remove the entry from the cache
        removeCacheEntry(resource.getWebappPath());

        newSize = size.get();
    }

    return newSize;
}
So a cache entry is only removed when its TTL has expired and the targetSize hasn't been reached yet.
After the attempt to free cache by evicting cache entries, the code will do:
if (newSize > maxSize) {
    // Unable to create sufficient space for this resource
    // Remove it from the cache
    removeCacheEntry(path);
    log.warn(sm.getString("cache.addFail", path));
}
So if, after the attempt to free space by evicting entries, the size still exceeds the maximum, Tomcat shows the warning message about being unable to add the resource:
cache.addFail=Unable to add the resource at [{0}] to the cache for web application [{1}] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
2.3 The problem
So as the warning message says, the problem is
insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
If your web application loads a lot of uncached webResources (around the maximum size of the cache, by default 10 MB) within a short time (5 seconds), then you'll get the warning.
The confusing part is that Tomcat 7 didn't show the warning. This is simply caused by this Tomcat 7 code:
// Add new entry to cache
synchronized (cache) {
    // Check cache size, and remove elements if too big
    if ((cache.lookup(name) == null) && cache.allocate(entry.size)) {
        cache.load(entry);
    }
}
combined with:
while (toFree > 0) {
    if (attempts == maxAllocateIterations) {
        // Give up, no changes are made to the current cache
        return false;
    }
So Tomcat 7 simply doesn't output any warning at all when it's unable to free cache, whereas Tomcat 8 will output a warning.
So if you are using Tomcat 8 with the same default caching configuration as Tomcat 7 and you get warnings in Tomcat 8, then your (and my) Tomcat 7 caching settings were performing poorly without any warning being given.
2.4 Solutions
There are multiple solutions:
Increase cache (recommended)
Lower the TTL (not recommended)
Suppress cache log warnings (not recommended)
Disable cache
2.4.1. Increase cache (recommended)
As described here: http://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
By adding <Resources cacheMaxSize="XXXXX" /> within the Context element in $CATALINA_BASE/conf/context.xml, where "XXXXX" stands for an increased cache size, specified in kB. The default is 10240 (10 MB), so set a size higher than that.
You'll have to tune for optimum settings. Note that the problem may come back when you suddenly have an increase in traffic/resource requests.
To avoid having to restart the server every time you want to try a new cache size, you can change it without restarting by using JMX.
To enable JMX, add this to $CATALINA_BASE/conf/server.xml within the Server element:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="6767" rmiServerPortPlatform="6768" />

Also download catalina-jmx-remote.jar from https://tomcat.apache.org/download-80.cgi and put it in $CATALINA_HOME/lib.
Then use jConsole (shipped by default with the Java JDK) to connect over JMX to the server, browse the attributes, and increase the cache size while the server is running. Changes to these settings should take effect immediately.
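If you want to script that change instead of clicking through jConsole, something like the following sketch with the standard javax.management remote API should work. The JMX service URL matches the listener ports above, but the ObjectName pattern, attribute name and its unit are assumptions you should first verify by browsing the Catalina domain in jConsole:

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatCacheTuner {
    public static void main(String[] args) throws Exception {
        // Matches rmiRegistryPortPlatform=6767 / rmiServerPortPlatform=6768 above;
        // adjust host and ports to your own setup.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://localhost:6768/jndi/rmi://localhost:6767/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Assumed MBean name pattern and attribute; confirm both in jConsole first.
            ObjectName pattern = new ObjectName("Catalina:type=Cache,host=localhost,context=*");
            for (ObjectName cache : connection.queryNames(pattern, null)) {
                System.out.println(cache + " maxSize=" + connection.getAttribute(cache, "maxSize"));
                // Raise the maximum cache size on the running server.
                connection.setAttribute(cache, new Attribute("maxSize", 51200L));
            }
        }
    }
}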
2.4.2. Lower the TTL (not recommended)
Lower the cacheTtl value to something below 5000 milliseconds and tune for optimal settings.
For example: <Resources cacheTtl="2000" />
This effectively comes down to having and filling a cache in RAM without really using it.
2.4.3. Suppress cache log warnings (not recommended)
Configure logging to disable the logger for org.apache.catalina.webresources.Cache.
For more info about logging in Tomcat: http://tomcat.apache.org/tomcat-8.0-doc/logging.html
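For example, adding the following line to $CATALINA_BASE/conf/logging.properties raises the threshold of that logger so the WARNING messages are no longer written:

org.apache.catalina.webresources.Cache.level = SEVERE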
2.4.4. Disable cache
You can disable the cache by setting cachingAllowed to false.
<Resources cachingAllowed="false" />
Although I remember that in a beta version of Tomcat 8 I used JMX to disable the cache. (I'm not sure exactly why, but there may be a problem with disabling the cache via server.xml.)
In your $CATALINA_BASE/conf/context.xml, add the block below before </Context>:
<Resources cachingAllowed="true" cacheMaxSize="100000" />
For more information: http://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
You have more static resources than the cache has room for. You can do one of the following:
Increase the size of the cache
Decrease the TTL for the cache
Disable caching
For more details see the documentation for these configuration options.
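For reference, these options all map to attributes of the Resources element in $CATALINA_BASE/conf/context.xml; the values below are only examples, so pick whichever approach fits:

<!-- increase the cache size (in kB) -->
<Resources cacheMaxSize="51200" />

<!-- or lower the TTL (in milliseconds) -->
<Resources cacheTtl="2000" />

<!-- or disable caching entirely -->
<Resources cachingAllowed="false" />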
This isn’t a solution in the sense that it doesn’t resolve the conditions which cause the message to appear in the logs, but the message can be suppressed by appending the following to conf/logging.properties:
org.apache.catalina.webresources.Cache.level = SEVERE
This filters out the “Unable to add the resource” logs, which are at level WARNING.
In my view a WARNING is not necessarily an error that needs to be addressed, but rather can be ignored if desired.
One more tip (an issue I ran into): if $CATALINA_BASE/conf/context.xml gets overridden by IntelliJ, add the block below inside <Context> </Context> in your Tomcat/apache-tomcat-x.x.x/conf/context.xml:
<Resources cachingAllowed="true" cacheMaxSize="100000" />