I would like to know whether the session data persistence configuration
<SessionDataPersist>
    <Enable>true</Enable>
    <RememberMePeriod>..</RememberMePeriod>
    <CleanUp>
        <Enable>true</Enable>
        <Period>..</Period>
        <TimeOut>..</TimeOut>
    </CleanUp>
    <Temporary>false</Temporary>
</SessionDataPersist>
which is available as part of WSO2 IS SP1 is applicable only when the Remember Me option is selected. Is there any other configuration where we can manage these session timeouts?
Regards,
Cijoy
If you do not enable session persistence, WSO2 IS invalidates the SSO session after 15 minutes of inactivity, and that value is not configurable because it is the cache invalidation time. WSO2 IS 5.0.0 stores the SSO session only in caches, which is not correct and can lead to a lot of issues, so WSO2 IS 5.0.0 SP1 introduced session persistence. Currently there is no configuration to define the session timeout explicitly, but a timeout effectively happens when the persisted session data is deleted.
Therefore a timeout can be achieved with the cleanup task, although it is not an inactivity timeout.
<CleanUp>
    <Enable>true</Enable>
    <Period>10</Period>
    <TimeOut>60</TimeOut>
</CleanUp>
CleanUp.Period defines the period between two consecutive cleanup runs, in minutes; by default it is one day. CleanUp.TimeOut defines the timeout value of session data, in minutes; by default it is two weeks.
For example, with the configuration above, the cleanup task runs every 10 minutes, and each run removes all session data persisted more than 60 minutes earlier.
I have been using the WSO2 API Manager in my company. When I change the settings of the available methods (scopes) or the available authorisation methods (application-level security), applying these settings takes up to 15 minutes (I tested the methods through Postman). This is too long for running tests.
I followed the recommendation to change the timeout in deployment.toml:
[apim.cache.resource]
enable = true
expiry_time = "900s"
There were no such settings in my config, but I added them and changed the expiry to 60s. After the reboot, the settings were applied instantly (not even after 60 seconds). However, after a while, the settings were again applied only after 15 minutes. I disabled the cache altogether, but that did not help either. Settings are applied quickly only the first time after restarting WSO2. Has anyone had the same problem?
In WSO2 APIM, if you update an API, the resource cache gets invalidated and the changes are reflected within a few minutes. If you want to apply the changes quickly, you can restart the server and check the flow.
The default cache size of any cache in a WSO2 product is 10,000 elements/records; cache eviction starts from the 10,001st element. All caches in WSO2 products can be configured in the <PRODUCT_HOME>/repository/conf/deployment.toml file. If you have not defined a value for the default cache timeout under the server configurations, the default of 15 minutes is applied:
[server]
default_cache_timeout = 15
Please refer to https://apim.docs.wso2.com/en/3.0.0/administer/product-configurations/configuring-caching/ for further details on caching.
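If you need configuration changes to take effect faster, one option (my own suggestion, not something the answer above prescribes) would be to lower that default, for example:
[server]
default_cache_timeout = 1
The trade-off is more frequent cache misses, so this is better suited to test environments than production.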
We have a Spring Boot application using AWS Aurora Serverless as a database. Our back end is unused 99% of the time; therefore, we set the minimumIdle parameter to 0, hoping that Hikari would, after a certain amount of time, close all the connections until they are needed again. That way our serverless database would also pause after a certain amount of inactive time.
Unfortunately, it's not working. We can see that Hikari is indeed closing the idle connections as we monitor our database. However, our serverless database isn't pausing due to inactivity.
My question is: does Hikari keep polling the database periodically even after no idle/active connections are left? We suspect this is what's keeping our DB "active".
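For reference, a minimal sketch of the pool configuration described above, assuming Spring Boot's standard spring.datasource.hikari.* property binding (the JDBC URL is a placeholder):
# application.properties
spring.datasource.url=jdbc:postgresql://<aurora-endpoint>:5432/mydb
spring.datasource.hikari.minimum-idle=0
# retire idle connections after 60 seconds (HikariCP's default is 10 minutes)
spring.datasource.hikari.idle-timeout=60000
With minimum-idle at 0, idle-timeout controls how long a connection may sit unused before HikariCP closes it.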
We are using Alfresco 5.2.3 Enterprise with ADF 3.4.0.
The web.xml files in both our alfresco and share WARs have the session timeout set to 60 minutes.
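That is, the standard servlet session-config element (the value is in minutes):
<session-config>
    <session-timeout>60</session-timeout>
</session-config>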
For ADF, we have not found any session timeout settings or config.
So, ideally the session should not expire before 60 minutes, but the customer is complaining that after remaining idle for around 15 minutes, their session expires and logs them out, and they need to log in again.
What is the right way to make the session valid for the full 60 minutes and not just 15?
I tried overriding the session timeout using the following link, but it is not working:
Overriding alfresco timeout
I also tried setting the following property in the alfresco-global.properties file with different values:
authentication.ticket.validDuration=PT1H
but it does not work.
The same behaviour is observed with both the ADF URL and the Share URL: the Share URL actually logs the user out, while the ADF URL mostly invalidates the session, so our custom actions do not appear against the documents if the user remains idle for 15 minutes.
NOTE: There is no SSO integration done for our project.
Any suggestions or pointers would be really helpful.
I tried multiple options, changing:
authentication.ticket.ticketsExpire=true
to
authentication.ticket.ticketsExpire=false
authentication.ticket.expiryMode=AFTER_INACTIVITY
to
authentication.ticket.expiryMode=DO_NOT_EXPIRE
authentication.ticket.useSingleTicketPerUser=false
to
authentication.ticket.useSingleTicketPerUser=true
However, none of the above settings had any impact on the behaviour after a restart. So this session timeout is most likely being carried over from the proxy server or load balancer settings and applied there.
I am doing a PoC with the newly released Spring Session component. It is backed by a Redis repository, where both the session and the objects/data stored in the session are persisted.
Session got created in the application.
Ran the "KEYS *" command in the Redis CLI and saw a new entry (like "spring:session:sessions:6b55103a-baf5-4a05-a127-3a9cfa15c164").
From the application, added a custom bean to the session.
Ran the "KEYS *" command in the Redis CLI and saw one more new entry for this bean (like "\xac\xed\x00\x05t\x00\tcustomer1", because the bean had a string with value 'customer1').
I had configured an auto expiry of 30 seconds and left the application unused for that time.
The SessionDestroyedEvent got triggered and was captured in the listener implementing ApplicationListener.
Ran the "KEYS *" command in the Redis CLI and now the first created entry for the session was gone, but the custom bean object (customer1) was still left over in Redis.
Question:
Is it the user's responsibility to clean up the Redis store? If I had many data elements stored in my session, would I have to manually clean them up from the Redis store on session destruction (logout and timeout events)?
Update:
When I posted this question and then went back (probably after 3-4 minutes) to the Redis CLI to list the keys, the customer1 object was no longer there. So does that mean the clean-up is performed by Redis at some regular interval, like garbage collection?
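For context, a minimal sketch of the kind of setup described above; the 30-second expiry and the listener come from the description, while the class and bean names are illustrative assumptions:
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;
import org.springframework.session.events.SessionDestroyedEvent;

@Configuration
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 30) // 30-second auto expiry, as in the test
public class SessionConfig {

    // Captures the SessionDestroyedEvent mentioned above (fired on timeout or logout)
    @Bean
    public ApplicationListener<SessionDestroyedEvent> sessionDestroyedListener() {
        return event -> System.out.println("Session destroyed: " + event.getSessionId());
    }
}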
The Session Expiration section of the Spring Session reference describes in detail how sessions are cleaned up.
From the documentation:
One problem with this approach is that Redis makes no guarantee of when the expired event will be fired if the key has not been accessed. Specifically, the background task that Redis uses to clean up expired keys is a low-priority task and may not trigger the key expiration. For additional details see the Timing of expired events section in the Redis documentation.
...
For this reason, each session expiration is also tracked to the nearest minute. This allows a background task to access the potentially expired sessions to ensure that Redis expired events are fired in a more deterministic fashion.
I use jBPM as my workflow engine, but my system is very busy; I may start 500 processes per second. A jBPM session is lightweight, but it still costs some time. Can a jBPM session be reused? Or how can I clear the state of a session?
You can reuse a session if you want, or there are other strategies, such as a session per request or per process instance. The state of a process instance is stored separately in the database, so you do not have to clear the session.
To reach 500 processes per second (when using persistence), you probably want to run multiple sessions in parallel, possibly across multiple nodes in a cluster.
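As a rough illustration of those strategies, here is a hedged sketch using jBPM 6's RuntimeManager API, which provides exactly these singleton (reused), per-request, and per-process-instance strategies; the process file and process ID are placeholders:
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;
import org.kie.internal.runtime.manager.context.EmptyContext;

public class ProcessStarter {
    public static void main(String[] args) {
        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultBuilder() // wires in the default JPA persistence
                .addAsset(ResourceFactory.newClassPathResource("sample.bpmn2"), // placeholder
                          ResourceType.BPMN2)
                .get();

        // Singleton strategy: one shared KieSession reused for every request.
        // newPerRequestRuntimeManager / newPerProcessInstanceRuntimeManager
        // implement the other two strategies mentioned above.
        RuntimeManager manager = RuntimeManagerFactory.Factory.get()
                .newSingletonRuntimeManager(environment);

        RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get());
        KieSession ksession = engine.getKieSession();
        ksession.startProcess("com.sample.process"); // placeholder process ID

        manager.disposeRuntimeEngine(engine);
        manager.close();
    }
}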