Reload cleared cache in Karaf after refreshing a feature

I have an application that inserts data into a database through an API. I deployed the features in Apache Karaf. After I refresh the features, the API stops working correctly because it tries to insert the data in the first row.
For example: if my database has five records, then after refreshing the feature in Karaf, the API tries to put the next record in the first row. Only after five requests does the API work correctly again. I use the Redisson service for the cache.
My question is: how do I reload the cleared cache after refreshing features in Karaf?
Is there any way to restore the cleared cache in Karaf?
Thanks in advance for your suggestions.
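One pattern that may help, sketched below: re-seed the cache from the database when the bundle comes back up, so a refresh-induced wipe heals itself. This is only a sketch under assumptions, since your setup isn't shown: it assumes the insert position lives in a Redisson RAtomicLong (the key name "row.counter", the JDBC URL, and the table name are all hypothetical), and that the method is wired to run on activation, e.g. as a Blueprint init-method:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.redisson.Redisson;
import org.redisson.api.RAtomicLong;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class CacheWarmer {

    // Runs on bundle activation, so every feature refresh re-seeds the cache.
    public void warmUp() throws Exception {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // assumed Redis address

        RedissonClient redisson = Redisson.create(config);
        try {
            // Hypothetical key; use whatever your API reads to pick the next row.
            RAtomicLong nextRow = redisson.getAtomicLong("row.counter");
            if (nextRow.get() == 0) { // the refresh wiped the cache
                try (Connection c = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM records")) { // hypothetical table
                    rs.next();
                    nextRow.set(rs.getLong(1)); // resume inserts after the existing rows
                }
            }
        } finally {
            redisson.shutdown();
        }
    }
}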

Related

Laravel Application session expires on AWS ELB

I have a Laravel application deployed on AWS Elastic Beanstalk with a classic load balancer. Somehow the user sessions expire at irregular times: sometimes right after logging in, most times a few minutes after logging in, and on some occasions it takes hours. On localhost, this doesn't happen.
I have configured the session duration in my Laravel application to 10 hours, and this works perfectly on localhost, but somehow it doesn't work on AWS ELB.
I suspect that AWS resets the app sessions a number of times within a day. If that's the case, how do I overcome this? If that's not the case, then what might be causing it?
I'm posting the answer here in case anyone runs into the same problem. What happens with AWS servers is that they redeploy your code a couple of times a day, and this clears all newly created and uploaded files in your project. That's why you have to use cloud storage if you want to store files, and the same thing happens with sessions.
By default Laravel saves sessions in a file, and whenever AWS redeploys your code, it wipes all current sessions because it deletes the session file. The solution is to store the sessions anywhere but in a file. I used my database to store sessions and cache. You can do that by:
Going to config/session.php and changing the driver to database.
Then run:
php artisan session:table
php artisan migrate
These will create the sessions table in the database for you, and that should fix the AWS problem, just as #arun-a said in short. You can check out the session docs for more info.
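If you drive the setting from the environment, which is the usual Laravel convention since config/session.php reads SESSION_DRIVER by default, the change amounts to these .env lines (CACHE_DRIVER only if you also move the cache to the database as mentioned above):
SESSION_DRIVER=database
CACHE_DRIVER=database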
If you are using a load balancer, you have to keep sessions centralized so they are accessible across multiple servers. So use the database session driver instead of the file driver and run the related migration. Refer here.

How to handle Infinispan cache creation and deployment

We have an Infinispan cluster serving as a cache server for our applications. Every time we need a new cache, we have to edit the config files and redeploy the cluster, which is problematic. For obvious reasons, we don't want to redeploy the cache cluster.
We can add the new cache definition through the web interface or the CLI, but that has the downside of not recording the configuration in a repo. Ideally I want to be able to add cache definitions in a way that is persisted in my code repo, so that in case of a disaster I can simply redeploy the cache cluster.
We looked into creating cache definitions through source code at application startup, but that doesn't seem to be possible.
Does anyone have an idea about best practices for this issue?
After some R&D, this is what we found:
Programmatic creation of caches is possible through the JCache implementation in Infinispan, but we could not find a way to configure it properly. The end result is just an empty cache definition with no properties.
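For reference, the JCache attempt looked roughly like the sketch below, using the standard JSR-107 API (the cache name and settings are placeholders). As noted above, MutableConfiguration only exposes generic JSR-107 options, not Infinispan specifics such as clustering mode, which is why the resulting definition was effectively empty:
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class CacheBootstrap {
    public static void main(String[] args) {
        // Picks up whichever JCache provider is on the classpath (Infinispan's, here).
        CacheManager manager = Caching.getCachingProvider().getCacheManager();

        // Only generic JSR-107 settings are available on MutableConfiguration.
        MutableConfiguration<String, String> cfg = new MutableConfiguration<>();
        cfg.setStoreByValue(false);

        Cache<String, String> cache = manager.createCache("orders", cfg); // "orders" is a placeholder
        cache.put("key", "value");
    }
}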
What we ended up doing is creating the caches using the JBoss CLI: write a script that creates the cache definitions and commit that script to the version control system. This way you can recreate your cache server at any time by rerunning that script (a sketch of such a script follows below). The downside of this approach is that you need jboss-cli installed on your deployment machine (your CI, probably), which is very inconvenient. We decided to do this step manually for the time being.
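A sketch of what such a script might look like, assuming a WildFly/EAP-style infinispan subsystem (the container name, cache names, and attributes are placeholders, and the exact resource paths vary between server versions, so check them against your own configuration):
batch
/subsystem=infinispan/cache-container=clustered/replicated-cache=orders:add(mode=SYNC)
/subsystem=infinispan/cache-container=clustered/replicated-cache=customers:add(mode=SYNC)
run-batch
Committed to the repo, it can be replayed against a fresh server with jboss-cli.sh --connect --file=create-caches.cli.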

In WebLogic 10.3.5, is there any way to expire an HTML file from cache without going through a server restart

In WebLogic 10.3.5, is there any way to expire an HTML file from the cache without going through a server restart? I am supporting a server with frequent HTML changes and hope to find a way to avoid restarting the server each time the HTML is updated. The environment supports a PeopleSoft domain. Thanks.
There's a way indeed: the "Resource Reload Check (in seconds)" parameter, found in a web app's setup, is what you're looking for. I've set this to 5 (seconds) in order to have a periodic refresh of dynamic resources generated by an Application Engine (an XML parsed by an XSLT).
For some details, here's the doc for 12.1.2, but I can confirm it also exists in 10.3.4 (so in your version too): https://docs.oracle.com/middleware/1212/wls/WLACH/pagehelp/J2EEwebappwebappconfigurationtitle.html
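If you would rather set it in the deployment descriptor than in the console, the console setting corresponds, as far as I know, to resource-reload-check-secs in weblogic.xml (the namespace below is version-dependent, so verify it for 10.3.x):
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <container-descriptor>
        <!-- re-check static resources for changes every 5 seconds -->
        <resource-reload-check-secs>5</resource-reload-check-secs>
    </container-descriptor>
</weblogic-web-app>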

Problems flushing Magento Redis Cache on an installation with a separate backend server

My problem is that I do not think I am able to flush the Magento Redis cache from the admin page. I realize the problem could come from many sources, but my gut tells me it has something to do with the backend being on a separate server. My Magento installation is as follows:
Magento CE 1.8
Backend server and NFS (media) on an Amazon AWS EC2 instance at http://admin.example.com
Database on AWS RDS MySQL
2 app servers (scalable to more) on AWS Elastic Beanstalk at http://www.example.com (Route 53)
Regular backend cache (database 0), Lesti-FPC (database 0), and redis_session (database 1) on AWS ElastiCache Redis
I originally had my Lesti-FPC configured to use database 2 on the redis cache. I thought it worked pretty well as far as I could tell, until I realized that I couldn't flush the cache at all from the admin System>Cache Management page. "Flush Magento Cache," "Flush Cache Storage," "disable", and "refresh" did nothing. I could only flush it by rebooting the redis node or going in with redis-cli and using redis commands.
I then tried configuring Lesti-FPC to use database 0 as described above. It worked better: now I could flush the FPC with "Flush Cache Storage," although the other options still didn't work. At the time, I assumed it was an issue specific to Lesti-FPC. But anyway, using "Flush Cache Storage" was good enough for me, especially once I discovered that I could flush the cache through code using
Mage::app()->getCacheInstance()->flush();
I just recently found out that the problem may not be specific to Lesti-FPC. While trying to fix the Lesti issue, I tried monitoring redis. I know nothing about redis or caching, but when I would try to refresh the FPC, I would see commands like:
“del” “zc:ti:403_FPC”
“srem” “zc:tags” “403_FPC”
But those tags never existed. Doing:
keys *FPC*
in redis would give me
“zc:ti:109_FPC”
but nothing with 403. So this means that my FPC caches do not get invalidated as they're supposed to after product/category changes and reindexing. I got around this by manually flushing the cache after changes and running cron jobs to flush and prime the FPC every few hours.
But it made me suspicious. I tried refreshing the other caches from the admin, and I found that Magento would always try to delete and read the 403 keys (some of which existed and some of which did not) but never any 109 keys (of which there are many).
My guess is that the 403 keys are specific to the admin server and the 109 keys are specific to the app servers. The admin server, maybe because it is on a different subdomain, is not touching the app servers' cached entries. But the app servers are able to find their own keys fine, as demonstrated by the fact that the FPC is working very well.
Does this make sense? Is there something I could do to fix this? Did I configure something incorrectly or is this a magento bug?
It turns out that the Zend cache prefix is the first three characters of an MD5 hash of the path to your etc folder.
My admin server has its document root at /var/www/html; the full path /var/www/html/app/etc gives an id of 403. The app servers running on Elastic Beanstalk have their document roots at /var/app/current, which is set automatically at deployment, so they hash to a different prefix (109).
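You can check what prefix a given path produces from the shell (a sketch of the substr-of-md5 behavior described above):
php -r 'echo substr(md5("/var/www/html/app/etc"), 0, 3), PHP_EOL;'
And, assuming your Magento 1.x version honors it (Magento 1 reads id_prefix from the global/cache node, as far as I know, so verify on your install), you can pin one shared prefix on all servers in app/etc/local.xml so the admin and app servers read and flush the same keys. The prefix value here is a placeholder:
<global>
    <cache>
        <id_prefix>mag_</id_prefix>
    </cache>
</global>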
It seems pretty dumb. Why not a hash of the database address and database name or something? That would make more sense.
Anyway, I hope this helps someone.

EPiServer cache behind a load balancer

I have a strange problem with EPiServer 6.0. When it is in an environment behind a load balancer, it seems to serve outdated cache. For example, when accessing user settings it doesn't return the latest data; however, when run locally or when the load balancer is not present, I get the latest results. There is an event listener set up between the two load-balanced servers to update the cache. Could anybody advise, please?
This article will probably be helpful to you.
The first thing to check is that the enableEvents and enableRemoteEvents attributes are both set to "true" in the episerver.config file.
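In EPiServer CMS 6 those attributes sit, as far as I recall, on the applicationSettings element in episerver.config (a sketch; verify the element placement against your own config):
<episerver>
    <applicationSettings enableEvents="true" enableRemoteEvents="true" />
</episerver>
The same values need to be set on every server in the cluster so the event listener can propagate cache invalidations between them.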
