Disable Laravel Redis Cache

I have set up Laravel Eloquent ORM queries with a Redis cache, i.e. ->remember(10)->get() for a 10-minute cache.
However, I want to disable all caching while I am working on the development site; on the production site the cache needs to stay on.
How can I turn the cache on and off without removing ->remember(10) from the queries? I have many queries and do not want to remove the calls one by one on the development site and then add them all back on the production site.
I have tried ->remember(0), -1, and none; nothing works, and the Redis cache keeps serving cached results. Can anyone help?

You should have an environment file (.env) where you can set the cache driver. Use the array or file driver in your development environment and redis in your production environment.
http://laravel.com/docs/5.0/configuration#environment-configuration
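
For example, assuming a standard config/cache.php that reads the default driver from the CACHE_DRIVER variable, the two environments would differ only in their .env files:

# .env on the development site
CACHE_DRIVER=array

# .env on the production site
CACHE_DRIVER=redis

With the array driver, ->remember(10) still executes, but the "cache" lives only for the current request, so in development every request effectively hits the database while the query code stays untouched.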

Related

Laravel cache store does not support tagging on production

I've set CACHE_DRIVER=redis both locally and in production in my .env file. Locally I can use tags, and I see the tags and their values showing up in the Redis database.
However, if I deploy to production I get "cache store does not support tagging". My server is deployed through Forge, so it should have Redis set up. I've been using it for a few weeks with Redis as the cache driver, so I'm surprised that it's throwing the error now.
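
For context, Laravel only allows tags on stores that support them (redis, memcached, array); other drivers throw that exception. A minimal sketch of the tagging calls in question (key and tag names are illustrative):

use Illuminate\Support\Facades\Cache;

$john = ['name' => 'John'];

// Works when the effective store is Redis; throws the "cache store
// does not support tagging" exception under the file or database driver.
Cache::tags(['artists'])->put('john', $john, 600); // TTL in seconds (minutes in 5.x)
$value = Cache::tags(['artists'])->get('john');

Since the exception reflects the effective store rather than the .env value, a stale cached config on the server (fixable with php artisan config:clear) is one common way for production to silently fall back to another driver.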

Sitecore custom cache is not shared across all CD servers

I have implemented a Sitecore custom cache which works fine on my local instance. I have pushed the changes to production (in production we have 5 CD servers served through a load balancer). When the first request is made, the code fetches some data from the DB and caches it. If the same request goes to another CD box, the previously fetched data is not available in the cache!
Can someone help with this? Do CD servers not share the cache?
This is the expected behaviour of an in-memory cache, which works well only for single-server deployments like your local instance; when you have multiple CDs, each of them has its own local in-memory cache store that is not shared with the others.
Therefore, for scaled deployments, you can use a Redis cache as a centralised cache store, so that each CD role can access that store and retrieve the cached data.
Redis is time-tested for this purpose and works well when synchronising data updates between multiple CDs.

Which is better Redis on Lumen or Laravel?

I just learned about Redis, and I want to try to create a scalable web application. To achieve this I'm going to use Laravel as the main application and Lumen as the microservice (API). After I learned about Redis I wanted to add it to my project, but I was confused and tried to get an explanation from Google, with no luck. I was still confused after reading a lot of tutorials.
My questions are:
Should I make it separate from the server? (Because I saw on Docker that Redis runs in a separate container.)
Should I attach it to the Laravel app? (Because it's the main application.)
Thank you
To connect Redis to Laravel, see the Laravel official documentation.
To connect Lumen to Redis, see these links:
Lumen docs for cache
Lumen docs for queue
You can put Redis on any server you want and connect it to Laravel or Lumen via your .env file:
REDIS_HOST="your server"
REDIS_PORT="the port on which Redis listens"
REDIS_PASSWORD="the password set in Redis"
Note: you are not forced to connect Redis to Laravel; if you only need it in Lumen, just use it there.
First of all, Redis is an in-memory data structure store that is used as a database, cache and message broker (What is Redis). It is similar to a database (DB) you would connect to, not something you can bundle inside your app.
It sits somewhere, running as a daemon, and you connect to it for the purposes of caching or message brokering, etc.
Now that you know you cannot embed it: do you want faster caching or session management, and do you have the resources to support it? If yes, then you should connect to Redis.
Take note of one thing, however: if you are going to run both Lumen and Laravel on the same system, you have to make certain changes to the environment files of the two applications.
E.g. in .env (Laravel app) you can rename keys like REDIS_HOST to REDIS_HOST_LARAVEL while keeping REDIS_HOST in .env (Lumen app). Another example is renaming DB_HOST to something like MY_DB_HOST; change the names accordingly in the config/ files.
For some reason the apps can behave oddly when a Lumen and a Laravel app run on the same server and both connect to Redis for cache or session management.
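
A minimal sketch of that renaming in the Laravel app's Redis config (REDIS_HOST_LARAVEL is the hypothetical key from the example above, not a framework convention):

// config/database.php (Laravel app)
'redis' => [
    'client' => 'predis',
    'default' => [
        // renamed so it cannot clash with the Lumen app's REDIS_HOST
        // when both .env files are loaded on the same server
        'host'     => env('REDIS_HOST_LARAVEL', '127.0.0.1'),
        'port'     => env('REDIS_PORT', 6379),
        'password' => env('REDIS_PASSWORD', null),
        'database' => 0,
    ],
],

An alternative (and often simpler) way to keep two apps from trampling each other in one Redis instance is to give each app a distinct cache prefix or database number rather than renaming environment keys.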

Laravel Application session expires on AWS ELB

I have a Laravel application deployed on AWS Elastic Beanstalk with a classic load balancer. Somehow the user sessions expire at irregular times. Sometimes a session expires right after logging in, and most times a few minutes after logging in. On some occasions it takes hours to expire. But on localhost this doesn't happen.
I have configured the session duration in my Laravel application to 10 hours, and this works perfectly on localhost, but somehow it doesn't work on AWS ELB.
I suspect that AWS resets the app sessions a number of times within a day. If that's the case, how do I overcome this? If that's not the case, then what might be causing it?
I'm posting the answer here so that anyone who runs into the same problem can find it. What happens with AWS servers is that they kind of redeploy your code a couple of times a day, and this clears all newly created and uploaded files in your project. That's why you have to use cloud storage if you want to store files, and the same thing happens with sessions.
By default Laravel saves sessions in files, and whenever AWS redeploys your code it wipes all current sessions because it deletes the session files. The solution is to store the sessions anywhere but the filesystem, so I used my database to store sessions and cache. You can do that by:
Going to config/session.php and changing the driver to database
After that, run
php artisan session:table
php artisan migrate
These will create the sessions table in the database for you, and that should fix the AWS problem, just like @arun-a said in short. You can check out the session docs for more info.
If you are using a load balancer, you have to keep sessions centralized so they are accessible across multiple servers. So use the database session driver instead of file and run the related migration. Refer here.
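
Equivalently, since config/session.php reads its driver from the SESSION_DRIVER environment variable, you can switch per environment without editing the config file:

# .env
SESSION_DRIVER=database

The redis session driver also works here, as long as every instance behind the load balancer points at the same Redis host; the key point is that the session store must live outside the individual web servers.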

Problems flushing Magento Redis Cache on an installation with a separate backend server

My problem is that I do not think I am able to refresh the Magento Redis cache from the admin page. I realize that the problem could come from many sources, but my gut tells me it has something to do with the backend being on a separate server. My Magento installation is as follows:
Magento CE 1.8
Backend server and NFS (media) on an Amazon AWS EC2 instance at http://admin.example.com
Database on AWS RDS MySQL
2 app servers (scalable to more) on AWS Elastic Beanstalk at http://www.example.com (Route 53)
Regular backend cache (database 0), Lesti-FPC (database 0), and redis_session (database 1) on AWS ElastiCache Redis
I originally had my Lesti-FPC configured to use database 2 on the redis cache. I thought it worked pretty well as far as I could tell, until I realized that I couldn't flush the cache at all from the admin System>Cache Management page. "Flush Magento Cache," "Flush Cache Storage," "disable", and "refresh" did nothing. I could only flush it by rebooting the redis node or going in with redis-cli and using redis commands.
I then tried configuring Lesti-FPC to use database 0 as described above. It worked better. Now, I could flush the FPC with "Flush Cache Storage," although the other options still didn't work. At the time, I assumed it was an issue specifically with Lesti-FPC. But anyway, using "Flush Cache Storage" was good enough for me at the time, especially once I discovered that I could flush the cache through code using
Mage::app()->getCacheInstance()->flush();
I just recently found out that the problem may not be specific to Lesti-FPC. While trying to fix the Lesti issue, I tried monitoring redis. I know nothing about redis or caching, but when I would try to refresh the FPC, I would see commands like:
"del" "zc:ti:403_FPC"
"srem" "zc:tags" "403_FPC"
But those tags never existed. Doing:
keys *FPC*
in redis would give me
"zc:ti:109_FPC"
but nothing with 403. So this means that my FPC caches do not get invalidated like they're supposed to after product/category changes and reindexing. I got around this by manually flushing the cache after changes and running cron jobs to flush and prime the FPC every few hours.
But it made me suspicious. I tried refreshing the other caches from the admin, and I found that magento would always try to delete and read the 403 keys(some of which existed and some of which did not) but never any 109 keys (of which there are many).
My guess is that the 403 keys are specific to the admin server, and the 109 keys are specific to the app servers. The admin server, maybe because it is on a different subdomain, is not touching the app servers' cached entries. But the app servers are able to find their own keys fine, as demonstrated by the fact that the FPC is working very well.
Does this make sense? Is there something I could do to fix this? Did I configure something incorrectly or is this a magento bug?
It turns out that the Zend cache prefix is the first three characters of an md5 hash of the path to your etc folder.
My app server has its document root at /var/www/html. The full path of /var/www/html/app/etc gives an id of 403. The app servers running on Elastic Beanstalk have their document roots at /var/app/current, which is set automatically at deployment.
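
A quick way to check this for any install is to reproduce the derivation in PHP (a sketch of the scheme described above; adjust the paths to your own app/etc directories):

<?php
// Zend cache id prefix = first three characters of md5(<path to app/etc>)
echo substr(md5('/var/www/html/app/etc'), 0, 3), PHP_EOL;    // backend server
echo substr(md5('/var/app/current/app/etc'), 0, 3), PHP_EOL; // app servers

Because the two document roots differ, the two hashes differ, and each install reads and writes under its own key prefix, which is exactly the 403-vs-109 split observed above.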
It seems pretty dumb. Why not a hash of the database address and database name or something? That would make more sense.
Anyway, I hope this helps someone.
