Laravel Application session expires on AWS ELB

I have a Laravel application deployed on AWS Elastic Beanstalk with a classic load balancer. Somehow the user sessions expire at irregular times: sometimes right after logging in, most times a few minutes after logging in, and on some occasions it takes hours. On localhost this doesn't happen.
I have configured the session lifetime in my Laravel application to 10 hours, and this works perfectly on localhost, but somehow it doesn't work on AWS ELB.
I suspect that AWS resets the app sessions several times a day. If that's the case, how do I overcome this? If that's not the case, then what might be causing this?

I'm posting the answer here in case anyone else runs into the same problem. What happens with AWS servers is that they redeploy your code a couple of times a day, and this clears all newly created and uploaded files in your project. That's why you have to use cloud storage if you want to persist files, and the same thing happens with sessions.
By default Laravel saves sessions in files, and whenever AWS redeploys your code it wipes all current sessions because the session files are deleted. The solution is to store sessions anywhere but the filesystem. I used my database to store sessions and cache. You can do that by:
Going to config/session.php and changing the driver to database.
Then running:
php artisan session:table
php artisan migrate
These will create the sessions table in the database for you, and that should fix the AWS problem, just as #arun-a said in short. You can check out the sessions docs for more info.
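For reference, the change amounts to the following (a minimal sketch of config/session.php; if SESSION_DRIVER is set in your .env, that value takes precedence over the default given here):
<?php
// config/session.php (excerpt) — switch sessions from files to the database.
// Setting SESSION_DRIVER=database in .env has the same effect.
return [
    'driver' => env('SESSION_DRIVER', 'database'),
    // The database connection used for the sessions table;
    // null falls back to your default connection.
    'connection' => env('SESSION_CONNECTION', null),
    // ... the rest of the file stays unchanged
];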

If you are using a load balancer, you have to keep sessions centralized so they can be accessed across multiple servers. So use the database session driver instead of file and run the related migration. Refer here.

Related

Laravel session problems - randomly logged as different user - after deployment to AWS Elastic Beanstalk

My problem is that sometimes when I do a new code deployment to AWS Elastic Beanstalk while I already have an active session (I am logged in as myself), then when I refresh the page, I'm logged in as someone else. I use database sessions. This doesn't happen too often, or at least I'm not aware of it, but I can't figure it out. I'm using standard Laravel login functionality. I'm trying to find at least a starting point for how and where to begin investigating. It has to do something with the deployments to Elastic Beanstalk, because that's when this sometimes happens. I would have imagined that database sessions shouldn't be affected by code changes on EB. Any help would be appreciated.

Laravel app not working after deploying on Heroku

I have a Laravel app working perfectly on my localhost, but when I deploy it on Heroku cloud, it gives the following error,
ErrorException:
file_put_contents(/tmp/build_a3cd0b04/storage/framework/sessions/YZFia5zZhnq2Lz2jZmdD9uZKjiQUU9KnMmRU0oad):
Failed to open stream: No such file or directory
I have tried changing permissions of the storage folder, clearing cache, etc., but nothing works. Any ideas, please?
It looks like you are using the file driver. That isn't a very good choice on Heroku due to its ephemeral filesystem. It won't scale horizontally and sessions will be lost unpredictably (whenever your dyno restarts, which happens at least once per day).
I suggest using the Redis, Memcached, or database driver.
If you already have a database, that driver is likely easiest. You can use the session:table Artisan command to generate a database migration for holding session data. Commit that migration, update your config/session.php accordingly, redeploy, and run your migrations.
I suspect the cookie driver would also work. Basically, you can use anything but the file driver (or the toy array driver).
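As an illustration of the Redis route (a sketch only; it assumes config/database.php defines a Redis connection named 'default', e.g. populated from the credentials a Heroku Redis add-on exposes — the add-on itself is an assumption, not part of the question):
<?php
// config/session.php (excerpt) — a sketch of switching to the redis driver.
// Assumes a 'default' Redis connection exists in config/database.php.
return [
    'driver' => env('SESSION_DRIVER', 'redis'),
    'connection' => env('SESSION_CONNECTION', 'default'),
];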

Why do I lose contents of my database after a heroku dyno restart?

Anytime my app goes to sleep and comes back on, I lose data in my database
And I'm not storing any media, it's just form data (text)... I built the app on Strapi and I've followed all their guidelines, but it keeps happening. I'd be happy if anyone can help.
Local data (files, DB) is cleared after a dyno restart because the Heroku filesystem is ephemeral. A dyno is restarted (at least) every 24 hours.
In your case Strapi uses SQLite, where data is saved in a local file.
Strapi suggests configuring Postgres on Heroku; alternatively, you can use an external DB storage service.
First of all:
As you create content types with Strapi, it generates the code (= new files) for the corresponding controllers/routes/services.
Heroku does not persist data after a restart.
After a restart, Strapi checks which content types exist in the code and deletes the database tables of types that no longer exist.
Therefore, on Heroku you have to set up all your content types locally and connect to an external DB (e.g. Heroku Postgres), never Strapi's default file-based database.
Then push the generated files and finally deploy.
Thus, on Heroku you should always run in production mode. This way the option to alter content types is completely blocked, and you will not run into the issue of data loss after a restart.

Which is better Redis on Lumen or Laravel?

I just learned about Redis and I want to try to create a scalable web application. To achieve this I'm going to use Laravel as the main app and Lumen as the microservice (API). After I learned about Redis, I wanted to add it to my project, but I got confused and tried to find an explanation on Google with no luck. I'm still confused after reading a lot of tutorials.
My questions are:
Should I make it separate from the server? (Because I saw that on Docker, Redis runs in a separate container.)
Should I attach it to the Laravel app? (Because it's the main one.)
Thank you
To connect Redis to Laravel, see the official Laravel documentation.
To connect Lumen to Redis, see these links:
lumen doc for cache
lumen doc for queue
You can put your Redis on any server you want and connect it to Laravel or Lumen with the following (in your .env file):
REDIS_HOST="your server"
REDIS_PORT="port of your server to connect to Redis"
REDIS_PASSWORD="password set in Redis"
Note: you are not forced to add Redis to Laravel; if you only need it in Lumen, that's fine!
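A quick usage sketch once those values are in place (assuming a Redis client such as phpredis or predis is installed; the keys and values here are made up for illustration):
<?php
// Minimal usage sketch with Laravel's Redis and Cache facades.
use Illuminate\Support\Facades\Redis;
use Illuminate\Support\Facades\Cache;

Redis::set('greeting', 'hello');        // raw Redis command
$value = Redis::get('greeting');        // "hello"

// Going through the cache layer instead requires CACHE_DRIVER=redis in .env:
Cache::put('user:1:name', 'Jane', 600); // key, value, TTL in seconds
$name = Cache::get('user:1:name');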
First of all, Redis is an in-memory data structure store that is used as a database, cache, and message broker (see What is Redis). It is similar to a database (DB) you would connect to, but not something you can embed in your app.
It sits somewhere, running as a daemon, and you connect to it for the purposes of caching or message brokering, etc.
Now that you know you cannot embed it: do you want faster caching or session management? Do you have the resources to support it? If yes, then you should connect to Redis.
Take note of something, however: if you are going to run both Lumen and Laravel on the same system, you have to make certain changes to the environment files of the two applications.
E.g. in .env (Laravel app) you can rename keys like REDIS_HOST to REDIS_HOST_LARAVEL while keeping REDIS_HOST in .env (Lumen app). Another example is renaming DB_HOST to something like MY_DB_HOST. Change them accordingly in the config/ files.
For some reason, Lumen and Laravel apps running on the same server can behave strangely when both connect to Redis for cache or session management.
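A sketch of what that renaming looks like on the Laravel side (REDIS_HOST_LARAVEL is the hypothetical key from the example above, not a standard Laravel name):
// config/database.php (excerpt) — read the renamed key so the two apps
// sharing the server don't clash. REDIS_HOST_LARAVEL is hypothetical.
'redis' => [
    'default' => [
        'host'     => env('REDIS_HOST_LARAVEL', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port'     => env('REDIS_PORT', 6379),
    ],
],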

Problems flushing Magento Redis Cache on an installation with a separate backend server

My problem is that I do not think I am able to flush the Magento Redis cache from the admin page. I realize that the problem could come from many sources, but my gut tells me it has something to do with the backend being on a separate server. My Magento installation is as follows:
Magento CE 1.8
Backend server and NFS (media) on an Amazon AWS EC2 instance at http://admin.example.com
Database on AWS RDS MySQL
2 app servers (scalable to more) on AWS Elastic Beanstalk at http://www.example.com (Route 53)
Regular backend cache (database 0), Lesti-FPC (database 0), and redis_session (database 1) on AWS ElastiCache Redis
I originally had my Lesti-FPC configured to use database 2 on the redis cache. I thought it worked pretty well as far as I could tell, until I realized that I couldn't flush the cache at all from the admin System>Cache Management page. "Flush Magento Cache," "Flush Cache Storage," "disable", and "refresh" did nothing. I could only flush it by rebooting the redis node or going in with redis-cli and using redis commands.
I then tried configuring Lesti-FPC to use database 0 as described above. It worked better. Now, I could flush the FPC with "Flush Cache Storage," although the other options still didn't work. At the time, I assumed it was an issue specifically with Lesti-FPC. But anyway, using "Flush Cache Storage" was good enough for me at the time, especially once I discovered that I could flush the cache through code using
Mage::app()->getCacheInstance()->flush();
I just recently found out that the problem may not be specific to Lesti-FPC. While trying to fix the Lesti issue, I tried monitoring redis. I know nothing about redis or caching, but when I would try to refresh the FPC, I would see commands like:
"del" "zc:ti:403_FPC"
"srem" "zc:tags" "403_FPC"
But those tags never existed. Doing:
keys *FPC*
in redis would give me
"zc:ti:109_FPC"
but nothing with 403. So this means that my FPC caches do not get invalidated as they're supposed to after product/category changes and reindexing. I got around this by manually flushing the cache after changes and running cron jobs to flush and prime the FPC every few hours.
But it made me suspicious. I tried refreshing the other caches from the admin, and I found that Magento would always try to delete and read the 403 keys (some of which existed and some of which did not) but never any 109 keys (of which there are many).
My guess is that the 403 keys are specific to the admin server, and the 109 keys are specific to the app servers. The admin server, maybe because it is on a different subdomain, is not touching the app servers' cached entries. But the app servers are able to find their own keys fine, as demonstrated by the fact that the FPC is working very well.
Does this make sense? Is there something I could do to fix this? Did I configure something incorrectly or is this a magento bug?
It turns out that the Zend cache prefix is the first three characters of an md5 hash of the path to your etc folder.
My admin server has its document root at /var/www/html; the full path /var/www/html/app/etc gives an id of 403. The app servers running on Elastic Beanstalk have their document roots at /var/app/current, which is set automatically at deployment.
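A minimal sketch of that derivation, based on the behaviour described above (the exact Magento 1 internals may differ slightly, e.g. in casing):
<?php
// Derive the three-character cache id prefix from the etc path.
$adminEtc = '/var/www/html/app/etc';    // admin server: gave "403" here
$appEtc   = '/var/app/current/app/etc'; // Elastic Beanstalk servers: "109"

echo substr(md5($adminEtc), 0, 3) . "_\n"; // e.g. 403_
echo substr(md5($appEtc), 0, 3) . "_\n";   // e.g. 109_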
It seems pretty dumb. Why not a hash of the database address and database name or something? That would make more sense.
Anyway, I hope this helps someone.
