I am trying to solve a caching issue that really has me scratching my head.
We have a Laravel app built in Docker that is deployed to Staging and Production.
Assets are built with Laravel Mix, which uses Webpack. All of our
files are versioned.
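For reference, the versioning is the standard Laravel Mix setup, roughly like this (entry points are illustrative):

// webpack.mix.js
const mix = require('laravel-mix');

mix.js('resources/js/app.js', 'public/js')
   .sass('resources/sass/app.scss', 'public/css')
   .version();

and the Blade templates resolve the hashed URLs with the mix() helper, which reads public/mix-manifest.json and appends the hash as an "?id=..." query string:

<link rel="stylesheet" href="{{ mix('css/app.css') }}">
<script src="{{ mix('js/app.js') }}"></script>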
Staging is in debug mode while Production is not.
Staging is a single instance behind a load balancer; Production is two instances behind a load balancer.
Both are using the same Nginx configs.
Both sites are using CloudFront with the same caching behaviors: the cache behavior does not forward cookies and whitelists the "id" query string.
When we deploy to Staging, everything works as expected.
When we deploy to Production, the CSS and JavaScript are served from cache as the previous version.
I checked the asset hashes and they are the same, and both have hits from CloudFront.
When I download the CSS files and diff Production against Staging, they are not the same.
What really boggles my mind is that when I change the cache key on Production, I see a miss from CloudFront, then a hit from CloudFront on reload. When I then download and diff Staging and Production, they are now the same.
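For what it's worth, the only reliable workaround so far is forcing a refresh, either by changing the cache key as above or by invalidating the asset paths after a deploy, something like (the distribution ID is a placeholder):

aws cloudfront create-invalidation --distribution-id E1EXAMPLE --paths "/css/*" "/js/*"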
I have NO idea what is going on or where to look. Any ideas would be much appreciated!
What is the recommended way to deploy changes (for example, a change in some Content Type model) from development to production without downtime?
I’m using this setup.
I have a development instance with a development Postgres database.
On production I have 3 Strapi instances (serving both the API and the admin, using the same production Postgres database), and those instances are behind a load balancer.
Let's say that I have a Content Type named Article (both on development and deployed on production).
Let's assume that I want to change that content type, for example by adding some fields and removing some fields in the Article content type.
How do I deploy those changes to production without downtime?
I've done some tests, and when I, for example, update Strapi Production Instance #1 to pull the new code for the updated models, Strapi updates the database, of course. From that point on, Strapi Production Instances #2 and #3 have problems serving the admin panel (JavaScript errors, because the database was changed but their JS model files were not updated).
After I update the code on instances #2 and #3, everything works as expected.
But doing something like this on a working product will be visible as downtime.
How do I properly handle this situation? Thanks for the help!
Could PM2 solve this problem? Strapi mentions this in its documentation:
PM2 Runtime allows you to keep your Strapi project alive and to reload it without downtime.
Strapi Docs v4
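As far as I understand, pm2 reload only gives a zero-downtime restart of the Node processes on a single machine, so it would not by itself fix the mismatch between the already-migrated database and the stale model files on instances #2 and #3; I would still have to roll the instances one at a time behind the load balancer. For reference, the reload itself is just (app name illustrative):

pm2 start ecosystem.config.js --env production
pm2 reload strapi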
I understand the process of local > staging > production deployment, although I've come across one issue that I have a solution for, but it doesn't feel like the correct method.
I have a production .env on the server and a local .env for local development, which is fine for storing my environment variables. However, I am using the Stripe API and have test API keys locally and live API keys for production.
My Stripe public key gets pulled into public/app.js during the Vue/Inertia compilation via the MIX_ prefix in my .env. I first push this to GitHub and then deploy it to Laravel Forge, where my deploy script runs yarn prod, pulling in the live Stripe public API key when the assets are compiled on the server.
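Concretely, the moving parts look roughly like this (key names are illustrative):

# .env - local uses the test key; the production .env on Forge holds the live key
MIX_STRIPE_KEY=pk_test_xxxxxxxx

// resources/js/app.js - Mix inlines MIX_-prefixed variables at build time
const stripeKey = process.env.MIX_STRIPE_KEY;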
Basically, what I am asking is: is there a standard deployment process where you compile production-ready files locally, pulling in the correct API keys, and push them to GitHub, or is there a better way that removes the need to compile assets on the server?
My problem is that I do not think I am able to refresh the Magento Redis cache from the admin page. I realize the problem could come from many sources, but my gut tells me it has something to do with the backend being on a separate server. My Magento installation is as follows:
Magento CE 1.8
Backend server and NFS (media) on an Amazon AWS EC2 instance at http://admin.example.com
Database on AWS RDS MySQL
2 app servers (scalable to more) on AWS Elastic Beanstalk at http://www.example.com (Route 53)
Regular backend cache (database 0), Lesti-FPC (database 0), and redis_session (database 1) on AWS ElastiCache Redis
I originally had my Lesti-FPC configured to use database 2 on the Redis cache. It worked pretty well as far as I could tell, until I realized that I couldn't flush the cache at all from the admin System > Cache Management page. "Flush Magento Cache," "Flush Cache Storage," "Disable," and "Refresh" did nothing. I could only flush it by rebooting the Redis node or by going in with redis-cli and using Redis commands.
I then tried configuring Lesti-FPC to use database 0 as described above. It worked better: now I could flush the FPC with "Flush Cache Storage," although the other options still didn't work. At the time, I assumed it was an issue specific to Lesti-FPC. Anyway, using "Flush Cache Storage" was good enough for me at the time, especially once I discovered that I could flush the cache through code using:
Mage::app()->getCacheInstance()->flush();
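(For what it's worth, my understanding is that the flush() call above wipes the entire cache backend regardless of tags, while the admin buttons go through tag-based cleaning, roughly:)

// removes only entries carrying Magento's cache tags - this is what System > Cache Management relies on
Mage::app()->cleanCache();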
I just recently found out that the problem may not be specific to Lesti-FPC. While trying to fix the Lesti issue, I tried monitoring Redis. I know nothing about Redis or caching, but when I would try to refresh the FPC, I would see commands like:
"del" "zc:ti:403_FPC"
"srem" "zc:tags" "403_FPC"
But those tags never existed. Doing:
keys *FPC*
in redis would give me
"zc:ti:109_FPC"
but nothing with 403. So this means that my FPC caches do not get invalidated like they're supposed to after product/category changes and reindexing. I got around this by manually flushing the cache after changes and by running cron jobs to flush and prime the FPC every few hours.
But it made me suspicious. I tried refreshing the other caches from the admin, and I found that Magento would always try to delete and read the 403 keys (some of which existed and some of which did not), but never any 109 keys (of which there are many).
My guess is that the 403 keys are specific to the admin server, and the 109 keys are specific to the app servers. The admin server, maybe because it is on a different subdomain, is not touching the app servers' cached entries. But the app servers are able to find their own keys fine, as demonstrated by the fact that the FPC is working very well.
Does this make sense? Is there something I could do to fix this? Did I configure something incorrectly, or is this a Magento bug?
It turns out that the Zend cache prefix is the first three characters of an md5 hash of the path to your etc folder.
My app server has its document root at /var/www/html. The full path /var/www/html/app/etc gives an id of 403. The app servers running on Elastic Beanstalk have their document roots at /var/app/current, which is set automatically at deployment.
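To illustrate, the prefix generation is roughly the following (paraphrasing Mage_Core_Model_Cache from memory):

// when no prefix is configured, Magento 1 derives one from the path of the etc directory
$prefix = substr(md5(Mage::getConfig()->getOptions()->getEtcDir()), 0, 3) . '_';
// so /var/www/html/app/etc and /var/app/current/app/etc hash to different prefixes (403_ vs 109_ here)

If I recall correctly, you can avoid the mismatch by setting an explicit <id_prefix> (or <prefix>) under the <global><cache> node in app/etc/local.xml on every server, so the admin and app servers agree on a single prefix.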
It seems pretty dumb. Why not a hash of the database address and database name or something? That would make more sense.
Anyway, I hope this helps someone.
I have a client who currently has one server running Magento, and his admin takes the whole site down for updates for multiple hours. I would like to make this an instant process, so I wanted to propose a new solution for how he should set it up:
Magento Production Server 1 (WEB+DB)
Magento Production Server 2 (WEB+DB)
Magento Dev Server 1
The DB would have to be synced somehow between those 2 servers (cluster? replication?). I was thinking that, for the smallest downtime possible, the updates should first be tested on the Dev server (DB/web synced from the production server just before upgrading). After checking that it works fine and knowing what the process looks like, I would disable load balancing (or round-robin DNS) so that only Server 1 is live, do the upgrades/updates on Server 2, switch to Server 2 as the production server, and then update Server 1. When both are done, switch load balancing/round robin back on.
I come from a Windows environment, so this is how I would do it on Windows (maybe with separate database and web servers too), and with tools like Red Gate SQL Compare/SQL Data Compare it should work.
But I don't know Magento at all, so please let me know what's possible and maybe how this should be done if the client doesn't want to end up with his shop being down...
You'll definitely need a production server, and some sort of staging/version management system.
I recommend checking out Subversion or Git for version management.
Changes can be committed to a repository first, and then updated to the live site with no downtime. This would be more than sufficient for a development environment.
For bigger changes, like a Magento version upgrade, you might still want/need to take the site down for a few hours in the middle of the night, as this is a much bigger process.
As for multiple servers, as an example I run a load balancer which balances between a primary and a secondary server. There is one database server that is separate. Changes are made to a development server, committed to the primary server with Subversion, and then any changes between the primary and secondary servers are rsynced to the secondary server every 60 seconds.
For this solution, session and cache data are stored in the database.
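A rough sketch of that sync job, assuming a cron entry on the primary (paths and excludes are illustrative; var/ and media handling depend on the setup):

* * * * * rsync -az --delete --exclude=var/ /var/www/html/ secondary:/var/www/html/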
IMHO, with a good hosting environment, you won't need multiple servers unless you are literally into the thousands of simultaneous visitors. Plugins are the usual cause of admin-related problems.
We've had great success with "cloud" environments. Instantiate a new cloud instance, get its IP, then point something like dev.yourdomain.com at it in your hosts file for testing. The only real downtime is that you should freeze the production site while the database converts to the new version, which can take a couple of hours. Our MySQL DB backup is 3 GB or so, but thankfully it compresses down to about 280 MB as a tgz.
We're using nginx and php-fpm and they are obscenely fast.
Typical migration path for me:
Back up the production site.
Start a new cloud instance and copy the production site to a dev site (restore the production database).
Try upgrading the dev site one step at a time to see what breaks.
Start a new cloud instance and do a completely fresh install of the newest Magento version.
Once it's working, restore the production database and watch as it grinds through converting it; see what breaks.
Pick between upgrade versus fresh install.
Back up the production MySQL and put the production site in maintenance mode while the dev site converts the database (see the note after this list).
Point the domain to the new IP address.
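A note on the maintenance-mode step above: in Magento 1 this is, if I remember right, just a flag file in the web root (path illustrative), so freezing and unfreezing looks like:

touch /var/www/html/maintenance.flag   # visitors now get the maintenance page
rm /var/www/html/maintenance.flag      # back online once the conversion finishes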
I recently implemented Capistrano for the first time with a new cloud production environment. When I run cap deploy, everything seems to work fine. I can visit my live application in the browser, but my static files seem to load very slowly (like 5.0-12.0s).
See answer for clarity on config.assets.compile.
Static files load slowly because they possibly are not static, but are being served by Sprockets.
Check in production.rb and see if config.assets.compile = true or it is not set. That would mean that Sprockets is doing the work. You would also see far-future headers being used.
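For reference, the relevant settings in config/environments/production.rb for a precompiled setup usually look something like this (Rails 3.1/3.2 era; adjust to your version):

# config/environments/production.rb
config.assets.compile = false        # never fall back to compiling via Sprockets at request time
config.assets.digest  = true         # use fingerprinted asset filenames
config.serve_static_assets = false   # let the web server serve files straight from public/assets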
Have a look in /home/my_user/my_app/current/public and see if an assets directory exists; I suspect it does not.
That means that mkdir -p is not working. The most likely cause is that the deploy user does not have sufficient permissions to create the directory.
Fix that, and also check (if this is an app upgraded from Rails 3.0 or before) that your config settings match those in the last section of the asset pipeline guide.
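If this is Capistrano 2, the stock assets recipe handles the precompile and the shared assets directory for you; from memory, the relevant bit is:

# Capfile
load 'deploy/assets'   # adds an assets:precompile step and symlinks shared/assets into public/assets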