Magento upgrade process and infrastructure for the smallest possible downtime

I have a client who currently runs Magento on a single server, and his admin takes the whole site down for multiple hours whenever he applies updates. I would like to make this as close to instant as possible, so I want to propose a new setup:
Magento Production Server 1 (WEB+DB)
Magento Production Server 2 (WEB+DB)
Magento Dev Server 1
The DB would have to be synced somehow between those two servers (cluster? replication?). For the smallest possible downtime, I was thinking the updates should first be tested on the Dev Server (DB and web files synced from production just before upgrading). Once I know the upgrade works and what the process looks like, I would restrict load balancing / round-robin DNS to Server 1 only, do the upgrades/updates on Server 2, switch production traffic to Server 2 and upgrade Server 1, and finally re-enable load balancing / round robin once both are done.
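For the DB sync part, what I have in mind is something along the lines of plain MySQL master/replica replication between the two servers (all hostnames, credentials and binlog coordinates in this sketch are placeholders), unless there is a better Magento-specific option:

    # On server 1 (current primary): dump the Magento DB with the binlog
    # coordinates recorded (as a comment at the top of the dump) so the
    # replica knows where to start.
    mysqldump --single-transaction --master-data=2 magento > magento.sql

    # On server 2: load the dump, then point replication at server 1 using
    # the coordinates from the dump (or from SHOW MASTER STATUS).
    mysql magento < magento.sql
    mysql -e "CHANGE MASTER TO MASTER_HOST='192.0.2.1', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"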
I come from a Windows environment, so this is how I would do it there (maybe with separate database and web servers too), and with tools like Red Gate SQL Compare / SQL Data Compare it should work.
But I don't know Magento at all, so please let me know what's possible and how this should be done if the client doesn't want to end up with his shop being down...

You'll definitely need a production server, and some sort of staging/version management system.
I recommend checking out Subversion or Git for version management.
Changes can be committed to a repository first and then deployed to the live site with no downtime. This would be more than sufficient for a development environment.
For bigger changes, like a Magento version upgrade, you might still want/need to take the site down for a few hours in the middle of the night, as this is a much bigger process.
As for multiple servers: as an example, I run a load balancer which balances between a primary and a secondary web server, with a separate database server. Changes are made on a development server, committed to the primary server with Subversion, and then any differences between the primary and secondary servers are rsynced to the secondary every 60 seconds.
For this solution, session and cache data are stored in the database.
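If it helps, that rsync step can be a small script run from cron every minute; the hostname and paths below are made up for illustration:

    #!/bin/sh
    # sync-to-secondary.sh -- push changed web files from the primary to the
    # secondary. --delete removes files that were deleted on the primary;
    # var/ and media/ are excluded here because sessions and cache live in
    # the database and media is handled separately.
    rsync -az --delete --exclude 'var/' --exclude 'media/' \
        /var/www/magento/ secondary.example.com:/var/www/magento/

    # crontab entry on the primary, to approximate the 60-second sync:
    # * * * * * /usr/local/bin/sync-to-secondary.sh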

IMHO, with a good hosting environment, you won't need multiple servers unless you are literally into the thousands of simultaneous visitors. Plugins are the usual cause of admin-related problems.
We've had great success with "cloud" environments. Instantiate a new cloud instance, get its IP, then in your "hosts" file point something like dev.yourdomain.com to it for testing. The only real downtime is that you should freeze the production site while the database converts to the new version, which can be a couple of hours. Our MySQL DB backup is 3 GB or so, but thankfully tgz's down to 280 MB.
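For the hosts-file trick, that is just a one-line override on your own workstation (the IP here is a placeholder for the new cloud instance):

    # Point dev.yourdomain.com at the new cloud instance for this machine only;
    # real visitors are unaffected until public DNS is changed.
    echo '203.0.113.25 dev.yourdomain.com' | sudo tee -a /etc/hosts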
We're using nginx and php-fpm and they are obscenely fast.
Typical migration path for me:
back up the production site
start a new cloud instance and copy the production site to a dev site (restore the production database)
try upgrading the dev site one step at a time to see what breaks
start another new cloud instance and do a completely fresh install of the newest Magento version
once that is working, restore the production database and watch as it grinds on converting it, see what breaks
pick between upgrade versus fresh install
back up the production MySQL and put the production site in maintenance mode while the dev site converts the database
point the domain to the new IP address
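As a rough illustration of the backup and maintenance-mode steps in that list (database names, credentials, and paths are invented; the maintenance.flag file is the Magento 1.x convention for maintenance mode):

    # Back up the production database and web files before touching anything.
    mysqldump --single-transaction magento_prod | gzip > magento_prod.sql.gz
    tar czf magento-files.tgz /var/www/magento

    # Put the production store into maintenance mode while the dev copy converts.
    touch /var/www/magento/maintenance.flag

    # On the new cloud instance: restore the files, load the database, and let
    # the new Magento version run its upgrade scripts against it.
    tar xzf magento-files.tgz -C /
    gunzip -c magento_prod.sql.gz | mysql magento_dev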

Related

Move OctoberCMS website from Ubuntu VM to a CentOS 7 VM

Our web developer picked OctoberCMS to develop our new website (his skill set). Unfortunately, before completing it he had to leave us abruptly for health reasons and is no longer available. His Ubuntu environment has some problems and we need the site on CentOS 7 anyway. The rest of us are OctoberCMS newbies, but we want to learn it.
We built a CentOS 7 VM and installed OctoberCMS and want to move his work over.
We cannot find any instructions on how to "export" the work he has done so far and import it into our new OctoberCMS install.
He is using 10 plugins, 3 of which he developed himself (I don't know if that is relevant).
Is there an easy way to do this or at least instructions?
We have been googling, youtubing, IRC'ing for a week and still at a loss.
Any help would be most appreciated.
There really isn't anything special you need to know about moving an OctoberCMS install to a new server compared to moving over any other PHP application.
I am assuming you know how to do the basics of setting up a LAMP stack, such as setting up a virtual host for the domain you want to host the site on and setting up a MySQL database and user/password to access the database. There are of course many variants on how you could accomplish this such as using a management tool like Plesk or cPanel, or just configuring the services manually via the command line.
1) Ensure your new server is running at least roughly the same version of Apache, MySQL, and PHP.
2) Copy over the directory that contains all of the web files from the old server into the document root for your domain on the new server.
3) Do a database dump from the old server and copy it to the new server. If possible, use the same database name and username and password as the old server. This way you don't have to worry about updating the configuration of the website.
4) Pull up the site and troubleshoot any errors that come up. It is helpful if OctoberCMS debug mode is on.
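A minimal sketch of steps 2 and 3, assuming SSH access from the new server to the old one; the hostname, paths, and database name are invented:

    # Step 2: copy the web files from the old server into the new document root.
    rsync -az olduser@old-server:/var/www/october/ /var/www/october/

    # Step 3: dump the database on the old server and load it on the new one.
    ssh olduser@old-server 'mysqldump october_db' > october_db.sql
    mysql october_db < october_db.sql

    # If the database credentials do differ, update config/database.php to match.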
Following the above method will ensure that you have the exact same setup on your new server that the old server had. This will copy over all of the plugins, data, etc.
There are of course many complexities that can come up during a switch over like this, but this should at least get you started and you can come back to StackOverflow with some more specific hurdles.
Hope that helps.

Minimizing downtime in SaaS-multi tenant with separate database model web application

We have separate databases for each tenant, which is creating a lot of downtime when we deploy changes to the cloud. The steps (in brief) we follow whenever we have to deploy changes are:
Take the client site down.
Take a snapshot of the current RDS instance (in case anything goes south).
Run the migration scripts (changes) on each tenant database on the RDS instance.
If everything goes well, make the client site live again.
Now the problem is that we have around 250 tenants, and the 3rd step, running the migration scripts, takes too much time, which in turn increases the downtime. Any suggestions on how to improve this process, or should we be doing it some other way? There is a clear lack of enterprise-level expertise on our end, so any help will be appreciated. Thanks!
Without knowing anything about your application, here are some things to think about:
If your application would still have some value when running in a 'read-only' mode, you could limit the actual downtime by doing the following.
Make sure all of your RDS databases have a read replica.
Put your application into 'read-only' mode (i.e. through some application code).
Let your read replica catch up with its master.
Promote the read replica to a stand-alone DB.
Run your updates against this copy of the database.
Redirect your application to the new master.
Create a new read replica from the new master.
Delete/archive your old database.
You still have to do all the work, and it still takes a while to run, but the actual downtime for the user should be minimal.
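With the AWS CLI, the replica-promotion part of that sequence might look roughly like this (the instance identifiers are placeholders):

    # Promote the read replica to a standalone instance that accepts writes.
    aws rds promote-read-replica --db-instance-identifier tenant-db-replica

    # Run the migration scripts against the promoted copy, repoint the
    # application at it, then create a fresh replica from the new master.
    aws rds create-db-instance-read-replica \
        --db-instance-identifier tenant-db-replica-2 \
        --source-db-instance-identifier tenant-db-replica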

How do you replicate rules between SonarQube servers?

We currently have two SonarQube servers (v4.5.1) running on two separate Windows 2012 servers, each with its own MS SQL database server. One is our development server and the other is our production server. The idea is that we test all rule changes on the development server first; once we are happy that they are correct, we port them to the production server.
When we first setup the two servers we simply took a backup of the Development server database and restored it on the Production server. At this point both systems were in sync.
We have recently made some modifications to the Development rules set, however when we tried the same approach to move these to the production server it did not work.
The production box seemed to remember the previous rule set. There seems to be a cache of the previous rules that we can't work out how to clear.
Before restarting SonarQube with the new DB in place we deleted the temp folder as that appears to keep a cached H2 database, but that did not solve the issue. We also tried starting it up and using the /setup url but this did not appear to work either.
Is there a way to completely reset the SonarQube server prior to restoring the database so that it has no knowledge of the previous rule set?
Alternatively is there a better way to export and re-import the entire rule set between two servers?
We looked at exporting the rule profile, but this did not appear to contain the full detail of the rules.
Thanks
Pete
For the moment, it is not possible to fully synchronize rules and quality profiles between two servers because of SONAR-5366. You can watch and vote for this ticket.
Concerning the cache that you seem to be hitting: this is probably the Elasticsearch (E/S) indexes, which are located in the <install_dir>/data/es folder. What you can do is:
stop your server
fully delete the <install_dir>/data folder
restart the server: your rules should be in sync with the DB
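On a Linux install that boils down to something like the lines below; on the Windows servers described in the question the equivalent is stopping the SonarQube service, deleting the data folder, and starting the service again (paths depend on where SonarQube is installed):

    # Stop SonarQube, remove the on-disk Elasticsearch indexes, then restart;
    # the indexes are rebuilt from the database on startup.
    <install_dir>/bin/linux-x86-64/sonar.sh stop
    rm -rf <install_dir>/data
    <install_dir>/bin/linux-x86-64/sonar.sh start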

Scale down Heroku DB from production to hobby

Is it possible to scale down a Heroku DB from production to hobby (the free Dev plan), as long as we stay within the row limits? A site I'm working on requires a production-grade DB for a few weeks, but then it'll be quiet for a while. Haven't been able to find any info on this.
It shouldn't be a problem. Heroku has a guide to upgrading with backups, so I'd recommend taking a backup of your current database, downloading it to your local computer, then spinning up a development database.
Once the development (free) DB is ready, restore from the pgbackup on your local. As long as you're under the row limit, you should be fine.
Obviously, you'd want to put the site in maintenance mode when you do all of this - but it shouldn't be down for more than 5-10 minutes.
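With the Heroku CLI that might look something like the following; 'myapp', the backup id, and the COLOR attachment name are placeholders, and the exact backup subcommands differ a bit between toolbelt versions:

    # Capture a fresh backup of the current production database and keep a local copy.
    heroku pg:backups:capture --app myapp
    heroku pg:backups:download --app myapp

    # Provision the hobby-dev database and restore the backup into it.
    heroku maintenance:on --app myapp
    heroku addons:create heroku-postgresql:hobby-dev --app myapp
    heroku pg:backups:restore b001 HEROKU_POSTGRESQL_COLOR_URL --app myapp

    # Make the hobby database the primary, then bring the site back up.
    heroku pg:promote HEROKU_POSTGRESQL_COLOR_URL --app myapp
    heroku maintenance:off --app myapp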

Setting up a collaborative environment for web application development

My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a place to test development code, and finally a way to push tagged code out to a staging server. What I'm thinking is SVN/Redmine for the code repo; each user has an account on a central development machine for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into SVN and later tagged and pushed out to the staging server. Yeah? Thoughts, comments or recommendations?
*Also, in a dev environment, what is the best way to handle databases? Is it wise to pull from the production database? And should each developer have his/her own DB or work off a master DB?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on StackOverflow and has been flagged as such, here are the areas you need to concentrate on:
VERSION-CONTROL
Git gets all the glory these days, and you don't need your own box for it: https://bitbucket.org/ offers unlimited data and private/public repos, so you can put your codebase there. http://github.com is also powerful and is de facto the most popular version-control-oriented tool out there, although it comes at a small price
so your master branches live in your version control, and your devs will check out from there and commit back to it as well
your deployment tools will deploy to your live and staging environments from your master
ENVIRONMENTS
usually three are used: LIVE, STAGE, DEV
LIVE is, well, live, and only approved code gets deployed there
STAGE is the pre-live environment and should be an exact replica of LIVE, so everything can be tested there by the merchant
DEV is nice to have as an exact replica too, but it can just as well be the developer's local environment; it is meant for loose testing and experimenting
DATABASES AND DEPLOYMENT
MySQL databases are a pain in the ass to sync, so you'd better have a script for it that syncs from LIVE down to the other environments and prevents syncing from other environments up to LIVE. This limitation also means that all configuration and content should be added on LIVE only and then synced down the line. Every change to the schema or to a permanent setting should be handled by update scripts (we are talking Magento CE here; Magento EE has migration built in)
for deployment I also suggest you build a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
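A bare-bones version of such a sync script might look like this; hostnames, paths, and database names are invented, credentials are assumed to be in ~/.my.cnf, and a real Fabric or Capistrano task would add safety checks:

    #!/bin/sh
    # Refresh STAGE from LIVE: code from the repository, database from the
    # live server. Never run this in the other direction.
    set -e

    # Pull the latest approved code onto stage.
    cd /var/www/stage && git pull origin master

    # Copy the live database down to the stage database.
    ssh deploy@live.example.com 'mysqldump --single-transaction magento_live' | mysql magento_stage

    # Point stage at its own base URL so it never serves live URLs (Magento CE 1.x).
    mysql magento_stage -e "UPDATE core_config_data SET value='https://stage.example.com/' WHERE path LIKE 'web/%/base_url';"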
it's also a good idea to target the following everyday tasks:
the client needs to be able to reset the stage for their tests
a project manager, developer or tester needs to test, so spawning a test clone should be a one-click action (take the current DB and code and make it live in some subfolder for that specific test only), as should deleting the test
3rd-party devs might need access to a specific test or dev environment (this is a real concern with Magento, as on average there are at least 10 external extensions installed in every Magento store)
