How do you replicate rules between SonarQube servers? - sonarqube

We currently have two SonarQube servers (v4.5.1) running on two separate Windows 2012 servers, each with its own MS SQL database server. One is our development server and the other is our production server. The idea is that we test all rule changes on the development server first; once we are happy that they are correct, we port them to the production server.
When we first set up the two servers we simply took a backup of the development server database and restored it on the production server. At that point both systems were in sync.
We recently made some modifications to the development rule set; however, when we tried the same approach to move these to the production server, it did not work.
The production box seemed to remember the previous rule set. There seems to be a cache of the previous rules that we can't work out how to clear.
Before restarting SonarQube with the new DB in place, we deleted the temp folder, as that appears to keep a cached H2 database, but that did not solve the issue. We also tried starting it up and using the /setup URL, but this did not appear to work either.
Is there a way to completely reset the SonarQube server prior to restoring the database so that it has no knowledge of the previous rule set?
Alternatively is there a better way to export and re-import the entire rule set between two servers?
We looked at exporting the rule profile, but this did not appear to contain the full detail of the rules.
Thanks
Pete

For the moment, it is not possible to fully synchronize rules and quality profiles between two servers because of SONAR-5366. You can watch and vote for this ticket.
Concerning the cache that you seem to have, this is probably the E/S indexes, which are located in the <install_dir>/data/es folder. What you can do is:
stop your server
fully delete the <install_dir>/data folder
restart the server: your rules should be in sync with the DB
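A minimal sketch of those three steps, assuming a Linux-style install under /opt/sonarqube (the path and the sonar.sh wrapper are assumptions; on the Windows service install described in the question, stop the SonarQube service and delete the same folder instead):

    SONAR_HOME=/opt/sonarqube                       # adjust to your <install_dir>
    "$SONAR_HOME/bin/linux-x86-64/sonar.sh" stop    # stop the server
    rm -rf "$SONAR_HOME/data"                       # drop the cached E/S indexes
    "$SONAR_HOME/bin/linux-x86-64/sonar.sh" start   # restart; rules are re-indexed from the DB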

Related

Laravel Restore a Backup

I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and I have settled on https://github.com/spatie/laravel-backup.
However, once the server fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it really doesn't require much learning and is instead just a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at this Cross-Region Replication
A typical production environment automatically runs backups of the most important things your deployment needs in order to recover from a failure; those parts would commonly be your database, your storage folder, and your configuration files.
Also, when you deploy a Laravel application there aren't many things that are "worth" backing up: you can have the entire disk mirrored somewhere, or you can schedule a backup script which runs on a schedule and backs up the things that are most important to your application.
Personally I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
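As a rough illustration of the kind of scheduled backup script mentioned above (every name in it is a placeholder: the app path, database name, and bucket; MySQL credentials are assumed to live in ~/.my.cnf), a nightly cron job could look like this:

    #!/usr/bin/env bash
    # backup.sh - hypothetical nightly backup: dump the DB, archive the storage
    # folder and config, and push both to S3. Run from cron, e.g. 0 2 * * *.
    set -euo pipefail

    STAMP=$(date +%F)
    APP_DIR=/var/www/myapp              # assumed Laravel app path
    BACKUP_DIR=/var/backups/myapp
    mkdir -p "$BACKUP_DIR"

    # database dump (credentials assumed in ~/.my.cnf)
    mysqldump --single-transaction myapp_db | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

    # storage folder and environment file
    tar czf "$BACKUP_DIR/storage-$STAMP.tgz" -C "$APP_DIR" storage .env

    # requires a configured AWS CLI; bucket name is a placeholder
    aws s3 cp "$BACKUP_DIR/db-$STAMP.sql.gz" s3://my-backup-bucket/myapp/
    aws s3 cp "$BACKUP_DIR/storage-$STAMP.tgz" s3://my-backup-bucket/myapp/

Restoring on a rebuilt server is then roughly the reverse: copy the latest dump back down with aws s3 cp and pipe it through gunzip -c into mysql before unpacking the storage archive.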
Update
Take a look at the link below:
User Guide » Amazon RDS DB Instance Lifecycle » Backing Up and Restoring
You can call the API function RestoreDBInstanceFromDBSnapshot, as shown in the example.
But I don't think something automated exists that would auto-restore or magically make everything work; you would need to do a lot of safety checks if something like that were even attempted. Final word: I believe manually entering or sending the restore request is the most solid solution.
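For reference, one way to send that request by hand is the AWS CLI equivalent of RestoreDBInstanceFromDBSnapshot; the instance and snapshot identifiers below are placeholders:

    # restore a new instance from an existing snapshot
    aws rds restore-db-instance-from-db-snapshot \
      --db-instance-identifier myapp-restored \
      --db-snapshot-identifier myapp-snapshot-2016-01-01 \
      --db-instance-class db.t2.micro

    # wait until it is available before pointing the application at it
    aws rds wait db-instance-available --db-instance-identifier myapp-restored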

How to setup SonarQube for a large organization

I am in the process of setting up SQ for a large organization. I plan to have two separate systems: one for update testing and rule development, and a second, production system where real work occurs. I will be using SQL Server 2014; typically when I do that I use a SQL Always On availability group to sync to a DR server in another datacenter. My question is: with a SonarQube instance, does it make sense to DR the application to that level? If my organization can wait for a period of time to stand up a new server in a DR event, would that be possible with a proper backup of the DB? Further, if there were no backups of the DB, what would be lost with a fresh new SonarQube server besides setup/config time? Is there historical value in the code scans that would be lost, or would the next scan of the code base have us right back to where we were in terms of critical issues found, etc.?
Thanks for your replies.
All the data is stored in the database, so using DR on the database is a good idea. Making backups of the database and restoring them is also a good solution (note that you should also back up the installed plugins).
If you lose the database, you will also lose all the configuration (quality profiles, credentials, etc.) and the history of the analyzed projects.
So to restore a SonarQube instance, you have to:
Restore the database
Restore SonarQube or install the same version
Restore the plugins (${SONAR_HOME}/extensions/plugins)
During the first start, the ES files (${SONAR_HOME}/data/es) will be regenerated and your instance will then be up and running.
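As a rough sketch of that sequence on a Linux-style install (the paths, the backup location and the sonar.sh wrapper are assumptions; the database restore itself uses whatever tooling your DB vendor provides):

    SONAR_HOME=/opt/sonarqube

    # 1. restore the database with your usual tooling (SQL Server restore, pg_restore, ...)
    # 2. install the same SonarQube version into $SONAR_HOME
    # 3. put the backed-up plugins back in place
    cp /backup/sonar-plugins/*.jar "$SONAR_HOME/extensions/plugins/"

    # 4. make sure the ES indexes are rebuilt from the restored database on first start
    rm -rf "$SONAR_HOME/data/es"
    "$SONAR_HOME/bin/linux-x86-64/sonar.sh" start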
If you have commercial plugins or if you are working with a large SonarQube instance, you may contact the sales team to get support for this setup.
Disclaimer: I work at SonarSource.

Magento upgrading process and infrastructure for smallest possible downtime

I have a client who currently has one server with Magento, and his admin takes the whole site down for updates for multiple hours. I would like to make this an instant process, so I wanted to propose a new solution for how he should set it up:
Magento Production Server 1 (WEB+DB)
Magento Production Server 2 (WEB+DB)
Magento Dev Server 1
The DB would have to be synced somehow between those two servers (cluster? replication?). I was thinking that, for the smallest downtime possible, the updates should first be tested on the dev server (DB/web synced from the production server just before upgrading). After checking that everything works fine and knowing what the process looks like, I would disable load balancing or round-robin DNS so only server 1 is used, do the upgrades/updates on server 2, then switch to server 2 as the production server and update server 1. When both are done, switch load balancing/round robin back on.
I come from a Windows environment, so this is how I would do it on Windows (maybe with separate database and web tiers too), and with tools like Red Gate SQL Compare/SQL Data Compare etc. it should work.
But I don't know Magento at all, so please let me know what's possible and maybe how this should be done if the client doesn't want to end up with his shop being down...
You'll definitely need a production server, and some sort of staging/version management system.
I recommend checking out Subversion or Git for version management.
Changes can be committed to a repository first, and then updated to the live site with no downtime. This would be more than sufficient for a development environment.
For bigger changes, like a Magento version upgrade, you might still want/need to take the site down for a few hours in the middle of the night, as this is a much bigger process.
As for multiple servers, as an example I run a load balancer which balances between a primary and a secondary server. There is one database server that is separate. Changes are made to a development server, committed to the primary server with Subversion, and then any changes between the primary and secondary servers are rsynced to the secondary server every 60 seconds.
For this solution, session and cache data are stored in the database.
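A sketch of that primary-to-secondary sync, with hostnames and paths as placeholders; the cache and session directories are excluded because, as noted, they live in the database:

    #!/usr/bin/env bash
    # sync-docroot.sh - run from cron on the primary every minute:
    #   * * * * * /usr/local/bin/sync-docroot.sh
    rsync -az --delete \
      --exclude 'var/cache/' \
      --exclude 'var/session/' \
      /var/www/magento/ web2:/var/www/magento/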
IMHO, with a good hosting environment, you won't need multiple servers unless you are literally in the thousands of simultaneous visitors. Plugins are the usual cause of admin-related problems.
We've had great success with "cloud" environments. Instantiate a new cloud instance, get its IP, then in your "hosts" file point something like dev.yourdomain.com to it for testing. The only real downtime is that you should freeze the production site while the database converts to the new version, which can take a couple of hours. Our MySQL DB backup is 3 GB or so, but thankfully it tgz's down to 280 MB.
We're using nginx and php-fpm and they are obscenely fast.
Typical migration path for me:
back up the production site
start a new cloud instance and copy the production site to the dev site
(restore the production database)
try upgrading the dev site one step at a time to see what breaks
start a new cloud instance and do a completely fresh install of the newest Magento version
once working, restore the production database and watch as it grinds on converting it; see what breaks
pick between upgrade versus fresh install
back up production MySQL, put the production site in maintenance mode while the dev site converts the database
point the domain to the new IP address
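A hedged sketch of the cut-over itself (database names, paths and the remote host are assumptions; MySQL credentials are assumed to be in ~/.my.cnf on both ends):

    # back up production MySQL and compress it
    mysqldump --single-transaction magento_prod | gzip > magento_prod.sql.gz

    # put the production site into maintenance mode (Magento 1 checks for this flag file)
    touch /var/www/magento/maintenance.flag

    # load the dump into the new instance and let the upgraded codebase convert it
    gunzip -c magento_prod.sql.gz | mysql -h new-db-host magento_new

    # finally, update DNS (or your hosts entry) to point the domain at the new IP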

best practices for uploading many files to live server while updating database

I have roughly 200 files that I need to push to our live server after business hours. In addition to this push I have a few database updates that I need to run in conjunction with this roll out.
What has been done in the past on this system is to create a directory of the updated files on the server and a cron script that copies those files over their previous versions on the server, and then to execute the calls against the database.
Here are the problems I am trying to work around:
1) There is no staging server.
2) There is no easy way to push from our version control (svn) to our live server
3) There are a lot of files and the directory structure is deep so setting up a copy of the directories to be copied over on the server seems precarious and time consuming.
What's the best way to do this?
The way I've done similar things in the past is to have a cron job run a script on an administrative machine that:
1) checks out the files I need on my production server on some sort of staging machine
2) rsyncs the files onto the server
3) runs a post-rsync script on the server (say, by ssh'ing to the server)
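A sketch of that three-step job, assuming an administrative machine with SSH access to the live server; the repository URL, paths, host and the post-deploy script name are all placeholders:

    #!/usr/bin/env bash
    # deploy.sh - run from cron on the administrative/staging machine
    set -euo pipefail

    WORKDIR=/srv/deploy/checkout

    # 1. check out (or update) the files destined for production
    svn checkout https://svn.example.com/repo/trunk "$WORKDIR"

    # 2. rsync them onto the live server
    rsync -az --delete "$WORKDIR/" deploy@www.example.com:/var/www/site/

    # 3. run the post-rsync step (e.g. the database updates) over ssh
    ssh deploy@www.example.com '/var/www/site/scripts/post-deploy.sh'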
However, you specify that you have no ability to use a staging machine, by which I assume you mean that you have no administrative machine at all, and that you cannot check out your repository on the server either. That makes doing this cleanly far harder. Are you sure you can't at least use your workstation or some similar box as an administrative or staging machine here?

Setting up a collaborative environment for web application development

My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a place to test development code, and finally a way to push tagged code out to a staging server. What I'm thinking is SVN/Redmine for the code repo; each user has an account on a central development machine to allow for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into SVN and later tagged and pushed out to the staging server. Yeah? Thoughts, comments, or recommendations?
*Also, in a dev environment what is the best way to handle databases? Is it wise to pull from the production database? Also, should each developer have his/her own DB or work off a master DB?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on StackOverflow and has been flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this: https://bitbucket.org/ offers unlimited data and private/public repos, so you can keep your codebase there. http://github.com is also powerful and the de facto most popular version-control-oriented tool out there, although it comes for a small price.
So your master branches live in your version control; your devs will check out from there and commit to it as well.
Your deployment tools will deploy code to your live and staging environments from your master.
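As a minimal illustration of that flow (the repository URL and branch names are placeholders, not anything prescribed by Bitbucket or GitHub):

    git clone git@bitbucket.org:myteam/shop.git     # dev checks out the codebase
    cd shop
    git checkout -b feature/checkout-fix            # work happens on a branch
    git commit -am "Fix checkout totals"
    git push origin feature/checkout-fix            # review, then merge into master

    # deployment tooling then pulls master onto staging/live
    git fetch origin && git checkout master && git merge --ff-only origin/master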
ENVIRONMENTS
Usually three are used: LIVE, STAGE, DEV.
LIVE is, well, live, and only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE so everything can be tested there by the merchant.
DEV is nice to have as an exact replica too, but it can just as well be a developer's local environment; it is meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain in the ass to sync, so you had better have a script for it that syncs from LIVE to the others and prevents syncing from other environments to LIVE. This limitation also requires that all configuration and content be added on LIVE only and only then synced down the line. Every change to the schema or to a permanent setting should be handled by update scripts (as we are talking Magento CE; Magento EE has migrations built in).
For deployment I also suggest you build a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
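A bare-bones sketch of such a one-way LIVE-to-STAGE sync (hostnames, database names and paths are assumptions, and MySQL credentials are assumed to be in ~/.my.cnf on both hosts); the important property is that LIVE is only ever a source, never a target:

    #!/usr/bin/env bash
    # sync-stage.sh - refresh STAGE from LIVE; never run this the other way around
    set -euo pipefail

    # pull a fresh dump from the LIVE database server straight into the stage DB
    ssh live-db mysqldump --single-transaction shop_live | mysql shop_stage

    # refresh the staging codebase from the central repository's master branch
    cd /var/www/stage && git fetch origin && git reset --hard origin/master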
it's also a good idea to target the following everyday tasks:
the client needs to reset the stage for their tests
the project manager, developers or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make it live in some subfolder for a specific test only), as should deleting the test; see the sketch after this list
3rd-party devs might need access to a specific test or dev environment (this is relevant with Magento, as on average there are at least 10 external extensions installed in every Magento store)
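A rough sketch of that one-click clone idea (every name is a placeholder, and the script assumes MySQL credentials in ~/.my.cnf):

    #!/usr/bin/env bash
    # spawn-clone.sh <name> - copy the current stage code into a subfolder and
    # clone its database for one specific test, e.g. ./spawn-clone.sh test42
    set -euo pipefail
    TEST=$1

    rsync -a --exclude 'var/cache/' /var/www/stage/ "/var/www/tests/$TEST/"
    mysqladmin create "shop_test_$TEST"
    mysqldump --single-transaction shop_stage | mysql "shop_test_$TEST"
    # point the clone's local.xml at shop_test_$TEST before handing out the URL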
