Maintaining Multiple Databases Across Several Platforms

What's the best way to maintain multiple databases across several platforms (Windows, Linux, Mac OS X and Solaris) and keep them in sync with one another? I've tried several different programs and nothing seems to work!

I think you should ask yourself why you have to go through the hassle of maintaining multiple databases across several platforms and keeping them in sync with one another. Sounds like there's a lot of redundancy there. Why not just have one instance of that database, since I'm sure it can be made accessible (e.g. via an SOA approach) to multiple apps on multiple platforms anyway?

Why go through the hassle? Because management claims a single database is more expensive?
Here's how to prove them wrong.
Pick one database, call it the "master" or "system of record".
Write scripts to export data from the master and load it into your copies (see the sketch after these steps). If you have a nice database (MySQL, SQL Server, Oracle or DB2) there are good tools to do this replication for you. If you have a mixture of databases, you'll have to resort to exporting changed data and reloading it. The idea is that this is a 1-way copy: master to replicas.
Fix each application, one at a time, to do updates in the master database only. Since each application has a JDBC (or ODBC or whatever) connection to a database, it can just as easily be a connection to the master database.
Once you've fixed the applications to update only the master, the replicas are effectively worthless. Management can keep insisting that it's cheaper to have them, and there they are -- clones of the master database -- just what management says you must have.
Your life is simpler because the apps are only updating the system of record. They're happy because you have all the cloned databases lying around.
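Where the databases are a mixed bag and no off-the-shelf replication tool covers them all, the 1-way copy can be a small script run on a schedule. A minimal sketch, assuming each table carries something like a last_modified column you can filter on; sqlite3 stands in here for whatever DB-API drivers (cx_Oracle, pyodbc, ...) you actually use, and the customer table is made up:

    # One-way sync sketch: copy rows changed on the master since the last run
    # into a replica.  sqlite3 stands in for whatever DB-API drivers you have;
    # the customer table and last_modified column are hypothetical.
    import sqlite3

    def sync_changed_rows(master, replica, since):
        rows = master.execute(
            "SELECT id, name, last_modified FROM customer WHERE last_modified > ?",
            (since,),
        ).fetchall()
        for row_id, name, modified in rows:
            # Upsert into the replica; writes only ever originate on the master.
            replica.execute(
                "INSERT INTO customer (id, name, last_modified) VALUES (?, ?, ?) "
                "ON CONFLICT(id) DO UPDATE SET name = excluded.name, "
                "last_modified = excluded.last_modified",
                (row_id, name, modified),
            )
        replica.commit()
        return len(rows)

    if __name__ == "__main__":
        master = sqlite3.connect("master.db")
        replica = sqlite3.connect("replica.db")
        copied = sync_changed_rows(master, replica, since="2024-01-01 00:00:00")
        print(f"copied {copied} changed row(s)")

Run something like this from cron (or the Windows Task Scheduler) against each replica; because writes only ever originate on the master, conflicts never arise.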

Related

How to do blue/green deployment for hybris?

I want to deploy hybris builds with zero downtime. Our technical architecture consists of two frontend servers, two backend servers, two master/slave Solr clusters, but a single DB server (MS SQL 2012). A new build may require patch execution which changes the DB schema.
Would it be possible to achieve this in a single DB landscape?
If two DB's are required (blue and green), then what is the best practice for DB replication in case of hybris?
Hybris does provide a rolling update feature (when you're running it in a cluster environment).
This is targeted to allow for zero downtime.
You can find more information on the hybris help pages, e.g.
https://help.hybris.com/6.5.0/hcd/8c455268866910149b25f7b53d1af3e1.html
Looking at the first picture there it seems to be pretty much fitting for the architecture you describe.
(But to be honest I have no experience with it, so I can't tell you whether or how well it works :) )
If you have risky changes or end up needing to rollback your rolled out update you will have to do quite a bit of db cleanup etc.
From that perspective a blue/green setup might sound better although with db replication you would end up with the same problem (as your updated schema would be replicated as well I assume).
Hybris only adds new columns to the DB; it never changes their type or removes them, so a single DB can be OK. I didn't test this with the storefront running while the system was being updated, but I think it will be OK.
On the other hand, you will need to add empty/null checks for the new attributes during development.

How to schedule backups for development, testing, and production environments?

I was educating myself about website launches. I was reading this document to prepare an implementation check-list for my future website launches.
On page 11, they mentioned
Schedule backups for development, testing, and production environments
I can imagine taking backup of a database and website code.
How can someone automate and schedule backup of an entire x, y or z environment?
If this is a very broad question then I do not need the step by step document for doing this. Maybe some starting point which help me imagine this will be good enough.
Update:
I have sent an inquiry to Pantheon and am looking forward to their response.
It all depends on how you set up your environments, how complex they are, and what you are trying to back up.
How often do your environments change?
You can use something like Docker to keep track of the changes to your environments. It basically gives you version control for environments.
If you're using a hosting provider like DigitalOcean, they provide snapshots and backups of entire virtual images.
Read about rsync - https://wiki.archlinux.org/index.php/Full_system_backup_with_rsync
You could use rsync if you manually keep track of all of your config files and handle the scheduling and syncing yourself.
As for the actual site:
What's changing on your sites?
For changes to the code of a site, you should be using version control so that every single change is backed up to a repository like Github.
Most of the time, though, the only thing that is changing is data in a database. Your database can easily be managed by a third party like Amazon RDS, which offers automated backups. If you want to manage your own database on your own server, then you can have a cron job automatically perform a backup of your database at scheduled times, and then use rsync to sync that backup to a separate machine. (You want to spread your backups across a couple of machines in different locations.)
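A minimal sketch of that kind of scheduled backup, assuming mysqldump, gzip and rsync are installed; the database name, credentials, paths and host name below are placeholders:

    # Nightly backup sketch: dump the database, compress it, then push it to a
    # second machine with rsync.  Assumes mysqldump, gzip and rsync are on the
    # PATH; the credentials, host name and paths are placeholders.
    import datetime
    import subprocess

    def backup_and_sync():
        stamp = datetime.date.today().isoformat()
        dump_file = f"/var/backups/sitedb-{stamp}.sql.gz"

        # Equivalent of: mysqldump --single-transaction sitedb | gzip > dump_file
        with open(dump_file, "wb") as out:
            dump = subprocess.Popen(
                ["mysqldump", "--single-transaction", "-u", "backup", "sitedb"],
                stdout=subprocess.PIPE,
            )
            subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
            dump.wait()

        # Sync the dump to an off-site machine.
        subprocess.run(
            ["rsync", "-az", dump_file, "backup@offsite.example.com:/backups/"],
            check=True,
        )

    if __name__ == "__main__":
        backup_and_sync()

Schedule it from cron (e.g. 0 2 * * * python3 /usr/local/bin/nightly_backup.py) and keep a few generations of dumps on both machines.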
You can run cron jobs at regular intervals to back up your data. The most important data could be your transactional information or the entire database itself.
Code you can manage at the developers' end using GitHub. But if you have certain folders that are populated by website users, then plan to back those up as well.

Oracle, Users as Instances

A client has been referring me to their different Oracle "databases".
After a data migration mix-up, it occurred to me that these various databases are all the same database. They've just broken up the same instance with multiple users and strict permissions.
So, user1 has a CUSTOMER table, user1.CUSTOMER and user2 has a CUSTOMER table user2.CUSTOMER.
The data in those tables is completely separate, and managed by different instances of an application.
I've never seen this done before; Is this a standard, acceptable practice?
Is there a specific performance and maintenance scheme for this type of setup?
Or, is this the wild west?
thanks
This is normal, standard practice in Oracle environments. Oracle environments tend to be either one big application with one or a few schemas, or a multitude of small applications that have grown over time.
Of course there are a number of architectural design decisions to make. For instance, when placing multiple applications in one database, the applications may influence each other. Over time, Oracle has been enhancing the technology stack to restrict the influence applications can have on each other; for instance, you can nowadays distribute resources differently per user. But an application running wild can still flush the whole cache, impacting other applications. Starting with Oracle 12c, Oracle has further improved the isolation with a structure that more closely resembles the Microsoft SQL Server approach of separate "databases", and extends it further: a container database can hold multiple pluggable databases, each with its own data dictionary (although only Enterprise Edition allows more than one pluggable database per container).
In general, I do not recommend putting many serious/large applications in one Oracle database. For instance, when you upgrade one of them, you need to make sure that all the other applications are compatible with whatever new Oracle release that upgrade requires. So when you merge multiple applications into one database, make sure you can control that they are all certified at the same time for the same Oracle version -- for instance because they are in-house applications.

Would SQLite be a 'better' choice for Joomla than MySQL, if it would be available?

Since this doesn't touch a real problem of mine I'm somewhat uncertain whether it is even worth asking here. However, maybe some of you would like to share your opinion on that.
In general I have to admit that 'better' means anything and nothing at all at the same time, so I probably should be more specific, but I tried not to overload the topic. In a regular hosted environment on one of those cheap webhosters (like Dreamhost), with around 1000 articles in Joomla, a couple of users and a few hundred visitors a day, would a SQLite database with a persistent connection (sqlite_popen) perform noticeably faster than the MySQL equivalent (with the TCP/IP overhead etc.)?
Or in short: would it be wise to ask Joomla to support SQLite?
I have never used SQLite on a website, but I have used it extensively for other purposes and I quite like it. The truth is, you won't know till you try. If you do try, I recommend creating a DB abstraction layer first so that you can easily swap in other DBs (a sketch follows after this answer).
The downside to SQLite is that it's not really meant to be a multi-user database. If you rarely write to the DB but do lots of reading, SQLite will probably be fine. If you find that you need multiple processes writing to the same DB, I believe SQLite uses file-level locking to maintain database consistency. So, if all your tables are in the same file, you'll lock the whole file while it's being written to, even if another process wants to modify a completely different table.
In my opinion it's not the big multi-user databases of the world that should be worried about competition from SQLite... It's all the regular files out there (and their custom file formats) that applications create and use that should be shaking in their boots about SQLite...
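For what it's worth, here is a minimal sketch of what such an abstraction layer could look like; it uses only Python's built-in sqlite3, and the MySQL branch and the articles table are purely illustrative:

    # Tiny DB abstraction sketch: the application only ever talks to Database,
    # and the concrete backend is chosen in one place.  Only the SQLite branch
    # is real here; the MySQL branch and the articles table are illustrative.
    import sqlite3

    class Database:
        def __init__(self, backend="sqlite", **opts):
            if backend == "sqlite":
                self.conn = sqlite3.connect(opts.get("path", "site.db"))
            elif backend == "mysql":
                # Would create a connection with a MySQL driver here instead.
                raise NotImplementedError("MySQL backend not wired up in this sketch")
            else:
                raise ValueError(f"unknown backend: {backend}")

        def latest_articles(self, limit=10):
            cur = self.conn.execute(
                "SELECT id, title FROM articles ORDER BY id DESC LIMIT ?", (limit,)
            )
            return cur.fetchall()

    db = Database(backend="sqlite", path="joomla.db")

Joomla itself would do this in PHP, of course; the point is only that the rest of the application never imports a driver directly, so swapping backends stays a one-line change.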
Linux ISPs for whatever reason seem to have settled on MySQL. This is what they offer and you will lock yourself to a limited number of service providers if you wander outside the norm.

How do I deploy an Oracle database?

I have an ASP .NET application that connects to an Oracle or a SQL Server database. An installer has been developed to install a fresh database to an existing SQL Server using sql commands such as "restore database..." which simply restores a ".bak" file which we keep under source control.
I'm very new to Oracle and our application has only recently been ported to be compatible with 10g.
We are currently using the "exp.exe" tool to generate a ".dmp" file and then using "imp.exe" to import it into a developer's box.
How would you go about creating an "Oracle Database Installer"?
Would you create the database using script files and then populate the database with required default data?
Would you run the "imp.exe" tool behind the scenes?
Do we need to provide a clean interface for system administrators so that they can just select the destination server and be done, or should we just provide them with the ".dmp" file? What are the best practices?
Thanks.
The question is -- what do your customers know about Oracle?
Nothing? You should probably rethink this position. Oracle is very large and complex. If you assume your customers know nothing, you'll then start providing tutorials and help that's inappropriate.
Minimally Competent? If they're competent, they know enough to run imp by themselves. Also, they know enough to run a script that executes SQL.
Actual DBA's? Most organizations that can afford Oracle can afford real DBA's. Real DBA's can cope with a lot of things -- they do not need much hand-holding. Some of them like to assign storage parameters according to their shop standards.
You should provide a script with reasonable defaults. You should define your script in a way that someone can easily find all of your storage parameters and tweak them if necessary.
Your initial data can be via export/import or via a script. I prefer a script.
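One way to keep the storage parameters easy to find and tweak is to hold them in a single place and render the DDL from a template. A minimal sketch; the tablespace names, extent size and the customer table are purely illustrative:

    # Sketch: keep every storage parameter in one dictionary so a DBA can find
    # and tweak it, and render the DDL from a template.  The tablespace names,
    # extent size and the customer table are illustrative only.
    STORAGE = {
        "data_tablespace": "APP_DATA",
        "index_tablespace": "APP_IDX",
        "initial_extent": "1M",
    }

    CREATE_CUSTOMER = """
    CREATE TABLE customer (
        id    NUMBER CONSTRAINT pk_customer PRIMARY KEY
                  USING INDEX TABLESPACE {index_tablespace},
        name  VARCHAR2(200)
    ) TABLESPACE {data_tablespace}
      STORAGE (INITIAL {initial_extent})
    """

    def render_ddl():
        return CREATE_CUSTOMER.format(**STORAGE)

    if __name__ == "__main__":
        # Write the generated DDL out for review; a DBA runs it with sqlplus.
        print(render_ddl())

A DBA can then override the dictionary to match shop standards before running the generated script.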
I have done this repeatedly from both sides (consumer and provider) as a DBA, developer, and architect.
As a provider, one of my grand accomplishments (in 1996) was the creation of an installation CD for a commercial insurance claims management software product targeted to the largest insurance carriers (a multi-million dollar item). That installation CD installed the Oracle 7.2 RDBMS engine, the FileNet optical storage system (scans paper documents and creates cataloged binary versions), and our custom claim-processing application (built in VB 4.0), all integrated and ready to run. As part of the installation process, the user could skip the Oracle software installation or customize it, and the user could customize/override the database configuration in all of its major details (database, schemas, tablespaces, sizes, disks, etc.).
I also provided the field service for this product, which included traveling to the client site as necessary. I tested the installation CD literally hundreds of times under every imaginable scenario that I could replicate, and we NEVER had a field failure that required even a phone call, let alone a trip (I did travel on four occasions, but for pre-sales stuff instead).
More recently (2007), I scripted the creation of an Oracle 10g database for an internal system at a megacorp. In production, the database was sized at 8 TB, mostly for a single transaction table with high data volume. In test, the database was sized around 1 TB for a modest server. In development, the database was sized around 100 MB to run on my laptop. The EXACT SAME SCRIPTS created all three environments, and I could extend them to handle a new environment/machine in about five minutes. This database involved extreme performance tuning, so customization of all pertinent characteristics was absolutely crucial.
Back to the insurance claims processing product--let me please add that I was originally hired to lead its conversion from a SQL Server database to an Oracle database. That conversion was identified as a business necessity because most potential clients did not view a SQL-Server-based product as a professional, serious solution. That is not quite as common today, but it still applies in general: a software product has a better chance of market penetration if it can accommodate multiple database options as preferred by the target customers (especially enterprise-class customers).
Likewise, the installation CD was also viewed as an essential element. However, that situation and many more have revealed to me that most "real" DBAs will not accept an import-based database installation. As a DBA and architect, I know that I definitely will not for the same reasons.
Simply put, an import-based database installation gives the customer almost no control over the resulting database. It is opaque to the customer, leaving them questioning what it did. It forces the customer to expend massive efforts to attempt to exercise what little control they can. It is notoriously fragile and error-prone (Oracle imports are well known for ownership and permission problems, constraint problems, etc.). Weighing all those impacts, an import-based database installation is unprofessional--it does not put the customers' needs first.
Scripting the database installation provides the right kind of transparency, configurability, selective repeatability, and overall customer control that professionalism demands. It also encourages you to properly understand the impacts of your database design decisions in a way that an import does not.
Best wishes.
Personally I favour SQL scripts for database creation and data loads where possible. I tend to use PL/SQL Developer; it has some good options to generate scripts from an existing database. Once you have these you can run the scripts using sqlplus or any application code that can execute arbitrary SQL (e.g. JDBC with Java). Toad is the more common (and more expensive) tool for Oracle development.
The only limitation of a SQL export is that it can't export CLOB/BLOB fields. If you have those, you either need to do them separately (as a PL/SQL export) or do the whole thing as a PL/SQL export. There are no dramas with this except that the file is effectively a binary export (extension .pde) and is more limited in how you can execute it.
The other big advantage of SQL source files is they can be version controlled easily. It's really handy to be able to create a database environment by running one or two scripts.
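As a rough illustration, a tiny driver like the following can apply the version-controlled scripts in name order; it assumes sqlplus is on the PATH, and the connect string and script directory are placeholders:

    # Sketch: stand up an environment by applying version-controlled SQL scripts
    # in name order (01_tables.sql, 02_indexes.sql, ...) via sqlplus.  The
    # connect string and script directory are placeholders.
    import pathlib
    import subprocess

    def build_environment(connect="app_owner/secret@DEV", script_dir="sql"):
        for script in sorted(pathlib.Path(script_dir).glob("*.sql")):
            print(f"applying {script}")
            subprocess.run(["sqlplus", "-S", connect, f"@{script}"], check=True)

    if __name__ == "__main__":
        build_environment()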
The import and export tools for Oracle are, I think, more applicable to backup and restore operations.
Now, as for delivering that to a customer, from your comments it seems that you'll be giving this to DBAs. Pretty much any Oracle installation will have DBAs involved. They will be fine with SQL scripts to create the schema and do the data load. They will be doing a lot of site-specific configuration (eg tuning the SGA, temp tablespaces, # of concurrent connections, etc based on expected load).
You, as the vendor, can give guidance on any relevant configuration and you may get involved in support and possibly installation, but ultimately it's up to them to figure out what works for them. Oracle runs on a large number of operating systems and hardware variants, with infinite variations in network topology and firewall configuration. You can't factor all of these into an installer or even a set of instructions (other than the guidelines mentioned previously).
The last time I was involved in the creation of an (Oracle) DB (for a reasonably large company with in-house DBAs) the DBAs wanted to know things like:
what we wanted to call the db,
what tablespaces we would need, and an estimate of how much data would be in each one
how many users would be connecting.
(From memory) they set up the db and tablespaces, then we provided a combination of simple scripts that they could run (or clear instructions if a task wasn't easy to automate)
As I say, this was for an in-house app, so your mileage may vary, but in my case they wanted all instructions clearly spelt out so that (a) there was no possibility of a misunderstanding leading to the wrong thing being done, and (b) there was no culpability on their part if something didn't work ("we were just following the instructions").