I have come across this "database cloning" quite a few times. Is it anything different from simply creating a copy of the database? Please explain, keeping MySQL in mind.
Definition from Wikipedia:
A database clone is a complete and separate copy of a database system that includes the business data, the DBMS software and any other application tiers that make up the environment. Cloning is a different kind of operation to replication and backups in that the cloned environment is both fully functional and separate in its own right. Additionally, the cloned environment may be modified at its inception due to configuration changes or data subsetting.
MySQL Documentation for cloning database objects:
http://dev.mysql.com/doc/refman/4.1/en/connector-net-visual-studio-cloning-database-objects.html
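In MySQL terms, a plain copy within the same server can be made table by table. A minimal sketch, assuming a source database app_db and a target app_db_clone (both hypothetical names):

    -- Create the target database for the copy (hypothetical names).
    CREATE DATABASE app_db_clone;

    -- For each table, duplicate the structure (including indexes), then the data.
    CREATE TABLE app_db_clone.customers LIKE app_db.customers;
    INSERT INTO app_db_clone.customers SELECT * FROM app_db.customers;

That gives you a copy of the data and table definitions only. A clone in the Wikipedia sense above goes further: it reproduces the server software and configuration as well, so the result is a fully functional, independent environment rather than just duplicated tables.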
What is a neat way to recreate Heroku dataclips on my local machine, so that I have immediate access to the same useful queries locally that I have on an instance of my app on Heroku?
I'm referring to the ability to query the state of the local database one is working with during application development, i.e. testing data, if you like (though of course after I pg:pull it's simply a copy of production data for testing purposes).
I have found I have come to rely on the views into production data that dataclips give me, which gives me the confidence not to treat the raw readability of bare tables as a significant design consideration when adding to or adjusting my database schema. That means I can pursue more normalisation with confidence, which can be wonderfully freeing.
So, I just realised this morning that this could be really quite useful, so let's consider it in two steps:
A high level overview of the concepts involved.
Details of how to do it, with some examples.
So to start with: do Heroku dataclips correspond directly to (Postgres) database views?
Heroku Dataclips does nothing more than execute a given query and display/visualize the resulting data set. Additionally, dataclips are only able to query against Heroku Postgres databases. Simply put, there's no way to target a local database with the heroku dataclip tooling.
You could potentially create a Heroku Postgres database with the express purpose of modelling the state of your local development database and use that. For instance, every time you'd like to run a dataclip against your local instance, you'd push the data up to this purposed database and then execute the dataclip against it. It's an extra step, but if you need to use Dataclips it's likely the only reasonable way to do it for the purposes you've expressed here.
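Since a dataclip is essentially just a saved SQL query, the closest local analogue is to store each dataclip's SQL as a Postgres view in your development database. A sketch, using a hypothetical dataclip query over an orders table:

    -- Save the dataclip's query as a local view (hypothetical query).
    CREATE VIEW recent_large_orders AS
        SELECT customer_id, total, created_at
        FROM orders
        WHERE total > 100;

    -- The "dataclip" is then one SELECT away.
    SELECT * FROM recent_large_orders ORDER BY created_at DESC LIMIT 50;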
I am working on designing the structure of a Greenplum database.
We have many clients and we need to store data for each of them.
There are two ways to design the database structure: build one database with a different schema in it for each client, or build a separate database for each client. Which way is better?
What is more, we need to migrate databases or schemas from the dev environment to the production environment.
Thanks
William
William,
Either way will work. If you are keeping multiple tenants in Greenplum and there is no data sharing, you might be better off keeping them in separate databases - easier for security and for backups. If there is a requirement that they share some common data, then using multiple schemas in one database is the better option.
I am not sure what version of Greenplum you are on, but you should be able to back up a schema from dev and restore it with gprestore, using the --redirect option to put it in the database you want it to be in.
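For concreteness, the two layouts look like this in SQL (client names are hypothetical; DISTRIBUTED BY is Greenplum's table-distribution clause):

    -- Option 1: one database, one schema per client.
    CREATE SCHEMA client_acme;
    CREATE TABLE client_acme.orders (
        order_id bigint,
        amount   numeric
    ) DISTRIBUTED BY (order_id);

    -- Option 2: a separate database per client (repeat per client).
    CREATE DATABASE client_acme_db;

With separate databases, grants and backups are naturally isolated per client; with one database, queries over shared common data stay simple.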
Jim McCann
Pivotal
I have been tasked with creating some backups for some Oracle Apex apps (Application Express v4.1.1.00.23). The request is to back up both the applications & referenced db objects (not sure if this means just structure or structure & data).
On the one hand, I would have expected standard db backups to handle most or all of this but I'm very new to Apex so it's all a learning curve.
I'm currently exporting the application from apex and then exporting (using SQL Developer) all the database object dependencies that Apex gives me - although I see that the list doesn't include functions that are used for auth.
This seems a really clunky process that's very prone to mistakes (miss an object, save something to the wrong place, no guarantees of consistency etc).
Does Apex (my version!) offer something to do the job or is there something else I could be doing? I've had a good google but nothing has stood out.
UPDATE: I realise now that I should have included some extra info. I'm currently at a large organisation & I believe our db backups (which I guess/hope are done using RMAN) are done by a different department. I think the motivation for the request is to have some local, easily accessible backup, so that if one of the developers messes something up we don't have to go through multiple layers of organisation (& undoubtedly a lot of time) to sort ourselves out. I suspect that some kind of source control would be a great starting point, but I'm not sure how far I'll get with that idea - especially as we seem to have little in the way of autonomy over things like servers.
RMAN is the way to go to back up an Oracle database:
https://docs.oracle.com/database/121/BRADV/toc.htm
There is a ton of material on the hows and whys online; just google "oracle rman" and you'll find what you need (the documentation should cover you as well, of course).
cheers
Standard DB backups will include everything you need.
The Apex applications I develop are static, meaning the end users make no changes to the Apex application, and there is no need to make a specific backup other than to store the original apex application .sql installation files in a safe place.
If you must, you can make an export of the database schemas the application uses. For example with the expdp utility.
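The expdp utility runs from the command line; the same schema export can also be driven from inside the database with the DBMS_DATAPUMP package. A rough sketch, assuming a hypothetical MYAPP schema and the default DATA_PUMP_DIR directory object:

    DECLARE
        l_job   NUMBER;
        l_state VARCHAR2(30);
    BEGIN
        -- Open a schema-mode export job.
        l_job := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

        -- Write the dump file to the DATA_PUMP_DIR directory object.
        DBMS_DATAPUMP.ADD_FILE(handle    => l_job,
                               filename  => 'myapp.dmp',
                               directory => 'DATA_PUMP_DIR');

        -- Restrict the job to the schema we care about.
        DBMS_DATAPUMP.METADATA_FILTER(handle => l_job,
                                      name   => 'SCHEMA_EXPR',
                                      value  => 'IN (''MYAPP'')');

        DBMS_DATAPUMP.START_JOB(l_job);
        DBMS_DATAPUMP.WAIT_FOR_JOB(l_job, l_state);
    END;
    /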
In Apex you need to take two backups: one of the workspace, and a second of your application. Also, when using export/import from the database, it tends to lose the '&' character in procedures, so it is better to use RMAN and take a complete backup.
I have found this Oracle white paper Life Cycle Management with Oracle Application Express (Revision 2) which does what it says on the tin - including various strategies for exporting, backing up & managing 'lost application development'. It's a really good read and I'll be using it as a template for suggestions of how we can manage our process in future.
Is it possible to create a database server "sandbox"? There would be a master server that contains real data, and a sandbox server that dispatches read requests to the master server in case the sandbox does not have cached data. In the case of a write request, it should create a local copy of the data and apply changes to that copy without any impact on the master server.
You could build such a thing.
Create a local Oracle database with a database link that points back to the master database.
Copy the DDL for every object you're interested in from the master database to your local database, renaming each table (e.g. EMP becomes EMP_LOC).
Create a view in the local database for each table that does a UNION ALL between the remote and local copies of the table.
Create an INSTEAD OF trigger on the local view that writes any changes only to the local table.
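A minimal, untested sketch of those steps for a single hypothetical two-column EMP(empno, ename) table (the link name and credentials are placeholders):

    -- 1. Database link pointing back to the master database.
    CREATE DATABASE LINK master_link
        CONNECT TO app_user IDENTIFIED BY app_password
        USING 'master_tns';

    -- 2. Local, initially empty copy of the table, renamed.
    CREATE TABLE emp_loc AS
        SELECT * FROM emp@master_link WHERE 1 = 0;

    -- 3. View that unions the remote rows with local changes.
    CREATE OR REPLACE VIEW emp AS
        SELECT empno, ename FROM emp@master_link
        UNION ALL
        SELECT empno, ename FROM emp_loc;

    -- 4. Writes against the view land only in the local table.
    CREATE OR REPLACE TRIGGER emp_ioi
        INSTEAD OF INSERT ON emp
        FOR EACH ROW
    BEGIN
        INSERT INTO emp_loc (empno, ename)
        VALUES (:NEW.empno, :NEW.ename);
    END;
    /

UPDATE and DELETE need their own INSTEAD OF triggers, and handling changes to rows that exist only on the master is where this gets genuinely fiddly, which feeds into the caveats below.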
While you could do such a thing, it's not obvious why you'd want to. It would be a fair amount of work to set up and maintain, and performance could get dodgy rather easily. It's also not obvious what problem this approach solves: it wouldn't replace the need to have isolated development, test, and staging environments, and I'm hard-pressed to come up with many use cases where this sort of "sandbox" would be preferable to one of those environments.
@Justin Cave gives a good approach. However, maybe you should consider creating a virtual machine and taking a snapshot of your PROD instance whenever you want to work on something new with the latest data.
Hey guys,
I need to figure out a way to back up and also migrate our Oracle database from our production schema to the dev schema and the other way around.
We have a bunch of config tables that drive how the systems on our platform run, and when setting up new systems or doing maintenance, we need to update our config tables. We want to be able to work on the dev schemas and, after setting up a system/feature, migrate all those configs over to the production schemas.
I thought of writing a procedure where we give the ID of the system (from the main table); it would go through all the tables doing a SELECT with NVL(...), and if the row doesn't exist it would insert it, and if it does exist it would just run an update on that row.
This code will get very messy and complicated especially since the whole config schema is very complex and it might be hard to handle all the keys properly.
Another option I was looking at was triggers: when setting up a new system, there would be a log of all the statements we ran while setting up/editing the system, and then we would run that log against our production schema.
I'm on a co-op term and have only been working with databases for 6 months, so I don't know that much, and any information/advice would be greatly appreciated.
(We use PL/SQL.)
What about using export / import (or datapump) to bring over the config tables?
Check out data comparison tools like this
I think TOAD has one built in. I'm sure there are others out there too.
It is common to have tables in a schema that are what we call "static data", i.e. the users don't change it because it controls how the application works.
Each change to config data should not be run ad-hoc in the target environment. Instead, you design and code your DML carefully in one or more scripts, which get tested in a dev environment, checked into change control, and can be re-run in any environment when required.
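As a sketch of what such a script might look like, Oracle's MERGE statement expresses the insert-or-update logic from the question in one idempotent, re-runnable statement (the table and column names here are hypothetical):

    -- Upsert one config row: update it if present, insert it if missing.
    MERGE INTO system_config tgt
    USING (SELECT 42            AS system_id,
                  'MAX_RETRIES' AS config_key,
                  '5'           AS config_value
           FROM dual) src
    ON (tgt.system_id = src.system_id AND tgt.config_key = src.config_key)
    WHEN MATCHED THEN
        UPDATE SET tgt.config_value = src.config_value
    WHEN NOT MATCHED THEN
        INSERT (system_id, config_key, config_value)
        VALUES (src.system_id, src.config_key, src.config_value);

    COMMIT;

Because the statement is idempotent, the same script can be tested in dev, checked into change control, and re-run against production, exactly as described above.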