Oracle database sandbox

Is it possible to create a database server "sandbox"? There would be a master server that contains the real data and a sandbox server that dispatches read requests to the master server whenever the sandbox does not have the data cached. For a write request, the sandbox should create a local copy of the data and apply the changes to that copy without any impact on the master server.

You could build such a thing.
Create a local Oracle database with a database link that points back to the master database.
Copy the DDL for every object you're interested in from the master database to your local database, renaming each table (e.g. EMP becomes EMP_LOC).
Create a view in the local database for each table that does a UNION ALL between the remote and local copies of the table.
Create an INSTEAD OF trigger on the local view that writes any changes only to the local table.
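To make those steps concrete, here is a rough sketch for a single table. It is not from the original answer: it assumes a database link named MASTER_LINK and the classic EMP table, and it adds a NOT EXISTS filter to the view so a row that has been modified locally is not returned twice.

-- Local shadow table: same structure as the master's EMP, no rows yet.
CREATE TABLE emp_loc AS
  SELECT * FROM emp@master_link WHERE 1 = 0;

-- View that shows local (sandbox) rows first and falls back to the master
-- for everything that has not been touched locally.
CREATE OR REPLACE VIEW emp AS
  SELECT * FROM emp_loc
  UNION ALL
  SELECT m.*
    FROM emp@master_link m
   WHERE NOT EXISTS (SELECT 1 FROM emp_loc l WHERE l.empno = m.empno);

-- Writes against the view land only in the local table; the master is never touched.
CREATE OR REPLACE TRIGGER emp_instead_of
  INSTEAD OF INSERT OR UPDATE OR DELETE ON emp
  FOR EACH ROW
BEGIN
  IF INSERTING THEN
    INSERT INTO emp_loc (empno, ename, job, mgr, hiredate, sal, comm, deptno)
    VALUES (:new.empno, :new.ename, :new.job, :new.mgr, :new.hiredate,
            :new.sal, :new.comm, :new.deptno);
  ELSIF UPDATING THEN
    -- Pull the row into the local copy first if it only exists on the master.
    INSERT INTO emp_loc
      SELECT * FROM emp@master_link m
       WHERE m.empno = :old.empno
         AND NOT EXISTS (SELECT 1 FROM emp_loc l WHERE l.empno = :old.empno);
    UPDATE emp_loc SET ename = :new.ename, sal = :new.sal
     WHERE empno = :old.empno;   -- only a couple of columns shown
  ELSE
    -- Deleting a row that exists only on the master would need a separate
    -- "deleted keys" table; omitted here for brevity.
    DELETE FROM emp_loc WHERE empno = :old.empno;
  END IF;
END;
/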
While you could do such a thing, it's not obvious why you'd want to. It would be a fair amount of work to set up and maintain, and performance could get dodgy rather easily. It's also not clear what problem this approach solves: it wouldn't replace the need for isolated development, test, and staging environments, and I'm hard-pressed to come up with many use cases where this sort of "sandbox" would be preferable to one of those environments.

@Justin Cave gives a good approach. However, maybe you should also consider creating a virtual machine and taking a snapshot of your PROD instance whenever you want to work on something new with the latest data.

Related

How to add existing heroku dataclips to local postgres development database?

What is a neat way to recreate heroku dataclips on my local machine, so that I have immediate access locally to the same useful queries I have against an instance of my app on heroku?
I'm referring to the ability to query the state of the local database one is working with during application development, i.e. testing data, if you like (though of course after I pg:pull it's simply a copy of production data for testing purposes).
I have found I have come to rely on the views the dataclips give me into production data, which in turn gives me the courage not to treat the raw readability of bare tables as a significant design consideration when adding to or adjusting my database schema. That means I can pursue more normalisation with confidence, which can be wonderfully freeing.
I just realised this morning that this could be really quite useful, so let's consider it in two steps:
A high level overview of the concepts involved.
Details of how to do it, with some examples.
So to start with, do heroku dataclips correspond directly to (postgres) database views?
Heroku Dataclips does nothing more than execute a given query and display/visualize the resulting data set. Additionally, dataclips are only able to query against Heroku Postgres databases. Simply put, there's no way to target a local database with the heroku dataclip tooling.
You could potentially create a Heroku Postgres database with the express purpose of modeling the state of your local development database and use that. For instance, every time you'd like to run a dataclip against your local instance, you'd push the data up to this purposed database and then execute the dataclip against it. It's an extra step, but if you need to use Dataclips it's likely the only reasonable way to do it for the purposes you've expressed here.
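If the underlying goal is just to have the same queries handy against the local copy after a pg:pull, one low-tech workaround (not Dataclips itself, and not part of the answer above) is to save each dataclip's SQL as a view in the local database. Everything in this sketch is hypothetical:

-- Hypothetical: the SQL behind a dataclip called "recent signups",
-- stored as a local view so it can be queried during development.
CREATE OR REPLACE VIEW dataclip_recent_signups AS
SELECT u.id,
       u.email,
       u.created_at
  FROM users u
 WHERE u.created_at > now() - interval '7 days'
 ORDER BY u.created_at DESC;

-- Locally: SELECT * FROM dataclip_recent_signups;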

Heroku share database in pipeline with restricted permissions?

I have a production and staging app in my pipeline. I would like to do one of two things.
Copy the postgres production database, but with limited data (as the current amount requires that I pay). Really, I want to copy all of the data except for one table. Is it possible to copy it and then just delete that table?
If this is not possible, can I share the production database with the staging app but not allow it to add or delete data unless I know it is ready?
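For what it's worth, the "copy, then remove the one table" part of the first option is plain SQL once a copy exists in the staging database; a hedged sketch with an invented table name:

-- Hypothetical: after copying production into staging, drop the one
-- oversized table (here called event_logs) to stay under the plan limit.
DROP TABLE IF EXISTS event_logs;

-- Or keep the structure and discard only the rows:
-- TRUNCATE TABLE event_logs;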

Oracle11g Database Synchronization

I have a WPF application with Oracle 11gR2 as the back end. We need to enable our application to work in both online and offline (disconnected) mode. We are using Oracle Standard Edition (with a single instance) as the client database. I am using sequence numbers for the primary key columns. Is there any way to sync my client and server databases without any issues in the sequence number columns? Please note that we will restrict basic (master) data so that it can be created only on the server.
There are a couple of approaches to take here.
1- Write the sync process to rebuild the server tables (on the client) each time with a CREATE TABLE ... AS SELECT over a database link (a sketch follows below). Once complete, RENAME the current table to a "temp" table, and RENAME the newly created table to the proper name. The sync process should DROP the temp table as one of its first steps. Finally, recreate the indexes and you should be good to go.
2- Create a backup of the server-side database, write a shell script to copy it down and restore it on the client.
Each of these options will preserve your sequence numbers. Which one you choose really depends on your skills. If you're more of a developer, you can make #1 work. If you've got some Oracle DBA skills you should be able to make #2 work.
Since you're on 11g, there might be a cleaner way to do this using Data Pump.
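A rough sketch of option 1 for a single table, not taken from the answer itself: it assumes a database link named SERVER_LINK and a table called MASTER_ITEMS with an ITEM_ID column, all of which are invented.

-- Step 0: drop the temp table left over from the previous sync, if any.
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE master_items_tmp PURGE';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN RAISE; END IF;  -- ignore ORA-00942: table does not exist
END;
/

-- Step 1: rebuild a fresh copy of the server table on the client.
CREATE TABLE master_items_new AS
  SELECT * FROM master_items@server_link;

-- Step 2: swap the new copy in; the old copy becomes the temp table.
ALTER TABLE master_items RENAME TO master_items_tmp;
ALTER TABLE master_items_new RENAME TO master_items;

-- Step 3: recreate the indexes (and any grants/synonyms) on the new copy.
CREATE INDEX master_items_id_ix ON master_items (item_id);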

Migrating and Backing up Schemas (complex database structures)

Hey guys,
I need to figure out a way to back up and also migrate our Oracle database from our production schema to the dev schema and the other way around.
We have a bunch of config tables that drive how systems on our platform run, and when setting up new systems or doing maintenance, we need to update our config tables. We want to be able to work on the dev schemas and, after setting up a system/feature, migrate all those configs to the production schema.
I thought of running a procedure where we give it the ID of the system (from the main table); it would go through all the tables doing a SELECT NVL(..), and if the row doesn't exist I would INSERT it, and if it does exist I would just run an UPDATE on that row.
This code will get very messy and complicated, especially since the whole config schema is very complex and it might be hard to handle all the keys properly.
Another option I was looking at was triggers: when setting up a new system, there would be a log of all the statements we ran while setting up/editing the system, and then we would run that log on our production schema.
I'm on a co-op term and have only been working with databases for 6 months, so I don't know that much, and any information/advice would be greatly appreciated.
(We use PL/SQL.)
What about using export / import (or datapump) to bring over the config tables?
Check out data comparison tools like this
Think TOAD has one built in. I'm sure there are others out there too.
It is common to have tables in a schema that are what we call "static data", i.e. the users don't change it because it controls how the application works.
Each change to config data should not be run ad-hoc in the target environment. Instead, you design and code your DML carefully in one or more scripts, which get tested in a dev environment, checked into change control, and can be re-run in any environment when required.
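For the insert-or-update logic described in the question, Oracle's MERGE statement does both in one pass, which keeps such a script fairly short. A hedged sketch for one invented config table:

-- Hypothetical table SYSTEM_SETTINGS keyed by (system_id, setting_name),
-- copied from the dev schema into the current (production) schema for one system.
MERGE INTO system_settings tgt
USING (SELECT system_id, setting_name, setting_value
         FROM dev_schema.system_settings
        WHERE system_id = :p_system_id) src
   ON (tgt.system_id = src.system_id AND tgt.setting_name = src.setting_name)
 WHEN MATCHED THEN
   UPDATE SET tgt.setting_value = src.setting_value
 WHEN NOT MATCHED THEN
   INSERT (system_id, setting_name, setting_value)
   VALUES (src.system_id, src.setting_name, src.setting_value);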

what is database cloning?

I have come across this "database cloning" quite a few times. Is it anything different from simply creating a copy of the database? Please explain with MySQL in mind.
Definition from Wikipedia:
A database clone is a complete and separate copy of a database system that includes the business data, the DBMS software and any other application tiers that make up the environment. Cloning is a different kind of operation to replication and backups in that the cloned environment is both fully functional and separate in its own right. Additionally the cloned environment may be modified at its inception due to configuration changes or data subsetting.
MySQL Documentation for cloning database objects:
http://dev.mysql.com/doc/refman/4.1/en/connector-net-visual-studio-cloning-database-objects.html
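For the narrower sense of just copying objects and data inside MySQL (as opposed to a full clone of the whole environment described above), plain SQL is usually enough; the database and table names below are invented:

-- Copy one table's structure (columns and indexes) and then its rows
-- from the source database into a separate copy database.
CREATE DATABASE IF NOT EXISTS shop_copy;
CREATE TABLE shop_copy.orders LIKE shop.orders;
INSERT INTO shop_copy.orders SELECT * FROM shop.orders;

A full clone in the sense of the definition above would also include the MySQL server software and configuration, which is usually handled with file-level copies or mysqldump rather than SQL statements alone.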
