Best way to clone DB tables - Oracle

I need to clone (make a 1-to-1 copy of) several tables in our Oracle DB into another Oracle DB.
Both DBs are running Oracle version 11.2.0.3.
The problems are:
The tables (together) are quite large (> 20 GB)
It must be a real "snapshot"; the DB may change during the cloning process
The copy must (of course) be damn quick
I came across DB-link technology, which seems feasible here. But my question is: how can I make sure that ALL tables are consistent after the cloning process? I mean this scenario:
Copy Table A
Copy Table B
Source Table A and Table C changes
Copy Table C
Then my copy of Table C contains data that is NOT present in my copy of Table A, which might (logically) be a constraint violation. How can I avoid this? How do I make a real snapshot of 40 tables? Is there something like a "revision" of the whole DB? What would the DB-link query then look like?

I suggest you investigate Oracle's own export utility. You can require that the "dump" of the set of tables you are interested in is also transactionally consistent.
More details here.
Since you are on 11g, I assume that by "exp tool" you mean Data Pump Export and not the legacy exp/imp utility?
See the differences here.
If that is not the case, try switching to Data Pump, which was designed specifically to cope with large datasets and can be tuned further to squeeze out some extra performance.
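If you do stay with the DB-link approach from the question, one way to get a single consistent point for all tables is to pin every copy statement to the same source SCN with a flashback query. A minimal sketch, assuming a link named SRC_LINK, pre-created empty copies on the target, enough undo retention on the source to cover the copy window, and that your setup allows AS OF SCN queries on remote tables (worth verifying before you rely on it):

variable snap_scn number
begin
  -- take one SCN from the source database and reuse it for every table
  select current_scn into :snap_scn from v$database@src_link;
end;
/
insert into table_a select * from table_a@src_link as of scn :snap_scn;
insert into table_b select * from table_b@src_link as of scn :snap_scn;
insert into table_c select * from table_c@src_link as of scn :snap_scn;
commit;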

Related

Oracle schema sharing, is it possible?

I'm trying to understand whether there is any concept like this in Oracle Database.
Let's say I have two databases, Database_A and Database_B.
Database_A has schema_A; is there a way I can attach this schema to Database_B?
What I mean by this is: if there is a job populating TABLE_A in schema_A, I can see it as a read-only view in Database_B. We are trying to split a big Oracle database into two smaller databases, we have a vast amount of PL/SQL code, and we are trying to minimize the refactoring here.
Sharding might be what you're looking for. The schemas and tables will still logically exist on all databases, but you can arrange for the data to be physically stored in specific databases. There might be a way to set up shardspaces, tablespaces, and user default tablespaces so that each schema's data is automatically stored in a specific database.
But I haven't actually used sharding. From what I've read, it seems to be designed for massive distributed OLTP systems, and it is likely complicated to administer. I'd guess this feature isn't worth the hassle unless you have petabytes of data.

How can we do data analysis for a DB replication project

We are facing one issue in our project, namely a data verification issue.
The project is about replication of data from Sybase to Oracle DBs.
The table structure for Table A is the same across Sybase and Oracle:
the same columns and primary key combination across all the databases.
E.g. if Sybase has Table A with columns a, b and c,
the same table with the same name and the same columns will be available in the other databases.
We are done with the replication part, but we have seen some silent failures, i.e. data discrepancies, and we are wondering if there is any tool already available for this.
Any information on this would be helpful. Thanks.
Sybase (now SAP) has a couple of products that can be used for data comparisons and reconciliation:
rs_subcmp - an older, 32-bit tool that comes with the Sybase Replication Server product and can be used to compare data between source and target. SQL reconciliation scripts can be generated from the differences and then applied to the target to bring it in sync with the source. If your tables are more than 1 GB in size you can still use rs_subcmp, but you'll need to create multiple comparison jobs (via where clauses) to work on different subsets of your tables. [I don't recall if rs_subcmp can be used for heterogeneous replication setups, e.g. ASE-Oracle.]
Data Assurance (DA) - the newer, 64-bit product, also from Sybase, which can also compare data and (re)sync the target(s) from the source (either via SQL reconciliation scripts or directly). DA is capable of handling comparisons between a handful of different RDBMS products (e.g. ASE-Oracle). I'm currently working on a project where one of the requirements is to validate (and reconcile where needed) 200+ TB of data being migrated from Oracle to HANA, and I'm using DA for the validation/reconciliation portion of the project.
As @TenG has hinted at in his answer, there's a good bit of effort involved in comparing data and generating code to reconcile the differences. Rolling your own code is doable but will entail a lot of work. If you've got the money, you'll likely find 3rd party tools can get most/all of the work done for you.
If you used a 3rd party product to replicate your data from Sybase to Oracle, you may want to see if the same vendor has a comparison/validation/reconciliation tool you could use.
I've worked on a few migration projects and a key part has always been data reconciliation.
I can only talk about the approaches we took, based on constraints around tools available and minimising downtime, and constraints of available space.
In all cases I took to writing scripts that worked on two levels - a summary view and a "deep dive". We couldn't find any readily available tools that did what we wanted in a timely enough manner. In fact, even the migration tools we found had limitations (Data Pump, SQL*Loader, GoldenGate, etc.), and we hand-coded scripts to handle the bits that we found lacking or too slow in the standard tools.
The summary view varied from project to project. It was part functional (do the accounting figures for transactions match?) for the users to verify, and part technical. For smaller tables we could just write simple reports and the diff was straightforward.
For larger tables we wrote technical reports that looked at bands of data (e.g. grouping the PK into bands of 1000s), collected all the column data, and produced a checksum, generating a report for each table like:
PK ID Range Start   Checksum
-----------------   -----------
100000              22773377829
200000              38938938282
...
Corresponding table pairs from each database were then "diff"ed against each other to highlight discrepancies. Any differences that were found could then be looked at in more detail.
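For illustration, a rough sketch of the kind of banded checksum query that can feed such a report on the Oracle side (table and column names are invented, ORA_HASH is just one convenient way to build a checksum, and the band size of 100,000 simply matches the sample report above):

SELECT TRUNC(pk_id / 100000) * 100000                       AS pk_range_start,
       SUM(ORA_HASH(col_a || '|' || col_b || '|' || col_c)) AS checksum
FROM   mytable
GROUP  BY TRUNC(pk_id / 100000)
ORDER  BY 1;

The source-side report has to serialize the columns in exactly the same way, otherwise the checksums will never line up.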
The scripts were written in such a way as to allow them to run in parallel, looking at discrete bands. The band ranges were tunable as well, to get the best throughput. This obviously sped things up.
The scripts were shell scripts firing off SQL*Plus reports, with similar scripts for the source database.
On one project there wasn't enough disk space to do these reports, so I wrote a Java program that queried the two databases side by side, using blocking queues to fetch and compare row sets. Being in memory meant this was super fast.
For the "deep dive" we looked at the details for key tables, or for tables that reports a checksum difference.
For the user reports, the users would specify what they wanted to see, and we wrote the reports accordingly.
On the last project, the only discrepancies found were caused by character-set conversion issues (people's names with accents weren't handled correctly).
On projects where the overall dataset was smaller, we extracted the data to XML files and wrote a Java tool to process the pairs and report differences.
The SAP/Sybase rs_subcmp tool is pretty powerful and also pretty hard to use. For details see:
https://help.sap.com/viewer/075940003f1549159206fcc89d020515/16.0.3.3/en-US/feb58db1bd1c1014b134ef4efef25563.html?q=rs_subcmp
You have to pass it key field information, but once you do that, it can retry/restart the compare streams after transient differences. Pretty fancy.
rs_subcmp expects to work against Sybase data sources. So to compare against Oracle, you'd probably have to set up one of those Sybase-to-Oracle gateway products ($$$$$).
Could you install the Oracle ODBC drivers and configure them to allow Sybase clients to access Oracle? I'm guessing not (but that's outside the range of my experience).
Note the "-h" option for rs_subcmp. The docs just say it runs a "fast comparison", but what it's actually doing is running queries using the hashbytes() function. Something like:
select keyfield1,keyfield2, hashbytes("Md5",datacol1,datacol2,datacol3)
from mytable
So this sort of query might be good for the "summary view" type of comparison discussed above, if the Oracle STANDARD_HASH() function output matches up with the Sybase hashbytes() function (again, outside my experience).
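For what it's worth, a sketch of what the Oracle-side counterpart of that query might look like (STANDARD_HASH requires 12c or later; the column names follow the Sybase query above, and the '|' separator is my own assumption, so whether the output lines up with hashbytes() is something to test rather than assume):

SELECT keyfield1,
       keyfield2,
       STANDARD_HASH(datacol1 || '|' || datacol2 || '|' || datacol3, 'MD5') AS row_hash
FROM   mytable;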
Note: as of ASE 16, there was a bug in the hash() and hashbytes() functions when running the MD5 hash option against large varbinary columns, where they could use up all of the procedure cache, potentially crashing the server (CR 811073).

Oracle copy data between databases honoring relationships

I have a question regarding the Oracle COPY command:
Is it possible to copy data between databases (where the structure is the same) and honor relationships in one go, without(!) writing procedures?
To be more precise:
Table B refers (by B.FK) to table A (A.PK) by a foreign key (B.FK -> A.PK; no relationship information is stored in the db itself). The keys are generated by a sequence, which is used to create the PK for all tables.
So how do I copy tables A and B while keeping the relationship intact and using the target DB's sequence to generate new primary keys for the copied data (I cannot use the "original" PK values, as they might already be used in the same table for a different dataset)?
I doubt that the COPY command is capable of handling this situation, but what is the way to achieve the desired behavior?
Thanks
Matthias
Oracle has several different ways of moving data from one database to another, of which the SQL*Plus COPY command is the most basic and the least satisfactory. Writing your own replication routine (as @OldProgrammer suggests) isn't much better.
You're using 11g, so move into the 21st century by using the built-in Streams functionality.
There is no way to synchronize sequences across databases. There is a workaround, which is explained by the inestimable Tom Kyte.
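One commonly cited workaround (not necessarily the exact one in the referenced article) is to give each database its own non-overlapping stream of values, for example:

-- database 1 hands out odd values
CREATE SEQUENCE pk_seq START WITH 1 INCREMENT BY 2;
-- database 2 hands out even values
CREATE SEQUENCE pk_seq START WITH 2 INCREMENT BY 2;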
I generally prefer DB links, and then use SQL INSERT statements to copy over the data.
In your scenario, first insert the data of Table A using the DB link, and then Table B. If you try it the other way round, you will get an error.
For info on DB links, you can check this link: http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm
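To also meet the original requirement of generating new primary keys from the target's sequences while keeping B.FK pointing at the right rows, one workable pattern is an old-key-to-new-key mapping table. A rough sketch only, with invented names (SRC_LINK, A_SEQ, B_SEQ and the column lists are placeholders):

-- 1) Build an old-key -> new-key map for Table A using the target's sequence.
CREATE TABLE a_key_map AS
  SELECT pk AS old_pk, CAST(NULL AS NUMBER) AS new_pk FROM a@src_link;
UPDATE a_key_map SET new_pk = a_seq.NEXTVAL;

-- 2) Copy Table A first, substituting the new keys.
INSERT INTO a (pk, col1)
  SELECT m.new_pk, s.col1
  FROM a@src_link s
  JOIN a_key_map m ON m.old_pk = s.pk;

-- 3) Then copy Table B, remapping its foreign key through the map.
--    (If your version rejects NEXTVAL in a distributed query, stage B
--     locally first and assign its keys in a second step, as for A.)
INSERT INTO b (pk, fk, col1)
  SELECT b_seq.NEXTVAL, m.new_pk, s.col1
  FROM b@src_link s
  JOIN a_key_map m ON m.old_pk = s.fk;

COMMIT;
DROP TABLE a_key_map;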

MERGE in Vertica

I would like to write a MERGE statement in a Vertica database.
I know it can't be used directly, and INSERT/UPDATE have to be combined to get the desired effect.
The MERGE statement looks like this:
MERGE INTO table c
USING (SELECT b.field1, field2 aeg
       FROM table a, table b
       WHERE a.field3 = 'Y'
         AND a.field4 = b.field4
       GROUP BY b.field1) t
ON (c.field1 = t.field1)
WHEN MATCHED THEN
  UPDATE SET c.UUS_NAIT = t.field2;
Would just like to see an example of MERGE being used as insert/update.
You really don't want to do an update in Vertica. Inserting is fine. Selects are fine. But I would highly recommend staying away from anything that updates or deletes.
The system is optimized for reading large amounts of data and for inserting large amounts of data, so since the operation you want to do only fits one of those two, I would advise against it.
As you stated, you can break apart the statement into an insert and an update.
What I would recommend (not knowing the details of what you want to do, so this is subject to change):
1) Insert data from an outside source into a staging table.
2) Perform an INSERT-SELECT from that table into the table you desire, using the criteria you have in mind - either with a join, or in two statements with subqueries against the table you want to test against (see the sketch below).
3) Truncate the staging table.
It seems convoluted, I guess, but you really don't want to do UPDATEs. And if you think that is a hassle, please remember that what causes the hassle is what gives you your gains on SELECT statements.
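A rough sketch of that staging pattern, written as plain INSERT/UPDATE statements (table and column names are made up; the two steps together mirror what a MERGE would do):

-- 1) load the incoming rows into a staging table (COPY, or INSERT from wherever the data comes from)

-- 2a) update rows that already exist in the target
UPDATE target_t
SET    val = s.val
FROM   stage_t s
WHERE  target_t.id = s.id;

-- 2b) insert rows that do not exist yet
INSERT INTO target_t (id, val)
SELECT s.id, s.val
FROM   stage_t s
LEFT JOIN target_t t ON t.id = s.id
WHERE  t.id IS NULL;

-- 3) clear the staging table
TRUNCATE TABLE stage_t;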
If you want an example of a MERGE statement, follow the link; it points to the Vertica documentation. Remember to follow the instructions exactly: you cannot write a MERGE with WHEN NOT MATCHED followed by WHEN MATCHED; the clauses have to follow the sequence given in the usage description in the documentation (which is the other way round). But you can choose to omit one of them completely.
I'm not sure if you are aware that in Vertica, data which is updated or deleted is not really removed from the table, but just marked as 'deleted'. This sort of data can be removed manually by running: SELECT PURGE_TABLE('schemaName.tableName');
You might need super user permissions to do that on that schema.
More about this can be read here: Vertica Documentation; Purge Data.
An example of this from Vertica's Website: Update and Insert Simultaneously using MERGE
I agree that MERGE is supported in Vertica version 6.0. But if Vertica's AHM or epoch management settings are set to keep a lot of history (deleted data), it will slow down your updates. The update speed might go from bad, to worse, to horrible.
What I generally do to get rid of deleted (old) data is run the purge on the table after updating the table. This has helped maintain the speed of the updates.
Merge is useful where you definitely need to run updates. Especially incremental daily updates which might update millions of rows.
Getting to your question: I don't think Vertica supports a subquery in MERGE. You would get the following:
ERROR 0: Subquery in MERGE is not supported
When I had a similar use-case, I created a view using the sub-query and merged into the destination table using the newly created view as my source table. That should let you keep using MERGE operations in Vertica and regular PURGEs should let you keep your updates fast.
In fact, MERGE also helps avoid duplicate entries during inserts or updates if you use the correct combination of fields in the ON clause, which should ideally be a join on the primary keys.
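A hedged sketch of that view workaround, loosely reusing the names from the question (MAX() is added here only so the grouped subquery is well defined, and the WHEN NOT MATCHED branch is optional):

CREATE VIEW merge_src AS
  SELECT b.field1, MAX(b.field2) AS field2
  FROM   table_a a
  JOIN   table_b b ON a.field4 = b.field4
  WHERE  a.field3 = 'Y'
  GROUP  BY b.field1;

MERGE INTO table_c c
USING merge_src t ON (c.field1 = t.field1)
WHEN MATCHED THEN
  UPDATE SET uus_nait = t.field2
WHEN NOT MATCHED THEN
  INSERT (field1, uus_nait) VALUES (t.field1, t.field2);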
I like geoff's answer in general. It seems counterintuitive, but you'll have better results creating a new table with the rows you want in it versus modifying an existing one.
That said, doing so would only be worth it once the table gets past a certain size, or past a certain number of UPDATEs. If you're talking about a table <1mil rows, I might chance it and do the updates in place, and then purge to get rid of tombstoned rows.
To be clear, Vertica is not well suited for single row updates but large bulk updates are much less of an issue. I would not recommend re-creating the entire table, I would look into strategies around recreating partitions or bulk updates from staging tables.

Logical grouping schemas in ORACLE?

We are planning a new system for a client in ORACLE 11g. I've been mostly in the Sql Server world for several years, and am not really current on the latest ORACLE updates.
One particular feature I'm wondering if ORACLE has added in by this point is some sort of logical "container" for database objects, akin to Sql Server's SCHEMA.
Trying to use ORACLE's schemas like Sql Server winds up being a disaster for code comparisons when trying to push from dev > test > live.
Packages are sort of similar, except that you can't put tables into a package (so they really only work for logical code grouping).
The only other option I am aware of is the archaic practice of having to prefix object names with a "schema" prefix, e.g. RPT_REPORTS, RPT_PARAMETERS, RPT_LOGS, RPT_USERS, RPT_RUN_REPORT(), with the prefix RPT_ denoting that these are all the objects dealing with, say, our reporting engine. Writing a system like this feels like we never left the 8.3 file-naming age.
Is there by this point in time any cleaner, more direct way of logically grouping related objects together in ORACLE?
Oracle's logical container for database objects IS the schema. I don't know how much "cleaner" and "more direct" you can get! You are going to have to do a paradigm shift here. Don't try to think in SQL Server terms, and force a solution that looks like SQL Server on Oracle. Get familiar with what Oracle does and approach your problems from that perspective. There should be no problem pushing from dev to test to production in Oracle if you know what you're doing.
It seems you have a bit of a chip on your shoulder about Oracle when you use terms like "archaic practice". I would suggest you make friends with Oracle's very rich and powerful feature set by doing some reading, since you're apparently already committed to Oracle for this project. In particular, pick up a copy of "Effective Oracle By Design" by Tom Kyte. Once you've read that, have a look at "Expert Oracle Database Architecture" by the same author for a more in-depth look at how Oracle works. You owe it to your customer to know how to use the tool you've been handed. Who knows? You might even start to like it. Think of it as another tool in your toolchest. You're not married to SQL Server and you're not being unfaithful by using Oracle ;-)
EDIT:
In response to questions by OP:
I'm not sure why that is a logistical problem. They can be thought of as separate databases, but physically they are not. And no, you do not need a separate data file for each schema. A single datafile is often used for all schemas.
If you want a "nice, self-contained database" ala SQL Server, just create one schema to store all your objects. End of problem. You can create other users/schemas, just don't give them the ability to create objects.
There are tools to compare objects and data, such as the PL/SQL Developer compare. Typically in Oracle you want to compare schemas, not entire databases. I'm not sure why you want to have multiple schemas, each with their own objects, anyway. What does it buy you to do that? Keep your objects (tables, triggers, code, views, etc.) in one schema.
