Migrating data between 2 databases in Oracle 9i

I am new to Oracle. Since we have rewritten an earlier application, we have to migrate the data from the earlier database, in Oracle 9i, to a new database, also in 9i, with a totally different structure. The column names and types are totally different. We need to map the tables and columns, export as much data as possible, eliminate duplicates, and fill empty values with defaults.
Are there any tools which can help map the elements of the two databases, with rules to handle duplicates and default values, and migrate the data?
Thanks,
Chak.

If your goal is to migrate data between two very different schemas you will probably need an ETL solution (ETL=Extract Transform Load).
An ETL will allow you to:
Select data from your source database(s) [Extract]
Apply business logic to the selected data [Transform] (deal with duplicates, default values, map source tables/columns to destination tables/columns...)
Insert the data into the new database [Load]
Most ETLs also allow some kind of automation and reporting of the loads (bad/discarded rows...)
Oracle's ETL tool is called Oracle Warehouse Builder (OWB). It is included in the database licence and you can download it from the Oracle website. Like most Oracle products it is powerful, but the learning curve is a bit steep.
You may want to look into the [ETL] section here in SO, among others:
What ETL tool do you use?
ETL tools… what do they do exactly? In layman's terms please.

In many cases, creating a database link and a few scripts à la
insert into newtable select distinct foo, bar, 'defaultvalue' from oldtable@olddatabase where xxx
should do the trick.
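For example, a minimal sketch assuming a link named olddb pointing at the 9i source and hypothetical table/column names; DISTINCT removes exact duplicates and NVL supplies a default for empty values:
CREATE DATABASE LINK olddb
  CONNECT TO old_user IDENTIFIED BY old_password
  USING 'OLDDB_TNS_ALIAS';

INSERT INTO new_customer (cust_name, cust_status)
SELECT DISTINCT c.name,
       NVL(c.status, 'ACTIVE')        -- hypothetical default for empty values
FROM   customer@olddb c;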

Related

How to convert an ODI interface into an SQL query?

I am working on an ODI 10 project which has 153 interfaces divided into a few packages. What I want to do is create a PL/SQL procedure with INSERT statements instead of having 153 interfaces. These interfaces are more or less similar, i.e. they have the same source table and the same target (in my case the target is an Essbase/Hyperion cube); only the transformations and filters are different. So any time I have to update something like a column value, I have to open 153 interfaces and update each and every one of them. In a procedure I could do this easily; I could just replace all values.
So I feel that it's best that I create a PL/SQL procedure, as I can maintain the code better that way.
Is there a way to convert the interface into a SQL query? I want a direct data dump; I don't want to do a complex incremental load. I am just looking to truncate the table and load the data.
Thanks in advance.
It is possible to get the SQL code generated by ODI from the Operator log tables. It can also be retrieved from the repository.
Here is an example of a query for ODI 12c (10g has been out of support for a long time now):
SELECT s.sess_no, s.nno, step_name, scen_task_no, def_txt
FROM SNP_SESS_STEP s, SNP_SESS_TASK_LOG t
WHERE s.sess_no = t.sess_no
AND s.nno = t.nno;
Starting with ODI 11g, it is also possible to simulate an execution instead of actually running it. This functionality just displays the generated code in ODI Studio instead of executing it.
Finally, upgrading to a more recent version of ODI would allow you to use the ODI SDK. With it you could programmatically make changes to all the mappings in one go. Reusable mappings could also help, as it sounds like some logic is implemented multiple times. That would make these kinds of changes easier while keeping the benefits of an ELT tool (scheduling, monitoring, visual representation of flows, cross-technology support, ...).
Disclaimer: I'm an Oracle employee.

Create star/snowflake schema from existing database (Oracle)

I happen to find myself in a situation where I am using Oracle SQL Developer Version 1.5.5 and there's this huge database for which the documentation is very poor. I'd like to create a star or snowflake schema for a better understanding of the data. Is there a simple way to do it?
You can reverse engineer the physical data model using SQL Developer Data Modeler. This is actually a separate tool from SQL Developer but shares some branding. It is also free.
The quality of the resultant diagram will depend heavily on how well the physical data structures have been implemented. You will only get relationships if the database has defined foreign key constraints (disabled is good enough). Likewise UIDs require defined primary key constraints. If your database lacks constraints you'll have to rely on column naming conventions, data analysis and your business knowledge.
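If you want to see up front what the Data Modeler will have to work with, you can check the data dictionary for declared foreign keys. A rough sketch (the dictionary views are standard; the schema name is an assumption):
SELECT c.table_name      AS child_table,
       c.constraint_name AS fk_name,
       p.table_name      AS parent_table,
       c.status                              -- ENABLED or DISABLED
FROM   all_constraints c
JOIN   all_constraints p
       ON  p.owner = c.r_owner
       AND p.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'               -- 'R' = referential (foreign key)
AND    c.owner = 'YOUR_SCHEMA';              -- assumption: replace with the schema you are documenting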
Star or Snowflake schemas are for data warehouses. Is that the sort of database you're dealing with?

Seeking Opinion: Denormalising Fact and Dim tables to improve performance of SSRS Reports

We seem to have bit of a debate on a discussion point in our team.
We are working on a Data Warehouse in the Microsoft SQL Server 2012 platform. We have followed the Kimball Architecture to build this Data Warehouse.
Issue:
A reporting solution (built on SSRS), which sources data from this Warehouse, has significant performance issues when sourcing data from the fact and dim tables. Some of our team members suggest that we extract data from the facts and dims into a new set of tables using SSIS packages. This would mean we denormalise these tables into 'snapshot' tables. That way we would not need to join these tables to create data sets within the reports; data could be read out of these tables directly.
I do have my own worries about this: inconsistencies, maintenance of different data structures, duplication of data etc., to name a few.
Question:
Would you consider creating snapshot tables (by denormalising fact and dim tables) for reporting purposes the right approach?
Would like to hear your thoughts on this.
Cheers
Nithin
I don't think there is anything wrong with snapshot tables. The two most important aspects of a data warehouse are:
The data is correct.
The data is useful.
If your users are unable to extract the totals they require, in a reasonable timescale, they won't use the warehouse.
My own solution includes 3 snapshot tables. Like you, I was worried about inconsistencies. To address this we built an automated checking process. This sub-system executes a series of queries, stored on a network drive, once an hour. Any records returned by the queries are considered a fail. Fails are reported and immediately investigated by my ETL team. This sub-system ensures the snapshots and underlying facts are always aligned and consistent with each other. Drift is prevented.
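As an illustration of the kind of check involved (all table and column names here are hypothetical), each query compares a snapshot total with the underlying fact and only returns rows when they disagree:
SELECT s.month_key,
       s.total_sales AS snapshot_total,
       f.total_sales AS fact_total
FROM   snapshot_monthly_sales s
JOIN  (SELECT month_key, SUM(sales_amount) AS total_sales
       FROM   fact_sales
       GROUP  BY month_key) f
       ON f.month_key = s.month_key
WHERE  s.total_sales <> f.total_sales;        -- any row returned is a fail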
That said, additional tables equals additional complexity. And that requires more time/effort to manage. Before introducing another layer to your warehouse, you should investigate why these queries are underperforming. If joins are to blame:
Are you using an inappropriate data type for your primary/foreign keys?
Are the FKeys indexed (some RDBMSs do this by default, others do not)? See the catalog query after this list.
Have you looked at the execution plans, for the offending queries?
Is the join really to blame, or is it a filter applied to the dim table?
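For the FK index check mentioned above, something along these lines against the SQL Server catalog views will list foreign key columns that are not the leading column of any index (a sketch, not a tuning tool):
SELECT OBJECT_NAME(fkc.parent_object_id)                    AS table_name,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS fk_column
FROM   sys.foreign_key_columns fkc
WHERE  NOT EXISTS (SELECT 1
                   FROM   sys.index_columns ic
                   WHERE  ic.object_id   = fkc.parent_object_id
                   AND    ic.column_id   = fkc.parent_column_id
                   AND    ic.key_ordinal = 1);               -- leading index column only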
For raw cube performance my advice would be to always try to denormalise your tables and have one fact table and one table for each dimension (star schema).
If you are unsure whether it will actually help, you could start by creating materialized views (see the sketch below). These are kind of the best of both worlds; in the long run you should alter your ETL.
In my previous job we only had flattened tables, which worked quite well. Currently we have a normalised schema but flatten it in the last step.
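In SQL Server terms a materialized view is an indexed view. A minimal sketch with hypothetical fact/dim names (SCHEMABINDING, COUNT_BIG and a unique clustered index are required for indexed views):
CREATE VIEW dbo.v_sales_by_product
WITH SCHEMABINDING
AS
SELECT d.product_key,
       SUM(f.sales_amount) AS total_sales,
       COUNT_BIG(*)        AS row_cnt        -- mandatory for indexed views with GROUP BY
FROM   dbo.fact_sales f
JOIN   dbo.dim_product d ON d.product_key = f.product_key
GROUP  BY d.product_key;
GO
CREATE UNIQUE CLUSTERED INDEX ix_v_sales_by_product
    ON dbo.v_sales_by_product (product_key);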

Oracle copy data between databases honoring relationships

I have a question regarding the oracle copy command:
Is it possible to copy data between databases (where the structure is the same) and honor relationships in one go, without(!) writing procedures?
To be more precise:
Table B refers (by B.FK) to table A (A.PK) by a foreign key (B.FK -> A.PK; no relationship information is stored in the db itself). The keys are generated by a sequence, which is used to create the PK for all tables.
So how do I copy tables A and B while keeping the relationship intact and using the target DB's sequence to generate new primary keys for the copied data (I cannot use the "original" PK values as they might already be used in the same table for a different dataset)?
I doubt that the COPY command is capable of handling this situation, but what is the way to achieve the desired behavior?
Thanks
Matthias
Oracle has several different ways of moving data from one database to another, of which the SQL*Plus COPY command is the most basic and the least satisfactory. Writing your own replication routine (as @OldProgrammer suggests) isn't much better.
You're using 11g, so move into the 21st century by using the built-in Streams functionality.
There is no way to synchronize sequences across databases. There is a workaround, which is explained by the inestimable Tom Kyte.
I generally prefer DB links and then use SQL INSERT statements to copy over the data.
In your scenario, first insert the data for Table A using the DB link and then Table B. If you try it the other way round, you will get an error because the child rows would reference parent keys that do not exist yet.
For info on DB links, you can check this link: http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm
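If you also need the target database's sequences to generate new primary keys (as in the question), a straight insert is not enough; one common pattern is a temporary key-mapping table. A hedged sketch, with all table, column, sequence and link names hypothetical:
-- 1) map each old parent key to a new key from the target sequence
CREATE TABLE a_key_map (old_pk NUMBER, new_pk NUMBER);

INSERT INTO a_key_map (old_pk, new_pk)
SELECT a.pk, a_seq.NEXTVAL
FROM   table_a@source_db a;

-- 2) copy the parents under their new keys
INSERT INTO table_a (pk, name)
SELECT m.new_pk, a.name
FROM   table_a@source_db a
JOIN   a_key_map m ON m.old_pk = a.pk;

-- 3) copy the children, translating the foreign key through the map
INSERT INTO table_b (pk, fk, descr)
SELECT b_seq.NEXTVAL, m.new_pk, b.descr
FROM   table_b@source_db b
JOIN   a_key_map m ON m.old_pk = b.fk;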

working with LINQ to Entities against multiple sql server databases

I'm building a project made up of a number of sites with a common subject.
The sites rely on one central database that holds the common info for all of them.
In addition, each site has another database that holds its unique info (I will refer to it as unique-db in the next lines so I won't be misunderstood).
For example, the Languages table sits in the central db. However, I've noticed that I need to use the Languages table in one of my unique-dbs so that it can act as an FK target, and I'd rather not create the same table again in the unique-db.
Do I have to create the same table again, this time in the unique-db? Or is there a way to connect tables from separate databases?
In addition, we decided to use LINQ to Entities, and soon we're going to run some complex queries against the different databases. Will that be a problem?
How should I go about this? Was it wise to split the data into a few databases?
I really appreciate all the help I can get!
One thing that might make your life easier is to create views of the central tables in each unique db. Linq to Entities will pick up views as if they were tables.
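A minimal sketch, assuming the central database is named CentralDb and the shared table is dbo.Languages (both names and the columns are assumptions); in each unique db you create a view using a three-part name:
CREATE VIEW dbo.Languages
AS
SELECT LanguageId, LanguageName              -- hypothetical columns
FROM   CentralDb.dbo.Languages;
Note that SQL Server will not let you declare a real foreign key against a view or across databases, so the view only solves the modelling side; referential integrity still has to be enforced by triggers or application code.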
