Oracle Data Modeler: add initial data to tables

I've created a relational model in Oracle SQL Developer Data Modeler.
I want to add predefined/static data that should exist in the initial clean database: enum values, fixed lists (for example: countries) using the modeler. My goal is to receive a script using the "DDL File Editor" tool which contains not only "create table" commands and so on, but also "insert" statements with the initial data.
Is there any way to do this?

Probably the easiest way would be to put the DML into the AFTER CREATE tab under Scripts for each table, and to make sure it's included in the DDL script.
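For example, for a hypothetical COUNTRIES lookup table, the script pasted into the AFTER CREATE tab might look like this (the table and column names are assumptions for illustration):

-- AFTER CREATE script for the COUNTRIES table (names are placeholders)
INSERT INTO countries (country_code, country_name) VALUES ('US', 'United States');
INSERT INTO countries (country_code, country_name) VALUES ('FR', 'France');
COMMIT;

When you generate the DDL with the table scripts included, these inserts should appear right after the corresponding CREATE TABLE statement.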

Related

Oracle Data Modeler - How to Commit DDL changes back to Database?

Initial note: I created the model by choosing to import the Data Dictionary using one of my connections, then choosing the schema and lastly the tables I want to model.
After making changes within Oracle SQL Developer Data Modeler, how can I commit the changes made in the newly created relational model back to the database?
I can manually parse through the generated DDL, but that seems like unnecessary work. I attempted to use the 'Synchronize with Data Dictionary' option; however, when I went back to my tables within my schema they were not altered/updated in any way. No primary keys, foreign keys, indexes or any of the other DDL actions I created in the model were seen in my database. What am I missing here?
I really thought the Synchronize options were what I should be using.
We will never commit changes to the database.
You'll do the compare, review the delta DDL, and then if you think it's good - load it up in SQLcl, SQL Developer, or SQLPlus to run.
It's not that we don't trust you to do the review part first, but also, it'd be just too easy to muck up a database if you hit the wrong button. Especially as some table structural changes could result in data loss.
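As a concrete illustration, running the reviewed delta script from SQLcl might look like this (the file name and connection string are assumptions):

-- start SQLcl and connect, then run the reviewed delta script
sql myuser/mypassword@//localhost:1521/orclpdb
SQL> @delta.sql
SQL> commit;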

Informatica: execute SQL in SQL transformation

Background: I am really new to this. I am using Informatica Developer for PowerCenter Express, version 9.6.1 HotFix 2.
I want to execute a T-SQL statement as one step in a workflow:
truncate table dbo.stage_customer
I tried creating a mapping and adding a SQL transformation to it, entering the above query in the SQL query window. I added the mapping to a workflow consisting of just the start task, the mapping, and the end. When I validate the flow I get this error:
The group [Input] in transformation xxx must have at least one port
I have no idea what ports are needed since this (the truncate statement) basically doesn't need input or output.
Use your query "truncate table dbo.stage_customer" as a Pre-SQL command.
As Aswin suggested, use the built-in option in the session properties.
But in production environments the user may not have truncate access for the table in the database. In that case the Informatica workflow will fail if you check the "truncate target table" option. It is better to have a stored procedure that truncates the target table and to call that stored procedure from the Informatica mapping, so the workflow does not fail when the user has no truncate access.
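A minimal sketch of such a stored procedure on the SQL Server side (the procedure name and the etl_user grantee are assumptions; EXECUTE AS OWNER lets the caller truncate without holding ALTER rights on the table):

CREATE PROCEDURE dbo.usp_truncate_stage_customer
WITH EXECUTE AS OWNER  -- runs with the owner's privileges, not the caller's
AS
BEGIN
    TRUNCATE TABLE dbo.stage_customer;
END;
GO
-- allow the ETL login to call it without granting ALTER on the table
GRANT EXECUTE ON dbo.usp_truncate_stage_customer TO etl_user;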
If you would like to truncate a target table before loading, why don't you use the built-in option present in the session properties?
Go to Workflow Manager -> open the session -> Mapping tab -> click on the target table listed on the left side -> enable the "Truncate target table option".
To answer your question: I think you have to connect at least one input and one output port to the SQL transformation (because it is not unconnected). Just create dummy ports and try again.

Export Oracle database tables

I am working on a large database. How do I export some database tables without having DBA privileges? Do I have to copy the structures of the tables and use the spool command to get the data into a text file, then create the tables and insert the data from the text file?
One of the methods would be to install Oracle SQL Developer and export the required table structures and data using the wizard.
Here is the link to a tutorial which can guide you if you go with this option.
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r30/SQLdev3.0_Import_Export/sqldev3.0_import_export.htm
A second option would be to use SQL*Loader to load data into your target tables. But for that you will first have to create the table structures on your target schema and spool the data from your source tables into CSV (comma-separated values) or another supported format.
Here is a link for SQL Loader.
http://docs.oracle.com/cd/B28359_01/server.111/b28319/ldr_concepts.htm
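As a rough sketch of that approach (the file, table, and column names are placeholders), a minimal SQL*Loader control file for a CSV extract could look like this:

LOAD DATA
INFILE 'countries.csv'
INTO TABLE countries
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(country_code, country_name)

You would then run something like sqlldr myuser/mypassword control=countries.ctl log=countries.log against the target schema.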
A third option would be that you create the table structures on the target schema and generate the insert statements from the source schema using a script. Here is a link to such an example.
https://pandazen.wordpress.com/2008/08/18/generate-insert-statement-script-to-extract-data-from-oracle-table/
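The essence of that script approach is a query that concatenates the INSERT text, which you then spool to a file; a simple sketch for a hypothetical two-column table:

SELECT 'INSERT INTO countries (country_code, country_name) VALUES ('''
       || country_code || ''', ''' || country_name || ''');'
FROM   countries;

Quoting gets trickier with dates, NULLs, and embedded quotes, which is one reason the SQL Developer export wizard is usually the simpler route.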
I would recommend going with the SQL Developer option since it is relatively simple.

Oracle data migration with lot of schema changes

I need to do an Oracle data migration from 11g to 12c where schema changes are abundant. I have an Excel sheet which describes all the schema changes. The Excel sheet has columns for 'old_table_name', 'old_column_name', 'old_value' and the same for the new tables. Some values can be directly copied to the new table and some cannot be.
For example, I have to transform the old column value when it is moved to the new table. Some transformations are complex and cannot simply be mapped; they have to be derived by joining with other tables in the old database. I was trying the Talend Open Studio Data Integration tool for this and found it a bit complex to go ahead with in my case. Does anyone have an idea of how to get this done using Talend or any other tool? What is the ideal approach for a migration like this? I have included a sample of the Excel sheet below which only has simple transformations.
The kind of conversions shown in the spreadsheet can all be performed on the table itself using rename statements and/or basic DDL and DML statements. I would load the old table into the new database and perform these statements on the table:
alter table old_table_one rename to new_table_one;

alter table new_table_one rename column old_col_one to new_col_one;

update new_table_one
set    new_col_one = 'A_NEW'
where  new_col_one = 'A';

etc.
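For the transformations the question says need joins against other old tables, a correlated UPDATE covers many cases; a sketch with hypothetical table and column names:

update new_table_one t
set    t.new_col_two = (select o.mapped_value
                        from   old_lookup_table o
                        where  o.id = t.lookup_id)
where  exists (select 1
               from   old_lookup_table o
               where  o.id = t.lookup_id);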

Script Oracle tables (DDL) with data insert statements into single/multiple sql files

I need to export the tables for a given schema into DDL scripts and insert statements, and have it scripted such that the order of dependencies/constraints is maintained.
I came across this article suggesting how to archive the database with data - http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm - though I am not sure whether it is applicable to Oracle 10g/11g.
I have seen "export table with data" features in SQL Developer, Toad for Oracle, DreamCoder for Oracle, etc., but I would need to do this one table at a time, and would still need to figure out the right order of script execution manually.
Are there any tools/scripts that can utilize oracle metadata and generate DDL script with data?
Note that some of the tables have CLOB datatype columns - so the tool/script would need to be able to handle these columns.
P.S. I need something similar to the "Generate Scripts" feature in SQL Server 2008, where one can specify the "script data" option and get back a self-sufficient script with DDL and data, generated in the order of table constraints. Please see: http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx
Thanks for your help!
Firstly, recognise that this isn't necessarily possible. A view can use a function in a package that also selects from the view. Another issue is that you might need to load data into tables and then apply constraints, even though this might be slower than the other way round.
In short, you will need to do some work here.
Work out the dependencies in your system. ALL_DEPENDENCIES is the primary mechanism.
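A starting point (the schema name is a placeholder) is simply to query the dependency view:

select owner, name, type,
       referenced_owner, referenced_name, referenced_type
from   all_dependencies
where  owner = 'MY_SCHEMA';  -- placeholder schema name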
Then use DBMS_METADATA.GET_DDL to extract the DDL statements. For small data volumes, I'd extract the constraints separately for applying after the data load.
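A sketch of that extraction, with session transform parameters set so constraints are omitted from the table DDL and can be applied separately after the load (the parameter choices here are illustrative):

begin
  -- omit constraints from the generated CREATE TABLE statements
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'CONSTRAINTS', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'REF_CONSTRAINTS', false);
end;
/
select dbms_metadata.get_ddl('TABLE', table_name) from user_tables;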
In current versions you can create external tables to unload data from regular tables into OS files (and obviously go the other way round). But if you've got exotic datatypes (BLOB, RAW, XMLTYPEs, User Defined Types....) it will be more challenging.
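The unload side of that external-table approach might look like this (the directory object and table names are assumptions; the directory must already exist and be writable):

create table emp_unload
  organization external (
    type oracle_datapump
    default directory data_pump_dir
    location ('emp_unload.dmp')
  )
as
select * from emp;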
I suggest that you use Oracle standard export and import (exp/imp) here; is there a reason why you won't consider it? Note in addition that you can use the "indexfile" option on the import to write the SQL statements (unfortunately this doesn't include the inserts) to a file instead of actually executing them.
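Roughly (file names and credentials are placeholders), that looks like:

exp myuser/mypassword file=myschema.dmp owner=myuser
imp myuser/mypassword file=myschema.dmp indexfile=myschema.sql full=y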
