Oracle PL/SQL and Shell scripting: from one schema to another schema - oracle

I want to load data from a table in one schema into another schema on a daily basis.
The tables are in different databases, so creating a database link is not an option for security reasons.
About a million records will be processed.
The databases are on different servers. From database "A" I fetch employee presence details by joining the emp details and emp presence tables for a one-month period, and load this data into a table on database "B". This activity needs to run daily.
I need to run the job daily during off-peak hours to get a complete copy of the table into the other database.
Should I use export/import, or load the data with the help of SQL*Loader (sqlldr)?
Please let me know the correct approach.
Thanks in advance.
What are my best options?

Well, it seems that a database link would best fit your situation. To read a table from a database, you only need the SELECT privilege on it. Perhaps you can ask the DBA to create an account (user) that has only the SELECT privilege on the specific tables, and then create the database link connecting as that new user.
You can't update or delete data in the tables because the user you connect as has no write privileges, which should address the security concern.
exp/imp and sqlldr are different tools and don't work together: an export file can only be loaded with import; you can't load an export file with sqlldr.
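To make that concrete, here is a minimal sketch; all of the user, table, column, and TNS names below are assumptions, not taken from your system:

-- On database "A" (source): a read-only account
CREATE USER emp_reader IDENTIFIED BY "a_strong_password";
GRANT CREATE SESSION TO emp_reader;
GRANT SELECT ON hr.emp TO emp_reader;
GRANT SELECT ON hr.emp_presence TO emp_reader;

-- On database "B" (target): a link that can only read those tables
CREATE DATABASE LINK src_a
  CONNECT TO emp_reader IDENTIFIED BY "a_strong_password"
  USING 'DB_A_TNS_ALIAS';

-- Daily load into the local table, joining across the link
INSERT /*+ APPEND */ INTO emp_presence_copy (emp_id, emp_name, presence_date, status)
SELECT e.emp_id, e.emp_name, p.presence_date, p.status
FROM   hr.emp@src_a e
JOIN   hr.emp_presence@src_a p ON p.emp_id = e.emp_id
WHERE  p.presence_date >= TRUNC(SYSDATE, 'MM');
COMMIT;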

If you want to run this periodically, it sounds like you might want to take a look at the Oracle Scheduler.
Overview: http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedover001.htm
To export the data and add it into the new database, you might want to use Oracle Data Pump, which can do both the export and import for you, securely.
Data Pump Export: http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm
So your best bet might be creating a shell script that uses Data Pump to create an export file from the source database, and then uses Data Pump again to import that file into the target database.
Once you have that script, you can schedule it to run during nights or at any time you have low traffic.
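As a rough sketch (connect strings, directory objects, hostnames, and file names here are all assumptions, not something your environment dictates), the nightly script could look something like this:

#!/bin/sh
# nightly_copy.sh - export on the source, copy the dump file, import on the target
STAMP=$(date +%Y%m%d)

# 1. Table-mode export from the source database
expdp src_user/src_pwd@SRC_DB tables=EMP_PRESENCE directory=DATA_PUMP_DIR dumpfile=emp_presence_${STAMP}.dmp logfile=exp_${STAMP}.log

# 2. Copy the dump file into the target server's DATA_PUMP_DIR directory
scp /u01/app/oracle/dpdump/emp_presence_${STAMP}.dmp target_host:/u01/app/oracle/dpdump/

# 3. Import on the target, replacing yesterday's copy of the table
ssh target_host "impdp tgt_user/tgt_pwd@TGT_DB tables=EMP_PRESENCE directory=DATA_PUMP_DIR dumpfile=emp_presence_${STAMP}.dmp logfile=imp_${STAMP}.log table_exists_action=replace"

A cron entry such as 0 2 * * * /home/oracle/nightly_copy.sh >> /home/oracle/nightly_copy.log 2>&1 would run it at 02:00; alternatively, an Oracle Scheduler job with job_type => 'EXECUTABLE' can launch the same script from inside the database.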
Regards

Related

Oracle user DB export command's scope (User/Schema level)?

I'm a total novice in terms of Oracle DB knowledge and am trying to understand the IMPDP command and its scope.
Issue: Suppose there are 500 tables in a particular DB, and many of them (60%-70% or more) come in with zero records when we import the data into a fresh Oracle DB (we get the data from a vendor who owns the DB). The doubt is: how can most of the tables have zero records in a DB (why were they created in the first place then)? We're also assuming that maybe the vendor is using a specific user to generate the .DMP files who has no access to those tables, hence the zero counts. When we asked the vendor, they said that's not how Oracle works; they've provided a user export dump and said, "Schema is a collection of database objects owned by a specific user. Those objects include tables, indexes, views, functions, stored procedures, etc."
When asked about the zero-records issue, they said they're pulling the data correctly and have no understanding as to why so many tables are empty. The SO community has great experts in Oracle DB; can anyone shed some light on:
1) What might be the issue?
2) Is our assumption correct (i.e., that the user doesn't have access to the tables that came in with zero records)?
3) What's the right way forward?
4) Anything else you want to add.
The vendor is correct - the utility used to generate the export, EXPDP (the complement to IMPDP), can create a full dump of all of the database objects of a specific user. However, the parameters used to generate the export can vary greatly, and it's absolutely possible for an export to not include table data IF the EXPDP command/parameters used to create the export are specified that way. For example, let's imagine that someone wants to export a specific schema using the following command:
expdp [USER]@[DATABASE] schemas=test directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log query=TEST.TABLE:'"WHERE row_date>sysdate"'
While the export is being generated, all of the rows in that specific table are evaluated against the WHERE condition. With a condition like row_date>sysdate, only rows dated in the future are exported; every row dated up to sysdate is excluded. If a WHERE condition like that is applied to the entire export, you'll have tables with 0 rows in the dump file.
That is just an example - it might also be the case that the tables really do have 0 rows. This is possible for a lot of reasons: perhaps it is an older schema with tables that have previously been truncated. Perhaps that particular database isn't used often, and the tables within the schema are empty because rows were never added to them. Maybe a developer or another DBA created a bunch of unnecessary tables and they simply were never dropped. There is a plethora of potential reasons for a schema to have empty tables, and that doesn't mean there is something wrong with the database or the export file being generated. Applications and their technical requirements change all the time, and it's possible that the schema simply wasn't updated when those tables were no longer needed.
The first thing I would recommend is:
Ask the vendor to provide record counts of each table in that schema from their end for validation purposes. This will tell you if the tables are empty in the database. If they are empty in the database, they will be empty in your export. This is very simple and can be achieved with a query like select owner, table_name, num_rows, sample_size, last_analyzed from all_tables where owner=[SCHEMA]; provided that their table statistics are up to date.
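If their statistics are stale, exact counts can be generated instead; the query below (the schema name is a placeholder) simply builds one SELECT COUNT(*) statement per table, which they can then run and send back:

select 'select ''' || table_name || ''' as table_name, count(*) as cnt from ' || owner || '.' || table_name || ';' as count_sql
from   all_tables
where  owner = '[SCHEMA]'
order  by table_name;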
If this is a big concern for you, you can always ask them to exclude those tables in the export with a command like:
expdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Or simply exclude them during your import with a command like:
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Either way should work, but be careful and ensure that there will be no issues from a constraint/child record perspective. You can also exclude the constraints. There are many ways to work around it.
IF THERE ARE INCONSISTENCIES BETWEEN THE COUNTS AND THE ROWS IMPORTED, I would recommend asking the vendor for the specific EXPDP command or parameter file that was used to generate the export. This will let you know whether the empty tables are being caused by a clause in the export command.
It's impossible to know if your assumption is correct without knowing more about the database the export is coming from or seeing the commands being used to generate the export. I would ask the vendor to verify record counts before assuming that it's a permission issue. Empty tables are created all the time.

Is it possible in oracle to trigger a SAS program after an insert or update on oracle table?

I'm new to Oracle, and I saw that Oracle triggers can fire some action after an update or insert on an Oracle table.
Is it possible to trigger a SAS program after every update or insert on an Oracle table?
There are a few different ways to do this, but a problem like this is an example of the saying "Just because you can, doesn't mean you should".
So sure, your trigger can be fired on update or insert and that can call a stored procedure in a package which can use the oracle host command to call an operating system command which can call SAS.
Here are some questions:
do you really want to install SAS on the same machine as your Oracle database?
do you really want every transaction that inserts or updates to have to wait until the host command completes? What if SAS is down? Do you want the transaction to complete or.....?
do you really want the account that runs the database to have privileges to start up or send information to other executables? Think security risks.
if an insert does one record the action is clear. What if an update affects a thousand records? What message do you want to send to SAS? One thousand update statements? One update statement?
There are better ways to do this but a complete answer needs more details from you as to the end goal and business logic involved. Some ways I have used include:
trigger inserts the change data into an Oracle Advanced Queue; at predetermined intervals, take the changes off the queue and write them to a flat file, then have a file watcher look for the files and send the info to SAS (a rough sketch of the trigger/queue part follows this list).
write a Java program to take the changes and ship them
use the APEX web service and expose the changes as a series of JSON or REST packets.
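For the first option, the queue-and-trigger part might look roughly like the sketch below; the payload type, queue names, and the emp table are assumptions, and the scheduled job that dequeues and writes the flat file is left out:

-- Payload describing one change event
CREATE OR REPLACE TYPE emp_change_t AS OBJECT (
  emp_id     NUMBER,
  change_op  VARCHAR2(10),
  changed_at DATE
);
/
-- One-time queue setup
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'emp_change_qt', queue_payload_type => 'EMP_CHANGE_T');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'emp_change_q', queue_table => 'emp_change_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'emp_change_q');
END;
/
-- The trigger only enqueues; nothing outside the database is called here
CREATE OR REPLACE TRIGGER emp_change_trg
AFTER INSERT OR UPDATE ON emp
FOR EACH ROW
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'emp_change_q',
    enqueue_options    => l_opts,
    message_properties => l_props,
    payload            => emp_change_t(:NEW.emp_id,
                                       CASE WHEN INSERTING THEN 'INSERT' ELSE 'UPDATE' END,
                                       SYSDATE),
    msgid              => l_msgid);
END;
/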

Oracle Export/Import issues with Tablespace

I created a dump of a local oracle database like this:
expdp mydb/passwd schemas=myschema dumpfile=mydumpfile.dmp logfile=oralog.log
I sent the dump to someone who is supposed to import it into his Oracle server. Now he tells me the import fails with errors related to tablespaces (for example, "tablespace XYZ is not available"; the tablespace XYZ has no relation to his database). Besides, he asks me for some information about the dump concerning the tablespaces.
Since I usually work with MySQL and have limited knowledge about these Oracle tablespace matters, I would really appreciate some advice.
Use the REMAP_TABLESPACE parameter.
For example,
REMAP_TABLESPACE=(source1:destination1,source2:destination1,source3:destination1,source4:destination1)
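Applied to the dump from the question, the import might look like this (the user, connect string, and tablespace names here are just placeholders for the tablespace named in the error and one that exists on his server):

impdp targetuser/passwd@targetdb schemas=myschema directory=DATA_PUMP_DIR dumpfile=mydumpfile.dmp logfile=imp.log remap_tablespace=XYZ:USERS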
Go through the documentation about Data Pump Import. A small quote -
Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.
Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. This is a simpler and cleaner method than the one provided in the original Import utility. That method was subject to many restrictions (including the number of tablespace subclauses) which sometimes resulted in the failure of some DDL commands.
By contrast, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.

SYSTEM tables import into other schema in oracle 11g

I am trying to take an Oracle DB backup using expdp. I have a specific case where an application table resides in the SYSTEM tablespace.
The backup export of this schema is successfully created with options SCHEMAS=SYSTEM and INCLUDE=TABLE:"like 'USER%'" which corresponds to my application tables.
I have created another schema with the user impexp which has a different tablespace allocated to it.
When I try to import the .dmp file into impexp, the import fails, stating that "SYSTEM"."USER_SYS_MAST" already exists.
Is there a way to import this table in the newly created schema. I also tried using the option REMAP_SCHEMA=SYSTEM:IMPEXP, but it seems to error out saying ORA-39013: Remapping the SYSTEM schema is not supported.
Summarizing: I want to import my application tables that sit in the SYSTEM tablespace into a new tablespace 'IMPEXP'.
Please let me know if I am going wrong somewhere and trying to do something that isn't supported.
Any help will be greatly appreciated.
This is one of the reasons why putting application tables in the SYS or SYSTEM schemas is considered bad practice. These schemas are vital to the running of our databases and should not be meddled with.
You have compounded this bloomer by naming your tables with a prefix of USER, which is the same convention the data dictionary uses.
What you need to do is create a new schema to hold these tables. Grant it whatever privileges it needs that made you think it had to be owned by SYSTEM. Then move those tables out of the SYSTEM schema.
To do a proper job you should change your application to use this new schema, but as a temporary fix you could give SYSTEM rights on the tables and build synonyms for them. If you have the time, change the application. It will cause you less grief in the long run.
Either way, you will be able to export the data out of the old database and into the target database using this new schema.
Agree with APC.
In your specific case, I would look at DBMS_METADATA.GET_DDL to extract the DDL so I can recreate all the objects in the new schema. There are options to exclude the TABLESPACE component so they would get created in the new schema's default tablespace.
Then I would simply do INSERT /*+ APPEND */ INTO newschema.table SELECT * FROM SYSTEM.table
If space is an issue, you may need to TRUNCATE or DROP individual tables immediately after they are successfully copied.
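A rough sketch of that approach, using the table name from the question and the standard DBMS_METADATA transform parameters (run as a suitably privileged user):

-- Strip tablespace/storage clauses so the table is created in the new schema's default tablespace
BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', FALSE);
END;
/
-- Generate the CREATE TABLE statement, then run it (edited to the new owner) in the IMPEXP schema
SELECT DBMS_METADATA.GET_DDL('TABLE', 'USER_SYS_MAST', 'SYSTEM') FROM dual;

-- Copy the rows once the empty table exists in the new schema
INSERT /*+ APPEND */ INTO impexp.user_sys_mast SELECT * FROM system.user_sys_mast;
COMMIT;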

Script Oracle tables (DDL) with data insert statements into single/multiple sql files

I need to export the tables for a given schema into DDL scripts and INSERT statements, scripted such that the order of dependencies/constraints is maintained.
I came across this article suggesting how to archive the database with data - http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm - not sure if the article is applicable to Oracle 10g/11g.
I have seen "export table with data" features in "SQL Developer", "Toad for Oracle", "DreamCoder for Oracle" etc., but I would need to do this one table at a time, and would still need to figure out the right order of script execution manually.
Are there any tools/scripts that can utilize oracle metadata and generate DDL script with data?
Note that some of the tables have CLOB datatype columns - so the tool/script would need to be able to handle these columns.
P.S. I need something similar to the "Generate Scripts" feature in SQL Server 2008, where one can specify the "script data" option and get back a self-sufficient script with DDL and data, generated in the order of table constraints. Please see: http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx
Thanks for your help!
Firstly, recognise that this isn't necessarily possible. A view can use a function in a package that also selects from the view. Another issue is that you might need to load data into tables and then apply constraints, even though this might be slower than the other way round.
In short, you will need to do some work here.
Work out the dependencies in your system. ALL_DEPENDENCIES is the primary mechanism.
Then use DBMS_METADATA.GET_DDL to extract the DDL statements. For small data volumes, I'd extract the constraints separately for applying after the data load.
In current versions you can create external tables to unload data from regular tables into OS files (and obviously go the other way round). But if you've got exotic datatypes (BLOB, RAW, XMLTYPEs, User Defined Types....) it will be more challenging.
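As a sketch of the unload side (the directory object, table, and columns are assumptions), the ORACLE_DATAPUMP driver lets a CREATE TABLE ... AS SELECT write the rows to an OS file, and a matching external table on the target reads them back:

-- Source: unload rows to a file in the DATA_PUMP_DIR directory object
CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY data_pump_dir
    LOCATION ('emp_unload.dmp')
  )
  AS SELECT * FROM emp;

-- Target: an external table over the copied file, then a plain insert to load it
CREATE TABLE emp_stage (
  empno    NUMBER,
  ename    VARCHAR2(30),
  hiredate DATE
)
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY data_pump_dir
    LOCATION ('emp_unload.dmp')
  );

INSERT INTO emp SELECT * FROM emp_stage;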
I suggest that you use the standard Oracle export and import utilities (exp/imp) here; is there a reason why you won't consider them? Note in addition that you can use the "indexfile" option on the import to output the SQL statements (unfortunately this doesn't include the inserts) to a file instead of actually executing them.
