Import a table from one user to another in Oracle SQL

Is it possible to import a table from one user to another from an export dmp file in Oracle?
If yes, how do I do it?
I have 2 users: MILLER and DUMMY. MILLER has a table Planets. I made an export from MILLER (last.dmp) using the Command Prompt, and I want to import the table into the DUMMY user from the export file.
I have already tried using the information from here, but it didn't help me.
I can also add the command prompt log, if necessary.

If you are using the "Original Import" Utility, you should consider that:
The user names must exist before the import operation; otherwise an error is returned.
To import to a different schema than the one that originally contained the object, specify TOUSER; the IMP_FULL_DATABASE role is required to use this parameter.
More information here

Thank you to all of you. Here is the solution:
1) grant IMP_FULL_DATABASE to DUMMY;
2) alter user DUMMY quota 50m on system; (because I had the error
IMP-00058: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'SYSTEM'
Import terminated successfully with warnings.)
3) use FROMUSER and TOUSER in the command prompt:
C:\>imp dummy@test3 fromuser=miller touser=dummy
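Put together, a minimal sketch of the whole sequence (the grants are run as a privileged user such as SYSTEM; file=last.dmp is added here to name the dump file from the question explicitly, and imp will prompt for the password):
-- run as a DBA user (e.g. SYSTEM) on the target database
grant IMP_FULL_DATABASE to DUMMY;
alter user DUMMY quota 50m on system;
-- then import MILLER's table into DUMMY from the command prompt
C:\>imp dummy@test3 fromuser=miller touser=dummy file=last.dmp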

Here is the complete process.
Exporting:
expdp schema1/password dumpfile=Imported_Table1.dmp directory=DIR tables=sourceTablename
Importing:
impdp schema2/password DIRECTORY=DIR DUMPFILE=Imported_Table1.dmp
remap_table=schema1.tablename:destTablename
remap_schema=schema1:schema2 TABLE_EXISTS_ACTION=append
destTablename -> the table name in the destination schema. If the table exists, the data is appended to it; otherwise a new table is created with the data.
Ref: TABLE_EXISTS_ACTION
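Both commands assume the DIR directory object already exists in the database and that the schemas are allowed to use it. A minimal sketch of setting that up as a privileged user (the file system path is a placeholder):
create or replace directory DIR as '/path/to/dump/files';
grant read, write on directory DIR to schema1, schema2;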
Addressed Issues:
ORA-39002: invalid operation
ORA-39166: Object schema2.tablename was not found or could not be
exported or imported.
Table "schema2"."tablename" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

Related

Am I importing this impdp dump file into Oracle Database 11g R2 correctly?

I have a dump file without a log file, and I have no idea what the expdp schema users are, so I edited a parfile like below:
directory=my_directory
remap_schema=rx:tbs
table_exists_action=replace
My problem is that the user "rx" does not exist. If I run impdp with the parfile above, will it load all objects properly into the database?
you don't need a log file
it is unclear whether you were given that parameter file or wrote it yourself
I think the former; otherwise, how would you know about the rx user?
if that's so, you shouldn't worry about the rx user - it looks as if the dumpfile contains objects that belong(ed) to that user
what you should have is the tbs user (created in the target database). Why? Because of the remap_schema parameter. Certainly, you don't have to import into tbs; create any other user and fix the parameter file
that's it, then; import the dumpfile as e.g.
impdp system/password@database parfile=that_parameter_file.txt
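A minimal sketch of creating that target user first (the password, default tablespace, and quota are placeholders; adjust to your environment):
create user tbs identified by some_password default tablespace users quota unlimited on users;
grant create session, create table to tbs;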

How can I convert an Oracle impdp sqlfile to a PostgreSQL script to import data into PostgreSQL? [duplicate]

I am trying to import the dump file to a .sql file using the SQLFILE parameter.
I used the command "impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp SQLFILE=sample.sql LOGFILE=sample.log"
I expected this to return a sql file with the contents of the tables, but it created a sql file with only DDL queries.
For the export I used "expdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=sample.log FULL=y"
The dump file size is 130 GB, so I believe the dump has been exported correctly.
Am I missing something in the import command? Is there any other parameter I should use to get the contents?
Thanks in advance!
Your expectation was wrong, I'm afraid. You're asking it to do something it isn't designed for.
The documentation for SQLFILE says:
Purpose
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
So it will only ever contain DDL.
There isn't a mechanism to turn a .dmp file into a .sql containing insert statements. If you need to put the data into a table, just use the native import.
Individual insert statements - if you could generate them, which SQL Developer will do as a separate task unrelated to your data pump export - would be slower, would have problems with LOBs, and would have to be careful about the order they were run unless integrity constraints were disabled. Data pump takes care of all of that for you.
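For completeness, a minimal sketch of that native import, reusing the directory, dump file, and credentials from the question (add REMAP_SCHEMA or TABLES parameters if you only need part of the full export):
impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=import_sample.log FULL=y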

Oracle catldr.sql multiple errors

I'm preparing my new Oracle 11g install for "Direct" SQL*Loader operation. As per the documentation here:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/ldr_modes.htm#i1007669
To prepare the database for direct path loads, you must run the setup script, catldr.sql, to create the necessary views. You need only run this script once for each database you plan to do direct loads to.
So I execute the following sql script:
$ORACLE_HOME/rdbms/admin/catldr.sql
Problem is, when I run this script I get multiple errors. E.g. (and there are a lot more than this, stuff about circular synonyms too):
grant select on gv_$loadistat to public
*
ERROR at line 1:
ORA-04063: view "SUKLTI.GV_$LOADISTAT" has errors
create or replace view v_$loadpstat as select * from v$loadpstat
*
ERROR at line 1:
ORA-01731: circular view definition encountered
Synonym created.
grant select on v_$loadpstat to public
*
ERROR at line 1:
ORA-04063: view "SUKLTI.V_$LOADPSTAT" has errors
create or replace view v_$loadistat as select * from v$loadistat
*
ERROR at line 1:
ORA-01731: circular view definition encountered
Synonym created.
grant select on v_$loadistat to public
*
ERROR at line 1:
ORA-04063: view "SUKLTI.V_$LOADISTAT" has errors
from x$kzsro
*
ERROR at line 15:
ORA-00942: table or view does not exist
And then when I try to run SQL*Loader with "direct=true" I receive the following errors:
ORA-26014: unexpected error on column SYS_NTEOzTt73hE9LgU+XYHax0tQ==.DUMMYCOL NAME
while retrieving virtual column status
ORA-01775: looping chain of synonyms
Note this was a clean Oracle install with some XML schema registered(8) and tables generated off the back of the schema.
Any ideas?
The catldr.sql script says:
Rem NAME
Rem catldr.sql
Rem FUNCTION
Rem Views for the direct path of the loader
Rem NOTES
Rem This script must be run while connected as SYS or INTERNAL.
From the error messages you seem to have run it as your normal user, SUKLTI, rather than as SYS. The documentation you linked to also stated it should be run once per database - not once per end-user.
It should have been run during database creation anyway, via the catalog.sql script, so I'm surprised you need to run it manually at all; and some of the errors suggest the objects it creates already existed. Though re-running it as SYS doesn't really look like it should hurt.
You can see which objects have been created by querying:
select object_type, object_name
from all_objects
where created > time_just_before_you_ran_the_script
You may need to cross-reference public synonyms with the all_synonyms view to check the table owner, and drop any objects it created from the SUKLTI schema as well as those new public synonyms. (But don't drop anything from the SYS schema...)
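A sketch of that cross-check, assuming the SUKLTI schema from the question:
select synonym_name, table_owner, table_name
from all_synonyms
where owner = 'PUBLIC'
and table_owner = 'SUKLTI';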
You may then need to re-run catldr.sql as SYS to recreate those synonyms pointing to the correct SYS objects.
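For example, from the database server (the ? is expanded to ORACLE_HOME by SQL*Plus):
sqlplus / as sysdba
SQL> @?/rdbms/admin/catldr.sql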
@AlexPoole
You are completely correct. The script had to be run as SYS.
As this was a "test db" we tore it down and ran the script as SYS at DB re-creation time.
Everything now working!
thanks for the reply

How do I copy or import Oracle schemas between two different databases on different servers?

What is the best way to copy a schema from one user/instance/server:
jdbc:oracle:thin:@deeb02:1535:DH, user pov
to another user/instance/server
jdbc:oracle:thin:@123.456.789.123:1523:orcl, user vrs_development
?
If you're using Oracle 10g+, you should be able to make this work with Data Pump:
expdp user1/pass1@db1 directory=dp_out schemas=user1 dumpfile=user1.dmp logfile=user1.log
And to import:
impdp user2/pass2@db2 directory=dp_out remap_schema=user1:user2 dumpfile=user1.dmp logfile=user2.log
Use the Oracle exp utility to take a dump of the schema from the first database:
exp user1/pass1@db1 owner=user1 file=user1.dmp log=user1.log
Then use the imp utility to populate the other schema in the other database:
imp user2/pass2@db2 fromuser=user1 touser=user2 file=user1.dmp log=user2.log
You can copy a schema directly over the network (without moving files from one server to another) using the Data Pump parameter NETWORK_LINK, as described here:
http://vishwanath-dbahelp.blogspot.com/2011/09/network-link-in-datapump.html
for example:
impdp -userid user/pass@destination_server LOGFILE=log.txt NETWORK_LINK=dblink_from_dest_to_source SCHEMAS=schema1 directory=DATA_PUMP_DIR
Check that the directory DATA_PUMP_DIR exists (select * from dba_directories) and points to the correct place on the destination_server file system.
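The database link named in NETWORK_LINK has to exist in the destination database and point back at the source. A minimal sketch using the link name from the example above (pov is the source user from the question; the password and TNS alias are placeholders):
create database link dblink_from_dest_to_source
connect to pov identified by source_password
using 'tns_alias_of_source';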

Importing selective data using impdp

I have an entire DB to be imported as a dump into my own. I want to exclude the data from certain tables (mostly because they are huge and not useful). I cannot entirely exclude those tables, since I need the table objects themselves (minus the data) and would have to re-create them in my schema if I did. Also, in the absence of those table objects, various foreign key constraints defined on other tables would fail to be imported and would need to be redefined. So I need to exclude just the data from certain tables; I want the data from all other tables, though.
Is there a set of parameters for impdp that can help me do that?
I would make two runs at it: The first I would import metadata only:
impdp ... CONTENT=METADATA_ONLY
The second would include the data only for the tables I was interested in:
impdp ... CONTENT=DATA_ONLY TABLES=table1,table2...
Definitely make two runs: one to create all the table objects; but in the second impdp run, instead of using TABLES, use EXCLUDE:
impdp ... Content=data_only exclude=TABLE:"IN ('table1', 'table2')"
The other way works, but this way you only have to list the tables you don't want versus all that you want.
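Put together, a sketch of the two runs using the EXCLUDE variant (the credentials, directory, and dump file name are placeholders; TABLE1 and TABLE2 stand for the tables whose data you want to skip):
impdp user/password DIRECTORY=dp_dir DUMPFILE=full_db.dmp CONTENT=METADATA_ONLY
impdp user/password DIRECTORY=dp_dir DUMPFILE=full_db.dmp CONTENT=DATA_ONLY EXCLUDE=TABLE:"IN ('TABLE1','TABLE2')"
Depending on your shell, the quotes in EXCLUDE may need escaping; putting the parameters in a parameter file (as in the example further down) avoids that.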
If the table is too big to export/import in full, you can use the SAMPLE parameter in the expdp command to export whatever percentage of the table's data you want, e.g.
$ expdp tables=T100test DIRECTORY=expimp1 DUMPFILE=test12.dmp SAMPLE=10
This command will export only 10% of the T100test table's data.
Syntax:
EXCLUDE=[object_type]:[name_clause],[object_type]:[name_clause]
INCLUDE=[object_type]:[name_clause],[object_type]:[name_clause]
Examples of operator-usage:
EXCLUDE=SEQUENCE
or EXCLUDE=TABLE:"IN ('EMP','DEPT')"
or EXCLUDE=INDEX:"= 'MY_INDX'"
or INCLUDE=PROCEDURE:"LIKE 'MY_PROC_%'"
or INCLUDE=TABLE:"> 'E'"
The parameters can also be stored in a parameter file, for example exp.par:
DIRECTORY = my_dir
DUMPFILE = exp_tab.dmp
LOGFILE = exp_tab.log
SCHEMAS = scott
INCLUDE = TABLE:"IN ('EMP', 'DEPT')"
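The parameter file is then passed to expdp on the command line, for example (the credentials are placeholders):
expdp scott/password parfile=exp.par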
It seems you can also exclude data directly when importing, using the impdp QUERY parameter:
impdp [...] QUERY='TABLE_NAME:"WHERE rownum = 0"'
cf. community.oracle.com
