I created a dump of a local oracle database like this:
expdp mydb/passwd schemas=myschema dumpfile=mydumpfile.dmp logfile=oralog.log
I sent the dump to someone who is supposed to import it into his Oracle server. Now he tells me the import fails with tablespace errors (like "tablespace XYZ is not available"), where tablespace XYZ has no counterpart in his database. He also asks me for information about the tablespaces referenced in the dump.
Since I usually work with MySQL and have limited knowledge of these Oracle tablespace things, I would really appreciate some advice.
Use the REMAP_TABLESPACE parameter.
For example,
REMAP_TABLESPACE=(source1:destination1,source2:destination1,source3:destination1,source4:destination1)
Go through the documentation about Data Pump Import. A small quote -
Multiple REMAP_TABLESPACE parameters can be specified, but no two can
have the same source tablespace. The target schema must have
sufficient quota in the target tablespace.
Note that use of the REMAP_TABLESPACE parameter is the only way to
remap a tablespace in Data Pump Import. This is a simpler and cleaner
method than the one provided in the original Import utility. That
method was subject to many restrictions (including the number of
tablespace subclauses) which sometimes resulted in the failure of some
DDL commands.
By contrast, the Data Pump Import method of using the REMAP_TABLESPACE
parameter works for all objects, including the user, and it works
regardless of how many tablespace subclauses are in the DDL statement.
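For the scenario in the question, the receiving side's import could look like this sketch (the connection details and the target tablespace USERS are assumptions to adapt; XYZ is the missing tablespace from the error):
impdp hisuser/hispasswd schemas=myschema directory=DATA_PUMP_DIR dumpfile=mydumpfile.dmp logfile=imp.log remap_tablespace=XYZ:USERS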
I'm a total novice in terms of Oracle DB knowledge. I'm trying to understand the IMPDP command and its scope.
Issue: there are about 500 tables in a particular DB, and many of them (60-70% or more) come across with zero records when we import the data into a fresh Oracle DB (we get the data from a vendor who hosts the DB). The doubt is: how can most of the tables in a DB have zero records (why were they created in the first place, then)? We're also assuming the vendor may be generating the .DMP files with a specific user who has no access to those tables, hence the zero counts. When we asked the vendor, they said that's not how Oracle works; they've provided a user export dump and said, "Schema is a collection of database objects owned by a specific user. Those objects include tables, indexes, views, functions, stored procedures, etc."
When asked about the zero-records issue, they said they're pulling correctly and have no idea why so many tables are empty. The SO community has great Oracle DB experts, so can anyone shed some light on:
1) What might be the issue?
2) Is our assumption correct (i.e., that the user doesn't have access to the tables which came across with zero records)?
3) What's the right way forward?
4) Anything else you want to add.
The vendor is correct - the utility used to generate the export, EXPDP (the complement to IMPDP), can create a full dump of all of the database objects of a specific user. However, the parameters used to generate the export can vary greatly, and it's absolutely possible for an export to contain no table data if the EXPDP command/parameters are specified that way. For example, imagine that someone exports a specific schema using the following command:
expdp [USER]@[DATABASE] schemas=test directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log query=TEST.TABLE:'"WHERE row_date>sysdate"'
While the export is being generated, the rows in that specific table are evaluated against the WHERE condition: only rows dated in the future would be exported, and everything dated up to sysdate would be skipped. If a condition like that is applied to the entire export, you'll have tables with 0 rows in the dump file.
That is just an example - it might also be the case that the tables really do have 0 rows. This is possible for a lot of reasons: perhaps it is an older schema with tables that were truncated at some point; perhaps that particular database isn't used often, and rows were simply never added to the tables; maybe a developer or another DBA created a bunch of unnecessary tables and they were never dropped. There is a plethora of potential reasons for a schema to have empty tables, and it doesn't mean something is wrong with the database or the export file being generated. Applications and their technical requirements change all the time, and it's possible that the schema simply wasn't updated when those tables were no longer needed.
The first thing I would recommend is:
Ask the vendor to provide record counts of each table in that schema from their end for validation purposes. This will tell you whether the tables are empty in the database; if they are empty there, they will be empty in your export. This is very simple and can be achieved with a query like select owner, table_name, num_rows, sample_size, last_analyzed from all_tables where owner = '[SCHEMA]'; provided that their table statistics are up to date.
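If their table statistics might be stale, exact counts can be generated instead. A minimal sketch (the owner name is a placeholder) that emits one count query per table:
select 'select count(*) from ' || owner || '.' || table_name || ';'
from all_tables
where owner = '[SCHEMA]';
Running the generated statements returns the true row counts.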
If this is a big concern for you, you can always ask them to exclude those tables in the export with a command like:
expdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Or simply exclude them during your import with a command like:
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Either way should work, but be careful to ensure there will be no issues from a constraint/child-record perspective. You can also exclude the constraints themselves; there are many ways to work around it.
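For instance, keeping the same placeholder style as above, a Data Pump import can skip the constraints in the dump entirely (note that NOT NULL constraints are never excluded):
impdp [USER]@[DATABASE] schemas=test exclude=CONSTRAINT directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log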
If there are inconsistencies between the counts and the rows imported, I would recommend asking the vendor for the specific EXPDP command or parameter file that was used to generate the export. This will tell you whether the empty tables are being caused by a clause in the export command.
It's impossible to know whether your assumption is correct without knowing more about the source database or seeing the commands being used to generate the export. I would ask the vendor to verify record counts before assuming that it's a permissions issue - empty tables are created all the time.
Must the new Oracle tablespace name match the old tablespace name?
For example:
The tablespace in the dump file is named A, and I created a new tablespace B. The import does create the tables, but it throws many errors like:
ORA-00959: tablespace 'ECASYS' (the old one) does not exist.
This is my import statement:
imp userid='ZHPE/zhpe@ORCL' file='E:\xxxx\xxxx2013-08-15Bak\130815.dmp' log='D:\app\Administrator\oradata\orcl\ZHPE.log' full=y ignore=y
Must the new tablespace name really match the old one?
Help!
If you're forced to use the legacy exp and imp tools then the tablespace cannot be changed during the import itself using command-line options. If you can, switch to using the Data Pump versions, expdp and impdp, and then follow @schurik's advice.
If you can't do that then you'll need a workaround, which is to create the schema objects manually first.
If you run imp with the indexfile option then it will create a file containing the DDL for the tables and indexes:
imp user/password indexfile=schema.sql file=...
The table creation DDL is commented out with REM markers, which you need to remove. You can then edit it to change the tablespace and any other storage options that are no longer appropriate. Then run that schema creation SQL to create all the tables as empty.
Then run the normal import again, but with the ignore=y flag so it doesn't complain (much) that the tables now already exist. The data will still be inserted into those existing tables.
This will be a bit slower if you create the indexes as well as the tables beforehand; normally it would create the tables, insert the data, and then build the indexes, which is more efficient. If the slowdown is significant then you can split your schema.sql into separate table and index creation files and do the same thing manually - create the tables, run the import with ignore=y and indexes=n (to stop it trying and failing to create them), and then create the indexes yourself afterwards.
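Assuming a full-mode import, the whole workaround might look like this sketch (file names are placeholders):
imp user/password file=mydump.dmp full=y indexfile=schema.sql
# edit schema.sql: strip the REM markers, fix the tablespace/storage clauses,
# and optionally split it into tables.sql and indexes.sql
sqlplus user/password @tables.sql
imp user/password file=mydump.dmp full=y ignore=y indexes=n
sqlplus user/password @indexes.sql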
Clearly this is a bit of a pain, and one of many reasons that switching to datapump is a good idea.
Take a look at the REMAP_TABLESPACE import parameter, e.g.
REMAP_TABLESPACE=A:B
I'm attempting to recover the data from a specific table that exists in a system table dump I performed earlier. I would like to append the rows in the dump to any rows that may exist in the active table. The problem is that the name of the table in the dump is likely not the same as what exists in the database currently (they're dynamically created with a prefix of ARC_TREND_). In addition, I don't know the name of the table as it exists in the dump; I was hoping to use SQL Developer to analyze the dump file, as I can recognize the correct table by its columns and its existing rows.
While I'm going on blind faith that SQL Developer can work with my dump file, when attempting to open it I get a Java heap OutOfMemory exception. I've adjusted the maximum heap size from 640m to 1024m in both sqldeveloper.bat and sqldeveloper.conf, but to no avail.
Can someone recommend a course of action to recover the data from a table in an exp-created dump file? A graphical tool would be nice, but I'm no stranger to the command line. I need to analyze the tables in the dump in order to pick the correct one out. Then I assume I can use imp TABLES= to bring it back into the active instance. It likely won't match the existing table name, so I will use SQL Developer to copy the rows from the imported table to the table where I need them to be.
The dump was taken from a Linux server running 10g, and will be imported to (the same server & database instance, upgraded) an 11g instance of the same database.
Thanks
Since you're referring to imp rather than impdp, I assume this wasn't exported with data pump. Either way, I doubt you'll get anything useful through SQL Developer.
Fortunately most of what you're trying to do is quite easy from the command line; just run imp with the INDEXFILE parameter, which will give you a text file containing all the table (commented out with REM) and index creation commands. From that you should be able to spot the table from its column names.
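For example (the dump file name is a placeholder), something along these lines produces the DDL listing, which you can then search for the ARC_TREND_ prefix or a recognisable column name:
imp user/password file=olddump.dmp full=y indexfile=tables.sql
grep -in "arc_trend" tables.sql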
You can't really see any row data though, so if there's more than one possible match you might need to import several tables and inspect the data in them in the database to see which one you really want.
I'm trying to take an Oracle DB backup using expdp. I have a specific case where an application table resides in the SYSTEM tablespace.
The backup export of this schema is created successfully with the options SCHEMAS=SYSTEM and INCLUDE=TABLE:"like 'USER%'", which corresponds to my application tables.
I have created another schema for the user impexp, which has a different tablespace allocated to it.
When I try to import the .dmp file as impexp, the import fails, stating "SYSTEM"."USER_SYS_MAST" exists.
Is there a way to import this table into the newly created schema? I also tried the option REMAP_SCHEMA=SYSTEM:IMPEXP, but it errors out with ORA-39013: Remapping the SYSTEM schema is not supported.
Summarizing: I want to import my application tables, currently in the SYSTEM tablespace, into the new tablespace 'IMPEXP'.
Please let me know if I am going wrong somewhere and trying to do something that isn't supported.
Any help will be greatly appreciated.
This is one of the reasons why putting application tables in the SYS or SYSTEM schemas is considered bad practice: these schemas are vital to the running of the database and should not be meddled with.
You have compounded this bloomer by naming your tables with a prefix of USER, which is the same convention the data dictionary uses.
What you need to do is create a new schema to hold these tables. Grant it whatever privileges it needs that made you think it had to be owned by SYSTEM. Then move those tables out of the SYSTEM schema.
To do a proper job you should change your application to use this new schema, but as temporary fix you could give SYSTEM rights on the tables and build synonyms for them. If you have the time, change the application. It will cause you less grief in the long run.
Either way, you will be able to export the data out of the old database and into the target database using this new schema.
Agree with APC.
In your specific case, I would look at DBMS_METADATA.GET_DDL to extract the DDL so I can recreate all the objects in the new schema. There are options to exclude the TABLESPACE component so they would get created in the new schema's default tablespace.
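A minimal sketch of that approach (the table name is taken from the question; run as a suitably privileged user):
-- strip tablespace clauses from DDL generated in this session
begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'TABLESPACE', false);
end;
/
select dbms_metadata.get_ddl('TABLE', 'USER_SYS_MAST', 'SYSTEM') from dual;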
Then I would simply do INSERT /*+ APPEND */ INTO newschema.table SELECT * FROM system.table;
If space is an issue, you may need to TRUNCATE or DROP individual tables immediately after they are successfully copied.
We get *.dmp files from a client; they contain some masked table data, along with indexes and constraints.
I do have those table structures (including indexes and constraints) at my end.
I want to import just the data, without the indexes and constraints present in the .dmp file, into Oracle 10g using the 'imp' command.
I am aware of the 'imp' command; please let me know which of its options will import only the data.
I have tried rows=yes indexes=no, but this does not help.
You should be able to specify indexes=N and constraints=N.
For more info:
%> imp help=y
The Oracle imp documentation also has good information on the available options.
I am assuming from your post that you already have the tables and ancillary structures in your database, and you just want to suppress the error messages. If that is indeed the case, the option you want is IGNORE=Y.
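For example, a sketch that combines IGNORE=Y with the indexes/constraints flags suggested above (the dump file name is a placeholder):
imp user/password file=client.dmp full=y ignore=y rows=y indexes=n constraints=n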
The Oracle documentation is available online. You don't say what version you're on, but as you're using imp I'd say 9i is a good fit. (On later versions you should check out Data Pump instead.)
Import the dump with the show=y option. This extracts the DDL from the dmp file into the log without importing anything. You can then remove the index and constraint statements from that output and execute the rest against the database.
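A sketch of that approach (file names are placeholders); note that show=y only lists the contents and imports nothing:
imp user/password file=client.dmp full=y show=y log=ddl_only.log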
Here you can see a lot of exp/Data Pump related examples:
http://www.acehints.com/p/site-map.html
You need to disable all triggers and then import your data with the CONSTRAINTS=N argument. Consider importing a table G3E_COMPONENT that has constraints, foreign keys and triggers:
SQL>alter table G3E_COMPONENT disable all triggers;
Then import your data:
imp userid=USER/XXXXX@ORCL CONSTRAINTS=N DATA_ONLY=Y STATISTICS=NONE file=export.exp log=imp.log TABLES=G3E_COMPONENT
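and, once the import has finished, re-enable the triggers:
SQL> alter table G3E_COMPONENT enable all triggers;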
Should do the trick
IMHO, imp can't prevent constraints being applied and triggers being fired; ignore=y only ignores the errors that arise. Maybe Data Pump allows it, I don't know.
So you have to:
- manually disable all triggers and constraints on the imported tables (the sketch below generates the statements)
- do an import with tables=<table names> rows=Y indexes=N constraints=N
- re-enable the triggers
- enable and validate the constraints, and resolve any errors (find and edit/remove the offending values)
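A minimal sketch that generates the disable statements for the current schema's foreign keys and triggers (adapt the filters to the imported tables; afterwards, re-enable with the matching ENABLE and ENABLE VALIDATE statements):
select 'alter table ' || table_name || ' disable constraint ' || constraint_name || ';'
from user_constraints
where constraint_type = 'R';
select 'alter table ' || table_name || ' disable all triggers;'
from user_tables;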
Be careful to use an imp version that exactly matches your DB version. I had trouble with this...
Use ignore=y. It will ignore the creation errors, since you already have the schema in place.