I want to import a DB using impdp. I would like to import the index names and, after that, rebuild the indexes manually.
I used the option
EXCLUDE=index
but then I can't find the indexes in the ALL_INDEXES view.
Is there a way to import the index definitions without building them, and then build them manually afterwards?
Import all tables and data first, then run impdp again with the following parameters:
SQLFILE=create_index.sql include=index
With SQLFILE, the import doesn't create any indexes; instead it writes all the CREATE INDEX statements into a SQL file. Using this file you can create the indexes manually after the table and data import has finished.
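Putting the two steps together, a sketch might look like this (the credentials, directory object, and dump file name are placeholders):
impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=mydb.dmp EXCLUDE=index
impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=mydb.dmp INCLUDE=index SQLFILE=create_index.sql
The first run loads the tables and data without any indexes; the second run only writes the CREATE INDEX statements into create_index.sql, which you can run in SQL*Plus whenever you are ready to build the indexes.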
You can also export/import only the metadata; this will import only the table and index structures:
CONTENT=metadata_only
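For example, with the same placeholder names as above:
impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=mydb.dmp CONTENT=metadata_only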
I am trying to use imp to import a table from user abdou2 to user abdou1.
I exported my table from abdou2 inside a file dump using:
exp abdou2/root file=CLIENTS.dmp tables=CLIENTS
Then, I created in abdou1 an exact same Table but empty using:
CREATE TABLE imp_CLIENTS AS SELECT * FROM abdou2.CLIENTS WHERE 1=2;
Now I want to use imp to import abdou2.CLIENTS directly into abdou1.imp_CLIENTS. Both instructions mentioned above worked. Is this possible? Thank you.
A better working solution was, indeed, to use expdp/impdp.
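A sketch of that approach, assuming a Data Pump directory object named dp_dir and placeholder passwords:
expdp abdou2/root DIRECTORY=dp_dir DUMPFILE=CLIENTS.dmp TABLES=CLIENTS
impdp abdou1/pass DIRECTORY=dp_dir DUMPFILE=CLIENTS.dmp REMAP_SCHEMA=abdou2:abdou1 REMAP_TABLE=abdou2.CLIENTS:imp_CLIENTS
REMAP_SCHEMA redirects the objects from abdou2 to abdou1 and REMAP_TABLE renames CLIENTS to imp_CLIENTS. Note that REMAP_TABLE only renames tables that the import itself creates, so you would skip the manual CREATE TABLE step and let impdp create imp_CLIENTS for you.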
I tried to import (tables, procedures, functions, etc.) from a dump file. I made a mistake by
executing KILL -9 <PROCESS_ID> while the import was still running.
So I started the import again. This time I made another mistake by NOT specifying
TABLE_EXISTS_ACTION=TRUNCATE, so the tables were imported with duplicate records.
I want to get rid of the duplicate data. There are more than 500 tables involved.
I am planning to import again, truncating each table first and then importing only the data.
Below is the import command I have come up with. Will this command import ONLY the table data (records), truncating each table first and then inserting just the data?
impdp DIRECTORY=MY_DIRECTORY dumpfile=EXP_MY_DUMP.dmp INCLUDE=TABLE_DATA TABLE_EXISTS_ACTION=TRUNCATE
I could try running it myself to find out whether it works, but I have already tried twice and failed.
Also, I don't want to import the indexes, sequences, etc. again. Just the table records.
Remove INCLUDE=TABLE_DATA. With TABLE_EXISTS_ACTION=TRUNCATE the import will not execute CREATE TABLE for tables that already exist, so that should work.
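In other words, something like this, using your original directory and dump file:
impdp DIRECTORY=MY_DIRECTORY dumpfile=EXP_MY_DUMP.dmp TABLE_EXISTS_ACTION=TRUNCATE
The indexes, sequences, and other objects that already exist should just be reported as "already exists" errors and skipped, while each existing table is truncated and its data reloaded.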
Does the new Oracle tablespace name have to match the old tablespace name?
For example:
The tablespace name in the dump file is A, and I created a new tablespace B. The tables could be imported, but with many errors:
ORA-00959: tablespace 'ECASYS' (the old one) does not exist.
This is my import statement:
imp userid='ZHPE/zhpe#ORCL' file='E:\xxxx\xxxx2013-08-15Bak\130815.dmp' log='D:\app\Administrator\oradata\orcl\ZHPE.log' full=y ignore=y;
Does the new tablespace really have to match the old one?
Help!
If you're forced to use the legacy exp and imp tools then the tablespace cannot be changed during the import itself using command-line options. If you can, switch to using the datapump versions, expdp and impdp, and then follow #schurik's advice.
If you can't do that then you'll need a workaround, which is to create the schema objects manually first.
If you run imp with the indexfile option then it will create a file containing the DDL for the tables and indexes:
imp user/password indexfile=schema.sql file=...
The table creation DDL is commented out with REM markers, which you need to remove. You can then edit it to change the tablespace and any other storage options that are no longer appropriate. Then run that schema creation SQL to create all the tables as empty.
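On a Unix-like system, the uncommenting step can be sketched with sed; check the exact spacing imp used in your file first, as the two-space prefix here is an assumption:
sed 's/^REM  //' schema.sql > schema_edit.sql
You would then edit schema_edit.sql to fix the tablespace and storage clauses before running it.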
Then run the normal import again, but with the ignore=y flag so it doesn't complain (much) that the tables now already exist. The data will still be inserted into those existing tables.
This will be a bit slower if you create the indexes as well as the tables beforehand; normally it would create the tables, insert the data, and then build the indexes, which is more efficient. If the slowdown is significant then you can split your schema.sql into separate table and index creation files and do the same thing manually - create the tables, run the import with ignore=y and indexes=n (to stop it trying and failing to create them), and then create the indexes yourself afterwards.
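For example, the tables-only data load would look something like:
imp user/password file=... ignore=y indexes=n
with the index creation script run afterwards in SQL*Plus.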
Clearly this is a bit of a pain, and one of many reasons that switching to datapump is a good idea.
Take a look at the REMAP_TABLESPACE import parameter, e.g.:
REMAP_TABLESPACE=A:B
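A fuller sketch for this case, using the old tablespace name from the ORA-00959 message and a placeholder directory object:
impdp ZHPE/zhpe DIRECTORY=dp_dir DUMPFILE=130815.dmp REMAP_TABLESPACE=ECASYS:B
Bear in mind that REMAP_TABLESPACE is a Data Pump (impdp) parameter, and impdp cannot read dump files produced by the legacy exp tool, so the export would also need to be redone with expdp.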
I have encountered a problem importing a dump file into a new database.
The dump file requires a tablespace which does not exist in the target database.
To create the tablespace I would need to change some scripts which are read-only. For this reason it is complicated to export the table structure through Oracle's imp tool, so my colleague changed the dump file manually, and it could then be imported.
Is it OK to change the dump file manually in order to import it, if that is the quickest way?
If you are comfortable changing the dump file manually then it is fine, provided you are aware of the complete structure of the .dmp file.
I would suggest you use Data Pump instead, as it can remap the tablespace of the existing schema to the new one. Performance-wise, Data Pump is also much faster than the normal exp/imp.
As an alternative, get a dummy database and:
1. Create the tablespace/schema there.
2. Do the import there with ROWS=N.
3. Use ALTER TABLE ... MOVE ... to put the tables into the desired tablespace (a sketch for generating these statements follows below).
4. Export the table structures from there.
5. Import the corrected structures into the target database.
6. Import the data with IGNORE=Y so that it can be loaded over the existing structures.
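For the ALTER TABLE ... MOVE step, a query along these lines can generate the statements for every plain heap table in the schema (new_ts is a placeholder for the target tablespace; LOB and partitioned tables need extra clauses):
SELECT 'ALTER TABLE ' || table_name || ' MOVE TABLESPACE new_ts;' FROM user_tables;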
If you create the user with a default tablespace that actually exists, you can import with rows=n and ignore=y, and that should bring the objects in for you into that tablespace.
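A minimal sketch of that, with placeholder names and assuming tablespace B from the question; the idea is to give the user quota only on B (and no UNLIMITED TABLESPACE privilege) so that imp falls back to the default tablespace when the original one is missing:
CREATE USER zhpe2 IDENTIFIED BY secret DEFAULT TABLESPACE b QUOTA UNLIMITED ON b;
imp system/manager file=130815.dmp fromuser=zhpe touser=zhpe2 rows=n ignore=y
A second run without rows=n (and still with ignore=y) would then load the data into those objects.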