Oracle dump file table data extraction to file (original exp format)

I have Oracle dump files created with original exp (not expdp) (EXPORT:V10.02.01, Oracle 10g). They contain only table data for four tables.
1) I want to extract the table data into files (flat/fixed-width, CSV, or other text format) without importing them into another Oracle DB. [preferred]
2) Alternatively, I need a solution that can import them into an ordinary user (not SYSDBA) so that I can use other tools to extract the data.
My databases are 11g, but I can find 10g databases if needed. I have TOAD for Oracle Xpert 11.6.1.6 at my disposal. I am a moderately experienced Oracle programmer, but I haven't worked with EXP/IMP before.
(The information below has been obscured to protect the data.)
Here's how the dump files were created:
exp FILE=data.dmp \
LOG=data.log \
TABLES=USER1.TABLE1,USER1.TABLE2,USER1.TABLE3,USER1.TABLE4 \
INDEXES=N TRIGGERS=N CONSTRAINTS=N GRANTS=N
Here's the log:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: grants on tables/views/sequences/roles will not be exported
Note: indexes on tables will not be exported
Note: constraints on tables will not be exported
About to export specified tables via Conventional Path ...
Current user changed to USER1
. . exporting table TABLE1 271 rows exported
. . exporting table TABLE2 272088 rows exported
. . exporting table TABLE3 2770 rows exported
. . exporting table TABLE4 21041 rows exported
Export terminated successfully without warnings.
Thank you in advance.

UPDATE:
TOAD version 9.7.2 will read a "dmp" file generated by EXP.
Select DATABASE -> EXPORT -> EXPORT FILE BROWSER from the menus.
You need to have the DBA utilities for TOAD installed. There is no real guarantee that the file is parsed
correctly, but the data will show up in TOAD in the schema browser.
NOTE: The only other known utility that will read a dmp file generated by the exp utility is the imp utility. You should not try to parse the dump file yourself; if you do, you run the risk of parsing it incorrectly.
If you already have the data in an ORACLE table:
To extract the table data into a file, create a shell script that calls SQL*PLUS and causes SQL*PLUS to spool the table data to a file. You need one script per table.
#!/bin/sh
#NOTE: The path to sqlplus will vary on your system,
# but it is generally $ORACLE_HOME/bin/sqlplus.
#YOU NEED TO UNCOMMENT THESE LINES AND SET APPROPRIATELY.
#export ORACLE_SID=YOUR_SID
#export ORACLE_HOME=PATH_TO_YOUR_ORACLE_HOME
#export PATH=$PATH:$ORACLE_HOME/bin
sqlplus -s user/pwd@db << EOF
set pagesize 0
set linesize 255
set heading off
set echo off
SPOOL TABLE1_DATA.txt
REM FOR EACH COLUMN IN TABLE1, SET THE FORMAT
COL FIELD_ID format 999,999,999
COL FIELD_DATA format a99
select FIELD_ID,FIELD_DATA from TABLE1;
SPOOL OFF
EOF
Make sure you set the linesize large enough for the longest row and set the format of each column. See FIELD_ID above for a numeric column format and FIELD_DATA for a character column format.
NOTE: You need to remove the "N rows selected" line from the end of the file (or add "set feedback off" to the script so it is never written).
(You can still import the original dump file into another schema using the imp utility.)
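If CSV output is preferred over fixed-width, a minimal variation of the same spool approach works. This is only a sketch: the file name TABLE1_DATA.csv is a placeholder, and character columns may need quoting or escaping if they can contain commas.
#!/bin/sh
# Sketch: spool TABLE1 as CSV by building each row as one comma-separated string,
# so no per-column formats are needed. Same connection details as the script above.
sqlplus -s user/pwd@db << EOF
set pagesize 0
set linesize 255
set heading off
set echo off
set feedback off
set trimspool on
SPOOL TABLE1_DATA.csv
select FIELD_ID || ',' || FIELD_DATA from TABLE1;
SPOOL OFF
EOF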

Related

Firebird 2.5 query returns COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed

I have an old source database in which a custom collation UTF8_CI_AI_NUMERIC_SORT was apparently created. I'm running it in Docker via the image jacobalberty/firebird:2.5-ss. The database was originally created on a Windows machine.
When I try to do a query on the table where this collation was used, I get the error:
SQL> select * from "InvoiceService";
Statement failed, SQLSTATE = 22021
COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Show collations returns the following:
SQL> show collations;
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
I tried the following fixes:
add entry to fbintl.conf:
<charset UTF8>
intl_module fbintl
collation UTF8_CI_AI_NUMERIC_SORT
</charset>
Then running the sp_register_character_set("UTF8", 4) procedure, which fails with an error about duplicate collations (because UTF8_CI_AI_NUMERIC_SORT is already defined in the DB).
Dropping collation
SQL> drop collation UTF8_CI_AI_NUMERIC_SORT;
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-Collation UTF8_CI_AI_NUMERIC_SORT is used in table InvoiceService (field name NAME) and cannot be dropped
Adding a new column that would use a different collation, but I can't even add it:
SQL> ALTER TABLE "InvoiceService" ADD NAME2 VARCHAR(600) CHARACTER SET UTF8;
Statement failed, SQLSTATE = 22021
unsuccessful metadata update
-InvoiceService
-COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Using gbak to restore only the metadata, fixing the schema, and then inserting only the data, but gbak does not support restoring data only.
...
I'm out of ideas now. What else could I try?
So, I finally managed to solve the problem. What I did was to create a DB backup with
gbak -v -t -user SYSDBA /path/to/source.fdb /path/to/backup.fbk
Then use the 3.0 version of Docker image with Firebird DB (jacobalberty/firebird:3.0) and restore from backup with
gbak -create /path/to/backup.fbk /path/to/restored3.fdb
Note that the same backup-restore procedure without switching the Docker image did not work.
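For reference, this is roughly how the restore can be run against the 3.0 image; it is only a sketch, the bind-mount path and container name are assumptions, and gbak may need its full path (e.g. /usr/local/firebird/bin/gbak) depending on the image.
# start a Firebird 3.0 container with the backup directory mounted (paths are assumptions)
docker run -d --name fb30 -v /path/to:/backups jacobalberty/firebird:3.0
# run the restore inside the container
docker exec fb30 gbak -create /backups/backup.fbk /backups/restored3.fdb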
I didn't have to do anything else. There's only a slight difference in SHOW COLLATIONS; output:
// originally:
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
// restored DB
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'COLL-VERSION=58.0.6.50;NUMERIC-SORT=1'

SQLPLUS connection to different dbs

Hello, I want to connect to the following DBs in a loop and execute statements on each:
conn support/support@sp0666to
conn support/support@sp0667to
conn support/support@sp0668to
Is there any way to do this in sqlplus?
Thank you for your answers in advance!
Create one script (doWork.sql) that contains the majority of what you want to do:
conn &1/&2@&3
select EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
from EMPLOYEE
where status = 25;
In a separate script (goToWork.sql):
set lines 1500 pages 10000
set colsep ';'
set sqlprompt ''
set heading on
set headsep off
set newpage none
column tm new_value file_time noprint
select to_char(sysdate, 'DDMMYYYY_HH24.MI') tm from dual;
accept user
accept pass
spool C:\Users\NANCHEV\Desktop\parked.csv
@@doWork &user &pass sp0666to
@@doWork &user &pass sp0667to
@@doWork &user &pass sp0668to
spool off;
exit
If you want separate files, then move the two spool commands to the doWork.sql file.
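For example, a sketch of doWork.sql spooling its own file per database; the output path and the reuse of &file_time from goToWork.sql are assumptions:
conn &1/&2@&3
spool C:\Users\NANCHEV\Desktop\parked_&3_&file_time..csv
select EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
from EMPLOYEE
where status = 25;
spool off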
Assuming you want to run the same set of queries for each database, I'd create a script file (e.g. main_statements.sql) containing those statements.
Then, if the list of databases was static, I'd create a second script file (e.g. run_me.sql) in the same directory, with contents along the lines of:
connect &&user/&&password@db1
@@main_statements.sql
connect &&user/&&password@db2
@@main_statements.sql
connect &&user/&&password@db3
@@main_statements.sql
...
If, however, the databases are static but the list is contained in a database somewhere, then I'd write a script (e.g. run_me.sql) that generates a script, something like:
set echo off
set feedback off
set verify off
spool databases_to_run_through.sql
select 'connect '||username||'/'||password||'@'||database_name||chr(10)||
'@@main_statements.sql'
from list_of_databases_to_query;
spool off;
@databases_to_run_through.sql
N.B. untested. Also, I have assumed that your table contains the usernames and passwords for each DB that needs to be connected to; if that's not the case, you'll have to work out how to handle them. Maybe they're all the same, in which case you can hardcode them, or better yet use substitution variables (e.g. &&username) to avoid having to store them in a plain-text file; you'd then have to enter them at runtime.
You'll also need to run the script from the same directory, otherwise you could end up with the generated script not being created in the same directory as your main_statements.sql equivalent script.
Yes, it's possible; you can use an Oracle database link (DB link) to connect to the different DBs in your example.
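A sketch of that approach (the link name is made up; it assumes the account may create database links and that sp0667to is a resolvable TNS alias):
CREATE DATABASE LINK sp0667to_link
  CONNECT TO support IDENTIFIED BY support
  USING 'sp0667to';
-- then query the remote table through the link
SELECT EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
FROM EMPLOYEE@sp0667to_link
WHERE status = 25;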

How to load Triggers to Teradata Server using bteq

We're migrating a database from Oracle to Teradata.
We have .sql files with valid trigger DDL and .bteq files with .compile commands for these triggers. But when we run these .bteq files we get errors and the triggers are not loaded.
For example, file td_instrg1.sql contains trigger definition:
CREATE TRIGGER TD_INSTRG1
AFTER INSERT
ON TD_EMPLOYEES
REFERENCING NEW AS X1
FOR EACH ROW
WHEN(X1.id is not null)
BEGIN ATOMIC
(INSERT INTO TD_EMPLOYEES1 VALUES(X1.id, X1.name, X1.monthly_income);)
END;
and file td_instrg1.bteq contains the following commands:
.logon vmdbsrv016/dbc, dbc;
DATABASE twm;
.compile FILE=td_instrg1.sql;
.logoff;
Please advise how to load triggers from scripts using bteq utility.
The .COMPILE command in BTEQ is reserved for the compilation of Teradata stored procedures. Your DDL statements for the triggers can be executed directly. If you have separate files containing the DDL you can reference them from within BTEQ using the .RUN command:
.logon vmdbsrv016/dbc, {password};
DATABASE twm;
.RUN FILE=td_instrg1.sql;
.logoff;
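To execute the corrected script, BTEQ can read it straight from standard input (a usage sketch; the file names match the example above):
bteq < td_instrg1.bteq > td_instrg1.out 2>&1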

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So I need to inspect the .dmp file and list all of the schemas in it. How do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR
dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command to be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp exports the DDL of a dmp backup to a file if you use the SQLFILE parameter. For example, put this into a script and run it:
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
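Once ddl_dump.txt exists, the owning schemas can be pulled out quickly, for example:
grep "CREATE USER" ddl_dump.txt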
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My Solution: Social engineering, I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, the file also contained the command line used to create the dump, the directories used, the par files used, the Windows version it was run on, and the export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To produce a SQLFILE with impdp from a full export, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low privileged user and use the MASTER_ONLY=YES option, then inspect the master table. e.g.
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql)
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
Lots of Data Pump options are explained here: http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | sort -u
cat -v turns the dump file into visible text.
grep -o shows only the match, so we don't see really long lines.
sort -u removes duplicate lines, so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
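For instance, a small tweak that strips the tags and leaves only the distinct schema names (a sketch):
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>[^<]*</OWNER_NAME>' | sed 's/<[^>]*>//g' | sort -u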
My solution (similar to KyleLanser's answer) (on a Unix box):
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers I came up with this expression that should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | sort -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested for a DMP file from Oracle 19.8 version.

How to import an Oracle dump into a different tablespace

I want to import an oracle dump into a different tablespace.
I have a tablespace A used by User A. I've revoked DBA on this user and given him the grants connect and resource. Then I've dumped everything with the command
exp a/*** owner=a file=oracledump.DMP log=log.log compress=y
Now I want to import the dump into the tablespace B used by User B. So I've given him the grants on connect and resource (no DBA). Then I've executed the following import:
imp b/*** file=oracledump.DMP log=import.log fromuser=a touser=b
The result is a log with lots of errors:
IMP-00017: following statement failed with ORACLE error 20001: "BEGIN DBMS_STATS.SET_TABLE_STATS
IMP-00003: ORACLE error 20001 encountered
ORA-20001: Invalid or inconsistent input values
After that, I've tried the same import command but with the option statistics=none. This resulted in the following errors:
ORA-00959: tablespace 'A_TBLSPACE' does not exist
How should this be done?
Note: a lot of columns are of type CLOB. It looks like the problems have something to do with that.
Note2: The oracle versions are a mixture of 9.2, 10.1, and 10.1 XE. But I don't think it has to do with versions.
You've got a couple of issues here.
Firstly, the different Oracle versions you're using are the reason for the table statistics error - I had the same issue when some of our Oracle 10g databases got upgraded to Release 2 while others were still on Release 1 and I was swapping .DMP files between them.
The solution that worked for me was to use the same version of exp and imp tools to do the exporting and importing on the different Database instances. This was easiest to do by using the same PC (or Oracle Server) to issue all of the exporting and importing commands.
Secondly, I suspect you're getting the ORA-00959: tablespace 'A_TBLSPACE' does not exist because you're trying to import a .DMP file from a full-blown Oracle Database into the 10g Express Edition (XE) Database, which, by default, creates a single, predefined tablespace called USERS for you.
If that's the case, then you'll need to do the following..
With your .DMP file, create a SQL file containing the structure (Tables):
imp <xe_username>/<password>@XE file=<filename.dmp> indexfile=index.sql full=y
Open the indexfile (index.sql) in a text editor that can do find and replace over an entire file, and issue the following find and replace statements IN ORDER (ignore the single quotes.. '):
Find: 'REM<space>' Replace: <nothing>
Find: '"<source_tablespace>"' Replace: '"USERS"'
Find: '...' Replace: 'REM ...'
Find: 'CONNECT' Replace: 'REM CONNECT'
Save the indexfile, then run it against your Oracle Express Edition account (I find it's best to create a new, blank XE user account - or drop and recreate if I'm refreshing):
sqlplus <xe_username>/<password>@XE @index.sql
Finally run the same .DMP file you created the indexfile with against the same account to import the data, stored procedures, views etc:
imp <xe_username>/<password>@XE file=<filename.dmp> fromuser=<original_username> touser=<xe_username> ignore=y
You may get pages of Oracle errors when trying to create certain objects such as Database Jobs as Oracle will try to use the same Database Identifier, which will most likely fail as you're on a different Database.
If you're using Oracle 10g and datapump, you can use the REMAP_TABLESPACE clause. example:
REMAP_TABLESPACE=A_TBLSPACE:NEW_TABLESPACE_GOES_HERE
For me this worked OK (Oracle Database 10g Express Edition Release 10.2.0.1.0):
impdp B/B full=Y dumpfile=DUMP.dmp REMAP_TABLESPACE=OLD_TABLESPACE:USERS
But for a fresh restore you need the new tablespace to exist first.
P.S. This may be useful: http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
What version of Oracle are you using? If it's 10g or greater, you should look at using Data Pump instead of import/export anyway. I'm not 100% sure it can handle this scenario, but I would expect it could.
Data Pump is the replacement for exp/imp in 10g and above. It works very similarly to exp/imp, except it's (supposedly; I don't use it since I'm stuck in 9i land) better.
Here are the Data Pump docs.
The problem has to do with the CLOB columns. It seems that the imp tool cannot rewrite the create statement to use another tablespace.
Source: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:66890284723848
The solution is:
Create the schema by hand in the correct tablespace. If you do not have a script to create the schema, you can generate one using the indexfile= option of the imp tool.
You do have to disable all constraints yourself; the Oracle imp tool will not disable them.
After that you can import the data with the following command:
imp b/*** file=oracledump.dmp log=import.log fromuser=a touser=b statistics=none ignore=y
Note: I still needed the statistics=none due to other errors.
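For completeness, the indexfile mentioned above can be generated with something like this (a sketch; the file name schema_ddl.sql is a placeholder) and then edited to point at the new tablespace:
imp b/*** file=oracledump.dmp indexfile=schema_ddl.sql full=y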
Extra info about Data Pump:
As of Oracle 10g, import/export is improved upon by the Data Pump tool (http://www.oracle-base.com/articles/10g/OracleDataPump10g.php).
Using this to re-import the data into a new tablespace:
First create a directory for the temporary dump:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/tempdump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO a;
Export:
expdp a/* schemas=a directory=tempdump dumpfile=adump.dmp logfile=adump.log
Import:
impdp b/* directory=tempdump dumpfile=adump.dmp logfile=bdump.log REMAP_SCHEMA=a:b
Note: the dump files are stored and read from the server disk, not from the local (client) disk
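To confirm where on the server those files go, you can query the directory object, for example:
SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name = 'TEMPDUMP';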
My solution is to use the GSAR utility to replace the tablespace name in the dump file. When you do the replace, make sure that the size of the dump file stays unchanged by padding with spaces.
E.g.
gsar -f -s"TSDAT_OV101" -r"USERS " rm_schema.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ ENABLE STORAGE IN ROW CHUNK 8192 RETENTION" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ LOGGING" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ " -r" " rm_schema.n.dump rm_schema.n1.dump
I'd like to extend this for two users in different tablespaces on different servers (databases):
1.
First create a directory for the temporary dump on both servers (databases):
server #1:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/old_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO old_user;
server #2:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/new_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO new_user;
2.
Export (server #1):
expdp tables=old_user.table directory=tempdump dumpfile=adump.dmp logfile=adump.log
3.
Import (server #2):
impdp directory=tempdump dumpfile=adump_table.dmp logfile=bdump_table.log
REMAP_TABLESPACE=old_tablespace:new_tablespace REMAP_SCHEMA=old_user:new_user
The answer is difficult, but doable:
Situation is: user A and tablespace X
import your dump file into a different database (this is only necessary if you need to keep a copy of the original one)
rename tablespace
alter tablespace X rename to Y
create a directory for the expdp command and grant rights
create a dump with expdp
remove the old user and old tablespace (Y)
create the new tablespace (Y)
create the new user (with a new name) - in this case B - and grant rights (also to the directory created with step 3)
import the dump with impdp
impdp B/B directory=DIR dumpfile=DUMPFILE.dmp logfile=LOGFILE.log REMAP_SCHEMA=A:B
and that's it...
Because I wanted to import (to Oracle 12.1|2) a dump that was exported from a local development database (18c xe), and I knew that all my target databases will have an accessible tablespace called DATABASE_TABLESPACE, I just created my schema/user to use a new tablespace of that name instead of the default USERS (to which I have no access on the target databases):
-- don't care about the details
CREATE TABLESPACE DATABASE_TABLESPACE
DATAFILE 'DATABASE_TABLESPACE.dat'
SIZE 10M
REUSE
AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
ALTER DATABASE DEFAULT TABLESPACE DATABASE_TABLESPACE;
CREATE USER username
IDENTIFIED BY userpassword
CONTAINER=all;
GRANT create session TO username;
GRANT create table TO username;
GRANT create view TO username;
GRANT create any trigger TO username;
GRANT create any procedure TO username;
GRANT create sequence TO username;
GRANT create synonym TO username;
GRANT UNLIMITED TABLESPACE TO username;
An exp created from this makes imp happy on my target.
---Create new tablespace:
CREATE TABLESPACE TABLESPACENAME DATAFILE
'D:\ORACL\ORADATA\XE\TABLESPACEFILENAME.DBF' SIZE 350M AUTOEXTEND ON NEXT 2500M MAXSIZE UNLIMITED
LOGGING
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
---and then create the import user with the below command
CREATE USER BVUSER IDENTIFIED BY VALUES 'bvuser' DEFAULT TABLESPACE TABLESPACENAME
-- where D:\ORACL is the path of the Oracle installation
