Oracle full EXPORT with exclude and NOT using par file

I need to do a full export of a 12.2 database. Recently we placed 2 tables in it with over 4 million records that will remain static. I'd like to eliminate them from the daily EXPDP as they have been archived offline.
This EXPDP is launched via a scheduled task and calls a series of batch files that have defined variables that are passed from batch file to batch file. This produces a series of log and archive files important in the larger scheme of things.
I do this without a .PAR file, as the .PAR file does not seem to accept any variable names defined in the batch files.
I can run this at the command prompt without issue, but if I call it via a batch file I get an error:

LRM-00111: no closing quote for value 'table:"LIK'
EXPDP *******/********@%dbname% FULL=Y exclude=statistics exclude=table:\"LIKE\'%_80\'\" DUMPFILE=%bckupdate%.dmp LOGFILE=%bckupdate%.log reuse_dumpfiles=yes
Any helpful hints on how to either use a variable name (as in %DBNAME%) in the PAR file or proper formatting for the batch file would be appreciated.

You can try this script, expdp_powershell.ps1.
For example:
E:\upwork\stackoverflow\expdp_powershell>powershell ./expdp_powershell.ps1 -user_name system -user_password manager -connect_string test -exclude table:\"LIKE\'%_80\'\"
or
E:\upwork\stackoverflow\expdp_powershell>powershell ./expdp_powershell.ps1
Script expdp_powershell.ps1:
param(
    [string]$user_name = "system",
    [string]$user_password = "manager",
    [string]$connect_string = "TEST",
    [string]$export_mode = "FULL=Y",
    [string]$exclude = "table:\""LIKE \'%_80\'\"""
)
$date_time_log = Get-Date -Format "yyyyMMddHHmmss"
$DUMPFILE = "backup" + $date_time_log + ".dmp"
$LOGFILE = "backup_log" + $date_time_log + ".log"
$reuse_dumpfiles = "yes"
$DIRECTORY="DATA_PUMP_DIR"
echo $exclude
EXPDP $user_name/$user_password@$connect_string $export_mode exclude=statistics exclude=$exclude DIRECTORY=$DIRECTORY DUMPFILE=$DUMPFILE LOGFILE=$LOGFILE reuse_dumpfiles=$reuse_dumpfiles
Example output:
E:\upwork\stackoverflow\expdp_powershell>powershell ./expdp_powershell.ps1 -user_name system -user_password manager -connect_string test -exclude table:\"LIKE\'%_80\'\"
table:\"LIKE \'%_80\'\"
Export: Release 11.2.0.4.0 - Production on Sat Jan 9 12:44:10 2021
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/********#TEST FULL=Y exclude=statistics exclude=table:"LIKE \'%_80\'" DIRECTORY=DATA_PUMP_DIR DUMPFILE=backup20210109124410.dmp LOGFILE=ba
ckup_log20210109124410.log reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 363.1 MB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
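A side note on the original batch file: inside a .bat script, cmd.exe tries to expand every %...% pair, so the literal %_80 in the EXCLUDE clause gets consumed along with the intended %dbname% and %bckupdate% variables; at an interactive prompt, undefined %...% text is left alone, which is why the same command works there. A minimal sketch of the batch-file line, assuming the only change needed is doubling the percent sign to escape it:
EXPDP *******/********@%dbname% FULL=Y exclude=statistics exclude=table:\"LIKE\'%%_80\'\" DUMPFILE=%bckupdate%.dmp LOGFILE=%bckupdate%.log reuse_dumpfiles=yes
Alternatively, the batch file could write the .PAR file itself with echo redirection just before calling EXPDP, so the %variables% are expanded at the time the file is written.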

Related

sqlplus Dynamic Spool File Name

I need to give the spool file name dynamically and I have to pass the parameters when I call sqlplus. Below is what I tried:
echo exit | sqlplus "{{ Oracle_username }}/{{ Oracle_pwd }}@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host={{ Oracle_HostName }})(Port=1521))(CONNECT_DATA=(SID={{ Oracle_SID }})))" @Script.sql 'AppName' 'DatabaseName' 'ObjectType'
Over here I tried to pass App Name, Database Name and Object Type dynamically. Prior to running the SQL*Plus step, I create folders dynamically (App Name, Database Name and Object Type are all folders, and they vary depending on each application). Below is how my script.sql looks:
SPOOL &&AppName/&&DatabaseName/&&ObjectType/Output.csv
<<SQL Script>>
SPOOL OFF
This doesn't work. Can someone tell me what needs to be changed?
You are passing the values you want to form your spool file path and name as arguments to your script, but you need to refer to them as positional parameters:
SPOOL &1/&2/&3/Output.csv
Or if you're going to reuse them for something else you could define your own variables, set from the positional parameters:
DEFINE AppName=&1
DEFINE DatabaseName=&2
DEFINE ObjectType=&3
SPOOL &&AppName/&&DatabaseName/&&ObjectType/Output.csv
The spool file path will be relative to the directory you're in when you run the script. If that isn't what you want then put the root before the first substitution variable in the spool command, whichever form you use.
You could also include the exit in your .sql file so you don't have to echo it in; and you could use a TNS alias instead of passing all of the connection information on the command line - or if you can use a service name instead of a SID, you could use the easy connect syntax which is a bit simpler:
sqlplus username/password@//hostname:1521/service_name @Script.sql 'AppName' 'DatabaseName' 'ObjectType'
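Putting that together, Script.sql might look like this minimal sketch (the SELECT is just a placeholder for your actual script):
DEFINE AppName=&1
DEFINE DatabaseName=&2
DEFINE ObjectType=&3
SPOOL &&AppName/&&DatabaseName/&&ObjectType/Output.csv
SELECT * FROM dual;
SPOOL OFF
EXIT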
Set your AppName, DatabaseName and ObjectType as environment variables and then try it like below:
[oracle@ct-myhost-02 ~]$ export app_name=/stage
[oracle@ct-myhost-02 ~]$ export database_name=PSES
[oracle@ct-myhost-02 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Wed Feb 1 12:04:08 2017
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> spool $app_name/$database_name/out.csv
SQL> select * from dual;
D
-
X
SQL> spool off;
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
[oracle@ct-myhost-02 ~]$ ls -l /stage/PSES/out.csv
-rw-r-----. 1 oracle oinstall 286 Feb 1 12:04 /stage/PSES/out.csv

impdp does not accept two tables in an INCLUDE Command

When restoring tables from an Oracle 11g backup, including more than one entry in the INCLUDE clause returns a syntax error.
The command that works is:
impdp SVC_DEMO/********* SCHEMAS=test REMAP_SCHEMA=test:SVC_DEMO REMAP_TABLESPACE=DATA:SYSTEM DIRECTORY=dmpdir DUMPFILE=devv2db_05102016.dmp TABLE_EXISTS_ACTION=replace INCLUDE = TABLE:"IN('TBLPARTNER')" LOGFILE=impschema1.log
Starting "SVC_DEMO"."SYS_IMPORT_SCHEMA_02": SVC_DEMO/********
SCHEMAS=test REMAP_SCHEMA=test:SVC_DEMO
REMAP_TABLESPACE=DATA:SYSTEM DIRECTORY=dmpdir DUMPFILE=devv2db_05102016.dmp
LOGFILE=impschema1.log
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SVC_DEMO"."TBLPARTNER" 21.46 KB 7 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SVC_DEMO"."SYS_IMPORT_SCHEMA_02" successfully completed at 15:01:38
But, when I add a second table in the include command:
impdp SVC_DEMO/********* SCHEMAS=test REMAP_SCHEMA=test:SVC_DEMO REMAP_TABLESPACE=DATA:SYSTEM DIRECTORY=dmpdir DUMPFILE=devv2db_05102016.dmp TABLE_EXISTS_ACTION=replace INCLUDE = TABLE:"IN('TBLPARTNER', 'TBLACCOUNT')" LOGFILE=impschema1.log
I get the following message:
impdp SVC_DEMO/****** SCHEMAS=test REMAP_SCHEMA=test:SVC_DEMO
REMAP_TABLESPACE=DATA:SYSTEM DIRECTORY=dmpdir
DUMPFILE=devv2db_05102016.dmp TABLE_EXISTS_ACTION=replace INCLUDE =
TABLE:"IN('TBLPARTNER', 'TBLACCOUNT')" LOGFILE=impschema1.log
LRM-00116: syntax error at ')' following 'TBLACCOUNT'
I have looked for bugs in impdp but can't find one.
Am I doing something wrong?
Since you are running this on the command line, special characters may need to be escaped, depending on your OS. (It's also easier to use a parameter file, where you won't need to escape the characters.)
include=TABLE:\"IN \(\'TABLE1\', \'TABLE2\'\)\"
Using a parameter file, you just place one option per line and reference it with:
impdp PARFILE=name.txt
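For instance, a parameter file for the failing command above might look like the following sketch (name.txt is an arbitrary name; no shell escaping is needed inside the file):
SCHEMAS=test
REMAP_SCHEMA=test:SVC_DEMO
REMAP_TABLESPACE=DATA:SYSTEM
DIRECTORY=dmpdir
DUMPFILE=devv2db_05102016.dmp
TABLE_EXISTS_ACTION=replace
INCLUDE=TABLE:"IN('TBLPARTNER', 'TBLACCOUNT')"
LOGFILE=impschema1.log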

Why do I get ORA-39001: invalid argument value when I try to impdp in Oracle 12c?

When I run this command in Oracle 12c SE2:
impdp system/Oracle_1@pdborcl directory=DATA_PUMP_DIR dumpfile=mydb.dmp nologfile=Y
I get this:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39088: directory name DATA_PUMP_DIR is invalid
We used to import this into 11g all the time.
How can I solve these errors?
From the 12c documentation:
Be aware of the following requirements when using Data Pump to move data into a CDB:
...
The default Data Pump directory object, DATA_PUMP_DIR, does not work with PDBs. You must define an explicit directory object within the PDB that you are exporting or importing.
You will need to define your own directory object in your PDB, which your user (system here) has read/write privileges against.
create directory my_data_pump_dir as 'C:\app\OracleHomeUser1\admin\orcl\dpdump';
grant read, write on directory my_data_pump_dir to system;
It can be the same operating system directory that DATA_PUMP_DIR points to, you just need a separate directory object. But I've used the path you said you'd prefer, from a comment on a previous question.
Then the import is modified to have:
... DIRECTORY=my_data_pump_dir DUMPFILE=mydb.dmp
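So, assuming the directory object created above, the full command from the question would become:
impdp system/Oracle_1@pdborcl directory=my_data_pump_dir dumpfile=mydb.dmp nologfile=Y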

Oracle dump file table data extraction to file (original exp format)

I have Oracle dump files created with original exp (not expdp) (EXPORT:V10.02.01, Oracle 10g). They contain only table data for four tables.
1) I want to extract the table data into files (flat/fixed-width, CSV, or other text format) without importing them into another Oracle DB. [preferred]
2) Alternatively, I need a solution that can import them into an ordinary user (not SYSDBA) so that I can use other tools to extract the data.
My databases are 11g, but I can find 10g databases if needed. I have TOAD for Oracle Xpert 11.6.1.6 at my disposal. I am a moderately experienced Oracle programmer, but I haven't worked with EXP/IMP before.
(The information below has been obscured to protect the data.)
Here's how the dump files were created:
exp FILE=data.dmp \
LOG=data.log \
TABLES=USER1.TABLE1,USER1.TABLE2,USER1.TABLE3,USER1.TABLE4 \
INDEXES=N TRIGGERS=N CONSTRAINTS=N GRANTS=N
Here's the log:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: grants on tables/views/sequences/roles will not be exported
Note: indexes on tables will not be exported
Note: constraints on tables will not be exported
About to export specified tables via Conventional Path ...
Current user changed to USER1
. . exporting table TABLE1 271 rows exported
. . exporting table TABLE2 272088 rows exported
. . exporting table TABLE3 2770 rows exported
. . exporting table TABLE4 21041 rows exported
Export terminated successfully without warnings.
Thank you in advance.
UPDATE:
TOAD version 9.7.2 will read a "dmp" file generated by EXP.
Select DATABASE -> EXPORT -> EXPORT FILE BROWSER from the menus.
You need to have the DBA utilities for TOAD installed. There is no real guarantee that the file is parsed correctly, but the data will show up in TOAD in the schema browser.
NOTE: The only other known utility that will read a dmp file generated by the exp utility is the imp utility. You cannot read the dump file yourself. If you do, you run the risk of parsing the file incorrectly.
If you already have the data in an Oracle table:
To extract the table data into a file, create a shell script that calls SQL*Plus and causes it to spool the table data to a file. You need one script per table.
#!/bin/sh
#NOTE: The path to sqlplus will vary on your system,
# but it is generally $ORACLE_HOME/bin/sqlplus.
#YOU NEED TO UNCOMMENT THESE LINES AND SET APPROPRIATELY.
#export ORACLE_SID=YOUR_SID
#export ORACLE_HOME=PATH_TO_YOUR_ORACLE_HOME
#export PATH=$PATH:$ORACLE_HOME/bin
sqlplus -s user/pwd@db << EOF
set pagesize 0
set linesize 255
set heading off
set echo off
SPOOL TABLE1_DATA.txt
REM FOR EACH COLUMN IN TABLE1, SET THE FORMAT
COL FIELD_ID format 999,999,999
COL FIELD_DATA format a99
select FIELD_ID,FIELD_DATA from TABLE1;
SPOOL OFF
EOF
Make sure you set the line size appropriately and set the format of each column. See FIELD_ID above for a number format column and FIELD_DATA for a character column.
NOTE: You need to remove the "N rows selected" line from the end of the file (or suppress it up front; see the variant below).
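If you would rather not post-process the spool file, a variant of the settings block at the top of the here-document should suppress the feedback line entirely (a sketch; trimspool additionally strips trailing blanks from each spooled line):
set pagesize 0
set linesize 255
set heading off
set echo off
set feedback off
set trimspool on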
(You can still import the file you created into another schema using the imp utility.)

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So, I need to inspect the .dmp file and list all of the schemas in it, how do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR
dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp exports the DDL of a dmp backup to a file if you use the SQLFILE parameter. For example:
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My Solution: Social engineering, I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, it also had the command line used to create the dump, directories used, par files used, windows version it was run on, and export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To run the impdp command to produce a SQL file, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low-privileged user and use the MASTER_ONLY=YES option, then inspect the master table. For example:
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql)
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
A lot of Data Pump options are explained here: http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u
cat -v turns the dump file into visible text.
grep -o shows only the match, so we don't see really long lines.
uniq -u removes duplicate lines, so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
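For example, it could be wrapped in a small script along these lines (a sketch; the script name and tag handling are illustrative):
#!/bin/sh
# Usage: ./list_schemas.sh dumpfile.dmp
# Prints the distinct schema names found in OWNER_NAME tags.
cat -v "$1" \
  | grep -o '<OWNER_NAME>[^<]*</OWNER_NAME>' \
  | sed 's/<[^>]*>//g' \
  | sort -u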
My solution (similar to KyleLanser's answer) (on a Unix box):
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers I came up with this expression that should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested with a DMP file from Oracle version 19.8.
