Oracle EX and SQL*Plus: How to recover a dump file?

I have .dmp and .log files. I need to recover the database schema and data using SQLPlus or some feature of EX. How do I do that? I've tried the RECOVER command and impdp. No luck, or I'm doing something wrong.

What version of Oracle? How was the .dmp file created? You can look at the first line in your .dmp file (assuming it's a file produced by exp) to get the version of the utility that dumped it. Likely, you will need to use imp, although I don't know what problems you were experiencing with impdp - error messages and the command line being used would be helpful. Assuming this was produced by exp, RECOVER will not help.
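For example, on Windows you can peek at the header from a command prompt (a sketch; MyDump.dmp is a placeholder name):
more < MyDump.dmp
A classic exp file starts with a readable banner like EXPORT:V10.02.01, which tells you both the tool and the version it was dumped with; a Data Pump file has no such banner on its first line.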

Can you use "imp Scott/Tiger#Machine file=MyDump.dmp" in EX?
P.S: I Assume Oracle Ex == Oracle Express

Open a command prompt and type imp user/password@db
You will be prompted to provide the file name and path.
If you get a message like:
IMP-00010: not a valid export file, header failed verification
IMP-00000: Import terminated unsuccessfully
this means that the file was created with Data Pump or that the source version is newer.
If not, follow the prompts; it should be really straightforward.
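A minimal sketch of a non-interactive classic import, assuming the dump really was produced by exp (user, password, and file names are placeholders):
imp scott/tiger@XE file=MyDump.dmp log=MyImport.log full=y
The log parameter writes the import messages to a file, which is handy when something fails partway through.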

Related

Does v13 of PostgreSQL solve the bug related to "could not stat file"?

I run this command:
COPY XXX FROM 'D:/XXX.csv' WITH (FORMAT CSV, HEADER TRUE, NULL 'NULL')
On Windows 7, it successfully imports CSV files of less than 1 GB.
If the file is bigger than 1 GB, I get an "unknown error":
[Code: 0, SQL State: XX000] ERROR: could not stat file "D:/XXX.csv": Unknown error
How can I fix this issue?
You can work around this by piping the file through a program. For example, I just used this to copy from a 24 GB file on Windows 10 with PostgreSQL 11.
copy t(c,d) from program 'cmd /c "type x:\path\to\file.txt"' with (format text);
This copies the text file file.txt into the table t, columns c and d.
The trick here is to run cmd in a single command mode, with /c and telling it to type out the file in question.
https://github.com/MIT-LCP/mimic-code/issues/493
alistairewj commented on Nov 3, 2018 (edited):
Okay, the could not stat file "CHARTEVENTS.csv": Unknown error is actually a bug in PostgreSQL 11. Under the hood it makes a call to fstat() to make sure the file is not a directory, and unfortunately the fstat() implementation used there is 32-bit, so it can't handle large files like CHARTEVENTS. I tested the build on Windows with PostgreSQL 10.5 and didn't get this error, so I think it's fairly new.
The best workaround is to keep the files compressed (i.e. keep them as .csv.gz files) and use 7zip to load in the data directly from compressed files. In testing this seemed to still work. There is a pretty detailed tutorial on how to do this here: https://mimic.physionet.org/tutorials/install-mimic-locally-windows/
The brief version of the above is that you keep the .csv.gz files, add the 7zip binary to your Windows environment path, and then call the postgres_load_data_7zip.sql file to load in the data. You can use the postgres_checks.sql file after everything to make sure you loaded all the data correctly.
edit: For your later error, where you are using this 7zip approach, I'm not sure why it's not loading. Try redownloading just the ADMISSIONS.csv.gz file and seeing if it still throws you that same error. Maybe there is a new version of 7zip which requires me to update the script or something!
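For reference, a hedged sketch of what such a compressed load can look like with COPY ... FROM PROGRAM, assuming 7z.exe is on the PATH of the account the PostgreSQL server runs under (table name and file path are placeholders):
COPY chartevents FROM PROGRAM '7z e -so D:\mimic\CHARTEVENTS.csv.gz' WITH (FORMAT CSV, HEADER TRUE);
The -so switch makes 7zip write the extracted data to stdout, so the server reads a stream and never has to stat the large uncompressed file.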
For anyone else who googled this Postgres error message after attempting to work with a >1 GB file in Postgres 11, I can confirm that @亚军吴's answer above is spot-on. It is indeed a size issue.
I tried a different approach, though, than @亚军吴's and @Loren's: I simply uninstalled Postgres 11 and installed the stable version of Postgres 10.7. (I'm on Windows 10, by the way, in case that matters.)
I re-ran the original code that had prompted the error and voila, a few minutes later I'd filled a new table with data from a medium-ish-size CSV file (~3 GB). I initially tried to use CSVSplitter, per @Loren, which was working fine until I got close to running out of storage space on my machine. (Thanks, Battlefield 5.)
In my case, there isn't anything in PGSQL 11 that I was relying on that wasn't in version 10.7, so I think this could be a good solution for anyone else who runs into this problem. Thanks everyone above for contributing, especially to the OP for posting this in the first place. I cured a huge, huge headache!
This has been fixed in commit bed90759f in PostgreSQL v14.
The file limit for the error is actually 4 GB.
The fix was too invasive to be backported, so you can only upgrade to avoid the problem. Once the fix has had some field testing, you could lobby the pgsql-hackers mailing list to get it backported.
With pgAdmin and AWS, I used CSVSplitter to split into files less than 1GB. Lame, but worked. pgAdmin import appends to the existing table. (Changed escape character from ' to " in order to avoid error due to unquoted text in the source file. Typically I apply quotes in LibreOffice, but these files were too big to open.)
It seems this is not a database problem but a problem with psql/pgAdmin. The workaround is to use the client tools from a previous PostgreSQL version:
Use the existing PostgreSQL 11 database.
Install psql or pgAdmin from the PostgreSQL 10 installation and use it to upload the file (with the command shown in the question).
Hope this helps anyone coming across the same problem.
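A sketch of that workaround, assuming a default PostgreSQL 10 install path (adjust to your machine); note that a client-side \copy streams the file to the server via STDIN, so the server never stats the file itself:
"C:\Program Files\PostgreSQL\10\bin\psql.exe" -h localhost -d dbname -U username -c "\copy XXX FROM 'D:/XXX.csv' WITH (FORMAT CSV, HEADER TRUE, NULL 'NULL')"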
Add two lines to your CSV file: one at the beginning and one at the end:
COPY XXX FROM STDIN WITH (FORMAT CSV, HEADER TRUE, NULL 'NULL');
<here are the lines your file already contains>
\.
Don't forget another newline after the \. line. Then call
psql -h hostname -d dbname -U username -f 'D:/XXX.csv'
This is what worked for me:
\COPY member_data.lab_result FROM PROGRAM 'gzip -dcf lab_result.dat.gz' WITH (FORMAT 'csv', DELIMITER '|', QUOTE '`')

oracle data pump import ORA-39002 with ORA-39070, ORA-29283 and others on Windows 10

I am using Data Pump to perform an import of 4 .dmp files and keep receiving the set of errors below:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
I am new to Oracle and cannot find a helpful solution.
I am performing the import as described here, although I'm using Oracle 12c.
The command I run in the Windows command line looks like this:
impdp user/pass@db_name directory=DUMP_DIR dumpfile="file_name.dmp" schemas=schema_name content=all parallel=4
DUMP_DIR is created in Oracle and appropriate privs were granted.
I also ran this command with
... logfile=file_name.log
added at the end but I'm not sure if the log file was created or where it was saved.
I have found this - it's about exactly the same set of errors, but on export and on Linux. At the end of the answer there's a sentence: 'If we are on a Windows machine, then we need to make sure that both the listener and the database have been started with the exact same username.' Is this useful in the case of an import? If yes, what does it mean exactly?
There's a great short answer here, which is basically "The database isn't able to write to the log file location."
The link above suggests a simple test to troubleshoot the issue.
declare
  f utl_file.file_type;
begin
  -- attempt to create and write a small test file in the DUMP_DIR directory object
  f := utl_file.fopen('DUMP_DIR', 'test.txt', 'w');
  utl_file.put_line(f, 'test');
  utl_file.fclose(f);
end;
/
If this fails, Oracle can't write to that directory at all, probably because of Windows file permissions. Check which Windows user(s) the Oracle services are running as, and change the folder permissions to allow them write access.
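A quick way to check that (a sketch; OracleServiceXE is an assumed service name, yours may differ):
sc qc OracleServiceXE
Look at SERVICE_START_NAME in the output; that is the Windows account that needs write access to the folder.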
If that worked, it's a problem specific to impdp. You might try changing your command string - one option might be to specifically write your log file to a different Oracle directory, e.g. logfile=DATA_PUMP_DIR:file_name.log.
If none of these options work, you can also disable the logfile completely by using NOLOGFILE=Y, but you'll have to monitor the impdp output on your console, because it won't get saved anywhere else.
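For instance, sketches based on the question's command (all names are placeholders):
impdp user/pass@db_name directory=DUMP_DIR dumpfile=file_name.dmp logfile=DATA_PUMP_DIR:file_name.log schemas=schema_name
impdp user/pass@db_name directory=DUMP_DIR dumpfile=file_name.dmp nologfile=y schemas=schema_name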
The problem you have is that your Oracle instance is not able to write to the DIRECTORY (DUMP_DIR) you specified.
On Windows 10, this behaves unpredictably. Solution:
Create another Oracle directory, preferably under C:\Users\Public\, where you are 100% sure access will not be an issue: CREATE OR REPLACE DIRECTORY DUMP_DIR_2 AS 'C:\Users\Public\<name>';
Grant privileges: GRANT READ, WRITE ON DIRECTORY DUMP_DIR_2 TO schema_name;
Copy your dump file to the newly created folder.
Fire your import command.
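Putting it together, the import command then points at the new directory object (a sketch based on the question's command; names are placeholders):
impdp user/pass@db_name directory=DUMP_DIR_2 dumpfile=file_name.dmp logfile=file_name.log schemas=schema_name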
First, it's very important that Oracle has permission to read and write the folder. If you have already tested this, try the solution below:
I had the same situation; in my case the command was (the password is just an example):
impdp 'sys/passExample as sysdba' directory=C:/oracle/oradata/EXEMPLODB dumpfile=preupd.bak
I put the preupd.bak file into the folder EXEMPLODB.
The fix is to replace the folder path with the name of an Oracle directory object; the correct command is:
impdp 'sys/passExample as sysdba' directory=EXT_DATA_FILES dumpfile=preupd.bak
EXT_DATA_FILES is the directory object name, which I found with the query
select * from all_directories;
run in the system DB.

How can I import or open a .dmp file?

Update:
I tried the impdp command, and it tells me that it cannot create a user. I tried creating the user as well.
This is what my .par file looks like (image in the original post).
This is a snippet of the .sh file (image in the original post).
I have never used an Oracle database before. I have a .dmp file which is 50 GB. I don't know how it was exported or which version it was exported from. I downloaded Oracle 12c Release 2 and tried to do an import, but I get the error ".dmp may be a Data Pump export dump file". What do I need to do so that I can run SQL queries on it eventually?
UPDATE :
I tried the command :
IMP SYSTEM/Password SHOW=Y FILE=DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.dmp fromuser=SYSTEM touser=SYSTEM
It gave me a message saying import terminated successfully with warnings. What does this do? Also, where can I view the data now if it's imported?
In SQL*Plus, as SYSTEM:
CREATE DIRECTORY IMPDIR as 'C:\Users\negink\Documents\databasewrigley';
Back in the command line:
impdp SYSTEM/Password DUMPFILE=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.dmp logfile=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.log FULL=Y
When done, you can remove the DIRECTORY object.
In a CDB database (which is your case), this will not work unless you pre-create all the users and roles in SQL*Plus after running this command:
alter session set "_ORACLE_SCRIPT"=true;
create user x identified by pwdx;
create user y identified by pwdy;
create role r1;
create role r2;
...
Otherwise, you can create a PDB inside your CDB and import your DMP file into the PDB. In this case, you'll need to modify the connection in the IMPDP command as follows (change SYSTEM/Password to SYSTEM/Password@//localhost/pdb_name):
impdp SYSTEM/Password@//localhost/pdb_name DUMPFILE=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.dmp logfile=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.log FULL=Y
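For completeness, a minimal sketch of creating such a PDB, run as SYS in the CDB (the names, password, and FILE_NAME_CONVERT paths are assumptions; the convert pair must match your seed location):
CREATE PLUGGABLE DATABASE pdb_name ADMIN USER pdb_admin IDENTIFIED BY pwd FILE_NAME_CONVERT = ('pdbseed', 'pdb_name');
ALTER PLUGGABLE DATABASE pdb_name OPEN;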
First of all, you should use impdp instead of imp, and don't forget to take backups before doing anything. Also, your dmp file must be in a directory local to the server; I've seen people trying to import dmp files located on their own computer's hard drive, and that's not how things work.
For better results, I recommend dropping the schema first if you are importing into an existing one.
To drop an existing schema, log in to SQL*Plus with an admin account:
sqlplus username/password@connect_identifier
Then you can use this command to drop the schema:
DROP USER <SCHEMA_NAME> CASCADE;
Query your DB to see if the Data Pump directory is defined:
SELECT directory_name, directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR';
If the directory is not defined, use this command to define it (btw, D:\orcl12c is my Oracle instance path; you should use your own path):
CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS 'D:\orcl12c\admin\<ORA_INSTANCE_NAME>\dpdump\';
Quit SQL*Plus to the command prompt and run impdp with admin credentials (be sure there's no other log file with the same name in the target directory; if there is, the operation will abort):
impdp username/password@connect_identifier directory=DATA_PUMP_DIR dumpfile=filename.dmp logfile=filename.log
If the operation succeeds, you may still have to update user-defined datatypes manually, because they do not always import correctly.

expdp dump file is big

I am trying to make a dump of an Oracle database using the expdp tool. I added the exclude=statistics option to the command line to make the resulting dmp file smaller, but the file is still very big even with this setting. Is there some other setting that can be used to make the dmp file smaller? The database is almost empty and the dmp file is around 230 MB. Thank you.
Split into multiple dump files
expdp usr1/usr1 tables=tbl_test directory=dp_dir dumpfile=test_dump_%u.dmp filesize=20m
Cheers
Brian
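If the goal is a smaller file rather than smaller pieces, expdp also has a COMPRESSION parameter (a sketch; note that COMPRESSION=ALL requires the Advanced Compression option, while METADATA_ONLY does not):
expdp usr1/usr1 tables=tbl_test directory=dp_dir dumpfile=test_dump.dmp compression=all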

Importing .dmp file from Oracle 11g to 10g returns error 'Unable to open log file'

With the help of Stack Overflow, I've been able to export a dump file of my database from my local machine. The command I used is as follows:
host expdp tkcsowner/tkcsowner@xe version=10.2 schemas=tkcsowner dumpfile=tnrg.dmp logfile=tnrg.log
Now, my local machine has the OS Windows 7, 32-bit. Hardly a server. It's got Oracle 11g. I want to transfer it to another machine, the test server, running Linux. It has Oracle 10g.
I am in no way a Linux / Unix expert, but I do have some instructions left for me by the previous person who handled such.
First, I change privileges to root user via 'su -' - No problems there.
Log in via 'sqlplus /nolog', and then 'connect sys/sys@xe as dba' - No problems there, either.
I created a logical dump directory (not sure if this step is needed, but I did it anyway):
create or replace directory dumpdir as 'usr/lib/oracle/xe/app/oracle/admin/XE/dpdump';
Done, no problems.
So I take it TNRG.dmp and tnrg.log should be inside that directory. Unfortunately, it could not be copied, for some reason. Access denied. I figured I should log out, log in as root, and copy the stuff from there. It worked, but just to be safe, I logged out of the root, logged back in as my normal user, and did everything above again. D'oh.
Finally, with all the stuff in place, now comes the time to import the .dmp and .log. Huzzah!
impdp tkcsowner/tkcsowner@xe schemas=tkcsowner dumpfile=TNRG.dmp logfile=tnrg.log
Lo and behold, it asks for a username and password. Is it because tkcsowner does not exist on the 10g database? Anyway, I put in 'system' for both. It continued, but warning bells were already going off in my head.
Suddenly:
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39002: invalid operation
ORA-39070: unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
At which point, I'm not sure how to proceed. I went into the directory via the command line, and ls -l'ed the contents, showing that both the .dmp and .log have three rwx's, for root. What I have yet to try was to run the entire operation while logged in as root, but I'm not sure how that would change anything.
The directory that your dumpdir database directory object points to needs to be a valid existing directory - at least by the time you use it, it won't check or complain when you create the object - and it needs to be readable and writable by the user that Oracle is running under, which is usually oracle.
Your initial directory creation had 'usr/lib/oracle/... rather than '/usr/lib/oracle/..., but even with that corrected the directory might not be usable by the oracle account. Since you created the directory as root, it is probably still owned by root:root and with permissions 700 (if you do ls -ld /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump that will show as drwx------).
You need to change that to be owned by Oracle, using the correct owner and group - that's probably oracle:dba or oracle:oinstall, but check the owner of the XE directory. And then change the ownership of the directory and the files you copied into it:
chown -R oracle:dba /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump
and set the directory permissions to a suitable level; if you don't want anyone else to create or modify files, but you don't mind them seeing what's there, then something like:
chmod 755 /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump
If you want to be able to copy your .dmp file in as yourself (not root or oracle) and you aren't in the dba group then make it 777. You said the files you copied are 777, which is a little odd as they aren't executable, and could currently be removed by anyone; again to make them just readable:
chmod 644 /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump/*
You don't need the export log from the other system though, just the dump file itself. The logfile parameter for impdp will create a log of the import process; since you used the same file name, it will overwrite the export log you copied across. That probably doesn't matter since you still have the original, but it's something to watch out for in the future. It does mean the existing log file has to be writable by oracle, though.
You also need to make sure the Oracle owner has appropriate access to the whole directory tree, but it seems likely that they already own XE, so I don't think that's an issue here. You shouldn't really need to do any of this as root. If you don't have the oracle password, you can su to the account from root anyway, which removes the need to manually change ownership later.
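For example, a sketch of that flow (assumes the software owner account is called oracle and uses the dumpdir directory object created earlier; note the distinct log file name so the copied export log isn't clobbered):
su - oracle
impdp tkcsowner/tkcsowner@xe schemas=tkcsowner directory=dumpdir dumpfile=TNRG.dmp logfile=tnrg_imp.log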
The impdp command is initiated from outside Oracle (probably as root in your case) but mainly executed by the Oracle server processes. In particular, the dump and log files are directly accessed by the Oracle server processes (and not by the initiating command). As a result, the file protections need to be set such that the oracle user can access them.
So execute the following (as root) and try again:
chown -R oracle:oinstall /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump