Datafile rename error
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
D:\ORACLEWINDOW\ORADATA\ORCL\SYSTEM01.DBF
D:\ORACLEWINDOW\ORADATA\ORCL\SYSAUX01.DBF
D:\ORACLEWINDOW\ORADATA\ORCL\UNDOTBS01.DBF
D:\ORACLEWINDOW\ORADATA\ORCL\USERS01.DBF
SQL> alter database rename file 'D:\ORACLEWINDOW\ORADATA\ORCL\SYSTEM01.DBF' to '/home/oracle/xyz/SYSTEM01.DBF';
alter database rename file
'D:\ORACLEWINDOW\ORADATA\ORCL\SYSTEM01.DBF' to
'/home/oracle/xyz/SYSTEM01.DBF'
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01516: nonexistent log file, data file, or temporary file
"D:\ORACLEWINDOW\ORADATA\ORCL\SYSTEM01.DBF" in the current container
I want to rename the datafiles and redo log files from the Windows paths to Linux paths (the two platforms use different endian formats). Hope someone can help!
You can back up the controlfile to trace, rename the log/data files in the generated script, and execute it, as described here: https://dbsguru.com/solution-for-ora-01516-nonexistent-log-file-data-file-or-temporary-file-in-oracle/
However, redo application is not supported between Linux and Windows except with a standby database. If you want to restore a database from Windows to Linux using RMAN, the backup must be a cold (consistent) backup, which requires no redo application.
See note "Restore From Windows To Linux using RMAN Fails (Doc ID 2003327.1)"
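A minimal sketch of the trace-controlfile approach (the script path and the edits are illustrative):
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_ctl.sql';
-- edit /tmp/recreate_ctl.sql: change every Windows path in the DATAFILE
-- and LOGFILE clauses to the new Linux paths, then run it against an
-- instance started NOMOUNT:
-- STARTUP NOMOUNT
-- @/tmp/recreate_ctl.sql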
I am using Data Pump to perform an import on 4 .dmp files and keep receiving the set of errors below:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
I am new to Oracle and cannot find a helpful solution.
I am performing the import as described here, although I'm using Oracle 12c.
The command I run in the Windows command line looks like this:
impdp user/pass@db_name directory=DUMP_DIR dumpfile="file_name.dmp" schemas=schema_name content=all parallel=4
DUMP_DIR is created in Oracle and the appropriate privileges were granted.
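The setup was along these lines (the path is a placeholder):
CREATE OR REPLACE DIRECTORY DUMP_DIR AS 'C:\dumps';
GRANT READ, WRITE ON DIRECTORY DUMP_DIR TO schema_name;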
I also ran this command with
... logfile=file_name.log
added at the end but I'm not sure if the log file was created or where it was saved.
I have found this - it's about exactly the same set of errors, but on export and on Linux. At the end of that answer there's a sentence: 'If we are on a Windows machine, then we need to make sure that both the listener and the database have been started with the exact same username.' Is this relevant for an import? If so, what does it mean exactly?
There's a great short answer here, which is basically "The database isn't able to write to the log file location."
The link above suggests a simple test to troubleshoot the issue.
-- quick sanity check: can the database write a file through DUMP_DIR?
declare
  f utl_file.file_type;
begin
  f := utl_file.fopen('DUMP_DIR', 'test.txt', 'w');  -- open for writing
  utl_file.put_line(f, 'test');
  utl_file.fclose(f);
end;
/
If this fails, Oracle can't write to that directory at all, probably because of Windows file permissions. Check which Windows user(s) the Oracle services are running as, and change the folder permissions to allow them write access.
If that worked, it's a problem specific to impdp. You might try changing your command string - one option might be to specifically write your log file to a different Oracle directory, e.g. logfile=DATA_PUMP_DIR:file_name.log.
If none of these options work, you can also disable the logfile completely by using NOLOGFILE=Y, but you'll have to monitor the impdp output on your console, because it won't get saved anywhere else.
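For example (assuming the default DATA_PUMP_DIR directory object exists and is writable on the server):
impdp user/pass@db_name directory=DUMP_DIR dumpfile=file_name.dmp schemas=schema_name logfile=DATA_PUMP_DIR:file_name.log
or, with no log at all:
impdp user/pass@db_name directory=DUMP_DIR dumpfile=file_name.dmp schemas=schema_name nologfile=y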
The problem you have is that Oracle is not able to write to the DIRECTORY (DUMP_DIR) you specified.
On Windows 10, this behaves unpredictably. Solution:
Create another Oracle directory, preferably under the C:\Users\Public\ folder, where you can be 100% sure access will not be an issue: CREATE OR REPLACE DIRECTORY DUMP_DIR_2 AS 'C:\Users\Public\<name>';
Give grants: GRANT READ, WRITE ON DIRECTORY DUMP_DIR_2 TO schema_name;
Copy your dump file to the newly created folder.
Fire your import command, for example as sketched below.
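Something like this, using the placeholder names from the steps above:
impdp user/pass@db_name directory=DUMP_DIR_2 dumpfile=file_name.dmp schemas=schema_name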
First, it is very important that Oracle has permission to read and write the folder. If you have already tested this, try the solution below:
I had the same situation; in my case the command was (the password is just an example):
impdp 'sys/passExample as sysdba' directory=C:/oracle/oradata/EXEMPLODB dumpfile=preupd.bak
I put the preupd.bak into the folder EXEMPLODB.
The fix is to replace the folder path with the name of the directory object; the correct command is:
impdp 'sys/passExample as sysdba' directory=EXT_DATA_FILES dumpfile=preupd.bak
EXT_DATA_FILES is the directory name, which I found with the query
select * from all_directories;
in the system DB.
Update:
I tried the impdp command and it tells me that it cannot create a user. I tried creating the user as well.
My .par file and a snippet of my .sh file were attached as screenshots.
I have never used an Oracle database before. I have a .dmp file which is 50 GB. I don't know how it was exported or which version it was exported from. I downloaded Oracle 12c Release 2 and tried to do an import, but I get the error ".dmp may be a Data Pump export dump file". What do I need to do so that I can eventually run SQL queries on it? Please see the attached image.
UPDATE :
I tried the command :
IMP SYSTEM/Password SHOW=Y FILE=DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.dmp fromuser=SYSTEM touser=SYSTEM
It gave me a message saying the import terminated successfully with warnings. What does this do? Also, where can I view the data now that it's imported?
in sqlplus as SYSTEM:
CREATE DIRECTORY IMPDIR as 'C:\Users\negink\Documents\databasewrigley';
back in command line:
impdp SYSTEM/Password DUMPFILE=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.dmp logfile=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.log FULL=Y
when done, you can remove the DIRECTORY object
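for example:
DROP DIRECTORY IMPDIR;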
In a CDB database (which is your case), this will not work unless you pre-create all the users and roles in SQL*Plus, after running this command:
alter session set "_ORACLE_SCRIPT"=true;
create user x identified by pwdx;
create user y identified by pwdy;
create role r1;
create role r2;
...
Otherwise, you can create a PDB inside your CDB and import your DMP file into the PDB. In this case, you'll need to modify the connection in the IMPDP command as follows (change SYSTEM/Password to SYSTEM/Password@//localhost/pdb_name):
impdp SYSTEM/Password@//localhost/pdb_name DUMPFILE=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.dmp logfile=IMPDIR:DBO_V7WRIGLEY_PROD_20180201_TECHOPS-5527.log FULL=Y
First of all, you should use impdp instead of imp. And don't forget to take backups before doing anything. Also, your dmp file should be in a directory local to the server; I've seen people trying to import dmp files located on their own computer's hard drive. That's not how things work.
If you are importing into an existing schema, I recommend dropping the schema first for better results.
To drop an existing schema, log in to sqlplus with an admin account:
sqlplus username/password@connect_identifier
Then you can use this command to drop the schema:
DROP USER <SCHEMA_NAME> CASCADE;
Query your DB to see if the data pump directory is defined:
SELECT directory_name, directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR';
If the directory is not defined, use this command to define it (btw, "D:\orcl12c" is my Oracle instance path; you should use your own path):
CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS 'D:\orcl12c\admin\<ORA_INSTANCE_NAME>\dpdump\';
Quit sqlplus to the command prompt and run impdp with admin credentials. (Be sure there's no other logfile with the same name in the dump directory - if there is, the operation will abort.)
impdp username/password@connect_identifier directory=DATA_PUMP_DIR dumpfile=filename.dmp logfile=filename.log
If the operation succeeds you may have to update User-Defined Datatypes manually because they are not importing correctly.
My Solaris user oscar is part of a group that contains the oracle user account.
I create a directory and place a file inside, making it owner- and group-readable:
mkdir /tmp/tdir
echo $$ > /tmp/tdir/foo.txt
chmod 440 /tmp/tdir/foo.txt
I then log on as system and create an Oracle directory
CREATE OR REPLACE DIRECTORY tmp_tdir AS '/tmp/tdir';
I then start a sqlplus session (as system) from the database server, while logged on as unix user oscar.
I can read the file contents by executing this snippet in sqlplus
set serveroutput on
DECLARE
fileHandler UTL_FILE.FILE_TYPE;
buffer CLOB;
BEGIN
fileHandler := UTL_FILE.FOPEN('TMP_TDIR', 'foo.txt', 'r');
UTL_FILE.GET_LINE(fileHandler, buffer);
dbms_output.put_line('File Data: '||buffer);
UTL_FILE.FCLOSE(fileHandler);
END;
/
Now when I remove the group read permission from the file, the above snippet no longer works. Instead, I'm presented with the error
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at line 5
Why am I unable to read the file once the group permission has been removed, given that the oracle sqlplus process is running as the file owner?
It seems that the OS authentication was not important in the context of this problem.
The issue was that utl_file won't allow file access if the oracle user does not have access.
The group associated with the file contains the oracle user. Therefore, I can only read the file with utl_file when the group read bit is enabled.
OS users and Oracle users are two different things.
OS authentication does not imply that your Oracle instance runs under the identity of the given user. For example, on my Linux system, Oracle XE runs as the OS user oracle. This is the only identity that matters for granting or rejecting file access at the OS level.
You could check that with a simple ps command.
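For example (a typical check on Linux/Solaris; this assumes the background processes are named ora_pmon_<SID>):
ps -ef | grep ora_pmon
The first column of the output is the OS account the instance runs under, typically oracle.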
Please note that if your OS/filesystem is using ACLs or Solaris Trusted Extensions (somewhat the equivalent of SELinux), things might be more complex than that, though.
With the help of Stack Overflow, I've been able to export a dump file of my database from my local machine. The command I used is as follows:
host expdp tkcsowner/tkcsowner@xe version=10.2 schemas=tkcsowner dumpfile=tnrg.dmp logfile=tnrg.log
Now, my local machine has the OS Windows 7, 32-bit. Hardly a server. It's got Oracle 11g. I want to transfer it to another machine, the test server, running Linux. It has Oracle 10g.
I am in no way a Linux / Unix expert, but I do have some instructions left for me by the previous person who handled such.
First, I change privileges to root user via 'su -' - No problems there.
Log in via 'sqlplus /nolog', and then 'connect sys/sys@xe as sysdba' - no problems there, either.
I created a logical dump directory (not sure if this step is needed, but I did it anyway):
create or replace directory dumpdir as 'usr/lib/oracle/xe/app/oracle/admin/XE/dpdump';
Done, no problems.
So I take it TNRG.dmp and tnrg.log should be inside that directory. Unfortunately, it could not be copied, for some reason. Access denied. I figured I should log out, log in as root, and copy the stuff from there. It worked, but just to be safe, I logged out of the root, logged back in as my normal user, and did everything above again. D'oh.
Finally, with all the stuff in place, now comes the time to import the .dmp and .log. Huzzah!
impdp tkcsowner/tkcsowner@xe schemas=tkcsowner dumpfile=TNRG.dmp logfile=tnrg.log
Lo and behold, it asks for a username and password. Is it because tkcsowner does not exist in the 10g database? Anyway, I put in 'system' for both. It continued, but warning bells were already going off in my head.
Suddenly:
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39002: invalid operation
ORA-39070: unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
At which point, I'm not sure how to proceed. I went into the directory via the command line and ls -l'ed the contents, which showed that both the .dmp and .log have three rwx's, for root. What I have yet to try is running the entire operation while logged in as root, but I'm not sure how that would change anything.
The directory that your dumpdir database directory object points to needs to be a valid existing directory - at least by the time you use it, it won't check or complain when you create the object - and it needs to be readable and writable by the user that Oracle is running under, which is usually oracle.
Your initial directory creation had 'usr/lib/oracle/... rather than '/usr/lib/oracle/..., but even with that corrected the directory might not be usable by the oracle account. Since you created the directory as root, it is probably still owned by root:root and with permissions 700 (if you do ls -ld /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump that will show as drwx------).
You need to change that to be owned by Oracle, using the correct owner and group - that's probably oracle:dba or oracle:oinstall, but check the owner of the XE directory. And then change the ownership of the directory and the files you copied into it:
chown -R oracle:dba /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump
and set the directory permissions to a suitable level; if you don't want anyone else to create or modify files, but you don't mind them seeing what's there, then something like:
chmod 755 /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump
If you want to be able to copy your .dmp file in as yourself (not root or oracle) and you aren't in the dba group then make it 777. You said the files you copied are 777, which is a little odd as they aren't executable, and could currently be removed by anyone; again to make them just readable:
chmod 644 /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump/*
You don't need the export log from the other system, though; just the dump file itself. The logfile parameter for impdp will create a log of the import process; since you used the same file name, it will overwrite the export log you copied across. That probably doesn't matter since you still have the original, but it's something to watch for in the future. It does mean the existing log file has to be writable by oracle, though.
You also need to make sure the Oracle owner has appropriate access to the whole directory tree, but it seems likely that they already own XE, so I don't think that's an issue here. You shouldn't really need to do any of this as root. If you don't have the oracle password, you can su to the account from root anyway, which removes the need to manually change ownership later.
The impdp command is initiated from outside Oracle (probably as root in your case) but mainly executed by the Oracle server processes. In particular, the dump and log files are directly accessed by the Oracle server processes (and not by the initiating command). As a result, the file protections need to be set such that the oracle user can access them.
So execute the following (as root) and try again:
chown -R oracle:oinstall /usr/lib/oracle/xe/app/oracle/admin/XE/dpdump
I have Oracle 10g installed on Windows in C:\oracle. If I stop all Oracle services, is it safe to back up by just copying the entire directory (e.g., to C:\oracle_bak), or am I significantly better off using expdp?
Pointers to docs/websites very welcome, I wasn't able to Google up anything relevant.
If your database is not running in archivelog mode, the answer is yes. Here are some scripts I use to back up and restore my database.
--- backup.bat ---
sqlplus "sys/passwd@database as sysdba" @shutdown.sql
xcopy C:\oracle\oradata\database\*.* C:\oracle\oradata\backup_database\*.* /Y
sqlplus "sys/passwd@database as sysdba" @startup.sql
---- shutdown.sql
shutdown immediate
exit;
---- startup.sql
startup
exit;
The restore script is similar; it just copies the files in the other direction.
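A sketch of the matching restore script (untested; it mirrors backup.bat with source and destination swapped):
--- restore.bat ---
sqlplus "sys/passwd@database as sysdba" @shutdown.sql
xcopy C:\oracle\oradata\backup_database\*.* C:\oracle\oradata\database\*.* /Y
sqlplus "sys/passwd@database as sysdba" @startup.sql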
You can just copy the data files (make sure you get the control files as well, and make sure you TEST your backups). However, you should probably be using RMAN.
The Oracle® Database Backup and Recovery Quick Start Guide would be a good place to start.
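For a database in noarchivelog mode, a minimal consistent RMAN backup might look like this (a sketch, not a complete strategy):
rman target /
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP DATABASE;
RMAN> ALTER DATABASE OPEN;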
A very simple backup method is to export the relevant schema using the exp tool. If e.g. all your tables exist in the schema MY_APP (roughly what MySQL users would call a database), you can dump all its tables to one file:
exp userid=MY_APP file=dumpfile.dmp log=logfile.txt consistent=y statistics=none buffer=1024000
Restoring the dumpfile into a second database works like this:
imp userid=system fromuser=MY_APP touser=MY_APP file=dumpfile.dmp commit=y buffer=102400
Or you can restore the tables from MY_APP to another schema in the same database
imp userid=system fromuser=MY_APP touser=MY_BACKUP file=dumpfile.dmp commit=y buffer=102400
Just create a new schema MY_BACKUP before the import:
create user MY_BACKUP identified by SECRET default tablespace USERS temporary tablespace temp;
grant connect, resource to MY_BACKUP;
Copy/paste does work, but you shouldn't simply copy the entire Oracle home; that is a lot more effort than is required.
You will first need to perform a log switch:
SET ORACLE_SID=mydb
sqlplus /nolog
CONNECT / AS SYSDBA
ALTER SYSTEM SWITCH LOGFILE;
Then place all your tablespaces into backup mode:
CONNECT / AS SYSDBA
ALTER TABLESPACE mytablespace BEGIN BACKUP;
(You can get your tablespaces by querying the DBA_TABLESPACES view; see the query sketch below.)
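For example, a query along these lines generates the required statements (temporary tablespaces are excluded because they cannot be placed in backup mode):
SELECT 'ALTER TABLESPACE ' || tablespace_name || ' BEGIN BACKUP;'
FROM dba_tablespaces
WHERE contents <> 'TEMPORARY';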
Then copy all your data files and redo log files to your backup location, and afterwards take each tablespace out of backup mode again with ALTER TABLESPACE mytablespace END BACKUP.
As to whether this method is safe: it depends on how you retain the data files and log files, and note that backup mode requires the database to be running in ARCHIVELOG mode. Of course, I should mention that RMAN is Oracle's proven and recommended mode of backup.