Creating a directory directly from Oracle [duplicate] - oracle

How do you create a physical directory on the OS from within PL/SQL? I looked at the CREATE OR REPLACE DIRECTORY command but that doesn't do it. Neither does UTL_FILE appear to be capable.

In the end I did find an easier solution. Use
select os_command.exec('mkdir /home/oracle/mydir') from dual;
or simply
x := os_command.exec('mkdir /home/oracle/mydir');

UTL_FILE still lacks this capability - probably a holdover from the pre-DIRECTORY object days where you had to explicitly define the OS file directories you could access in a startup parameter, so there was no need to create directories dynamically anyway.
I think the easiest way to do this is with an Oracle Java stored procedure that uses:
File f = new File(dirname);
return (f.mkdir()) ? 1 : 0;
If you go this route make sure that you use dbms_java.grant_permission to grant java.io.FilePermission to the user that owns the executing code.
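If you take this route, the PL/SQL side might look roughly like the sketch below - a sketch only: the DirUtil class name, the SCOTT schema and the path are assumptions, while dbms_java.grant_permission and the Java call-spec syntax are the standard pieces.
-- as a privileged user: allow SCOTT's Java code to touch the target path (assumed values)
BEGIN
  dbms_java.grant_permission('SCOTT', 'SYS:java.io.FilePermission', '/home/oracle/mydir', 'read,write');
END;
/
-- call specification wrapping the (assumed) DirUtil.mkdir Java method shown above
CREATE OR REPLACE FUNCTION make_os_dir(p_dir IN VARCHAR2) RETURN NUMBER
AS LANGUAGE JAVA NAME 'DirUtil.mkdir(java.lang.String) return int';
/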

I believe the only way to do this is to use an external procedure (C or Java) and call it through PL/SQL. PL/SQL itself does not have the means to create the physical OS directory.
PL/SQL Tips provides a good example of how to create a C external procedure that executes shell commands. Note that I would not consider it best practice to allow this for security reasons.
If you can create the directory on the OS first, then you can use
create or replace directory myDir as '<path-to-dir>/myDir';
Note that you will need to have the CREATE ANY DIRECTORY privilege assigned to the user executing the command. After the directory is created with the command above, be sure to assign any needed privileges on the directory to other users.
grant read, write on directory myDir to testUsers;

I just checked the new docs for database version 11.2, and there's still no routine I can find to create a directory. So, like the other respondents, I recommend using a Java or C routine.

You can execute OS commands from within Oracle using DBMS_SCHEDULER or an internal Java procedure - for example, using my XT_SHELL package:
Install it using install.sql.
Execute an OS command using xt_shell.shell_exec(pCommand in varchar2, timeout in number) in SQL or PL/SQL:
SQL> select * from table(xt_shell.shell_exec('/bin/mkdir /tmp/test-dir',1000));
COLUMN_VALUE
--------------------------------------------------------------------------------
SQL> select * from table(xt_shell.shell_exec('/bin/mkdir /tmp/test-dir/test-dir2',1000));
COLUMN_VALUE
--------------------------------------------------------------------------------
SQL> select * from table(xt_shell.shell_exec('/bin/ls -l /tmp/test-dir',1000));
COLUMN_VALUE
--------------------------------------------------------------------------------
total 4
drwxr-xr-x 2 oracle oinstall 4096 Apr 19 12:14 test-dir2
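The DBMS_SCHEDULER route mentioned above avoids installing anything extra: an external job can run the same mkdir. A minimal sketch, assuming the caller has the CREATE EXTERNAL JOB privilege; the job name and path are placeholders:
BEGIN
  dbms_scheduler.create_job(
    job_name            => 'MAKE_OS_DIR',       -- placeholder job name
    job_type            => 'EXECUTABLE',
    job_action          => '/bin/mkdir',
    number_of_arguments => 1,
    enabled             => FALSE);
  dbms_scheduler.set_job_argument_value('MAKE_OS_DIR', 1, '/home/oracle/mydir');
  dbms_scheduler.enable('MAKE_OS_DIR');
END;
/
Keep in mind the command runs on the database server, under whatever OS credentials are configured for external jobs.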

Related

Import data from file.txt to table Oracle SQL using PL/SQL

I'm trying to read a .txt file from c:\Dir and insert its contents into an Oracle SQL table.
set SERVEROUTPUT ON
CREATE OR REPLACE DIRECTORY MYDIR AS ' C:\dir';
DECLARE
vInHandle utl_file.file_type;
eNoFile exception;
PRAGMA exception_init(eNoFile, -29283);
BEGIN
BEGIN
vInHandle := utl_file.Fopen('MYDIR','attachment.txt','R');
dbms_output.put_line('The File exists');
EXCEPTION
WHEN eNoFile THEN
dbms_output.put_line('The File not exists');
END;
END fopen;
/
I get 'The File not exists', but the file does exist.
I don't know whether the space in front of the directory name in the first statement you posted makes a difference (or whether it is just a typo), but - nonetheless - here's how it is usually done.
Create directory on hard disk:
C:\>mkdir c:\dir
Connect to the database as SYS (as it owns the database, as well as directories); create directory (Oracle object) and grant privileges to user which will use that directory:
C:\>sqlplus sys as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Thu Mar 5 18:34:43 2020
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
SQL> create or replace directory mydir as 'c:\dir';
Directory created.
SQL> grant read, write on directory mydir to scott;
Grant succeeded.
SQL>
You don't need this, as you already have the file; I'll create it by spooling table contents.
SQL> connect scott/tiger
Connected.
SQL> spool c:\dir\example.txt
SQL> select * From dept;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SQL> spool off;
SQL> $dir c:\dir\*.txt
Volume in drive C is OSDisk
Volume Serial Number is 7635-F892
Directory of c:\dir
05.03.2020. 18:39 539 example.txt
1 File(s) 539 bytes
0 Dir(s) 290.598.363.136 bytes free
SQL>
Finally, reusing code you wrote:
SQL> set serveroutput on
SQL>
SQL> DECLARE
2 vInHandle utl_file.file_type;
3 eNoFile exception;
4 PRAGMA exception_init(eNoFile, -29283);
5 BEGIN
6 BEGIN
7 vInHandle := utl_file.Fopen('MYDIR','example.txt','R');
8 dbms_output.put_line('The File exists');
9 EXCEPTION
10 WHEN eNoFile THEN
11 dbms_output.put_line('The File not exists');
12 END;
13 END fopen;
14 /
The File exists
PL/SQL procedure successfully completed.
SQL>
Works properly (congratulations, you wrote code that actually works!).
So, what have you done wrong?
as I said, the space in front of C:\dir: CREATE OR REPLACE DIRECTORY MYDIR AS ' C:\dir';
the database isn't on your computer but on a separate database server
which means that you probably did create the directory, but it points to the c:\dir directory on the database server, not on your own PC!
As Boneist commented, it is possible to create a directory (Oracle object) on a computer which is NOT the database server, but that's not something we usually do. If you opt for this option, you'll have to use a UNC (Universal Naming Convention) path when creating the directory.
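For illustration only (the host and share names here are made up), such a directory would be created with something like:
create or replace directory mydir as '\\my_client_pc\shared_dir';
and the OS account the database runs under must be able to reach that share.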
Another option you might want to consider is to use SQL Loader. It is an operating system utility, installed along with the database or (full, not instant) client software. Its advantage is that it runs on your local PC (i.e. you don't have to have access to the database server) and is extremely fast. You'd create a control file which tells Oracle how to load data stored in the source (.txt) file.
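Just as a rough illustration of the idea - the table name, columns and delimiter below are assumptions, not taken from your file - a minimal control file and the client-side call could look like:
load data
infile 'c:\dir\attachment.txt'
append into table attachment_stage   -- assumed target table
fields terminated by ','             -- assumed delimiter
(col1, col2)
and then, from the client PC:
C:\> sqlldr scott/tiger control=attachment.ctl log=attachment.log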
Another option, which - in the background - uses SQL Loader, is to use an external table. It is yet another Oracle object which points to the source (.txt) file and allows you to access it using a simple SQL SELECT statement. Possible drawback: it still requires access to the Oracle directory (just like your UTL_FILE option).
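And for completeness: once MYDIR points somewhere the database server can actually see, reading the file back row by row is just a GET_LINE loop. A minimal sketch - the staging_tab table and its line_text column are assumptions:
DECLARE
  vInHandle utl_file.file_type;
  vLine     varchar2(32767);
BEGIN
  vInHandle := utl_file.fopen('MYDIR', 'attachment.txt', 'R');
  LOOP
    BEGIN
      utl_file.get_line(vInHandle, vLine);
    EXCEPTION
      WHEN no_data_found THEN EXIT;              -- end of file reached
    END;
    INSERT INTO staging_tab (line_text) VALUES (vLine);   -- assumed table and column
  END LOOP;
  utl_file.fclose(vInHandle);
  COMMIT;
END;
/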

How to get all XML file names from a directory using EXTERNAL TABLE

I am trying to get all XML file names present in a directory in order to feed them to a procedure which pulls data out of those files. Could anyone help with how I can get the file names using an EXTERNAL TABLE?
I am having trouble with the ACCESS PARAMETERS and the LOCATION file; I don't know what exactly should go there.
Thanks
CREATE TABLE S7303786.XML_FILES
(
FILE_NAME VARCHAR2(255 CHAR)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY AUTOACCEPT_XMLDIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
PREPROCESSOR AUTOACCEPT_XMLDIR: 'list_file.sh'
FIELDS TERMINATED BY WHITESPACE
)
LOCATION ('list_file.sh')
)
REJECT LIMIT UNLIMITED;
list_files.sh just contains the directory where the files are present.
sticky.txt has nothing in it
The errors I am getting are:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04004: error while reading file /home/transfer/stu/nshstrans/sticky.txt
The error you got might have something to do with the directory - the Oracle object which points to a physical directory on the database server's disk. It is created by a privileged user - SYS - who then grants read and/or write privileges on it to the users who will use it.
If you missed any of those steps, your external table won't work.
So:
SQL> show user
USER is "SYS"
SQL>
SQL> create directory mydir as 'c:\temp';
Directory created.
SQL> grant read, write on directory mydir to scott;
Grant succeeded.
SQL>
Connect to Scott and create external table:
SQL> connect scott/tiger
Connected.
SQL> create table extusers
2 (username varchar2(20),
3 country varchar2(20)
4 )
5 organization external
6 (type oracle_loader
7 default directory mydir --> this is directory I created
8 access parameters
9 (records delimited by newline
10 fields terminated by ';'
11 missing field values are null
12 (username char(20),
13 country char(20)
14 )
15 )
16 location ('mydata.txt') --> name of the file that contains data
17 ) -- located in c:\temp, which is MYDIR
18 reject limit unlimited -- directory
19 /
Table created.
SQL>
Contents of the sample text file:
SQL> $type c:\temp\mydata.txt
Littlefoot;Croatia
Michel;France
Maaher;Netherlands
SQL>
Finally, let's select from the external table:
SQL> select * from extusers;
USERNAME COUNTRY
-------------------- --------------------
Littlefoot Croatia
Michel France
Maaher Netherlands
SQL>
Works OK, doesn't it? Now, try to do what I did.
On a second reading, it appears that you don't want to read file contents, but directory contents. If that's so - apparently, it is - then see whether this helps.
In order to make it work, a privileged user has to grant an additional privilege - EXECUTE - on the directory.
SQL> show user
USER is "SYS"
SQL> grant execute on directory mydir to scott;
Grant succeeded.
The next step is to create an operating system executable (on MS Windows, which I use, it is a .bat script; on Unix that would be an .sh script, I think) which will list the directory. Note the first line - I have to navigate to the directory which is the source for the Oracle directory object. If you don't do that, it won't work. The .bat file is simple:
SQL> $type c:\temp\directory_contents.bat
cd c:\temp
dir /b *.txt
SQL>
Create external table:
SQL> create table extdir
2 (line varchar2(50))
3 organization external
4 (type oracle_loader
5 default directory mydir
6 access parameters
7 (records delimited by newline
8 preprocessor mydir:'directory_contents.bat'
9 fields terminated by "|" ldrtrim
10 )
11 location ('directory_contents.bat')
12 )
13 reject limit unlimited
14 /
Table created.
SQL> connect scott/tiger
Connected.
Let's see what it returns:
SQL> select * From extdir;
LINE
-----------------------------------------------
c:\Temp>dir /b *.txt
a.txt
dept.txt
emp.txt
emps.txt
externalfile1.txt
lab18.txt
mydata.txt
p.txt
parfile_01.txt
sofile.txt
test.txt
test2.txt
15 rows selected.
SQL>
Well ... yes, those are my .txt files located in c:\temp directory.
As you use *nix, I think the problem you got is related to the list_files.sh script. You didn't post its contents (which would probably help - though not necessarily help me, as I've forgotten almost everything I knew about *nix), but - per Preprocessing External Tables (written by Michael McLaughlin) - you might need to
prepend /usr/bin before the ls, find, and sed programs: /usr/bin/ls ...
See if it helps.
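For what it's worth, a *nix preprocessor along those lines could be as small as this - a sketch only, using the directory path from your error message and assuming ls lives in /usr/bin (adjust if it doesn't); the script must be executable by the oracle OS user:
#!/bin/sh
cd /home/transfer/stu/nshstrans
/usr/bin/ls *.xml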

Writing a file to a custom created directory on Oracle Amazon-RDS

I can connect to the database via sqlplus
sqlplus stepdba/<password>@steprds.<rds-hash-here>.<region>.rds.amazonaws.com:1521/STEP
and I am trying to write to a file.
According to the Amazon RDS documentation for Oracle, creating a directory must be done with rdsadmin.rdsadmin_util.create_directory('MY_DIR'), which I have done.
To write to a file, I do the following:
DECLARE
fileHandler UTL_FILE.FILE_TYPE;
BEGIN
fileHandler := UTL_FILE.FOPEN('MY_DIR', 'test.txt', 'W');
UTL_FILE.PUTF(fileHandler, 'Writing TO a file\n');
UTL_FILE.FCLOSE(fileHandler);
END;
/
Which results in an error:
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at line 4
If I try to write to an Oracle provided directory DATA_PUMP_DIR, the above snippet executes correctly and the file is written.
The privileges on the two directories are the same:
select grantee, privilege from dba_tab_privs where table_name='DATA_PUMP_DIR' and grantee = 'STEPDBA';
select grantee, privilege from dba_tab_privs where table_name='MY_DIR' and grantee = 'STEPDBA';
In the Amazon RDS case, we cannot manipulate the file/directory permissions at the OS level.
I seem to be missing something; any hint would be appreciated.
Hi, I had exactly the same problem. I solved it by using a higher version of the Oracle software: Oracle SE One 11.2.0.4.v4.
The one that was causing the problems was Oracle SE One 11.2.0.4.v3.

ORA-29283: invalid file operation ORA-06512: at "SYS.UTL_FILE", line 536

Below is the code I use to extract data from a table to a flat file.
BEGIN
DECLARE
file_name VARCHAR2(50);
file_handle utl_file.file_type;
BEGIN
file_name := 'table.txt';
file_handle := utl_file.fopen('SEND',file_name,'W');
FOR rec in(
SELECT column1
||'~'||column2
||'~'||column3 out_line
FROM table1)LOOP
UTL_FILE.PUT_LINE(file_handle,rec.out_line);
UTL_FILE.FFLUSH(file_handle);
END LOOP;
UTL_FILE.FCLOSE(file_handle);
END;
end;
This code works fine in our development database, but it throws the error below when I execute it in a new DB.
Error report:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at line 7
29283. 00000 - "invalid file operation"
*Cause: An attempt was made to read from a file or directory that does
not exist, or file or directory access was denied by the
operating system.
*Action: Verify file and directory access privileges on the file system,
and if reading, verify that the file exists.
Oracle directory 'SEND' points to some UNIX directory which has rights as
'rwxrwsr-x' (Octal 2775)
Oracle Version:11g
Please help me to solve this issue, and let me know if you need more information from me.
So, @Vivek has got the solution to the problem through a dialogue in the comments rather than through an actual answer.
"The file is being created by user oracle just noticed this in our development database. i'm getting this error because, the directory where i try to create the file doesn't have write access for others and user oracle comes under others category. "
In the absence of an accepted answer to this question I proffer a link to an answer of mine on the topic of UTL_FILE.FOPEN(). Find it here.
P.S. I'm marking this answer Community Wiki, because it's not a proper answer to this question, just a redirect to somewhere else.
Assuming the file is already created in the predefined directory with the name "table.txt":
1) Change the ownership of the file:
sudo chown username:username table.txt
2) Change the mode of the file:
sudo chmod 777 table.txt
Now try it; it should work!
On Windows, also check whether the file is encrypted using EFS. I had the same problem until I decrypted the file manually.
I had been facing this problem for two days, and I found that the directory you create in Oracle also needs to be created first on your physical disk.
I didn't find this point mentioned anywhere I looked while trying to find a solution.
Example
If you created a directory, let's say, 'DB_DIR'.
CREATE OR REPLACE DIRECTORY DB_DIR AS 'E:\DB_WORKS';
Then you need to ensure that DB_WORKS exists in your E:\ drive and also file system level Read/Write permissions are available to the Oracle process.
My understanding of UTL_FILE from my experiences is given below for this kind of operation.
UTL_FILE is a package owned by SYS. GRANT EXECUTE ON SYS.UTL_FILE TO PUBLIC; (or to a specific user) needs to be issued while logged in as SYS; otherwise the procedure will fail to compile with a declaration error.
Any user holding the CREATE ANY DIRECTORY privilege can create a directory as shown: CREATE OR REPLACE DIRECTORY DB_DIR AS 'E:\DBWORKS'; The privilege itself is granted while logged in as SYS: GRANT CREATE ANY DIRECTORY TO user;
If the directory needs to be used by another user, grants need to be given to that user, otherwise it will throw an error: GRANT READ, WRITE, EXECUTE ON DIRECTORY DB_DIR TO user; issued while logged in as the user who created the directory.
Then compile your package.
Before executing the procedure, ensure that the directory exists physically on your disk; otherwise it will throw an 'Invalid File Operation' error. (Very important.)
Ensure that file-system-level read/write permissions are in place for the Oracle process; this is separate from the DB-level permissions granted. (Very important.)
Execute the procedure. The file should get populated with the result set of your query.
The ORA-29283: invalid file operation is also raised on utl_file.put if there is an attempt to write line longer than max_linesize in text mode. max_linesize is optional 4th parameter of utl_file.fopen function defaulting to 1024.
(My case was dumping a CSV from within Oracle running in Docker into a file in a host directory mapped as a Docker volume, and I was misled by this error for quite some time - I looked for the cause in filesystem rights or in the volume mapping between Docker and the host, when the actual cause was that simple.)
UPDATE: another occurrence of the same exception happened on utl_file.fopen. The database refused to create the file even though the file did not exist before. The directory in which the file creation was attempted was mapped to a Docker volume. It started to work once a zero-sized file was created on the host machine in advance; an attempt to create the file from within the container (touch /dir/file) still failed, though. Perhaps some Docker issue - it disappeared after restarting Docker Desktop.
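A quick illustration of that fourth parameter - the directory and file names here are made up:
DECLARE
  fh utl_file.file_type;
BEGIN
  -- raise the per-line limit from the default 1024 bytes to the maximum of 32767
  fh := utl_file.fopen('MY_DIR', 'wide_lines.csv', 'W', max_linesize => 32767);
  utl_file.put_line(fh, rpad('x', 5000, 'x'));   -- longer than 1024 bytes, fine with the larger limit
  utl_file.fclose(fh);
END;
/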
You need to create the directory object and grant permissions on it:
create or replace directory DINESH as '/home/oracle/DINESH/';
grant read, write
on directory DINESH
to public;
-- Simple PL/SQL to open a file,
-- write two lines into the file,
-- and close the file
declare
fhandle utl_file.file_type;
begin
fhandle := utl_file.fopen(
'DINESH' -- File location
, 'test_file.txt' -- File name
, 'w' -- Open mode: w = write.
);
utl_file.put(fhandle, 'Hello world!'|| CHR(10));
utl_file.put(fhandle, 'Hello again!');
utl_file.fclose(fhandle);
exception
when others then
dbms_output.put_line('ERROR: ' || SQLCODE || ' - ' || SQLERRM);
raise;
end;
test_file.txt file created in /home/oracle/DINESH.

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So, I need to inspect the .dmp file and list all of the schemas in it, how do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR
dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command to be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp exports the DDL of a dmp backup to a file if you use the SQLFILE parameter. For example, put this into a text file
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My Solution: Social engineering, I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, it also had the command line used to create the dump, directories used, par files used, windows version it was run on, and export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To run the impdp command to produce a SQL file, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low privileged user and use the MASTER_ONLY=YES option, then inspect the master table. e.g.
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql)
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
Lot of datapump options explained here http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u
cat -v turns the dumpfile into visible text.
grep -o shows only the match, so we don't see really long lines.
uniq -u removes duplicate lines, so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
My solution (similar to KyleLanser's answer) (on a Unix box):
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers I came up with this expression that should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested on a DMP file from Oracle version 19.8.
