Facts
My server is running on AIX.
Some files (to be sent via sftp) are generated by Oracle and put into a CBSINTERFACE directory.
A different user created for sftp (let's say sftpusr) will access and get the files from CBSINTERFACE, and remove the files after the get.
Problem
sftpusr is unable to remove the files, since they are generated by Oracle and sftpusr does not have write permission on them.
What I did
I granted permissions on the CBSINTERFACE directory, but files created by Oracle still cannot be removed by sftpusr.
Requesting help to grant the right permissions to sftpusr.
"I granted permission CBSINTERFACE directory but still files created by oracle cannot be removed by sftpusr"
Not sure what you did here, but sftpusr (which is presumably an operating system user and not a database user) must have write permission on the directory through the operating system, not the database, since it interacts through the operating system. Privileges granted on a DIRECTORY object within the database apply only to database users.
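To make the mechanics concrete: on Unix-like systems (AIX included), deleting a file is an operation on the containing directory, not on the file itself. A minimal sketch, using an illustrative temp path in place of the real CBSINTERFACE:

```shell
# Removing a file requires write + execute permission on the DIRECTORY,
# not write permission on the file. Paths and names here are illustrative.
base=$(mktemp -d)
mkdir "$base/CBSINTERFACE"
chmod 770 "$base/CBSINTERFACE"            # rwx for owner and group, none for others

touch "$base/CBSINTERFACE/feed.dat"
chmod 444 "$base/CBSINTERFACE/feed.dat"   # file is read-only, as Oracle might leave it

# Deletion still works, because the directory is writable by us:
rm -f "$base/CBSINTERFACE/feed.dat"
[ ! -e "$base/CBSINTERFACE/feed.dat" ] && echo "removed"
```

So a workable fix is usually to put sftpusr in the group that owns CBSINTERFACE (or chgrp the directory to a shared group) and give that group write and execute on the directory, e.g. chmod g+wx CBSINTERFACE; the permissions Oracle sets on the individual files do not need to change.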
Related
I am trying the BFILE functionality in Oracle. My plan is that all the files should be stored on a file server, whose IP is 192.165.1.10.
Based on this I created a directory in my local PC database like this
create directory TEST_DIR as '\\192.165.1.10\c\ATTACH_FILES\STUDENT'
The directory is created. My doubt is: since my db system and the file server are in different locations, should I grant any other privileges in Oracle?
Please give your opinion, as BFILE is not working properly for me.
Note, my database server and file server are both Windows.
"My doubt is being my db system and file server in different locations "
That's a very good doubt to have. The database can only access OS directories on its local server, and directories which have been shared with that server. So you will need to share your file server directory using System Tools > Shared Folders > Shares.
As the database server is Windows, you will need to map the shared directory if it isn't mapped already. The mapping must be owned by the OS user that owns the Oracle database, or the mapping owner must grant permissions to the Oracle OS user (or its group), so that requires sysadmin access. You may also have to bounce the database.
I used this tutorial to export/import a schema. The steps in the tutorial work until the expdp command, see the screenshot:
I am using Oracle 12c. Any idea?
The article you linked to notes that:
The directory object is only a pointer to a physical directory, creating it does not actually create the physical directory on the file system of the database server.
You have to create the physical operating system directory separately, outside the database. That physical directory has to be readable and writable by the operating system user that is running the Oracle database; as you seem to be on Windows that will be the account the services are running under.
You can create the physical directory before or after creating the directory object; they are completely independent, and are only tied together when Oracle tries to access the path through UTL_FILE or related functionality. Data Pump uses UTL_FILE, as you can see from the error message stack.
The CREATE DIRECTORY statement doesn't check that the physical directory it points to exists, and you can delete or create the physical directory without Oracle noticing, as long as it is there and accessible when you try to use it.
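A rough file-system analogy for that last point: a symlink, like a directory object, is just a pointer that is only validated when it is actually used.

```shell
# Creating a pointer to a nonexistent target succeeds; the failure
# only shows up at access time. All paths here are illustrative.
work=$(mktemp -d)
ln -s "$work/does_not_exist" "$work/alias"    # creating the pointer succeeds anyway
cat "$work/alias" 2>/dev/null || echo "fails only when used"

mkdir "$work/does_not_exist"                  # create the target afterwards
[ -d "$work/alias" ] && echo "now resolves"
```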
From the Oracle documentation:
A directory object specifies an alias for a directory on the server file system ...
and
For file storage, you must also create a corresponding operating system directory, an Oracle Automatic Storage Management (Oracle ASM) disk group, or a directory within an Oracle ASM disk group. Your system or database administrator must ensure that the operating system directory has the correct read and write permissions for Oracle Database processes.
Privileges granted for the directory are created independently of the permissions defined for the operating system directory, and the two may or may not correspond exactly. For example, an error occurs if sample user hr is granted READ privilege on the directory object but the corresponding operating system directory does not have READ permission defined for Oracle Database processes.
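The OS-side half of that requirement can be sanity-checked from a shell. A minimal sketch, with an illustrative stand-in path; in practice you would run the check as the OS account that owns the Oracle processes:

```shell
# Sketch: verifying from the OS side that a physical directory is usable.
# The path is a stand-in for wherever your directory object points.
dpdir=$(mktemp -d)/dpdump
mkdir -p "$dpdir"
chmod 770 "$dpdir"

# Oracle needs read, write and search (execute) access at access time:
if [ -r "$dpdir" ] && [ -w "$dpdir" ] && [ -x "$dpdir" ]; then
    echo "usable by this OS account"
fi
```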
I'm trying to import a few users from a .dmp file on a net drive. Unfortunately it seems that I lack some rights to do so, since I get:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "\\net\drive\directory\placeholder\my_dump.dmp" for read
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 5) Access is denied.
I'm not sure why, because I can both access that directory, and for example save a txt file there.
The directory is saved in the database as '\net\drive\directory\placeholder'. The log file has another directory specified (not on the net drive).
Is there any workaround to import this dump without actually moving it to a local drive? The dump is really big and I don't have space for it (not even close), and neither can I (probably) change my rights on this mapped drive.
Also, I can't really make the dump smaller.
On one site I found this advice: "Remember, your OS user ID may not be the ID that is running a submitted RMAN job, in an operating system, UNIX, Linux or Windows."
The solution was to:
In the Control Panel services:
Right click on the service
Select "Properties"
Select "Logon"
Change the default user ID to an Oracle user with Windows administrator privileges
But I'm not sure what changing this would actually do to the server/database, and I'm working on a client's server so I don't want to act rashly. I also don't want to restart the database or the server itself.
Any help with what I should do?
The problem is that your Oracle instance is running under a different user account, which doesn't have access to the network drive.
Unless you want to run Oracle under a different account, you can give read access on your network share to the account the Oracle instance currently runs under (usually LocalSystem on Windows). Another option is to import the data from the source database via a database link (you won't need a dump file at all in that case).
I have an Oracle external table. There is an Oracle directory created for the external table to read its input CSV file. The DISCARD, LOG and BAD files of the external table will be created in the same directory.
When the corresponding directory on Unix has permissions "1770", the external table cannot read from or write to that directory. When the permissions are changed to "1777", it can.
I am not able to figure out what the issue is when the permissions are "1770". Please give me any hint about this weird behavior.
Please note that the Oracle schema user has READ and WRITE grants on that directory.
What user and group owns the operating system directory? What operating system user runs the Oracle database? What group(s) is that operating system user that runs Oracle in?
It sounds from your description that the operating system user that runs Oracle does not own the operating system directory and is not part of the group that owns it. In Unix, the leading 1 in a mode like "1770" is the sticky bit; the remaining digits grant privileges to the owning user (the first 7), the group (the second 7), and the public (the last digit, either 7 or 0 in your example). If changing the privileges associated with the public changes the behavior, that implies that the Oracle operating system user only has the privileges granted to the public on this directory.
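A small sketch of how those digits read (GNU stat shown here; on AIX you would inspect the mode with ls -ld or istat):

```shell
# How the four mode digits in 1770 / 1777 break down.
d=$(mktemp -d)

chmod 1770 "$d"
stat -c '%a' "$d"    # 1 = sticky bit, 7 = user rwx, 7 = group rwx, 0 = nothing for other

chmod 1777 "$d"
stat -c '%a' "$d"    # the final 7 now grants rwx to everyone else ("the public")
```

If the Oracle OS user is neither the owner nor a member of the group, only that last digit applies to it, which is why 1777 works where 1770 does not. The cleaner fix is to add the Oracle OS user to the directory's owning group rather than opening the directory to everyone.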
I am trying to open an existing physical file from the file system in a procedure in Oracle 10g. The bfilename function requires a directory and a file name. I want to pass the directory, such as "c:\abc", directly to the procedure, i.e. I don't want to pass a directory object. The reason is that the directory can change, and I don't know how to create or replace a directory inside a procedure (the command always returns an error saying that a variable is not allowed). I am also not sure how it works in a multi-user environment, because a new directory is not local to the procedure.
When I run the command:
bfilename('c:\abc', 'myfile.txt');
it returns an error that the directory does not exist. I have checked by ending the directory with "\", i.e. making it "c:\work\". I have also checked by capitalizing the directory name inside the procedure. If I make a directory object, say DOCUMENTS, and pass it to bfilename, then it is working.
bfilename('DOCUMENTS', 'myfile.txt');
Is there some way to make the directory part dynamic?
Update: I have tried to create the directory from inside the procedure, because this msdn article says that a directory object is a must. The code is as follows:
EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY WORKDIR AS ''c:\work''';
The procedure compiled successfully, but when run it gives the following error:
Insufficient privileges.
I have only one user in my test database; this user has the SYSDBA role. I have physical access to the file through the file system. The database user can create a directory through SQL*Plus.
Mark's answer solves your specific problem (you cannot use privileges gained through the SYSDBA role in procedures) but I want to discuss the underlying issues.
The relevant privilege is CREATE ANY DIRECTORY. All directory objects are owned by SYS, which is why this is a privilege which should be granted with caution.
Bear in mind that if you use plain CREATE DIRECTORY (without OR REPLACE), the second run of your stored procedure will fail because the directory already exists. There is no ALTER DIRECTORY syntax, so without OR REPLACE you need to drop and re-create the directory every time you want to change its path, which means also granting DROP ANY DIRECTORY to the user. And in a production environment, dropping and re-creating a directory means re-issuing the privileges granted on it to any other users who need it.
Why do Oracle make it so hard to work dynamically with directory objects? Because we shouldn't need to do it.
The OS directory structure is full of potential dangers for the database. Access to OS files from inside the database should be strictly controlled. That means we should specify particular directories for known purposes (DataPump, writing dumps and logfiles, importing CSV files, etc) and stick to them. Allowing procedures to change the paths of directories on the fly is a red flag for bad business process.
But sometimes directory objects are a real pain. For instance, I once worked on a system which generated millions of files. In order to spread them across the operating system without blowing the Unix inode limit, we had a tree structure based on the last two digits of the header ID and the penultimate two digits of the header ID. Something like this:
$OUT_FILES/whatever/00/00
$OUT_FILES/whatever/00/01
...
$OUT_FILES/whatever/99/98
$OUT_FILES/whatever/99/99
That would have been ten thousand directory objects per feed. Which is a lot. So we used the (deprecated) UTL_FILE_DIR parameter. This parameter is deprecated for at least three reasons:
It is set in the INIT.ORA file and changing it requires a database re-start.
There is no granularity. Every database user has privileges on any OS directory in the list
The security is laxer because it allows wildcards.
Also, it means we have to specify the full OS path whenever we need to read or write to a directory, which is fragile and error-prone.
However, if you're working on a toy database this parameter might solve your headache.
Just to be clear, I am not recommending that anybody use this parameter in a proper database unless they really understand the limitations. In most situations directory objects are safer and more convenient.
Try granting the CREATE ANY DIRECTORY privilege directly to the user (i.e., not through a role).
Something like:
grant create any directory to the_user;
The reason for that is that roles are disabled inside stored procedures, so your user needs an explicit direct grant.
Hope that helps.