How to solve "error in creating database file" in Red Hat Linux - Oracle

I have SQL scripts in a folder at
/root/Desktop/artifacts_2019-06-03_234105/db-core
as follows:
[oracle@ol7-122 ~]$ cd /root/Desktop/artifacts_2019-06-03_234105/db-core
[oracle@ol7-122 db-core]$ ll
total 5436
-rw-r--r--. 1 root root 3007 Jun 3 23:41 10_DBA_CreateEnv.sql
-rw-r--r--. 1 root root 1102 Jun 3 23:41 15_DBA_CreateBLOBTablespace.sql
When I try to execute them as the oracle system user, there is an error creating a database file at /root/Desktop/RSA:
SQL> @10_DBA_CreateEnv.sql;
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
CREATE TABLESPACE RSACOREDATA DATAFILE '/root/Desktop/RSA' SIZE 1024M REUSE AUTOEXTEND ON NEXT 100M
*
ERROR at line 1:
ORA-01119: error in creating database file '/root/Desktop/RSA'
ORA-27056: could not delete file
Linux-x86_64 Error: 21: Is a directory
I don't know what to do.
Can anyone help?
Thanks in advance.

create tablespace ... reuse
means that you want to ... well, reuse an existing file. You said that /root/Desktop/RSA is to be reused.
Oracle then complains that it cannot delete the file (in valid circumstances, you'd have to take the tablespace offline first), and Linux says why: Is a directory.
From my point of view, it seems that you are reusing a directory instead of a file. If that's so, specify a file name, not a directory name.
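For example, a corrected statement might look like this; the file name rsacoredata01.dbf is just an illustration (any file name inside a directory the database owner can write to will do):

CREATE TABLESPACE RSACOREDATA
  DATAFILE '/root/Desktop/RSA/rsacoredata01.dbf'
  SIZE 1024M REUSE AUTOEXTEND ON NEXT 100M;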

Related

Create a dictionary: Permission denied

I need help with a permission error while accessing a freshly created dictionary.
My source file for a dictionary is as follows:
$ ls -l /root/organization.csv
-rwxrwxrwx 1 clickhouse clickhouse 154 Jul 7 14:56 /root/organization.csv
$ cat /root/organization.csv
1,"a0001","研发部"
2,"a0002","产品部"
3,"a0003","数据部"
4,"a0004","测试部"
5,"a0005","运维部"
6,"a0006","规划部"
7,"a0007","市场部"
I create my dictionary as follows:
CREATE DICTIONARY test_flat_dict
(
id UInt64,
code String,
name String
)
PRIMARY KEY id
SOURCE(FILE(PATH '/root/organization.csv' FORMAT CSV))
LAYOUT(HASHED())
LIFETIME(0);
Then I try to test the dictionary with a simple SQL query:
SELECT * FROM test_flat_dict
But I'm getting the exception:
Received exception from server (version 21.6.3):
Code: 156. DB::Exception: Received from localhost:9000. DB::Exception: Failed to load dictionary 'eeedf011-4a41-4337-aeed-f0114a414337': std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in canonical: Permission denied [/root/organization.csv] [""],
What might be wrong with my dictionary?
As stated in the ClickHouse documentation:
When dictionary with source FILE is created via DDL command (CREATE
DICTIONARY ...), the source file needs to be located in user_files
directory, to prevent DB users accessing arbitrary file on ClickHouse
node.
I doubt that ClickHouse is able to retrieve files from the root home folder, even if your file has 777 mode.
So I would propose you put the data file under the user_files folder (it's in the root of the ClickHouse data folder).
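A minimal sketch, assuming the default user_files location of /var/lib/clickhouse/user_files (check the user_files_path setting in your server config if yours differs):

$ cp /root/organization.csv /var/lib/clickhouse/user_files/

DROP DICTIONARY IF EXISTS test_flat_dict;
CREATE DICTIONARY test_flat_dict
(
    id UInt64,
    code String,
    name String
)
PRIMARY KEY id
SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/organization.csv' FORMAT CSV))
LAYOUT(HASHED())
LIFETIME(0);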

How to get all XML file names from a directory using EXTERNAL TABLE

I am trying to get all XML file names present in a directory in order to feed them to a procedure which pulls data out of those files. Could anyone help with how I can get the file names using an EXTERNAL TABLE?
I am having trouble with the ACCESS PARAMETERS and the LOCATION file; I don't know what exactly should go there.
Thanks
CREATE TABLE S7303786.XML_FILES
(
FILE_NAME VARCHAR2(255 CHAR)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY AUTOACCEPT_XMLDIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
PREPROCESSOR AUTOACCEPT_XMLDIR:'list_file.sh'
FIELDS TERMINATED BY WHITESPACE
)
LOCATION ('list_file.sh')
)
REJECT LIMIT UNLIMITED;
list_file.sh just contains the directory where the files are present.
sticky.txt has nothing in it.
The errors I am getting are:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04004: error while reading file /home/transfer/stu/nshstrans/sticky.txt
The error you got might have something to do with the directory - the Oracle object which points to a physical directory on the database server's disk. It is created by a privileged user (SYS), who then grants read and/or write privileges on it to the users who will use it.
If you missed any of the above, your external table won't work.
So:
SQL> show user
USER is "SYS"
SQL>
SQL> create directory mydir as 'c:\temp';
Directory created.
SQL> grant read, write on directory mydir to scott;
Grant succeeded.
SQL>
Connect to Scott and create external table:
SQL> connect scott/tiger
Connected.
SQL> create table extusers
2 (username varchar2(20),
3 country varchar2(20)
4 )
5 organization external
6 (type oracle_loader
7 default directory mydir --> this is directory I created
8 access parameters
9 (records delimited by newline
10 fields terminated by ';'
11 missing field values are null
12 (username char(20),
13 country char(20)
14 )
15 )
16 location ('mydata.txt') --> name of the file that contains data
17 ) -- located in c:\temp, which is MYDIR
18 reject limit unlimited -- directory
19 /
Table created.
SQL>
Contents of the sample text file:
SQL> $type c:\temp\mydata.txt
Littlefoot;Croatia
Michel;France
Maaher;Netherlands
SQL>
Finally, let's select from the external table:
SQL> select * from extusers;
USERNAME COUNTRY
-------------------- --------------------
Littlefoot Croatia
Michel France
Maaher Netherlands
SQL>
Works OK, doesn't it? Now, try to do what I did.
On a second reading, it appears that you don't want to read file contents, but directory contents. If that's so - apparently, it is - then see whether this helps.
In order to make it work, the privileged user has to grant an additional privilege - EXECUTE - on the directory.
SQL> show user
USER is "SYS"
SQL> grant execute on directory mydir to scott;
Grant succeeded.
The next step is to create an operating system executable (on MS Windows, which I use, it is a .bat script; on Unix, that would be a .sh script, I think) which will list the directory contents. Note the first line - I have to navigate to the directory which is the source for the Oracle directory object. If you don't do that, it won't work. The .bat file is simple:
SQL> $type c:\temp\directory_contents.bat
cd c:\temp
dir /b *.txt
SQL>
Create external table:
SQL> create table extdir
2 (line varchar2(50))
3 organization external
4 (type oracle_loader
5 default directory mydir
6 access parameters
7 (records delimited by newline
8 preprocessor mydir:'directory_contents.bat'
9 fields terminated by "|" ldrtrim
10 )
11 location ('directory_contents.bat')
12 )
13 reject limit unlimited
14 /
Table created.
SQL> connect scott/tiger
Connected.
Let's see what it returns:
SQL> select * From extdir;
LINE
-----------------------------------------------
c:\Temp>dir /b *.txt
a.txt
dept.txt
emp.txt
emps.txt
externalfile1.txt
lab18.txt
mydata.txt
p.txt
parfile_01.txt
sofile.txt
test.txt
test2.txt
15 rows selected.
SQL>
Well ... yes, those are my .txt files located in c:\temp directory.
As you use *nix, I think the problem you got is related to the list_file.sh script. You didn't post its contents (which would probably help - not necessarily me, as I've forgotten almost everything I knew about *nix), but - per Preprocessing External Tables (written by Michael McLaughlin) - you might need to
prepend /usr/bin before the ls, find, and sed programs: /usr/bin/ls ...
See if it helps.
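For example, a sketch of what list_file.sh might need to contain; the directory path is taken from your error message and is only an assumption about where the XML files live:

#!/bin/sh
# navigate to the directory the Oracle directory object points at,
# then list the XML files using the absolute path to ls
cd /home/transfer/stu/nshstrans
/usr/bin/ls *.xml

Remember that the script itself must be executable (chmod +x list_file.sh).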

permission denied when inserting into table created by HCTAS

I am having trouble with Teradata's HCTAS procedure when I use it to create a table in Hadoop.
I call HCTAS to create the table,
CALL SYSLIB.HCTAS('test_table',null,null,'myserver','default');
*** Procedure has been executed.
but when I try to insert data into that table, I get a permission denied.
INSERT INTO test_table@myserver SELECT * FROM test_table;
*** Failure 7810 [TblOp] Permission denied: user=myuser, access=WRITE,
inode="/apps/hive/warehouse/test_table":hive:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
at org.apache.hadoop.hdfs.s.
Statement# 1, Info =0
I checked Hadoop and found that the directory was created with owner as 'hive' instead of 'myuser'.
drwxr-xr-x - hive hdfs 0 2015-08-05 21:45 /apps/hive/warehouse/test_table
What should I do so that the directories will be created with 'myuser' as the owner?
Thanks
The 3rd parameter is used to specify the directory; try
CALL SYSLIB.HCTAS('test_table',null,'LOCATION "/usr/myuser"','myserver','default');
As you can see, this is a write-permissions problem: myuser does not have permission to write in /apps/hive/warehouse/.
On the other hand, as you can see here, you can specify a location for the table where you are sure that myuser has write permissions (a personal folder in HDFS, maybe). This way you will not have write-permission problems.
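Putting both suggestions together, a sketch, assuming /usr/myuser exists in HDFS and is writable by myuser:

CALL SYSLIB.HCTAS('test_table',null,'LOCATION "/usr/myuser"','myserver','default');
INSERT INTO test_table@myserver SELECT * FROM test_table;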

create table in postgresql fails - tablespace issue

I am trying to create the following table in PostgreSQL
CREATE TABLE retail_demo.categories_dim_hawq
(
category_id integer NOT NULL,
category_name character varying(400) NOT NULL
)
WITH (appendonly=true, compresstype=quicklz) DISTRIBUTED RANDOMLY;
I am getting the following error:
ERROR: cannot get table space location for content 0 table space 1663
(catalog.c:97)
I tried to create a new tablespace and got the following:
ERROR: syntax error at or near "LOCATION" LINE 1: create TABLESPACE
moha LOCATION "/tmp/abc";
Thanks in advance,
Moha.
I got the answer.
You'll need to create a filespace, a tablespace, and a database, and then create the table. To do this, follow these steps:
1. If you are on the default database (using the psql command), you can exit to the root DB user (gpadmin) using CTRL + D.
2. Run gpfilespace -o .
3. Enter the name of the filespace: hawqfilespace3
4. Choose the filesystem name for this filespace: hdfs
5. Enter the replica number for the filespace: 0
6. Specify the HDFS location for the segments: bigdata01.intrasoft.com.jo:8020/xd
Note that /xd is one of the Hadoop directories which has read/write access.
7. The system will generate a configuration command for you.
8. Copy and paste the command and press Enter to execute it.
9. The filespace is now created successfully.
10. Now connect to the database using the psql command.
11. Now create a tablespace on the filespace you created:
create TABLESPACE hawqtablespace3 FILESPACE hawqfilespace3;
12. Create a database on this tablespace using the command:
CREATE DATABASE hawqdatabase3 WITH OWNER gpadmin TEMPLATE=template0 TABLESPACE hawqtablespace3;
13. Now you need to connect to the database you created, but first press CTRL + D to exit the user you are in.
14. Enter the command psql hawqdatabase3
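Once connected to hawqdatabase3, the original statement should now succeed, assuming the retail_demo schema exists in the new database:

CREATE TABLE retail_demo.categories_dim_hawq
(
category_id integer NOT NULL,
category_name character varying(400) NOT NULL
)
WITH (appendonly=true, compresstype=quicklz) DISTRIBUTED RANDOMLY;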

Oracle 11g. Unable to import dump files, even though schema is created

I have created a user in Oracle 11gR2, using the following script
create user cata
identified by cata
default tablespace tbs
temporary tablespace temp;
grant DBA to cata;
After trying to import a dump file using the command
impdp system/password@ORCL11 schemas=cata dumpfile=cata.dmp logfile=log.txt
I'm getting the following error:
ORA-39002: invalid operation
ORA-39165: Schema ATGDB_CATA was not found.
Surprisingly, when I try to export a dump from the same schema, I'm able to do that. So, if the schema was not created properly, then I should not be able to export the dump file either, right?
I have also checked dba_users and the schema is created. Is there anything else I can do to resolve this problem?
From the error message, I guess that the original schema name was "ATGDB_CATA".
As you are now trying to import into a schema named "cata", you need to specify the remap_schema parameter.
So for your case:
impdp system/password@ORCL11 schemas=atgdb_cata dumpfile=cata.dmp logfile=log.txt remap_schema=atgdb_cata:cata
Grant the READ and WRITE privileges on the directory you created to the new user, e.g.:
GRANT READ, WRITE ON DIRECTORY dir_name TO new_user;
Also grant the following role to the new user:
GRANT IMP_FULL_DATABASE TO new_user;
Thanks!
NC
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
SOLUTION:
create or replace directory test_dir as 'FOLDER_NAME';
That 'FOLDER_NAME' must contain the dump file.
Step 1:
Create a folder SAMPLE under oracle_installed_path/sql/SAMPLE and put the dump file into that SAMPLE folder.
Go to bin, execute ./sqlplus
and log in.
SQL> create or replace directory test_dir as 'SAMPLE';
SQL> GRANT READ, WRITE on directory test_dir to USER;
SQL> GRANT IMP_FULL_DATABASE to USER;
exit
Then run impdp to import the dump.
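A sketch of the final import command, reusing the names from earlier in this thread (adjust the credentials and connect string to your environment):

impdp system/password@ORCL11 directory=test_dir dumpfile=cata.dmp logfile=log.txt remap_schema=atgdb_cata:cata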
