I am having trouble with Teradata's HCTAS procedure when I use it to create a table in Hadoop.
I call HCTAS to create the table,
CALL SYSLIB.HCTAS('test_table',null,null,'myserver','default');
*** Procedure has been executed.
but when I try to insert data into that table, I get a permission-denied error.
INSERT INTO test_table@myserver SELECT * FROM test_table;
*** Failure 7810 [TblOp] Permission denied: user=myuser, access=WRITE, inode="/apps/hive/warehouse/test_table":hive:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
        at org.apache.hadoop.hdfs.s.
Statement# 1, Info =0
I checked Hadoop and found that the directory was created with owner as 'hive' instead of 'myuser'.
drwxr-xr-x - hive hdfs 0 2015-08-05 21:45 /apps/hive/warehouse/test_table
What should I do so that the directories will be created with 'myuser' as the owner?
Thanks
The third parameter is used to specify the table's location; try
CALL SYSLIB.HCTAS('test_table',null,'LOCATION "/usr/myuser"','myserver','default');
As you can see, this is a write-permissions problem: myuser does not have permission to write in /apps/hive/warehouse/.
On the other hand, you can specify a location for the table where you are sure that myuser has write permissions (a personal folder in HDFS, for example). That way you will not run into write-permission problems.
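For example, a minimal sketch, assuming myuser's HDFS home directory /user/myuser exists and is owned by myuser (adjust the path to wherever myuser can write):
CALL SYSLIB.HCTAS('test_table', null, 'LOCATION "/user/myuser/test_table"', 'myserver', 'default');
INSERT INTO test_table@myserver SELECT * FROM test_table;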
I need help with a permission error while accessing a freshly created dictionary.
My source file for a dictionary is as follows:
$ ls -l /root/organization.csv
-rwxrwxrwx 1 clickhouse clickhouse 154 Jul 7 14:56 /root/organization.csv
$ cat /root/organization.csv
1,"a0001","研发部"
2,"a0002","产品部"
3,"a0003","数据部"
4,"a0004","测试部"
5,"a0005","运维部"
6,"a0006","规划部"
7,"a0007","市场部"
I create my dictionary as follows:
CREATE DICTIONARY test_flat_dict
(
id UInt64,
code String,
name String
)
PRIMARY KEY id
SOURCE(FILE(PATH '/root/organization.csv' FORMAT CSV))
LAYOUT(HASHED())
LIFETIME(0);
Then I try to test the dictionary with a simple SQL query:
SELECT * FROM test_flat_dict
But I'm getting the exception:
Received exception from server (version 21.6.3):
Code: 156. DB::Exception: Received from localhost:9000. DB::Exception: Failed to load dictionary 'eeedf011-4a41-4337-aeed-f0114a414337': std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in canonical: Permission denied [/root/organization.csv] [""],
What might be wrong with my dictionary?
As stated in the ClickHouse documentation:
When dictionary with source FILE is created via DDL command (CREATE
DICTIONARY ...), the source file needs to be located in user_files
directory, to prevent DB users accessing arbitrary file on ClickHouse
node.
I doubt that ClickHouse is able to retrieve files from the root home folder, even if the file has mode 777.
So I would propose putting the data file under the user_files folder (it sits in the root of the ClickHouse data folder).
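A minimal sketch, assuming a default installation where the user files directory is /var/lib/clickhouse/user_files (check the user_files_path setting in config.xml) and organization.csv has been copied there:
DROP DICTIONARY IF EXISTS test_flat_dict;
CREATE DICTIONARY test_flat_dict
(
    id UInt64,
    code String,
    name String
)
PRIMARY KEY id
SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/organization.csv' FORMAT CSV))
LAYOUT(HASHED())
LIFETIME(0);
SELECT * FROM test_flat_dict;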
I have SQL scripts in a folder at the location
/root/Desktop/artifacts_2019-06-03_234105/db-core
as follows:
[oracle@ol7-122 ~]$ cd /root/Desktop/artifacts_2019-06-03_234105/db-core
[oracle@ol7-122 db-core]$ ll
total 5436
-rw-r--r--. 1 root root 3007 Jun 3 23:41 10_DBA_CreateEnv.sql
-rw-r--r--. 1 root root 1102 Jun 3 23:41 15_DBA_CreateBLOBTablespace.sql
When I try to execute them as the Oracle system user, there is an error creating a database file at the location /root/Desktop/RSA:
SQL> @10_DBA_CreateEnv.sql;
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
CREATE TABLESPACE RSACOREDATA DATAFILE '/root/Desktop/RSA' SIZE 1024M REUSE AUTOEXTEND ON NEXT 100M
*
ERROR at line 1:
ORA-01119: error in creating database file '/root/Desktop/RSA'
ORA-27056: could not delete file
Linux-x86_64 Error: 21: Is a directory
I don't know what to do. Can anyone help?
Thanks in advance.
create tablespace ... reuse
means that you want to ... well, reuse an existing file. You said that /root/Desktop/RSA is to be reused:
Oracle then complains that it cannot delete the file;
in valid circumstances, you'd have to take the tablespace offline first.
Linux says: Is a directory.
From my point of view, it seems that you are reusing a directory instead of a file. If that's so, specify a file name, not a directory name.
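A minimal sketch, with REUSE dropped since the file does not exist yet; the datafile name below is only an illustration:
CREATE TABLESPACE RSACOREDATA
  DATAFILE '/root/Desktop/RSA/rsacoredata01.dbf' SIZE 1024M
  AUTOEXTEND ON NEXT 100M;
Keep in mind that the oracle OS user also needs write permission on that directory, which is unlikely for anything under /root; a path owned by oracle would be a safer choice.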
All,
I am new and trying a few hands-on use cases.
I have a file in HDFS and want to load it into an Impala table.
-- File location on hdfs : hdfs://xxx/user/hive/warehouse/impala_test
-- Table : CREATE TABLE impala_test_table
(File_Format STRING ,Rank TINYINT, Splitable_ind STRING )
Row format delimited
Fields terminated by '\,'
STORED AS textfile;
-- Load syntax in impala-shell : Load data inpath 'hdfs://xxx/user/hive/warehouse/impala_test' into table impala_test_table;
P.S.: I am able to load it successfully with the Hive shell.
ERROR: AccessControlException: Permission denied by sticky bit: user=impala, path="/user/hive/warehouse/impala_test":uabc:hive:-rwxrwxrwx, parent="/user/hive/warehouse":hive:hive:drwxrwxrwt at ......
All permissions (777) are granted on the file impala_test.
Any suggestions ?
Thanks.
I know it is too late to answer this question, but maybe it will help others searching in the future.
Refer to the HDFS Permissions Guide:
The Sticky bit can be set on directories, preventing anyone except the superuser, directory owner or file owner from deleting or moving the files within the directory. Setting the sticky bit for a file has no effect.
So, to the best of my knowledge, you should sign in as the HDFS superuser and remove the sticky bit with hdfs dfs -chmod 0755 /dir_with_sticky_bit or hdfs dfs -chmod -t /dir_with_sticky_bit, as sketched below.
Hope this answer helps somebody.
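A minimal sketch applied to the directory from your error, assuming the HDFS superuser is named hdfs and you can sudo to it; mode 0777 keeps the existing rwx bits (the directory was drwxrwxrwt) and only drops the sticky bit:
sudo -u hdfs hdfs dfs -chmod 0777 /user/hive/warehouse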
OWNER SYS
DIRECTORY_NAME ME
DIRECTORY_PATH \\172.16.20.11\Mad\
begin
vSFile := utl_file.fopen('ME','20170405.csv','R');
IF utl_file.is_open(vSFile) THEN
LOOP
I am getting this error:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "MADHUR.MSP_UPD_DAILYSALESFRMSAP", line 28
ORA-06512: at line 1
29283. 00000 - "invalid file operation"
*Cause: An attempt was made to read from a file or directory that does
not exist, or file or directory access was denied by the
operating system.
*Action: Verify file and directory access privileges on the file system,
and if reading, verify that the file exists.
Your error tells you exactly what the problem is:
*Cause: An attempt was made to read from a file or directory that does
not exist, or file or directory access was denied by the
operating system.
and what to do to fix it:
*Action: Verify file and directory access privileges on the file system,
and if reading, verify that the file exists.
So you specify:
DIRECTORY_PATH \\172.16.20.11\Mad\
Are you able to actually access \\172.16.20.11\Mad\ with your oracle user?
If not, then you need to grant read and write on the directory to your user, and also check the OS permissions on that path for the account Oracle runs as.
But also consider mapping the network share to a drive letter instead of using a UNC path, as sketched below.
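A minimal sketch of that last suggestion, assuming the share is mapped to a hypothetical drive letter Z: on the database server (for the account the Oracle service runs under) and that MADHUR is the schema from your error stack:
CREATE OR REPLACE DIRECTORY ME AS 'Z:\Mad';
GRANT READ, WRITE ON DIRECTORY ME TO MADHUR;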
The reason for getting such an issue is that you don't have read or write permission on the directory.
Run the query below to see if you have read and write privileges:
SELECT *
FROM all_tab_privs
WHERE table_name = 'YOUR_DIRECTORY_NAME';
If you find you don't have any access, then grant read and write privileges:
SQL>CREATE OR REPLACE DIRECTORY dir1 as '/opt/oracle/';
SQL>GRANT READ,WRITE on dir1 to <Required user>; (if you want to give access to particular user)
OR
SQL>GRANT READ,WRITE on dir1 to PUBLIC; (if you want to give access to all users then give access to public)
I have created a user in Oracle 11gR2, using the following script
create user cata
identified by cata
default tablespace tbs
temporary tablespace temp;
grant DBA to cata;
After trying to import a dump file using the command
impdp system/password@ORCL11 schemas=cata dumpfile=cata.dmp logfile=log.txt
I'm getting the following error:
ORA-39002: invalid operation
ORA-39165: Schema ATGDB_CATA was not found.
Surprisingly, when I try to export a dump from the same schema, I'm able to do that. So, if the schema was not created properly, then I should not be able to export the dump file either, right?
I have also checked dba_users and the schema is created. Is there anything else I can do that could resolve this problem?
From the error message, I guess that the original schema name was "atgdb_cata".
As you are now trying to import into a schema named "cata", you need to specify the parameter remap_schema.
So for your case:
impdp system/password@ORCL11 schemas=atgdb_cata dumpfile=cata.dmp logfile=log.txt remap_schema=atgdb_cata:cata
Grant read and write on the directory you created to the new user, e.g.:
GRANT READ, WRITE ON DIRECTORY dir_name TO NEW_USER;
Also grant the following role to the new user:
GRANT IMP_FULL_DATABASE TO NEW_USER;
Thanks!
NC
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
SOLUTION:
create or replace directory test_dir as 'FOLDER_NAME';
That 'FOLDER_NAME' directory must contain the dump file.
Step 1: create a folder SAMPLE under oracle_installed_path/sql/SAMPLE and put the dump file into that SAMPLE folder.
Step 2: go to bin, run ./sqlplus and log in.
SQL> create or replace directory test_dir as 'SAMPLE';
SQL> GRANT READ, WRITE on directory test_dir to your_user;
SQL> GRANT IMP_FULL_DATABASE to your_user;
exit
Then run impdp to import the dump.
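A minimal example of that final step; the user name, password, and dump file name are placeholders:
impdp your_user/your_password directory=test_dir dumpfile=your_dump.dmp logfile=import.log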