I am trying to remove hard-coded values from a Hive script. For that I have created an hql file (src_sys_cd_param.hql) that sets the source system value through a parameter. I invoke the param file as follows:
hive -f /data/data01/dev/edl/md/sptfr/landing/src_sys_cd_param.hql;
The param file contains the command set src_sys_cd = 'M09';
Then I run the script below:
INSERT INTO TABLE SPTFR_CORE.M09_PRTY SELECT C.EDW_SK,A.PRTY_TYPE_HIER_ID,
A.PRTY_NUM,A.PRTY_DESC,A.PRTY_DESC,'N',${hiveconf:src_sys_cd},
A.DAI_UPDT_DTTM,A.DAI_CRT_DTTM
FROM SPTFR_STG.M09_PRTY_VIEW_STG A JOIN SPTFR_STG.BKEY_PRTY_STG C
ON ( CONCAT(A.PRTY_TYPE_LVL_1_CD,'|^',A.PRTY_NUM ,'|^',A.SRC_SYS_CD)= C.SRC_CMBN);
I receive the error:
Error while compiling statement: FAILED: ParseException line 1:113 cannot recognize input near '$' '{' 'hiveconf' in selection target
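One likely cause, assuming the param file and the INSERT script are run as two separate hive invocations: a set command only lasts for the session that executes it, so the variable is never substituted in the second run and the literal ${hiveconf:src_sys_cd} reaches the parser. A sketch of a fix is to load the param file as an initialization file so both run in one session (insert_prty.hql below is a placeholder name for the script containing the INSERT):
hive -i /data/data01/dev/edl/md/sptfr/landing/src_sys_cd_param.hql -f insert_prty.hql
Alternatively, skip the param file and pass the value on the command line; the quotes are part of the value because the SELECT list uses the variable unquoted:
hive --hiveconf src_sys_cd="'M09'" -f insert_prty.hql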
I am trying to load a sample.log file on the HDP sandbox.
My initial attempt:
LOAD DATA LOCAL INPATH 'sample.log' OVERWRITE INTO TABLE logs;
It seems that the path is not matching:
Error: Error while compiling statement: FAILED: SemanticException Line 1:23 Invalid path ''sample.log'': No files matching path file:/home/hive/sample.log (state=42000,code=40000)
I logged out, moved to /root, then entered Hive again:
0: jdbc:hive2://sandbox-hdp.hortonworks.com:2> LOAD DATA LOCAL INPATH '/root/Hadoop_Spark_Fundamentals_Code_Notes-V3.0/Lesson-6/Lesson-6.2_Hive/sample.log' OVERWRITE INTO TABLE logs;
The full path does not work either:
Error: Error while compiling statement: FAILED: SemanticException Line 1:23 Invalid path ''/root/Hadoop_Spark_Fundamentals_Code_Notes-V3.0/Lesson-6/Lesson-6.2_Hive/sample.log'': No files matching path file:/root/Hadoop_Spark_Fundamentals_Code_Notes-V3.0/Lesson-6/Lesson-6.2_Hive/sample.log (state=42000,code=40000)
It looks to me that it confuses /root and /home/hive.
How to set the proper path?
Your statement is being executed by the user 'hive': with HiveServer2, a 'LOCAL' path is resolved on the server host by the user running HiveServer2, not by your beeline client (which is why the error shows file:/home/hive/sample.log). Make sure the local file has permissions that allow 'hive' to read it; note that /root is normally not traversable by other users.
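A minimal sketch of a fix, assuming the paths from the question: copy the file somewhere the 'hive' user can traverse and read, then load it.
# run as root on the HiveServer2 host
cp /root/Hadoop_Spark_Fundamentals_Code_Notes-V3.0/Lesson-6/Lesson-6.2_Hive/sample.log /tmp/sample.log
chmod 644 /tmp/sample.log
Then in beeline:
LOAD DATA LOCAL INPATH '/tmp/sample.log' OVERWRITE INTO TABLE logs;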
I am trying to import a text file from my Windows machine into Hive. I created a table in Hive and used this command:
load data local in path C:\Users\me\Desktop\test.txt into test_t sample;
But it doesn't work; the error is:
Error while compiling statement: FAILED: ParseException line 1:69 missing TABLE at 'test_t' near '' line 1:76 extraneous input 'sample' expecting EOF near '' [ERROR_STATUS]
Do you have any advice?
You have not mentioned TABLE in your query.
Assuming your Hive table name is sample:
load data local inpath "C:\Users\me\Desktop\test.txt" into table sample;
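For reference, the general form from the Hive DML manual (an optional PARTITION clause can follow):
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename;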
You can set a property in the Hive console with:
hive> set hive.enforce.bucketing=true;
And you can view ALL of the settings with:
hive> set;
or
hive> set -v;
But how do you read the current value of a specified setting from the Hive console?
hive> hive.enforce.bucketing;
NoViableAltException(26@[])
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1074)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
...
FAILED: ParseException line 1:0 cannot recognize input near 'hive' '.' 'enforce'
Right now I'm redirecting hive -e 'set' to a file and then using grep. Is there a better way?
Simply use set and the property name, without a value:
hive> set mapreduce.input.fileinputformat.split.maxsize;
mapreduce.input.fileinputformat.split.maxsize=256000000
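The same form works non-interactively, which avoids the dump-to-file-and-grep approach, for example:
hive -e 'set hive.enforce.bucketing;'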
Using the following command:
insert overwrite local directory '/my/local/filesystem/directory/path'
select * from Emp;
overwrites all of the existing data in /my/local/filesystem/directory/path with the data of Emp.
What I want is to just copy the data of Emp to /my/local/filesystem/directory/path without overwriting it. How do I do that?
The following are my failed attempts:
hive> insert into local directory '/home/cloudera/Desktop/Sumit' select * from appdata;
FAILED: ParseException line 1:12 mismatched input 'local' expecting TABLE near 'into' in insert clause
hive> insert local directory '/home/cloudera/Desktop/Sumit' select * from appdata;
FAILED: ParseException line 1:0 cannot recognize input near 'insert' 'local' 'directory' in insert clause
Can you please tell me how I can get this solved?
To append to a Hive table you need to use INSERT INTO:
INSERT INTO will append to the table or partition keeping the existing
data intact. (Note: INSERT INTO syntax is only available starting in
version 0.8)
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingdataintoHiveTablesfromqueries
But you can't use this to append to an existing local file, so another option is to use a bash command.
If you have a file called 'export.hql' and in that file your code is:
select * from Emp;
Then your bash command can be:
hive -f 'export.hql' >> localfile.txt
The -f flag executes the hive file and the >> operator appends the output to the text file.
EDIT:
The command:
hive -f 'export.hql' > localfile.txt
will overwrite localfile.txt with the query output instead of appending to it.
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-SQLOperations
When writing to a 'LOCAL' DIRECTORY, 'OVERWRITE' is also needed in your hql; there is no INSERT INTO ... DIRECTORY form.
For example:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/out' SELECT * FROM test
I'm trying to create an external table in Hive, but keep getting the following error:
create external table foobar (a STRING, b STRING) row format delimited fields terminated by "\t" stored as textfile location "/tmp/hive_test_1375711405.45852.txt";
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask (state=08S01,code=1)
Aborting command set because "force" is false and command failed: "create external table foobar (a STRING, b STRING) row format delimited fields terminated by "\t" stored as textfile location "/tmp/hive_test_1375711405.45852.txt";"
The contents of /tmp/hive_test_1375711405.45852.txt are:
abc\tdef
I'm connecting via the beeline command line interface, which uses Thrift HiveServer2.
System:
Hadoop 2.0.0-cdh4.3.0
Hive 0.10.0-cdh4.3.0
Beeline 0.10.0-cdh4.3.0
Client OS - Red Hat Enterprise Linux Server release 6.4 (Santiago)
The issue was that I was pointing the external table at a file in HDFS instead of a directory. The cryptic Hive error message really threw me off.
The solution is to create a directory and put the data file in there. To fix the above example, you'd create the directory /tmp/foobar and place hive_test_1375711405.45852.txt in it. Then create the table like so:
create external table foobar (a STRING, b STRING) row format delimited fields terminated by "\t" stored as textfile location "/tmp/foobar";
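For example, assuming the data file is already in HDFS under /tmp, the move could look like:
hdfs dfs -mkdir /tmp/foobar
hdfs dfs -mv /tmp/hive_test_1375711405.45852.txt /tmp/foobar/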
We faced a similar problem in our company (a Sentry, Hive, and Kerberos combination). We solved it by removing all privileges from non-fully-qualified HDFS URIs. For example, we changed GRANT ALL ON URI '/user/test' TO ROLE test; to GRANT ALL ON URI 'hdfs-ha-name:///user/test' TO ROLE test;.
You can find the privileges for a specific URI in the Hive database (mysql in our case).
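If you'd rather not query the backing database directly, Hive can also list a role's privileges from beeline (assuming Sentry handles authorization):
SHOW GRANT ROLE test;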