HIVE - SQL insert record - hadoop

In Hive SQL, I ran the same query on two different Cloudera versions. Cloudera VM 5.10 does not cause any issue, but another version, CDH-5.1.0-1.cdh5.1.0.p0.53, throws an error.
hive> select * from t;
OK
Time taken: 1.803 seconds
hive> insert into table t values (1);
NoViableAltException(26@[])
at org.apache.hadoop.hive.ql.parse.HiveParser_SelectClauseParser.selectClause(HiveParser_SelectClauseParser.java:713)
at org.apache.hadoop.hive.ql.parse.HiveParser.selectClause(HiveParser.java:35992)
at org.apache.hadoop.hive.ql.parse.HiveParser.regular_body(HiveParser.java:33510)
at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatement(HiveParser.java:33389)
at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:33169)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1284)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:983)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:190)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:434)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:352)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:995)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1038)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:921)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 1:20 cannot recognize input near 'values' '(' '1' in select clause
hive>
Any idea? Which version should I choose for my studies? Please advise.

In older versions of CDH, such as CDH 5.1, the INSERT INTO ... VALUES statement is not supported (it was added in Hive 0.14), but it is supported in newer CDH versions.
So instead of INSERT INTO ... VALUES, try the LOAD DATA statement.
If your file is on the local filesystem:
hive> LOAD DATA LOCAL INPATH '<local-path-tofile>' INTO TABLE t;
If your file is in HDFS:
hive> LOAD DATA INPATH 'hdfs_file_or_directory_path' INTO TABLE t;
For more details, refer to:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations
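For example, a minimal end-to-end sketch of the LOAD DATA route (the file name /tmp/t.txt and its single value are made up for illustration; table t is assumed to have one int column, as in your session):
[cloudera@quickstart ~]$ echo "1" > /tmp/t.txt
hive> LOAD DATA LOCAL INPATH '/tmp/t.txt' INTO TABLE t;
hive> select * from t;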

What you are trying is right; just make sure your table has a single column. If your table has multiple columns, you need to provide values for all of the fields.
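For example, a minimal sketch with a hypothetical two-column table (table and column names made up for illustration):
hive> create table t2 (id int, name string);
hive> insert into table t2 values (1, 'a');
hive> select * from t2;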
The version I tried on:
[cloudera@quickstart ~]$ hadoop version
Hadoop 2.6.0-cdh5.5.0
Subversion http://github.com/cloudera/hadoop -r fd21232cef7b8c1f536965897ce20f50b83ee7b2
Compiled by jenkins on 2015-11-09T20:37Z
Compiled with protoc 2.5.0
From source with checksum 98e07176d1787150a6a9c087627562c
This command was run using /usr/jars/hadoop-common-2.6.0-cdh5.5.0.jar

@roh
hive> describe formatted t;
OK
# col_name data_type comment
id int None
# Detailed Table Information
Database: default
Owner: learnhadoop
CreateTime: Fri Feb 23 21:25:15 IST 2018
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://hadoopmasters:8020/user/hive/warehouse/t
Table Type: MANAGED_TABLE
Table Parameters:
transient_lastDdlTime 1519401315
# Storage Information
SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.TextInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
serialization.format 1
Time taken: 0.111 seconds, Fetched: 26 row(s)
hive>

Related

Unable to map the HBase row key in HIve table effectively

I have a HBase table where the rowkey looks like this.
08:516485815:2013 1
06:260070837:2014 1
00:338289200:2014 1
I create a Hive table linked to it using the below query.
create external table hb
(key string,value string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties("hbase.columns.mapping"=":key,e:-1")
tblproperties("hbase.table.name"="hbaseTable");
When I query the table I get the below result
select * from hb;
08:516485815 1
06:260070837 1
00:338289200 1
This is very strange to me. Why is the SerDe not able to map the whole content of the HBase key? The Hive table is missing everything after the second ':'.
Has anybody faced a similar kind of issue?
I tried recreating your scenario on HBase 1.1.2 and Hive 1.2.1000; it works as expected and I am able to get the whole rowkey from Hive.
hbase> create 'hbaseTable','e'
hbase> put 'hbaseTable','08:516485815:2013','e:-1','1'
hbase> scan 'hbaseTable'
ROW COLUMN+CELL
08:516485815:2013 column=e:-1, timestamp=1519675029451, value=1
1 row(s) in 0.0160 seconds
With 08:516485815:2013 as the rowkey, I created the Hive table:
hive> create external table hb
(key string,value string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties("hbase.columns.mapping"=":key,e:-1")
tblproperties("hbase.table.name"="hbaseTable");
hive> select * from hb;
+--------------------+-----------+--+
| hb.key | hb.value |
+--------------------+-----------+--+
| 08:516485815:2013 | 1 |
+--------------------+-----------+--+
Can you make sure your HBase table's rowkeys actually contain the data after the second ':'?
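If you want to double-check from your side, a quick sketch using one of your original rowkeys (an hbase shell get on the full key):
hbase> get 'hbaseTable', '08:516485815:2013'
If that returns no cells, the stored rowkey is not what you expect and the Hive mapping is not the culprit.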

Using Spark/Scala, I use saveAsTextFile() to HDFS, but hiveql("select count(*) from ...") returns 0

I created an external table as follows:
hive -e "create external table temp_db.temp_table (a char(10), b int) PARTITIONED BY (PART_DATE VARCHAR(10)) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE LOCATION '/work/temp_db/temp_table'"
And I use saveAsTextFile() with Scala in IntelliJ IDEA as follows:
itemsRdd.map(_.makeTsv).saveAsTextFile("hdfs://work/temp_db/temp_table/2016/07/19")
So the file (fields terminated by '\t') ended up in /work/temp_db/temp_table/2016/07/19.
hadoop fs -ls /work/temp_db/temp_table/2016/07/19/part-00000 <- data file..
But when I check with HiveQL, there is no data:
hive -e "select count(*) from temp_db.temp_table" -> 0.
hive -e "select * from temp_db.temp_table limit 5" -> 0 rows fetched.
Please tell me what to do. Thanks.
You are saving to the wrong location from Spark. A partition directory name follows the pattern part_col_name=part_value.
In Spark: save the file to the directory part_date=2016%2F07%2F19 under the temp_table dir:
itemsRdd.map(_.makeTsv)
.saveAsTextFile("hdfs://work/temp_db/temp_table/part_date=2016%2F07%2F19")
Add partitions: you will need to add the partition so that the Hive table's metadata is updated (the partition dir we created from Spark follows the key=value format Hive expects):
alter table temp_table add partition (PART_DATE='2016/07/19');
[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/temp_table/part*|awk '{print $NF}'
/user/hive/warehouse/temp_table/part_date=2016%2F07%2F19/part-00000
/user/hive/warehouse/temp_table/part_date=2016-07-19/part-00000
query partitioned data:
hive> alter table temp_table add partition (PART_DATE='2016/07/19');
OK
Time taken: 0.16 seconds
hive> select * from temp_table where PART_DATE='2016/07/19';
OK
test1 123 2016/07/19
Time taken: 0.219 seconds, Fetched: 1 row(s)
hive> select * from temp_table;
OK
test1 123 2016/07/19
test1 123 2016-07-19
Time taken: 0.199 seconds, Fetched: 2 row(s)
For an everyday process, you can run the Spark job like this: just add the partition right after saveAsTextFile(). Also note the s string interpolator on the strings below; it is needed to pass the variable into the Hive SQL from Spark:
import org.apache.spark.sql.hive.HiveContext

val format = new java.text.SimpleDateFormat("yyyy/MM/dd")
val date = format.format(new java.util.Date())   // e.g. 2016/07/19
// Hive stores '/' in partition values as %2F in the directory name, as shown in the listing above
val dirDate = date.replace("/", "%2F")
itemsRDD.saveAsTextFile(s"/user/hive/warehouse/temp_table/part_date=$dirDate")
val hive = new HiveContext(sc)
hive.sql(s"alter table temp_table add partition (PART_DATE='$date')")
NOTE: Add the partition after saving the file, or else Spark will throw a directory-already-exists exception, because Hive creates the directory (if it does not exist) when adding the partition.
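Once the job has run, a quick way to confirm the partition was registered (assuming the table above):
hive> show partitions temp_table;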

Hive insert query with dynamic partitioning

I have an ORC-formatted, partitioned and clustered Hive table into which I have to insert data from a temp table.
Create table statements:
orc table:
CREATE EXTERNAL TABLE DYN_TGT_TABLE(ID INT,USRNAME VARCHAR(20), LD TIMESTAMP) PARTITIONED BY (PART VARCHAR(20)) CLUSTERED BY (id) INTO 1024 BUCKETS ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION '/tmp/dyn_tgt_table' TBLPROPERTIES ('transactional'='true','transient_lastDdlTime'='1461157247');
temp table:
CREATE EXTERNAL TABLE DYN_TEMP_TABLE(ID INT,USRNAME VARCHAR(20),PART VARCHAR(20), LD TIMESTAMP) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION '/tmp/dyn_temp_table' TBLPROPERTIES ('transactional'='TRUE','transient_lastDdlTime'='1461216088');
Added 5 records to temp table:
hive> select * from DYN_TEMP_TABLE;
OK
1 user1 A 2016-05-06 06:31:48
2 user2 A 2016-05-06 06:31:48
14 user3 B 2016-05-06 06:31:48
15 user4 B 2016-05-06 06:31:48
16 user5 B 2016-05-06 06:31:48
Time taken: 0.166 seconds, Fetched: 5 row(s)
The below dynamic insert query fails with an error:
insert into TABLE DYN_TGT_TABLE PARTITION(part) select id,usrname,ld,part from DYN_TEMP_TABLE;
Error message:
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0)
{"key":{},"value":{"_col0":15,"_col1":"user4","_col2":"2016-05-06 06:31:48","_col3":"B"}}
at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{},"value":{"_col0":15,"_col1":"user4","_col2":"2016-05-06 06:31:48","_col3":"B"}}
at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:253)
... 7 more
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.findWriterOffset(FileSinkOperator.java:755)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:683)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:244)
... 7 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Please help me pinpoint the cause of this error.
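For reference, a dynamic-partition insert into a bucketed, transactional table like this normally needs session settings along these lines; this is only a sketch of the usual prerequisites, not a confirmed fix for the NullPointerException above:
hive> SET hive.exec.dynamic.partition=true;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.enforce.bucketing=true;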

Error when loading Hive table data to HBase

I'm using CDH 4.4, Hive-0.10.0, and HBase-0.94.6. I created tables in hive using HBaseStorageHandler; I created a table called pokes which has one record, 98. Here's my create table code:
CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
I verified the table was created in Hive (hbase_table_1) and in HBase (xyz). However, when I run this command
INSERT OVERWRITE TABLE hbase_table_1
SELECT *
FROM pokes
WHERE foo=98;
I get an error:
Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
What am I missing? Please help.

Confirming compression is enabled for specific Hive tables

I need to benchmark a series of tables, some compressed and some not. I compress by setting:
hive> SET hive.exec.compress.output=true;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
...and using INSERT OVERWRITE to populate a table. Is there a way to confirm via the command line (similar to DESCRIBE EXTENDED) that output compression is enabled for a particular table?
When you run describe formatted on a table you populated with compression enabled (e.g. an orc_with_compress_setting_table) and see Compressed: No, you may feel uncomfortable.
Here is the answer:
The Compressed field is not a reliable indicator of whether the table contains compressed data. It typically always shows No, because the compression settings only apply during the session that loads data and are not stored persistently with the table metadata.
from: https://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala_describe.html
The following will show the path to the table files in HDFS:
desc formatted <tablename>
hive> desc formatted cobtest;
OK
# col_name data_type comment
batch_sk int None
geo_cd string None
env_cd string None
Detailed Table Information
Database: default
Owner: steve
CreateTime: Thu Nov 21 23:36:37 PST 2013
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://localhost:9000/user/hive/warehouse/cobtest
Table Type: MANAGED_TABLE
Table Parameters:
numFiles 1
numPartitions 0
numRows 0
rawDataSize 0
totalSize 473
transient_lastDdlTime 1385105797
Storage Information
SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.TextInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim \t
serialization.format \t
Time taken: 0.203 seconds, Fetched: 34 row(s)
You can then run dfs -lsr and tell from the file extensions whether the files are compressed.
hive> dfs -lsr hdfs://localhost:9000/user/hive/warehouse/cobtest ;
-rw-r--r-- 1 steve supergroup 473 2013-11-21 23:36 /user/hive/warehouse/cobtest/UDFPafCobIndTest.input.tsv
UPDATE: SNAPPY is the codec for compression. You still need to add the following:
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.type=BLOCK;
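To actually confirm the compression took effect, one option is to look at the files an INSERT OVERWRITE writes with those settings in place; for a text-format table, Snappy-compressed output files carry a .snappy extension. A sketch (the table name cobtest_snappy is made up for illustration):
hive> SET hive.exec.compress.output=true;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> CREATE TABLE cobtest_snappy LIKE cobtest;
hive> INSERT OVERWRITE TABLE cobtest_snappy SELECT * FROM cobtest;
hive> dfs -ls /user/hive/warehouse/cobtest_snappy;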
