We have an HBase table with one column family that contains 1.5 billion records.
The HBase row count was retrieved with the shell command:
count '<tablename>', {CACHE => 1000000}
The HBase-to-Hive mapping was created with the following DDL:
create external table stagingdata(
rowkey String,
col1 String,
col2 String
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
'hbase.columns.mapping' = ':key,n:col1,n:col2'
)
TBLPROPERTIES('hbase.table.name' = 'hbase_staging_data');
But when we retrieve the Hive row count using the command below,
select count(*) from stagingdata;
it shows only 140 million rows in the Hive-mapped table.
We tried the same approach on a smaller HBase table with 100 million records, and all of the records showed up in the Hive-mapped table.
My question is: why are the full 1.5 billion records not showing up in Hive?
Are we missing anything here?
Your immediate answer would be highly appreciated.
Thanks,
Madhu.
What you see in Hive is the latest version per key, not all versions of a key. As the Hive HBase Integration documentation notes, "there is currently no way to access the HBase timestamp attribute, and queries always access data with the latest timestamp."
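To see whether a key actually carries multiple versions, you can ask the HBase shell for more than the latest one. An illustrative check (the row key is a placeholder, and the column family must be configured to keep more than one version):
get 'hbase_staging_data', '<some-rowkey>', {COLUMN => 'n:col1', VERSIONS => 5}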
I am trying to load 3 billion records (from an ORC file) from Hive to HBase using Hive-HBase integration.
Hive Create table DDL
CREATE EXTERNAL TABLE cs.account_dim_hbase(
`account_number` string,
`encrypted_account_number` string,
`affiliate_code` string,
`alternate_party_name` string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,account_dim:encrypted_account_number,account_dim:affiliate_code,account_dim:alternate_party_name")
TBLPROPERTIES ("hbase.table.name" = "default:account_dim");
Hive insert query to HBase: I am running 128 insert commands similar to the example below.
insert into table cs.account_dim_hbase select account_number, encrypted_account_number, affiliate_code, alternate_party_name, mod_account_number from cds.account_dim where mod_account_number=1;
When I try to run all 128 inserts at the same time, I get the error below:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 438 actions: org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=2.0G, regionName=jhgjhsdgfjgsdjf, server=cldf0007.com
Help me fix this and let me know if I am doing anything wrong. I am using HDP 3.
I loaded the data from Hive using MD5 hashing on the rowkey field and created the HBase table with region splits. Now the data loads in just 5 minutes per partition (it took 20 minutes before, with exceptions, but that is now fixed). A sketch of the hashed insert follows the split definition below.
create 'users', 'usercf', SPLITS =>
['10000000000000000000000000000000',
'20000000000000000000000000000000',
'30000000000000000000000000000000',
'40000000000000000000000000000000',
'50000000000000000000000000000000',
'60000000000000000000000000000000',
'70000000000000000000000000000000',
'80000000000000000000000000000000',
'90000000000000000000000000000000',
'a0000000000000000000000000000000',
'b0000000000000000000000000000000',
'c0000000000000000000000000000000',
'd0000000000000000000000000000000',
'e0000000000000000000000000000000',
'f0000000000000000000000000000000']
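A sketch of how one of the 128 inserts from the question might look with the hashed key, assuming Hive's built-in md5() UDF (available in HDP 3); its 32-character hex digest starts with 0-9 or a-f, so writes spread across the 16 pre-split regions above:
insert into table cs.account_dim_hbase
select md5(account_number) as account_number, -- hashed row key, lands evenly across the splits
       encrypted_account_number,
       affiliate_code,
       alternate_party_name,
       mod_account_number
from cds.account_dim
where mod_account_number = 1;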
I am using Spark version 1.4.1. I am trying to load a partitioned Hive table into a DataFrame, where the Hive table is partitioned by year_week number; in one scenario I might have 104 partitions.
But I can see that the DataFrame is being loaded into 200 partitions, and I understand that this is because spark.sql.shuffle.partitions is set to 200 by default.
I would like to know if there is a good way to load my Hive table into a Spark DataFrame with 104 partitions, making sure that the DataFrame is partitioned by year_week number at load time.
The reason is that I will be doing a few joins with huge tables that are all partitioned by year_week number, so having the DataFrame partitioned by year_week at load time would save me a lot of re-partitioning.
Please let me know if you have any suggestions to me.
Thanks.
Use hiveContext.sql("Select * from tableName where pt='2012.07.28.10'")
where pt is the partition key (year_week in your case) and '2012.07.28.10' is the corresponding partition value.
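If you also want downstream shuffles (the joins) to produce 104 partitions instead of the default 200, here is a minimal sketch for Spark 1.4.1; the table name weekly_sales and the year_week value are illustrative:
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// shuffles from joins/aggregations will now yield 104 partitions instead of 200
hiveContext.setConf("spark.sql.shuffle.partitions", "104")
// the predicate on the partition key prunes the scan to the matching year_week directories
val df = hiveContext.sql("SELECT * FROM weekly_sales WHERE year_week = 201530")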
I have to build a POC with Hadoop for a database using interactive queries (a log database of roughly 300 TB). I'm trying Impala, but I haven't found any way to use sorted or indexed data. I'm a newbie, so I don't even know whether it is possible.
How can I query sorted or indexed columns in Impala?
By the way, here is my table definition (simplified).
I would like fast access on the "column_to_sort" column below.
CREATE TABLE IF NOT EXISTS myTable (
unique_id STRING,
column_to_sort INT,
content STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\073'
STORED AS textfile;
Here are the environment details:
Hadoop: 2.4.0
Hive: 0.11.0
HBase: 0.94.18
I created an HBase table and imported 10,000 rows:
hbase(main):008:0> create 'hbase_tbl', 'cf'
Then I loaded data into the table and counted the rows:
hbase(main):008:0> count 'hbase_tbl'
10000 row(s) in 176.9310 seconds
I created a Hive table following the instructions on the Hive HBase Integration wiki page (https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-HiveHBaseIntegration):
CREATE EXTERNAL TABLE hive_tbl(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:info")
TBLPROPERTIES("hbase.table.name" = "hbase_tbl");
However, when I do a count(*) on hive_tbl, it returns 0. There are no errors of any sort. Any help is appreciated.
This issue is resolved. The problem was with the HBase ImportTsv command: the columns list was incorrect. Once that was fixed, I could execute queries from Hive.
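For reference, a minimal sketch of an ImportTsv invocation whose columns list matches the single cf:info column mapped in the Hive table (the input path is illustrative):
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:info \
  hbase_tbl /path/to/input.tsv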
I have a partitioned Hive table that I want to load in a Pig script, and I would like to include the partition as a column as well.
How can I do that?
Table definition in Hive:
CREATE EXTERNAL TABLE IF NOT EXISTS transactions
(
column1 string,
column2 string
)
PARTITIONED BY (datestamp string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/path';
Pig script:
%default INPUT_PATH '/path'
A = LOAD '$INPUT_PATH'
USING PigStorage('|')
AS (
column1:chararray,
column2:chararray,
datestamp:chararray
);
The datestamp column is not populated. Why is that?
I'm sorry, I didn't understand the part about adding the partition as a column. Once created, partition keys behave like regular columns. What exactly do you need?
Also, you are loading the data directly from an HDFS location, not as a Hive table, so the partition key never appears: it exists only in the directory structure, not inside the data files. If you intend to use Pig to load/store data from/into a Hive table, you should use HCatalog.
For example:
A = LOAD 'transactions' USING org.apache.hcatalog.pig.HCatLoader();
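With HCatLoader the partition key arrives as an ordinary field, so it can be filtered and projected like any other column. A minimal sketch (run with pig -useHCatalog; the filter value is illustrative):
A = LOAD 'transactions' USING org.apache.hcatalog.pig.HCatLoader();
-- datestamp is the partition key and behaves like a regular field here
B = FILTER A BY datestamp == '20150101';
C = FOREACH B GENERATE column1, column2, datestamp;
DUMP C;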