Query on Bucketized Table - hadoop

I created a bucketized table as the following
drop table if exists bi_st.st_usr_member_active_day_test;
CREATE TABLE `bi_st.st_usr_member_active_day_test`(
`cal_dt_from` string,
`cal_dt_to` string,
`memberid` string,
`vipcode` string,
`vipleavel` string,
`cityid` string,
`cityname` string,
`groupid` int,
`groupname` string,
`storeid` int,
`storename` string,
`sectionid` int,
`sectionname` string,
`promotionid` string,
`promotionname` string,
`moduleid` string,
`modulename` string,
`activeness_today` string,
`new_vip_class` string
)
clustered by (storeid) into 2 buckets
row format delimited fields terminated by '\t'
stored as orc TBLPROPERTIES('transactional'='true');
I then inserted some data into it and ran
select * from bi_st.st_usr_member_active_day_test where storeid = 193;
The query failed with an array index out of bounds error. Can anybody explain this? Thanks.
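For context, ORC manages its own serialization, so the ROW FORMAT DELIMITED clause has no effect on an ORC table, and a transactional table can only be read from a session with the ACID settings enabled. Below is a minimal sketch of those session settings, assuming Hive 1.x/2.x with a metastore configured for transactions; whether this removes the array index error depends on what actually caused it.
-- Session settings typically required before reading or writing a Hive
-- transactional (ACID) table; assumption: Hive 1.x/2.x with the metastore
-- configured for transactions.
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.enforce.bucketing=true;
-- The ROW FORMAT DELIMITED clause in the DDL above is ignored by ORC and can
-- simply be dropped when the table is recreated.
select * from bi_st.st_usr_member_active_day_test where storeid = 193;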

Related

Non-string values showing as NULL in Hive

I'm new to Hive and creating my first table!
For some reason all non-string values are showing as NULL (including INT, BOOLEAN, etc.).
My data looks like this sample row:
58;"management";"married";"tertiary";"no";2143;"yes";"no";"unknown";5;"may";261;1;-1;0;"unknown";"no"
I used this to create the table:
create external table bank_dataset(
age TINYINT,
job string,
education string,
default BOOLEAN,
balance INT,
housing BOOLEAN,
loan BOOLEAN,
contact STRING,
day STRING,
month STRING,
duration INT,
campaign INT,
pdays INT,
previous INT,
poutcome STRING,
y BOOLEAN)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\u003B'
STORED AS TEXTFILE
location '/user/marchenrisaad_gmail/Bank_Project'
tblproperties("skip.header.line.count"="1");
Thanks for the comments, it worked! But I have one issue: for every row I get all the data correctly, and then I get extra columns of null values. Find my code below:
create external table bank_dataset(age TINYINT, job string, education string, default BOOLEAN, balance INT, housing BOOLEAN, loan BOOLEAN, contact STRING,day INT, month STRING, duration INT,campaign INT, pdays INT, previous INT, poutcome STRING,y BOOLEAN)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "\u003B",
"quoteChar" = '"'
)
STORED AS TEXTFILE
location '/user/marchenrisaad_gmail/Bank_Project'
tblproperties("skip.header.line.count"="1");
Any suggestions?
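One hedged way to narrow this down is to read each line back as a single string and count its ';'-separated fields, so the per-row field count can be compared with the 16 columns the table declares. A sketch, where bank_raw_lines is a made-up staging table name over the same files:
-- Hypothetical one-column staging table over the same directory; the default
-- field delimiter ('\001') does not occur in the data, so each whole line
-- lands in the single `line` column.
create external table bank_raw_lines(line STRING)
STORED AS TEXTFILE
location '/user/marchenrisaad_gmail/Bank_Project'
tblproperties("skip.header.line.count"="1");

-- 16 declared columns should mean 16 fields per row; a larger count means the
-- rows carry more ';'-separated fields than the table defines.
-- '\u003B' is used instead of a literal ';' so the Hive CLI does not split the
-- statement early, matching the trick already used in the DDL above.
SELECT size(split(line, '\u003B')) AS field_cnt, count(*) AS row_cnt
FROM bank_raw_lines
GROUP BY size(split(line, '\u003B'));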

Unable to insert data in Hive for JSON file

My attempt failed. I need to create a table for the schema below:
(schema = {"type":"record","name":"topLevelRecord","fields":[{"name":"MESSAGE_ID","type":["string","null"]},{"name":"MSGNAME","type":["string","null"]},{"name":"SOURCE","type":["string","null"]},{"name":"EVENT_DATETIME","type":["string","null"]},{"name":"CUSTOMER_ORDER_ID","type":["string","null"]},{"name":"SP_ORGANISATION_NAME","type":["string","null"]},{"name":"CUSTOMER_ACCOUNT_ID","type":["string","null"]},{"name":"ORDER_TYPE_NAME","type":["string","null"]},{"name":"ORDER_SUBTYPE_NAME","type":["string","null"]},{"name":"ORDER_REASON_NAME","type":["string","null"]},{"name":"ORDER_CREATED_DATE","type":["string","null"]},{"name":"ORDER_CREATED_CHANNEL_NAME","type":["string","null"]},{"name":"ORDER_CREATED_RETAILER_ID","type":["string","null"]},{"name":"ORDER_CREATED_DEALER_ID","type":["string","null"]},{"name":"ORDER_CREATED_AFFILIATE_ID","type":["string","null"]},{"name":"ORDER_CREATED_EMPLOYEE_ID","type":["string","null"]},{"name":"ORDER_CREATED_CONTACT_CENTRE_AGENT_ID","type":["string","null"]},{"name":"ORDER_SUBMITTED_DATE","type":["string","null"]},{"name":"ORDER_SUBMITTED_CHANNEL_NAME","type":["string","null"]},{"name":"ORDER_DUE_DATE","type":["string","null"]},{"name":"ONE_TIME_CHARGE_AMT","type":["string","null"]},{"name":"RECURRING_CHARGE_AMT","type":["string","null"]},{"name":"ORDER_STATUS_NAME","type":["string","null"]},{"name":"ORDER_STATUS_CHANGE_REASON_NAME","type":["string","null"]},{"name":"CREATE_JOB_RUN_ID","type":"int"},{"name":"CREATE_DATE_TIME","type":"string"},{"name":"SYSTEM_ID","type":"int"},{"name":"SRC_FILE_NAME","type":"string"}]}
I am new to Hive; I just tried things out by looking around and came up with the query below:
CREATE EXTERNAL TABLE governed_data.customer_order(
message_id string,
msgname string,
source string,
event_datetime string,
customer_order_id string,
sp_organisation_name string,
customer_account_id string,
order_type_name string,
order_subtype_name string,
order_reason_name string,
order_created_date string,
order_created_channel_name string,
order_created_retailer_id string,
order_created_dealer_id string,
order_created_affiliate_id string,
order_created_employee_id string,
order_created_contact_centre_agent_id string,
order_submitted_date string,
order_submitted_channel_name string,
order_due_date string,
one_time_charge_amt string,
recurring_charge_amt string,
order_status_name string,
order_status_change_reason_name string,
create_job_run_id int,
create_date_time string,
system_id int,
src_file_name string)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS AVRO
location 'adl://rbsitbinsighstdlt001.azuredatalakestore.net/insights/governed_data/';
Then I want to insert data into this Hive table.
You specified STORED AS AVRO, but the SerDe is JsonSerDe; these properties conflict.
If you need Avro, then specify the SerDe as org.apache.hadoop.hive.serde2.avro.AvroSerDe, the input format as org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat, and the output format as org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat. Also provide a location from which the AvroSerDe will pull the most current schema for the table.
See example here: Creating Avro-backed Hive tables
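For reference, the fully spelled-out Avro form from that example looks roughly like this; the table name and the avro.schema.url value are placeholders to adapt:
-- No column list is needed: the AvroSerDe derives the columns from the schema file.
CREATE EXTERNAL TABLE governed_data.customer_order_avro
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'adl://rbsitbinsighstdlt001.azuredatalakestore.net/insights/governed_data/'
TBLPROPERTIES ('avro.schema.url'='adl://path/to/topLevelRecord.avsc');  -- placeholder schema file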
Or simply specify STORED AS AVRO, without any SerDe, input, or output format: remove ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' from your DDL.
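In that case the DDL reduces to something like this sketch (same columns and location as in the question):
CREATE EXTERNAL TABLE governed_data.customer_order(
message_id string, msgname string, source string, event_datetime string,
customer_order_id string, sp_organisation_name string, customer_account_id string,
order_type_name string, order_subtype_name string, order_reason_name string,
order_created_date string, order_created_channel_name string,
order_created_retailer_id string, order_created_dealer_id string,
order_created_affiliate_id string, order_created_employee_id string,
order_created_contact_centre_agent_id string, order_submitted_date string,
order_submitted_channel_name string, order_due_date string,
one_time_charge_amt string, recurring_charge_amt string,
order_status_name string, order_status_change_reason_name string,
create_job_run_id int, create_date_time string, system_id int, src_file_name string)
STORED AS AVRO
location 'adl://rbsitbinsighstdlt001.azuredatalakestore.net/insights/governed_data/';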
And if you want JsonSerDe to parse attributes then create table like this:
CREATE EXTERNAL TABLE governed_data.customer_order(message_id string,
msgname string,
source string,
event_datetime string,
customer_order_id string,
sp_organisation_name string,
customer_account_id string,
order_type_name string,
order_subtype_name string,
order_reason_name string,
order_created_date string,
order_created_channel_name string,
order_created_retailer_id string,
order_created_dealer_id string,
order_created_affiliate_id string,
order_created_employee_id string,
order_created_contact_centre_agent_id string,
order_submitted_date string,
order_submitted_channel_name string,
order_due_date string,
one_time_charge_amt string,
recurring_charge_amt string,
order_status_name string,
order_status_change_reason_name string,
create_job_run_id int,
create_date_time string,
system_id int,
src_file_name string)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 'adl://rbsitbinsighstdlt001.azuredatalakestore.net/insights/governed_data/'
;
Also read the docs about JsonSerDe.

AWS Athena creates indentation and moves values into wrong columns after partitions loads

I encountered the following problem:
1) I created a Hive table in an EMR cluster on HDFS, without partitions, and loaded data into it.
2) I created another Hive table based on the table from step 1, but with partitions derived from the datetime column: PARTITIONED BY (year STRING, month STRING, day STRING).
3) I loaded the data from the non-partitioned table into the partitioned table and got a valid result.
4) I created an Athena database and table with the same structure as the Hive table.
5) I copied the partitioned files from HDFS locally and transferred them all into an empty S3 bucket with aws s3 sync. All files were transferred without error and with the same layout as the Hive directory in HDFS.
6) I loaded the partitions with MSCK REPAIR TABLE and didn't get any error in the output.
After that I found that many values had shifted into the wrong columns; for example, a value that should be in the "IP" column ended up in the "Operating_sys" column, and so on.
My scripts are:
-- Hive tables
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs_page_part
(
log_DATE STRING,
user_id STRING,
page_path STRING,
referer STRING,
tracking_referer STRING,
medium STRING,
campaign STRING,
source STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
ad_id STRING,
keyword STRING,
user_agent STRING
)
PARTITIONED BY
(
`year` STRING,
`month` STRING,
`day` STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/admin/events_partitioned';
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs_event_part
(
log_DATE STRING,
user_id STRING,
category STRING,
action STRING,
label STRING,
value STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
extra_data_json STRING
)
PARTITIONED BY
(
`year` STRING,
`month` STRING,
`day` STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/admin/pages_partitioned';
INSERT INTO TABLE cloudfront_logs_page_part
PARTITION
(
`year`,
`month`,
`day`
)
SELECT
log_DATE,
user_id,
page_path,
referer,
tracking_referer,
medium,
campaign,
source,
visitor_id,
ip,
session_id,
operating_sys,
ad_id,
keyword,
user_agent,
year(log_DATE) as `year`,
month(log_DATE) as `month`,
day(log_DATE) as `day`
FROM
cloudfront_logs_page;
INSERT INTO TABLE cloudfront_logs_event_part
PARTITION
(
`year`,
`month`,
`day`
)
SELECT
log_DATE,
user_id,
category,
action,
label,
value,
visitor_id,
ip,
session_id,
operating_sys,
extra_data_json,
year(log_DATE) as `year`,
month(log_DATE) as `month`,
day(log_DATE) as `day`
FROM
cloudfront_logs_event;
-- Athena tables
CREATE DATABASE IF NOT EXISTS test
LOCATION 's3://...';
DROP TABLE IF EXISTS test.cloudfront_logs_page_ath;
CREATE EXTERNAL TABLE IF NOT EXISTS powtoon_hive.cloudfront_logs_page_ath (
log_DATE STRING,
user_id STRING,
page_path STRING,
referer STRING,
tracking_referer STRING,
medium STRING,
campaign STRING,
source STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
ad_id STRING,
keyword STRING,
user_agent STRING
)
PARTITIONED BY (`year` STRING,`month` STRING, `day` STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://.../';
DROP TABLE IF EXISTS test.cloudfront_logs_event_ath;
CREATE EXTERNAL TABLE IF NOT EXISTS test.cloudfront_logs_event_ath
(
log_DATE STRING,
user_id STRING,
category STRING,
action STRING,
label STRING,
value STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
extra_data_json STRING
)
PARTITIONED BY (`year` STRING,`month` STRING, `day` STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://.../';
What could be wrong? The table structure? The Athena metadata?
The easiest method would be to convert your raw files directly into a partitioned Parquet columnar format. This has the benefit of partitioning, columnar storage, predicate push-down and all those other fancy words.
See: Converting to Columnar Formats - Amazon Athena
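A hedged sketch of that conversion as an Athena CTAS statement; the target table name and the external_location prefix are placeholders, and the partition columns go last in the SELECT list:
-- Athena CTAS: rewrite the raw text data as partitioned Parquet.
CREATE TABLE test.cloudfront_logs_page_parquet
WITH (
  format = 'PARQUET',
  external_location = 's3://your-bucket/cloudfront_logs_page_parquet/',  -- placeholder bucket/prefix
  partitioned_by = ARRAY['year', 'month', 'day']
) AS
SELECT
  log_date, user_id, page_path, referer, tracking_referer, medium, campaign,
  source, visitor_id, ip, session_id, operating_sys, ad_id, keyword, user_agent,
  "year", "month", "day"
FROM test.cloudfront_logs_page_ath;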

Partition a hive table with a column in middle from another external table

I have created an external table as below:
create external table if not exists complaints (date_received string, product string, sub_product string, issue string, sub_issue string, consumer_complaint_narrative string, state string, company_public_response string, company varchar(50), zipcode int, tags string, consumer_consent_provided string, submitted_via string, date_sent_company string, company_response string, timely_response string, consumer_disputed string, complaint_id int) row format delimited fields terminated by ',' stored as textfile location 'hdfs:hostname:8020/complaints/';
Now I want to create another table, complaints_new, partitioned by state, containing all the data from the table above. How can this be achieved?
I tried the below:
create external table if not exists complaints_new (date_received string, product string, sub_product string, issue string, sub_issue string, consumer_complaint_narrative string, company_public_response string, company varchar(50), zipcode int, tags string, consumer_consent_provided string, submitted_via string, date_sent_company string, company_response string, timely_response string, consumer_disputed string, complaint_id int) partitioned by (state varchar(20)) row format delimited fields terminated by ',' stored as textfile location 'hdfs://hostname:8020/complaints/';
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.mapred.mode = nonstrict;
insert into table complaints_new partition(state) select * from complaints;
The query is failing.
You have a few problems here. You are pointing to the same location, which means you would be reading from and overwriting that location. The other problem is that Hive expects the partition column to be the last element in your list, which means you cannot do select *; instead you have to select field by field and put state at the end of your select statement.
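A hedged sketch of both fixes together: the new table gets its own (hypothetical) location, and the INSERT lists every field explicitly with state moved to the end.
-- New table in its own location (hypothetical path), partitioned by state.
create external table if not exists complaints_new (
  date_received string, product string, sub_product string, issue string,
  sub_issue string, consumer_complaint_narrative string,
  company_public_response string, company varchar(50), zipcode int, tags string,
  consumer_consent_provided string, submitted_via string, date_sent_company string,
  company_response string, timely_response string, consumer_disputed string,
  complaint_id int)
partitioned by (state varchar(20))
row format delimited fields terminated by ','
stored as textfile
location 'hdfs://hostname:8020/complaints_new/';  -- different from the source table's location

SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- Select every field explicitly and put the partition column (state) last.
insert into table complaints_new partition(state)
select
  date_received, product, sub_product, issue, sub_issue,
  consumer_complaint_narrative, company_public_response, company, zipcode, tags,
  consumer_consent_provided, submitted_via, date_sent_company, company_response,
  timely_response, consumer_disputed, complaint_id,
  state
from complaints;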

Unable to load text data into Hive table as ORC through temporary Hive table

I want to load a .csv file into a Hive table as ORC. I came across one post that suggested a workaround, so I executed the queries below:
1) Creating and loading data as a text file into a temporary table:
CREATE TABLE IF NOT EXISTS CrimesData( ID int, Case_Number int, CrimeDate string, Block string , IUCR string,Primary_Type string, Description string, Location_Description string, Arrest string, Domestic string, Beat int, District int, Ward int, Community_Area int, FBI_Code string, X_Coordinate int, Y_Coordinate int, Year int, Updated_On string, Latitude decimal(10,10), Longitude decimal(10,10), CrimeLocation string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n'
tblproperties("skip.header.line.count"="1")
LOAD DATA LOCAL INPATH '/home/cloudera/Documents/CrimesData.csv' INTO TABLE CrimesData
2) Creating a new table, stored as ORC:
CREATE TABLE IF NOT EXISTS CrimesDataORC( ID int, Case_Number int, CrimeDate string, Block string , IUCR string,Primary_Type string, Description string, Location_Description string, Arrest string, Domestic string, Beat int, District int, Ward int, Community_Area int, FBI_Code string, X_Coordinate int, Y_Coordinate int, Year int, Updated_On string, Latitude decimal(10,10), Longitude decimal(10,10), CrimeLocation string)
STORED AS ORC;
3) Insert data into the new table from temporary table:
INSERT INTO TABLE CrimesDataORC SELECT * FROM CrimesData;
The first two steps execute without any error, but step 3 throws the following error:
Error while processing statement: FAILED: Execution Error, return code
2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
I am running the above queries on Cloudera Manager Quickstart VM 5.8.
Not sure where I am going wrong, as similar steps for another table in the same database work as expected.
It might be that some of the data does not comply with the table structure. Try setting some WHERE conditions in the SELECT statement to check, rather than inserting all the data.
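A sketch of that idea: constrain the SELECT to a small slice first (the predicate below is just an example) and widen it until the rows that break the job are found.
-- Insert only a constrained slice first; if this succeeds, the problem is
-- likely in rows outside the slice (Year = 2015 is an arbitrary example filter).
INSERT INTO TABLE CrimesDataORC
SELECT * FROM CrimesData
WHERE Year = 2015;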
