I have the following setup:
CREATE TABLE IF NOT EXISTS request_income_buffer (
timestamp UInt64,
timestamp_micro Float32,
traceId Int64,
host String,
type String,
service String,
message String,
caller String,
context String
) ENGINE = Kafka('kafka:9092', 'request_income', 'group', 'JSONEachRow');
CREATE MATERIALIZED VIEW IF NOT EXISTS request_income
ENGINE = MergeTree(date, microtime, 8192) AS
SELECT
toDate(toDateTime(timestamp)) AS `date`,
toDateTime(timestamp) as `date_time`,
timestamp,
timestamp_micro AS `microtime`,
traceId,
host,
type,
service,
message,
caller,
context
FROM
request_income_buffer;
I want to add a new column, e.g. ip, to my request_income table.
According to the docs, in order to do so I need to follow these steps:
Detach the view to stop receiving messages from Kafka.
DETACH TABLE request_income;
Drop the table that streams data from Kafka, since the Kafka engine doesn't support ALTER queries.
DROP TABLE request_income_buffer;
Recreate the table that streams data from Kafka with the new field.
CREATE TABLE IF NOT EXISTS request_income_buffer (
timestamp UInt64,
timestamp_micro Float32,
traceId Int64,
host String,
ip String,
type String,
service String,
message String,
caller String,
context String
) ENGINE = Kafka('kafka:9092', 'request_income', 'group', 'JSONEachRow');
According to this post, update the .inner table of the detached materialized view:
ALTER TABLE `.inner.request_income` ADD COLUMN ip String AFTER host;
According to the post above, update the view's SELECT query.
Attach the view:
ATTACH TABLE request_income;
The question is: how do I update the view's SELECT query?
So it appears the way to update a materialized view's SELECT query is as follows:
Get the path to the view's metadata:
SELECT metadata_path FROM system.tables WHERE name = 'request_income';
Use your favorite text editor to modify the view's SQL. In my case the edited SQL looks like:
ATTACH MATERIALIZED VIEW request_income (
date Date,
date_time DateTime,
timestamp UInt64,
microtime Float32,
traceId Int64,
host String,
ip String,
type String,
service String,
message String,
caller String,
context String ) ENGINE = MergeTree(date, microtime, 8192) AS SELECT
toDate(toDateTime(timestamp)) AS date,
toDateTime(timestamp) AS date_time,
timestamp,
timestamp_micro AS microtime,
traceId,
host,
ip,
type,
service,
message,
caller,
context FROM default.request_income_buffer
Attach the modified view back:
ATTACH TABLE request_income;
Try this out:
DETACH TABLE request_income;
ALTER TABLE `.inner.request_income` ADD COLUMN ip String AFTER host;
ATTACH MATERIALIZED VIEW request_income
ENGINE = MergeTree(date, microtime, 8192) AS
SELECT
toDate(toDateTime(timestamp)) AS `date`,
toDateTime(timestamp) as `date_time`,
timestamp,
timestamp_micro AS `microtime`,
traceId,
host,
ip,
type,
service,
message,
caller,
context
FROM
request_income_buffer;
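After reattaching, one way to confirm the change took effect (a hedged check; DESCRIBE and SHOW CREATE are standard ClickHouse statements, and the inner-table name assumes the default database):
-- The underlying storage table should now contain the ip column
DESCRIBE TABLE `.inner.request_income`;
-- and the reattached view's SELECT should reference it
SHOW CREATE TABLE request_income;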
I created a MATERIALIZED VIEW like this:
Create the target table:
CREATE TABLE user_deatils_daily (
day date,
hour UInt8 ,
appid UInt32,
isp String,
city String,
country String,
session_count UInt64,
avg_score AggregateFunction(avg, Float32),
min_revenue AggregateFunction(min, Float32),
max_load_time AggregateFunction(max, Int32)
)
ENGINE = SummingMergeTree()
PARTITION BY toRelativeWeekNum(day)
ORDER BY (day,hour)
Create the MV:
CREATE MATERIALIZED VIEW user_deatils_daily_mv
TO user_deatils_daily as
select toDate(session_ts) as day, toHour(toDateTime(session_ts)) as hour,appid,isp,city,country,
count(session_uuid) as session_count,avgState() as avg_score,
minState(revenue) as min_revenue,
maxState(perf_page_load_time) as max_load_time
from user_deatils where toDate(session_ts)>='2020-08-26' group by session_ts,appid,isp,city,country
The target table starts filling with data.
After some time, the target table keeps filling with new data but doesn't keep the old data.
Why is that?
SummingMergeTree() PARTITION BY toRelativeWeekNum(day) ORDER BY (day,hour)
means: calculate sums grouped by (toRelativeWeekNum(day), day, hour).
user_deatils_daily knows nothing about user_deatils_daily_mv. They are not related; user_deatils_daily_mv just does inserts into user_deatils_daily.
SummingMergeTree knows nothing about the view's GROUP BY session_ts, appid, isp, city, country.
I would expect to see ORDER BY (ts, appid, isp, city, country).
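To illustrate why the old rows seem to disappear, here is a minimal sketch (a hypothetical demo table, not part of the question) of how SummingMergeTree collapses rows that share the sorting key:
-- With ORDER BY (day, hour), rows that differ only in appid share the same
-- sorting key; every numeric column outside the key is summed when parts merge,
-- so the per-appid breakdown is lost.
CREATE TABLE demo_sums (day Date, hour UInt8, appid UInt32, session_count UInt64)
ENGINE = SummingMergeTree()
ORDER BY (day, hour);

INSERT INTO demo_sums VALUES ('2020-08-26', 10, 1, 5), ('2020-08-26', 10, 2, 7);

-- FINAL applies the merge semantics at query time: one row per (day, hour),
-- with appid summed to 3 and session_count summed to 12.
SELECT * FROM demo_sums FINAL;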
I would do:
CREATE TABLE user_details_daily
( ts DateTime,
appid UInt32,
isp String,
city String,
country String,
session_count SimpleAggregateFunction(sum,UInt64),
avg_score AggregateFunction(avg, Float32),
min_revenue SimpleAggregateFunction(min, Float32),
max_load_time SimpleAggregateFunction(max, Int32) )
ENGINE = AggregatingMergeTree()
PARTITION BY toStartOfWeek(ts)
ORDER BY (ts,appid,isp,city,country);
CREATE MATERIALIZED VIEW user_deatils_daily_mv TO user_details_daily
as select
toStartOfHour(toDateTime(session_ts)) ts,
appid,
isp,
city,
country,
count(session_uuid) as session_count ,
avgState() as avg_score,
min(revenue) as min_revenue,
max(perf_page_load_time) as max_load_time
from user_details
where toDate(session_ts)>='2020-08-26' group by ts,appid,isp,city,country;
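Reading the rolled-up data back out then looks roughly like this (a sketch: AggregateFunction columns need the -Merge combinator, while SimpleAggregateFunction columns can simply be re-aggregated):
SELECT
    ts,
    appid,
    sum(session_count)  AS session_count,
    avgMerge(avg_score) AS avg_score,
    min(min_revenue)    AS min_revenue,
    max(max_load_time)  AS max_load_time
FROM user_details_daily
GROUP BY ts, appid;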
I encountered the following problem:
I created a Hive table in an EMR cluster in HDFS without partitions and loaded data into it.
I created another Hive table based on the table from step 1, but partitioned on the datetime column: PARTITIONED BY (year STRING, month STRING, day STRING).
I loaded data from the non-partitioned table into the partitioned table and got a valid result.
I created an Athena database and table with the same structure as the Hive table.
I copied the partitioned files from HDFS locally and transferred them all into an empty S3 bucket with aws s3 sync. All files were transferred without errors and with the same layout as in the Hive directory in HDFS.
I loaded the partitions with MSCK REPAIR TABLE and didn't get any errors in the output.
After that I found that many values were shifted between columns; for example, a value that should be in the "ip" column ended up in the "operating_sys" column, and so on.
My scripts are:
-- Hive tables
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs_page_part
(
log_DATE STRING,
user_id STRING,
page_path STRING,
referer STRING,
tracking_referer STRING,
medium STRING,
campaign STRING,
source STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
ad_id STRING,
keyword STRING,
user_agent STRING
)
PARTITIONED BY
(
`year` STRING,
`month` STRING,
`day` STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/admin/events_partitioned';
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs_event_part
(
log_DATE STRING,
user_id STRING,
category STRING,
action STRING,
label STRING,
value STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
extra_data_json STRING
)
PARTITIONED BY
(
`year` STRING,
`month` STRING,
`day` STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/admin/pages_partitioned';
INSERT INTO TABLE cloudfront_logs_page_part
PARTITION
(
`year`,
`month`,
`day`
)
SELECT
log_DATE,
user_id,
page_path,
referer,
tracking_referer,
medium,
campaign,
source,
visitor_id,
ip,
session_id,
operating_sys,
ad_id,
keyword,
user_agent,
year(log_DATE) as `year`,
month(log_DATE) as `month`,
day(log_DATE) as `day`
FROM
cloudfront_logs_page;
INSERT INTO TABLE cloudfront_logs_event_part
PARTITION
(
`year`,
`month`,
`day`
)
SELECT
log_DATE,
user_id,
category,
action,
label,
value,
visitor_id,
ip,
session_id,
operating_sys,
extra_data_json,
year(log_DATE) as `year`,
month(log_DATE) as `month`,
day(log_DATE) as `day`
FROM
cloudfront_logs_event;
-- Athena tables
CREATE DATABASE IF NOT EXISTS test
LOCATION 's3://...';
DROP TABLE IF EXISTS test.cloudfront_logs_page_ath;
CREATE EXTERNAL TABLE IF NOT EXISTS test.cloudfront_logs_page_ath (
log_DATE STRING,
user_id STRING,
page_path STRING,
referer STRING,
tracking_referer STRING,
medium STRING,
campaign STRING,
source STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
ad_id STRING,
keyword STRING,
user_agent STRING
)
PARTITIONED BY (`year` STRING,`month` STRING, `day` STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://.../';
DROP TABLE IF EXISTS test.cloudfront_logs_event_ath;
CREATE EXTERNAL TABLE IF NOT EXISTS test.cloudfront_logs_event_ath
(
log_DATE STRING,
user_id STRING,
category STRING,
action STRING,
label STRING,
value STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
extra_data_json STRING
)
PARTITIONED BY (`year` STRING,`month` STRING, `day` STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://.../';
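For reference, the partition-load step described above (assuming both Athena tables live in the test database created earlier):
MSCK REPAIR TABLE test.cloudfront_logs_page_ath;
MSCK REPAIR TABLE test.cloudfront_logs_event_ath;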
What could be wrong? The table structure? The Athena metadata?
The easiest method would be to convert your raw files directly into a partitioned Parquet columnar format. This has the benefit of partitioning, columnar storage, predicate push-down and all those other fancy words.
See: Converting to Columnar Formats - Amazon Athena
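For illustration, a hedged sketch of that route using the question's own non-partitioned page table as the source; it mirrors the guide's EMR/Hive approach, and the target table name and S3 path are placeholders:
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- Same columns as cloudfront_logs_page_part, but stored as Parquet on S3
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs_page_pq
(
log_DATE STRING,
user_id STRING,
page_path STRING,
referer STRING,
tracking_referer STRING,
medium STRING,
campaign STRING,
source STRING,
visitor_id STRING,
ip STRING,
session_id STRING,
operating_sys STRING,
ad_id STRING,
keyword STRING,
user_agent STRING
)
PARTITIONED BY (`year` STRING, `month` STRING, `day` STRING)
STORED AS PARQUET
LOCATION 's3://your-bucket/cloudfront_logs_page_pq/';

-- Dynamic-partition insert straight from the original, non-partitioned table
INSERT INTO TABLE cloudfront_logs_page_pq
PARTITION (`year`, `month`, `day`)
SELECT
log_DATE, user_id, page_path, referer, tracking_referer, medium, campaign,
source, visitor_id, ip, session_id, operating_sys, ad_id, keyword, user_agent,
year(log_DATE) AS `year`, month(log_DATE) AS `month`, day(log_DATE) AS `day`
FROM cloudfront_logs_page;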
I have files in S3 that contain many lines of JSON (newline-separated). I want to convert these files to a columnar format for consumption by AWS Athena.
I am following the Converting to Columnar Formats guide to do this; however, when converted to ORC, the partition convention in S3 is lost.
In this example, how do you preserve the dt partition in the converted Parquet S3 folder structure? When I run the example it just outputs s3://myBucket/pq/000000_0 and NOT s3://myBucket/pq/dt=2009-04-14-04-05/000000_0.
Here is the HQL that sets up the interface to bring the JSON into a Hive table:
CREATE EXTERNAL TABLE impressions (
requestBeginTime string,
adId string,
impressionId string,
referrer string,
userAgent string,
userCookie string,
ip string,
number string,
processId string,
browserCookie string,
requestEndTime string,
timers struct<modelLookup:string, requestTime:string>,
threadId string,
hostname string,
sessionId string)
PARTITIONED BY (dt string)
ROW FORMAT serde 'org.apache.hive.hcatalog.data.JsonSerDe'
with serdeproperties ( 'paths'='requestBeginTime, adId, impressionId, referrer, userAgent, userCookie, ip' )
LOCATION 's3://us-east-1.elasticmapreduce/samples/hive-ads/tables/impressions' ;
msck repair table impressions;
Here is the HQL that converts to Parquet:
CREATE EXTERNAL TABLE parquet_hive (
requestBeginTime string,
adId string,
impressionId string,
referrer string,
userAgent string,
userCookie string,
ip string)
STORED AS PARQUET
LOCATION 's3://mybucket/pq/';
INSERT OVERWRITE TABLE parquet_hive SELECT requestbegintime,adid,impressionid,referrer,useragent,usercookie,ip FROM impressions where dt='2009-04-14-04-05';
First of all, add PARTITIONED BY (dt string) to the parquet_hive definition.
Second, if you want to insert the data partition by partition, you have to declare the partition you are inserting into.
Note the PARTITION (dt='2009-04-14-04-05'):
INSERT OVERWRITE TABLE parquet_hive PARTITION (dt='2009-04-14-04-05')
SELECT requestbegintime,adid,impressionid,referrer,useragent,usercookie,ip
FROM impressions where dt='2009-04-14-04-05'
;
An easier way would be to use dynamic partitioning.
Note the PARTITION (dt) and dt as the last column in the SELECT.
You might need to set hive.exec.dynamic.partition.mode:
set hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE parquet_hive PARTITION (dt)
SELECT requestbegintime,adid,impressionid,referrer,useragent,usercookie,ip,dt
FROM impressions where dt='2009-04-14-04-05'
;
P.S.
CREATE EXTERNAL TABLE impressions does not "read the JSON into a Hive table".
It is just an interface with the necessary information to read the HDFS files.
You can simply include the same PARTITIONED BY (dt string) parameter that is in your first statement, which will create the same directory structure (see the sketch below).
In this case, the dt field (presumably a date) is actually stored in the directory name; a separate directory is created for each value.
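For example, a hedged version of the parquet_hive definition from the question with the partition column added:
CREATE EXTERNAL TABLE parquet_hive (
requestBeginTime string,
adId string,
impressionId string,
referrer string,
userAgent string,
userCookie string,
ip string)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://mybucket/pq/';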
I am trying to process some log files in a bucket in Amazon S3.
I create the table:
CREATE EXTERNAL TABLE apiReleaseData2 (
messageId string, hostName string, timestamp string, macAddress string, apiKey string,
userAccountId string, userAccountEmail string, numFiles string)
ROW FORMAT
serde 'com.amazon.elasticmapreduce.JsonSerde'
with serdeproperties ( 'paths'='messageId, hostName, timestamp, macAddress, apiKey, userAccountId, userAccountEmail, numFiles')
LOCATION 's3://apireleasecandidate1/regression/transferstatistics/2013/12/31/';
Then I run the following HiveQL statement and get my desired output in the file without any issues. My directories are set up in the following manner:
s3://apireleasecandidate1/regression/transferstatistics/2013/12/31/ < All the log files for this day >
What I want to do is specify the LOCATION only up to 's3://apireleasecandidate1/regression/transferstatistics/' and then call the
ALTER TABLE <Table Name> ADD PARTITION (<path>)
statement or the
ALTER TABLE <Table Name> RECOVER PARTITIONS ;
statement to access the files in the subdirectories. But when I do this there is no data in my table.
I tried the following:
CREATE EXTERNAL TABLE apiReleaseDataUsingPartitions (
messageId string, hostName string, timestamp string, macAddress string, apiKey string,
userAccountId string, userAccountEmail string, numFiles string)
PARTITIONED BY (year STRING, month STRING, day STRING)
ROW FORMAT
serde 'com.amazon.elasticmapreduce.JsonSerde'
with serdeproperties ( 'paths'='messageId, hostName, timestamp, macAddress, apiKey, userAccountId, userAccountEmail, numFiles')
LOCATION 's3://apireleasecandidate1/regression/transferstatistics/';
and then I run the following ALTER command:
ALTER TABLE apiReleaseDataUsingPartitions ADD PARTITION (year='2013', month='12', day='31');
But running a SELECT statement on the table gives no results.
Can someone please guide me on what I am doing wrong?
Am I missing something important?
Cheers,
Tanzeel
In HDFS anyway, the partitions manifest in a key/value format like this:
hdfs://apireleasecandidate1/regression/transferstatistics/year=2013/month=12/day=31
I can't vouch for S3, but an easy way to check would be to write some data into a dummy partition and see where it creates the file.
ADD PARTITION supports an optional LOCATION parameter, so you might be able to deal with this by saying:
ALTER TABLE apiReleaseDataUsingPartitions ADD PARTITION (year='2013', month='12', day='31') LOCATION 's3://apireleasecandidate1/regression/transferstatistics/2013/12/31/';
Again, I've not dealt with S3, but I would be interested to hear if this works for you.
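A couple of hedged sanity checks after adding the partition (standard Hive statements): list the registered partitions and inspect the location Hive recorded for the one you added.
SHOW PARTITIONS apiReleaseDataUsingPartitions;
DESCRIBE FORMATTED apiReleaseDataUsingPartitions PARTITION (year='2013', month='12', day='31');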
I'm trying to import data using an EMR job from JSON files in S3 that contain sparse fields, e.g. an ios_os field and an android_os field where only one contains data. Sometimes the data is null and sometimes it's an empty string. When trying to insert into DynamoDB I'm getting an error (although I am able to insert some sparsely populated records):
"AttributeValue may not contain an empty string"
{"created_at_timestamp":1358122714,...,"data":null,"type":"e","android_network_carrier":""}
I filtered out the columns that had the empty string "", but I'm still getting that error. I'm assuming it's the "property":null values that are causing this (or both). I assume that for it to work properly those values shouldn't exist when going to DynamoDB?
Is there any way to tell Hive, through the JSONSerde or Hive's interaction with the DynamoDB table, to ignore empty-string attribute values?
Here's an example of the Hive SQL schema and insert command:
CREATE EXTERNAL TABLE IF NOT EXISTS json_events (
-- Common
created_at BIGINT,
data STRING,
type STRING,
android_network_carrier STRING
)
PARTITIONED BY (created_at BIGINT, type STRING)
ROW FORMAT SERDE "org.apache.hadoop.hive.contrib.serde2.JsonSerde"
WITH SERDEPROPERTIES (
-- Common
"created_at"="$.created_at",
"data"="$.data",
"android_network_carrier"="$.anw",
"type"="$.dt"
)
LOCATION 's3://test.data/json_events';
CREATE EXTERNAL TABLE IF NOT EXISTS dynamo_events (
-- Common
created_at BIGINT,
data STRING,
type STRING,
android_network_carrier STRING
)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES ("dynamodb.table.name" = "test-events",
"dynamodb.column.mapping" = "created_at:created_at,data:data,type:type,android_network_carrier:android_network_carrier");
ALTER TABLE json_events RECOVER PARTITIONS;
INSERT OVERWRITE TABLE dynamo_events
SELECT created_at,
data,
android_network_carrier,
type
FROM json_events
WHERE created_at = 20130114 AND type = 'e';
The nulls shouldn't be a problem as long as they aren't in the primary key.
However, DynamoDB does not allow empty strings or empty sets, as described in the data model.
To work around this, I think you have a couple of options:
Define a constant for empty strings, like "n/a", and make sure that your data extraction process treats missing values as such (see the sketch after the filter query below).
You could also filter these records, but that would mean losing data. This could be done like this:
INSERT OVERWRITE TABLE dynamo_events
SELECT created_at,
data,
android_network_carrier,
type
FROM json_events
WHERE created_at = 20130114 AND type = 'e' AND android_network_carrier != "";
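And a sketch of the first option, substituting a placeholder constant (here 'n/a', an arbitrary choice) for empty strings instead of dropping the rows:
INSERT OVERWRITE TABLE dynamo_events
SELECT created_at,
       CASE WHEN data = '' THEN 'n/a' ELSE data END,
       CASE WHEN android_network_carrier = '' THEN 'n/a' ELSE android_network_carrier END,
       type
FROM json_events
WHERE created_at = 20130114 AND type = 'e';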