Hive partitioning not working with dynamic variable - hadoop

If I run
set hivevar:a = 1;
select * from t1 where partition_variable=${a};
Hive only pulls in the records from the appropriate partition.
Alternatively, if I run
set hivevar:b = 6;
set hivevar:c = 5;
set hivevar:a = ${b}-${c};
select * from t1 where partition_variable=${a};
The condition on partition_variable is treated as a plain predicate rather than a partition filter, and Hive scans all records in the table.
This is obviously a contrived example, but in my particular use case it is necessary. Is there any way to force Hive to use this value for partition pruning?
Thanks in advance.

Is partition_variable the column on which the table is partitioned? It works with the following.
create table newpart
(productOfMonth string)
partitioned by (month int);
hive> select * from newpart;
OK
Cantaloupes 10
Pumpkin 11
set hivevar:lastmonth = 11;
set hivevar:const = 1;
set hivevar:prevmonth = ${lastmonth}-${const};
hive> select * from newpart
> where month = ${prevmonth};
OK
Cantaloupes 10

I was never able to get partitioning to work properly with dynamically generated hive variables, but a simple workaround was to create a table containing the variables and join on them rather than using them in the where clause.
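A minimal sketch of that workaround, assuming Hive 0.14+ for INSERT ... VALUES (the helper table var_values and its contents are made up for illustration):
-- helper table holding the computed value instead of a hivevar
create table var_values (a int);
insert into table var_values values (1);  -- e.g. the result of 6 - 5

-- join on the helper table rather than using ${a} in the WHERE clause
select t1.*
from t1
join var_values v
  on t1.partition_variable = v.a;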

Related

How to constrain Hive query file output to be in a single file always

I have created a Hive table using the query below, and I insert data into this table daily using the second query shown below.
create EXTERNAL table IF NOT EXISTS DB.efficacy
(
product string,
TP_Silent INT,
TP_Active INT,
server_date date
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'hdfs://hdfsadlproduction/user/DB/Report/efficacy';
Insert INTO DB.efficacy
select
product,
SUM(CASE WHEN verdict = 'TP_Silent' THEN 1 ELSE 0 END ),
SUM(CASE WHEN verdict = 'TP_Active' THEN 1 ELSE 0 END ) ,
current_date()
from
DB.efficacy_raw
group by
product
;
The issue is that every day, when my insert query executes, it creates a new file in the Hadoop FS. I want each day's query output to be appended to the same single file, but the Hadoop FS instead contains files named in the following manner.
000000_0, 000000_0_copy_1, 000000_0_copy_2
I have used the following Hive settings:
SET hive.execution.engine=mr;
SET tez.queue.name=${queueName};
SET mapreduce.job.queuename=${queueName};
SET mapreduce.map.memory.mb = 8192;
SET mapreduce.reduce.memory.mb = 8192;
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.exec.parallel = true;
SET hive.exec.parallel.thread.number = 2;
SET mapreduce.input.fileinputformat.split.maxsize=2048000000;
SET mapreduce.input.fileinputformat.split.minsize=2048000000;
SET mapreduce.job.reduces = 20;
SET hadoop.security.credential.provider.path=jceks://hdfs/user/efficacy/s3-access/efficacy.jceks;
set hive.vectorized.execution.enabled=false;
set hive.enforce.bucketmapjoin=false;
set hive.optimize.bucketmapjoin.sortedmerge=false;
set hive.enforce.sortmergebucketmapjoin=false;
set hive.optimize.bucketmapjoin=false;
set hive.exec.dynamic.partition.mode=nostrict;
set hive.exec.compress.intermediate=false;
set hive.exec.compress.output=false;
set hive.exec.reducers.max=1;
I am a beginner with Hive and Hadoop, so please excuse me. Any help will be greatly appreciated.
Note: I am using Hadoop 2.7.3.2.5.0.55-1.
I didn't find any direct mechanism or Hive setting that will automatically merge all the small files at the end of the query. Concatenation of small files is currently not supported for tables stored as text files.
As per the comment by "leftjoin" on my post, I created the table in ORC format and then used the CONCATENATE Hive command to merge all the small files into a single big file.
I then used the Hive query below to export the data from this single big ORC file into a single text file, and was able to complete my task with the exported text file.
INSERT OVERWRITE DIRECTORY '<Hdfs-Directory-Path>'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
SELECT * FROM default.foo;
Courtesy:- https://community.hortonworks.com/questions/144122/convert-orc-table-data-into-csv.html

How to add running ID in a single UPDATE statement (Oracle)

Let's assume I have a table tab1 in my Oracle DB 12.1, which has a column record_id (type NUMBER) and many other columns, among them a column named exchg_id.
This record_id is always empty when a batch of new rows gets inserted into the table. What I need to do is populate record_id with values 1..N for all rows that satisfy the condition ...WHERE EXCHG_ID = 'something', where N is the number of such rows. Of course I know how to do this procedurally (in a for-loop), but I'd like to know if there's a faster way using a single UPDATE statement. I imagine something like this:
UPDATE tab1 SET record_id = {1..N} WHERE exchg_id = 'something';
Many thanks for your help!
UPDATE: the order of the rows is not important, I need no specific ordering. I just need unique record_id's 1..N for any given exchg_id.
You could use rownum to set record_id to 1..N:
UPDATE tab1 SET record_id = rownum WHERE exchg_id = 'something';
If you have some offset, say 10, then use rownum + 10.
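For example, to continue numbering after an existing maximum of 10:
UPDATE tab1 SET record_id = rownum + 10 WHERE exchg_id = 'something';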

Passing the table header in Hive transform

I am creating a query in Hive to execute an R script. I am using the TRANSFORM function to pass the table. However, when I receive the table in R it comes without the header. I know that I could create a variable and ask the user to insert the header manually, but I do not want to do that.
I want to do this automatically, and I am considering two options:
1) Figure out a way to pass the table with the header included when using the TRANSFORM function
2) Save the header in a variable and pass it in TRANSFORM (I have already tried this in different ways, but instead of passing the result of the query it passes the query string, as seen below)
Here is what I have:
--Name of the origin table
set source_table = categ_table_small;
--Number of clusters
set k = "5";
--Distance to be used in the model
set distance = "euclidean";
--Folder where the results of the model will be saved
set dir_tar = "/output_r";
--Name of the model used in the naming of the files
set model_name ="testeclara_small";
--Samples: integer, number of samples to be drawn from the dataset.
set n_samples = "10";
--sampsize: integer, number of observations in each sample. This formula is suggested by the package. sampsize<-min(nrow(x), 40 + 2 * k)
set sampsize = "50";
--Creating a matrix which will store the sample number and the group of each sample according to the algorithm
CREATE TABLE IF NOT EXISTS medoids_result AS SELECT * FROM categ_table_small;
--In the normal situation you don't have the output label, it means you just have 'x' and do not have 'y', so you need to add one extra column to receive
--the group of each observation
--ALTER TABLE medoids_result ADD COLUMNS (medoid INT);
set result_matrix = medoids_result;
set headerMatrix = show columns in categ_table_small;
--Trainning query
SET mapreduce.job.name = K medoids Clara- ${hiveconf:source_table};
SET mapreduce.job.reduces=1;
INSERT OVERWRITE TABLE ${hiveconf:result_matrix}
SELECT TRANSFORM ($begin(cols="${hiveconf:source_table}" delimiter= "," excludes="y")$column$end)
USING '/usr/bin/Rscript_10gb /programs_r/du8_dev_1.R ${hiveconf:k}${hiveconf:distance}${hiveconf:dir_tar}${hiveconf:model_name}${hiveconf:n_samples}${hiveconf:sampsize}${hiveconf:headerMatrix}'
AS
(
$begin(table='${hiveconf:result_matrix}') $column$end
)
FROM
(SELECT *
FROM ${hiveconf:source_table}
DISTRIBUTE BY '1'
)t1;
You can add this line
hive -e 'set hive.cli.print.header=true;select * from tablename;'
Where tablename refers to your table name
If you want this to work by default for every table, then you need to update the $HOME/.hiverc file with
set hive.cli.print.header=true;
as the first line.

Get the sysdate -1 in Hive

Is there any way to get the current date minus 1 in Hive, i.e. always yesterday's date?
And in this format: 20120805?
I can run my query like this to get the data for yesterday's date, as today is Aug 6th:
select * from table1 where dt = '20120805';
But when I try it this way with the date_sub function to get yesterday's date (the table is partitioned on the date (dt) column):
select * from table1 where dt = date_sub(TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(),
'yyyyMMdd')) , 1) limit 10;
it looks for the data in all the partitions. Why? Am I doing something wrong in my query?
How can I make the evaluation happen in a subquery so the whole table isn't scanned?
Try something like:
select * from table1
where dt >= from_unixtime(unix_timestamp()-1*60*60*24, 'yyyyMMdd');
This works if you don't mind that Hive scans the entire table. from_unixtime is not deterministic, so the query planner in Hive won't optimize it for you. In many cases (for example log files), not specifying a deterministic partition key causes a very large Hadoop job to start, since it scans the whole table rather than just the rows with the given partition key.
If this matters to you, you can launch hive with an additional option
$ hive -hiveconf date_yesterday=20150331
And in the script or hive terminal use
select * from table1
where dt >= ${hiveconf:date_yesterday};
The name of the variable doesn't matter, nor does the value; you can set them in this case using unix commands to get the prior date. In the specific case of the OP:
$ hive -hiveconf date_yesterday=$(date --date yesterday "+%Y%m%d")
In mysql:
select DATE_FORMAT(curdate()-1,'%Y%m%d');
In sqlserver :
SELECT convert(varchar,getDate()-1,112)
Use this query:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP()-1*24*60*60,'yyyyMMdd');
It looks like DATE_SUB assumes date in format yyyy-MM-dd. So you might have to do some more format manipulation to get to your format. Try this:
select * from table1
where dt = FROM_UNIXTIME(
UNIX_TIMESTAMP(
DATE_SUB(
FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd')
, 1)
)
, 'yyyyMMdd') limit 10;
Use this:
select * from table1 where dt = date_format(concat(year(date_sub(current_timestamp,1)),'-', month(date_sub(current_timestamp,1)), '-', day(date_sub(current_timestamp,1))), 'yyyyMMdd') limit 10;
This will give a deterministic result (a string) of your partition.
I know it's super verbose.
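If your Hive version has date_format and current_date (available from Hive 1.2), a shorter equivalent should give the same string; this is a sketch under that assumption, and the earlier caveat about non-deterministic expressions and partition pruning may still apply:
select * from table1
where dt = date_format(date_sub(current_date, 1), 'yyyyMMdd');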

How to put more than 1000 values into an Oracle IN clause [duplicate]

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
Closed 8 years ago.
Is there any way to get around the Oracle 10g limitation of 1000 items in a static IN clause? I have a comma-delimited list of many IDs that I want to use in an IN clause. Sometimes this list can exceed 1000 items, at which point Oracle throws an error. The query is similar to this...
select * from table1 where ID in (1,2,3,4,...,1001,1002,...)
Put the values in a temporary table and then do a select where id in (select id from temptable)
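A minimal sketch of the temporary-table approach (object names are illustrative):
CREATE GLOBAL TEMPORARY TABLE temp_ids (id NUMBER) ON COMMIT DELETE ROWS;

-- insert the IDs (for example in batches from the application), then:
SELECT * FROM table1 WHERE id IN (SELECT id FROM temp_ids);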
Another workaround relies on the fact that the 1000-item limit (ORA-01795) applies to lists of single expressions; a list of expression sets (tuples) is not limited in the number of sets, so each value can be paired with a constant:
select column_X, ... from my_table
where ('magic', column_X ) in (
('magic', 1),
('magic', 2),
('magic', 3),
('magic', 4),
...
('magic', 99999)
) ...
I am almost sure you can split values across multiple INs using OR:
select * from table1 where ID in (1,2,3,4,...,1000) or
ID in (1001,1002,...,2000)
You may try to use the following form:
select * from table1 where ID in (1,2,3,4,...,1000)
union all
select * from table1 where ID in (1001,1002,...)
Where do you get the list of ids from in the first place? Since they are IDs in your database, did they come from some previous query?
When I have seen this in the past it has been because:-
a reference table is missing and the correct way would be to add the new table, put an attribute on that table and join to it
a list of ids is extracted from the database, and then used in a subsequent SQL statement (perhaps later or on another server or whatever). In this case, the answer is to never extract it from the database. Either store in a temporary table or just write one query.
I think there may be better ways to rework this code than just getting this SQL statement to work. If you provide more details you might get some ideas.
Use ... from table(...):
create or replace type numbertype
as object
(nr number(20,10) )
/
create or replace type number_table
as table of numbertype
/
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select *
from employees , (select /*+ cardinality(tab 10) */ tab.nr from table(p_numbers) tab) tbnrs
where id = tbnrs.nr;
end;
/
This is one of the rare cases where you need a hint, else Oracle will not use the index on column id. One of the advantages of this approach is that Oracle doesn't need to hard parse the query again and again. Using a temporary table is slower most of the time.
Edit 1: simplified the procedure (thanks to jimmyorr) + added an example.
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select /*+ cardinality(tab 10) */ emp.*
from employees emp
, table(p_numbers) tab
where tab.nr = id;
end;
/
Example:
set serveroutput on
create table employees ( id number(10),name varchar2(100));
insert into employees values (3,'Raymond');
insert into employees values (4,'Hans');
commit;
declare
l_number number_table := number_table();
l_sys_refcursor sys_refcursor;
l_employee employees%rowtype;
begin
l_number.extend;
l_number(1) := numbertype(3);
l_number.extend;
l_number(2) := numbertype(4);
tableselect(l_number, l_sys_refcursor);
loop
fetch l_sys_refcursor into l_employee;
exit when l_sys_refcursor%notfound;
dbms_output.put_line(l_employee.name);
end loop;
close l_sys_refcursor;
end;
/
This will output:
Raymond
Hans
I wound up here looking for a solution as well.
Depending on the high-end number of items you need to query against, and assuming your items are unique, you could split your query into batch queries of 1000 items each and combine the results on your end instead (pseudocode here):
//remove dupes
items = items.RemoveDuplicates();
//how to break the items into 1000 item batches
batches = new batch list;
batch = new batch;
for (int i = 0; i < items.Count; i++)
{
if (batch.Count == 1000)
{
batches.Add(batch);
batch = new batch; // start a fresh batch; clearing would also empty the one just added
}
batch.Add(items[i]);
if (i == items.Count - 1)
{
//add the final batch (it has < 1000 items).
batches.Add(batch);
}
}
// now go query the db for each batch
results = new results;
foreach(batch in batches)
{
results.Add(query(batch));
}
This may be a good trade-off in the scenario where you don't typically have over 1000 items, since having over 1000 items would be your "high end" edge-case scenario. For example, in the event that you have 1500 items, two queries of (1000, 500) wouldn't be so bad. This also assumes that each query isn't particularly expensive in its own right.
This wouldn't be appropriate if your typical number of expected items got to be much larger - say, in the 100000 range - requiring 100 queries. If so, then you should probably look more seriously into using the global temporary tables solution provided above as the most "correct" solution. Furthermore, if your items are not unique, you would need to resolve duplicate results in your batches as well.
Yes, it's a very odd limitation in Oracle.
If you specify 2000 IDs inside the IN clause, it will fail.
This fails:
select ...
where id in (1,2,....2000)
But if you simply put the 2000 IDs in another table (a temp table, for example), the query below will work:
select ...
where id in (select userId
from temptable_with_2000_ids )
Alternatively, you could split the records into groups of 1000 and execute the queries group by group.
Here is some Perl code that tries to work around the limit by creating an inline view and then selecting from it. The statement text is compressed by using rows of twelve items each instead of selecting each item from DUAL individually, then uncompressed by unioning together all columns. UNION or UNION ALL in decompression should make no difference here as it all goes inside an IN which will impose uniqueness before joining against it anyway, but in the compression, UNION ALL is used to prevent a lot of unnecessary comparing. As the data I'm filtering on are all whole numbers, quoting is not an issue.
#
# generate the innards of an IN expression with more than a thousand items
#
use English '-no_match_vars';
sub big_IN_list{
@_ < 13 and return join ', ', @_;
my $padding_required = (12 - (@_ % 12)) % 12;
# get first dozen and make length of @_ an even multiple of 12
my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l) = splice @_, 0, 12, ( ('NULL') x $padding_required );
my @dozens;
local $LIST_SEPARATOR = ', '; # how to join elements within each dozen
while(@_){
push @dozens, "SELECT @{[ splice @_,0,12 ]} FROM DUAL"
};
$LIST_SEPARATOR = "\n union all\n "; # how to join @dozens
return <<"EXP";
WITH t AS (
select $a A, $b B, $c C, $d D, $e E, $f F, $g G, $h H, $i I, $j J, $k K, $l L FROM DUAL
union all
@dozens
)
select A from t union select B from t union select C from t union
select D from t union select E from t union select F from t union
select G from t union select H from t union select I from t union
select J from t union select K from t union select L from t
EXP
}
One would use that like so:
my $bases_list_expr = big_IN_list(list_your_bases());
$dbh->do(<<"UPDATE");
update bases_table set belong_to = 'us'
where id in ($bases_list_expr)
UPDATE
Instead of using an IN clause, can you try using a JOIN with the other table that is fetching the IDs? That way we don't need to worry about the limit. Just a thought from my side.
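A hedged sketch of that suggestion, assuming the IDs already live in some table (called id_source here for illustration):
SELECT t1.*
FROM table1 t1
JOIN id_source s ON s.id = t1.id;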
Instead of SELECT * FROM table1 WHERE ID IN (1,2,3,4,...,1000);
Use this:
SELECT * FROM table1 WHERE ID IN (SELECT rownum AS ID FROM dual connect BY level <= 1000);
Note that you need to be sure the IDs do not refer to any other foreign IDs if this is a dependency. To ensure only existing IDs are used:
SELECT * FROM table1 WHERE ID IN (SELECT distinct(ID) FROM tablewhereidsareavailable);
Cheers
