Convert Date Timestamp in Presto - Oracle

Is there any way to convert from 2022-06-15 10:21:05.698000000 to this 2022-06-15 10:21:05 format?
I have data in Hive (the datatype is string) which contains values like 2022-06-15 10:21:05.698000000. I need to insert this data into Oracle, where the datatype is date. I am using the below query while selecting the data from Hive.
select hive_date,cast(coalesce(substr(A.hive_date, 1,19),substr(A.hive_date2,1,19)) as timestamp)
as oracle_date from test A limit 10;
It shows the following output:
hive_date | oracle_date
2022-06-15 10:21:05.698000000 | 2022-06-15 10:21:05.000
I want to truncate this to seconds, 2022-06-15 10:21:05, so I can insert it into Oracle. Can someone please suggest how?

You can use the date functions date_parse and date_format:
select date_format(date_parse('2022-06-15 10:21:05.698000000', '%Y-%m-%d %H:%i:%s.%f'), '%Y-%m-%d %H:%i:%s')
Output:
_col0
2022-06-15 10:21:05
Also, I would suggest applying coalesce before any other manipulation, i.e. rewriting the original coalesce(substr(A.hive_date, 1, 19), substr(A.hive_date2, 1, 19)) as substr(coalesce(A.hive_date, A.hive_date2), 1, 19).
Possibly you also just need to trim the data, like:
substr(trim(coalesce(A.hive_date, A.hive_date2)), 1, 19)
Or:
select date_format(
    date_parse(
        trim(coalesce(A.hive_date, A.hive_date2)),
        '%Y-%m-%d %H:%i:%s.%f'
    ),
    '%Y-%m-%d %H:%i:%s'
)

It's possible that all you need is:
SELECT cast('2022-06-15 10:21:05.698000000' AS timestamp(0));
_col0
2022-06-15 10:21:06
But, if it needs to be a string, this truncates:
SELECT date_format(timestamp '2022-06-15 10:21:05.698000000', '%Y-%m-%d %H:%i:%s');
_col0
2022-06-15 10:21:05
Or this method rounds:
SELECT cast(cast('2022-06-15 10:21:05.698000000' AS timestamp(0)) AS varchar);
_col0
2022-06-15 10:21:06
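If it helps to see it in context, here is a sketch wiring the rounding cast into the query from the question (table and column names are taken from the question; swap in the date_format form above if truncation rather than rounding is required):
SELECT hive_date,
       cast(trim(coalesce(A.hive_date, A.hive_date2)) AS timestamp(0)) AS oracle_date
FROM test A
LIMIT 10;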

Related

Oracle: filter query on datetime

I need to restrict a query with a
SELECT ... FROM ...
WHERE my_date=(RESULT FROM A SELECT)
... ;
in order to achieve that, I am using a timestamp as the result of the select (if I instead use a datetime I get nothing from my select, probably because the format I am using trims the datetime at the second).
Sadly this is not working, because this kind of query:
select DISTINCT TO_DATE(TO_TIMESTAMP(TO_DATE('25-10-2017 00:00', 'dd-MM-yyyy HH24:MI'))) from DUAL;
returns an
ORA-01830: date format picture ends before converting entire input string
How do I deal with the timestamp-to-date conversion?
If you want to compare only the dates, use TRUNC on both the LHS and the RHS.
SELECT ... FROM ...
WHERE trunc(my_date)=(select trunc(RESULT) FROM A)
... ;
This will compare just the dates by truncating the timestamp values.
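For example, a quick check of what TRUNC does to a date value (a hypothetical literal, just for illustration):
SELECT TRUNC(TO_DATE('25-10-2017 14:35', 'DD-MM-YYYY HH24:MI')) AS truncated FROM DUAL;
-- 25-OCT-2017 00:00:00 (the display depends on your NLS_DATE_FORMAT)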
You can use a combination of the TRUNC and IN keywords in your query to achieve what you are expecting. Please check the query sample below as a reference.
SELECT * FROM customer WHERE TRUNC(last_update_dt) IN (select DISTINCT (TRUNC(last_update_dt)) from ... )
Cheers !!

How to convert string date to big int in hive with milliseconds

I have a string 2013-01-01 12:00:01.546 which represents a timestamp with milliseconds that I need to convert to a bigint without losing the milliseconds.
I tried unix_timestamp but I lose the milliseconds:
unix_timestamp('2013-01-01 12:00:01.546','yyyy-MM-dd HH:mm:ss') ==> 1357059601
unix_timestamp('2013-01-01 12:00:01.786','yyyy-MM-dd HH:mm:ss') ==> 1357059601
I tried with a milliseconds format as well, but no difference:
unix_timestamp('2013-01-01 12:00:01.786','yyyy-MM-dd HH:mm:ss:SSS') ==> 1357059601
Is there any way to get milliseconds difference in hive?
This is what I came up with so far.
If all your timestamps have a fraction of 3 digits it can be simplified.
with t as (select timestamp '2013-01-01 12:00:01.546' as ts)
select cast ((to_unix_timestamp(ts) + coalesce(cast(regexp_extract(ts,'\\.\\d*',0) as decimal(3,3)),0)) * 1000 as bigint)
from t
1357070401546
Verification of the result:
select from_utc_timestamp (1357070401546,'UTC')
2013-01-01 12:00:01.546000
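For reference, a sketch of the simplification mentioned above, assuming every value carries exactly a 3-digit fraction ('yyyy-MM-dd HH:mm:ss.SSS'); the absolute value still depends on the session time zone:
with t as (select '2013-01-01 12:00:01.546' as ts)
select cast(unix_timestamp(substr(ts, 1, 19), 'yyyy-MM-dd HH:mm:ss') as bigint) * 1000
       + cast(substr(ts, 21, 3) as bigint) as epoch_millis   -- seconds * 1000 plus the .SSS part
from t;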
So apparently unix_timestamp doesn't convert milliseconds. You can use the following approach.
hive> select unix_timestamp(cast(regexp_replace('2013-01-01 12:00:01.546', '(\\d{4})-(\\d{2})-(\\d{2}) (\\d{2}):(\\d{2}):(\\d{2}).(\\d{3})', '$1-$2-$3 $4:$5:$6.$7' ) as timestamp));
OK
1357063201
The Hive function unix_timestamp() doesn't convert the millisecond part, so you may want to use the following:
unix_timestamp('2013-01-01 12:00:01.546') + cast(split('2013-01-01 12:00:01.546','\\\.')[1] as int) => 1357067347
unix_timestamp('2013-01-01 12:00:01.786') + cast(split('2013-01-01 12:00:01.786','\\\.')[1] as int) => 1357067587

Converting epoch time to PST zone in hadoop/hive

I want to convert the epoch time into the PST time zone.
For example: when I convert 1482440069 to PST, I should get 2016-12-22.
Now, when I try this I get a proper answer:
SELECT from_utc_timestamp('1970-01-01 07:00:00', 'PST');
Also, when I try this, I get a proper value:
select from_unixtime(cast(1482440069 as bigint), 'yyyy-MM-dd')
o/p : 2016-12-22
But when I try this query, I get a NULL response:
select from_utc_timestamp(from_unixtime(cast(1482440069 as bigint), 'yyyy-MM-dd'),'PST') -- Gives NULL response
Use yyyy-MM-dd HH:mm:ss instead of yyyy-MM-dd
hive> select from_utc_timestamp(from_unixtime(cast(1482440069 as bigint), 'yyyy-MM-dd HH:mm:ss'),'PST');
OK
2016-12-22 04:54:29
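If only the calendar date in PST is needed (as in the example in the question), one option is to wrap the corrected expression in to_date; a sketch:
select to_date(from_utc_timestamp(from_unixtime(cast(1482440069 as bigint), 'yyyy-MM-dd HH:mm:ss'), 'PST'));
-- 2016-12-22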
You can also try adding the time-zone offset in seconds (for example -8 * 3600 for PST) to the epoch value before formatting it:
SELECT from_unixtime(cast(1482440069 as bigint) + (-8 * 3600), 'yyyy-MM-dd HH:mm:ss');

"Select Extract Minute" failed in oracle

Please help me to solve this problem.
I need to extract the minute from the hour (the "JAM" column) in a table.
I have tried this query:
WITH recordabsen AS
(SELECT userid,
TO_CHAR(checktime,'MM/DD/YYYY') AS tanggal ,
MIN(TO_CHAR(checktime,'hh24:mi')) AS JAM
FROM checkinout
WHERE USERID = '688'
AND (checktime BETWEEN to_date('04/01/2013','MM/DD/YYYY') AND to_date('05/01/2013','MM/DD/YYYY'))
AND checktype = 'I'
GROUP BY userid,
TO_CHAR(checktime,'MM/DD/YYYY')
)
SELECT EXTRACT (MINUTE FROM JAM) AS minute
FROM recordabsen
WHERE to_date(JAM,'hh24:mi') > TRUNC(to_date(JAM,'hh24:mi')) + 8/24
but it returns an error:
Invalid Extract field
As soon as you've done:
to_char(checktime,'hh24:mi')
you've got a string and not a date, so it may be better to use string functions and not date functions, i.e.:
substr("JAM", 4)
Here is a sqlfiddle demo
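For example, with a literal in the same 'hh24:mi' shape as JAM:
SELECT TO_NUMBER(SUBSTR('10:21', 4)) AS minute FROM DUAL;
-- 21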
Why don't you just pass the checktime from the WITH clause query? Then, if checktime is a date, you can cast it to TIMESTAMP, which will allow you to use EXTRACT MINUTE:
SELECT EXTRACT (MINUTE FROM CAST(SYSDATE AS TIMESTAMP)) FROM dual;
Check at SQLFiddle: http://www.sqlfiddle.com/#!4/d41d8/18054
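For completeness, a sketch of that suggestion applied to the query from the question, keeping checktime as a DATE inside the WITH clause (first_checkin is a hypothetical alias; the other names come from the question):
WITH recordabsen AS (
  SELECT userid,
         TO_CHAR(checktime, 'MM/DD/YYYY') AS tanggal,
         MIN(checktime)                   AS first_checkin   -- keep the DATE instead of an 'hh24:mi' string
  FROM   checkinout
  WHERE  userid = '688'
    AND  checktime BETWEEN TO_DATE('04/01/2013', 'MM/DD/YYYY')
                       AND TO_DATE('05/01/2013', 'MM/DD/YYYY')
    AND  checktype = 'I'
  GROUP  BY userid, TO_CHAR(checktime, 'MM/DD/YYYY')
)
SELECT EXTRACT(MINUTE FROM CAST(first_checkin AS TIMESTAMP)) AS minute
FROM   recordabsen
WHERE  first_checkin > TRUNC(first_checkin) + 8/24;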

Get the sysdate -1 in Hive

Is there any way to get the current date minus 1 in Hive, i.e. always yesterday's date?
And in this format: 20120805?
I can run my query like this to get the data for yesterday's date, as today is Aug 6th:
select * from table1 where dt = '20120805';
But then I tried it this way, with the date_sub function to get yesterday's date, since the table is partitioned on the date (dt) column:
select * from table1 where dt = date_sub(TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(),
'yyyyMMdd')) , 1) limit 10;
It looks for the data in all the partitions. Why? Am I doing something wrong in my query?
How can I make the evaluation happen in a subquery to avoid scanning the whole table?
Try something like:
select * from table1
where dt >= from_unixtime(unix_timestamp()-1*60*60*24, 'yyyyMMdd');
This works if you don't mind that Hive scans the entire table. from_unixtime is not deterministic, so the query planner in Hive won't optimize for you. In many cases (for example log files), not specifying a deterministic partition key can cause a very large Hadoop job to start, since it will scan the whole table, not just the rows with the given partition key.
If this matters to you, you can launch Hive with an additional option:
$ hive -hiveconf date_yesterday=20150331
And in the script or Hive terminal, use:
select * from table1
where dt >= ${hiveconf:date_yesterday};
The name of the variable doesn't matter, nor does the value; in this case you can set them using Unix commands to get the prior date. In the specific case of the OP:
$ hive -hiveconf date_yesterday=$(date --date yesterday "+%Y%m%d")
In MySQL:
select DATE_FORMAT(curdate()-1,'%Y%m%d');
In SQL Server:
SELECT convert(varchar,getDate()-1,112)
Use this query:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP() - 1*24*60*60, 'yyyyMMdd');
It looks like DATE_SUB assumes the date is in the format yyyy-MM-dd, so you might have to do some more format manipulation to get to your format. Try this:
select * from table1
where dt = FROM_UNIXTIME(
UNIX_TIMESTAMP(
DATE_SUB(
FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd')
, 1)
)
, 'yyyyMMdd') limit 10;
Use this:
select * from table1
where dt = date_format(
    concat(
        year(date_sub(current_timestamp, 1)), '-',
        month(date_sub(current_timestamp, 1)), '-',
        day(date_sub(current_timestamp, 1))
    ),
    'yyyyMMdd'
) limit 10;
This will give a deterministic result (a string) of your partition.
I know it's super verbose.
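On newer Hive versions (1.2+, which have current_date, date_sub, and date_format), a shorter variant of the same idea, assuming those functions are available on your cluster:
select * from table1
where dt = date_format(date_sub(current_date, 1), 'yyyyMMdd') limit 10;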
