I have a table of items with the following columns:
start_time column (timestamp without time zone)
expiration_time_seconds column (integer)
For example, some values are:
SELECT start_time, expiration_time_seconds
FROM whatever
ORDER BY start_time;
start_time | expiration_time_seconds
----------------------------+-------------------------
2014-08-05 08:23:32.428452 | 172800
2014-08-10 09:49:51.082456 | 3600
2014-08-13 13:03:56.980073 | 3600
2014-08-21 06:31:38.596451 | 3600
...
How do I add the expiration time, given in seconds, to the start_time?
I have tried to format a time interval string for an interval literal, but failed:
blah=> SELECT interval concat(to_char(3600, '9999'), ' seconds');
ERROR: syntax error at or near "("
LINE 1: SELECT interval concat(to_char(3600, '9999'), ' seconds');
The trick is to create a fixed interval and multiply it by the number of seconds in the column:
SELECT start_time,
expiration_time_seconds,
start_time + expiration_time_seconds * interval '1 second' AS end_time
FROM whatever
ORDER BY start_time;
start_time | expiration_time_seconds | end_time
----------------------------+-------------------------+----------------------------
2014-08-05 08:23:32.428452 | 172800 | 2014-08-07 08:23:32.428452
2014-08-10 09:49:51.082456 | 3600 | 2014-08-10 10:49:51.082456
2014-08-13 13:03:56.980073 | 3600 | 2014-08-13 14:03:56.980073
2014-08-21 06:31:38.596451 | 3600 | 2014-08-21 07:31:38.596451
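On PostgreSQL 9.4 or later, make_interval is an equivalent way to build the interval straight from the seconds column; a minimal sketch against the same table:
SELECT start_time,
       expiration_time_seconds,
       -- make_interval(secs => n) builds an interval of n seconds directly
       start_time + make_interval(secs => expiration_time_seconds) AS end_time
FROM whatever
ORDER BY start_time;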
I want to count the total number of pending tickets for each day of this week. I was only able to get it for one day at a time. This is the query I have right now:
SELECT (n.TOTAL - v.TODAY) + d.GISTER AS GISTER
FROM
(
-- Counts yesterday
SELECT
COUNT(ID) AS Gister
FROM FRESHDESK_API
-- 4 = resolved 5 = closed
-- Both count as closed
WHERE STATUS IN(4, 5)
AND TRUNC(UPDATED_AT) = TRUNC(SYSDATE - 1)
) d
CROSS JOIN
(
-- Total pending
SELECT
COUNT(ID) AS TOTAL
FROM FRESHDESK_API
-- 3 is pending
WHERE STATUS IN(3)
) n
CROSS JOIN
(
-- Pending tickets today
SELECT
COUNT(ID) AS TODAY
FROM FRESHDESK_API
-- 3 is pending
WHERE STATUS IN(3)
AND TRUNC(UPDATED_AT) = TRUNC(SYSDATE)
) v
I want to get a result like this:
+-----------+-----------------+
| day       | pending_tickets |
+-----------+-----------------+
| Monday    | 20              |
| Tuesday   | 22              |
| Wednesday | 25              |
| Thursday  | 24              |
| Friday    | 19              |
+-----------+-----------------+
The table is something like this (I left the unused columns out):
+----+------------+------------+--------+
| id | created_at | updated_at | status |
+----+------------+------------+--------+
|    |            |            |        |
|    |            |            |        |
|    |            |            |        |
+----+------------+------------+--------+
You can use left join and group by as follows:
SELECT TO_CHAR(tday.UPDATED_AT, 'day') AS updated_at,
       COUNT(tday.ID) - COUNT(yday.ID) AS pending_tickets
FROM FRESHDESK_API tday
LEFT JOIN FRESHDESK_API yday
  ON TRUNC(tday.UPDATED_AT) = TRUNC(yday.UPDATED_AT - 1)
 AND TRUNC(yday.UPDATED_AT + 1, 'iw') = TRUNC(SYSDATE, 'iw')
 AND yday.STATUS IN (4, 5)
WHERE TRUNC(tday.UPDATED_AT, 'iw') = TRUNC(SYSDATE, 'iw')
  AND tday.STATUS = 3
GROUP BY TO_CHAR(tday.UPDATED_AT, 'day'), TRUNC(tday.UPDATED_AT)
ORDER BY TRUNC(tday.UPDATED_AT);
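If "pending tickets for a day" simply means tickets with status 3 whose updated_at falls on that day (an assumption about the requirement, not something stated above), a plain conditional-aggregation sketch over the same FRESHDESK_API columns is another option:
SELECT TO_CHAR(TRUNC(UPDATED_AT), 'day')       AS day,
       COUNT(CASE WHEN STATUS = 3 THEN ID END) AS pending_tickets
FROM FRESHDESK_API
-- same "current ISO week" filter as the query above
WHERE TRUNC(UPDATED_AT, 'iw') = TRUNC(SYSDATE, 'iw')
GROUP BY TRUNC(UPDATED_AT)
ORDER BY TRUNC(UPDATED_AT);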
We are getting a time from the source: "2019-11-03 01:01:00". 2019-11-03 is the daylight saving transition day.
Let's say this is the end_time. We have another column in the Hive table, start_time.
The logic to derive start_time is:
start_time = (end_time - 3600)
Issue:
When we apply this logic during job execution with unix_timestamp(), the results are as follows.
start_time =
select from_unixtime(unix_timestamp('2019-11-03 01:01:00') - 3600 ,'yyyy-MM-dd HH:mm:ss');
+----------------------+--+
| _c0 |
+----------------------+--+
| 2019-11-03 01:01:00 |
+----------------------+--+
also
End_time = select from_unixtime(unix_timestamp('2019-11-03 01:01:00') ,'yyyy-MM-dd HH:mm:ss');
+----------------------+--+
| _c0 |
+----------------------+--+
| 2019-11-03 01:01:00 |
+----------------------+--+
Both return the same result, so start_time = end_time, which is not expected.
We want start_time = "2019-11-03 00:01:00".
Can someone help?
You are hitting this issue: HIVE-14305.
The solution can be to calculate the date in bash and pass it to your script as a variable:
initial_date="2019-11-03 01:01:00"
datesec="$(date '+%s' --date="$initial_date")"
result_date=$( date --date="#$((datesec - 3600))" "+%Y-%m-%d %H:%M:%S")
echo "$result_date"
#result 2019-11-03 00:01:00
#call your script like this
hive -hiveconf result_date="$result_date" -f script_name
#In the script use '${hiveconf:result_date}'
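Inside script_name the variable can then be substituted wherever the adjusted time is needed. A minimal sketch, assuming a placeholder table and column (my_table and end_time are not from the original):
-- hypothetical usage inside script_name; my_table and end_time are placeholders
SELECT '${hiveconf:result_date}' AS start_time,
       end_time
FROM my_table;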
I am trying to group by a timestamp's date in Oracle. So far I have used to_char, but I need another way. I tried this:
SELECT d.summa,
d.FILIAL_CODE,
to_char(d.DATE_ACTION, 'YYYY-MM-DD')
FROM table1 d
WHERE d.action_id = 2
AND d.date_action Between to_date('01.01.2020', 'dd.mm.yyyy') AND to_date('01.03.2020', 'dd.mm.yyyy')
GROUP BY to_char(d.DATE_ACTION, 'YYYY-MM-DD')
table1
-----------------------------------------------------
summa | filial_code | date_action
--------------------------------------------------
100000.00 | 2100 | 2016-09-13 11:04:32
320000.12 | 3200 | 2016-09-12 21:04:58
400000.00 | 2100 | 2016-09-13 15:12:45
510000.12 | 3200 | 2016-09-15 09:30:58
------------------------------------------------------
I need a result like this:
-------------------------------------------
summa | filial_code | date_action
------------------------------------------
500000.00 | 2100 | 2016-09-13
320000.12 | 3200 | 2016-09-12
510000.12 | 3200 | 2016-09-15
------------------------------------------
But I need to do it without the to_char function. I tried TRUNC but could not get it to work.
Using TRUNC should actually convert it to a date and remove the time part, but you also need to handle your other columns. Either group by them or use an aggregation function:
SELECT SUM( d.summa ) AS summa,
d.FILIAL_CODE,
TRUNC(d.DATE_ACTION) AS date_action
FROM table1 d
WHERE d.action_id = 2
AND d.date_action Between to_date('01.01.2020', 'dd.mm.yyyy')
AND to_date('01.03.2020', 'dd.mm.yyyy')
GROUP BY TRUNC(d.DATE_ACTION), d.FILIAL_CODE
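One hedged side note about the date filter, in case the intent is to keep the whole of 01.03.2020: BETWEEN compares down to the second, so rows later that day are excluded. The usual pattern is a half-open range:
SELECT SUM(d.summa)         AS summa,
       d.FILIAL_CODE,
       TRUNC(d.DATE_ACTION) AS date_action
FROM table1 d
WHERE d.action_id = 2
  -- half-open range: >= first day, < the day after the last day
  AND d.date_action >= to_date('01.01.2020', 'dd.mm.yyyy')
  AND d.date_action <  to_date('02.03.2020', 'dd.mm.yyyy')
GROUP BY TRUNC(d.DATE_ACTION), d.FILIAL_CODE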
I need to extract the date and the hour separately from a string column (dates) in a Hive table. My attempts:
select TO_DATE(from_unixtime(UNIX_TIMESTAMP(dates,'dd/MM/yyyy'))) from dates;
output:
0016-01-01
0016-01-01
select TO_DATE(from_unixtime(UNIX_TIMESTAMP(dates,'hh'))) from dates;
output:
1970-01-01
1970-01-01
Please advise how to get the date and the hour separately from the table column.
I've changed the data sample to something more reasonable:
with dates as (select explode(array('1/11/16 3:29','12/7/16 17:19')) as dates)
select from_unixtime(unix_timestamp(dates,'dd/MM/yy HH:mm'),'yyyy-MM-dd') as the_date
,from_unixtime(unix_timestamp(dates,'dd/MM/yy HH:mm'),'H') as H
,from_unixtime(unix_timestamp(dates,'dd/MM/yy HH:mm'),'HH') as HH
from dates
+------------+----+----+
| the_date | h | hh |
+------------+----+----+
| 2016-11-01 | 3 | 03 |
| 2016-07-12 | 17 | 17 |
+------------+----+----+
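If the hour is needed as a number rather than a zero-padded string, the formatted value can simply be cast; a small follow-up sketch on the same sample:
with dates as (select explode(array('1/11/16 3:29','12/7/16 17:19')) as dates)
select from_unixtime(unix_timestamp(dates,'dd/MM/yy HH:mm'),'yyyy-MM-dd')     as the_date
      ,cast(from_unixtime(unix_timestamp(dates,'dd/MM/yy HH:mm'),'H') as int) as hr
from dates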
I have a table with date ranges, and I need to total the days only for contiguous date ranges.
-----------------------------------
| table RANGES |
----------------------------------
| d_start | d_end | days |
| (date) | (date) | (num)|
-----------------------------------
| 2014-02-01 | 2014-02-05 | 4 |
| 2014-02-06 | 2014-02-11 | 5 |
| 2014-03-22 | 2014-03-25 | 3 |
| 2014-04-02 | 2014-04-10 | 8 |
| 2014-04-11 | 2014-04-20 | 9 |
-----------------------------------
I need to total the days, breaking whenever the date ranges are not contiguous, to get a result like this:
| 2014-02-01 | 2014-02-11 | 9 |
| 2014-03-22 | 2014-03-25 | 3 |
| 2014-04-02 | 2014-04-20 | 17 |
I tried LEAD to check whether the next record's d_start equals d_end, but I couldn't achieve the goal.
Many thanks for any ideas!
Marco
The answer is quite tricky:
SQL> create table tmp$dates (d_start date, d_end date);
Table created
SQL> insert into tmp$dates values (DATE '2014-02-01', DATE '2014-02-05');
1 row inserted
SQL> insert into tmp$dates values (DATE '2014-02-06', DATE '2014-02-11');
1 row inserted
SQL> insert into tmp$dates values (DATE '2014-03-22', DATE '2014-03-25');
1 row inserted
SQL> insert into tmp$dates values (DATE '2014-04-02', DATE '2014-04-10');
1 row inserted
SQL> insert into tmp$dates values (DATE '2014-04-11', DATE '2014-04-20');
1 row inserted
SQL> select min(d_start), max(d_end), max(d_end) - min(d_start) + 1 n#
2 from tmp$dates d
3 start with d_start not in (select d_end + 1 from tmp$dates)
4 connect by prior d_end = d_start - 1
5 group by level - rownum
6 order by 1;
MIN(D_START) MAX(D_END) N#
------------ ----------- ----------
01.02.2014 11.02.2014 11
22.03.2014 25.03.2014 4
02.04.2014 20.04.2014 19
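An alternative sketch using analytic functions (closer to the LAG/LEAD idea from the question) on the same tmp$dates table: a new group starts whenever a row's d_start is not the day after the previous row's d_end, and a running sum of those breaks labels each contiguous island:
SELECT MIN(d_start) AS d_start,
       MAX(d_end)   AS d_end,
       MAX(d_end) - MIN(d_start) + 1 AS n#
FROM (
       SELECT d.*,
              -- 1 marks a break (range does not continue the previous one), 0 a continuation;
              -- the running SUM of these flags is the island number
              SUM(CASE WHEN d_start = LAG(d_end) OVER (ORDER BY d_start) + 1
                       THEN 0 ELSE 1 END)
                  OVER (ORDER BY d_start) AS grp
       FROM tmp$dates d
     )
GROUP BY grp
ORDER BY 1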