Group by a TIMESTAMP's date in Oracle

I am trying to group by the date part of a timestamp in Oracle. So far I have used to_char, but I need another way. I tried this:
SELECT d.summa,
       d.FILIAL_CODE,
       to_char(d.DATE_ACTION, 'YYYY-MM-DD')
FROM table1 d
WHERE d.action_id = 2
  AND d.date_action BETWEEN to_date('01.01.2020', 'dd.mm.yyyy') AND to_date('01.03.2020', 'dd.mm.yyyy')
GROUP BY to_char(d.DATE_ACTION, 'YYYY-MM-DD')
table1
-----------------------------------------------------
 summa     | filial_code | date_action
-----------------------------------------------------
 100000.00 | 2100        | 2016-09-13 11:04:32
 320000.12 | 3200        | 2016-09-12 21:04:58
 400000.00 | 2100        | 2016-09-13 15:12:45
 510000.12 | 3200        | 2016-09-15 09:30:58
-----------------------------------------------------
I need something like this:
-------------------------------------------
 summa     | filial_code | date_action
-------------------------------------------
 500000.00 | 2100        | 2016-09-13
 320000.12 | 3200        | 2016-09-12
 510000.12 | 3200        | 2016-09-15
-------------------------------------------
But I need to do this without the to_char function. I tried TRUNC but could not get it to work.

Using TRUNC does convert the value to a DATE and removes the time part, but you also need to handle your other columns: either group by them or apply an aggregate function to them:
SELECT SUM( d.summa ) AS summa,
       d.FILIAL_CODE,
       TRUNC(d.DATE_ACTION) AS date_action
FROM table1 d
WHERE d.action_id = 2
  AND d.date_action BETWEEN to_date('01.01.2020', 'dd.mm.yyyy')
                        AND to_date('01.03.2020', 'dd.mm.yyyy')
GROUP BY TRUNC(d.DATE_ACTION), d.FILIAL_CODE
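Note that TRUNC applied to a TIMESTAMP (or DATE) returns a DATE with the time set to midnight, so no string conversion is involved; how it is displayed then depends on the client's NLS_DATE_FORMAT. A quick illustrative check (DUMP reports Typ=12, the internal DATE type):
SELECT TRUNC(SYSTIMESTAMP)       AS truncated_value,
       DUMP(TRUNC(SYSTIMESTAMP)) AS internal_repr   -- Typ=12 means DATE
FROM DUAL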

Related

When I select, only one column is checked without duplicates

I have 2 tables like this:
first table
+------------+---------------+--------+
| pk | user_one |user_two|
+------------+---------------+--------+
second table
+------------+---------------+--------+----------------+----------------+
| pk | sender |receiver|fk of firsttable|content |
+------------+---------------+--------+----------------+----------------+
The first and second tables have a one-to-many (1:N) relationship.
There are many records in the second table:
| pk  | sender | receiver | fk of firsttable | content              |
| 120 | car224 | car223   | 1                | test message1 to 223 |
| 121 | car224 | car223   | 1                | test message2 to 223 |
| 122 | car224 | car225   | 21               | test message1 to 225 |
| 123 | car224 | car225   | 21               | test message2 to 225 |
| 124 | car224 | car225   | 21               | test message3 to 225 |
| 125 | car224 | car225   | 21               | test message4 to 225 |
For rows that share the same fk value, I want only the row with the largest pk.
I've changed the column names above to make them easier to understand.
Here is the actual SQL I've tried so far:
select *
from (select rownum rn,
             mr.mrno,
             mr.user_one,
             mr.user_two,
             m.mno,
             m.content
      from tbl_messagerelation mr,
           tbl_message m
      where (mr.user_one = 'car224' or
             mr.user_two = 'car224') and
            m.rowid in (select max(rowid)
                        from tbl_message
                        group by m.mno) and
            rownum <= 1*20)
where rn > (1-1) * 20
And this is the result:
+---------+-------+----------+----------+-------------------------+----------------------+
| rn | mrno | user_one | user_two | mno(pk of second table) | content |
+---------+-------+----------+----------+-------------------------+----------------------+
| 1 | 1 | car224 | car223 | 125 | test message4 to 225 |
| 2 | 21 | car224 | car225 | 125 | test message4 to 225 |
+---------+-------+----------+----------+-------------------------+----------------------+
My desired result is something like this:
+---------+---------+----------+--------------------+----------------------+
| fk      | sender  | receiver | pk of second table | content              |
+---------+---------+----------+--------------------+----------------------+
| 1       | car224  | car223   | 121                | test message2 to 223 |
| 21      | car224  | car225   | 125                | test message4 to 225 |
+---------+---------+----------+--------------------+----------------------+
Your table description, compared with your query, is confusing me. What I could understand, however, is that you are probably looking for row_number().
One important piece of advice: use standard explicit JOIN syntax rather than the outdated comma-separated (a, b) join syntax. The join keys were not clear to me, so replace the ? placeholders appropriately in your final query.
select * from
(
  select mr.*, m.*,
         row_number() over ( partition by m.fk order by m.pk desc ) as rn
  from tbl_messagerelation mr join tbl_message m on mr.? = m.?
) where rn = 1
Or perhaps you don't need that join at all:
select * from
(
  select m.*,
         row_number() over ( partition by m.fk order by m.pk desc ) as rn
  from tbl_message m
) where rn = 1
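As a minimal sketch against the sample data, assuming the second table's columns are literally named pk, sender, receiver, fk_of_firsttable and content (names taken from the question's mock-up, not the real schema), the second form would look like:
select fk_of_firsttable, sender, receiver, pk, content
from (
  select m.*,
         row_number() over ( partition by m.fk_of_firsttable
                             order by m.pk desc ) as rn
  from tbl_message m
)
where rn = 1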

XSD format number, 5 decimal places

I have XSD code which prints the LINE_AMOUNT value as 14,952.59. Now I want to display it as 14,952.59000 (5 decimal places).
How can I achieve this?
Thank you
SQL Fiddle
Query 1: Either (if you want to use the current NLS values for the decimal and group separator characters):
SELECT TO_CHAR(
14952.59,
'FM9G999G999G999G990D00000'
)
FROM DUAL
Results:
| TO_CHAR(14952.59,'FM9G999G999G999G990D00000') |
|-----------------------------------------------|
| 14,952.59000 |
Query 2: Or:
SELECT TO_CHAR(
14952.59,
'FM9,999,999,999,990.00000'
)
FROM DUAL
Results:
| TO_CHAR(14952.59,'FM9,999,999,999,990.00000') |
|-----------------------------------------------|
| 14,952.59000 |
Update: SQL Fiddle
Query 3:
SELECT TO_CHAR(
TO_NUMBER(
'13,214,952.59',
'FM9G999G999G999G990D99999'
),
'FM9G999G999G999G990D00000'
) AS formatted_value
FROM DUAL
Results:
| FORMATTED_VALUE |
|------------------|
| 13,214,952.59000 |
Query 4:
SELECT TO_CHAR(
TO_NUMBER(
'13,214,952.59',
'FM9,999,999,999,990.99999'
),
'FM9,999,999,999,990.00000'
) AS formatted_value
FROM DUAL
Results:
| FORMATTED_VALUE |
|------------------|
| 13,214,952.59000 |
Try this; I think it will work for you.
select trim(to_char(14952.59, '9999999999.99999')) from dual
OUTPUT
14952.59000

Aggregation, mathematical function & GROUP BY in a single query in Hive

I have Table T1 with the below schema:
job_id    job_name  queue   memory  cores  start_time  end_time
job_1234  ABC       A_user  51200   20     22-02-2018  22-02-2018
job_2345  ABC       A_user  71680   30     22-02-2018  23-02-2018
I want the output to be:
ID  f_queue  f_job_name  f_memory  f_cores  f_start_time  f_end_time  process_month
1   A_user   ABC         120       50       22-02-2018    23-02-2018  201702
Here memory = (51200 + 71680) / 1024, cores = (20 + 30), and ID and process_month are static variables that I am passing to the Hive script.
Is the query below the right one?
select
${ID},
job_id as f_queue,
job_name as f_job_name,
sum(memory)/1024 as f_memory,
sum(f_cores) as f_cores,
min(start_time) as f_start_time,
max(end_time) as f_end_time,
${process_month} as process_month
from T1 group by f_job_name,f_queue;
In the GROUP BY you need to refer to the actual column names, i.e. job_id and job_name, not the alias names (f_queue, f_job_name).
In your query you select job_id as f_queue, so you are effectively grouping by job_id and job_name; because the two records have different job_id values (job_1234, job_2345), they end up in different groups.
You need to change your query to:
select
queue as f_queue,
job_name as f_job_name,
sum(memory)/1024 as f_memory,
sum(cores) as f_cores,
min(start_time) as f_start_time,
max(end_time) as f_end_time
from queue group by job_name,queue;
In the above query I'm doing a GROUP BY on queue and job_name, which gives this result:
+----------+-------------+-----------+----------+---------------+-------------+--+
| f_queue | f_job_name | f_memory | f_cores | f_start_time | f_end_time |
+----------+-------------+-----------+----------+---------------+-------------+--+
| A_user | ABC | 120.0 | 50.0 | 22-02-2018 | 23-02-2018 |
+----------+-------------+-----------+----------+---------------+-------------+--+
Your query's results, with job_name and job_id in the GROUP BY clause:
select
job_id as f_queue,
job_name as f_job_name,
sum(memory)/1024 as f_memory,
sum(cores) as f_cores,
min(start_time) as f_start_time,
max(end_time) as f_end_time
from queue group by job_name,job_id;
Output:
+-----------+-------------+-----------+----------+---------------+-------------+--+
| f_queue | f_job_name | f_memory | f_cores | f_start_time | f_end_time |
+-----------+-------------+-----------+----------+---------------+-------------+--+
| job_1234 | ABC | 50.0 | 20.0 | 22-02-2018 | 22-02-2018 |
| job_2345 | ABC | 70.0 | 30.0 | 22-02-2018 | 23-02-2018 |
+-----------+-------------+-----------+----------+---------------+-------------+--+
Your required query would probably be something like:
select
${ID},
queue as f_queue,
job_name as f_job_name,
sum(memory)/1024 as f_memory,
sum(cores) as f_cores,
min(start_time) as f_start_time,
max(end_time) as f_end_time,
${process_month} as process_month
from T1 group by queue,job_name;
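If ID and process_month are supplied as Hive variables, the script could be invoked like this (the script file name job_summary.hql is just a placeholder; with --hivevar, ${ID} and ${process_month} are substituted before the query runs):
hive --hivevar ID=1 --hivevar process_month=201702 -f job_summary.hql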

Add a column check for a number-to-number format in Oracle

I need to add a column to a table with a check that the input is a score of at most 999-999, like a soccer match score. How do I write this statement?
example:
|  Score  |
-----------
|   1-2   |
|  10-1   |
| 999-999 |
|  99-99  |
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE SCORES (Score ) AS
SELECT '1-2' FROM DUAL
UNION ALL SELECT '10-1' FROM DUAL
UNION ALL SELECT '999-999' FROM DUAL
UNION ALL SELECT '99-99' FROM DUAL
UNION ALL SELECT '1000-1000' FROM DUAL;
Query 1:
SELECT SCORE,
       CASE WHEN REGEXP_LIKE( SCORE, '^\d{1,3}-\d{1,3}$' )
            THEN 'Valid'
            ELSE 'Invalid'
       END AS Validity
FROM SCORES
Results:
| SCORE | VALIDITY |
|-----------|----------|
| 1-2 | Valid |
| 10-1 | Valid |
| 999-999 | Valid |
| 99-99 | Valid |
| 1000-1000 | Invalid |
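Since the question asks for an actual check on the column, the same regular expression can be used in a check constraint. A minimal sketch (the constraint name is an assumption):
ALTER TABLE SCORES
  ADD CONSTRAINT scores_score_chk
  CHECK ( REGEXP_LIKE( Score, '^\d{1,3}-\d{1,3}$' ) );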

Oracle subquery internal error

The query in Listing 1 joins two subqueries, both of which are computed from two named subqueries (ANIMAL and SEA_CREATURE). The output should list the animals that don't live in the sea and the animals that do.
When run in a console window (SQL Navigator 5.5), the server returns this error:
15:21:30 ORA-00600: internal error code, arguments: [evapls1], [], [], [], [], [], [], []
Why? And how to get around it?
Interestingly, I can run the same query from a program written in Delphi XE7 (using the TSQLQuery component) and it works OK. But this is not a problem with SQL Navigator: if I create a view containing the expression in Listing 1, selecting from the view does not output an error. The problem is in the Oracle server.
If I make the ANIMAL subquery really simple, as in Listing 2, it works. But anything else, even just selecting from a table, results in this internal error.
Listing 1: (Outputs error)
with ANIMAL as (
select ANIMAL_NAME
from xmltable( 't/e' passing xmltype( '<t><e>Tuna</e><e>Cat</e><e>Dolphin</e><e>Swallow</e></t>')
columns
ANIMAL_NAME varchar2(100) path 'text()')),
SEA_CREATURE as (
select 'Tuna' as CREATURE_NAME from dual
union all select 'Shark' from dual
union all select 'Dolphin' from dual
union all select 'Plankton' from dual)
select NONSEA_ANIMALS, SEA_ANIMALS
from (
select stringagg( ANIMAL_NAME) as NONSEA_ANIMALS
from ( (select * from ANIMAL)
minus (select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE))),
(select stringagg( ANIMAL_NAME) as SEA_ANIMALS
from ANIMAL
where ANIMAL_NAME in
(select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE))
Listing 2: (This works)
with ANIMAL as (
select 'Tuna' as ANIMAL_NAME from dual
union all select 'Cat' from dual
union all select 'Dolphin' from dual
union all select 'Swallow' from dual),
SEA_CREATURE as (
select 'Tuna' as CREATURE_NAME from dual
union all select 'Shark' from dual
union all select 'Dolphin' from dual
union all select 'Plankton' from dual)
select NONSEA_ANIMALS, SEA_ANIMALS
from (
select stringagg( ANIMAL_NAME) as NONSEA_ANIMALS
from ( (select * from ANIMAL)
minus (select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE))),
(select stringagg( ANIMAL_NAME) as SEA_ANIMALS
from ANIMAL
where ANIMAL_NAME in
(select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE));
Listing 3: Expected output for expressions in both Listings 1 & 2:
NONSEA_ANIMALS SEA_ANIMALS
-------------------------------
'Cat,Swallow' 'Tuna,Dolphin'
The Oracle banner is shown in Listing 4.
Listing 4: select * from v$version
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production
How is this craziness explained?
Update
Here is the explain plan ...
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
| Id | Operation | Name |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TEMP TABLE TRANSFORMATION | |
| 2 | LOAD AS SELECT | |
| 3 | VIEW | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE |
| 5 | LOAD AS SELECT | |
| 6 | UNION-ALL | |
| 7 | FAST DUAL | |
| 8 | FAST DUAL | |
| 9 | FAST DUAL | |
| 10 | FAST DUAL | |
| 11 | NESTED LOOPS | |
| 12 | VIEW | |
| 13 | SORT AGGREGATE | |
| 14 | VIEW | |
| 15 | MINUS | |
| 16 | SORT UNIQUE | |
| 17 | VIEW | |
| 18 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6666_765BCCBD |
| 19 | SORT UNIQUE | |
| 20 | VIEW | |
| 21 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6667_765BCCBD |
| 22 | VIEW | |
| 23 | SORT AGGREGATE | |
| 24 | HASH JOIN RIGHT SEMI | |
| 25 | VIEW | VW_NSO_1 |
| 26 | VIEW | |
| 27 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6667_765BCCBD |
| 28 | VIEW | |
| 29 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6666_765BCCBD |
----------------------------------------------------------------------------
ORA-03113 and ORA-00600 errors usually happen with WITH-clause queries when something fatal happened during execution.
Oracle's subquery factoring (the WITH clause) can be overused at times. Oracle may create a global temporary table for every query inside the WITH clause in order to reuse the results. So XMLTABLE() here could have created another GTT, and perhaps that crashed the database.
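For completeness: whether a factored subquery is materialized can sometimes be influenced with the INLINE and MATERIALIZE hints. Both are undocumented, so treat this as an assumption to verify rather than supported advice; a trivial, self-contained illustration:
with ANIMAL as (
  select /*+ INLINE */ 'Tuna' as ANIMAL_NAME from dual  -- ask Oracle not to materialize this subquery
)
select ANIMAL_NAME from ANIMAL;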
COLLECTION ITERATOR PICKLER FETCH appears when data is fetched from a PL/SQL object; it returns pickled (packed and formatted) data.
That may involve the creation of a temporary table underneath, as mentioned above, so the subquery factoring and the PL/SQL collection fetch did not go well together.
I have also seen queries with nested UNION ALL inside WITH crash.
This is most likely a bug in Oracle and should be reported to them.
The only way to get around it now would be to reform the query. In our application, use of WITH is strictly restricted (due to high CPU usage) to report-only purposes executed as batch jobs.
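As a sketch of what "reforming the query" might look like here, the XMLTABLE source can be taken out of the WITH clause and written inline, so only the plain UNION ALL subquery is factored. Whether this actually avoids the ORA-00600 on 10.2.0.4 is an assumption that would need to be tested:
with SEA_CREATURE as (
  select 'Tuna' as CREATURE_NAME from dual
  union all select 'Shark' from dual
  union all select 'Dolphin' from dual
  union all select 'Plankton' from dual)
select NONSEA_ANIMALS, SEA_ANIMALS
from (select stringagg( ANIMAL_NAME ) as NONSEA_ANIMALS
      from (select ANIMAL_NAME
            from xmltable( 't/e'
                   passing xmltype( '<t><e>Tuna</e><e>Cat</e><e>Dolphin</e><e>Swallow</e></t>')
                   columns ANIMAL_NAME varchar2(100) path 'text()')
            minus
            select CREATURE_NAME from SEA_CREATURE)),
     (select stringagg( ANIMAL_NAME ) as SEA_ANIMALS
      from xmltable( 't/e'
             passing xmltype( '<t><e>Tuna</e><e>Cat</e><e>Dolphin</e><e>Swallow</e></t>')
             columns ANIMAL_NAME varchar2(100) path 'text()')
      where ANIMAL_NAME in (select CREATURE_NAME from SEA_CREATURE))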
