display avg of the last column on the last row in SQL query - oracle

Hello, I am having trouble figuring out this particular question, using Oracle SQL Developer.
I am trying to write a query so that it will display exactly like the table/picture below.
The last row of this query should display the word AVERAGE: and show the average (of all values in the sixth column) of the percentage above the min selling price for all the sales made, and all the remaining columns should display "--------".
Code | ProductName | Title | ShopID | SalePrice | %SoldAbove Min.SellPrice
---- | ----------- | ----- | ------ | --------- | ------------------------
1    | Martin      | Robot | 1      | $49000    | 15%
2    |             |       |        |           |
3    |             |       |        |           |
4    |             |       |        |           |
---- | ----------- | ----- | ------ | AVERAGE:  | 16.5%
Above is the last row of the output I am looking for, but I have no clue how to produce the
"--------" in the remaining columns, let alone the AVERAGE: label and the average of all the values in the sixth column of the last row.
In summary, the last row of the output should show the average (in the sixth column) of the percentage
sold above the minimum selling price for all the sales.

Use ROLLUP:
SELECT DECODE( GROUPING( Code ), 1, '----', code ) AS code,
DECODE( GROUPING( Code ), 1, '----', MAX(col1) ) AS Col1,
DECODE( GROUPING( Code ), 1, '----', MAX(col2) ) AS Col2,
DECODE( GROUPING( Code ), 1, 'Average:', MAX(col3) ) AS Col3,
AVG( value )
FROM table_name
GROUP BY ROLLUP(Code);
Which, for the sample data:
CREATE TABLE table_name ( code, col1, col2, col3, value ) AS
SELECT 1, 'AAA', 1, 'AA1', 15.0 FROM DUAL UNION ALL
SELECT 2, 'BBB', 2, 'BB2', 17.5 FROM DUAL UNION ALL
SELECT 3, 'CCC', 3, 'CC3', 20.0 FROM DUAL;
Outputs:
CODE | COL1 | COL2 | COL3 | AVG(VALUE)
:--- | :--- | :--- | :------- | ---------:
1 | AAA | 1 | AA1 | 15
2 | BBB | 2 | BB2 | 17.5
3 | CCC | 3 | CC3 | 20
---- | ---- | ---- | Average: | 17.5

This is a job for UNION ALL. A good way to get this sort of result:
SELECT COALESCE(whatever1, '------') whatever1,
       COALESCE(whatever2, '------') whatever2,
       COALESCE(whatever3, '------') whatever3,
       whatever4
FROM (
    SELECT whatever1, whatever2, whatever3, whatever4
    FROM whatever
    UNION ALL
    SELECT NULL, NULL, NULL, AVG(whatever4) FROM whatever
) r
ORDER BY r.whatever1 NULLS LAST
The COALESCE function turns the NULLs produced by the summary row into your ------ strings (if the columns are numeric you will need TO_CHAR first).
You can also investigate GROUP BY ROLLUP (that is Oracle's spelling; WITH ROLLUP is MySQL's), as in the other answer. It may do what you want.
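Applied to the columns named in the question, a minimal sketch might look like the following. The table and column names are my guesses from the question (adjust them to your schema), and the numeric columns are passed through TO_CHAR so they can be coalesced with a string:
SELECT COALESCE(TO_CHAR(code),       '--------') AS code,
       COALESCE(product_name,        '--------') AS product_name,
       COALESCE(title,               '--------') AS title,
       COALESCE(TO_CHAR(shop_id),    '--------') AS shop_id,
       COALESCE(TO_CHAR(sale_price), 'AVERAGE:') AS sale_price,
       pct_above_min
FROM (
    SELECT code, product_name, title, shop_id, sale_price, pct_above_min
    FROM sales
    UNION ALL
    SELECT NULL, NULL, NULL, NULL, NULL, AVG(pct_above_min)
    FROM sales
) r
ORDER BY r.code NULLS LAST;
The average row sorts last because its code is NULL; the outer COALESCE calls then print the dashes and the AVERAGE: label.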

Related

Oracle - how to update a unique row based on MAX effective date which is part of the unique index

Oracle - Say you have a table that has a unique key on name, ssn and effective date. The effective date makes it unique. What is the best way to update a current indicator to show inactive for the rows with dates less than the max effective date? I can't really wrap my head around it since there are multiple rows with the same name and ssn combinations. I haven't been able to find this scenario on here for Oracle and I'm having developer's block. Thanks.
"All name/ssn having a max effective date earlier than this time yesterday:"
SELECT name, ssn
FROM t
GROUP BY name, ssn
HAVING MAX(eff_date) < SYSDATE - 1
Oracle supports multi-column IN, so:
UPDATE t
SET current_indicator = 'inactive'
WHERE (name, ssn, eff_date) IN (
    SELECT name, ssn, MAX(eff_date)
    FROM t
    GROUP BY name, ssn
    HAVING MAX(eff_date) < SYSDATE - 1
)
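If the requirement is instead read as "mark every row except the latest effective date per name/ssn as inactive", the same multi-column comparison works with NOT IN. This is a hedged sketch of that variant (it assumes name, ssn and eff_date are never NULL, since they form the unique key):
UPDATE t
SET current_indicator = 'inactive'
WHERE (name, ssn, eff_date) NOT IN (
    SELECT name, ssn, MAX(eff_date)
    FROM t
    GROUP BY name, ssn
)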
Use a MERGE statement with an analytic function to identify the rows to update, and then merge on the ROWID pseudo-column so that Oracle can efficiently identify the rows to update (without having to perform an expensive self-join comparing the values):
MERGE INTO table_name dst
USING (
SELECT rid,
max_eff_date
FROM (
SELECT ROWID AS rid,
effective_date,
status,
MAX( effective_date ) OVER ( PARTITION BY name, ssn ) AS max_eff_date
FROM table_name
)
WHERE ( effective_date < max_eff_date AND status <> 'inactive' )
OR ( effective_date = max_eff_date AND status <> 'active' )
) src
ON ( dst.ROWID = src.rid )
WHEN MATCHED THEN
UPDATE
SET status = CASE
WHEN src.max_eff_date = dst.effective_date
THEN 'active'
ELSE 'inactive'
END;
So, for some sample data:
CREATE TABLE table_name ( name, ssn, effective_date, status ) AS
SELECT 'aaa', 1, DATE '2020-01-01', 'inactive' FROM DUAL UNION ALL
SELECT 'aaa', 1, DATE '2020-01-02', 'inactive' FROM DUAL UNION ALL
SELECT 'aaa', 1, DATE '2020-01-03', 'inactive' FROM DUAL UNION ALL
SELECT 'bbb', 2, DATE '2020-01-01', 'active' FROM DUAL UNION ALL
SELECT 'bbb', 2, DATE '2020-01-02', 'inactive' FROM DUAL UNION ALL
SELECT 'bbb', 3, DATE '2020-01-01', 'inactive' FROM DUAL UNION ALL
SELECT 'bbb', 3, DATE '2020-01-03', 'active' FROM DUAL;
The merge only updates the 3 rows that need changing, and:
SELECT *
FROM table_name;
Outputs:
NAME | SSN | EFFECTIVE_DATE | STATUS
:--- | --: | :------------- | :-------
aaa | 1 | 01-JAN-20 | inactive
aaa | 1 | 02-JAN-20 | inactive
aaa | 1 | 03-JAN-20 | active
bbb | 2 | 01-JAN-20 | inactive
bbb | 2 | 02-JAN-20 | active
bbb | 3 | 01-JAN-20 | inactive
bbb | 3 | 03-JAN-20 | active

Is there another solution to calculate data filtered by date without a WITH clause?

I need to calculate, per name, the sum of value for rows where date < first_date + interval 3 day, in BigQuery.
Example of data:
+-------+------------+----------+-------+
| Name  | date       | order_id | value |
+-------+------------+----------+-------+
| JONES | 2019-01-03 | 11       | 10    |
| JONES | 2019-01-05 | 12       | 5     |
| JONES | 2019-06-03 | 13       | 3     |
| JONES | 2019-07-03 | 14       | 20    |
| John  | 2019-07-23 | 15       | 10    |
+-------+------------+----------+-------+
My solution is:
WITH data AS (
SELECT "JONES" name, DATE("2019-01-03") date_time, 11 order_id, 10 value
UNION ALL
SELECT "JONES", DATE("2019-01-05"), 12, 5
UNION ALL
SELECT "JONES", DATE("2019-06-03"), 13, 3
UNION ALL
SELECT "JONES", DATE("2019-07-03"), 14, 20
UNION ALL
SELECT "John", DATE("2019-07-23"), 15, 10
),
data2 AS (
SELECT *, MIN(date_time) OVER (PARTITION BY name) min_date
FROM data
)
SELECT name,
ARRAY_AGG(STRUCT(order_id as f_id, date_time as f_date) ORDER BY order_id LIMIT 1)[OFFSET(0)].*,
sum(case when date_time< date_add(min_date,interval 3 day) then value end) as total_value_day3,
SUM(value) AS total
FROM data2
GROUP BY name
Output:
+------+------+------------+----------------+------+
| name | f_id | f_date |total_value_day3| total|
+------+------+------------+----------------+------+
| JONES| 11 | 2019-01-03 | 15 | 38 |
| John | 15 | 2019-07-23 | 10 | 10 |
+------+------+------------+----------------+------+
So my question: can the same be calculated in a more efficient way?
Or is this solution OK for large datasets?
The following gets the same results without using window functions or array aggregations, so BQ has to do less ordering/partitioning. For this small example, my query takes longer to run, but there is less byte shuffling. If you run this against a much larger dataset, I think mine will be more efficient.
WITH data AS (
SELECT "JONES" name, DATE("2019-01-03") date_time, "11" order_id, 10 value UNION ALL
SELECT "JONES", DATE("2019-01-05"), "12", 5 UNION ALL
SELECT "JONES", DATE("2019-06-03"), "13", 3 UNION ALL
SELECT "JONES", DATE("2019-07-03"), "14", 20 UNION ALL
SELECT "John", DATE("2019-07-23"), "15", 10
),
aggs as (
select name, min(date_time) as first_order_date, min(order_id) as first_order_id, sum(value) as total
from data
group by 1
)
select
name,
first_order_id as f_id,
first_order_date as f_date,
sum(value) as total_value_day3,
total
from aggs
inner join data using(name)
where date_time < date_add(first_order_date, interval 3 day) -- <= perhaps
group by 1,2,3,5
Note, this makes an assumption that order_id is sequential (i.e. order_id 11 always occurs before order_id 12) in the same manner that dates are sequential; see the sketch below for a variant that drops that assumption.
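If that assumption does not hold, one hedged tweak (my addition, assuming BigQuery's ANY_VALUE(... HAVING MIN ...) is available to you) is to pick the order_id of the row with the earliest date_time instead of MIN(order_id), by replacing the aggs CTE above with:
aggs as (
  select name,
         min(date_time) as first_order_date,
         -- order_id of the earliest-dated row, rather than the smallest order_id
         any_value(order_id having min date_time) as first_order_id,
         sum(value) as total
  from data
  group by 1
)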

How to use Oracle's LISTAGG function with multiple values?

I have an 'ITEMS' table like below:
ITEM_NO | ITEM_NAME
------- | ------------
1       | Book
2       | Pen
3       | Sticky Notes
4       | Ink
5       | Corrector
6       | Ruler
In another 'EMP_ITEMS' table I have the below:
EMPLOYEE | ITEMS_LIST
-------- | ----------
John     | 1,2
Mikel    | 5
Sophia   | 2,3,6
William  | 3,4
Daniel   | null
Michael  | 6
The output has to be like this:
EMPLOYEE | ITEMS_LIST | ITEM_NAME
-------- | ---------- | ----------------------
John     | 1,2        | Book,Pen
Mikel    | 5          | Corrector
Sophia   | 2,3,6      | Pen,Sticky Notes,Ruler
William  | 3,4        | Sticky Notes,Ink
Daniel   | null       | null
Michael  | 6          | Ruler
I used the below query:
SELECT e.EMPLOYEE,e.ITEMS_LIST, LISTAGG(i.ITEM_NAME, ',') WITHIN GROUP (ORDER BY i.ITEM_NAME) ITEM_DESC
FROM EMP_ITEMS e
INNER JOIN ITEMS i ON i.ITEM_NO = e.ITEMS_LIST
GROUP BY e.EMPLOYEE,e.ITEMS_LIST;
But there is an error:
ORA-01722: invalid number
But there is an error: ORA-01722: invalid number
That is because your ITEMS_LIST is a string composed of numeric and comma characters; it is not actually a list of numbers, so you are trying to compare a single item number to a whole list of items.
Instead, treat it as a string and look for sub-string matches. To do this you will need to surround the strings in the delimiter character and compare to see whether one is a substring of the other:
Oracle 11g R2 Schema Setup:
CREATE TABLE Items ( ITEM_NO, ITEM_NAME ) As
SELECT 1, 'Book' FROM DUAL UNION ALL
SELECT 2, 'Pen' FROM DUAL UNION ALL
SELECT 3, 'Sticky Notes' FROM DUAL UNION ALL
SELECT 4, 'Ink' FROM DUAL UNION ALL
SELECT 5, 'Corrector' FROM DUAL UNION ALL
SELECT 6, 'Ruler' FROM DUAL;
CREATE TABLE emp_items ( EMPLOYEE, ITEMS_LIST ) AS
SELECT 'John', '1,2' FROM DUAL UNION ALL
SELECT 'Mikel', '5' FROM DUAL UNION ALL
SELECT 'Sophia', '3,2,6' FROM DUAL UNION ALL
SELECT 'William', '3,4' FROM DUAL UNION ALL
SELECT 'Daniel', null FROM DUAL UNION ALL
SELECT 'Michael', '6' FROM DUAL;
Query 1:
SELECT e.employee,
e.items_list,
LISTAGG( i.item_name, ',' )
WITHIN GROUP (
ORDER BY INSTR( ','||e.items_list||',', ','||i.item_no||',' )
) AS item_names
FROM emp_items e
LEFT OUTER JOIN
items i
ON ( ','||e.items_list||',' LIKE '%,'||i.item_no||',%' )
GROUP BY e.employee, e.items_list
Results:
| EMPLOYEE | ITEMS_LIST | ITEM_NAMES |
|----------|------------|------------------------|
| John | 1,2 | Book,Pen |
| Mikel | 5 | Corrector |
| Daniel | (null) | (null) |
| Sophia | 3,2,6 | Sticky Notes,Pen,Ruler |
| Michael | 6 | Ruler |
| William | 3,4 | Sticky Notes,Ink |

match words in all rows in same column

There is a column in table 'mytable' named 'Description'.
+----+-------------------------------+
| ID | Description |
+----+-------------------------------+
| 1 | My NAME is Sajid KHAN |
| 2 | My Name is Ahmed Khan |
| 3 | MY friend name is Salman Khan |
+----+-------------------------------+
I need to write an Oracle SQL query/procedure/function to list the distinct words in the column.
The output should be:
+------------------+-------+
| Word | Count |
+------------------+-------+
| MY | 3 |
| NAME | 3 |
| IS | 3 |
| SAJID | 1 |
| KHAN | 3 |
| AHMED | 1 |
| FRIEND | 1 |
| SALMAN | 1 |
+------------------+-------+
Word matching should be case-insensitive.
I am using Oracle 12.1.
Let's suppose we would somehow manage to split every description into words.
So, instead of a single row with Id = 1 and Description = 'My NAME is Sajid KHAN' we'd have 5 rows like this:
ID | Description
--- | ------------
1 | My
1 | NAME
1 | is
1 | Sajid
1 | KHAN
in this form it'd be trivial, something like
select Description, count(*) from data_in_new_form group by Description
So, let's do this using a recursive query.
create table mytable
as
select 1 as ID, 'My NAME is Sajid KHAN' as Description from dual
union all
select 2, 'My Name is Ahmed Khan' from dual
union all
select 3, 'MY friend name is Salman Khan' from dual
union all
select 4, 'test, punctuation! it is' from dual
;
with
rec (id, str, depth, element_value) as
(
-- Anchor member.
select id, upper(Description) as str, 1 as depth, REGEXP_SUBSTR( upper(Description), '(.*?)( |$)', 1, 1, NULL, 1 ) AS element_value
from mytable
UNION ALL
-- Recursive member.
select id, str, depth + 1, REGEXP_SUBSTR( str ,'(.*?)( |$)', 1, depth+1, NULL, 1 ) AS element_value
from rec
where depth < regexp_count(str, ' ')+1
)
, data as (
select * from rec
--order by id, depth
)
select element_value, count(*) from data
group by element_value
order by element_value
;
Please notice this version doesn't do anything about punctuation; it assumes words are separated by spaces (one way to handle punctuation is sketched after the alternative query below).
UPDATE: an alternative way using a hierarchical query:
with rec as
(
SELECT id, LEVEL AS depth,
REGEXP_SUBSTR( upper(description) ,'(.*?)( |$)', 1, LEVEL, NULL, 1 ) AS element_value
FROM mytable
CONNECT BY LEVEL <= regexp_count(description, ' ')+1
and prior id = id
and prior SYS_GUID() is not null
)
, data as (
select * from rec
--order by id, depth
)
select element_value, count(*) from data
group by element_value
order by 2 desc
;
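If punctuation needs to be stripped as well, a minimal sketch (my addition, not part of the original answer) is to split on runs of letters instead of on spaces:
with rec as
(
  SELECT id, LEVEL AS depth,
         -- take the LEVEL-th run of letters, so commas, exclamation marks etc. are ignored
         REGEXP_SUBSTR( upper(description), '[[:alpha:]]+', 1, LEVEL ) AS element_value
  FROM mytable
  CONNECT BY LEVEL <= regexp_count(description, '[[:alpha:]]+')
  and prior id = id
  and prior SYS_GUID() is not null
)
select element_value, count(*)
from rec
group by element_value
order by 2 desc;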
This query will work. The ordering of the words may be different. However, frequent words come at the beginning as you have listed.
SELECT word,
COUNT(*)
FROM
(SELECT TRIM (REGEXP_SUBSTR (Description, '[^ ]+', 1, ROWNUM) ) AS Word
FROM
(SELECT LISTAGG(UPPER(Description),' ') within GROUP(
ORDER BY ROWNUM ) AS Description
FROM mytable
)
CONNECT BY LEVEL <= REGEXP_COUNT ( Description, '[^ ]+')
)
GROUP BY WORD
ORDER BY 2 DESC;

Oracle sum group by date range

I have the following table
+-----------+-------+-------+
| Date | Type | Value |
+-----------+-------+-------+
| 1/1/2013 | A | 1 |
| 1/2/2013 | A | 3 |
| 1/3/2013 | A | 5 |
| 1/4/2013 | A | 6 |
| 1/6/2013 | A | 8 |
| 1/7/2013 | A | 1 |
| 1/8/2013 | A | 2 |
+-----------+-------+-------+
I want to sum the value for the previous 3 dates (inclusive) for a certain day, so I used this query,
e.g. with sel_date = 1/3/2013:
select type, sum(value)
from table_name
where date <= seldate
and date > seldate - 3
group by type
Now the problem is, I want to output a table for a given date range, computing the sum over the previous 3 days for each date,
e.g. sel_date range 1/3/2013 - 1/8/2013:
+-----------+-------+------------+
| Date | Type | Sum(Value) |
+-----------+-------+------------+
| 1/3/2013 | A | 9 | // 5 + 3 + 1
| 1/4/2013 | A | 14 | // 6 + 5 + 3
| 1/5/2013 | A | 11 | // 0 + 6 + 5
| 1/6/2013 | A | 14 | // 8 + 0 + 6
| 1/7/2013 | A | 9 | // 1 + 8 + 0
| 1/8/2013 | A | 11 | // 2 + 1 + 8
+-----------+-------+------------+
Is there a way to do this in a single query? I tried reading about partitioning but it is leading me nowhere.
Use RANGE BETWEEN in the windowing clause:
select dt, type, value,
sum(value) over (order by dt range between 2 preceding and current row) as sv
from t
Test data and output:
create table t (dt date, type varchar2(1), value number(5));
insert into t values (date '2013-01-01', 'A', 1);
insert into t values (date '2013-01-02', 'A', 3);
insert into t values (date '2013-01-03', 'A', 5);
insert into t values (date '2013-01-04', 'A', 6);
insert into t values (date '2013-01-05', 'A', 8);
insert into t values (date '2013-01-06', 'A', 1);
insert into t values (date '2013-01-07', 'A', 2);
insert into t values (date '2013-01-12', 'A', 2);
DT          TYPE  VALUE  SV
----------  ----  -----  ---
2013-01-01  A     1      1
2013-01-02  A     3      4
2013-01-03  A     5      9
2013-01-04  A     6      14
2013-01-05  A     8      19
2013-01-06  A     1      15
2013-01-07  A     2      11
2013-01-12  A     2      2
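To restrict the output to the requested range (1/3/2013 - 1/8/2013) without losing the earlier days from the window, one hedged option (my addition, also partitioning by type since the question groups by it) is to filter in an outer query:
select dt, type, value, sv
from (
    select dt, type, value,
           sum(value) over (partition by type
                            order by dt
                            range between 2 preceding and current row) as sv
    from t
)
where dt between date '2013-01-03' and date '2013-01-08'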
You can try with something like this:
with test(Date_, Type, Value ) as
(
select to_date('01/01/2013', 'mm/dd/yyyy'), 'A', 1 from dual union all
select to_date('01/02/2013', 'mm/dd/yyyy'), 'A', 3 from dual union all
select to_date('01/03/2013', 'mm/dd/yyyy'), 'A', 5 from dual union all
select to_date('01/04/2013', 'mm/dd/yyyy'), 'A', 6 from dual union all
select to_date('01/05/2013', 'mm/dd/yyyy'), 'A', 8 from dual union all
select to_date('01/06/2013', 'mm/dd/yyyy'), 'A', 1 from dual union all
select to_date('01/07/2013', 'mm/dd/yyyy'), 'A', 2 from dual
)
select *
from (
select date_, type,
value + nvl(lag(value, 1) over (partition by type order by date_), 0)
+ nvl(lag(value, 2) over (partition by type order by date_), 0) as value
from test
)
where date_ between to_date('01/03/2013', 'mm/dd/yyyy') and to_date('01/07/2013', 'mm/dd/yyyy')
This sums, for each row, its value plus the values of the two preceding rows (per type, ordered by date); the external query is simply used to apply the date filter, given that applying it in the internal query would lead to a wrong sum.
The LAG function is used to read values from the rows that precede the current row by 1 or 2 positions.
You can use this:
select date1, type,
       (select sum(t1.value)
        from table_name t1
        where t1.type = t2.type  -- correlate on type as well
        and t1.date1 between (t2.date1 - 2) and t2.date1
       ) as sumvalue
from table_name t2
where date1 between startDate and endDate
select t.date, t.type,
       -- ROWS counts physical rows, so a missing date is not treated as zero (unlike the RANGE answer above)
       sum(t.value) over (partition by t.type order by t.date rows between 2 preceding and current row) as pre_3row_sum
from table_name t
