Calculate the average of the current and previous row values - Oracle

I have the following table, which contains a single column VALUE1. I would like to calculate the average of the previous and current row of VALUE1 and show it in a second column VALUE2, starting from the second row, i.e. the first row's value is not averaged with a previous row.
The result should look like
ID  VALUE1  VALUE2
1   3       3
2   4       3.5
3   5       4.5
4   5       5
5   6       5.5
6   2       4
NOTE: For the first row (ID = 1) I average the first row with itself.
Any help appreciated. Thanks in advance.

You can also use the AVG() analytic function with a window spanning the previous and current row. Since the first row has no preceding row, its window contains only itself, so VALUE2 equals VALUE1 there, which matches the note in the question:
WITH practicenew AS (SELECT 1 ID, 3 value1 FROM dual UNION ALL
                     SELECT 2 ID, 4 value1 FROM dual UNION ALL
                     SELECT 3 ID, 5 value1 FROM dual UNION ALL
                     SELECT 4 ID, 5 value1 FROM dual UNION ALL
                     SELECT 5 ID, 6 value1 FROM dual UNION ALL
                     SELECT 6 ID, 2 value1 FROM dual)
SELECT ID,
       value1,
       AVG(value1) OVER (ORDER BY ID
                         ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) value2
FROM   practicenew;
        ID     VALUE1     VALUE2
---------- ---------- ----------
         1          3          3
         2          4        3.5
         3          5        4.5
         4          5          5
         5          6        5.5
         6          2          4

I think you can use the following query; it gives the same output you mentioned.
Create table script:
create table practicenew (id number, value1 number);
insert into practicenew (id, value1) values (1, 3);
insert into practicenew (id, value1) values (2, 4);
insert into practicenew (id, value1) values (3, 5);
insert into practicenew (id, value1) values (4, 5);
insert into practicenew (id, value1) values (5, 6);
insert into practicenew (id, value1) values (6, 2);
Then use the NVL and LAG functions. LAG brings the previous row's value into the current row; NVL handles the first row, where LAG returns NULL (making the whole average expression NULL), so the row's own value is used instead.
Query:
select id,
       value1,
       nvl((lag(value1) over (order by id) + value1) / 2, value1) as value2
from   practicenew;
Output:
ID  Value1  Value2
1   3       3
2   4       3.5
3   5       4.5
4   5       5
5   6       5.5
6   2       4
I hope it helps!

Related

How to convert column values to multiple insert rows with an Oracle cursor

I am trying to copy values from our old database to a new one, where the table structure has changed.
The old table has this structure:
Table1: Table1ID, WheelCount, BlindCount, OtherCount
For example, a row of Table1 might look like: Table1ID, 1, 2, 5
Table1 has been replaced by TableNew with these columns:
TableNewID, DisableID, Quantity
So that example row should become:
TableNewID1, 1, 1  (quantity 1 = WheelCount from Table1)
TableNewID2, 2, 2  (quantity 2 = BlindCount)
TableNewID3, 3, 5  (quantity 5 = OtherCount)
How do I write a cursor to transform the Table1 values into the new TableNew structure?
Table1
Table1ID  WheelCount  BlindCount  OtherCount
1         1           2           5
2         8           10          15
A master table is defined to map each DisableID to a type:
DisableID  Type
1          WheelCount
2          BlindCount
3          OtherCount
Expected structure:
ID  Table1ID  DisableID  Quantity
1   1         1          1
2   1         2          2
3   1         3          5
4   2         1          8
5   2         2          10
6   2         3          15
The simplest approach is a UNION ALL with one SELECT for each column you want to turn into a row:
insert into tablenew
select table1id,1,wheelcount from table1
union all
select table1id,2,blindcount from table1
union all
select table1id,3,othercount from table1
There are other, sleeker methods that avoid multiple passes over the first table, in case it's huge; see the sketch below.
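One such option, for example, is the UNPIVOT clause (a sketch, not from the original answer; it assumes Oracle 11g or later and the column names above, and note that UNPIVOT skips rows whose source column is NULL unless INCLUDE NULLS is added):
-- single-pass alternative to the three UNION ALL branches (Oracle 11g+)
insert into tablenew
select table1id, disableid, quantity
from   table1
unpivot (quantity for disableid in (wheelcount as 1,
                                    blindcount as 2,
                                    othercount as 3));
With the sample data above (no NULL counts), this loads the same six rows as the UNION ALL version.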
This is how I understood it.
Current table contents:
SQL> SELECT * FROM table1;
  TABLE1ID WHEELCOUNT BLINDCOUNT OTHERCOUNT
---------- ---------- ---------- ----------
         1          1          2          5
         2          8         10         15
Prepare new table:
SQL> CREATE TABLE tablenew
2 (
3 id NUMBER,
4 table1id NUMBER,
5 disableid NUMBER,
6 quantity NUMBER
7 );
Table created.
Sequence (will be used to populate tablenew.id column):
SQL> CREATE SEQUENCE seq_dis;
Sequence created.
Trigger (which actually populates tablenew.id):
SQL> CREATE OR REPLACE TRIGGER trg_bi_tn
2 BEFORE INSERT
3 ON tablenew
4 FOR EACH ROW
5 BEGIN
6 :new.id := seq_dis.NEXTVAL;
7 END;
8 /
Trigger created.
Copy data:
SQL> INSERT INTO tablenew (table1id, disableid, quantity)
2 SELECT table1id, 1 disableid, wheelcount AS quantity FROM table1
3 UNION ALL
4 SELECT table1id, 2 disableid, blindcount AS quantity FROM table1
5 UNION ALL
6 SELECT table1id, 3 disableid, othercount AS quantity FROM table1;
6 rows created.
Result:
SQL> SELECT *
2 FROM tablenew
3 ORDER BY table1id, disableid;
        ID   TABLE1ID  DISABLEID   QUANTITY
---------- ---------- ---------- ----------
         1          1          1          1
         3          1          2          2
         5          1          3          5
         2          2          1          8
         4          2          2         10
         6          2          3         15
6 rows selected.
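As a side note, not part of the original answer: on Oracle 12c and later, an identity column can replace the sequence-plus-trigger pair, roughly like this:
-- 12c+ sketch: the identity column replaces seq_dis and trg_bi_tn
CREATE TABLE tablenew
(
  id        NUMBER GENERATED ALWAYS AS IDENTITY,
  table1id  NUMBER,
  disableid NUMBER,
  quantity  NUMBER
);
The INSERT ... SELECT with UNION ALL then stays exactly the same.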

How do you shift values down in a column in an Oracle table?

Given the following oracle database table:
group  revision  comment
1      1         1
1      2         2
1      null      null
2      1         1
2      2         2
2      3         3
2      4         4
2      null      null
3      1         1
3      2         2
3      3         3
3      null      null
I want to shift the comment column one step down, in revision order, within each group, so that I get the following table:
group  revision  comment
1      1         null
1      2         1
1      null      2
2      1         null
2      2         1
2      3         2
2      4         3
2      null      4
3      1         null
3      2         1
3      3         2
3      null      3
I have the following query:
MERGE INTO example_table t1
USING example_table t2
ON (
     (t1.revision = t2.revision + 1 OR
       (t2.revision = (SELECT MAX(t3.revision)
                       FROM   example_table t3
                       WHERE  t3.group = t1.group)
        AND t1.revision IS NULL))
     AND t1.group = t2.group
   )
WHEN MATCHED THEN UPDATE SET t1.comment = t2.comment;
That does most of this (still need a separate query to cover revision = 1), but it is very slow.
So my question is, how do I use Max here as efficiently as possible to pull out the highest revision for each group?
I would use LAG, not MAX. LAG(comments, 1) pulls the previous row's comment within each group, and ordering by revision NULLS LAST keeps the NULL-revision row at the end so it receives the last real revision's comment:
create table example_table(group_id number, revision number, comments varchar2(40));
insert into example_table values (1,1,1);
insert into example_table values (1,2,2);
insert into example_table values (1,3,null);
insert into example_table values (2,1,1);
insert into example_table values (2,2,2);
insert into example_table values (2,3,3);
insert into example_table values (2,4,null);
select * from example_table;
merge into example_table e
using (select group_id,
              revision,
              comments,
              lag(comments, 1) over (partition by group_id order by revision nulls last) comments1
       from   example_table) u
on (u.group_id = e.group_id and nvl(u.revision, 0) = nvl(e.revision, 0))
when matched then update set comments = u.comments1;
select * from example_table;

How to create id with AUTO_INCREMENT on a view in Oracle

Can anyone help me create an AUTO_INCREMENT column on a view in Oracle 11g?
Thanks
While it's not possible to return a single unique identity column for a view whose underlying data has no single unique identifier, it is possible to return composite values that uniquely identify the data. For example, given a table of CSV data with a unique ID on each row:
create table sample (id number primary key, csv varchar2(4000));
where the CSV column contains a string of comma separated values:
insert into sample
select 1, 'a' from dual union all
select 2, 'b,c' from dual union all
select 3, 'd,"e",f' from dual union all
select 4, ',h,' from dual union all
select 5, 'j,"",l' from dual union all
select 6, 'm,,o' from dual;
The following query will unpivot the CSV data, and the composite values (ID, SEQ) will uniquely identify each VAL. The ID column identifies the record the data came from, and SEQ identifies the position within the CSV:
WITH pvt(id, seq, csv, val, nxt) AS (
  SELECT id     -- Parse out individual list items
       , 1      -- separated by commas and
       , csv    -- optionally enclosed by quotes
       , REGEXP_SUBSTR(csv, '(["]?)([^,]*)\1', 1, 1, null, 2)
       , REGEXP_INSTR(csv, ',', 1, 1)
  FROM   sample
  UNION ALL
  SELECT id
       , seq + 1
       , csv
       , REGEXP_SUBSTR(csv, '(["]?)([^,]*)\1', nxt + 1, 1, null, 2)
       , REGEXP_INSTR(csv, ',', nxt + 1, 1)
  FROM   pvt
  WHERE  nxt > 0
)
select * from pvt order by id, seq;
        ID        SEQ CSV        VAL               NXT
---------- ---------- ---------- ---------- ----------
         1          1 a          a                   0
         2          1 b,c        b                   2
         2          2 b,c        c                   0
         3          1 d,"e",f    d                   2
         3          2 d,"e",f    e                   6
         3          3 d,"e",f    f                   0
         4          1 ,h,        [NULL]              1
         4          2 ,h,        h                   3
         4          3 ,h,        [NULL]              0
         5          1 j,"",l     j                   2
         5          2 j,"",l     [NULL]              5
         5          3 j,"",l     l                   0
         6          1 m,,o       m                   2
         6          2 m,,o       [NULL]              3
         6          3 m,,o       o                   0
15 rows selected.
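If the goal is to expose this as a view, the same query can simply be wrapped in CREATE VIEW, with (ID, SEQ) serving as the composite identifier in place of a single auto-increment column. A sketch (the view name sample_values is made up; recursive WITH in a view needs 11g Release 2 or later):
-- wrap the unpivot query in a view; (id, seq) acts as the composite key
create or replace view sample_values as
with pvt(id, seq, csv, val, nxt) as (
  select id, 1, csv,
         regexp_substr(csv, '(["]?)([^,]*)\1', 1, 1, null, 2),
         regexp_instr(csv, ',', 1, 1)
  from   sample
  union all
  select id, seq + 1, csv,
         regexp_substr(csv, '(["]?)([^,]*)\1', nxt + 1, 1, null, 2),
         regexp_instr(csv, ',', nxt + 1, 1)
  from   pvt
  where  nxt > 0
)
select id, seq, val from pvt;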

rank() a group of items by count(*)

I have some problems with Oracle analytic functions and need help.
Here's a generic example:
create table test (item varchar2(10), value varchar2(10));
insert into test values ('item1','value1');
insert into test values ('item1','value1');
insert into test values ('item1','value1');
insert into test values ('item1','value1');
insert into test values ('item1','value1');
insert into test values ('item1','value2');
insert into test values ('item1','value2');
insert into test values ('item3','value2');
insert into test values ('item3','value2');
insert into test values ('item3','value2');
insert into test values ('item5','value1');
insert into test values ('item5','value1');
insert into test values ('item5','value1');
insert into test values ('item5','value1');
insert into test values ('item5','value1');
insert into test values ('item5','value1');
insert into test values ('item5','value1');
insert into test values ('item5','value2');
insert into test values ('item5','value2');
insert into test values ('item5','value2');
select item, value, count(*) c,
       sum(count(*)) over () total,
       sum(count(*)) over (partition by item) total_by_item,
       dense_rank() over (order by count(*) desc) dense_rank
from   test
group by item, value
order by 5 desc;
The result of the query is:
ITEM       VALUE       C      TOTAL TOTAL_BY_ITEM DENSE_RANK
---------- ---------- -- ---------- ------------- ----------
item5      value1      7         20            10          1
item5      value2      3         20            10          3
item1      value2      2         20             7          4
item1      value1      5         20             7          2
item3      value2      3         20             3          3
How can I get the items ranked by TOTAL_BY_ITEM? So it would look like this:
ITEM       VALUE       C      TOTAL TOTAL_BY_ITEM WHAT_I_NEED
---------- ---------- -- ---------- ------------- -----------
item5      value1      7         20            10           1
item5      value2      3         20            10           1
item1      value2      2         20             7           2
item1      value1      5         20             7           2
item3      value2      3         20             3           3
Is it possible to achieve this without another join or subquery? I have a feeling that it is. Intuitively it should be something like dense_rank(count(*)) over (partition by item), analogous to the analytic SUM used for the fifth column, but that doesn't work.
I don't think this is exactly what you are searching for, but just for reference: without a subquery you can achieve the same result using the MODEL clause:
select item, value, c, total, total_by_item, what_i_need
from   test
group by item, value
model
  dimension by (row_number() over (order by null) d)
  measures (
    item, value,
    count(*) c,
    sum(count(*)) over () total,
    sum(count(*)) over (partition by item) total_by_item,
    1 what_i_need
  )
  rules (
    what_i_need[any] = dense_rank() over (order by total_by_item[cv()] desc)
  )
order by 5 desc;
Other than that, I don't think you can achieve it without a subquery.
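For reference, the straightforward subquery version, which is what the question was trying to avoid, would look something like this (a sketch using the same test table):
-- rank items by their per-item total using an inline view
select item, value, c, total, total_by_item,
       dense_rank() over (order by total_by_item desc) what_i_need
from  (select item, value, count(*) c,
              sum(count(*)) over () total,
              sum(count(*)) over (partition by item) total_by_item
       from   test
       group by item, value)
order by total_by_item desc;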

plsql: Getting another field's values along with the aggregated values in a grouping statement

I am working on a time attendance system. I have the employees' transactions stored in the following table:
I want to get the earliest and the latest transaction for each employee, including their date and type.
I am able to get the dates using grouping and aggregation; however, I cannot figure out how to get the types along with them.
Would you please help me with it?
Thank you.
That's what the FIRST and LAST aggregate functions are designed for.
Here is a link to the documentation:
FIRST: http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/functions065.htm#SQLRF00641
LAST: http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/functions083.htm#sthref1206
And here is an example:
SQL> create table my_transactions (id,employee_id,action_date,type)
2 as
3 select 1, 1, sysdate, 'A' from dual union all
4 select 2, 1, sysdate-1, 'B' from dual union all
5 select 3, 1, sysdate-2, 'C' from dual union all
6 select 4, 1, sysdate-3, 'D' from dual union all
7 select 5, 2, sysdate-11, 'E' from dual union all
8 select 6, 2, sysdate-12, 'F' from dual union all
9 select 7, 2, sysdate-13, 'G' from dual
10 /
Table created.
SQL> select *
2 from my_transactions
3 order by id
4 /
        ID EMPLOYEE_ID ACTION_DATE         T
---------- ----------- ------------------- -
         1           1 04-07-2011 10:15:07 A
         2           1 03-07-2011 10:15:07 B
         3           1 02-07-2011 10:15:07 C
         4           1 01-07-2011 10:15:07 D
         5           2 23-06-2011 10:15:07 E
         6           2 22-06-2011 10:15:07 F
         7           2 21-06-2011 10:15:07 G
7 rows selected.
SQL> select employee_id
2 , min(action_date) min_date
3 , max(type) keep (dense_rank first order by action_date) min_date_type
4 , max(action_date) max_date
5 , max(type) keep (dense_rank last order by action_date) max_date_type
6 from my_transactions
7 group by employee_id
8 /
EMPLOYEE_ID MIN_DATE            M MAX_DATE            M
----------- ------------------- - ------------------- -
          1 01-07-2011 10:15:07 D 04-07-2011 10:15:07 A
          2 21-06-2011 10:15:07 G 23-06-2011 10:15:07 E
2 rows selected.
Regards,
Rob.
You could also try analytic (windowing) functions: keep only the rows whose action_date equals the per-employee minimum or maximum, which returns the full rows, including the type:
select *
from  (select id, employee_id, action_date, type,
              max(action_date) over (partition by employee_id) max_action_date,
              min(action_date) over (partition by employee_id) min_action_date
       from   transaction)
where action_date in (max_action_date, min_action_date)
