Sort a list using loops - oracle

I'm still fairly new to PL/SQL; I tried to solve the following problem on my own for a long time, but I cannot get to a solution (and I did not find a similar problem solved elsewhere), so I thought I'd ask here.
I am trying to sort a list by a date column (which indicates the 'time_left_to_live'), so that the result is the same list but, as you can guess, sorted by that date.
EDIT: Maybe I should be even more precise:
My original goal is to write an AFTER UPDATE trigger which re-sorts the table whenever time_left_to_live is updated. My idea was to write a procedure which sorts (or rather updates) the list, and call it from the trigger.
Example for clarification:
UPDATE testv1 SET time_left_to_live = '01.05.2020' WHERE list_id = 3;
so after that, the row that was list_id 3 should become 1, since it now has the shortest time left to live, and the list_ids of the other rows should be incremented by 1:
1 01.05.2020
2 03.05.2020
3 31.05.2020
I hope I could explain the situation reasonably well.
I tried it with a nested loop, but I simply can't get my head around it. Any help is appreciated.
My table:
DROP TABLE testv1;
DROP PROCEDURE wlp_sort;
CREATE TABLE testv1(
list_ID INT,
time_left_to_live DATE
);
INSERT INTO testv1 VALUES (3, '01.06.2020');
INSERT INTO testv1 VALUES (2, '31.05.2020');
INSERT INTO testv1 VALUES (1, '03.05.2020');
--UPDATE testv1 SET time_left_to_live = '01.05.2020' WHERE list_ID = 2;
SELECT * FROM testv1;

I'd suggest you not do that. I presume that the table you posted as an example is exactly that - an example - and that reality is more complex. Furthermore, I presume that the list_id column identifies each row, which is OK, but it is wrong to use it for sorting purposes, especially the way you wanted to - by updating its value via database triggers.
What should you do, then? Use ORDER BY, as it is the only mechanism that guarantees that rows will be returned in the desired order. A long time ago - I think the last such Oracle database version was 8i - the GROUP BY clause happened to sort the result for you in the background, but those times have passed, so - ORDER BY it is.
Here's an example of what I mean.
This is the original contents of your table. The query includes an additional column - the one you might want to use instead - the row_number analytic function, which calculates the ordinal number on the fly. Initially, it matches the list_id value.
SQL> select list_id,
2 time_left_to_live,
3 row_number() over (order by time_left_to_live) rn
4 from testv1
5 order by time_left_to_live;
LIST_ID TIME_LEFT_ RN
---------- ---------- ----------
1 03.05.2020 1
2 31.05.2020 2
3 01.06.2020 3
SQL>
Let's update one row (also from your example) and see what happens. Note that I used a date literal, while you updated a date column with a string; Oracle implicitly tried - and succeeded - to convert it to a valid date value, but you shouldn't rely on that. '01.05.2020' could be the 1st of May or the 5th of January, depending on the date format, which may change and might be different in different databases. A date literal, on the other hand, is always written as date 'yyyy-mm-dd', so there's no confusion:
SQL> update testv1 set time_left_to_live = date '2020-05-01' where list_id = 2;
1 row updated.
SQL> select list_id,
2 time_left_to_live,
3 row_number() over (order by time_left_to_live) rn
4 from testv1
5 order by time_left_to_live;
LIST_ID TIME_LEFT_ RN
---------- ---------- ----------
2 01.05.2020 1
1 03.05.2020 2
3 01.06.2020 3
SQL>
The ORDER BY clause sorts the data, but now list_id and rn differ; list_id didn't change, while rn represents the new order.
If your next step is to do something with the row whose ordinal number is 1, you'd just use the query I suggested as an inline view and fetch the row whose rn = 1:
SQL> select list_id,
2 time_left_to_live
3 from (select list_id,
4 time_left_to_live,
5 row_number() over (order by time_left_to_live) rn
6 from testv1
7 )
8 where rn = 1;
LIST_ID TIME_LEFT_
---------- ----------
2 01.05.2020
SQL>
I suggest you use this option.
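Side note: on Oracle 12c and later, the row limiting clause does the same job without the inline view:
select list_id, time_left_to_live
from testv1
order by time_left_to_live
fetch first 1 row only;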
An additional drawback related to the database trigger: if you write an update statement that modifies list_id outside of a trigger, there's no problem - it works, and list_id and rn are synchronized again:
SQL> update testv1 a set
2 a.list_id = (select x.rn
3 from (select b.list_id,
4 b.time_left_to_live,
5 row_number() over (order by b.time_left_to_live) rn
6 from testv1 b
7 ) x
8 where x.list_id = a.list_id
9 );
3 rows updated.
SQL> select list_id,
2 time_left_to_live,
3 row_number() over (order by time_left_to_live) rn
4 from testv1
5 order by time_left_to_live;
LIST_ID TIME_LEFT_ RN
---------- ---------- ----------
1 01.05.2020 1
2 03.05.2020 2
3 01.06.2020 3
SQL>
But in a trigger, you modify a column, which fires the trigger, which modifies the column, which fires the trigger again ... until you exceed the maximum number of recursive SQL levels, and Oracle raises ORA-00036.
Or, if you planned to select from the table and then do something with it, you'll hit the mutating table error, as you can't select from a table that is currently being modified. True, you might use a compound trigger (or, in earlier Oracle database versions, a package and a custom type), but - once again, in my opinion - that's just not the way you should handle the problem.
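For the record only - I still advise against it - a compound trigger variant might look something like this sketch (untested; it reuses the update statement from above, and relies on the trigger firing only on updates of time_left_to_live, so the statement-level update of list_id doesn't re-fire it):
create or replace trigger trg_testv1_resort
for update of time_left_to_live on testv1
compound trigger
  after statement is
  begin
    -- reading and updating testv1 is legal in the statement-level section;
    -- the trigger does not re-fire because it is restricted to updates
    -- of time_left_to_live, and here we only touch list_id
    update testv1 a
       set a.list_id = (select x.rn
                        from (select b.list_id,
                                     row_number() over (order by b.time_left_to_live) rn
                              from testv1 b) x
                        where x.list_id = a.list_id);
  end after statement;
end trg_testv1_resort;
/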

I fully agree with #Littlefoot. But even if you could build a user-written sort in a trigger, there is NO guarantee that the order is preserved when the rows are written. A table is by definition an unordered set of rows, and the SQL engine is under no obligation to maintain any ordering when presenting rows. The only way to guarantee a sequence is the ORDER BY clause.

Related

Oracle: Update values in table with aggregated values from same table

I am looking for a possibly better approach to this.
I have created a temp table in Oracle 11.2 that I'm using to pre-calculate values that I will need in other selects, instead of generating them again with each select.
create global temporary table temp_foo (
DT timestamp(6), --only the date part will be used in this example but for later things I will need the time
Something varchar2(100),
Customer varchar2(100),
MinDate timestamp(6),
MaxDate timestamp(6),
Filecount int,
Errorcount int,
AvgFilecount int,
constraint PK_foo primary key (DT, Customer)
) on commit preserve rows;
I then first insert some fixed values for everything except AvgFilecount. AvgFilecount should contain the average of the Filecount of the 3 previous records (going by the date in DT). It doesn't matter that the result will be converted to an int; I don't need the decimal places.
DT | Customer | Filecount | AvgFilecount
2019-04-30 | x | 10 | avg(2+3+9)
2019-04-29 | x | 2 | based on values before this
2019-04-28 | x | 3 | based on values before this
2019-04-27 | x | 9 | based on values before this
I thought about using a normal UPDATE statement, as this should be faster than looping through the values. I should mention that there are no gaps in the DT field, but obviously there is a first record for which I won't find any previous records. If I looped through, I could easily calculate AvgFilecount by carrying the previous records forward as I go, which I cannot do with UPDATE, as I cannot guarantee the order in which the rows are processed. So I'm fine with just taking the last 3 records (going by DT) and calculating it from there.
What I thought would be an easy update is giving me headaches. I mostly work with SQL Server, where I would just join the 3 other records, but it seems to be a bit different in Oracle. I found https://stackoverflow.com/a/2446834/4040068 and wanted to use the second approach in that answer.
update
(select curr.DT, curr.temp_foo, curr.Filecount, curr.AvgFilecount as OLD, (coalesce(Minus1.Filecount, 0) + coalesce(Minus2.Filecount, 0) + coalesce(Minus3.Filecount, 0)) / 3 as NEW
from temp_foo curr
left join temp_foo Minus1 ON Minus1.Customer = curr.Customer and trunc(Minus1.DT) = trunc(curr.DT-1)
left join temp_foo Minus2 ON Minus2.Customer = curr.Customer and trunc(Minus2.DT) = trunc(curr.DT-2)
left join temp_foo Minus3 ON Minus3.Customer = curr.Customer and trunc(Minus3.DT) = curr.DT-3
order by 1, 2
)
set OLD = NEW;
Which gives me an
ORA-01779: cannot modify a column which maps to a non key-preserved
table
01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
*Cause: An attempt was made to insert or update columns of a join view which
map to a non-key-preserved table.
*Action: Modify the underlying base tables directly.
I thought this should work, as both join conditions are on the primary key and thus unique. I am currently implementing the first approach from the answer mentioned above, but it is getting quite big, and it feels like there should be a better solution to this.
Other things I thought about trying:
using a nested subselect (nested because Oracle doesn't have TOP(n), so I need to sort in the subselect) to select the previous 3 records ordered by DT, with an outer select using rownum <= 3 so I can just use AVG() - something like the sketch below. However, I was told subselects can be quite slow and that joins perform better in Oracle; I don't know whether that is really the case, as I haven't done any testing.
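For reference, that idea for a single customer and date might look like this (a hypothetical sketch; the bind variable names are made up):
select avg(filecount) as avgfilecount
from (select filecount
      from temp_foo
      where customer = :customer
        and dt < :dt
      order by dt desc)
where rownum <= 3;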
Edit: My insert currently looks like this. I am already aggregating the Filecount for a day, as there can be multiple records per DT per Customer per Something.
insert into temp_foo (DT, Something, Customer, Filecount)
select dates.DT, tbl1.Something, tbl1.Customer, coalesce(sum(tbl3.Filecount),0)
from table(Function_Returning_Daterange(NULL, NULL)) dates
cross join
(SELECT Something,
Customer, -- note: a Customer column is needed here, as the join below references tbl1.Customer
Code,
Value
FROM Table2 tbl2
WHERE (Something = 'Value')) tbl1
left outer join Table3 tbl3
on tbl3.Customer = tbl1.Customer
and trunc(tbl3.MinDate) = trunc(dates.DT)
group by dates.DT, tbl1.Something, tbl1.Customer;
You could use an analytic average with a window clause:
select dt, customer, filecount,
avg(filecount) over (partition by customer order by dt
rows between 3 preceding and 1 preceding) as avgfilecount
from temp_foo
order by dt desc;
DT CUSTOMER FILECOUNT AVGFILECOUNT
---------- -------- ---------- ------------
2019-04-30 x 10 4.66666667
2019-04-29 x 2 6
2019-04-28 x 3 9
2019-04-27 x 9
and then do the update part with a merge statement:
merge into temp_foo t
using (
select dt, customer,
avg(filecount) over (partition by customer order by dt
rows between 3 preceding and 1 preceding) as avgfilecount
from temp_foo
) s
on (s.dt = t.dt and s.customer = t.customer)
when matched then update set t.avgfilecount = s.avgfilecount;
4 rows merged.
select dt, customer, filecount, avgfilecount
from temp_foo
order by dt desc;
DT CUSTOMER FILECOUNT AVGFILECOUNT
---------- -------- ---------- ------------
2019-04-30 x 10 4.66666667
2019-04-29 x 2 6
2019-04-28 x 3 9
2019-04-27 x 9
If you can add the analytic calculation to the insert statement you showed in your edit, you could avoid the separate update step entirely.
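Based on that insert, the combined statement might look something like this (a sketch, untested against your objects; the window function sees all rows produced by the feeding query, so the averages are computed over the full inserted set):
insert into temp_foo (DT, Something, Customer, Filecount, AvgFilecount)
select DT, Something, Customer, Filecount,
       avg(Filecount) over (partition by Customer order by DT
                            rows between 3 preceding and 1 preceding)
from (select dates.DT, tbl1.Something, tbl1.Customer,
             coalesce(sum(tbl3.Filecount), 0) Filecount
      from table(Function_Returning_Daterange(NULL, NULL)) dates
      cross join (select Something, Customer
                  from Table2
                  where Something = 'Value') tbl1
      left outer join Table3 tbl3
        on tbl3.Customer = tbl1.Customer
       and trunc(tbl3.MinDate) = trunc(dates.DT)
      group by dates.DT, tbl1.Something, tbl1.Customer);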
Also, if you want the first two date values to be calculated as if the 'missing' extra days before them had zero counts, you could use sum and division instead of avg:
select dt, customer, filecount,
sum(filecount) over (partition by customer order by dt
rows between 3 preceding and 1 preceding)/3 as avgfilecount
from temp_foo
order by dt desc;
DT CUSTOMER FILECOUNT AVGFILECOUNT
---------- -------- ---------- ------------
2019-04-30 x 10 4.66666667
2019-04-29 x 2 4
2019-04-28 x 3 3
2019-04-27 x 9
It depends what you expect those last calculated values to be.

Consolidate rows

I'm trying to cut down on the number of rows a report has. There are 2 assets returned by this query, but I want them to show up on one row.
Basically, if dc.name LIKE '%CT/PT%', then I want it on the same row as the asset. SP.SVC_PT_ID is the common field to join them on.
There will be times when there is no dc.name LIKE '%CT/PT%'; in that case I still want DV.MFG_SERIAL_NUM to be populated, just with a NULL to the right.
Select SP.SVC_PT_ID, SP.DEVICE_ID, DV.MFG_SERIAL_NUM, dc.name,
substr(dc.name,26)
From EIP.SVC_PT_DEVICE_REL SP,
eip.device_class dc,
EIP.DEVICE DV
Where SP.EFF_START_TIME < To_date('20170930', 'YYYYMMDD') + 1
and SP.EFF_END_TIME is null
and dc.id = DV.device_class_id
and DV.ID = SP.device_id
ORDER BY SP.SVC_PT_ID, DV.MFG_SERIAL_NUM;
I'm not sure I understand what you are saying; a test case would certainly help. You said that the query you posted returns two rows (if only we could see which ones ...), but you want them displayed as in the image you attached to the message.
Generally speaking, you can do that using an aggregate function (such as MAX) on certain column(s), along with a GROUP BY clause that contains the rest of them.
Just for example:
select svc_pt_id, max(ctpt_name) ctpt_name, sum(ctpt_multipler) ctpt_multipler
from ...
group by svc_pt_id
As I said: a test case would help people who want to answer the question. True - someone might have understood it far better than I did and will provide assistance nevertheless.
EDIT: after you posted sample data (which, by the way, doesn't match the screenshot you posted previously), maybe something like this might do the job: use an analytic function to check whether the name contains 'CT/PT'; if so, take that row's data. Otherwise, display both rows.
SQL> with test as (
2 select 14 svc_pt_id, 446733 device_id, 'Generic Electric' name from dual union
3 select 14, 456517, 'Generic CT/PT, Multiplier' from dual
4 ),
5 podaci as
6 (select svc_pt_id, device_id, name,
7 rank() over (partition by svc_pt_id
8 order by case when instr(name, 'CT/PT') > 0 then 1
9 else 2
10 end) rnk
11 from test
12 )
13 select svc_pt_id, device_id, name
14 from podaci
15 where rnk = 1;
SVC_PT_ID DEVICE_ID NAME
---------- ---------- -------------------------
14 456517 Generic CT/PT, Multiplier
SQL>
My TEST table (created by WITH factoring clause) would be the result of your current query.
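Applied to your actual query, the same idea might look like this (a sketch only - joins rewritten in ANSI style, untested against your schema):
with q as (
  select sp.svc_pt_id, sp.device_id, dv.mfg_serial_num, dc.name
  from eip.svc_pt_device_rel sp
  join eip.device dv on dv.id = sp.device_id
  join eip.device_class dc on dc.id = dv.device_class_id
  where sp.eff_start_time < to_date('20170930', 'YYYYMMDD') + 1
    and sp.eff_end_time is null
)
select svc_pt_id, device_id, mfg_serial_num, name, substr(name, 26) ctpt_part
from (select q.*,
             rank() over (partition by svc_pt_id
                          order by case when instr(name, 'CT/PT') > 0 then 1
                                        else 2
                                   end) rnk
      from q)
where rnk = 1
order by svc_pt_id, mfg_serial_num;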

Update or Insert on a Table based on a column value

I have two tables BASE and DAILY as shown below:
BASE
Cust ID IP address
1 10.5.5.5
2 10.5.5.50
3 10.5.5.6
DAILY
Cust ID IP address
1 10.5.5.5
2 10.5.5.70
4 10.5.5.67
The table DAILY is periodically refreshed every 24 hours. Now for every Cust Id in BASE I have to check if the IP address is modified in DAILY. If yes then update the row in BASE.
All the new entries in DAILY have to be inserted into BASE.
I have tried this using a cursor to compare and then update, and another cursor for the insertion, but it is taking a lot of time.
What is the best possible way to do this?
You could also use MERGE depending on your database system.
SQL Server syntax would be
MERGE INTO BASE B
USING DAILY D
ON D.CustId = B.CustId
WHEN NOT MATCHED THEN
INSERT (CustId, Ip) VALUES (D.CustId, D.Ip)
WHEN MATCHED AND D.Ip <> B.Ip THEN
UPDATE SET B.Ip = D.Ip;
Oracle syntax is much the same.
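For reference, a sketch of the Oracle MERGE variant; the condition on the matched branch moves into a WHERE clause (I've assumed underscore column names, since the real ones appear to contain spaces):
MERGE INTO BASE B
USING DAILY D
ON (D.CUST_ID = B.CUST_ID)
WHEN MATCHED THEN
UPDATE SET B.IP_ADDRESS = D.IP_ADDRESS
WHERE B.IP_ADDRESS <> D.IP_ADDRESS
WHEN NOT MATCHED THEN
INSERT (CUST_ID, IP_ADDRESS)
VALUES (D.CUST_ID, D.IP_ADDRESS);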
If you just want to update all the rows in your BASE table, use an UPDATE like this (the WHERE EXISTS keeps BASE rows without a DAILY match from being overwritten with NULL):
UPDATE `BASE`
SET `IP address` = (SELECT `IP address`
FROM DAILY
WHERE DAILY.`Cust ID` = `BASE`.`Cust ID`)
WHERE EXISTS (SELECT 1
FROM DAILY
WHERE DAILY.`Cust ID` = `BASE`.`Cust ID`);
Then, use this INSERT INTO query to insert the new values that do not yet exist in your BASE table.
INSERT INTO `BASE`
SELECT `Cust ID`, `IP address`
FROM DAILY
WHERE DAILY.`Cust ID` NOT IN (SELECT `Cust ID` FROM BASE);
SQL>
declare
begin
for i in (select * from daily where ip_add not in (select ip_add from base))
loop
update base set ip_add = i.ip_add where custid = i.custid;
-- a brand-new custid updates zero rows; insert it instead
if sql%rowcount = 0 then
insert into base (custid, ip_add) values (i.custid, i.ip_add);
end if;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select * from base;
CUSTID IP_ADD
---------- ----------
1 10.5.5.5
2 10.5.5.20 -- updated value from daily where ip_add differed
3 10.5.5.6
4 10.5.5.62 -- new custid inserted from daily
SQL>

Find next id from varchar in Oracle

I have a column of type varchar(50) that has a unique constraint, and I would like to get the next unique number for a new insert, but with a given prefix.
My rows could look like this:
ID (varchar)
00010001
00010002
00010003
00080001
So if I wanted the next unique number for the prefix "0001" it would be "00010004", but for the prefix "0008" it would be "00080002".
There will be more than 1 million entries in this table. Is there a way with Oracle 11 to perform this kind of operation that is fairly fast?
I know that this setup is totally insane, but this is what I have to work with. I can't create any new tables, etc.
You can search for the max value of the specified prefix and increment it:
SQL> WITH DATA AS (
2 SELECT '00010001' id FROM DUAL UNION ALL
3 SELECT '00010002' id FROM DUAL UNION ALL
4 SELECT '00010003' id FROM DUAL UNION ALL
5 SELECT '00080001' id FROM DUAL
6 )
7 SELECT :prefix || to_char(MAX(to_number(substr(id, 5)))+1, 'fm0000') nextval
8 FROM DATA
9 WHERE ID LIKE :prefix || '%';
NEXTVAL
---------
00010004
I'm sure you're aware that this is an inefficient method to generate a primary key. Furthermore, it won't play nicely in a multi-user environment and thus won't scale: concurrent inserts will wait, then fail, since there is a UNIQUE constraint on the column.
If the prefix is always the same length, you can reduce the workload somewhat: you could create a specialized index that finds the max value in a minimum number of steps:
CREATE INDEX ix_fetch_max ON your_table (substr(id, 1, 4),
substr(id, 5) DESC);
Then the following query could use the index and will stop at the first row retrieved:
SELECT id
FROM (SELECT substr(id, 1, 4) || substr(id, 5) id
FROM your_table
WHERE substr(id, 1, 4) = :prefix
ORDER BY substr(id, 5) DESC)
WHERE rownum = 1
If you need to do simultaneous inserts with the same prefix, I suggest you use DBMS_LOCK to request a lock on the specific new ID. If the call fails because someone is already inserting this value, try with newID+1. Although this involves more work than a traditional sequence, at least your inserts won't wait on each other (potentially leading to deadlocks).
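A sketch of that DBMS_LOCK idea (names and the retry policy are assumptions, and it requires execute privilege on DBMS_LOCK; not a drop-in solution):
declare
  l_handle varchar2(128);
  l_result integer;
begin
  -- serialize on the candidate id itself; the lock name is arbitrary
  dbms_lock.allocate_unique('NEXT_ID_' || :candidate_id, l_handle);
  l_result := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 0,
                                release_on_commit => true);
  if l_result = 0 then
    null;  -- lock acquired: safe to insert :candidate_id
  else
    null;  -- busy: another session is inserting it; retry with the next id
  end if;
end;
/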
This is a very unsatisfactory situation for you. As other posters have pointed out, if you don't use sequences then you will almost certainly have concurrency issues. I mentioned in a comment the possibility of living with big gaps; that is the simplest solution, but you will run out of numbers after 9999 inserts per prefix.
Perhaps an alternative would be to create a separate sequence for each prefix. This would only really be practical if the number of prefixes is fairly low, but it could be done.
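For example (a sketch - it assumes the prefixes are known up front and that each sequence is seeded past the current maximum for its prefix):
create sequence seq_prefix_0001 start with 4;
create sequence seq_prefix_0008 start with 2;

-- next id for prefix 0001: '0001' || '0004' = '00010004'
select '0001' || to_char(seq_prefix_0001.nextval, 'fm0000') next_id
from dual;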
PS - your requirement that more than 1,000,000 records be possible may, in fact, mean you have no choice but to redesign the database.
SELECT to_char(to_number(max(id)) + 1, '00000000')
FROM mytable
WHERE id LIKE '0001%'
SQLFiddle demo here http://sqlfiddle.com/#!4/4f543/5/0

Oracle aggregate function to return a random value for a group?

The standard SQL aggregate function max() will return the highest value in a group; min() will return the lowest.
Is there an aggregate function in Oracle to return a random value from a group? Or some technique to achieve this?
E.g., given the table foo:
group_id value
1 1
1 5
1 9
2 2
2 4
2 8
The SQL query
select group_id, max(value), min(value), some_aggregate_random_func(value)
from foo
group by group_id;
might produce:
group_id max(value), min(value), some_aggregate_random_func(value)
1 9 1 1
2 8 2 4
with, obviously, the last column being any random value in that group.
You can try something like the following
select deptno, max(sal), min(sal), max(rand_sal)
from (select deptno, sal,
             first_value(sal) over (partition by deptno
                                    order by dbms_random.value) rand_sal
      from emp)
group by deptno
/
The idea is to sort the values within each group in random order and pick the first. I can think of other ways, but none as efficient.
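Another variant that stays a plain aggregate (so no inline view) is KEEP (DENSE_RANK FIRST) ordered by a random value - I believe this works, with ties broken arbitrarily:
select group_id,
       max(value),
       min(value),
       min(value) keep (dense_rank first order by dbms_random.value) rand_value
from foo
group by group_id;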
You could prepend a random string to the column you want to extract the random element from, then select the min() of that column and strip the prepended string off:
select group_id, max(value), min(value), substr(min(random_value), 11)
from (select dbms_random.string('A', 10) || value random_value, foo.* from foo)
group by group_id
In this way you avoid the analytic function and having to specify the grouping twice, which might be useful when your query is very complicated, or when you are just exploring the data and manually entering queries with a lengthy and changing list of GROUP BY columns.
