Oracle: automatically managing a rank column

Source data:
username type rank
a        106  1
a        116  2
a        126  3
b        106  1
b        106  2
When the record (a, 116, 2) is removed, the result should be:
username type rank
a        106  1
a        126  2
b        106  1
b        106  2
When (a, 116) is inserted, the result should be:
username type rank
a        106  1
a        126  2
a        116  3
b        106  1
b        106  2
I chose to use triggers to implement this.
Insert trigger (succeeds):
create or replace trigger bi_auto
before insert
on auto
for each row
declare
-- no local variables needed
begin
  -- assign the next rank within this username's partition;
  -- note: this works for single-row INSERT ... VALUES statements, but an
  -- INSERT ... SELECT would read the mutating table and raise ORA-04091
  select count(rank) + 1 into :new.rank from auto where username = :new.username;
end bi_auto;
Delete trigger (fails with ORA-04091 "table is mutating", plus ORA-06512 and ORA-04088):
create or replace trigger bd_auto
after delete
on auto
for each row
declare
-- no local variables needed
begin
  -- re-rank the remaining rows for this username via a staging table, then
  -- replace them; this reads and writes the table the trigger fired on,
  -- which is exactly what raises the mutating-table error ORA-04091
  insert into session_auto
  select username, type,
         rank() over (partition by username order by rank) ranknew
  from auto
  where username = :old.username;
  delete from auto where username = :old.username;
  insert into auto select * from session_auto;
end bd_auto;
Please help me fix it, thanks. I know the performance is poor, but I want to know how to make this work.

I think the entire approach is problematic and error-prone. It would be much easier to calculate the rank dynamically. If you want the convenience of querying a table, you could just put the calculation in a view:
CREATE OR REPLACE VIEW auto_view AS
SELECT username, type,
       RANK() OVER (PARTITION BY username ORDER BY type ASC) r
FROM auto;
If performance is that big of an issue, you could always materialize your view.
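For instance, a minimal sketch of materializing it (the name auto_mv is illustrative; analytic functions such as RANK() are not fast-refreshable, so this assumes an on-demand complete refresh is acceptable):
CREATE MATERIALIZED VIEW auto_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND AS
SELECT username, type,
       RANK() OVER (PARTITION BY username ORDER BY type ASC) r
FROM auto;

-- refresh manually, or from a scheduler job, when needed:
EXEC DBMS_MVIEW.REFRESH('AUTO_MV');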


When to add a sequence of fields in a materialized view

Good evening,
I am trying to understand in which cases the SEQUENCE clause would be used, for the example below, since the rowids alone would not always give me a single row with which to manage the changes.
Why consider a sequence of additional fields?
I would be grateful if you could clarify this doubt with an example.
Thank you so much,
Greetings.
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE(amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
Now imagine that you want to create a materialized view that contains aggregates on this table. Because the materialized view log has been created with all referenced columns in the materialized view's defining query, the materialized view is fast refreshable. If DML is applied against the sales table, then the changes are reflected in the materialized view when the commit is issued.
CREATE MATERIALIZED VIEW sum_sales
REFRESH FAST ON COMMIT AS
SELECT s.time_id, COUNT(*) AS count_grp,
SUM(s.amount_sold) AS sum_dollar_sales,
COUNT(s.amount_sold) AS count_dollar_sales
FROM sales s
GROUP BY s.time_id;
Without the sequence, you would get the following error:
ORA-12033: cannot use filter columns from materialized view log on "ADMIN"."SALES"
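For reference, a sketch of what "without the sequence" looks like: a log with ROWID and the same filter columns but no SEQUENCE, after which creating the fast-refresh materialized view above raises ORA-12033:
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID (amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;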
So let me try to explain why, using a test case:
drop table sales;
create table sales (time_id number, prod_id number, amount_sold number);
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE(amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
truncate table sales;
insert into sales values (1,1,23);
insert into sales values (1,2,23);
commit;
select time_id, sum(amount_sold) from sales group by time_id;
TIME_ID SUM(AMOUNT_SOLD)
------- ----------------
1 46
Now imagine that you modify a row multiple times.
update sales set amount_sold = 55 where time_id = 1 and PROD_ID = 1;
update sales set amount_sold = 12 where time_id = 1 and PROD_ID = 1;
select time_id, sum(amount_sold) from sales group by time_id;
TIME_ID SUM(AMOUNT_SOLD)
------- ----------------
1 35
Your new sum of amount_sold is 35. How can a fast refresh compute this without reading the value of row (1,2), given that you only modified (1,1)?
select * from MLOG$_SALES where DMLTYPE$$ != 'I' order by SEQUENCE$$;
AMOUNT_SOLD TIME_ID PROD_ID M_ROW$$ SEQUENCE$$ SNAPTIME$$ DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$ XID$$
----------- ------- ------- ------------------ ---------- -------------------- --------- --------- --------------- ----------------
23 1 1 AAAwaSAAAAAAFpTAAA 105 4000-01-01T00:00:00Z U U CA== 4222223434942993
55 1 1 AAAwaSAAAAAAFpTAAA 106 4000-01-01T00:00:00Z U N CA== 4222223434942993
55 1 1 AAAwaSAAAAAAFpTAAA 107 4000-01-01T00:00:00Z U U CA== 4222223434942993
12 1 1 AAAwaSAAAAAAFpTAAA 108 4000-01-01T00:00:00Z U N CA== 4222223434942993
So you can take the previous value 46 and increment/decrement it using the old/new values as follows:
select 46 - 23 + 55 - 55 + 12 as newval from dual;
NEWVAL
------
35
You can also do the same for a delete, as sketched below.
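For instance, a sketch using the numbers from the test case above: deleting row (1,1), whose current amount_sold is 12, logs that old value with DMLTYPE$$ = 'D', and the refresh only needs the logged value plus the previous sum:
delete from sales where time_id = 1 and prod_id = 1;
commit;
-- previous sum 35, minus the logged old value 12:
select 35 - 12 as newval from dual;

NEWVAL
------
    23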
This is not possible with only the rowid: to generate the new value 35 you would need to read an unmodified row, so a fast refresh would not be possible.
Hope this helps you understand in which cases the sequence is used.

Access other values in a trigger before save - Oracle

Is it possible to access the previous values that have not yet been stored in the database?
I have a table related to a particular module (MOD) which I will call table XA.
Multiple records can be inserted into XA simultaneously; I cannot change this fact.
For example, the following data is inserted in XA
ID | ParentId | Type | Name | Value
1 | 1 | 5 | Cost | 20000
2 | 1 | 9 | Risk | 10000
In this case I need to insert/update a record in this same table: a calculated value.
At the moment the trigger executes, the value named Cost, for example, is inserted first, and then the value of Risk.
When evaluating the Risk, I must be able to know what the Cost value is, so I can do the calculation and insert the calculated record.
I tried to create a Package to which I would feed the data, but I still have the same problem.
create or replace PACKAGE GLOBAL
IS
  PRAGMA SERIALLY_REUSABLE;
  -- associative array used to share values between rows of one statement
  TYPE arr IS TABLE OF VARCHAR2 (32)
    INDEX BY VARCHAR2 (50);
  NUMB arr;
END GLOBAL;
-- Usage in the trigger:
BEGIN
  -- key by type and parent so the lookups below can find the sibling value
  GLOBAL.NUMB (:NEW.Type || '-' || :NEW.ParentId) := :NEW.Value;
  -- note: reading a missing key raises NO_DATA_FOUND, handled below
  IF :NEW.Type = 9 AND GLOBAL.NUMB (5 || '-' || :NEW.ParentId) IS NOT NULL
  THEN
    -- calculate and insert the record
    NULL;
  ELSIF :NEW.Type = 5 AND GLOBAL.NUMB (9 || '-' || :NEW.ParentId) IS NOT NULL
  THEN
    -- calculate and insert the record
    NULL;
  END IF;
EXCEPTION
  WHEN NO_DATA_FOUND
  THEN
    -- do not insert two records for the same register
    NULL;
END;
Values 5 and 9 are for reference.
Both records are not always inserted; one or more may be inserted, and even the calculated value itself may be supplied, but it must then be replaced by the calculation.
And I can't create a view since there is an internal process that depends on this particular table.
Do you really have to store the calculated value in a table? That's usually not the best idea, as you have to maintain it in every possible case (inserts, updates, deletes).
Therefore, another suggestion: a view. Here's an example; my "calculation" is simple, I'm just subtracting cost - risk, as I don't know what you really do. If the calculation is very complex and has to run every time on a very large data set, then yes, performance might suffer.
Anyway, here you go; see if it helps.
Sample data:
SQL> select * From xa order by parentid, name;
ID PARENTID TYPE NAME VALUE
---------- ---------- ---------- ---- ----------
1 1 5 Cost 20000
2 1 9 Risk 10000
5 4 5 Cost 4000
7 4 9 Risk 800
A view:
SQL> create or replace view v_xa as
2 select id,
3 parentid,
4 type,
5 name,
6 value
7 from xa
8 union all
9 select 0 id,
10 parentid,
11 99 type,
12 'Calc' name,
13 sum(case when type = 5 then value
14 when type = 9 then -value
15 end) value
16 from xa
17 group by parentid;
View created.
What does it contain?
SQL> select * from v_xa
2 order by parentid, type;
ID PARENTID TYPE NAME VALUE
---------- ---------- ---------- ---- ----------
1 1 5 Cost 20000
2 1 9 Risk 10000
0 1 99 Calc 10000
5 4 5 Cost 4000
7 4 9 Risk 800
0 4 99 Calc 3200
6 rows selected.
SQL>
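If the internal process needs only the computed rows, a simple filter on the view does it (type 99 is just the marker value chosen above):
select parentid, value
from v_xa
where type = 99;
-- for the sample data above this returns (1, 10000) and (4, 3200)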

What is the most efficient way to update values of a table based on a mapping from another table

I have a table with the following details:
empID department location segment
1     23         55       12
2     23         11       12
3     25         11       39
I also have a mapping table like the following:
field      old_value new_value
department 23        74
department 25        75
segment    10        24
location   11        22
So my task is to replace old values with new values. I could use a cursor and update departments first, then segments, and so on, but that is time-consuming and inefficient. I would like to know if there is a more efficient way to do this, one that would also cope with more columns being added to the mapping in the future.
Cheers.
Check if this solves the issue. Note the WHERE EXISTS guard: without it, any department that has no mapping row would be set to NULL.
update emp e set department = (select m.new_value from map m where m.field = 'DEPARTMENT' and m.old_value = e.department)
where exists (select 1 from map m where m.field = 'DEPARTMENT' and m.old_value = e.department);
How about copying the data to a new table?
CREATE TABLE newemp AS
SELECT e.empid,
NVL(d.new_value, e.department) AS department,
NVL(l.new_value, e.location) AS location,
NVL(s.new_value, e.segment) AS segment
FROM emp e
LEFT JOIN map d ON d.field='DEPARTMENT' AND e.department = d.old_value
LEFT JOIN map l ON l.field='LOCATION' AND e.location = l.old_value
LEFT JOIN map s ON s.field='SEGMENT' AND e.segment = s.old_value
ORDER BY e.empid;
EMPID DEPARTMENT LOCATION SEGMENT
1     74         55       12
2     74         11       12
3     75         11       39
You'll obviously need three passes through the mapping table, but only one pass through the emp table.
We use a LEFT JOIN because not all values will be changed. If no new_value is found, the NVL function keeps the existing value from the emp table.
You could update the original table from this new table (if the new table has a primary key):
UPDATE (SELECT empid,
e.department as old_department,
n.department as new_department,
e.location as old_location,
n.location as new_location,
e.segment as old_segment,
n.segment as new_segment
FROM emp e
JOIN newemp n USING (empid))
SET old_department = new_department,
old_location = new_location,
old_segment = new_segment
WHERE old_department != new_department
OR old_location != new_location
OR old_segment != new_segment;
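Alternatively, a sketch of the same write-back using a MERGE statement (assuming empid is unique in both tables):
MERGE INTO emp e
USING newemp n
ON (e.empid = n.empid)
WHEN MATCHED THEN UPDATE
  SET e.department = n.department,
      e.location   = n.location,
      e.segment    = n.segment
  WHERE e.department != n.department
     OR e.location   != n.location
     OR e.segment    != n.segment;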

convert string of a column to multiple rows

For data like below
Col1
----
1
23
34
124
Output should be like below
Out
1
2
3
4
I tried the hierarchical query below, but it gives repeated data:
select substr(col1, level, 1)
from table1
connect by level <= length(col1);
I can't use DISTINCT, as this is just a sample; the main table where I have to use this query has quite a lot of data.
Thanks
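One common idiom to avoid that cross-row repetition is to tie each CONNECT BY hierarchy to its own source row (a sketch, assuming the single-column table above):
select substr(col1, level, 1) as out
from table1
connect by level <= length(col1)
   and prior col1 = col1
   and prior sys_guid() is not null;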

Oracle - Insert x amount of rows with random data

I am currently doing some testing and need a large amount of data (around 1 million rows).
I am using the following table:
CREATE TABLE OrderTable(
OrderID INTEGER NOT NULL,
StaffID INTEGER,
TotalOrderValue DECIMAL (8,2),
CustomerID INTEGER);
ALTER TABLE OrderTable ADD CONSTRAINT OrderID_PK PRIMARY KEY (OrderID);
CREATE SEQUENCE seq_OrderTable
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10000;
and want to randomly insert 1000000 rows into it with the following rules:
OrderID needs to be sequential (1, 2, 3 etc...)
StaffID needs to be a random number between 1 and 1000
CustomerID needs to be a random number between 1 and 10000
TotalOrderValue needs to be a random decimal value between 0.00 and 9999.99
Is this even possible to do? I know I could generate individual values using an update statement like the one below, but I am not sure how to generate a million rows in one go.
Thanks for any help on this matter.
This is how I would randomly generate a number in an update:
UPDATE StaffTable SET DepartmentID = DBMS_RANDOM.value(low => 1, high => 5);
For testing purposes I created the table and populated it in one shot, with this query:
CREATE TABLE OrderTable(OrderID, StaffID, CustomerID, TotalOrderValue)
as (select level, ceil(dbms_random.value(0, 1000)),
ceil(dbms_random.value(0,10000)),
round(dbms_random.value(0,10000),2)
from dual
connect by level <= 1000000)
/
A few notes: it is better to use NUMBER as the data type; NUMBER(8,2) is the Oracle equivalent of DECIMAL(8,2). For populating this kind of table, it is much more efficient to use the "hierarchical query without PRIOR" trick (the "connect by level <= ..." trick) to generate the order IDs.
If your table is already created, insert into OrderTable (select level ...) (the same subquery as in my code) should work just as well, as sketched below. You may be better off adding the PK constraint only after you create the data, though, so as not to slow things down.
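A sketch of that insert variant, assuming the OrderTable from the question already exists:
INSERT INTO OrderTable (OrderID, StaffID, CustomerID, TotalOrderValue)
SELECT level,
       ceil(dbms_random.value(0, 1000)),
       ceil(dbms_random.value(0, 10000)),
       round(dbms_random.value(0, 10000), 2)
FROM dual
CONNECT BY level <= 1000000;
COMMIT;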
A small sample from the table created (total time to create the table on my cheap laptop - 1,000,000 rows - was 7.6 seconds):
SQL> select * from OrderTable where orderid between 500020 and 500030;
ORDERID STAFFID CUSTOMERID TOTALORDERVALUE
---------- ---------- ---------- ---------------
500020 666 879 6068.63
500021 189 6444 1323.82
500022 533 2609 1847.21
500023 409 895 207.88
500024 80 2125 1314.13
500025 247 3772 5081.62
500026 922 9523 1160.38
500027 818 5197 5009.02
500028 393 6870 5067.81
500029 358 4063 858.44
500030 316 8134 3479.47
