ID VALUE
0 2
1 3
2 0
When the first record, with id 0, is deleted, the second one should become 0 and the third one should become 1.
Resulting in:
ID VALUE
0 3
1 0
When the record with id 1 is deleted, the third one should become 1.
ID VALUE
0 2
1 0
Whichever one is deleted, the consecutive order should be maintained, starting from 0.
This id is not the primary key. The renumbering should be done on delete, in a trigger.
Don't modify the same table in the trigger; you will run into mutating-table errors.
An alternative is to use a stored procedure and call it after you delete the record. Just select all IDs greater than your deleted ID and subtract 1 from them.
CREATE OR REPLACE PROCEDURE update_ids (deleted_id IN NUMBER) IS
BEGIN
  UPDATE your_table  -- "your_table" is a placeholder for the table holding the ID column
  SET id = id - 1
  WHERE id > deleted_id;
END update_ids;
/
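A minimal usage sketch, keeping the same placeholder table name and calling the procedure right after the delete:

DECLARE
  v_deleted_id NUMBER := 1;  -- the ID of the row being removed
BEGIN
  DELETE FROM your_table WHERE id = v_deleted_id;
  update_ids(v_deleted_id);  -- close the gap left by the delete
  COMMIT;
END;
/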
Initial Load on Day 1
id  key  fkid
1   0    100
1   1    200
2   0    300
Load on Day 2
id  key  fkid
1   0    100
1   1    200
2   0    300
3   1    400
4   0    500
Need to find the delta records from the Day 2 load:
id  key  fkid
3   1    400
4   0    500
Problem Statement
Need to find the delta records in minimum time, given the following facts:
1: I have to process around 2 billion records initially from a table as mentioned below.
2: I also need to find the delta in minimal time so that I can process it quickly.
Questions:
1: Will it be a time-consuming process to identify the delta, especially during production downtime?
2: How long should it take to identify the delta with 3 numeric columns in a table, of which id & key form a composite key?
Solution tried:
1: Use a full join and extract the delta with an NVL condition, but it looks costly:
SELECT
    nvl(node1.id, node2.id) id,
    nvl(node1.key, node2.key) key,
    nvl(node1.fkid, node2.fkid) fkid
FROM
    TABLE_DAY_1 node1
    FULL JOIN TABLE_DAY_2 node2 ON node2.id = node1.id
WHERE
    node2.id IS NULL
    OR node1.id IS NULL;
You need two separate statements to handle this: one to detect new & changed rows, and a separate one to detect deleted rows.
While it is cumbersome to write, the fastest comparison is field-by-field, so:
SELECT /*+ parallel(8) full(node1) full(node2) use_hash(node1 node2) */ *
FROM table_day_1 node1,
     table_day_2 node2
WHERE node1.id(+) = node2.id
AND node1.key(+) = node2.key                         -- id & key form the composite key
AND (   node1.id IS NULL                             -- new rows
     OR node1.col1 <> node2.col1                     -- changed value on a non-nullable column
     OR NVL(node1.col3, ' ') <> NVL(node2.col3, ' ') -- changed value on a nullable string
     OR NVL(node1.col4, -1) <> NVL(node2.col4, -1)   -- changed value on a nullable numeric, etc.
    )
Then for deleted rows:
SELECT /*+ parallel(8) full(node1) full(node2) use_hash(node1 node2) */ node1.id, node1.key
FROM table_day_1 node1,
     table_day_2 node2
WHERE node1.id = node2.id(+)
AND node1.key = node2.key(+)
AND node2.id IS NULL -- deleted rows
You will want to make sure Oracle does full table scans. If you have lots of CPUs and parallel query is enabled on your database, make sure the query uses parallel query (hence the hint). And you want a hash join between the two tables. Work with your DBA to ensure you have enough temporary space to pull this off, and enough PGA to handle this with a single-pass workarea rather than a multipass one.
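Before running against 2 billion rows, it is worth confirming the plan actually uses the hash join and parallel execution. A sketch using the standard EXPLAIN PLAN / DBMS_XPLAN tools (table and column names as above):

EXPLAIN PLAN FOR
SELECT /*+ parallel(8) full(node1) full(node2) use_hash(node1 node2) */ *
FROM table_day_1 node1,
     table_day_2 node2
WHERE node1.id(+) = node2.id
AND node1.key(+) = node2.key;

-- Look for HASH JOIN ... OUTER and PX (parallel execution) steps in the output
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);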
I have a table that contains one or more records for each item. Each item can contain multiple sub-items (boards), so the ItemId is often replicated, with each record showing the division category (a number) that the item/sub-item combination resides in:
ItemId Board# Division
142585109 0 6
142585114 0 3
142585116 0 1
142585120 0 4
142585197 0 5
142585197 2 4
142585197 3 3
142585197 5 6
142585197 8 1
142585294 0 4
142585317 0 1
I want to update the table and aggregate all of the division values (as a comma-separated string) into a new field in this table, something like:
ItemId Board# AggDivisions
142585109 0 6
142585114 0 3
142585116 0 1
142585120 0 4
142585197 0 1,3,4,5,6
142585294 0 4
142585317 0 1
I used a LISTAGG query to do the aggregation, which works correctly, but when I tried to incorporate it into an update statement I ended up with multiple duplicates in the aggregated field for each record.
Here is my update attempt:
update itemtable dd
set aggregateddivisions = (SELECT Listagg(division, ',') within GROUP (ORDER BY division)
FROM itemtable ev
WHERE ev.itemid = dd.itemid
)
where exists (select 1
from itemtable ev
where ev.itemid = dd.itemid
);
How can I update the table with the aggregated list of values from the same table without ending up with duplicates?
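One common cause is that LISTAGG sees every row per ItemId, including repeated divisions. A sketch of a fix, assuming the duplicates come from repeated ItemId/Division combinations: deduplicate in an inline view before aggregating.

UPDATE itemtable dd
SET aggregateddivisions = (SELECT LISTAGG(division, ',') WITHIN GROUP (ORDER BY division)
                           FROM (SELECT DISTINCT itemid, division FROM itemtable) ev
                           WHERE ev.itemid = dd.itemid);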
I have Oracle 11g and a table called CODES with columns ID and CODESA, as follows:
ID CODESA
1 9999
1 8889
2 77777
2 99999
3 1234
3 4321
4 565656
etc.
Then I need to update another table, CODES2, and its column CODESB, based on the ID in the CODES table.
I need a trigger to monitor this.
Let's say I am monitoring ID = 2 with this trigger. Given all the different CODESA values under that ID,
you can see that only these are possible to enter in CODESB:
2 77777
2 99999
How do I make a trigger that fires if a user tries to enter a code in CODESB
which belongs to, for example, ID = 3?
Appreciate your help. Thanks,
Some_user
@APC is correct. We could use a foreign key, but let's say the OP doesn't want those columns to be primary or unique; in that case a trigger is the solution.
CREATE OR REPLACE TRIGGER codes2_trg
BEFORE INSERT OR UPDATE ON codes2
FOR EACH ROW
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(1)
  INTO cnt
  FROM codes
  WHERE id = :new.id
    AND codesa = :new.codesb;

  IF cnt = 0 THEN
    raise_application_error(-20001, 'These values are not allowed.');
  END IF;
END;
/
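A quick sanity check, assuming CODES2 has columns ID and CODESB:

-- Succeeds: (2, 77777) exists in CODES
INSERT INTO codes2 (id, codesb) VALUES (2, 77777);

-- Fails with ORA-20001: 1234 belongs to ID = 3 in CODES
INSERT INTO codes2 (id, codesb) VALUES (2, 1234);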
Cheers!!
I have something like this:
id day description
1  1   hi
1  1   today
1  1   is a beautifull
1  1   day
1  2   exemplo
1  2   for
1  2   this case
I need to write a function that, for each day, concatenates the description column and returns the result like this:
id day description
1  1   hi today is a beautifull day
1  2   exemplo for this case
Any idea how I can do this using a loop in a function in Oracle?
You need a way of determining the order in which the values should be aggregated. The snippet below relies on the implicit order in which Oracle reads the rows from the datafiles; if you have row movement enabled, you may get inconsistent results, as the rows can be read in different orders when they are relocated in the underlying datafiles.
SELECT id,
       day,
       LISTAGG(description, ' ') WITHIN GROUP (ORDER BY ROWNUM) AS description
FROM your_table
GROUP BY id, day
It would be better to have another column that stores the order within each day.
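For example, with a hypothetical seq column recording each fragment's position within the day:

SELECT id,
       day,
       LISTAGG(description, ' ') WITHIN GROUP (ORDER BY seq) AS description
FROM your_table
GROUP BY id, day;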
I have an MS Access database table that records the communication status of values from several meters. The data is logged directly to the table, but I need to make sure that the table is populating. From the sample data you can see that the Comm columns never read false or 0, so I want to generate a log entry whenever the difference between now and "Date / Time" is greater than 5 minutes.
Date / Time FCB Comm BOF Comm EAF Comm FGP Comm
9/6/2011 10:29:10 1 1 1 1
9/6/2011 10:28:01 1 1 1 1
9/6/2011 10:27:11 1 1 1 1
9/6/2011 10:26:20 1 1 1 1
9/2/2011 08:17:01 1 1 1 1
9/2/2011 08:16:10 1 1 1 1
9/2/2011 08:15:02 1 1 1 1
9/2/2011 08:14:08 1 1 1 1
I wanted to know if anyone could tell me whether this looks like a reasonable query to run:
SELECT Data.[Date / Time], Data.[Ford Chiller Building Comm Okay],
       Data.[Basic Oxygen Furnace Comm Okay], Data.[Electro-Arc Furnace Comm Okay],
       Data.[J-9 Shop Comm Okay], Data.[Ford Glass Plant Comm Okay]
FROM Data
WHERE DateDiff("n", Data.[Date / Time], Now()) < 5;
You need something running continuously that generates a notification whenever expected data doesn't appear, and there are a couple of approaches you can take to do that.
One is to continuously run a query like the one you have above, checking the most recent date in the table against the value of the now() function.
Another approach is to take the latest date in your table, wait (sleep) for 5 minutes, and then check the table again for any newer entries. My expectation is that this approach will generate fewer hits on your table.
You could also just check the most recent date every 5 minutes regardless of the previous time checked and see if data hasn't come in.
You need to set up your notification loop first, then you can experiment with different approaches.
All you should really need to do is return the number of rows in the table whose timestamp is within 5 minutes of Now(). You shouldn't need additional row detail; just check whether the count is 0 or not.
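A minimal sketch of that count check in Access SQL (table and column names as in the query above):

SELECT Count(*) AS RecentRows
FROM Data
WHERE DateDiff("n", Data.[Date / Time], Now()) <= 5;

If RecentRows comes back as 0, the notification loop should fire an alert.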