I have a database table that logs changes that occur in other tables. The structure of my log table is as follows.
Log_Table(id, table_name, operation, flag)
values (1, 'Customer_Table', 1, 1);
values (2, 'Customer_Table', 2, 1);
values (3, 'Customer_Table', 1, 1);
values (4, 'Customer_Table', 2, 1);
values (5, 'Customer_Table', 1, 1);
I update this table when a user clicks a button on a web page, by performing the following operations:
/* first */
public List<Long> select_Changes()
{
    select id from Log_Table where flag = 1;
}
/* (wait for user input) */
/* second */
public void update_table(List<Long> ids)
{
    update Log_Table set flag = 0 where id in (ids);
}
The problem is that between the first and second operations it's up to the user when to proceed. Meanwhile, another user may perform the same operations at the same time. I don't want rows already selected by the first user to be selected by the second user; that is, when the second user runs the first step (assuming two more rows have been added since the first user ran it), the result should be:
values (6, 'Customer_Table', 2, 1);
values (7, 'Customer_Table', 1, 1);
Please suggest what I should do. I need to lock the rows against any kind of operation once they have been selected. I tried the SELECT ... FOR UPDATE clause, but it did not solve the problem. This is in a web application.
It is almost never a good idea to hold a database transaction open while waiting for user input. What if the user goes to lunch while the transaction is pending, or their network connection goes down and isn't restored for days?
Also, you don't say which database product you are using. Depending on the product, its configuration, and the transaction isolation level, the results of a concurrent attempt made while the transaction is pending can vary, so if you want portable behavior you can't rely on the behavior of SELECT FOR UPDATE or even of more standardized features.
My recommendation would be to provide a way to mark rows as pending, i.e. waiting for user confirmation. You could use three states for the flag column, representing something like available, pending, and taken; however, you probably want some way to recognize rows which were presented to a user but for which the user never clicked "OK" or "Cancel" (or whatever the options are). If you add a timestamp column for that purpose, you could stay with two states for the flag column and use something like this (assuming a database which supports the RETURNING clause) for the first step:
public List<Long> select_Changes()
{
    UPDATE Log_Table
    SET when_presented = CURRENT_TIMESTAMP
    WHERE flag = 1
      AND when_presented IS NULL  -- "= NULL" never matches; NULL tests need IS NULL
    RETURNING id, when_presented;
}
The second step would be changed to:
public void update_table(List<Long> ids, Timestamp time_claimed)
{
    UPDATE Log_Table
    SET flag = 0
    WHERE id IN (ids)
      AND when_presented = time_claimed;  -- the timestamp returned by the first step
}
The second step would not necessarily need to change, but with the change above you could use another RETURNING clause to confirm which id values this user actually claimed. That closes a race condition which would otherwise be present if a maintenance process set when_presented back to NULL on an apparently abandoned set of rows, and those rows were then presented to another user right before the first user belatedly tried to claim them.
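A sketch of that variant (the :ids and :time_claimed bind names are illustrative, and RETURNING support varies by database):

```sql
UPDATE Log_Table
SET flag = 0
WHERE id IN (:ids)
  AND when_presented = :time_claimed  -- only rows still claimed by this user
RETURNING id;  -- any id absent from this result was reclaimed by someone else
```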
I have added a timestamp column and an extra table with the structure
Last_Operation_Time(id number, last_op_time timestamp(6))
When the first user clicks the button, I run an SQL insert on Last_Operation_Time like
insert into Last_Operation_Time(id, last_op_time) values (seq_last_op_time_id.nextval, sysdate)
Now when the second user runs the first step (assuming two more rows have been added since the first user ran it), the result is the desired one. Is this OK?
I solved this in SQL Server with a trigger. Now I face it on Oracle.
I have a big set of data that periodically grows with new items.
Each item has these fundamental columns:
ID: string identifier (not null)
DATETIME (not null)
DATETIME_EMIS: optional emission datetime (possibly null; always null for type 1), equal to the DATETIME of the corresponding emission item
TYPE (0 or 1)
VALUE (only if type 1)
It is basically a logbook.
For example: an item with ID='FIREALARM' and datetime='2023-02-12 12:02' has a closing item like this:
ID='FIREALARM' in datetime='2023-02-12 15:11', emission datetime='2023-02-12 12:02' (equal to the emission item).
What I need is to obtain a final item in the destination table like this:
ID='FIREALARM' in DATETIME_BEGIN ='2023-02-12 12:02', DATETIME_END ='2023-02-12 15:11'
Not all the items have the closing datetime (those with Type=1 instead of 0); in this case the next item should be used to close the previous one (with the problem of finding it). For example:
Item1:
ID='DEVICESTATUS', datetime='2023-02-12 22:11', Value='Broken' ;
Item2:
ID='DEVICESTATUS', datetime='2023-02-12 22:14', Value='Running'
Should result in
ID='DEVICESTATUS', DATETIME_BEGIN ='2023-02-12 22:11',DATETIME_END ='2023-02-12 22:14', Value='Broken'
The final data should be extracted by a SELECT query, as fast as possible.
The elaboration process should be independent of insertion order.
In SQL Server, I created a trigger with several operations involving a temporary table and queries on both the inserted set and the entire destination table; the procedure is too complex to be worth showing here.
Now I have discovered that Oracle has some limitations, and it is not easy to port the trigger. For example, it is not easy to use a temporary table in the same way, and the trigger operations fire per row.
I am asking what a good strategy in Oracle could be to elaborate the data into its final form, considering that the set grows continuously and each open/close pair must be reduced to a single item. I am not asking for a solution to the problem; I am trying to understand which Oracle instruments could be useful for a complex elaboration like this. Thanks.
From Oracle 12, you can use MATCH_RECOGNIZE to perform row-by-row pattern matching:
SELECT *
FROM destination
MATCH_RECOGNIZE(
PARTITION BY id
ORDER BY datetime
MEASURES
FIRST(datetime) AS datetime_begin,
LAST(datetime) AS datetime_end,
FIRST(value) AS value
PATTERN ( ^ any_row+ $ )
DEFINE
any_row AS 1 = 1
)
Which, for the sample data:
CREATE TABLE destination (id, datetime, value) AS
SELECT 'DEVICESTATUS', DATE '2023-02-12' + INTERVAL '22:11' HOUR TO MINUTE, 'Broken' FROM DUAL UNION ALL
SELECT 'DEVICESTATUS', DATE '2023-02-12' + INTERVAL '22:14' HOUR TO MINUTE, 'Running' FROM DUAL;
Outputs:
ID           | DATETIME_BEGIN      | DATETIME_END        | VALUE
DEVICESTATUS | 2023-02-12 22:11:00 | 2023-02-12 22:14:00 | Broken
I have the following function, which returns the next available client ID from the Client table:
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
ctr INT;
BEGIN
SELECT MAX(NUM) INTO ctr FROM Client;
IF SQL%NOTFOUND THEN
RETURN 1;
ELSIF SQL%FOUND THEN
-- RETURN SQL%ROWCOUNT;
RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!');
-- RETURN ctr + 1;
END IF;
END;
But when calling this function,
BEGIN
DBMS_OUTPUT.PUT_LINE(getNextClientID());
END;
I get the ORA-20010 "ROWS FOUND!" error, which I found a bit odd, since the Client table contains no data.
Also, if I comment out RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!'); and log the value of SQL%ROWCOUNT to the console, I get 1 as a result.
On the other hand, when changing
SELECT MAX(NUM) INTO ctr FROM Client;
to
SELECT NUM INTO ctr FROM Client;
The execution went as expected. What is the reason behind this behavior?
Aggregate functions will always return a result:
All aggregate functions except COUNT(*), GROUPING, and GROUPING_ID
ignore nulls. You can use the NVL function in the argument to an
aggregate function to substitute a value for a null. COUNT and
REGR_COUNT never return null, but return either a number or zero. For
all the remaining aggregate functions, if the data set contains no
rows, or contains only rows with nulls as arguments to the aggregate
function, then the function returns null.
You can change your query to:
SELECT COALESCE(MAX(num), 1) INTO ctr FROM Client;
and remove the conditionals altogether. Be careful about concurrency issues though if you do not use SELECT FOR UPDATE.
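Oracle does not allow FOR UPDATE on an aggregate query, so one common workaround is to serialize callers on a one-row mutex table before reading the MAX. A hypothetical fragment for the function body (id_mutex and v_dummy are illustrative names, not from the question):

```sql
SELECT dummy INTO v_dummy FROM id_mutex FOR UPDATE;     -- blocks concurrent callers until commit
SELECT COALESCE(MAX(num), 0) + 1 INTO ctr FROM Client;  -- now safe to read under the lock
```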
A query with an aggregate function and without a GROUP BY clause always returns exactly one row. If you want a no_data_found exception on an empty table, add a GROUP BY clause or remove the max:
SQL> create table t (id number, client_id number);
Table created.
SQL> select nvl(max(id), 0) from t;
NVL(MAX(ID),0)
--------------
0
SQL> select nvl(max(id), 0) from t group by client_id;
no rows selected
Usually queries like yours (with max and without group by) are used to avoid no_data_found.
Aggregate functions like MAX will always return a row; it will be a single row with a null value if no rows are found.
By the way, SELECT NUM INTO ctr FROM Client; will raise an exception when there's more than one row in the table.
You should instead check whether or not ctr is null.
Others have already explained the reason why your code isn't "working", so I'm not going to be doing that.
You seem to be implementing an identity column of some description yourself, probably in order to support a surrogate key. Doing this yourself is dangerous and could cause large issues in your application.
You don't need to implement identity columns yourself. From Oracle 12c onwards, Oracle has native support for identity columns; these are implemented using sequences, which are also available in 12c and earlier versions.
A sequence is a database object that is guaranteed to provide a new, unique, number when called, no matter the number of concurrent sessions requesting values. Your current approach is extremely vulnerable to collision when used by multiple sessions. Imagine 2 sessions simultaneously finding the largest value in the table; they then both add one to this value and try to write this new value back. Only one can be correct.
See How to create id with AUTO_INCREMENT on Oracle?
Basically, if you use a sequence then you don't need any of this code.
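A minimal sketch of both options (the table and column names here are illustrative):

```sql
-- Oracle 12c and later: a native identity column
CREATE TABLE client (
  num  NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  name VARCHAR2(100)
);

-- Pre-12c: an explicit sequence, used at insert time
CREATE SEQUENCE client_seq;
INSERT INTO client (num, name) VALUES (client_seq.NEXTVAL, 'Alice');
```

Either way, concurrent sessions are guaranteed distinct values without any locking in your own code.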
As a secondary note your statement at the top is incorrect:
I have the following function, which returns the next available client ID from the Client table
Your function returns the maximum ID + 1. If there's a gap in the IDs, i.e. 1, 2, 3, 5 then the "missing" number (4 in this case) will not be returned. A gap can occur for any number of reasons (deletion of a row for example) and does not have a negative impact on your database in any way at all - don't worry about them.
Using h2 in embedded mode, I am restoring an in memory database from a script backup that was previously generated by h2 using the SCRIPT command.
I use this URL:
jdbc:h2:mem:main
I am doing it like this:
FileReader script = new FileReader("db.sql");
RunScript.execute(conn,script);
which, according to the doc, should be similar to this SQL:
RUNSCRIPT FROM 'db.sql'
And, inside my app they do perform the same. But if I run the load using the web console using h2.bat, I get a different result.
Following the load of this data in my app, there are rows that I know are loaded but are not accessible to me via a query. And these queries demonstrate it:
select count(*) from MY_TABLE yields 96576
select count(*) from MY_TABLE where ID <> 3238396 yields 96575
select count(*) from MY_TABLE where ID = 3238396 yields 0
Loading the web console and using the same RUNSCRIPT command and file to load yields a database where I can find the row with that ID.
My first inclination was that I was dealing with some sort of locking issue. I have tried the following (with no change in results):
manually issuing a conn.commit() after the RunScript.execute()
appending ;LOCK_MODE=3 and the ;LOCK_MODE=0 to my URL
Any pointers in the right direction on how I can identify what is going on? I ended up inserting:
Server.createWebServer("-trace","-webPort","9083").start()
So that I could run these queries through the web console to sanity check what was coming back through JDBC. The problem happens consistently in my app and consistently doesn't happen via the web console. So there must be something at work.
The table schema is not exotic. This is the schema column from
select * from INFORMATION_SCHEMA.TABLES where TABLE_NAME='MY_TABLE'
CREATE MEMORY TABLE PUBLIC.MY_TABLE(
ID INTEGER SELECTIVITY 100,
P_ID INTEGER SELECTIVITY 4,
TYPE VARCHAR(10) SELECTIVITY 1,
P_ORDER DECIMAL(8, 0) SELECTIVITY 11,
E_GROUP INTEGER SELECTIVITY 1,
P_USAGE VARCHAR(16) SELECTIVITY 1
)
Any push in the right direction would be really appreciated.
EDIT
So it seems that the database is corrupted in some way just after running the RunScript command to load it. As I was trying to debug to find out what is going on, I tried executing the following:
delete from MY_TABLE where ID <> 3238396
And I ended up with:
Row not found when trying to delete from index "PUBLIC.MY_TABLE_IX1: 95326", SQL statement:
delete from MY_TABLE where ID <> 3238396 [90112-178] 90112/90112 (Help)
I then tried dropping and recreating all my indexes from within the context, but it had no effect on the overall problem.
Help!
EDIT 2
More information: The problem occurs due to the creation of an index. (I believe I have found a bug in h2 and I have working on creating a minimal case that reproduces it). The simple code below will reproduce the problem, if you have the right set of data.
public static void main(String[] args)
{
    try
    {
        final String DB_H2URL = "jdbc:h2:mem:main;LOCK_MODE=3";
        Class.forName("org.h2.Driver");
        Connection c = DriverManager.getConnection(DB_H2URL, "sa", "");
        FileReader script = new FileReader("db.sql");
        RunScript.execute(c, script);
        script.close();
        Statement st = c.createStatement();
        ResultSet rs = st.executeQuery("select count(*) from MY_TABLE where P_ID = 3238396");
        rs.next();
        if (rs.getLong(1) == 0)
            System.err.println("It happened");
        else
            System.err.println("It didn't happen");
    } catch (Throwable e) {
        e.printStackTrace();
    }
}
I have reduced the db.sql script to about 5000 rows and it still happens. When I reduced it to 2500 rows, it stopped happening. If I remove the last line of db.sql (which is the index creation), the problem also stops happening. The last line is this:
CREATE INDEX PUBLIC.MY_TABLE_IX1 ON PUBLIC.MY_TABLE(P_ID);
But the data is an important player in this. It still appears to only ever be the one row and the index somehow makes it inaccessible.
EDIT 3
I have identified the minimal data example to reproduce. I stripped the table schema down to a single column, and I found that the values in that column don't seem to matter -- just the number of rows. Here are the contents of my db.sql (with the obvious stuff snipped), generated via the SCRIPT command:
;
CREATE USER IF NOT EXISTS SA SALT '8eed806dbbd1ea59' HASH '6d55cf715c56f4ca392aca7389da216a97ae8c9785de5d071b49de5436b0c003' ADMIN;
CREATE MEMORY TABLE PUBLIC.MY_TABLE(
P_ID INTEGER SELECTIVITY 100
);
-- 5132 +/- SELECT COUNT(*) FROM PUBLIC.MY_TABLE;
INSERT INTO PUBLIC.MY_TABLE(P_ID) VALUES
(1),
(2),
(3),
... snipped you obviously have breaks in the bulk insert here ...
(5143),
(3238396);
CREATE INDEX PUBLIC.MY_TABLE_IX1 ON PUBLIC.MY_TABLE(P_ID);
But that will recreate the problem. (Note: my numbering skips a number every time there was a bulk-insert line break, so there really are 5132 rows even though the values run up to 5143; select count(*) from MY_TABLE yields 5132.) Also, I seem to be able to recreate the problem in the web console directly now by doing:
drop table MY_TABLE
runscript from 'db.sql'
select count(*) from MY_TABLE where P_ID = 3238396
You have recreated the problem if you get 0 back from the select when you know you have a row in there.
Oddly enough, I seem to be able to do
select * from MY_TABLE order by P_ID desc
and I can see the row at this point. But going directly for the row:
select * from MY_TABLE where P_ID = 3238396
Yields nothing.
I just realized that I should note that I am using h2-1.4.178.jar
The h2 folks have already apparently resolved this.
https://code.google.com/p/h2database/issues/detail?id=566
Just either need to get the code from version control or wait for the next release build. Thanks Thomas.
I have a table Orders (with columns orderId, orderType, userId, state, ...). I need my process to do the following:
Check whether an order with a specific type and a specific state exists for a specific user (SELECT).
If such an order doesn't exist - create one (INSERT).
So basically I want to assure that there is always only one order with:
orderType = x
userId = y
state = z
I can't, however, create a unique constraint, because more than one order can exist for some combination x1, y1, z1.
I must say that I'm not experienced in Oracle. I've read this article on locks in Oracle, and it seems that the only lock type which would be useful here is:
LOCK TABLE Orders IN EXCLUSIVE MODE
But I think it is overkill to lock the whole table for just a subset of the data. I tried SELECT ... FOR UPDATE OF <update_column> using different columns for <update_column>, but they still allowed me to insert new rows.
Is there any pattern for this type of concurrency? It seems that Oracle created SELECT ... FOR UPDATE OF ... for the SELECT-UPDATE pattern. Is there anything similar for SELECT-INSERT?
You can create a unique function-based index to enforce this sort of constraint, for example if you want to enforce that there is at most one row with a state of "Done" while allowing many rows with a state of "Draft":
CREATE UNIQUE INDEX idx_index_name
    ON Orders( CASE WHEN state = 'Done' THEN orderType ELSE NULL END,
               CASE WHEN state = 'Done' THEN userId ELSE NULL END,
               CASE WHEN state = 'Done' THEN state ELSE NULL END );
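Assuming a minimal Orders(orderType, userId, state) table with that index in place, the effect can be checked like this (a sketch; the second 'Done' insert should fail with ORA-00001):

```sql
INSERT INTO Orders (orderType, userId, state) VALUES ('x', 1, 'Draft');
INSERT INTO Orders (orderType, userId, state) VALUES ('x', 1, 'Draft');  -- allowed: Draft rows index as all-NULL
INSERT INTO Orders (orderType, userId, state) VALUES ('x', 1, 'Done');
INSERT INTO Orders (orderType, userId, state) VALUES ('x', 1, 'Done');   -- rejected: ORA-00001 unique constraint violated
```

Because Oracle does not index entries whose key columns are all NULL, the Draft rows never collide, while Done rows are forced to be unique per (orderType, userId).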
I think I could use some help here from more experienced users...
I have an integer column in a table, let's call it SO_ID in a table SO, and for each new row I need to calculate a new SO_ID based on the following rules:
1) SO_ID consists of 6 digits, where the first 3 are an area code and the last three are the sequence number within that area.
309001
309002
309003
2) so the next new row will have a SO_ID of value
309004
3) if someone deletes the row with SO_ID value = 309002, then the next new row must recycle this value, so the next new row has got to have the SO_ID of value
309002
Can anyone please provide me with either a SQL function or a PL/SQL function (perhaps a trigger straight away?) that would return the next available SO_ID I need to use?
I reckon I could make use of the keyword rownum in my SQL, but the following just doesn't work properly:
select max(so_id),max(rownum) from(
select (so_id),rownum,cast(substr(cast(so_id as varchar(6)),4,3) as int) from SO
where length(so_id)=6
and substr(cast(so_id as varchar(6)),1,3)='309'
and cast(substr(cast(so_id as varchar(6)),4,3) as int)=rownum
order by so_id
);
thank you for all your help!
This kind of logic is fraught with peril. What if two sessions calculate the same "next" value, or both try to reuse the same "deleted" value? Since your column is an integer, you'd probably be better off querying "between 309001 and 309999", but that begs the question of what happens when you hit the thousandth item in area 309?
Is it possible to make SO_ID a foreign key to another table as well as a unique key? You could pre-populate the parent table with all valid IDs (or use a function to generate them as needed), and then it would be a simple matter to select the lowest one where a child record doesn't exist.
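A sketch of that idea, with a hypothetical pre-populated parent table valid_so_id holding every permissible SO_ID:

```sql
-- Lowest valid ID with no child row yet; the anti-join keeps it a single pass.
SELECT MIN(v.so_id)
FROM valid_so_id v
LEFT JOIN so s ON s.so_id = v.so_id
WHERE s.so_id IS NULL;
```

The foreign key then guarantees only pre-approved IDs are ever used, and deleted IDs become selectable again automatically.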
Well, we came up with this... it sort of works; concurrency is "solved" via a unique constraint:
select min(lastnumber)
from
(
select so_id,so_id-LAG(so_id, 1, so_id) OVER (ORDER BY so_id) AS diff,LAG(so_id, 1, so_id) OVER (ORDER BY so_id)as lastnumber
from so_miso
where substr(cast(so_id as varchar(6)),1,3)='309'
and length(so_id)=6
order by so_id
)a
where diff>1;
Do you really need to compute & store this value at the time a row is inserted? You would normally be better off storing the area code and a date in a table and computing the SO_ID in a view, i.e.
SELECT area_code ||
LPAD( DENSE_RANK() OVER( PARTITION BY area_code
ORDER BY date_column ),
3,
'0' ) AS so_id,
<<other columns>>
FROM your_table
or having a process that runs periodically (nightly, for example) to assign the SO_ID using similar logic.
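For the periodic-process variant, a sketch using the same DENSE_RANK logic to write the computed value back (your_table and the column names follow the view above; this assumes an so_id column exists to receive the value):

```sql
MERGE INTO your_table t
USING (SELECT ROWID AS rid,
              area_code || LPAD(DENSE_RANK() OVER (PARTITION BY area_code
                                                   ORDER BY date_column),
                                3, '0') AS new_so_id
       FROM your_table) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN UPDATE SET t.so_id = s.new_so_id;
```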
If your application is not pure SQL, you could do this in application code (i.e. Java code). That would be more straightforward.
If you are recycling numbers when rows are deleted, your base table must be consulted when generating the next number. "Legacy" pre-relational schemes that attempt to encode information in numbers are a pain to make airtight when numbers must be recycled after deletes, as you say yours must.
If you want to avoid having to scan your table looking for gaps, an after-delete routine must write the deleted number to a separate table in a "ReuseMe" column. The insert routine does this:
1) begins a transaction
2) selects the next-number table for update
3) uses a ReuseMe number if available, else uses the next number
4) clears the ReuseMe number if applicable, or increments the next-number in the next-number table
5) commits the transaction
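Those steps might look like this in PL/SQL, with hypothetical next_number(next_val) and reuse_me(so_id) tables (a sketch, not a definitive implementation):

```sql
SELECT next_val INTO v_next FROM next_number FOR UPDATE;  -- 1) + 2) serialize allocators

SELECT MIN(so_id) INTO v_reused FROM reuse_me;            -- 3) prefer a recycled number
IF v_reused IS NOT NULL THEN
  DELETE FROM reuse_me WHERE so_id = v_reused;            -- 4a) clear the reused number
  v_next := v_reused;
ELSE
  UPDATE next_number SET next_val = next_val + 1;         -- 4b) advance the counter
END IF;

COMMIT;                                                   -- 5) releases the row lock
```

Note that SELECT MIN(...) INTO returns a single NULL row on an empty table (as discussed earlier), so no no_data_found handling is needed there.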
Ignoring the issues about concurrency, the following should give a decent start.
If 'traffic' on the table is low enough, go with locking the table in exclusive mode for the duration of the transaction.
create table blah (soc_id number(6));
insert into blah select 309000 + rownum from user_tables;
delete from blah where soc_id = 309003;
commit;
create or replace function get_next (i_soc in number) return number is
  v_min number := i_soc * 1000;
  v_max number := v_min + 999;
  v_gap number;
begin
  lock table blah in exclusive mode;
  select min(rn) into v_gap
  from (select rownum rn from dual connect by level <= 999
        minus
        select to_number(substr(soc_id, 4))
        from blah
        where soc_id between v_min and v_max);
  -- min(rn) is only the 3-digit suffix; add the area prefix back on
  -- (the original returned the bare suffix, e.g. 3 instead of 309003)
  return v_min + v_gap;
end;