How to synchronize multiple machines to insert a single row? - oracle

I have some code that is running on several machines, and accesses an Oracle database. I use this database (among other things) as a synchronization object between the different machines by locking rows.
The problem I have is that when my process is starting, there is nothing yet in the database to rely on for synchronization, and my processes get Oracle exceptions about a violated unique constraint (ORA-00001), since they all try to insert at the same time.
My solution for now is to catch that precise exception and ignore it, but I don't really like having exceptions being thrown in the normal workflow of my application.
Is there a better way to "test and insert" atomically in a database? Locking the whole table/partition when inserting a row is not an acceptable solution.
I looked at MERGE INTO, thinking it was my solution, but it produces the same problem.

You probably want to use DBMS_LOCK, which allows user application code to implement the same locking model that the Oracle database uses when locking rows and other resources. You can create an enqueue of type 'UL' (user lock), define a resource name, and then have multiple sessions lock it to their hearts' content, without any dependence on data in a table somewhere. It supports both exclusive and shared locking, so you can have some processes that run concurrently (if they take a shared lock) and other processes that run exclusively (if they take an exclusive lock), and they will automatically queue behind the shared locks (if any) being held by the other type of process, etc.
It's a very flexible locking model, and you don't need to rely on any data in any table to implement it.
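For illustration, here is a minimal sketch of the pattern (the lock name, the timeout, and the work done while holding the lock are all assumptions, not part of the question):

DECLARE
  v_handle VARCHAR2(128);
  v_result INTEGER;
BEGIN
  -- Map an application-chosen name to a lock handle (a 'UL' enqueue).
  -- Note: ALLOCATE_UNIQUE issues a COMMIT of its own.
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'MYAPP_STARTUP', lockhandle => v_handle);

  -- Request the lock exclusively, waiting up to 10 seconds; 0 means "granted".
  v_result := DBMS_LOCK.REQUEST(lockhandle        => v_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);

  IF v_result = 0 THEN
    NULL; -- only one session at a time gets here: do the one-time insert
    v_result := DBMS_LOCK.RELEASE(v_handle);
  END IF;
END;
/

Processes that may run concurrently would request DBMS_LOCK.S_MODE instead of X_MODE.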
See the Oracle PL/SQL Packages and Types Reference, for the full scoop on the DBMS_LOCK package.
Hope that helps.

You won't get an error immediately if your PK is policed by a non-unique index. Consider:
<<SESSION 1>>
SQL> create table afac (
       id number,
       constraint afac_pk primary key (id)
         deferrable /* will cause the PK to be policed by a non-unique index */
     );
Table created.
SQL> insert into afac values (1);
1 row created.
<<SESSION 2>>
SQL> insert into afac values (1); /* Will cause session 2 to be blocked */
Session 2 will be blocked until session 1 either commits or rolls back. I don't know if this mechanism is compatible with your requirement, though.

Related

Preserve exclusive table lock after DDL in Oracle

It is a well-known fact that in an Oracle database it is not possible to make a transaction out of multiple DDL statements.
However, is there any way to lock a specific set of database objects within the current connection so that after a DDL query is executed, all locks are held until they are explicitly released?
An obvious solution of this kind doesn't work, because executing the DDL statement automatically commits the transaction, and with it, the locks are released:
LOCK TABLE x ....;
LOCK TABLE y ....;
ALTER TABLE x ....; -- Does not work properly since table locks are released here
ALTER TABLE y ....;
ALTER TABLE x ....;
COMMIT;
The DBMS_LOCK option doesn't work either, because it is only an advisory lock: a concurrent session must cooperate with this lock, or at least be aware of its existence.
Moreover, there is no way to control which statements concurrent threads/sessions execute. I can only execute queries in the current session, and it must be ensured that no intermediate queries against tables X and Y run from other sessions until the current session is done.
Are there any ideas how this can be implemented?
PS: Please don't mention the high-level task or XY problem. There is no high-level task. The question is posed exactly as it is.
A bit of a joke (breaks all dependent PL/SQL), but... ;)
ALTER TABLE x RENAME TO x__my_precious;
ALTER TABLE y RENAME TO y__my_precious;
ALTER TABLE x__my_precious ...;
ALTER TABLE y__my_precious ...;
ALTER TABLE x__my_precious RENAME TO x;
ALTER TABLE y__my_precious RENAME TO y;
I'm pretty sure what you're trying to do isn't possible with Oracle's native transaction control. DDL will always end a transaction, so no lock on that object is going to survive it. Even if you immediately attempted to lock it after the DDL, another waiting session could slip in and obtain the lock before you do.
You can, however, serialize access to the table by utilizing another dummy table or row in a dummy table, assuming you control the code of any process wishing to access the table. If this is the case, then before accessing the table, attempt to lock the dummy table or a row in it first, and only if it succeeds continue with accessing the main table. Then the process that does DDL can take out that same lock (preventing other processes from proceeding), then do the DDL in a subroutine (named PL/SQL block) with PRAGMA AUTONOMOUS_TRANSACTION. That way the DDL ends the autonomous transaction rather than the main one, which still holds the lock on the dummy table.
You have to use a dummy table because if you tried to use the same table you want to modify you'll deadlock yourself. Of course, this only works if you can make all other processes do the lock-the-dummy-table safety check before they can proceed.
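A sketch of that approach, with invented names (APP_DDL_MUTEX for the dummy table, RUN_DDL for the autonomous routine):

-- one-time setup: a single-row "mutex" table
CREATE TABLE app_ddl_mutex (id NUMBER PRIMARY KEY);
INSERT INTO app_ddl_mutex VALUES (1);
COMMIT;

-- DDL runs in an autonomous transaction, so its implicit commit
-- ends only this inner transaction, not the caller's
CREATE OR REPLACE PROCEDURE run_ddl (p_stmt IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE p_stmt;
END;
/

-- the main transaction keeps the mutex row locked across both ALTERs
DECLARE
  v_id NUMBER;
BEGIN
  SELECT id INTO v_id FROM app_ddl_mutex WHERE id = 1 FOR UPDATE;
  run_ddl('ALTER TABLE x ...');  -- placeholder DDL, as in the question
  run_ddl('ALTER TABLE y ...');
  COMMIT;  -- only now is the mutex row lock released
END;
/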
Lastly, although what I said above should work, it is likely that you're trying to do something you shouldn't do. DDL against hot objects isn't a good idea. Whatever you're trying to do, there is probably a better way than modifying objects on the fly like this. Even if you are able to keep others locked out, you are likely to cause object reference errors, package invalidations, SQL cursor invalidations, etc. It can be a real mess.

oracle object no longer exists

In my PL/SQL process, there is an execution of ALTER TABLE ... EXCHANGE PARTITION
on some table.
The problem is that during that operation other users can try to access the target table.
One process that executed a SELECT query on that table got this error:
ORA-08103 object no longer exists.
I think that it is not the same as 'object or view doesn't exist'.
I think the 'object no longer exists' error comes when the process starts OK,
and then the exchange (or other operation) comes in from the side and the process
can't be completed.
The chance is very low that it will happen, because the exchange is very, very fast.
But for this case, is there any idea how to solve it? How to prevent this situation?
Maybe a way to execute the exchange only if no one is touching the table?
Thanks.
This will happen when session #2 alters the table after session #1 has opened its cursor but before it has finished fetching the rows from that cursor.
I don't think there is a foolproof way to prevent the exchange from happening unless you are willing to change the code that other users are using to access the target table.
LOCK TABLE mytable IN EXCLUSIVE MODE will not wait for SELECT statements to complete, nor will it prevent new SELECT statements from starting, so acquiring an exclusive lock before attempting the ALTER TABLE will not work.
If you want to prevent the ALTER TABLE from happening at the same time as SELECTs, you need both to depend on acquiring the same lock. A relatively robust way to do that would be to use the DBMS_LOCK package to allocate a lock for that table. Call that lock "mytablelock".
Then, using DBMS_LOCK, your SELECT sessions would need to acquire "shared" locks on "mytablelock" before progressing. Your alter table sessions would need to acquire an "exclusive" lock on "mytablelock" before progressing.
This scheme would allow multiple SELECT sessions to run without interfering with each other, but it would prevent the ALTER TABLE from running while any SELECT was running.
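A sketch of both sides, assuming the agreed-upon lock name "mytablelock" (a return code of 0 from REQUEST means the lock was granted):

-- SELECT sessions: take a shared lock before querying
DECLARE
  v_handle VARCHAR2(128);
  v_result INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE('mytablelock', v_handle);
  v_result := DBMS_LOCK.REQUEST(v_handle, DBMS_LOCK.S_MODE, timeout => 30);
  IF v_result = 0 THEN
    NULL; -- run the SELECT here, then call DBMS_LOCK.RELEASE(v_handle)
  END IF;
END;
/
-- The session doing ALTER TABLE ... EXCHANGE PARTITION requests the same
-- handle with DBMS_LOCK.X_MODE, so it waits until no shared locks remain.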
A (much) less robust, but simpler, way to do it would be to change the SELECT statements issued by the other sessions into SELECT ... FOR UPDATE statements. But that's a recipe for lots of unnecessary waits and deadlock errors.

Operations on certain tables won't finish

We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
The primary key of TRANSMISSIONS is ID, and there is also a unique constraint on NAME - therefore there are indexes on ID and NAME. We also have a trigger which creates values for column ID using a sequence, so that INSERT statements do not need to provide a value for ID.
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate and due to an incorrect JPA configuration we produced high values for ID during a time. Note that we use the trigger only for "manual" INSERT statements and that Hibernate produces ID values itself, also using the sequence.
The first thought was that the problems were due to the high IDs but we have the problems also with tables that never had such high IDs.
Anyways we suspected that the indexes might be fragmented somehow and called ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which ran through but showed no effect.
Also, we wanted to call ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT which didn't work because we needed to call first ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT which never finished.
We have another instance of the database which does not behave in such a funny way. So we think it might be that in the course of running the application the database got somehow into an inconsistent state.
Does someone have any suggestions about what might have gone out of control / into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to information in v$lock and v$locked_object)
We tried all these statements in SQL Developer and also using SQLPlus (command-line tool).
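Two diagnostics that might still be worth running (plain queries against the standard dynamic performance views, assuming you can read them): check what the hanging session is actually waiting on, and look for open, uncommitted transactions:

-- what is each session waiting for, and which session blocks it?
SELECT sid, serial#, username, event, blocking_session
  FROM v$session
 WHERE username IS NOT NULL;

-- are there open (uncommitted) transactions?
SELECT s.sid, s.serial#, s.username, t.start_time
  FROM v$transaction t
  JOIN v$session     s ON s.saddr = t.ses_addr;

A session holding the table busy (an uncommitted transaction, or a long-running query) is the usual suspect when DDL against one specific table won't complete while the same DDL works fine elsewhere.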

Creating a Trigger

How do I start a trigger so that this allows nobody to be able to rent a movie if their unpaid balance exceeds 50 dollars?
What you have here is a cross-row table constraint - i.e. you can't just put a single Oracle CONSTRAINT on a column, as these can only look at data within a single row at a time.
Oracle has support for only two cross-row constraint types - uniqueness (e.g. primary keys and unique constraints) and referential integrity (foreign keys).
In your case, you'll have to hand-code the constraint yourself - and with that comes the responsibility to ensure that the constraint is not violated in the presence of multiple sessions, each of which cannot see data inserted/updated by other concurrent sessions (at least, until they commit).
A simplistic approach is to add a trigger that issues a query to count how many records conflict with the new record; but this won't work, because the trigger cannot see rows that have been inserted/updated by other sessions but not yet committed; so the trigger will sometimes let a member exceed the limit, as long as (for example) two cashiers enter the data at separate terminals at the same time.
One way to get around this problem is to put some element of serialization in - e.g. the trigger would first request a lock on the member record (e.g. with a SELECT FOR UPDATE) before it's allowed to check the rentals; that way, if a 2nd session tries to insert rentals, it will wait until the first session does a commit or rollback.
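A sketch of such a trigger, under assumed names (a MEMBERS table with an UNPAID_BALANCE column, and a RENTALS table keyed by MEMBER_ID; none of these are given in the question):

CREATE OR REPLACE TRIGGER rental_balance_trg
BEFORE INSERT ON rentals
FOR EACH ROW
DECLARE
  v_balance members.unpaid_balance%TYPE;
BEGIN
  -- Lock the member row first: concurrent sessions inserting rentals
  -- for the same member queue up here until we commit or roll back.
  SELECT unpaid_balance
    INTO v_balance
    FROM members
   WHERE member_id = :NEW.member_id
     FOR UPDATE;

  IF v_balance > 50 THEN
    RAISE_APPLICATION_ERROR(-20010, 'Unpaid balance exceeds 50 dollars');
  END IF;
END;
/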
Another way around this problem is to use an aggregating Materialized View, which would be based on a query that is designed to find any rows that fail the test; the expectation is that the MV will be empty, and you put a table constraint on the MV such that if a row was ever to appear in the MV, the constraint would be violated. The effect of this is that any statement that tries to insert rows that violate the constraint will cause a constraint violation when the MV is refreshed.
Writing the query for this based on your design is left as an exercise for the reader :)
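That said, the general shape of the pattern looks roughly like this (all table and column names here are invented, not taken from the question):

CREATE MATERIALIZED VIEW LOG ON rentals
  WITH SEQUENCE, ROWID (member_id, amount_due) INCLUDING NEW VALUES;

-- COUNT(*) and COUNT(amount_due) are required for fast refresh of a SUM
CREATE MATERIALIZED VIEW rental_balance_mv
  REFRESH FAST ON COMMIT AS
  SELECT member_id,
         SUM(amount_due)   AS total_due,
         COUNT(amount_due) AS cnt_due,
         COUNT(*)          AS cnt
    FROM rentals
   GROUP BY member_id;

-- the check runs when the MV refreshes, i.e. at COMMIT time
ALTER TABLE rental_balance_mv
  ADD CONSTRAINT total_due_max CHECK (total_due <= 50);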
If you want to restrict something about your table data then you should have a look at Constraints and not Triggers.
Constraints ensure that certain conditions hold for your table data, as in your example.
Triggers fire when some action (i.e. INSERT, UPDATE, DELETE) takes place, and you can then do some work as a reaction to that action.

Restricting number of records allowed in a table in a way which can't be subverted

We have a web application (Grails) which we are going to sell licenses for based on the number of users. There is a table in the database (Oracle 10g) which holds users. Customers will host their own copy of the software and database. Can someone suggest strategies for limiting the number of records which are allowed to exist in the user table in a way which can't reasonably be subverted by the customer? Thanks.
You should at least consider avoiding all technical means here and instead insisting that your customer sign an SLSA with an audit provision, and then audit here and there.
All these technical means introduce risks of failure, ranging from flat-out crashes to mysterious performance problems. The more stealthy and devious the means, the more stealthy and devious the bugs.
It will depend on your definition of "reasonably". If they're hosting the database, they'll always be able to allow more rows.
The simplest possible solution would be an AFTER STATEMENT trigger that counted the number of rows and threw an exception if too many rows had been inserted. They could, of course, drop or disable that trigger. On the other hand, your application could also query the data dictionary to verify that the trigger was present and enabled.
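A minimal sketch, assuming the table is USER_TABLE and the cap is 30 (both invented here):

CREATE OR REPLACE TRIGGER user_limit_trg
AFTER INSERT ON user_table  -- statement-level: fires once per INSERT statement
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM user_table;
  IF v_count > 30 THEN
    RAISE_APPLICATION_ERROR(-20000, 'Licensed user limit exceeded');
  END IF;
END;
/

(As with any count-in-a-trigger check, concurrent uncommitted inserts can slip past it; for a licensing cap, that is probably acceptable.)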
You could make it more difficult for them to remove the trigger by creating a DDL trigger that looked for statements that affected this trigger or the table in question and disallowed them. That would require that the attacker find and remove that trigger as well before they could remove the trigger on the table.
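A sketch of such a guard (the trigger name and the protected object names are hypothetical, and creating it requires the ADMINISTER DATABASE TRIGGER privilege):

CREATE OR REPLACE TRIGGER protect_license_objects
BEFORE DDL ON DATABASE
BEGIN
  IF ora_dict_obj_name IN ('USER_TABLE', 'USER_LIMIT_TRG') THEN
    RAISE_APPLICATION_ERROR(-20001, 'DDL against licensed objects is not allowed');
  END IF;
END;
/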
You could deliver a database job (DBMS_SCHEDULER or DBMS_JOB) that periodically ran, looked for the statement and DDL triggers, and re-created them if they were missing. The attacker could figure out that there was a database job recreating the objects and remove that job, then remove the DDL trigger, then remove the statement trigger. In this job, you could potentially send a notification back to you (via email or HTTP or something else) alerting you to the issue, though that may be tricky from a networking standpoint: your customer's firewall may not allow outbound HTTP requests from the database server back to your servers.
If you have a license key that is being checked, you can embed the number of allowed users in that key and bounce it against the number of rows in the table during the login process.
If the customer doesn't have access to modify the table definition, you could use a simple set of constraints on the table:
CREATE TABLE user_table
(id NUMBER PRIMARY KEY
,name VARCHAR2(100) NOT NULL
,rn NUMBER NOT NULL
,CONSTRAINT rn_check CHECK (rn = TRUNC(rn) AND rn BETWEEN 1 AND 30)
,CONSTRAINT rn_uk UNIQUE (rn)
);
Now, the column rn must take an integer value between 1 and 30, and duplicates are not allowed: thus, a maximum of 30 rows may be added.
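For example, once all 30 slot numbers are taken, any further insert must either reuse a slot (violating rn_uk) or step outside the allowed range (violating rn_check):

INSERT INTO user_table (id, name, rn)
VALUES (31, 'user 31', 31);
-- fails with ORA-02290: check constraint (RN_CHECK) violated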
