In CockroachDB, how can I kill all long-running transactions with locks on a certain table?

I have a table that I can't execute any statements against. I think this is because I have zombie transactions open with locks on the entire table. How can I drop all those transactions or otherwise unblock a blocked table?

The quickest way is usually to run a high-priority transaction that touches the table. This will abort any regular-priority transactions it contends with. The non-zombies will be retried by their clients, but the zombies will, with any luck, be gone for good.
BEGIN PRIORITY HIGH;
SELECT count(*) FROM foo; -- Do something cheaper here if the table is large
COMMIT;
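If the zombie transactions are still attached to live sessions, a more surgical alternative is to find and cancel those sessions. This is only a sketch: it assumes CockroachDB 22.1+ (where crdb_internal.cluster_locks exists) and the table name foo from above.
-- 1. Find the sessions whose transactions hold granted locks on the table.
SELECT t.session_id
FROM crdb_internal.cluster_locks AS l
JOIN crdb_internal.cluster_transactions AS t ON t.id = l.txn_id
WHERE l.table_name = 'foo' AND l.granted;
-- 2. Cancel each offending session by the id reported above (placeholder id).
CANCEL SESSION '<session_id from step 1>';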

Related

Sessions are locking each other while direct insert into subpartition with name of subpartition specified

We have one large table whose data we need to insert into another table.
The target table is partitioned by range (by day) and subpartitioned by department.
For loading the table data, we used dbms_parallel_execute and created a task using SQL that gets the list of departments, with a parallel level of 20, so at any one time only 20 tasks corresponding to 20 departments will run. Each task selects its department's data from the source table and inserts it into the target table.
Before doing the insert, we first get the subpartition name and generate the following insert:
INSERT /*+ NO_GATHER_OPTIMIZER_STATISTICS ENABLE_PARALLEL_DML APPEND_VALUES */ into Target_Table subpartition (subpartition_name) values (:B1, :B2, :B3, ....) ;
We read in the Oracle documentation that specifying the subpartition during the insert will lock only that subpartition, and that APPEND will work. The goal of doing this was to create n jobs that independently insert into their own given subpartitions. APPEND itself is working, but when we monitor v$session while loading the table data, we see that
BLOCKING_SESSION_STATUS is VALID
FINAL_BLOCKING_SESSION_STATUS is VALID
EVENT is library cache lock
STATE is WAITING
WAIT_CLASS is Concurrency
From this, we conclude that one APPEND_VALUES insert is still blocking other sessions from inserting into another subpartition. Is there something we missed? We have enabled parallel DML, disabled the target table's indexes, set skip_unusable_indexes to true, there are no referential constraints on the target table, and the table, partitions, and subpartitions are set to NOLOGGING.
EDIT: Tested the same thing with another table that is also partitioned, but it has no subpartitions; it is only list partitioned. So instead of subpartition (subpartition_name) inside the insert statement there was partition (partition_name). However, in this case the insert ran without sessions waiting on each other, and no locks were held. I am assuming the above won't work with interval-subpartitioned tables.
EDIT2: I have created the same scenario in another database, which is also Oracle 19c: created a table with partitions and subpartitions, set the interval, disabled indexes, set NOLOGGING, and ran the job that inserts into subpartitions. Surprisingly, the insert went through without errors and without sessions locking each other. Now I am thinking maybe it's some database parameter that should be turned on or changed, because the database versions, table structures, jobs, and inserts are the same, yet in one database the sessions lock each other and in the other they do not.
UPDATE: Adding the insert part of the code:
if c_tab_cursor%isopen then
  close c_tab_cursor;
end if;
open c_tab_cursor;
loop
  -- fetch the department's rows in batches of 100,000
  fetch c_tab_cursor bulk collect into v_row limit 100000;
  exit when v_row.count = 0;
  -- direct-path array insert into the precomputed subpartition
  forall i in v_row.first .. v_row.last
    insert /*+ NO_GATHER_OPTIMIZER_STATISTICS APPEND_VALUES */
      into Target_Table subpartition (SYS_P68457)
      values v_row(i);
  commit; -- a direct-path insert must be committed before the segment is touched again
end loop;
close c_tab_cursor;
EDIT3: Adding table info. The table is daily partitioned, and each partition has around 150 subpartitions; at the time of writing, the table had 177,845 subpartitions in total. My other guess is that Oracle is spending a lot of time finding the right subpartition, which is also arguable because the subpartition name is provided during the insert.
I'd say it is an expected "feature" when you insert into the same segment: a direct-path insert writes data beyond the HWM (high water mark) rather than using the segment's free space map.
When you commit a direct-path insert the HWM advances; when you roll back, the HWM stays where it was and the data is discarded.
Check the Oracle segment parameter FREELISTS, but I'm afraid even this parameter won't help you.
When your inserts touch different subpartitions this should not be happening.
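As a minimal illustration of that segment-level behavior (a sketch with a hypothetical table t):
create table t (n number);
insert /*+ APPEND_VALUES */ into t values (1);
-- Still inside the same transaction: the segment was modified in direct-path
-- mode, so even the inserting session cannot read it back before committing.
select count(*) from t; -- raises ORA-12838
commit;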
A library cache lock can be held on various objects (maybe due to a bug).
IMHO the only way to investigate this would be either to use hanganalyze to check which function in Oracle is being blocked, or to query the P1, P2, P3 parameters of the library cache lock wait and identify which object is blocking the parallel run.
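For the P1 route, a sketch (run as SYS, since x$ views are not exposed to ordinary users): for a library cache lock wait, P1 is the handle address of the object, which x$kglob can translate into an owner and name.
select s.sid,
       o.kglnaown as object_owner,
       o.kglnaobj as object_name
from v$session s
join x$kglob o on o.kglhdadr = s.p1raw
where s.event = 'library cache lock';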
PS: I have seen bugs like this: only one session could run a Java stored procedure at a time because Oracle unnecessarily wanted to hold an exclusive lock on some library cache object.
v$session reports the wait state at the precise instant you query it. It's meaningless unless you keep requerying and keep seeing the same thing. Better yet, use v$active_session_history to see Oracle's own one-second sampling of the wait state (a quick check is sketched after this answer). If you see lots of rows with that wait, then it's meaningful.
Assuming that it is meaningful, I would point out that you are using a single-row VALUES list and yet are asking for parallel DML. Parallel DML is for multi-row operations, not single-row operations. You can use it for an insert-select, for example, but not an insert-values.
If your application is necessarily single-row driven, remove the ENABLE_PARALLEL_DML and APPEND_VALUES hints. If you are binding arrays to these variables, you can keep APPEND_VALUES but remove ENABLE_PARALLEL_DML; for inserts, parallel DML only works with insert-select.
As you clearly intend to have multiple sessions, each loading a separate subpartition, that is your parallelism right there - you neither need nor want to add another layer of parallelism with PDML.
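A quick way to run the v$active_session_history check suggested above (a sketch; note that querying ASH requires the Diagnostics Pack license):
select event, count(*) as samples
from v$active_session_history
where sample_time > systimestamp - interval '15' minute
  and event = 'library cache lock'
group by event;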

What is the difference between row lock and table lock in Oracle database

What is the difference between a row lock and a table lock in an Oracle database?
Will a FOR loop with an UPDATE statement trigger a table lock?
Any DML statement on a table is going to acquire a table lock. But it is terribly unlikely that this table lock is going to affect another session in a way that limits concurrency. When your session updates rows, there will be a row-exclusive table lock, which will stop another session from doing DDL on the table (say, adding or removing a column) while there are active, uncommitted transactions involving the table. But presumably you're not generally trying to modify the structure of the table at the same time that you're updating rows in it (or you understand that when you deploy these DDL changes you'll block other sessions for a short period of time, and you pick your deployment times accordingly).
The specific rows that you are updating will be locked in order to prevent another session from modifying those rows until your transaction either commits or rolls back. Those row level locks are generally the locks that cause performance and scalability issues. Ideally, your code would be structured to hold the locks for as little time as possible (updating data in sets is much faster than doing row-by-row updates) and to minimize the probability that two sessions will try to update the same row simultaneously.
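A minimal way to see both lock types at once (a sketch with a hypothetical emp table; leave session 1's transaction uncommitted, then query from session 2):
-- Session 1: takes row locks (TX) plus a row-exclusive table lock (TM, mode 3).
update emp set sal = sal * 1.1 where deptno = 10;
-- Session 2: inspect them. TM = table-level lock, TX = transaction/row-level lock.
select sid, type, id1, lmode, request, block
from v$lock
where type in ('TM', 'TX');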

oracle object no longer exists

In my PL/SQL process, there is an execution of "alter table ... exchange partition .."
on some table.
The problem is that during that operation, other users can try to access the target table.
One process that executed a select query on that table got this error:
ORA-08103: object no longer exists.
I think it is not the same as 'object or view doesn't exist'.
I think the 'object no longer exists' error comes when the process starts OK,
and then the exchange (or another operation) comes in from the side and the process
can't be completed.
The chance that this will happen is very low, because the exchange is very, very fast.
But for this case, is there any idea how to solve it? How can I prevent this situation?
Maybe a way to execute the exchange only if no one is touching the table?
Thanks.
This will happen when session #2 alters the table after session #1 has opened its cursor but before it has finished fetching the rows from that cursor.
I don't think there is a foolproof way to prevent the exchange from happening unless you are willing to change the code that other users are using to access the target table.
LOCK TABLE mytable IN EXCLUSIVE MODE will not wait for SELECT statements to complete, nor will it prevent new SELECT statements from starting, so acquiring an exclusive lock before attempting the ALTER TABLE will not work.
If you want to prevent the ALTER TABLE from happening at the same time as SELECTs, you need both to depend on acquiring the same lock. A relatively robust way to do that would be to use the DBMS_LOCK package to allocate a lock for that table. Call that lock "mytablelock".
Then, using DBMS_LOCK, your SELECT sessions would need to acquire "shared" locks on "mytablelock" before progressing. Your alter table sessions would need to acquire an "exclusive" lock on "mytablelock" before progressing.
This scheme would allow multiple SELECT sessions to run without interfering with each other, but it would prevent the ALTER TABLE from running while any SELECT was running.
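A sketch of the reader side of that scheme, using the "mytablelock" name above (note that DBMS_LOCK usually needs an explicit EXECUTE grant, and allocate_unique issues a commit):
declare
  v_handle varchar2(128);
  v_status integer;
begin
  -- map the agreed-upon lock name to a handle
  dbms_lock.allocate_unique('mytablelock', v_handle);
  -- readers take the lock in shared mode, waiting up to 10 seconds
  v_status := dbms_lock.request(v_handle, dbms_lock.s_mode, timeout => 10);
  if v_status = 0 then
    -- ... run the SELECTs against the target table here ...
    v_status := dbms_lock.release(v_handle);
  end if;
end;
/
The ALTER TABLE session would do the same but request dbms_lock.x_mode, so it waits until no reader holds the shared lock.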
A (much) less robust but simpler way to do it would be to change the SELECT statements issued by the other sessions into SELECT ... FOR UPDATE statements. But that's a recipe for lots of unnecessary waits and deadlock errors.

Disable queries on table while updating

I have a PL/SQL script that clears (via a delete from statement) and repopulates several dependent tables like this:
delete from table-A;
insert into table-A values (...);
delete from table-B;
insert into table-B values (...);
These operations take ~10 seconds to complete, and I'd like to block all SQL queries that try to read data from table-A or table-B while the tables are updating. Those queries should wait and then continue execution once table-A and table-B are completely updated.
What is the proper way to do this?
As others have pointed out, Oracle's basic concurrency model is that writers do not block readers and readers do not block writers. You can't stop a simple select from running. Your queries will see the data as of the SCN that they started executing (assuming that you're using the default read committed transaction isolation level) so they will have a consistent view of the data before your updates started.
You could potentially acquire a custom named lock using dbms_lock.request. You would need to acquire this lock before running your updates and every session that queries the tables would also need to acquire the lock before it starts to query the tables. That will, obviously, decrease the scalability of your application but it will accomplish what you appear to be asking for. Presumably, the sessions doing queries can acquire the lock in shared mode while the session doing the updates would need to acquire it in exclusive mode.
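A sketch of the updating script's side of that (hypothetical lock name 'table_ab_refresh' and hypothetical staging tables; each querying session would request the same lock in dbms_lock.s_mode before reading):
declare
  v_handle varchar2(128);
  v_status integer;
begin
  dbms_lock.allocate_unique('table_ab_refresh', v_handle); -- note: this call commits
  -- exclusive mode blocks while any reader holds the shared lock, and vice versa;
  -- release_on_commit frees the lock automatically at the commit below
  v_status := dbms_lock.request(v_handle, dbms_lock.x_mode,
                                timeout => 60, release_on_commit => true);
  if v_status = 0 then
    delete from table_a;
    insert into table_a select * from table_a_staging; -- hypothetical source
    delete from table_b;
    insert into table_b select * from table_b_staging; -- hypothetical source
    commit;
  end if;
end;
/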

Need help understanding the behaviour of SELECT ... FOR UPDATE causing a deadlock

I have two concurrent transactions executing this bit of code (simplified for illustration purposes):
@Transactional
public void deleteAccounts() {
    List<User> users = em.createQuery("select u from User u", User.class)
                         .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                         .getResultList();
    for (User user : users) {
        em.remove(user);
    }
}
My understanding is that one of the transactions, say transaction A, should execute the SELECT first, lock all the rows it needs and then go on with the DELETEs while the other transaction should wait for A's commit before performing the SELECT. However, this code is deadlocking. Where am I wrong?
The USER table probably has a lot of foreign keys referring to it. If any of them are un-indexed Oracle will lock the entire child table while it deletes the row from the parent table. If multiple statements run at the same time, even for a different user, the same child tables will be locked. Since the order of those recursive operations cannot be controlled it is possible that multiple sessions will lock the same resources in a different order, causing a deadlock.
See this section in the Concepts manual for more information.
To resolve this, add indexes to any un-indexed foreign keys. If the column names are standard a script like this could help you find potential candidates:
--Find un-indexed foreign keys.
--
--Foreign keys.
select owner, table_name
from dba_constraints
where r_constraint_name = 'USER_ID_PK'
and r_owner = 'THE_SCHEMA_NAME'
minus
--Tables with an index on the relevant column.
select table_owner, table_name
from dba_ind_columns
where column_name = 'USER_ID';
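For any table and owner that the query returns, the fix is an index on the referencing column, for example (hypothetical child table orders):
create index orders_user_id_ix on orders (user_id);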
When you use PESSIMISTIC_WRITE, JPA generally translates it to SELECT ... FOR UPDATE, which takes a lock in the database. That lock is not necessarily per row; it depends on the database and how you configure the lock. In some databases the default lock granularity is a page or block rather than a row, so check your database's documentation to confirm how it takes the lock; you may also be able to change it so the lock applies to a row.
When you call the method deleteAccounts, it starts a new transaction, and the lock stays active until the transaction commits (or rolls back), in this case when the method has finished. If another transaction wants to acquire the same lock it can't, and I think this is why you get the deadlock. I suggest you try another mechanism, maybe an optimistic lock, or a lock by entity.
You can try giving a timeout for acquiring the lock, like so:
em.createQuery("select u from User u", User.class)
  .setLockMode(LockModeType.PESSIMISTIC_WRITE)
  .setHint("javax.persistence.lock.timeout", 5000)
  .getResultList();
I found a good article that explains this error better; it is caused by the database:
Oracle automatically detects deadlocks and resolves them by rolling back one of the transactions/statements involved in the deadlock, thus releasing one set of resources/data locked by that transaction. The session that is rolled back will observe Oracle error ORA-00060: deadlock detected while waiting for resource. Oracle will also produce detailed information in a trace file under the database's UDUMP directory.
Most commonly these deadlocks are caused by applications that involve multi-table updates in the same transaction, with multiple applications/transactions acting on the same tables at the same time. These multi-table deadlocks can be avoided by locking tables in the same order in all applications/transactions, thus preventing a deadlock condition.
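To make the lock-ordering advice concrete, a minimal sketch (hypothetical tables accounts and audit_log): every transaction takes its locks in the same table order, so a circular wait cannot form.
-- All transactions update accounts before audit_log, never the reverse.
update accounts set balance = balance - 100 where id = 1;
update audit_log set processed = 'Y' where account_id = 1;
commit;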
