H2 - Table not found after ALTER TABLE - leftover "_COPY_" table

I have an application that uses H2 v1.4.199, with versioning provided by Flyway.
For roughly 25% of the users, one of the Flyway migrations ends up in an unusual state.
The migration performs a number of ALTER TABLE commands (typically dropping columns) on one of the tables; let's call it "fruits".
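The statements in that migration look roughly like this (the column name is made up for illustration):
ALTER TABLE FRUITS DROP COLUMN RIPENESS;
On the affected installations, the next attempt to open the database then fails during startup: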
2021-01-25 14:57:04,226 ERROR main h2database - mydatabase:database opening mydatabase
org.h2.message.DbException: Table "FRUITS" not found [42102-199]
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.command.ddl.AlterTableAddConstraint.tryUpdate(AlterTableAddConstraint.java:108)
at org.h2.command.ddl.AlterTableAddConstraint.update(AlterTableAddConstraint.java:78)
at org.h2.engine.MetaRecord.execute(MetaRecord.java:60)
at org.h2.engine.Database.open(Database.java:842)
at org.h2.engine.Database.openDatabase(Database.java:319)
at org.h2.engine.Database.<init>(Database.java:313)
at org.h2.engine.Engine.openSession(Engine.java:69)
at org.h2.engine.Engine.openSession(Engine.java:201)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.engine.Engine.createSession(Engine.java:31)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:336)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
... more
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "FRUITS" not found; SQL statement:
ALTER TABLE PUBLIC.FRUITS ADD CONSTRAINT PUBLIC.CONSTRAINT_23 PRIMARY KEY(ID) INDEX PUBLIC.PRIMARY_KEY_23 [42102-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:451)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
... 80 more
The "fruits" table has a constraint, but they end up without the "fruits" table that acts on and ends up stuck - and you can't even connect to the database.
Looking through the debug logs, I can see the following:
2021-02-02 10:11:18 lock: 1 exclusive write lock requesting for FRUITS_COPY_4_3
2021-02-02 10:11:18 lock: 1 exclusive write lock added for FRUITS_COPY_4_3
Looking through the H2 source code, I can see that ALTER TABLE creates a temporary table with a "_COPY_" suffix, so I'm assuming the existing table is dropped but the temporary table somehow stays persisted under its temporary name instead of being renamed back, which means the original table name no longer exists.
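A minimal sketch of that internal sequence, assuming a hypothetical FRUITS(ID, NAME, RIPENESS) table and a migration that drops RIPENESS (H2 manipulates its metadata directly rather than issuing these statements, so this is only my reading of the source):
CREATE TABLE FRUITS_COPY_4_3(ID INT PRIMARY KEY, NAME VARCHAR(100));
INSERT INTO FRUITS_COPY_4_3 SELECT ID, NAME FROM FRUITS;
DROP TABLE FRUITS;
ALTER TABLE FRUITS_COPY_4_3 RENAME TO FRUITS;
If the final rename is lost (for example because the process dies mid-ALTER), only the "_COPY_" table remains and the original name is gone, which would match the symptoms above.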
Unfortunately I'm not able to reproduce this so it's difficult to provide any further information.

Related

Spring Boot: How do I specify the execution order of different schema.sql files?

I have created a table that has a foreign key constraint on spring-session-jdbc's spring_session table. The main motivation is that when spring-session deletes the rows, the delete cascades and removes the entries associated with the actual session. It became an "only works on my machine" problem because only I had the table already in place when starting the development server. For others it only works if they comment out the table first, initialize the server, then revert and run it again. Otherwise: nested exception is java.sql.SQLException: Failed to open the referenced table 'spring_session'.
I think the solution is to specify the run order of (or dependencies between) the initialization sql files. I cannot find that setting after some searching, so I am here.
schema.sql:
drop table if exists foo;
create table if not exists foo (
sid char(36) not null,
foreign key (sid) references spring_session (session_id) on delete cascade,
-- other columns and constraints
);
Possible workarounds:
Workaround #1: put an alter table add constraint statement in data.sql, as sketched after this list.
Workaround #2: grab spring-session-jdbc's schema.sql and put it into my schema.sql, then set spring.session.jdbc.initialize-schema=never in application.properties.
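A sketch of what that data.sql statement could look like (the constraint name fk_foo_session is made up), assuming the spring_session table already exists by the time data.sql runs:
alter table foo
    add constraint fk_foo_session
    foreign key (sid) references spring_session (session_id)
    on delete cascade;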
You can try Flyway: it manages your init SQL files by giving each one a version number, and it records which scripts have already been executed, so if you add another SQL file it will execute only the new one and skip the ones that have already run.
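For example, with Flyway's default V<version>__<description>.sql naming (the file names below are only an illustration), the scripts run strictly in version order:
db/migration/V1__create_spring_session.sql   -- spring-session-jdbc's tables
db/migration/V2__create_foo.sql              -- the table with the foreign key
Flyway records each applied script in its schema history table, so a later run only executes migrations that have not been applied yet.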

Deadlock in oracle database (imported by datapump) on the row with the same primary key (insert and select for update)

This problem happens ONLY with the database user, which was imported by datapump. The problem is NOT present with the original database user.
I am getting a deadlock in Oracle 11.2.0.3 where the current SQL statements of the two transactions participating in the deadlock are as follows:
SELECT /*AApaAA*/ objectid FROM T_DS_0 WHERE objectid = :1 FOR UPDATE
insert /*AApaAA*/ into T_DS_0(OBJECTID) values (:1 )
Both bind variables are 'AApaAA', which is also the primary key. It looks like a deadlock on a single resource.
There are foreign keys (on delete cascade) pointing to that primary key and they are indexed.
The deadlock graph is as follows :
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-000c000f-00000322 49 102 X 46 587 X
TX-00070011-00000da4 46 587 X 49 102 S
It is not clear to me how the deadlock could happen on a single resource. It is true that the insert does not lock the row but the constraint, which is probably a different resource, so a deadlock would theoretically be possible if the first transaction locked the constraint and then the row while the other one did it in the reverse order, but I do not see how that could happen. Theoretically child-table locking is possible (an insert takes SX locks on child tables), but the select for update should not touch the child tables at all.
The full tracefile from oracle is at : https://gist.github.com/milanro/06f9a76a2607a26ac9ba8c91b88639b3
Did anyone experience a similar behavior?
Additional Info : This problem happens only when datapump is used to duplicate the db user. The original schema contains SYS indexes created during creation of Primary Keys. There are further indexes which start with the PRIMARY KEY column. Datapump then does not create the SYS index on the PRIMARY KEY column but uses the indexes starting with the PRIMARY KEY column.
It looks like when I create following database objects :
create table tbl(objectid varchar2(30), a integer, primary key (objectid));
create index idx1 on tbl(objectid,a);
there is 1 table and 2 indexes created. SYS index (OBJECTID) and idx1(OBJECTID,A). The PRIMARY KEY uses the SYS index.
After the datapump is performed, on the imported side only 1 table and 1 index are created; the index is idx1(OBJECTID, A) and that index is used for the PRIMARY KEY.
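A quick way to compare the two sides is to ask the data dictionary which index backs the primary key (standard Oracle views; TBL is the example table above):
select constraint_name, index_name
from   user_constraints
where  table_name = 'TBL'
and    constraint_type = 'P';
On the original schema this should show the SYS-generated index, while on the imported schema it shows IDX1.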
This happens in my database schema with the table SDMBase. The deadlocks happen when different transactions combine INSERT INTO SDMBase ... and SELECT ... FROM SDMBase ... FOR UPDATE using the same OBJECTID. The same code is executed in those transactions, and within one transaction it can look as follows:
1.INSERT (objectid1)
2.SELECT FOR UPDATE (objectid1)
3.INSERT (objectid1)
4.SELECT FOR UPDATE (objectid1)
...
The deadlocking situation happens on lines 2 and 3. In my use-case, when these transactions run, the row with objectid1 is already in the database but does not have to be committed yet.
So I suppose that
step 1. should wait until objectid1 is committed and then fail and lock nothing
step 2. should lock objectid1 or wait if another transaction already locked it
step 3. should fail immediately and lock nothing
...
Apparently step 1, even if it fails, holds a lock on the PK for some time, but only when the database has been duplicated by datapump.
These deadlocks are rare and not reproducible manually; I suppose that the lock is not held for the whole transaction but only for a very short time, so it could be as follows:
TX1: 1.INSERT (objectid1) (fails and does not lock)
TX1: 2.SELECT (objectid1) (locks SDMBase)
TX2: 1.INSERT (objectid1) (fails but locks PK)
TX1: 3.INSERT (objectid1) (waits on PK)
TX2: 2.SELECT (objectid1) (waits on SDMBase)
Even if I create the index in the imported portal as SDMBase(OBJECTID) and let the PRIMARY KEY use it, and even if I recreate the other index (OBJECTID, ...), it still deadlocks. So I suppose that there is some problem with the PK constraint check.
The fix for this problem is to create the SDMBase(OBJECTID) index, let the PRIMARY KEY use it, and then perform the datapump again. The import must be performed in two steps: the first one excludes indexes, and the second one imports only the indexes:
exclude=INDEX/INDEX,statistics
include=INDEX/INDEX CONTENT=METADATA_ONLY
This problem occurs in both 11.2.0.3 and 12.2.0.1
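A sketch of that two-step import, assuming a dump file SCHEMA.DMP and a directory object DP_DIR (both names made up here; the EXCLUDE/INCLUDE values are the ones quoted above):
impdp myuser/mypass directory=DP_DIR dumpfile=SCHEMA.DMP exclude=INDEX/INDEX,STATISTICS
impdp myuser/mypass directory=DP_DIR dumpfile=SCHEMA.DMP include=INDEX/INDEX content=METADATA_ONLY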

Oracle lock issue - ORA-00054: resource busy - while creating a foreign key

Initial situation:
A table PARENT_TABLE with a primary key on its column PK_COL.
A table CHILD_TABLE1 with a foreign key on PARENT_TABLE(PK_COL).
I insert a line into CHILD_TABLE1 in a transaction and do not commit.
Then I try to create a table CHILD_TABLE2 symmetrical to CHILD_TABLE1 in another session.
But an ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired is raised when I create the foreign key, because of the ongoing insertion in CHILD_TABLE1.
I don't understand why Oracle is preventing the foreign key creation: there is no modification performed on PARENT_TABLE.
Please help.
To reproduce under sqlplus:
set autocommit off
create table PARENT_TABLE(PK_COL varchar(10));
alter table PARENT_TABLE add constraint PK_CONSTRAINT primary key (PK_COL);
insert into PARENT_TABLE values ('foo');
commit;
create table CHILD_TABLE1(CHILD_PK_COL varchar(10), FK_COL varchar(10));
alter table CHILD_TABLE1 add constraint CHILD_TABLE1_CONSTRAINT foreign key (FK_COL) references PARENT_TABLE(PK_COL);
create index CHILD_TABLE1_INDEX on CHILD_TABLE1(FK_COL);
insert into CHILD_TABLE1 values ('bar', 'foo');
In another console:
alter session set ddl_lock_timeout=10;
create table CHILD_TABLE2(CHILD_PK_COL varchar(10), FK_COL varchar(10));
alter table CHILD_TABLE2 add constraint CHILD_TABLE2_CONSTRAINT foreign key (FK_COL) references PARENT_TABLE(PK_COL);
Funny: with NOVALIDATE in CHILD_TABLE2_CONSTRAINT creation, the execution is hanging...
You are not modifying anything in the parent table, but you are trying to refer to its primary key from your child table. Before establishing a relationship, or doing any DDL involving a table, that table has to be free of locks.
So, before creating this constraint, Oracle does check for existing locks on the referred table (PARENT_TABLE). A lock on a table (a table-level lock, in this context) exists to uphold the ACID properties.
One good example to understand its importance is ON DELETE CASCADE, which means that if a record in the parent table is deleted, the corresponding records in the child table are automatically deleted as well.
So, when there is an uncommitted insert/update/delete on a child table referring to a parent table, no other referential constraint can be created against that parent, just to avoid a deadlock or chaos.
To be more crisp: when you have an uncommitted insert in your child table, there is a lock on your parent table as well, so all further DDLs referring to it will be made to wait.
You can use this query to check the same.
SELECT c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine
FROM v$locked_object a ,
v$session b,
dba_objects c
WHERE b.sid = a.session_id
AND a.object_id = c.object_id;
I added the LOCKED_MODE explanation to your query:
DECODE(a.LOCKED_MODE, 0,'NONE', 1,'NULL', 2,'ROW SHARE (RS/SS)', 3,'ROW EXCLUSIVE (RX/SX)', 4,'SHARE (S)', 5,'SHARE ROW EXCLUSIVE (SRX/SSX)', 6,'EXCLUSIVE (X)', NULL) LOCK_MODE
Here is the result:
OBJECT_NAME OBJECT_TYPE LOCK_MODE SID SERIAL# STATUS
------------------------------ ------------------- ----------------------------- ---------- ---------- --------
PARENT_TABLE TABLE ROW EXCLUSIVE (RX/SX) 71 8694 INACTIVE
CHILD_TABLE1 TABLE ROW EXCLUSIVE (RX/SX) 71 8694 INACTIVE
RX/SX is a table lock, so it prevents any DDL operation (that seems to be what the documentation says). This lock is taken on both the parent and the child. I suppose the lock is added on the parent at least to prevent it from being dropped, which would lose the pending change on the child table.
That said, I still have no solution. Suppose the parent table holds manufacturers. There is a child car table and we are inserting plenty of new cars into that table on the fly. There is a foreign key from car to manufacturer. Now there is a new product that we want to manage: bicycles. So we want to create a bicycle table similar to car. But we cannot create its foreign key while insertions into car are going on. Seems a very simple use case... How can it be supported?
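In SQL terms, the use case looks roughly like this (all table and column names are made up to mirror the example):
-- session 1: normal application traffic, transaction still open
insert into car (car_id, manufacturer_id) values (1, 42);
-- session 2: deploying the new bicycle feature
create table bicycle (bicycle_id number, manufacturer_id number);
alter table bicycle add constraint bicycle_manufacturer_fk
    foreign key (manufacturer_id) references manufacturer (manufacturer_id);
-- raises ORA-00054 (or waits up to ddl_lock_timeout) until session 1 commits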
=====
Edit:
There might be no solution. Here is a guy with the same issue.

I created a Global Temp Table. I cannot Drop the table

On Unix, connected to the Oracle server, I create a temp table with on commit preserve rows. I then truncate the table and try to drop it. Trying to drop the table, I receive the following error:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use (DBD ERROR: error possibly near <> indicator at char 11 in 'drop table <>temp01')
I cannot kill the session because I do not have permission.
Seems to me, the error is pretty clear:
$ oerr ora 14452
14452, 00000, "attempt to create, alter or drop an index on temporary table already in use"
// *Cause: An attempt was made to create, alter or drop an index on temporary
// table which is already in use.
// *Action: All the sessions using the session-specific temporary table have
// to truncate table and all the transactions using transaction
// specific temporary table have to end their transactions.
So, make sure that no other sessions are using the table. If even one other session is using the table, you will get this error and won't be able to drop it.
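One way to find which sessions still hold data in the temporary table is to look at the temporary segment usage (a sketch; on older releases the view is called v$sort_usage, and SEGTYPE = 'DATA' marks segments backing global temporary table rows):
select s.sid, s.serial#, s.username, u.segtype
from   v$session s
join   v$tempseg_usage u on u.session_addr = s.saddr
where  u.segtype = 'DATA';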

How to Recover an Entire Oracle Schema

I was using Navicat for Oracle to back up an entire schema. I mistakenly selected the Execute SQL File option instead of the Backup option, and all previous data has been changed/lost. I tried using the Oracle undo feature but it says the table definition has changed. Please, I am not skilled in Oracle; I only used it for a project because it was required, so I just use it to store the data. I need all the help I can get right now to recover the entire schema to how it was 24 hours ago, else I am so screwed... (forgive my language)
From your description you ran a script that dropped and recreated your tables. As you have flashback enabled and your dropped table is in the recycle bin, you can use the 'Flashback Drop' feature to get the dropped table back.
Here's an example with a single table:
create table t43 (id number);
drop table t43;
create table t43 (id2 number);
show recyclebin;
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
-------------------------------- ------------------------------ ------------------------- -------------------
T43 BIN$/ILKmnS4b+jgQwEAAH9jKA==$0 TABLE 2014-06-23:15:38:06
If you try to restore the table with the new one still there you get an error:
flashback table t43 to before drop;
SQL Error: ORA-38312: original name is used by an existing object
You can either rename the restored table:
flashback table t43 to before drop rename to t43_restored;
... which is useful if you want to keep your new table but be able to refer to the old one; or probably more usefully in your situation rename the new table before restoring:
alter table t43 rename to t43_new;
table T43 altered.
flashback table t43 to before drop;
table T43 succeeded.
desc t43
Name Null Type
---- ---- ------
ID NUMBER
You can undrop all of your tables, and as referential constraints still work with tables in the bin you don't have to worry too much about restoring parent tables before child tables, though it's probably neater to do that if you can.
Note the bit in the documentation about restoring dependent objects: index names won't be preserved, and you'll need to rename them after the restore with alter index.
You can't undrop a sequence; those don't go into the recycle bin. If you need to reset a sequence so it doesn't repeat values you already have, you can find the highest value it should hold (from the primary keys on your restored table, say) and temporarily change the increment value to skip over the used numbers.
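A sketch of that trick, assuming a hypothetical sequence t43_seq currently at 100 while the restored table already contains IDs up to 500:
alter sequence t43_seq increment by 400;
select t43_seq.nextval from dual;   -- jumps the sequence past the used values
alter sequence t43_seq increment by 1;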
