H2 database file corrupted and cannot recover

We use an H2 database as our Confluence data source. Yesterday there was a heap overflow error while the Confluence backup job was running, and afterwards we found that some data was corrupted.
When we execute a SELECT statement on the damaged data, we get the following error:
IO Exception: "java.io.IOException: org.h2.jdbc.JdbcSQLNonTransientException: IOException: ""Missing lob entry: 4172202"" [90028-200]";"lob: null table: -3 id: 4172202"[90031-200] 90031/90031
We have used the H2 recovery tool, but the restored database still has the above error.
We checked the SQL script generated by the recovery tool and found that the corrupted data had been written out in comments like:
-- dump: fragmentation of damaged data...
Is there anything we can do to recover the H2 database?
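For reference, here is a minimal sketch of the usual recovery workflow, assuming H2 1.4.200 on the classpath and a placeholder database name of confluence (i.e. a confluence.mv.db file in the current directory): dump the damaged file with the Recover tool, then replay the generated .h2.sql script into a fresh database. Data that only survives inside the "-- dump:" comments is not replayed, so those rows stay lost unless they can be reconstructed by hand.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.h2.tools.Recover;

public class H2Recovery {
    public static void main(String[] args) throws Exception {
        // Dump the damaged database to confluence.h2.sql in the current directory
        // (same as: java -cp h2-1.4.200.jar org.h2.tools.Recover -dir . -db confluence).
        Recover.execute(".", "confluence");

        // Replay the generated script into a brand-new database file.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./confluence-recovered", "sa", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("RUNSCRIPT FROM './confluence.h2.sql'");
        }
    }
}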

Related

Can I continue exporting data despite the snapshot too old error?

I am exporting a schema from an Oracle production database to create a new test database.
Unfortunately, I received the error ORA-01555: snapshot too old.
Despite the error, the export is still in progress and the .dmp file is growing.
I don't care about the consistency and quality of the data; the application team will only use it to test some migrations.
Can I continue, and will I be able to import this data into my new test database?
See Oracle Support Doc ID 452341.1. This is most likely caused by corrupted LOB files and will require a physical recovery from backup. The Doc has specific steps to verify which LOBs are affected and correct the issue.

Test connection of h2 database

I tried to test the connection in the H2 console and got this error:
The error with code 50000 is thrown when something unexpected occurs, for example an internal stack overflow. For details about the problem, see the cause of the exception in the stack trace.
General error: "java.lang.IllegalStateException: Unable to read the page at position 6322192528771 [1.4.200/6]" [50000-200] HY000/50000
Your database file is corrupted. If you need the data from it, you can try to use the recovery tool. If you don't need it, you can simply delete test1.mv.db in the home directory of your user account.
When you use persistent embedded databases in H2, you should be careful with them: Thread.interrupt(), for example, may corrupt the database file unless you're using the async: filesystem. Corruption is also possible when a database file created with a recent version of H2 is opened with an older one. The default MVStore engine also has some problems of its own.
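As an illustration of the async: remark above, a minimal sketch (placeholder database name) of opening a persistent database through the async: file system prefix, which uses an asynchronous file channel that is not closed when the calling thread is interrupted:

import java.sql.Connection;
import java.sql.DriverManager;

public class AsyncFileSystemExample {
    public static void main(String[] args) throws Exception {
        // The async: prefix in front of the database path selects H2's asynchronous
        // file system, so a Thread.interrupt() during I/O no longer closes the
        // underlying file channel and corrupts the store.
        String url = "jdbc:h2:async:~/test1";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}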

Getting the writable operation error while using oracle cdc

I am getting the following error while connecting through the Oracle CDC client. My origin database is a read-only database, but the error says a writable database is required for the operation. Please help.
Caused by: java.sql.SQLException: ORA-01300: writable database required for specified LogMiner options
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1
This is a limitation of LogMiner itself: the LogMiner dictionary requires write access to the database for all modes except Extracting the LogMiner Dictionary to a Flat File, which does not guarantee transactional consistency. Oracle recommends using the online catalog or extracting the dictionary from redo log files, which are the options that Data Collector offers.
See the Oracle LogMiner documentation.
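For illustration, a minimal sketch (placeholder connection details, credentials, and log file path) of what the online-catalog mode looks like when driven directly over JDBC; because DICT_FROM_ONLINE_CATALOG reads the live data dictionary, this is the call that fails with ORA-01300 when the database is opened read-only:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class StartLogMiner {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the user needs EXECUTE on DBMS_LOGMNR.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
        try (Connection conn = DriverManager.getConnection(url, "miner_user", "password")) {
            // Register one archived/online redo log file (placeholder path).
            try (CallableStatement add = conn.prepareCall(
                    "BEGIN DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => ?, OPTIONS => DBMS_LOGMNR.NEW); END;")) {
                add.setString(1, "/u01/app/oracle/arch/1_100_1000000000.arc");
                add.execute();
            }
            // Start LogMiner using the online catalog as the dictionary source.
            try (CallableStatement start = conn.prepareCall(
                    "BEGIN DBMS_LOGMNR.START_LOGMNR("
                    + "OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY); END;")) {
                start.execute();
            }
        }
    }
}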

IMPDP uses more disk space than expected

Background:
I've been tasked with importing a large amount of data from a production database to a test database (Oracle 12c release 2 running on RHEL) and I'm using Data Pump.
The first time I imported the tables, the tables were created and the data was imported as planned, but, due to an issue in my Data Pump parameter file, the constraints were not imported.
My subsequent attempts did not go as well, however. Data Pump began to freeze partway through, and the STATUS command showed that no bytes were being processed.
My Solution Attempts:
I tried using TABLE_EXISTS_ACTION=REPLACE and dropping the tables directly after an attempt. I also dropped the master tables of any data pump jobs I was unable to terminate from the utility.
Still, it seemed to hang earlier and earlier in the process as I repeatedly tried to import the tables. df -h returned 100% disk usage every time it hung.
The dump file itself is on a separate drive so it's no longer taking up room. I've been trying to clear out space but it keeps filling up when I run a job and I can't tell where. Oracle flashbacks are disabled and I made sure to purge the oracle recycle bin.
tl;dr:
Running impdp jobs seems to use up disk space beyond the added tables and the job master tables. Where is this disk space getting used up, and what can I do to clear it for a successful import?
I figured out the problem:
The database was in archivelog mode in order to set up Streams and Recovery Manager backups. As a result, impdp was causing a flood of archived redo logs.
In order to clear out the old archives, I ran the following in RMAN for every database in noarchivelog mode on the server.
connect target /
run {
allocate channel c1 type disk;
delete force noprompt archivelog until time 'SYSDATE-30';
release channel c1;
}
This cleared up 60 gigabytes. I also added the parameter transform=disable_archive_logging:Y to my impdp parameter file. This suppresses archive creation when running data pump imports.
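If you need to confirm where the space is going before deleting anything, here is a minimal sketch (placeholder connection details) that sums up the archived logs still sitting on disk, using the V$ARCHIVED_LOG view:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ArchivedLogSpace {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the user needs SELECT on V$ARCHIVED_LOG.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
        try (Connection conn = DriverManager.getConnection(url, "system", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT COUNT(*) AS log_count, "
                     + "ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 1) AS gb "
                     + "FROM v$archived_log WHERE deleted = 'NO'")) {
            if (rs.next()) {
                System.out.println(rs.getInt("log_count") + " archived logs on disk, ~"
                        + rs.getDouble("gb") + " GB");
            }
        }
    }
}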

Oracle Database Copy Failed Using SQL Developer

A few days ago, while I was trying to perform a database copy from a remote server to a local server, I got some warnings, one of which was like this:
"Error occured executing DDL for TABLE:MASTER_DATA".
I clicked Yes anyway, but the result of the database copy was unexpected: only a few tables had been copied.
When I tried to view the DDL from the SQL section/tab of one of the tables, I got this kind of information:
-- Unable to render TABLE DDL for object COMPANY_DB_PROD.MASTER_DATA with DBMS_METADATA attempting internal generator.
I also got the message below, and I believe it showed up because there's something wrong with the DDL on my database, so the tables won't be created.
ORA-00942: table or view does not exist
I've never encountered this problem before, and I have performed this database copy every day for the past two years.
For the record, before this problem occurred I had removed old .arch files manually, not via RMAN (I have never used any RMAN commands). I also removed old .xml log files, because these two types of files had filled up my remote server's storage.
How do I trace and fix this kind of problem? Is there any corruption in my Oracle database?
Thanks in advance.
It turned out the problem was caused by a datafile that had reached its maximum size. I resolved it by following the answer in this discussion: ORA-01652: unable to extend temp segment by 128 in tablespace SYSTEM: How to extend?
Anyway, thank you everyone for the help.
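For anyone who hits the same wall, a minimal sketch (placeholder datafile path, sizes, and connection details) of the kind of fix the linked answer describes: let the full datafile grow again, either by resizing it once or by enabling autoextend with a cap.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExtendDatafile {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; run as a DBA with the ALTER DATABASE privilege.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
        try (Connection conn = DriverManager.getConnection(url, "system", "password");
             Statement stmt = conn.createStatement()) {
            // Option 1: grow the file once.
            stmt.execute("ALTER DATABASE DATAFILE '/u01/oradata/ORCL/system01.dbf' RESIZE 4G");
            // Option 2: let it grow on demand, up to a hard limit.
            stmt.execute("ALTER DATABASE DATAFILE '/u01/oradata/ORCL/system01.dbf' "
                    + "AUTOEXTEND ON NEXT 256M MAXSIZE 32G");
        }
    }
}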

Resources