IMPDP uses more disk space than expected - Oracle

Background:
I've been tasked with importing a large amount of data from a production database to a test database (Oracle 12c release 2 running on RHEL) and I'm using Data Pump.
The first time I imported the tables, the tables were created and the data was imported as planned, but, due to an issue in my Data Pump parameter file, the constraints were not imported.
My subsequent attempts did not go as well, however. Data Pump began to freeze partway through, and the STATUS command showed that no bytes were being processed.
My Solution Attempts:
I tried using TABLE_EXISTS_ACTION=REPLACE and dropping the tables directly after an attempt. I also dropped the master tables of any Data Pump jobs I was unable to terminate from the utility.
Still, it seemed to hang earlier and earlier in the process as I repeatedly tried to import the tables. df -h returned 100% disk usage every time it hung.
The dump file itself is on a separate drive, so it isn't what's taking up the room. I've been trying to clear out space, but the disk keeps filling up when I run a job and I can't tell where the space is going. Oracle Flashback is disabled, and I made sure to purge the Oracle recycle bin.
tl;dr:
Running impdp jobs seems to use disk space beyond the imported tables and the job master tables. Where is this space being used, and what can I do to clear it for a successful import?

I figured out the problem:
The database was in ARCHIVELOG mode in order to support Streams and Recovery Manager backups. As a result, impdp was generating a flood of archived redo logs.
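For anyone hitting the same thing, two quick queries make the cause visible (a sketch; both views are standard):
SQL> SELECT log_mode FROM v$database;
-- if this returns ARCHIVELOG, every imported row generates archived redo
SQL> SELECT file_type, percent_space_used FROM v$recovery_area_usage;
-- shows what is consuming the fast recovery area, if one is configured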
In order to clear out the old archives, I ran the following in RMAN for every database on the server that was in NOARCHIVELOG mode:
connect target /
run {
  allocate channel c1 type disk;
  delete force noprompt archivelog until time 'SYSDATE-30';
  release channel c1;
}
This cleared up 60 gigabytes. I also added the parameter TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y to my impdp parameter file, which suppresses most redo (and therefore archive) generation while Data Pump loads the data.
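For reference, a minimal parameter file along these lines (the directory object, file names, and schema are made up):
DIRECTORY=dp_dir
DUMPFILE=prod_tables.dmp
LOGFILE=prod_tables_imp.log
TABLES=APP_OWNER.ORDERS,APP_OWNER.ORDER_ITEMS
TABLE_EXISTS_ACTION=REPLACE
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
Note that DISABLE_ARCHIVE_LOGGING has no effect if the database is running in FORCE LOGGING mode.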

Related

Debezium Oracle Connectors CDC usage on top of Kafka and Backup / DataGuard

We are trying to use the Oracle Debezium connector (1.9b) on top of Kafka.
We tried two values for snapshot.mode: schema_only and initial.
We use "log.mining.strategy":"online_catalog" (should be the default)
We are using a PDB/CDB Oracle instance on Oracle 19c.
My understanding is that:
The connector creates a session to the PDB.
It adds a shared lock to ensure the structure will not change for a short duration.
The DDL structure is retrieved from the PDB.
It creates a session to the CDB.
It retrieves the last LSN event from the CDB.
If snapshot.mode == initial, it uses a "JDBC query to retrieve the whole data" from the PDB.
It does NOT seem to release the session (or rather, the process) it opened against the PDB.
It continues to mine new events from the CDB.
... it seems to work for a couple of minutes.
After a couple of minutes, the number of processes increases drastically.
The Oracle database freezes, due to an excess of processes (which you can follow using v$process).
We had a lot of error messages, like:
A. Failed to resolve Oracle database
B. IO Error: Got minus one from a read call
C. ORA-04025: maximum allowed library object lock allocated
D. None of log files contains offset SCN: xxxxxxx
The message in point D said it was trying to use an offset that was part of "an old" archived log.
Every 30 minutes (or sooner, if we have more activity), the redo log switches from one file to the next.
And a backup runs every 30 minutes, which reads the archived logs, backs them up, and then deletes them.
It seems to me that Debezium tried to reach a past archived log that had already been deleted by the backup process.
The process of "deleting previous archived logs" seems "correct" to me, doesn't it?
Why does Debezium try to go through the archived logs? With snapshot.mode == schema_only it should only catch new events, so why use the archived ones?
How can we manage this?
I hope that if this point is resolved in my use case, Debezium will stop "looping" to create new processes and will ultimately stop blocking the Oracle DB.
If you have any clues or opinions, don't hesitate to share them. Many thanks.
We tried both the shared lock mode and none.
We tried limiting the number of tables in scope.
I cannot ask to stop the backup; in production it's not a good idea, and in test it seems the backup is only there to clean up the archived logs and avoid ending up with completely full storage.
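Not an answer, but one way to watch the leak while testing, assuming you can query the instance (a sketch):
SQL> SELECT username, program, COUNT(*) AS sessions
     FROM v$session
     GROUP BY username, program
     ORDER BY sessions DESC;
-- if the connector's JDBC sessions keep climbing here, the leak is on the connector side
As for the archived logs: if the connector ever lags behind a log switch, the log containing its restart SCN must still exist, so deleting every archive 30 minutes after it is written makes the "None of log files contains offset SCN" error inevitable whenever the connector falls behind. Retaining the archives for longer (hours rather than minutes) is the usual mitigation.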

Is it possible to apply archivelogs from standby DB to Production DB?

I lost two hard disks yesterday. One contained an Oracle datafile and the other contained part of the archive logs generated in the past two days (say arch_5.dbf and arch_6.dbf are lost, out of the set arch_1 to arch_10).
I have switched over to my standby site as part of the business continuity plan.
Now I have to recover the missing datafile, which requires the two missing archive log files.
Is it possible to apply the same set of archive logs from the standby to production, in order to recover the datafile?
Kindly advise.
~SK~
It might prove a bit easier to "Use RMAN Incremental Backups to Refresh a Standby Database".
You could use the archives that are transported to the standby site, but they won't help with the recovery of the lost datafile unless its creation is logged in the archives.
Using the incremental backup option is easier.
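If you do want to try the standby's archives directly, the mechanics would look roughly like this on the production side (the path and datafile number are placeholders):
# copy arch_5.dbf and arch_6.dbf from the standby host first, then:
RMAN> CONNECT TARGET /
RMAN> CATALOG START WITH '/u01/arch_from_standby/';
RMAN> RESTORE DATAFILE 7;
RMAN> RECOVER DATAFILE 7;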

oracle online recovery does not change archived db

I created a database with DBCA, choosing ASM, a fast recovery area, and ARCHIVELOG mode.
Now I want to back it up, perform some tests that would change the contents of the database, and, if the need arises, recover it from the backup.
I know of the export/import utilities, but I need to use RMAN in case I need to move the database.
I followed the tutorials below, with some caveats, and most of the commands succeeded:
https://www.thegeekstuff.com/2013/08/oracle-rman-backup/
https://www.thegeekstuff.com/2014/11/oracle-rman-restore/
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
SQL> SHUTDOWN
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM "+DG1/<DB_NAME>/CONTROLFILE/CURRENT.<3_DIGIT_NUMBER>.<10_DIGIT_NUMBER>"
(before one of these commands I mounted the database with SQL> STARTUP MOUNT, because exclusive mode was needed)
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
*the last one didn't run successfully; the output said it can only be run after an incomplete recovery
The changes I made to check the backup and recovery were dropping a table on one occasion and inserting a record on another.
The problem is that, after checking, the database had not changed back to its previous state.
"only if badly recovered can be run" is not a known error message.
Nothing changed, because you didn't put a limitation on the 'recover' step, so it recovered right back through all the online redo, right back to where it was an instant before you shut it down to do the restore/recover. You need to look at the SET UNTIL command in the RMAN manuals: SET UNTIL a point in time or SCN prior to the activity that you expected to be gone after the restore/recover.
This is exactly as expected, and exactly what you would do in case of a disaster recovery where you do not want any data loss. In your case you do not want complete recovery, but a Point In Time (PIT) recovery.
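A sketch of the same restore with an UNTIL clause (the timestamp is illustrative):
RMAN> STARTUP MOUNT;
RMAN> RUN {
        SET UNTIL TIME "TO_DATE('2013-08-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
        RESTORE DATABASE;
        RECOVER DATABASE;
      }
RMAN> ALTER DATABASE OPEN RESETLOGS;
Because this is an incomplete recovery, OPEN RESETLOGS is now both valid and required.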

Problem when backing up Oracle 10g

I've just started a job and have identified an issue: the database isn't currently being backed up properly, so to speak. We are doing one backup every 6 hours using the Oracle native backup utility.
We were also sold a process by a company which claimed that they could, in essence, perform "warm" backups of our database by simply taking file-system copies of our database files; when we needed to restore, we'd shut down Oracle, copy back the files that had been copied, restart Oracle, and the world would be whole again. The challenge is that we have not gotten this to work yet.
I need to spend some more time reviewing the message that Oracle is giving, but my primary question is, "Is it possible" to take copies of Oracle files while Oracle is still running, and to use those files at a later date to restore the database? I know that it works if the database is shut down and copies are then made, but this is the first I've heard that a file-system copy can be made while the database is running. Any guidance would be greatly appreciated. Here is the error that we are getting:
ORA-00314: log 3 of thread 1, expected sequence# 1939 doesn't match 1944
ORA-00312: online log 3 thread 1: 'E:\ORACLE\ORADATA\ITMS\REDO03.LOG'
Yes, it is possible, but you have to put all the tablespaces into backup mode first and take them out afterwards (e.g. ALTER TABLESPACE x BEGIN BACKUP and ALTER TABLESPACE x END BACKUP; you'll need to check the syntax and make sure it's appropriate for your situation!). Oversimplifying hugely, this tells Oracle not to write to any of the data files, so they're all kept in a consistent state.
The two main problems you get otherwise are that individual files are updated while you're copying them so a single file can be corrupted; and more visibly that different files have different internal timestamps and sequences so Oracle won't allow them to be used.
If you're using a process you've bought in, then it should already be taking care of all that, though. It sounds like the backup is OK and it's the restore that you haven't got working.
I haven't been involved in a restore from a hot backup for some time so someone else will need to give the detail on the actual error. My read of it is that you've tried to open with the restored data files but the later live redo logs. When restoring I think you either have to RECOVER the database using the redo logs generated since the backup was taken; or if you're trying to restore to that point in time then you can open the data with the RESETLOGS directive and lose all the changes from all the redo logs that came later. But really take more informed advice than this...
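For what it's worth, the restore side of a hot (BEGIN/END BACKUP) copy usually looks something like this (a sketch, not tailored to your files):
-- copy the backed-up datafiles back into place, then:
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE UNTIL CANCEL
-- supply the archived logs it asks for; add USING BACKUP CONTROLFILE if the controlfile was restored too
SQL> ALTER DATABASE OPEN RESETLOGS;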
As far as I know, there are two ways that you can "copy" datafiles from a running Oracle instance:
1. The datafiles for a tablespace are copied while the tablespace is in "BEGIN BACKUP" mode.
2. You are using a high-end journalling filesystem, such as Veritas, that can snapshot and track block changes on the filesystem while the copy is taking place.
It is possible, but you must be in ARCHIVELOG mode.
An example script for a manual backup would be:
Alter tablespace USERS begin backup;
host cp -p /u02/oradata/PROD/users01.dbf /u03/backup/PROD/
host cp -p /u02/oradata/PROD/users02.dbf /u03/backup/PROD/
Alter tablespace USERS end backup;
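A usable copy also needs the archived logs and a control file backup, along the lines of (paths assumed):
Alter system archive log current;
Alter database backup controlfile to '/u03/backup/PROD/control.bkp';
host cp -p /u02/oradata/PROD/arch/* /u03/backup/PROD/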
However, I would recommend just using RMAN. RMAN is QUITE ROBUST, included free, and will do hot backups as well as cold. It will clone to another instance, clone to a point in time, recover to a certain point in time, etc. Any manual backup procedure should be migrated to RMAN.
If you wanted to back up the entire database while it is open (I prefer connecting as the oracle OS user with OS authentication so you can avoid passwords in scripts, but YMMV):
$ ORAENV_ASK=NO
$ ORACLE_SID=PROD
$ . oraenv
$ rman target=/
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Oct 28 14:23:29 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: PROD (DBID=x)
RMAN> backup as compressed backupset database plus archivelog;
...
Backup Complete.
I've effectively done this with a non-mission-critical database running on Amazon EC2. My backup strategy is to periodically take a snapshot of the EBS volume. To restore a backup, I create a new EBS volume from the snapshot, start up the instance using it, then run RECOVER DATABASE.
This loses any transactions that were in-flight at the time when the snapshot was taken, of course.
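In outline, the restore side of that approach is just (assuming the snapshot volume also holds the redo and archived logs):
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE
SQL> ALTER DATABASE OPEN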

How to duplicate an Oracle instance?

How can I duplicate an Oracle instance? Does anyone have any idea how to do so?
Assuming you want the schema and data duplicated, use the exp and imp commands to export your database, then import it as another user using the FROMUSER and TOUSER parameters.
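For example (usernames, password, and file name are made up; the target user must already exist):
exp system/manager OWNER=appuser FILE=appuser.dmp
imp system/manager FROMUSER=appuser TOUSER=appuser_copy FILE=appuser.dmp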
Well, presumably you have a backup (surely!), so just test your backup recovery on your test server.
To be slightly more serious, it depends what version you are using; newer versions of RMAN make it pretty easy, I believe, while with older versions you basically do it as a backup and recovery.
How I've done it in the past is basically:
copy the backup data files
create an init file
create a new controlfile with the command 'CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS ARCHIVELOG'
apply archivelogs and then open with RESETLOGS
Here is an article which explains the process with a bit more detail
A minor comment on your terminology - "instance" is actually the set of processes running on the database server host and you want to duplicate the "database".
As someone else mentioned, the best way is to start with an RMAN backup of the original database. However, since Oracle 9 RMAN has had the "DUPLICATE DATABASE" command, which takes care of a lot of housekeeping that used to be necessary if you just made a copy by restoring a production backup (e.g. resetting DBID, changing data and log file locations in the control file, setting database GLOBAL_NAME, etc.).
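A minimal DUPLICATE run looks something like this (the connect strings and database name are placeholders; the auxiliary instance must already be started NOMOUNT with a minimal init.ora):
$ rman TARGET sys@prod AUXILIARY sys@test
RMAN> DUPLICATE TARGET DATABASE TO test NOFILENAMECHECK;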
If you're not using RMAN, and the database is on the small side, you can script something that puts each tablespace in hot backup mode, copies the datafiles for that tablespace to a backup location, and then takes the tablespace out of hot backup mode. You now have a recoverable backup that can be moved to another host for archive log application. This definitely has a performance impact on the original database and should be your last resort.
Create a DBCA template based on your existing database; you can then use it to create other instances.
