Oracle Database object file backup

How can I take a backup of all database objects (table schemas, procedures, functions)
and store it in a Windows file location on my machine every night? I'm connecting with PL/SQL Developer to an Oracle server located at a different site.
In short: the backup should live on my machine rather than on the server. Any ideas?

Assuming you installed them as part of your Oracle client installation, you could use the Oracle Export and Import utilities to create a logical backup on your client machine.
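For example, the classic exp utility writes its dump file on the machine where it runs, so a scheduled task on your Windows PC can pull a nightly logical backup across the network. A minimal sketch, assuming exp is on your client PATH and a TNS alias REMOTEDB exists; scott/tiger and C:\backup are placeholders:
@echo off
rem nightly_exp.bat -- hypothetical nightly schema export to local disk
rem %date% parsing is locale-dependent; adjust the substring offsets for your region
set STAMP=%date:~-4%%date:~3,2%%date:~0,2%
exp scott/tiger@REMOTEDB OWNER=scott FILE=C:\backup\scott_%STAMP%.dmp LOG=C:\backup\scott_%STAMP%.log
Schedule it with Task Scheduler, e.g.:
schtasks /create /tn OracleNightlyExport /tr C:\backup\nightly_exp.bat /sc daily /st 02:00
(Note that Data Pump's expdp writes its dump files on the server side, which is why the older exp is the simpler fit for a client-side copy.)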
On the other hand, I would strongly question the wisdom of this requirement. Your DBA ought to be quite concerned about someone generating regular exports of the database that are not under the same controls as the normal backups, to prevent them from falling into the wrong hands. You're also copying all the data from the database over the network on a regular basis, which is going to put a substantial load on the database and the network, and is likely to draw the attention of DBAs and network admins.

To back up backup sets from disk to tape:
If you are backing up a subset of available backup sets, then execute the LIST BACKUPSET command to obtain their primary keys.
The following example lists the backup sets in summary form:
RMAN> LIST BACKUPSET SUMMARY;
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Comp Tag
--- -- -- - ----------- --------------- ------- ------- ---- ---
1   B  F  A DISK        28-MAY-07       1       1       NO   TAG20070528T132432
2   B  F  A DISK        29-MAY-07       1       1       NO   TAG20070529T132433
3   B  F  A DISK        30-MAY-07       1       1       NO   TAG20070530T132434
The following example lists details about backup set 3:
RMAN> LIST BACKUPSET 3;
List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3       Full    8.33M      DISK        00:00:01     30-MAY-07
        BP Key: 3   Status: AVAILABLE   Compressed: NO   Tag: TAG20070530T132434
        Piece Name: /disk1/oracle/dbs/c-35764265-20070530-02
  Control File Included: Ckp SCN: 397221      Ckp time: 30-MAY-07
  SPFILE Included: Modification time: 30-MAY-07
  SPFILE db_unique_name: PROD
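Once you have the keys, the disk-to-tape copy itself is a single command. A minimal sketch, assuming an SBT (media manager) channel is configured; backup set 3 is the one listed above:
RMAN> BACKUP DEVICE TYPE sbt BACKUPSET 3;
To copy all backup sets and remove the disk pieces once they are on tape:
RMAN> BACKUP DEVICE TYPE sbt BACKUPSET ALL DELETE INPUT;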


GoldenGate extract process not working

My extract process is not running; below are the errors found. Kindly suggest how to get all processes up and running.
GGSCI (pltv015) 3> info all
Program  Status   Group    Lag at Chkpt  Time Since Chkpt
MANAGER  RUNNING
EXTRACT  ABENDED  EXTEMP   00:00:04      05:46:53
EXTRACT  RUNNING  PUMPEMP  00:00:00      00:00:03
REPLICAT STOPPED  REP507   00:00:00      00:18:08
REPLICAT ABENDED  REPTEST  00:00:00      2527:29:44
For EXTEMP:
2020-07-31 06:59:39 ERROR OGG-06601 Mismatch between the length of seqno from checkpoint (9) and recovery (6) for extract trail /opt/app/t1c2d507/ggs/t1c2d507/trails/p1
For REP507:
2020-07-31 06:59:37 ERROR OGG-00664 OCI Error beginning session (status = 1017-ORA-01017: invalid username/password; logon denied).
2020-07-31 06:59:37 ERROR OGG-01668 PROCESS ABENDING.
2020-07-31 06:59:39 ERROR OGG-06601 Oracle GoldenGate Capture for Oracle, extemp.prm: Mismatch between the length of seqno from checkpoint (9) and recovery (6) for extract trail /opt/app/t1c2d507/ggs/t1c2d507/trails/p1.
Just in case it might help you: the following workaround applies only to Oracle GoldenGate version 12.2.0.1.0, on any platform.
A GG version 12.2 pump fails with this error:
ERROR OGG-06601 Mismatch between the length of seqno from checkpoint (9) and recovery (6) for extract trail /path_to_the_trail/
It is trying to read a trail file that uses a 6-digit sequence number, while version 12.2 uses a 9-digit one. The same error can also appear when the trail files actually have the same sequence length; in that case the error message is incorrect, caused by the bug with code 25439681.
If the error "Mismatch between the length of seqno from checkpoint (9) and recovery (6) for extract trail" is seen and the filename lengths are the same, then this bug may have been encountered. Note that this message masks the real error message, so the fix in Bug 25439681 does not resolve the underlying error but makes sure the correct error is reported.
Workaround
PART I
1. Stop the pump.
2. Stop Manager.
3. Add the following to your GLOBALS file:
TRAIL_SEQLEN_6D
REASON: this tells GG to use a 6-digit sequence number.
4. Start Manager.
5. Alter the pump with ETROLLOVER.
6. Start the pump.
7. Allow the pump to read the local trail files and write them to the remote trail file.
8. Allow the replicat to process all transactions. The replicat should show 0 lag, indicating that all transactions from the source have been processed on the target database.
REASON: this drains the existing trail files, created by a release prior to GG version 12.2, which still use a 6-digit sequence number. (A GGSCI sketch of these steps follows this list.)
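In GGSCI terms, PART I looks roughly like this (a sketch assuming the pump group is PUMPEMP, as in the INFO ALL output above; the ! skips the confirmation prompt):
GGSCI> STOP EXTRACT PUMPEMP
GGSCI> STOP MANAGER!
(edit the GLOBALS file and add the line TRAIL_SEQLEN_6D)
GGSCI> START MANAGER
GGSCI> ALTER EXTRACT PUMPEMP, ETROLLOVER
GGSCI> START EXTRACT PUMPEMP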
PART II
Assuming PART I completed without problems, you then need to perform some tasks on both the source and the target.
On Source
1. Remove TRAIL_SEQLEN_6D from GLOBALS.
2. ALTER EXTRACT E1, ETROLLOVER (where E1 is the name of the extract that creates the local trail file). REASON: ETROLLOVER is needed to convert the sequence number from 6 digits to the 9 digits used by GG version 12.2.
3. Use the following to display the new sequence number of the local trail file:
info extract E1, detail
or
info extract E1, showch
Write Checkpoint #1
Current Checkpoint (current write position):
Sequence #: xx
where xx = the new sequence number of the local trail file.
4. ALTER EXTRACT P1, EXTSEQNO xx, EXTRBA 0 (where xx = the new sequence number of the local trail file and P1 is the name of your pump). This handles the input trail. REASON: this tells the pump to read from the new local trail file created in step 2.
5. ALTER EXTRACT P1, ETROLLOVER. This handles the output trail. REASON: this tells the pump to create and write to a new remote trail file.
6. Use the following to display the new sequence number of the remote trail file (this is the pump's write checkpoint, so query the pump P1, not E1):
info extract P1, detail
or
info extract P1, showch
Write Checkpoint #1
Current Checkpoint (current write position):
Sequence #: yy
where yy = the new sequence number of the remote trail file.
On Target
7. ALTER REPLICAT R1, EXTSEQNO yy, EXTRBA 0 (where yy = the new sequence number + 1 of the remote trail file).
Go back to Source
8. Allow changes to be made to the source tables involved with GG.
9. Perform an insert or update and verify that it gets replicated to the target.
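To confirm step 9 end-to-end, something like the following on the target may help (a sketch; R1 is the hypothetical replicat name used above):
GGSCI> INFO ALL
GGSCI> STATS REPLICAT R1, TOTAL
INFO ALL should show every group RUNNING with low lag, and the STATS totals should reflect the test insert or update.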
UPDATE
To update the password of the GGADMIN user:
Step 1: check the GoldenGate user:
SQL> select username, account_status from dba_users where username like 'GG%';
USERNAME                       ACCOUNT_STATUS
------------------------------ --------------------------------
GGADMIN                        OPEN
Step 2: change the password in the database first:
SQL> alter user GGADMIN identified by newpassWORD;
Step 3: encrypt the new password in GGSCI:
ENCRYPT PASSWORD newpassWORD ENCRYPTKEY DEFAULT
AACAAAAAAAAAAAIAWIVENGVBBFXEFEQH
Step 4: use the encrypted password to log in:
dblogin userid GGADMIN, password AACAAAAAAAAAAAIAWIVENGVBBFXEFEQH, encryptkey default
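One step the note leaves implicit: any extract or replicat parameter file that logs in as GGADMIN also needs the new encrypted password, and the affected processes must be restarted to pick it up. A sketch of the relevant parameter line (the key string is the example output from step 3):
USERID GGADMIN, PASSWORD AACAAAAAAAAAAAIAWIVENGVBBFXEFEQH, ENCRYPTKEY DEFAULT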

What is the file ORA_DUMMY_FILE.f in Oracle?

Oracle version: 12.2.0.1
As you know, these are the Unix processes for the parallel servers in Oracle:
ora_p000_ora12c
ora_p001_ora12c
....
ora_p???_ora12c
They can also be seen in the view gv$px_process, which gives the SPID of each parallel server.
Then I look for the open files associated with the parallel server:
ls -l /proc/<spid>/fd
I'm finding around 500-10000 file descriptors for several parallel servers, all like this one:
991 -> /u01/app/oracle/admin/ora12c/dpdump/676185682F2D4EA0E0530100007FFF5E/ORA_DUMMY_FILE.f (deleted)
I've closed them using the following (actually I've created a small script to do it, because there are thousands of them):
gdb -p <spid>
gdb> p close(<fd_id>)
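For reference, the script could be along these lines (a hypothetical sketch; it attaches gdb once per leaked descriptor, so it is slow but works; run it as the oracle OS user or root):
#!/bin/bash
# close_dummy_fds.sh <spid> -- close fds pointing at the deleted ORA_DUMMY_FILE.f
spid=$1
for fd in /proc/"$spid"/fd/*; do
  # /proc links to deleted files carry a " (deleted)" suffix
  if readlink "$fd" | grep -q 'ORA_DUMMY_FILE.f (deleted)'; then
    gdb --batch -p "$spid" -ex "call (int)close(${fd##*/})"
  fi
done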
But after some hours the file descriptors start being created again (hundreds every day).
If they are not closed, then eventually the Linux limit is reached and any parallel query throws an error like this:
ORA-12801: error signaled in parallel query server P001
ORA-01116: error in opening database file 132
ORA-01110: data file 132: '/u02/oradata/ora12c/pdbname/tablespacenaname_ts_1.dbf'
ORA-27077: too many files open
Does anyone have any idea of how and why these file descriptors are being created, and how to avoid it?
Edited: Added some more information that could be useful.
I've verified that when a new PDB is created, a DATA_PUMP_DIR directory object is created in it (visible via select * from all_directories), pointing to:
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>
The Linux directory is also created.
Also, one file descriptor is created pointing to ORA_DUMMY_FILE.f in the new dpdump subdirectory, like the ones described initially:
lsof | grep "ORA_DUMMY_FILE.f (deleted)"
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>/ORA_DUMMY_FILE.f (deleted)
This may be OK; the problem I face is the continuous growth of the file descriptors pointing to ORA_DUMMY_FILE.f, which eventually reaches the Linux limits.

Vertica - is there a way of retrieving the rejected records by code?

The "REJECTMAX" parameter is a technique of executing copy command even though there are invalid records in the csv
(so if i have 100 records, 9 of them are invalid & max rejected is 10 the file will upload)
I wonder if there is a way that i can get as a text the rejected records that prints into the rejected file so i can log it into application error log.
Here is an example of how to use REJECTED DATA. Suppose you have a table like this:
SQL> CREATE TABLE public.mydata ( id INTEGER ) ;
CREATE TABLE
and an input file containing:
$ cat /tmp/mydata
1
2
3
ABC
4
5
Clearly ABC won't fit into an integer...
So we run:
SQL> COPY public.mydata FROM '/tmp/mydata' REJECTMAX 2 REJECTED DATA '/tmp/mydata.rejected' ;
NOTICE 7850: In a multi-threaded load, rejected record data may be written to additional files
HINT: Rejected data may be written to files [/tmp/mydata.rejected], [/tmp/mydata.rejected.1], etc
Rows Loaded
-------------
5
And now...
$ cat /tmp/mydata.rejected
ABC
Is this what you were looking for?
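If you want the rejects queryable from code rather than in a flat file, recent Vertica versions also support REJECTED DATA AS TABLE; a sketch, where mydata_rejects is an arbitrary table name:
SQL> COPY public.mydata FROM '/tmp/mydata' REJECTMAX 2 REJECTED DATA AS TABLE mydata_rejects ;
SQL> SELECT rejected_data, rejected_reason FROM mydata_rejects ;
The rejection table records each rejected row together with the reason it was rejected, which you can then read straight into your application's error log.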

Oracle - redo sequence number is different from the Oracle server's expected sequence number

I have an Oracle database with problems preventing it from opening.
To overcome the issues, I tried the following steps:
First I mounted the database:
SQL> startup mount;
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.
Total System Global Area 1.2560E+10 bytes
Fixed Size 2171344 bytes
Variable Size 6878662192 bytes
Database Buffers 5670699008 bytes
Redo Buffers 8601600 bytes
Database mounted.
After that, I tried to recover the database as below:
SQL> recover database until cancel;
ORA-00279: change 338584095 generated at 11/22/2016 08:41:55 needed for thread 1
ORA-00289: suggestion : /oracle/app/product/11g/db/dbs/arch1_9218_833801667.dbf
ORA-00280: change 338584095 for thread 1 is in sequence #9218
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/app/oradata/ora11g/system01.dbf'
ORA-01112: media recovery not started
After this I tried to open the database with RESETLOGS, as below:
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/app/oradata/ora11g/system01.dbf'
and finally I tried recovering the system01 datafile as below:
SQL> recover datafile 1;
ORA-00283: recovery session canceled due to errors
ORA-00314: log 2 of thread 1, expected sequence# 9218 doesn't match 9215
ORA-00312: online log 2 thread 1: '/oracle/app/oradata/ora11g/redo02.log'
As you can see in the final error, "ORA-00314: log 2 of thread 1, expected sequence# 9218 doesn't match 9215", there is a sequence mismatch between the logfile redo02.log and what the server expects.
How can this mismatch occur, and what can I do to fix it?
PS: since the database cannot be opened, I cannot switch logfiles, and since redo02.log is the current logfile, I cannot drop or clear it.
SQL> select * from v$log;
GROUP#  THREAD#  SEQUENCE#  BYTES     MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIME
------  -------  ---------  --------  -------  ---  --------  -------------  ----------
     1        1          0  52428800        1  NO   UNUSED        338564041  22-NOV-16
     3        1          0  52428800        1  NO   UNUSED        338544000  22-NOV-16
     2        1       9218  52428800        1  NO   CURRENT       338584094  22-NOV-16

Oracle generates lots of .dbf files

When I arrived at the office this morning, our Oracle 10.2 server was out of disk space. On closer inspection I found that one to four or more .dbf files are generated every minute (e.g. 1_1278092_658232789.dbf, 1_1278093_658232789.dbf, etc.). I created a bit of space, but Oracle still creates these files without deleting the old ones. It seems to have started about 35 hours ago. How do I restore the server to normal? Please note that I am not an Oracle DBA and have limited Oracle knowledge.
Edit 1:
First, I managed to clear about 270GB of space with the following, which allowed the server to keep running:
RMAN> CROSSCHECK BACKUP;
RMAN> DELETE ARCHIVELOG ALL;
To answer ora-600's questions:
In which path does Oracle create those files?
/home/oracle/archive/
(which is also the value of log_archive_dest_1)
DB_CREATE_FILE_DEST (parameter for datafiles)
This does not seem to have been set ("show parameter DB_CREATE_FILE_DEST" shows no value), but the database files are in
/home/oracle/app/oracle/product/oradata/irs3
DB_RECOVERY_FILE_DEST (parameter for FRA) -- which sub directory?
sys#iris > show parameter DB_RECOVERY_FILE_DEST
NAME                                 TYPE         VALUE
------------------------------------ -----------  ------------
db_recovery_file_dest                string       /home/backup
db_recovery_file_dest_size           big integer  2500G
I suspect that these are flashback logs. If so, you should limit the flash recovery area (FRA) by setting the parameter DB_RECOVERY_FILE_DEST_SIZE to a smaller value. Oracle keeps writing flashback logs until the FRA is out of space... then it starts removing/overwriting old files.
Well, the previous DBA did set this to a very high value and now it is full. E.g. look at:
sys#iris > SELECT NAME, (SPACE_LIMIT/1024/1024) || 'MB' AS SPACE_LIMIT,
((SPACE_LIMIT - SPACE_USED + SPACE_RECLAIMABLE)/1024/1024) || 'MB' AS SPACE_AVAILABLE,
ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1)
AS PERCENT_FULL
FROM V$RECOVERY_FILE_DEST;
NAME          SPACE_LIMIT  SPACE_AVAILABLE  PERCENT_FULL
/home/backup  2560000MB    940MB            100
But RMAN now spits errors like these in its log files:
....
input archive log thread=1 sequence=1278543 recid=1271197 stamp=866048159
input archive log thread=1 sequence=1278544 recid=1271198 stamp=866048232
channel ORA_DISK_1: starting piece 1 at 11-DEC-14
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 12/11/2014 22:07:20
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 2691888128 bytes disk space from 2684354560000 limit
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1278907 recid=1271561 stamp=866062135
....
Even though there is space on the drive:
-bash-3.2$ df -h
Filesystem Size Used Avail Use% Mounted on
....
/dev/vg01/lvol1 684G 365G 317G 54% /home
Why does the query above report the FRA as full, even though there is space available on the drive?
Below is more info, if needed.
Thanks.
Nico
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/home/backup/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/10/dbs/snapcf_irs3.f'; # default
Thanks for the details; that helps to identify the problem.
I think you have 2 problems.
The 1st problem is that the database keeps creating these small .dbf files. This by itself is not a problem, but the files need to be dealt with correctly. These files are called "archivelogs". When a database is in archivelog mode (required for online backup), it creates a copy of a redolog every time one fills up. During your daily backup you should back up and delete the archivelogs.
The 2nd problem is the large amount of reclaimable space in the FRA.
The FRA has a logical limit, expressed by DB_RECOVERY_FILE_DEST_SIZE. When Oracle creates a file in the FRA, it is registered in the controlfile as well. This means you must always delete files from the FRA with RMAN. I think you know this, since you deleted archivelogs with RMAN and not with "rm -f".
Your query showed 100% as a result of: (SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100
This means all files in the FRA are reclaimable. They might not even exist physically anymore, which means they are expired. The second option is that they exist but are obsolete according to the "RETENTION POLICY TO REDUNDANCY 1" rule.
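To see which file types are consuming the space, and how much of it is reclaimable, you can also query V$FLASH_RECOVERY_AREA_USAGE (available in 10g and later):
SQL> SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
FROM v$flash_recovery_area_usage;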
Solution:
I think you should adjust the backup concept a bit.
a) First of all run the following rman commands:
crosscheck archivelog all;
crosscheck backup;
delete noprompt expired archivelog all;
delete noprompt expired backup;
delete obsolete;
b) Configure the parameter DB_RECOVERY_FILE_DEST_SIZE to an appropriate value. It depends on how many databases you have on the server and how much space is used from the /home directory for other stuff. I would say choose a value between 300GB and 600GB.
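For example (the 400G figure is only an illustration; choose a value that fits your disk):
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 400G SCOPE=BOTH;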
c) Adjust the backup scripts:
RMAN should run the commands mentioned in a) in the daily backup job.
With this setup you should never have much reclaimable space in the FRA (unless you have enabled flashback functionality; check with "select flashback_on from v$database;").
You may have to adjust some of the following commands, but this is a default RMAN script that includes self-cleaning:
crosscheck archivelog all;
backup database;
backup archivelog all delete input;
crosscheck backup;
delete noprompt expired archivelog all;
delete noprompt expired backup;
delete obsolete;
This backup script cleans up expired entries from the controlfile, backs up archivelogs + deletes them and deletes old backups which are no longer needed.
To tell RMAN which backups are no longer needed, configure the RETENTION POLICY. I prefer a recovery window over redundancy:
RMAN> CONFIGURE RETENTION POLICY TO recovery window of 2 days;