Is there any way to change the backup retention? I have not found a way to change it and want to confirm whether it is possible.
ADB stores backups (database and archivelogs) for 60 days as documented here. You can restore a database in-place, or you can create a new database from any point in time in the last 60 days.
Currently, there's no way to change the backup retention.
As an alternative, you can do Data Pump exports to the object store and keep the dump files based on your retention requirements.
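As a rough sketch (the schema name, credential name, dump-file names, and bucket URL below are placeholders, and the credential is assumed to have been created beforehand with DBMS_CLOUD.CREATE_CREDENTIAL), you can export to the database's DATA_PUMP_DIR and then copy the dump file to Object Storage:

$ expdp admin@mydb_high schemas=MYSCHEMA directory=DATA_PUMP_DIR dumpfile=myschema_%U.dmp logfile=myschema_exp.log

-- copy the resulting dump file from DATA_PUMP_DIR to your Object Storage bucket
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/backups/o/myschema_01.dmp',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'myschema_01.dmp');
END;
/

You can then age out the files in the bucket according to your own retention requirements.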
We have an incident at a customer site where Oracle archive logs are filling up the disk.
We want to find out why this is happening. Does anyone have any SQL or scripts to identify which process, user, etc. is generating so much redo?
"Why it's happening is simple. The app is generating more redo (insert, update, delete) than the current archivelog disk space can handle. Either they have a rogue process, or they are not performing necessary housekeeping on the archivelogs.
As to the rogue process, the best means of identifying it is probably with logminer. Logminer if fully documented, here.
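A minimal LogMiner session against one archived log might look like this (the log file path is just an example, and DICT_FROM_ONLINE_CATALOG assumes the source database is open):

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/1_12345_1122334455.arc',  -- example archived log
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- which users and segments produced the most change records in that log?
SELECT username, seg_owner, seg_name, operation, COUNT(*) AS changes
FROM   v$logmnr_contents
GROUP  BY username, seg_owner, seg_name, operation
ORDER  BY changes DESC;

EXEC DBMS_LOGMNR.END_LOGMNR;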
As for the housekeeping, they should be taking regular RMAN backups of the archive logs with the DELETE INPUT option. At a minimum they should be doing this daily. For a very active database, it is not out of the question to take that backup more frequently, as needed to stay ahead of redo generation.
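For example, a minimal daily RMAN job for that housekeeping would be something like:

# back up every archived log and delete it from disk once it is safely in the backup
BACKUP ARCHIVELOG ALL DELETE INPUT;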
Also, don't overlook the possibility (probability?) that their archivelog destination is simply too small for their needs.
Also check that, if they are using the FRA, they are really using the FRA. If their backup destination format specifies the actual name of a directory that happens to be under the FRA, then the FRA mechanism doesn't know anything about that space consumption, and it won't do its own housekeeping.
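To see whether the FRA is actually being used, and how full it is, the standard views help:

-- where the FRA is and how much of it is used
SELECT name, space_limit, space_used, space_reclaimable
FROM   v$recovery_file_dest;

-- breakdown of FRA usage by file type (archived logs, backup pieces, flashback logs, ...)
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$recovery_area_usage;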
I want to know if I can configure a retention policy per database.
I have several databases, about 100, and I need a different retention policy for each database. Is that possible?
If possible, how do I see the retention policies per database?
SELECT value FROM v$rman_configuration WHERE name = 'RETENTION POLICY';
The above SQL returns only a single row.
You can add a line to your backup script, based on which database you are connected to. For example:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
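Since the policy is stored per target database, a different retention per database just means running a different CONFIGURE command against each target. A rough sketch, assuming hypothetical connect strings DB1/DB2 and one command file per database:

$ rman target sys@DB1 cmdfile=db1_backup.rcv
$ rman target sys@DB2 cmdfile=db2_backup.rcv

# db1_backup.rcv
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
BACKUP DATABASE PLUS ARCHIVELOG;

# db2_backup.rcv
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
BACKUP DATABASE PLUS ARCHIVELOG;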
It very much depends on what you mean by "database":
RMAN only works with physical database files for the entire "database", i.e. all users/schemas contained in those files. Your retention policy in RMAN then applies to all data files and all users/schemas within those files. Each physical Oracle database (all tablespaces, archive logs, etc.) can have its own retention policy, which is stored in the database control files or in the RMAN repository. There will only ever be a single retention policy at any given time, which is what your query is showing.
If by "database" you mean individual logical schemas within a physical database, then the answer is "no", you cannot have separate RMAN retention policies at that level.
We are working on a new release of our product, and we want to implement a feature where a user can view older data for a disease prediction made by the application. For example, the user would have an option to go back in time to see the predictions made one year ago. At the database level, what needs to happen is to fetch archived data. The number of tables in the database is around 200, and only a subset of them needs to go back to an older state.
I read about Flashback, and although it seems to be used more for recovery, I was curious to know whether it can be used here.
1> Would it be possible to use Flashbacks?
2> If yes, how would it affect performance?
3> If no what could be some other options?
Thank you
Flashback could be a way, but you need to use Flashback Data Archive for the tables you want. With this technology, you choose how long the history should be retained. What I find interesting about Flashback is that you query the same table (with some additional options), instead of the other option of creating a separate history table.
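A minimal sketch, assuming a tablespace FDA_TS for the history and a hypothetical PREDICTIONS table:

-- history store that keeps one year of row versions
CREATE FLASHBACK ARCHIVE predictions_fda TABLESPACE fda_ts RETENTION 1 YEAR;

-- start tracking the table; Oracle maintains the history automatically
ALTER TABLE predictions FLASHBACK ARCHIVE predictions_fda;

-- query the data as it looked one year ago
SELECT *
FROM   predictions AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR);

The history is captured in the background from undo, so the overhead on normal DML is modest; the cost shows up mainly as extra storage and in AS OF queries over long time ranges.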
My team is planning a very large set of updates to our apps soon, including some hefty DB updates (Oracle 11gR2). As I was writing scripts to revert all the DB updates (as a rollback contingency) and researching potential Oracle features, I came across this Oracle documentation. I see that flashback uses "flashback logs" to restore the DB to a specific state. I also see that restore points use the system change number to bookmark the DB.
This SO question says flashback will "return a table to the state it was in 10 minutes ago", but does that mean the data will be reverted too? (We have lots of reference tables as well.)
Would either of these Oracle features be useful to undo our DB updates while maintaining the integrity of our production data? It's unclear to me what the two features do in practice and how they are different.
The main difference is that flashback rolls back changes, including changes made by others, across the whole table or database, to any point in time in the past within the range of your flashback settings. Rolling back to a restore point only rolls back what you changed in your transaction; changes made by others won't be affected.
When you create a guaranteed restore point, the database keeps enough flashback logs to flash the database back to that restore point.
Guaranteed restore points do not expire and must be dropped manually with the DROP RESTORE POINT statement. If you do not drop them, the fast recovery area will grow indefinitely until the filesystem or disk group becomes full.
The actual rewind is done with FLASHBACK DATABASE TO RESTORE POINT.
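A sketch of the full cycle (the restore point name before_app_upgrade is arbitrary):

-- before the risky change
CREATE RESTORE POINT before_app_upgrade GUARANTEE FLASHBACK DATABASE;

-- if the change has to be backed out: mount the database and rewind it
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_app_upgrade;
ALTER DATABASE OPEN RESETLOGS;

-- either way, drop the restore point once it is no longer needed
DROP RESTORE POINT before_app_upgrade;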
I'd like to figure out the best way to archive data that is no longer needed, in order to improve application performance and also to save disk space. In your experience, what is the best way to implement this, and what kind of tools can I use? Is it better to develop a specific in-house application for this purpose?
One way to manage archiving:
Partition your tables on date range, e.g. monthly partitions
As partitions become obsolete, e.g. after 36 months, those partitions can be moved to your data warehouse, which could be another database or just text files, depending upon your access needs.
After moving, the obsolete partitions can be removed from your primary database, so always maintaining just (e.g.) 36 months of current data.
All this can be automated using a mix of SQL/Shell scripts.
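A rough sketch of that pattern, with hypothetical table and column names (SALES_HISTORY, SALE_DATE) and a pre-created empty table SALES_HISTORY_ARCH with the same columns:

-- range-partitioned table: one partition per month
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2020_01 VALUES LESS THAN (DATE '2020-02-01'),
  PARTITION p_2020_02 VALUES LESS THAN (DATE '2020-03-01')
  -- ...one partition per month
);

-- when a month ages out: swap its partition with the empty archive table,
-- ship sales_history_arch off to the warehouse, then drop the now-empty partition
-- (add UPDATE GLOBAL INDEXES to the DROP if the table has global indexes)
ALTER TABLE sales_history EXCHANGE PARTITION p_2020_01 WITH TABLE sales_history_arch;
ALTER TABLE sales_history DROP PARTITION p_2020_01;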
The best way to archive old data in an Oracle database is:
Define an archive and retention policy based on date or size.
Export the archivable data to an external table (tablespace) based on the defined policy (see the sketch after this list).
Compress the external table and store it on a cheaper storage medium.
Delete the archived data from your active database using SQL DELETE.
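A sketch of step 2, assuming a directory object ARCH_DIR that points at the archive storage and a hypothetical cut-off column CREATED_DATE:

-- unload the archivable rows into a Data Pump file via an external table
CREATE TABLE t_xyz_arch
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY arch_dir
    LOCATION ('t_xyz_2018.dmp')
  )
AS
SELECT * FROM t_xyz
WHERE  created_date < DATE '2019-01-01';

The resulting t_xyz_2018.dmp can then be compressed and moved to cheaper storage, and the same WHERE clause drives the DELETE in step 4.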
Then, to clean up the space, execute the commands below:
alter table T_XYZ enable row movement;
alter table T_XYZ shrink space;
If you still want to release some disk space back to the OS (as Oracle will have kept the total space it was previously using reserved in the datafile), then you may have to resize the datafile itself:
SQL> alter database datafile '/home/oracle/database/../XYZ.dbf' resize 1m;
For more details, please refer to:
http://stage10.oaug.org/file/sroaug080229081203621527.pdf
I would export the data to a comma-delimited file so it can be imported into almost any database. That way, if you change Oracle versions or move to something else years later, you can restore the data without much concern.
Use the spool file feature of SQL*Plus to do this: http://cisnet.baruch.cuny.edu/holowczak/oracle/sqlplus/#savingoutput
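A minimal SQL*Plus sketch (table and column names are placeholders; the concatenation builds the comma-delimited line explicitly so it loads cleanly elsewhere):

SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET TRIMSPOOL ON
SET LINESIZE 32767

SPOOL /tmp/predictions_archive.csv
SELECT prediction_id || ',' || patient_id || ',' || TO_CHAR(predicted_on, 'YYYY-MM-DD')
FROM   predictions_archive;
SPOOL OFF

On 12.2 and later clients, SET MARKUP CSV ON can do the comma-delimiting for you instead of the manual concatenation.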