Database backup: drop transaction logs, shrink, drop nonclustered indexes, rebuild tables with fill factor 100%, compress

Let me start by saying I am a developer and I am just taking database backups for bug testing/fixing.
I want to reduce the size of the backups I am taking, because at the moment it would be faster to have the backups posted to me than to transfer them over the intranet, and the first thing I do after restoring any backup on my development system is drop and shrink the transaction logs anyway.
Is there a way, using SQL Server Management Studio 2005, to take a backup that doesn't include the transaction log or the nonclustered indexes, rebuilds the tables so they have a fill factor of 100%, and then compresses the backup file?
Or at least is there a way I can take a backup that doesn't include the transaction log?

I got a script working that backed up the database, changed the fill factor on the tables, dropped the nonclustered indexes, and generated a script to recreate them when restoring, but it didn't make a lot of difference to the final size after compression, so I stuck to just backing up the database and then using 7zip to compress it.
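For reference, a minimal T-SQL sketch of the kind of script described above, using hypothetical database, table, and file names (SQL Server 2005 has no native backup compression, so an external compressor such as 7zip is still needed afterwards):
-- Hypothetical names: MyDatabase, dbo.MyTable, MyDatabase_log
USE MyDatabase;
GO
-- Rebuild the indexes with a 100% fill factor (repeat per table as needed)
ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (FILLFACTOR = 100);
GO
-- Switch to SIMPLE recovery and shrink the log file before backing up
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDatabase_log, 1);
GO
-- Take the full backup; the .bak contains only used pages plus minimal log
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak' WITH INIT;
GO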


Is it possible to add an existing tablespace datafile to a new tablespace?

Background: a hard drive died in an existing Oracle 12cR2 server, but I was able to recover all the previous tablespace datafiles from backup, including SYSTEM01.DBF and USERS01.DBF. I created a new Oracle 12cR2 database server and would like to know whether I can recover any of the data in those tablespaces.
Thanks.
If you have Oracle running in archive log mode and have a recent backup complete with all archives up to the crash: yes, you can. After that you can use various methods to move the data to the new database.
The question that remains is: why move it to another database when you were able to recover the original one? The recovered database is as good as (or even better than) the new one.
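For completeness, a minimal RMAN sketch of recovering the original database from that backup plus archives (this assumes the backup is cataloged and the instance can start; verify against your own configuration before relying on it):
$ rman target=/
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;    # applies the archived logs up to the point of the crash
RMAN> alter database open;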

Is it possible to apply archivelogs from standby DB to Production DB?

I lost two hard disks yesterday. One contained an Oracle datafile and the other contained part of the archive logs generated over the past 2 days (say, arch_5.dbf and arch_6.dbf are lost, out of the set arch_1 to arch_10).
I have switched over to my standby site as part of business continuity plan.
Now, I have to recover the lost datafile. It requires the two missing archive log files for recovery.
Is it possible to apply the same set of archive logs from the standby to production, in order to recover the datafile?
Kindly advise.
~SK~
It might prove a bit easier to use RMAN incremental backups to refresh a standby database (see "Use RMAN Incremental Backups to Refresh a Standby Database" in the Oracle documentation).
You could use the archives that were transported to the standby site, but they won't help with the recovery of the lost datafiles unless their creation is logged in those archives.
Using the incremental backup option is easier; a rough sketch follows.
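Heavily hedged sketch of that roll-forward procedure (the SCN, paths, and tag below are placeholders, the datafiles must already exist on the side being refreshed, and the exact steps, including refreshing the controlfile, should be taken from the documentation for your release):
# On the surviving (current primary) database, using the lagging database's checkpoint SCN
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/refresh_%U' TAG 'REFRESH';
# Transfer the backup pieces, then on the database being refreshed:
RMAN> CATALOG START WITH '/tmp/refresh';
RMAN> RECOVER DATABASE NOREDO;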

Backup copy of an Oracle DB server

Dear all,
I manage an Oracle DB with multiple tables that contain hundreds of millions of records with a limited retention time (telco CDRs).
I plan to make a backup copy of the DB server, but I don't need to include the records in the backup.
This backup will be useful in case the server crashes.
The data itself is not important to me, and I don't have enough space in our archive tape library; all I need is to get the server up and running again ASAP.
Kindly share your experience.
Thanks in advance.
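Not from this thread, but one possible sketch of capturing the database structure without the records is a metadata-only Data Pump export (the directory object and file names below are hypothetical; the Oracle binaries, spfile, password file, and network configuration would still need to be backed up separately):
expdp system full=y content=metadata_only directory=DP_DIR dumpfile=db_structure.dmp logfile=db_structure.log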

IMPDP uses more disk space than expected

Background:
I've been tasked with importing a large amount of data from a production database to a test database (Oracle 12c release 2 running on RHEL) and I'm using Data Pump.
The first time I imported the tables, the tables were created and the data was imported as planned, but, due to an issue in my Data Pump parameter file, the constraints were not imported.
My subsequent attempts did not go as well, however. Data Pump began to freeze partway through, and the STATUS command showed that no bytes were being processed.
My Solution Attempts:
I tried using TABLE_EXISTS_ACTION=REPLACE and dropping the tables directly after an attempt. I also dropped the master tables of any data pump jobs I was unable to terminate from the utility.
Still, it seemed to hang earlier and earlier in the process as I repeatedly tried to import the tables. df -h showed 100% disk usage every time it hung.
The dump file itself is on a separate drive, so it isn't taking up room. I've been trying to clear out space, but the disk keeps filling up when I run a job and I can't tell where the space is going. Oracle flashback is disabled and I made sure to purge the Oracle recycle bin.
tl;dr:
Running impdp jobs seems to use up disk space beyond the imported tables and the job master tables. Where is this space getting used up, and what can I do to clear it for a successful import?
I figured out the problem:
The database was in ARCHIVELOG mode in order to set up Streams and Recovery Manager backups. As a result, impdp was generating a flood of archived redo.
In order to clear out the old archives, I ran the following in RMAN for every database in NOARCHIVELOG mode on the server.
connect target /
run {
allocate channel c1 type disk;
delete force noprompt archivelog until time 'SYSDATE-30';
release channel c1;
}
This cleared up 60 gigabytes. I also added the parameter transform=disable_archive_logging:Y to my impdp parameter file. This suppresses most redo (and therefore archive) generation while running Data Pump imports.
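For illustration, a minimal parameter file using that transform (the directory object, dump file, and table names are hypothetical):
# import.par
directory=DP_DIR
dumpfile=prod_tables.dmp
logfile=prod_tables_imp.log
tables=APP.BIG_TABLE
table_exists_action=replace
transform=disable_archive_logging:Y
It would be invoked with impdp system parfile=import.par. Note that this transform is ignored when the database is in FORCE LOGGING mode.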

Problem when backing up Oracle 10g

I've just started a job and have identified an issue: the database isn't currently being backed up properly, so to speak. We are doing one backup every 6 hours that uses the Oracle native backup utility, but we were also sold a process by a company who stated that they could in essence perform "warm" backups of our database by simply taking file system copies of our database files; when we needed to restore, we'd simply shut down Oracle, copy back the files that had been copied, restart Oracle, and the world would be whole again.
The challenge is that we have not gotten this to work just yet. I need to spend some more time reviewing the message that Oracle is giving, but my primary question is: is it possible to take copies of Oracle files while Oracle is still running and to use those files at a later date to restore the database? I know that it works if the database is shut down and then copies are made, but this is the first time I've heard that a file system copy can be made while the database is running.
Any guidance would be greatly appreciated. Here is the error that we are getting.
ORA-00314: log 3 of thread 1, expected sequence# 1939 doesn't match 1944
ORA-00312: online log 3 thread 1: 'E:\ORACLE\ORADATA\ITMS\REDO03.LOG'
Yes, it is possible, but you have to put all the tablespaces into backup mode first and take them out afterwards (e.g. ALTER TABLESPACE x BEGIN BACKUP and ALTER TABLESPACE x END BACKUP; you'll need to check the syntax and make sure it's appropriate for your situation!). Oversimplifying hugely, this tells Oracle not to write to any of the data files, so they're all kept in a consistent state.
The two main problems you get otherwise are that individual files are updated while you're copying them so a single file can be corrupted; and more visibly that different files have different internal timestamps and sequences so Oracle won't allow them to be used.
If you're using a process you've bought in, then it should already be taking care of all that, though. It sounds like the backup is OK and it's the restore that you haven't got working.
I haven't been involved in a restore from a hot backup for some time, so someone else will need to give the detail on the actual error. My read of it is that you've tried to open the database with the restored data files but the later, live redo logs. When restoring, I think you either have to RECOVER the database using the redo logs generated since the backup was taken; or, if you're trying to restore to that point in time, you can open the database with the RESETLOGS directive and lose all the changes from the redo logs that came later. But really, take more informed advice than this...
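A heavily simplified SQL*Plus sketch of those two paths (syntax and applicability need to be verified for your version and backup method, and a hot backup needs at least the archives generated while it was running):
To roll forward through the redo generated since the backup:
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;
To stop at the backup's point in time and discard the later redo:
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
SQL> ALTER DATABASE OPEN RESETLOGS;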
As far as I know, there are two ways that you can "copy" datafiles from a running Oracle instance:
1. The datafiles are copied for a tablespace when the tablespace is in "BEGIN BACKUP" mode.
2. You are using a high-end journalling filesystem such as Veritas that can snapshot and track block changes on the filesystem while the copy is taking place.
It is possible. You must be in ARCHIVELOG mode.
An example script for a manual backup would be:
Alter tablespace USERS begin backup;
host cp -p /u02/oradata/PROD/users01.dbf /u03/backup/PROD/
host cp -p /u02/oradata/PROD/users02.dbf /u03/backup/PROD/
Alter tablespace USERS end backup;
However, I would recommend just using RMAN. RMAN is QUITE ROBUST, included for free, and will do hot backups as well as cold. It will clone to another instance, clone to a point in time, recover to a certain point in time, etc. Any manual backup procedure should be migrated to RMAN.
If you wanted to back up the entire database while it is open (I prefer to run as the oracle OS user in the DBA group so you can avoid passwords in scripts, but YMMV):
$ ORAENV_ASK=NO
$ ORACLE_SID=PROD
$ . oraenv
$ rman target=/
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Oct 28 14:23:29 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: PROD (DBID=x)
RMAN> backup as compressed backupset database plus archivelog;
...
Backup Complete.
I've effectively done this with a non-mission-critical database running on Amazon EC2. My backup strategy is to periodically take a snapshot of the EBS volume. To restore a backup, I create a new EBS volume from the snapshot, start up the instance using it, then run RECOVER DATABASE.
This loses any transactions that were in-flight at the time when the snapshot was taken, of course.
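For what it's worth, a sketch of what that restore looks like once the volume created from the snapshot is attached and mounted in place of the original (this assumes the snapshot captured the datafiles, online redo logs, and controlfiles together):
$ sqlplus / as sysdba
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;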
