I have an OracleDB database running in a CI pipeline. It gets set up and torn down on each test run, and it's bloody slow.
Since I don't care about integrity, is it possible to run OracleDB with no flushing to disk? Much like Postgres's fsync setting?
I believe that the COMMIT_WAIT Oracle parameter is similar to PostgreSQL's fsync setting. After running the below command, Oracle will not wait for commits to be flushed to disk:
alter system set commit_wait = nowait;
But in my limited experience with that parameter, it does not significantly improve performance unless your system is doing a ridiculous number of commits. Perhaps you should also look into using NOARCHIVELOG mode. Oracle will still write transaction data to disk, but it will at least stop making an extra copy:
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
But even those settings probably won't significantly improve administrative task times. Oracle instances will never be small, lightweight processes. The closest thing is the multi-tenant option, which enables rapidly creating pluggable databases. But that option requires setting up a central container database, which is probably what you're trying to avoid in the first place.
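For what it's worth, a minimal sketch of the pluggable-database route (12c+ multi-tenant; the PDB name, admin user, password, and FILE_NAME_CONVERT paths below are all made up, and with Oracle Managed Files the FILE_NAME_CONVERT clause can be omitted):
SQL> -- create a throwaway PDB from the seed, run the tests, then drop it
SQL> create pluggable database ci_test admin user ci_admin identified by secret file_name_convert = ('/oradata/CDB1/pdbseed/', '/oradata/CDB1/ci_test/');
SQL> alter pluggable database ci_test open;
SQL> -- ... run the CI suite against CI_TEST ...
SQL> alter pluggable database ci_test close immediate;
SQL> drop pluggable database ci_test including datafiles;
Creating and dropping a PDB from the seed like this is generally far quicker than building and tearing down a whole database, which is why it suits CI-style setup/teardown.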
Can you provide more information about your CI pipeline? There might be some optimizations specific to your environment. For example, I think that Oracle has some code to support Docker (although I'm not familiar with those options).
This question is almost exactly like
oracle rman simple backup
but there isn't an acceptable answer there, and this question is about 11g. So I'll ask:
I'd like to do some table initialization DDL tests on an Oracle schema, and I'd like to revert the database to its pre-test state between runs. I'm executing the following in RMAN:
shutdown immediate;
startup mount
backup database;
sql 'alter database open';
As far as I can see, it works fine; LIST BACKUP shows the backups.
Then I made some modifications (added some users, added some tables, added some data) and tried to restore the backup:
shutdown immediate;
startup mount
restore database;
recover database;
sql 'alter database open resetlogs';
Expected result: the database should be restored to the exact state it was in when the initial backup was taken.
Actual result: all the new tables and users I created in my test DDL continue to exist. I verified this by closing connections, restarting sessions, and then even selecting from the tables! The tables still exist even after the restore!
What is the deal with this? In MSSQL and Postgres, a backup means you save the state of the DB, and restoring it means you go back to the point when the backup was taken. But RMAN for Oracle 11g 'claims' the restore was successful, while the evidence clearly shows otherwise.
How can I get Oracle to save the state of the database exactly as it is, then make changes, and, when I restore, get the database back exactly as it was when I backed it up?
Is this possible in Oracle?
Yes, it is possible; you have several options:
Create a cold backup of the database (datafiles, control files, online redo logs) and restore those files when necessary.
Perform a so-called "point-in-time recovery" (assuming your DB is in ARCHIVELOG mode). Take a DB backup with RMAN and note the time, SCN, or archivelog sequence; later you can restore the DB and recover it up to the previously noted time/SCN/log sequence.
Use "Flashback Database", which was designed by Oracle for exactly this purpose and is what I recommend in your case (browse the Oracle docs to see what it is; there is a sketch below).
Oracle always tries to restore/recover your database up to the last committed transaction if possible; that is why you get the result you described above. If you want to restore up to a specific time/SCN/sequence, you just have to tell Oracle so :)
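A minimal sketch of the Flashback Database option (assuming the database is in ARCHIVELOG mode with a flash recovery area configured; the restore point name is made up):
SQL> create restore point before_ddl_test guarantee flashback database;
SQL> -- ... run the test DDL, add users, tables, data ...
SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to restore point before_ddl_test;
SQL> alter database open resetlogs;
SQL> drop restore point before_ddl_test;
This rewinds the whole database to the moment the restore point was created, which is exactly the "save state, test, revert" workflow you describe.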
H2 Database is not very stable (but very fast, which is very good for DEV), especially during the development process. I hope that the number of corruptions is due to the immediate shutdown of the server (during debugging).
How can I ensure that an H2 database is not corrupted?
In order to guarantee that a backup is good.
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
Another option is to always back up the data in the form of a SQL script. This makes a separate check unnecessary, but the backup is a bit slower and can't be done online (while updates are happening).
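A minimal sketch of both ideas in H2 SQL (the file name is made up):
-- Readability check and SQL-script backup in one statement:
SCRIPT TO 'backup.sql';
-- Restoring means running that script against a fresh, empty database:
RUNSCRIPT FROM 'backup.sql';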
By the way: if a database file gets corrupt, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), due to hardware failure, or due to a bug in the database engine itself.
I'm a SQL Server DBA, but we have an Oracle 10g database that I need to start performing daily backups on. We do not have Enterprise Manager. Is there a way to schedule a daily backup in Oracle like in SQL Server?
I apologize if this question is severely elementary for Oracle people, but I've had a very tough time trying to research this and coming up with an answer other than "Use EM".
The easiest option in your case is to make a simple Windows batch script that sets ORACLE_SID, ORACLE_HOME, and PATH and uses RMAN to make the backup. Schedule the script in the Windows Task Scheduler. Assuming your database is a production database, and therefore runs in ARCHIVELOG mode, your script could be something like this:
(I am not a Windows expert so subtle errors might be easy to spot for you)
rman_backup.bat:
set ORACLE_SID=your_oracle_sid
set ORACLE_HOME=d:\where\your\installation\is
set PATH=%ORACLE_HOME%\bin;%PATH%
rman cmdfile=your_rman_actions_script.rman log=your_log_file.log
your_rman_actions_script.rman looks like:
connect target /
backup database plus archivelog;
For documentation, look at the Oracle 10g database documentation and start with the 2 Day DBA guide. After that, check out the backup documentation found under Administration.
I would do the scheduling from outside the database (but then, my background is more Unix than Windows), using the OS scheduler to run a backup script, assuming that no real backup system is available.
At the beginning of the backup, you would run a SQL script to place the tablespaces in backup mode (ALTER TABLESPACE x BEGIN BACKUP), then back up the tablespace data files, and after that restore normal mode (ALTER TABLESPACE x END BACKUP). PL/SQL can be used here to loop over all tablespaces, as in the sketch below.
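A minimal sketch of that loop (the filter that skips temporary and non-online tablespaces is an assumption you may need to adjust; the file copy itself still happens outside the database):
-- Switch every ordinary tablespace into backup mode; run the same block
-- with 'END BACKUP' after the file copy to switch back.
BEGIN
  FOR ts IN (SELECT tablespace_name
               FROM dba_tablespaces
              WHERE contents <> 'TEMPORARY'
                AND status = 'ONLINE') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLESPACE ' || ts.tablespace_name || ' BEGIN BACKUP';
  END LOOP;
END;
/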
After that, you'd back up the control file (ALTER DATABASE BACKUP CONTROLFILE TO ...), and finally you would rotate the redo logs enough times that all relevant log data has been archived, and back up the archive logs.
And as for doing incremental backups, e.g. throughout the working week, just do the log rotation and archive log copy part.
My delayed job involves exporting a slightly edited version of most of the tables in the app's database, and while it runs it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL; Heroku's PostgreSQL offering, to be more specific.
The flow is something like (all below should be done automatically by the code):
the site would be put in maintenance mode,
the database would be frozen and then exported (see the sketch below), then
when the export is complete, the site would be re-activated
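To illustrate what "freeze" could mean in plain SQL (the table names are made up, and the locking transaction has to stay open for the duration of the export):
-- EXCLUSIVE mode blocks all writes but still allows reads, so the export
-- can run from another connection while this transaction is held open.
BEGIN;
LOCK TABLE users, orders, order_items IN EXCLUSIVE MODE;
-- ... run the export ...
COMMIT;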
Given there is not a lot of information with your question, I am going to answer you as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If not a standalone engine, does this database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited; that is, the implications were about whether edited data was replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments web site cannot interfere while the db is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed; for example, are you asking whether this can be done from the Ruby DB interface on some particular model of DB?
I've just started a job and have identified an issue: the database isn't currently being backed up properly, so to speak. We are doing one backup every 6 hours using the Oracle native backup utility, but we were also sold a process by a company that claimed they could, in essence, perform "warm" backups of our database simply by taking file system copies of the database files; when we needed to restore, we'd shut down Oracle, copy back the files that had been copied, restart Oracle, and the world would be whole again.
The challenge is that we have not gotten this to work yet. I need to spend more time reviewing the message Oracle is giving, but my primary question is: is it possible to take copies of Oracle files while Oracle is still running and use those files at a later date to restore the database? I know it works if the database is shut down and copies are made then, but this is the first I've heard that a file system copy can be made while the database is running. Any guidance would be greatly appreciated. Here is the error we are getting:
ORA-00314: log 3 of thread 1, expected sequence# 1939 doesn't match 1944
ORA-00312: online log 3 thread 1: 'E:\ORACLE\ORADATA\ITMS\REDO03.LOG'
Yes, it is possible, but you have to put all the tablespaces into backup mode first and take them out afterwards (e.g. ALTER TABLESPACE x BEGIN BACKUP and ALTER TABLESPACE x END BACKUP; you'll need to check the syntax and make sure it's appropriate for your situation!). Oversimplifying hugely, this tells Oracle not to write to any of the data files, so they're all kept in a consistent state.
The two main problems you get otherwise are that individual files are updated while you're copying them so a single file can be corrupted; and more visibly that different files have different internal timestamps and sequences so Oracle won't allow them to be used.
If you're using a process you've bought in, then it should already be taking care of all that, though. It sounds like the backup is OK and it's the restore that you haven't got working.
I haven't been involved in a restore from a hot backup for some time, so someone else will need to give the detail on the actual error. My read of it is that you've tried to open with the restored data files but the later live redo logs. When restoring, I think you either have to RECOVER the database using the redo logs generated since the backup was taken; or, if you're trying to restore to that point in time, you can open the database with the RESETLOGS directive and lose all the changes from all the redo logs that came later. But really take more informed advice than this...
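For what it's worth, a minimal sketch of those two paths in SQL*Plus, assuming the backed-up datafiles have already been copied back into place (the timestamp is made up, an incomplete recovery needs the relevant archived logs available, and you should check the details against the docs for your situation):
-- Path 1: roll forward through all redo generated since the backup
SQL> startup mount;
SQL> recover database;
SQL> alter database open;
-- Path 2: stop at a point in time and discard everything after it
SQL> startup mount;
SQL> recover database until time '2010-10-28:14:00:00';
SQL> alter database open resetlogs;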
As far as I know, there are two ways that you can "copy" datafiles from a running Oracle instance.
The datafiles are copied for a tablespace when the tablespace is in "BEGIN BACKUP" mode.
You are using a high-end journalling filesystem such as Veritas that can snapshot and track block changes on the filesystem while the copy is taking place.
It is possible, but you must be in ARCHIVELOG mode.
An example script for a manual backup would be:
Alter tablespace USERS begin backup;
host cp -p /u02/oradata/PROD/users01.dbf /u03/backup/PROD/
host cp -p /u02/oradata/PROD/users02.dbf /u03/backup/PROD/
Alter tablespace USERS end backup;
However, I would recommend just using RMAN. RMAN is QUITE ROBUST, included for free, and will do the hot backup as well as the cold. It will clone to another instance, clone as of a point in time, recover to a certain point in time, etc. Any manual backup procedure should be migrated to RMAN.
If you wanted to back up the entire database while it is open (I prefer running as the oracle OS user with DBA privileges so you can avoid passwords in scripts, but YMMV):
$ ORAENV_ASK=NO
$ ORACLE_SID=PROD
$ . oraenv
$ rman target=/
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Oct 28 14:23:29 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: PROD (DBID=x)
RMAN> backup as compressed backupset database plus archivelog;
...
Backup Complete.
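And a minimal sketch of the point-in-time restore side mentioned above (the timestamp is made up; the database must be mounted, and this kind of incomplete recovery ends with a RESETLOGS open):
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> run {
        set until time "to_date('2010-10-28 12:00:00','YYYY-MM-DD HH24:MI:SS')";
        restore database;
        recover database;
      }
RMAN> alter database open resetlogs;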
I've effectively done this with a non-mission-critical database running on Amazon EC2. My backup strategy is to periodically take a snapshot of the EBS volume. To restore a backup, I create a new EBS volume from the snapshot, start up the instance using it, then run RECOVER DATABASE.
This loses any transactions that were in-flight at the time when the snapshot was taken, of course.