What happens if I don't specify LOGGING/NOLOGGING on database objects in Oracle? In other words, how do objects behave with LOGGING/NOLOGGING specified versus without it?
LOGGING/NOLOGGING helps manage direct path writes in order to reduce the generation of REDO and UNDO. It is one of several ways to control the delicate balance between recoverability and performance.
Oracle Architecture Background Information
REDO is how Oracle provides durability, the "D" in ACID. When a transaction is committed the changes are not necessarily stored neatly in the datafiles. That keeps things fast and lets background processes handle some work. REDO is a description of the change. It is stored quickly, on multiple disks, in a "dumb" log. Changes are fast and if the server loses power one microsecond after the commit returned, Oracle can go through the REDO logs to make sure that change isn't lost.
UNDO helps Oracle provide consistency, the "C" in ACID. It stores a description of how to reverse the change. This information may be needed by another process that's reading the table and needs to know what the value used to be at an older point-in-time.
Direct path writes skip REDO, UNDO, the cache, and some other features, and directly modify data files. This is a fast but potentially dangerous option in many environments, which is why there are so many confusing options to control it. Direct path writes only apply to INSERTS, and only in the scenarios described below.
If you do nothing the default option is the safest, LOGGING.
The Many Ways to Control Direct Path Writes
LOGGING/NOLOGGING is one of several options to control direct path writes. Look at this table from AskTom to understand how the different options all work together:
Table Mode    Insert Mode    ArchiveLog mode       Result
-----------   ------------   -------------------   --------------
LOGGING       APPEND         ARCHIVE LOG           redo generated
NOLOGGING     APPEND         ARCHIVE LOG           no redo
LOGGING       no append      ARCHIVE LOG           redo generated
NOLOGGING     no append      ARCHIVE LOG           redo generated
LOGGING       APPEND         noarchive log mode    no redo
NOLOGGING     APPEND         noarchive log mode    no redo
LOGGING       no append      noarchive log mode    redo generated
NOLOGGING     no append      noarchive log mode    redo generated
FORCE LOGGING can override all of those settings. There are probably some other switches I'm not aware of. And of course there are many limitations that prevent direct path writes - triggers, foreign keys, clusters, index-organized tables, etc.
The rules are even more restrictive for indexes. An index will always generate REDO during DML statements. Only DDL statements, like CREATE INDEX ... NOLOGGING or ALTER INDEX ... REBUILD on a NOLOGGING index, will avoid generating REDO.
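For example, a minimal sketch of the index DDL case (the table and index names are hypothetical):

-- NOLOGGING is honored for the index DDL itself, not for later DML against the index
create index orders_dt_idx on orders(order_date) nologging;
alter index orders_dt_idx rebuild nologging;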
Why are there so many ways? Because recoverability is incredibly important and different roles may have different views on the matter. And sometimes some people's decisions need to override others.
Developers decide at the statement level, "Insert Mode". Many weird things can happen with an /*+ APPEND */ hint and developers need to choose carefully when to use it.
Architects decide at the object level, "Table Mode". Some tables, regardless of how fast a developer may want to insert into it, must always be recoverable.
Database Administrators decide at the database or tablespace level, "Archive log" mode and FORCE LOGGING. Maybe the organization just doesn't care about recovering a specific database, so set it to NOARCHIVELOG mode. Or maybe the organization has a strict rule that everything must be recoverable, so set the tablespace to FORCE LOGGING.
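A hedged sketch of how those three levels might look in practice (object and tablespace names are hypothetical, and each statement requires the appropriate privileges):

-- Developer, statement level: request a direct path write
insert /*+ append */ into sales_archive select * from sales_staging;
commit;

-- Architect, object level: allow (or forbid) NOLOGGING behavior on the table
alter table sales_archive nologging;

-- DBA, database/tablespace level: force REDO generation regardless of the above
alter database force logging;
alter tablespace critical_ts force logging;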
If you have a table/index in NOLOGGING mode, then redo will not be generated when data is inserted into the object using direct path approaches such as insert /*+ append */.
However, if the database is in FORCE LOGGING mode then NOLOGGING will have no effect: redo is generated whether the table/index is in LOGGING or NOLOGGING mode.
If the NOLOGGING option is set, redo won't be generated for direct path inserts into the object. You can use this to significantly increase the performance of, for example, an INSERT statement that loads a large amount of data.
Be careful never to use the NOLOGGING option under a Data Guard setup. Replication relies on the redo logs, so it will create a pretty big mess you certainly want to avoid.
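A minimal sketch for checking where a database stands before relying on NOLOGGING (the table name is hypothetical; the views and columns are standard):

-- ARCHIVELOG/NOARCHIVELOG and whether FORCE LOGGING is set
select log_mode, force_logging from v$database;

-- the per-object logging attribute
select table_name, logging from user_tables where table_name = 'SALES_ARCHIVE';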
Related
I want to execute insert statement with nologging.
As I can see, the only way is to use the APPEND hint in the query.
The problem is that the APPEND hint can't work with DML ERROR LOGGING clause.
In my case, I must use that clause.
What is the connection between APPEND (which actually means writing all the data at the end of the table rather than filling "holes" in existing blocks) and (NO)LOGGING? How can I apply NOLOGGING in NOAPPEND mode?
I tried altering the table to NOLOGGING, but the REDO just keeps growing when the statement uses the NOAPPEND hint.
I'm using Oracle 11g.
Thank you.
APPEND hints and NOLOGGING options are just different ways to switch between direct-path writes and conventional writes. The APPEND hints are used in SQL statements, and NOLOGGING is an option for a table or tablespace.
The problem here is that direct-path writes do not work with DML error logging, and as far as I know there is no direct workaround. There is no magic combination of hints and options to make those features work together.
There are potentially other ways to improve the performance. For example, the statement can run in parallel but with conventional writes. Or, if you're only interested in logging the errors for a few specific constraints, you could disable those constraints, perform the change in a direct-path write, enable the constraints and record the constraint exceptions, and then remove the rows that violate the constraints.
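A hedged sketch of that last approach, assuming a single foreign key named fk_orders_cust and the standard EXCEPTIONS table created by utlexcpt.sql (all object names are hypothetical):

-- 1. create the standard EXCEPTIONS table once, as the schema owner
@?/rdbms/admin/utlexcpt.sql

-- 2. disable the constraint and load with a direct-path write
alter table orders disable constraint fk_orders_cust;
insert /*+ append */ into orders select * from orders_staging;
commit;

-- 3. re-enable; if validation fails, the offending ROWIDs land in EXCEPTIONS
alter table orders enable validate constraint fk_orders_cust
  exceptions into exceptions;

-- 4. inspect or remove the violating rows, then enable the constraint again
delete from orders where rowid in (select row_id from exceptions);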
How can I verify an Oracle database rollback action is successful? Can I use Number of rows in activity log and Number of rows in event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically the columns USED_UBLK and USED_UREC contain the number of UNDO blocks and records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. Once the relevant transaction no longer appears in the view, it has either committed or rolled back successfully. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);
insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);
select used_ublk, used_urec, ses_addr from v$transaction;
 USED_UBLK  USED_UREC SES_ADDR
---------- ---------- ----------------
         1          6 000007FF1C5A8EA0
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits
All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.

Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:

Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.

Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.

Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.

Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
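A minimal sketch of a LogMiner session, assuming the online catalog is used as the dictionary and using a hypothetical archived log file name:

begin
  dbms_logmnr.add_logfile(logfilename => '/u01/arch/arch_1_1234.arc',
                          options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
end;
/

-- look for the statements of interest, together with the SQL needed to undo them
select scn, timestamp, username, operation, sql_redo, sql_undo
from   v$logmnr_contents
where  table_name = 'EMPLOYEES';

exec dbms_logmnr.end_logmnr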
Enjoy.
I have tried to find examples, but they are all simple with a single WHERE clause. Here is the situation. I have a bunch of legacy data transferred from another database. I also have the "good" tables in that same database. I need to transfer (data-conversion) data from the legacy tables to the new tables. Because this is a different set of tables, the data-conversion requires complex joins to put the old data into the new tables correctly.
So, old tables old data.
New tables must have the old data but it requires lots of joins to get that old data into the new tables correctly.
Can I use direct path with lots of joins like this? INSERT SELECT (lots of joins)
Does direct path apply to tables that are already on the same database (transfer between tables)? Is it only for loading tables from say a text file?
Thank you.
The query in your SELECT can be as complex as you'd like with a direct-path insert. The direct-path refers only to the destination table. It has nothing to do with the way that data is read or processed.
If you're doing a direct-path insert, you're asking Oracle to insert the new data above the high water mark of the table so you bypass the normal code that reuses space in existing blocks for new rows to be inserted. It also has to block other inserts since you can't have the high water mark of the table change during a direct-path insert. This probably isn't a big deal if you've got a downtime window in which to do the load but it would be quite problematic if you wanted the existing tables to be available for other applications during the load.
No, on the contrary, it means you need to do a backup after a NOLOGGING load, not that you can't backup the database.
Allow me to elaborate a bit. Normally, when you do DML in Oracle, the before images of the changes you are making get logged in UNDO, and all the changes (including the UNDO changes) are first written to REDO. This is how Oracle manages transactions, instance recovery, and database recovery. If a transaction is aborted or rolled back, Oracle uses the information in UNDO to undo the changes your transaction made. If the instance crashes, then on instance restart, Oracle will use the information in REDO and UNDO to recover up to the last committed transaction. First, Oracle will read the REDO and roll forward, then use UNDO to roll back all the transactions that were not committed at the time of the crash. In this way, Oracle is able to recover up to the last committed transaction.
Now, when you specify an APPEND hint on an insert statement, Oracle will execute the INSERT with direct load. This means that data is loaded into brand new, never before used blocks, from above the highwater mark. Because the blocks being loaded are brand new, there is no "before image", so Oracle can avoid writing UNDO, which improves performance. If the database is in NOARCHIVELOG mode, then Oracle will also not write REDO. On a database in ARCHIVELOG mode, Oracle will still write REDO, unless, before you do the insert /*+ append */, you set the table to NOLOGGING (i.e. alter table tab_name nologging;). In that case, REDO logging is disabled for the table. However, this is where you could run into backup/recovery implications. If you do a NOLOGGING direct load, and then you suffer a media failure, and the datafile containing the segment with the nologging operation is restored from a backup taken before the nologging load, then the redo log will not contain the changes required to recover that segment. So, what happens? Well, when you do a NOLOGGING load, Oracle writes extent invalidation records to the redo log, instead of the actual changes. Then, if you use that redo in recovery, those data blocks will be marked logically corrupt. Any subsequent queries against that segment will get an ORA-26040 error.
So, how to avoid this? Well, you should always take a backup immediately following any NOLOGGING direct load. If you restore/recover from a backup taken after the nologging load, there is no problem, because the data will be in the datablocks in the file that was restored.
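A hedged sketch of how that follow-up might look, assuming RMAN and a tablespace named USERS (both the tablespace name and the exact backup strategy are assumptions):

-- in SQL*Plus: find datafiles with NOLOGGING changes made after the last backup
select file#, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_time is not null;

-- in RMAN: back up the affected tablespace right after the load
BACKUP TABLESPACE users;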
Hope that's clear,
-Mark
Yes, there should not be any arbitrary limits on query complexity.
If you do
insert /*+ APPEND */ into target_table select .... from source1, source2..., sourceN where
It should work fine. Consider, though, that the performance of the load will be limited by the performance of that query, so be sure it's well-tuned if you're expecting good performance.
Finally, consider whether setting NOLOGGING on the target table would improve performance significantly. But also consider the backup/recovery implications if you decide to implement NOLOGGING.
Hope that helps,
-Mark
We have a mature Oracle database application (in production for over 10 years), and during that time, we have been using scripts of our own devising to remove old data that is no longer needed. They work by issuing delete statements against the appropriate tables, in a loop with frequent commits, in order to avoid overloading the system with i/o or using too much undo space.
They work fine, for the most part. They run daily, and it takes about an hour to remove the oldest day's worth of data from the system. The main concerns I have are the effects that all this deleting may have on tables and indexes, and the fact that even though they don't overly load the system, deleting one day's worth of data in that short time does have the effect of blowing out the instance's buffer cache, resulting in subsequent queries running slightly slower for the next few hours as the cache is gradually restored.
For years we've been considering better methods. In the past, I had heard that people used partitioned tables to manage old data reaping - one month per partition, for example, and dropping the oldest partition on a monthly basis. The main drawback to this approach is that our reaping rules go beyond "remove month X". Users are allowed to specify how long data must stay in the system, based on key values (e.g., in an invoice table, account foo can be removed after 3 months, but account bar may need to remain for 2 years).
There is also the issue of referential integrity; Oracle documentation talks about using partitions for purging data mostly in the context of data warehouses, where tables tend to be hypercubes. Ours is closer to the OLTP end of things, and it is common for data in month X to have relationships to data in month Y. Creating the right partitioning keys for these tables would be ticklish at best.
As for the cache blowouts, I have read a bit about setting up dedicated buffer caches, but it seems like it's more on a per-table basis, as opposed to a per-user or per-transaction basis. To preserve the cache, I'd really like the reaping job to only keep one transaction's worth of data in the cache at any time, since there is no need to keep the data around once deleted.
Are we stuck using deletes for the foreseeable future, or are there other, more clever ways to deal with reaping?
For the most part I think that you're stuck doing deletes.
Your comments on the difficulty of using partitions in your case (different delete dates being used depending on the type of record) probably do prevent them from being used effectively, but is it possible that you could create a "delete date" column on the records that you could partition on? It would have the disadvantage of making updates quite expensive, as a change in the delete date might move the row to a different partition, so your update would really be implemented as a delete and insert.
It could be that even then you cannot use DDL partition operations to remove old data because of the referential integrity issues, but partitioning still might serve the purpose of physically clustering the rows to be deleted so that fewer blocks need to be modified in order to delete them, mitigating the impact on the buffer cache.
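A hedged sketch of that idea, with hypothetical table and partition names (interval partitioning could also be used to avoid pre-creating partitions):

-- partition on a precomputed delete_date so rows due for purging cluster together
create table invoices (
  invoice_id   number,
  account_id   number,
  created_at   date,
  delete_date  date
)
partition by range (delete_date) (
  partition p_2024_q1 values less than (date '2024-04-01'),
  partition p_2024_q2 values less than (date '2024-07-01'),
  partition p_max     values less than (maxvalue)
)
enable row movement;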
Deletes aren't that bad, provided that you rebuild your indexes. Oracle will recover the pages that no longer contain data.
However, as-of 8i (and quite probably still), it would not properly recover index pages that no longer contained valid references. Worse, since the index leaves were chained, you could get into a situation where it would start walking the leaf nodes to find a row. This would cause a rather significant drop in performance: queries that would normally take seconds could take minutes. The drop was also very sudden: one day it would be fine, the next day it wouldn't.
I discovered this behavior (there was an Oracle bug for it, so other people have too) with an application that used increasing keys and regularly deleted data. Our solution was to invert portions of the key, but that's not going to help you with dates.
What if you temporarily deactivate indexes, perform the deletes and then rebuild them? Would it improve the performance of your deletes? Of course, in this case you have to make sure the scripts are correct and ensure proper delete order and referential integrity.
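A minimal sketch of that approach, with a hypothetical index name (note that unique indexes backing constraints cannot simply be skipped this way, and an unusable index is not available to speed up the deletes themselves):

-- mark the index unusable so the deletes do not maintain it
alter index invoices_dt_idx unusable;
alter session set skip_unusable_indexes = true;

-- ... run the delete loop here ...

-- rebuild once the purge is done
alter index invoices_dt_idx rebuild;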
We have the same problem, using the same strategy.
If the situation becomes really bad (very fragmented allocation of indexes, tables, ...), we try to apply space reclamation actions.
Tables have to allow row movement (as they do for flashback table):
alter table TTT enable row movement;
alter table TTT shrink space;
and then rebuild all indexes.
I don't know how you are placed for maintenance windows; if the application has to be usable all the time it is harder, but if not, you can do some "repacking" while it is offline. "alter table TTT move tablespace SSSS" does a lot of work cleaning up the mess as the table is rewritten. You can also specify new storage parameters such as extent management, sizes, ... - take a look in the docs.
I use a script like this to create a script for the whole database:
SET SQLPROMPT "-- "
SET ECHO OFF
SET NEWPAGE 0
SET SPACE 0
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
SET TRIMSPOOL ON
SET TERMOUT OFF
SET VERIFY OFF
SET TAB OFF
spool doit.sql
select 'prompt Enabling row movement in '||table_name||'...'||CHR (10)||'alter table '||table_name||' enable row movement;' from user_tables where table_name not like '%$%' and table_name not like '%QTAB' and table_name not like 'SYS_%';
select 'prompt Setting initial ext for '||table_name||'...'||CHR (10)||'alter table '||table_name||' move storage (initial 1m);' from user_tables where table_name not like '%$%' and table_name not like '%QTAB' and table_name not like 'SYS_%';
select 'prompt Shrinking space for '||table_name||'...'||CHR (10)||'alter table '||table_name||' shrink space;' from user_tables where table_name not like '%$%' and table_name not like '%QTAB' and table_name not like 'SYS_%';
select 'prompt Rebuilding index '||index_name||'...'||CHR (10)||'alter index '||index_name||' rebuild;' from user_indexes where status = 'UNUSABLE';
spool off
prompt now check and then run @doit.sql
exit
I need to perform a query 2.5 million times. This query generates some rows which I need to AVG(column) and then use this AVG to filter the table from all values below average. I then need to INSERT these filtered results into a table.
The only way to do such a thing with reasonable efficiency seems to be by creating a TEMPORARY TABLE for each query-postmaster python-thread. I am just hoping these TEMPORARY TABLEs will not be persisted to hard drive (at all) and will remain in memory (RAM), unless they run out of working memory, of course.
I would like to know if a TEMPORARY TABLE will incur disk writes (which would interfere with the INSERTs, i.e. slow the whole process down).
Please note that, in Postgres, the default behaviour for temporary tables is that they are not automatically dropped at the end of a transaction, and their data persists across commits. See ON COMMIT.
Temporary table are, however, dropped at the end of a database session:
Temporary tables are automatically dropped at the end of a session, or
optionally at the end of the current transaction.
There are multiple considerations you have to take into account:
If you do want to explicitly DROP a temporary table at the end of a transaction, create it with the CREATE TEMPORARY TABLE ... ON COMMIT DROP syntax.
In the presence of connection pooling, a database session may span multiple client sessions; to avoid clashes in CREATE, you should drop your temporary tables -- either prior to returning a connection to the pool (e.g. by doing everything inside a transaction and using the ON COMMIT DROP creation syntax), or on an as-needed basis (by preceding any CREATE TEMPORARY TABLE statement with a corresponding DROP TABLE IF EXISTS, which has the advantage of also working outside transactions e.g. if the connection is used in auto-commit mode.)
While the temporary table is in use, how much of it will fit in memory before overflowing on to disk? See the temp_buffers option in postgresql.conf
Anything else I should worry about when working often with temp tables? A VACUUM is recommended after you have DROPped temporary tables, to clean up any dead tuples from the catalog. Postgres will automatically vacuum every 3 minutes or so for you when using the default settings (autovacuum).
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, then it is a good idea to create appropriate indices and issue an ANALYZE on the temp table in question after you're done inserting into it. By default, the cost-based optimizer will assume that a newly created temp table has ~1000 rows, and this may result in poor performance should the temp table actually contain millions of rows.
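A minimal sketch of that pattern in Postgres, with hypothetical table and column names:

BEGIN;
-- drop any leftover from a pooled connection, then build a per-transaction temp table
DROP TABLE IF EXISTS tmp_above_avg;
CREATE TEMPORARY TABLE tmp_above_avg ON COMMIT DROP AS
  SELECT id, reading
  FROM   measurements
  WHERE  reading >= (SELECT AVG(reading) FROM measurements);

-- index and analyze before querying the temp table further
CREATE INDEX ON tmp_above_avg (reading);
ANALYZE tmp_above_avg;

INSERT INTO filtered_results (id, reading)
  SELECT id, reading FROM tmp_above_avg;
COMMIT;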
Temporary tables provide only one guarantee - they are dropped at the end of the session. For a small table you'll probably have most of your data held in memory (the temp buffers). For a large table I guarantee that data will be flushed to disk periodically as the database engine needs more working space for other requests.
EDIT:
If you're absolutely in need of RAM-only temporary tables you can create a table space for your database on a RAM disk (/dev/shm works). This reduces the amount of disk IO, but beware that it is currently not possible to do this without a physical disk write; the DB engine will flush the table list to stable storage when you create the temporary table.
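A hedged sketch of that setup (the path and tablespace name are assumptions, and anything placed under /dev/shm is lost on reboot):

-- as a superuser, with /dev/shm/pg_ram created and owned by the postgres OS user
CREATE TABLESPACE ram_ts LOCATION '/dev/shm/pg_ram';

-- send this session's temporary objects to the RAM-backed tablespace
SET temp_tablespaces = 'ram_ts';
CREATE TEMPORARY TABLE tmp_scratch (id int, val numeric);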