How do I select without using rollback segments? - oracle

I'm trying to export something using a select statement that runs for a very long time, and I've been getting ORA-01555 "snapshot too old" errors. I searched for this error and it seems to have something to do with the select statement using a rollback segment (the "redo tablespace").
How do I select without getting this error? I don't care about the integrity of the results I'm going to get or any other consequences that this may bring about.

Oracle does not allow you to read inconsistent results and does not provide the corresponding isolation level "read uncommitted" (if that can be called an isolation level at all). If you don't care about consistency, you may split the query into several parts (using different WHERE clauses). If you would like to fix the error instead, you would have to resize the undo tablespace (or change the undo retention) - but that is a job for a DBA (if it is necessary at all).
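A minimal sketch of both options, assuming a hypothetical big_table with a numeric id column; the retention value and datafile path are placeholders you would have to adapt:

-- Option 1: split the export into key ranges so each query finishes quickly
SELECT id, payload FROM big_table WHERE id BETWEEN 1 AND 1000000;
SELECT id, payload FROM big_table WHERE id BETWEEN 1000001 AND 2000000;

-- Option 2 (DBA): give undo more room and/or a longer retention
ALTER SYSTEM SET undo_retention = 10800;
ALTER DATABASE DATAFILE '/path/to/undotbs01.dbf' RESIZE 8G;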


How can I get rid of the bad execution plans in Oracle?

Recently I faced the problem that some (theoretically irrelevant) formal changes in the code of a function (even adding or removing a single space character) can greatly affect the performance of the function (see my previous questions here and here).
The mystery was solved by Jon Heller:
If adding spaces to the code changes performance, this is likely a
plan management issue. Many Oracle tuning tools operate on the SQL_ID,
which is like an MD5 hash of the SQL text. So if you change a single
character of the SQL text, the optimizer treats the code like a brand
new statement. Any plan management fixes, like a SQL profile, or plan
outline, will not be applied to the new statement. Maybe a DBA tuned
an old statement with an /*+ INDEX... */ hint, but that hint isn't
carried over to the new statement. Compare the Note sections in the
DBMS_XPLAN output.
and:
A space in a SQL statement would change the SQL_ID, which could cause
the optimizer to no longer match the SQL statement with plan
management features like profiles, outlines, baselines (possibly -
they're supposed to be able to avoid this problem in some cases),
patches, advanced rewrites, etc.
So the only question I have left is: how can I get rid of the stuck bad execution plans? How can I "clean" Oracle of them?
This worked for me:
alter system flush shared_pool;
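Flushing the entire shared pool is a big hammer, though. If you only want to evict one bad plan, DBMS_SHARED_POOL.PURGE can (depending on version and privileges) target a single cursor; the SQL_ID and the address/hash values below are hypothetical placeholders:

SELECT address, hash_value FROM v$sqlarea WHERE sql_id = 'abcd1234efgh5';

BEGIN
  -- first argument is 'address, hash_value'; 'C' marks it as a cursor
  DBMS_SHARED_POOL.PURGE('000000008A0C5F40, 3456789012', 'C');
END;
/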

PL/SQL Errors Suddenly

I am developing a PL/SQL script, using TOAD. At this point in the development, I am debugging it. This has involved: wrap a section in begin/end, F5-run it, receive the error info, fix the problem, repeat.
All of a sudden, out of nowhere, I am receiving
ORA-00604: error occurred at recursive SQL level 2
ORA-01654: unable to extend index SYS.I_OBJ5 by 128 in tablespace SYSTEM
The script begins with a drop table/create table set of instructions for a simple 2-field table, in my logon schema. After this started happening, I narrowed the part I am re-running to just one line: drop table <tblName>
In trying to narrow this down, I finally went to the TOAD Schema Browser, right-clicked on the table, and selected "Drop table" from the context menu — same result.
I must have run this statement 120 times yesterday, without this act giving me any trouble. Now? Not happening! I am really stumped. Did all those runs maybe load up some area that is now full? Part of this script opens file system files. I didn't know I had to close them afterwards, and I ran into "This action would result in 'too many files open'" errors (each iterative run opened one more). Have I done something like that by dropping and recreating this table so many times?
I agree with @Peter M: most likely your SYSTEM tablespace is full.
The error message says it quite clearly: unable to extend index ... in tablespace SYSTEM means that Oracle ran out of space while trying to make an index bigger. The tablespace SYSTEM is used by Oracle for internal purposes, for instance for the list of tables and columns. It is therefore quite important and normally well supervised by DBAs and kept clean of other objects like developer tables. The schema name SYS also points in this direction.
The other hint is recursive SQL: Oracle runs not only your SQL (like CREATE TABLE) but sometimes also needs to do some housekeeping, like updating said list of tables, which is likewise done via SQL. This second flavour is called recursive.
I'd therefore guess that it is not your table itself that is causing the SYSTEM tablespace to overflow, but the many repeated changes.
If this happened at my place of work, I'd have gotten a friendly phone call from a DBA by now, asking what's going on...
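If you want to check for yourself before that call comes, a query along these lines shows how much free space is left in SYSTEM (it reads DBA_FREE_SPACE, so a DBA may have to run it for you):

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM dba_free_space
WHERE tablespace_name = 'SYSTEM'
GROUP BY tablespace_name;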

Sequences (using as ID) issue in Oracle SQL Developer

I am using sequences to create IDs, so that the insert stored procedure generates a unique value for the ID column when it executes. But after some time the table loses the definition for the sequence.
I am not sure why this is happening again and again, or how to solve the problem.
I am using Oracle SQL Developer, and in the table's edit properties there is an 'Identity Column' setting.
The next step is setting up the trigger and sequence.
It was working fine for some time until this property reverted to its default. Now it is not there anymore.
I still have the trigger and sequence objects in the schema and am able to set it up again, but it will break later.
How to avoid this problem in future?
I think it is just a bug/limitation in your client software, Oracle SQL Developer. The "Identity Column" tab is a handy way to create the corresponding sequence and trigger, but it doesn't seem to recognise existing elements. I've just verified this on my own system and that's exactly what happens.
It makes sense, because adding a new sequence and trigger is a pretty straightforward task (all you need is a template), but displaying the current sequence is hard, given that a trigger can implement any conceivable logic. Surely it could be done, but the cost-benefit ratio probably left things this way.
In short, your app is not broken so nothing needs to be fixed on your side.
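For reference, the sequence/trigger pair that the "Identity Column" tab generates looks roughly like this (table, column and object names here are hypothetical placeholders, not taken from your schema):

CREATE SEQUENCE table1_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER table1_trg
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
  IF :NEW.id IS NULL THEN
    SELECT table1_seq.NEXTVAL INTO :NEW.id FROM dual;
  END IF;
END;
/

As long as those two objects exist and the trigger is enabled, inserts keep getting unique IDs regardless of what the tab displays.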
This is what I received from IT support regarding the issue:
A few possibilities that might cause this:
1 - Another user with limited privileges might be editing the table using SQL Developer. In this case, if that user's privileges are not enough to obtain the sequence and/or trigger information from the database, the tool might leave the fields blank and disable them when the table changes are saved.
2 - The objects are being changed or removed outside of SQL Developer, causing it to lose the information. In my tests I noticed that dropping the trigger and recreating it with the same name caused the identity property information to be lost in SQL Developer.
Even with the trigger enabled and working for inserts, it could not retrieve the information.
Then, if I run an ALTER TRIGGER to enable it (even though dba_triggers reports it as already enabled), SQL Developer will list the information again:
ALTER TRIGGER "AWS"."TABLE1_TRG" ENABLE;
So it looks like there are some issues with SQL Developer that are causing this behavior.
Next time it happens, please check whether the trigger still exists in the database and is enabled, using the query below:
select owner, trigger_name, TRIGGER_TYPE, TRIGGERING_EVENT, TABLE_OWNER, TABLE_NAME, STATUS
from dba_triggers
where trigger_name = 'ENTER_YOUR_TRG_NAME'; --Just change the trigger name in WHERE

PostgreSQL vs. Oracle default transaction management

In PostgreSQL, if you encounter an error in a transaction (for example when your insert statement violates a unique constraint), the whole transaction is aborted; you cannot commit it and no rows are inserted:
database=# begin;
BEGIN
database=# insert into table (id, something) values ('1','whatever');
INSERT 0 1
database=# insert into table (id, something) values ('1','whatever');
ERROR: duplicate key value violates unique constraint "table_id_key"
Key (id)=(1) already exists.
database=# insert into table (id, something) values ('2','whatever');
ERROR: current transaction is aborted, commands ignored until end of transaction block
database=# rollback;
database=# select * from table;
 id | something
----+-----------
(0 rows)
You can change that by setting ON_ERROR_ROLLBACK to "on" or "interactive"; after that you can do multiple inserts ignoring errors, commit, and have only the successfully inserted rows in the table after the transaction ends.
database=# \set ON_ERROR_ROLLBACK interactive
In Oracle, this is the default transaction management behaviour, which surprises me. Isn't this completely counterintuitive and dangerous?
When I start a transaction I want to be sure that all the statements were successful. What if my multiple inserts comprise some kind of object or data structure? I end up completely unaware of the state of the data in my database and have to check it after the commit.
If one of the inserts fails I want to be sure that the other inserts will be rolled back or not even evaluated after the first error, which is exactly how it's done in PostgreSQL.
Why does Oracle have such way of transaction management as a default, and why is it considered good practice?
For example, some random guy here in the comments:
This is a very neat feature.
I don't understand this, though: "Normally, any error you make will
throw an exception and cause your current transaction to be marked as
aborted. This is sane and expected behavior..."
No, it's really not. Oracle doesn't work this way, nor does MySQL. I
have no experience with MSSQL or DB2 but I'll bet a dollar each they
don't work this way either. There's no intuitive reason why a syntax
error, or any other error for that matter, should abort a transaction.
I can only assume there's either some limitation deep in the Postgres
guts that requires this behavior, or that it conforms to some obscure
part of the SQL standard that everyone else sensibly ignores. There's
certainly no API / UX reason why it should work this way.
We really shouldn't be too proud of any workarounds we've developed
for this pathological behavior. It's like IT Stockholm Syndrome.
Doesn't it even violate the very definition of a transaction?
Transactions provide an "all-or-nothing" proposition, stating that
each work-unit performed in a database must either complete in its
entirety or have no effect whatsoever.
I agree with you. I think it's a mistake not to abort the whole tx. But people are used to that, so they think it's reasonable and correct. Like people who use MySQL think that the DBMS should accept 0000-00-00 as a date, or people using Oracle expect that '' IS NULL.
The idea that there's a clear distinction between a syntax error and something else is flawed.
If I write
BEGIN;
CREATE TABLE new_customers (...);
INSET INTO new_customers (...)
SELECT ... FROM customers;
DROP TABLE customers;
COMMIT;
I don't care that it's a typo resulting in a syntax error that caused me to lose my data. I care that the transaction didn't successfully execute all its statements but still committed.
It'd be technically feasible to allow soft rollback in PostgreSQL before any rows are actually written by a statement - probably before we even enter the executor. So failures in the parse and parameter binding phases could allow the tx not to be aborted. We have a statement memory context we could use to clean up.
However, once the statement starts changing rows, it's doing so on disk with the same transaction ID as the prior statements in the tx. So you can't roll it back without rolling back the whole tx. To allow statement rollback Pg needs to assign a new subtransaction ID. That costs resources. You can do it explicitly with SAVEPOINTs when you want to, and internally that's what psql is doing. In theory we could allow the server to do this implicitly for each statement to implement statement rollback, just at a performance cost. But I doubt any patch implementing this would get committed, at least not without a LOT of argument, because most of the PostgreSQL team are (IMO reasonably) not fond of "whoops, that broke but we'll continue anyway" transaction semantics.
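To illustrate the SAVEPOINT route mentioned above, here is a minimal sketch (it reuses the hypothetical customers/new_customers tables from the earlier example; the typo is deliberate):

BEGIN;
CREATE TABLE new_customers (id integer PRIMARY KEY);
SAVEPOINT before_copy;
INSET INTO new_customers SELECT id FROM customers;   -- typo: fails with a syntax error
ROLLBACK TO SAVEPOINT before_copy;                    -- undoes only the failed statement
INSERT INTO new_customers SELECT id FROM customers;   -- corrected version succeeds
COMMIT;

Only the failed statement is rolled back; the CREATE TABLE and the corrected INSERT survive and get committed, which is essentially what ON_ERROR_ROLLBACK automates in psql.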

How to find out when an Oracle table was updated the last time

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
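In practice that pattern might look roughly like this (table and bind-variable names are placeholders; the CTAS rebuild is only needed if you want row-level rather than block-level tracking):

-- remember the highest SCN already exported
SELECT MAX(ora_rowscn) FROM my_table;

-- on the next run, only fetch rows changed since then
SELECT * FROM my_table WHERE ora_rowscn > :last_exported_scn;

-- optional one-time rebuild with row-level SCN tracking
CREATE TABLE my_table_new ROWDEPENDENCIES AS SELECT * FROM my_table;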
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. He can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
Google for Oracle auditing for more info...
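Once the audit is in place, the check could be a simple query against the audit trail view, something along these lines (the table name is a placeholder):

SELECT obj_name, action_name, timestamp
FROM user_audit_object
WHERE obj_name = 'MY_TABLE'
ORDER BY timestamp DESC;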
SELECT * FROM all_tab_modifications;
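One caveat with that view, if I remember correctly: it is populated from table-monitoring data that Oracle only flushes periodically, so for an up-to-the-minute answer you may need to flush the monitoring info first (this needs the appropriate privilege):

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;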
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
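A rough sketch of that checksum idea with ORA_HASH (column and table names are placeholders; as with any hash, collisions are theoretically possible):

SELECT SUM(ORA_HASH(id || '|' || some_column)) AS table_checksum
FROM my_table;

The batch job stores that single number and re-imports only when it differs from the previous run's value.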
Oracle can watch tables for changes and, when a change occurs, can execute a callback function in PL/SQL or OCI. The callback gets an object that is a collection of the tables which changed, each with a collection of the rowids which changed and the type of action (insert, update, delete).
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than the CDC Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these requires changes to the APPLICATION.
The caveat is that CDC is fine for high volume tables, DCN is not.
If auditing is enabled on the server, simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, and delete that sets a value in another table to SYSDATE.
When you run the application, it would read that value and save it somewhere so that the next time it runs it has a reference to compare against.
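A sketch of what that could look like, with hypothetical table and column names:

CREATE TABLE table_change_log (
  table_name  VARCHAR2(30) PRIMARY KEY,
  last_change DATE
);
-- seed one row per watched table, e.g.: INSERT INTO table_change_log VALUES ('MY_TABLE', SYSDATE);

CREATE OR REPLACE TRIGGER my_table_change_trg
AFTER INSERT OR UPDATE OR DELETE ON my_table
BEGIN
  UPDATE table_change_log
     SET last_change = SYSDATE
   WHERE table_name = 'MY_TABLE';
END;
/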
Would you consider that "Special Admin Stuff"?
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature that came with Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You register listeners, and the database triggers a notification back to the application when the watched objects change.
Please use the statement below:
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'
