PL/SQL errors appearing suddenly - Oracle

I am developing a PL/SQL script using TOAD. At this point in the development, I am debugging it. This has involved: wrapping a section in BEGIN/END, running it with F5, receiving the error info, fixing the problem, and repeating.
All of a sudden, out of nowhere, I am receiving
ORA-00604: error occurred at recursive SQL level 2
ORA-01654: unable to extend index SYS.I_OBJ5 by 128 in tablespace SYSTEM
The script begins with a drop table/create table set of instructions for a simple 2-field table, in my logon schema. After this started happening, I narrowed the part I am re-running to just one line: drop table <tblName>
In trying to narrow this down, I finally went to the TOAD Schema Browser, right-clicked on the table, and selected "Drop table" from the context menu — same result.
I must have run this statement 120 times yesterday without it giving me any trouble. Now? Not happening! I am really stumped. Did all those runs maybe fill up some area that is now full? Part of this script opens file-system files. I didn't know I had to close them afterwards, and I ran into "This action would result in 'too many files open'" errors (each iterative run opened one more). Have I done something similar by dropping and recreating this table so many times?

I agree with @Peter M, most likely your SYSTEM tablespace is full.
The error message says it quite clearly: unable to extend index ... in tablespace SYSTEM means that Oracle ran out of space while trying to make an index bigger. The SYSTEM tablespace is used by Oracle for internal purposes, for instance the data dictionary (the list of tables and columns). It is therefore quite important, normally well supervised by DBAs, and kept clear of other objects such as developer tables. The schema name SYS also points in this direction.
The other hint is recursive SQL: Oracle runs not only your SQL (like CREATE TABLE) but sometimes needs to do some housekeeping, like updating that list of tables, which is also done via SQL. This second flavour is called recursive SQL.
I'd guess therefore that it is not your table itself that is causing the SYSTEM tablespace to overflow, but the many repeated drops and creates, each of which updates the data dictionary.
If this happened at my place of work, I'd have gotten a friendly phone call from a DBA by now, asking what's going on...
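To confirm this, a quick look at how much free space is left in SYSTEM might look like the query below; it assumes you (or your DBA) can read the DBA_FREE_SPACE view:
-- rough check of remaining free space in the SYSTEM tablespace
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
FROM dba_free_space
WHERE tablespace_name = 'SYSTEM'
GROUP BY tablespace_name;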

Related

How do I select without using rollback segments?

I'm trying to export something using a select statement that runs for a very long time, and I've been getting ORA-01555 snapshot too old errors. I searched for this error and it has something to do with the select statement using rollback segments (the undo tablespace).
How do I select without getting this error? I don't care about the integrity of the results I'm going to get or any other consequences that this may bring about.
Oracle does not allow you to read inconsistent results and does not provide the corresponding isolation level READ UNCOMMITTED (if that even counts as an isolation level). If you don't care about consistency, you may split the query into several parts (using different WHERE clauses). If you would like to fix the error, you would have to resize the undo tablespace (or change the undo retention), but this is a job for a DBA (if it is necessary at all).
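If the DBA route is taken, the kind of change involved looks roughly like this (the retention value and datafile path are placeholders, not values from your system):
-- keep undo for about an hour
ALTER SYSTEM SET undo_retention = 3600;
-- or grow the undo tablespace's datafile
ALTER DATABASE DATAFILE '/u01/oradata/undotbs01.dbf' RESIZE 2G;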

Sequences (used as IDs) issue in Oracle SQL Developer

I am using sequences to create IDs, so when the insert stored procedure executes it creates a unique value for the ID. But after some time the definition for the sequence is lost.
I am not sure why this keeps happening or how to solve the problem.
I am using Oracle SQL Developer, and in the Edit Table properties there is an 'Identity Column' setting. See below:
The next step is setting up the trigger and sequence:
It was working fine for some time, until this property reverted. Now it is not there anymore:
I still have the trigger and sequence objects in the schema and am able to set it up again, but it will break again later.
How can I avoid this problem in the future?
I think it is just a bug/limitation in your client software, Oracle SQL Developer. The "Identity Column" tab is a handy way to create the corresponding sequence and trigger, but it doesn't seem to recognise existing elements. I've just verified this on my own system and that's exactly what happens.
It makes sense, because adding a new sequence and trigger is a pretty straightforward task (all you need is a template), but displaying the current sequence is hard, given that a trigger can implement any conceivable logic. Surely it could be done, but the cost-benefit ratio probably left things this way.
In short, your app is not broken so nothing needs to be fixed on your side.
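For reference, what the "Identity Column" tab generates behind the scenes is roughly the usual sequence-plus-trigger pattern below (the object names are illustrative, not your actual ones):
CREATE SEQUENCE table1_seq;

CREATE OR REPLACE TRIGGER table1_trg
  BEFORE INSERT ON table1
  FOR EACH ROW
BEGIN
  -- only assign an ID when the caller didn't supply one
  IF :new.id IS NULL THEN
    SELECT table1_seq.NEXTVAL INTO :new.id FROM dual;
  END IF;
END;
/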
This is what I received from IT support regarding the issue:
A few possibilities that might cause this:
1 - Another user with limited privileges might be editing the table using SQL Developer. In this case, if that user's privileges are not sufficient to obtain the sequence and/or trigger information from the database, the tool might leave the fields blank and disable them when the table changes are saved.
2 - The objects are being changed or removed outside of SQL Developer, causing it to lose the information. In my tests I noticed that dropping the trigger and recreating it with the same name caused the identity property information to be lost in SQL Developer.
Even with the trigger enabled and working for inserts, it could not retrieve the information.
Then, if I run an ALTER TRIGGER to enable it (even though dba_triggers reports it as already enabled), SQL Developer will list the information again:
ALTER TRIGGER "AWS"."TABLE1_TRG" ENABLE;
So it looks like there are some issues with SQL Developer that are causing this behavior.
Next time it happens, please check whether the trigger still exists in the database and is enabled, using the query below:
select owner, trigger_name, trigger_type, triggering_event, table_owner, table_name, status
from dba_triggers
where trigger_name = 'ENTER_YOUR_TRG_NAME'; -- just change the trigger name in the WHERE clause

OracleDataReader seems to execute an update statement

I am using Oracle client 11.2.0
DLL version 4.112.3.0
We have a page in our application where people can enter a SQL statement and retrieve results; basically we do an OracleCommand.ExecuteReader.
Recently one of my team members entered an UPDATE statement as a test and it actually performed an update on a record!
Anyone who has encountered this?
Regards
Sid.
It is normal (albeit a bit unsettling) behavior. ExecuteReader is expected to execute the SQL command provided as CommandText and build a DbDataReader that you use to loop over the results.
Whether the command returns any rows to read is not something the reader guards against, and it is not expected to check that your command is really a SELECT statement.
Think, for example, of passing a stored procedure name, or of having multiple SQL statements to execute (an INSERT followed by a SELECT).
I think the biggest problem here is the fact that you allow an arbitrary SQL command typed by your users to reach the database engine. That is a very big security hole. You should, at least, run some analysis on the query text before submitting it to the database engine.
I agree with Steve. Your reader will execute any command, and might get a bit confused if it's not a select and doesn't return a result set.
To prevent people from modifying anything, create a new user and grant select only (no update, no delete, no insert) on your tables to that user (grant select on tablename to seconduser). Then, logged in as seconduser, create synonyms for your tables (create synonym tablename for realowner.tablename). Have your application connect to the DB as seconduser. This should prevent people from "hacking" your site. If you want to be on the safe side, grant no permissions but CREATE SESSION to the second user, to prevent him from creating tables, dropping your views, and similar stuff (I'd guess your ExecuteReader won't allow DDL, but test it to make sure).
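A minimal sketch of that setup, assuming a table MYTABLE owned by REALOWNER (all names are placeholders):
-- as a DBA
CREATE USER seconduser IDENTIFIED BY a_strong_password;
GRANT CREATE SESSION TO seconduser;
GRANT SELECT ON realowner.mytable TO seconduser;

-- logged in as seconduser
CREATE SYNONYM mytable FOR realowner.mytable;
The application then connects as seconduser and can read realowner's tables through the synonyms but cannot change them.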

I get an ORA-01775: looping chain of synonyms error when I use sqlldr

I apologize for posting a question that seems to have been asked numerous times on the internet, but I can't quite fix it for some reason.
I was trying to populate some tables using Oracle's magical sqlldr utility, but it throws an ORA-01775 error for some reason.
Everywhere I go on Google, people say something along the lines of: "Amateur, get your synonyms sorted out" (that was paraphrased) and that's nice and all, but I did not make any synonyms.
Here is what does not work on my system:
SQLPLUS user/password
SQL>CREATE TABLE test (name varchar(10), id number);
SQL>exit
Then, I have a .ctl file with the following contents:
load data
characterset utf16
infile *
append
into table test
(name,
id
)
begindata
"GURRR" 4567
Then I run this command:
sqlldr user#localhost/password control=/tmp/controlfiles/test.ctl
The result:
SQL*Loader-702: Internal error - ulndotvcol: OCIStmtExecute()
ORA-01775: looping chain of synonyms
Part of test.log:
Table TEST, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
NAME FIRST 2 CHARACTER
ID NEXT 2 CHARACTER
SQL*Loader-702: Internal error - ulndotvcol: OCIStmtExecute()
ORA-01775: looping chain of synonyms
And, if I try to do a manual insert:
SQL> insert into test values ('aa', 56);
1 row created.
There is no problem.
So, yeah, I am stuck!
If it helps, I am using Oracle 11g XE on CentOS.
Thanks for the help guys, I appreciate it.
EDIT:
I kind of, sort of figured out part of the problem. The problem was that somewhere along the line, maybe during a failed load or something, Oracle had given itself corrupt views and synonyms.
The affected views were: GV_$LOADISTAT, GV_$LOADPSTAT, V_$LOADISTAT and V_$LOADPSTAT. I am not quite sure why the views got corrupted, but recompiling them resulted in "compiled with errors" errors. The synonyms used in the queries themselves were corrupt, namely the gv$loadistat, gv$loadpstat, v$loadistat and v$loadpstat synonyms.
I wasn't sure why this was happening and I didn't quite understand it. So, I decided to drop the synonyms and recreate them. Unfortunately, I couldn't recreate them, as the views they pointed to (there is a bit of weird recursion going on here...) were corrupt. These were the aforementioned GV_$LOADISTAT and the other views. In other words, the synonyms pointed to the views that used those synonyms. Talk about a looping chain.
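(For the record, you can see where each of these synonyms points with something like the query below, assuming you can read DBA_SYNONYMS:)
-- list the targets of the load-statistics synonyms
SELECT owner, synonym_name, table_owner, table_name
FROM dba_synonyms
WHERE synonym_name LIKE 'GV$LOAD%STAT'
   OR synonym_name LIKE 'V$LOAD%STAT';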
So... I recreated the public synonyms, but instead of specifying the view as GV_$LOADISTAT, I specified it as sys.GV_$LOADISTAT, e.g.
DROP PUBLIC synonym GV$LOADISTAT;
CREATE PUBLIC synonym GV$LOADISTAT for sys.GV_$LOADISTAT;
Then, I recreated the user views to point to those public synonyms.
CREATE OR REPLACE FORCE VIEW "USER"."GV_$LOADISTAT" ("INST_ID", "OWNER", "TABNAME", "INDEXNAME", "SUBNAME", "MESSAGE_NUM", "MESSAGE")
AS
SELECT "INST_ID",
"OWNER",
"TABNAME",
"INDEXNAME",
"SUBNAME",
"MESSAGE_NUM",
"MESSAGE"
FROM gv$loadistat;
That seemed to fix the views/synonyms. Yeah, it is a bit of a hack, but it somehow worked. Unfortunately, this was not enough to run SQL*Loader; I got a "table or view does not exist" error.
I tried granting more permissions to my regular user, but it didn't work. So, I gave up and ran SQL*Loader as sysdba. It worked! It is not a good thing to do, but this is a development-only system made for testing purposes, so I didn't care.
I could not reproduce your looping synonym chain error, but it appears the control file needed a bit of work, at least for my environment.
I was able to get your example to work by modifying it thusly:
load data
infile *
append
into table test
fields terminated by "," optionally enclosed by '"'
(name,
id
)
begindata
"GURRR",4567

How to find out when an Oracle table was last updated

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "special admin stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in that block. For "normal" data access patterns, that is probably quite sufficient if the intention is just to minimize how often you reprocess rows that haven't changed. You can rebuild the table with ROWDEPENDENCIES, which causes ORA_ROWSCN to be tracked at the row level and gives you more granular information, but that requires a one-time effort to rebuild the table.
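A sketch of that approach (the table name and bind variable are illustrative):
-- one-time, only if row-level tracking is wanted; this rebuilds the table
CREATE TABLE mytable_new ROWDEPENDENCIES AS SELECT * FROM mytable;

-- on each run: remember the highest SCN processed and fetch only newer rows
SELECT MAX(ora_rowscn) FROM mytable;
SELECT * FROM mytable WHERE ora_rowscn > :last_seen_scn;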
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. He can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the view USER_AUDIT_OBJECT to determine whether there has been an insert on your table since the last export.
google for Oracle auditing for more info...
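For example, something along these lines (the object name and the audited actions depend on your setup):
-- start auditing DML on the table
AUDIT INSERT, UPDATE, DELETE ON someuser.sometable;

-- later, see what happened and when
SELECT obj_name, action_name, timestamp
FROM user_audit_object
WHERE obj_name = 'SOMETABLE'
ORDER BY timestamp DESC;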
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
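For instance, a crude whole-table checksum could look like the query below (the column list is illustrative; concatenate whatever columns matter to you):
-- hash each row, then sum the hashes into one comparable value
SELECT SUM(ORA_HASH(col1 || '|' || col2 || '|' || col3)) AS table_checksum
FROM mytable;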
Oracle can watch tables for changes and, when a change occurs, execute a callback function in PL/SQL or OCI. The callback gets an object that is a collection of the tables which changed, each with a collection of the rowids which changed and the type of action (insert, update, delete).
So you don't even go to the table; you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than the CDC that Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these requires changes to the application.
The caveat is that CDC is fine for high-volume tables, while DCN is not.
If table monitoring is enabled on the server, simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, and delete that sets a value in another table to sysdate (see the sketch below).
When you run the application, it would read that value and save it somewhere, so that the next time it runs it has a reference to compare against.
Would you consider that "Special Admin Stuff"?
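A rough sketch of that trigger idea (all names here are made up):
CREATE TABLE table_change_log (table_name VARCHAR2(30) PRIMARY KEY, last_change DATE);

CREATE OR REPLACE TRIGGER mytable_change_trg
  AFTER INSERT OR UPDATE OR DELETE ON mytable
BEGIN
  -- record the time of the latest change to MYTABLE
  MERGE INTO table_change_log l
  USING (SELECT 'MYTABLE' AS table_name, SYSDATE AS last_change FROM dual) s
  ON (l.table_name = s.table_name)
  WHEN MATCHED THEN UPDATE SET l.last_change = s.last_change
  WHEN NOT MATCHED THEN INSERT (table_name, last_change) VALUES (s.table_name, s.last_change);
END;
/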
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature, available since Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners that notify the application when changes occur.
Please use the statement below:
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'
