Skipping unusable indexes causes dblink error - oracle

Every once in a while, when I'm executing the following statement:
alter session set skip_unusable_indexes=true;
I'm getting the following error:
ORA-03135: connection lost contact
ORA-02063: preceding line from my_dblink
What does skipping indexes have to do with my dblink?
How can I detect the problematic index?
How can I limit the scope of the above statement only to my local indexes?

1) What does skipping indexes have to do with my dblink?
It has no direct relation. Please elaborate on how you are hitting the issue: do you log in to SQL*Plus, and as soon as you alter the session, your DB link disconnects?
2) How can I detect the problematic index?
select status, index_name, table_name from user_indexes where status = 'UNUSABLE';
select status, index_name, table_name from user_indexes where status != 'VALID' and status != 'N/A';
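Once you have spotted the unusable index, a plain rebuild normally brings it back (the index name below is a placeholder):
alter index my_unusable_index rebuild;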
3) How can I limit the scope of the above statement only to my local indexes?
I believe you mean the indexes in the database you are connected to, not those on the far side of the DB link. You cannot restrict it that way: it is a session- or system-level setting.
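For what it's worth, the parameter can be set at either scope; a sketch (the ALTER SYSTEM variant needs the appropriate privilege):
alter session set skip_unusable_indexes=true;  -- current session only
alter system set skip_unusable_indexes=true;   -- instance-wide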

I suspect that this is a bug, or at least an unimplemented feature.
When you set your session to skip unusable indexes, you're modifying the query optimiser/parser behaviour, and I suspect that this modification cannot be "pushed" to the remote instance that you have made a connection to.
I also suspect that the key to avoiding this problem, if it can be avoided, is to alter the session before referencing any database links, but even then I would not be surprised if the remote database does not implement the modification, as it is effectively a different session.
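If you want to experiment with that ordering, a minimal sketch would be (the remote table name is hypothetical; my_dblink is the link from the error message):
alter session set skip_unusable_indexes=true;
select count(*) from some_remote_table@my_dblink;  -- first remote reference only after the alter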

Related

How do I select without using rollback segments?

I'm trying to export something using a select statement that runs for a very long time, and I've been getting ORA-01555 snapshot too old errors. I searched for this error, and it has something to do with the select statement using rollback segments (the undo tablespace).
How do I select without getting this error? I don't care about the integrity of the results I'm going to get or any other consequences that this may bring about.
Oracle does not allow reading inconsistent results and does not provide the corresponding isolation level "read uncommitted" (if this is an isolation level at all). If you don't care about consistency, you may split the query into several parts (using different where clauses), as sketched below. If you would like to fix the error, you would have to resize the undo tablespace (or change the undo retention), but this is a job for a DBA (if it is necessary).
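A minimal sketch of the splitting idea, assuming a numeric key you can range over (table and column names are hypothetical):
-- export in chunks; each shorter query needs a smaller consistent snapshot
select * from big_table where id >= 1 and id < 1000000;
select * from big_table where id >= 1000000 and id < 2000000;
-- ...and so on for the remaining ranges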

Sequences (using as ID) issue in Oracle SQL Developer

I am using sequences to create IDs, so while executing the insert stored procedure it will create a unique value for the ID. But after some time the table loses the definition of the sequence.
I am not sure why this is happening again and again, or how to solve the problem.
I am using Oracle SQL Developer, and in the table editor there is an 'Identity Column' setting.
The next step is setting up the trigger and sequence.
It was working fine for some time, until this property reverted to its default. Now it is not there anymore.
I still have the trigger and sequence objects in the schema and am able to set them up again, but it will break later.
How can I avoid this problem in the future?
I think it is just a bug/limitation in your client software, Oracle SQL Developer. The 'Identity Column' tab is a handy way to create the corresponding sequence and trigger, but it doesn't seem to recognise existing elements. I've just verified this on my own system, and that's exactly what happens.
It makes sense: adding a new sequence and trigger is a pretty straightforward task (all you need is a template; see the sketch below), but displaying the current sequence is hard, given that a trigger can implement any conceivable logic. Surely it could be done, but the cost-benefit ratio probably left things this way.
In short, your app is not broken, so nothing needs to be fixed on your side.
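For reference, the template the tab generates boils down to the classic sequence-plus-trigger pattern; a sketch (all object names below are placeholders):
create sequence table1_seq;

create or replace trigger table1_trg
before insert on table1
for each row
begin
  if :new.id is null then
    select table1_seq.nextval into :new.id from dual;
  end if;
end;
/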
This is what I received from IT support regarding the issue:
A few possibilities that might cause this:
1 - Another user with limited privileges might be editing the table using SQL Developer. In this case, if this user's privilege is not enough to obtain the sequence and/or trigger information from the database, the tool might leave the fields blank and disable it when table changes are saved.
2 - The objects are being changed or removed outside of SQL Developer, causing it to lose the information. In my tests I noticed that dropping the trigger and recreating it with the same name caused the identity property information to be lost on SQL Developer.
Even with the trigger enabled and working for inserts, it could not retrieve the information.
Then, if I run an alter trigger to enable it (even though dba_triggers reports it as already enabled), SQL Developer will list the information again:
ALTER TRIGGER "AWS"."TABLE1_TRG" ENABLE;
So it looks like there are some issues with SQL Developer that cause this behavior.
Next time it happens, please check whether the trigger still exists in the database and is enabled, using the query below:
select owner, trigger_name, TRIGGER_TYPE, TRIGGERING_EVENT, TABLE_OWNER, TABLE_NAME, STATUS
from dba_triggers
where trigger_name = 'ENTER_YOUR_TRG_NAME'; --Just change the trigger name in WHERE
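It may also be worth confirming that the sequence object itself is still present (again, just change the name):
select sequence_owner, sequence_name, last_number
from dba_sequences
where sequence_name = 'ENTER_YOUR_SEQ_NAME'; --Just change the sequence name in WHERE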

Not able to insert the same record after connection interruption

I was inserting some records into a production table, and before I could commit, I lost the connection, so none of the records got inserted.
Now, when I try to insert the same records, SQL*Plus hangs and the data is not saved.
But when I try other records, ones I had not inserted before, those records are inserted fine.
I have checked the table again for the data; the previous data has not been stored anywhere.
SQL*Plus is not generating any error either, so I cannot look up the error and try to rectify it.
Can anyone please help me analyse and troubleshoot the problem?
While inserting in Oracle, the connection was lost; now I am not able to add the same data.
If your SQL*Plus session hangs, it's probably being blocked by your previous session. To find the offending session, you can use (requires DBA privileges):
select * from v$lock where block = 1
This should give you the session ID of the blocking session. Now you can run
select * from v$session
and check whether the session ID returned by the first query indeed belongs to your previous session. To kill the session, use the command
alter system kill session '<SID>,<serial#>'
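As a sketch, both steps can be combined into a single query (it needs the same DBA privileges):
select s.sid, s.serial#, s.username, s.status
from v$session s
where s.sid in (select l.sid from v$lock l where l.block = 1);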

Can someone explain ORA-29861 error in plain english and its possible cause?

I have an application implemented in Grails framework using underlying Hibernate. After it runs for a while, I got an Oracle DB error and resolved it by rebuilding the offending index. I wonder if anyone can propose the possible cause(s) and ways to prevent it from happening.
Caused by:
org.springframework.jdbc.UncategorizedSQLException:
Hibernate operation: Could not execute JDBC batch update;
uncategorized SQLException for SQL [update RSS_ITEM set guid=?,
pubdate=?, link=?, rss_source_id=?, title=?, description=?,
rating_raw=?, rating_tuned=?, date_created=?, date_locked=? where
RSS_ITEM_ID=?]; SQL state [99999]; error code [29861]; ORA-29861:
domain index is marked LOADING/FAILED/UNUSABLE
; nested exception is java.sql.BatchUpdateException:
ORA-29861:
domain index is marked LOADING/FAILED/UNUSABLE
To locate the broken index, use:
select index_name, index_type, status, domidx_status, domidx_opstatus
from user_indexes
where index_type like '%DOMAIN%'
and (domidx_status <> 'VALID' or domidx_opstatus <> 'VALID');
To rebuild the index use:
alter index INDEX_NAME rebuild;
Domain indexes are a special type of index. It is possible to build your own using the Oracle Data Cartridge Interface (ODCI), but the chances are you're using one of the index types offered by Oracle Text. I say this because your table seems to include free-text columns.
The most commonly used Text index is the CTXSYS.CONTEXT index type. The point about this index type is that it is not maintained transactionally, so as to minimize the effort involved in indexing large documents. This means that when you insert or update a document in your table, it is not indexed immediately. Instead, a background process, such as a database job, kicks off index synchronization on a regular basis. The index is unusable while it is being synchronized. If the resync fails for any reason, you will need to drop and recreate the index.
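If you don't want to wait for the background job, you can kick off the synchronization by hand with CTX_DDL.SYNC_INDEX (the index name is a placeholder; you need the CTXAPP role or execute privilege on CTX_DDL):
begin
  ctx_ddl.sync_index('my_text_index');
end;
/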
Is this a regular occurrence for you? If so you may need to re-appraise your application. Perhaps a different sort of index (such as CTXSYS.CTXCAT) might be more appropriate. One thing which strikes me about your error message is that your UPDATE statement touches a lot of columns, including what looks like the primary key. This makes me think you have a single generic update statement which sets every column regardless of whether it has actually changed. This is bad practice with normal indexes; it will kill your application if you are using text indexes.
http://ora-29861.ora-code.com/
Cause: An attempt has been made to access a domain index that is being built, or is marked FAILED by an unsuccessful DDL, or is marked UNUSABLE by a DDL operation.
Action: Wait if the specified index is marked LOADING. Drop the specified index if it is marked FAILED. Drop or rebuild the specified index if it is marked UNUSABLE.
That should hopefully be enough context. Can you figure out the problem from that?

ORACLE 11g case insensitive by default

I found in this article that, since Oracle 10g, there is a way to make a particular connection-session compare strings case-insensitively, without needing any crazy SQL functions, using an ALTER SESSION.
Does anyone know if, in 11g, there might be a way to make the database always operate in this mode by default for all new connection-sessions, thereby eliminating the need for running ALTER SESSIONs every time you connect?
Or perhaps, an additional parameter you could specify on your connection string that would turn the same on?
You could just set the NLS_SORT and NLS_COMP parameters mentioned in the article as the values in the Oracle init file, using the alter system set <parameter> = <value>; clause.
Info on using the alter system commands can be found here.
Here is a good link on the correct usage of the NLS_* parameters. Note that some settings of the NLS_SORT parameter can cause performance issues, namely when it is not set to BINARY. The Oracle docs state:
Setting NLS_SORT to anything other than BINARY causes a sort to use a full table scan, regardless of the path chosen by the optimizer. BINARY is the exception because indexes are built according to a binary order of keys. Thus the optimizer can use an index to satisfy the ORDER BY clause when NLS_SORT is set to BINARY. If NLS_SORT is set to any linguistic sort, the optimizer must include a full table scan and a full sort in the execution plan.
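One common way around this is a function-based index that matches the linguistic sort, so the optimizer can still use an index; a sketch (table, column and index names are hypothetical):
create index emp_lastname_ci_ix
on employees (nlssort(last_name, 'NLS_SORT=BINARY_AI'));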
Sure you can!
Get your friendly DBA to set these parameters:
ALTER SYSTEM SET NLS_COMP=LINGUISTIC SCOPE=SPFILE;
ALTER SYSTEM SET NLS_SORT=BINARY_AI SCOPE=SPFILE;
This is taken from my short article on How to make Oracle Case Insensitive
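Once the instance has been restarted (the settings above are SCOPE=SPFILE), you can verify that new sessions pick the values up; NLS_SESSION_PARAMETERS is a standard dictionary view:
select parameter, value
from nls_session_parameters
where parameter in ('NLS_COMP', 'NLS_SORT');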
I tried using a logon trigger to issue these commands to get case-insensitive queries:
execute immediate 'alter session set NLS_SORT=BINARY_CI';
execute immediate 'alter session set NLS_COMP=LINGUISTIC';
And while that did give me CI, it also gave me unbelievably bad performance issues. We have one table in particular where, without those settings, inserts take 2 milliseconds, and with those settings in place, inserts took 3 seconds. I have confirmed this by creating and dropping the trigger multiple times.
I don't know if doing it at the system level, as opposed to the session level with a trigger, makes a difference or not.
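For reference, a minimal version of such a logon trigger looks roughly like this (the trigger name is a placeholder; creating it requires the ADMINISTER DATABASE TRIGGER privilege):
create or replace trigger ci_logon_trg
after logon on database
begin
  execute immediate 'alter session set NLS_SORT=BINARY_CI';
  execute immediate 'alter session set NLS_COMP=LINGUISTIC';
end;
/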
I found the same performance issue with inserts and NLS in 11g R2! Luckily for me, the performance hit was not significant enough to require an app change.
If you can do without BINARY_CI for the INSERT, I would do an alter session just before and another right after the insert, so you don't have to drop the trigger.
