I found in this article that, since Oracle 10g, there is a way to make a particular connection/session compare strings case-insensitively, without needing any crazy SQL functions, using an ALTER SESSION.
Does anyone know if, in 11g, there might be a way to make the database always operate in this mode by default for all new connection-sessions, thereby eliminating the need for running ALTER SESSION every time you connect?
Or perhaps there is an additional parameter you could specify in the connection string that would turn the same on?
You could just set the NLS_SORT and NLS_COMP parameters mentioned in the article in the Oracle initialization file, using the alter system set <parameter> = <value>; syntax.
Info on using the alter system commands can be found here.
Here is a good link on the correct usage of the NLS_* parameters. Note that some settings of the NLS_SORT parameter can cause performance issues, namely when it is not set to BINARY. The Oracle docs state:
Setting NLS_SORT to anything other than BINARY causes a sort to use a full table scan, regardless of the path chosen by the optimizer. BINARY is the exception because indexes are built according to a binary order of keys. Thus the optimizer can use an index to satisfy the ORDER BY clause when NLS_SORT is set to BINARY. If NLS_SORT is set to any linguistic sort, the optimizer must include a full table scan and a full sort in the execution plan.
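One commonly cited mitigation (not from the quoted passage) is a linguistic, function-based index on NLSSORT, so the optimizer has a matching access path for the linguistic sort. A minimal sketch; the table and column names are illustrative:
CREATE INDEX emp_lastname_ci_ix
  ON employees (NLSSORT(last_name, 'NLS_SORT=BINARY_AI'));
-- With NLS_SORT=BINARY_AI and NLS_COMP=LINGUISTIC in the session, comparisons
-- and ORDER BY on last_name can use this index instead of a full scan.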
Sure you can!
Get your friendly DBA to set these parameters:
ALTER SYSTEM SET NLS_COMP=LINGUISTIC SCOPE=SPFILE;
ALTER SYSTEM SET NLS_SORT=BINARY_AI SCOPE=SPFILE;
This is taken from my short article on How to make Oracle Case Insensitive
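If you want a quick sanity check after the restart, a minimal sketch could look like this (the table and data are purely illustrative):
-- Both rows should be returned once NLS_COMP=LINGUISTIC and NLS_SORT=BINARY_AI are in effect.
CREATE TABLE ci_demo (name VARCHAR2(30));
INSERT INTO ci_demo VALUES ('Smith');
INSERT INTO ci_demo VALUES ('SMITH');
SELECT * FROM ci_demo WHERE name = 'smith';
DROP TABLE ci_demo;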
I tried using a logon trigger to issue these commands to get case-insensitive queries:
execute immediate 'alter session set NLS_SORT=BINARY_CI';
execute immediate 'alter session set NLS_COMP=LINGUISTIC';
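For reference, a minimal sketch of such a logon trigger, assuming a database-level AFTER LOGON trigger (the trigger name is illustrative):
CREATE OR REPLACE TRIGGER ci_logon_trg
AFTER LOGON ON DATABASE
BEGIN
  -- issue the same two session settings for every new connection
  EXECUTE IMMEDIATE 'alter session set NLS_SORT=BINARY_CI';
  EXECUTE IMMEDIATE 'alter session set NLS_COMP=LINGUISTIC';
END;
/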
And while that did give me CI, it also gave me unbelievably bad performance issues. We have one table in particular where, without those settings, inserts take 2 milliseconds; with those settings in place, inserts took 3 seconds. I have confirmed this by creating and dropping the trigger multiple times.
I don't know if doing it at the system level, as opposed to the session level with a trigger, makes a difference or not.
I found the same performance issue with inserts and NLS settings in 11g R2! Luckily for me the performance hit was not significant enough to require an app change.
If you can do without BINARY_CI for the INSERT, then I would do an ALTER SESSION just before the insert and another one afterwards, so you don't have to drop the trigger.
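A rough sketch of what that toggling could look like (the table and columns are illustrative):
ALTER SESSION SET NLS_SORT = BINARY;        -- back to binary for the hot insert path
ALTER SESSION SET NLS_COMP = BINARY;
INSERT INTO my_hot_table (id, name) VALUES (1, 'Example');
ALTER SESSION SET NLS_SORT = BINARY_CI;     -- restore case-insensitive behaviour afterwards
ALTER SESSION SET NLS_COMP = LINGUISTIC;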
Database : Oracle 12c (12.1.0.2) - Enterprise Edition with RAC
I'm trying to reduce REDO and archive logs generated for my application and measure using V$SYSSTAT and corresponding archive logs using DBA_HIST* views.
In my application code on the DB side, I'm using the session-level setting of TEMP_UNDO_ENABLED to direct UNDO for GTTs into the temporary tablespace. The specific feature is noted here.
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
INSERT INTO my_gtt VALUES...
Note the documentation has this quote:
..if the session already has temporary objects using regular undo, setting this parameter will have no effect
If I use a pure database session, I can ascertain that since no other temporary tables have been created/used before setting the parameter, the REDO logs generated are minimal. I can use a simple (select value from V$SYSSTAT where name= 'redo size') to see the difference.
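For instance, a minimal sketch of that measurement, assuming my_gtt is a simple one-column GTT:
SELECT value AS redo_before FROM v$sysstat WHERE name = 'redo size';
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
INSERT INTO my_gtt SELECT level FROM dual CONNECT BY level <= 10000;
SELECT value AS redo_after FROM v$sysstat WHERE name = 'redo size';
-- redo_after - redo_before should stay small when temporary undo is actually in effect.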
However, the actual application (Java) triggers this code through a JDBC session. As such, I'm unable to ascertain whether, before the call to 'ALTER SESSION..', any GTT or other temporary objects were previously created/used in the session. The consequence is that if, say, a GTT was already used, then the call to 'ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE' is simply ignored without any indication, and the code will continue logging UNDO and REDO in the normal tablespace, which is unintended.
Is there any way to query if this parameter TEMP_UNDO_ENABLED is already set/unset within the session, so that before I do a ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE I'll know for sure this will or will not have an effect?
Thanks in advance for inputs.
There is no holistic way to do this that satisfies all cases. Posting some options I got as answers elsewhere:
Assumptions: both options work only if:
1 - Only GTTs are concerned (excluding WITH and other temporary objects)
2 - COMMIT/ROLLBACK has not already been done, including from SAVEPOINTs or other methods
Option 1: Use v$tempseg_usage to check whether any temporary segment was created with segtype DATA instead of going to temporary undo:
select count(*)
from v$tempseg_usage
where contents = 'TEMPORARY'
and segtype = 'DATA'
and session_addr =
(select saddr
from v$session
where sid = sys_context('userenv', 'sid'));
Option 2: Use gv$transaction as below; ubafil = 0 when temporary undo is in use, otherwise ubafil is the undo tablespace file id:
select count(*)
from gv$transaction
where ses_addr = (select saddr
from v$session
where sid = sys_context('userenv', 'sid'))
and ubafil <> 0;
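A sketch that combines the Option 2 check with the ALTER SESSION, so the setting is only issued when it can still take effect (purely illustrative PL/SQL):
DECLARE
  l_active_undo_tx NUMBER;
BEGIN
  -- count transactions in this session that are already using regular undo
  SELECT COUNT(*)
    INTO l_active_undo_tx
    FROM gv$transaction
   WHERE ses_addr = (SELECT saddr
                       FROM v$session
                      WHERE sid = SYS_CONTEXT('userenv', 'sid'))
     AND ubafil <> 0;

  IF l_active_undo_tx = 0 THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE';
  END IF;  -- otherwise the setting would be silently ignored anyway
END;
/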
On another note, I still think there should be a parameter, or an indication elsewhere within the scope of a SESSION, that simply shows the setting of TEMP_UNDO_ENABLED has not had an effect, without having to touch views that would otherwise be considered administrative.
I'm open to answers if someone finds a better approach.
Although this does not answer your question directly, this link may help you.
In 12c the temporary undo concept has been added:
" Oracle 12c introduced the concept of Temporary Undo, allowing the
undo for a GTT to be written to the temporary tablespace, thereby
reducing undo and redo."
I am using sequences to create IDs, so while executing the insert stored procedure it will create a unique value for the ID. But after some time it loses the definition for the sequence.
Not sure why this is happening again and again, or how to solve the problem.
I am using Oracle SQL Developer, and in the Edit Table properties there is an 'Identity Column' setting. See below:
The next step is setting up the trigger and sequence:
It was working fine for some time until this property defaulted. Now it is not there anymore:
I still have the trigger and sequence objects in the schema and am able to set them up again, but it will break again later.
How to avoid this problem in future?
I think it is just a bug/limitation in your client software, Oracle SQL Developer. The "Identity Column" tab is a handy way to create the corresponding sequence and trigger, but it doesn't seem to recognise existing elements. I've just verified this on my own system and that's exactly what happens.
It makes sense, because adding a new sequence and trigger is a pretty straightforward task (all you need is a template), but displaying the current sequence is hard given that a trigger can implement any conceivable logic. Surely it could be done, but the cost-benefit ratio probably left things this way.
In short, your app is not broken so nothing needs to be fixed on your side.
This is what I received from IT support regarding the issue:
A few possibilities that might cause this:
1 - Another user with limited privileges might be editing the table using SQL Developer. In this case, if this user's privileges are not enough to obtain the sequence and/or trigger information from the database, the tool might leave the fields blank and disable them when the table changes are saved.
2 - The objects are being changed or removed outside of SQL Developer, causing it to lose the information. In my tests I noticed that dropping the trigger and recreating it with the same name caused the identity property information to be lost on SQL Developer.
Even with the trigger enabled and working for inserts, it could not retrieve the information.
Then, if I run an ALTER TRIGGER to enable it (even though dba_triggers reports it as already enabled), SQL Developer will list the information again:
ALTER TRIGGER "AWS"."TABLE1_TRG" ENABLE;
So it looks like there are some issues with SQL Developer that are causing this behavior.
Next time it happens, please check whether the trigger still exists in the database and is enabled, using the query below:
select owner, trigger_name, TRIGGER_TYPE, TRIGGERING_EVENT, TABLE_OWNER, TABLE_NAME, STATUS
from dba_triggers
where trigger_name = 'ENTER_YOUR_TRG_NAME'; --Just change the trigger name in WHERE
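Similarly, you can confirm the sequence object still exists (just change the sequence name in the WHERE clause; the name below is a placeholder):
select owner, sequence_name, last_number
from dba_sequences
where sequence_name = 'ENTER_YOUR_SEQ_NAME';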
I'm using an external tool that scans tables in my database. It uses dba_objects.last_ddl_time to determine which tables have been scanned. Obviously, this strategy does not work if the table data is modified in between scans so sometimes I have to help it...
I need a way to "bump" the Last DDL time without actually changing anything.
I'm looking for the simplest possible instant DDL statement that can be executed on any table, knowing just the table name.
I have sysdba privileges.
Edit:
For example, I can use comment on table xxx is 'Boom'; but then I lose the original comment. I know how to fix this, but then it is no longer a small and easy statement I can quickly time in SQL*Plus.
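For completeness, the kind of round-trip fix alluded to could look roughly like this: read the existing comment and re-apply it, which bumps LAST_DDL_TIME without losing anything (the table name is illustrative):
DECLARE
  l_comment user_tab_comments.comments%TYPE;
BEGIN
  SELECT comments
    INTO l_comment
    FROM user_tab_comments
   WHERE table_name = 'XXX';
  -- re-issue the same comment, escaping any embedded quotes
  EXECUTE IMMEDIATE 'COMMENT ON TABLE xxx IS ''' ||
                    REPLACE(l_comment, '''', '''''') || '''';
END;
/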
Changing LOGGING/NOLOGGING is pretty fast (though not instant).
If you set the LOGGING attribute back to itself, it will notch the LAST_DDL_TIME without making any real change to the table. The example below tries to touch every table except SYS-owned tables (presumably you'd want more limits here):
BEGIN
  FOR TABLE_POINTER IN (SELECT OWNER, TABLE_NAME,
                               DECODE(LOGGING, 'YES', 'LOGGING', 'NOLOGGING') DO_LOGGING
                          FROM DBA_TABLES
                         WHERE OWNER NOT IN ('SYSTEM','SYS','SYSBACKUP','MDSYS') -- etc. other restrictions here
                       )
  LOOP
    EXECUTE IMMEDIATE UTL_LMS.FORMAT_MESSAGE('ALTER TABLE %s.%s %s',
                                             TABLE_POINTER.OWNER,
                                             TABLE_POINTER.TABLE_NAME,
                                             TABLE_POINTER.DO_LOGGING);
  END LOOP;
END;
/
EDIT: The above wouldn't work with temp tables. An alternative such as setting PCT_FREE to itself or another suitable attribute may be preferable. You may need to handle IOTs, Partitioned Tables, etc. differently than the rest of the tables as well.
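For example, re-applying PCTFREE to itself would be a similarly harmless touch (the owner and table here are illustrative):
SELECT pct_free FROM dba_tables WHERE owner = 'SCOTT' AND table_name = 'EMP';
ALTER TABLE scott.emp PCTFREE 10;   -- re-apply the value just read, which bumps LAST_DDL_TIME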
Every once and a while when I'm executing the following statement:
alter session set skip_unusable_indexes=true;
I'm getting the following error:
ORA-03135: connection lost contact
ORA-02063: preceding line from my_dblink
What does skipping indexes have to do with my dblink?
How can I detect the problematic index?
How can I limit the scope of the above statement only to my local indexes?
1) What does skipping indexes have to do with my dblink?
It has no relation. Please elaborate on how you are getting the issue. Is it that you log in to SQL*Plus and, as soon as you alter the session, your DB link disconnects?
2) How can I detect the problematic index?
select STATUS,index_name,table_name from user_indexes where status='UNUSABLE';
select STATUS,index_name,table_name from user_indexes where status!='VALID' and status!='N/A';
3) How can I limit the scope of the above statement only to my local indexes?
I believe you meant the indexes on the connected database and not on the DB link's database. You cannot do this; it is a session or system setting.
I suspect that this is a bug, or at least an unimplemented feature.
When you set your session to skip unusable indexes, you're modifying the query optimiser/parser behaviour, and I suspect that this modification cannot be "pushed" to the remote instance that you have made a connection to.
I also suspect that the key to avoiding this problem, if it can be avoided, is to alter the session before referencing any database links, but even then I would not be surprised if the remote database does not implement the modification, as it is effectively a different session.
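A rough sketch of that ordering, using the dblink name from the error message (the remote table name is illustrative):
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- set before touching anything remote
SELECT COUNT(*) FROM remote_table@my_dblink;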
I am using Oracle 11g R2 and trying to configure the database to sort using a linguistic sort.
I did
alter system set NLS_SORT='RUSSIAN' SCOPE=SPFILE;
alter system set NLS_COMP='LINUGUISTIC' SCOPE=SPFILE;
After I restarted Oracle I checked these parameters:
show parameters NLS_SORT;
show parameters NLS_COMP;
They show me the right values.
But when I sort:
select name from test order by name;
it shows me the results in the wrong order, i.e. digits first, then letters.
But if I do
alter session set nls_sort='RUSSIAN';
alter session set nls_comp='LINGUISTIC';
select name from test order by name;
it shows me the right order.
Does anyone know why the system-level changes are not giving me the right results?
The priority for globalisation settings is shown in the documentation. You're setting priority 4 in that list, 'Specified in the initialization parameter file'. You are not setting priority 1 ('Explicitly set in SQL functions') and you get the results you want when you do set priority 2 ('Set by an ALTER SESSION statement'). By a process of elimination that indicates that your 'not correct' order is being influenced by priority 3, 'Set as an environment variable'.
You can check the values actually being used by your session with select * from nls_session_parameters.
The NLS_SORT environment variable is probably not being set directly; I suspect it's being derived from NLS_LANGUAGE, which is derived from NLS_LANG. If you aren't explicitly setting that in your operating system environment then the client will set it based on the operating system locale, generally, though the exact client you use may make a significant difference. You might need to explicitly set an NLS_COMP environment variable, if the database default for that is really being overridden.
SQL Developer, for example, allows you to specify the NLS settings in the preferences (accessed from Tools->Preferences->Database->NLS); the defaults appear to be based on operating system settings, in Windows anyway. For SQL*Plus you'd need to set operating system environment variables.
This also means that if you get it working in one place - the queries give the right order when run from SQL Developer, say - they might not work when used elsewhere, say over JDBC which has its own locale settings. Just something to watch out for.
A brute-force approach might be to add the alter session commands to a login trigger, but that doesn't sound ideal as it just masks the environment configuration.
You can set NLS parameters at different levels
As initialization parameters on the instance/server.
SQL> alter system set V$NLS_PARAMETER = 'XXX' scope = both;
As environment variables on the client.
% setenv NLS_SORT FRENCH
As ALTER SESSION parameters.
SQL> ALTER SESSION SET V$NLS_PARAMETER = 'XXX';
Any setting overrides the setting on a higher level. So setting it server side does not guarantee that the setting is used by all clients connecting.
If you want to make sure it is set for every client connecting, use a logon trigger. Even then a user can explicitly override the 'default' setting.
You've got a typo in your second "ALTER SYSTEM" command (LINUGUISTIC instead of LINGUISTIC).
If your real command doesn't contain this error, I'd check whether your client sets the NLS session parameters to something else.
Regardless of the system settings, I would make every effort to ensure that your applications completely specify the NLS environment that they require. It's much more robust, particularly when you need to point the application code at different environments that may be newly setup, or shared with other systems.
In fact, I'd go as far as to say that you might be better off not using system-level environment settings at all.