How to make specific instance of Oracle DB case insensitive? - oracle

I would like to make a specific INSTANCE of Oracle Database 11g Enterprise Edition CASE INSENSITIVE. Can anyone let me know if this is possible? I have already read a few blogs explaining how the whole DB can be made CASE INSENSITIVE, but I didn't find anything about doing it for a specific instance.

You need to play around with the NLS_SORT and NLS_COMP parameters. But don't implement this directly in production, as you might hit performance issues if you do not do the extra bit of work required.
I am not sure what you mean by "a specific instance". Is it a RAC environment? Do you want one of the nodes to always have case insensitive sessions?
Anyway, you can do it at SESSION level as well as SYSTEM level. But do you really want to implement it the HARD way at SYSTEM level?
For example, at session level :
SQL> alter session set nls_comp='LINGUISTIC';
Session altered
SQL> alter session set nls_sort='BINARY_CI';
Session altered
At system level :
You could have an AFTER LOGON trigger that executes the alter session statements (see the sketch below). Eek, but that could kill performance. You will also have to figure out where you need to create case-insensitive function-based indexes. In my experience, case-insensitive search has always been slower, no matter whether the required index is in place.
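A hedged sketch of both pieces, the logon trigger and a matching linguistic function-based index; the trigger, index, table and column names here are hypothetical:
-- Logon trigger that makes every new session case-insensitive.
CREATE OR REPLACE TRIGGER app_ci_logon_trg
AFTER LOGON ON DATABASE
BEGIN
  EXECUTE IMMEDIATE q'[ALTER SESSION SET NLS_COMP = 'LINGUISTIC']';
  EXECUTE IMMEDIATE q'[ALTER SESSION SET NLS_SORT = 'BINARY_CI']';
END;
/
-- Matching function-based index so case-insensitive predicates can still use
-- an index instead of forcing a full table scan.
CREATE INDEX emp_last_name_ci_ix
  ON employees (NLSSORT(last_name, 'NLS_SORT=BINARY_CI'));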
You can have a look at my article for a detailed explanation: http://lalitkumarb.wordpress.com/2014/01/22/oracle-case-insensitive-sorts-compares/

Related

Simulate Oracle Index Without Creating It

Is there a way for me to test a new index (i.e. in memory?) without actually creating it yet? I would like to test it out and see if the Explain Plan is better before I hand off the index creation to the DBA.
My database is Oracle 12c.
Starting with 11g, Oracle introduced invisible indexes for exactly this reason. The basic idea is simple.
You create the new index as invisible, i.e. no other session will see it, so no session will suffer possible negative side effects (note that, contrary to popular belief, more indexes does not automatically mean better performance - a new index can ruin the performance of some queries).
Only a session that sets OPTIMIZER_USE_INVISIBLE_INDEXES can use and test the invisible index. Only after you are sure there are no negative effects should you ALTER the index to make it visible.
See here for more details
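A minimal sketch of that workflow; the table, column and index names are hypothetical:
-- Create the candidate index as invisible so no other session is affected.
CREATE INDEX orders_status_ix ON orders (status) INVISIBLE;
-- Opt in for the current session only, then check the plans.
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;
EXPLAIN PLAN FOR SELECT * FROM orders WHERE status = 'OPEN';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Once you are happy with the effect, expose it to everyone...
ALTER INDEX orders_status_ix VISIBLE;
-- ...or drop it if it did not help:
-- DROP INDEX orders_status_ix;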
Plain answer: NO. Invisible indexes still need to be created on storage, not in memory.
Create a subset of your data stored in the same way - i.e. same tablespace, storage location, etc. - to reduce the time needed to create the index. Unfortunately this is not a 100% solution. Ask your DBA whether there are times when the DB is not used heavily and create the index during that slot.

dynamic_sampling hint in queries causing issues in oracle 12c

We were told by the DBA that our application causes trouble on the servers.
There are queries that start like the following:
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
OPT_ESTIMATE(#"innerQuery", TABLE, "THIS_#21", SCALE_ROWS=0.0007347778778)
*/ SUM(C1) FROM ...
and they crash the server; we receive ORA-12537.
We are using NHibernate, but I am fairly sure those queries are not generated by our application. The queries have no meaning in our business logic; they are just some random joins. We don't have SQL trace rights, but in the logs the DBA gives us, those queries are executed under our module name.
I googled and found out that DS_SVC is a comment for some service queries that Oracle 12c uses for dynamic sampling.
Our queries are not exactly complex - a couple of left joins with a rownum limit of 1000.
So the question is: can I say those DS_SVC queries are a problem on the DBA's side? If so, where can I find some documentation to prove it?
Looks like a 12c bug. See if changing this helps. Can ask Oracle support as well.
ALTER SESSION SET "_fix_control"='7452863:0';
https://www.pythian.com/blog/performance-problems-with-dynamic-statistics-in-oracle-12c/
The DYNAMIC_SAMPLING hint is used to let the CBO collect cardinality estimates at run time.
It looks like the algorithm has been changed in 12c and dynamic sampling is triggered in a broader set of use cases. This behavior can be disabled at statement, session or system level using the fix control for bug 7452863. For example:
ALTER SESSION SET "_fix_control"='7452863:0';
Those queries are generated by the optimizer itself. The feature is called "dynamic sampling". Until 11g this was used by default only when there were no stats on the tables.
Since 12c, dynamic sampling can also be triggered by another new feature, "adaptive execution plans" - for example, in situations where histograms are missing on columns.
Generally this is quite complex DBA stuff to deal with. There are various ways to fix adaptive execution plans or to disable them partially or completely.
The best you can do is contact Oracle support.
We added the /*+ dynamic_sampling(0) */ hint to our queries. It helped; the exception is gone.
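For illustration, a hedged sketch of where such a hint would sit in a query like the ones described above; the table and column names are hypothetical:
SELECT /*+ dynamic_sampling(0) */ o.id, o.status, c.name
  FROM orders o
  LEFT JOIN customers c ON c.id = o.customer_id
 WHERE ROWNUM <= 1000;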

Forcing Oracle to use Primary Key Index without using Hints

We have an application that generates some temporary tables and then processes the data. I don't really have control over the way the application creates these tables or the subsequent queries involved. What we have noticed is that Oracle uses a full table scan instead of the index which is the primary key of the tables. If it used the primary key index the process would run a whole lot faster.
Since I do not have control over the select queries generated by the application, I cannot use hints to force Oracle to use the primary key index. Is there any other setting I could change somewhere that could force Oracle to use the primary key index for the temporary tables?
The two most common reasons for a query not using indexes are:
It's quicker to do a full table scan.
Poor statistics.
If your queries are selecting all of the table, or doing joins without mentioning a primary key in the where clause, etc., chances are it's quicker to do a full scan. Without the query and indexes, and preferably an explain plan as well, it's impossible to tell for certain.
I would, however, recommend that you ask your DBA to re-gather - or, if it has never been done, gather for the first time - statistics on the table. Use dbms_stats.gather_table_stats with an estimate percentage of 25% or more.
If the tables are re-created each time the application is run, then try to gather statistics after creation and primary key generation. If they are truncated and re-filled each time, then ask your DBA to rebuild them and the PK and then gather statistics, as stale statistics can significantly increase query runtime.
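A hedged sketch of the stats-gathering call mentioned above; the schema and table names are hypothetical:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',    -- hypothetical schema
    tabname          => 'TEMP_ORDERS',  -- hypothetical temporary table
    estimate_percent => 25,
    cascade          => TRUE            -- also gather statistics on its indexes
  );
END;
/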
With no control over anything I don't see how you can improve the query time any other way.
You can use hints without changing SQL by leveraging SQL Profiles. Wrap your hint(s) into a SQL Profile that takes effect for that particular SQL ID.
I understand you don't have control over the SQL; I have many apps where I encounter the same restriction. After checking the query structure and statistics as in Ben's post, and once you have proved that hinting to use the index improves performance, why not try a manually created SQL Profile?
Christian Antognini has a great paper here about SQL Profiles and creating them manually. The paper mentions creating SQL Profiles manually is undocumented. I would agree undocumented, but that doesn't necessarily mean unsupported. I would say there is little documentation out there, but if you want proof that Oracle allows manual creation, check the API or look at the coe_xfr_sql_profile.sql file in the SQLT utility directory.
I also posted a cheatsheet on how to quickly manually create a SQL Profile here.
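For illustration only, a hedged sketch of the manual approach described in Antognini's paper, using the undocumented DBMS_SQLTUNE.IMPORT_SQL_PROFILE call; the SQL text, hint and profile name here are hypothetical:
DECLARE
  -- The hint(s) you proved helpful, expressed as a SQLPROF_ATTR collection.
  l_hints SYS.SQLPROF_ATTR := SYS.SQLPROF_ATTR(
    'INDEX(@"SEL$1" "T"@"SEL$1" "T_PK")'
  );
BEGIN
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => 'SELECT * FROM t WHERE id = :1',  -- must match the SQL the application issues
    profile     => l_hints,
    name        => 'force_pk_index_profile',
    force_match => TRUE  -- also match statements that differ only in literals
  );
END;
/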

Implementing user-defined db parameters/properties in Oracle

OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to prod_support@abc.com in production and qa_support@abc.com in testing).
These parameters need to be accessed both from PL/SQL code running in the database and from client-side code. I started out doing this by overloading the PLSQL_CCFLAGS init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
[Edit]
Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.
You want to have a separate set of values for each environment. You want these values to be independent of the data, so that they don't get overridden if you import data from another instance.
The solution is to use an external table (provided you are on 9i or higher). Because external tables hold the data in an OS file, they are independent of the database. To apply changed values, all you need to do is overwrite the OS file.
All you need to do is ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server then you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
The drawback to using external tables is that they can be a bit of a performance overhead - they are really intended for bulk loading. If this is likely to be a problem you could use caching, with a user-defined namespace or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values with wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
While I've been writing this up I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup, which you can also call whenever you refresh the OS file.
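A hedged sketch of that combination - an external table over a config file plus a globally accessible context used as a cache; the directory, file, table, context and package names are all hypothetical:
-- External table reading name=value pairs from an OS file.
CREATE OR REPLACE DIRECTORY app_config_dir AS '/u01/app/config';
CREATE TABLE app_parameters_ext (
  param_name   VARCHAR2(64),
  param_value  VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY app_config_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '='
  )
  LOCATION ('app_parameters.txt')
);
-- Globally accessible context, loaded from the external table.
CREATE OR REPLACE PACKAGE app_params_pkg AS
  PROCEDURE load_all;  -- call at startup and after each config file refresh
END app_params_pkg;
/
CREATE OR REPLACE CONTEXT app_params USING app_params_pkg ACCESSED GLOBALLY;
CREATE OR REPLACE PACKAGE BODY app_params_pkg AS
  PROCEDURE load_all IS
  BEGIN
    FOR r IN (SELECT param_name, param_value FROM app_parameters_ext) LOOP
      DBMS_SESSION.SET_CONTEXT('app_params', r.param_name, r.param_value);
    END LOOP;
  END load_all;
END app_params_pkg;
/
-- Retrieval from SQL or PL/SQL:
-- SELECT SYS_CONTEXT('app_params', 'email_error_address') FROM dual;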
You could use environment variables that you can set per oracle user (the account that starts up the Oracle database) or per server. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
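A small hedged sketch of reading an environment variable from PL/SQL; the variable name is hypothetical, and note that DBMS_SYSTEM is an undocumented package your DBA may need to grant access to:
DECLARE
  l_value VARCHAR2(4000);
BEGIN
  DBMS_SYSTEM.GET_ENV('APP_ENVIRONMENT', l_value);  -- hypothetical variable name
  DBMS_OUTPUT.PUT_LINE('APP_ENVIRONMENT = ' || l_value);
END;
/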
I tend to use a system_parameters table. If you're concerned about it being overwritten, put it in its own schema and create a public synonym.
@APC's answer is clever.
You could solve the performance overhead by adding a materialized view on top of the external table(s). You would refresh it after RMAN-cloning, and after each update of the config files.

How to disable oracle cache for performance tests

I'm trying to test the utility of a new summary table for my data.
So I've created two procedures to fetch the data for a certain interval, each one using a different table source, and in my C# console application I just call one or the other. The problem starts when I want to repeat this several times to get a good pattern of response times.
I got something like this: 1199,84,81,81,81,81,82,80,80,81,81,80,81,91,80,80,81,80
Probably my Oracle 10g is doing some inappropriate caching.
How can I solve this?
EDIT: See this thread on asktom, which describes how and why not to do this.
If you are in a test environment, you can put your tablespace offline and online again:
ALTER TABLESPACE <tablespace_name> OFFLINE;
ALTER TABLESPACE <tablespace_name> ONLINE;
Or you can try
ALTER SYSTEM FLUSH BUFFER_CACHE;
but again, only in a test environment.
When you test on your "real" system, the times you get after the first call (those using cached data) might be more interesting, since in normal operation the data will be cached as well. Call the procedure twice, and only consider the performance results you get from the subsequent executions.
Probably my Oracle 10g is doing some inappropriate caching.
Actually it seems like Oracle is doing some entirely appropriate caching. If these tables are going to be used a lot then you would hope to have them in cache most of the time.
edit
In a comment on Peter's response, Luis said:
flushing before the call I got some interesting results like: 1370,354,391,375,352,511,390,375,326,335,435,334,334,328,337,314,417,377,384,367,393.
These findings are "interesting" because the flush means the calls take a bit longer than when the rows are in the DB cache, but not as long as the first call. This is almost certainly because the server has stored the physical records in its physical cache. The only way to avoid that, to truly run against an empty cache, is to reboot the server before every test.
Alternatively, learn to tune queries properly. Understanding how the database works is a good start, and EXPLAIN PLAN is a better tuning aid than the wall clock. Find out more.
