Is there a reason why Oracle is case sensitive and others like SQL Server, and MySQL are not by default?
I know that there are ways to enable/disable case sensitivity, but it just seems weird that Oracle differs from other databases.
I'm also trying to understand the reasons for case sensitivity. I can see how "Table" and "TaBlE" could be considered either equivalent or distinct, but is there an example where case sensitivity actually makes a difference?
I'm somewhat new to databases and am currently taking a class.
By default, Oracle identifiers (table names, column names, etc.) are case-insensitive. You can make them case-sensitive by using quotes around them (eg: SELECT * FROM "My_Table" WHERE "my_field" = 1). SQL keywords (SELECT, WHERE, JOIN, etc.) are always case-insensitive.
On the other hand, string comparisons are case-sensitive (eg: WHERE field='STRING' will only match columns where it's 'STRING') by default. You can make them case-insensitive by setting NLS_COMP and NLS_SORT to the appropriate values (eg: LINGUISTIC and BINARY_CI, respectively).
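For example, a minimal sketch at session level (the employees table and column are hypothetical; NLS_COMP and NLS_SORT are the real parameters):
-- default behaviour: the comparison is case-sensitive
SELECT * FROM employees WHERE last_name = 'SMITH';
-- switch this session to case-insensitive comparisons
ALTER SESSION SET NLS_COMP = LINGUISTIC;
ALTER SESSION SET NLS_SORT = BINARY_CI;
-- now 'Smith', 'SMITH' and 'smith' all match
SELECT * FROM employees WHERE last_name = 'smith';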
Note: when querying data dictionary views (eg: dba_tables), identifier names appear in upper case if you created them without quotes, and the string comparison rules explained in the previous paragraph apply to them.
Some databases (Oracle, IBM DB2, PostgreSQL, etc.) will perform case-sensitive string comparisons by default, others case-insensitive (SQL Server, MySQL, SQLite). This isn't standard by any means, so just be aware of what your db settings are.
Oracle actually treats field and table names in a case-insensitive manner unless you use quotes around the identifiers. If you create a table without quotes around the name, for example CREATE TABLE MyTable ..., the resulting table name is converted to upper case (i.e. MYTABLE) and is treated case-insensitively. SELECT * from MYTABLE, SELECT * from MyTable and SELECT * from myTabLe will all match MYTABLE (note the lack of quotes around the table name). Here is a nice article that discusses this issue in more detail and compares databases.
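A small sketch of the difference (the table names are just examples):
-- unquoted: stored as MYTABLE, matched case-insensitively
CREATE TABLE MyTable (id NUMBER);
SELECT * FROM mytable;        -- works
SELECT * FROM MYTABLE;        -- works
-- quoted: stored exactly as written and must always be quoted the same way
CREATE TABLE "My_Table" (id NUMBER);
SELECT * FROM "My_Table";     -- works
SELECT * FROM my_table;       -- ORA-00942: table or view does not exist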
Keep in mind too that for SQL Server, case sensitivity is based on the collation. The default collation is case-insensitive, but it can be changed to a case-sensitive one. A similar question is why the default Oracle databases use a Western European character set when a UTF character set is needed for global applications that use non-ASCII characters. I think it's just a vendor preference.
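In SQL Server the choice can even be made per comparison with a COLLATE clause; a hedged sketch (the customers table and the collation names are just examples):
-- case-insensitive comparison (a typical default collation)
SELECT * FROM customers WHERE name = 'smith' COLLATE Latin1_General_CI_AS;
-- force a case-sensitive comparison for this predicate only
SELECT * FROM customers WHERE name = 'smith' COLLATE Latin1_General_CS_AS;
-- check the server's default collation
SELECT SERVERPROPERTY('Collation');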
If I had to guess, I'd say for historical/backwards-compatibility reasons.
Oracle first came out in 1977, and it was likely computationally expensive with the technology at the time to do the extra work for case-insensitive searches, so they just opted for exact matches.
For some applications case-sensitivity is important and for others it isn't. Whichever DBMS you use, business requirements should determine whether you need case-sensitivity or not. I wouldn't worry too much about the "default".
We recently found that using resultSetType=TYPE_SCROLL_INSENSITIVE in prepareStatement(String sql, int resultSetType, int resultSetConcurrency), instead of the default prepareStatement(String sql), can cause DB2 to generate/use a different access plan, which can lead to significantly different query times depending on table size, filter range, etc.
We use DB2 LUW V10.5 FP8.
Just wondering if there are any guidelines on how to choose this resultSetType attribute, in terms of performance optimization, for DB2 or other RDBMSs in general? Btw, we don't modify our ResultSet, so using *INSENSITIVE is allowed for us.
TYPE_SCROLL_INSENSITIVE should be used with care. The default is ASENSITIVE, which allows Db2 to create either an INSENSITIVE or a SENSITIVE cursor, depending on what it thinks is best.
With Db2 for i at least, INSENSITIVE causes the DB to create a copy of the data.
We just got an outsourced system, and at first glance I can see some tables and fields with names such as CASE or FROM. It is an Oracle 10g DB and we are going to be consuming that data from Java, Hibernate, C#, and C++.
Is there something special we should be aware of?
From what I've seen in other posts this is not recommended because it will affect the readability of our code, but are there any other, more serious problems this could cause?
Thanks!
To escape reserved words in Oracle, you need to enclose them (in this case, the table name) in double quotes. IE:
SELECT *
FROM "CASE"
Otherwise, you'll get an "ORA-00903: invalid table name" error. IIRC, Oracle treats text inside double quotes as case-sensitive, so you could still get that error with the example query if the table was created with a lowercase name.
Other than that, I can only see the usual issue with poorly named entities/attributes.
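If you are unsure how those names were actually stored, a quick check against the data dictionary (assuming the tables are in your own schema) shows the exact case you will need to quote:
SELECT table_name FROM user_tables WHERE UPPER(table_name) IN ('CASE', 'FROM');
-- then quote exactly what comes back, e.g.
SELECT * FROM "CASE";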
I am comparing the results of two different Oracle JDBC drivers with the help of Oracle's SQL tracing capabilities. I am using TKProf to format the results.
When I look at the output of TKProf, sometimes I see parameters prefixed with 'v':
WHERE start_time BETWEEN :v0 AND :v1
At other instances the parameters are not prefixed:
WHERE start_time BETWEEN :1 AND :2
I suspect that in the second case the query optimizer is not picking some indexes.
Is there a hint in the naming convention of parameters?
I'm certainly not an expert in how the JDBC drivers talk to the database. With luck, someone else will provide that detail.
I believe, though, that the parameter names shouldn't mean anything. That should just be what the particular driver decides to call them when it's sending the query to the database. But if you can peek at the actual values of the bind variables, that might tell you something. My concern would be that one driver is setting it up so the values have to go through a cast on the way to running the query, which could affect index use.
They're private bind variable names picked by the client software; possibly in your Java code the query has BETWEEN ? AND ?, which has to be translated into something Oracle will understand. The names are almost irrelevant - certainly nothing to do with indexes or optimisation. I say 'almost' because I'm not sure whether Oracle will see them as the same query, or will do a separate hard parse of each.
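If you have access to v$sql, one way to see whether the two texts are parsed as separate cursors, and to peek at the captured bind values, is a sketch like this (filter on your own statement text and sql_id):
SELECT sql_id, sql_text, executions FROM v$sql WHERE sql_text LIKE '%start_time BETWEEN :%';
-- look at the bind values and datatypes the driver actually sent
SELECT name, datatype_string, value_string FROM v$sql_bind_capture WHERE sql_id = '&sql_id';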
I've been developing a .NET project on Oracle (version 10.2) for the last couple of months and was using VARCHAR2 for my string data fields. This was fine, and when navigating the project, page refreshes were never more than half a second, if that (it's quite a data-intensive project). The data is referenced from two different schemas: one a centralised store of data, and one my own. The centralised schema will be changing to become Unicode-compliant (but hasn't yet), so all VARCHAR2 fields will become NVARCHAR2. In preparation for this I changed all the fields in my schema to NVARCHAR2, and since then performance has been horrible, with page refreshes taking up to 30-40 seconds.
Could this be because VARCHAR2 fields in the centralised schema are now being joined against NVARCHAR2 fields in my schema in some stored procedures? I know NVARCHAR2 is twice the size of VARCHAR2, but that wouldn't explain such a sudden, massive change. Any tips on what to look for would be great; if I haven't explained the scenario well enough, do ask for more information.
Firstly, do a
select * from v$nls_parameters where parameter like '%SET%';
Character sets can be complicated. You can have single-byte character sets, fixed-size multi-byte character sets, and variable-sized multi-byte character sets. See the Unicode descriptions here.
Secondly, if you are joining a string in a single-byte character set to a string in a two-byte character set, you have a choice. You can do a binary/byte comparison (which generally won't match anything when comparing a single-byte character set against a two-byte character set). Or you can do a linguistic comparison, which generally means some CPU cost, as one value is converted into the other, and often the failure to use an index.
Indexes are ordered, A,B,C etc. But a character like Ä may fall in different places depending on the Linguistic order. Say the index structure puts Ä between A and B. But then you do a linguistic comparison. The language of that comparison may put Ä after Z, in which case the index can't be used. (Remember your condition could be a BETWEEN rather than an = ).
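One common mitigation is a function-based index that matches the linguistic comparison, so the optimizer can use it again; a sketch (the customers table is hypothetical and GERMAN is just an example sort):
CREATE INDEX cust_name_ling_ix ON customers (NLSSORT(name, 'NLS_SORT = GERMAN'));
-- with NLS_COMP = LINGUISTIC and NLS_SORT = GERMAN this predicate can use the index
SELECT * FROM customers WHERE name = 'Müller';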
In short, you'll need a lot of preparation, both in your schema and the central store, to enable efficient joins between different character sets.
It is difficult to say anything based on what you have provided. Did you manage to check if the estimated cardinalities and/or explain plan changed when you changed the datatype to NVARCHAR2? You may want to read the following blog post to see if you can find a lead
http://joze-senegacnik.blogspot.com/2009/12/cbo-oddities-in-determing-selectivity.html
The database is likely no longer able to use indexes that it previously could. As Narendra suggests, check the explain plan to see what changed. It is possible that once the centralised store is changed the indexes will again be usable. I suggest testing that path.
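A quick way to check is to compare the plan of one of the affected joins before and after the type change; a sketch (the schema, table and column names are hypothetical):
EXPLAIN PLAN FOR
SELECT m.id
FROM central_schema.central_tab c
JOIN my_schema.my_tab m ON m.name_nv = c.name_v;  -- NVARCHAR2 joined to VARCHAR2
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- an implicit conversion on the VARCHAR2 side usually shows up as SYS_OP_C2C(...) in the
-- predicate section, and it prevents a plain index on that column from being used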
Setting the NLS_LANG client setting properly is essential to proper data conversion. The character set specified by NLS_LANG should reflect the setting of the client operating system; this enables proper conversion from the client operating system code page to the database character set. When the NLS_LANG character set is the same as the database character set, Oracle assumes the data being sent or received is already encoded in the database character set, so no validation or conversion is performed. This can lead to corrupt data if a conversion was actually necessary.
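You can check what the database itself uses before deciding what to put in NLS_LANG on the clients:
SELECT parameter, value FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');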
One thing I always wonder while writing a query is whether I am writing the most optimized query or not. I know certain things like:
1) using SELECT field1, field2 instead of SELECT *
2) Giving proper indexes to the tables
but I am sure there are more things that should be kept in mind when writing queries, since most databases only grow over time and an optimal query helps with execution time. Can you share some tips and tricks on writing queries?
Testing is the best way to measure performance. Monitor your queries on the live database and make use of things like the slow query log.
I would also recommend enabling the query cache, which will give most typical usage situations a massive boost.
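For MySQL (up to 5.7; the query cache was removed in 8.0), a quick way to see whether these are on, and to enable the slow query log:
SHOW VARIABLES LIKE 'query_cache%';
SHOW VARIABLES LIKE 'slow_query%';
-- log statements that take longer than 1 second
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;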
Use proper data types for your fields
Use the back-tick character (`) for reserved keywords (see the example after this list)
When dealing with multiple tables, try using joins
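For instance, a small sketch of the back-tick point (the table and column names are made up):
-- `order` and `group` are reserved words, so they need back-ticks in MySQL
SELECT `order`, `group`
FROM `purchases`
WHERE `order` > 100;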
See: 20 SQL Tips
As well as the Do's and Don'ts, you may find the Hidden Features of MySQL useful.
As a matter of fact, no "tips" can help you.
Database design requires deep knowledge, not tips.
These "don'ts" always carry different weight. Most such lists dwell on the most unimportant things and fail to mention the important ones. Your list, for example, would read like this on a culinary forum:
Always use a knife with a black handle
To prepare a good dish you need to choose the proper ingredients.
The first one sounds impressive but never helps in the real world.
The second one is right, but must be backed by deep knowledge to get it right.
So it has to be a book, not tips. The ones by Paul DuBois are among the recommended.
Use the following fields in every table (see the sketch after this list):
tablename_id (auto-increment, unsigned, zerofill)
created_by (timestamp)
tablerow_status (ENUM('t','f'), default 't')
Always add a comment when you create a field in MySQL (it helps when you search in phpMyAdmin).
Always take care of the normalization forms.
If a field will always be positive, make it UNSIGNED.
Use the DECIMAL data type instead of FLOAT in some cases (e.g. a discount that is at most 99.99%, so DECIMAL(5,2)).
Use the DATE and TIME data types where needed; don't use TIMESTAMP everywhere.
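A minimal sketch of a MySQL table following these conventions (all names are just examples):
CREATE TABLE customer (
  customer_id      INT UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT COMMENT 'surrogate key',
  created_by       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation timestamp',
  discount         DECIMAL(5,2) UNSIGNED NOT NULL DEFAULT 0.00 COMMENT 'DECIMAL, not FLOAT, for exact values',
  signup_date      DATE NULL COMMENT 'a DATE; TIMESTAMP is not needed here',
  tablerow_status  ENUM('t','f') NOT NULL DEFAULT 't' COMMENT 'soft-delete / active flag',
  PRIMARY KEY (customer_id)
) ENGINE=InnoDB;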
Correlated subqueries can perform very badly, but they are often not well understood and end up in production. They can often be fixed by using a derived table and a join instead.
http://en.wikipedia.org/wiki/Correlated_subquery
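A hedged sketch of the rewrite (customers and orders are hypothetical tables). The correlated form can re-execute the inner query once per outer row; the derived-table form aggregates once and joins:
-- correlated subquery
SELECT c.id, c.name
FROM customers c
WHERE 100 < (SELECT SUM(o.amount) FROM orders o WHERE o.customer_id = c.id);
-- rewritten with a derived table and a join
SELECT c.id, c.name
FROM customers c
JOIN (SELECT customer_id, SUM(amount) AS total
      FROM orders
      GROUP BY customer_id) t ON t.customer_id = c.id
WHERE t.total > 100;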
One more thing I found today concerns the difference between COUNT(*) and COUNT(col):
Using COUNT(*) is faster than COUNT(col).
MyISAM tables cache the number of rows in the table; InnoDB does not cache the row count, so COUNT(*) may be slower without a WHERE clause.
It is better to count a NOT NULL column, for both MyISAM and InnoDB, than a column where NULL is allowed.
More details here
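Note also that the two are not interchangeable: COUNT(col) skips NULLs, so the results can differ. A sketch with a hypothetical customers table:
SELECT COUNT(*) AS all_rows,
       COUNT(phone) AS rows_with_phone  -- rows where phone IS NULL are not counted
FROM customers;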