VarChar2 to Char comparison: can an Oracle setting allow this?

Just a quick question to understand how I can get two different results for the same thing.
We have two databases which are built up exactly the same in terms of structure.
In both, there is a view that compares a VARCHAR2(10) with a CHAR(10), where the fields are only filled to a length of 7 (plus 3 trailing spaces for the CHAR, of course).
Of course something is wrong in our structure there, but that's a separate issue from my question.
How is it possible that one database is able to do the comparison (varchar2 = char) and the other one is not?
Is there some Oracle setting which can allow this?
Thanks for the help,
Grts,
Maarten

It's probably bug 11726301, "Wrong Result with query_rewrite_enabled=false and Joins of CHAR with Other CHAR and VARCHAR2 Columns".
Fixed in 11.2.0.3.
The workaround is to set query_rewrite_enabled=true.
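To test the workaround, the parameter can be changed at session level before applying it instance-wide (standard ALTER SESSION/ALTER SYSTEM syntax; the system-level change requires the appropriate privilege):

-- Try the workaround for the current session only
ALTER SESSION SET query_rewrite_enabled = TRUE;

-- Once confirmed, apply it instance-wide
ALTER SYSTEM SET query_rewrite_enabled = TRUE;

As background, comparing a VARCHAR2 expression to a CHAR expression normally uses non-padded comparison semantics, so trailing blanks are significant and a 7-character VARCHAR2 value does not equal its blank-padded CHAR(10) counterpart.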

Related

Are EXTENDED VARCHAR2 columns banned from becoming synchronized record fields?

My app is built on an Oracle Database.
Would it be possible to overcome the 4000-byte text limitation we have for Synchronized Record fields in Appian?
I know VARCHAR2(4000) is considered a standard column type by Oracle, and that choosing EXTENDED for the max_string_size parameter in the DB would make larger columns an "extended data type", as CLOB is. But since CLOB is forbidden from becoming a Sync Record Type field, would my large VARCHAR2 columns also be forbidden?
I'm asking in case anyone has tried it. If no one has, I will ask the DBA, but it could be easier to ask here ;)
My opinion is: don't bother with the extended string option. It provides no real benefit. A VARCHAR2 longer than 4000 bytes is a CLOB under a different name.
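For context, a minimal sketch of what becomes legal once the DBA has converted the instance to extended semantics (this assumes max_string_size=EXTENDED has already been enabled via Oracle's documented upgrade procedure):

-- Illegal with max_string_size=STANDARD (ORA-00910: specified length
-- too long for its datatype); legal with EXTENDED. Values beyond 4000
-- bytes are stored out of line, much like a LOB, either way.
CREATE TABLE big_text_demo (
  id   NUMBER PRIMARY KEY,
  body VARCHAR2(32767)
);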

Oracle-DB: Reliable way to calculate the length of a field in CHAR from all_tab_columns.data_length?

I've written a very simple database access layer that maps C# objects onto database tables and back. It uses the information in all_tab_columns to do some input validation (mainly length checking). It works fine on ISO-encoded databases, but on UTF-8 databases it produces wrong results. I tried data_length / LENGTHB('ä'), which sometimes seems to work and sometimes doesn't. I'm aware it's a dirty hack, but I haven't found a pattern yet. Is there a reliable way to calculate the CHAR length of a VARCHAR2 field from data_length?
I found the answer on my own. ALL_TAB_COLUMNS provides a field CHAR_LENGTH that contains the maximum number of characters in the column. Example:
SELECT column_name, char_length FROM all_tab_columns WHERE table_name = 'SOME_TABLE';
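If you are validating input, the CHAR_USED column in the same view is also worth selecting; it records whether the declared length is in bytes ('B') or characters ('C'):

SELECT column_name, data_length, char_length, char_used
FROM all_tab_columns
WHERE table_name = 'SOME_TABLE';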

Inserting char variables into table oracle sql developer

I have a table CMP with two fields, CMP_CODE VARCHAR2(20) and CMP_NAME VARCHAR2(50).
When I try to insert an entry like '001' into CMP_CODE, it always gets inserted as '1'.
My statement was like
INSERT INTO CMP (CMP_CODE, CMP_NAME) VALUES ('007', 'test');
The problem was not there previously, but I recently re-installed our XE database; could that be the cause?
Your valuable help in this regard is highly appreciated. Thanks in advance.
The field type VARCHAR2 itself is probably NOT responsible for stripping your zeros.
The error is more likely in your application. If you use a numeric variable type (e.g. Integer, Long, Float, Decimal), this behavior is entirely normal and in most cases desirable.
But with so little information about your situation, it is hard to tell what is really going on.
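A minimal sketch of the suspected cause, using the asker's CMP table: if the value passes through a number anywhere on the way in, the leading zeros are gone before VARCHAR2 ever sees them:

-- A string literal keeps its leading zeros
INSERT INTO CMP (CMP_CODE, CMP_NAME) VALUES ('007', 'string literal');

-- A numeric literal is implicitly converted and arrives as '7'
INSERT INTO CMP (CMP_CODE, CMP_NAME) VALUES (007, 'numeric literal');

SELECT CMP_CODE, CMP_NAME FROM CMP;
-- First row: '007'   Second row: '7'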

Oracle comparison of empty string and space: a bug? [duplicate]

I know that it considers '' to be NULL, but that doesn't do much to tell me why this is the case. As I understand the SQL specifications, '' is not the same as NULL -- one is a valid datum, and the other indicates the absence of that same information.
Feel free to speculate, but please indicate if that's the case. If there's anyone from Oracle who can comment on it, that'd be fantastic!
I believe the answer is that Oracle is very, very old.
Back in the olden days before there was a SQL standard, Oracle made the design decision that empty strings in VARCHAR/VARCHAR2 columns were NULL and that there was only one sense of NULL (there are relational theorists that would differentiate between data that has never been prompted for, data where the answer exists but is not known by the user, data where there is no answer, etc. all of which constitute some sense of NULL).
By the time the SQL standard came around and agreed that NULL and the empty string were distinct entities, there were already Oracle users with code that assumed the two were equivalent. So Oracle was basically left with the options of breaking existing code, violating the SQL standard, or introducing some sort of initialization parameter that would change the behavior of a potentially large number of queries. Violating the SQL standard (IMHO) was the least disruptive of these three options.
Oracle has left open the possibility that the VARCHAR data type would change in a future release to adhere to the SQL standard (which is why everyone uses VARCHAR2 in Oracle since that data type's behavior is guaranteed to remain the same going forward).
Tom Kyte, VP of Oracle:
A ZERO length varchar is treated as NULL.
'' is not treated as NULL.
'' when assigned to a char(1) becomes ' ' (char types are blank-padded strings).
'' when assigned to a varchar2(1) becomes '', which is a zero-length string, and a zero-length string is NULL in Oracle (it is no longer '').
Oracle documentation alerts developers to this problem, going back at least as far as version 7.
Oracle chose to represent NULLS by the "impossible value" technique. For example, a NULL in a numeric location will be stored as "minus zero", an impossible value. Any minus zeroes that result from computations will be converted to positive zero before being stored.
Oracle also chose, erroneously, to consider the VARCHAR string of length zero (the empty string) to be an impossible value, and a suitable choice for representing NULL. It turns out that the empty string is far from an impossible value. It's even the identity under the operation of string concatenation!
Oracle documentation warns database designers and developers that some future version of Oracle might break this association between the empty string and NULL, and break any code that depends on that association.
There are techniques to flag NULLS other than impossible values, but Oracle didn't use them.
(I'm using the word "location" above to mean the intersection of a row and a column.)
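The concatenation-identity point is easy to verify in any Oracle session:

-- NULL behaves as the identity under || in Oracle,
-- exactly as an empty string would
SELECT 'abc' || NULL AS with_null,
       'abc' || ''   AS with_empty
FROM dual;
-- Both columns return 'abc'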
I suspect this makes a lot more sense if you think of Oracle the way earlier developers probably did -- as a glorified backend for a data entry system. Every field in the database corresponded to a field in a form that a data entry operator saw on his screen. If the operator didn't type anything into a field, whether that's "birthdate" or "address" then the data for that field is "unknown". There's no way for an operator to indicate that someone's address is really an empty string, and that doesn't really make much sense anyways.
According to official 11g docs
Oracle Database currently treats a character value with a length of zero as null. However, this may not continue to be true in future releases, and Oracle recommends that you do not treat empty strings the same as nulls.
Possible reasons:
- val IS NOT NULL is more readable than val != ''
- No need to check both conditions val != '' and val IS NOT NULL
The empty string is the same as NULL simply because it's the "lesser evil" compared to the situation where the two (empty string and null) are not the same.
In languages where NULL and the empty string are not the same, one always has to check both conditions.
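A minimal sketch of that difference, against a hypothetical table t with a VARCHAR2 column val:

-- Oracle: one predicate covers both 'missing' and 'empty'
SELECT * FROM t WHERE val IS NOT NULL;

-- Databases that distinguish the two need both checks:
-- SELECT * FROM t WHERE val IS NOT NULL AND val <> '';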
Example from a book:
set serveroutput on

DECLARE
  empty_varchar2 VARCHAR2(10) := '';
  empty_char     CHAR(10) := '';
BEGIN
  IF empty_varchar2 IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('empty_varchar2 is NULL');
  END IF;

  IF '' IS NULL THEN
    DBMS_OUTPUT.PUT_LINE(''''' is NULL');
  END IF;

  IF empty_char IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('empty_char is NULL');
  ELSIF empty_char IS NOT NULL THEN
    DBMS_OUTPUT.PUT_LINE('empty_char is NOT NULL');
  END IF;
END;
/
-- All three IS NULL messages print: the zero-length string assigned to
-- both variables is NULL, so even the CHAR(10) variable stays NULL.
Because not treating it as NULL isn't particularly helpful, either.
If you make a mistake in this area on Oracle, you usually notice right away. In SQL Server, however, it will appear to work, and the problem only appears when someone enters an empty string instead of NULL (perhaps from a .NET client library, where null is different from "", but you usually treat them the same).
I'm not saying Oracle is right, but it seems to me that both ways are approximately equally bad.
Indeed, I have had nothing but difficulties dealing with Oracle, including invalid datetime values (they cannot be printed or converted, only inspected with the DUMP() function) that were allowed to be inserted into the database, apparently through some buggy version of the client, as a binary column! So much for protecting database integrity!
Oracle handling of NULLs links:
http://digitalbush.com/2007/10/27/oracle-9i-null-behavior/
http://jeffkemponoracle.com/2006/02/empty-string-andor-null.html
First of all, null and the null string were not always treated as the same by Oracle. A null string is, by definition, a string containing no characters. This is not at all the same as a null. NULL is, by definition, the absence of data.
Five or six years ago, the null string was treated differently from null by Oracle. While, like null, the null string was equal to everything and different from everything (which I think is fine for null, but totally WRONG for the null string), at least length(null string) would return 0, as it should, since the null string is a string of zero length.
Currently in Oracle, length(null) returns null, which I guess is O.K., but length(null string) also returns null, which is totally WRONG.
I do not understand why they decided to start treating these two distinct "values" the same. They mean different things, and the programmer should be able to act on each in different ways. The fact that they have changed their methodology tells me that they really don't have a clue as to how these values should be treated.
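The current behavior the answer describes is easy to confirm in any Oracle session:

SELECT LENGTH(NULL) AS len_null,
       LENGTH('')   AS len_empty
FROM dual;
-- Both return NULL in current Oracle releases, not 0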

Char Vs Byte in Oracle

I am comparing two databases which have similar schemas. Both should support Unicode characters.
When I describe the same table in both databases, DB 1 shows all the VARCHAR2 fields with CHAR semantics (e.g. VARCHAR2(20 CHAR)), but DB 2 shows them without (VARCHAR2(20)), so the second schema allows only one byte per character.
When I compare NLS_DATABASE_PARAMETERS and V$NLS_PARAMETERS in both databases, they are all the same.
Could someone let me know what the difference might be here?
Have you checked NLS_LENGTH_SEMANTICS? You can set the default to BYTE or CHAR for CHAR/VARCHAR2 types.
If this parameter is the same on both databases, then maybe the table was created by explicitly specifying the semantics that way.
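Two quick checks, using standard dictionary views and session syntax: CHAR_USED shows how each existing column was declared, and NLS_LENGTH_SEMANTICS controls the default for columns created afterwards:

-- B = byte semantics, C = character semantics per column
SELECT column_name, data_type, char_used
FROM user_tab_columns
WHERE table_name = 'SOME_TABLE';

-- Default semantics for DDL issued later in this session
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;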
