Differences in prepared vs. direct statements using Oracle ODBC

I'm using an Oracle database with a collation different from my OS language. I'm accessing the database using the ODBC driver. When I prepare a statement (e.g. a "select * from x where col = ?") whose argument involves special non-ASCII characters supported by the DB's collation, I find the data row containing those characters. When I execute the select directly, with the argument written into the SQL string, the data row isn't found.

Pure guess on my part, but it may be that your client computer isn't encoding the SQL string with the argument written into it correctly. If your client is set to a regional setting different from the DB collation, the character array containing the select statement that gets sent to Oracle would contain "incorrect" bytes where the original funky characters were located; Oracle would interpret these as some character other than the one you originally sent, causing the row not to be found.
Is there any reason you can't just use the parameterized approach, since it is working correctly?
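To make the contrast concrete, here is a minimal sketch using Python with pyodbc (the DSN, table name, and column name are hypothetical placeholders, not from the question):
import pyodbc

# Hypothetical DSN pointing at the Oracle ODBC driver.
connection = pyodbc.connect('DSN=oracle_dsn;UID=usr;PWD=passwd')
cursor = connection.cursor()

# Parameterized: the value is bound separately from the SQL text,
# so the driver ships it in a typed buffer and no re-encoding of the
# statement string can corrupt it.
cursor.execute(u"SELECT * FROM x WHERE col = ?", u'caf\u00e9')

# Literal embedded in the SQL string: the whole statement is encoded
# with the client character set on its way to the server, and the
# non-ASCII bytes may arrive as different characters.
cursor.execute(u"SELECT * FROM x WHERE col = 'caf\u00e9'")
If the two queries return different rows, the literal's bytes are being reinterpreted somewhere between client and server.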

Related

datatype that will hold 1000+ characters for Oracle?

I have a classic ASP front-end and an Oracle back-end. I'm looking for an ADO datatype comparable to varchar(max); it must be Oracle-compatible and it must hold 1000+ characters.
adVariant is not an option: the database keeps throwing errors when we try to pass that datatype, hence the need for a replacement.

Oracle SQL Developer environment encoding

I have Oracle SQL Developer (3.1.07) and I'm trying to work with a database that uses WE8ISO8859P1 encoding:
SELECT * FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
I have problems saving packages that contain Unicode symbols. When I open a previously saved package, all Unicode symbols have been turned into '¿'.
What settings do I have to change to make SQL Developer keep those symbols?
I've tried setting the environment encoding to 'ISO-8859-15' and some other encodings, but it doesn't help.
If your database encodes text with a non-Unicode single-byte encoding (e.g. ISO-8859), any symbol not present in its character table will be treated as invalid and replaced by a placeholder. You can't go back from that; the information is lost.
That can usually be worked around when storing data, but for source code you cannot control how Oracle encodes your strings.
If your database is configured to use such an encoding scheme, you're probably not supposed to write code that violates its rules.
You may need a character set migration; see the Oracle documentation:
http://docs.oracle.com/cd/B10501_01/server.920/a96529/ch10.htm#1656
At least to open the package in SQL Developer, you can do a quick test and see if it works: change SQL Developer's encoding to UTF-8, which is the default in later versions.
Eventually you would need to migrate the database character set to AL32UTF8 to avoid other issues (with data, for example) caused by the current character set.
If you look at USER_SOURCE you'll see that the source code, as stored and interpreted by the database, sits in a VARCHAR2 column and therefore uses the database character set. As such, your source code will need to be in WE8ISO8859P1.
In theory, if the client and database are using the same character set, the database won't try to do any character set translation and you may be able to sneak in a sequence of bytes that the database thinks are WE8ISO8859P1 but that make sense in Unicode. However, at some point someone will use the wrong client and it will break.
You don't need Unicode for identifiers etc. in the code, so I assume the Unicode symbols are in string literals. You are better off storing these in a table (an NVARCHAR2 column) and selecting them into the code rather than hard-coding them. If that isn't possible, you could use UNISTR and hard-code the relevant hex values.
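For example (a sketch only; the code points below are illustrative), UNISTR builds the string from UCS-2 hex values, so the literal survives even though the package source is stored in WE8ISO8859P1:
SELECT UNISTR('caf\00E9 \20AC') FROM dual;  -- \00E9 is 'é', \20AC is '€'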

SSIS Package Troubleshooting

I'm working with an SSIS package that pulls data from a DB2 source, runs through a conversion process (Unicode stuff) and then stores the data in a SQL table. From the error information below, I have been able to determine that there are some special characters in the DB2 file/table. What I do not know is how to narrow down which specific record has the issue. There are about 200,000 records in the DB2 file, and I need to know which one is specifically causing the issue.
Is there a way to query the DB2 source looking for "special characters"? Is there a way to have the SSIS package show me which record it is failing on?
Error: 2009-07-15 01:32:31.19 Code: 0xC020901C Source: Import MY APP Data DETAIL [2670] Description: There was an error with output column "COLUMN1" (2710) on output "OLE DB Source Output" (2680). The column status returned was: "Text was truncated or one or more characters had no match in the target code page.".
DB2 has a built-in function called HEX() that takes just about any expression of any type and returns a VARCHAR containing the hex representation of each byte. You can also specify any binary value as a literal by prefixing it with x', for example: x'0123456789abcdef'.
If the problem is coming from a single-byte character, you could find it by building up a temp table of all single characters from x'00' to x'ff' and seeing which ones appear in each row of your DB2 data. You could also add some code to the utility that converts the data to Unicode so that it scans the DB2 records for any anomalies, as sketched below.
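As a sketch of that last idea, in Jython/Python (the key column and the cp1252 target code page are assumptions; substitute whatever code page your conversion actually targets):
# Flag DB2 rows whose text cannot be represented in the target code page.
def find_bad_rows(rows, codepage='cp1252'):
    for key, text in rows:
        try:
            text.encode(codepage)
        except UnicodeError, e:
            print '%s: %s' % (key, e)

# rows would come from something like (names hypothetical):
#   cursor.execute("SELECT keycol, column1 FROM myschema.mytable")
#   find_bad_rows(cursor.fetchall())
The keys it prints identify the offending records.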

UTF 8 from Oracle tables

The client has asked for a number of tables to be extracted into CSVs; all done, no problem. They've just asked that we make sure the files are always in UTF-8 format.
How do I check this is actually the case? Or, even better, force it to be so; is it something I can set in a procedure before running a query, perhaps?
The data is extracted from an Oracle 10g database.
What should I be checking?
Thanks
You can check the database character set with the following query:
select value from nls_database_parameters
where parameter='NLS_CHARACTERSET'
If it says AL32UTF8 then your database is in the format you need, and if the export does not mangle the data then you are done.
You can read about Oracle globalization support, and about NLS parameters like the one above, in the Oracle documentation.
How, exactly, are you generating the CSV files? Depending on the exact architecture, there will be different answers.
If you are, for example, using SQL*Plus to extract the data, you would need to set NLS_LANG on the client machine to something appropriate (e.g. AMERICAN_AMERICA.AL32UTF8) to force the data to be sent to the client machine in UTF-8. If you are using other approaches, NLS_LANG may or may not be important.
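For example, a hedged sketch on a Windows client (the connect string and script name are placeholders; on Unix you would export the variable instead):
set NLS_LANG=AMERICAN_AMERICA.AL32UTF8
sqlplus usr/passwd@db @extract_tables.sql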
What you have to look for is that the eight-bit characters in the input (if any) are translated into double-byte UTF-8 sequences.
This is highly dependent on your local code page, but typically:
"£", which is x'A3' in an eight-bit Latin code page, magically becomes x'C2A3' in UTF-8.
OK, it wasn't as simple as I first hoped. The query above returns AL32UTF8.
I am using a stored procedure compiled on the database to loop through a list of table names held in an array inside the procedure.
I use the DBMS_SQL package to build the SQL and UTL_FILE.PUT_NCHAR to write the data to a text file.
I believed my resulting output would then be in UTF-8; however, opening it in TextPad says it's ANSI, and the data is garbled in places :)
Cheers
It might be important that NLS_CHARACTERSET is AL32UTF8 and NLS_NCHAR_CHARACTERSET is AL16UTF16

Consistent method of inserting TEXT column to Informix database using JDBC and ODBC

I have a problem when I try to insert some data into an Informix TEXT column via JDBC. In ODBC I can simply run SQL like this:
INSERT INTO test_table (text_column) VALUES ('insert')
but this does not work in JDBC, and I get the error:
617: A blob data type must be supplied within this context.
I searched for this problem and found messages from 2003:
http://groups.google.com/group/comp.databases.informix/browse_thread/thread/4dab38472e521269?ie=UTF-8&oe=utf-8&q=Informix+jdbc+%22A+blob+data+type+must+be+supplied+within+this%22
I changed my code to use PreparedStatement. Now it works with JDBC, but in ODBC, when I try using a PreparedStatement, I get the error:
Error: [Informix][Informix ODBC Driver][Informix]
Illegal attempt to convert Text/Byte blob type.
[SQLCode: -608], [SQLState: S1000]
Test table was created with:
CREATE TABLE _text_test (id serial PRIMARY KEY, txt TEXT)
Jython code to test both drivers:
# for Jython 2.5 invoke with --verify
# because of bug: http://bugs.jython.org/issue1127
import traceback
import sys
from com.ziclix.python.sql import zxJDBC
def test_text(driver, db_url, usr, passwd):
    arr = db_url.split(':', 2)
    dbname = arr[1]
    if dbname == 'odbc':
        dbname = db_url
    print "\n\n%s\n--------------" % (dbname)
    try:
        connection = zxJDBC.connect(db_url, usr, passwd, driver)
    except:
        ex = sys.exc_info()
        s = 'Exception: %s: %s\n%s' % (ex[0], ex[1], db_url)
        print s
        return
    Errors = []
    try:
        cursor = connection.cursor()
        cursor.execute("DELETE FROM _text_test")
        try:
            cursor.execute("INSERT INTO _text_test (txt) VALUES (?)", ['prepared', ])
            print "prepared insert ok"
        except:
            ex = sys.exc_info()
            s = 'Exception in prepared insert: %s: %s\n%s\n' % (ex[0], ex[1], traceback.format_exc())
            Errors.append(s)
        try:
            cursor.execute("INSERT INTO _text_test (txt) VALUES ('normal')")
            print "insert ok"
        except:
            ex = sys.exc_info()
            s = 'Exception in insert: %s: %s\n%s' % (ex[0], ex[1], traceback.format_exc())
            Errors.append(s)
        cursor.execute("SELECT id, txt FROM _text_test")
        print "\nData:"
        for row in cursor.fetchall():
            print '[%s]\t[%s]' % (row[0], row[1])
        if Errors:
            print "\nErrors:"
            print "\n".join(Errors)
    finally:
        cursor.close()
        connection.commit()
        connection.close()
#test_varchar(driver, db_url, usr, passwd)
test_text("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test_db', 'usr', 'passwd')
test_text("com.informix.jdbc.IfxDriver", 'jdbc:informix-sqli://169.0.1.225:9088/test_db:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250', 'usr', 'passwd')
Is there any setting in JDBC or ODBC that would give me one version of the code for both drivers?
Version info:
Server: IBM Informix Dynamic Server Version 11.50.TC2DE
Client:
ODBC driver 3.50.TC3DE
IBM Informix JDBC Driver for IBM Informix Dynamic Server 3.50.JC3DE
First off, are you really sure you want to use an Informix TEXT type? The type is a nuisance to use, in part because of the problems you are facing. It pre-dates anything in any SQL standard with respect to large objects (TEXT still isn't in SQL-2003 - though approximately equivalent structures, CLOB and BLOB, are). And the functionality of BYTE and TEXT blobs has not been changed since - oh, let's say 1996, though I suspect there's a case for choosing an earlier date, like 1991.
In particular, how much data are you planning to store in the TEXT columns? Your example shows the string 'insert'; that is, I presume, much, much smaller than you would really use. You should be aware that a BYTE or TEXT column uses a 56-byte descriptor in the table plus a separate page (or set of pages) to store the actual data. So, for tiny strings like that, it is a waste of space and bandwidth (because the data for the BYTE or TEXT objects is shipped between client and server separately from the rest of the row). If your size won't get above about 32 KB, then you should look at using LVARCHAR instead of TEXT. If you will be using data sizes above that, then BYTE or TEXT or BLOB or CLOB are sensible alternatives, but you should look at configuring either blob spaces (for BYTE or TEXT) or smart blob spaces (for BLOB or CLOB). You can, and are, using TEXT IN TABLE rather than in a blob space; be aware that doing so impacts your logical logs, whereas using a blob space does not impact them anything like as much.
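If your values stay under that limit, here is a hedged sketch of the change (the length is an arbitrary choice); with LVARCHAR, both the literal INSERT and the prepared INSERT in the Jython test above should work unchanged under either driver:
CREATE TABLE _text_test (id serial PRIMARY KEY, txt LVARCHAR(4096))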
One of the features I've been campaigning for a decade or so is the ability to pass string literals in SQL statements as TEXT literals (or BYTE literals). That is in part because of the experience of people like you. I haven't yet been successful in getting it prioritized ahead of other changes that need to be made. Of course, you need to be aware that the maximum size of an SQL statement is 64 KB text, so you could create too big an SQL statement if you aren't careful; placeholders (question marks) in the SQL normally prevent that being a problem - and increasing the size of an SQL statement is another feature request which I've been campaigning for, but a little less ardently.
OK, assuming that you have sound reasons for using TEXT...what next. I'm not clear what Java (the JDBC driver) is doing behind the scenes - apart from too much - but it is a fair bet that it is noticing that a TEXT 'locator' structure is needed and is shipping the parameter in the correct format. It appears that the ODBC driver is not indulging you with similar shenanigans.
In ESQL/C, where I normally work, then the code has to deal with BYTE and TEXT differently from everything else (and BLOB and CLOB have to be dealt with differently again). But you can create and populate a locator structure (loc_t or ifx_loc_t from locator.h - which may not be in the ODBC directory; it is in $INFORMIXDIR/incl/esql by default) and pass that to the ESQL/C code as the host variable for the relevant placeholder in the SQL statement. In principle, there is probably a parallel method available for ODBC. You may have to look at the Informix ODBC driver manual to find it, though.
