We use Oracle 10g and Oracle 11g.
We also have a layer that automatically composes queries from pseudo-SQL code written in .NET (something like SQLAlchemy for Python).
Our layer currently wraps any string in single quotes ' and, if it contains non-ANSI characters, automatically composes a UNISTR call with the special characters written as Unicode escapes (like \00E0).
We have now created a method for doing multiple inserts with the following construct:
INSERT INTO ... (...)
SELECT ... FROM DUAL
UNION ALL SELECT ... FROM DUAL
...
This algorithm could compose queries where the same string field is sometimes passed as 'my simple string' and sometimes wrapped as UNISTR('my string with special chars like \00E0').
The described condition causes an ORA-12704: character set mismatch.
One solution is to use the INSERT ALL construct but it is very slow compared to the one used now.
Another solution is to instruct our layer to put N in front of any string (except for the ones already wrapped with UNISTR). This is simple.
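For example, with the N prefix the composed statement would look something like this (my_table and its columns are just placeholder names):
INSERT INTO my_table (id, description)
SELECT 1, N'my simple string' FROM DUAL
UNION ALL SELECT 2, UNISTR('my string with special chars like \00E0') FROM DUAL;
Both branches now produce values in the national character set, so the UNION ALL no longer mixes character sets.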
I just want to know if this could cause any side-effect on existing queries.
Note: all our DB fields are either NCHAR or NVARCHAR2.
Oracle ref: http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch7progrunicode.htm
Basically, what you are asking is: is there a difference between how a string is stored with or without the N function?
You can just check for yourself; consider:
SQL> create table test (val nvarchar2(20));
Table TEST created.
SQL> insert into test select n'test' from dual;
1 row inserted.
SQL> insert into test select 'test' from dual;
1 row inserted.
SQL> select dump(val) from test;
DUMP(VAL)
--------------------------------------------------------------------------------
Typ=1 Len=8: 0,116,0,101,0,115,0,116
Typ=1 Len=8: 0,116,0,101,0,115,0,116
As you can see, the two rows are stored identically, so there is no side effect.
The reason this works so beautifully is the elegance of Unicode.
If you are interested, here is a nice video explaining it:
https://www.youtube.com/watch?v=MijmeoH9LT4
I assume that you get the error "ORA-12704: character set mismatch" because your data inside quotes is treated as CHAR while your fields are NCHAR, so the two operands use different character sets: one uses NLS_CHARACTERSET, the other NLS_NCHAR_CHARACTERSET.
When you use the UNISTR function, it converts data from CHAR to NCHAR (and also decodes escaped values into characters), as the Oracle docs say:
"UNISTR takes as its argument a text literal or an expression that
resolves to character data and returns it in the national character
set."
When you convert values explicitly using N or TO_NCHAR, you only get values in NLS_NCHAR_CHARACTERSET, without any decoding. If you have values encoded like "\00E0", they will not be decoded and will be passed through unchanged.
So if you have an insert such as:
insert into ... select N'my string with special chars like \00E0',
UNISTR('my string with special chars like \00E0') from dual ...
your data in the first inserted column will be the literal 'my string with special chars like \00E0', not 'my string with special chars like à'. This is the only side effect I'm aware of. Other queries should already use the NLS_NCHAR_CHARACTERSET encoding, so there shouldn't be any problem with using an explicit conversion.
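A quick way to see the difference for yourself (just a sketch, using the escape from your example):
select dump(n'like \00E0') as with_n, dump(unistr('like \00E0')) as with_unistr from dual;
The first dump still ends with the five literal characters \00E0, while the second ends with the single character à (0,224 in AL16UTF16).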
And by the way, why not just insert all values as N'my string with special chars like à'? Just encode them into UTF-16 (I assume you use UTF-16 for your NCHAR columns) first if the 'upper level' software uses a different encoding.
Use of the N function: you already have answers above.
If you have any chance to change the character set of the database, that would really make your life easier. I have worked on huge production systems, and the trend is that, because storage space is cheap, practically everyone moves to AL32UTF8, and the hassle of internationalization slowly becomes a painful memory of the past.
I found the easiest thing is to use AL32UTF8 as the character set of the database instance and simply use VARCHAR2 everywhere. We read and write standard Java Unicode strings via JDBC as bind variables without any harm or fiddling.
Your idea to construct a huge text of SQL inserts may not scale well for multiple reasons:
there is a maximum allowed SQL statement length, so it won't work with 10,000 inserts
it is advisable to use bind variables (and then you don't have the N'xxx' vs UNISTR mess either)
dynamically creating a new SQL statement for every call is very resource-unfriendly. It does not allow Oracle to cache any execution plan, and forces Oracle to hard parse your long statement on each call.
What you're trying to achieve is a mass insert. Use the JDBC batch mode of the Oracle driver to perform that at high speed, see e.g.: http://viralpatel.net/blogs/batch-insert-in-java-jdbc/
Note that insert speed is also affected by triggers (which have to be executed) and foreign key constraints (which have to be validated). So if you're about to insert more than a few thousand rows, consider disabling the triggers and foreign key constraints, and re-enable them after the insert. (You'll lose the trigger calls, but the constraint validation after the insert can still make an impact.)
Also consider the rollback segment size. If you're inserting a million records, that will need a huge rollback segment, which will likely cause serious swapping on the storage media. A good rule of thumb is to commit after every 1000 records.
(Oracle uses versioning instead of shared locks, so a table with uncommitted changes is still consistently available for reading. Committing every 1000 records means roughly one commit per second - slow enough to benefit from write buffers, but quick enough not to interfere with other people wanting to update the same table.)
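If the mass insert were driven from PL/SQL instead of JDBC, a rough sketch of the same ideas (bulk binding plus a commit roughly every 1000 rows; target_table and staging_table are made-up names assumed to have the same columns) would be:
DECLARE
  TYPE t_rows IS TABLE OF target_table%ROWTYPE;
  l_rows t_rows;
  CURSOR c IS SELECT * FROM staging_table;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;  -- work in chunks of 1000 rows
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT                 -- one bulk insert per chunk
      INSERT INTO target_table VALUES l_rows(i);
    COMMIT;                                       -- roughly one commit per 1000 rows
  END LOOP;
  CLOSE c;
END;
/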
I want to move an oracle database from a non-unicode server (EL8ISO8859P7 character set and AL16UTF16 NCHAR character set) to a unicode server. Specifically to an Oracle Express server with AL32UTF8 character set.
Simply exporting (exp) and importing (imp) the data fails. We have a lot of varchar2 columns with their length specified in bytes. When their contents are mapped to Unicode they take up more bytes and get truncated.
I tried the following:
- doubling the length of all varchar2 columns of the original database with a script (varchar2(10) becomes varchar2(20))
- exporting
- importing to the new server
And it worked. Admittedly, doubling is arbitrary; I probably should have changed them to the same size with CHAR semantics.
I also tried the following:
- change all varchar2 columns to nvarchar2 (same size - varchar2(10) becomes nvarchar2(10))
- exporting
- importing to the new server
It also worked.
Somehow the latter (converting to nvarchar2) seems "cleaner". Then again, you end up with a Unicode database that also uses the national Unicode data types, which seems odd.
So the question is: is there a suggested way to go about moving the database between the two servers? Is there any serious problem with either of the two approaches I mentioned above?
Don't use NVARCHAR2 data types unless that is your only option. The national character set exists to deal with cases where you have an existing, legacy application that does not support Unicode and you want to add a handful of columns to the system that do support Unicode without touching those legacy applications. Using NVARCHAR2 columns is great for those cases but it creates all sorts of issues in application development. Plenty of tools, APIs, and applications either don't support NVARCHAR2 columns or require additional configuration to do so. And since NVARCHAR2 columns are relatively uncommon in the Oracle world, it's very easy to spend gobs of time trying to resolve the particular issues you encounter. Less critically, since AL16UTF16 requires at least 2 bytes per character, you're likely to require quite a bit more space since much of your data is likely to consist of English characters.
I would strongly prefer migrating to the new database with character-length semantics (i.e. VARCHAR2(10 BYTE) becomes VARCHAR2(10 CHAR)). That avoids doubling the allowed length. It also makes it much easier to explain to users what the length limits are (or to code those validations in front-ends). It's terribly confusing to most users to explain that a particular column can sometimes hold 20 characters (when only English characters are used), can sometimes hold 10 characters (when only non-English characters are used), and can sometimes hold something in the middle (when there is a mixture of characters). Character length semantics make all those issues drastically easier.
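As a small sketch of what that means in DDL (the column name is invented):
-- byte semantics: at most 10 bytes, which can be fewer than 10 characters in AL32UTF8
CREATE TABLE people_byte (short_name VARCHAR2(10 BYTE));
-- character semantics: always up to 10 characters, however many bytes each one needs
CREATE TABLE people_char (short_name VARCHAR2(10 CHAR));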
Migrating to a Unicode database is a 4-step process:
1. Use exp[dp] to export the data and generate the DDL for the tables.
2. Alter the DDL to change the byte-length varchar2 fields to character-length fields.
3. Create the tables using the modified DDL script.
4. Import the data using imp[dp].
Skipping steps 2 and 3 leaves you with byte-length-defined fields again, and probably a lot of errors during the import because data doesn't fit into the defined columns. If there are only US-ASCII characters in the source database it won't be a big problem, but Latin accented characters, for example, will give problems because a single character can need more than one byte.
Following the listed procedure prevents the length problems. There are obviously more ways to do this, but the rule is to get the DDL definition right first and insert the data afterwards.
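As an illustration of step 2, the edit is essentially just switching the length semantics in the generated DDL (VARCHAR2(10 BYTE) becomes VARCHAR2(10 CHAR)), or, if a table was already created with byte semantics on the target, altering it before the import (names are only examples):
ALTER TABLE customers MODIFY (last_name VARCHAR2(10 CHAR));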
Certain invalid characters sometimes get inserted into the database due to improper keyboard configuration on the end users' side.
We have a PL/SQL script that iterates through all columns and replaces the invalid characters. The script takes less than one minute to complete (but our system is still in beta and is not widely used).
One solution is to create a trigger that replaces the invalid characters right before insertion into the tables; another is to use a scheduled job to run the script.
Which one is more efficient?
If you use a scheduled job, there is still some time during which the invalid characters are inside your data. If it is very important that this is not the case, use a trigger. A trigger will ensure that the database only contains valid characters.
I don't think efficiency is very important here. A trigger can easily keep up with data entry by users.
Not a fan of triggers, to be honest.
You could improve the efficiency of a scheduled job by providing an index that identifies the rows that need to be updated.
In general the syntax would be something like:
create index my_speshul_index on my_table(
case when some_expression then 1 else null end);
some_expression is the expression that detects the presence of these characters -- possibly a regexp_like.
Then you would query:
select ... from my_table
where case when some_expression then 1 else null end = 1;
Another way to go would be to allow the data to enter the database as originally typed, but create a virtual column that would contain the cleaned data.
However, that's only workable if the problem is limited to a few columns. Since it can be all user-enterable columns, a trigger with regexp_replace seems to be the best way to go. Depending on what characters you allow, it could be as simple as a series of statements like
:new.columnA := regexp_replace(:new.ColumnA,'[^[:print:]]','');
That is, just remove all the non-printable characters.
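A minimal sketch of such a trigger (table and column names are invented; adjust the character class to whatever you consider valid):
CREATE OR REPLACE TRIGGER trg_clean_people
BEFORE INSERT OR UPDATE ON people
FOR EACH ROW
BEGIN
  -- strip everything that is not a printable character from the user-entered columns
  :new.short_name := regexp_replace(:new.short_name, '[^[:print:]]', '');
  :new.address    := regexp_replace(:new.address,    '[^[:print:]]', '');
END;
/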
Is there any subprogram similar to DBMS_METADATA.GET_DDL that can actually export the table data as INSERT statements?
For example, using DBMS_METADATA.GET_DDL('TABLE', 'MYTABLE', 'MYOWNER') will export the CREATE TABLE script for MYOWNER.MYTABLE. Any such things to generate all data from MYOWNER.MYTABLE as INSERT statements?
I know that, for instance, TOAD for Oracle or SQL Developer can export data as INSERT statements pretty fast, but I need a more programmatic way of doing it. Also, I cannot create any procedures or functions in the database I'm working with.
Thanks.
As far as I know, there is no Oracle supplied package to do this. And I would be skeptical of any 3rd party tool that claims to accomplish this goal, because it's basically impossible.
I once wrote a package like this, and quickly regretted it. It's easy to get something that works 99% of the time, but that last 1% will kill you.
If you really need something like this, and need it to be very accurate, you must tightly control what data is allowed and what tools can be used to run the script. Below is a small fraction of the issues you will face:
Escaping
Single inserts are very slow (especially if it goes over a network)
Combining inserts is faster, but can run into some nasty parsing bugs when you start inserting hundreds of rows
There are many potential data types, including custom ones. You may only have NUMBER, VARCHAR2, and DATE now, but what happens if someone adds RAW, BLOB, BFILE, nested tables, etc.?
Storing LOBs requires breaking the data into chunks because of VARCHAR2 size limitations (4000 or 32767, depending on how you do it).
Character set issues - This will drive you ¿¿¿¿¿¿¿ insane.
Environment limitations - For example, SQL*Plus does not allow more than 2500 characters per line, and will drop whitespace at the end of your line.
Referential Integrity - You'll need to disable these constraints or insert data in the right order.
"Fake" columns - virtual columns, XML lobs, etc. - don't import these.
Missing partitions - If you're not using INTERVAL partitioning you may need to manually create them.
Nonvalidated data - Just about any constraint can be violated, so you may need to disable everything.
If you want your data to be accurate you just have to use the Oracle utilities, like data pump and export.
Why don't you use regular export?
If you must, you can generate the export script:
Let's assume a table myTable(Name VARCHAR(30), AGE NUMBER, Address VARCHAR(60)).
select 'INSERT INTO myTable values(''' || Name || ''',' || AGE || ',''' || Address || ''');' from myTable;
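If Name or Address can themselves contain single quotes, a slightly more defensive sketch (still subject to all the caveats listed above) is:
select 'INSERT INTO myTable values(''' || replace(Name, '''', '''''') || ''',' || AGE || ',''' || replace(Address, '''', '''''') || ''');' from myTable;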
Oracle SQL Developer does that with its Export feature: DDL as well as the data itself.
It can be a bit inconvenient for huge tables and is likely to hit the issues mentioned above, but it works well 99% of the time.
I always wonder why we should limit a column's length in a database table to something other than the default.
E.g. I have a column short_name in my table People; the default length for the column is 255 characters, but I restrict it to 100 characters. What difference will it make?
The string will be limited to the maximum length (usually counted in characters).
How an over-length value is handled is up to the database engine you use.
For example:
CHAR(30) will always use up 30 characters in MySQL, and this allows MySQL to speed up access because it is able to predict the value length without parsing anything;
VARCHAR(30) will trim any lengthy strings to 30 characters in MySQL when strict mode is on, otherwise you may use longer strings and they will be fully stored;
In SQLite, you can store strings in any type of column, ignoring the type.
The reason many features of SQL are supported in those database engines even though they are not being utilized, or are utilized in different ways, is to maintain compliance with the SQL standard.
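In Oracle, for comparison, an over-length value is not silently truncated; the statement simply fails (a quick sketch, using the 100-character limit from the question):
create table people (short_name varchar2(100));
insert into people values (rpad('x', 150, 'x'));
-- fails with ORA-12899: value too large for column ... (actual: 150, maximum: 100)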
I have a query that has
... WHERE PRT_STATUS='ONT' ...
The prt_status field is defined as CHAR(5), though, so it's always padded with spaces. As a result, the query matches nothing. To make this query work I have to do
... WHERE rtrim(PRT_STATUS)='ONT'
which does work.
That's annoying.
At the same time, a couple of pure-Java DBMS clients I have (Oracle SQLDeveloper and AquaStudio) do NOT have a problem with the first query; they return the correct result. TOAD has no problem either.
I presume they simply put the connection into some compatibility mode (e.g. ANSI), so Oracle knows that a CHAR(5) is expected to be compared with no regard to trailing characters.
How can I do it with Connection objects I get in my application?
UPDATE I cannot change the database schema.
SOLUTION It was indeed the way Oracle compares fields with passed in parameters.
When the bind is done, the string is passed via PreparedStatement.setString(), which sets the type to VARCHAR, and thus Oracle uses unpadded comparison -- and fails.
I tried to use setObject(n, str, Types.CHAR). It fails too. Decompilation shows that the Oracle driver ignores Types.CHAR and passes the value in as a VARCHAR again.
The variant that finally works is
setObject(n,str,OracleTypes.FIXED_CHAR);
It makes the code not portable though.
The UI clients succeed for a different reason -- they use character literals, not binding. When I type PRT_STATUS='ONT', 'ONT' is a literal, and as such is compared using blank-padded semantics.
Note that Oracle compares CHAR values using blank-padded comparison semantics.
From Datatype Comparison Rules:
"Oracle uses blank-padded comparison semantics only when both values in the comparison are either expressions of datatype CHAR, NCHAR, text literals, or values returned by the USER function."
In your example, is 'ONT' passed as a bind parameter, or is it built into the query textually, as you illustrated? If it is a bind parameter, make sure that it is bound as type CHAR. Otherwise, verify the client library version used, as really old versions of Oracle (e.g. v6) had different comparison semantics for CHAR.
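A quick way to see the difference (a throwaway sketch):
create table t (prt_status char(5));
insert into t values ('ONT');
select count(*) from t where prt_status = 'ONT';                      -- 1: literal vs CHAR, blank-padded comparison
select count(*) from t where prt_status = cast('ONT' as varchar2(5)); -- 0: VARCHAR2 vs CHAR, non-padded comparison
The second query behaves like a bind variable sent as VARCHAR, which is exactly why the FIXED_CHAR binding (or an RTRIM) is needed.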
If you cannot change your database table, you can modify your query.
Some alternatives for RTRIM:
.. WHERE PRT_STATUS like 'ONT%' ...
.. WHERE PRT_STATUS = 'ONT ' ... -- 2 white spaces behind T
.. WHERE PRT_STATUS = rpad('ONT',5,' ') ...
I would change the CHAR(5) column into VARCHAR2(5) in the DB.
You can use cast to char operation in your query:
... WHERE PRT_STATUS=cast('ONT' as char(5))
Or in more generic JDBC way:
... WHERE PRT_STATUS=cast(? as char(5))
And then in your JDBC code, use statement.setString(1, "ONT");