Problem reading NCHAR data from an Oracle database

I have an Oracle database whose NLS character set is AL32UTF8 and whose NLS NCHAR character set is UTF8.
However, when I insert data into an NVARCHAR2 column in a table, a subsequent SELECT returns the data as ???.
Why is this so?
The funny thing is that in TOAD I can view the correct NVARCHAR2 data via Schema Browser -> Data,
but if I use SQL to do a SELECT I get ???.
Does anyone have any idea how to resolve this?

What is the NLS_LANG setting on your client? For a good reference on NLS parameters, read this FAQ by Oracle. For additional reading on Unicode, see this essay by Joel Spolsky.

You need to set the client's NLS_LANG to a UTF-8 character set.
SQL*Plus uses these environment variables (registry parameters on Windows):
(You may need to use sqlplusw.exe to use UTF-8 on Windows.)
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
LC_CTYPE="en_US.UTF-8"
ORA_NCHAR_LITERAL_REPLACE=true
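To verify what the session actually negotiated, and whether the stored bytes are correct rather than merely displayed wrongly, a query along these lines can help (a minimal sketch; my_table and my_col are made-up names):

-- Language/territory the session negotiated; note that the
-- character-set part of NLS_LANG is client-side only and does
-- not show up in this view.
SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');

-- Inspect the raw bytes stored in the NVARCHAR2 column; format
-- 1016 prints hex codes together with the character set name.
SELECT my_col, DUMP(my_col, 1016) FROM my_table;

If DUMP shows sensible byte sequences but the SELECT still displays ???, the data is stored correctly and the problem is purely client-side conversion or display.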
See also: Inserting national characters into an Oracle NCHAR or NVARCHAR column does not work

Related

Change the character set in an Oracle 12c database

I need to change the character set from EE8ISO8859P2 to EE8MSWIN1250.
I have read a lot of guides, but I have not found a solution. How can I make this conversion? I need complete instructions.
I would recommend changing it to UTF-8, i.e. AL32UTF8, following the Database Migration Assistant for Unicode Guide or Character Set Migration.
As sandman also suggested, do not run ALTER DATABASE CHARACTER SET ...
It has been de-supported since Oracle 10g.
Database SQL Reference 10g Release 1: ALTER DATABASE:
You can no longer change the database character set or the national
character set using the ALTER DATABASE statement. Please refer to
Oracle Database Globalization Support Guide for information on
database character set migration.
It used to be complicated, with csscan etc., but nowadays you download a GUI tool called the Oracle Database Migration Assistant for Unicode (DMU) and follow the instructions. It's a lot easier if your character sets are both single-byte (as I'm assuming here), because then you won't have the lossy conversion of some data that can occur with a multi-byte character set like UTF8.
You will require downtime, though, and the migration might take hours depending on how much data the DMU tool finds. You can NOT change the character set by simply running an ALTER DATABASE, as some people might suggest.
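Before running the DMU, a round-trip CONVERT check can give a rough first impression of which rows would convert lossily; a sketch only, assuming a hypothetical table customers with a column name (the DMU scan remains the authoritative check):

-- Rows whose bytes do not survive a round trip between the two
-- character sets are candidates for lossy conversion.
SELECT id, name
  FROM customers
 WHERE name <> CONVERT(CONVERT(name, 'EE8MSWIN1250', 'EE8ISO8859P2'),
                       'EE8ISO8859P2', 'EE8MSWIN1250');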

Difference between NLS_NCHAR_CHARACTERSET and NLS_CHARACTERSET in Oracle

I would like to know the difference between the
NLS_NCHAR_CHARACTERSET and NLS_CHARACTERSET settings in Oracle.
From my understanding, NLS_NCHAR_CHARACTERSET is for NVARCHAR2 data types,
while NLS_CHARACTERSET is for VARCHAR2 data types.
I tried to test this on my development server, where my current CHARACTERSET settings are as follows:
PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               US7ASCII
Then I inserted some Chinese characters into the database, into a table called data_<seqno>, updating the ADDRESS and ADDRESS_2 columns, which are VARCHAR2 columns. From my understanding, with the current NLS_CHARACTERSET of US7ASCII, Chinese characters should not be supported, yet they still show up in the database. Does NLS_NCHAR_CHARACTERSET take precedence over this?
Thank You.
In general all your points are correct. NLS_NCHAR_CHARACTERSET defines the character set for NVARCHAR2 (and the other national character type) columns, whereas NLS_CHARACTERSET is used for VARCHAR2.
Why is it possible that you see Chinese characters with US7ASCII?
The reason is that your database character set and your client character set (i.e. the NLS_LANG value) are both US7ASCII. Your database uses US7ASCII, and it "thinks" the client also sends data using US7ASCII. Thus it does not convert the strings at all; the data are transferred unchanged, byte for byte, from client to server and vice versa.
Due to that fact you can use characters which are actually not supported by US7ASCII. Be aware that if your client uses a different character set (e.g. when you use the ODP.NET Managed Driver in a Windows application), the data will be rubbish! You would also face the same issue if you ever attempted a database character set migration.
Another note: I don't think you would get the same behavior with other character sets, e.g. if your database and your client both used WE8ISO8859P1. Also be aware that your configuration is actually wrong. Your database uses character set US7ASCII, your NLS_LANG value is also US7ASCII (most likely it is not set at all, and Oracle defaults it to US7ASCII), but the real character set of SQL*Plus, resp. your cmd.exe terminal, is most likely CP950 or CP936.
If you would like to set everything up properly, you can either set the environment variable NLS_LANG=.ZHT16MSWIN950 (CP936 seems not to be supported by Oracle) or change your code page before running sqlplus.exe with the command chcp 437. With these proper settings you will not see any Chinese characters, as you probably would have expected.
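You can make the pass-through effect visible by dumping the stored bytes; a sketch against the ADDRESS column from the question (the table name is a stand-in, since the real one is generated):

-- In a US7ASCII database the Chinese text is stored as raw multi-byte
-- sequences that Oracle merely labels as US7ASCII; DUMP exposes the
-- actual bytes (format 1016 = hex plus character set name).
SELECT address, DUMP(address, 1016) AS stored_bytes
  FROM data_table;  -- stand-in for the data_<seqno> table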

Special Characters not getting displayed in Oracle tables

I have data which contains special characters like à ç è etc..
I am trying to insert this data into my tables. The data gets inserted without any issues, but these characters are replaced with ? / ?? when stored in the tables.
How should I resolve this issue? I want to store these characters in my tables.
Is it related to NLS parameters?
Currently the NLS character set is AL32UTF8, as seen from the V$NLS_PARAMETERS view.
Is there any specific table/column to be checked? Or is it something in the database settings?
Kindly advise.
Thanks in advance
From the comments: it is not required that the column be NVARCHAR2 (resp. NCHAR), because your database character set is AL32UTF8, which supports any Unicode character.
Set your NLS_LANG variable to AMERICAN_AMERICA.AL32UTF8 before you launch SQL*Plus. You may change the language and/or territory to your own preference.
Ensure you select a font which is able to display the special characters.
Note, the client character set AL32UTF8 is determined by your local LANG variable (e.g. en_US.UTF-8), not by the database character set.
Check also this answer for more information: OdbcConnection returning Chinese Characters as "?"
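To take the client encoding out of the equation while testing, the characters can also be inserted by Unicode code point with UNISTR; a minimal sketch, with a made-up table test_chars:

-- \00E0 = à, \00E7 = ç, \00E8 = è; UNISTR builds the string from
-- code points, independently of the terminal's encoding.
INSERT INTO test_chars (val) VALUES (UNISTR('\00E0\00E7\00E8'));
COMMIT;

-- Check the stored bytes instead of trusting the display:
SELECT val, DUMP(val, 1016) FROM test_chars;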

Character encoding issue in Oracle PL/SQL

I'm facing a character discrepancy issue while extracting data from database tables.
I've written PL/SQL code to spool some data from my database tables to a .txt file, and I run this SQL from a Unix shell, but in the spooled file the result set differs from what is at the back end.
For example:
At the back end: SADETTÝN
In the spooled txt file: SADETTŸN
If you look at the Ý character, it has been changed. I want to preserve all the characters the way they are at the back end.
My db's character set:
SELECT * FROM v$nls_parameters WHERE parameter LIKE 'NLS%CHARACTERSET'
PARAMETER                VALUE
NLS_CHARACTERSET         WE8ISO8859P1
NLS_NCHAR_CHARACTERSET   WE8ISO8859P1
And Unix NLS_LANG parameter :
$ echo $NLS_LANG
AMERICAN_AMERICA.WE8ISO8859P1
I tried changing the NLS_LANG parameter to WE8ISO8859P9 (the Turkish character set), but it didn't help!
Could anyone let me know the solution to this problem?
I presume that you are trying to view your file with "vi" or something similar. The NLS_LANG parameter is used only by your database client when exporting to your file. For your editor (vi), you need to set the LANG parameter to the value corresponding to your NLS_LANG.
Example: for ISO8859P1 American English you would do
export LANG=en_US.ISO8859-1
In other words, your file is just fine; it's your editor that doesn't know what to do with your Turkish characters.
You should use NCHAR data types. More information is available in the Oracle documentation: SQL and PL/SQL Programming with Unicode.
For spooling from SQL*Plus, you need to set the NLS_LANG environment variable correctly. Here is a similar question on Stack Overflow.
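For completeness, a minimal SQL*Plus spool script of the kind presumably being run here; the table name is made up, and NLS_LANG must already be exported in the calling shell:

-- run as: sqlplus -s user/password @spool_names.sql
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SPOOL /tmp/names.txt
SELECT name FROM customers;
SPOOL OFF
EXIT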

UTF-8 from Oracle tables

The client has asked for a number of tables to be extracted into CSVs; all done, no problem. They've just asked that we make sure the files are always in UTF-8 format.
How do I check that this is actually the case? Or, even better, force it to be so; is it something I can set in a procedure before running a query, perhaps?
The data is extracted from an Oracle 10g database.
What should I be checking?
Thanks
You can check the database character set with the following query:
select value from nls_database_parameters
where parameter='NLS_CHARACTERSET'
If it says AL32UTF8 then your database is in the format you need, and if the export does not impair it, then you are done.
You may read about Oracle globalization support here, and here about NLS parameters like the one above.
How, exactly, are you generating the CSV files? Depending on the exact architecture, there will be different answers.
If you are, for example, using SQL*Plus to extract the data, you would need to set the NLS_LANG on the client machine to something appropriate (i.e. AMERICAN_AMERICA.AL32UTF8) to force the data to be sent to the client machine in UTF-8. If you are using other approaches, NLS_LANG may or may not be important.
What you have to look for is that the eight-bit ASCII characters in the input (if any) are translated into two-byte UTF-8 sequences.
This is highly dependent on your local ASCII code page, but typically:
the character "£", which is x'A3' in extended ASCII, magically becomes x'C2A3' in UTF-8.
OK, it wasn't as simple as I first hoped. The query above returns AL32UTF8.
I am using a stored procedure compiled in the database to loop through a list of table names held in an array inside the stored procedure.
I use the DBMS_SQL package to build the SQL and UTL_FILE.PUT_NCHAR to write data to a text file.
I believed my resulting output would then be in UTF-8; however, TextPad reports the file as ANSI, and the data is garbled in places :)
Cheers
It might be important that NLS_CHARACTERSET is AL32UTF8 while NLS_NCHAR_CHARACTERSET is AL16UTF16.
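One thing worth checking in the procedure is the UTL_FILE open mode: a file opened with plain FOPEN is written in byte mode, so PUT_NCHAR output may not come out as clean UTF-8. A sketch of the Unicode variants, assuming a hypothetical directory object EXPORT_DIR:

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- FOPEN_NCHAR opens the file in Unicode mode, which pairs with
  -- the *_NCHAR write routines and yields UTF-8 file contents.
  f := UTL_FILE.FOPEN_NCHAR('EXPORT_DIR', 'out.csv', 'w');
  UTL_FILE.PUT_LINE_NCHAR(f, N'col1,col2');
  UTL_FILE.FCLOSE(f);
END;
/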
