I've been tasked with determining if our web platform can be 'localized' to Japanese, and how to do so. The platform is PL/SQL based in an Oracle 10g database. We have localized it for French Canadian and Brazilian Portuguese in the past, but I'm wondering what issues I may run into with Japanese (Kanji, I believe). Am I correct that Japanese is a double-byte char set while the others we've used are single-byte? How will this impact code and/or database table structure and access?
The various sentences/phrases/statements are stored in a database table and are looked up as needed based on the user's id and language setting. The table field that stores the 'text' is defined as a CLOB. It's often read into a VARCHAR2 variable.
I tried to copy/paste some Japanese characters into the table via a direct paste to the field in a TOAD schema browser. That resulted in '??' being displayed.
Is there anything I have to do in order to be able to store Japanese characters in that table? Or access/display them from that table?
Check your database character set with:
SELECT *
FROM V$NLS_PARAMETERS
WHERE PARAMETER IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
If the character set supports Japanese (e.g. AL32UTF8), it should be no big deal to localize your application to Japanese as well. Changing the character set on an existing database is also possible but requires some effort; see Character Set Migration.
Check also this answer for topics related to the database character set vs. the client character set, i.e. the NLS_LANG setting.
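If the characters come back as '??' (as in the TOAD paste test), it helps to know whether they were lost on the way in or only mangled on display. A minimal sketch, assuming a hypothetical translations table with a CLOB column msg_text (your actual names will differ):
-- DUMP shows the character set and the raw bytes actually stored; format 1016
-- prints the character set name plus hex bytes. DUMP does not take a CLOB
-- directly, so grab a short VARCHAR2 slice with DBMS_LOB.SUBSTR first.
SELECT DUMP(DBMS_LOB.SUBSTR(msg_text, 20, 1), 1016) AS stored_bytes
FROM   translations
WHERE  lang_code = 'JA';
If the dump shows byte 3f (a literal '?'), the text was already destroyed at insert time, typically by an NLS_LANG mismatch in the inserting client; multi-byte sequences mean the data is stored correctly and only the display side needs attention.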
I have a problem storing Thai-language data in Oracle SQL Developer. When I update data with Thai text, the table shows strange characters like this: ¿¿¿¿¿¿
Can anyone help me configure the database to store both English and Thai? Thanks.
I suspect SQL Developer is a victim here, while the database's character set is the culprit. What does
select * from nls_database_parameters;
return? Check the NLS_CHARACTERSET value; it should be AL32UTF8. If it is not, what is it? Note that you might need to migrate the database so that its character set actually is AL32UTF8.
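If changing the database character set is not feasible, a common workaround (sketched here with made-up table and column names) is to keep the Thai text in a national character set column, since NLS_NCHAR_CHARACTERSET is normally the Unicode AL16UTF16:
-- NVARCHAR2 uses the national character set, so it can hold Thai even when
-- NLS_CHARACTERSET is not Unicode. UNISTR spells out the code points and so
-- avoids client-side conversion for this test insert.
CREATE TABLE customer_names (
  id   NUMBER PRIMARY KEY,
  name NVARCHAR2(100)
);
INSERT INTO customer_names VALUES (1, UNISTR('\0E2A\0E27\0E31\0E2A\0E14\0E35'));  -- Thai greeting
SELECT name FROM customer_names;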
I need to change the character set from EE8ISO8859P2 to EE8MSWIN1250.
I have read a lot of guides, but I have not found a solution. How can I make this conversion? I need complete instructions.
I would recommend changing it to Unicode, i.e. AL32UTF8, following the Database Migration Assistant for Unicode Guide or Character Set Migration.
As sandman also suggested, do not run ALTER DATABASE CHARACTER SET ...
It has been desupported since Oracle 10g.
Database SQL Reference 10g Release 1: ALTER DATABASE:
You can no longer change the database character set or the national
character set using the ALTER DATABASE statement. Please refer to
Oracle Database Globalization Support Guide for information on
database character set migration.
It used to be complicated with csscan etc., but nowadays you download a GUI tool called the Database Migration Assistant for Unicode (DMU) and follow the instructions. It's a lot easier if your current character set is single-byte (I'm assuming it is), as you are then less likely to run into lossy conversion of data than when migrating from a multi-byte character set like UTF8.
You will need downtime, though, and it might take hours, depending on the amount of affected data the DMU tool finds. You can NOT change the character set simply by doing an 'alter database', as some people might suggest.
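Part of what the DMU scan reports is data that will no longer fit its column once characters can take more than one byte (relevant if you follow the advice above and go to AL32UTF8 rather than EE8MSWIN1250). As a very rough illustration only, with an arbitrary length threshold, something like this gives a feel for the exposure; the DMU does this properly:
-- Columns declared with BYTE length semantics are the ones that can overflow
-- after conversion to a multi-byte character set.
SELECT owner, table_name, column_name, data_length
FROM   dba_tab_columns
WHERE  data_type = 'VARCHAR2'
AND    char_used = 'B'
AND    data_length > 1000;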
I am using PL/SQL Developer to work with an Oracle database. When I compile a view which stores code like this:
select 'xidmət' as service from dual
the 'ə' character in the string becomes a '?' character.
I think it's some Oracle or PL/SQL Developer configuration problem, but I don't know which. What do you think the problem is?
First, check what your character set is using this:
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
Then set your NLS_LANG environment variable to AMERICAN_AMERICA.CHARSET where CHARSET is the value found with that select. If you are in Windows, you will have to go to Control Panel, System, Advanced, Environment Variables, and set NLS_LANG under System variables.
Oracle does at least a 'one-pass' conversion between client and database, but the problem is that there are so many layers between client and database, including your client software, that it is usually better to match your client NLS_LANG with the database setting.
It also depends how that character was inserted. It might have been inserted using a different client tool using a different NLS_LANG, so you might have to update your extended ASCII characters (or foreign characters) before you get a consistent view from your select.
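To tell whether the 'ə' was already lost when the view was compiled or is only being mangled on display, you can dump what the view actually returns (a sketch; my_view stands in for your view name):
-- Format 1016 prints the character set name plus the hex bytes. A byte of 3f
-- means a literal '?' is stored, i.e. the character was lost at compile time;
-- the correct code point means only the display/NLS_LANG side is wrong.
SELECT service, DUMP(service, 1016) AS stored_bytes FROM my_view;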
You need to configure your system's NLS_LANG parameter according to your language preferences. Here's a link:
http://www.nazmulhuda.info/setting-nls_lang-environment-variable-for-windows-and-unix-for-oracle-database
For example, we have letters like "ąčęėįšųū". So in order to see them in PL/SQL Developer, we set NLS_LANG to "LITHUANIAN_LITHUANIA.BLT8MSWIN1257".
Hope it helps. Good luck.
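After setting NLS_LANG and reconnecting, you can sanity-check the language and territory parts from the session; note that the character set part of NLS_LANG is purely client-side and does not appear here:
-- Only the language and territory components of NLS_LANG are reflected in
-- the session parameters; the character set component only governs
-- client-side conversion.
SELECT parameter, value
FROM   nls_session_parameters
WHERE  parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');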
I have data which contains special characters like à, ç, è, etc.
I am trying to insert data containing these characters into tables. The data gets inserted without any issues, but these characters are replaced with ?/?? when stored in the tables.
How should I resolve this issue? I want to store these characters in my tables.
Is it related to NLS parameters?
Currently the NLS character set is AL32UTF8, as seen in the V$NLS_PARAMETERS view.
Is there any specific table/column to be checked? Or is it something in the database settings?
Kindly advise.
Thanks in advance.
From the comments: The column is not required to be NVARCHAR (resp. NVARCHAR2), because your database character set is AL32UTF8, which supports any Unicode character.
Set your NLS_LANG variable to AMERICAN_AMERICA.AL32UTF8 before you launch your SQL*Plus. You may change the language and/or territory to your own preferences.
Ensure you select a font which is able to display the special characters.
Note that the client character set AL32UTF8 is determined by your local LANG variable (e.g. en_US.UTF-8), not by the database character set.
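A quick way to separate storage problems from client problems is to insert the characters via UNISTR, which bypasses client-side conversion (table and column names are made up for illustration):
-- If these rows come back intact, AL32UTF8 storage is fine and the ?/?? comes
-- from the inserting or displaying client (NLS_LANG and/or font).
CREATE TABLE demo_accents (txt VARCHAR2(50 CHAR));
INSERT INTO demo_accents VALUES (UNISTR('\00E0 \00E7 \00E8'));  -- à ç è
SELECT txt, DUMP(txt, 1016) AS stored_bytes FROM demo_accents;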
Check also this answer for more information: OdbcConnection returning Chinese Characters as "?"
I have Oracle SQL Developer (3.1.07) and I'm trying to work with a database that uses WE8ISO8859P1 encoding:
SELECT * FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
I have problems with saving packages that contain Unicode symbols. When I open a previously saved package, all Unicode symbols are turned into '¿'.
What settings do I have to change to make SQL Developer keep those symbols?
I've tried setting the environment encoding to 'ISO-8859-15' and some other encodings, but it doesn't help.
If your database encodes text in a non-Unicode single-byte encoding (e.g. ISO-8859), any symbol not present in the character table will be seen as invalid and replaced by a placeholder. You can't go back from that; the information is lost.
That can usually be worked around when storing data, but for source code you cannot control how Oracle encodes your strings.
If your database is configured to use such encoding scheme you're probably not supposed to write code that violates its rules.
Maybe you need the character set migration described in Oracle's documentation:
http://docs.oracle.com/cd/B10501_01/server.920/a96529/ch10.htm#1656
At least to open the package in SQL Developer, you can give this a quick try and see if it works:
Change the SQL Developer 'encoding' preference to UTF-8, which is the default in later versions.
You would eventually need to go for a database character set migration to AL32UTF8 to avoid other issues (e.g. with data) caused by the current character set.
If you look at USER_SOURCE you'll see that the source code, as stored/interpreted by the database, is in a VARCHAR2 column and so uses the database character set. As such, your source code will need to be in WE8ISO8859P1.
In theory, if the client and database are using the same character set, the database won't try to do any character set translation and you may be able to sneak in a sequence of bytes that the database thinks are WE8ISO8859P1 but that make sense in Unicode. However, at some point someone will use the wrong client and it will break.
You don't need Unicode for identifiers etc. in the code, so I assume it is in string literals. You are better off storing these in a table (NVARCHAR2 column) and selecting them into the code rather than hard-coding them. If that isn't possible, you could use UNISTR and hard-code the relevant hex values.
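For the UNISTR route, a minimal sketch (the procedure name and message are made up; \00E9 is 'é', so the literal below is 'café'):
-- Hard-coding the code points keeps the package/procedure source plain ASCII,
-- so nothing is lost even though USER_SOURCE is stored in WE8ISO8859P1.
CREATE OR REPLACE PROCEDURE show_greeting IS
  v_msg NVARCHAR2(100) := UNISTR('caf\00E9');
BEGIN
  DBMS_OUTPUT.PUT_LINE(v_msg);
END;
/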