Hello from Greece and Merry Xmas to everyone.
We got an outsourced plant-design job and the customer wants it in an Oracle DB. Design will take place over the next 3 months, and afterwards we will deliver the DB to the customer.
He asked for these 3 particular specs for Oracle:
Oracle 12cR1 ---> Done, purchased 10 licenses, located this particular version and installed it.
Oracle DB character set – AL32UTF8 ---> Done, selected this while creating the DB
NCHAR character set – AL16UTF16 ---> This is my problem. How can I view the NCHAR character set of the database I have created? I also have SQL Developer installed as a tool.
Thank you so much in advance
Try
SELECT *
FROM V$NLS_PARAMETERS
WHERE PARAMETER IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')
If you need to change it: for a new database, just drop the entire DB and create a new one from scratch. Otherwise, follow the Character Set Migration guide. However, AL16UTF16 is the default national character set, so I actually don't see any reason to change it.
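For completeness, both character sets are fixed when the database is created; a minimal sketch of the relevant CREATE DATABASE clauses (the database name is a placeholder, and all storage clauses are left to their defaults):

```sql
-- Hypothetical minimal example: both character sets are chosen at
-- creation time; NATIONAL CHARACTER SET defaults to AL16UTF16 if omitted.
CREATE DATABASE orcl
  CHARACTER SET AL32UTF8             -- becomes NLS_CHARACTERSET
  NATIONAL CHARACTER SET AL16UTF16;  -- becomes NLS_NCHAR_CHARACTERSET
```

DBCA sets the same two values through its GUI; neither can be changed afterwards with a simple ALTER DATABASE.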
Related
I have a problem storing Thai-language data in Oracle via SQL Developer. When I update data with Thai text, the table shows garbage like ¿¿¿¿¿¿, as below:
Can anyone help me configure the database to store both English and Thai? Thanks.
I suspect SQL Developer is a victim here, while the database's character set is the culprit. What does
select * from nls_database_parameters;
return? Check the NLS_CHARACTERSET value; it should be AL32UTF8. If it is not, what is it? Note that you might need to migrate the database so that its character set actually is AL32UTF8.
I need to change character set from EE8ISO8859P2 to EE8MSWIN1250.
I have read a lot of guides, but I have not found a solution. How can I make this conversion? I need complete instructions.
I would recommend changing it to UTF-8, i.e. AL32UTF8, following the Database Migration Assistant for Unicode Guide or Character Set Migration.
As sandman also suggested, do not run ALTER DATABASE CHARACTER SET ...
It has been de-supported since Oracle 10g.
Database SQL Reference 10g Release 1: ALTER DATABASE:
You can no longer change the database character set or the national
character set using the ALTER DATABASE statement. Please refer to
Oracle Database Globalization Support Guide for information on
database character set migration.
It used to be complicated with csscan etc., but nowadays you download a GUI tool called the Database Migration Assistant for Unicode (DMU) and follow the instructions. It's a lot easier if your character sets are single-byte (I'm assuming they are), since then you won't have lossy conversion of data, which can happen with a multi-byte character set like UTF8.
You will need downtime, though, and the migration might take hours depending on how much data the DMU tool finds. You can NOT change the character set by simply running an 'alter database' as some people might suggest.
I've been tasked with determining if our web platform can be 'localized' to Japanese, and how to do so. The platform is PL/SQL based in an Oracle 10g database. We have localized it for French Canadian and Brazilian Portuguese in the past, but I'm wondering what issues I may run into with Japanese (Kanji, I believe). Am I correct that Japanese is a double-byte char set while the others we've used are single-byte? How will this impact code and/or database table structure and access?
The various sentences/phrases/statements are stored in a database table and are looked up as needed based on the user's id and language setting. The table field that stores the 'text' is defined as a CLOB. It's often read into a VARCHAR2 variable.
I tried to copy/paste some Japanese characters into the table via a direct paste into the field in a TOAD schema browser. That resulted in '??' being displayed.
Is there anything I have to do in order to be able to store Japanese characters in that table? Or access/display them from that table?
Check your database character set with
SELECT *
FROM V$NLS_PARAMETERS
WHERE PARAMETER IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
If the character set supports Japanese (e.g. AL32UTF8), it should be no big deal to localize your application for Japanese as well. Changing the character set of an existing database is also possible but requires some effort; see Character Set Migration.
Check also this answer for topics related to the database character set vs. the client character set, i.e. the NLS_LANG setting.
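The distinction is easy to check directly; a small sketch (the session values reflect what the client, e.g. via its NLS_LANG setting, has negotiated):

```sql
-- Database-wide settings, fixed at creation time:
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

-- Per-session settings, influenced by the client's NLS_LANG:
SELECT parameter, value
FROM   nls_session_parameters
WHERE  parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');
```

If the database side is fine but the client still shows '?' characters, the client-side configuration (NLS_LANG, terminal encoding, or the tool's own encoding setting) is the usual suspect.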
Our application is designed to work with an Oracle 11g database with Charset (NLS_CHARACTERSET) and National Charset (NLS_NCHAR_CHARACTERSET) both set to UTF8.
While launching an Oracle database instance on Amazon Relational Database Service (RDS), I'm prompted to choose a Charset, which I set to UTF8.
However, I was unable to find a way to set the National Charset, and this parameter is set to AL16UTF16 during database creation.
I tried the following :
Created a new Parameter Group to set the NLS_NCHAR_CHARACTERSET, but the parameter isn't listed. I also tried unsuccessfully to force the creation of a new Parameter Group with this parameter using the AWS CLI.
Tried to ALTER the database but the SYSDBA role is not available on Amazon managed database instances.
Created different Oracle RDS databases with different Charset parameters to check whether the National Charset is affected, but it is still set to AL16UTF16.
Is there any way to do it ?
The parameter can be set by specifying --character-set-name during instance creation using the AWS CLI. So far I have not found any way to change it for an existing instance.
In my testing, I set it using --character-set-name KO16MSWIN949, which supports Korean according to the AWS doc:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.OracleCharacterSets.html
AWS RDS has no way to set the Oracle Database National Charset (NLS_NCHAR_CHARACTERSET) to UTF8. NLS_NCHAR_CHARACTERSET will always be AL16UTF16. Data types NVARCHAR2, NCHAR, and NCLOB are affected. For the sake of discussion, I will refer to these data types as NCHAR.
Use of AL16UTF16 has space consequences in a migration. As the name implies, characters are stored in 16-bit (2-byte) units. For example, the Western letter 'A' will be stored zero-padded as '\0','A'.
Because of this, the space requirement at the migration target could be higher than at the source. How much higher depends on the prevalence of NCHAR columns. 25% higher is an actual example from experience. An 8 TB schema on conventional hardware required 10 TB on AWS RDS.
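The padding is easy to see with DUMP; a sketch, assuming the database character set is AL32UTF8, in which plain ASCII takes one byte per character:

```sql
-- The NCHAR literal occupies two bytes under AL16UTF16,
-- while the same letter in an AL32UTF8 string takes a single byte.
SELECT DUMP(N'A', 1016) AS nchar_bytes,
       DUMP('A',  1016) AS char_bytes
FROM   dual;
```

The 1016 format argument makes DUMP print the character set name and the raw bytes in hex, which makes the doubled storage visible at a glance.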
If your NLS_CHARACTERSET is AL32UTF8, then one way to avoid migrating into the space-wasting AL16UTF16 character set is to convert your NCHAR columns to their CHAR counterparts. Example:
from:
CREATE TABLE ...
( "BUSINESS_UNIT" NVARCHAR2(5) NOT NULL ENABLE,
to:
alter session set NLS_LENGTH_SEMANTICS = 'CHAR';
CREATE TABLE ...
( "BUSINESS_UNIT" VARCHAR2(5) NOT NULL ENABLE,
etc.
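Afterwards you can verify that the recreated column really uses character (not byte) length semantics; a sketch, where the table name is a placeholder:

```sql
-- CHAR_USED is 'C' for CHAR semantics and 'B' for BYTE semantics.
SELECT column_name, data_type, char_length, char_used
FROM   user_tab_columns
WHERE  table_name = 'YOUR_TABLE';  -- placeholder table name
```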
Using CloudFormation, you can set it too, via the CharacterSetName attribute of AWS::RDS::DBInstance.
I have to change the character set from AL32UTF8 to WE8MSWIN1252 in an Oracle 11g R2 Express instance... I tried to use the command:
ALTER DATABASE CHARACTER SET WE8MSWIN1252;
But it fails, saying that WE8MSWIN1252 isn't a superset of AL32UTF8. Then I found some articles about CSSCAN, but that tool doesn't seem to be available in Oracle 11 Express.
http://www.oracle-base.com/articles/10g/CharacterSetMigration.php
Does anyone have an idea how to do that? Thanks in advance.
Edit
Clarifying a little: the real issue is that I'm trying to import data into a table that has a column defined as VARCHAR2(6 BYTE). The string causing the issue is 'eq.mês'; it needs 6 bytes in WE8MSWIN1252 but 7 bytes in UTF-8.
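The byte-count difference can be reproduced with a quick sketch (assuming an AL32UTF8 database, where 'ê' encodes as two bytes):

```sql
-- Six characters, but seven bytes in AL32UTF8,
-- so the value no longer fits in VARCHAR2(6 BYTE).
SELECT LENGTH('eq.mês')  AS char_count,
       LENGTHB('eq.mês') AS byte_count
FROM   dual;
```

Declaring the column as VARCHAR2(6 CHAR) instead would sidestep the problem without changing the database character set.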
You can't.
The Express Edition of 11g is only available with a UTF-8 character set. If you want to go back to the Express Edition of 10g, there was a Western European version that used the Windows-1252 character set. Unlike the other editions, Oracle doesn't support the full range of character sets in the Express Edition, nor does it support changing the character set of an existing XE database.
Why do you believe you need to change the database character set? Other than potentially taking a bit more storage space to support the characters in the upper half of the Windows-1252 range, which generally aren't particularly heavily used, there aren't many downsides to a UTF-8 database.
I would say that when you want to go to a character set that supports only a subset of the original characters, your best option is to export with exp and import back (or use expdp and impdp).
Are you sure that no table contains any character not found in the 1252 code page?
The problem with only executing that ALTER DATABASE command is that the Data Dictionary is not converted, and it can become corrupted.
I had the same problem. In my case, we are using Oracle 11g Express Edition (11.2.0.2.0) and we really need it to run with the WE8MSWIN1252 character set, but I could not change the character set during installation (it always installs with AL32UTF8).
With an Oracle Client 11g installed as Administrator, I ran csscan full=y (check this link: https://oracle-base.com/articles/10g/character-set-migration) and noticed that there were lossy and convertible data problems in our database. However, the problems were in the MDSYS (Oracle Spatial) and APEX_040000 (Oracle Application Express) schemas. Since we don't need these products, we removed them (check this link: http://fast-dba.blogspot.com.br/2014/04/how-to-remove-unwanted-components-from.html).
Then we exported the user schemas with expdp and dropped the users (they must be recreated at the end of the process).
Running csscan again with full=y capture=y, it reported: 'The data dictionary can be safely migrated using the CSALTER script.' If the report doesn't say this, the csalter.plb script will not work, because some of the following conditions are not satisfied:
changeless for all CHAR, VARCHAR2, and LONG data (Data Dictionary and Application Data)
changeless for all Application Data CLOB
changeless and/or convertible for all Data Dictionary CLOB
In our case, these conditions were satisfied and we could run the CSALTER script successfully. Moreover, this script executes the ALTER DATABASE command you are trying to run, and it converts the CLOB data in the Data Dictionary that is convertible.
Finally, we recreated the users and the tablespaces of our application and imported the dump of the user data successfully.