Oracle to export data with Chinese characters

I have an Oracle database that I can use to store and display Chinese characters.
But when I query the database with SQL*Plus (e.g. running a script from SQL*Plus), the Chinese characters can't be displayed (all question marks). My goal is eventually to export the data to CSV files that contain Chinese characters. Any comments on how I can do this?
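For illustration, the kind of export being attempted looks something like this; the table and column names are hypothetical, and SET MARKUP CSV ON requires SQL*Plus 12.2 or later (on older versions, concatenate the columns manually):

$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8   # match the terminal's encoding (UTF-8 assumed here)
$ sqlplus user/password@db
SQL> SET MARKUP CSV ON
SQL> SPOOL /tmp/customers.csv
SQL> SELECT id, chinese_name FROM customers;
SQL> SPOOL OFF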

Related

CLOB in an ASCII characterset database contains non-ASCII characters - how?

I am working with an Oracle 12.2 database. The database character set is WE8MSWIN1252 (i.e. an ASCII character set).
The database contains a table with a CLOB column (according to Oracle SQL Developer). Some values in this column contain non-ASCII characters (I know this because when I use the ASCIISTR function on this column I can see the escaped non-ASCII character codes).
How is this possible? I thought ASCII character set databases could only store Unicode in NVARCHAR, NCLOB, etc.
(I only discovered this when I was using a linked server to the Oracle db from SQL Server - when I ran an OPENQUERY on the table with the CLOB, it returned ? for the non-ASCII characters. I changed the OPENQUERY query string to use TO_NCLOB(clob_column) and it returned the non-ASCII characters.)
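For illustration, the change was along these lines (ORA_LINK and the table/column names are placeholders):

-- Returned ? for the non-ASCII characters:
SELECT * FROM OPENQUERY(ORA_LINK, 'SELECT clob_column FROM some_table');
-- Returned the non-ASCII characters correctly:
SELECT * FROM OPENQUERY(ORA_LINK, 'SELECT TO_NCLOB(clob_column) FROM some_table');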
Any ideas?
Thanks
From the Wikipedia description of WE8MSWIN1252:
Windows-1252 or CP-1252 (code page 1252) is a single-byte character encoding of the Latin alphabet, used by default in the legacy components of Microsoft Windows for English and many European languages including Spanish, French, and German.
So a CLOB in a database with this character set can store strings like éàè. And ASCIISTR returns escaped codes because these characters are not defined in ASCII, for example:
SQL> select asciistr('é') eaccent, asciistr('e') e from dual;
EACCENT E
---------- -
\FFFD\FFFD e
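To see how such a character is actually stored, you can also inspect the raw bytes with DUMP; a minimal sketch, with output as it would look in a WE8MSWIN1252 database with a correctly configured client:

SQL> SELECT DUMP('é', 1016) AS dump_output FROM dual;

DUMP_OUTPUT
--------------------------------------------
Typ=96 Len=1 CharacterSet=WE8MSWIN1252: e9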

Gibberish field values when loading data with Oracle SQL*Loader (sqlldr)

Using sqlldr to load data into an Oracle table doesn't result in any errors, but the values of the fields are gibberish. Not all the values, of course, only the values which were in Persian (Arabic) script.
I have already read many questions here but could not resolve the issue; the most relevant topic to this problem was: Unreadable character in generated sqlplus file.
Although the values in my case aren't question marks, but something like ÑÇå Âåä or äÇ ãÔÎÕ.
I also played with the NLS_LANG environment variable, but it was to no avail.
I created different Oracle databases with different character sets; this also was to no avail.
I'm new to Oracle, so it is highly likely that I'm making a rookie mistake while creating the database and setting the character set, or something else; to be honest I have no idea. But I have tried many responses from users, and here I am.
I uploaded the table schema and the .ctl file to replicate the problem; the related files are at this link: https://www.dropbox.com/sh/ejxvast0ruioksk/AABXhjujqzhRpuMVjl7V-zxUa?dl=0
In total you have three character sets or encodings:
1) The encoding of your data file. Check the save options of the editor or the application which created the file.
2) The character set of your command line window cmd.exe, called the "codepage". You can interrogate (or change) it with the command chcp.
3) The character set of your database.
1) and 2) must be the same. Use the command chcp to set them equal (or change the settings in your editor).
3) can be different, but the character set must support Persian/Arabic characters, so most likely AL32UTF8, which is the default nowadays.
Use the NLS_LANG value to tell the database which character set is used for 1) and 2), for example:
C:\>chcp 1256
Active code page: 1256.
C:\>set NLS_LANG=.AR8MSWIN1256
C:\>sqlldr ...
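Alternatively, the encoding of the data file can be declared directly in the SQL*Loader control file via the CHARACTERSET clause; a minimal sketch with made-up file, table, and column names:

LOAD DATA
CHARACTERSET AR8MSWIN1256
INFILE 'data.csv'
INTO TABLE persons
FIELDS TERMINATED BY ','
(id, persian_name)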
You can get a list of codepages vs. Oracle character sets with this query:
SELECT VALUE AS ORACLE_CHARSET, UTL_I18N.MAP_CHARSET(VALUE) AS IANA_NAME
FROM V$NLS_VALID_VALUES
WHERE PARAMETER = 'CHARACTERSET';
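An excerpt of typical output (the exact list depends on your Oracle version; these mappings are the commonly documented ones):

ORACLE_CHARSET    IANA_NAME
----------------- --------------
US7ASCII          US-ASCII
WE8MSWIN1252      windows-1252
AR8MSWIN1256      windows-1256
AL32UTF8          UTF-8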
And the Microsoft documentation has a list of Code Page Identifiers.
See also OdbcConnection returning Chinese Characters as "?" for more details.

Oracle database - identify and convert special character issues

I have some issues related to special characters in some tables. For example, some words with the character ü were inserted into the database as NŒ. Is there a way to find these Unicode problems in a table and convert them?
I also checked NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET in the v$nls_parameters view and they look fine (AL32UTF8 and UTF8).
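For reference, that check looks something like this:

SELECT parameter, value
FROM v$nls_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');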

Implementing Japanese localization

I've been tasked with determining whether our web platform can be 'localized' to Japanese, and how to do so. The platform is PL/SQL based in an Oracle 10g database. We have localized it for French Canadian and Brazilian Portuguese in the past, but I'm wondering what issues I may run into with Japanese (Kanji, I believe). Am I correct that Japanese is a double-byte character set while the others we've used are single-byte? How will this impact code and/or database table structure and access?
The various sentences/phrases/statements are stored in a database table and are looked up as needed based on the user's id and language setting. The table field that stores the 'text' is defined as a CLOB. It's often read into a VARCHAR2 variable.
I tried to copy/paste some Japanese characters into the table via a direct paste into the field in a TOAD schema browser. That resulted in '??' being displayed.
Is there anything I have to do in order to be able to store Japanese characters in that table? Or access/display them from that table?
Check your database character set with:
SELECT *
FROM V$NLS_PARAMETERS
WHERE PARAMETER IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
If the character set supports Japanese (e.g. AL32UTF8), it should be no big deal to localize your application to Japanese as well. Changing the character set of an existing database is also possible but requires some effort; see Character Set Migration.
Check also this answer for topics related to database character set vs. client character set, i.e. the NLS_LANG setting.
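Regarding the impact on table structure raised in the question: in AL32UTF8 a typical Kanji character occupies 3 bytes, so VARCHAR2 columns declared with byte-length semantics can overflow even when the character count looks safe. A sketch with a made-up table name:

-- VARCHAR2(10 BYTE) holds only 3 three-byte Kanji; VARCHAR2(10 CHAR) holds 10,
-- regardless of how many bytes each character needs.
CREATE TABLE translations (
  msg_id   NUMBER,
  msg_text VARCHAR2(100 CHAR)  -- character semantics: safer for multi-byte data
);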

Special Characters not getting displayed in Oracle tables

I have data which contains special characters like à ç è etc.
I am trying to insert this data into tables. The data gets inserted without any issues, but these characters are replaced with ?/?? when stored in the tables.
How should I resolve this issue? I want to store these characters in my tables.
Is it related to NLS parameters?
Currently the NLS character set is AL32UTF8, as seen from the V$NLS_PARAMETERS view.
Is there any specific table/column to be checked? Or is it something in the database settings?
Kindly advise.
Thanks in advance
From the comments: it is not required that the column be NVARCHAR (resp. NVARCHAR2), because your database character set is AL32UTF8, which supports any Unicode character.
Set your NLS_LANG variable to AMERICAN_AMERICA.AL32UTF8 before you launch SQL*Plus. You may change the language and/or territory to your own preference.
Ensure you select a font which is able to display the special characters.
Note: the client character set (here AL32UTF8) is determined by your local environment, e.g. the LANG variable (en_US.UTF-8) on Unix, not by the database character set.
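For example, a sketch of the two settings (adjust language and territory to taste):

REM Windows (cmd.exe), before starting SQL*Plus:
C:\> set NLS_LANG=AMERICAN_AMERICA.AL32UTF8

# Linux/Unix shell:
$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
$ export LANG=en_US.UTF-8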
Check also this answer for more information: OdbcConnection returning Chinese Characters as "?"
