Connect to IBM DB2 using Korean characters from JDBC - utf-8

I have IBM DB2 10.1 Express Edition installed with codeset UTF-8 and code page 1208. I am able to create tables and columns with Korean characters.
But when I create a Korean user and try to log in with it, I get the error "Unable to convert unicode string to ebcdic format". I tried both db2jcc.jar and db2jcc4.jar, and also the DB2 CLP.
Any help would be greatly appreciated.
Thanks in advance

DB2 supports only single-byte characters in authorization names. If the character you're using in the ID is a multibyte character then I believe DB2 will not be able to authenticate it.
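One way to catch this before the connection attempt is to test whether the proposed authorization name fits a single-byte charset. This is only a sketch: ISO-8859-1 is used here as an approximation of "single-byte" and is not DB2's exact rule.

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class AuthNameCheck {
    /** Returns true if every character in the name can be encoded in a
     *  single-byte charset (ISO-8859-1 here, as an approximation). */
    static boolean isSingleByte(String name) {
        CharsetEncoder enc = Charset.forName("ISO-8859-1").newEncoder();
        return enc.canEncode(name);
    }

    public static void main(String[] args) {
        System.out.println(isSingleByte("db2admin")); // true
        System.out.println(isSingleByte("사용자"));    // false: Korean characters are multibyte
    }
}
```

A name that fails this check will likely be rejected by DB2 authentication, so it is cheaper to validate it up front than to debug the conversion error.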

Related

SAS/ACCESS ODBC Oracle, Greek characters are displayed as question marks (?)

We installed Oracle Instant Client v19 to connect to Oracle databases via ODBC, but when we read data, all Greek characters show up as question marks.

Migrating a database from Access 2003 to Oracle: Arabic characters are shown as question marks

UPDATE
As the answer below shows, this error is caused by JDBC. Is there any other suggestion for migrating an Access database to an Oracle database, other than using Toad and doing it the hard way? Triggers, views, and sequences won't be imported that way, so I would have to create them myself.
I am migrating a database from Access 2003 to Oracle Database 12c, but Arabic characters are shown as question marks at the step where you connect to the Access database using SQL Developer.
I followed what you suggested in this answer and restarted my PC, but nothing changed.
NOTE
When opening the .mdb file from Access, the Arabic characters are shown correctly, but when opening it from SQL Developer I get question marks instead.
Is there anything else to do?
I ran the query as #krokodilko suggested and got the result below:

select * from nls_database_parameters where parameter like '%CHAR%';

NLS_NCHAR_CONV_EXCP     FALSE
NLS_NUMERIC_CHARACTERS  .,
NLS_NCHAR_CHARACTERSET  AL16UTF16
NLS_CHARACTERSET        AR8MSWIN1256

select * from nls_session_parameters where parameter like '%LANG%';

NLS_LANGUAGE       ENGLISH
NLS_DATE_LANGUAGE  ENGLISH

By the way, when I open another Oracle schema, Arabic characters show correctly. Does Access have a special encoding?
Unfortunately, this looks like a problem with the JDBC-ODBC Bridge. It does not work properly with the Access ODBC driver when text includes Unicode characters.
See other questions regarding MS Access over the JDBC-ODBC Bridge, like this one:
Reading Unicode characters from an Access database via JDBC-ODBC.
There is also a proposed solution which may work for a general Java-to-MS-Access connection, using the pure Java JDBC driver UCanAccess:
Manipulating an Access database from Java without ODBC
But your question is about using SQL Developer for migration, so that is not a solution for you: SQL Developer supports only a limited number of JDBC drivers, and UCanAccess is not among them.
The hard way is better than no way.
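For readers who can work in plain Java rather than SQL Developer, a UCanAccess connection looks roughly like this. This is a sketch: the file path and table name are placeholders, and the UCanAccess jars (and their dependencies) must be on the classpath for the connection to actually succeed.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UcanaccessExample {
    /** Builds a UCanAccess JDBC URL for the given Access file path. */
    static String buildUrl(String mdbPath) {
        return "jdbc:ucanaccess://" + mdbPath;
    }

    public static void main(String[] args) {
        // Hypothetical path -- replace with your own .mdb/.accdb file.
        String url = buildUrl("C:/data/mydb.mdb");
        // UCanAccess is a pure Java driver, so Unicode text (Arabic, Greek,
        // Korean, ...) is not mangled the way it is by the JDBC-ODBC Bridge.
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM mytable")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } catch (SQLException e) {
            // Without the UCanAccess jars on the classpath this reports
            // "No suitable driver" -- expected when running the bare sketch.
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}
```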

Issue in storing value in Nvarchar column in Oracle

I have a table with an NVARCHAR column in Oracle. When I inserted the value below, it was converted into different characters.
Insert INTO tbltest (CONTENT) Values(N'✓ à la mode δ')
Select * From tbltest
CONTENT
--------------------
¿ à la mode d
So what data type should I use to store this kind of data? Please suggest.
For SQL Developer I don't know the exact solution, but check the settings in Tools -> Preferences -> Environment -> Encoding.
Select an encoding which supports your characters, e.g. UTF-8.
Regarding your C# Code:
The Microsoft .NET Framework Data Provider for Oracle has been deprecated for many years; you should not use it in new projects. I think development stopped about 20 years ago, so it does not support Unicode characters; you cannot use it.
Use a modern provider, e.g. "Oracle Data Provider for .NET (ODP.NET)", you can download it from here: Oracle Data Access Components (ODAC) for Windows Downloads
In the case of "Oracle Data Provider for .NET" you have to set NLS_LANG to an encoding which supports these characters, e.g. .AL32UTF8. You can set NLS_LANG either as an environment variable or in the registry at HKLM\SOFTWARE\Wow6432Node\ORACLE\KEY_%ORACLE_HOME_NAME%\NLS_LANG (for 32-bit), resp. HKLM\SOFTWARE\ORACLE\KEY_%ORACLE_HOME_NAME%\NLS_LANG (for 64-bit).
Other providers (e.g. ODP.NET Managed Driver or Oracle OraOLEDB) are not NLS_LANG sensitive.
See more information here: OdbcConnection returning Chinese Characters as "?"
This is due to your NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET database settings and the NLS_LANG setting at the client end where you are viewing the data.
Since your value contains special characters specific to your language, you need to set NLS_LANG to a character set that covers them, for example .AL32UTF8.

JDBC with Non-Unicode database, how to specify handling of unsupported characters?

I have a Java application that works with Unicode and a database (Oracle, MSSQL, DB2, MySQL) that uses an 8-bit non-Unicode codepage (for example IBM1141). Migrating the database to Unicode is not an option.
Is there any way to specify the behavior (replace/error/warn) of the JDBC driver when the application passes a Unicode character that cannot be encoded in the database encoding?
The JDBC specification has nothing to say on the topic of encoding, so it is up to the implementation to handle this.
Since Java itself uses UTF-16 internally, every JDBC driver that is worth its salt will automatically convert between the database encoding and UTF-16.
The behaviour of a JDBC driver when it encounters characters that it cannot convert is implementation specific and will depend on the “philosophy” of the database system.
The two JDBC drivers I know well behave differently:
Oracle JDBC will silently replace characters that cannot be converted with a replacement character. There is no way to get the Oracle JDBC driver or the Oracle database to throw an error.
PostgreSQL JDBC will always report an error if a character cannot be converted. There is no way to get PostgreSQL to silently modify the character or store an invalid character.
This is normally not an issue when you read data from the database, because everything can be converted to UTF-16, but it will be a problem when writing to the database. You'll have to sanitize the data yourself before writing it to the database.
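One way to do that sanitizing in plain Java, before the driver ever sees the data, is to test each character against the database charset and substitute a replacement yourself. This is a sketch: the charset name and the replacement character are assumptions you would adapt to your own database codepage.

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class DbEncodingSanitizer {
    /** Replaces every character that cannot be encoded in the target
     *  database charset with the given replacement character. */
    static String sanitize(String value, String dbCharset, char replacement) {
        CharsetEncoder enc = Charset.forName(dbCharset).newEncoder();
        StringBuilder sb = new StringBuilder(value.length());
        for (int i = 0; i < value.length(); ) {
            int cp = value.codePointAt(i);
            String ch = new String(Character.toChars(cp));
            sb.append(enc.canEncode(ch) ? ch : String.valueOf(replacement));
            i += Character.charCount(cp);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // ISO-8859-1 stands in for the single-byte database codepage here.
        System.out.println(sanitize("à la mode δ", "ISO-8859-1", '?'));
        // 'à' is representable in ISO-8859-1; the Greek 'δ' is not.
    }
}
```

Running the sanitizer on every string parameter before a PreparedStatement write gives you the "replace" behavior consistently across drivers, regardless of whether the driver itself would silently replace (Oracle) or throw (PostgreSQL).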

Oracle encoding problems

There is an Oracle database with a regional encoding for the Kazakh language. There is also a client, but it uses UTF encoding.
When I update a database field in Kazakh through PL/SQL Developer, or call an update procedure in a web application, these special characters become '?'.
But if I place the orai18n.jar file in the Oracle SQL Developer folder and update the field in Kazakh, everything works well.
How can I solve my problem? If you need any database or client configuration details, just ask!