Identify special character in Oracle

How can I identify special characters in Oracle, like the ones marked in blue below?
Rows like these are causing an issue while we migrate data from Oracle to MSSQL Server 2012.
For the migration we are using the SSMA tool V5.3.0.
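One way to locate such rows on the Oracle side is to look for anything outside the printable ASCII range and dump the raw bytes. This is only a sketch; my_table, id and text_col are placeholder names for the actual table and columns:

-- flag rows whose text_col contains anything outside printable ASCII (space..tilde)
select t.id,
       t.text_col,
       asciistr(t.text_col)   as unicode_escapes,  -- non-ASCII characters appear as \XXXX
       dump(t.text_col, 1016) as byte_dump         -- byte values in hex plus the character set
from   my_table t
where  regexp_like(t.text_col, '[^ -~]');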

Related

Migrating database from Access 2003 to Oracle: Arabic characters are shown as question marks

UPDATE
As the answer below shows, this is an error caused by JDBC. So is there any other suggestion for migrating an Access database to an Oracle database, other than using Toad and doing it the hard way? Triggers, views and sequences won't be imported that way, so I would have to create them myself.
I am migrating a database from Access 2003 to Oracle Database 12c, but Arabic characters are shown as question marks at the step where you connect to the Access database using SQL Developer.
I followed what you suggested in this answer and restarted my PC, but nothing changed.
NOTE
When I open the .mdb file from Access, the Arabic characters show correctly, but when I open it from SQL Developer I get question marks instead of Arabic characters.
Is there anything else to do?
I ran the query that #krokodilko suggested and got the result below:
select * from nls_database_parameters where parameter like '%CHAR%'
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_CHARACTERSET AR8MSWIN1256
select * from nls_session_parameters where parameter like '%LANG%';
NLS_LANGUAGE ENGLISH
NLS_DATE_LANGUAGE ENGLISH
By the way, when I open another Oracle schema the Arabic characters show correctly. Does Access have a special encoding?
Unfortunately, this looks like a problem with the JDBC-ODBC Bridge. It does not work properly with the Access ODBC driver when text includes Unicode characters.
See other questions regarding MS Access over the JDBC-ODBC Bridge, like this one:
Reading Unicode characters from an Access database via JDBC-ODBC.
There is also a proposed solution which may work for a general Java-to-MSAccess connection, using a pure Java JDBC driver (UCanAccess):
Manipulating an Access database from Java without ODBC
But your question is about using SQL Developer for migration, so this is not a solution for you, since SQL Developer supports only a limited number of JDBC drivers, and UCanAccess is not among them.
The hard way is better than no way.
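If you want to confirm on the Oracle side whether the question marks are only a display/driver artifact or have actually been stored, a quick check like the following can help (your_table and your_column are placeholders):

select your_column,
       asciistr(your_column)   as unicode_escapes,  -- genuine Arabic shows up as \XXXX escapes
       dump(your_column, 1016) as byte_dump         -- corrupted rows show literal 3f ('?') bytes
from   your_table
where  rownum <= 10;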

NVarchar2 fields inserted as blank/NULL while Importing data from SQL Server to Oracle over dblink

I'm currently importing data from a SQL Server database into an Oracle 11g one, and I'm encountering some strange behavior when importing nvarchar2 columns. The character set on the SQL Server db is Unicode (UTF-8), while on Oracle's side it is not (WE8ISO8859P1).
Whenever I try to import the SQL Server values into the nvarchar2 columns, Oracle inserts them as blank strings. Trying this in a different IDE, they show up as NULL. Whatever I do, the text is not imported.
I've had success using the dbms_hs_passthrough package, yet this seems overkill for such a simple insert/select task.
What am I missing?
Thanks in advance.
EDIT: If I perform a SELECT on the remote table I can see the text values just fine. CTAS also works as expected. I can only replicate the erroneous behavior when I run the package.
EDIT 2: I've narrowed this problem down to a possible bug in the MERGE statement in Oracle with data from SQL Server over DBLink. I solved this problem by performing separate INSERT/UPDATE DML statements.
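For reference, a rough sketch of that workaround, replacing the MERGE with separate DML over the link (local_t, remote_t, mssql_link and the column names are placeholders):

-- update rows that already exist locally
update local_t l
set    l.txt = (select r.txt from remote_t@mssql_link r where r.id = l.id)
where  exists (select 1 from remote_t@mssql_link r where r.id = l.id);

-- insert the rows that are missing locally
insert into local_t (id, txt)
select r.id, r.txt
from   remote_t@mssql_link r
where  not exists (select 1 from local_t l where l.id = r.id);

commit;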

Oracle 11g and .NET ODP character set conversion

We have Oracle 11g database with EE8ISO8859P2 character set. This character set can’t be changed, i.e. it must stay as it is.
However the data that will be inserted and read from the database will be from another character set: CL8MSWIN1251.
We are using .NET and ODP. One possible approach is manual transliteration in the .NET application itself.
Is the following scenario with Oracle 11g, .NET and ODP feasible?
Data is stored on the server into database with EE8ISO8859P2 character set encoding.
ADO.NET ODP driver connects to the database and retrieves the character data as opaque bytes which are further decoded in the .NET client by using the mapping: CL8MSWIN1251=>CLR Unicode.
We have tried with NLS_LANG environment variable but that did not solve the problem.
Any suggestions?
I assume you mean ODP instead of ODC.
I don't think your proposal is a good idea. Of course, it is always possible to "cheat" the database character set inside your .NET application. But imagine a DBA has to analyze an issue on the database. Usually he will use a tool like SQL*Plus, SQL Developer, or similar. Or what happens when another tool accesses the database, e.g. for reporting purposes? None of these will work correctly.
An Oracle database actually has two character sets, the "normal" one and the "national" one. Perhaps you can use the second character set by using NVARCHAR2 or NCHAR datatypes.
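If that route is open to you, a minimal sketch (customer_notes is a placeholder table): NVARCHAR2 columns use the national character set (typically AL16UTF16), so they can hold Windows-1251 Cyrillic data even though the database character set stays EE8ISO8859P2. From ODP.NET, bind such values as NVarchar2 parameters rather than concatenating them into the SQL text.

create table customer_notes (
  id   number primary key,
  note nvarchar2(2000)   -- stored in the national character set, not EE8ISO8859P2
);

-- the N'' prefix marks a national character set literal
insert into customer_notes (id, note) values (1, N'Пример текста');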

SSMA (SQL Server Migration Assistant) for Oracle cannot find datatypes

I am trying to migrate my Oracle db to SQL Server 2008 using SSMA. I defined some type mappings for columns. When I synchronize after converting the schema, it gives errors like "Cannot find data type datetime" (or the same for bit). These data types are valid SQL Server data types.
Why am I getting these errors?
Just a guess, but it's quite hard to provide more than that before you give more details... (the code being synchronized to SQL Server, first of all).
Do you have case-sensitive collation on your SQL Server? I believe SSMA may have problems in this case. Try synchronizing to case-insensitive DB.
Also, you may try running the generated (translated) SQL Server code in Management Studio and then tracking down the problem with the generated SQL or the DB setup from there. Again, it's most likely possible to figure out the problem solely by looking at your generated SQL if it's indeed corrupted due to some bug in SSMA.
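To check the case-sensitivity point quickly, you can look at the collations on the target instance and database (YourDb is a placeholder); a "CS" in the collation name, e.g. Latin1_General_CS_AS, means case-sensitive:

-- server-level collation
select serverproperty('Collation') as server_collation;

-- collation of the target database
select databasepropertyex('YourDb', 'Collation') as database_collation;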

How to copy data encrypted by dbms_obfuscation_toolkit.DESEncrypt

I have an Oracle (10.2.0.4) database table with a column which is encrypted with dbms_obfuscation_toolkit.DESEncrypt.
Some of our data has been messed up by it getting re-encrypted with another key.
I want to do some testing on this data to try and recover it. Therefore, I want to copy the data from our live system and into a test system.
I've tried simply exporting the data from SQL Developer (in various text based formats), but the "binary" nature of the encrypted data seems to break the file format.
I tried exp, but this reported errors (although I'm not sure if this is to do with the encrypted data or not).
How can I copy just this one table's data from one database to another?
Thanks.
The errors I got when exporting the table are below. I was doing this from my local machine connecting to a remote database:
c:\>exp <user>/<password>@<sid> FILE=export.dmp TABLES=(TABLE1)
Export: Release 11.1.0.6.0 - Production on Thu Oct 14 20:46:51 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P15 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table TABLE1
EXP-00008: ORACLE error 904 encountered
ORA-00904: "MAXSIZE": invalid identifier
I would try with a database link. If you can't create a database link, you could try the COPY command of SQL*Plus, although I'm not sure if it would work with encrypted columns (it looks like this command is deprecated in the newest releases).
If this fails, the best tool to export/import data from Oracle to Oracle would probably be Data Pump (included in the DB).
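A sketch of the database-link route (the link name, credentials and TNS alias are placeholders). Note that VARCHAR2 data is converted between the two database character sets as it comes over the link, so, as the update below mentions, the character sets need to match for the encrypted bytes to survive:

-- on the test database
create database link live_link
  connect to app_user identified by app_password
  using 'LIVE_TNS_ALIAS';

create table table1 as
select * from table1@live_link;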
It turned out that my Windows test database had a slightly different character set encoding compared to our live (Unix) system: WE8ISO8859P1 vs WE8ISO8859P15. I did a character set conversion on my test database, using the instructions here, and then I was able to import the data.
