I've got an issue inserting the ñ character into an Oracle database.
The INSERT operation completes successfully.
However, when I SELECT, I get n instead of ñ.
Also, I noticed executing:
select 'ñ' from dual;
gives me 'n'.
select * from nls_database_parameters where parameter='NLS_CHARACTERSET';
gives me 'EE8MSWIN1250'.
How do I insert ñ?
I'd like to avoid modifying db settings.
The only way I got this working was:
1. read the input .csv file with its windows-1250 encoding
2. change the encoding to Unicode
3. change the strings to their Unicode code representation (in which ñ is \00d1)
4. execute the INSERT/UPDATE passing the value from step 3 to the UNISTR function
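For illustration, step 4 ends up looking roughly like this (my_table and name are made-up names; note that lowercase ñ is actually code point \00f1, while \00d1 is the uppercase Ñ):
-- hypothetical table and column; UNISTR builds the string from Unicode code points
INSERT INTO my_table (name) VALUES (UNISTR('Espa\00f1a'));
COMMIT;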
Surely there is an easier way to achieve this.
The simplest way would be to find the character's code using ASCII() and then use CHR() to convert it back to a string when inserting.
SQL:
select ascii('ñ'),chr(50097) from dual;
Output:
ASCII('Ñ') CHR(50097)
---------- ----------
50097 ñ
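A sketch of that idea using the value returned above (my_table and name are made-up names; the numeric code is character-set dependent, so it has to be obtained with ASCII() on the same database):
-- hypothetical table/column; 50097 is what ASCII('ñ') returned above
INSERT INTO my_table (name) VALUES ('Espa' || CHR(50097) || 'a');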
The client you are using to connect to the database matters. Some clients do not support these characters.
Before you start SQL*Plus you have to set the codepage (using the chcp command) and the NLS_LANG environment variable accordingly:
chcp 1250
set NLS_LANG=.EE8MSWIN1250
sqlplus ...
However, as already noted in the comments, Windows-1250 does not support the character ñ, so these values will not work.
It is not compulsory to set these values equal to your database character set; however, the codepage and NLS_LANG must match each other, i.e. the following should work as well (as they all support ñ):
chcp 1252
set NLS_LANG=.WE8MSWIN1252
sqlplus ...
or
chcp 850
set NLS_LANG=.WE8PC850
sqlplus ...
Again, EE8MSWIN1250 does not support the character ñ; unless the data type of your column is NVARCHAR2 (resp. NCLOB), it is not possible to store ñ in the database.
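A minimal sketch of the NVARCHAR2 route, assuming the national character set is AL16UTF16 (the usual default) and using a made-up table name:
-- hypothetical table; UNISTR returns the value in the national character set,
-- so the literal ñ (U+00F1) never goes through the client character set
CREATE TABLE demo_nchar (name NVARCHAR2(100));
INSERT INTO demo_nchar (name) VALUES (UNISTR('Espa\00f1a'));
SELECT name FROM demo_nchar;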
Is that character supported by the database character set?
I could not find that character here: https://en.wikipedia.org/wiki/Windows-1250
I had no problems inserting that character into my database, which has the character set WE8MSWIN1252.
https://en.wikipedia.org/wiki/Windows-1252
my column,"ideograph", is of varchar2(1000) datatype. I inserted one character into the column with Microsoft IME, 寺, Oracle sqlplus just gives me a "?".
I checked v$nls_parameters.
nls_characterset= AL32UTF8
nls_nchar_characterset= AL16UTF16
Now, based on what I've read in other questions here, if I'm using the VARCHAR2 data type, then according to the v$nls_parameters query the DB is using AL32UTF8. But I think that is a "national character set", and I have no idea what the "database" character set is.
I also used the dump function with the dual table.
select dump('寺',1016) from dual;
I got typ=96 len=1 characterset=AL32UTF8: 3F
I did the same thing on livesql.oracle.com, which uses APEX in the cloud, and I got this:
typ=96 len=3 characterset=AL32UTF8: e5,af,ba
Why are the hexadecimal codes different if it says it is the same character set?
I have no idea what encoding Microsoft IME is using; maybe that is the problem.
How can I fix this? I could create another database, because I don't have any data in the one I'm working with; it is just for training purposes. But I want to see if I can fix it without having to do that.
The general solution is learning how the NLS_LANG setting works in Oracle and understanding that you may need different NLS_LANG settings on Windows, because the Windows command line does not handle character set settings the same way as the Windows graphical user interface.
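As a first check, a sketch like the following shows whether the '?' was already stored in the table or is only a display problem (the table name is made up; "ideograph" is the column from the question):
-- 3F is the byte of '?', i.e. the character was lost before it reached the database;
-- e5,af,ba would be the genuine UTF-8 bytes of 寺
SELECT ideograph, DUMP(ideograph, 1016) FROM my_table;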
I need to update a value in one table which contains a special character.
Below is the update query I have executed:
UPDATE TABLE_X
SET DISPLAY_NAME = 'AC¦', NATIVE_IDENTITY='AC¦'
WHERE ID='idNumber'
Special Character "¦" is not getting updated in Oracle.
I have already tried below approaches:
Checked the character set being used in Oracle using below query
select * from nls_database_parameters where parameter='NLS_CHARACTERSET';
It has the "US7ASCII" character set.
I have tried to see if any of the character sets will help, using the query below:
SELECT CONVERT('¦ ', 'ASCII') FROM DUAL;
I have tried the following different encodings:
WE8MSWIN1252
AL32UTF8
BINARY - this one is giving error "ORA-01482: unsupported character set"
Before changing the character set in the DB I wanted to try out Oracle's CONVERT function, but the above-mentioned character sets either return a block symbol or a question mark (�) symbol.
Any idea how I can store this special symbol in the DB?
Assuming that the character in question is not part of the US7ASCII character set (which it does not appear to be, unless you want to replace it with the ASCII vertical bar character |), you can't validly store the character in a VARCHAR2 column in the database.
You can change the database character set to a character set that supports all the characters you want to represent.
You can change the data type of the column to NVARCHAR2, assuming your national character set is UTF-16, which it normally would be.
You can store a binary representation of the character in some character set you know in a RAW column and convert back from the binary representation in your application logic.
I would prefer changing the database character set, but that is potentially a significant change.
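If you go the NVARCHAR2 route, a minimal sketch could look like the following; TABLE_X, ID, and 'idNumber' come from the question, while the new column name and its length are assumptions (a new column is added because an existing column's data type can normally only be changed while it is empty), and \00A6 is the Unicode code point of the broken bar ¦:
-- hypothetical new NVARCHAR2 column to hold the broken-bar value
ALTER TABLE TABLE_X ADD (DISPLAY_NAME_N NVARCHAR2(50));
UPDATE TABLE_X
   SET DISPLAY_NAME_N = UNISTR('AC\00A6')
 WHERE ID = 'idNumber';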
I would like to know the difference between
NLS_NCHAR_CHARACTERSET and NLS_CHARACTERSET settings in Oracle?
From my understanding, NLS_NCHAR_CHARACTERSET is for NVARCHAR data types
and NLS_CHARACTERSET would be for VARCHAR2 data types.
I tried to test this on my development server, where my current CHARACTERSET settings are as follows:
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET US7ASCII
Then I inserted some Chinese character values into the database. I inserted the characters into a table called data_<seqno> and updated the ADDRESS and ADDRESS_2 columns, which are VARCHAR2 columns. From my understanding, with the current NLS_CHARACTERSET setting of US7ASCII, Chinese characters should not be supported, but they still show up in the database. Does NLS_NCHAR_CHARACTERSET take precedence over this?
Thank You.
In general all your points are correct. NLS_NCHAR_CHARACTERSET defines the character set for NVARCHAR2 et al. columns, whereas NLS_CHARACTERSET is used for VARCHAR2.
Why is it possible that you see Chinese characters with US7ASCII?
The reason is that your database character set and your client character set (i.e. the NLS_LANG value) are both US7ASCII. Your database uses US7ASCII and it "thinks" the client also sends data using US7ASCII. Thus it does not perform any conversion of the strings; the data are transferred bit-by-bit from client to server and vice versa.
Due to that fact you can use characters which are actually not supported by US7ASCII. Be aware that if your client uses a different character set (e.g. when you use the ODP.NET Managed Driver in a Windows application) the data will be rubbish! Also, if you ever consider a database character set migration, you will face the same issue.
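A sketch to see what was really stored (the table name is made up; ADDRESS is the column from the question); under the pass-through scenario described above, the dumped bytes will fall outside the 7-bit ASCII range even though the declared character set is US7ASCII:
-- any byte above 7F proves the data is not valid US7ASCII and survives only by pass-through
SELECT address, DUMP(address, 1016) FROM data_table;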
Another note: I don't think you would get the same behavior with other character sets, e.g. if your database and your client both used WE8ISO8859P1. Also be aware that you actually have a wrong configuration: your database uses the character set US7ASCII, your NLS_LANG value is also US7ASCII (most likely it is not set at all and Oracle defaults it to US7ASCII), but the real character set of SQL*Plus, resp. your cmd.exe terminal, is most likely CP950 or CP936.
If you would like to set everything up properly you can either set the environment variable NLS_LANG=.ZHT16MSWIN950 (CP936 seems not to be supported by Oracle) or change your codepage before running sqlplus.exe with the command chcp 437. With these proper settings you will not see any Chinese characters, as you probably would have expected.
I have data which contains special characters like à, ç, è, etc.
I am trying to insert the data into tables having these characters. The data gets inserted without any issues, but these characters are replaced with ?/?? when stored in the tables.
How should I resolve this issue? I want to store these characters in my tables.
Is it related to NLS parameters?
Currently the NLS character set is AL32UTF8, as seen from V$NLS_PARAMETERS.
Is there any specific table/column to be checked? Or is it something in the database settings?
Kindly advise.
Thanks in advance.
From the comments: it is not required that the column be NVARCHAR (resp. NVARCHAR2), because your database character set is AL32UTF8, which supports any Unicode character.
Set your NLS_LANG variable to AMERICAN_AMERICA.AL32UTF8 before you launch your SQL*Plus. You may change the language and/or territory to your own preferences.
Ensure you select a font which is able to display the special characters.
Note that the client character set AL32UTF8 is determined by your local LANG variable (i.e. en_US.UTF-8), not by the database character set.
Check also this answer for more information: OdbcConnection returning Chinese Characters as "?"
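To tell a storage problem from a display problem, a small check like this may help (my_table and col are made-up names); ASCIISTR shows stored non-ASCII characters as \XXXX escapes regardless of the client's display capabilities:
-- à stored correctly shows as \00E0; a genuine '?' stored in the table shows as a literal ?
SELECT col, ASCIISTR(col) FROM my_table;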
I'm facing a character discrepancy issue while extracting data from db tables.
I've written PL/SQL code to spool some data from my DB tables to a .txt file, and I am running this SQL from a Unix shell, but in the spooled file the result set differs from what is at the back end.
For example:
At back end: SADETTÝN
In Spooled txt file : SADETTŸN
If you look at the Ý character, it has been changed. I want to preserve all the characters the way they are at the back end.
My db's character set:
SELECT * FROM v$nls_parameters WHERE parameter LIKE 'NLS%CHARACTERSET'
PARAMETER VALUE
NLS_CHARACTERSET WE8ISO8859P1
NLS_NCHAR_CHARACTERSET WE8ISO8859P1
And the Unix NLS_LANG parameter:
$ echo $NLS_LANG
AMERICAN_AMERICA.WE8ISO8859P1
I tried changing the NLS_LANG parameter to WE8ISO8859P9 (the Turkish character set), but it didn't help!
Could anyone let me know the solution to this problem?
I presume that you are trying to view your file with "vi" or something similar. The NLS_LANG parameter is used only by your database session when exporting to your file. For your editor (vi), you need to set the LANG parameter to the value corresponding to your NLS_LANG.
Example: for ISO8859P1 American English you have to do
export LANG=en_US.ISO8859-1
In other words, your file is just fine; it is your editor that doesn't know what to do with your Turkish characters.
You should use NCHAR data types. More information is available in the Oracle documentation: SQL and PL/SQL Programming with Unicode.
For spooling from SQL*Plus, you need to set the NLS_LANG environment variable correctly. Here is a similar question on Stack Overflow.
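A rough sketch of the spool step (the file and table names are made up); NLS_LANG must be set correctly in the environment before SQL*Plus is started, and the spooled file must then be viewed with a matching LANG:
-- spool the query result to a text file in the client character set defined by NLS_LANG
SPOOL extract.txt
SELECT name FROM my_table;
SPOOL OFF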