Gibberish on Oracle 12.1 DB sqlplus errors

I am getting gibberish in my SQL*Plus ORA error messages. Example:
SQL> conn ur#mydb
Enter password:
ERROR:
ORA-01017: ┐┐┐┐┐ ┐┐┐┐┐/┐┐┐┐┐ ┐┐ ┐┐┐┐┐; ┐┐┐┐┐┐┐┐ ┐┐┐┐┐
This is my NLS_LANG in the registry:
AMERICAN_AMERICA.WE8MSWIN1252
I have Windows 8 64-bit and Oracle DB 12.1.0.1.
I have tried everything.
Thank you for the help.

Try this: I had the same problem, and it was solved when I changed the NLS_LANG value to AMERICAN_AMERICA.WE8MSWIN1252.
Procedure
Run the following queries to get the corresponding values:
SELECT VALUE as Language FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_LANGUAGE';
SELECT VALUE as Territory FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_TERRITORY';
SELECT VALUE as Characterset FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_CHARACTERSET';
The NLS_LANG parameter is set as: <Language>_<Territory>.<Characterset> (for example, set NLS_LANG = AMERICAN_AMERICA.UTF8)
To set the value of the NLS_LANG parameter on Windows, check the HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\NLS_LANG entry in the registry.
To set the value of the NLS_LANG parameter on UNIX, set NLS_LANG as a local environment variable.
source: http://pic.dhe.ibm.com/infocenter/ssfs/v9r2/index.jsp?topic=%2Fcom.ibm.help.install.doc%2Ft_ConfiguringTheNLS_LANGParameterForAnOracleClient.html
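If it helps, the three lookups above can be combined into a single query against the same view (just a convenience, not part of the linked procedure):
SELECT parameter, value
FROM NLS_DATABASE_PARAMETERS
WHERE parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET');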

Related

Firebird 2.5 query returns COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed

I have an old source database in which a custom collation UTF8_CI_AI_NUMERIC_SORT was apparently created. I'm running it on Docker via the image jacobalberty/firebird:2.5-ss. The database was originally created on a Windows machine.
When I try to do a query on the table where this collation was used, I get the error:
SQL> select * from "InvoiceService";
Statement failed, SQLSTATE = 22021
COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Show collations returns the following:
SQL> show collations;
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
I tried the following fixes:
Adding an entry to fbintl.conf:
<charset UTF8>
intl_module fbintl
collation UTF8_CI_AI_NUMERIC_SORT
</charset>
Then I ran the sp_register_character_set("UTF8", 4) procedure and received an error about duplicate collations (because UTF8_CI_AI_NUMERIC_SORT is already defined in the DB).
Dropping the collation:
SQL> drop collation UTF8_CI_AI_NUMERIC_SORT;
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-Collation UTF8_CI_AI_NUMERIC_SORT is used in table InvoiceService (field name NAME) and cannot be dropped
Adding a new column that would use a different collation, but I can't even add it:
SQL> ALTER TABLE "InvoiceService" ADD NAME2 VARCHAR(600) CHARACTER SET UTF8;
Statement failed, SQLSTATE = 22021
unsuccessful metadata update
-InvoiceService
-COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Using gbak to restore only the metadata, fixing the schema, and then inserting only the data; but gbak does not support restoring data only.
...
I'm out of ideas now. What else could I try?
So, I finally managed to solve the problem. What I did was to create a DB backup with
gbak -v -t -user SYSDBA /path/to/source.fdb /path/to/backup.fbk
Then use the 3.0 version of Docker image with Firebird DB (jacobalberty/firebird:3.0) and restore from backup with
gbak -create /path/to/backup.fbk /path/to/restored3.fdb
Note that the same backup-restore procedure without switching the Docker image did not work.
I didn't have to do anything else. There's only a slight difference in SHOW COLLATIONS; output:
// originally:
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
// restored DB
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'COLL-VERSION=58.0.6.50;NUMERIC-SORT=1'
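If you want to double-check the restored collation yourself, its registered attributes can also be read straight from the Firebird system table (a small verification query, not something the fix itself required):
SELECT RDB$COLLATION_NAME, RDB$SPECIFIC_ATTRIBUTES
FROM RDB$COLLATIONS
WHERE RDB$COLLATION_NAME = 'UTF8_CI_AI_NUMERIC_SORT';
On the restored 3.0 database the RDB$SPECIFIC_ATTRIBUTES column should contain the COLL-VERSION entry shown above.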

Oracle replaces £ with ?? on INSERT

I have an Oracle 12c database that is replacing £ with ?? on INSERT. The insert is coming from a 11.2 instant client SQL*Plus session. Both boxes are running Linux.
The same also happens from SQL*Plus on the DB host itself.
The DB host has the following settings:
$LANG: en_GB.UTF8
NLS_CHARACTERSET: AL32UTF8
The Client has:
$LANG: en_GB.UTF-8
$NLS_LANG is not set on either the DB or the client host.
select dump('£', 1017) from dual;
DUMP('??',1017)
-----------------------------------------------------
Typ=96 Len=6 CharacterSet=AL32UTF8: ef,bf,bd,ef,bf,bd
EDIT: Correction. Only £ is being replaced, and it is being replaced with ??.
You are seeing the Unicode replacement character. It looks like your Linux environment setting for the Oracle NLS_LANG variable is not appropriate:
$ export NLS_LANG="ENGLISH_UNITED KINGDOM.US7ASCII"
$ sqlplus ...
SQL> select dump('£', 1017) from dual;
DUMP('??',1017)
-----------------------------------------------------
Typ=96 Len=6 CharacterSet=AL32UTF8: ef,bf,bd,ef,bf,bd
With a setting that matches your LANG it works as expected:
$ export NLS_LANG="ENGLISH_UNITED KINGDOM.AL32UTF8"
$ sqlplus ...
SQL> select dump('£', 1017) from dual;
DUMP('£',1017)
-----------------------------------------
Typ=96 Len=2 CharacterSet=AL32UTF8: c2,a3
which is the Unicode pound sign.
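As a quick sanity check after correcting NLS_LANG, a round trip through a throwaway table (nls_test is just a placeholder name) should store the two-byte UTF-8 encoding of the pound sign rather than the replacement character:
CREATE TABLE nls_test (val VARCHAR2(10 CHAR));
INSERT INTO nls_test VALUES ('£');
SELECT DUMP(val, 1016) FROM nls_test;
-- expected: Typ=1 Len=2 CharacterSet=AL32UTF8: c2,a3
DROP TABLE nls_test;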

Import / export Oracle schema with correct character set

I have exported a schema successfully. On the import, however, the log says that the character sets don't match. The strange thing is that on the server where the export was done, the character set is the same as on the target database.
This is from the source:
SQL> select * from v$NLS_PARAMETERS;
NLS_CHARACTERSET
WE8MSWIN1252
NLS_NCHAR_CHARACTERSET
AL16UTF16
And this is from the log of the import:
Import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Export client uses US7ASCII character set (possible charset conversion)
Why is the dump recognized as being in the US7ASCII character set? Both the source and the target are non-US machines.
Thank you
Yes, it looks like an issue with the character set of the client session. Set it to the globally supported and recommended UTF8 format.
Please take the export again and try importing. (Do the following before the export):
In Windows
set NLS_LANG=AMERICAN_AMERICA.UTF8
In Unix
export NLS_LANG=AMERICAN_AMERICA.UTF8
These days the database character set is also recommended to be AL32UTF8.
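After re-running the export and import with a matching NLS_LANG, one quick way to confirm that non-ASCII data survived is to inspect the raw bytes of a few rows on the target (table and column names below are placeholders for one of your own):
SELECT DUMP(some_text_column, 1016)
FROM some_imported_table
WHERE ROWNUM <= 5;
If the client character set was wrong during export or import, accented characters typically come back as 3f (a literal question mark) or as the ef,bf,bd replacement sequence.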

How do I select a variable's value in SQL Developer

Problem
I just want to see the value of a variable. I don't understand why this has to be so difficult.
My SQL Statement
--set serveroutput on format wrapped; Tried this too
SET SERVEROUTPUT ON;
--DBMS_OUTPUT.ENABLE(32000); Tried with, and without this
vend_num xx.VENDOR_CWT.VEND_NO%TYPE;
SELECT vend_no
INTO vend_num
FROM xx.VENDOR_NAME
WHERE VENDOR_NAME1 = 'xxxx';
dbms_output.put_line(vend_num);
The Error I'm Getting
Error starting at line 13 in command:
dbms_output.put_line(vend_num)
Error report:
Unknown Command
What I've Tried
I've tried the following answers:
Print text in Oracle SQL Developer SQL Worksheet window
Printing the value of a variable in SQL Developer
I've done what this answer suggested with the gui: https://stackoverflow.com/a/7889380/496680
I've tried exec dbms_output[...] as some posts have suggested.
Question
How do I just print the value of vend_num?
DBMS_Output is a PL/SQL package, so you'd call it from within PL/SQL code.
declare
  vend_num xx.VENDOR_CWT.VEND_NO%TYPE;
begin
  SELECT vend_no
    INTO vend_num
    FROM xx.VENDOR_NAME
   WHERE VENDOR_NAME1 = 'xxxx';
  dbms_output.put_line(vend_num);
end;
/
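If you would rather avoid DBMS_OUTPUT altogether, a bind variable also works from the SQL Developer worksheet (a sketch reusing the table and column names from the question; the bind is declared as VARCHAR2 here, so adjust it to the actual type of VEND_NO):
VARIABLE vend_num VARCHAR2(50)
BEGIN
  SELECT vend_no
    INTO :vend_num
    FROM xx.VENDOR_NAME
   WHERE VENDOR_NAME1 = 'xxxx';
END;
/
PRINT vend_num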

How do I check the NLS_LANG of the client?

I'm working on Windows. I know that this setting is stored in the registry. The problem is that the registry path changes from version to version, and browsing through that bunch of registry keys is definitely not a good idea.
I can get the NLS_LANG of the server with SELECT USERENV ('language') FROM DUAL.
I'd like to compare that with the client setting and show a warning when they don't match, just like PL/SQL Developer does.
This is what I do when I troubleshoot encoding issues (it shows the NLS_LANG value read by sqlplus):
SQL>/* It's a hack. I don't know why it works. But it does!*/
SQL>#[%NLS_LANG%]
SP2-0310: unable to open file "[NORWEGIAN_NORWAY.WE8MSWIN1252]"
You will have to extract the NLS_LANG value for the current ORACLE_HOME from the registry. All client-side tools (sqlplus, sqlldr, exp, imp, oci, etc.) read this value from the registry and use it to determine whether any character transcoding should occur.
ORACLE_HOME and registry section:
C:\>dir /s/b oracle.key
C:\Oracle10\BIN\oracle.key
C:\>type C:\Oracle10\BIN\oracle.key
SOFTWARE\ORACLE\KEY_OraClient10204_Home
In times like these I turn to IPython to demonstrate an idea:
A couple of lookups and you are there!
In [36]: OHOMES_INSTALLED = !where oci.dll
In [37]: OHOMES_INSTALLED
Out[37]:
['C:\\Oracle10\\BIN\\oci.dll',
'C:\\oraclexe\\app\\oracle\\product\\11.2.0\\server\\bin\\oci.dll']
In [38]: ORACLE_HOME = os.path.dirname(OHOMES_INSTALLED[0])
In [39]: ORACLE_HOME
Out[39]: 'C:\\Oracle10\\BIN'
In [40]: f = open(os.path.join(ORACLE_HOME, "oracle.key"))
In [41]: SECTION = f.read()
In [42]: SECTION
Out[42]: 'SOFTWARE\\ORACLE\\KEY_OraClient10204_Home\n'
In [43]: from _winreg import *
In [44]: aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
In [46]: aKey = OpenKey(aReg,SECTION.strip())
In [47]: val = QueryValueEx(aKey, "NLS_LANG")
In [48]: print val
(u'NORWEGIAN_NORWAY.WE8MSWIN1252', 1)
According to Jocke's answer (thanks Jocke), I tested the following query:
SELECT DISTINCT client_charset FROM v$session_connect_info
WHERE sid = sys_context('USERENV','SID');
It does the job perfectly, but I'm unsure whether every user will have the necessary rights.
I am not sure if this works every time, but for me in SQL*Plus:
variable n varchar2(200)
execute sys.dbms_system.get_env('NLS_LANG', :n )
print n
AMERICAN_AMERICA.WE8ISO8859P1
Just build a wrapper function, grant EXECUTE to the users who need it, and there you go.
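A minimal sketch of such a wrapper, assuming its owner has been granted EXECUTE on SYS.DBMS_SYSTEM (the grantee app_user below is only an example):
CREATE OR REPLACE FUNCTION get_client_nls_lang RETURN VARCHAR2 IS
  v_val VARCHAR2(200);
BEGIN
  sys.dbms_system.get_env('NLS_LANG', v_val);
  RETURN v_val;
END;
/
GRANT EXECUTE ON get_client_nls_lang TO app_user;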
