Firebird 2.5 query fails with "COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed"

I have an old source database in which a custom collation UTF8_CI_AI_NUMERIC_SORT was apparently created. I'm running it in Docker via the image jacobalberty/firebird:2.5-ss. The database was originally created on a Windows machine.
When I try to run a query on the table where this collation is used, I get the error:
SQL> select * from "InvoiceService";
Statement failed, SQLSTATE = 22021
COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Show collations returns the following:
SQL> show collations;
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
I tried the following fixes:
Adding an entry to fbintl.conf:
<charset UTF8>
intl_module fbintl
collation UTF8_CI_AI_NUMERIC_SORT
</charset>
Then running the sp_register_character_set("UTF8", 4) procedure, which fails with an error about duplicate collations (because UTF8_CI_AI_NUMERIC_SORT is already defined in the DB).
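(For reference, this is roughly how that registration step is run from isql; I'm assuming the procedure comes from the misc/intl.sql script that ships with Firebird, and the path below is illustrative:)
input /usr/local/firebird/misc/intl.sql;  -- path is an assumption; the script ships in Firebird's misc/ directory
execute procedure sp_register_character_set('UTF8', 4);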
Dropping the collation:
SQL> drop collation UTF8_CI_AI_NUMERIC_SORT;
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-Collation UTF8_CI_AI_NUMERIC_SORT is used in table InvoiceService (field name NAME) and cannot be dropped
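(As a side note, the columns that still depend on the collation can be listed from the system tables; this is a sketch against Firebird's RDB$ metadata:)
-- a column-level collation overrides the domain default, hence the COALESCE
SELECT rf.RDB$RELATION_NAME, rf.RDB$FIELD_NAME
FROM RDB$RELATION_FIELDS rf
JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = rf.RDB$FIELD_SOURCE
JOIN RDB$COLLATIONS c
  ON c.RDB$COLLATION_ID = COALESCE(rf.RDB$COLLATION_ID, f.RDB$COLLATION_ID)
 AND c.RDB$CHARACTER_SET_ID = f.RDB$CHARACTER_SET_ID
WHERE c.RDB$COLLATION_NAME = 'UTF8_CI_AI_NUMERIC_SORT';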
Adding a new column that would use a different collation, but I can't even add it:
SQL> ALTER TABLE "InvoiceService" ADD NAME2 VARCHAR(600) CHARACTER SET UTF8;
Statement failed, SQLSTATE = 22021
unsuccessful metadata update
-InvoiceService
-COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Using gbak to restore only the metadata, fixing the schema, and then inserting the data separately, but gbak does not support restoring only the data.
...
I'm out of ideas now. What else could I try?

So, I finally managed to solve the problem. What I did was create a DB backup with
gbak -v -t -user SYSDBA /path/to/source.fdb /path/to/backup.fbk
Then use the 3.0 version of the Firebird Docker image (jacobalberty/firebird:3.0) and restore from the backup with
gbak -create /path/to/backup.fbk /path/to/restored3.fdb
Note that the same backup-restore procedure without switching the Docker image did not work.
I didn't have to do anything else. There's only a slight difference in the SHOW COLLATIONS output:
// originally:
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
// restored DB
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'COLL-VERSION=58.0.6.50;NUMERIC-SORT=1'
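For anyone reproducing this, the whole procedure with Docker looks roughly like the following; the volume mount and in-container paths are assumptions, not my exact setup:
# back up under the 2.5 image (paths and the /db mount are illustrative)
docker run --rm -v /host/db:/db jacobalberty/firebird:2.5-ss \
  gbak -v -t -user SYSDBA /db/source.fdb /db/backup.fbk
# restore under the 3.0 image, which creates the database with the new on-disk structure
docker run --rm -v /host/db:/db jacobalberty/firebird:3.0 \
  gbak -create /db/backup.fbk /db/restored3.fdb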

Related

syntax error when trying to upload xml database to host

I'm getting the following error message.
Error
Static analysis:
1 errors were found during analysis.
The name of the entity was expected. (near "DEFAULT CHARACTER SET" at position 31)
SQL query:
CREATE DATABASE IF NOT EXISTS DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
MySQL said:
Documentation #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci' at line 1
You are missing the database name; see the CREATE DATABASE docs:
CREATE DATABASE IF NOT EXISTS mydatabasename
DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci
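You can verify the result afterwards:
SHOW CREATE DATABASE mydatabasename;  -- mydatabasename is the example name used above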

Oracle dump file table data extraction to file (original exp format)

I have Oracle dump files created with original exp (not expdp) (EXPORT:V10.02.01, Oracle 10g). They contain only table data for four tables.
1) I want to extract the table data into files (flat/fixed-width, CSV, or another text format) without importing them into another Oracle DB. [preferred]
2) Alternatively, I need a solution that can import them under an ordinary user (not SYSDBA) so that I can use other tools to extract the data.
My databases are 11g, but I can find 10g databases if needed. I have TOAD for Oracle Xpert 11.6.1.6 at my disposal. I am a moderately experienced Oracle programmer, but I haven't worked with EXP/IMP before.
(The information below has been obscured to protect the data.)
Here's how the dump files were created:
exp FILE=data.dmp \
LOG=data.log \
TABLES=USER1.TABLE1,USER1.TABLE2,USER1.TABLE3,USER1.TABLE4 \
INDEXES=N TRIGGERS=N CONSTRAINTS=N GRANTS=N
Here's the log:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: grants on tables/views/sequences/roles will not be exported
Note: indexes on tables will not be exported
Note: constraints on tables will not be exported
About to export specified tables via Conventional Path ...
Current user changed to USER1
. . exporting table TABLE1 271 rows exported
. . exporting table TABLE2 272088 rows exported
. . exporting table TABLE3 2770 rows exported
. . exporting table TABLE4 21041 rows exported
Export terminated successfully without warnings.
Thank you in advance.
UPDATE:
TOAD version 9.7.2 will read a "dmp" file generated by EXP.
Select DATABASE -> EXPORT -> EXPORT FILE BROWSER from the menus.
You need to have the DBA utilities for TOAD installed. There is no real guarantee that the file is parsed correctly, but the data will show up in TOAD in the schema browser.
NOTE: The only other known utility that will read a dmp file generated by the exp utility is the imp utility. You cannot parse the dump file yourself; if you try, you run the risk of parsing it incorrectly.
If you already have the data in an ORACLE table:
To extract the table data into a file, create a shell script that calls SQL*PLUS and causes SQL*PLUS to spool the table data to a file. You need one script per table.
#!/bin/sh
#NOTE: The path to sqlplus will vary on your system,
# but it is generally $ORACLE_HOME/bin/sqlplus.
#YOU NEED TO UNCOMMENT THESE LINES AND SET APPROPRIATELY.
#export ORACLE_SID=YOUR_SID
#export ORACLE_HOME=PATH_TO_YOUR_ORACLE_HOME
#export PATH=$PATH:$ORACLE_HOME/bin
sqlplus -s user/pwd@db << EOF
set pagesize 0
set linesize 255
set heading off
set echo off
SPOOL TABLE1_DATA.txt
REM FOR EACH COLUMN IN TABLE1, SET THE FORMAT
COL FIELD_ID format 999,999,999
COL FIELD_DATA format a99
select FIELD_ID,FIELD_DATA from TABLE1;
SPOOL OFF
EOF
Make sure you set the linesize large enough for each row, and set the format of each column. See FIELD_ID above for a number format column and FIELD_DATA for a character column.
NOTE: You need to remove the "N rows selected" line from the end of the file (or suppress it with set feedback off).
(You can still import the dump file into another schema using the imp utility.)
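Regarding option 2: imp can remap the owner with FROMUSER/TOUSER, so the data can land in an ordinary user's schema. A sketch with placeholder credentials (depending on your setup, extra privileges may still be needed for the remapping):
# user2/pwd@db and USER2 are placeholders
imp user2/pwd@db FILE=data.dmp LOG=imp.log FROMUSER=USER1 TOUSER=USER2 \
  TABLES=TABLE1,TABLE2,TABLE3,TABLE4 IGNORE=Y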

Import / export Oracle schema with correct character set

I have exported a schema successfully. On the import, however, the log says that the character sets don't match. The strange thing is that on the server where the export was done, the character set is the same as on the target database.
This is from the source:
SQL> select * from v$NLS_PARAMETERS;
NLS_CHARACTERSET
WE8MSWIN1252
NLS_NCHAR_CHARACTERSET
AL16UTF16
And this is from the log of the import:
Import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Export client uses US7ASCII character set (possible charset conversion)
Why is the dump recognized as US7ASCII? Both the source and the target are non-US machines.
Thank you
Yes, it looks like an issue with the character set of the client session. Set it to the globally supported and recommended UTF8 format.
Please take the export again and try importing. (Do the following before the export):
In Windows
set NLS_LANG=AMERICAN_AMERICA.UTF8
In Unix
export NLS_LANG=AMERICAN_AMERICA.UTF8
These days the database character set is also recommended to be 'AL32UTF8'.
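Putting it together on the source side (Windows; credentials, file names, and the schema owner are placeholders):
rem placeholders: user/pwd@db, schema.dmp, USER1
set NLS_LANG=AMERICAN_AMERICA.UTF8
exp user/pwd@db FILE=schema.dmp LOG=schema.log OWNER=USER1
and on the target side, with the same NLS_LANG set:
set NLS_LANG=AMERICAN_AMERICA.UTF8
imp user/pwd@db FILE=schema.dmp LOG=imp.log FROMUSER=USER1 TOUSER=USER1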

DB2: How to set encoding for db2clp under Windows?

I have a DB2 database that was created with the codeset set to UTF-8:
db2 create database mydb using codeset UTF-8
My data insert scripts are also stored in encoding UTF-8.
The problem now is that the command line processor seems to use a different encoding, since the Windows installation doesn't use UTF-8:
C:\Users\Administrator>chcp
Active code page: 850
This leads to the problem that my data (which contains special characters) is not stored correctly in the database.
Under Linux/AIX I could change the command line encoding by setting
export LC_ALL=en_US.UTF-8
How do I achieve this under Windows? I already tried
chcp 65001
UPDATE:
That doesn't have any effect. It seems like db2clp can't deal with the UTF-8 encoded file, because it prints out junk:
D:\Program Files\ibm_db2\SQLLIB\BIN>chcp 65001
Active code page: 65001
D:\Program Files\ibm_db2\SQLLIB\BIN>type d:\tmp\encoding.sql
INSERT INTO MY_TABLE (ID, TXT) VALUES (99, 'äöü');
D:\Program Files\ibm_db2\SQLLIB\BIN>db2 connect to mydb
Database Connection Information
Database server = DB2/NT64 9.5.0
SQL authorization ID = MYUSER
Local database alias = MYDB
D:\Program Files\ibm_db2\SQLLIB\BIN>db2 -tvf d:\tmp\encoding.sql
INSERT INTO MY_TABLE (ID, TXT) VALUES (99, 'äöü')
DB20000I The SQL command completed successfully.
You need to set both:
CHCP 65001
SET DB2CODEPAGE=1208
on the db2cmd command line, before running db2 -tvf. This works for databases that have their CODESET set to UTF-8. To check the codeset setting for a database, run:
db2 get db cfg for <your database>
and look for "Database code page" and "Database code set"; they should be 1208 and UTF-8 respectively.
When dealing with encodings, you have to take a careful look at your environments and where you currently are. So in your case:
the server stores its data in encoding A (like UTF-8)
the client resides in an environment which uses encoding B (like windows-1252)
On the client, you have to use the client's encoding (or tell the client that you intentionally use another encoding on the client side, like a UTF-8-encoded file inside a windows-1252 environment!). The connection between the client and the server does the work of converting encoding B into encoding A when storing the data in the database.
Setting db2codepage worked for me, thanks to Mr. Zoran Regvart.
By the way, after setting it you need to execute "db2 terminate" to reset the client, and then reconnect.
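Putting both answers together, the complete sequence in a db2cmd window looks like this (database name and script path taken from the question above):
rem values from the question: database mydb, script d:\tmp\encoding.sql
chcp 65001
SET DB2CODEPAGE=1208
db2 terminate
db2 connect to mydb
db2 -tvf d:\tmp\encoding.sql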

Why is this not a valid MySQL query?

mysql> ALTER TABLE bdds_arts ADD test VARBINARY;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use near
'' at line 1
Is something wrong with the VARBINARY type?
Here is the output of mysql --version:
mysql Ver 14.12 Distrib 5.0.24a, for Win32 (ia32)
UPDATE
If I change VARBINARY to BINARY or, say, VARBINARY(25), it works.
Since this is a piece of an auto-generated script, I need to understand what is going on in order to fix it.
The BINARY and VARBINARY types are similar to CHAR and VARCHAR.
You have to specify a length for VARBINARY, like VARCHAR(255); BINARY works without one because it defaults to BINARY(1), while VARBINARY has no default length.
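A minimal illustration against the table from the question (the length 25 is arbitrary):
ALTER TABLE bdds_arts ADD test VARBINARY;     -- fails: no length for VARBINARY
ALTER TABLE bdds_arts ADD test VARBINARY(25); -- works: length given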
Remove the comma after bdds_arts.
