MariaDB utf8 problem with some Chinese characters - utf-8

I have some problems with this Chinese character: 𦰡
My setup:
CentOS 7
MariaDB 10.1.48
Default collation: utf8_bin
The character is only shown as '???'.
I have already tried different charsets: utf8mb4, utf16.
Any recommendations?
Or is anyone able to display the character with a similar setup?
Cheers
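
The character in question lies outside Unicode's Basic Multilingual Plane, so it needs four bytes in UTF-8. MariaDB's utf8 charset stores at most three bytes per character, which is why utf8mb4 has to be used by the column, the connection and the client alike. A minimal sketch, assuming a throw-away table (t_example is made up) and a client that lets you set the connection charset:

-- The connection has to speak utf8mb4 too, otherwise the server
-- transcodes through 3-byte utf8 and the character becomes '?'.
SET NAMES utf8mb4;

CREATE TABLE t_example (
  txt VARCHAR(50)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;

INSERT INTO t_example (txt) VALUES ('𦰡');

-- Expect 8 hex digits (4 UTF-8 bytes) if the character was stored intact;
-- a '?' (hex 3F) means something in the chain transcoded it away.
SELECT txt, HEX(txt) FROM t_example;

If the value is already mangled on INSERT, the culprit is usually the connection charset rather than the table definition.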

Related

Displaying Japanese characters in the Command Prompt

I just installed the Japanese language pack so that my IDE, VS Community 2022, can identify Japanese characters. But when I run it, the Command Prompt doesn't recognize them and inserts question marks instead.
I'm not allowed to insert an image, so here is a link.
Could you please tell me how I could solve this issue?
Thank you in advance.

Charset mismatch while executing an application on Windows and on Wine (Ubuntu)

We have a small .exe client application that is used for loading scanned images from remote sources; what we need to do is enter the address shown on the images into a specific text box and submit.
The data is then saved in a remote database and read by other apps.
When we run the client application on Windows, everything is normal. The problem only happens when we run the application on Ubuntu via Wine:
When we input an address that contains German characters, the other apps read the wrong characters from the database, for example:
What we enter:
Lößnitzstr
What the other apps see:
LÃ¶ÃŸnitzstr
We found out this is a charset mismatch: the text is encoded as UTF-8 but decoded with the Windows-1252 code page (see the SQL sketch after this question).
Since the default charset of Ubuntu is UTF-8, we tried using the command line to force Wine to run with a Windows charset setting:
LANG=de_DE.CP1252 wine client.exe
We also tried setting the operating-system default locale via localectl to a language with the Windows-1252 (CP1252) charset, but it does not seem to work.
Any idea how to fix this? We would really appreciate your help.
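
The round trip can be reproduced in SQL, which helps confirm it is an encoding mismatch rather than a Wine bug. A rough sketch for MariaDB/MySQL, purely illustrative (latin1 is MariaDB's cp1252-style Western European charset; the actual remote database may differ):

-- Take the correctly typed name, get its UTF-8 bytes, then reinterpret
-- those bytes as latin1/cp1252, which is what the reading apps appear to do:
SELECT CONVERT(CAST(CONVERT('Lößnitzstr' USING utf8mb4) AS BINARY) USING latin1)
       AS what_the_other_apps_see;
-- The result is the garbled "Ã..." form of the street name.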

Toad € to â‚¬ symbol Oracle euro UTF-8

I am using Toad 9.0.1.8.
A column in a table was showing the euro € symbol as â‚¬.
I've tried setting the environment variable NLS_LANG=AMERICAN_AMERICA.AL32UTF8 on my Windows machine; I've also tried American_America.UTF8 and American_America.WE8ISO8859P1, yet the issue was not resolved, and I'm still seeing those characters instead of a euro symbol. Every time I changed the env var, I restarted Toad.
Could somebody help? I tried some solutions found by searching online, but nothing worked.
Maybe you have a character set issue in your DB? From this thread it seems to be a problem with DB settings.
Also, some suggestions on charset issues recommend upgrading to version 10 of Toad, if that is an option, since it is a Unicode release.
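
One way to follow up on that suggestion is to ask the database itself which character set it uses and then make NLS_LANG match it. A hedged sketch using a standard Oracle dictionary view (no assumptions about your schema):

-- If this returns e.g. WE8MSWIN1252 while the client assumes AL32UTF8
-- (or the other way around), the euro sign gets transcoded into â‚¬-style garbage.
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');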

Change encoding (collation?) of SQL Server 2008 R2 to UTF-8

We'd like to move our Confluence system to SQL Server 2008 R2. Now, since Confluence uses UTF-8 encoding, I'd need a database using the same encoding (I guess that's the collation?).
There's the command
alter database confluence collate COLLATION_NAME
Now, as it seems, there is no UTF-8, and as I found out, SQL Server uses UCS-2, which is basically the same. But I can't figure out what the collation name for UCS-2 would be. Does somebody know about that?
Edit: I do see the difference between encoding and collation now. The Confluence documentation suggests that I should create a schema which relies on UCS-2 (because MS SQL lacks support for UTF-8). I have looked through Management Studio and found an entry for schemas in the Security folder of the database. However, I cannot figure out how to assign UCS-2 encoding to the schema. What do I have to do in Management Studio to realize this (or which query should I use)?
According to the Confluence documentation, you should set the collation to SQL_Latin1_General_CP1_CS_AS.
We followed this document and have had a successful Confluence deployment on SQL Server 2008 R2:
Database Setup for SQL Server
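
For reference, a sketch of the collation change the answer describes. The database name confluence comes from the question; the single-user switch is only there because SQL Server needs exclusive access while the collation is altered:

ALTER DATABASE confluence SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE confluence COLLATE SQL_Latin1_General_CP1_CS_AS;
ALTER DATABASE confluence SET MULTI_USER;

-- UCS-2 storage comes from the N* types (NVARCHAR, NCHAR, NTEXT),
-- not from the collation, so the Confluence tables should use those types.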

SAS Oracle Data Issues

I'm having an issue regarding special characters. I use SAS to connect to an Oracle database and then download tables from Oracle to SAS datasets.
Previously, special characters were downloaded correctly without a problem. I recently received a new laptop at work, and since then there have been some data issues.
Basically, what is happening is that special characters are removed or replaced. For instance, é is being replaced by e, and á by a. Other special characters are completely removed or replaced with '?'.
I've read a bunch of articles about encoding, transcoding and NLS_LANGUAGE, but I just can't figure out why this is happening and how to fix it. My other colleagues who are still using old laptops do not have this same issue!
Please, any help would be GREATLY appreciated
Check your Windows Registry. On my machine the setting is at HKEY_LOCAL_MACHINE\SOFTWARE\oracle\KEY_OraClient11g_home1\NLS_LANG. Compare this key to that of coworkers who use SAS with Oracle and still have the older laptops.
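
A quick way to confirm the suspicion from the SQL side (in addition to the registry check) is to compare what the database and the new laptop's session report; a sketch using standard Oracle dictionary views, run through whatever SQL pass-through you normally use:

-- Compare this output, and the registry NLS_LANG from the answer above,
-- between the new laptop and a colleague's old one; a client character set
-- that does not match NLS_CHARACTERSET explains é -> e and the '?' substitutions.
SELECT 'DATABASE' AS scope, parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_LANGUAGE')
UNION ALL
SELECT 'SESSION', parameter, value
  FROM nls_session_parameters
 WHERE parameter = 'NLS_LANGUAGE';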
