Japanese text is unrecognizable in phpMyAdmin? - utf-8

I am using phpMyAdmin; why is the Japanese text in my tables unrecognizable there? When I output the same Japanese text from the tables in my application, it displays correctly. Any ideas? How do I fix phpMyAdmin?

Most likely the encoding is set incorrectly in the table structure, and phpMyAdmin uses that information. What encoding do you see configured for the columns in the table structure?
The MySQL documentation on this topic might help you fix this.
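A frequent cause is a column declared as latin1 that actually holds UTF-8 bytes. A minimal sketch of the usual diagnosis and repair, with hypothetical table and column names (back up the data before trying the ALTERs):

-- Inspect the declared character set of each column:
SHOW FULL COLUMNS FROM your_table;

-- If a column is declared latin1 but actually stores UTF-8 bytes, converting
-- through BLOB keeps the raw bytes intact while fixing the declared charset:
ALTER TABLE your_table MODIFY your_column BLOB;
ALTER TABLE your_table MODIFY your_column TEXT CHARACTER SET utf8mb4;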

Related

When I write a string to an image field in SQL it gets saved in UTF-16 LE encoding and I need it to be UTF-8

I have about 1200 databases in SQL 2016 Enterprise in which documents are stored as BLOBs in image fields. I migrated all the documents to our Document Management System, and now I want to replace the files in the database with shortcuts to the corresponding files in the DMS. I tried it a few months ago and that went well. Now I have run into an issue. When I use this command
convert(varbinary(max), <string value of shortcut>)
It gets written to the field, but when I try to open the shortcut I get an error. If I create the same shortcut from our DMS, it is exactly the same except for the encoding: my binary is UTF-16 little endian, while the DMS shortcut is encoded in UTF-8. The file size has also doubled, which is logical, since UTF-16 uses two bytes where UTF-8 uses one for ASCII characters. When I change the encoding in Notepad++, my shortcut works.
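For reference, a minimal illustration of where the two encodings come from (the literal 'abc' is just an example): N'...' literals and nvarchar values are UTF-16 LE in SQL Server, while varchar values use a single-byte code page that matches UTF-8 byte for byte for plain ASCII:

-- nvarchar source: two bytes per character (UTF-16 LE)
SELECT CONVERT(varbinary(max), N'abc');                        -- 0x610062006300
-- varchar source: one byte per ASCII character, identical to UTF-8 here
SELECT CONVERT(varbinary(max), CAST(N'abc' AS varchar(max)));  -- 0x616263

So, assuming the shortcut text is plain ASCII, casting the string to varchar(max) before the varbinary conversion may produce the UTF-8 bytes needed.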
I need my BLOB to be encoded in UTF-8. That is possible; I can upload a shortcut to the system that the database uses, and then it is stored correctly. I can't change the collation of the table or field, because this is a vendor database. It's a pretty old-fashioned system. Who uses BLOBs in the first place, and if you do, why image and not varbinary?
I'm not much of a programmer so any help would be greatly appreciated.
I tried updating to the latest client version of the application (not SQL Server, just the application). That didn't change anything.
I tried nvarchar but that doesn't work on image fields.

IIB: INSERT INTO Oracle database returns "¿" (inverted question marks) and Chinese characters

I am a greenhorn with Oracle Database and IBM Integration Bus, and I'm trying to use the INSERT INTO statement of ESQL in IBM Integration Bus to insert data from a CSV file.
I'm using a DFDL schema with ISO-8859-1 encoding to read the file. In the debugger the message looks fine, and it is readable in SQL*Plus and SQL Developer.
I already tried changing the NLS_CHARACTERSET setting in my Oracle database, although I'm not really sure which encoding I need. By default it was AL32UTF8, and I tried UTF8 and WE8ISO8859P1.
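To verify what the database is actually using, a standard dictionary query (nothing IIB-specific) is:

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');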
I also tried changing the encoding of the DFDL and switching the ODBC driver's setting between "Use Oracle NLS Settings (Default)", "Use Microsoft Regional Settings", and "Use US Settings".
If I use the INSERT INTO command, the database stores inverted question marks or Chinese characters, which is obviously not what I want.
EDIT:
If I hardcode the INSERT INTO values, I still get inverted question marks, so the CSV's encoding doesn't matter for that part. I also found out that the data read from the CSV file is inserted as null.
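One way to see which bytes actually landed in the column is Oracle's DUMP function (table and column names are placeholders):

-- 1016 = hexadecimal byte dump plus the character set of the value
SELECT your_column, DUMP(your_column, 1016) FROM your_table;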
It's a simple option in the IIB ODBC driver for Oracle: just make sure that "Enable SQL Describe Param" on the Advanced tab is checked.
If you mean the encoding of the CSV file: it is Windows 1250, since the input file has characters like 'ß'.
I don't understand that statement. The use of the character 'ß' does not necessarily imply Windows 1250. Do you have any other supporting evidence for that claim?
If your claim is correct, then your DFDL schema is incorrect. You cannot correctly parse Windows 1250 data with an ISO-8859-1 decoder; the two code pages disagree in the upper byte range. So the first thing you should do is change the 'encoding' property in the format block of your DFDL schema to "5346", the CCSID for Windows 1250 (according to https://www.ibm.com/support/knowledgecenter/en/SSRH46_3.0.0_SWS/dni_ccsids_and_char_set_names.html ).
But (and I'm sorry for repeating this, but it really matters...) CHECK that your assumption about Windows 1250 is correct. Then make sure that the encoding in the DFDL schema matches the encoding of the CSV file.

MS-Oracle ODBC Driver Function Sequence Error

I'm using the Microsoft-Oracle ODBC Driver to access an Oracle database in MS Access, but on about half of my linked tables, I get a [Function Sequence Error] whenever I try to pull up the Datasheet view. I've looked around for alternative drivers, but no luck.
Does anyone know how to stop getting these function sequence errors? And if I need a new driver, could you provide a link if possible to a download site? Thanks
I figured it out. The problem was that the Microsoft-Oracle ODBC driver mistakenly converted Oracle's CLOB (Character Large Object) fields into mere Text (255 char) fields in Access, and then Access freaked out whenever it tried to render a CLOB with more than 255 characters.
So I just excluded those CLOB fields from all my queries and migration tables. It does mean that I can't migrate over fields like "Description" or "Notes," but at least I can migrate over the primary keys and relationships. That's good enough for me, for now.
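If those long fields are ever needed after all, one possible workaround (a sketch with hypothetical names, done on the Oracle side) is to expose a truncated preview that fits Access's 255-character Text type:

-- DBMS_LOB.SUBSTR(lob, amount, offset) returns a VARCHAR2 slice of the CLOB,
-- so Access sees an ordinary short text column instead of a CLOB:
CREATE OR REPLACE VIEW orders_for_access AS
SELECT order_id,
       DBMS_LOB.SUBSTR(description, 255, 1) AS description_preview
  FROM orders;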

Complex table formatting in reStructuredText

I am trying to generate a complex table with rows and columns spanning multiple cells. Below is a snapshot of my reST code.
However, the LaTeX-generated PDF output from Sphinx does not represent the format correctly.
Could you let me know what might be wrong in my reST format, so I can correct this issue?
The HTML snapshot, as per comment, is attached below and it is correct.
Thank you!
This is a known bug: the Docutils LaTeX writer fails with (some) complex tables.
http://docutils.sourceforge.net/docs/user/latex.html#tables
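For illustration, "complex" here means tables with row and column spans, as in this minimal grid-table sketch (adapted from the Docutils documentation, not the asker's original):

+------------+------------+-----------+
| Header 1   | Header 2   | Header 3  |
+============+============+===========+
| body row 1 | column 2   | column 3  |
+------------+------------+-----------+
| body row 2 | Cells may span columns.|
+------------+------------+-----------+
| body row 3 | Cells may  | - Item 1  |
+------------+ span rows. | - Item 2  |
| body row 4 |            | - Item 3  |
+------------+------------+-----------+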
I have not tried it because I don't want to type that whole table, but rst2pdf should not have a problem processing that.

Replace invalid character in Oracle (by editing the dmp file)

We have a portal written in PHP/MySQL and an enterprise application based on Java EE and Oracle. Recently we found out that a certain Unicode character (U+0643, to be precise) is invalid in text columns (due to improper data entry by end users) and must be changed to another character (U+06A9).
In MySQL I simply changed the export file using a text editor's find-and-replace tool. But in Oracle the dmp file is binary, and I have no idea how to edit it.
How can I change the invalid character?
Is there an alternative to iterating through all text columns in all tables?
(I have saved that as a last resort!)
Editing an Oracle dump file may be possible but isn't practical; even if you could get in and change something you'd risk corrupting it, and I doubt Oracle support would be impressed. (See this AskTom question for example).
If you're using data pump and you know which column(s) the data is in you might be able to use the REMAP_DATA parameter to change it on the fly, or the QUERY parameter to skip the data, but it doesn't sound like you're in that situation. You could potentially add temporary constraints to the relevant column(s) to block the value, so import would reject (and log) the affected rows, but that's painful and messy.
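If you were using Data Pump, a REMAP_DATA sketch would look something like this (schema, table, and column names are hypothetical):

-- A tiny package whose function rewrites the bad character on the fly:
CREATE OR REPLACE PACKAGE remap_pkg AS
  FUNCTION fix_kaf(p_val IN VARCHAR2) RETURN VARCHAR2;
END remap_pkg;
/
CREATE OR REPLACE PACKAGE BODY remap_pkg AS
  FUNCTION fix_kaf(p_val IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- UNISTR turns the code points into characters in the database charset
    RETURN REPLACE(p_val, UNISTR('\0643'), UNISTR('\06A9'));
  END;
END remap_pkg;
/

-- Then reference it during import:
-- impdp scott/pwd ... REMAP_DATA=SCOTT.DOCS.BODY:SCOTT.REMAP_PKG.FIX_KAF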
If you do have to check all columns on all tables, this link may be helpful.
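If it does come to scanning everything, a generate-and-review approach keeps it manageable (a sketch over the current schema; inspect the generated statements before running them):

-- Build one UPDATE per text column, replacing U+0643 with U+06A9:
SELECT 'UPDATE "' || table_name || '" SET "' || column_name
       || '" = REPLACE("' || column_name
       || '", UNISTR(''\0643''), UNISTR(''\06A9'')) WHERE INSTR("'
       || column_name || '", UNISTR(''\0643'')) > 0;' AS fix_stmt
  FROM user_tab_columns
 WHERE data_type IN ('CHAR', 'VARCHAR2', 'NVARCHAR2', 'CLOB', 'NCLOB');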
