Console output spits out Chinese(?) characters - Windows

This is a real shot in the dark, but maybe someone has had a similar issue. Some console apps are invoked by either SQL Server 2008 or Autosys (a job scheduler) under Windows Server 2008; the output of each run is saved to .txt files. Every so often, with no definite pattern as far as I can tell, the saved output is displayed as a series of what I presume are Chinese characters. Has anyone encountered this phenomenon?

Typically, when you unexpectedly discover Chinese characters in output, it's because someone passed a 7-bit or 8-bit character array to an API that expected an array of Unicode characters. The system then reads each pair of 8-bit characters as a single 16-bit Unicode character, and those code points often fall in the CJK ideograph range, which is why the result looks Chinese. At some point later the Unicode characters are converted back to 8-bit characters, probably just before they're saved to the text file.
Note: This is an oversimplification but it should be enough to help you figure it out.
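As a rough illustration of that pairing-up (a minimal sketch, not the actual code involved): two ordinary ASCII bytes, read as one little-endian UTF-16 code unit, usually land in the CJK ideograph block.

    // Two 8-bit characters, misread as one 16-bit code unit (little-endian),
    // land in the CJK ideograph block. Hedged sketch; placeholder data.
    #include <cstdio>
    #include <cstring>

    int main() {
        const char ansi[] = "abcd";        // bytes 0x61 0x62 0x63 0x64
        unsigned short wide[2];            // pretend these are UTF-16 code units
        std::memcpy(wide, ansi, sizeof wide);
        std::printf("U+%04X U+%04X\n", wide[0], wide[1]);   // U+6261 U+6463, both CJK ideographs
        return 0;
    }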

Related

Chr (169) not giving copyright symbol

I have a split .mdb (FE and BE) which, on my Windows 10 / Office 365 Access, is giving © for Chr(169).
On my client's machine (a recent update to Windows 10) with Office 2013, Access is giving � for Chr(169) (actually, in the Immediate window it looks like a 1 with an umlaut, then an upside-down question mark, then a superscript 1/2).
In the Immediate window on the client's machine, Asc("©") gives 176 instead of 169.
It seems the character maps are different between the two machines, although when I go to the Character Map app, for Arial and Times New Roman it shows © as being 169.
How do I get the client's machine to match mine?
The first thing I was recommended to do was an Office Repair. After this, in the Immediate window I get a different value each time I do
?asc("©")
48, then 16, then 72, then 112, then 144, and so on; seemingly random numbers.
This is the same whether I copy a copyright symbol from MS Word or Character Map App.
Actually, when I paste the copyright symbol into the Immediate window, it comes through as Â© (an A with a circumflex over it, followed by ©). Does that help/mean anything?
Also the £ sign has an empty square next to it.
Anyone got any ideas? Office issue? Windows issue?
Thanks!
Always use AscW when using non-ASCII characters.
A copyright sign is not part of standard ASCII, so it may or may not be represented in the code page Windows uses, depending on locale settings.
AscW uses Unicode instead of the Windows code page, which means it will reliably produce the same result.
Also, never store non-ASCII characters in VBA code. If you need a specific character, you can look up its code point and use ChrW to generate the string. For multiple characters, I recommend you look at this answer, or store them outside of VBA.
Then, as for the Immediate window: it does not support Unicode characters either, so you can't trust what gets displayed there. Nor does MsgBox. This makes debugging a pain. Look at this answer for a message box with Unicode support.
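The same distinction exists outside VBA. As a hedged Win32/C++ illustration (not the Access internals): the Unicode code point of © is always U+00A9, while the single byte it maps to depends on the active ANSI code page, which is roughly what Asc reports.

    #include <windows.h>
    #include <cstdio>

    int main() {
        const wchar_t copyright = L'\u00A9';   // the code point AscW("©") reports: 169
        char ansi = 0;
        // Convert through the current ANSI code page, roughly what Asc("©") does.
        WideCharToMultiByte(CP_ACP, 0, &copyright, 1, &ansi, 1, nullptr, nullptr);
        std::printf("Unicode: U+%04X  ANSI byte: %u\n",
                    (unsigned)copyright, (unsigned)(unsigned char)ansi);
        return 0;
    }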

UTF-8 on Windows with Ada

It is my understanding that, by default, Character is Latin_1, Wide_Character is UCS-2, and Wide_Wide_Character is UCS-4, but that GNAT can be given pragma Wide_Character_Encoding(UTF8); or -gnatW8, and that those characters and their strings will then be UTF-8 encoded instead.
At least on Linux and FreeBSD, the results fit with my expectations. But on Windows the results are odd.
For either the Wide or Wide_Wide variants, once a character moves beyond the ASCII set, I get a garbled mess. I believe some call this mojibake. So I figured it was a code page issue. After all, the default code page in Windows, and therefore what the Console Host loads with, is 437, which isn't the UTF-8 code page. After chcp 65001, instead of the mess of extra characters, there's an immediate exception: ADA.IO_EXCEPTIONS.DEVICE_ERROR : a-ztexio.adb:1295. Looking at where the exception occurred, it seems to be in the putc binding of fputc(). But this is Standard_Output; shouldn't an EOF never happen there?
Is there some kind of special consideration Windows needs? How can I get UTF-8 output?
edit:
I tried piping the output into a text file. The supposedly UTF-8 encoded program still generates mojibake in the file. I'm not sure why this would immediately throw an exception in the console, though.
So then I tried directly opening and writing to a file instead of the console/pipe. Oddly this works exactly as it should. The text is completely correct.
I've never seen this kind of behavior with any other language, so it should still be possible to get proper UTF-8 at the console, right?
The deficiency so many others, not just here, describe in the Windows Console Host has either been fixed or never existed in the first place. Based on this document, I feel it was probably always very misunderstood. Windows doesn't treat the console like files, and it's easy to fall into that trap.
Using this very straightforward code, along with what Windows needs and expects behind the scenes...
It correctly produces the following, as long as either pragma Wide_Character_Encoding(UTF8); or -gnatW8 is used.
Piping the output of this test program into a file works as it should. Similarly, piping the output of this test program into another program works as it should. And also similarly, taking the file from piped output, and piping it into another program works as it should.
Full UTF-8 behavior as one would expect under Linux, on Windows.
What needs to be done is twofold. In the package initializer, the Console Host needs to be told what it's working with, which can be done like this.
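The Ada imports from the original answer aren't reproduced here, but in Win32 terms, telling the Console Host what it's working with presumably comes down to calls like the following (a hedged C++ sketch; CP_UTF8 is code page 65001, the same one chcp 65001 selects):

    #include <windows.h>

    int main() {
        // Tell the Console Host that both input and output are UTF-8.
        // The answer describes doing the equivalent from the Ada package initializer.
        SetConsoleCP(CP_UTF8);
        SetConsoleOutputCP(CP_UTF8);
        // ... UTF-8 encoded text can now be written to standard output ...
        return 0;
    }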
Character output is then done through fputwc. According to the MS docs, fputc should never be used for Unicode on Windows, which is part of the problem GNAT has. String output and character/string input are all similar.
Based on others' comments and some further research to confirm, I'm pretty sure this is a deficiency of the Windows Console Host.
edit: don't listen to this

Reading UTF8 from System in C++ Windows

My Windows application calls a system command using _wpopen. This command produces a UTF-8 response that I attempt to read using fgetws into a buffer of wchar_t. The problem is that the result in my buffer is not correct. There might be a problem with character widths, as my buffer contains 12 characters where in UTF-8 it should contain only 4. I use Microsoft Visual Studio 2010.
I have independently verified that the system command produces proper output. Thus, it is somehow the reading operation that messes up the encoding. What to do? Thank you!
The fgetws function expects the input to be either MBCS or UTF-16 depending on whether you open the file in text or binary mode. It does not process UTF-8.
Instead, use fgets and then explicitly convert from UTF-8 to whatever encoding you're wanting to use.
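A hedged sketch of that suggestion (the command name is a placeholder): read the child's output as raw bytes with fgets, then convert the UTF-8 to UTF-16 explicitly with MultiByteToWideChar.

    #include <windows.h>
    #include <cstdio>
    #include <string>

    int main() {
        FILE* pipe = _wpopen(L"mytool.exe", L"rb");   // binary mode: no newline translation
        if (!pipe) return 1;

        char utf8[4096];
        while (std::fgets(utf8, sizeof utf8, pipe)) {
            // Ask how many UTF-16 code units the line needs, then convert.
            int needed = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, nullptr, 0);
            std::wstring wide(needed, L'\0');
            MultiByteToWideChar(CP_UTF8, 0, utf8, -1, &wide[0], needed);
            // 'wide' now holds the line as UTF-16, ready for wide-character APIs.
        }
        _pclose(pipe);
        return 0;
    }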

difference between text file and binary file

Why should we distinguish between text files and binary files when transmitting them? Why are there channels designed only for textual data? At the bottom level, they are all bits.
At the bottom level, they are all bits... true. However, some transmission channels have seven bits per byte, and other transmission channels have eight bits per byte. If you transmit ASCII text over a seven-bit channel, then all is fine. Binary data gets mangled.
Additionally, different systems use different conventions for line endings: LF and CRLF are common, but some systems use CR or NEL. A text transmission mode will convert line endings automatically, which will damage binary files.
However, this is all mostly of historical interest these days. Most transmission channels are eight bit (such as HTTP) and most users are fine with whatever line ending they get.
Some examples of 7-bit channels: SMTP (nominally, without extensions), SMS, Telnet, some serial connections. The internet wasn't always built on TCP/IP, and it shows.
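A tiny illustration of the seven-bit point (a hedged sketch; real channels differ in how the eighth bit is lost): masking each byte to seven bits leaves ASCII untouched but mangles anything with the high bit set.

    #include <cstdio>

    int main() {
        unsigned char ascii  = 'A';    // 0x41, high bit clear: survives a 7-bit channel
        unsigned char binary = 0xC3;   // high bit set, e.g. a byte from an image or UTF-8 text

        std::printf("ASCII  0x%02X -> 0x%02X\n", ascii,  ascii  & 0x7F);  // unchanged
        std::printf("binary 0x%02X -> 0x%02X\n", binary, binary & 0x7F);  // mangled to 0x43 ('C')
        return 0;
    }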
Additionally, the HTTP spec states that,
When in canonical form, media subtypes of the "text" type use CRLF as the text line break. HTTP relaxes this requirement and allows the transport of text media with plain CR or LF alone representing a line break when it is done consistently for an entire entity-body.
All files are saved in one of two file formats - binary or text. The two file types may look the same on the surface, but their internal structures are different.
While both binary and text files contain data stored as a series of bits (binary values of 1s and 0s), the bits in text files represent characters, while the bits in binary files represent custom data.
Distinguishing between the two is important, as different OSs treat text files differently. For example, on *nix you end your lines with just \n, on MS OSs you use \r\n, and on classic Mac OS it was \r. Software such as FTP clients may change the line endings of text files to match the destination OS by adding/removing characters. This is to make sure that the text file will display properly on the destination OS.
For example, if you create a text file on *nix with line breaks, copy it to a Windows box as a binary file, and open it in Notepad, you will not see any of the line endings, just one long clump of text.
Important to add to the answers already provided: text files and binary files both contain bytes, but text files differ from binary files in that the bytes are understood to represent characters. The mapping of bytes to characters is done consistently over the whole file using a particular code page or Unicode. With 7- or 8-bit code pages you can "spin the dial" when reading such a file and interpret it with an English alphabet, a German alphabet, a Russian alphabet, or another. Spinning the dial doesn't affect the bytes; it affects which characters are chosen to correspond to those bytes.
As others have stated, there is also the issue of the encoding of line break separators which is unique to text files and which may differ from platform to platform. The "line break" is not a letter in our alphabet or a symbol you can write, so other rules apply to it.
With binary files there is no implicit convention on character encoding or on the definition of a "line".
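To make the "spin the dial" idea concrete, here is a hedged Win32/C++ sketch: the very same byte, 0xA9, decodes to a different character depending on which code page you read it with.

    #include <windows.h>
    #include <cstdio>

    int main() {
        const char byte = '\xA9';          // one and the same stored byte
        wchar_t as1252 = 0, as437 = 0;

        MultiByteToWideChar(1252, 0, &byte, 1, &as1252, 1);  // Windows-1252: U+00A9 (©)
        MultiByteToWideChar(437,  0, &byte, 1, &as437,  1);  // OEM 437: U+2310 (reversed not sign)

        std::printf("0xA9 as CP1252 -> U+%04X, as CP437 -> U+%04X\n",
                    (unsigned)as1252, (unsigned)as437);
        return 0;
    }

The byte never changes; only the interpretation chosen for it does.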
All machine language files are actually binary files.
To open a binary file, the file mode has to be given as "rb" or "wb" in the fopen call. Otherwise files are opened in the default mode, which is text mode.
It may be noted that text files can also be stored and processed as binary files, but not vice versa.
Binary files differ from text files in two ways:
the storage of newline characters
the EOF character
For example:
"wt": the t stands for text file
"wb": the b stands for binary file
Binary files do not store any special character at the end; the end of the file is determined from the file size itself.
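A small sketch of the fopen mode distinction described above, on Windows (file names are placeholders): in text mode the C runtime translates '\n' to CR LF on write and back again on read, while in binary mode bytes pass through untouched.

    #include <cstdio>

    int main() {
        std::FILE* t = std::fopen("text.txt", "w");    // text mode (the default)
        std::fputs("line\n", t);                       // stored as "line\r\n" on Windows
        std::fclose(t);

        std::FILE* b = std::fopen("data.bin", "wb");   // binary mode
        std::fputs("line\n", b);                       // stored exactly as "line\n"
        std::fclose(b);
        return 0;
    }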

when using an FTPS connection to transfer a file, what is the difference between a 'binary mode transfer' and an 'ASCII mode transfer'?

I am using an FTPS connection to send a text file (this file will contain EDI (Electronic Data Interchange) information) to an INOVIS mailbox. I have configured the system to open an FTPS connection and, using the PUT command, write the file to a folder on the FTP server.
The problem is: what mode of file transfer should I use? How do I switch between modes?
Moreover, which mode is the best practice to use when transferring a file over an FTPS connection?
If someone could provide a small FTP script, it would be helpful.
Many of the other answers to this question are a collection of nearly correct to outright wrong information.
ASCII mode means that the file should be converted to canonical text form on the wire. Among other things this means:
NVT-ASCII character set. Even if the original file is in some other character set, such as ASCII, EBCDIC or UTF-8. Technically this disallows characters with the 8th bit set, but most implementations won't enforce this.
CRLF line endings.
EBCDIC mode means a similar set of rules, except that the data on the wire should be in EBCDIC.
LOCAL mode allows sending data with a size other than 8 bits per byte.
IMAGE (or BINARY) mode means that the data should be sent without any changes. It is up to the user to ensure that the target system can understand the data once it arrives.
Among other things, this means that the recommendation to use BINARY mode to send text data will fail if one of the systems involved doesn't use an ASCII-based character set.
ASCII mode changes newline characters between Unix and DOS formats: \n to \r\n and vice versa.
Actually, ASCII/BINARY has nothing to do with the 8th bit. It's a convention for translating line endings.
When you are on a Windows machine talking to a Unix FTP server (FTPS or FTP, it doesn't matter; the protocol is the same), the server will replace any <CR><LF> combination with <LF> before storing the file, and consequently do the translation in reverse when you get the file from the Unix server.
The idea behind ASCII mode is to convert the line endings to the respective endings of the target platform.
As today's world seems to be converging on the Unix convention (<LF>), and as nearly all of today's editors (aside from Notepad) can easily handle Unix line endings, the days of ASCII mode are indeed numbered, and I would by all means recommend always using BINARY transfer mode.
The prospect of having data altered in mid-transfer is somewhat frightening anyways.
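If the transfer is driven from code rather than an FTP script, the mode is typically just a flag on the upload call. Here is a hedged WinINet sketch (WinINet speaks plain FTP, not FTPS, and the host, credentials and paths are placeholders) showing a binary-mode PUT:

    #include <windows.h>
    #include <wininet.h>
    #pragma comment(lib, "wininet.lib")

    int main() {
        HINTERNET net = InternetOpenA("demo", INTERNET_OPEN_TYPE_DIRECT, nullptr, nullptr, 0);
        HINTERNET ftp = InternetConnectA(net, "ftp.example.com", INTERNET_DEFAULT_FTP_PORT,
                                         "user", "password", INTERNET_SERVICE_FTP, 0, 0);

        // FTP_TRANSFER_TYPE_BINARY sends the bytes exactly as stored;
        // FTP_TRANSFER_TYPE_ASCII would translate line endings in transit.
        FtpPutFileA(ftp, "C:\\out\\invoice.edi", "/inbox/invoice.edi",
                    FTP_TRANSFER_TYPE_BINARY, 0);

        InternetCloseHandle(ftp);
        InternetCloseHandle(net);
        return 0;
    }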
ASCII mode also makes file sharing of text files across different platforms more straightforward for end users. They won't have to worry about the default line ending (CR/LF versus just LF, for example), since ASCII mode will do that translation for them on the fly.
For most file types you will ALWAYS want to use BINARY mode though.
ASCII mode converts text files between Unix and Windows formats based on the server and client platform (CR/LF vs LF); binary mode doesn't. Of course, if you transfer nearly anything in ASCII mode that isn't text, it will probably be corrupted for that reason.
If you want an exact copy of the data, use binary mode; using ASCII mode will assume the data is 7-bit text (chars 0-127) and truncate any data outside of this range. This dates back to arcane 7-bit networking days, when ASCII mode could save you time.
In the globalized environment we live in, where it is quite common to find non-ASCII characters (foreign languages, currency symbols, etc.), you should always use BINARY mode.
For the FTP protocol, ASCII transfer mode will consider the 8th bit of each of your characters as insignificant and will use it for error checking. As for binary transfer mode, your data will be sent as is. Note that sending binary data in ASCII mode will (almost) always end up in data corruption. However, transferring ASCII data in binary mode will work as long as the sending and receiving systems use the 8th bit in the same way (in modern systems the 8th bit should stay at 0 to prevent collisions with extended ASCII charsets).

Resources