Shorten Chinese string to fit in character array C++ - c++11

I am trying to fit a pinyin string into a character array. For example, if I have a pinyin string like the one below:
string str = "转换汉字为拼音音"; // needs at least 25 bytes to store
char destination[22];
strncpy(destination, str.c_str(), 20);
destination[20] = '\0';
Since Chinese characters take 3 bytes each, I can do strncpy(destination, str.c_str(), (20/3)*3); but if str contains any character other than Chinese (one that takes 2 or 4 bytes in UTF-8 encoding) this logic will fail.
Later, if I try to convert destination to print the pinyin characters, only the first 6 Chinese characters are printed properly and 2 bytes are printed in hexadecimal.
Is there any way I can shorten the string before copying it to destination so that when destination is printed, proper Chinese characters come out (without any stray hex bytes)? Perhaps using the Poco::TextEncoding or Poco::UTF8Encoding class?
Thanks in Advance.

Nothing short of creating your own way to encode text would work. But even in that case you would have to create a 25-character array (don't forget the zero at the end!) to store the string so that it prints properly, unless you also create your own printing routines.
In other words, the amount of work required doesn't balance out the win of saving 3 bytes.
Note that the code is practically C. In C++ you don't normally write that style of code.
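If the goal is just to avoid printing broken characters, one option (not from the answer above, and not using Poco) is to truncate at a UTF-8 character boundary instead of at a fixed byte count. A minimal sketch, assuming the input is valid UTF-8:

#include <cstddef>
#include <cstring>
#include <string>

// Copy at most cap-1 bytes of src into dst, backing up so that no multi-byte
// character is cut in half, then terminate with '\0'. UTF-8 continuation
// bytes always look like 10xxxxxx, so any byte of that form at the cut point
// means we are inside a character and must step back.
void copy_utf8_truncated(char* dst, std::size_t cap, const std::string& src)
{
    std::size_t n = src.size();
    if (n >= cap)
    {
        n = cap - 1;  // leave room for the terminating '\0'
        while (n > 0 && (static_cast<unsigned char>(src[n]) & 0xC0) == 0x80)
            --n;      // step back over continuation bytes
    }
    std::memcpy(dst, src.data(), n);
    dst[n] = '\0';
}

// Usage: copy_utf8_truncated(destination, sizeof destination, str);
// With the 22-byte buffer above this keeps 7 complete Chinese characters (21 bytes).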

Related

reading a file stored in storage or memory

Everything is stored as 0s and 1s in digital form, whether it is a binary file or some other format of file.
So when we open an executable file with a hex editor, I would expect it to show random ASCII characters produced from those 7-bit groups of 0s and 1s, because at the storage and memory level the file is just 0s and 1s.
So why does it show strange characters that are not ASCII characters?
Doesn't a hex reader just parse the file 7 or 8 bits at a time? Or does it read some metadata in the file and then decode based on that?
Your hex editor is choosing to decode the bytes not as ASCII but as some other character encoding.
You are right: the ASCII character set has 128 code points and the ASCII character encoding encodes them in single bytes in the range 0 to 127. Since bytes in an arbitrary file can range from 0 to 255 and the ASCII character set isn't used much for text files, decoding as ASCII wouldn't reveal as much information about potential text as a more likely character encoding would, and it would reveal information about only half of the byte values in a binary file.
A hex editor's job is to display and allow you to edit bytes. Additional presentations and editing capability are extra features. Many do present text, some allow search and replace of text. Some even work with other data formats such as decimal integers, multibyte integers, floating point, etc.
There is no text but encoded text. So, to support text, a hex editor—and any other program (including compilers)—must choose or allow you to choose a character encoding. For byte values that can't be decoded using that encoding, a dot or question mark is often substituted in the text display of a hex editor.
If you can't find out which character encoding your hex editor uses, you could test it by creating a file with byte values 0 to 255, seeing what it displays, and matching that against the many, many possibilities. It might be the one your operating system uses as its "default": in a Windows cmd prompt, run chcp; in a Linux terminal, run locale.
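A rough sketch of the test file suggested above (the file name is my own choice):

#include <fstream>

// Write one byte of every value 0..255; open the result in the hex editor
// and compare its text column against candidate encodings.
int main()
{
    std::ofstream out("all_bytes.bin", std::ios::binary);
    for (int b = 0; b < 256; ++b)
        out.put(static_cast<char>(b));
}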

Convert a string (representing UTF-8 hex) to string

I have a string in UTF-8 hex like this:
s = "0059006F007500720020006300720065006400690074002000680061007300200067006F006E0065002000620065006C006F00770020003500200064006F006C006C006100720073002E00200049006600200079006F00750020006800610076006500200061006E0020004100640064002D004F006E0020006F007200200042006F006E0075007300200079006F007500720020007200650073006F00750072006300650073002000770069006C006C00200077006F0072006B00200075006E00740069006C0020006500780068006100750073007400650064002E00200054006F00200074006F00700020007500700020006E006F007700200076006900730069007400200076006F006400610066006F006E0065002E0063006F002E006E007A002F0074006F007000750070"
I want to convert this into actual UTF-8 string. It should read:
Your credit has gone below 5 dollars. If you have an Add-On or Bonus your resources will work until exhausted. To top up now visit vodafone.co.nz/topup
This works:
s.scan(/.{4}/).map { |a| [a.hex].pack('U') }.join
but I'm wondering if there's a better way to do this: whether I should be using Encoding#convert.
The extra 00s suggest that the string is actually the hex representation of a UTF-16 string, rather than UTF-8. Assuming that is the case, the steps you need to carry out to get a UTF-8 string are: first, convert the string into the actual bytes the hex digits represent (Array#pack can be used for this); second, mark it as being in the appropriate encoding with force_encoding (which looks like UTF-16BE); and finally, use encode to convert it to UTF-8:
[s].pack('H*').force_encoding('utf-16be').encode('utf-8')
I think there are extra null characters all along the string (it's valid, but wasteful), but you can try:
[s].pack('H*').force_encoding('utf-8')
although it does seem that "Your credit has gone below 5 dollars"...
The string prints with puts, but I can't read all the Unicode characters on the terminal when the string is dumped.
If you are intending to use this on other oddly encoded strings, you could unpad the leading bytes:
[s.gsub(/..(..)/,'\1')].pack('H*')
Or use them:
s.gsub(/..../){|p|p.hex.chr}
If you want to use Encoding::Converter
ec = Encoding::Converter.new('UTF-16BE','UTF-8') # save converter for reuse
ec.convert( [s].pack('H*') ) # or: ec.convert [s].pack'H*'

Windows CE / UTF-16 / Chinese

I've read that Windows CE uses the "UTF-16 version of UNICODE" (I'm a newbie with encodings).
What happens when a string contains a character that requires more than 2 bytes, like Chinese characters? Does it take 3?
If I have a string containing Chinese characters, accessing the N-th pair of bytes will not necessarily access the N-th visible symbol?
Also, what about performance? If I understand well, encodings that have a variable number of bytes per visible symbol require the string to be scanned from the beginning to access the N-th visible symbol, right? If yes, is it also true for UTF-16?
Thank you.
What happens when a string contains a character that requires more than 2 bytes, like Chinese characters? Does it take 3?
No, four.
Wikipedia: UTF-16:
In UTF-16, code points greater than or equal to 2^16 are encoded using two 16-bit code units.
If I understand well, encodings that have a variable number of bytes per visible symbol require the string to be scanned from the beginning to access the N-th visible symbol right?
Yes. See for example Why use multibyte string functions in PHP?.
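As a rough illustration of why the N-th 16-bit code unit is not always the N-th visible character, here is a small sketch (my own example, not from the answer) that counts code points in a UTF-16 string by skipping the second half of each surrogate pair:

#include <cstddef>
#include <iostream>
#include <string>

// Count characters (code points) in a UTF-16 string. Code points >= 0x10000
// are stored as a surrogate pair: a high surrogate (0xD800-0xDBFF) followed
// by a low surrogate, so they occupy two 16-bit code units.
std::size_t utf16_length(const std::u16string& s)
{
    std::size_t count = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
    {
        if (s[i] >= 0xD800 && s[i] <= 0xDBFF)
            ++i;                          // skip the low surrogate that follows
        ++count;
    }
    return count;
}

int main()
{
    std::u16string s = u"汉\U00020000";   // one BMP character + one non-BMP character
    std::cout << "code units: " << s.size()                  // prints 3
              << ", characters: " << utf16_length(s) << '\n'; // prints 2
}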

UTF-8 string delimiter

I am parsing a binary protocol which has UTF-8 strings interspersed among raw bytes. This particular protocol prefaces each UTF-8 string with a short (two bytes) indicating the length of the following UTF-8 string. This gives a maximum string length of 2^16 (over 65,000 bytes), which is more than adequate for the particular application.
My question is: is this a standard way of delimiting UTF-8 strings?
I wouldn't call that delimiting, more like "length prefixing". Some people call them Pascal strings since in the early days the language Pascal was one of the popular ones that stored strings that way in memory.
I don't think there's a formal standard specifically for just that, as it's a rather obvious way of storing UTF-8 strings (or any strings of bytes for that matter). It's defined over and over as a part of many standards that deal with messages that contain strings, though.
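A minimal sketch of such a length-prefixed layout; the big-endian byte order and the function names are my own assumptions, not something taken from the protocol in the question:

#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Append a UTF-8 string preceded by a two-byte, big-endian length prefix.
void append_prefixed(std::vector<uint8_t>& out, const std::string& utf8)
{
    if (utf8.size() > 0xFFFF)
        throw std::length_error("string too long for a 16-bit length prefix");
    out.push_back(static_cast<uint8_t>(utf8.size() >> 8));
    out.push_back(static_cast<uint8_t>(utf8.size() & 0xFF));
    out.insert(out.end(), utf8.begin(), utf8.end());
}

// Read the next length-prefixed string starting at pos, advancing pos past it.
std::string read_prefixed(const std::vector<uint8_t>& in, std::size_t& pos)
{
    std::size_t len = (static_cast<std::size_t>(in.at(pos)) << 8) | in.at(pos + 1);
    pos += 2;
    if (pos + len > in.size())
        throw std::out_of_range("truncated string in message");
    std::string s(in.begin() + pos, in.begin() + pos + len);
    pos += len;
    return s;
}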
UTF-8 is not normally delimited; you should be able to spot the multibyte characters in there by using the rules mentioned here: http://en.wikipedia.org/wiki/UTF-8#Description
I would use a delimiter which starts with 0x11......
But if you send raw bytes you will have to exclude this delimiter from the data/messages being processed; this means that if there is user input similar to that delimiter, you will have to escape it.
If the user inputs any UTF-8 encoded character you may simply send it as is.

Importing extended ASCII into Oracle

I have a procedure that imports a binary file containing some strings. The strings can contain extended ASCII, e.g. CHR(224), 'à'. The procedure is taking a RAW and converting the BCD bytes into characters in a string one by one.
The problem is that the extended ASCII characters are getting lost. I suspect this is due to their values meaning something else in UTF8.
I think what I need is a function that takes an ASCII character index and returns the appropriate UTF8 character.
Update: If I happen to know the equivalent Oracle character set for the incoming text, can I then convert the raw bytes to UTF-8? The source text will always be single-byte.
There's no such thing as "extended ASCII." Or, to be more precise, so many encodings are supersets of ASCII, sharing the same first 127 code points, that the term is too vague to be meaningful. You need to find out if the strings in this file are encoded using UTF-8, ISO-8859-whatever, MacRoman, etc.
The answer to the second part of your question is the same. UTF-8 is, by design, a superset of ASCII. Any ASCII character (i.e. 0 through 127) is also a UTF-8 character. To translate some non-ASCII character (i.e. >= 128) into UTF-8, you first need to find out what encoding it's in.
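For illustration, a minimal sketch of what that translation looks like once the source encoding is known. ISO-8859-1 (Latin-1) is assumed here purely as an example because it maps CHR(224) to 'à'; the actual file could use any single-byte code page:

#include <string>

// Convert a single ISO-8859-1 byte to UTF-8. Latin-1 code points map 1:1 to
// Unicode, so bytes 0-127 pass through unchanged and bytes 128-255 become a
// two-byte UTF-8 sequence.
std::string latin1_byte_to_utf8(unsigned char b)
{
    if (b < 0x80)
        return std::string(1, static_cast<char>(b));
    std::string out;
    out += static_cast<char>(0xC0 | (b >> 6));    // leading byte 110xxxxx
    out += static_cast<char>(0x80 | (b & 0x3F));  // continuation byte 10xxxxxx
    return out;
}
// Example: latin1_byte_to_utf8(0xE0) yields "\xC3\xA0", i.e. 'à' in UTF-8.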
