What kind of strings does CFStringCreateWithFormat expect as arguments?

The example below should work with Unicode strings, but it doesn't.
CFStringRef aString = CFSTR("one"); // in real life this is a Unicode string
CFStringRef formatString = CFSTR("This is %s example"); // also tried %S, without success
CFStringRef resultString = CFStringCreateWithFormat(NULL, NULL, formatString, aString);
// resultString should now contain a valid sentence, but instead it looks as though aString held garbage.

Use %@ if you want to include a CFStringRef via CFStringCreateWithFormat.
See the Format Specifiers section of Strings Programming Guide for Core Foundation.
%@ is for Objective-C objects or CFTypeRef objects (CFStringRef is compatible with CFTypeRef).
%s is for a null-terminated array of 8-bit unsigned characters (i.e. normal C strings).
%S is for a null-terminated array of 16-bit Unicode characters.
A CFStringRef object is not the same as “a null-terminated array of 16-bit Unicode characters”.
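For completeness, here is the question's snippet with %@ substituted; this is a minimal sketch of my own, not the original poster's code:
#include <CoreFoundation/CoreFoundation.h>

CFStringRef aString = CFSTR("one"); // works the same for any CFStringRef, Unicode or not
// %@ consumes a CFTypeRef argument (here a CFStringRef) and inserts its textual form.
CFStringRef formatString = CFSTR("This is %@ example");
CFStringRef resultString = CFStringCreateWithFormat(NULL, NULL, formatString, aString);
CFShow(resultString);   // prints: This is one example
CFRelease(resultString);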

In response to the comment on the other answer, I would recommend that the poster generate a UTF-8 string in a portable way into a char* buffer and, only at the last minute, convert it to a CFString using CFStringCreateWithCString with kCFStringEncodingUTF8 as the encoding.
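A rough sketch of that approach; the helper name CopyGreeting and the format string are mine, purely for illustration (and the source file is assumed to be saved as UTF-8):
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

// Build the text portably as UTF-8 in a plain char buffer...
CFStringRef CopyGreeting(const char *utf8Name)
{
    char buffer[256];
    snprintf(buffer, sizeof(buffer), "Xin chào, %s!", utf8Name);
    // ...and convert to a CFString only at the last step, naming the encoding
    // explicitly instead of relying on the system encoding.
    return CFStringCreateWithCString(kCFAllocatorDefault, buffer, kCFStringEncodingUTF8);
}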
Please, please do not use %s in CFStringCreateWithFormat, and please do not rely on the "system encoding", which is MacRoman in Western European environments but something else entirely for other languages. The concept of a system encoding is inherently brain-dead, especially in East Asian environments (which I come from), where even characters inside the ASCII code range (below 127!) are changed. Hell breaks loose if you rely on the "system encoding". Fortunately, since 10.4, all of the methods that use the "system encoding" have been deprecated, except %s...
I'm sorry to write this much about such a small topic, but it was a real pity a few years ago when many nice apps didn't work on Japanese/Korean Macs because of just this "system encoding". Please refer to this detailed explanation I wrote a few years ago if you're interested.


Simplest way to extract first Unicode codepoint of an NSString (outside the BMP)?

For historical reasons, Cocoa's Unicode implementation is 16-bit: it handles Unicode characters above 0xFFFF via "surrogate pairs". This means that the following code is not going to work:
NSString *myString = @"𠬠";
uint32_t codepoint = [myString characterAtIndex:0];
printf("%04x\n", codepoint); // incorrectly prints "d842"
Now, this code works 100% of the time, but it's ridiculously verbose:
NSString *myString = @"𠬠";
uint32_t codepoint;
[myString getBytes:&codepoint maxLength:4 usedLength:NULL
          encoding:NSUTF32StringEncoding options:0
             range:NSMakeRange(0, 2) remainingRange:NULL];
printf("%04x\n", codepoint); // prints "20d20"
And this code using mbtowc works, but it's still pretty verbose, affects global state, isn't thread-safe, and probably fills up the autorelease pool on top of all that:
setlocale(LC_CTYPE, "UTF-8");
wchar_t codepoint;
mbtowc(&codepoint, [@"𠬠" UTF8String], 16);
printf("%04x\n", codepoint); // prints "20d20"
Is there any simple Cocoa/Foundation idiom for extracting the first (or Nth) Unicode codepoint from an NSString? Preferably a one-liner that just returns the codepoint?
The answer given in this otherwise excellent summary of Cocoa Unicode support (near the end of the article) is simply "Don't try it. If your input contains surrogate pairs, filter them out or something, because there's no sane way to handle them properly."
A code point outside the BMP is encoded in UTF-16 as a surrogate pair, but that is not the only complication: not every user-perceived character is a single code point. Many characters are represented by a sequence of Unicode code points, for example a base letter followed by combining marks.
This means that unless you are dealing with plain ASCII, you have to think of characters as substrings, not as code points at fixed indexes.
To get the substring for the character at index 0:
NSRange r = [myString rangeOfComposedCharacterSequenceAtIndex:0];
NSString *firstCharacter = [myString substringWithRange:r];
This may or may not be what you want, depending on what you are actually trying to do; for example, although this gives you character boundaries, those do not necessarily correspond to cursor insertion points, which are language-specific.
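If what you actually need is the first scalar value rather than the composed character, one fairly compact option is to combine the surrogate pair yourself with the CFString inline functions; the helper below is my own sketch, not a Foundation API:
#import <Foundation/Foundation.h>

// Returns the Unicode code point that starts at the given UTF-16 index,
// combining a surrogate pair manually when one is present.
static uint32_t CodePointAtIndex(NSString *string, NSUInteger index)
{
    unichar high = [string characterAtIndex:index];
    if (CFStringIsSurrogateHighCharacter(high) && index + 1 < [string length]) {
        unichar low = [string characterAtIndex:index + 1];
        if (CFStringIsSurrogateLowCharacter(low))
            return CFStringGetLongCharacterForSurrogatePair(high, low);
    }
    return high;
}

// CodePointAtIndex(@"𠬠", 0) yields 0x20B20.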

Invalid Unicode characters in Xcode

I am trying to put Unicode characters (using a custom font) into a string which I then display using Quartz, but Xcode doesn't like the escape codes for some reason, and I'm really stuck.
CGContextShowTextAtPoint (context, 15, 15, "\u0066", 1);
It doesn't like this (Latin lowercase f) and says it is an "invalid universal character".
CGContextShowTextAtPoint (context, 15, 15, "\ue118", 1);
It doesn't complain about this but displays nothing. When I open the font in FontForge, it shows the glyph as present and valid, and Font Book validates the font just fine. If I use the font in TextEdit and insert the character from the Character Viewer's Unicode table, it appears just fine; only Quartz won't display it.
Any ideas why this isn't working?
The "invalid universal character" error is due to the definition in C99: Essentially \uNNNN escapes are supposed to allow one programmer to call a variable føø and another programmer (who might not be able to type ø) to refer to it as f\u00F8\u00F8. To make parsing easier for everyone, you can't use a \u escape for a control character or a character that is in the "basic character set" (perhaps a lesson learned from Java's unicode escapes which can do crazy things like ending comments).
The second problem is probably because "\ue118" gets compiled to the UTF-8 sequence "\xee\x84\x98", i.e. three chars. CGContextShowTextAtPoint() assumes that one char (byte) is one glyph, and CGContextSelectFont() only supports the encodings kCGEncodingMacRoman (which would decode those bytes as "ÓÑò") and kCGEncodingFontSpecific (what happens then is anyone's guess). The docs say not to use CGContextSetFont() (which does not specify the char-to-glyph mapping) in conjunction with CGContextShowText() or CGContextShowTextAtPoint().
If you know the glyph number, you can use CGContextShowGlyphs(), CGContextShowGlyphsAtPoint(), or CGContextShowGlyphsAtPositions().
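A rough sketch of that route; the font name and glyph index here are placeholders, not values taken from the question:
#include <ApplicationServices/ApplicationServices.h>

// Draw one glyph directly by its glyph index, bypassing the char-to-glyph
// mapping entirely; the index would come from the font (e.g. as shown in FontForge).
void DrawOneGlyph(CGContextRef context)
{
    CGContextSelectFont(context, "MyCustomFont", 24.0, kCGEncodingFontSpecific);
    CGGlyph glyph = 42;   // hypothetical glyph index for the private-use character
    CGContextShowGlyphsAtPoint(context, 15, 15, &glyph, 1);
}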
I just changed the font to use standard alphanumeric characters in the end. Much simpler.

What is the difference between the `A` and `W` functions in the Win32 API?

What is the difference between calling a Win32 API function that has an A character appended to the end as opposed to one with a W character?
I know the A means ASCII and the W means wide character, or Unicode, but what is the difference in the input or the output?
For example, if I call GetDefaultCommConfigA, will it fill my COMMCONFIG structure with ASCII strings instead of WCHAR strings (and vice versa for GetDefaultCommConfigW)?
In other words, is the encoding of the strings, ASCII or Unicode, determined simply by which version of the function I call, A or W? Is that correct?
I have found this question, but I don't think it answers my question.
The A functions use ANSI (not ASCII) strings as input and output, and the W functions use Unicode strings instead (UCS-2 on NT4 and earlier, UTF-16 on Windows 2000 and later). Refer to MSDN for more details.
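As a sketch (not taken from the question's code), the two forms of the call differ only in the string type they accept; the structure layout itself is the same:
#include <windows.h>

void QueryBothForms(void)
{
    COMMCONFIG cc;
    DWORD size = sizeof(cc);
    cc.dwSize = sizeof(cc);

    // A form: the port name is a char*/LPCSTR in the current ANSI code page.
    GetDefaultCommConfigA("COM1", &cc, &size);

    size = sizeof(cc);
    // W form: the port name is a wchar_t*/LPCWSTR in UTF-16.
    GetDefaultCommConfigW(L"COM1", &cc, &size);
}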

Determining if a Unicode character is visible?

I am writing a text editor which has an option to display a bullet in place of any invisible Unicode character. Unfortunately there appears to be no easy way to determine whether a Unicode character is invisible.
I need to find a text file containing every Unicode character so that I can look through it for invisible characters. Would anyone know where I can find such a file?
EDIT: I am writing this app in Cocoa for Mac OS X.
Oh, I see... actual invisible characters ;) This FAQ will probably be useful:
http://www.unicode.org/faq/unsup_char.html
It lists the current invisible codepoints and has other information that you might find helpful.
EDIT: Added some Cocoa-specific information
Since you're using Cocoa, you can get the unicode character set for control characters and compare against that:
NSCharacterSet* controlChars = [NSCharacterSet controlCharacterSet];
You might also want to take a look at the FAQ link I posted above and add any characters that you think you may need based on the information there to the character set returned by controlCharacterSet.
EDIT: Added an example of creating a Unicode string from a Unicode character
unichar theChar = 0x000D;
NSString* theString = [NSString stringWithCharacters:&theChar length:1];
Let me know if this code helps at all:
- (NSString*)stringByReplacingControlCharacters:(NSString*)originalString
{
    NSUInteger length = [originalString length];
    unichar *strAsUnichar = (unichar*)malloc(length * sizeof(unichar));
    NSCharacterSet* controlChars = [NSCharacterSet controlCharacterSet];
    unichar bullet = 0x2022;

    [originalString getCharacters:strAsUnichar];

    for( NSUInteger i = 0; i < length; i++ ) {
        if( [controlChars characterIsMember:strAsUnichar[i]] )
            strAsUnichar[i] = bullet;
    }

    NSString* newString = [NSString stringWithCharacters:strAsUnichar length:length];
    free(strAsUnichar);
    return newString;
}
Important caveats:
This probably isn't the most efficient way of doing it, so you will have to decide how to optimize once you get it working. It only works with characters on the BMP; support for composed characters would have to be added if you have such a requirement. It does no error checking at all.
A good place to start is the Unicode Consortium itself which provides a large body of data, some of which would be what you're looking for.
I'm also in the process of producing a DLL to which you give a string and which gives back the UCNs of each character, but don't hold your breath.
The current official Unicode version is 5.1.0, and text files describing all of the code points in that can be found at http://www.unicode.org/standard/versions/components-latest.html
For Java, java.lang.Character.getType. For C, u_charType() or u_isgraph().
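For instance, a small sketch with ICU's C API; the helper name and the exact visibility policy are mine:
#include <unicode/uchar.h>

// Treat a code point as "invisible" if ICU does not classify it as graphic
// and it is not an ordinary space.
static int IsInvisibleCodePoint(UChar32 c)
{
    return !u_isgraph(c) && c != 0x0020;
}

// IsInvisibleCodePoint(0x200B) -> 1 (ZERO WIDTH SPACE); IsInvisibleCodePoint('A') -> 0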
You might find this code to be of interest: http://gavingrover.blogspot.com/2008/11/unicode-for-grerlvy.html
It's an impossible task in general; Unicode even has room for Klingon, so a complete list is never going to work. However, most text editors only handle the standard ANSI invisible characters. And if your Unicode library is good, it will support finding equivalent characters and/or character categories; you can use those two features to do it as well as any editor out there.
Edit: Yes, I was being silly about Klingon support, but that doesn't make the point untrue... of course Klingon is not supported by the Consortium; however, there is an unofficial mapping of the Klingon alphabet into Unicode's Private Use Area (U+F8D0 - U+F8FF). Link here for those interested :)
Note: Wonder what editor Klingon programmers use...
