How do I use CGEventKeyboardSetUnicodeString with multiple characters? - macos

I'm trying to use event taps to create an OS X program that will listen for Yiddish typed in transliteration and post the result in Hebrew characters. I made a very short program to test one of the things I'd have to do: http://pastie.org/791398
As is, the program successfully replaces every typed 'q' with 'w':
if(inputString[0] == 'q') { inputString[0] = 'w'; }
But how does one post a string of more than one character? For instance, if someone types 'sh' you'd presumably have to post a backspace (to delete the character that was posted for 's' alone) and then post the character that corresponds to 'sh'. However, this code results in only a backspace being posted:
else if(inputString[0] == 'm') { inputString[0] = '\b'; inputString[1] = 'n'; }
I apologize if these are basic questions; I have read all the documentation I could find, but I might not have understood it all. It's also possible that I'm going about this entirely the wrong way.

Ideally you should be using an input method instead of a program with event taps, most likely using Input Method Kit if you don't need to support pre-10.5. Using event taps for this purpose is inherently a bad idea because the user can change where he/she is typing with the mouse as well as the keyboard. So if the user typed a "s" in one text field followed by a "h" in another, you wouldn't be able to tell the difference.
That said, here's a direct answer to your question.
The string is length-counted, so you can't just provide the incoming length (1); the second character will be ignored. However, most applications also don't like to get more than a single character per event, so they'll just discard the remaining characters. (Terminal is a notable exception.)
So what you can do is simply post a second event with the second character in it.
else if(inputString[0] == 'm') {
    // Post an extra copy of the event carrying 'n'...
    inputString[0] = 'n';
    CGEventKeyboardSetUnicodeString(event, 1, inputString);
    CGEventPost(kCGSessionEventTap, event);
    // ...and let the event returned from the tap callback carry the backspace.
    inputString[0] = '\b';
}
In the general case (simulating > 2 keypresses) you'll need to create an event for each character you want to insert. This mailing list post includes a simple example.
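As a rough sketch of that per-character approach (PostUnicodeString is a hypothetical helper, not an API): each UTF-16 code unit gets its own key-down/key-up pair, with the Unicode string attached to both events.
#include <ApplicationServices/ApplicationServices.h>

// Hypothetical helper: "types" text by posting one event pair per character.
static void PostUnicodeString(CFStringRef text) {
    CFIndex length = CFStringGetLength(text);
    for (CFIndex i = 0; i < length; i++) {
        UniChar c = CFStringGetCharacterAtIndex(text, i);
        // The virtual keycode (0 here) is a placeholder; the attached
        // Unicode string determines the character that gets typed.
        CGEventRef down = CGEventCreateKeyboardEvent(NULL, 0, true);
        CGEventKeyboardSetUnicodeString(down, 1, &c);
        CGEventPost(kCGSessionEventTap, down);
        CFRelease(down);
        // Follow with the matching key-up so applications see a full press.
        CGEventRef up = CGEventCreateKeyboardEvent(NULL, 0, false);
        CGEventKeyboardSetUnicodeString(up, 1, &c);
        CGEventPost(kCGSessionEventTap, up);
        CFRelease(up);
    }
}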

This is how I send a string to the first responder (the foreground application):
// 1 - Get the string length in bytes.
NSUInteger l = [string lengthOfBytesUsingEncoding:NSUTF16StringEncoding];
// 2 - Copy the string's UTF-16 code units into a buffer.
// Note: the range argument is in characters, not bytes, so it must be
// string.length rather than l.
UniChar *uc = malloc(l);
[string getBytes:uc maxLength:l usedLength:NULL encoding:NSUTF16StringEncoding options:0 range:NSMakeRange(0, string.length) remainingRange:NULL];
// 3 - Create an empty keyboard event and attach the Unicode string.
CGEventRef tap = CGEventCreateKeyboardEvent(NULL, 0, YES);
CGEventKeyboardSetUnicodeString(tap, string.length, uc);
// 4 - Send the event and tear down.
CGEventPost(kCGSessionEventTap, tap);
CFRelease(tap);
free(uc);

Related

What is %P in Tcl?

I saw this code in Tcl:
entry .amount -validate key -validatecommand {
    expr {[string is int %P] || [string length %P]==0}
}
I know that it's an entry validation, but what does "%P" mean in that code? I looked in the Tcl docs but couldn't find anything.
I think this is another way to do it but it has the same symbols:
proc check_the_input_only_allows_digits_only {P} {
    expr {[string is int P] || [string length P] == 0}
}
entry .amount \
    -validate key \
    -validatecommand {check_the_input_only_allows_digits_only %P}
The tcl-tk page for entry says
%P
The value of the entry if the edit is allowed. If you are configuring the entry widget to have a new textvariable, this will be the value of that textvariable.
https://www.tcl.tk/man/tcl8.4/TkCmd/entry.html#M25
I think this is another way to do it but it has the same symbols:
You're close. You just have to use $ in a few places, because you're running an ordinary procedure, and that's the normal way to refer to parameters inside procedures.
proc check_the_input_only_allows_digits_only {P} {
    expr {[string is int $P] || [string length $P] == 0}
}
entry .amount \
    -validate key \
    -validatecommand {check_the_input_only_allows_digits_only %P}
It's recommended that you write things like that using a procedure for anything other than the most trivial of validations (or other callbacks); putting the complexity directly in the callback gets confusing quickly.
I recommend keeping validation loose during the input phase and only validating strictly on form submission (or pressing the OK/Apply button, or whatever makes sense in the GUI), precisely because in many forms it's convenient for invalid states to exist for a while as the input is being typed. Per-key validation should therefore only indicate whether form submission is expected to succeed, not outright prevent transient invalid states.
The string is int command returns true for zero-length input precisely because it was originally put in to work with that validation mechanism. It grinds my gears that actual validation of an integer needs string is int -strict. Can't change it now though; it's just a wrong default…
entry .amount -validate key -validatecommand {string is int %P}

What is the benefit of NSScanner's charactersToBeSkipped?

I have the string @"    ILL WILL KILLS ", and I'm using NSScanner's scanUpToString:intoString: to find every occurrence of "ILL". If it's accurate, it will NSLog 4, 9, and 14.
My string begins with 4 spaces, which I realize are members of the NSScanner's default charactersToBeSkipped NSCharacterSet. If I set charactersToBeSkipped to nil, as in the example below, then this code accurately finds the 3 occurrences of "ILL".
NSScanner* scanner = [NSScanner scannerWithString:@"    ILL WILL KILLS "] ;
scanner.charactersToBeSkipped = nil ;
NSString* scannedCharacters ;
while ( TRUE ) {
    BOOL didScanUnignoredCharacters = [scanner scanUpToString:@"ILL" intoString:&scannedCharacters] ;
    if ( scanner.isAtEnd ) {
        break ;
    }
    NSLog(@"Found match at index: %tu", scanner.scanLocation) ;
    // Since stopString "ILL" is 3 characters long, advance scanLocation by 3 to find the next "ILL".
    scanner.scanLocation += 3 ;
}
However, if I don't nullify the default charactersToBeSkipped, here's what happens:
1. scanner is initialized with scanLocation == 0.
2. scanUpToString executes for the 1st time; it "looks past" the 4 spaces and "sees" ILL at index 4, so it immediately stops. scanLocation is still 0.
3. I believe that I found a match, and I increment scanLocation by 3.
4. scanUpToString executes for the 2nd time; it "looks past" 1 space and "sees" ILL at index 4, so it immediately stops. scanLocation is still 3.
To me, it's a design flaw that scanner stopped at scanLocation == 0 the first time, since I expected it to stop at scanLocation == 4. If you believe that the above code can be rewritten to accurately NSLog 4, 9, and 14 without setting charactersToBeSkipped to nil, then please, show me how. For now, my opinion is that charactersToBeSkipped exists solely to make NSScanners more difficult to use.
For now, my opinion is that charactersToBeSkipped exists solely to make NSScanners more difficult to use.
Then you aren't very imaginative. The "benefit" of charactersToBeSkipped is to… wait for it… skip characters. For example, if you have a string like @" 8 9 10 ", you can scan those three integers using -scanInt: three times. You don't have to care about the precise amount of whitespace that separates them.
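For instance, this minimal sketch (using that example string) logs 8, 9, and 10 without any manual whitespace handling:
NSScanner *scanner = [NSScanner scannerWithString:@" 8 9 10 "];
int value;
// The default charactersToBeSkipped (whitespace and newlines) lets
// -scanInt: step over the spaces between the numbers.
while ([scanner scanInt:&value]) {
    NSLog(@"Scanned: %d", value);  // logs 8, then 9, then 10
}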
Given the task you describe, where you're just looking for instances of a string within a string, NSScanner is probably not the right tool. You probably want to use -[NSString rangeOfString:options:range:].
The docs for -scanUpToString:intoString: are fairly clear. If stopString is the first string in the receiver (taking into account that charactersToBeSkipped will be skipped), then the method returns NO, meaning it didn't scan anything. Consequently, the scan location won't be changed.
The return value indicates success or failure. If the stop string is next (ignoring characters to be skipped), then there's nothing to scan "up to" the stop string; the scanner is already at the stop string, so the method fails.
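Here's a minimal sketch of that rangeOfString:options:range: approach; it logs 4, 9, and 14 for the sample string:
NSString *string = @"    ILL WILL KILLS ";
NSRange searchRange = NSMakeRange(0, string.length);
NSRange found;
while ((found = [string rangeOfString:@"ILL" options:0 range:searchRange]).location != NSNotFound) {
    NSLog(@"Found match at index: %tu", found.location);
    // Resume the search just past the match we found.
    NSUInteger next = NSMaxRange(found);
    searchRange = NSMakeRange(next, string.length - next);
}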

Sending KeyPress events in X11

I have a program where, for various reasons, I need to send keypress events to various windows. What I am using at the moment:
XEvent event;
/* set some other stuff */
event.type = KeyPress;
event.xkey.keycode = XKeysymToKeycode(display, XStringToKeysym(curr_key));
This works for lowercase letters and numbers, but I need to modify it so that it can also send the Enter key and uppercase letters.
From the XStringToKeysym man page:
void XConvertCase(KeySym keysym, KeySym *lower_return, KeySym *upper_return);
The XConvertCase function returns the uppercase and lowercase forms of the specified Keysym, if the KeySym is subject to case conversion; otherwise, the specified KeySym is returned to both lower_return and upper_return. Support for conversion of other than Latin and Cyrillic KeySyms is implementation-dependent.
All the keysyms are in /usr/include/X11/keysymdef.h; e.g., the Enter key is XK_Return. The letters are there too, e.g. XK_a and XK_A.
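For example, here's a hedged sketch of extending the question's snippet: use XConvertCase to decide whether the keysym is a shifted (uppercase) form and, if so, add ShiftMask to the event state (fill_key_event is a hypothetical helper):
#include <X11/Xlib.h>
#include <X11/keysym.h>

/* Hypothetical helper: fills in the key fields of an event for `keysym`,
   pressing Shift when the keysym is an uppercase form. */
static void fill_key_event(Display *display, XKeyEvent *event, KeySym keysym) {
    KeySym lower, upper;
    XConvertCase(keysym, &lower, &upper);
    event->type = KeyPress;
    event->keycode = XKeysymToKeycode(display, keysym);
    event->state = 0;
    if (keysym == upper && lower != upper)
        event->state |= ShiftMask;   /* e.g. XK_A needs Shift */
}
Passing XK_Return works unchanged, since Return is not subject to case conversion.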

Easy way to get NUMBERFMT populated with defaults?

I'm using the Windows API GetNumberFormatEx to format some numbers for display with the appropriate localization choices for the current user (e.g., to make sure they have the right separators in the right places). This is trivial when you want exactly the user default.
But in some cases I sometimes have to override the number of digits after the radix separator. That requires providing a NUMBERFMT structure. What I'd like to do is to call an API that returns the NUMBERFMT populated with the appropriate defaults for the user, and then override just the fields I need to change. But there doesn't seem to be an API to get the defaults.
Currently, I'm calling GetLocaleInfoEx over and over and then translating that data into the form NUMBERFMT requires.
NUMBERFMT fmt = {0};

::GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT,
                  LOCALE_IDIGITS | LOCALE_RETURN_NUMBER,
                  reinterpret_cast<LPWSTR>(&fmt.NumDigits),
                  sizeof(fmt.NumDigits)/sizeof(WCHAR));

::GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT,
                  LOCALE_ILZERO | LOCALE_RETURN_NUMBER,
                  reinterpret_cast<LPWSTR>(&fmt.LeadingZero),
                  sizeof(fmt.LeadingZero)/sizeof(WCHAR));

WCHAR szGrouping[32] = L"";
::GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT, LOCALE_SGROUPING, szGrouping,
                  ARRAYSIZE(szGrouping));
if (::lstrcmp(szGrouping, L"3;0") == 0 ||
    ::lstrcmp(szGrouping, L"3") == 0) {
    fmt.Grouping = 3;
} else if (::lstrcmp(szGrouping, L"3;2;0") == 0) {
    fmt.Grouping = 32;
} else {
    assert(false);  // unexpected grouping string
}

WCHAR szDecimal[16] = L"";
::GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT, LOCALE_SDECIMAL, szDecimal,
                  ARRAYSIZE(szDecimal));
fmt.lpDecimalSep = szDecimal;

WCHAR szThousand[16] = L"";
::GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT, LOCALE_STHOUSAND, szThousand,
                  ARRAYSIZE(szThousand));
fmt.lpThousandSep = szThousand;

::GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT,
                  LOCALE_INEGNUMBER | LOCALE_RETURN_NUMBER,
                  reinterpret_cast<LPWSTR>(&fmt.NegativeOrder),
                  sizeof(fmt.NegativeOrder)/sizeof(WCHAR));
Isn't there an API that already does this?
I just wrote some code to do this last week. Alas, there does not seem to be a GetDefaultNumberFormat(LCID lcid, NUMBERFMT* fmt) function; you will have to write it yourself as you've already started. On a side note, the grouping string has a well-defined format that can be easily parsed; your current code is wrong for "3" (should be 30) and obviously will fail on more exotic groupings (though this is probably not much of a concern, really).
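For illustration, here's a hedged sketch of that parse (GroupingStringToUint is a hypothetical helper). Per the format the answer describes, a trailing ";0" means the last group repeats, so "3;0" maps to 3, a bare "3" to 30, and "3;2;0" to 32:
#include <windows.h>

// Hypothetical helper: converts a LOCALE_SGROUPING string such as "3;2;0"
// into the UINT that NUMBERFMT.Grouping expects.
static UINT GroupingStringToUint(const WCHAR *grouping) {
    UINT result = 0;
    BOOL repeat = FALSE;   // trailing ";0" means "repeat the last group"
    for (const WCHAR *p = grouping; *p; ++p) {
        if (*p == L';')
            continue;
        if (*p == L'0' && p[1] == L'\0') {
            repeat = TRUE; // consume the terminator instead of a digit
            break;
        }
        result = result * 10 + (UINT)(*p - L'0');
    }
    if (!repeat)
        result *= 10;      // no trailing ";0": append the terminating 0
    return result;
}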
If all you want to do is cut off the fractional digits from the end of the string, you can go with one of the default formats (like LOCALE_NAME_USER_DEFAULT), then check for the presence of the fractional separator (comma in continental languages, point in English) in the resulting character string, and then chop off the fractional part by replacing it with a null byte:
#define cut_off_decimals(sz, cch) \
    if (cch >= 5 && (sz[cch-4] == _T('.') || sz[cch-4] == _T(','))) \
        sz[cch-4] = _T('\0');
(Hungarian alert: sz is the C string, cch is the character count, including the terminating null byte. And _T is the Windows generic-text macro that maps to either char or wchar_t depending on whether UNICODE is defined; it's only needed for compatibility with Windows 9x/ME.)
Note that this will produce incorrect results for the very odd case of a user-defined format where the third-to-last character is a dot or a comma that has some special meaning to the user other than fractional separator. I have never seen such a number format in my whole life, and hence I conclude that this is good and safe enough.
And of course this won't do anything if the third-to-last character is neither a dot nor a comma.

Obtaining modifier key pressed in CGEvent tap

Having setup an event tap, I'm not able to identify what modifier key was pressed given a CGEvent.
CGEventFlags flagsP;
flagsP = CGEventGetFlags(event);
NSLog(@"flags: 0x%llX", flagsP);
NSLog(@"stored: 0x%llX", kCGEventFlagMaskCommand);
if (flagsP == kCGEventFlagMaskCommand) {
    NSLog(@"command pressed");
}
Given the above snippet, the first NSLog prints a different value than the second. No surprise, then, that the conditional is never triggered when the Command modifier key is pressed.
I need to identify whether command, alternate, option, control or shift are pressed for a given CGEvent. First though, I need help to understand why the above isn't working.
Thanks!
These are bit masks, which will be bitwise-ORed together into the value you receive from CGEventGetFlags (or pass when creating an event yourself).
You can't test equality here because no single bit mask will be equal to a combination of multiple bit masks. You need to test equality of a single bit.
To extract a single bit mask's value from a combined bit mask, use the bitwise-AND (&) operator. Then, compare that to the single bit mask you're interested in:
BOOL commandKeyIsPressed = (flagsP & kCGEventFlagMaskCommand) == kCGEventFlagMaskCommand;
Why both?
The & expression evaluates to the same type as its operands, CGEventFlags in this case, whose value may not fit in a BOOL, which is just a signed char. The == expression reduces that to 1 or 0, which always fits in a BOOL.
Other solutions to that problem include negating the value twice (!!) and declaring the variable as bool or _Bool rather than Boolean or BOOL. C99's _Bool type (synonymized to bool when you include stdbool.h) forces its value to be either 1 or 0, just as the == and !! solutions do.
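As a minimal sketch combining the above for each modifier the question mentions (note that "alternate" and "option" name the same key, kCGEventFlagMaskAlternate):
CGEventFlags flags = CGEventGetFlags(event);
BOOL commandDown = (flags & kCGEventFlagMaskCommand) != 0;
BOOL optionDown  = (flags & kCGEventFlagMaskAlternate) != 0;  // option/alt
BOOL controlDown = (flags & kCGEventFlagMaskControl) != 0;
BOOL shiftDown   = (flags & kCGEventFlagMaskShift) != 0;
NSLog(@"cmd=%d opt=%d ctrl=%d shift=%d", commandDown, optionDown, controlDown, shiftDown);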
