I am quite confused about stdin and the key events of a GUI widget.
I always thought stdin was the ordinary way for a process to get keyboard input: if I have a process, I can read the keyboard from stdin, and I/O redirection is the usual way to feed keyboard input to a subprocess, e.g. subprocess.Popen(stdin=PIPE).
On the other hand, in a GUI I use wx.TextCtrl or py.Shell.shell to catch key events as input.
So I am quite confused: if I have a GUI or PyShell running and I type on the keyboard, does the input arrive via stdin or via the GUI's key-event system? If it goes through the GUI key events, how can I get at the keyboard's stdin? Can I still simply redirect the keyboard input to my child process (inside the GUI) as in ordinary non-GUI programming?
Thanks a lot for any comments.
When you type, the input comes from the GUI event mechanism, not from stdin. You asked how to get the "keyboard stdin", and the answer to that is the same as for any other type of program: you read it (but it will almost certainly be empty). It's important to realize that the GUI probably doesn't have a stdin if it was started by double-clicking on an icon on the desktop.
And no, you can't "redirect keyboard inputs to [your] child process", if I understand your question. Stdin really has absolutely nothing to do with GUIs at all. How keyboard input is read via a GUI is completely disconnected from stdin.
I'm working on a custom cross-platform UI library that needs a synchronous "ShowPopup" method that shows a popup, runs an event loop until it's finished, and automatically cancels when the user clicks outside the popup or presses Escape. Keyboard, mouse and scroll-wheel events need to be dispatched to the popup, but other events (paint, draw, timers, etc.) need to be dispatched to their regular targets while the loop runs.
Edit: for clarification, by popup I mean a menu-style popup window, not an alert/dialog, etc.
On Windows I've implemented this fairly simply by calling GetMessage/DispatchMessage and filtering and dispatching messages as appropriate. Works fine.
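For concreteness, here is a rough C++ sketch of the kind of Win32 loop described above; the function and variable names are invented for illustration, and a real implementation would need more careful handling of focus and of leaving the outside click in the queue.

```cpp
#include <windows.h>

// Hypothetical sketch of a synchronous popup loop on Windows. "popup" is the
// popup's window handle; all names are invented for illustration.
void RunPopupLoop(HWND popup)
{
    MSG msg;
    bool done = false;
    while (!done && GetMessage(&msg, nullptr, 0, 0))
    {
        bool forPopup = (msg.hwnd == popup) || IsChild(popup, msg.hwnd);

        if (msg.message == WM_KEYDOWN && msg.wParam == VK_ESCAPE)
        {
            done = true;                       // Escape cancels the popup
        }
        else if (msg.message == WM_LBUTTONDOWN && !forPopup)
        {
            // Click outside the popup: close it. GetMessage has already
            // removed the click; to leave it in the queue for its real
            // target you would peek with PeekMessage(..., PM_NOREMOVE)
            // before deciding to exit the loop.
            done = true;
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);             // paint, timers, etc. reach their usual targets
        }
    }
}
```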
I have much less experience with Cocoa/OS X, however, and I find the whole event loop/dispatch paradigm a bit confusing. I've seen the following article, which explains how to implement a mouse-tracking loop very similar to what I need:
http://stpeterandpaul.ca/tiger/documentation/Cocoa/Conceptual/EventOverview/HandlingMouseEvents/chapter_5_section_4.html
but there are some things about this that concern me:
1. The linked article states: "the application’s main thread is unable to process any other requests during an event-tracking loop and timers might not fire". Might not? Why not, when not, how to make sure they do?
2. The docs for nextEventMatchingMask:untilDate:inMode:dequeue: state that "events that do not match one of the specified event types are left in the queue." That seems a little odd. Does this mean that if an event loop only asks for mouse events then any pressed keys will be processed once the loop finishes? That'd be weird.
3. Is it possible to peek at a message in the event queue without removing it? E.g. the Windows version of my library uses this to close the popup when it's clicked outside, but leaves the click event in the queue so that clicking outside the popup on another button doesn't require a second click.
4. I've read and re-read about run loop modes but still don't really get it. A good explanation of what these are for would be great.
5. Are there any other good examples of implementing an event loop for a popup? Even better would be pseudo-code for what the built-in NSApplication run loop does.
Another way of putting all this: what's the Cocoa equivalent of Windows' PeekMessage(..., PM_REMOVE), PeekMessage(..., PM_NOREMOVE) and DispatchMessage()?
Any help greatly appreciated.
What exactly is a "popup" as you're using the term? That term means different things in different GUI APIs. Is it just a modal dialog window?
Update for edits to question:
It seems you just want to implement a custom menu. Apple provides a sample project, CustomMenus, which illustrates that technique. It's a companion to one of the WWDC 2010 session videos, Session 145, "Key Event Handling in Cocoa Applications".
Depending on exactly what you need to achieve, you might want to use an NSAlert. Alternatively, you can use a custom window and just run it modally using the -runModalForWindow: method of NSApplication.
To meet your requirement of ending the modal session when the user clicks outside of the window, you could use a local event monitor. There's even an example of just such functionality in the (modern, current) Cocoa Event Handling Guide: Monitoring Events.
All of that said, here are (hopefully no longer relevant) answers to your specific questions:
The linked article states: "the application’s main thread is unable to process any other requests during an event-tracking loop and timers might not fire". Might not? Why not, when not, how to make sure they do?
Because timers are scheduled in a particular run loop mode or set of modes. See the answer to question 4, below. You would typically use the event-tracking mode when running an event-tracking loop, so timers which are not scheduled in that mode will not run.
You could use the default mode for your event-tracking loop, but it really isn't a good idea. It might cause unexpected re-entrancy.
Assuming your pop-up is similar to a modal window, you should probably use NSModalPanelRunLoopMode.
The docs for nextEventMatchingMask:untilDate:inMode:dequeue: state that "events that do not match one of the specified event types are left in the queue." That seems a little odd. Does this mean that if an event loop only asks for mouse events then any pressed keys will be processed once the loop finishes? That'd be weird.
Yes, that's what it means. It's up to you to prevent that weird outcome. If you were to read a version of the Cocoa Event Handling Guide from this decade, you'd find there's a section on how to deal with this. ;-P
Is it possible to peek at a message in the event queue without removing it? E.g. the Windows version of my library uses this to close the popup when it's clicked outside, but leaves the click event in the queue so that clicking outside the popup on another button doesn't require a second click.
Yes. Did you notice the "dequeue:" parameter of nextEventMatchingMask:untilDate:inMode:dequeue:? If you pass NO for that, then the event is left in the queue.
I've read and re-read about run loop modes but still don't really get it. A good explanation of what these are for would be great.
It's hard to know what to tell you without knowing what you're confused about and how the Apple guide failed you.
Are you familiar with handling multiple asynchronous communication channels using a loop around select(), poll(), epoll(), or kevent()? It's kind of like that, but a bit more automated. Not only do you build a data structure which lists the input sources you want to monitor and what specific events on those input sources you're interested in, but each input source also has a callback associated with it. Running the run loop is like calling one of the above functions to wait for input but also, when input arrives, calling the callback associated with the source to handle that input. You can run a single turn of that loop, run it until a specific time, or even run it indefinitely.
With run loops, the input sources can be organized into sets. The sets are called "modes" and identified by name (i.e. a string). When you run a run loop, you specify which set of input sources it should monitor by specifying which mode it should run in. The other input sources are still known to the run loop, but just ignored temporarily.
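To make the select()/poll() analogy concrete, here is a small, purely illustrative C++ sketch (no Cocoa involved; all names are invented): input sources carry callbacks and belong to named "modes", and running the loop in a given mode only services the sources registered for that mode.

```cpp
#include <poll.h>

#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// An input source: something to wait on, a handler to call when it is ready,
// and the set of "modes" it participates in.
struct Source {
    int fd;
    std::function<void()> callback;
    std::vector<std::string> modes;
};

// Run one turn of the "run loop" in the given mode: only the sources
// registered for that mode are polled; the rest are temporarily ignored.
void runOnce(const std::vector<Source>& sources, const std::string& mode, int timeoutMs)
{
    std::vector<pollfd> fds;
    std::vector<const Source*> active;
    for (const auto& s : sources) {
        if (std::find(s.modes.begin(), s.modes.end(), mode) != s.modes.end()) {
            fds.push_back({s.fd, POLLIN, 0});
            active.push_back(&s);
        }
    }

    if (poll(fds.data(), static_cast<nfds_t>(fds.size()), timeoutMs) > 0) {
        for (size_t i = 0; i < fds.size(); ++i)
            if (fds[i].revents & POLLIN)
                active[i]->callback();         // dispatch to the source's handler
    }
}
```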
The -nextEventMatchingMask:untilDate:inMode:dequeue: method is, more or less, running the thread's run loop internally. In addition to whatever input sources were already present in the run loop, it temporarily adds an input source to monitor events from the windowing system, including mouse and key events.
Are there any other good examples of implementing an event loop for a popup? Even better would be pseudo-code for what the built-in NSApplication run loop does.
There's old Apple sample code, which is actually their implementation of GLUT. It provides a subclass of NSApplication and overrides the -run method. When you strip away some stuff that's only relevant for application start-up or GLUT, it's pretty simple. It's just a loop around -nextEventMatchingMask:... and -sendEvent:.
Is there any way to give focus to two applications at the same time in Windows?
They need to be controlled by two input types (one can be the mouse and the other the keyboard, or both can be controlled with two keyboards). On Windows only one window (application) can have focus at a time, and input can be sent to only one window.
No. Focus controls which process (and which window within it) receives user input. Input delivery is tied to the message pump and follows focus: all user-input events pertinent to the application are trapped by the system and sent to the application that has focus. The best you can hope for is something that takes focus itself and redirects the input according to its type.
This could very well be another silly question, but I can't seem to find the answer (or any for that matter), so here goes.
I have a command-line program that uses SIGWINCH on Linux to detect window-size changes, and I apparently have a user who is running the program on Windows. The problem is that SIGWINCH is unsupported on Windows. I've tried Googling every combination of search terms I can think of, but because SIGWINCH and window-size changes are so closely associated, I'm having trouble finding any useful results. I'm looking for a Windows equivalent, or the method most often used to detect changes in window size on Windows.
How do you detect changes in window size on Windows?
Since I don't think you can subclass console windows (and thus catch WM_SIZE messages), you may just have to poll GetConsoleScreenBufferInfo.
EDIT: Upon further investigation (not tested!), it might also be doable without polling using ReadConsoleInput. Summary: Call SetConsoleMode to turn on window input events. From a different thread, wait for the console input handle to become signaled using WaitForSingleObject or a similar function. Read all pending console events; the presence of window buffer size events means something's resized your console window.
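As a hedged illustration of that second approach (untested, as the answer says, but using only documented console APIs), something along these lines should report resize events; a real program would likely wait on the handle from a separate thread as described above, and would also have to forward the other input events it reads.

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE in = GetStdHandle(STD_INPUT_HANDLE);

    // Turn on window input so buffer-size changes are reported as events.
    DWORD mode = 0;
    GetConsoleMode(in, &mode);
    SetConsoleMode(in, mode | ENABLE_WINDOW_INPUT);

    for (;;) {
        // The console input handle is signaled whenever input is pending;
        // a real program might wait on it from a separate thread instead.
        WaitForSingleObject(in, INFINITE);

        INPUT_RECORD records[32];
        DWORD count = 0;
        if (!ReadConsoleInput(in, records, 32, &count))
            break;

        for (DWORD i = 0; i < count; ++i) {
            // Note: key and mouse records are consumed here too; forward
            // them if the rest of the program needs them.
            if (records[i].EventType == WINDOW_BUFFER_SIZE_EVENT) {
                COORD size = records[i].Event.WindowBufferSizeEvent.dwSize;
                printf("buffer resized to %d x %d\n", size.X, size.Y);
            }
        }
    }
    return 0;
}
```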
Can SendInput be used to simulate a drag & drop operation?
I've got an application that accepts files of a certain format dropped onto it, but not passed on the command line, and I want to associate it with a file type. I thought I'd create a small tool that finds the window and simulates a drag & drop of the file onto it. Is this at all possible? Do I need to use SendInput, or possibly SendMessage? What would the parameters be?
Yes, pretty likely. SendInput injects mouse events at a very low level. SendMessage won't work.
You'll need a thread since DoDragDrop is a blocking call. Fake the mouse down first, start the thread, then call DoDragDrop. The thread should sleep to give DoDragDrop enough time to get started, then fake the mouse move and mouse up. Keep your fingers crossed that it works the first time; it is impossible to debug if it doesn't.
The shell already has a function that simulates a drop: SHDoDragDrop, no need for hacks like faking mouse input.
Since you are talking about the command line: XP added support for simulating drag & drop for applications/registered file types: How do I accept files to be opened via IDropTarget instead of on the command line?
When you press F2 to edit a filename in Windows Shell, there is a limited set of editing keys that is understood - e.g. CTRL+Arrow Keys, Home, End, CTRL+X. For example, when you type CTRL+Right Arrow, the cursor will stop right after a dash, but will not stop at a period. Are these actions customizable, and if so, how?
Any additional information not directly related but which you feel might help the topic will also be appreciated.
You can set a custom word-break procedure for your edit control using EM_SETWORDBREAKPROC; EditWordBreakProc is the corresponding callback function that the OS calls when it needs to find where a word break occurs.
From the docs:
Either a multiline or a single-line edit control might call this function when the user presses arrow keys in combination with the CTRL key to move the caret to the next word or previous word.
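For an edit control in your own process, a minimal sketch of such a word-break procedure might look like the following; the delimiter set and all names are assumptions for illustration, and by itself this does not reach into Explorer's rename box (see the later answers about hooks for that).

```cpp
#include <windows.h>

// Treat spaces, tabs, dashes and periods as word delimiters.
static BOOL IsDelim(WCHAR ch)
{
    return ch == L' ' || ch == L'\t' || ch == L'-' || ch == L'.';
}

// Word-break callback for a Unicode edit control (EDITWORDBREAKPROCW).
int CALLBACK MyWordBreakProc(LPWSTR text, int current, int length, int action)
{
    switch (action) {
    case WB_ISDELIMITER:                 // is the character at 'current' a delimiter?
        return IsDelim(text[current]);
    case WB_LEFT:                        // find the start of the word to the left
        while (current > 0 && IsDelim(text[current - 1])) --current;
        while (current > 0 && !IsDelim(text[current - 1])) --current;
        return current;
    case WB_RIGHT:                       // find the start of the next word to the right
        while (current < length && !IsDelim(text[current])) ++current;
        while (current < length && IsDelim(text[current])) ++current;
        return current;
    }
    return 0;
}

// Installed on an edit control you own with:
//   SendMessage(hwndEdit, EM_SETWORDBREAKPROC, 0, (LPARAM)MyWordBreakProc);
```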
The key combinations themselves are not directly customizable, and for a good reason -- so that the user experience is uniform across all applications. Of course, you could subclass the edit control and handle keyboard messages yourself but I guess that's not the point here.
The Windows version matters, but in general this behavior is baked into SysListView32, the native list-view control. No, the keyboard handling is hard-baked. Subclassing the control is technically possible, just not practical, since it lives inside Explorer.exe. And you have no way to tell where the caret is located inside the label; there are no messages for it.
By "Windows Shell" I assume you mean Windows Explorer, but the answer is likely the same no matter what program you are talking about.
Explorer simply creates an EDIT control and moves it into position. The editing behavior comes from this stock system control, plus whatever additional logic Explorer adds to its own instance of it.
While you can easily alter the behavior of an EDIT control that belongs to a thread in your own process, doing so in another process requires a global hook. We will stipulate that you understand the amount of work involved in doing a global hook correctly, one that will function in both x86 and x64 environments.
You cannot directly interfere with the behavior of an EDIT control in another process with WH_CALLWNDPROC, but you can use WH_CALLWNDPROCRET to observe keyboard messages, check that the window is an EDIT control, check that the EDIT control belongs to Explorer, and then, knowing precisely how the EDIT control responded to that keyboard event, do something additional like backing up to that period.
Or maybe you could use a WH_CBT hook to monitor HCBT_CREATEWND and subclass the EDIT control each time it gets created.
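A very rough sketch of that WH_CBT idea follows. The hook procedure must live in a DLL that gets loaded into Explorer's process, and separate 32-bit and 64-bit builds are needed; all of that plumbing is omitted here, and the names are invented.

```cpp
#include <windows.h>

static WNDPROC g_originalEditProc = nullptr;   // simplistic: tracks one subclass at a time

static LRESULT CALLBACK EditSubclassProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // Inspect or adjust keyboard messages here before the edit control sees them.
    return CallWindowProc(g_originalEditProc, hwnd, msg, wParam, lParam);
}

static LRESULT CALLBACK CbtHookProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HCBT_CREATEWND) {
        HWND hwnd = (HWND)wParam;
        WCHAR className[64] = L"";
        GetClassNameW(hwnd, className, 64);
        if (lstrcmpiW(className, L"Edit") == 0) {
            // Subclass the freshly created edit control.
            g_originalEditProc = (WNDPROC)SetWindowLongPtrW(
                hwnd, GWLP_WNDPROC, (LONG_PTR)EditSubclassProc);
        }
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

// Installed globally (from code inside the DLL) with something like:
//   SetWindowsHookExW(WH_CBT, CbtHookProc, hThisDllModule, 0);
```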
The effort is probably not worth the benefit.