orbBasic async interaction/interruption from client - sphero-api

I'm trying to figure out if an orbBasic program can take inputs/flags from a BT client in an asynchronous manner.
The documented 'input' statement is blocking (but can have a timeout) and I believe the client can only send input when the orbBasic execution is waiting on the 'input' statement.
Is there a set of variables, settable by the BT client, that can be accessed by orbBasic? This would be more powerful and efficient. I know there may be issues with thread safety, but good execution design should be able to handle this.
Example usage: the app sends down a target x,y location or a target colour, and the orbBasic program handles the transition. (I know macros have a fade-to-colour command that could do this.)

OK ... I have since found that roll and set colour commands can be intercepted by the executing orbBasic program. So that answers my question (... haven't tested yet).

Related

Porting an old DOS TUI to ncurses

I would like to have some advice about how to port an old C++ program written for MS-DOS in the early 90s.
This program implements a quite complex text-user interface. The interface code is well separated from the logic, and I don't think it would be too difficult to make it use ncurses.
Being a complete novice, I have a few questions:
1. The DOS program intercepts interrupt 0x33 to handle mouse events. The interrupt handler stores events in a FIFO, which the main program polls periodically. (Every element in the FIFO is a C structure containing information about the nature of the event, the position of the mouse, and the state of its buttons.) To keep the logic of the code unchanged, I was thinking of firing a thread which calls getch() asynchronously within an infinite loop and fills the FIFO the same way the old program did. My idea is that this thread, and only this thread, should access stdin, while the main thread would only have the responsibility of accessing stdout (through add_wch() and similar). Is ncurses safe to use in this way, or do stdin/stdout accesses always need to be done within the same thread?
2. The way colors are set in this app is quite byzantine, as it uses the concept of "inherited palettes". Basically, a window usually specifies the background and foreground colors, and every widget within that window sets the foreground only (but a few widgets redefine both fg/bg). I understand that ncurses' attr() always wants colors specified as pairs, which must be initialized using init_pair(), and this doesn't play nicely with the logic of this program. I am therefore thinking of using tiparm() to directly send setaf/setab sequences when the program wants to change the fg/bg color, respectively. (I would lose the ability to run the code on terminals which do not support setaf/setab, but this would not be a huge loss.) Is it safe to send setaf/setab control sequences and then call functions like add_wch(), or should the latter be used only in association with attr()?
I could write a few test scripts to check that my ideas work, but I still would not be sure that this approach is guaranteed to always work.
Thanks for any help!
There are a lot of possibilities, but the approach described sounds like terminfo (low-level) rather than curses, except for the mention of add_wch. Rather than tiparm, a curses application would use wattr_set, init_pair, start_color, etc.
ncurses I/O has to be in one thread; while ncurses can be compiled to help (by using mutexes in some places), packagers have generally ignored that (and even with that configuration, application developers would still have work to do).
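To make the single-threaded structure concrete, here is a minimal sketch using curses-level calls (init_pair, attr_set, getch with nodelay). The ui_event struct and the FIFO are hypothetical stand-ins for the structures the old DOS interrupt handler filled; nothing here comes from the original program.

/* Minimal sketch: all curses input and output handled by one thread.
 * The event struct and FIFO are hypothetical stand-ins for the old DOS code. */
#include <curses.h>

struct ui_event { int ch; MEVENT mouse; };

#define FIFO_MAX 64
static struct ui_event fifo[FIFO_MAX];
static int fifo_len;

static void fifo_push(struct ui_event ev)
{
    if (fifo_len < FIFO_MAX)
        fifo[fifo_len++] = ev;
}

int main(void)
{
    initscr();
    start_color();
    cbreak();
    noecho();
    keypad(stdscr, TRUE);
    nodelay(stdscr, TRUE);                    /* getch() returns ERR instead of blocking */
    mousemask(ALL_MOUSE_EVENTS, NULL);

    init_pair(1, COLOR_WHITE, COLOR_BLUE);    /* one "window palette" becomes one color pair */

    int running = 1;
    while (running) {                         /* single thread does both input and output */
        int ch = getch();
        if (ch != ERR) {
            struct ui_event ev = { .ch = ch };
            if (ch == KEY_MOUSE)
                getmouse(&ev.mouse);
            fifo_push(ev);                    /* main logic keeps polling this FIFO as before */
            if (ch == 'q')
                running = 0;
        }

        attr_set(A_NORMAL, 1, NULL);          /* curses-level color change instead of tiparm() */
        mvaddstr(0, 0, "press 'q' to quit");
        refresh();
        napms(10);                            /* don't spin while idle */
    }

    endwin();
    return 0;
}

Keeping input and output in one loop like this sidesteps the thread-support question entirely; the main logic can keep polling the FIFO exactly as it did under DOS.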
Further reading:
curses color manipulation routines
curses character and window attribute control routines
curses thread support

GetMessage() while the thread is blocked in SwapBuffers()

Vsync blocks SwapBuffers(), which is what I want. My problem is that, since input messages go to the same thread that owns the window, any messages that come in while SwapBuffers() is blocked won't be processed immediately, but only after the vsync triggers the buffer swap and SwapBuffers() returns. So I have all my compute threads sitting idle instead of processing the scene for rendering in the next frame using the most recent input. I'm particularly concerned with having very low latency. I need some way to access all pending input messages to the window from other threads.
Windows API provides a way to wait for either Windows events or input messages using MsgWaitForMultipleObjects(), yet there's no similar way to wait for a buffer swap together with other things. That's very unfortunate.
I considered calling SwapBuffers() in another thread, but that requires glFinish() to be called in the window's thread before signalling another thread to SwapBuffers(), and glFinish() is still a blocking call so it's not a good solution.
I considered hooking, but that also looks like a dead end. Hooking with WH_GETMESSAGE causes GetMsgProc() to be called not asynchronously, but only when the window's thread calls GetMessage()/PeekMessage(), so it's no help. Installing a global hook doesn't help me either, due to the need to call RegisterTouchWindow() with a specific window handle to process WM_TOUCH -- and my input is touch. And while for mouse and keyboard you can install low-level hooks that capture messages as they're posted to the thread's queue, rather than when the thread calls GetMessage()/PeekMessage(), there appears to be no similar option for touch.
I also looked at wglDelayBeforeSwapNV(), but I don't see what's preventing the OS from preempting the thread some time after the call to that function but before SwapBuffers(), causing a miss of the next vsync signal.
So what's a good workaround? Can I make a second, invisible window, that will somehow be always the active one and so get all input messages, while the visible one is displaying the rendering? According to another discussion, message-only windows (CreateWindow with HWND_MESSAGE) are not compatible with WM_TOUCH. Is there perhaps some undocumented event that SwapBuffers() is internally waiting on that I could access and feed to MsgWaitForMultipleObjects()? My target is a fixed platform (Windows 8.1 64-bit) so I'm fine with using undocumented functionality, should it exist. I do want to avoid writing my own touchscreen driver, however.
Out of curiosity, why not implement your entire drawing logic in that other thread? It appears the problem you are running into is that the message pump is driven by the same thread that draws. Since Windows does not let you drive the message pump from a different thread than the one that created the window, the easiest solution would just be to push all the GL stuff into a different thread.
SwapBuffers (...) is also not necessarily going to block. Per the requirements of vsync, an implementation need only block the next command that would modify the backbuffer while all backbuffers are pending a swap. Triple buffering changes things up a little bit by introducing a second backbuffer.
One possible implementation of triple buffering will discard the oldest backbuffer when it comes time to swap, so SwapBuffers (...) would never cause blocking (this is effectively how modern versions of Windows work in windowed mode with the DWM enabled). Other implementations will eventually present both backbuffers; this reduces (but does not eliminate) blocking, but also results in the display of late frames.
Unfortunately, WGL does not let you request the number of backbuffers in a swap chain (beyond 0 = single-buffered or 1 = double-buffered); the only way to get triple buffering on Windows is through driver settings. The lowest message latency will come from driving GL in a different thread, but triple buffering can help a little bit while requiring no effort on your part.
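A minimal sketch of that suggestion, assuming a plain Win32/WGL setup (all names below are illustrative, not from the original post): the window's thread only pumps messages, while a worker thread owns the GL context, so SwapBuffers() can block only the worker.

/* Minimal sketch: pump messages on the window's thread, drive GL
 * (including the potentially blocking SwapBuffers) from a worker thread.
 * Error handling, touch registration and input forwarding are omitted. */
#include <windows.h>
#include <GL/gl.h>

static HDC g_dc;
static volatile LONG g_running = 1;

static DWORD WINAPI render_thread(LPVOID unused)
{
    (void)unused;
    HGLRC rc = wglCreateContext(g_dc);     /* GL context lives on this thread */
    wglMakeCurrent(g_dc, rc);

    while (g_running) {
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SwapBuffers(g_dc);                 /* may block on vsync -- but only here */
    }

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    return 0;
}

static LRESULT CALLBACK wnd_proc(HWND h, UINT m, WPARAM w, LPARAM l)
{
    switch (m) {
    case WM_DESTROY:
        g_running = 0;
        PostQuitMessage(0);
        return 0;
    /* WM_TOUCH / WM_INPUT would be handled here and forwarded to the
       compute threads immediately; this thread never blocks in GL. */
    }
    return DefWindowProcA(h, m, w, l);
}

int WINAPI WinMain(HINSTANCE hi, HINSTANCE prev, LPSTR cmd, int show)
{
    (void)prev; (void)cmd; (void)show;

    WNDCLASSA wc = { 0 };
    wc.lpfnWndProc = wnd_proc;
    wc.hInstance = hi;
    wc.lpszClassName = "glwin";
    RegisterClassA(&wc);

    HWND wnd = CreateWindowA("glwin", "demo", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                             CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                             NULL, NULL, hi, NULL);
    g_dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = { 0 };
    pfd.nSize = sizeof pfd;
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(g_dc, ChoosePixelFormat(g_dc, &pfd), &pfd);

    HANDLE th = CreateThread(NULL, 0, render_thread, NULL, 0, NULL);

    MSG msg;                               /* the pump never waits on SwapBuffers */
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }

    WaitForSingleObject(th, INFINITE);
    ReleaseDC(wnd, g_dc);
    return 0;
}

With this split, touch messages (after RegisterTouchWindow() on the window's thread) arrive on a thread that never blocks in GL, so input can be handed to the compute threads immediately.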

Monitoring files asynchronously

On Unix: I’ve been through FAM and Gamin, and both seem to provide a client/server file-monitoring system. I would rather have a system where I tell the kernel to monitor some inodes and it pokes me back when events occur. inotify looked promising at first on that front: inotify_init1 let me pass IN_NONBLOCK, which in turn caused poll() to return immediately. However, I understood that I would still have to call it regularly if I wanted news about the monitored files. Now I’m a bit short of ideas.
Is there something to monitor files asynchronously?
PS: I haven’t looked on Windows yet, but I would love to have some answers about it too.
As Celada says in the comments above, inotify and poll are the right way to do this.
Signals are not a mechanism for reasonable asynchronous programming -- and signal handlers are remarkably dangerous for the inexperienced and even for the experienced. One does not use them for such purposes voluntarily.
Instead, one should structure one's program around an event loop (see http://en.wikipedia.org/wiki/Event-driven_programming for an overall explanation) using poll, select, or some similar system call as the core of its event-handling mechanism.
Alternatively, you can use threads, or threads plus an event loop.
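A minimal sketch of that event-loop structure on Linux, using inotify plus poll (the watched directory and event mask are just examples):

/* Minimal sketch: non-blocking inotify descriptor driven by poll(). */
#include <poll.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init1(IN_NONBLOCK);
    inotify_add_watch(fd, "/tmp", IN_CREATE | IN_MODIFY | IN_DELETE);

    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    char buf[4096]
        __attribute__((aligned(__alignof__(struct inotify_event))));

    for (;;) {
        /* Sleep until the kernel has events; no periodic polling needed. */
        if (poll(&pfd, 1, -1) <= 0)
            continue;

        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0)
            continue;

        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->len)
                printf("event 0x%x on %s\n", (unsigned)ev->mask, ev->name);
            p += sizeof(*ev) + ev->len;
        }
    }
}

poll() does block, but only when there is genuinely nothing to do, and other descriptors (sockets, timers, signalfd, ...) can be added to the same pollfd array, which is the point of the event-loop design.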
However interesting your answers are, I’m sorry, but I can’t accept a mechanism based on blocking calls to poll or select when the question says “asynchronously”, regardless of how deeply the blocking is hidden.
On the other hand, I found out that one can run inotify asynchronously by passing the IN_NONBLOCK flag to inotify_init1. Signals are not triggered as they would be with aio, and a read call that would otherwise block instead fails with errno set to EWOULDBLOCK.

Is it a good idea to implement a TCP/IP socket client-server with signals?

To clarify, I am wondering what the pros and cons are of writing a "multiple simultaneous clients to a single server" application using TCP/IP sockets and signal handlers that are called in response to "can read / can write" conditions on client socket file descriptors. As far as I understand, at least the Linux kernel can use signals to notify a process of conditions related to socket descriptors. Obviously one has to be careful in a signal handler, which, again as I understand it, interrupts the process: reentrancy, atomicity, undefined state for variables, etc.
But the signal handler does not have to do most of the work; in fact, quite the opposite: it could simply add the socket to a set of sockets ready for reading or writing, much like select, poll and epoll_wait do, and let the normal flow of the program work with these sets. In effect one would be emulating much the same pattern as those functions. Purely in principle, is it doable, and can it be worth it?
There are already a couple of such methods. One is the SIGIO signal; check man 7 socket and look for the section named "Signals" for more information.
The other method is standardized by POSIX and called asynchronous I/O. The functions to use are all prefixed with aio_ (for example, aio_read). See this link for an example of how to use it, or check the manual page.
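A minimal sketch of the SIGIO approach for a listening TCP socket, assuming Linux (the port and the trivial response are illustrative). The handler only sets a flag; the ordinary program flow drains the ready sockets, which is exactly the "let the default flow do the work" pattern the question describes:

/* Minimal sketch: SIGIO-driven readiness notification, per man 7 socket. */
#include <fcntl.h>
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

static volatile sig_atomic_t io_pending = 0;

static void on_sigio(int sig)
{
    (void)sig;
    io_pending = 1;                           /* just record that something is ready */
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);              /* example port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 16);

    struct sigaction sa = { 0 };
    sa.sa_handler = on_sigio;
    sigaction(SIGIO, &sa, NULL);

    fcntl(lfd, F_SETOWN, getpid());           /* deliver SIGIO to this process */
    fcntl(lfd, F_SETFL, fcntl(lfd, F_GETFL) | O_ASYNC | O_NONBLOCK);

    for (;;) {
        pause();                              /* sleep until a signal arrives */
        if (!io_pending)
            continue;
        io_pending = 0;

        /* The default program flow does the real work: drain whatever is ready. */
        int cfd;
        while ((cfd = accept(lfd, NULL, NULL)) >= 0) {
            write(cfd, "hello\n", 6);
            close(cfd);
        }
    }
}

Note the classic race between checking the flag and calling pause(); a production version would use sigsuspend() or pselect() instead, which is one reason the poll/epoll family is usually preferred over signal-driven I/O.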

How does an OS or a systems program wait for user input?

I come from the world of web programming, where usually the server sets a superglobal variable through the specified method (GET, POST, etc.) that makes the data a user enters into a field available. Another way is to use AJAX to register a callback method on an event that the XMLHttpRequest object will fire once notified by the browser (I'm assuming...). So I guess my question is: is there some sort of dispatch interface that a systems programmer's code must interact with indirectly in order to execute in response to user input, or does the programmer control the "waiting" process directly? And if there is a dispatcher, is there a loop structure in the OS that waits for a particular event to occur?
I was prompted to ask this question here because I'm in a basic programming logic class and the professor won't answer such a "sophisticated" question as this one. My book gives a vague pseudocode example like:
//start
sentinel_val = 'stop';
get user_input;
while (user_input not equal to sentinel_val)
{
    // do something.
    get user_input;
}
//stop
This example leads me to believe that if no input is received from the user, the loop will keep repeating the "do something" step with the old input (or none at all) until new input magically appears, and then it will repeat again with that value. It seems the book has tried to use the example of priming and reading from a file to convey how a program would get data from event-driven input, no?
I'm confused :(
At the lowest level, input to the computer is asynchronous-- it happens via "interrupts", which is basically something external to the CPU (a keyboard controller) sending a signal to the CPU that says "stop what you're doing and accept this data". (It's complex, but this is the general idea). So the CPU stops, grabs the keystroke, and puts it in a buffer to be read, and then continues doing what it was doing before the interrupt.
Very similar things happen with inbound network traffic, and the results of reading from a disk, etc.
At a higher level, it gets more dependent on the operating system or framework that you're using.
With keyboard input, there might be a process (an application, basically) that is blocked, waiting for user input. That "block" doesn't mean the computer just sits there waiting; it lets other processes run instead. But when the keyboard result comes in, it will wake up the one that was waiting for it.
From the point of view of that waiting process, it called some function "get_next_character()" and that function returned with the character. Etc.
Frankly, how all this stuff ties together is super interesting and useful to understand. :)
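To make the "blocked, waiting for input" idea concrete, here is a minimal C sketch of the book's sentinel loop. The fgets() call blocks: the OS puts the process to sleep until the terminal driver has a full line, so the loop body never runs with stale or missing input.

/* Minimal sketch of the book's sentinel loop with a real blocking read. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char user_input[128];

    printf("> ");
    /* fgets() blocks: the process sleeps until the OS has a complete line. */
    while (fgets(user_input, sizeof user_input, stdin) != NULL) {
        user_input[strcspn(user_input, "\n")] = '\0';   /* strip the newline */

        if (strcmp(user_input, "stop") == 0)            /* sentinel value */
            break;

        printf("you typed: %s\n", user_input);          /* "do something" */
        printf("> ");
    }
    return 0;
}

This is the piece the pseudocode hides: "get user_input" is a blocking system call underneath, not a busy loop checking for new data.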
An OS is driven by hardware events (called interrupts). An OS does not busy-wait for an interrupt; instead, it executes a special instruction that puts the CPU to sleep inside an idle loop. When a hardware event occurs, the corresponding interrupt handler is invoked.
It seems the book has tried to use the example of priming and reading from a file to convey how a program would get data from event-driven input, no?
Yes, that is what the book is doing. In fact... the Unix operating system is built on the idea of abstracting all input and output of any device to look like this.
In reality, most operating systems and hardware make use of interrupts that jump to what we can call a subroutine to perform the low-level data read and then return control to the operating system.
Also, on most systems many of the devices work independently of the rest of the operating system and present a high-level API to it. For example, a keyboard port (or maybe a better example is a network card) on a computer processes interrupts itself, and then the keyboard driver presents the operating system with a different API. You can look at the standards for devices to see what these are. If you want to know the API the keyboard port presents, for example, you could look at the source code for the keyboard driver in a Linux distro.
A basic explanation based on my understanding...
Your get user_input pseudo-function is often something like readLine. That means the function will block until the data read contains a newline character.
Below this, the OS will use interrupts (meaning it isn't dealing with the keyboard unnecessarily, but only when required) to allow it to respond when the user hits some keys. The keyboard interrupt causes execution to jump to a special routine which fills an input buffer with data from the keyboard. The OS then allows the appropriate process - generally the active one - to use readLine-style functions to access this data.
There's a bunch more complexity in there but that's a simple view. If someone offers a better explanation I'll willingly bow to superior knowledge.
