Capture Ctrl+C (SIGINT) when terminal is in raw mode?

So, imagine I have raw mode enabled in an application.
The application may or may not take input.
What would be the best practice to capture Ctrl+C (SIGINT)? Remember, it doesn't always take input, which makes the whole thing a tiny bit harder.
My idea is to always read input in the background and use that to see if somebody presses Ctrl+C, discarding everything else unless the application actually needs the input.
One flaw with this idea is that I believe SIGINT can be rebound to other keys. Also, always consuming input might not be best practice, nor would it be painless to implement.
Do I just give up on raw mode completely? Do I only enable it when I need it (i.e. every time I want a single character)?

Listen for character code 3, which is what the terminal sends when you press Ctrl+C in raw mode.
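For concreteness, here is a minimal sketch in C using termios (assuming POSIX): with ISIG turned off by raw mode, Ctrl+C arrives as the plain byte 0x03, and the program can translate it back into a real SIGINT itself.

    /* Minimal sketch, assuming POSIX termios. */
    #include <signal.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios saved, raw;
        tcgetattr(STDIN_FILENO, &saved);
        raw = saved;
        cfmakeraw(&raw);                          /* disables ISIG, echo, ... */
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        char c;
        while (read(STDIN_FILENO, &c, 1) == 1) {
            if (c == 3) {                         /* Ctrl+C seen as plain input */
                tcsetattr(STDIN_FILENO, TCSANOW, &saved);
                raise(SIGINT);                    /* re-deliver as a real signal */
            }
            /* ... normal input handling ... */
        }

        tcsetattr(STDIN_FILENO, TCSANOW, &saved); /* always restore the tty */
        return 0;
    }

If the application only sometimes wants input, this read loop can live in a background thread that forwards real keystrokes to whoever needs them, which is essentially the background-reader idea from the question.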

Related

Porting an old DOS TUI to ncurses

I would like some advice on porting an old C++ program written for MS-DOS in the early 90s.
This program implements a fairly complex text user interface. The interface code is well separated from the logic, and I don't think it would be too difficult to make it use ncurses.
Being a complete novice, I have a few questions:
1. The DOS program intercepts interrupt 0x33 to handle mouse events. The interrupt handler stores events in a FIFO, which the main program polls periodically. (Every element in the FIFO is a C structure describing the nature of the event, the position of the mouse, and the state of its buttons.) To keep the logic of the code unchanged, I was thinking of firing a thread which calls getch() asynchronously in an infinite loop and fills the FIFO the same way the old program did. My idea is that this thread, and only this thread, should access stdin, while the main thread would only be responsible for stdout (through add_wch() and similar). Is ncurses safe to use this way, or do stdin/stdout accesses always need to happen in the same thread?
2. The way colors are set in this app is quite byzantine, as it uses the concept of "inherited palettes". Basically, a window usually specifies the background and foreground colors, and every widget within that window sets the foreground only (but a few widgets redefine both fg/bg). I understand that ncurses' attribute routines always want colors specified as pairs, which must be initialized using init_pair(), and this doesn't play nicely with the logic of this program. I am therefore thinking of using tiparm() to send setaf/setab sequences directly when the program wants to change the fg/bg color, respectively. (I would lose the ability to run the code on terminals which do not support setaf/setab, but this would not be a huge loss.) Is it safe to send setaf/setab control sequences and then call functions like add_wch(), or should the latter be used only in association with the attribute routines?
I could write a few test scripts to check that my ideas work, but I would not be sure that the approach is supposed to always work.
Thanks for any help!
There are a lot of possibilities, but the approach described sounds like terminfo (low-level) rather than curses, except for the mention of add_wch. Rather than tiparm, a curses application would use wattr_set, init_pair, start_color, etc.
ncurses I/O has to stay in one thread; while ncurses can be compiled to help (by using mutexes in some places), packagers have generally ignored that, and even with that configuration, application developers would still have work to do.
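On the color question specifically, a common way to square "inherited" fg/bg palettes with curses pairs is to allocate pairs lazily, one per (fg, bg) combination actually used. A minimal sketch (the helper names and table size are mine, not part of curses; assumes initscr() and start_color() have already run):

    #include <curses.h>

    static short pair_for(short fg, short bg)
    {
        static short table[16][16];   /* 0 means "not allocated yet" */
        static short next = 1;
        if (table[fg][bg] == 0) {
            init_pair(next, fg, bg);  /* allocate a pair on first use */
            table[fg][bg] = next++;
        }
        return table[fg][bg];
    }

    /* A widget that sets only its foreground keeps the window's bg: */
    static void set_fg(WINDOW *win, short fg, short inherited_bg)
    {
        wattr_set(win, A_NORMAL, pair_for(fg, inherited_bg), NULL);
    }

This keeps all output going through curses, so add_wch() and the refresh machinery stay consistent, which they would not be if raw setaf/setab sequences bypassed the library.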
Further reading:
curses color manipulation routines
curses character and window attribute control routines
curses thread support

How can I introduce input lag (keyboard and mouse) to my system?

(I work in QA, this really is for legitimate use.)
I'm trying to come up with a way to introduce forced input lag for both keyboard and mouse (in Windows). Like, when I press 'A' on the keyboard, I want to introduce a very slight delay before the OS processes that A. Or if I move the mouse, I'd like the same mouse speed, but with the same slight delay before it kicks in. This lag needs to be present across all threads, not just the one that kicked off the process. But the lag doesn't have to be to-the-millisecond precise every time.
I'm not even sure how to go about setting this up. I'm capable of writing it in whatever language/environment we may need, I'm just not sure where to start. I think something like AutoHotkey may be able to do what I want by essentially making an arbitrary key call a macro that delays very slightly before sending that key, but I'm not sure what function calls I may need to make it happen. Or, maybe there's a way in C to get at the input across the OS before it kicks in. I'm just not sure.
Can anyone point me to some resources or a language/function(s) that can accomplish this? (Or even an already existing program or service.)
If you want a purely software solution, I'm afraid you'll need to develop a filter driver for your keyboard and mouse, which is very expensive to develop.
Instead, you can plug your mouse and keyboard into another machine, have the input messages come over the network, and then introduce network latency. You could use a second PC + VNC software, a second PC + software USB/IP, or a hardware USB/IP device like this one.
There’s an easy but less reliable way.
You could install system-wide WH_KEYBOARD_LL and WH_MOUSE_LL hooks, discard the original messages, and after a delay re-send them with the SendInput API. This should mostly work; however, there are cases where it won't, e.g. I wouldn't expect it to affect most video games, because they use raw input.
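A minimal sketch of the keyboard half (the mouse half is the same pattern with WH_MOUSE_LL; one thread per event just keeps the sketch short, a real tool would use a queue):

    #include <windows.h>

    #define DELAY_MS 50

    typedef struct { WORD vk; DWORD flags; } DelayedKey;

    static DWORD WINAPI resend_thread(LPVOID param)
    {
        DelayedKey *k = (DelayedKey *)param;
        Sleep(DELAY_MS);                       /* the artificial lag */

        INPUT in = {0};
        in.type = INPUT_KEYBOARD;
        in.ki.wVk = k->vk;
        in.ki.dwFlags = (k->flags & LLKHF_UP) ? KEYEVENTF_KEYUP : 0;
        SendInput(1, &in, sizeof in);          /* re-inject the keystroke */

        HeapFree(GetProcessHeap(), 0, k);
        return 0;
    }

    static LRESULT CALLBACK kbd_proc(int code, WPARAM wp, LPARAM lp)
    {
        if (code == HC_ACTION) {
            KBDLLHOOKSTRUCT *k = (KBDLLHOOKSTRUCT *)lp;
            /* Skip events we injected ourselves, or we'd delay them forever. */
            if (!(k->flags & LLKHF_INJECTED)) {
                DelayedKey *d = HeapAlloc(GetProcessHeap(), 0, sizeof *d);
                d->vk = (WORD)k->vkCode;
                d->flags = k->flags;
                CloseHandle(CreateThread(NULL, 0, resend_thread, d, 0, NULL));
                return 1;                      /* swallow the original event */
            }
        }
        return CallNextHookEx(NULL, code, wp, lp);
    }

    int main(void)
    {
        HHOOK h = SetWindowsHookExW(WH_KEYBOARD_LL, kbd_proc,
                                    GetModuleHandleW(NULL), 0);
        MSG msg;                               /* LL hooks need a message loop */
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWindowsHookEx(h);
        return 0;
    }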

Ruby: Is there a way to capture keypresses and the timing between them in Ruby?

I'm trying to write a program that asks the user to enter a password and measures the timing between the keystrokes as they enter it. Is this possible?
Insofar as you can ask Ruby to give you raw I/O and then take a timestamp after each read returns, yes, you can. If you really need very high precision, though, Ruby itself will add some microseconds of latency.
Presuming that is not a problem, do something like this: get a raw file descriptor for /dev/tty, use ioctls to put it in raw mode, and use the read method to get each character as it is entered. This is, of course, messy, but what you are asking for is hard to do in a non-messy manner. (It is also not portable between OSes, though it is portable across versions of Unix. You will not be able to do precisely the same thing on Windows; you'll need different code for that.)
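For reference, here is the same technique sketched in C, with cfmakeraw()/tcsetattr() standing in for the raw ioctls (a Ruby version would wrap the same underlying calls):

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/tty", O_RDONLY);
        struct termios saved, raw;
        tcgetattr(fd, &saved);
        raw = saved;
        cfmakeraw(&raw);                 /* character-at-a-time, no echo */
        tcsetattr(fd, TCSANOW, &raw);

        char c;
        struct timespec prev, now;
        clock_gettime(CLOCK_MONOTONIC, &prev);
        while (read(fd, &c, 1) == 1 && c != '\r') {  /* Enter sends CR in raw mode */
            clock_gettime(CLOCK_MONOTONIC, &now);
            double ms = (now.tv_sec - prev.tv_sec) * 1e3
                      + (now.tv_nsec - prev.tv_nsec) / 1e6;
            fprintf(stderr, "key 0x%02x after %.3f ms\r\n", (unsigned char)c, ms);
            prev = now;
        }

        tcsetattr(fd, TCSANOW, &saved);  /* restore the terminal */
        close(fd);
        return 0;
    }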
I think the standard way is to use the curses library. To get a key press, you can use the getch method. Take Time.now right after getch returns to get the time immediately after the key was pressed; subtracting the time of the previous keypress gives you the interval.

Pipe output (stdout) from a running process (Win32 API)

I need to get (or pipe) the output from a process that is already running, using the Windows API.
Basically, my application should allow the user to select a window to pipe output from, and all of it will be displayed in a console. I would also like to know how to get a pipe on stderr later on.
Important: I did not start the process using CreateProcess() or otherwise. The process is already running, and all I have is its process ID (obtained via GetWindowThreadProcessId()).
The cleanest way of doing this without causing ill effects (such as may occur with the method Adam implied, swapping the existing stdout handle with your own) is to use hooking.
Inject a thread into the existing application and swap calls to WriteFile with an intercepted version that first gives you a copy of what's being written (filtered by handle, source, whatever), then passes it along to the real ::WriteFile with no harm done. Or you can intercept the call higher up by swapping out only printf or whichever call the software is using (some experimentation needed, obviously).
HOWEVER, Adam is spot-on when he says this isn't what you want to do. This is a last resort, so think very, very carefully before going down this line!
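With a hooking library such as Microsoft Detours, the intercepted-WriteFile idea looks roughly like the sketch below. This is only a sketch: it assumes detours.h is available, and injecting this DLL into the target process is a separate (and equally delicate) exercise.

    #include <windows.h>
    #include <detours.h>

    static BOOL (WINAPI *TrueWriteFile)(HANDLE, LPCVOID, DWORD,
                                        LPDWORD, LPOVERLAPPED) = WriteFile;

    static BOOL WINAPI HookedWriteFile(HANDLE h, LPCVOID buf, DWORD len,
                                       LPDWORD written, LPOVERLAPPED ov)
    {
        /* Only copy data headed for the process's stdout handle. */
        if (h == GetStdHandle(STD_OUTPUT_HANDLE)) {
            /* ... forward a copy of (buf, len) to our own pipe ... */
        }
        return TrueWriteFile(h, buf, len, written, ov);  /* no harm done */
    }

    BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach((PVOID *)&TrueWriteFile, HookedWriteFile);
            DetourTransactionCommit();
        } else if (reason == DLL_PROCESS_DETACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach((PVOID *)&TrueWriteFile, HookedWriteFile);
            DetourTransactionCommit();
        }
        return TRUE;
    }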
Came across this article from MS while searching on the topic.
http://support.microsoft.com/kb/190351
The concept of piping input and output on Unix is trivial; there seems no great reason for it to be so complex on Windows. - Karl
Whatever you're trying to do, you're doing it wrong. If you're interacting with a program for which you have the source code, create a defined interface for your IPC: a socket, a named pipe, Windows messaging, a shared memory segment, a COM server, or whatever your preferred IPC mechanism is. Do not try to graft IPC onto a program that wasn't designed for IPC.
You have no control over how that process's stdout was set up, and it is not yours to mess with. It was created by its parent process and handed off to the child, and from there on out, it's in control of the child. You don't go in and change the carpets in somebody else's house.
Do not even think of going into that process, trying to CloseHandle its stdout, and CreateFile a new stdout pointing to your pipe. That's a recipe for disaster and will result in quirky behavior and "impossible" crashes.
Even if you could do what you wanted to do, what would happen if two programs did this?
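For completeness, the defined-interface route is not much code. A minimal sketch of the reading side of a named pipe (the pipe name is a placeholder; the producing program opens the same name with CreateFile and writes its output to it):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE pipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\my_console_feed",   /* hypothetical name */
            PIPE_ACCESS_INBOUND,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            1, 4096, 4096, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) return 1;

        if (ConnectNamedPipe(pipe, NULL) ||
            GetLastError() == ERROR_PIPE_CONNECTED) {
            char buf[4096];
            DWORD n;
            while (ReadFile(pipe, buf, sizeof buf, &n, NULL) && n > 0)
                fwrite(buf, 1, n, stdout);     /* echo to our console */
        }
        CloseHandle(pipe);
        return 0;
    }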

Speeding up text output on Windows, for a console

We have an application that has one or more text console windows that all essentially represent serial ports (text input and output, character by character). These windows have turned into a major performance problem in the way they are currently coded; we manage to spend a very significant chunk of time in them.
The current code is structured by having the window live its own little life, with the main application thread driving it via SendMessage() calls. This message passing seems to be the cause of incredible overhead; basically, taking a detour through the OS feels like the wrong thing to do.
Note that we do draw text lines as a whole where appropriate, so that easy optimization is already done.
I am not an expert in Windows coding, so I need to ask the community: is there some other architecture for driving the display of text in a window than sending messages like this? It seems pretty heavyweight.
Note that this is in C++ or plain C, as the main application is a portable C/C++/some other languages program that also runs on Linux and Solaris.
We did some more investigation; it seems that half of the overhead is preparing and sending each message using SendMessage, and the other half is the actual screen drawing. The SendMessage is done between functions in the same file...
So I guess all the advice given below is correct:
Look at how much is being redrawn
Draw things directly
Chunk drawing operations in time, so as not to send every character to the screen individually, aiming for a 10 to 20 Hz update rate for the serial console.
Can you accept ALL answers?
I agree with Will Dean that drawing in a console window or a text box is a performance bottleneck in itself. You first need to be sure that this isn't your problem. You say that you draw each line as a whole, but even that could be a problem if the data throughput is too high.
I recommend that you don't use SendMessage to pass data from the main application to the text window. Instead, use some other means of communication. Are these in the same process? If not, you could use shared memory; even a file on disk could do in some circumstances. Have the main application write to this file and the text console read from it. You could send a SendMessage notification to the text console to tell it to update the view, but do not send the message whenever a new line arrives; define a minimum interval between two subsequent updates.
You should try profiling properly, but in lieu of that I would stop worrying about the SendMessage, which is almost certainly not your problem, and think about the redrawing of the window itself.
You describe these as 'text console windows', but then say you have multiple of them. Are they actually Windows consoles, or something your application is drawing?
If the latter, then I would be looking at measuring my paint code, and whether I'm invalidating too much of a window on each update.
Are the output windows part of the same application? It almost sounds like they aren't...
If they are, you should look into the Observer design pattern to get away from SendMessage(). I've used it for the same type of use case, and it worked beautifully for me.
If you can't make a change like that, perhaps you could buffer your output for something like 100 ms so that you don't have so many outgoing messages per second, while still updating at a comfortable rate.
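A sketch of that buffering idea in Win32 terms (append_text_to_backing_store is a stand-in for the app's own text storage, not a real API):

    #include <windows.h>

    #define FLUSH_TIMER_ID 1
    #define FLUSH_MS       100                 /* ~10 Hz refresh */

    static char g_buf[8192];
    static size_t g_len;
    static CRITICAL_SECTION g_lock;

    /* Stand-in for the app's own scrollback/text store (hypothetical). */
    static void append_text_to_backing_store(const char *s, size_t n)
    {
        (void)s; (void)n;
    }

    /* Called by the producing code for every character; no message sent. */
    void console_put_char(char c)
    {
        EnterCriticalSection(&g_lock);
        if (g_len < sizeof g_buf)
            g_buf[g_len++] = c;
        LeaveCriticalSection(&g_lock);
    }

    LRESULT CALLBACK ConsoleWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_CREATE:
            InitializeCriticalSection(&g_lock);
            SetTimer(hwnd, FLUSH_TIMER_ID, FLUSH_MS, NULL);
            return 0;
        case WM_TIMER:
            EnterCriticalSection(&g_lock);
            if (g_len > 0) {
                append_text_to_backing_store(g_buf, g_len);
                g_len = 0;
                InvalidateRect(hwnd, NULL, FALSE);   /* one repaint per flush */
            }
            LeaveCriticalSection(&g_lock);
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }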
Are the output windows part of the same application? It almost sounds like they aren't...
Yes they are, all in the same process.
I did not write this code... but it seems like SendMessage is a bit heavy for this all-in-one-application case.
You describe these as 'text console windows', but then say you have multiple of them. Are they actually Windows consoles, or something your application is drawing?
Our app is drawing them; they are not regular Windows consoles.
Note that we also need to get data back when a user types into the console, as we quite often have interactive serial sessions. Think of it as very similar to what you would see in a serial terminal program -- but using an external application would obviously be even more expensive than what we have now.
If you can't make a change like that, perhaps you could buffer your output for something like 100 ms so that you don't have so many outgoing messages per second, while still updating at a comfortable rate.
Good point. Right now, every single character of output causes a message to be sent.
And when a newline makes the window scroll up, we redraw it line by line.
Note that we also have a scrollback buffer of arbitrary size, but scrolling back is an interactive case with much lower performance requirements.
