I am making it so that it stops asking for input upon CTRL-C.
What I have currently is that a separate goroutine, upon receiving a Ctrl-C, changes the value of a variable so it won't ask for another line. However, I can't seem to find a way around the read that is already in progress.
i.e. I still have to press enter once, to get out of the current iteration of reading for \n.
Is there perhaps a way to push a "\n" into stdin for the reader.ReadString to read? Or a way to stop its execution altogether?
The only decent mechanism that Go gives you to proceed when either of two things happens is select, and select only works on channel operations. So your only option is to change your signal-handler goroutine to write to a channel, add another goroutine that reads stdin and passes lines of input to a second channel, and then select on the two channels.
However, that still leaves your question half-unanswered: your main program can stop waiting for input on a Ctrl-C, but the goroutine that's reading input will still be waiting for input. In some cases that might be okay... if you will never need stdin again, or if you will go right back to processing lines in the same exact way. But if you want to do something other than ReadString from that reader, you're stuck... literally. The only solution I see would be to write your own state machine around Read or ReadByte that is capable of changing its behavior in response to external conditions, but that can easily get horribly complicated.
Basically, this looks like a case where Go simplifies things compared to the underlying system (not exposing anything like EINTR, not allowing select on filehandles), but ends up providing less power to the programmer.
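For illustration, here is a minimal sketch of that select-on-two-channels approach (the names and messages are mine, not from the question): one goroutine forwards SIGINT to a channel, another forwards lines read from stdin, and the main loop selects on both. As noted above, the reading goroutine itself will still be blocked in ReadString after the Ctrl-C.

package main

import (
    "bufio"
    "fmt"
    "os"
    "os/signal"
)

func main() {
    // Buffered channel that receives SIGINT (Ctrl-C) notifications.
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, os.Interrupt)

    // A separate goroutine owns stdin and forwards complete lines.
    lines := make(chan string)
    go func() {
        reader := bufio.NewReader(os.Stdin)
        for {
            line, err := reader.ReadString('\n')
            if err != nil {
                close(lines)
                return
            }
            lines <- line
        }
    }()

    for {
        select {
        case <-sigs:
            fmt.Println("interrupted, not asking for more input")
            return // the reader goroutine stays blocked in ReadString
        case line, ok := <-lines:
            if !ok {
                return // stdin hit EOF
            }
            fmt.Print("read: ", line)
        }
    }
}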
Related
I am currently doing the Ruby on the Web project for The Odin Project. The goal is to implement a very basic webserver that parses and responds to GET or POST requests.
My solution uses IO#gets and IO#read(maxlen) together with the Content-Length Header attribute to do the parsing.
Other solutions use IO#read_nonblock. I googled it, but was quite confused by the documentation. It's often mentioned together with Kernel#select, which didn't really help either.
Can someone explain to me what the nonblock calls do differently than the normal ones, how they avoid blocking the thread of execution, and how they play together with the Kernel#select method?
explain to me what the nonblock calls do differently than the normal ones
The crucial difference in behavior shows up when there is no data available to read at call time, but the stream is not yet at EOF:
read_nonblock() raises an exception that is a kind of IO::WaitReadable
normal read(length) waits until length bytes are read (or EOF)
how they avoid blocking the thread of execution
According to the documentation, #read_nonblock uses the read(2) system call after O_NONBLOCK has been set for the underlying file descriptor.
how they play together with the Kernel#select method?
There's also IO.select. We can use it in this case to wait for input data to become available, so that a subsequent read_nonblock() won't raise an error. This is especially useful when there are multiple input streams and it is not known which stream data will arrive on next, and therefore which stream read() would have to be called on.
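As a rough sketch of that pattern (here $stdin stands in for whatever IO objects you are multiplexing): let IO.select block until some stream is readable, then call read_nonblock on it, knowing it won't block.

ios = [$stdin]                        # could be several sockets in a server

loop do
  readable, = IO.select(ios)          # blocks until at least one IO has data
  readable.each do |io|
    begin
      data = io.read_nonblock(4096)   # safe: select said data is available
      print data
    rescue IO::WaitReadable
      next                            # rare race; just select again
    rescue EOFError
      exit
    end
  end
end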
With a blocking write you wait until the bytes have been written to the file; a nonblocking write, on the other hand, returns immediately. That means you can continue executing your program while the operating system asynchronously writes the data to the file. Then, when you want to write again, you use select to see whether the file is ready to accept the next write.
I want to press a key at any point, causing the simulation to stop without losing the data collected up to that point. I don't know how to do the exit command. Can you give me some examples?
I think WandMaker's comment tells only half of the story.
First, there is no general rule that Control-C will interrupt your program (see for instance here), but let's assume that this works in your case (since it will work in many cases):
If I understand you right, you want to somehow "process" the data collected up to this point. This means that you need to intercept the effect of Control-C (which, IF it works as expected, will make the terminal deliver a SIGINT), or that you need to intercept the "exit" (since the default behaviour upon receiving a SIGINT would be to exit the program).
If you want to go along the first path, you need to catch the Interrupt exception; see for example here.
If you want to follow the second route, you need to install an exit handler. Note that it will also be called when the program exits in the normal way.
If you are unsure which way is better (and I see no general way to recommend one over the other), try the first one. There is less chance that you will accidentally ruin something.
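A minimal sketch of both routes, where run_one_step and save_results are hypothetical placeholders for your own simulation step and saving code:

collected = []                         # data gathered so far

# Second route: an exit handler runs on any exit, including a normal one.
at_exit { save_results(collected) }    # save_results is a hypothetical helper

# First route: catch the Interrupt exception that Control-C raises.
begin
  loop { collected << run_one_step }   # run_one_step is hypothetical too
rescue Interrupt
  puts "Interrupted after #{collected.size} steps."
end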
I have to create a script (ksh or Perl) that starts a certain number of parallel jobs (other scripts), each of which runs as a foreground process in a separate session. In addition, I start a monitoring job that has to determine whether any of those scripts is expecting input from the operator, and switch to the corresponding session if necessary.
My problem is that I have not found a good way to determine that a process is expecting input. For a background process it's pretty easy: the process state is "stopped", which can easily be checked with the ps command. For a foreground process this does not work.
So far I have tried attaching to the process with dbx or truss to see if it's hanging on read, but this approach seems too heavyweight.
Could you suggest a better solution? Perl, shell, C, Java, etc. is OK as long as it's standard and does not require extra 3rd-party or OS-specific stuff to install.
Thank you.
What you're asking isn't possible, at least not reliably. The process may be using select or other polling method rather than blocking on a read call. You can't know whether it's waiting for operator input or busy doing other stuff, and in general it could be both (doing stuff in the background while being responsive to operator input).
The normal way for a program to signal that it's waiting for operator input is to print a prompt. Thus you should consider a session to be active if it's displayed a prompt since the last time you fed it input.
If your programs don't behave this way, you'll need to find some other program-specific way to know that these processes are waiting for input.
I have a Tcl script which takes a few minutes to run (the execution time varies based on the configuration).
While the script executes, I want users to have some idea of whether it's still running and how long it will take to complete.
Some of the ideas I've had so far:
1) Indicate progress with dots (...) that keep accumulating as each internal command runs. But again, that doesn't really give a first-time user a sense of how much more there is to go.
2) Use the revolving slash that I've seen used in many places.
3) Have an actual percentage completed output on screen. No idea if this is viable or how to go about it.
Does anyone have ideas on what could be done so that users of the script understand what's going on, and how to implement it?
Also, if I implement it using dots, how do I get each . printed on the same line? If I use puts in the Tcl script, each . just gets printed on the next line.
And for the revolving slash, I would need to overwrite something that was already printed on screen. How can I do this with Tcl?
First off, the reason you were having problems printing dots was that Tcl was buffering its output, waiting for a new line. That's often a useful behavior (often enough that it's the default) but it isn't wanted in this case so you turn it off with:
fconfigure stdout -buffering none
(The other buffering options are line and full, which offer progressively higher levels of buffering for improved performance but reduced responsiveness.)
Alternatively, do flush stdout after printing a dot. (Or print the dots to stderr, which is unbuffered by default due to mainly being for error messages.)
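With that in place, the dots version the question asked about needs nothing more than this (doOneStep and $tasks stand for your own per-step work):

fconfigure stdout -buffering none
foreach task $tasks {
    doOneStep $task          ;# your own work for this step
    puts -nonewline "."
}
puts ""                      ;# finish the line once done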
Doing a spinner isn't much harder than printing dots. The key trick is to use a carriage return (a non-printable character sometimes visualized as ^M) to move the cursor position back to the start of the line. It's nice to factor the spinner code out into a little procedure:
proc spinner {} {
    global spinnerIdx
    if {[incr spinnerIdx] > 3} {
        set spinnerIdx 0
    }
    set spinnerChars {/ - \\ |}
    puts -nonewline "\r[lindex $spinnerChars $spinnerIdx]"
    flush stdout
}
Then all you need to do is call spinner regularly. Easy! (Also, print something over the spinner once you've finished; just do puts "\r$theOrdinaryMessage".)
Going all the way to an actual progress meter is nice, and it builds on these techniques, but it requires that you work out how much processing there is to do and so on. A spinner is much easier to implement! (Especially if you've not yet nailed down how much work there is to do.)
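If you do know the total amount of work in advance, a rough sketch of a percentage readout, using the same carriage-return trick, might look like this (doOneStep and $total are placeholders for your own work and step count):

proc progress {current total} {
    set pct [expr {int(100.0 * $current / $total)}]
    puts -nonewline [format "\r%3d%% (%d of %d)" $pct $current $total]
    flush stdout
}

for {set i 1} {$i <= $total} {incr i} {
    doOneStep $i
    progress $i $total
}
puts ""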
The standard output stream is initially line buffered, so you won't see new output until you write a newline character, call flush or close it (which is automatically done when your script exits). You could turn this buffering off with...
fconfigure stdout -buffering none
...but diagnostics, errors, messages, progress, etc. should really be written to the stderr stream instead. It has buffering set to none by default, so you won't need fconfigure.
The systemu page says:
systemu can be used on any platform to return status, stdout, and stderr of any command. unlike other methods like open3/popen4 there is zero danger of full pipes or threading issues hanging your process or subprocess.
(https://github.com/ahoward/systemu)
Could anyone explain this a little bit?
Methods like popen and its various spinoffs are convenient and are part of the expected API for a full I/O library.
However, they must be used either casually or carefully because they are prone to deadlock. By casually, I mean, if you both write and read from the command, it's still OK as long as you either don't write a lot or don't read a lot. By carefully, I mean, you can move large amounts of data, but only if you keep the inner details of the operation in mind and deliberately engineer against deadlock.
Imagine writing lots of stuff to your popened command and then reading a result. If you write more than a pipe will buffer, then your process will sleep. That's OK in practice, most of the time, but what if the command has to write a lot of stuff? Now it may sleep and not finish reading input that you are sending. You won't finish sending input so you will never wake up and read results.
Deadlock!
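To make that concrete, here is an illustrative sketch using Open3.popen2 with cat as the child (any filter that writes as it reads behaves the same); with enough data it hangs exactly as described. Tools like systemu, or Open3.capture3, or reading in a separate thread, avoid this by draining the child's output while the input is still being fed.

require 'open3'

big_input = "x\n" * 5_000_000          # far more than a pipe buffer holds

Open3.popen2("cat") do |stdin, stdout, _wait_thr|
  stdin.write(big_input)   # cat echoes, its output pipe fills up, cat blocks
                           # on write and stops reading, so our write blocks too
  stdin.close
  puts stdout.read         # never reached: deadlock
end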