Tcl hangs when trying to close a Tcl pipe

When launching tclsh and typing this:
close [open "|tclsh" w]
it works fine.
But when ~/.tclshrc contains package require Tk, the same line makes tclsh HANG!
The same thing happens with all the GUI packages (Tk, Itk, Img, Iwidgets); with non-GUI packages such as Itcl it works fine.
How can I fix this? The goal is to make tclsh not hang when typing close [open "|tclsh" w] with package require Tk in ~/.tclshrc.
The same issue occurs with wish: close [open "|wish" w] makes wish HANG (even with an empty ~/.wishrc file)!
I see this on both 32- and 64-bit CentOS.
I have the following package versions: tcl-8.5.8, tk-8.5.8, img-1.3, itcl-3.4.b1, itk-3.3, iwidgets-4.0.1.

Tcl applications mostly exit when they have finished their script, whether or not it is provided interactively. However, the Tk package changes things around so that when the end of the script is reached, it instead goes into a loop handling events. If you're relying on an end-of-file causing things to exit, that's going to look a lot like a hang, but really close is just waiting properly for the GUI app to finish (so it can report the exit status of the subprocess).
The fix is to register a channel-readable event handler for stdin in the subprocess. There are a few ways to do this in detail, but here's a simple one that can go at the end of the bulk of code that you normally send:
proc ReadFromStdin {} {
    if {[gets stdin line] >= 0} {
        uplevel "#0" $line
    } elseif {[eof stdin]} {
        exit
    } else {
        # Partial read; try later when rest of data available
    }
}
fileevent stdin readable ReadFromStdin
This assumes that each line is a full executable command; that might not be true, of course, but writing the code to use info complete to compose lines is less clear and possibly unnecessary here. (You know what you're actually sending better than I…)
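If you do need to handle multi-line commands, a minimal sketch of the info complete approach could look like this (the buffer variable is an assumption of this sketch):
set buffer ""
proc ReadFromStdin {} {
    global buffer
    if {[gets stdin line] >= 0} {
        append buffer $line \n
        # Only evaluate once the accumulated text parses as a
        # syntactically complete Tcl script.
        if {[info complete $buffer]} {
            set script $buffer
            set buffer ""
            uplevel "#0" $script
        }
    } elseif {[eof stdin]} {
        exit
    }
}
fileevent stdin readable ReadFromStdin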

My thought would be that it's waiting for wish to finish running, as per the man page:
If channelId is a blocking channel for a command pipeline then close waits for the child processes to complete.
Since wish enters an infinite loop (the event loop) and never exits, the close command will hang. Along the same lines, [package require Tk] (I believe) starts the event loop, so will cause the same behavior.
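One way to sidestep the wait (a sketch, not a recommendation: a non-blocking close discards the child's exit status) is to make the channel non-blocking before closing it:
set pipe [open "|wish" w]
# close does not wait for the children of a non-blocking
# command pipeline, so this returns immediately (and the
# child's exit status is not reported).
fconfigure $pipe -blocking 0
close $pipe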
I'll admit, though, that I'm surprised it's loading .tclshrc at all, since
If there exists a file .tclshrc (or tclshrc.tcl on the Windows platforms) in the home directory of the user, interactive tclsh evaluates the file as a Tcl script just before reading the first command from standard input.
It seems odd to me that [open "|tclsh" w] winds up in an interactive shell.
As a side note, [package require Tk] seems like a really strange thing to do in .tclshrc. In theory, you won't always want Tk (the window and event loop) when running Tcl (i.e., command-line-only apps)... and, when you do want it, you know you do. To each their own, I suppose; it just seems odd to me.

Related

Python subprocess.Popen terminates when stdin=PIPE is specified

I'm new to subprocesses in Python. I need to spawn a number of independent subprocesses, keep them alive, and pass commands into them. At first sight, the subprocess library is what I'm looking for.
I've read the documentation, and as I understand it, to pass any command into a subprocess I have to specify its input.
I need to run commands via the Windows command line, so the toy example below is good enough: if I get it working, I'm pretty much done. Running the code below via IDLE opens a new cmd window printing a list of cwd files, but I can't write to it because stdin is not specified (I would write to it using p.stdin.write('DIR'), with 'DIR' being an example command).
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'])
Therefore I specify stdin as PIPE, as per the documentation.
from subprocess import Popen, PIPE
p = Popen(['cmd', '/K', 'DIR'], stdin=PIPE)
However, running the second snippet instantly terminates the opened cmd window. Is that the expected behavior? As far as I can find in the documentation, only p.kill() or p.terminate() end the child process. If so, what are the possible workarounds? If not, what am I doing incorrectly, and what other libraries should I be using? Thanks!

Ruby can't read TCP socket when run in background

I was writing a Slack bot in Ruby under Windows and everything worked just fine until I decided to run it on a Linux server. When I access my shell and run the script, it works correctly in the foreground, but once I move it to the background it stops working. I'm getting a timeout error on an HTTP request with Net::HTTP, or an EOFError on the socket read.
I'm using Ruby 2.3 on Debian 7.
I think the Ruby process stops on its own, because I only get the errors once I return the process to the foreground, and if I run ps aux while the process is in the background, it is listed with the "T" (stopped) flag.
Since I want to become more familiar with Linux, I'd like to know what is causing the issue, rather than how to solve it.
EDIT: I found that my user input handler is causing the problem. Here is the problematic bit:
def input_handler
  return Thread.new {
    loop do
      user_input = gets.chomp
    end
  }
end
The problem looks like it's gets.
By default gets reads from STDIN. The documentation says:
Returns (and assigns to $_) the next line from the list of files in ARGV (or $*), or from standard input if no files are present on the command line.
The code/thread will stop and wait for input from the keyboard, or it will read from the piped input if STDIN is redirected, or from a file given as a command-line parameter to the script.

Tcl on osx: Open pipe for reading: No `readable` event generated on EOF

I'll try to be specific. This is OSX Yosemite, Tcl 8.5.9. I'm doing something like this:
set fd [open "|$external"]
fconfigure $fd -blocking 0 -buffering line
fileevent $fd readable "set $tracername $fd"
Here tracername holds the name of a global variable (a fully qualified ::namespace name), which is used next here:
# Ok, now clear the variable
set $tracername ""
# If vwait causes error, it may happen that the process
# has finished before it could be added to the event list
# (just the script had no opportunity to see it). If this
# happens, just go on, and in the next roll you'll find it out anyway.
$mkv::debug "--- Waiting for an event from any of fileevent-registered processes ($tracername)..."
catch {vwait $tracername}
$mkv::debug "--- UNBLOCKED BY: $tracername=[set $tracername]"
This works perfectly on Linux and Cygwin. On Mac it stalls on vwait (to the best of my knowledge).
The catch is added just for the case when all "files" registered with fileevent have been closed: according to the documentation, calling vwait when no events are scheduled results in an error (to prevent waiting forever). However, I doubt this is the case here, because the command being started is a C++ compiler call that takes at least half a second to run.
The program is a Tcl version of the make tool, and this code is part of its parallel-build support. The problem happens when I run without parallel support; if I run at least two processes simultaneously, everything works fine.
What I observe when debugging is that the child process spawned by open |$external stops printing anything on its output, so it looks like it has finished. But normally I get a readable event, which gives me the opportunity to check [eof $fd] and close the channel.
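For reference, the handler I have in mind is roughly this shape (a sketch; the proc name is illustrative):
proc OnReadable {fd tracername} {
    # On a non-blocking channel, gets returns -1 when no full
    # line is available; eof distinguishes "no data yet" from
    # "process finished".
    while {[gets $fd line] >= 0} {
        puts $line
    }
    if {[eof $fd]} {
        close $fd
    }
    # tracername is a fully qualified (::namespace) name, so set
    # resolves it globally even inside this proc, waking the vwait.
    set $tracername $fd
}
fileevent $fd readable [list OnReadable $fd $tracername]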

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read either line by line, or command by command (for commands separated by ;s), with the exception of blocks such as if ... fi, which are read and interpreted as a chunk:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
It's funny that most OSes I know do NOT read the entire content of a script into memory; they run it from disk. Reading it in fully would allow making changes to the script while it is running. I don't understand why it's done that way, given that:
scripts are usually very small (and don't take much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you decide that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that script's run? You can go ahead and make the changes, save them, and ignore all output and actions done by the current run.
But sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run is behaving abnormally: it will typically skip some parts, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can already change it, just don't save it.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen), so it's not as if you must wait for the run to complete before you can even open the file again.
No, they are not, and there are many good reasons for that.
One thing you should keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether from the TTY, a pipe, a FIFO, or even a socket.
The shell reads from its source line by line until the kernel returns EOF.
Most shells have no extra support for interpreting files; they work with a file as they would work with a terminal.
In fact, this is considered a nice feature, because you can do interesting stuff like this: How do Linux binary installers (.bin, .sh) work?
You can prepend a shell script to a binary file. You can't do this with an interpreter, because it parses the whole file (or at least it would try to, and fail). A shell just interprets the file line by line and doesn't care about the garbage at the end; you only have to make sure the execution of the script terminates before it reaches the binary part.

Why will a script not accept input on stdin when invoked from bash's bind?

I have a number of bash/bind tools that I've written to simplify my command-line existence, and have recently wanted to make one of these tools interactive. If I try to read from stdin in one of these scripts, execution locks up at the point of the read. My example here is in Python, but I have seen exactly the same behavior when the invoked script is written in Ruby:
~> cat tmp.py
import sys
sys.stdout.write(">>>")
sys.stdout.flush()
foo = sys.stdin.readline()
print "foo: %s" % foo,
~> python tmp.py
>>>yodeling yoda
foo: yodeling yoda
So the script works. When I invoke it, I can give it input and it prints what I fed it.
~> bind -x '"\eh":"echo yodeling yoda"'
[output deleted]
~> [Alt-H]
yodeling yoda
bind works as expected. The bound keystroke invokes the command. I use this stuff all the time, but until now, I've only invoked scripts that required no stdin reads.
Let's bind [Alt-H] to the script:
~> bind -x '"\eh":"python tmp.py"'
[output deleted]
Now we're configured to have the script read from stdin while invoked by the bound keystroke. Hitting [Alt-H] starts the script, but nothing typed is echoed back. Even hitting [Ctrl-D] doesn't end it. The only way to get out is to hit [Ctrl-C], killing the process in the readline. (sys.stdin.read() suffers the same fate.)
~> [Alt-H]
>>>Traceback (most recent call last):
File "tmp.py", line 7, in <module>
foo = sys.stdin.readline()
KeyboardInterrupt
As I mentioned at the top, I see the same issue with ruby, so I know it's nothing to do with the language I'm using. (I've omitted the script.)
~> bind -x '"\eh":"ruby tmp.rb"'
[Output deleted]
~> [Alt-H]
>>>tmp.rb:3:in `gets': Interrupt
        from tmp.rb:3
I've looked through the Bash Reference Manual entry on bind, and it says nothing about a restriction on input. Any thoughts?
EDIT:
If I cat /proc/[PID]/fd/0 while the process is stuck, I see the script's input being displayed. (Oddly enough, a fair number of characters - seemingly at random - fail to appear here. This symptom only appears after I've given a few hundred bytes of input.)
Found this, a description of how and when a terminal switches between cooked and raw modes. Calling stty cooked and stty echo at the beginning of prompting, then stty sane or stty raw afterward, triggers a new cascade of problems, mostly relating to how bound characters are handled; suffice it to say that it destroys most alt bindings (and more) until return has been hit a couple of times.
In the end, the best answer proved to be cooking the tty and manually turning on echo, then reverting the tty settings back to where they were when I started:
import os
import subprocess
import sys

def _get_terminal_settings():
    # Ask stty for the current terminal settings in a restorable form.
    proc = subprocess.Popen(['/bin/stty', '-g'], stdout=subprocess.PIPE)
    settings = proc.communicate()[0]
    # Switch the terminal to cooked mode and turn echo back on.
    os.system('stty cooked echo')
    return settings

def _set_terminal_settings(settings):
    os.system('stty %s' % settings)

...
...
settings = _get_terminal_settings()
user_input = sys.stdin.readline()
_set_terminal_settings(settings)
...
...
You should be able to do this in any language you choose.
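For instance, a rough Tcl equivalent of the same save/cook/restore dance might look like this (a sketch, assuming stdin is attached to the terminal; the proc names are my own):
proc get_terminal_settings {} {
    # Ask stty for the current settings in a restorable form;
    # <@stdin points stty at the script's own terminal.
    set settings [exec /bin/stty -g <@stdin]
    exec /bin/stty cooked echo <@stdin
    return $settings
}
proc set_terminal_settings {settings} {
    exec /bin/stty $settings <@stdin
}

set settings [get_terminal_settings]
set user_input [gets stdin]
set_terminal_settings $settings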
If you're curious about why this insanity is required, I would encourage you to read the link I added (under EDIT), above. The article doesn't cover anywhere near enough detail, but you'll at least understand more than I did when I started.
Hmm, my guess is that what's happening is that the Python script is running and waiting for input from stdin when you press [Alt-H], but that its stdin is not the same as the stdin of the calling shell. When you type something, it goes to the Bash shell's stdin, not the Python script's. Perhaps look up a way to "reverse pipe", or forward stdin from the bash shell to the stdin of the called script?
Edit:
Okay, I researched it a bit, and it looks like pipes might work. Here's a really informative link:
bash - redirect specific output from 2nd script back to stdin of 1st program?
