Detect end of Tcl background process in a Tcl script

I'm working on a program that uses an exec command to run a makefile. This can take a long time, so I want to run it in the background so the GUI doesn't lock up. However, I also want the GUI to be disabled and a progress bar to run only while the makefile is compiling.
So how can I detect when a background process has finished in Tcl?
Edit: It gets more complicated because my boss wants the command window to stay open (or be visible) so the user can see the progress of the make and see if it errors.
P.S. Would figuring out threading be easier? I need some way to prevent the GUI from locking up (prevent NOT RESPONDING).
Edit: The GUI is made with Tk.
I think Tk is single-threaded, which causes the problem. Or it could be that it defaults to single-threaded and I want to set it to multi-threaded.

As @glenn-jackman pointed out, the use of fileevent is preferred (because it should work everywhere).
proc handle_bgexec {callback chan} {
    append ::bgexec_data($chan) [read $chan]
    if {[eof $chan]} {
        # End of file: close the pipe (ignoring a nonzero exit status)
        # and call the callback with the collected output.
        catch {close $chan}
        {*}$callback $::bgexec_data($chan)
        unset ::bgexec_data($chan)
    }
}
proc bgexec {callback args} {
    set chan [open "| $args" r]
    fconfigure $chan -blocking false
    fileevent $chan readable [list handle_bgexec $callback $chan]
    return
}
Invoke this as bgexec job_done cmd /c start /wait cmd /c make all-all. job_done gets called with the output of the command after it finishes.
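To tie this back to the original GUI question, here is a minimal sketch of how bgexec could be wired into Tk; the widget names (.run, .p) and the build_done callback are hypothetical:
package require Tk
ttk::progressbar .p -mode indeterminate
ttk::button .run -text "Build" -command start_build
pack .run .p

proc start_build {} {
    # Disable the GUI and animate the progress bar while make runs.
    .run configure -state disabled
    .p start
    bgexec build_done cmd /c start /wait cmd /c make all-all
}

proc build_done {output} {
    # Runs from the event loop once the pipeline reaches end of file.
    .p stop
    .run configure -state normal
    puts $output
}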
It is also possible to use threads for this kind of thing, but that requires a threaded Tcl build (which is now the default on all platforms AFAIK, though older versions of Tcl, especially under Unix, were not built threaded by default) and the Thread package (which is included by default). An approach using threads would be:
thread::create "[list exec cmd /c start /wait cmd /c make all-all];[list thread::send [thread::id] {callback code}];thread::exit"
If you need to call this on a regular basis, it might be worthwhile to use a single worker thread instead of creating a new one for each job, as sketched below.
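A sketch of that single-worker variant (the Thread package is assumed; bgexec_threaded and its callback protocol are hypothetical names):
package require Thread

# One long-lived worker thread that sits in its event loop waiting for jobs.
set ::worker [thread::create {thread::wait}]

proc bgexec_threaded {callback cmd} {
    set main [thread::id]
    thread::send -async $::worker [list apply {{main callback cmd} {
        # exec blocks here in the worker, not in the GUI thread.
        catch {exec {*}$cmd} output
        # Hand the result back to the main thread's event loop.
        thread::send -async $main [list {*}$callback $output]
    }} $main $callback $cmd]
}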
Edit: Added /wait as a parameter for start to keep the first cmd running:
cmd /c start /wait cmd /c make all-all

You want to run the make process in a pipeline and use the event loop and fileevent to monitor its progress (see http://wiki.tcl.tk/880)
proc handle_make_output {chan} {
    # The channel is readable; try to read it.
    set status [catch { gets $chan line } result]
    if { $status != 0 } {
        # Error on the channel
        puts "error reading $chan: $result"
        set ::DONE 2
    } elseif { $result >= 0 } {
        # Successfully read the channel
        puts "got: $line"
    } elseif { [chan eof $chan] } {
        # End of file on the channel
        puts "end of file"
        set ::DONE 1
    } elseif { [chan blocked $chan] } {
        # Read blocked. Just return
    } else {
        # Something else
        puts "can't happen"
        set ::DONE 3
    }
}
set chan [open "|make" r]
chan configure $chan -blocking false
chan event $chan readable [list handle_make_output $chan]
vwait ::DONE
close $chan
I'm not certain about the use of vwait within Tk's event loop. Perhaps an expert will help me out here.

Related

expect fails when running proc inside proc

My script works fine (retrieves sftp prompt) when using one proc. But when I try to use proc inside proc, script gets stuck, and I do not know why.
Please do not refactor the code, that is not the point, I need to understand what is the issue here.
Working code:
proc sftp_connect {} {
    set times 0
    set connection_retry 2
    set timeout 1
    while { $times < $connection_retry } {
        spawn sftp ${SFTP_USER}@${SFTP_SERVER}
        expect {
            timeout { puts "Connection timeout"; exit 1 }
            default { exit 2 }
            "*assword:*" {
                send "${SFTP_PASSWORD}\n"
                expect {
                    "sftp>" { puts "Connected"; set times [expr $times+1]; exp_continue }
                }
            }
        }
    }
    send "quit\r"
}
sftp_connect
Debug output:
expect: does "\r\nsftp> " (spawn_id exp5) match glob pattern "sftp>"? yes
But after moving the password step into a separate proc, expect no longer matches the sftp prompt ("sftp>"):
proc sftp_send_password {} {
    send "${SFTP_PASSWORD}\n"
    expect {
        "sftp>" { puts "Connected"; set times [expr $times+1]; exp_continue }
    }
}
proc sftp_connect {} {
    set times 0
    set connection_retry 2
    set timeout 1
    while { $times < $connection_retry } {
        spawn sftp ${SFTP_USER}@${SFTP_SERVER}
        expect {
            timeout { puts "Connection timeout"; exit 1 }
            default { exit 2 }
            "*assword:*" { sftp_send_password }
        }
    }
    send "quit\r"
}
sftp_connect
Debug output:
expect: does "" (spawn_id exp0) match glob pattern "sftp>"? yes
I don't have my copy of "Exploring Expect" handy, but I think you're running into a variable scoping issue. spawn invisibly sets a variable named spawn_id. When you call spawn in a proc, that variable is scoped only for that proc. Declare it as global:
proc sftp_connect {} {
    global spawn_id
    # ... rest is the same
}
I think you don't have to do the same thing in sftp_send_password because expect has a more forgiving scoping scheme than Tcl: if expect does not find a local variable, it looks in the global namespace.
Your sftp_send_password proc will not affect the times variable in sftp_connect, though, due to the same variable scoping issue. I'd recommend:
proc sftp_send_password {times_var} {
    upvar 1 $times_var times ;# link this var to the one in the caller
    send "${SFTP_PASSWORD}\n"
    expect {
        "sftp>" { puts "Connected"; incr times; exp_continue }
    }
    # note use of `incr` instead of `expr`
}
And then sftp_connect passes the times variable name:
sftp_send_password times
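Putting both fixes together, the calling proc would look roughly like this (a sketch that keeps the original structure):
proc sftp_connect {} {
    global spawn_id   ;# otherwise spawn writes spawn_id only in this proc's scope
    set times 0
    set connection_retry 2
    set timeout 1
    while { $times < $connection_retry } {
        spawn sftp ${SFTP_USER}@${SFTP_SERVER}
        expect {
            timeout { puts "Connection timeout"; exit 1 }
            default { exit 2 }
            "*assword:*" { sftp_send_password times }
        }
    }
    send "quit\r"
}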
The following is from the expect man page:
Expect takes a rather liberal view of scoping. In particular, variables read by commands specific to the Expect program will be sought first from the local scope, and if not found, in the global scope. For example, this obviates the need to place global timeout in every procedure you write that uses expect. On the other hand, variables written are always in the local scope (unless a global command has been issued). The most common problem this causes is when spawn is executed in a procedure. Outside the procedure, spawn_id no longer exists, so the spawned process is no longer accessible simply because of scoping. Add a global spawn_id to such a procedure.

Event loop in Tcl

I am a beginner in Tcl. I am trying to learn Tcl without getting involved in Tk. I came across commands like vwait and after, but I was confused, as most explanations involved the notion of an event loop and mostly demonstrated the concept using Tk. I would like to understand the notion of events and the event loop and how the commands I mentioned relate to them; please refer me to some reference for this. The explanation should not use Tk in its examples, should use no Tcl extensions, and should assume no prior knowledge of events. Some (minimal) toy examples and real-world applications of the Tcl event loop other than GUI programming/Tk would be appreciated.
I came across this tutorial on the Tclers wiki. I am looking for OTHER references or explanations like it.
If you're not using Tk, the main reasons for using the event loop are for waiting in the background while you do some other I/O-bound task, and for handling server sockets.
Let's look at server sockets.
When you open a server socket with:
socket -server myCallbackProcedure 12345
You're arranging for an event handler to be set on the server socket so that when there is an incoming connection, that connection is converted to a normal socket and your supplied callback procedure (myCallbackProcedure) is called to handle the interaction with that socket. That's often in turn done by setting a fileevent handler so that incoming data is processed when it arrives instead of blocking the process waiting for it, but it doesn't have to be.
The event loop? That's a piece of code that calls into the OS (via select(), poll(), WaitForMultipleObjects(), etc., depending on OS and build options) to wait until something happens on any nominated channel or a timeout occurs. It's very efficient, since the thread making the call can be suspended while waiting. If something happens, the OS call returns and Tcl arranges for the relevant callbacks to be called. (There's a queue internally.) It's a loop because once the events are processed, it's normal to go back and wait for some more. (That's what Tk does until there are no more windows for it to control, and what vwait does until the variable it is waiting for is set by some event handler.)
Asynchronous waits are managed using a time-ordered queue, and translate into setting the timeout on the call into the OS.
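As a minimal non-socket toy example, after schedules a timer event on that queue and vwait runs the event loop until a handler sets the variable (the tick proc here is hypothetical):
proc tick {n} {
    puts "tick $n"
    if {$n >= 3} {
        set ::done 1
    } else {
        after 1000 [list tick [expr {$n + 1}]]
    }
}
after 1000 [list tick 1]
vwait ::done   ;# process timer events until ::done is written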
An example:
socket -server myCallbackProcedure 12345
proc myCallbackProcedure {channel clientHost clientPort} {
    puts "Connection from $clientHost"
    puts $channel "Hi there!"
    flush $channel
    close $channel
}
vwait forever
# The “forever” is an idiom; it's just a variable that isn't used elsewhere
# and so is never set, and it indicates that we're going to run the process
# until we kill it manually.
A somewhat more complicated example with asynchronous connection handling so we can serve multiple connections at once (CPU needed: minimal):
socket -server myCallbackProcedure 12345
proc myCallbackProcedure {channel clientHost clientPort} {
    puts "Connection from $clientHost"
    fileevent $channel readable [list incoming $channel $clientHost]
    fconfigure $channel -blocking 0 -buffering line
    puts $channel "Hi there!"
}
proc incoming {channel host} {
    if {[gets $channel line] >= 0} {
        puts $channel "You said '$line'"
    } elseif {[eof $channel]} {
        puts "$host has gone"
        close $channel
    }
}
vwait forever
An even more complicated example that will close connections 10 seconds (= 10000ms) after the last message on them:
socket -server myCallbackProcedure 12345
proc myCallbackProcedure {channel clientHost clientPort} {
    global timeouts
    puts "Connection from $clientHost"
    set timeouts($channel) [after 10000 [list timeout $channel $clientHost]]
    fileevent $channel readable [list incoming $channel $clientHost]
    fconfigure $channel -blocking 0 -buffering line
    puts $channel "Hi there!"
}
proc incoming {channel host} {
    global timeouts
    after cancel $timeouts($channel)
    if {[gets $channel line] >= 0} {
        puts $channel "You said '$line'"
    } elseif {[eof $channel]} {
        puts "$host has gone"
        close $channel
        unset timeouts($channel)
        return
    }
    # Reset the timeout
    set timeouts($channel) [after 10000 [list timeout $channel $host]]
}
proc timeout {channel host} {
    global timeouts
    puts "Timeout for $host, closing anyway..."
    close $channel
    unset -nocomplain timeouts($channel)
}
vwait forever

expect - how to discard the buffer if expected string is not found

I need to spawn a script that produces lots of output, which makes regex matching of the output slow. The buffer also fills quickly, even when I use quite a large match_max value.
I would like to check the output for a particular string. If the string does not exist, I would like to discard the output read so far.
I have tried using default matches, globs and negative regexes to catch the unwanted strings, but could not get this working.
How can this be done with expect?
This 'seems' to work (more testing is required):
set success_string "\[INFO\] Started Jetty\r\n"
spawn "/usr/bin/mvn" "-pl" ":cloud-client-ui" "jetty:run"
expect {
    -re "(\[^\r]*\)\r\n" {
        set current_line $expect_out(buffer)
        if { [string equal "$current_line" "$success_string"] } {
            puts "exiting with matched string $current_line"
            exit 0
        } else {
            puts "discarding $current_line"
            exp_continue
        }
    }
    eof { puts "eof"; exit 1 }
    timeout { puts "timeout"; exit 1 }
}

D: executeShell on Windows to run another program not returning immediately

I'm using D as a scripting language for Windows 7 console stuff to automate boring tasks. One of my scripts (open.exe) is supposed to allow me to open stuff from the command line without me having to specify which program I use (I have a configuration file with this stuff). Now, I use executeShell to do this, and call something like start [name of program I want to use] [name of input file]. If I do this directly from the shell, it returns immediately, but if I do it using my D script, it doesn't return until the program that it opens is closed. What should I do to allow it to return immediately?
For reference purposes, this is the business logic of my script (the main method just does some argument parsing for piping purposes):
immutable path = "some//path//going//to//config//file.conf";

void process(string input) {
    string extension = split(input, ".")[1]; // get file extension from input
    auto config = File(path, "r");
    auto found = false;
    while (!config.eof()) {
        auto line = chomp(config.readln());
        if (line[0] != '#') { // skip comment lines
            auto divided = split(line, ":");
            if (divided[0] == extension) {
                found = true;
                auto command = "start " ~ divided[1] ~ " " ~ input;
                auto result = executeShell(command);
                // test for error code and output if necessary
                writeln(result.output);
            }
        }
    }
    if (!found)
        writeln("ERROR: Don't know how to open " ~ input);
}
From the top of the std.process documentation:
Execute and wait for completion, collect output - executeShell
The Windows start program spawns a process and exits immediately. D's executeShell does something else. If you'd like to spawn another program, use the appropriate functions: spawnProcess or spawnShell.
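For example, a minimal sketch (the command string is just a placeholder):
import std.process : spawnShell;

void main() {
    // Unlike executeShell, spawnShell returns a Pid immediately and does
    // not wait for the child, so the script can exit while the program
    // stays open.
    auto pid = spawnShell("start notepad example.txt");
}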

ruby Process.spawn stdout => pipe buffer size limit

In Ruby, I'm using Process.spawn to run a command in a new process. I've opened a bidirectional pipe to capture stdout and stderr from the spawned process. This works great until the bytes written to the pipe (stdout from the command) exceed 64 KB, at which point the command never finishes. I'm thinking the pipe buffer size has been hit, and writes to the pipe are now blocked, causing the process to never finish. In my actual application, I'm running a long command that has lots of stdout that I need to capture and save when the process has finished. Is there a way to raise the buffer size, or better yet have the buffer flushed so the limit is not hit?
cmd = "for i in {1..6600}; do echo '123456789'; done" #works fine at 6500 iterations.
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
Process.wait(cmd_pid)
pipe_cmd_out.close
out = pipe_cmd_in.read
puts "child: cmd out length = #{out.length}"
UPDATE
Open3::capture2e does seem to work for the simple example I showed. Unfortunately, for my actual application, I need to be able to get the pid of the spawned process as well, and to have control of when I block execution. The general idea is that I fork a non-blocking process. In this fork, I spawn a command. I send the command pid back to the parent process, then I wait on the command to finish to get the exit status. When the command is finished, the exit status is sent back to the parent. In the parent, a loop iterates every second, checking the DB for control actions such as pause and resume. If it gets a control action, it sends the appropriate signal to the command pid to stop or continue it. When the command eventually finishes, the parent hits the rescue block, reads the exit status pipe, and saves to the DB. Here's what my actual flow looks like:
# pipes for communicating the command pid and exit status from child to parent
pipe_parent_in, pipe_child_out = IO.pipe
pipe_exitstatus_read, pipe_exitstatus_write = IO.pipe
child_pid = fork do
  pipe_cmd_in, pipe_cmd_out = IO.pipe
  cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
  pipe_child_out.write cmd_pid # send command pid to parent
  pipe_child_out.close
  Process.wait(cmd_pid)
  exitstatus = $?.exitstatus
  pipe_exitstatus_write.write exitstatus # send exitstatus to parent
  pipe_exitstatus_write.close
  pipe_cmd_out.close
  out = pipe_cmd_in.read
  # save out to DB
end
# blocking read to get the command pid from the child
pipe_child_out.close
cmd_pid = pipe_parent_in.read.to_i
loop do
  begin
    Process.getpgid(cmd_pid) # when the command is done, this raises
    @job.reload # refresh from DB
    # based on the status in the DB, pause / resume the command
    if @job.status == 'pausing'
      Process.kill('SIGSTOP', cmd_pid)
    elsif @job.status == 'resuming'
      Process.kill('SIGCONT', cmd_pid)
    end
  rescue
    # command is no longer running
    pipe_exitstatus_write.close
    exitstatus = pipe_exitstatus_read.read
    # save exit status to DB
    break
  end
  sleep 1
end
NOTE: I cannot have the parent poll the command output pipe because the parent would then be blocked waiting for the pipe to close. It would not be able to pause and resume the command via the control loop.
This code seems to do what you want, and may be illustrative.
cmd = "for i in {1..6600}; do echo '123456789'; done"
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
#exitstatus = :not_done
Thread.new do
Process.wait(cmd_pid);
#exitstatus = $?.exitstatus
end
pipe_cmd_out.close
out = pipe_cmd_in.read;
sleep(0.1) while #exitstatus == :not_done
puts "child: cmd out length = #{out.length}; Exit status: #{#exitstatus}"
In general, sharing data between threads (@exitstatus) requires more care, but it works here because it's only written once, by the thread, after initialization. (It turns out $?.exitstatus can return nil, which is why I initialized it to something else.) The call to sleep() is unlikely to execute even once, since the read() just above it won't complete until the spawned process has closed its stdout.
Indeed, your diagnosis is likely correct. You could implement a select-and-read loop on the pipe while waiting for the process to end, but you can likely get what you want more simply with the stdlib's Open3::capture2e.
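For the simple case, a sketch using Open3.capture2e, which drains the pipe for you while waiting, so the 64 KB buffer never blocks the child:
require 'open3'

cmd = "for i in {1..6600}; do echo '123456789'; done"
# capture2e merges stdout and stderr into one string and waits for exit.
out, status = Open3.capture2e(cmd)
puts "cmd out length = #{out.length}; exit status: #{status.exitstatus}"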
