Event loop in Tcl - events

I am a beginner in Tcl. I am trying to learn Tcl without getting involved in Tk. I came across commands like vwait and after, but I was confused because most explanations involve the notion of an event loop and mostly demonstrate the concept using Tk. I would like to understand the notion of events and the event loop and how the commands I mentioned relate to them; please refer me to some reference for this. The explanation should not use Tk in its examples, should use no Tcl extensions, and should assume no prior knowledge of events. Some (minimal) toy examples and real-world applications of the Tcl event loop other than GUI programming/Tk would be appreciated.
I came across this tutorial on the Tcler's Wiki. I am looking for OTHER references or explanations like this.

If you're not using Tk, the main reasons for using the event loop are for waiting in the background while you do some other I/O-bound task, and for handling server sockets.
Let's look at server sockets.
When you open a server socket with:
socket -server myCallbackProcedure 12345
You're arranging for an event handler to be set on the server socket so that when there is an incoming connection, that connection is converted to a normal socket and your supplied callback procedure (myCallbackProcedure) is called to handle the interaction with the socket. That in turn is often done by setting a fileevent handler so that incoming data is processed when it arrives instead of blocking the process waiting for it, but it doesn't have to be.
The event loop? That's a piece of code that calls into the OS (via select(), poll(), WaitForMultipleObjects(), etc., depending on OS and build options) to wait until something happens on any nominated channel or a timeout occurs. It's very efficient, since the thread making the call can be suspended while waiting. If something happens, the OS call returns and Tcl arranges for the relevant callbacks to be called. (There's a queue internally.) It's a loop because once the events are processed, it's normal to go back and wait for some more. (That's what Tk does until there are no more windows for it to control, and what vwait does until the variable it is waiting for is set by some event handler.)
Asynchronous waits are managed using a time-ordered queue, and translate into setting the timeout on the call into the OS.
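Here is a minimal toy example that uses only timer events, no sockets or Tk (a sketch; the tick proc and the done variable are names invented for the demo):
proc tick {n} {
    if {$n == 0} {
        set ::done 1   ;# writing the variable makes the vwait below return
        return
    }
    puts "tick $n"
    after 1000 [list tick [expr {$n - 1}]]   ;# queue the next timer event
}
tick 3      ;# prints once, then schedules the rest via the event loop
vwait done  ;# run the event loop until ::done is written
Note that after merely queues a timer event; nothing fires until vwait (or Tk, or update) runs the event loop.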
An example:
socket -server myCallbackProcedure 12345
proc myCallbackProcedure {channel clientHost clientPort} {
    puts "Connection from $clientHost"
    puts $channel "Hi there!"
    flush $channel
    close $channel
}
vwait forever
# The “forever” is an idiom; it's just a variable that isn't used elsewhere
# and so is never set, and it indicates that we're going to run the process
# until we kill it manually.
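To try this server out, run a tiny client in a second tclsh on the same machine (assuming port 12345 as above):
set sock [socket localhost 12345]
puts [gets $sock]   ;# prints the "Hi there!" greeting sent by the server
close $sock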
A somewhat more complicated example with asynchronous connection handling so we can serve multiple connections at once (CPU needed: minimal):
socket -server myCallbackProcedure 12345
proc myCallbackProcedure {channel clientHost clientPort} {
    puts "Connection from $clientHost"
    fileevent $channel readable [list incoming $channel $clientHost]
    fconfigure $channel -blocking 0 -buffering line
    puts $channel "Hi there!"
}
proc incoming {channel host} {
    if {[gets $channel line] >= 0} {
        puts $channel "You said '$line'"
    } elseif {[eof $channel]} {
        puts "$host has gone"
        close $channel
    }
}
vwait forever
An even more complicated example that will close connections 10 seconds (= 10000ms) after the last message on them:
socket -server myCallbackProcedure 12345
proc myCallbackProcedure {channel clientHost clientPort} {
    global timeouts
    puts "Connection from $clientHost"
    set timeouts($channel) [after 10000 [list timeout $channel $clientHost]]
    fileevent $channel readable [list incoming $channel $clientHost]
    fconfigure $channel -blocking 0 -buffering line
    puts $channel "Hi there!"
}
proc incoming {channel host} {
    global timeouts
    after cancel $timeouts($channel)
    if {[gets $channel line] >= 0} {
        puts $channel "You said '$line'"
    } elseif {[eof $channel]} {
        puts "$host has gone"
        close $channel
        unset timeouts($channel)
        return
    }
    # Reset the timeout
    set timeouts($channel) [after 10000 [list timeout $channel $host]]
}
proc timeout {channel host} {
    global timeouts
    puts "Timeout for $host, closing anyway..."
    close $channel
    unset -nocomplain timeouts($channel)
}
vwait forever

Related

Shell script AT commands: not able to send SMS through serial port

I have the below shell script (expect) where I am trying to send an SMS. I have referred to many Stack Overflow references and found out that Ctrl-Z maps to \x1a. However, even appending it to the message and sending that to the port, or sending Ctrl-Z separately to the port, didn't help me. It times out later.
The script is written to send the SMS in PDU format. Irrespective of that, I believe this is a generic issue with sending Ctrl-Z to a port. If you feel the script has some other errors, please share the solution for those as well.
Also, the length (34) mentioned below is (PDU_LENGTH - 2)/2 as per the specification. This length doesn't include the Ctrl-Z character.
set at_command "AT+CMGS=34\r"
set message_content "0011000C810056890......"
Script:
set PROMPT "0"
set timeout "$COMMAND_TIMEOUT"
send "$at_command"
expect {
"OK" { puts "Command Accepted\n"; }
"ERROR" { puts "Command Failed\n"; }
timeout { puts "Unable to connect to $HOSTIP at $HOSTPORT"; exit 1 }
"*>*" { set PROMPT "1"; }
}
if { "$PROMPT" == "1" } {
send "$message_content"
send "\x1a"
expect {
"OK" { puts "\nCommand accepted"; }
"ERROR" { puts "\nCommand failed"; }
"*>*" { puts "CTRL-Z dint reach UT. Error..."; }
"*" { puts "Unexpected return value received"; }
}
}
I am quite sure the script sends $message_content to the port, but it exits immediately after sending it.
OUTPUT:
AT+CMGS=34
>
I did something like this in C# with an SMS gateway module.
I had to switch to PDU mode first!
After that I had to transmit the expected PDU length and finally the PDU itself.
Every command has to be terminated with a carriage return (ASCII 13), and the PDU has to be committed with an ASCII 26 (Ctrl-Z) at the end.
Here you can see a schematic(!) flow of how I did it in C#:
1) Create PDU and get length
int len;
var pdu = PDUGenerator.GetPdu(destination, message, "", out len);
2) Switch to PDUMode
SendToCom("AT+CMGF=0" + System.Convert.ToChar(13));
3) Announce message length
SendToCom("AT+CMGS=" + len + System.Convert.ToChar(13));
4) Send PDU and commit
SendToCom(pdu + System.Convert.ToChar(26));
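Translated back into expect, that flow might look like the following sketch (hypothetical: it assumes the serial connection has already been spawned, and that len and pdu hold the announced length and the encoded message):
# 1) Switch to PDU mode; each command ends with a carriage return (\r = ASCII 13)
send "AT+CMGF=0\r"
expect "OK"
# 2) Announce the message length and wait for the '>' prompt
send "AT+CMGS=$len\r"
expect ">"
# 3) Send the PDU immediately followed by Ctrl-Z (\x1a = ASCII 26) to commit it
send -- "$pdu\x1a"
expect {
    "OK"    { puts "Message sent" }
    "ERROR" { puts "Message rejected" }
}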

expect fails when running proc inside proc

My script works fine (retrieves the sftp prompt) when using one proc. But when I try to use a proc inside a proc, the script gets stuck, and I do not know why.
Please do not refactor the code; that is not the point. I need to understand what the issue is here.
Working code:
proc sftp_connect {} {
    set times 0
    set connection_retry 2
    set timeout 1
    while { $times < $connection_retry } {
        spawn sftp ${SFTP_USER}@${SFTP_SERVER}
        expect {
            timeout { puts "Connection timeout"; exit 1 }
            default { exit 2 }
            "*assword:*" {
                send "${SFTP_PASSWORD}\n"
                expect {
                    "sftp>" { puts "Connected"; set times [expr $times+1]; exp_continue }
                }
            }
        }
    }
    send "quit\r"
}
sftp_connect
Debug output:
expect: does "\r\nsftp> " (spawn_id exp5) match glob pattern "sftp>"? yes
But after moving the password-sending into a separate proc, expect no longer retrieves the sftp prompt ("sftp>"):
proc sftp_send_password {} {
    send "${SFTP_PASSWORD}\n"
    expect {
        "sftp>" { puts "Connected"; set times [expr $times+1]; exp_continue }
    }
}
proc sftp_connect {} {
    set times 0
    set connection_retry 2
    set timeout 1
    while { $times < $connection_retry } {
        spawn sftp ${SFTP_USER}@${SFTP_SERVER}
        expect {
            timeout { puts "Connection timeout"; exit 1 }
            default { exit 2 }
            "*assword:*" { sftp_send_password }
        }
    }
    send "quit\r"
}
sftp_connect
Debug output:
expect: does "" (spawn_id exp0) match glob pattern "sftp>"? yes
I don't have my copy of "Exploring Expect" handy, but I think you're running into a variable scoping issue. spawn invisibly sets a variable named spawn_id. When you call spawn in a proc, that variable is scoped only to that proc. Declare it as global:
proc sftp_connect {} {
    global spawn_id
    # ... rest is the same
}
I think you don't have to do the same thing in sftp_send_password because expect has a more forgiving scoping scheme than Tcl: if an expect command does not find a local variable, it looks in the global scope.
Your sftp_send_password proc will not affect the times variable in sftp_connect, though, due to the same variable scoping issue. I'd recommend:
proc sftp_send_password {times_var} {
    upvar 1 $times_var times ;# link this var to the one in the caller
    send "${SFTP_PASSWORD}\n"
    expect {
        "sftp>" { puts "Connected"; incr times; exp_continue }
    }
    # note use of `incr` instead of `expr`
}
And then sftp_connect passes the name of its times variable:
sftp_send_password times
The following is from expect's man page:
Expect takes a rather liberal view of scoping. In particular, variables read by commands specific to the Expect program will be sought first from the local scope, and if not found, in the global scope. For example, this obviates the need to place global timeout in every procedure you write that uses expect. On the other hand, variables written are always in the local scope (unless a global command has been issued). The most common problem this causes is when spawn is executed in a procedure. Outside the procedure, spawn_id no longer exists, so the spawned process is no longer accessible simply because of scoping. Add a global spawn_id to such a procedure.
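A minimal sketch of that pitfall and its fix (assuming a Unix system where cat can be spawned):
proc start_child {} {
    global spawn_id   ;# without this, spawn_id vanishes when the proc returns
    spawn cat
}
start_child
send "hello\r"
expect "hello"   ;# works only because spawn_id was declared global above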

Ruby - Character-wise IO.select

Is there a way to have IO.select return a single input character without receiving an EOF? I would like to be able to read user input from the keyboard the same way I read from any other stream, like an open TCP socket connection. It would allow me to make an event loop like this:
loop {
  rd, _, _ = IO.select([long_lived_tcp_connection, stdin])
  case rd[0]
  when long_lived_tcp_connection
    handle_server_sent_event(rd[0].read)
  when stdin
    handle_keypress(rd[0].read)
  end
}
I've been looking into io/console but it doesn't quite give me this capability (although IO#getch comes pretty close).
You can set stdin to raw mode (taken from this answer):
begin
  state = `stty -g`
  `stty raw -echo -icanon isig`
  loop do
    rd, _, _ = IO.select([$stdin])
    handle_keypress(rd[0].getc)
  end
ensure
  `stty #{state}`
end
IO#getc returns a single character from stdin. Another option is IO#read_nonblock to read all available data.
To read one character at a time I would use:
STDIN.each_char {|char| p char}

ruby Process.spawn stdout => pipe buffer size limit

In Ruby, I'm using Process.spawn to run a command in a new process. I've opened a bidirectional pipe to capture stdout and stderr from the spawned process. This works great until the bytes written to the pipe (stdout from the command) exceed 64 KB, at which point the command never finishes. I'm thinking the pipe buffer size has been hit, and writes to the pipe are now blocked, causing the process to never finish. In my actual application, I'm running a long command that has lots of stdout that I need to capture and save when the process has finished. Is there a way to raise the buffer size, or better yet to have the buffer flushed so the limit is not hit?
cmd = "for i in {1..6600}; do echo '123456789'; done" #works fine at 6500 iterations.
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
Process.wait(cmd_pid)
pipe_cmd_out.close
out = pipe_cmd_in.read
puts "child: cmd out length = #{out.length}"
UPDATE
Open3::capture2e does seem to work for the simple example I showed. Unfortunately, for my actual application I need to be able to get the pid of the spawned process as well, and to have control over when I block execution. The general idea is that I fork a non-blocking process. In this fork, I spawn a command, send the command's pid back to the parent process, then wait on the command to get its exit status. When the command is finished, the exit status is sent back to the parent. In the parent, a loop iterates every second, checking the DB for control actions such as pause and resume. If it gets a control action, it sends the appropriate signal to the command pid to stop or continue it. When the command eventually finishes, the parent hits the rescue block, reads the exit status pipe, and saves the status to the DB. Here's what my actual flow looks like:
# pipes for communicating the command pid and exit status from child to parent
pipe_parent_in, pipe_child_out = IO.pipe
pipe_exitstatus_read, pipe_exitstatus_write = IO.pipe
child_pid = fork do
  pipe_cmd_in, pipe_cmd_out = IO.pipe
  cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
  pipe_child_out.write cmd_pid # send command pid to parent
  pipe_child_out.close
  Process.wait(cmd_pid)
  exitstatus = $?.exitstatus
  pipe_exitstatus_write.write exitstatus # send exitstatus to parent
  pipe_exitstatus_write.close
  pipe_cmd_out.close
  out = pipe_cmd_in.read
  # save out to DB
end

# blocking read to get the command pid from the child
pipe_child_out.close
cmd_pid = pipe_parent_in.read.to_i
loop do
  begin
    Process.getpgid(cmd_pid) # when the command is done, this raises an exception
    @job.reload # refresh from DB
    # based on status in the DB, pause / resume command
    if @job.status == 'pausing'
      Process.kill('SIGSTOP', cmd_pid)
    elsif @job.status == 'resuming'
      Process.kill('SIGCONT', cmd_pid)
    end
  rescue
    # command is no longer running
    pipe_exitstatus_write.close
    exitstatus = pipe_exitstatus_read.read
    # save exit status to DB
    break
  end
  sleep 1
end
NOTE: I cannot have the parent poll the command output pipe because the parent would then be blocked waiting for the pipe to close. It would not be able to pause and resume the command via the control loop.
This code seems to do what you want, and may be illustrative.
cmd = "for i in {1..6600}; do echo '123456789'; done"
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
#exitstatus = :not_done
Thread.new do
Process.wait(cmd_pid);
#exitstatus = $?.exitstatus
end
pipe_cmd_out.close
out = pipe_cmd_in.read;
sleep(0.1) while #exitstatus == :not_done
puts "child: cmd out length = #{out.length}; Exit status: #{#exitstatus}"
In general, sharing data between threads (@exitstatus) requires more care, but it works here because it's only written once, by the thread, after initialization. (It turns out $?.exitstatus can return nil, which is why I initialized it to something else.) The call to sleep() is unlikely to execute even once, since the read() just above it won't complete until the spawned process has closed its stdout.
Indeed, your diagnosis is likely correct. You could implement a select and read loop on the pipe while waiting for the process to end, but likely you can get what you want more simply with stdlib Open3::capture2e.

Detect end of Tcl background process in a Tcl script

I'm working on a program that uses an exec command to run a makefile. This can take a long time, so I want to put it in the background so the GUI doesn't lock up. However, I also want the GUI to be disabled and a progress bar to run only while make is running.
So how can I detect when a background process has finished in Tcl?
Edit: It gets more complicated because my boss wants the command window to stay open (or be visible) so the user can see the progress of the make and see if it errors.
P.S. Would figuring out threading be easier? I need some way to prevent the GUI from locking up (prevent NOT RESPONDING).
Edit: The GUI is made with Tk.
I think Tk being single-threaded is what causes the problem. Or it could be that it defaults to single-threaded and I want to set it to multi-threaded.
As @glenn-jackman pointed out, the use of fileevent is preferred (because it should work everywhere).
proc handle_bgexec {callback chan} {
    append ::bgexec_data($chan) [read $chan]
    if {[eof $chan]} {
        # end of file: close the channel, then hand the collected output to the callback
        close $chan
        {*}$callback $::bgexec_data($chan)
        unset ::bgexec_data($chan)
    }
}
proc bgexec {callback args} {
    set chan [open "| $args" r]
    fconfigure $chan -blocking false
    fileevent $chan readable [list handle_bgexec $callback $chan]
    return
}
Invoke this as bgexec job_done cmd /c start /wait cmd /c make all-all. job_done gets called with the output of the command after it finishes.
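For instance, a minimal job_done callback for that invocation (a sketch; in a real GUI this is where the progress bar would be stopped and the widgets re-enabled):
proc job_done {output} {
    # runs from the event loop once the pipeline hits end-of-file
    puts "make finished; captured [string length $output] characters of output"
}
bgexec job_done make all-all
vwait forever   ;# only needed when no other event loop (such as Tk's) is running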
It is also possible to use threads for this kind of thing, but that requires a threaded Tcl build (which is now the default on all platforms AFAIK, though older versions of Tcl, especially under Unix, did not build a threaded Tcl by default) and the Thread package (which is included by default). An approach using threads would be:
thread::create "[list exec cmd /c start /wait cmd /c make all-all];[list thread::send [thread::id] {callback code}];thread::exit"
If you need to do this on a regular basis, it might be worthwhile to use a single worker thread instead of creating a new one for each job.
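A sketch of that single-worker variant (it assumes the Thread package; thread::send -async posts the script to the worker and later delivers the result into a variable of this interpreter, which the event loop can watch):
package require Thread
set worker [thread::create {thread::wait}]   ;# one long-lived worker thread

# Post a job without blocking the GUI thread; the result lands in ::joboutput.
thread::send -async $worker {exec make all-all} ::joboutput
vwait ::joboutput   ;# or watch the variable with a trace instead of blocking
puts "make said: $::joboutput"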
Edit: Added /wait as a parameter for start to keep the first cmd running.
cmd /c start /wait cmd /c make all-all
You want to run the make process in a pipeline and use the event loop and fileevent to monitor its progress (see http://wiki.tcl.tk/880)
proc handle_make_output {chan} {
    # The channel is readable; try to read it.
    set status [catch { gets $chan line } result]
    if { $status != 0 } {
        # Error on the channel
        puts "error reading $chan: $result"
        set ::DONE 2
    } elseif { $result >= 0 } {
        # Successfully read the channel
        puts "got: $line"
    } elseif { [chan eof $chan] } {
        # End of file on the channel
        puts "end of file"
        set ::DONE 1
    } elseif { [chan blocked $chan] } {
        # Read blocked. Just return
    } else {
        # Something else
        puts "can't happen"
        set ::DONE 3
    }
}
set chan [open "|make" r]
chan configure $chan -blocking false
chan event $chan readable [list handle_make_output $chan]
vwait ::DONE
close $chan
I'm not certain about the use of vwait within Tk's event loop. Perhaps an expert will help me out here.
