How to write specified arguments in tcl something like -h [duplicate] - shell

Does anyone know a standard package for Tcl to easily parse input arguments, or a ready-made proc? (I only have 3 flags, but something general is preferable.)

The cmdline documentation includes an example. Here is a simple one:
package require cmdline
set parameters {
    {server.arg "" "Which server to search"}
    {debug "Turn on debugging, default=off"}
}
set usage "- A simple script to demo cmdline parsing"
array set options [cmdline::getoptions ::argv $parameters $usage]
parray options
Sample runs:
$ tclsh simple.tcl
options(debug) = 0
options(server) =
$ tclsh simple.tcl -server google.com
options(debug) = 0
options(server) = google.com
$ tclsh simple.tcl -server google.com -debug
options(debug) = 1
options(server) = google.com
$ tclsh simple.tcl -help
simple - A simple script to demo cmdline parsing
-server value Which server to search <>
-debug Turn on debugging, default=off
-help Print this message
-? Print this message
while executing
"error [usage $optlist $usage]"
(procedure "cmdline::getoptions" line 15)
invoked from within
"cmdline::getoptions ::argv $parameters $usage"
invoked from within
"array set options [cmdline::getoptions ::argv $parameters $usage]"
(file "simple.tcl" line 11)
Discussion
Unlike most Linux utilities, Tcl uses a single dash instead of double dashes for command-line options
When a flag ends with .arg, that flag expects an argument to follow, as in the case of server.arg
The debug flag does not end with .arg, so it does not expect any argument
The user defines the command-line parameters as a list of lists. Each sub-list contains 2 or 3 parts:
The flag (e.g. debug)
The default value (e.g. 0), only if the parameter takes an argument (i.e. the flag ends with .arg); see the sketch below
And the help message
Invoke usage/help with -help or -?; however, the output is not pretty, see the last sample run.
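For example, a hypothetical port.arg flag with a non-empty default would be declared like this (the flag name and default are made up for illustration; $usage is the string defined above):
set parameters {
    {port.arg 8080 "Port to connect to, default=8080"}
    {debug         "Turn on debugging, default=off"}
}
array set options [cmdline::getoptions ::argv $parameters $usage]
# options(port) stays 8080 unless the user passes -port <value>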
Update: Help/Usage
I have been thinking about the message output when the user invokes help (see the last sample run above). To get around that ugly output, you need to trap the error yourself:
set usage "- A simple script to demo cmdline parsing"
if {[catch {array set options [cmdline::getoptions ::argv $parameters $usage]}]} {
puts [cmdline::usage $parameters $usage]
} else {
parray options
}
Sample run 2:
$ tclsh simple.tcl -?
simple - A simple script to demo cmdline parsing
-server value Which server to search <>
-debug Turn on debugging, default=off
-help Print this message
-? Print this message

Tcllib has such a package, cmdline. It's a bit underdocumented, but it works.

Here is a simple, native, no-package argument parser:
#
# arg_parse -- simple argument parser
# Example: `arg_parse {help version} {with-value} {-with-value 123 positional arguments}`
# will return:
# `positionals {positional arguments} with-value 123`
#
# @param boolean_flags flags which do not require an additional argument (like help)
# @param argument_flags flags which require a value (-with-value value)
# @param args the command-line arguments that were received
#
# @return stringified array of parsed arguments
#
proc arg_parse { boolean_flags argument_flags args } {
    set argsarr(positionals) {}
    for {set i 0} {$i < [llength $args]} {incr i} {
        set arg [lindex $args $i]
        if { [sstartswith $arg "-"] } {
            set flag [string range $arg 1 end]
            if { [lsearch $boolean_flags $flag] >= 0 } {
                set argsarr($flag) 1
            } elseif { [lsearch $argument_flags $flag] >= 0 } {
                incr i
                set argsarr($flag) [lindex $args $i]
            } else {
                puts "ERROR: Unknown flag argument: $arg"
                return
            }
        } else {
            lappend argsarr(positionals) $arg
        }
    }
    return [array get argsarr]
}
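Note that arg_parse relies on a sstartswith helper that isn't shown above; presumably something like this minimal sketch:
# Assumed helper: returns 1 if $str starts with $prefix, 0 otherwise.
proc sstartswith { str prefix } {
    expr {[string first $prefix $str] == 0}
}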
Using the argument parser:
#
# USE argument parser:
#
proc my_awesome_proc { args } {
    array set argsarr [arg_parse "help version" "with-value" {*}$args]
    parray argsarr
}
Using my_awesome_proc:
% my_awesome_proc -help
argsarr(help) = 1
argsarr(positionals) =
% my_awesome_proc -with-value 123
argsarr(positionals) =
argsarr(with-value) = 123
% my_awesome_proc -wrong
ERROR: Unknown flag argument: -wrong
% my_awesome_proc positional arguments
argsarr(positionals) = positional arguments
%

What is the meaning of "-" in getopts template?

What is the meaning of the following template in getopts?
while getopts ':x:y-:' val;
I know that it expects two options, -x and -y, but what is the meaning of the - symbol at the end of the template?
Empirical approach
I was unable to find any documentation about this "-" in the option string. So, I tried an empirical approach to see how it influences the behavior of getopts. I found that passing "--something" to the script (without spaces after "--") makes it accept "--" as an option and report "something" in OPTARG:
#!/bin/bash
xopt=
yopt=
mopt=
while getopts ':x:y-:' val
do
    case $val in
        x) xopt=1
           xval="$OPTARG";;
        y) yopt=1;;
        -) mopt=1
           mval="$OPTARG";;
        ?) echo "Usage: $0: [-x value] [-y] [--long_opt_name] args" >&2
           exit 2;;
    esac
done
[ ! -z "$xopt" ] && echo "Option -x specified with parameter '$xval'"
[ ! -z "$yopt" ] && echo "Option -y specified"
[ ! -z "$mopt" ] && echo "Option -- specified with optname '$mval'"
shift $(($OPTIND - 1))
echo "Remaining arguments are: $*"
Examples of executions:
$ t.sh --v
Option -- specified with optname 'v'
Remaining arguments are:
$ t.sh --vv other1 other2
Option -- specified with optname 'vv'
Remaining arguments are: other1 other2
$ t.sh --help -x 123 -y others
Option -x specified with parameter '123'
Option -y specified
Option -- specified with optname 'help'
Remaining arguments are: others
$ t.sh --help -x 123 -- -y others
Option -x specified with parameter '123'
Option -- specified with optname 'help'
Remaining arguments are: -y others
$ t.sh -y -x val --x -- param1 -h -j -x -y
Option -x specified with parameter 'val'
Option -y specified
Option -- specified with optname 'x'
Remaining arguments are: param1 -h -j -x -y
Would it be a "hidden" feature to manage GNU-like long options, but without parameters (i.e. only the "--long_opt_name"), or am I exploiting the side effect of a bug? Anyway, relying on such undocumented behavior is not advised, as it may change after some future fix or evolution of the command.
Nevertheless, if a space follows the double dash (i.e. "--" is passed on its own), it continues to play its usual documented role of separating options from additional parameters:
$ t.sh --help -y -x val -- param1 -h -j -x -y
Option -x specified with parameter 'val'
Option -y specified
Option -- specified with optname 'help'
Remaining arguments are: param1 -h -j -x -y
$ t.sh -- -v
Remaining arguments are: -v
Verification in the source code
As getopts is a builtin of bash, I downloaded its source code (version 5.0). The builtins are located in the eponymous sub-directory; the getopts source code is builtins/getopts.def. For each argument on the command line, it calls sh_getopt(argc, argv, optstr). This function is defined in builtins/getopt.c:
[...]
int
sh_getopt (argc, argv, optstring)
int argc;
char *const *argv;
const char *optstring;
{
[...]
/* Look at and handle the next option-character. */
c = *nextchar++; sh_charindex++;
temp = strchr (optstring, c);
sh_optopt = c;
/* Increment `sh_optind' when we start to process its last character. */
if (nextchar == 0 || *nextchar == '\0')
{
sh_optind++;
nextchar = (char *)NULL;
}
if (sh_badopt = (temp == NULL || c == ':'))
{
if (sh_opterr)
BADOPT (c);
return '?';
}
if (temp[1] == ':')
{
if (nextchar && *nextchar)
{
/* This is an option that requires an argument. */
sh_optarg = nextchar;
/* If we end this ARGV-element by taking the rest as an arg,
we must advance to the next element now. */
sh_optind++;
}
else if (sh_optind == argc)
{
if (sh_opterr)
NEEDARG (c);
sh_optopt = c;
sh_optarg = ""; /* Needed by getopts. */
c = (optstring[0] == ':') ? ':' : '?';
}
else
/* We already incremented `sh_optind' once;
increment it again when taking next ARGV-elt as argument. */
sh_optarg = argv[sh_optind++];
nextchar = (char *)NULL;
}
return c;
}
In the previous source lines, nextchar points to the option character (i.e. the one located right after '-') in argv[], and temp points to the matching character in the optstring (which is '-'). We can see that when temp[1] == ':' (i.e. the optstring specifies "-:"), sh_optarg is set to the incremented value of nextchar, which is the first letter of the option name located behind "--".
In our example, where optstring is ":x:y-:" and we pass "--name" to the script, the above code does:
optstring = ":x:y-:"
                 ^
                 |
               temp

argv[x] = "--name"
            ^^
           /  \
          c    nextchar (+ 1)

temp[1] == ':'  ==>  sh_optarg = nextchar = "name"
Hence, with the above algorithm in bash, when "-:" is specified in the option string, any "--name" option on the command line reports "name" in the OPTARG variable.
This is merely the code path used when the parameter is concatenated to the option character (e.g. -xfoo = option "x" with parameter "foo", --foo = option "-" with parameter "foo").

Testing if a docopt command-line option is set in nim

I'm trying to write a nim program that can read either from the standard input or from a file given as a command-line option. I use docopt to parse the command line.
import docopt

const doc = """
This program takes input from a file or from stdin.

Usage:
  testinput [-i <filename> | --input <filename>]

-h --help              Show this help message and exit.
-i --input <filename>  File to use as input.
"""

when isMainModule:
  let args = docopt(doc)
  var inFilename: string
  for opt, val in args.pairs():
    case opt
    of "-i", "--input":
      inFilename = $args[opt]
    else:
      echo "Unknown option" & opt
      quit(QuitFailure)
  let inputSource =
    if inFilename.isNil:
      stdin
    else:
      echo "We have inFilename: " & inFilename
      open(inFilename)
The program compiles.
It doesn't crash when I give it a file on the command line:
$ ./testinput -i testinput.nim
We have inFilename: testinput.nim
But I get an IOError if I try to feed it from its stdin:
$ ./testinput < testinput.nim
We have inFilename: nil
testinput.nim(28) testinput
system.nim(2833) sysFatal
Error: unhandled exception: cannot open: nil [IOError]
How come inFilename.isNil is false, and yet the execution of the else branch tells me that inFilename "is" nil?
Is there a correct and elegant way to do this, using docopt?
I'm not familiar with docopt, but it seems to create an entry for each option in the doc, not just for the options specified by the user, so your code has been getting args == {"--input": nil} and stringifying the nil.
The following will work correctly:
import docopt

const doc = """
This program takes input from a file or from stdin.

Usage:
  testinput [-i <filename> | --input <filename>]

-h --help              Show this help message and exit.
-i --input <filename>  File to use as input.
"""

when isMainModule:
  let args = docopt(doc)
  var inFilename: string
  if args["--input"]:
    inFilename = $args["--input"]
  if not inFilename.isNil:
    echo "We have inFilename: " & inFilename
  let inputSource =
    if inFilename.isNil:
      stdin
    else:
      open(inFilename)
Also note that you don't have to check for the "-i" option, as docopt knows it's an alias for "--input".
Instead of transforming the value of the option into a string with $, one can keep it as a Value, which is the type returned by docopt.
According to the documentation:
vkNone (No Value)
This kind of Value appears when there is an option which hasn't been set and has no default. It is false when converted to bool.
One can apparently use the value of the option in a boolean expression, and it seems to be automatically interpreted as a bool:
import docopt

const doc = """
This program takes input from a file or from stdin.

Usage:
  testinput [-i <filename> | --input <filename>]

-h --help              Show this help message and exit.
-i --input <filename>  File to use as input.
"""

when isMainModule:
  let args = docopt(doc)
  var inFilename: Value
  for opt, val in args.pairs():
    case opt
    of "-i", "--input":
      inFilename = val
    else:
      echo "Unknown option" & opt
      quit(QuitFailure)
  let inputSource =
    if not bool(inFilename):
      stdin
    else:
      echo "We have inFilename: " & $inFilename
      open($inFilename)
Another usage of this behaviour is given in this other answer; it avoids setting the variable, therefore keeping it nil.

Query Parameter manipulation using Shell

I am a newbie to shell scripting. Can anyone please help me with this script?
Question:
Given a url with some query parameters:
Ex: URL: http://xyz.ubs.com/xyzApp.do?lang=fr&fmt=xml&showresults=true&cty=DE"
I have an array of elements with which I want to replace the value of each query parameter in the above URL, and then make a curl call to get the response from the server. I have successfully made curl calls for a single input; I want to do the same for every possible combination.
The Vector Array elements I am using are:
Vectors=("\script>alert (0)" '"/ ()' "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" "lasfasf")
I need help to construct the various possible requests using a shell script.
Ex: http://xyz.ubs.com/xyzApp.do?lang='\script>alert (0)'&fmt=xml&showresults=true&cty=DE"
http://xyz.ubs.com/xyzApp.do?lang=fr&fmt='\script>alert (0)'&showresults=true&cty=DE"
http://xyz.ubs.com/xyzApp.do?lang=fr&fmt=xml&showresults='\script>alert (0)'&cty=DE"
and so on..
Thanks in Advance :)
Here is how I would do it in perl:
if ( @ARGV > 0 ) {
    #print "Number of arguments: " . scalar @ARGV . "\n";
    foreach (@ARGV) {
        print "$_\n";
    }
    if ( @ARGV > 3 ) {
        print "Too many arguments! Usage: script.pl arg1 arg2\n";
        die;
    }
} else {
    print "No arguments! Usage: script.pl arg1 arg2\n";
    die;
}
$arg1 = $ARGV[0];
$arg2 = $ARGV[1];

How to evaluate a tclsh script?

tclsh is a shell containing the TCL commands.
The TCL uplevel command evaluates the given TCL script, but it fails to evaluate a tclsh script (which can contain bash commands).
How can I obtain an analogue of uplevel for the tclsh script?
Consider this TCL script:
# file main.tcl
proc prompt { } \
{
puts -nonewline stdout "MyShell > "
flush stdout
}
proc process { } \
{
catch { uplevel #0 [gets stdin] } got
if { $got ne "" } {
puts stderr $got
flush stderr
}
prompt
}
fileevent stdin readable process
prompt
while { true } { update; after 100 }
This is a kind of Tcl shell: when you type tclsh main.tcl it shows a MyShell > prompt and acts as if you were in an interactive tclsh session. However, you are actually in a non-interactive tclsh session, and everything you type is evaluated by the uplevel command. So here you can't use bash commands the way you can in an interactive tclsh session; e.g. you can't open vim right from the shell, and exec vim will not work either.
What I want is to make MyShell > act like an interactive tclsh session. The reason why I can't just use tclsh is the loop at the last line of main.tcl: I have to have that loop, and everything has to happen in that loop. I also have to do some stuff at each iteration of that loop, so I can't use vwait.
Here is the solution.
I have found no better solution than to override the ::unknown function.
# file main.tcl
proc ::unknown { args } \
{
variable ::tcl::UnknownPending
global auto_noexec auto_noload env tcl_interactive
global myshell_evaluation
if { [info exists myshell_evaluation] && $myshell_evaluation } {
set level #0
} else {
set level 1
}
# If the command word has the form "namespace inscope ns cmd"
# then concatenate its arguments onto the end and evaluate it.
set cmd [lindex $args 0]
if {[regexp "^:*namespace\[ \t\n\]+inscope" $cmd] && [llength $cmd] == 4} {
#return -code error "You need an {*}"
set arglist [lrange $args 1 end]
set ret [catch {uplevel $level ::$cmd $arglist} result opts]
dict unset opts -errorinfo
dict incr opts -level
return -options $opts $result
}
catch {set savedErrorInfo $::errorInfo}
catch {set savedErrorCode $::errorCode}
set name $cmd
if {![info exists auto_noload]} {
#
# Make sure we're not trying to load the same proc twice.
#
if {[info exists UnknownPending($name)]} {
return -code error "self-referential recursion in \"unknown\" for command \"$name\"";
}
set UnknownPending($name) pending;
set ret [catch {
auto_load $name [uplevel $level {::namespace current}]
} msg opts]
unset UnknownPending($name);
if {$ret != 0} {
dict append opts -errorinfo "\n (autoloading \"$name\")"
return -options $opts $msg
}
if {![array size UnknownPending]} {
unset UnknownPending
}
if {$msg} {
if {[info exists savedErrorCode]} {
set ::errorCode $savedErrorCode
} else {
unset -nocomplain ::errorCode
}
if {[info exists savedErrorInfo]} {
set ::errorInfo $savedErrorInfo
} else {
unset -nocomplain ::errorInfo
}
set code [catch {uplevel $level $args} msg opts]
if {$code == 1} {
#
# Compute stack trace contribution from the [uplevel].
# Note the dependence on how Tcl_AddErrorInfo, etc.
# construct the stack trace.
#
set errorInfo [dict get $opts -errorinfo]
set errorCode [dict get $opts -errorcode]
set cinfo $args
if {[string bytelength $cinfo] > 150} {
set cinfo [string range $cinfo 0 150]
while {[string bytelength $cinfo] > 150} {
set cinfo [string range $cinfo 0 end-1]
}
append cinfo ...
}
append cinfo "\"\n (\"uplevel\" body line 1)"
append cinfo "\n invoked from within"
append cinfo "\n\"uplevel $level \$args\""
#
# Try each possible form of the stack trace
# and trim the extra contribution from the matching case
#
set expect "$msg\n while executing\n\"$cinfo"
if {$errorInfo eq $expect} {
#
# The stack has only the eval from the expanded command
# Do not generate any stack trace here.
#
dict unset opts -errorinfo
dict incr opts -level
return -options $opts $msg
}
#
# Stack trace is nested, trim off just the contribution
# from the extra "eval" of $args due to the "catch" above.
#
set expect "\n invoked from within\n\"$cinfo"
set exlen [string length $expect]
set eilen [string length $errorInfo]
set i [expr {$eilen - $exlen - 1}]
set einfo [string range $errorInfo 0 $i]
#
# For now verify that $errorInfo consists of what we are about
# to return plus what we expected to trim off.
#
if {$errorInfo ne "$einfo$expect"} {
error "Tcl bug: unexpected stack trace in \"unknown\"" {} [list CORE UNKNOWN BADTRACE $einfo $expect $errorInfo]
}
return -code error -errorcode $errorCode -errorinfo $einfo $msg
} else {
dict incr opts -level
return -options $opts $msg
}
}
}
if { ( [info exists myshell_evaluation] && $myshell_evaluation ) || (([info level] == 1) && ([info script] eq "") && [info exists tcl_interactive] && $tcl_interactive) } {
if {![info exists auto_noexec]} {
set new [auto_execok $name]
if {$new ne ""} {
set redir ""
if {[namespace which -command console] eq ""} {
set redir ">&#stdout <#stdin"
}
uplevel $level [list ::catch [concat exec $redir $new [lrange $args 1 end]] ::tcl::UnknownResult ::tcl::UnknownOptions]
dict incr ::tcl::UnknownOptions -level
return -options $::tcl::UnknownOptions $::tcl::UnknownResult
}
}
if {$name eq "!!"} {
set newcmd [history event]
} elseif {[regexp {^!(.+)$} $name -> event]} {
set newcmd [history event $event]
} elseif {[regexp {^\^([^^]*)\^([^^]*)\^?$} $name -> old new]} {
set newcmd [history event -1]
catch {regsub -all -- $old $newcmd $new newcmd}
}
if {[info exists newcmd]} {
tclLog $newcmd
history change $newcmd 0
uplevel $level [list ::catch $newcmd ::tcl::UnknownResult ::tcl::UnknownOptions]
dict incr ::tcl::UnknownOptions -level
return -options $::tcl::UnknownOptions $::tcl::UnknownResult
}
set ret [catch {set candidates [info commands $name*]} msg]
if {$name eq "::"} {
set name ""
}
if {$ret != 0} {
dict append opts -errorinfo "\n (expanding command prefix \"$name\" in unknown)"
return -options $opts $msg
}
# Filter out bogus matches when $name contained
# a glob-special char [Bug 946952]
if {$name eq ""} {
# Handle empty $name separately due to strangeness
# in [string first] (See RFE 1243354)
set cmds $candidates
} else {
set cmds [list]
foreach x $candidates {
if {[string first $name $x] == 0} {
lappend cmds $x
}
}
}
if {[llength $cmds] == 1} {
uplevel $level [list ::catch [lreplace $args 0 0 [lindex $cmds 0]] ::tcl::UnknownResult ::tcl::UnknownOptions]
dict incr ::tcl::UnknownOptions -level
return -options $::tcl::UnknownOptions $::tcl::UnknownResult
}
if {[llength $cmds]} {
return -code error "ambiguous command name \"$name\": [lsort $cmds]"
}
}
return -code error "invalid command name \"$name\""
}
proc prompt { } \
{
puts -nonewline stdout "MyShell > "
flush stdout
}
proc process { } \
{
global myshell_evaluation
set myshell_evaluation true
catch { uplevel #0 [gets stdin] } got
set myshell_evaluation false
if { $got ne "" } {
puts stderr $got
flush stderr
}
prompt
}
fileevent stdin readable process
prompt
while { true } { update; after 100 }
The idea is to modify the ::unknown function so that it handles MyShell evaluations the same way an interactive tclsh session would.
This is an ugly solution, as I am patching the code of the ::unknown function, which can differ between systems and between versions of Tcl.
Is there any solution which circumvents these issues?
uplevel does not merely evaluate a script; it evaluates it in the stack context of the caller of the place where it is executed. It's a pretty advanced command which should be used when you define your own execution control structures, and of course it's Tcl specific - I find myself unable to imagine how a tclsh equivalent would work.
If you just want to evaluate another script, the proper Tcl command would be eval. If that other script is a tclsh script, why don't you just open another tclsh?
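For instance, a rough sketch of that idea, feeding commands to a separate tclsh through a command pipeline (the command sent here is only an illustration):
# Start a child tclsh and talk to it over a pipe.
set shell [open "|tclsh" r+]
fconfigure $shell -buffering line
# Send a command; flush the child's stdout so the reply can be read back.
puts $shell {puts [expr {6 * 7}]; flush stdout}
puts [gets $shell]    ;# prints 42
puts $shell exit
close $shell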
The simplest answer, I think, would be to use the approach you're already using: rewriting the unknown command. Specifically, there is a line in it that checks that the current context is:
Not run in a script
Interactive
At the top level
If you replace that line:
if {([info level] == 1) && ([info script] eq "") && [info exists tcl_interactive] && $tcl_interactive} {
with something that just checks the level
if {[info level] == 1} {
you should get what you want.
Vaghan, you do have the right solution. Using ::unknown is how tclsh itself provides the interactive-shell-functionality you're talking about (invoking external binaries, etc). And you've lifted that same code and included it in your MyShell.
But, if I understand your concerns about it being an "ugly solution", you'd rather not reset ::unknown?
In which case, why not just append the additional functionality you want to the end of the pre-existing ::unknown's body (or prepend it; you choose)?
If you search the Tcl'ers wiki for "let unknown know", you'll see a simple proc which demonstrates this. It prepends new code to the existing ::unknown, so you can keep adding additional "fallback code" as you go along.
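The wiki proc is essentially this (a sketch from memory; the exec fallback shown with it is only an illustration of how it might be used):
# "let unknown know": prepend extra handling to the existing ::unknown body.
proc know {what} {
    proc ::unknown args "$what\n[info body ::unknown]"
}

# Example use: try the word as an external command before falling back to the
# stock unknown logic.
know {
    if {[auto_execok [lindex $args 0]] ne ""} {
        return [uplevel 1 exec $args]
    }
}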
(apologies if I've misunderstood why you feel your solution is "ugly")
Instead of changing the unknown proc, I suggest that you make the changes needed for the expression
if {([info level] == 1) && ([info script] eq "") && [info exists tcl_interactive] && $tcl_interactive} {
to evaluate to true:
info level: call your stuff with uplevel #0 $code
info script: call info script {} to set it to an empty value
tcl_interactive. Simple: set ::tcl_interactive 1
so your code would be
proc prompt { } {
puts -nonewline stdout "MyShell > "
flush stdout
}
proc process { } {
catch { uplevel #0 [gets stdin] } got
if { $got ne "" } {
puts stderr $got
flush stderr
}
prompt
}
fileevent stdin readable process
set tcl_interactive 1
info script {}
prompt
vwait forever

How to have expect timeout when trying to login to an ssh session it has spawned?

I am writing a bash script that uses expect to log in to a bunch of Cisco ASAs (they don't support certificate login, hence using expect), makes a change to the configuration and then logs out.
I'd like the script to move onto the next ASA if it is unable to login.
Here is the script:
#!/bin/bash
# Scriptname: set-mtu
for asa in $(cat asa-list-temp)
do
/usr/bin/expect << EndExpect
spawn ssh admin_15@$asa
expect "assword:"
send "pa$$w0rd\r"
expect ">"
send "do something\r"
expect ">"
send "exit\r"
EndExpect
done
I think I can set a timeout on expect "assword:" but I can't figure out how to get it to close the spawned ssh session and then move onto the next ASA in the for list.
First of all I would use an expect script for this and lose the bash scripting.
Then, for the expect part:
You can do this by using a switch that also matches on timeout (page 12 of Exploring Expect). That way you can explicitly take some action when expect times out.
Otherwise, by just setting the timeout, it will simply continue with the next command in line:
set timeout 60
expect {
    "assword:" {
    }
    timeout {
        exit 1  ;# exit the expect part of the script
    }
}
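Putting the two suggestions together, a rough sketch of doing the whole loop in Expect instead of bash (the file name, user, password and "do something" command are placeholders taken from the question):
#!/usr/bin/expect
set timeout 20
set fh [open "asa-list-temp" r]
while {[gets $fh asa] >= 0} {
    spawn ssh admin_15@$asa
    set ok 1
    expect {
        "assword:" { send "password\r" }
        timeout    { set ok 0 }
    }
    if {!$ok} {
        # login prompt never appeared: drop this connection, try the next ASA
        catch {close}
        catch {wait}
        continue
    }
    expect ">"
    send "do something\r"
    expect ">"
    send "exit\r"
    expect eof
}
close $fh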
I've created something similar, where I used an overall expect script to run multiple instances of another expect script in parallel.
multiple.exp
#!/bin/sh
# the next line restarts using tclsh \
exec expect "$0" "$@"
# multiple.exp --
#
# This file implements the running of multiple expect scripts in parallel.
# It has some settings that can be found in multiple.config
#
# Copyright (c) 2008
#
# Author: Sander van Knippenberg
#####
# Setting the variables
##
source [file dirname $argv0]/.multiple.config
# To determine how long the script runs
set timingInfo("MultipleProcesses") [clock seconds]
# ---------------------------------------------------------------------
######
# Procedure to open a file with a certain filename and retrieve the contents as a string
#
# Input: filename
# Output/Returns: content of the file
##
proc openFile {fileName} {
if {[file exists $fileName] } {
set input [open $fileName r]
} else {
puts stderr "fileToList cannot open $fileName"
exit 1
}
set contents [read $input]
close $input
return $contents
}
######
# Procedure to write text to a file with the given filename
#
# Input: string, filename
##
proc toFile {text filename} {
# Open the filename for writing
set fileId [open $filename "w"]
# Send the text to the file.
# Failure to add '-nonewline' will result in an extra newline at the end of the file.
puts -nonewline $fileId $text
# Close the file, ensuring the data is written out before continuing with processing
close $fileId
}
# ---------------------------------------------------------------------
# Check for the right argument
if {$argc > 0 } {
set hostfile [lindex $argv 0]
} else {
puts stderr "$argv0 --- usage: $argv0 <hosts file>"
exit 1
}
# Create the commands that can be spawned in parallel
set commands {}
# Open the file with devices
set hosts [split [openFile $hostfile] "\n"]
foreach host $hosts {
if { [string length $host] > 1 } {
lappend commands "$commandDir/$commandName $host" ;# Here you can enter your own command!
}
}
# Run the processes in parallel
set idlist {}
set runningcount 0
set pattern "This will never match I guess"
# Startup the first round of processes until maxSpawn is reached,
# or the commands list is empty.
while { [llength $idlist] < $maxSpawn && [llength $commands] > 0} {
set command [lindex $commands 0]
eval spawn $command
lappend idlist $spawn_id
set commands [lreplace $commands 0 0]
incr runningcount
set commandInfo($spawn_id) $command
set timingInfo($spawn_id) [clock seconds]
send_user " $commandInfo($spawn_id) - started\n"
}
# Finally start running the processes
while {$runningcount > 0} {
expect {
-i $idlist $pattern {
}
eof {
set endedID $expect_out(spawn_id)
set donepos [lsearch $idlist $endedID]
set idlist [lreplace $idlist $donepos $donepos]
incr runningcount -1
set elapsedTime [clock format [expr [clock seconds] - $timingInfo($endedID)] -format "%M:%S (MM:SS)"]
send_user " $commandInfo($endedID) - finished in: $elapsedTime\n"
# If there are more commands to execute then do it!
if {[llength $commands] > 0} {
set command [lindex $commands 0]
eval spawn $command
lappend idlist $spawn_id
set commands [lreplace $commands 0 0]
incr runningcount
set commandInfo($spawn_id) $command
set timingInfo($spawn_id) [clock seconds]
}
}
timeout {
break
}
}
}
set elapsedTime [clock format [expr [clock seconds] - $timingInfo("MultipleProcesses")] -format "%M:%S (MM:SS)"]
send_user "$argv0 $argc - finished in: $elapsedTime\n"
multiple.config
# The dir from where the commands are executed.
set commandDir "/home/username/scripts/expect/";
set commandName "somecommand.exp";
# The maximum number of simultaneous spawned processes.
set maxSpawn 40;
# The maximum timeout in seconds before any of the processes should be finished
set timeout 20800;
To make the answer clear: the solution is the typical one, just handle the timeout inside the brace-enclosed expect form. So the Tcl/Expect part of your shell script should be:
spawn ssh user@host
expect {
    "assword:" {
        send "password\r"
    }
    timeout {
        exit
    }
}
expect "prompt>"
# ... login succeeded, continue with the rest of the script ...
Here is another example of how to handle a timeout by resuming the wait for the result string while the spawned command is still running.
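That pattern is roughly the following (a sketch; the result string and retry limit are made up):
set timeout 60
set retries 0
expect {
    "RESULT" {
        # got the string we were waiting for; carry on
    }
    timeout {
        if {[incr retries] < 10} {
            exp_continue   ;# re-arm expect and keep waiting
        }
        exit 1
    }
}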
