Is it possible to pass a variable to Tcl through a makefile?

I may need some help here.
The scenario is: let's say I have a Tcl script "test.tcl", which contains something like the following:
set condition true
if {$condition == true} {
    puts "Message1"
} elseif {$condition == false} {
    puts "Message2"
}
Then I have a makefile to simply run this Tcl script, which contains:
runScript:
	tclsh test.tcl
When I run it with
make runScript
is there any way that the variable "condition" inside the Tcl script can be provided by the makefile, rather than being written inside the Tcl script itself?
Any help would be appreciated. Thank you!

If you find a way to pass that variable to your script when you invoke it from the command line, then you can use the same method from your makefile. This isn't related to make or makefiles; it's just a Tcl question.
Googling "set tcl variable from command line" got me to this page: https://www.tcl.tk/man/tcl8.5/tutorial/Tcl38.html
So, something like this might work:
$ cat Makefile
runScript:
	myvalue=true tclsh test.tcl
$ cat test.tcl
set condition $env(myvalue)
...
But my days of writing Tcl are far, far behind me.

The usual way to pass information to a Tcl script (or any command line program) is via arguments:
Makefile
CONDITION = true
runScript:
	tclsh test.tcl ${CONDITION}
Inside the Tcl script the command line arguments can be accessed via the argv variable.
test.tcl
if {[llength $argv] != 1} {
    puts stderr "Usage: $argv0 <condition>"
    exit 1
}
lassign $argv condition
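As a side note: since CONDITION is an ordinary make variable, it can also be overridden on the command line without editing the makefile. A minimal sketch, assuming GNU make is available (the recipe just echoes the variable instead of running tclsh, and Makefile.demo is a throwaway name made up for this example):

```shell
# Generate a tiny makefile; the recipe just prints the variable.
# (printf is used so the recipe line gets a real tab character.)
printf 'CONDITION = true\nrunScript:\n\t@echo $(CONDITION)\n' > Makefile.demo

make -f Makefile.demo runScript                  # prints: true
make -f Makefile.demo runScript CONDITION=false  # prints: false
```

A command-line assignment like CONDITION=false takes precedence over the assignment inside the makefile, so the same rule can drive test.tcl with different conditions.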

Related

Facing error while running tcl script with an argument using source command

I am trying to source a Tcl script inside another script using the source command. The syntax I am using is as follows:
source /path/script.tcl vikas   # vikas is an argument to the script
But I am facing an issue while executing it. The error I am getting is as follows:
TCLERR: couldn't read file "/path/script.tcl vikas" : no such file or directory.
Kindly help me with the solution. Thank you!
The source command doesn't pass arguments; it just reads the script in and evaluates it (with a minor nuance for info script).
How would you expect the arguments to be seen by the script? If it is via the argv global variable, then you can just set that up before calling source. It's not special at all except that tclsh and wish write the list of arguments to it during startup.
You can script things easily enough.
proc sourceWithArguments {filename args} {
    global argv
    set old $argv
    try {
        set argv $args
        uplevel "#0" [list source $filename]
    } finally {
        # Restore the original arguments at the end
        set argv $old
    }
}
The source <file> command simply reads the commands in <file>, almost as if you had copy-pasted them.
If you have a main file and other file which is sourced from the main file, then you could just set a variable in the main file and use that variable in the sourced file.
# sourced.tcl
puts $parameter_from_main
# main.tcl
set parameter_from_main "Hello"
source sourced.tcl
In this case, both the main.tcl and sourced.tcl files are running in the same global scope. Some people may dislike this solution because you can get namespace pollution, but it might be good enough for what you need to do.

In Tcl, how do I store in a variable which shell I am in?

In a Tcl shell, I want to store in a variable which shell I am in. This script is also meant to be run by others, and I need to check which shell they are in.
I tried using exec, but I am having trouble storing the result in a variable that I can compare to a string. Basically, I want to find out whether a user is in bash, csh, or ksh. Users switch between shells, so it is necessary to identify the active shell they are in. I tried:
set shell [exec echo \$0]
puts stdout "echo $shell;"
if { [string first "bash" $shell] != -1 } {
    puts stdout "echo should be bash, is $shell;"
} elseif { [string first "ksh" $shell] != -1 } {
    puts stdout "echo should be KSH, is $shell;"
} else {
    puts stdout "echo should be csh, is $shell;"
}
This unfortunately only writes the string $0 into shell, not the actual shell. Is there another way of doing this? For context, this is for a modulefile on Linux, which allows me (and others) to set an appropriate environment for a specific software package.
modulefile has
set shell [module-info shell]
but that does not give the active shell.
EDIT:
To clarify, I want to know in which shell I am, so that I can run
something.sh if I am in bash
something.ksh if I am in ksh
something.csh if I am in csh
All those scripts have to do slightly different things before the software people want to run can be started in the given shell. This would be invoked in a terminal, and the software would be run in that terminal as well.
Thanks!
If what you wish to know is the value of the SHELL environment variable, Tcl provides a global array named env which contains the environment that the Tcl shell was started in. Try:
set myshell $::env(SHELL)
You will need to handle cases where SHELL is not defined, but that's another question.
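One caveat worth noting, given that users in the question switch shells: SHELL records the user's login shell and is merely inherited through the environment; starting a different shell does not update it. A quick sketch of that behaviour:

```shell
# bash only sets SHELL if it is unset; an inherited value is kept as-is,
# so the variable reflects the login shell, not the currently running one.
SHELL=/bin/ksh bash -c 'echo "$SHELL"'   # prints: /bin/ksh
```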

Implement multiple Bash Scripts into Tcl Script

I want to plant multiple Bash scripts inside my "main" Tcl program, so no external .sh scripts; I want it all in one place. I already found out how to plant one script into Tcl:
set printScript { echo $HOME }
proc printProc {printScript} {
    puts [exec /bin/bash << $printScript]
}
printProc $printScript
My question is now:
How can I use this concept to implement scripts that call other scripts without hardcoding the called script into the calling script?
Let's say I have something like this:
script1.sh
script2="$PWD/script2.sh"
#do some stuff
if [something]
then
    $script2
fi
#do some more stuff
Can the above mentioned concept be used to solve my problem? How can this be done?
Every script is a string, so yes, you can use string manipulation to build scripts out of script primitives, as it were. It's not the best solution, but it's possible. If you choose to build scripts by string manipulation, substitution with string map is probably better than substitution by variable. Something along the lines of the following:
set script1 {
    #do some stuff
    if [something]
    then
        %X
    fi
    #do some more stuff
}
set maplist {%% %}
lappend maplist %X {$PWD/script2.sh}
set executable_script [string map $maplist $script1]
Other solutions include:
- Writing everything in Tcl
- Writing everything in bash, if possible
- Writing a master bash script with functions and calling those functions from Tcl
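The last option can be sketched from the bash side (funcs.sh and the function names here are made up for illustration; from Tcl you would then run something like exec bash funcs.sh greet):

```shell
# A master script defining functions; "$@" dispatches to whichever
# function name is passed as the first argument.
cat > funcs.sh <<'EOF'
greet()   { echo "hello from greet"; }
cleanup() { echo "cleaning up"; }
"$@"
EOF

bash funcs.sh greet     # prints: hello from greet
bash funcs.sh cleanup   # prints: cleaning up
```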

Difference in Tcl script execution with and without "-f"

I have a Tcl script file called hello.tcl, with the following content:
puts "hello world"
When I run it on bash using the command,
tclsh hello.tcl
I get the hello print.
However, if I use tclsh -f instead of just tclsh above, I don't get any print, only the tcl prompt. Why don't I get any prints in the second case?
tclsh syntax is:
tclsh ?-encoding name? ?fileName arg arg ...?
That means you can call it like this:
tclsh hello.tcl
tclsh -encoding (something) hello.tcl
Anything else is an error. Instead of barking at you, tclsh silently ignores the problem and opens up the interactive shell.
Tcl Command Notation
I found Tcl command notation a bit strange at first, and could not find any reference anywhere. However, here is what I understand:
- The ? ... ? notation means optional
- Tcl commands use a single dash rather than a double dash: -encoding instead of --encoding
Manual
As for help, I installed ActiveState Tcl 8.5 and it comes with a file called ActiveTclHelp8.5.chm, which is my bible. This file is very detailed, with complete search capability. I cannot give you that file for fear of legal implications, but you can install ActiveState Tcl to get it.

Run external process from groovy

I have a bash script which I want to execute from Groovy, like
some_shell_script.sh param1 "report_date=`some_function 0 \"%Y%m%d\"`"
That script runs successfully from the command line, but when I try to execute it from Groovy
def command = "some_shell_script.sh param1 "report_date=`some_function 0 \"%Y%m%d_%H%M%S\"`""
def sout = new StringBuffer()
def serr = new StringBuffer()
//tried to use here different shells /bin/sh /bin/bash bash
ProcessBuilder pb = new ProcessBuilder(['sh', '-c',command])
Process proc = pb.start()
proc.consumeProcessOutput(sout, serr)
def status = proc.waitFor()
println 'sout: ' + sout
println 'serr: ' + serr
I have the following error:
serr: sh: some_function: command not found
At the same time,
which some_function
returns a function definition like
some_function ()
{
    # some definition here
}
It looks like when I run an external script from Groovy, it starts a different process without the context of the parent process; none of the parent process's function definitions exist. Does anyone have a clue how to cope with such a situation?
You should replace the double quotes in your command definition with single quotes.
def command = 'some_shell_script.sh param1 "report_date=`some_function 0 "%Y%m%d_%H%M%S"`"'
Add:
println command
to ensure that you are executing the correct command.
Also open a new bash shell and ensure that some_function is defined.
Definitely check out those quotes as indicated by @Reimeus. I had some doubts about those.
In addition, some_function() may be defined in ~/.bashrc, /etc/bash.bashrc or in a file sourced by either of those when you run bash interactively. This does not happen if you run a script.
(Which is good for making script run predictably - you can't have your script depend on people's login environment.)
If this is the case, move some_function() to another file, and put its full path in the BASH_ENV variable, so that bash picks it up when processing scripts.
man bash:
When bash is started non-interactively, to run a shell script, for
example, it looks for the variable BASH_ENV in the environment, expands
its value if it appears there, and uses the expanded value as the name
of a file to read and execute. Bash behaves as if the following com-
mand were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file
name.
[Manual page bash(1) line 158]
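That mechanism can be sketched as follows (my_env.sh, job.sh, and the fixed output of some_function are made up for the example):

```shell
# File defining some_function, to be picked up via BASH_ENV:
cat > my_env.sh <<'EOF'
some_function() { echo "20240101"; }
EOF

# A non-interactive script that relies on the function existing:
cat > job.sh <<'EOF'
some_function
EOF

# bash sources $BASH_ENV before running the script, so the function
# is defined; without BASH_ENV this would fail with "command not found".
BASH_ENV=$PWD/my_env.sh bash job.sh   # prints: 20240101
```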
This seems like a path problem. Can you put the full path to the script and try again?
DISCLAIMER: there are limitations with this solution, and the shell sub-script commands should be properly tested before deployment. However, if multithreading is not required, e.g. the function immediately returns some short result, there is an alternative, as I implemented here.
For instance, if the result of mycmd depends on an environment variable set in ~/.bashrc I could display its result: (tried as a groovy-script/v1.8.1, and yes, this is a stupid example and it might be risky!)
commands = '''source ~/.bashrc; cd ~/mytest; ./mycmd'''
"bash".execute().with{
out << commands
out << ';exit $?\n'
waitFor()
[ok:!exitValue(), out:in.text, err:err.text]
}.with{ println ok?out:err }
