I want to plant multiple Bash scripts inside my "main" tcl program - so no external .sh scripts - I want it all in one place. I already found out how to plant one script into tcl:
set printScript { echo $HOME }
proc printProc {printScript} {
    puts [exec /bin/bash << $printScript]
}
printProc $printScript
My question is now:
How can I use this concept to implement scripts that call other scripts without hardcoding the called script into the calling script?
Let's say I have something like this:
script1.sh
script2="$PWD/script2.sh"
#do some stuff
if [something]
then
$script2
fi
#do some more stuff
Can the above mentioned concept be used to solve my problem? How can this be done?
Every script is a string, so, yes, you can use string manipulation to build scripts out of script primitives, as it were. It's not the best solution, but it's possible. If you choose to build scripts by string manipulation, substitution by string map is probably better than substitution by variable. Something along the lines of the following:
set script1 {
    #do some stuff
    if [something]
    then
        %X
    fi
    #do some more stuff
}
set maplist {%% %}
lappend maplist %X {$PWD/script2.sh}
set executable_script [string map $maplist $script1]
Other solutions include
Writing everything in Tcl
Writing everything in bash script, if possible
Writing a master bash script with functions and calling those functions from Tcl
Related
I may need some help here.
The scenario is,
let's say, I have a TCL script "test.tcl", which contains something like below,
set condition true
set condition true
if {$condition == true} {
    puts "Message1"
} elseif {$condition == false} {
    puts "Message2"
}
Then I have another makefile to simply run this TCL script, in which,
runScript:
	tclsh test.tcl
When I run it with
make runScript
is there any way that variable "condition" inside TCL script can be somehow provided by Makefile, rather than writing inside TCL script itself?
Any help would be appreciated. Thank you!
If you find a way to pass that variable to your script when you invoke it from the command line, then you can use the same method from your makefile. This isn't related to make or makefiles, it's just a TCL question.
Googling "set tcl variable from command line" got me to this page: https://www.tcl.tk/man/tcl8.5/tutorial/Tcl38.html
So, something like this might work:
$ cat Makefile
runScript:
	myvalue=true tclsh test.tcl
$ cat test.tcl
set condition $env(myvalue)
...
But my days of writing TCL are far, far behind me.
The usual way to pass information to a Tcl script (or any command line program) is via arguments:
Makefile
CONDITION = true
runScript:
	tclsh test.tcl ${CONDITION}
Inside the Tcl script the command line arguments can be accessed via the argv variable.
test.tcl
if {[llength $argv] != 1} {
    puts stderr "Usage: $argv0 <condition>"
    exit 1
}
lassign $argv condition
I have a few bash functions, like
#!/bin/sh
git-ci() {
    ...
}
When I was not using fish I had a source ~/.my_functions line in my ~/.bash_profile but now it doesn't work.
Can I use my bash functions with fish? Or the only way is to translate them into fish ones and then save them via funcsave xxx?
As @Barmer said, fish doesn't care about bash compatibility because one of its goals is
Sane Scripting
fish is fully scriptable, and its syntax is simple, clean, and consistent. You'll never write esac again.
The fish folks think bash is insane and I personally agree.
One thing you can do is to have your bash functions in separate files and call them as functions from within fish.
Example:
Before
#!/bin/bash
git-ci() {
    ...
}
some_other_function() {
    ...
}
After
#!/bin/bash
# file: git-ci
# Content of git-ci function here
#!/bin/bash
# file: some_other_function
# Content of some_other_function function here
Then put your script files somewhere in your path. Now you can call them from fish.
Hope that helps.
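Concretely, the "After" layout above might look like the following sketch. The body of git-ci here is a hypothetical stand-in for whatever the original function did:

```shell
#!/bin/bash
# file: git-ci  -- place this in a directory on $PATH and make it executable.
# Hypothetical body standing in for the original git-ci() function.
msg="${1:-no message}"
out="would commit with message: $msg"
echo "$out"
```

From fish, git-ci 'fix typo' now runs this file as an ordinary external command, so no translation to fish syntax (and no funcsave) is needed.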
The syntax for defining functions in fish is very different from POSIX shell and bash.
The POSIX function:
hi () {
    echo hello
}
is translated to:
function hi
    echo hello
end
There are other differences in scripting syntax. See the section titled Blocks in Fish - The friendly interactive shell for examples.
So it's basically not possible to use functions written for bash in fish; they're as different as bash and csh. You'll have to go through all your functions and convert them to fish syntax.
If you don't want to change all the syntax, one workaround is to simply create a fish function that runs a bash script and passes the arguments right along.
Example
If you have a function like this
sayhi () {
    echo Hello, $1!
}
you'd just change it by stripping away the function part, and save it as an executable script
echo Hello, $1!
and then create a fish function which calls that script (with the name sayhi.fish, for example)
function sayhi
    # run bash script and pass on all arguments
    /bin/bash absolute/path/to/bash/script $argv
end
and, voila, just run it as you usually would
> sayhi ivkremer
Hello, ivkremer!
I am writing an upstart configuration file. Within it, I have pre-start script, script, and pre-stop script stanzas. Each of these has a large amount of identical code, so I attempted to refactor that code into a few bash functions. However, in so doing I discovered that "one does not simply write bash in an upstart configuration file": the bash function keyword is not allowed, because it is interpreted as a stanza, and it isn't a valid upstart stanza.
# Some header stanzas that are boring
...
env ONE_OF_A_FEW_VARIABLES
...
function an_illegal_function () {...}
pre-start script
    function_call_A
    function_call_B
end script
script
    function_call_A
    function_call_C
    echo "I love pie."
end script
pre-stop script
    function_call_B
    function_call_C
end script
I would really like to avoid the kind of code duplication that will exist if I have to copy-paste the contents of each function into stanzas like those above. How can I get some bash commands DRY'd up into a common location and have each of my *script stanzas reference them?
I'm trying to get around what seems to me to be a problem: you cannot pass an open db2 connection to a sub-shell.
My code organization is as follows:
Driver script (in my_driver.sh)
# foo.sh defines baz() bar(), which use a db2 connection
# Also the "$param_file" is set in foo.sh!
source foo.sh
db2 "connect to $dbName USER $dbUser using $dbPass"
function doit
{
    cat $param_file | while read params
    do
        baz $params
        bar $params
    done
}
doit
I've simplified my code, but the above is enough to give the idea. I start the above:
my_driver.sh
Now, my real issue is that the db2 connection is not available in the sub-shell:
I tried:
. my_driver.sh
Does not help
If I do it manually from the command line:
source foo.sh
And I set $params manually:
baz $params
bar $params
Then it does work! So it seems that doit or something else acts as if bar and baz are executed from a sub-shell.
Figuring out how to pass the open db2 connection to the sub-shell would be best.
Otherwise, these shell functions seem to me that they run in a sub-shell. Is there a way around that?
The shell does not create a subshell to run a function.
Of course, it does create subshells for many other purposes, not all of which might be obvious. For example, it creates subshells in the implementation of |.
db2 requires that all db2 commands have the same parent process as the db2 command which established the connection. You could log the PID using something like:
echo "Execute db2 from PID $$" >> /dev/stderr
db2 ...
(as long as the db2 command isn't executed inside a pipe or shell parentheses.)
One possible problem in the code shown (which would have quite a different symptom) is the use of the non-standard syntax
function f
To define a function. A standard shell expects
f()
Bash understands both, but if you don't have a shebang line or you execute the scriptfile using the sh command, you will end up using the system's default shell, which might not be bash.
Found solution, but can't yet fully explain the problem ....
If you change doit as follows, it works!
function doit
{
    while read params
    do
        baz $params
        bar $params
    done < $param_file
}
Only, I'm not sure why, or how I can prove it ...
If I stick in debug code:
echo debug check with PID=$$ PPID=$PPID and SHLVL=$SHLVL
I get back the same results with or without the |. I do understand that cat $param_file | while read params creates a subshell, but my debug statements always show the same PID and PPID...
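The debug output is inconclusive because, in bash, $$ keeps the PID of the original shell even inside a subshell; only BASHPID reports the subshell's real PID. A small sketch (bash-specific):

```shell
#!/bin/bash
parent_dollar=$$
parent_bashpid=$BASHPID

# A command substitution runs in a subshell, just like a pipeline stage:
# $$ is inherited from the parent, but BASHPID is the subshell's own PID.
sub_info=$(echo "$$ $BASHPID")
sub_dollar=${sub_info% *}
sub_bashpid=${sub_info#* }

echo "parent:   \$\$=$parent_dollar BASHPID=$parent_bashpid"
echo "subshell: \$\$=$sub_dollar BASHPID=$sub_bashpid"
```

So a debug line that prints $$ will show the same number everywhere; print $BASHPID instead to see where subshells are created.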
So my problem is solved, but I'm missing some explanations.
I also wonder if this question would not be better suited to the unix.stackexchange community?
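The missing explanation can be demonstrated directly: with cat file | while ..., the loop body runs in a subshell, so variable changes (and any per-process state, such as a db2 connection context) are lost when the pipeline ends; with done < file, the loop runs in the current shell. A minimal sketch:

```shell
#!/bin/bash
tmp=$(mktemp)
printf 'a\nb\n' > "$tmp"

count=0
cat "$tmp" | while read -r _; do count=$((count+1)); done
pipe_count=$count        # still 0: the loop body ran in a subshell

count=0
while read -r _; do count=$((count+1)); done < "$tmp"
redir_count=$count       # 2: the loop ran in the current shell

echo "pipe: $pipe_count  redirection: $redir_count"
rm -f "$tmp"
```

This is exactly why moving the redirection to done < $param_file fixed the original doit function.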
A shell function in shells such as sh (i.e. Dash) or Bash may be considered a labelled command group, or a named "code block", which can be called multiple times by its name. A command group surrounded by {} does not create a subshell or fork a process; it executes in the same process and environment.
Some might find it loosely similar to goto, with function names playing the role of labels, as in other programming languages including C, Basic, or Assembler. However, the two constructs differ greatly (a function returns control; goto doesn't), and the Go To Statement may be Considered Harmful.
Shell Functions
Shell functions are a way to group commands for later
execution using a single name for the group. They are executed just
like a "regular" command. When the name of a shell function is used as
a simple command name, the list of commands associated with that
function name is executed. Shell functions are executed in the current
shell context; no new process is created to interpret them.
Functions are declared using this syntax:
fname () compound-command [ redirections ]
or
function fname [()] compound-command [ redirections ]
This defines a shell function named fname. The reserved word function is optional. If the function reserved word is supplied, the parentheses are optional.
Source: https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html or man bash.
Grouping Commands Together
Commands may be grouped by writing either
(list)
or
{ list; }
The first of these executes the commands in a subshell. Builtin commands grouped into a (list) will not affect the current shell. The
second form does not fork another shell so is slightly more efficient.
Grouping commands together this way allows you to redirect their
output as though they were one program:
{ printf " hello " ; printf " world\n" ; } > greeting
Note that "}" must follow a control operator (here, ";") so that it is recognized as a reserved word and not as another command
argument.
Functions
The syntax of a function definition is
name () command
A function definition is an executable statement; when executed it installs a function named name and returns an exit status of zero. The command is normally a list enclosed between "{" and "}".
Source: https://linux.die.net/man/1/dash or man sh.
Transfers control unconditionally.
Used when it is otherwise impossible to transfer control to the desired location using other statements... The goto statement transfers control to the location specified by label. The goto statement must be in the same function as the label it is referring, it may appear before or after the label.
Source: https://en.cppreference.com/w/cpp/language/goto
Goto
... It performs a one-way transfer of control to another line of code; in
contrast a function call normally returns control. The jumped-to
locations are usually identified using labels, though some languages
use line numbers. At the machine code level, a goto is a form of
branch or jump statement, in some cases combined with a stack
adjustment. Many languages support the goto statement, and many do not...
Source: https://en.wikipedia.org/wiki/Goto
Related:
https://mywiki.wooledge.org/BashProgramming#Functions
https://uomresearchit.github.io/shell-programming-course/04-subshells_and_functions/index.html (Subshells and Functions...)
Is there a "goto" statement in bash?
What's the difference between "call" and "invoke"?
https://en.wikipedia.org/wiki/Call_stack
https://mywiki.wooledge.org/BashPitfalls
Have people noticed that if you modify the source of a shell script, any instances that are currently running are liable to fail?
This in my opinion is very bad; it means that I have to make sure all instances of a script are stopped before I make changes. My preferred behavior would be that existing scripts continue running with old source code and that new instances use the new code (e.g. what happens for perl and python programs).
Do folks have any good workarounds for this behavior, other than pre-copying the shell script to a tempfile and running from that?
Very slight addition to the other answers:
#!/bin/sh
{
    # Your stuff goes here
    exit
}
The exit at the end is important. Otherwise, the script file might still be accessed at the end to see if there are any more lines to interpret.
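A quick way to see the effect of the brace-and-exit wrapper is to have a script overwrite its own source while running (a self-contained sketch using a temporary script; note that very small scripts may be read in one buffer anyway, so this only illustrates the mechanism):

```shell
#!/bin/bash
# Write a brace-wrapped script that overwrites itself mid-run, then run it.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/bash
{
    # Overwrite our own source file while we are running.
    echo 'echo "new code ran"' > "$0"
    echo "old code still running"
    exit
}
EOF
result=$(bash "$tmp")
echo "$result"
rm -f "$tmp"
```

Because the shell must parse the entire { ... } compound command before executing any of it, the old code keeps running even after the file on disk has been replaced, and the exit prevents the shell from ever reading the modified file again.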
This question was later reposted here: Can a shell script indicate that its lines be loaded into memory initially?
Make sure the shell has to parse the whole file before executing any of it:
#!/bin/ksh
{
    all the original script here
}
That does the trick.
Incidentally, with Perl (and I assume Python), the program parses the entire file before executing any of it, exactly as recommended here. Which is why you don't usually run into the problem with Perl or Python.
The desired behavior may not be possible, depending on complexity of the shell scripts that are involved.
If the full shell script is contained in a single source file, and that file is fully parsed before execution, then the shell script is generally safe from modifications to the copy on the disc during execution. Wrapping all the executable statements into a function (or series of functions) will generally achieve the goal you are after.
#!/bin/sh
doit()
{
    # Stuff goes here
}
# Main
doit
The difficulty comes when the shell script "includes" other shell scripts (e.g. ".", or "source"). If these includes are wrapped in a function, they are not parsed until that statement is reached in the flow of execution. This makes the shell script vulnerable to changes to that external code.
In addition, if the shell script runs any external program (e.g. shell script, compiled program, etc), that result is not captured until that point in the execution is reached (if ever).
#!/bin/sh
doit()
{
    if [[ some_condition ]] ; then
        resultone=$(external_program)
    fi
}
# Main
doit
This answer contains a robust and self-contained way to make a script resistant to this problem: have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
    rm -f /tmp/copy-$$
    cp $0 /tmp/copy-$$
    exec /tmp/copy-$$ "$@"
    echo "error copying and execing script"
    exit 1
fi
rm $0
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)