Calling Shell-methods by chain of files in subdirectories - shell

I'm trying to call functions from file to file with a structure like:
/root
    /subDir
        /subSubDir
            inSubSub.sh
        inSub.sh
    inRoot.sh
File contents:
inRoot.sh:
#!/bin/bash
source ./subDir/inSub.sh
subMethod;
inSub.sh:
#!/bin/bash
source ./subSubDir/inSubSub.sh
subMethod () {
    echo "I'm in sub"
}
subSubMethod;
inSubSub.sh:
#!/bin/bash
subSubMethod () {
    echo "I'm in subSub"
}
subSubMethod;
Result of running $ ./inRoot.sh
subDir/inSub.sh: line 2: subSubDir/inSubSub.sh: No such file or directory
subDir/inSub.sh: line 6: subSubMethod: command not found
I'm in sub
So it works for the first call but fails one level deeper.
btw: using . ./ instead of source ./ gives the same result
How do I do this right, if it's possible?

You need to change your inSub.sh like this. A relative path given to source is resolved against the caller's current working directory, not against the location of the file doing the sourcing, so derive the path from ${BASH_SOURCE[0]}:
cat ./subDir/inSub.sh
#!/bin/bash
var="${BASH_SOURCE[0]}"
source "${var%/*}"/subSubDir/inSubSub.sh
subMethod () {
echo "I'm in sub"
}
subSubMethod;
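For completeness, the same BASH_SOURCE idea can be applied to every file in the chain. Below is a runnable sketch (the /tmp/srcdemo path and the `here` variable are my own naming, not from the answer) that rebuilds the question's layout and then runs inRoot.sh from an unrelated directory:

```shell
#!/bin/bash
# Each script locates itself via BASH_SOURCE, so sourcing works no
# matter which directory inRoot.sh is started from.
root=/tmp/srcdemo/root
rm -rf /tmp/srcdemo
mkdir -p "$root/subDir/subSubDir"

cat > "$root/inRoot.sh" <<'EOF'
#!/bin/bash
here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$here/subDir/inSub.sh"
subMethod
EOF

cat > "$root/subDir/inSub.sh" <<'EOF'
#!/bin/bash
here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$here/subSubDir/inSubSub.sh"
subMethod () {
    echo "I'm in sub"
}
subSubMethod
EOF

cat > "$root/subDir/subSubDir/inSubSub.sh" <<'EOF'
#!/bin/bash
subSubMethod () {
    echo "I'm in subSub"
}
EOF

chmod +x "$root/inRoot.sh"
(cd / && "$root/inRoot.sh")   # run from elsewhere on purpose
```

Running it prints "I'm in subSub" (from inSub.sh's call during sourcing) followed by "I'm in sub".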

Related

Are there any existing methods for importing functions from other scripts without sourcing the entire script?

I am working on a large shell program and need a way to import functions from other scripts as required, without polluting the global scope with all the internal functions from those scripts.
UPDATE: However, those imported functions have internal dependencies, so an imported function must be executed in the context of its own script.
I came up with this solution and wonder if there is any existing strategy out there, and if not, perhaps this is a really bad idea?
PLEASE TAKE A LOOK AT THE POSTED SOLUTION BEFORE RESPONDING
example usage of my solution:
main.sh
import user get_name
import user set_name
echo "hello $(get_name)"
echo "Enter a new user name :"
while true; do
    read user_input < /dev/tty
    [ -n "$user_input" ] && break
done
set_name "$user_input"
user.sh
import state
set_name () {
state save "user_name" "$1"
}
get_name () {
state get_value "user_name"
}
As one approach, you could put a comment in the script to indicate where you want to stop sourcing:
$ cat script
fn() { echo "You are running fn"; }
#STOP HERE
export var="Unwanted name space pollution"
And then, if you are using bash, source it like this:
source <(sed '/#STOP HERE/q' script)
<(...) is process substitution, and our process, sed '/#STOP HERE/q' script, just extracts the lines from script until the stop line is reached.
Adding more precise control
We can select particular sections from a file if we add both start and stop flags:
$ cat script
export var1="Unwanted name space pollution"
#START
fn1() { echo "You are running fn1"; }
#STOP
export var2="More unwanted name space pollution"
#START
fn2() { echo "You are running fn2"; }
#STOP
export var3="More unwanted name space pollution"
And then source the file like this:
source <(sed -n '/#START/,/#STOP/p' script)
Create a standalone shell script that does this:
it takes 2 arguments, the file name and the function name
it sources the input file first
it then uses declare -f function_name to print the function's definition
In your code you can include functions like this:
eval "$(./importfunctions.sh filename functionname)"
What is happening here:
Step 1 reads the file and sources it in a new shell environment, then echoes the function declaration.
Step 2 evals that declaration into our main code.
So the final result is as if we had written just that function in our main script.
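A minimal sketch of that helper, assuming the behavior described above (the names importfunction, demo_lib.sh and greet are made up for the demo; the answer does not show actual code):

```shell
#!/bin/bash
# Source the file in a subshell so nothing leaks into the caller's
# scope, then print only the wanted function's definition.
importfunction () {
    ( source "$1" && declare -f "$2" )
}

# A library file with one function plus unwanted top-level code.
cat > /tmp/demo_lib.sh <<'EOF'
libvar="unwanted pollution"
greet () {
    echo "hello $1"
}
EOF

# Pull in just greet(); libvar never reaches this shell.
eval "$(importfunction /tmp/demo_lib.sh greet)"
greet world
```

Calling `greet world` afterwards prints "hello world", while $libvar stays unset in the importing shell.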
When the function bodies in the script are indented (so that only the closing } sits at the start of a line) and every function starts with the keyword function, you can include specific functions without changing the original files:
largeshell.sh
#!/bin/bash
function demo1 {
echo "d1"
}
function demo2 {
echo "d2"
}
function demo3 {
echo "d3"
}
function demo4 {
echo "d4"
}
echo "Main code of largeshell... "
demo2
Now source demo1() and demo3(), leaving demo4() undefined:
source <(sed -n '/^function demo1 /,/^}/p' largeshell.sh)
source <(sed -n '/^function demo3 /,/^}/p' largeshell.sh)
demo1
demo4
Or source all functions in a loop:
for f in demo1 demo3; do
echo sourcing $f
source <(sed -n '/^function '$f' /,/^}/p' largeshell.sh)
done
demo1
demo4
You can make it fancier when you source a special script that will:
grep all strings starting with largeshell., like largeshell.demo1
generate functions like largeshell.demo1 that call demo1
and source all functions that are called.
Your new script will look like
source function_includer.sh
largeshell.demo1
largeshell.demo4
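A sketch of what such a function_includer could look like (every name here, make_wrappers and the /tmp paths included, is an assumption; the steps above are only described in prose):

```shell
#!/bin/bash
# A library in the style shown above.
cat > /tmp/largeshell.sh <<'EOF'
function demo1 {
  echo "d1"
}
function demo4 {
  echo "d4"
}
EOF

# A caller that uses the prefixed names.
cat > /tmp/newscript.sh <<'EOF'
largeshell.demo1
largeshell.demo4
EOF

# For every largeshell.<fn> mentioned in the caller, define a wrapper
# that sources just <fn> from the library and then invokes it.
make_wrappers () {
    local caller=$1 lib=$2 fn
    for fn in $(grep -o 'largeshell\.[A-Za-z0-9_]*' "$caller" |
                sed 's/^largeshell\.//' | sort -u); do
        eval "largeshell.$fn () {
            source <(sed -n '/^function $fn /,/^}/p' '$lib')
            $fn \"\$@\"
        }"
    done
}

make_wrappers /tmp/newscript.sh /tmp/largeshell.sh
largeshell.demo1
largeshell.demo4
```

The wrappers are lazy: the library's main code never runs, and each function body is extracted only when its wrapper is first called.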
EDIT:
You might want to reconsider your requirements.
The solution above is not only slow, it also makes life hard for the people who maintain largeshell.sh. As soon as they refactor their code or replace it with something in another language, your code has to be refactored, tested and deployed as well.
A better path is extracting the functions from largeshell.sh into smaller files ("modules") and putting them in a shared directory (shlib?).
With names such as sqlutil.sh, datetime.sh, formatting.sh, mailstuff.sh and comm.sh you can pick the include files you need (and largeshell.sh can include them all).
It's been a while and it would appear that my original solution is the best one out there. Thanks for the feedback.

Only use functions of a .ksh script in another script

I have a script a.sh which has :
a() {
echo "123"
}
echo "dont"
Then I have other script b.sh which has :
b() {
echo "345"
}
All I want to do is to use a in b, but when I source a.sh I don't want it to run anything: not a() and not the echo "dont" line.
I just want to source it for now.
So I put source a.sh in b.sh.
But it doesn't work.
The reason for sourcing is so that I can call any of its functions whenever I want to.
If I do . ./a.sh in b.sh it runs all the top-level code in a.sh.
One approach which will work on any POSIX-compliant shell is this:
# put function definitions at the top
a() {
echo "123"
}
# divide them with a conditional return
[ -n "$FUNCTIONS_ONLY" ] && return
# put direct code to execute below
echo "dont"
...and in your other script:
FUNCTIONS_ONLY=1 . other.sh
Make a library of common functions in a file called functionLib.sh like this:
#!/bin/sh
a(){
echo Inside a, with $1
}
b(){
echo Inside b, with $1
}
Then in script1, do this:
#!/bin/sh
. functionLib.sh # Source in the functions
a 42 # Use one
b 37 # Use another
and in another script, script2 re-use the functions:
#!/bin/sh
. functionLib.sh # Source in the functions
a 23 # Re-use one
b 24 # Re-use another
I have adopted a style in my shell scripts that allows me to design every script as a potential library, making it behave differently when it is sourced (with . .../path/script) and when it is executed directly. You can compare this to the python if __name__ == '__main__': trick.
I have not found a method that is portable across all Bourne shell descendants without explicitly referring to the script's name, but this is what I use:
a() {
echo a
}
b() {
echo b
}
(program=xyzzy
set -u -e
case $0 in
*${program}) : ;;
*) exit;;
esac
# main
a
b
)
The rules for this method are strictly:
Start a section with nothing but functions. No variable assignments or any other activity.
Then, at the very end, create a subshell ( ... ).
The first action inside the subshell tests whether it's being sourced. If so, exit from the subshell. If not, run the main code.
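In bash specifically, a simpler sketch of the same sourced-vs-executed test compares $0 with BASH_SOURCE (bash-only, so not portable across all Bourne descendants; the /tmp/libdemo.sh name is made up):

```shell
#!/bin/bash
cat > /tmp/libdemo.sh <<'EOF'
a () { echo a; }
b () { echo b; }
# Run main code only when executed directly, not when sourced:
# bash's analogue of python's  if __name__ == '__main__':
if [ "${BASH_SOURCE[0]}" = "$0" ]; then
    a
    b
fi
EOF
chmod +x /tmp/libdemo.sh

bash /tmp/libdemo.sh                  # executed: prints a then b
bash -c 'source /tmp/libdemo.sh'      # sourced: prints nothing
bash -c 'source /tmp/libdemo.sh; b'   # but the functions are defined
```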

passing parameters from external file (template) to puppet script

Hi, I'm trying to execute an exe file using a puppet script. My exe file accepts 3 parameters, like param1, param2 and param3. All I want is to pass these parameters through an external file. How can I do this?
Here is my sample code:
exec { "executing exe file":
  command => 'copyfile.exe "DestinationPath" "sourcefilename" "destinationfilename" ',
}
All I want is to pass all these values from an external file and use them here.
Can someone help me to resolve this?
Here is my trial:
Here is my directory structure:
puppet\modules\mymodule\manifests\myfile.pp and
puppet\modules\mymodule\templates\params.erb
and my erb file contains the parameter values, e.g.: d:\test1.txt e:\test1.txt testfilename
$myparams = template("mymodule/params.erb")
exec { "executing exe file":
command => '$myparams',
}
EDIT:
The root of the problem was trying to call the module manifest directly, thus the template lookup failed. The solution was not to use a module and specify the full template path.
There are 2 main ways to go about it:
Declare the variables in scope
# acceptable for a throwaway manifest
$path = "DestinationPath"
$source = "sourcefilename"
$destination = "destinationfilename"
exec { "executing exe file":
  command => "copyfile.exe ${path} ${source} ${destination}",
}
Wrap it in a parameterized class/defined type
# parameterized class, included only once
class executing_exe_file ($path, $source, $destination) {
  exec { "executing exe file":
    command => "copyfile.exe ${path} ${source} ${destination}",
  }
}
OR
# defined resource, can be repeated multiple times
define executing_exe_file ($path, $source, $destination) {
  exec { "executing exe file":
    command => "copyfile.exe ${path} ${source} ${destination}",
  }
}
THEN
executing_exe_file { "executing exe file":
  path        => "DestinationPath",
  source      => "sourcefilename",
  destination => "destinationfilename",
}
Also as a side note, you have to make sure copyfile.exe is fully qualified.

How can I change directory in a shell script with variables

This script's file name is "1sr" and I run it in a terminal with ". 1sr".
I want to change directory to "home/byram/workspace/1/src/com/seri/*"
#!bin/sh
f=$(basename $0 | tr -d "sr")
pth="/home/byram/workspace/$f"
my1=$(ls $pth/src/com/seri)
cd $etc/src/com/seri/$my1
After the ". 1sr" command, the f variable is set to "bash".
How can I fix it?
I would suggest a function called "prj" to put in your .bashrc:
prj () {
cd /home/byram/workspace/"$1"/src/com/seri
}
Then use it like this
prj 1 # Switch to ...1/src/com/seri
prj 2 # Switch to ...2/src/com/seri
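As for why f was set to "bash": when a file is sourced, $0 stays the name of the calling shell, not the file; bash's BASH_SOURCE gives the sourced file itself. A small demonstration (the /tmp/1sr path is made up for the demo):

```shell
#!/bin/bash
# When sourced, $0 names the caller; BASH_SOURCE[0] names this file.
cat > /tmp/1sr <<'EOF'
echo "dollar0=$(basename "$0") source=$(basename "${BASH_SOURCE[0]}")"
EOF

bash -c 'source /tmp/1sr'
```

This prints dollar0=bash source=1sr, which is exactly why basename $0 produced "bash".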
I added these lines to .bashrc:
wr (){
cd /home/byram/workspace/"$1"/w
v1=$(ls /home/byram/workspace/"$1"/src/*/*)
v2=$(ls /home/byram/workspace/"$1"/src/*)
v3=$(ls /home/byram/workspace/"$1"/src/)
echo "$v3.$v2.$v1"
}
It works for any project, e.g. com.example.abc, org.samp.xyz.
Thanks to @chepner

Writing to file shell script

I have been trying to write some output to a CSV file using the method below in a shell script:
writeToResultFile()
{
resultFile="ShakeoutResult.csv"
msg=" $*"
echo "writing to resultFile..$msg"
echo $msg >> $resultFile
}
When I tried to call this method directly:
writeToResultFile "column1,column2,column3"
it worked fine and the line was written to the output file. But when I tried to call this method from another method, such as:
doProcess()
{
writeToResultFile "data1,data2,data3"
}
Nothing is written to the output file. Stepping through, I know that writeToResultFile() is invoked and the param is echoed to the console, but nothing gets appended to the output file.
Just to make sure: which shell do you use? Bash? Because it's working:
#!/bin/bash
writeToResultFile() {
msg=" $*"
echo "message: $msg"
echo $msg >> output.txt
}
doProcess()
{
writeToResultFile "function1,function2,function3"
}
writeToResultFile "main1,main2,main3"
doProcess
The output will be (cat output.txt):
main1,main2,main3
function1,function2,function3
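A hedged variant of the same functions that also survives a directory change between calls (the /tmp path and the cd are my additions, simulating one plausible cause of "nothing gets written": a relative resultFile path while the current directory changes mid-script):

```shell
#!/bin/bash
# Quote the message and pin the CSV path, so word splitting and a cd
# elsewhere cannot redirect the append to a different file.
resultFile=/tmp/ShakeoutResult.csv
: > "$resultFile"               # start fresh for the demo

writeToResultFile () {
    local msg="$*"
    echo "writing to resultFile.. $msg"
    printf '%s\n' "$msg" >> "$resultFile"
}

doProcess () {
    cd /                        # simulate a directory change mid-script
    writeToResultFile "data1,data2,data3"
}

writeToResultFile "column1,column2,column3"
doProcess
cat "$resultFile"
```

Because $resultFile is absolute, both the direct call and the call from doProcess land in the same file, in order.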