How to keep verbosity when sourcing from a bash script

I need to trace the run of a bash script that uses a source call, something like:
#!/bin/bash
some code here
source script_b.sh
more code here
I run:
$ bash -x script_a.sh
and I get,
+ some echo here
+ source script_b.sh
+ some more echo here
But all the echoes are from script_a.sh. All the code from script_b.sh is hidden, so I cannot trace what is really happening.
Is there any way I can check the execution of script_b.sh within script_a.sh?

You could try running bash -x script_b.sh inside the parent script.
Edit:
This worked for me. If you run the parent script with bash -x, you will see everything for both scripts. set -x sets the debug flag for the environment...in the script? I'm not sure, and fifo is still magic to me.
echo "start of script"
set -x
mkfifo fifo
cat /dev/null < fifo | fifo > source .bash_profile
rm fifo
echo "end of script"

Related

Redirect copy of stdin to file from within bash script itself

In reference to https://stackoverflow.com/a/11886837/1996022 (whose title I also shamelessly stole), where the question is how to capture a script's output, I would like to know how I can additionally capture the script's input, mainly so that scripts which also take user input produce complete logs.
I tried things like
exec 3< <(tee -ia foo.log <&3)
exec <&3 <(tee -ia foo.log <&3)
But nothing seems to work. I'm probably just missing something.
Maybe it'd be easier to use the script command? You could either have your users run the script with script directly, or do something kind of funky like this:
#!/bin/bash
main() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}

if [ "$1" = "--log" ]; then
    # If the first argument is "--log", shift the arg
    # out and run main
    shift
    main "$@"
else
    # If run without --log, re-run this script within a
    # script command so all script I/O is logged
    script -q -c "$0 --log $*" test.log
fi
Unfortunately, you can't pass a function to script -c, which is why the double-call is necessary in this method.
If it's acceptable to have two scripts, you could also have a user-facing script that just calls the non-user-facing script with script:
script_for_users.sh
--------------------
#!/bin/sh
script -q -c "/path/to/real_script.sh" <log path>
real_script.sh
---------------
#!/bin/sh
<Normal business logic>
It's simpler:
#! /bin/bash
tee ~/log | your_script
The wonderful thing is your_script can be a function, command or a {} command block!
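For the original question (capturing input as well as output from inside the script), a sketch using bash process substitution; foo.log is the hypothetical log file from the question, and the interleaving of input and output in the log may not be exact around interactive prompts:
#!/bin/bash
# copy stdout and stderr into the log on the way out...
exec > >(tee -ia foo.log) 2>&1
# ...and copy stdin into the same log on the way in
exec < <(tee -ia foo.log)

read -r -p "Input string: "
echo "User input: $REPLY"
Note that the tee feeding stdin keeps reading the terminal until it sees EOF, so a stray tee process can linger after the script exits.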

How to capture shell script output

I have a Unix shell script. I have put -x in the shebang to see all the execution steps. Now I want to capture these in one log file on a daily basis.
Please see the script below.
#!/bin/ksh -x
logfile=path.log.date
print "copying file" | tee $logfile
scp -i key source destination | tee -a $logfile
exit 0
The first line of a shell script is known as the shebang; it indicates which interpreter is to execute the script. Because it starts with #, the interpreter itself treats that line as a comment.
To capture the output, redirect it while running the script:
ksh -x scriptname >> output_file
Note: this will log what your script is doing line by line.
There are two cases: if ksh is your shell, you do the IO redirection accordingly; if you use some other shell to execute a .ksh script, the redirection is done by that shell instead. The following method should work for most shells.
$ cat somescript.ksh
#!/bin/ksh -x
printf "Copy file \n";
printf "Do something else \n";
Run it:
$ ./somescript.ksh 1>some.log 2>&1
some.log will contain,
+ printf 'Copy file \n'
Copy file
+ printf 'Do something else \n'
Do something else
In your case, there is no need to specify a logfile and/or tee inside the script. The script would look something like this:
#!/bin/ksh -x
printf "copying file\n"
scp -i key user@server /path/to/file
exit 0
Run it:
$ ./myscript 1>/path/to/logfile 2>&1
1>/path/to/logfile sends stdout into the logfile, and 2>&1 then points stderr at the same place, so both streams end up in the logfile. Note that the order of the two redirections matters.
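A quick sketch of why the order matters (same hypothetical paths as above):
# Correct: stdout goes to the logfile first, then stderr follows it there
./myscript 1>/path/to/logfile 2>&1
# Wrong order: stderr is pointed at the terminal (stdout's old target)
# before stdout moves, so the -x trace still appears on screen
./myscript 2>&1 1>/path/to/logfile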
I would prefer to explicitly redirect the output (including stderr with 2>, because set -x sends its output to stderr).
This keeps the shebang short, and you don't have to cram the redirection and filename-building into it.
#!/bin/ksh
logfile=path.log.date
exec >> $logfile 2>&1 # redirecting all output to logfile (appending)
set -x # switch on debugging
# now start working
echo "print something"

bash script copy output from set -x to file, called from within script rather than using tee

I am wondering if there is a way to write set -x output to a file from within a script, rather than call a tee from the command prompt.
For example, I usually use myScript.sh 2>&1 | tee mylog.log from the command prompt. This copies the set -x output to the log file, as I expect.
Is there a way to internalise this within myScript.sh, behind a flag that can be turned off when I do not need to debug, so that I run only myScript.sh from the command prompt?
Inside your script, place this at the top:
#!/bin/bash
[[ $1 == "debug" ]] && { exec 2>err.txt; set -x; }
# rest of the script
Now when you call your script as:
./myScript.sh debug
You will get a file err.txt containing the output of set -x (along with any other errors).
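If you want only the trace in the file while normal output and errors stay on the terminal, bash 4.1 and later let you point the trace at a dedicated file descriptor via BASH_XTRACEFD. A sketch, with trace.log and fd 19 as arbitrary choices:
#!/bin/bash
if [[ $1 == "debug" ]]; then
    exec 19>trace.log      # open a dedicated file descriptor for the trace
    BASH_XTRACEFD=19       # set -x output now goes to fd 19 instead of stderr
    set -x
fi
echo "normal output still goes to the terminal"
# rest of the script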

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it behind a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware that if I change it to work more like time, e.g. my_notify longrunning_command... my problem would be solved, but I actually like that I can tack it onto the end of a command, and I fear the complications of this second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
    echo "exit code: $?"
    echo "PPID: $PPID"
}
Then source that file from your shell startup files. Since the function runs within your interactive shell, you may want to use $$ rather than $PPID.
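For example, assuming you saved the function in ~/.my_notify.sh (a hypothetical path):
# in ~/.bashrc or ~/.zshrc:
source ~/.my_notify.sh
After opening a new shell, my_notify is available as a function and runs in the current shell, so $? is still intact when it executes.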
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
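A sketch of the environment-variable approach: the calling shell expands $? before my_notify starts, so the child process sees the value in its environment (the name RC is arbitrary):
#!/bin/sh
# my_notify: read the status from the environment instead of $?
echo "exit code: ${RC:-unset}"
echo "PPID: $PPID"
Invoke it as:
ls nonexisting_file; RC=$? my_notify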
One method to implement this could be to use a heredoc (the EOF tag) in a master script which generates your my_notify script.
#!/bin/bash
if [ -f my_notify ] ; then
    rm -f my_notify
fi
if [ -f my_temp ] ; then
    rm -f my_temp
fi
retval=$(ls non_existent_file &> /dev/null; echo $?)
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
# Unquoted EOF so that $retval and $ppid are expanded when my_notify is written
cat >> my_notify << EOF
#!/bin/bash
echo "exit code: $retval"
echo "PPID = $ppid"
EOF
sh my_notify
You can refine this script for your purpose.

stop a calling script upon error

I have 2 shell scripts, namely script A and script B.
Both of them use set -e, telling them to stop upon error.
However, when script A calls script B, and script B hits an error and stops, script A doesn't stop.
How can I stop the mother script when the child script dies?
It should work as you'd expect. For example:
In mother.sh:
#!/bin/bash
set -ex
./child.sh
echo "you should not see this (a.sh)"
In child.sh:
#!/bin/bash
set -ex
ls &> /dev/null # good cmd
ls /path/that/does/not/exist &> /dev/null # bad cmd
echo "you should not see this (b.sh)"
Calling mother.sh:
[me#home]$ ./mother.sh
+ ./child.sh
+ ls
+ ls /path/that/does/not/exist
Why is it not working for you?
One possible situation where it won't work as expected is if you specified -e on the shebang line (#!/bin/bash -e) and then passed the script directly to bash, which treats the shebang as a comment.
For example, if we change mother.sh to:
#!/bin/bash -ex
./child.sh
echo "you should not see this (a.sh)"
Notice how it behaves differently depending on how you call it:
[me#home]$ ./mother.sh
+ ./child.sh
+ ls
+ ls /path/that/does/not/exist
[me#home]$ bash mother.sh
+ ls
+ ls /path/that/does/not/exist
you should not see this (mother.sh)
Explicitly calling set -e within the script will solve this problem.
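If you'd rather not rely on set -e propagating at all, a sketch of the explicit alternative: check the child's status yourself and exit with the same code.
#!/bin/bash
./child.sh || exit $?
echo "only reached if child.sh succeeded"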
