How can I monitor a bash script? - bash

I am running a bash script that takes hours. I was wondering if there is a way to monitor what it is doing: which part of the script is currently running, how long it has taken so far, and if it crashes, at what line it stopped working, etc. I just want to receive feedback from the script. Thanks!!!

From the man page for bash:
set -x
After expanding each simple command, for command, case command, select command, or arithmetic for command, display the expanded value of PS4, followed by the command and its expanded arguments or associated word list.
Add these lines to the start of your script:
export PS4='+{${BASH_SOURCE}:$LINENO} '
set -x
Example,
#!/bin/bash
export PS4='+{${BASH_SOURCE}:$LINENO} '
set -x
echo Hello World
Result:
+{helloworld.sh:4} echo Hello World
Hello World
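If you also want to know how long the run took and roughly where it failed, one way (a minimal sketch building on the same idea, not part of the answer above; the file name trace.log is made up) is to send the trace to a file and add traps:
#!/bin/bash
export PS4='+{${BASH_SOURCE}:$LINENO} '
exec 2>trace.log     # the set -x trace goes to stderr, so capture stderr in a file
set -x
trap 'echo "failed near line $LINENO after ${SECONDS}s" >>trace.log' ERR
trap 'echo "finished after ${SECONDS}s" >>trace.log' EXIT
echo Hello World
SECONDS is a built-in bash counter of seconds since the script started, and the ERR trap fires when a command fails.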

Make a status or log file. For example, add lines like this inside your script:
echo "$(date) - OK" >> script.log
Or, for real monitoring, you can use strace on Linux to watch its system calls. Example:
$ while true ; do sleep 5 ; done &
[1] 27190
$ strace -p 27190
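Two small follow-ups (a sketch; <pid-of-script> is a placeholder for whatever ps shows for your script): you can watch the log file grow from another terminal, and if you attach strace to the script itself, -f follows the commands it spawns and -e trace=execve shows only the programs being launched:
$ tail -f script.log
$ strace -f -p <pid-of-script> -e trace=execve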

Related

Bash script not ending because of the background processes

I have a bash script as given below. It runs a python script with different arguments, each one as a background process (note that I have used &):
#!/bin/bash
declare -a arr=("arg1" "arg2" "arg3")
for i in "${arr[#]}"
do
echo "$i"
python3 test.py $i &
echo "hi"
done
exit
The test.py file is as shown below:
import sys
print('Argument List:', str(sys.argv))
I tried to run the bash script with the command ./bash_script_test.sh.
The output is also right, but the script just doesn't end running. Plus the python code's output starts on a new command line. Refer below for the output.
arg1
hi
arg2
hi
arg3
hi
[root@csit-openstack1 risav]# Argument List: ['test.py', 'arg2']
Argument List: ['test.py', 'arg3']
Argument List: ['test.py', 'arg1']
Why is a new command line coming up and why is the shell script not exiting? Is it because of the use of & ? If yes, can somebody explain?
Take a cup of red color and a cup of green color and pour them into the same bucket. The result is a brown mess. The same happens with your terminal.
You have two processes, the foreground and the background process. Both write at the same time to the same terminal. The result is a mess. Background processes should write to log files instead.
Replace the line
python3 test.py $i &
with
python3 test.py $i > $i.log &
to give each background process its own log file.
If you want to merge the different sources, you have to use a tool like Syslog.
BTW: the script is ending. The last thing it does in the loop is print "hi", and your output shows "hi" three times.
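Putting it together, a minimal sketch of the fixed loop (the trailing wait is an optional addition, not required by the answer; it just keeps the prompt from coming back before the background jobs are done):
#!/bin/bash
declare -a arr=("arg1" "arg2" "arg3")
for i in "${arr[@]}"
do
    echo "$i"
    python3 test.py "$i" > "$i.log" 2>&1 &   # each job writes to its own log, stderr included
    echo "hi"
done
wait    # optional: block until all background jobs have finished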

Logging into server (ssh) with bash script

I want to log into server based on user's choice so I wrote bash script. I am totally newbie - it is my first bash script:
#!/bin/bash
echo -e "Where to log?\n 1. Server A\n 2. Server B"
read to_log
if [ $to_log -eq 1 ] ; then
echo `ssh user@ip -p 33`
fi
After executing this script I am able to enter a password, but after that nothing happens.
If someone could help me solve this problem, I would be grateful.
Thank you.
The problem with this script is the contents of the if statement. Replace:
echo `ssh user@ip -p 33`
with
ssh user@ip -p 33
and you should be good. Here is why:
Firstly, the use of back ticks is called "command substitution". Back ticks have been deprecated in favor of $().
Command substitution tells the shell to create a sub-shell, execute the enclosed command, and capture the output for assignment/use elsewhere in the script. For example:
name=$(whoami)
will run the command whoami, and assign the output to the variable name.
The enclosed command has to run to completion before the assignment can take place, and during that time the shell is capturing the output, so nothing is displayed on the screen.
In your script, the echo command will not display anything until the ssh command has completed (i.e. the sub-shell has exited), which never happens here: the interactive ssh session's output is being captured instead of shown, so the user never sees the remote prompt and has no idea the session is waiting for input.
You have no need to capture the output of the ssh command, so there is no need to use command substitution. Just run the command as you would any other command in the script.
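For reference, a minimal corrected sketch of the whole script (user, ip and port 33 are the placeholders from the question):
#!/bin/bash
echo -e "Where to log?\n 1. Server A\n 2. Server B"
read to_log
if [ "$to_log" -eq 1 ] ; then
    ssh user@ip -p 33    # run ssh directly; no command substitution, no echo
fi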

"Set echo" doesn't seem to show code in tcsh script

Please nothing in the realms of "Why are you using TCSH?". I have my reasons.
I'm trying to debug a tcsh script, but the options "set echo" and "set verbose" don't actually seem to show the code that I'm trying to debug.
Per this question, I tried "set echo" and "set verbose" in tcsh. I then ran this script 'test.tcsh':
echo "Hello world"
foo=1
bar=2
foobar=$(expr $foo + $bar)
echo $foobar
It returns the following output:
test.tcsh
test.tcsh
Hello world
3
history -S
history -M
So it shows clearly the output of the code. However, what I want to see is the code itself - the echo, the call to expr and so on. In bash, set -xv would do what I want, but it's seemingly not working here.
Anything I'm missing?
To be sure your script is run by the tcsh shell and to get it showing the code, simply add the following line as the first line of your script :
#!/bin/tcsh -v
This will make your script run under the tcsh shell and set tcsh to echo each command of the script.
For reference, the actual script in your question doesn't seem to be a tcsh script; see the comment under your question.
EDIT: To debug without altering the script, you can also simply launch the tcsh shell with the -v parameter followed by the script filename :
$ /bin/tcsh -v test.tcsh
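For comparison, since the script in the question appears to be written in bash syntax anyway (see the note above), the equivalent trace under bash would simply be:
$ bash -xv test.tcsh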

Getting last executed command name in bash script

In a bash script I want to get the name of the last command executed in terminal and store it in the variable for later use. I know that !:0 doesn't work in bash script, and I'm looking for some replacement of it.
For example:
#user enters pwd
> pwd
/home/paul
#I call my script and it show the last command
> ./last_command
pwd
This didn't help; it just prints an empty line:
getting last executed command from script
Tell the shell to continuously append commands to the history file:
export PROMPT_COMMAND="history -a"
Put the following into your script:
#!/bin/bash
echo "Your command was:"
tail -n 1 ~/.bash_history
For what it's worth, this is the working setup I have in my .bashrc:
export HISTCONTROL=ignoredups:erasedups
Then do one of these, on the console or in a script respectively:
history 2
cm=$(history 1)
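On the console, the output of history 1 still carries a leading event number; a small sketch for stripping it (the sed expression is just one way to do it):
cm=$(history 1 | sed 's/^ *[0-9]* *//')   # drop the leading history number
echo "last command: $cm"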

Shell: How to call one shell script from another shell script?

I have two shell scripts, a.sh and b.sh.
How can I call b.sh from within the shell script a.sh?
There are a couple of different ways you can do this:
Make the other script executable with chmod a+x /path/to/file (Nathan Lilienthal's comment), add the #!/bin/bash line (called a shebang) at the top, and add the directory containing the file to the $PATH environment variable. Then you can call it as a normal command;
Or call it with the source command (which is an alias for .), like this:
source /path/to/script
Or use the bash command to execute it, like:
/bin/bash /path/to/script
The first and third approaches execute the script as another process, so variables and functions in the other script will not be accessible.
The second approach executes the script in the first script's process, and pulls in variables and functions from the other script (so they are usable from the calling script).
With the second method, if you use exit in the second script, it will exit the first script as well; this does not happen with the first and third methods.
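A tiny sketch that makes the difference visible (the file names b.sh and a.sh are made up):
# b.sh
FOO=42

# a.sh
#!/bin/bash
bash b.sh
echo "after bash:   FOO='$FOO'"    # prints an empty value: b.sh ran in another process
source b.sh
echo "after source: FOO='$FOO'"    # prints 42: b.sh ran in this shell's process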
Check this out.
#!/bin/bash
echo "This script is about to run another script."
sh ./script.sh
echo "This script has just run another script."
There are a couple of ways you can execute the script from within another script:
#!/bin/bash
SCRIPT_PATH="/path/to/script.sh"
# Here you execute your script
"$SCRIPT_PATH"
# or
. "$SCRIPT_PATH"
# or
source "$SCRIPT_PATH"
# or
bash "$SCRIPT_PATH"
# or
eval '"$SCRIPT_PATH"'
# or
OUTPUT=$("$SCRIPT_PATH")
echo $OUTPUT
# or
OUTPUT=`"$SCRIPT_PATH"`
echo $OUTPUT
# or
("$SCRIPT_PATH")
# or
(exec "$SCRIPT_PATH")
All of these work correctly even when the path contains spaces!
The answer which I was looking for:
( exec "path/to/script" )
As mentioned, exec replaces the shell without creating a new process. However, we can run it in a subshell, which is done using the parentheses.
EDIT:
Actually ( "path/to/script" ) is enough.
If you have another file in same directory, you can either do:
bash another_script.sh
or
source another_script.sh
or
. another_script.sh
When you use bash instead of source, the script cannot alter the environment of the parent script. The . command is the POSIX standard, while source is a more readable bash synonym for . (I prefer source over .). If your script resides elsewhere, just provide the path to that script. Both a relative and a full path should work.
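A quick way to see the POSIX point in practice (using dash as a stand-in for a strict POSIX shell; the script name is illustrative):
dash -c '. ./another_script.sh'        # works: . is POSIX
dash -c 'source ./another_script.sh'   # fails in dash, which has no source builtin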
It depends.
Briefly...
If you want to load variables and functions into the current shell and execute, you may use source myshellfile.sh in your code. Example:
#!/bin/bash
set -x
echo "This is an example of run another INTO this session."
source my_lib_of_variables_and_functions.sh
echo "The function internal_function() is defined into my lib."
returned_value=internal_function()
echo $this_is_an_internal_variable
set +x
If you just want to execute a file and the only thing you are interested in is the result, you can do:
#!/bin/bash
set -x
./executing_only.sh
bash i_can_execute_this_way_too.sh
bash or_this_way.sh
set +x
You can use /bin/sh to call or execute another script (via your actual script):
# cat showdate.sh
#!/bin/bash
echo "Date is: `date`"
# cat mainscript.sh
#!/bin/bash
echo "You are login as: `whoami`"
echo "`/bin/sh ./showdate.sh`" # exact path for the script file
The output would be:
# ./mainscript.sh
You are login as: root
Date is: Thu Oct 17 02:56:36 EDT 2013
First you have to include the file you call:
#!/bin/bash
. includes/included_file.sh
then you call your function like this:
#!/bin/bash
my_called_function
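In a single script, the two steps above would look like this (paths and names taken from the answer):
#!/bin/bash
. includes/included_file.sh   # pull in the function definitions
my_called_function            # now the function is available and can be called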
Simple source will help you.
For Ex.
#!/bin/bash
echo "My shell_1"
source my_script1.sh
echo "Back in shell_1"
Just add in a line whatever you would have typed in a terminal to execute the script!
e.g.:
#!/bin/bash
./myscript.sh &
If the script to be executed is not in the same directory, just use the complete path of the script.
e.g.: /home/user/script-directory/myscript.sh &
This was what worked for me, this is the content of the main sh script that executes the other one.
#!/bin/bash
source /path/to/other.sh
The top answer suggests adding a #!/bin/bash line as the first line of the sub-script being called. But even if you add the shebang, it is much faster* to run the script in a sub-shell and capture the output:
$(source SCRIPT_NAME)
This works when you want to keep running the same interpreter (e.g. from bash to another bash script) and ensures that the shebang line of the sub-script is not executed.
For example:
#!/bin/bash
SUB_SCRIPT=$(mktemp)
echo "#!/bin/bash" > $SUB_SCRIPT
echo 'echo $1' >> $SUB_SCRIPT
chmod +x $SUB_SCRIPT
if [[ $1 == "--source" ]]; then
for X in $(seq 100); do
MODE=$(source $SUB_SCRIPT "source on")
done
else
for X in $(seq 100); do
MODE=$($SUB_SCRIPT "source off")
done
fi
echo $MODE
rm $SUB_SCRIPT
Output:
~ ❯❯❯ time ./test.sh
source off
./test.sh 0.15s user 0.16s system 87% cpu 0.360 total
~ ❯❯❯ time ./test.sh --source
source on
./test.sh --source 0.05s user 0.06s system 95% cpu 0.114 total
* For example, when antivirus or security tools are running on a device, it might take an extra 100 ms to exec a new process.
pathToShell="/home/praveen/"
chmod a+x $pathToShell"myShell.sh"
sh $pathToShell"myShell.sh"
#!/bin/bash
# Here you define the absolute path of your script
scriptPath="/home/user/pathScript/"
# Name of your script
scriptName="myscript.sh"
# Here you execute your script
$scriptPath/$scriptName
# Result of script execution
result=$?
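A short follow-up sketch for acting on that exit code (the error message is illustrative):
if [ "$result" -ne 0 ]; then
    echo "$scriptName failed with exit code $result" >&2
fi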
chmod a+x /path/to/file-to-be-executed
That was the only thing I needed. Once the script to be executed is made executable like this, you (at least in my case) don't need any other extra operation like sh or ./ when calling the script.
Thanks to the comment of @Nathan Lilienthal
Assume the new file is "/home/satya/app/app_specific_env" and the file contents are as follows
#!/bin/bash
export FAV_NUMBER="2211"
Append this file reference to ~/.bashrc file
source /home/satya/app/app_specific_env
Whenever you restart the machine or log in again, try echo $FAV_NUMBER in the terminal. It will output the value.
Just in case you want to see the effect right away, run source ~/.bashrc on the command line.
There are some pitfalls when importing functions from another file.
First: you don't need to make that file executable. Better not to!
Just add
. file
to import all of its functions, and they will behave as if they were defined in your own file.
Second: you may already have a function with the same name. It will be overwritten, which is bad. If you need to keep the old definition, copy it under a new name before importing, for example by re-evaluating its definition under the new name:
eval "$(declare -f old_function_name | sed '1s/old_function_name/new_function_name/')"
and only after that do the import.
Then you can still call the old function by its new name.
Third: you can only import the complete list of functions defined in the file.
If some are not needed you can unset them, but if you redefine your own functions after the unset they will be lost; if you saved a copy under another name as described above, you can restore it afterwards.
Finally, this kind of import is error-prone and not as simple as it looks. Be careful! You may want to write a helper script to make it easier and safer.
If you only use some of the functions (not all of them), it is better to split them into different files. Unfortunately bash does not support this well. In Python, for example, and some other scripting languages, it is easy and safe to do a partial import of only the functions you need, under names of your choosing. We can hope future bash versions gain the same functionality, but for now we have to write a lot of additional code to do what you want.
Use backticks.
$ ./script-that-consumes-argument.sh `sh script-that-produces-argument.sh`
The output of the producer script is then passed as an argument to the consumer script.
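The same thing with the preferred $() form, quoted so output containing spaces is still passed as a single argument:
$ ./script-that-consumes-argument.sh "$(sh script-that-produces-argument.sh)"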
