Passing a parameter to a bash script - bash

I have this simple bash script:
#!/bin/sh
(echo "AUTH xxx xxx"
sleep 3
number=0161XXXXXXX
echo "ACTI $number"
sleep 3
echo "SET $number 1 S:$number#x.x.x.x"
sleep 3
echo "STAT $number"
sleep 3
echo "QUIT") | telnet xxx.xxx 777
I want to pass the number in as a parameter when I call the script, i.e.
bash number.sh 0161XXXXXXX
How can I do that?
Thanks

Use positional parameters. You could also use $1 directly instead of storing it in a variable.
#!/bin/sh
arg=$1
(echo "AUTH xxx xxx"
sleep 3
number=$arg
echo "ACTI $number"
sleep 3
echo "SET $number 1 S:$number#x.x.x.x"
sleep 3
echo "STAT $number"
sleep 3
echo "QUIT") | telnet xxx.xxx 777

From the bash man page:
Arguments
If arguments remain after option processing, and neither the -c nor
the -s option has been supplied, the first argument is assumed to be
the name of a file containing shell commands. If bash is invoked in
this fashion, $0 is set to the name of the file, and the positional
parameters are set to the remaining arguments. Bash reads and executes
commands from this file, then exits. Bash's exit status is the exit
status of the last command executed in the script. If no commands are
executed, the exit status is 0. An attempt is first made to open the
file in the current directory, and, if no file is found, then the
shell searches the directories in PATH for the script.
So the first argument can be referred to as $1, the second as $2, and so on up to $9; beyond that you have to write ${10} with braces, or consume arguments with shift...
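For illustration, a small sketch of both approaches (the script name args.sh is hypothetical):
#!/bin/sh
echo "tenth argument: ${10}"      # braces are required past $9
shift 9                           # drop the first nine arguments
echo "tenth argument again: $1"
Calling sh args.sh a b c d e f g h i j prints j both times.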

Related

Why "bash -c" can't receive full list of arguments?

I have the following two scripts:
test.sh:
echo "start"
echo $@
echo "end"
run.sh:
echo "parameters from user:"
echo $@
echo "start call test.sh:"
bash -c "./test.sh $@"
Executing the above run.sh:
$ ./run.sh 1 2
parameters from user:
1 2
start call test.sh:
start
1
end
You can see that although I pass 2 arguments to run.sh, test.sh receives just the first one.
But if I change run.sh to the following, which just drops the bash -c:
echo "parameters from user:"
echo $@
echo "start call test.sh:"
./test.sh $@
The behavior becomes as expected and test.sh receives both arguments:
$ ./run.sh 1 2
parameters from user:
1 2
start call test.sh:
start
1 2
end
Question:
For some reason I have to use bash -c in my full scenario, so could you kindly tell me what's wrong here? How can I fix it?
It is because the quoting of the arguments is in the wrong place. When you run a sequence of commands inside bash -c, think of it as a full shell script in itself, to which you need to pass arguments accordingly. From the bash manual:
If Bash is started with the -c option (see Invoking Bash), then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the filename used to invoke Bash, as given by argument zero.
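A quick way to see this (the values are hypothetical):
bash -c 'printf "0=%s 1=%s\n" "$0" "$1"' zero one
# prints: 0=zero 1=one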
But notice your command below,
bash -c "./test.sh $@"
where your expectation was to pass the arguments on to test.sh inside '..'; instead, the $@ inside double quotes is expanded prematurely by the calling shell and undergoes word-splitting, so that only the first argument stays in the command string and the rest become separate arguments to bash -c (starting with $0).
But even when you fix that by using single quotes as below, it still can't work, because the contents passed to -c are evaluated in their own shell context and need arguments passed explicitly:
set -- 1 2
bash -c 'echo $@' # Both the cases still don't work, as the script
bash -c 'echo "$@"' # inside '-c' is still not passed any arguments
To fix the above, you need to pass the arguments explicitly to the contents inside -c, as below. The _ (underscore) character is a conventional placeholder for $0, the name of the shell invoked to execute the script (in this case bash). More at Bash Variables in the manual:
set -- 1 2
bash -c 'printf "[%s]\n" "$@"' _ "$@"
[1]
[2]
So to fix your script, pass the arguments in run.sh as
bash -c './test.sh "$@"' _ "$@"
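Putting it together, run.sh would become something like this sketch:
echo "parameters from user:"
echo $@
echo "start call test.sh:"
bash -c './test.sh "$@"' _ "$@"
With this change, ./run.sh 1 2 should make test.sh print 1 2 as expected.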
Besides the accepted one, I found another solution just now. If I add -x when calling run.sh, I see the following:
$ bash -x ./run.sh 1 2
+ echo 'parameters from user:'
parameters from user:
+ echo 1 2
1 2
+ echo 'start call test.sh:'
start call test.sh:
+ bash -c './test.sh 1' 2
start
1
end
So it looks like bash -c "./test.sh $@" is interpreted as bash -c './test.sh 1' 2.
Inspired by this, I tried using $* in place of $@; inside the double-quoted string it joins all the params into the single command string, and the following also works well:
run.sh:
echo "parameters from user:"
echo $*
echo "start call test.sh:"
bash -c "./test.sh $*"
Execution:
$ bash -x ./run.sh 1 2
+ echo 'parameters from user:'
parameters from user:
+ echo 1 2
1 2
+ echo 'start call test.sh:'
start call test.sh:
+ bash -c './test.sh 1 2'
start
1 2
end
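One caveat with the $* variant: embedding $* in the command string makes the inner shell re-split the arguments, so arguments containing spaces break. A quick sketch of the difference:
set -- "a b" c
bash -c 'printf "[%s]\n" "$@"' _ "$@"   # prints [a b] and [c]
bash -c "printf '[%s]\n' $*"            # prints [a], [b] and [c]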

How do I programmatically execute a carriage return in bash?

My main file is main.sh:
cd a_folder
echo "executing another script"
source anotherscript.sh
cd ..
#some other operations.
anotherscript.sh:
pause(){
read -p "$*"
}
echo "enter a number: "
read number
#some operation
pause "Press enter to continue..."
I wanted to skip the pause command. But when I do:
echo "/n" | source anotherscript.sh
It doesn't allow me to enter the number. I want the newline to be supplied in a way that still lets the user enter a number but skips the pause statement.
PS: can't do any changes in anotherscript.sh. All changes to be done in main.sh.
Try
echo | source anotherscript.sh
Your approach does not work because the script to be sourced expects two lines from stdin: first a line containing a number, then an empty line (which satisfies the pause). Hence you would have to feed two lines, the number and the empty line, to the script. If you still want to get the number from your own stdin, you have to use a read command first:
echo "executing another script"
echo "enter a number: "
read number
printf "$number\n\n" | source anotherscript.sh
But this still has some danger lurking: because of the pipe, the source command is executed in a subshell; hence, any changes to the environment performed by anotherscript.sh won't be visible in your shell.
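A quick illustration of that danger in bash (default settings, without lastpipe):
number=unset
echo 42 | read number   # read runs in a subshell here
echo "$number"          # still prints: unset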
A workaround would be to put the number-reading logic outside of main.sh:
# This is script supermain.sh
echo "executing another script"
echo "enter a number: "
read number
printf "$number\n\n"|bash main.sh
where in main.sh, you simply keep your source anotherscript.sh without any piping.
As user1934428 comments, a bash pipeline causes the cascading commands to be executed in subshells, and variable modifications made there are not reflected in the current process.
To change this behavior, you can set lastpipe with the shopt builtin. Then, when job control is not active (as in a script), bash executes the last command of the pipeline in the current shell (as ksh does, for example).
Then would you please try:
main.sh:
#!/bin/bash
shopt -s lastpipe # this changes the job control
read -p "enter a number: " x # ask for the number in main_sh instead
cd a_folder
echo "executing another script"
echo "$x" | source anotherscript.sh > /dev/null
# anotherscript.sh is executed in the current process
# unnecessary messages are redirected to /dev/null
cd ..
echo "you entered $number" # check the result
#some other operations.
which will properly print the value of number.
Alternatively, you can write it as:
#!/bin/bash
read -p "enter a number: " x
cd a_folder
echo "executing another script"
source anotherscript.sh <<< "$x" > /dev/null
cd ..
echo "you entered $number"
#some other operations.

AppleScript blocks shell script cmd when writing to pipe

The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
echo "killing blocking cmd..."
kill -KILL $cpid
# non zero status to inform launch script of problem...
exit 1
fi
But if the sleep command (line 2) is swapped for the echo command (line 3), together with the corresponding change in the if statement, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe...
OK, the following will do the trick. It basically kills the job using its job id. Since there is only one, it's the current job, %%.
I was lucky that I came across this answer or it would have driven me crazy :)
#!/bin/sh
echo "$1" > "$2" &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo "Taking too long. Killed..."
exit 1
fi
exit 0
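For reference, a hypothetical invocation, with the pipe path taken from the question (saving the script as writer.sh is my assumption):
mkfifo /tmp/apipe                       # create the named pipe first
./writer.sh "hello world" /tmp/apipe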

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my Debian env.
Easy as
silent "npm search 1234556"
This works, but not entirely.
As you can see, I commented the section where I'm having trouble.
This line:
$($cmdLine) &
doesn't hide the application output, but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>/dev/null"
fi
# not working
$($cmdLine) &
# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>&1"
fi
eval "$cmdLine &"
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh
if [ -z "$1" ]; then
exit
fi
exec >/dev/null
if [ -n "$2" ]; then
exec 2>&1
fi
exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if that is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
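Assuming the script is saved as silent.sh (the name is mine), usage would look like:
./silent.sh "npm search 1234556"          # hide stdout only
./silent.sh "npm search 1234556" quiet    # any second argument hides stderr too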
& means that you're running the command in the background, whereas
$1 >/dev/null 2>/dev/null
means that you redirect stdout and stderr to /dev/null, a sort of garbage bin, and that's why you don't see anything.
Note that the " quotes in cmdLine="$1 >/dev/null" are what you want here: $1 is expanded immediately, when the variable is assigned, which is what bash -c "$cmdLine" below needs. (With ' instead of ", the literal $1 would only be expanded by the inner shell, where it is unset.)
You can build your command line in a var and run a bash with it in the background:
bash -c "$cmdLine"&
Note that it might be useful to store the program's output (stdout/stderr) somewhere, instead of throwing it away in /dev/null.
In addition, why do you need errorsRedirect? It is assigned but never used.
You can even add a wait at the end, just to be safe... if you want:
#!/bin/bash
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
[ ! "$1" ] && echo "Please, don't joke me..." && exit 1
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
[ "$2" ] && cmdLine+=" 2>/dev/null"
echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait

Shell scripting: die on any error

Suppose a shell script (/bin/sh or /bin/bash) contains several commands. How can I cleanly make the script terminate if any of the commands exits with a failing status? Obviously, one can use if blocks and/or callbacks, but is there a cleaner, more concise way? Using && is not really an option either, because the commands can be long, and the script can contain non-trivial things like loops and conditionals.
With standard sh and bash, you can
set -e
It will
$ help set
...
-e Exit immediately if a command exits with a non-zero status.
It also works (from what I could gather) with zsh, and it should work for any Bourne shell descendant.
With csh/tcsh, you have to launch your script with #!/bin/csh -e
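For example, a minimal sketch:
#!/bin/sh
set -e
false                  # the script exits here with status 1
echo "never reached"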
Maybe you could use:
$ <any_command> || exit 1
You can check $? to see what the most recent exit code is, e.g.:
#!/bin/sh
# A Tidier approach
check_errs()
{
  # Function. Parameter 1 is the return code
  # Para. 2 is text to display on failure.
  if [ "${1}" -ne "0" ]; then
    echo "ERROR # ${1} : ${2}"
    # as a bonus, make our script exit with the right error code.
    exit "${1}"
  fi
}
### main script starts here ###
grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"
