How would I make my bash script non-interactive? - bash

Hi, I am doing a project where I must write a bash script calculator that performs multiple arithmetic operations, including raising to a power. I have done the basic part, but I have been told I must make it non-interactive, and I was wondering how to do this. My code is provided below.
clear
while [ -z "$arg1" ]
do
read -p "Enter argument1: " arg1
done
while [ -z "$arg2" ]
do
read -p "Enter argument2: " arg2
done
echo "You've entered: arg1=$arg1 and arg2=$arg2"
let "addition=arg1+arg2"
let "subtraction=arg1-arg2"
let "multiplication=arg1*arg2"
let "division=arg1 / arg2"
let "power=arg1**arg2"
echo -e "results:\n"
echo "$arg1+$arg2=$addition"
echo "$arg1-$arg2=$subtraction"
echo "$arg1*$arg2=$multiplication"
echo "$arg1/$arg2=$division"
echo "$arg1^$arg2=$power"
I was thinking of making it so that the user does not have to type in the two numbers, but I am still pretty new to bash and scripting as a whole, so I am wondering how to do this.

Use getopts to process arguments to your script. Here is an example from this site. You could replace the lines from your first line through your echo "You've entered …" with modified code from the other answer I linked to. You might want to read the PARAMETERS section of man bash, and in particular understand what the Special Parameters are (*, @, #, ?, -, $, !, and 0). You should know those cold if you are scripting bash.
An alternative is to use the builtin read. I would use getopts.
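To make the idea concrete, here is a minimal non-interactive sketch of the calculator, assuming the two operands arrive as positional parameters instead of being prompted for (the calc function name and calc.sh filename are illustrative, not from the original script):

```shell
#!/bin/bash
# Non-interactive sketch: the operands come from the command line, e.g.
#   ./calc.sh 6 3
calc() {
    local arg1=$1 arg2=$2
    echo "results:"
    echo "$arg1+$arg2=$((arg1 + arg2))"
    echo "$arg1-$arg2=$((arg1 - arg2))"
    echo "$arg1*$arg2=$((arg1 * arg2))"
    echo "$arg1/$arg2=$((arg1 / arg2))"
    echo "$arg1^$arg2=$((arg1 ** arg2))"
}

if [ "$#" -ne 2 ]; then
    echo "Usage: $0 arg1 arg2" >&2
else
    calc "$1" "$2"
fi
```

Because the values come from $1 and $2, the script runs to completion without prompting, which is what "non-interactive" usually means here.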

Related

Torque PBS passing environment variables that contain quotes

I have a python script. Normally I would run this like this:
./make_graph data_directory "wonderful graph title"
I have to run this script through the scheduler. I am using -v to pass the arguments for the python script through qsub.
qsub make_graph.pbs -v ARGS="data_directory \"wonderful graph title\""
I have tried many combinations of ', ", \" escaping and I just can't get it right. The quoting around 'wonderful graph title' is always either lost or mangled.
Here is an excerpt from the pbs script
if [ -z "${ARGS+xxx}" ]; then
echo "NO ARGS SPECIFIED!"
exit 1
fi
CMD="/path/make_graph $ARGS"
echo "CMD: $CMD"
echo "Job started on `hostname` at `date`"
${CMD}
What is the proper way to pass a string parameter that contains spaces through qsub as an environment variable? Is there a better way to do this? Maybe this is a more general bash problem.
Update: This answer is based on SGE qsub rather than TORQUE qsub, so the CLI is somewhat different. In particular, TORQUE qsub doesn't seem to support direct argument passing, so the second approach doesn't work.
This is mainly a problem of proper quoting and has little to do with grid engine submission itself. If you just want to fix your current script, you should use eval "${CMD}" rather than ${CMD}. Here's a detailed analysis of what happens when you do ${CMD} alone (in the analysis we assume there's nothing funny in path):
Your qsub command line is processed and quotes removed, so the ARGS environment variable passed is data_directory "wonderful graph title".
You did CMD="/path/make_graph $ARGS", so the value of CMD is /path/make_graph data_directory "wonderful graph title" (I'm presenting the string literal without quoting, that is, the value literally contains the quote characters).
You did ${CMD}. Bash performs a parameter expansion on this, which amounts to:
Expanding ${CMD} to its value /path/make_graph data_directory "wonderful graph title";
Since ${CMD} is not quoted, perform word splitting, so in the end the command line has five words: /path/make_graph, data_directory, "wonderful, graph, title". The last four are treated as arguments to your make_graph, which is certainly not what you want.
On the other hand, if you use eval "${CMD}", then it is as if you typed /path/make_graph data_directory "wonderful graph title" into an interactive shell, which is the desired behavior.
You should read more about eval, parameter expansion, etc. in the Bash Reference Manual.
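A quick way to see the difference described above for yourself (count is a hypothetical helper that just reports how many arguments it received):

```shell
#!/bin/bash
# Demonstrates the word-splitting behavior analyzed above.
count() { echo "$#"; }

ARGS='data_directory "wonderful graph title"'
count $ARGS          # unquoted expansion: 4 words, the quote characters are literal
eval "count $ARGS"   # eval re-parses the quotes: 2 words
```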
The corrected script:
#!/usr/bin/env bash
[[ -z ${ARGS+xxx} ]] && { echo "NO ARGS SPECIFIED!" >&2; exit 1; }
CMD="/path/make_graph ${ARGS}"
echo "CMD: ${CMD}"
echo "Job started on $(hostname) at $(date)" # backticks are deprecated
eval "${CMD}"
By the way, to test this, you don't need to submit it to the grid engine; just do
ARGS="data_directory \"wonderful graph title\"" bash make_graph.pbs
Okay, I just pointed out what's wrong and patched it. But is it really the "proper way" to pass arguments to grid engine jobs? No, I don't think so. Arguments are arguments, and should not be confused with environment variables. qsub allows you to pass arguments directly (qsub synopsis: qsub [ options ] [ command | -- [ command_args ]]), so why encode them in an env var and end up worrying about quoting?
Here's a better way to write your submission script:
#!/usr/bin/env bash
[[ $# == 0 ]] && { echo "NO ARGS SPECIFIED!" >&2; exit 1; }
CMD="/path/make_graph $@"
echo "CMD: ${CMD}"
echo "Job started on $(hostname) at $(date)" # backticks are deprecated
/path/make_graph "$@"
Here "$@" is equivalent to "$1" "$2" ... — faithfully passing all arguments as is (see the relevant section in the Bash Reference Manual).
One thing unfortunate about this, though, is that although the command executed is correct, the one printed may not be properly quoted. For instance, if you do
qsub make_graph.pbs data_directory "wonderful graph title"
then what gets executed is make_graph.pbs data_directory "wonderful graph title", but the printed CMD is make_graph.pbs data_directory wonderful graph title. And there's no easy way to fix this, as far as I know, since quotes are always removed from arguments no matter how word splitting is done. If the command printed is really important to you, there are two solutions:
Use a dedicated "shell escaper" (pretty easy to write one for yourself) to quote the arguments before printing;
Use another scripting language where shell quoting is readily available, e.g., Python (shlex.quote) or Ruby (Shellwords.shellescape).
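In bash itself, printf '%q' works as such a shell escaper; this sketch (print_cmd is a made-up helper name) prints a version of the command that can be pasted back into a shell:

```shell
#!/bin/bash
# Print each argument with shell quoting applied, so the printed
# command line round-trips through the shell unchanged.
print_cmd() {
    printf '%q ' "$@"
    printf '\n'
}

print_cmd /path/make_graph data_directory "wonderful graph title"
```

Feeding the printed line back through eval reproduces the original argument list exactly.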

shell script : check variable and then execute interactive mode

I am writing a utility that can run in command-line or interactive mode. In the code, I'd like to check whether the interactive flag is set and then echo the questions to the user to read the input. However, I don't want to check the interactive flag with an if condition for every question. Is there a more efficient way to achieve this in a bash script?
Any pointers are greatly appreciated!
Thank you
Something like this maybe?
#! /bin/bash
function interactive {
shift
while read line; do
: # do something with "$line"
done
}
getopts 'i' option
[[ $option = 'i' ]] && interactive "$@"
Note that this isn't the best style if you have multiple options. In that case, use while getopts and then shift by $((OPTIND - 1)).
If you want to tell if the shell is in "interactive mode", as defined by the shell, then you can use something like:
case $- in
*i*) # Interactive
;;
*) # Non-interactive
;;
esac
This, however, is probably not what you want. Shell scripts, for example, are, by default, "non-interactive".
If you want to know if you can ask the user a question, then you are more interested in finding out if stdin is connected to a terminal. In that case, you can use the -t test on file descriptor zero (which is standard input):
if [ -t 0 ]
then
# Interactive: ask user
read -p "Enter a color: " color
read -p "Enter a number: " number
else
# Non-interactive: assign defaults
color="Red"
number=3
fi
echo "color=$color and number=$number"
If there is the possibility that your script might be run remotely over, say, ssh, then a slightly more complicated test is needed:
if [ -t 0 ] || [ -p /dev/stdin ]
then
echo interactive
else
echo non-interactive
fi
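To avoid repeating that check for every question (the original concern), one option is to wrap it once in a helper; ask is a hypothetical function name, and the color/number questions echo the earlier example:

```shell
#!/bin/bash
# ask VARNAME PROMPT DEFAULT: prompt only when stdin is a terminal,
# otherwise silently fall back to the default value.
ask() {
    if [ -t 0 ]; then
        read -p "$2" "$1"
    else
        printf -v "$1" '%s' "$3"
    fi
}

ask color  "Enter a color: "  "Red"
ask number "Enter a number: " "3"
echo "color=$color and number=$number"
```

Each question is now a single call, and the interactivity test lives in one place.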

Indirect reference in bash, why isn't this working?

I'm trying to tidy up one of my bash scripts by using a function for something that happens 6 times. The script sets a number of variables from a config.ini file and then lists them and asks for confirmation that the user wishes to proceed with these predefined values. If not, it steps through each variable and asks for a new one to be entered (or to leave it blank and press enter to use the predefined value). This bit of code accomplishes that:
echo Current output folder: $OUTPUT_FOLDER
echo -n "Enter new output folder: "
read C_OUTPUT_FOLDER
if [ -n "$C_OUTPUT_FOLDER" ]; then OUTPUT_FOLDER=$C_OUTPUT_FOLDER; fi
The idea is to set $OUTPUT_FOLDER to the value of $C_OUTPUT_FOLDER but only if $C_OUTPUT_FOLDER is not null. If $C_OUTPUT_FOLDER IS null, it will not do anything and leave $OUTPUT_FOLDER as it was for use later in the script.
There are 6 variables that are set from the config.ini so this block is currently repeated 6 times. I've made a function new_config () which is as follows:
new_config () {
echo Current $1: ${!2}
echo -n "Enter new $1: "
read $3
if [ -n "${!3}" ]; then $2=${!3}; fi
}
I'm calling it with (in this instance):
new_config "output folder" OUTPUT_FOLDER C_OUTPUT_FOLDER
When I run the script, it has an error on the if line:
./test.sh: line 9: OUTPUT_FOLDER=blah: command not found
So, what gives? The block of code in the script works fine and (in my quite-new-to-bash eyes), the function should be doing exactly the same thing.
Thanks in advance for any pointers.
The problem is that bash splits the command into tokens before variable substitution; see http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_01_04.html#sect_01_04_01_01. Specifically, there are rules for POSIX shells that make assignments a special case during tokenization: "If all the characters preceding '=' form a valid name (see XBD Name), the token ASSIGNMENT_WORD shall be returned." It's the ASSIGNMENT_WORD token that triggers the assignment path. Tokenization is not repeated after variable substitution, which is why your code doesn't work.
You can get your code to work like so:
new_config () {
echo Current $1: ${!2}
echo -n "Enter new $1: "
read $3
if [[ -n "${!3}" ]]; then echo setting "$2='${!3}'"; eval "$2='${!3}'"; fi
}
new_config "output folder" OUTPUT_FOLDER C_OUTPUT_FOLDER
echo $OUTPUT_FOLDER
As @chepner points out, you can use declare -g $2="${!3}" instead of eval here, and on newer bash versions that's a better answer. Unfortunately declare -g requires bash 4.2, and even though that's 3 years old it's still not everywhere - for example, OS X Mavericks is stuck on 3.2.51.
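For completeness, here is a sketch of that declare -g variant (bash 4.2+), keeping the same calling convention as new_config above; the herestring just simulates typed input:

```shell
#!/bin/bash
# Requires bash >= 4.2 for declare -g (global assignment from inside a function).
new_config() {
    echo "Current $1: ${!2}"
    read -p "Enter new $1: " "$3"
    if [[ -n ${!3} ]]; then declare -g "$2=${!3}"; fi
}

OUTPUT_FOLDER=/tmp/default
new_config "output folder" OUTPUT_FOLDER C_OUTPUT_FOLDER <<< "blah"
echo "$OUTPUT_FOLDER"   # now "blah"
```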

Specify command line arguments like name=value pairs for shell script

Is it possible to pass command line arguments to shell script as name value pairs, something like
myscript action=build module=core
and then in my script, get the variable like
$action and process it?
I know that $1, $2, and so on can be used to get the arguments, but those won't be name-value-like pairs. Even if they were, the developer using the script would have to take care to pass the values in the same order. I do not want that.
This worked for me:
for ARGUMENT in "$@"
do
KEY=$(echo $ARGUMENT | cut -f1 -d=)
KEY_LENGTH=${#KEY}
VALUE="${ARGUMENT:$KEY_LENGTH+1}"
export "$KEY"="$VALUE"
done
# from this line, you could use your variables as you need
cd "$FOLDER"
mkdir "$REPOSITORY_NAME"
Usage
bash my_scripts.sh FOLDER="/tmp/foo" REPOSITORY_NAME="stackexchange"
FOLDER and REPOSITORY_NAME are ready to use in the script.
It does not matter what order the arguments are in.
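The cut call can also be replaced with parameter expansion, which avoids one subprocess per argument; here is a sketch of the same loop (the set -- line just simulates the command-line arguments):

```shell
#!/bin/bash
set -- FOLDER="/tmp/foo" REPOSITORY_NAME="stackexchange"  # simulated "$@"

for ARGUMENT in "$@"; do
    KEY=${ARGUMENT%%=*}    # everything before the first '='
    VALUE=${ARGUMENT#*=}   # everything after the first '='; later '=' in values survive
    export "$KEY=$VALUE"
done

echo "$FOLDER $REPOSITORY_NAME"
```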
In the Bourne shell, there is a seldom-used option '-k' which automatically places any values specified as name=value on the command line into the environment. Of course, the Bourne/Korn/POSIX shell family (including bash) also do that for name=value items before the command name:
name1=value1 name2=value2 command name3=value3 -x name4=value4 abc
Under normal POSIX-shell behaviour, the command is invoked with name1 and name2 in the environment, and with four arguments. Under the Bourne (and Korn and bash, but not POSIX) shell -k option, it is invoked with name1, name2, name3, and name4 in the environment and just two arguments. The bash manual page (as in man bash) doesn't mention the equivalent of -k but it works like the Bourne and Korn shells do.
I don't think I've ever used it (the -k option) seriously.
There is no way to tell from within the script (command) that the environment variables were specified solely for this command; they are simply environment variables in the environment of that script.
This is the closest approach I know of to what you are asking for. I do not think anything equivalent exists for the C shell family. I don't know of any other argument parser that sets variables from name=value pairs on the command line.
With some fairly major caveats (it is relatively easy to do for simple values, but hard to deal with values containing shell meta-characters), you can do:
case $1 in
(*=*) eval $1;;
esac
This is not the C shell family. The eval effectively does the shell assignment.
arg=name1=value1
echo $name1
eval $arg
echo $name1
env action=build module=core myscript
You said you're using tcsh. For Bourne-based shells, you can drop the "env", though it's harmless to leave it there. Note that this applies to the shell from which you run the command, not to the shell used to implement myscript.
If you specifically want the name=value pairs to follow the command name, you'll need to do some work inside myscript.
It's quite an old question, but still valid.
I have not found a cookie-cutter solution, so I combined the above answers. For my needs I created this solution; it works even with whitespace in an argument's value.
Save this as argparse.sh
#!/bin/bash
: ${1?
'Usage:
$0 --<key1>="<val1a> <val1b>" [ --<key2>="<val2a> <val2b>" | --<key3>="<val3>" ]'
}
declare -A args
while [[ "$#" -gt 0 ]]; do
case "$1" in
(*=*)
_key="${1%%=*}" && _key="${_key/--/}" && _val="${1#*=}"
args[${_key}]="${_val}"
(>&2 echo -e "key:val => ${_key}:${_val}")
;;
esac
shift
done
(>&2 echo -e "Total args: ${#args[@]}; Options: ${args[@]}")
## Additionally, this checks for a specific key
[[ -n "${args['path']+1}" ]] && (>&2 echo -e "key: 'path' exists") || (>&2 echo -e "key: 'path' does NOT exist");
#Example: Note, arguments to the script can have optional prefix --
./argparse.sh --x="blah"
./argparse.sh --x="blah" --yy="qwert bye"
./argparse.sh x="blah" yy="qwert bye"
Some interesting use cases for this script:
./argparse.sh --path="$(ls -1)"
./argparse.sh --path="$(ls -d -1 "$PWD"/**)"
The script above is also available as a gist; refer to argparse.sh.
Extending on Jonathan's answer, this worked nicely for me:
#!/bin/bash
if [ "$#" -eq "0" ]; then
echo "Error! Usage: Remind me how this works again ..."
exit 1
fi
while [[ "$#" -gt 0 ]]
do
case $1 in
(*=*) eval $1;;
esac
shift
done

Running shell script inside shell script - good or bad?

Recently I got an assignment at school where we are to write a small program in the Bash scripting language.
This shell script is supposed to accept some Positional Parameters or Arguments on the command line.
Now, with the help of an if-else statement, I check if the argument is present; if it is, the script does what it is supposed to do, but if it is not, I display an error message prompting the user to input an argument and pass it again to the same shell script...
Now, I want to know whether this approach is good or bad in the bash programming paradigm. I'm slightly suspicious that it might leave too many tasks running in the background that are never ended and keep consuming memory... please help.
Here's a small snippet (assume the script name to be script1.bash):
#!/bin/bash
if [ $# -eq 0 ]; then
read -p "Please enter your name: " name
./script1.bash "$name"
elif [ $# -eq 1 ]; then
echo "Your name is $1!"
fi
It's ... questionable :-)
The main problem is that you're spawning a subshell every time someone refuses to input their name.
You would be better off with something like:
#!/bin/bash
name=$1
while [[ -z "$name" ]] ; do
read -p "Please enter your name: " name
done
echo "Your name is $name!"
which spawns no subshells.
