I am writing a utility that can run in command-line or interactive mode. In the code, I'd like to check whether the interactive flag is set and, if so, echo the questions to the user and read the input. However, I don't want to check the interactive flag with an if condition for every question. Is there a more efficient way to achieve this in a bash script?
Any pointers are greatly appreciated!
Thank you
Something like this maybe?
#!/bin/bash
interactive() {
    while read -r line; do
        : # do something with "$line"
    done
}
getopts 'i' option
[[ $option = 'i' ]] && interactive
Note that this isn't the best style if you have multiple options. In that case, use while getopts and shift by the argument index.
If you want to tell if the shell is in "interactive mode", as defined by the shell, then you can use something like:
case $- in
    *i*) # Interactive
        ;;
    *)   # Non-interactive
        ;;
esac
This, however, is probably not what you want. Shell scripts, for example, are, by default, "non-interactive".
If you want to know if you can ask the user a question, then you are more interested in finding out if stdin is connected to a terminal. In that case, you can use the -t test on file descriptor zero (which is standard input):
if [ -t 0 ]
then
    # Interactive: ask the user
    read -p "Enter a color: " color
    read -p "Enter a number: " number
else
    # Non-interactive: assign defaults
    color="Red"
    number=3
fi
echo "color=$color and number=$number"
If there is the possibility that your script might be run remotely over, say, ssh, then a slightly more complicated test is needed:
fd=0   # file descriptor 0 is standard input
if [ -t "$fd" ] || [ -p /dev/stdin ]
then
    echo interactive
else
    echo non-interactive
fi
Hi, I am doing a project where I must write a bash script calculator that performs multiple arithmetic operations, including raising to a power. I have done the basic part but have been told I must make it non-interactive, and I was wondering how to do this. My code is provided below.
clear
while [ -z "$arg1" ]
do
    read -p "Enter argument1: " arg1
done
while [ -z "$arg2" ]
do
    read -p "Enter argument2: " arg2
done
echo "You've entered: arg1=$arg1 and arg2=$arg2"
let "addition=arg1+arg2"
let "subtraction=arg1-arg2"
let "multiplication=arg1*arg2"
let "division=arg1/arg2"
let "power=arg1**arg2"
echo -e "results:\n"
echo "$arg1+$arg2=$addition"
echo "$arg1-$arg2=$subtraction"
echo "$arg1*$arg2=$multiplication"
echo "$arg1/$arg2=$division"
echo "$arg1^$arg2=$power"
I was thinking of making it so that the user does not have to type in the two numbers, but I am still pretty new to bash and scripts as a whole, so I am wondering how to do this.
Use getopts to process arguments to your script. Here is an example from this site. You could replace the lines from your first line through your echo "You've entered …" line with modified code from the other answer I linked to. You might want to read the PARAMETERS section of man bash; in particular, understand what the Special Parameters are (*, @, #, ?, -, $, !, and 0). You should know those cold if you are scripting in bash.
An alternative is to use the builtin read. I would use getopts.
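Applied to the calculator above, a minimal getopts sketch might look like this. The option names -a/-b and the function wrapper are my own choices, not part of the assignment:

```shell
#!/bin/bash
# Non-interactive calculator: both operands come from options, not prompts.
calc() {
    local OPTIND=1 opt a b
    while getopts 'a:b:' opt; do
        case $opt in
            a) a=$OPTARG ;;
            b) b=$OPTARG ;;
            *) echo "usage: calc -a num1 -b num2" >&2; return 1 ;;
        esac
    done
    # Missing operands are an error, not a prompt: that keeps it non-interactive.
    if [ -z "$a" ] || [ -z "$b" ]; then
        echo "usage: calc -a num1 -b num2" >&2
        return 1
    fi
    echo "$a+$b=$((a + b))"
    echo "$a-$b=$((a - b))"
    echo "$a*$b=$((a * b))"
    echo "$a/$b=$((a / b))"
    echo "$a^$b=$((a ** b))"
}

calc -a 6 -b 3    # first line printed: 6+3=9
```

Note that `local OPTIND=1` keeps the function re-usable, since getopts tracks its position in OPTIND.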
The first thing in my bashrc file is this expression:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
Can somebody explain what this means?
All these symbols make it really hard to google, and there is no equivalent of Haskell's "hoogle" for bash, so I can't search for symbol expressions.
The intended behavior seems to be similar to this.
nonsourced=0
# if sourced,
if [[ "$0" == "$BASH_SOURCE" ]]; then
    nonsourced=1
else
    nonsourced=0
fi
echo "nonsourced? $nonsourced"

case $- in
    *i*)
        # this case is entered if "$-" contains "i".
        if [[ "$nonsourced" -eq "0" ]]; then
            echo "1. " "$-"
        fi
        ;; # leave case?
    *)
        # this case is entered in all other cases.
        if [[ "$nonsourced" -eq "0" ]]; then
            echo "2. " "$-"
            return
        else
            # cannot return when not sourced; use exit.
            echo "avoided return from nonsourced #2"
            exit 0
        fi
        ;; # leave case?
esac
echo "3"
Can somebody explain what this means?
$- is the list of options set in the shell at the point of evaluation.
When a shell (bash) starts it accepts some options:
LESS=+'/^ *OPTIONS' man bash
All of the single-character shell options documented in the description of the set builtin command can be used as options when the shell is invoked. In addition, bash interprets the following options when it is invoked:
One of those options is -i, so invoking the shell as bash -i … should[a] turn that option on inside the shell[b].
[a] I say "should" because some other conditions are also required for an effective interactive shell. Also, an interactive shell may be started by simply running bash in a terminal (no -i option used).
[b] The way to print the options that have been set is echo $-.
*i*) ;; matches if the string from $- contains an i; if so, do nothing.
*) return;; on any other value of $-, return (leave the sourced script[c]).
[c] Please read this answer for return vs. exit.
In overall, it does what the comment says:
# If not running interactively, don't do anything
Or, with clearer wording:
# If not running interactively, exit[d].
[d] It may be more technically correct to use the word return instead of exit, but the idea is clearer, I believe.
Note that there is a quite similar construct with $PS1 (used in /etc/bash.bashrc and repeated in ~/.bashrc in debian based systems, for example):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
As for the problem of finding symbols:
> all these symbols make it really hard to google
even if it doesn't cover as many pages, SymbolHound may be of help here. If we try it, we find this, which clearly explains what you are asking.
See the documentation for Bash variables:
$-: A hyphen expands to the current option flags: those specified upon invocation, those set by the set built-in command, and those set by the shell itself (such as -i).
The asterisks in the case patterns are wildcards, so essentially the whole case says “if there’s an i [as interactive] somewhere in the arguments, go on, otherwise return”.
$- lists the current shell options.
The two cases are whether the -i interactive flag is present anywhere in that list of options.
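The check is easy to try out for yourself; a tiny sketch (the variable names flags and mode are mine):

```shell
#!/bin/bash
# Report the shell's option flags and whether 'i' (interactive) is among them.
flags=$-
case $flags in
    *i*) mode=interactive ;;
    *)   mode=non-interactive ;;
esac
echo "options: $flags ($mode)"
```

Run as a script, this reports non-interactive; the same case pattern typed at an interactive prompt will find an i in $-.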
Is it possible to pass command line arguments to shell script as name value pairs, something like
myscript action=build module=core
and then in my script, get the variable like
$action and process it?
I know that $1 and so on can be used to get the values, but then they won't be name-value-like pairs. Even if they were, the developer using the script would have to take care to pass the arguments in the same order. I do not want that.
This worked for me:
for ARGUMENT in "$@"
do
    KEY=$(echo "$ARGUMENT" | cut -f1 -d=)
    KEY_LENGTH=${#KEY}
    VALUE="${ARGUMENT:$KEY_LENGTH+1}"
    export "$KEY"="$VALUE"
done

# from this line on, you can use your variables as you need
cd "$FOLDER"
mkdir "$REPOSITORY_NAME"
Usage
bash my_scripts.sh FOLDER="/tmp/foo" REPOSITORY_NAME="stackexchange"
FOLDER and REPOSITORY_NAME are then ready to use in the script.
It does not matter what order the arguments are in.
In the Bourne shell, there is a seldom-used option '-k' which automatically places any values specified as name=value on the command line into the environment. Of course, the Bourne/Korn/POSIX shell family (including bash) also do that for name=value items before the command name:
name1=value1 name2=value2 command name3=value3 -x name4=value4 abc
Under normal POSIX-shell behaviour, the command is invoked with name1 and name2 in the environment, and with four arguments. Under the Bourne (and Korn and bash, but not POSIX) shell -k option, it is invoked with name1, name2, name3, and name4 in the environment and just two arguments. The bash manual page (as in man bash) doesn't mention the equivalent of -k but it works like the Bourne and Korn shells do.
I don't think I've ever used it (the -k option) seriously.
There is no way to tell from within the script (command) that the environment variables were specified solely for this command; they are simply environment variables in the environment of that script.
This is the closest approach I know of to what you are asking for. I do not think anything equivalent exists for the C shell family. I don't know of any other argument parser that sets variables from name=value pairs on the command line.
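The environment-passing behaviour for assignments placed before the command name is easy to observe; a quick sketch using a child bash to print what it inherited:

```shell
#!/bin/bash
# name1 and name2 land in the child's environment, not in this shell:
name1=value1 name2=value2 bash -c 'echo "child sees: $name1 $name2"'

# The parent shell itself never saw the assignments:
echo "parent sees: ${name1:-unset} ${name2:-unset}"
```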
With some fairly major caveats (it is relatively easy to do for simple values, but hard to deal with values containing shell meta-characters), you can do:
case $1 in
(*=*) eval $1;;
esac
This is not the C shell family. The eval effectively does the shell assignment.
arg='name1=value1'
echo $name1    # prints nothing; name1 is not yet set
eval $arg
echo $name1    # prints value1
env action=build module=core myscript
You said you're using tcsh. For Bourne-based shells, you can drop the "env", though it's harmless to leave it there. Note that this applies to the shell from which you run the command, not to the shell used to implement myscript.
If you specifically want the name=value pairs to follow the command name, you'll need to do some work inside myscript.
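Inside myscript, those values then arrive as ordinary environment variables; a minimal sketch of what myscript itself might do (the default values here are my own invention, not from the question):

```shell
#!/bin/bash
# myscript: pick up action/module from the environment, with fallbacks.
# Invoked as:  env action=build module=core ./myscript
action=${action:-help}    # default used when the caller sets nothing
module=${module:-all}

echo "action=$action module=$module"
```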
It's quite an old question, but still valid.
I have not found a cookie-cutter solution, so I combined the above answers. For my needs I created this solution; it works even with whitespace in an argument's value.
Save this as argparse.sh
#!/bin/bash
: ${1?
'Usage:
$0 --<key1>="<val1a> <val1b>" [ --<key2>="<val2a> <val2b>" | --<key3>="<val3>" ]'
}
declare -A args
while (( $# > 0 )); do
    case "$1" in
        (*=*)
            _key="${1%%=*}" && _key="${_key#--}" && _val="${1#*=}"
            args[${_key}]="${_val}"
            (>&2 echo -e "key:val => ${_key}:${_val}")
            ;;
    esac
    shift
done
(>&2 echo -e "Total args: ${#args[@]}; Options: ${args[@]}")
## This addition can check for a specific key
[[ -n "${args['path']+1}" ]] && (>&2 echo -e "key: 'path' exists") || (>&2 echo -e "key: 'path' does NOT exist")
# Example: note that arguments to the script can have an optional -- prefix
./argparse.sh --x="blah"
./argparse.sh --x="blah" --yy="qwert bye"
./argparse.sh x="blah" yy="qwert bye"
Some interesting use cases for this script:
./argparse.sh --path="$(ls -1)"
./argparse.sh --path="$(ls -d -1 "$PWD"/**)"
The script above was created as a gist; refer to argparse.sh.
Extending Jonathan's answer, this worked nicely for me:
#!/bin/bash
if [ "$#" -eq "0" ]; then
echo "Error! Usage: Remind me how this works again ..."
exit 1
fi
while (( $# > 0 ))
do
    case $1 in
        (*=*) eval $1;;
    esac
    shift
done
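If the eval makes you nervous (an argument like x=$(reboot) would actually run the command substitution), bash's declare builtin performs the same assignment without re-parsing the value. A sketch under that approach; the function name parse_kv is mine, and declare -g needs bash 4.2 or later:

```shell
#!/bin/bash
# Assign name=value arguments without eval: declare never re-parses the value,
# so shell metacharacters in the value are stored literally.
parse_kv() {
    local arg
    for arg in "$@"; do
        case $arg in
            *=*) declare -g "$arg" ;;   # -g: create the variable at global scope
            *)   echo "ignoring non-assignment argument: $arg" >&2 ;;
        esac
    done
}

parse_kv action=build module=core
echo "action=$action module=$module"    # prints: action=build module=core
```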
Recently I got an assignment at school where we are to write a small program in the Bash scripting language.
This shell script is supposed to accept some positional parameters, or arguments, on the command line.
Now, with the help of an if-else statement, I check whether the argument is present; if it is, the script does what it is supposed to do, but if it is not, I display an error message, prompt the user to input an argument, and pass the argument again to the same shell script...
Now, I want to know whether this approach is good or bad in the Bash programming paradigm. I'm slightly suspicious that it might leave too many tasks running in the background that are never ended and keep consuming memory... please help.
Here's a small snippet (assume the script name to be script1.bash):
#!/bin/bash
if [ $# -eq 0 ]; then
    read -p "Please enter your name: " name
    script1.bash "$name"
elif [ $# -eq 1 ]; then
    echo "Your name is $1!"
fi
It's ... questionable :-)
The main problem is that you're spawning a subshell every time someone refuses to input their name.
You would be better off with something like:
#!/bin/bash
name=$1
while [[ -z "$name" ]] ; do
    read -p "Please enter your name: " name
done
echo "Your name is $name!"
which spawns no subshells.
Is it possible to pass command-line arguments to a function from within a Bourne shell script, in order to allow getopts to process them?
The rest of my script is nicely packed into functions, but it's starting to look like I'll have to move the argument processing into the main logic.
The following is how it's written now, but it doesn't work:
processArgs()
{
    while getopts j:f: arg
    do
        echo "${arg} -- ${OPTARG}"
        case "${arg}" in
        j) if [ -z "${filename}" ]; then
               job_number=$OPTARG
           else
               echo "Filename ${filename} already set."
               echo "Job number ${OPTARG} will be ignored.
           fi;;
        f) if [ -z "${job_number}" ]; then
               filename=$OPTARG
           else
               echo "Job number ${job_number} already set."
               echo "Filename ${OPTARG} will be ignored."
           fi;;
        esac
    done
}
doStuff1
processArgs
doStuff2
Is it possible to define the function in such a way that it can read the script's args? Can this be done some other way? I like the functionality of getopts, but it looks like in this case I'm going to have to sacrifice the beauty of the code to get it.
You can provide args to getopts after the variable name. The default is "$@", but that's also what shell functions use to represent their own arguments. The solution is to pass "$@", representing all the script's command-line arguments as individual strings, to processArgs:
processArgs "$@"
Adding that to your script (and fixing the quoting in line 11), then trying out some gibberish test args:
$ ./try -j asdf -f fooo -fasdfasdf -j424pyagnasd
j -- asdf
f -- fooo
Job number asdf already set.
Filename fooo will be ignored.
f -- asdfasdf
Job number asdf already set.
Filename asdfasdf will be ignored.
j -- 424pyagnasd
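One extra caveat: getopts keeps its scan position in the global OPTIND, so if processArgs might ever be called more than once, reset OPTIND (or make it local) inside the function. A sketch with simplified case bodies:

```shell
#!/bin/bash
processArgs() {
    local OPTIND=1 arg    # local OPTIND makes repeated calls start fresh
    while getopts 'j:f:' arg "$@"; do
        case $arg in
            j) echo "job number: $OPTARG" ;;
            f) echo "filename: $OPTARG" ;;
        esac
    done
}

processArgs -j 42          # prints: job number: 42
processArgs -f report.txt  # works again because OPTIND restarted at 1
```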