I'm writing a bash script that will
behave differently based on the first argument passed (no prob so far)
pass the rest of the arguments to a command (no prob so far)
behave differently if an argument contains a particular string (here, a *)
It looks like:
[[ "${1}" = "-h" || "${1}" = "--help" ]] && echo -e "somehelp"
[[ "${1}" = "echo" ]] && echo ${*:2}
[[ "${1}" = "emerge" ]] && emerge -uDN ${*:2}
some-magic-here
Now, if I do
myscript emerge -a whatever whatever2 --option
it would run
emerge -uDN -a whatever whatever2 --option
But, in case that "whatever" is a string containing *, such as
myscript emerge -uDN -a whatever/* whatever2 --option
I'd want it to run
emerge -uDN -a $(eix -u --only-names whatever/*) whatever2 --option
instead. Any tips?
First, if you are going to pass * to the script, you must prevent expansion by the shell on the command line. The simplest way is to quote it:
myscript emerge -a 'whatever/*' whatever2 --option
Since you are already using the [[ operator, note that the following is a bash-only solution, not portable to sh. To determine whether $3 contains a *, you can use the =~ operator:
[[ "${1}" == "emerge" ]] && {
[[ "$3" =~ "*" ]] && \
emerge -uDN $2 $(eix -u --only-names $3) ${*:4} || \
emerge -uDN ${*:2}
}
You can also rewrite the compound commands into nested if-else statements if the logic gets a bit murky, as in the sketch below. Give it a try and let me know if you have any issues.
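For instance, an untangled version of the same logic might look like this (a sketch; the eix invocation and emerge options are taken straight from the question):
if [[ "$1" == "emerge" ]]; then
    if [[ "$3" =~ "*" ]]; then
        # $3 contains a literal *, so let eix resolve it to package names first
        emerge -uDN "$2" $(eix -u --only-names "$3") "${@:4}"
    else
        emerge -uDN "${@:2}"
    fi
fi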
You mention the command line:
myscript emerge -uDN -a whatever/* whatever2 --option
The only ways myscript will see the * in its argument list is if there is no subdirectory whatever under the current directory, or it is an empty directory (strictly: if it contains any files, the names all start with .), and in either case, you don't have shopt -s nullglob set. If those conditions aren't met, the shell invoking myscript will replace the * (in)appropriately and myscript will not see the *. (Of course, if you quote the argument — "whatever/*" or 'whatever/*', then myscript will also see the * metacharacter, regardless of nullglob and the presence or absence of a whatever subdirectory.)
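A quick way to check what the script actually receives is a one-line debug sketch at the top of myscript:
printf 'arg: %s\n' "$@"    # prints each argument on its own line, exactly as received
Run the script with and without quotes around whatever/* and compare the output.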
It is not clear whether the * needs to be replaced only when it follows the -a option, or if it should be replaced in any argument whatsoever. I will assume that all arguments need to be replaced; it is not very different if it is only the argument after the -a that should be replaced.
Without the code to handle whatever/*, the command looks like:
[[ "${1}" = "emerge" ]] && exec emerge -uDN "${#:2}" || exit 1
Differences:
The exec is optional but guarantees that nothing after the emerge command will be executed (unless the emerge command can't be found, in which case the || exit 1 ensures nothing else is executed).
Use "$#" to preserve the arguments as presented to your script. Without double quotes, there's no difference between $# and $*. Inside double quotes, "$*" generates a single string, but "$#" generates each argument as passed to the script. The "${#:2}" notation does this for the second up to the last argument.
To handle the * for any argument, we need to detect the *. This is going to be easiest if we use arrays.
The array arglist will contain the arguments to be passed to the emerge command. We need to iterate over the arguments to the script, checking for appearances of *:
arglist=( "-uDN" )
for arg in "${#:2}"
do
case "$arg" in
(*\**) arglist+=( $(eix -u --only-names "$arg") );;
(*) arglist+=( "$arg" );;
esac
done
exec emerge "${arglist[#]}"
exit 1
Note that this assumes that eix will expand metacharacters (* to be precise), rather than relying on the shell to do so. It also assumes, as the question assumes, that there are no spaces in the names generated by eix. If there are any, the $(…) notation will split the names at the spaces (or tabs, or newlines, …).
This code would be best handled as the body of the then clause in
if [[ "${1}" = "emerge" ]]
then
…
fi
This would be clearer than trying to squeeze all that code onto a single line (which could be done, but there is no point in doing so, and plenty of reasons not to).
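As for the word-splitting caveat mentioned above: if eix prints one name per line, the splitting problem can be sidestepped with mapfile (bash 4+; a sketch, not part of the original answer's assumptions):
mapfile -t names < <(eix -u --only-names "$arg")    # split on newlines only
arglist+=( "${names[@]}" )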
Here's a clue:
#!/bin/sh
x="bbbccc" # Put different strings in here, and see what happens
case "$x" in
*\**)
echo "Yes"
;;
*)
echo "No"
;;
esac
My bash script writes another bash script using printf.
printf "#!/bin/bash
HOME=${server}
file=gromacs*
file_name=\$(basename "\${file}")
date=\$(date +"\%m_\%d_\%Y")
for sim in \${HOME}/* ; do
if [[ -d \$sim ]]; then
simulation=$(basename "\$sim")
pushd \${sim}
cp \$file \${server}/\${results}/\${file_name}.\${simulation}.\${date}
echo "\${file_name}\ from\ \${simulation}\ has\ been\ collected!"
popd
fi
done" > ${output}/collecter.sh
There is a problem with the escaping of the elements in the date variable:
date=\$(date +"\%m_\%d_\%Y")
The part below does not work properly:
"\%m_\%d_\%Y"
It results in an incomplete bash script being produced by printf.
How should it be fixed?
Thanks!
Use a quoted heredoc.
{
## print the header, and substitute our own value for HOME
printf '#!/bin/bash\nHOME=%q\n' "$server"
## EVERYTHING BELOW HERE UNTIL THE EOF IS LITERAL
cat <<'EOF'
file=( gromacs* )
(( ${#file[@]} == 1 )) && [[ -e $file ]] || {
    echo "ERROR: Exactly one file starting with 'gromacs' should exist" >&2
    exit 1
}
file_name=$(basename "${file}")
date=$(date +"%m_%d_%Y")
for sim in "$HOME"/* ; do
    if [[ -d $sim ]]; then
        simulation=$(basename "$sim")
        (cd "${sim}" && exec cp "$file" "${server}/${results}/${file_name}.${simulation}.${date}")
        echo "${file_name} from ${simulation} has been collected!"
    fi
done
EOF
} >"${output}/collecter.sh"
Note:
Inside a quoted heredoc (cat <<'EOF'), no substitutions are performed, so no escaping is needed. We're thus able to write our code exactly as we want it to exist in the generated file.
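A two-line illustration of the difference (a sketch; run it in any bash shell):
cat <<'EOF'    # quoted delimiter: the body is printed literally
$HOME and $(date) stay exactly as written
EOF
cat <<EOF      # unquoted delimiter: expansions are performed
$HOME and $(date) are expanded
EOF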
When generating code, use printf %q to escape values in such a way that they evaluate back to their original values. Otherwise, a variable containing $(rm -rf ~) could cause the given command to be run. Even substituting inside literal single quotes is not safe: a value that itself contains a single quote, such as '$(rm -rf ~)', would break out of them.
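A quick sketch of what %q produces for a hostile value (output shown for bash's builtin printf; the exact escaping may vary between versions):
dangerous='$(rm -rf ~)'
printf 'server=%q\n' "$dangerous"
# prints: server=\$\(rm\ -rf\ \~\)  -- safe to paste into generated code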
Glob expansions return a list of results; the proper data type in which to store their results is an array, not a string. Thus, file=( gromacs* ) makes the storage of the result in an array explicit, and the following code checks for both the case where we have more than one result, and the case where our result is the original glob expression (meaning no matches existed).
All expansions need to be quoted to prevent string-splitting. This means "$HOME"/*, not $HOME/* -- otherwise you'll have problems whenever a user has a home directory containing whitespace (and yes, this does happen -- consider Windows-derived platforms where you have /Users/Firstname Lastname, or sites that mount home directories from such a system).
pushd and popd are an interactive extension, as opposed to a tool intended for writing scripts. Since spawning an external program such as cp involves a fork() anyway, and a directory change inside a subshell ends when that subshell exits, you can avoid them entirely: spawn a subshell, cd within it, and use the exec builtin to replace the subshell's process with cp. The exec avoids the extra fork that would otherwise be needed to start cp as a separate child process.
You have to escape % signs in printf format strings as %%, e.g.:
date=\$(date +"%%m_%%d_%%Y")
should print date=$(date +"%m_%d_%Y") into the generated script. But you should avoid printf here, because you aren't using its formatting capabilities. Instead you could cat a heredoc to the file:
cat > "${output}"/collecter.sh <<'END'
HOME=${server}
...
done
END
This would allow you to avoid many escapes, and make the code more readable.
Try this
date=\$(date +\"%%m_%%d_%%Y\")"
Script nerf calls script herd, which calls script er. nerf uses a flag on herd that explicitly takes arguments needing to be passed to er.
This was not a problem before nerf existed - when herd was just called from the command line, we could single-quote the arguments to the -p flag, and they would never be interpreted by herd's getopts, but instead they would be interpreted by er's getopts.
But now we have generated values in the flags that eventually need to go to er, so I need to expand the variable $file_contents in nerf, but not let them be interpreted by getopts until they get to er.
Any of these three scripts can be modified.
$ cat nerf
#!/bin/bash
file_contents="`cat one_liner_file`"
er_args="-jkl -m $file_contents"
./herd -p "$er_args" # <-- the problem
$ cat herd
#!/bin/bash
passthru_args=""
while getopts "p:a:b:cde" opt
do
case $opt in
p) passthru_args="$OPTARG" ;;
...
esac
done
./er "$passthru_args"
$ cat er
#!/bin/bash
while getopts "jklm:" opt
do
case $opt in
...
esac
done
If I use single quotes on the marked line above, I get the literal string "$er_args" passed through. Using double quotes, the flags are directly interpreted by herd. Using single inside double quotes, the flags aren't interpreted by ANY getopts.
I'm thinking there's no elegant solution here, but please let me know if I'm wrong. The only solutions I can think of are crappy:
Expose all of er's flags explicitly through herd.
Remove the er call from herd and place it directly into nerf.
???
What many tools do, when passed -p "-jkl -m something", is split up the string using pseudo-shell syntax. This is a bad idea because it makes space and quote handling unpredictable.
Instead, the better way is to have a way to pass individual words to the command. This is what find -exec does -- all arguments after -exec and up until + or ; are passed literally as separate arguments.
Here's a simple example of a herd with the same semantics:
#!/bin/bash
passthru_args=()
while getopts "pa:b:cde" opt
do
    case $opt in
        p)
            # collect literal words until a lone ';' terminator, find -exec style;
            # the bounds check guards against a missing terminator
            while (( OPTIND <= $# )) && [[ ${!OPTIND} != ';' ]]
            do
                passthru_args+=("${!OPTIND}")
                let OPTIND++
            done
            let OPTIND++    # step past the ';' itself
            ;;
        *)
            echo "herd: $opt is $OPTARG"
            ;;
    esac
done
./er "${passthru_args[@]}"
You can now run ./herd -p -jkl -m "some stuff" \; -a foo
This will run ./er -jkl -m "some stuff" safely without any space issues (but you'll have a hard time nesting multiple calls that use ; as an argument terminator).
I'm trying to write a simple script that will tell me if a file exists in $Temp whose name starts with the string "Test".
For example, I have these files
Test1989.txt
Test1990.txt
Test1991.txt
Then I just want to echo that a file was found.
For example, something like this:
file="home/edward/bank1/fiche/Test*"
if test -s "$file"
then
echo "found one"
else
echo "found none"
fi
But this doesn't work.
One approach:
(
    shopt -s nullglob
    files=(/home/edward/bank1/fiche/Test*)
    if [[ "${#files[@]}" -gt 0 ]] ; then
        echo found one
    else
        echo found none
    fi
)
Explanation:
shopt -s nullglob will cause /home/edward/bank1/fiche/Test* to expand to nothing if no file matches that pattern. (Without it, it will be left intact.)
( ... ) sets up a subshell, preventing shopt -s nullglob from "escaping".
files=(/home/edward/bank1/fiche/Test*) puts the file-list in an array named files. (Note that this is within the subshell only; files will not be accessible after the subshell exits.)
"${#files[#]}" is the number of elements in this array.
Edited to address subsequent question ("What if i also need to check that these files have data in them and are not zero byte files"):
For this version, we need to use -s (as you did in your question), which also tests for the file's existence, so there's no point using shopt -s nullglob anymore: if no file matches the pattern, then -s on the pattern will be false. So, we can write:
(
    found_nonempty=''
    for file in /home/edward/bank1/fiche/Test* ; do
        if [[ -s "$file" ]] ; then
            found_nonempty=1
        fi
    done
    if [[ "$found_nonempty" ]] ; then
        echo found one
    else
        echo found none
    fi
)
(Here the ( ... ) is to prevent file and found_nonempty from "escaping".)
You have to understand how Unix interprets your input.
The standard Unix shell interpolates environment variables and what are called globs before it passes the parameters to your program. This is a bit different from Windows, which makes each program interpret the expansion itself.
Try this:
$ echo *
This will echo all the files and directories in your current directory. Before the echo command acts, the shell interpolates the * and expands it, then passes that expanded parameter back to your command. You can see it in action by doing this:
$ set -xv
$ echo *
$ set +xv
The set -xv turns on xtrace and verbose. Verbose echoes the command as entered, and xtrace echoes the command that will be executed (that is, after the shell expansion).
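For example, with the three Test files from the question in the current directory, the traced session would look roughly like this (a sketch; exact output varies by shell and version):
$ set -xv
$ echo *
echo *
+ echo Test1989.txt Test1990.txt Test1991.txt
Test1989.txt Test1990.txt Test1991.txt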
Now try this:
$ echo "*"
Note that putting something inside quotes hides the glob expression from the shell, and the shell cannot expand it. Try this:
$ foo="this is the value of foo"
$ echo $foo
$ echo "$foo"
$ echo '$foo'
Note that the shell can still expand environment variables inside double quotes, but not in single quotes.
Now let's look at your statement:
file="home/edward/bank1/fiche/Test*"
The double quotes prevent the shell from expanding the glob expression, so file is equal to the literal home/edward/bank1/fiche/Test*. Therefore, you need to do this:
file=/home/edward/bank1/fiche/Test*
The lack of quotes (and the introductory slash which is important!) will now make file equal to all files that match that expression. (There might be more than one!). If there are no files, depending upon the shell, and its settings, the shell may simply set file to that literal string anyway.
You certainly have the right idea:
file=/home/edward/bank1/fiche/Test*
if test -s $file
then
echo "found one"
else
echo "found none"
fi
However, you still might get found none returned if there is more than one file; more likely, you'll get an error from the test command because it receives too many parameters.
One way to get around this might be:
if ls /home/edward/bank1/fiche/Test* > /dev/null 2>&1
then
echo "There is at least one match (maybe more)!"
else
echo "No files found"
fi
In this case, I'm taking advantage of the exit code of the ls command. If ls finds one file it can access, it returns a zero exit code. If it can't find one matching file, it returns a non-zero exit code. The if command merely executes a command; if that command returns zero, the condition is treated as true and the then clause is executed. If the command returns a non-zero value, the condition is treated as false, and the else clause (if one is present) is executed.
The test command works in a similar fashion. If the test is true, the test command returns a zero. Otherwise, the test command returns a non-zero value. This works great with the if command. In fact, there's an alias to the test command. Try this:
$ ls -li /bin/test /bin/[
The -i option prints out the inode. The inode is the real ID of the file: files with the same ID are the same file. You can see that /bin/test and /bin/[ are the same command. This makes the following two commands the same:
if test -s $file
then
echo "The file exists"
fi
if [ -s $file ]
then
echo "The file exists"
fi
You can do it in one line:
ls /home/edward/bank1/fiche/Test* >/dev/null 2>&1 && echo "found one" || echo "found none"
To understand what it does, you have to decompose the command and have a basic awareness of Boolean logic.
Directly from bash man page:
[...]
expression1 && expression2
True if both expression1 and expression2 are true.
expression1 || expression2
True if either expression1 or expression2 is true.
[...]
In the shell (and in general in unix world), the boolean true is a program that exits with status 0.
ls tries to list the pattern, if it succeed (meaning the pattern exists) it exits with status 0, 2 otherwise (have a look at ls man page for details).
In our case there are actually 3 expressions; for the sake of clarity I will put in parentheses, although they are not needed because && has precedence over ||:
(expression1 && expression2) || expression3
So if expression1 is true (i.e. ls found the pattern), expression2 is evaluated (it is just an echo and will exit with status 0). In this case expression3 is never evaluated, because what's on the left side of || is already true and it would be a waste of resources to evaluate what's on the right.
Otherwise, if expression1 is false, expression2 is not evaluated, but expression3 is.
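One caveat worth knowing about this idiom: with first && second || third, third also runs when second fails, not only when first does. If the command after && could ever fail, the explicit if form is safer:
if ls /home/edward/bank1/fiche/Test* >/dev/null 2>&1
then echo "found one"
else echo "found none"
fi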
for entry in "/home/loc/etc/"/*
do
if [ -s /home/loc/etc/$entry ]
then
echo "$entry File is available"
else
echo "$entry File is not available"
fi
done
Hope it helps
The following script will wait until the file named by its first argument exists:
cat > waitfor.csh
#!/bin/csh
while !( -e $1 )
sleep 10m
end
ctrl+D
Here -e tests for the existence of a file,
$1 is the first argument to the script,
and sleep 10m pauses for 10 minutes between checks.
You can execute the script with ./waitfor.csh ./temp ; echo "the file exists"
A one-liner to check whether the file exists or not:
awk 'BEGIN { print ((getline line < "file.txt") < 0 ? "File does not exist" : "File Exists") }'
(The parentheses around the getline expression matter; without them the comparison does not parse as intended.)
Wildcards aren't expanded inside quoted strings, and when a wildcard is expanded, it is returned unchanged if there are no matches; it doesn't expand into an empty string. Try:
output="$(ls home/edward/bank1/fiche/Test* 2>/dev/null)"
if [ -n "$output" ]
then echo "Found one"
else echo "Found none"
fi
If the wildcard expanded to filenames, ls will list them on stdout; otherwise it will print an error on stderr, and nothing on stdout. The contents of stdout are assigned to output.
if [ -n "$output" ] tests whether $output contains anything.
Another way to write this would be:
if [ $(ls home/edward/bank1/fiche/Test* 2>/dev/null | wc -l) -gt 0 ]
Is it possible to pass command line arguments to shell script as name value pairs, something like
myscript action=build module=core
and then in my script, get the variable like
$action and process it?
I know that $1, $2, and so on can be used to get the arguments, but those aren't name-value pairs. Even if they were, a developer using the script would have to take care to pass them in the same order. I do not want that.
This worked for me:
for ARGUMENT in "$@"
do
    KEY=$(echo "$ARGUMENT" | cut -f1 -d=)
    KEY_LENGTH=${#KEY}
    VALUE="${ARGUMENT:$KEY_LENGTH+1}"
    export "$KEY"="$VALUE"
done
# from this line, you could use your variables as you need
cd "$FOLDER"
mkdir "$REPOSITORY_NAME"
Usage
bash my_scripts.sh FOLDER="/tmp/foo" REPOSITORY_NAME="stackexchange"
FOLDER and REPOSITORY_NAME are ready to use in the script.
It does not matter what order the arguments are in.
In the Bourne shell, there is a seldom-used option '-k' which automatically places any values specified as name=value on the command line into the environment. Of course, the Bourne/Korn/POSIX shell family (including bash) also do that for name=value items before the command name:
name1=value1 name2=value2 command name3=value3 -x name4=value4 abc
Under normal POSIX-shell behaviour, the command is invoked with name1 and name2 in the environment, and with four arguments. Under the Bourne (and Korn and bash, but not POSIX) shell -k option, it is invoked with name1, name2, name3, and name4 in the environment and just two arguments. The bash manual page (as in man bash) doesn't mention the equivalent of -k but it works like the Bourne and Korn shells do.
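A hypothetical sketch of the effect (the child command and the names are made up for illustration):
set -k
sh -c 'echo "name3=$name3; args: $*"' shname name3=value3 abc
# with -k:    prints name3=value3; args: abc   (the assignment went into the environment)
# without -k: prints name3=;       args: name3=value3 abc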
I don't think I've ever used it (the -k option) seriously.
There is no way to tell from within the script (command) that the environment variables were specified solely for this command; they are simply environment variables in the environment of that script.
This is the closest approach I know of to what you are asking for. I do not think anything equivalent exists for the C shell family. I don't know of any other argument parser that sets variables from name=value pairs on the command line.
With some fairly major caveats (it is relatively easy to do for simple values, but hard to deal with values containing shell meta-characters), you can do:
case $1 in
(*=*) eval $1;;
esac
This is not the C shell family. The eval effectively does the shell assignment.
arg=name1=value1
echo $name1    # prints nothing; name1 is not yet set
eval $arg      # performs the assignment name1=value1
echo $name1    # prints: value1
env action=build module=core myscript
You said you're using tcsh. For Bourne-based shells, you can drop the "env", though it's harmless to leave it there. Note that this applies to the shell from which you run the command, not to the shell used to implement myscript.
If you specifically want the name=value pairs to follow the command name, you'll need to do some work inside myscript.
It's quite an old question, but still valid
I have not found a cookie-cutter solution, so I combined the above answers. For my needs I created this solution; it works even with whitespace in an argument's value.
Save this as argparse.sh
#!/bin/bash
: ${1?
'Usage:
$0 --<key1>="<val1a> <val1b>" [ --<key2>="<val2a> <val2b>" | --<key3>="<val3>" ]'
}
declare -A args
while [[ "$#" > "0" ]]; do
case "$1" in
(*=*)
_key="${1%%=*}" && _key="${_key/--/}" && _val="${1#*=}"
args[${_key}]="${_val}"
(>&2 echo -e "key:val => ${_key}:${_val}")
;;
esac
shift
done
(>&2 echo -e "Total args: ${#args[#]}; Options: ${args[#]}")
## This additionally checks for a specific key
[[ -n "${args['path']+1}" ]] && (>&2 echo -e "key: 'path' exists") || (>&2 echo -e "key: 'path' does NOT exist");
# Example: note that arguments to the script can have an optional -- prefix
./argparse.sh --x="blah"
./argparse.sh --x="blah" --yy="qwert bye"
./argparse.sh x="blah" yy="qwert bye"
Some interesting use cases for this script:
./argparse.sh --path="$(ls -1)"
./argparse.sh --path="$(ls -d -1 "$PWD"/**)"
The script above is also available as a gist; refer to argparse.sh.
Extending Jonathan's answer, this worked nicely for me:
#!/bin/bash
if [ "$#" -eq "0" ]; then
echo "Error! Usage: Remind me how this works again ..."
exit 1
fi
while [[ "$#" > "0" ]]
do
case $1 in
(*=*) eval $1;;
esac
shift
done
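Hypothetical usage (assuming the script above is saved as myscript):
./myscript action=build module=core
# inside the loop, eval $1 performs each assignment, so afterwards
# $action is "build" and $module is "core"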
Is it possible to pass command line arguments to a function from within a Bourne shell script, in order to allow getopts to process them?
The rest of my script is nicely packed into functions, but it's starting to look like I'll have to move the argument processing into the main logic.
The following is how it's written now, but it doesn't work:
processArgs()
{
while getopts j:f: arg
do
echo "${arg} -- ${OPTARG}"
case "${arg}" in
j) if [ -z "${filename}" ]; then
job_number=$OPTARG
else
echo "Filename ${filename} already set."
echo "Job number ${OPTARG} will be ignored.
fi;;
f) if [ -z "${job_number}" ]; then
filename=$OPTARG
else
echo "Job number ${job_number} already set."
echo "Filename ${OPTARG} will be ignored."
fi;;
esac
done
}
doStuff1
processArgs
doStuff2
Is it possible to define the function in a way that it can read the script's args? Can this be done some other way? I like the functionality of getopts, but it looks like in this case I'm going to have to sacrifice the beauty of the code to get it.
You can provide args to getopts after the variable. The default is "$@", but that's also what shell functions use to represent their arguments. The solution is to pass "$@", representing all the script's command-line arguments as individual strings, to processArgs:
processArgs "$@"
Adding that to your script (and fixing the quoting in line 11), and trying out some gibberish test args:
$ ./try -j asdf -f fooo -fasdfasdf -j424pyagnasd
j -- asdf
f -- fooo
Job number asdf already set.
Filename fooo will be ignored.
f -- asdfasdf
Job number asdf already set.
Filename asdfasdf will be ignored.
j -- 424pyagnasd