pass a command as an argument to a bash script

How do I pass a command as an argument to a bash script?
In the following script, I attempted to do that, but it's not working!
#! /bin/sh
if [ $# -ne 2 ]
then
echo "Usage: $0 <dir> <command to execute>"
exit 1;
fi;
while read line
do
$($2) $line
done < $(ls $1);
echo "All Done"
A sample usage of this script would be
./myscript thisDir echo
Executing the call above ought to echo the names of all files in the thisDir directory.

First big problem: $($2) $line executes $2 by itself as a command, then tries to run its output (if any) as another command with $line as an argument to it. You just want $2 $line.
Second big problem: while read ... done < $(ls $1) doesn't read from the list of filenames, it tries to read the contents of a file specified by the output of ls -- this will fail in any number of ways depending on the exact circumstances. Process substitution (while read ... done < <(ls $1)) would do more-or-less what you want, but it's a bash-only feature (i.e. you must start the script with #!/bin/bash, not #!/bin/sh). And anyway it's a bad idea to parse ls, you should almost always just use a shell glob (*) instead.
The script also has some other potential issues with spaces in filenames (using $line without double-quotes around it, etc), and weird stylistic oddities (you don't need ; at the end of a line in shell). Here's my stab at a rewrite:
#! /bin/sh
if [ $# -ne 2 ]; then
echo "Usage: $0 <dir> <command to execute>"
exit 1
fi
for file in "$1"/*; do
$2 "$file"
done
echo "All done"
Note that I didn't put double-quotes around $2. This allows you to specify multiword commands (e.g. ./myscript thisDir "cat -v" would be interpreted as running the cat command with the -v option, rather than trying to run a command named "cat -v"). It would actually be a bit more flexible to take all arguments after the first one as the command and its argument, allowing you to do e.g. ./myscript thisDir cat -v, ./myscript thisDir grep -m1 "pattern with spaces", etc:
#! /bin/sh
if [ $# -lt 2 ]; then
echo "Usage: $0 <dir> <command to execute> [command options]"
exit 1
fi
dir="$1"
shift
for file in "$dir"/*; do
"$#" "$file"
done
echo "All done"

your command "echo" command is "hidden" inside a sub-shell from its argments in $line.
I think I understand what your attempting in with $($2), but its probably overkill, unless this isn't the whole story, so
while read line ; do
$2 $line
done < $(ls $1)
should work for your example with thisDir echo. If you really need the cmd-substitution and the subshell, then put your arguments so they can see each other:
$($2 $line)
And as D.S. mentions, you might need eval before either of these.
IHTH

you could try (in your code):
echo "$2 $line"|sh
or the eval:
eval "$2 $line"

Related

Speed up shell script/Performance enhancement of shell script

Is there a way to speed up the below shell script? It takes a good 40 minutes to update about 150,000 files every day. Sure, given the volume of files to create and update, that may be acceptable; I don't deny it. However, if there is a much more efficient way to write this, or a way to rewrite the logic entirely, I'm open to it. I'm looking for some help, please.
#!/bin/bash
DATA_FILE_SOURCE="<path_to_source_data/${1}"
DATA_FILE_DEST="<path_to_dest>"
for fname in $(ls -1 "${DATA_FILE_SOURCE}")
do
for line in $(cat "${DATA_FILE_SOURCE}"/"${fname}")
do
FILE_TO_WRITE_TO=$(echo "${line}" | awk -F',' '{print $1"."$2".daily.csv"}')
CONTENT_TO_WRITE=$(echo "${line}" | cut -d, -f3-)
if [[ ! -f "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}" ]]
then
echo "${CONTENT_TO_WRITE}" >> "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
else
if ! grep -Fxq "${CONTENT_TO_WRITE}" "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
then
sed -i "/${1}/d" "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
"${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
echo "${CONTENT_TO_WRITE}" >> "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
fi
fi
done
done
There are still parts of your published script that are unclear, like the sed command. Still, I rewrote it with saner practices and far fewer external calls, which should really speed it up.
#!/usr/bin/env sh
DATA_FILE_SOURCE="<path_to_source_data/$1"
DATA_FILE_DEST="<path_to_dest>"
for fname in "$DATA_FILE_SOURCE/"*; do
while IFS=, read -r a b content || [ "$a" ]; do
destfile="$DATA_FILE_DEST/$a.$b.daily.csv"
if [ -f "$destfile" ] && grep -Fxq "$content" "$destfile"; then
sed -i "/$1/d" "$destfile"
fi
printf '%s\n' "$content" >>"$destfile"
done < "$fname"
done
Make it parallel (as much as you can).
#!/bin/bash
set -e -o pipefail
declare -ir MAX_PARALLELISM=20 # pick a limit
declare -i pid
declare -a pids
# ...
for fname in "${DATA_FILE_SOURCE}/"*; do
if ((${#pids[@]} >= MAX_PARALLELISM)); then
wait -p pid -n || echo "${pids[pid]} failed with ${?}" 1>&2
unset 'pids[pid]'
fi
while IFS= read -r line; do
FILE_TO_WRITE_TO="..."
# ...
done < "${fname}" & # forking here
pids[$!]="${fname}"
done
for pid in "${!pids[#]}"; do
wait -n "$((pid))" || echo "${pids[pid]} failed with ${?}" 1>&2
done
Here’s a directly runnable skeleton showing how the harness above works (with 36 items to process and 20 parallel processes at most):
#!/bin/bash
set -e -o pipefail
declare -ir MAX_PARALLELISM=20 # pick a limit
declare -i pid
declare -a pids
do_something_and_maybe_fail() {
sleep $((RANDOM % 10))
return $((RANDOM % 2 * 5))
}
for fname in some_name_{a..f}{0..5}.txt; do # 36 items
if ((${#pids[@]} >= MAX_PARALLELISM)); then
wait -p pid -n || echo "${pids[pid]} failed with ${?}" 1>&2
unset 'pids[pid]'
fi
do_something_and_maybe_fail & # forking here
pids[$!]="${fname}"
echo "${#pids[#]} running" 1>&2
done
for pid in "${!pids[#]}"; do
wait -n "$((pid))" || echo "${pids[pid]} failed with ${?}" 1>&2
done
Strictly avoid spawning external processes (such as awk, grep and cut) once per input line. fork()ing is extremely inefficient in comparison to:
Running one single awk / grep / cut process on an entire input file (to preprocess all lines at once for easier processing in bash) and feeding the whole output into (e.g.) a bash loop.
Using Bash expansions instead, where feasible, e.g. "${line/,/.}" and other tricks from the EXPANSION section of the bash man page, without fork()ing any further processes (see the sketch right after this list).
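A minimal sketch of the difference, assuming comma-separated input (input.csv is a stand-in name):
# Slow: forks cut and tr once per input line
while IFS= read -r line; do
key=$(printf '%s' "$line" | cut -d, -f1-2 | tr ',' '.')
done < input.csv
# Fast: pure bash expansions, no forks inside the loop
while IFS=, read -r a b rest; do
key="$a.$b"
done < input.csv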
Off-topic side notes:
ls -1 is unnecessary. First, ls won't write multiple columns unless the output is a terminal, so a plain ls would do. Second, bash expansions are usually a cleaner and more efficient choice. (You can use nullglob to correctly handle empty directories / "no match" cases; see the sketch after these notes.)
Looping over the output from cat is a (less common) useless use of cat case. Feed the file into a loop in bash instead and read it line by line. (This also gives you more line format flexibility.)
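A minimal nullglob sketch (bash-only, since shopt is a bash builtin; DATA_FILE_SOURCE is the variable from the question):
#!/bin/bash
shopt -s nullglob # an unmatched glob expands to nothing instead of itself
for fname in "${DATA_FILE_SOURCE}/"*; do
printf 'processing %s\n' "${fname}"
done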

SoF2 shell script not running

I've got the following code in my shell script:
SERVER=`ps -ef | grep -v grep | grep -c sof2ded`
if ["$SERVER" != "0"]; then
echo "Already Running, exiting"
exit
else
echo "Starting up the server..."
cd /home/sof2/
/home/sof2/crons/start.sh > /dev/null 2>&1
fi
I did chmod a+x status.sh
Now I try to run the script but it's returning this error:
./status.sh: line 5: [1: command not found
Starting up the server...
Any help would be greatly appreciated.
Could you please try changing a few things in your script as follows and let me know if that helps? (I changed the backticks to $(...) and [ to [[ in the code.)
SERVER=$(ps -ef | grep -v grep | grep -c sof2ded)
if [[ "$SERVER" -eq 0 ]]; then
echo "Already Running, exiting"
exit
else
echo "Starting up the server..."
cd /home/sof2/
/home/sof2/crons/start.sh > /dev/null 2>&1
fi
The problem is with the test command. "But", I hear you say, "I am not using the test command". Yes you are, it is also known as [.
if statement syntax is if command. The brackets are not part of if syntax.
Commands have arguments separated (tokenized) by whitespace, so:
[ "$SERVER" != "0" ]
The whitespace is needed because the command is [ and then there are 4 arguments passed to it (the last one must be ]).
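You can reproduce the error message from the question directly (here SERVER happens to contain 1):
$ SERVER=1
$ if ["$SERVER" != "0"]; then echo running; fi
bash: [1: command not found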
A more robust way of comparing numerics is to use double parentheses,
(( SERVER == 0 ))
Notice that you don't need the $ or the quotes around SERVER. Also the spacing is less important, but useful for readability.
[[ is used for comparing text patterns.
As a comment, backticks ` ` are considered deprecated because they are difficult to read; they have been replaced by $( ... ).

Replacing 'source file' with its content, and expanding variables, in bash

In a script.sh,
source a.sh
source b.sh
CMD1
CMD2
CMD3
how can I replace the source *.sh with their content (without executing the commands)?
I would like to see what the bash interpreter executes after sourcing the files and expanding all variables.
I know I can use set -n -v or run bash -n -v script.sh 2>output.sh, but that would not replace the source commands (much less expand any variables that a.sh or b.sh contain).
I thought of using a subshell, but that still doesn't expand the source lines. I tried a combination of set +n +v and set -n -v before and after the source lines, but that still does not work.
I'm going to send that output to a remote machine using ssh.
I could feed the content into the ssh command with < output.sh, but I can't log in as root on the remote machine; I am, however, a sudoer.
Therefore, I thought I could create the script and send it as a base64-encoded string (using that clever trick):
base64 script | ssh remotehost 'base64 -d | sudo bash'
Is there a solution?
Or do you have a better idea?
You can do something like this:
inline.sh:
#!/usr/bin/env bash
while read line; do
if [[ "$line" =~ (\.|source)\s+.+ ]]; then
file="$(echo $line | cut -d' ' -f2)"
echo "$(cat $file)"
else
echo "$line"
fi
done < "$1"
Note this assumes the sourced files exist, and doesn't handle errors. You should also handle possible hashbangs. If the sourced files themselves contain source commands, you need to apply the script recursively, e.g. something like (not tested):
while egrep -q '^(source|\.)' main.sh; do
bash inline.sh main.sh > main.sh.tmp && mv main.sh.tmp main.sh # redirecting onto the file being read would truncate it
done
Let's test it
main.sh:
source a.sh
. b.sh
echo cc
echo "$var_a $var_b"
a.sh:
echo aa
var_a="stack"
b.sh:
echo bb
var_b="overflow"
Result:
bash inline.sh main.sh
echo aa
var_a="stack"
echo bb
var_b="overflow"
echo cc
echo "$var_a $var_b"
bash inline.sh main.sh | bash
aa
bb
cc
stack overflow
BTW, if you just want to see what bash executes, you can run
bash -x [script]
or remotely
ssh user@host -t "bash -x [script]"

conditional redirection in bash

I have a bash script that I want to be quiet when run without an attached tty (like from cron).
I now was looking for a way to conditionally redirect output to /dev/null in a single line.
This is an example of what I had in mind, but I will have many more commands that do output in the script
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
echo "is this visible?" $REDIRECT
Unfortunately, this does not work:
$ ./conditional-redirect.sh
is this visible?
$ echo "" | ./conditional-redirect.sh
is this visible? >& /dev/null
what I don't want to do is duplicate all commands in a with-redirection or with-no-redirection variant:
if tty -s; then
echo "is this visible?"
else
echo "is this visible?" >& /dev/null
fi
EDIT:
It would be great if the solution also provided a way to output something in "quiet" mode: e.g. when something is really wrong, I might want to get a notice from cron.
For bash, you can use the line:
exec &>/dev/null
This will direct all stdout and stderr to /dev/null from that point on. It uses the non-argument version of exec.
Normally, something like exec xyzzy would replace the program in the current process with a new program, but you can use this non-argument version to simply modify redirections while keeping the current program.
So, in your specific case, you could use something like:
tty -s
if [[ $? -eq 1 ]] ; then
exec &>/dev/null
fi
If you want the majority of output to be discarded but still want to output some stuff, you can create a new file handle to do that. Something like:
tty -s
if [[ $? -eq 1 ]] ; then
exec 3>&1 &>/dev/null
else
exec 3>&1
fi
echo Normal # won't see this.
echo Failure >&3 # will see this.
I found another solution, but I feel it is clumsy compared to paxdiablo's answer:
if tty -s; then
REDIRECT=/dev/tty
else
REDIRECT=/dev/null
fi
echo "Normal output" &> $REDIRECT
You can use a function:
function the_code {
echo "is this visible?"
# as many code lines as you want
}
if tty -s; then # or other condition
the_code
else
the_code >& /dev/null
fi
This works well for me. If DUMP_FILE is empty, things go to stdout; otherwise, to the file. It does the job without using explicit redirection, just pipes and existing applications.
function stdout_or_file
{
local DUMP_FILE=${1:-}
if [ -z "${DUMP_FILE}" ]; then
cat
else
sed -n "w ${DUMP_FILE}"
fi
}
function foo()
{
local MSG=$1
echo "info: ${MSG}"
}
foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can also squeeze this into one line:
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}" another command that does the same is dd status=none of=${DUMP_FILE}
The simplest solution is to use eval (a shell builtin), as it will act on the redirection in the expanded variable. It will also act on anything else in the command line, so add extra quoting as required. (Note the extra single quotes added around the echo string below, due to the '?' which would otherwise cause shell filename expansion to be attempted.)
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
eval echo '"is this visible?"' $REDIRECT

how to create the option for printing out statements vs executing them in a shell script

I'm looking for a way to create a switch for this bash script so that I have the option of either printing (echo) the command to stdout or executing it, for debugging purposes. As you can see below, I am currently doing this manually by commenting out one statement or the other.
Code:
#!/usr/local/bin/bash
if [ $# != 2 ]; then
echo "Usage: testcurl.sh <localfile> <projectname>" >&2
echo "sample:testcurl.sh /share1/data/20110818.dat projectZ" >&2
exit 1
fi
echo /usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
#/usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
I'm simply looking for an elegant/better way to create a switch from the command line: print or execute.
One possible trick, though it will only work for simple commands (e.g., no pipes or redirection (a)) is to use a prefix variable like:
pax> cat qq.sh
${PAXPREFIX} ls /tmp
${PAXPREFIX} printf "%05d\n" 72
${PAXPREFIX} echo 3
What this will do is insert your specific variable (PAXPREFIX in this case) before the commands. If the variable is empty, it will not affect the command, as follows:
pax> ./qq.sh
my_porn.gz copy_of_the_internet.gz
00072
3
However, if it's set to echo, it will prefix each line with that echo string.
pax> PAXPREFIX=echo ./qq.sh
ls /tmp
printf %05d\n 72
echo 3
(a) The reason why it will only work for simple commands can be seen if you have something like:
${PAXPREFIX} ls -1 | tr '[a-z]' '[A-Z]'
When PAXPREFIX is empty, it will simply give you the list of your filenames in uppercase. When it's set to echo, it will result in:
echo ls -1 | tr '[a-z]' '[A-Z]'
giving:
LS -1
(not quite what you'd expect).
In fact, you can see a problem with even the simple case above, where %05d\n is no longer surrounded by quotes.
If you want a more robust solution, I'd opt for:
if [[ ${PAXDEBUG:-0} -eq 1 ]] ; then
echo /usr/bin/curl -c $PROXY --certkey $CERT --header ...
else
/usr/bin/curl -c $PROXY --certkey $CERT --header ...
fi
and use PAXDEBUG=1 myscript.sh to run it in debug mode. This is similar to what you have now but with the advantage that you don't need to edit the file to switch between normal and debug modes.
For debugging output from the shell itself, you can run it with bash -x or put set -x in your script to turn it on at a specific point (and, of course, turn it off with set +x).
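For instance, you can trace just the interesting region (a minimal sketch; the commands are placeholders):
#!/bin/bash
echo "not traced"
set -x # start echoing commands as they run
ls /tmp
set +x # stop echoing
echo "not traced either"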
#!/usr/local/bin/bash
if [[ "$1" == "--dryrun" ]]; then
echoquoted() {
printf "%q " "$#"
echo
}
maybeecho=echoquoted
shift
else
maybeecho=""
fi
if [ $# != 2 ]; then
echo "Usage: testcurl.sh <localfile> <projectname>" >&2
echo "sample:testcurl.sh /share1/data/20110818.dat projectZ" >&2
exit 1
fi
$maybeecho /usr/bin/curl "$1" -o "$2"
Try something like this:
show=echo
$show /usr/bin/curl ...
Then set/unset $show accordingly.
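One way to drive this from the command line is via an environment variable (SHOW_ONLY is a hypothetical name):
show="${SHOW_ONLY:+echo}" # "echo" if SHOW_ONLY is set and non-empty, empty otherwise
$show /usr/bin/curl ...
Then SHOW_ONLY=1 ./testcurl.sh ... prints the curl command, while a plain ./testcurl.sh ... executes it.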
This does not directly answer your specific question, but I guess you're trying to see what command gets executed for debugging. If you replace #!/usr/local/bin/bash with #!/usr/local/bin/bash -x, bash will run and echo the commands in your script.
I do not know of a way for "print vs execute", but I know of a way for "print and execute", and that is bash -x. See this link for an example.
