Example:
check_prog hostname.com /bin/check_awesome -c 10 -w 13
check_remote -H $HOSTNAME -C "$ARGS"
#To be expanded as
check_remote -H hostname.com -C "/bin/check_awesome -c 10 -w 13"
I hope the above makes sense. The arguments will change, as I will be using this for 20+ commands. It's an odd method of wrapping a program, but it's to work around a few issues with a few systems we are using here (gotta love code from the 70s).
The above could be written in Perl or Python, but Bash would be the preferred method.
You can use shift
shift is a shell builtin that operates on the positional parameters. Each time you invoke shift, it "shifts" all the positional parameters down by one. $2 becomes $1, $3 becomes $2, $4 becomes $3, and so on
example:
$ function foo() { echo "$@"; shift; echo "$@"; }
$ foo 1 2 3
1 2 3
2 3
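As a sketch of how shift could apply to the wrapper described in the question (check_remote and the argument layout come from the question; everything else is assumed):
#!/bin/bash
# check_prog: first argument is the target host, the rest is the wrapped command
host="$1"
shift                                # drop the host; the remaining parameters are the command
check_remote -H "$host" -C "$*"      # "$*" joins them back into a single string
So check_prog hostname.com /bin/check_awesome -c 10 -w 13 expands to the check_remote line shown in the question.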
As a programmer I would strongly recommend against shift, because operations that modify state can affect large parts of a script and make it harder to understand, modify, and debug :sweat_smile:. You can instead use the following:
#!/usr/bin/env bash
all_args=("$@")
first_arg="$1"
second_arg="$2"
rest_args=("${all_args[@]:2}")
echo "${rest_args[@]}"
Adapting Abdullah's answer a bit:
your_command "$1" "${@:2}"
Tested on Bash (v3.2 and v5.1) and Zsh (v5.8)
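The same slice fits the wrapper from the question without touching the positional parameters (a sketch; the names come from the question):
check_remote -H "$1" -C "${*:2}"     # "$1" is the host, "${*:2}" joins everything after it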
Here is myscript.sh
#!/bin/bash
for i in {1..$1};
do
echo $1 $i;
done
If I run myscript.sh 3 the output is
3 {1..3}
instead of
3 1
3 2
3 3
Clearly $1 contains the right value, so why doesn't for i in {1..$1} behave the same as if I had written for i in {1..3} directly?
You should use a C-style for loop to accomplish this:
for ((i=1; i<=$1; i++)); do
echo $i
done
This avoids external commands and nasty eval statements.
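Applied to myscript.sh from the question:
#!/bin/bash
for ((i=1; i<=$1; i++)); do
  echo "$1 $i"
done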
Because brace expansion occurs before expansion of variables. http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion.
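A quick way to see the ordering:
x=3
echo {1..$x}    # prints {1..3} because braces are expanded before $x is
echo {1..3}     # prints 1 2 3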
If you want to use braces, you could do something grim like this:
for i in `eval echo {1..$1}`;
do
echo $1 $i;
done
Summary: Bash is vile.
You can use seq command:
for i in `seq 1 $1`
Or you can use the C-style for...loop:
for((i=1;i<=$1;i++))
Here is a way to expand variables inside braces without eval:
end=3
declare -a 'range=({'"1..$end"'})'
We now have a nice array of numbers:
for i in ${range[@]}; do echo $i; done
1
2
3
I know you've mentioned bash in the heading, but I would add that 'for i in {$1..$2}' works as intended in zsh. If your system has zsh installed, you can just change your shebang to zsh.
Using zsh with the example 'for i in {$1..$2}' also has the added benefit that $1 can be greater than $2 and it still works (the range counts down), something that would require quite a bit of messing about if you wanted that kind of flexibility with a C-style for loop.
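For example, a small zsh sketch of the same script (relies on zsh expanding parameters before braces, as described above; the script name is assumed):
#!/usr/bin/env zsh
for i in {$1..$2}; do
  echo $i
done
# ./range.sh 1 3 prints 1 2 3; ./range.sh 3 1 counts down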
I occasionally run a bash command line like this:
n=0; while [[ $n -lt 10 ]]; do some_command; n=$((n+1)); done
To run some_command a number of times in a row -- 10 times in this case.
Often some_command is really a chain of commands or a pipeline.
Is there a more concise way to do this?
If your range has a variable, use seq, like this:
count=10
for i in $(seq $count); do
command
done
Simply:
for run in {1..10}; do
command
done
Or as a one-liner, for those that want to copy and paste easily:
for run in {1..10}; do command; done
Using a constant:
for ((n=0;n<10;n++)); do
some_command;
done
Using a variable (can include math expressions):
x=10; for ((n=0; n < (x / 2); n++)); do some_command; done
Another simple way to hack it:
seq 20 | xargs -Iz echo "Hi there"
runs echo 20 times.
Notice that seq 20 | xargs -Iz echo "Hi there z" would output:
Hi there 1
Hi there 2
...
If you're using the zsh shell:
repeat 10 { echo 'Hello' }
Where 10 is the number of times the command will be repeated.
Using GNU Parallel you can do:
parallel some_command ::: {1..1000}
If you do not want the number as argument and only run a single job at a time:
parallel -j1 -N0 some_command ::: {1..1000}
Watch the intro video for a quick introduction:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (http://www.gnu.org/software/parallel/parallel_tutorial.html). Your command line will love you for it.
A simple function in the bash config file (~/.bashrc often) could work well.
function runx() {
for ((n=0;n<$1;n++))
do ${*:2}
done
}
Call it like this.
$ runx 3 echo 'Hello world'
Hello world
Hello world
Hello world
Another form of your example:
n=0; while (( n++ < 10 )); do some_command; done
for _ in {1..10}; do command; done
Note the underscore instead of using a variable.
If you are OK doing it periodically, you could run the following command to run it every 1 sec indefinitely. You can put other custom checks in place to run it n number of times.
watch -n 1 some_command
If you wish to have visual confirmation of changes, add --differences before the command you pass to watch.
According to the OSX man page, there's also
The --cumulative option makes highlighting "sticky", presenting a
running display of all positions that have ever changed. The -t
or --no-title option turns off the header showing the interval,
command, and current time at the top of the display, as well as the
following blank line.
Linux/Unix man page can be found here
xargs is fast:
#!/usr/bin/bash
echo "while loop:"
n=0; time while (( n++ < 10000 )); do /usr/bin/true ; done
echo -e "\nfor loop:"
time for ((n=0;n<10000;n++)); do /usr/bin/true ; done
echo -e "\nseq,xargs:"
time seq 10000 | xargs -I{} -P1 -n1 /usr/bin/true
echo -e "\nyes,xargs:"
time yes x | head -n10000 | xargs -I{} -P1 -n1 /usr/bin/true
echo -e "\nparallel:"
time parallel --will-cite -j1 -N0 /usr/bin/true ::: {1..10000}
On a modern 64-bit Linux, gives:
while loop:
real 0m2.282s
user 0m0.177s
sys 0m0.413s
for loop:
real 0m2.559s
user 0m0.393s
sys 0m0.500s
seq,xargs:
real 0m1.728s
user 0m0.013s
sys 0m0.217s
yes,xargs:
real 0m1.723s
user 0m0.013s
sys 0m0.223s
parallel:
real 0m26.271s
user 0m4.943s
sys 0m3.533s
This makes sense, as the xargs command is a single native process that spawns the /usr/bin/true command multiple times, instead of the for and while loops that are all interpreted in Bash. Of course this only works for a single command; if you need to do multiple commands in each iteration of the loop, it will be just as fast as, or maybe faster than, passing sh -c 'command1; command2; ...' to xargs.
The -P1 could also be changed to, say, -P8 to spawn 8 processes in parallel to get another big boost in speed.
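For instance, hedged sketches of both variations (assuming GNU xargs; command1 and command2 are placeholders for whatever needs to run):
seq 10000 | xargs -I{} -P8 -n1 /usr/bin/true              # same benchmark body, 8 workers at once
seq 10 | xargs -I{} -P1 sh -c 'command1; command2' _      # several commands per iteration via sh -c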
I don't know why GNU parallel is so slow. I would have thought it would be comparable to xargs.
For one, you can wrap it up in a function:
function manytimes {
n=0
times=$1
shift
while [[ $n -lt $times ]]; do
"$@"
n=$((n+1))
done
}
Call it like:
$ manytimes 3 echo "test" | tr 'e' 'E'
tEst
tEst
tEst
xargs and seq will help
function __run_times { seq 1 $1 | { shift; xargs -i -- "$@"; } }
the view :
abon@abon:~$ __run_times 3 echo hello world
hello world
hello world
hello world
All of the existing answers appear to require bash, and don't work with a standard BSD UNIX /bin/sh (e.g., ksh on OpenBSD).
The below code should work on any BSD:
$ echo {1..4}
{1..4}
$ seq 4
sh: seq: not found
$ for i in $(jot 4); do echo e$i; done
e1
e2
e3
e4
$
I solved it with this loop, where repeat is an integer giving the number of repetitions:
repeat=10
for n in $(seq $repeat);
do
command1
command2
done
You can use this command to repeat your command 10 times or more
for i in {1..10}; do **your command**; done
for example
for i in {1..10}; do **speedtest**; done
Yet another answer: Use parameter expansion on empty parameters:
# calls curl 4 times
curl -s -w "\n" -X GET "http:{,,,}//www.google.com"
Tested on CentOS 7 and macOS.
For loops are probably the right way to do it, but here is a fun alternative:
echo -e {1..10}"\n" |xargs -n1 some_command
If you need the iteration number as a parameter for your invocation, use:
echo -e {1..10}"\n" |xargs -I# echo now I am running iteration #
Edit: It was rightly commented that the solution given above would work smoothly only with simple command runs (no pipes, etc.). You can always use sh -c to do more complicated stuff, but it's not worth it.
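For instance, a sketch of that sh -c approach with a throwaway pipeline (the pipeline body is only an illustration):
echo -e {1..3}"\n" | xargs -I# sh -c 'echo "iteration #" | tr a-z A-Z'
# prints ITERATION 1, ITERATION 2, ITERATION 3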
Another method I use typically is the following function:
rep() { s=$1;shift;e=$1;shift; for x in `seq $s $e`; do c=${@//#/$x};sh -c "$c"; done;}
now you can call it as:
rep 3 10 echo iteration #
The first two numbers give the range. The # will get translated to the iteration number. Now you can use this with pipes too:
rep 1 10 "ls R#/|wc -l"
which gives you the number of files in directories R1 .. R10.
The script file
bash-3.2$ cat test.sh
#!/bin/bash
echo "The argument is arg: $1"
for ((n=0;n<$1;n++));
do
echo "Hi"
done
and the output below
bash-3.2$ ./test.sh 3
The argument is arg: 3
Hi
Hi
Hi
bash-3.2$
A little bit naive but this is what I usually remember off the top of my head:
for i in 1 2 3; do
some commands
done
Very similar to @joe-koberg's answer. His is better, especially if you need many repetitions; I just find the other syntax harder to remember because I haven't used bash much in recent years, at least not for scripting.
How about the alternate form of for mentioned in (bashref)Looping Constructs?
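That alternate form is the arithmetic for loop; a minimal sketch with a placeholder body:
for ((run=1; run<=10; run++)); do
  echo "run $run"   # replace with the command(s) to repeat
done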
This is almost the exact same question as in this post, except that I do not want to use eval.
In short, I want to execute the command echo aaa | grep a by first storing it in a string variable Command='echo aaa | grep a', and then running it without using eval.
In the post above, the selected answer used eval. That works for me too. What concerns me a lot is that there are plenty of warnings about eval below, followed by some attempts to circumvent it. However, none of them are able to solve my problem (essentially the OP's). I have commented below their attempts, but since it has been there for a long time, I suppose it is better to post the question again with the restriction of not using eval.
Concrete Example
What I want is a shell script that runs my command when I am happy:
#!/bin/bash
# This script run-this-if.sh runs the commands when I am happy
# Warning: the following script does not work (on purpose)
if [ "$1" == "I-am-happy" ]; then
"$2"
fi
$ run-if.sh I-am-happy [insert-any-command]
Your sample usage can't ever work with an assignment, because assignments are scoped to the current process and its children. Because there's no reason to try to support assignments, things get suddenly far easier:
#!/bin/sh
if [ "$1" = "I-am-happy" ]; then
shift; "$#"
fi
This then can later use all the usual techniques to run shell pipelines, such as:
run-if-happy "$happiness" \
sh -c 'echo "$1" | grep "$2"' _ "$untrustedStringOne" "$untrustedStringTwo"
Note that we're passing the execve() syscall an argv with six elements:
sh (the shell to run; change to bash etc if preferred)
-c (telling the shell that the following argument is the code for it to run)
echo "$1" | grep "$2" (the code for sh to parse)
_ (a constant which becomes $0)
...whatever the shell variable untrustedStringOne contains... (which becomes $1)
...whatever the shell variable untrustedStringTwo contains... (which becomes $2)
Note here that echo "$1" | grep "$2" is a constant string -- in single-quotes, with no parameter expansions or command substitutions -- and that untrusted values are passed into the slots that fill in $1 and $2, out-of-band from the code being evaluated; this is essential to have any kind of increase in security over what eval would give you.
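To see how those argv slots line up, a harmless sketch:
sh -c 'echo "zero=$0 one=$1 two=$2"' _ "first value" "second value"
# prints: zero=_ one=first value two=second value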
I'm trying to wrap a bash script b with a script a.
However I want to pass the options passed to a also to b as they are.
#!/bin/bash
# script a
./b ${@:$OPTIND}
This will also print $1 (if any). What's the simplest way not to?
So calling:
./a -c -d 5 first-arg
I want b to execute:
./b -c -d 5 # WITHOUT first-arg
In bash, you can build an array containing the options, and use that array to call the auxiliary program.
call_b () {
typeset -i i=0
typeset -a a; a=()
while ((++i <= OPTIND)); do # for i=1..$OPTIND
a+=("${!i}") # append parameter $i to $a
done
./b "${a[#]}"
}
call_b "$#"
In any POSIX shell (ash, bash, ksh, zsh under sh or ksh emulation, …), you can build a list with "$1" "$2" … and use eval to set different positional parameters.
call_b () {
i=1
while [ $i -le $OPTIND ]; do
a="$a \"\$$i\""
i=$(($i+1))
done
eval set -- $a
./b "$#"
}
call_b "$#"
As often, this is rather easier in zsh.
./b "${(#)#[1,$OPTIND]}"
Why are you using ${@:$OPTIND} and not just $@ or $*?
The ${parameter:index} syntax says to use index to parse $parameter. If you're using $@, it'll use index as an index into the parameters.
$ set one two three four # Sets "$@"
$ echo $@
one two three four
$ echo ${@:0}
one two three four
$ echo ${@:1}
one two three four
$ echo ${@:2}
two three four
$OPTIND is really only used if you're using getopts; it tracks the index of the next parameter in $@ for getopts to process. According to the bash manpage:
OPTIND is initialized to 1 each time the shell or a shell script is invoked.
Which may explain why you're constantly getting the value of 1.
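A small sketch of how OPTIND advances as getopts consumes options (the option letters here are just assumptions):
#!/bin/bash
while getopts ab: opt; do :; done     # -a takes no argument, -b takes one
echo "OPTIND=$OPTIND"
# ./sketch.sh -a -b val arg   prints OPTIND=4, the index of the first non-option argument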
EDITED IN RESPONSE TO EDITED QUESTION
#David - "./b $# " still prints the arguments of passed to a (see Q edit). I want to pass only the options of a and not the args
So, if I executed:
$ a -a foo -b bar -c fubar barfu barbar
You want to pass to b:
$ b -a foo -b bar -c fubar
but not
$ b -a foo -b bar -c fubar barfu barbar
That's going to be tricky...
Is there a reason why you can't pass the whole line to b and just ignore it?
I believe it might be possible to use regular expressions:
$ echo "-a bar -b foo -c barfoo foobar" | sed 's/\(-[a-z] [^- ][^- ]*\) *\([^-][^-]*\)$/\1/'
-a bar -b foo -c barfoo
I can't vouch that this regular expression will work in all situations (i.e. what if there are no parameters?). Basically, I'm anchoring it to the end of the line, and then matching for the last parameter and argument and the rest of the line. I do a replace with just the last parameter and argument.
I've tested it in a few situations, but you might simply be better off using getopts to capture the arguments and then passing those to b yourself, or simply have b ignore those extra arguments if possible.
In order to separate the command options from the regular arguments, you need to know which options take arguments, and which stand alone.
In the example command:
./a -c -d 5 first-arg
-c and -d might be standalone options and 5 first-arg the regular arguments
5 might be an argument to the -d option (this seems to be what you mean)
-d might be an argument to the -c option and (as in the first case) 5 first-arg the regular arguments.
Here's how I'd handle it, assuming -a, -b, -c and -d are the only options, and that -b and -d are the only ones that take an option argument. Note that it is necessary to parse all of the options in order to figure out where they end.
#!/bin/bash
while getopts ab:cd: OPT; do
case "$OPT" in
a|b|c|d) : ;; # Don't do anything, we're just here for the parsing
?) echo "Usage: $0 [-ac] [-b something] [-d something] [args...]" >&2
exit 1 ;;
esac
done
./b "${#:1:$((OPTIND-1))}"
The entire while loop is there just to compute OPTIND. The ab:cd: in the getopts command defines what options are allowed and which ones take arguments (indicated by colons). The cryptic final expression means "elements 1 through OPTIND-1 of the argument array, passed as separate words".
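For instance, substituting echo for ./b shows what gets forwarded with the question's invocation (a sketch):
#!/bin/bash
while getopts ab:cd: OPT; do :; done
echo "${@:1:$((OPTIND-1))}"
# ./sketch.sh -c -d 5 first-arg   prints: -c -d 5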