I am trying to run a.out a lot of times from the command line, but I can't start the processes in the background because bash treats it as a syntax error:
for f in `seq 20`; do ./a.out&; done   # bash: syntax error near unexpected token ';'
How can I place & on the command line so that bash doesn't complain, allowing me to run these processes in the background and generate load on the system?
P.S.: I don't want to break it into multiple lines.
This works:
for f in `seq 20`; do ./a.out& done
& terminates a command, just as ;, &&, ||, and | do.
This means that bash expects a command between & and ; but can't find one. Hence the error.
& is a command terminator just like ;, so do not use both.
And use bash brace expansion instead of seq, which is not available on all Unix systems.
for f in {1..20} ; do ./a.out& done
Remove the ; after a.out:
for f in `seq 20`; do ./a.out& done
Break that into multiple lines or remove the ; after the &
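Once the jobs are launched in the background, you often want to wait for all of them to finish before continuing. A minimal sketch, using sleep as a stand-in for ./a.out (an assumption; substitute your own binary):

```shell
# Launch 20 background copies of a command, then block until every
# background job has finished before printing the final message.
for f in {1..20}; do sleep 0.1 & done
wait            # waits for all background children of this shell
echo "all jobs done"
```

The bare `wait` with no arguments collects every background child, so nothing prints until all 20 copies have exited.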
Related
When I type the following command in a cygwin bash shell:
for i in $(ls) do echo $i done
I get a ">" asking me to keep typing, as opposed to the expected behavior. Why?
You need to separate your for, do and done statements. Try this:
for i in $(ls); do echo $i; done
You can also separate the statements with newlines. For example:
cygwin$ for i in $(ls)
> do
> echo $i
> done
Your for loop is still waiting for the semicolon or newline that terminates the list of values. So far, your loop will set i to the list of words produced by ls, the word do, the word echo, the words produced by the expansion of the current value of i, and the word done.
The > is the so-called secondary prompt, which indicates that the shell is still waiting for input to complete the command started by for.
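You can see this word-swallowing behavior directly by terminating such a loop yourself. In the sketch below, the literal words do, echo, and done after `in` are treated as plain list items, not keywords, because the word list has not been terminated yet (placeholder word a stands in for the `$(ls)` output):

```shell
# Without a ';' after the word list, 'do', 'echo', and 'done' are just
# more words for i to iterate over -- keywords are only recognized in
# command position.
for i in a do echo done; do echo "word: $i"; done
```

This prints four lines: word: a, word: do, word: echo, and word: done, confirming that the shell had been accumulating those tokens into the value list.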
I have a function named myfunction.C which takes a number as input. I want a Bash script that passes the numbers 0 to 99 to my function.
I had this code working before, but now it passes {0..99} instead of just one number to myfunction. What is wrong here?
for i in {0..99}
do
root -b -q -l myfunction.C++\($i\)
done
exit 0
Make sure you're actually using bash to run this script, either by explicitly doing it:
bash myScript.sh
or by using a shebang line, first line in the script:
#!/usr/bin/env bash
If, for some reason, you don't have bash available, "lesser" shells can generally call on the external seq program to do something similar:
for i in $(seq 1 12) ; do echo $i ; done
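If even seq is missing, a plain counter-driven while loop works in any POSIX shell. A minimal sketch (the range 1 to 12 mirrors the seq example above):

```shell
# POSIX-portable counting loop: no brace expansion, no seq required.
i=1
while [ "$i" -le 12 ]; do
  echo "$i"
  i=$((i + 1))   # POSIX arithmetic expansion
done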
I am trying to run a for loop on the terminal where I want to send each iteration to background process so that all of them run simultaneously.
Following is the command running one by one
for i in *.sra; do fastq-dump --split-files $i ; done # ";" only
I have highlighted the semicolon.
To run simultaneously this works
for i in *.sra; do fastq-dump --split-files $i & done # "&" only
But this gives an error
for i in *.sra; do fastq-dump --split-files $i & ; done # "& ;"
It would be nice if someone explained what is going on here. I know this should be written as a shell script with proper indentation, but sometimes I only have this one-liner to run.
& and ; both terminate the command that precedes them.
You can't write & ; any more than you could write ; ; or & &, because the language only allows a command to be terminated once (and doesn't permit a zero-word list as a command).
Thus: for i in *.sra; do fastq-dump --split-files "$i" & done is perfectly correct as-is, and does not require an additional ;.
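You can probe both forms from the command line with bash -c: `&` alone parses fine, while `& ;` is rejected because the `;` has no command left to terminate.

```shell
# '&' already terminated the command, so a following ';' is a syntax error.
bash -c 'true & wait' && echo "ok: & alone"
bash -c 'true & ; wait' 2>/dev/null || echo "error: & followed by ;"
```

The first invocation succeeds; the second exits nonzero with a syntax error before running anything.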
I have a bash script that I wish to read from a file to get its arguments set. Basically my script reads arguments positionally ($1, $2, $3, etc.):
while test $# -gt 0; do
case $1 in
-h | --help)
echo "Help cruft"
exit 0
;;
esac
shift
done
One of the options I was hoping for was a config file that supplies arguments (for simple and easy configuration), so I was hoping the set -- command would work (-- to override the arguments). However, since they are defined in a file, I have to read it in and use xargs to pass them:
-c | --config)
cat $2 | xargs set --
continue
;;
The trouble is that xargs buggers up the -- so I don't know how to accomplish this.
Note: I realize I could use source config_file and have it set variable; might be the final option. I wanted to know if I could do it like above and simplify the documentation.
A simplified example script:
# foo.sh
echo "x y z" | xargs set --
echo $*
# Command line
$ bash foo.sh a b c
xargs: set: No such file or directory
a b c
xargs can't execute set because:
set is a shell built-in, not an external command. xargs only knows how to execute commands. (Some shell built-ins shadow commands with the same name, such as printf, true, and [. So xargs can execute those commands, but the semantics might not be identical to the built-in.)
Even if xargs could execute set, it would have no effect because xargs does not run inside of the shell's environment; every command executed by xargs is a separate process. So you will get no error if you do this:
echo a b c | xargs bash -c 'set -- "$@"' _
But it also won't do anything useful. (Substitute set with echo and you'll see that it does actually invoke the command.)
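Making the suggested substitution concrete: replacing set with echo shows that xargs really does invoke the command, just in a child process whose environment vanishes when it exits.

```shell
# xargs appends its input words to the command it runs; the child's
# output proves the command executed, even though a child can never
# modify this shell's positional parameters.
echo a b c | xargs echo invoked with
```

This prints "invoked with a b c".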
How to read arguments from a file.
First, you need to answer the question: what does it mean to have arguments in a file? Are they individual whitespace-separated words with no mechanism to include whitespace in any argument? (That would also be required for xargs to work in its default mode, so it is not a totally unreasonable assumption, although it is almost certainly going to get you into trouble at some point.)
In that case you don't need xargs at all; you can just use command substitution:
set -- $(<file)
While that will work fine, this won't:
echo a b c | set -- $(</dev/stdin)
because the pipeline (created by the | operator) causes the processes on either side to be run in subshells, and consequently the set doesn't modify the current shell's environment variables.
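The distinction is easy to check: command substitution runs in the current shell, so set really does change the positional parameters. A minimal sketch with printf standing in for reading a file (an assumption; substitute `$(<file)` for a real file):

```shell
# $(...) is expanded by the current shell, then word-split into three
# arguments, so this set takes effect here -- unlike a pipeline.
set -- $(printf '%s' 'a b c')
echo "$#"   # 3
echo "$1"   # a
```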
A more robust solution
Suppose that each argument is in a single line in the file, which makes it possible to include whitespace in an argument, but not a newline. Then we could use the useful mapfile built-in to read the arguments into an array, and set the positional arguments from the array. (Or just use the array directly, but that would be a different question.)
mapfile -t args < file
set -- "${args[@]}"
Again, watch out for piping into mapfile; it won't work, for the same reason that it didn't work with set.
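When the lines come from a command rather than a file, process substitution avoids the pipe's subshell: mapfile stays in the current shell and the array survives. A sketch using printf to generate two lines, one of which contains a space:

```shell
# < <(...) feeds the command's output to mapfile without a pipeline,
# so the args array is set in this shell, not a subshell.
mapfile -t args < <(printf '%s\n' "first arg" "second arg")
set -- "${args[@]}"
echo "$#"   # 2
echo "$1"   # first arg
```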