So I have this shell script:
#!/bin/bash -xv
PATH=${PATH[*]}:.
#filename: testScript
echo $#
It should print the number of parameters I receive from a text file.
I have a text file(named: file.txt) with one line:
I am a proud sentence
The output should be, as I understand it, 5: since there are 5 words, there are 5 parameters.
I try to run it by:
chmod +x ./testScript.txt
./testScript.txt < ./file.txt > output.txt
But in output.txt I seem to get 0, as if there were no parameters. I barely understand when to use $1 and $2 to access parameters, and how to actually send parameters into a script. Should I use a pipe? Can it be implemented with a pipe, anyway? Also, when a text file is passed to a script, is $1 the text file's name? Will echo $1 print file.txt in the above example?
< ./file.txt sets standard input of the command line to the content of the file.
read needs to be used to read standard input.
Maybe this script is closer to your needs
#!/bin/bash --
printf "%d\n" $#
call it with
./testScript.txt $(cat ./file.txt) > output.txt
$(...) makes the shell execute the command first; the line in the file is then passed as parameters to the script.
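For example, with the one-line file.txt above ($ is the prompt; a quick demonstration):
$ ./testScript.txt $(cat ./file.txt) > output.txt
$ cat output.txt
5
The five words of the line become five separate arguments, so $# is 5. Note that this relies on the shell's word splitting, so whitespace inside the file cannot be preserved this way.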
----
Otherwise if you use
./testScript.txt ./file.txt
Then $1 is equal to ./file.txt
./testScript.txt < ./file.txt > output.txt
Here testScript.txt has no parameters: zero, none, nothing. The shell parses file redirection before the command runs, and < ./file.txt > output.txt is just file redirection, so the shell "grabs" that part away; testScript.txt never even sees the names of the files its standard input and standard output are connected to.
This would work (i.e. output "5"):
./testScript.txt I am a proud sentence
So would this:
xargs ./testScript.txt < file.txt
...and so forth (see Jay Jargot's answer).
For more info, this article by Mo Budlong should be helpful: Command line psychology 101
----
I came across the syntax for a "while read" loop in a bash script
$> while read line; do echo $line; done < f1 # f1 is a file in my current directory
will print the file line by line.
My search for "while read" in the GNU Bash manual https://www.gnu.org/software/bash/manual/
came up short, and while other tutorial sites give some usage examples, I would still like to understand the full syntax options for this construct.
Can it be used with "for" loops as well?
something like
for line in read; do echo $line; done < f1
The syntax for a while loop is
while list-1; do list-2; done
where list-1 is one or more commands (usually one), and the loop continues while list-1 is successful (return value of zero); list-2 is the "body" of the loop.
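list-1 does not have to be read; any command whose exit status can be tested will do. A contrived sketch (not from the original question):
i=0
while [ "$i" -lt 3 ]; do   # [ ... ] is itself a command; it succeeds while i < 3
    echo "$i"
    i=$((i + 1))
done
This prints 0, 1 and 2, then stops as soon as the test command fails.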
The syntax of a for loop is different:
for name in word; do list; done
where word is usually a list of strings, not a command (although it can be hacked, via command substitution, to use a command's output as the words).
The purpose of a for loop is to iterate through word; the purpose of while is to loop while a command is successful. They are used for different tasks.
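For instance, a sketch using the f1 file from the question:
# for visits a fixed list of words
for name in one two three; do echo "$name"; done

# the "hack": command substitution supplies the words
for word in $(cat f1); do echo "$word"; done
Note that the second loop visits every whitespace-separated word of f1, not every line, which is why while read is the tool for line-by-line processing.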
Redirection changes a file descriptor to refer to another file or file descriptor.
< changes file descriptor 0 (zero), also known as stdin
> changes file descriptor 1 (one), also known as stdout
So somecommand < foo changes stdin to read from foo rather than the terminal keyboard.
somecommand > foo changes stdout to write to foo rather than the terminal screen (if foo exists it will be overwritten).
In your case somecommand is while, but it can be any other command. Note that not all commands read from stdin, yet the command syntax with < is still valid.
A common mistake is:
# WRONG!
while read < somefile
do
....
done
In that case somecommand is read, and the effect is that it reads the first line of somefile, proceeds with the body of the loop, comes back, and reads the first line of the file again! It loops forever, reading just the first line, since while has no knowledge of or interest in what read is doing, only in its return value of success or failure. (read puts the line into the variable REPLY if you don't specify one.)
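The fix is to redirect the whole loop, as in the example from the question:
# RIGHT: done < somefile feeds every read in the loop
while read -r line
do
    echo "$line"
done < somefile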
Redirection examples ($ indicates a prompt):
$ cat > file1
111111
<CTRL+D>
$ cat > file2
222222
<CTRL+D>
cat reads from stdin if we don't specify a filename, so it reads from the keyboard. Instead of writing to the screen we redirect to a file. The <CTRL+D> indicates End-Of-File sent from the keyboard.
This redirects stdin to read from a file:
$ cat < file1
111111
Can you explain this?
$ cat < file1 file2
222222
Each line in a given file 'a.txt' contains the directory/path to another unique file. Suppose we want to parse 'a.txt' line-by-line, extract the path in string format, and then use a tool such as vim to process the file at this path, and so on.
After going through this thread, Read a file line by line assigning the value to a variable, I wrote the following bash script, say 'open-file.sh' (I'm new to bash):
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
vim -c ":q" -cq $line # Just open the file and close it using :q
done < "$1"
We would then run the above script as -
./open-file.sh a.txt
The problem is that although the path to a new file is correctly specified by $line, when vim opens the file, it continues to receive the text contained in 'a.txt' as commands. How can I write a script that correctly obtains the path from 'a.txt', opens the file using vim, and then continues parsing the remaining lines of 'a.txt'?
Replace:
vim -c ":q" -cq $line
With:
vim -c ":q" -cq "$line" </dev/tty
The redirection </dev/tty tells vim to take its standard input from the terminal. Without it, vim inherits the loop's standard input, which is redirected from "$1".
Also, it is good practice to put $line in double-quotes to protect it from word splitting, etc.
Lastly, while vim is excellent for interactive work, if your end-goal is fully automated processing of each file, you might want to consider tools such as sed or awk.
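Putting the pieces together, a corrected open-file.sh might look like this (the loop from the question with the two fixes applied):
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    # /dev/tty gives vim the terminal as stdin, so it does not consume "$1"
    vim -c ":q" -cq "$line" </dev/tty
done < "$1"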
Although I'm not sure of your ultimate goal, this shell command will execute vim once per line in a.txt:
xargs -o -n1 vim -c ':q' < a.txt
As explained in the comments to Read a file line by line assigning the value to a variable, the issue you're encountering is due to the fact that vim is an interactive program and thus keeps reading from the loop's redirected standard input.
The problem was already mentioned in a comment under the answer you based your script on.
vim is consuming stdin which is given to the loop by done < $1. We can observe the same behavior in the following example:
$ while read i; do cat; done < <(seq 3)
2
3
<(seq 3) simulates a file with the three lines 1, 2, and 3. Instead of the expected three iterations we get only one, with the output 2 and 3.
stdin is not only passed to read in the head of the loop, but also to cat in the body of the loop. Therefore read reads one line, the loop is entered, cat reads all remaining lines, stdin is empty, read has nothing to read anymore, the loop exits.
You could circumvent the problem by redirecting something to vim; however, there is an even better way. You don't need the loop at all:
< "$1" xargs -d\\n -n1 vim -c :q -cq
xargs will execute vim once for every line in the file given by $1.
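If you do want to keep a loop, one redirection-based workaround (a sketch, not part of the original answer) is to feed read through a separate file descriptor so the loop body's stdin stays untouched:
# read takes its lines from fd 3; vim keeps the normal stdin
while IFS= read -r line <&3; do
    vim -c ':q' -cq "$line"
done 3< "$1"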
After searching online I was able to figure out how to read a file line by line:
while read p; do
echo $p
done < file.txt
But I would actually like to modify the line in the file.
For example:
while read p; do
if condition
then
echo $p | perl -i -pe 's/a/b/'
fi
done < file.txt
However this doesn't actually modify the file.
Update: A far better version of the bash code has been added. Thanks to Charles Duffy for comments.
Your Perl one-liner takes a line piped into it by echo $p |, getting its standard input that way. It doesn't do anything with the file itself, so the -i flag has no effect. The -p makes it print to the standard output stream. So that whole line, echo ..., doesn't touch the file.
You can redirect the output to a new file and then move that file to overwrite file.txt. Here is a simple-minded example that appends each line to a new file; for better bash code see the update below.
while read p; do
if condition
then
echo $p | perl -pe 's/a/b/' >> temp_out.txt
else
echo $p >> temp_out.txt
fi
done < file.txt
mv temp_out.txt file.txt
We have to add the else branch so that unmodified lines are also appended. Note that in general we cannot replace just some lines; the whole file has to be rewritten.
If this is all that the script does you can do it with a very simple one-liner, see the end. If more work is done you can also put it all in a Perl script but I take it that there may be other good reasons for a bash script.
Update: A much better version of the above. See read and echo under Builtins in the Bash manual.
Appending to the file opens it anew on every iteration, and there is no need for that. Just redirect once, at the end of the loop, much like it is done in the terminal.
read treats backslash as an escape character and removes it from the input. Turn that off with -r.
Leading and trailing whitespace is removed as part of breaking the line into words. Suppress this by emptying the variable that controls which characters are used for splitting: IFS=.
The echo $p can do all kinds of unintended things. A formatted print is better, printf '%s\n' "$p", or at least echo "$p".
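A quick demo of what -r and IFS= change, using a hypothetical line with leading spaces and a backslash:
$ printf '  a\\b  \n' | { read p; echo "[$p]"; }
[ab]
$ printf '  a\\b  \n' | { IFS= read -r p; echo "[$p]"; }
[  a\b  ]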
With this,
while IFS= read -r p; do
if condition
then
echo "$p" | perl -pe 's/a/b/'
else
echo "$p"
fi
done < file.txt > temp_out.txt
mv temp_out.txt file.txt
Finally, if the sole purpose of the Perl one-liner is to run a simple substitution, it is much better to do that in the shell itself than to set up a pipeline and run a whole new process for each line.
echo "${p//a/b}"
Thanks to Charles Duffy for raising all these points in comments.
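With that substitution the loop needs no Perl at all; a sketch of the same loop in pure bash (condition is still the placeholder from the question):
while IFS= read -r p; do
    if condition
    then
        printf '%s\n' "${p//a/b}"
    else
        printf '%s\n' "$p"
    fi
done < file.txt > temp_out.txt
mv temp_out.txt file.txt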
A few comments on Perl one-liners. See documentation at perlrun.
The command perl -e '...' executes any valid Perl code between the quotes. When we add the -n or -p switch, it also reads standard input and executes that code on one line of it at a time, where -p also prints each line after it is processed. The standard input can be supplied to it from a file,
perl -pe '...' input.txt
in which case adding -i flag will result in the file being changed in-place. Or, the input can be piped into it, for example
echo "input text" | perl -pe '...'
in which case the processed line is printed to standard output. This can be redirected to a file, as in the answer above.
To make changes to a given file a line at a time you only need this on the command line
perl -i -pe 's/a/b/' file.txt
If there is more work to do then it may well be better to put it in a script, of course. In this case the one-liner can be a command in the bash script as well, replacing all that code above (unless some bash-specific functionality is preferred for processing lines).
I have a compiled program which I run from the shell; as I run it, it asks me for an input file on stdin. I want to run that program in a bash loop with a predefined input file, such as
for i in $(seq 100); do
input.txt | ./myscript
done
but of course this won't work. How can I achieve that? I cannot edit the source code.
Try
for i in $(seq 100); do
./myscript < input.txt
done
Pipes (|) are inter-process. That is, they stream between processes. What you're looking for is file redirection (e.g. <, > etc.)
Redirection simply means capturing output from a file, command,
program, script, or even code block within a script and sending it as
input to another file, command, program, or script.
You may see cat used for this e.g. cat file | mycommand. Given the above, this usage is redundant and often the winner of a 'Useless use of cat' award.
You can use:
./myscript < input.txt
to send the content of input.txt to the stdin of myscript.
Based on your comments, it looks like myscript prompts for a file name and you want to always respond with input.txt. Did you try this?
for i in $(seq 100); do
echo input.txt | ./myscript
done
You might want to just try this first:
echo input.txt | ./myscript
just in case.
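Equivalently, a bash here-string can replace the echo:
./myscript <<< 'input.txt'
This feeds the literal string input.txt, followed by a newline, to the program's standard input, which is what its prompt is waiting for.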
I'm writing a bash script called 'run' that tests programs with pre-defined inputs.
It takes in a file as the first parameter, then a program as a second parameter.
The call would look like
./run text.txt ./check
for example, the program 'run' would run 'check' with text.txt as the input. This will save me lots of testing time with my programs.
right now I have
$2 < text.txt > text.created
So it takes text.txt and redirects it as input into the program specified as the second argument, then dumps the result into text.created.
I have the input in text.txt and I know what the output should look like, but when I cat text.created, it's empty.
Does anybody know the proper way to run a program with a file as the input? This seems intuitive to me, but could there be something wrong with the 'check' program rather than what I'm doing in the 'run' script?
Thanks! Any help is always appreciated!
EDIT: the file text.txt contains multiple lines, each naming a file that holds an input for the program 'check'.
That is, text.txt could contain
asdf1.txt
asdf2.txt
asdf3.txt
I want to test check with each file asdf1.txt, asdf2.txt, asdf3.txt.
A simple test with
#!/bin/sh
# the whole loop reads $1 line by line
while read -r REPLY
do
# run $2 with the contents of the file named on the line just read
xargs "$2" < "$REPLY"
done < "$1"
works fine. Call that file "run" and run it with
./run text.txt ./check
The program ./check gets executed once for each file listed in text.txt, with that file's contents as its parameters. Don't forget to chmod +x run to make it executable.
This is the sample check program that I use:
#!/bin/sh
echo "This is check with parameters $1 and $2"
Which prints the given parameters.
My file text.txt is:
textfile1.txt
textfile2.txt
textfile3.txt
textfile4.txt
and the files textfile1.txt, ... contain one line each for every instance of "check", for example:
lets go
or
one two
The output:
$ ./run text.txt ./check
This is check with parameters lets and go
This is check with parameters one and two
This is check with parameters uno and dos
This is check with parameters eins and zwei
The < operator redirects the contents of the file to the standard input of the program. This is not the same as using the file's contents as the arguments of the program, which seems to be what you want. For that, do
./program $(cat file.txt)
in bash (or in plain old /bin/sh, use
./program `cat file.txt`
).
This won't manage multiple lines as separate invocations, which your edit indicates is desired. For that you are probably going to want some kind of scripting language (perl, awk, python...), which makes parsing a file linewise easy.
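That said, a plain shell loop can also manage one invocation per line, in the spirit of the earlier answer (a sketch; it relies on deliberate word splitting of the unquoted variable):
while IFS= read -r line; do
    # unquoted on purpose: each word of the line becomes one argument
    ./program $line
done < file.txt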