Redirecting output of a list of commands - bash

I am using grep to pull out lines that match 0. in multiple files. For files that do not contain 0., I want it to output "None" to a file. If it finds matches, I want it to output the matches to a file. I have two example files that look like this:
$ cat sea.txt
shrimp 0.352
clam 0.632
$ cat land.txt
dog 0
cat 0
In the example files, I would get both lines output from sea.txt, but from land.txt I would just get "None", using the following code:
$ grep "0." sea.txt || echo "None"
The double pipe (||) can be read as "do foo, or else do bar", or "if not foo, then bar". It works perfectly, but the problem I am having is that I cannot get it to output either the matches (as it would find in sea.txt) or "None" (as it would find in land.txt) to a file. It always prints to the terminal. I have tried the following, as well as others, without any luck getting the output saved to a file:
grep "0." sea.txt || echo "None" > output.txt
grep "0." sea.txt || echo "None" 2> output.txt
Is there a way to get it to save to a file? If not, is there any way I can use the lines printed to the terminal?

You can group commands with { }:
$ { grep '0\.' sea.txt || echo "None"; } > output.txt
$ cat output.txt
shrimp 0.352
clam 0.632
Notice the ;, which is mandatory before the closing brace.
Also, I've changed your regex to 0\. because you want to match a literal ., and not any character (which the unescaped . does). Finally, I've replaced the quotes with single quotes – this has no effect here, but prevents surprises with special characters in longer regexes.
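As a side note, the same grouping works inside a loop, so you can produce one output file per input; a minimal sketch (the output file names are illustrative, not from the question):
# one output file per input, e.g. sea-output.txt, land-output.txt
for f in sea.txt land.txt; do
    { grep '0\.' "$f" || echo "None"; } > "${f%.txt}-output.txt"
done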

How about this?
echo "$(grep "0." sea.txt || echo "None")" > output.txt
(Quoting the command substitution preserves the newlines between matches; unquoted, all matches would be collapsed onto a single line.)

looping with grep over several files

I have multiple files /text-1.txt, /text-2.txt ... /text-20.txt
and what I want to do is to grep for two patterns and stitch them into one file.
For example:
I have
grep "Int_dogs" /text-1.txt > /text-1-dogs.txt
grep "Int_cats" /text-1.txt> /text-1-cats.txt
cat /text-1-dogs.txt /text-1-cats.txt > /text-1-output.txt
I want to repeat this for all 20 files above. Is there an efficient way to do this in bash, awk, etc.?
#!/bin/bash
# note: [[ ]] below is a bashism, so use bash rather than plain sh
count=1
next () {
    # process the next file while any are left; stop after file 20
    [[ "${count}" -lt 21 ]] && main
    [[ "${count}" -eq 21 ]] && exit 0
}
main () {
    file="text-${count}"
    grep "Int_dogs" "${file}.txt" > "${file}-dogs.txt"
    grep "Int_cats" "${file}.txt" > "${file}-cats.txt"
    cat "${file}-dogs.txt" "${file}-cats.txt" > "${file}-output.txt"
    count=$((count+1))
    next
}
next
grep has some features you seem not to be aware of:
grep can be run on a list of files, but the output will then look different:
For a single file, the output contains only the matching lines, as in this example:
cat text-1.txt
I have a cat.
I have a dog.
I have a canary.
grep "cat" text-1.txt
I have a cat.
For multiple files, the filename will also be shown in the output. Let's add another text file:
cat text-2.txt
I don't have a dog.
I don't have a cat.
I don't have a canary.
grep "cat" text-*.txt
text-1.txt:I have a cat.
text-2.txt:I don't have a cat.
grep can be extended to search for multiple patterns in files, using the -E switch. The patterns need to be separated by a pipe symbol:
grep -E "cat|dog" text-1.txt
I have a cat.
I have a dog.
Combining the previous two points (and noting that grep -E is equivalent to egrep):
egrep "cat|dog" text-*.txt
text-1.txt:I have a cat.
text-1.txt:I have a dog.
text-2.txt:I don't have a dog.
text-2.txt:I don't have a cat.
So, in order to redirect this to an output file, you can simply say:
egrep "cat|dog" text-*.txt >text-1-output.txt
Assuming you're using bash.
Try this:
for i in $(seq 1 20); do rm -f "text-${i}-output.txt"; grep -E "Int_dogs|Int_cats" "text-${i}.txt" >> "text-${i}-output.txt"; done
Details
This one-liner does the following:
The original files are expected to have the following name syntax:
text-<INTEGER_NUMBER>.txt - Example: text-1.txt, text-2.txt, ... text-100.txt.
It loops from 1 to <N>, where <N> is the number of files you want to process.
Warning: the rm -f text-${i}-output.txt command runs first and removes any existing output file, to ensure that only a freshly created output file remains at the end of the process.
grep -E "Int_dogs|Int_cats" text-${i}.txt matches the lines containing either string in the original file, and >> text-${i}-output.txt redirects all matched lines to an output file carrying the number of the original file. Example: for the original file text-5.txt, the file text-5-output.txt will be created and will contain the matched lines (if any).

Searching in bash shell

I have a text file. The info in the text file is:
Book1:Author1:10.50:50:5
Book2:Author2:4.50:30:10
The first field is the book name, the second the author name, the third the price, the fourth the quantity, and the fifth the quantity sold.
Currently I have this code:
function search_book
{
    read -p $'Title: ' Title
    read -p $'Author: ' Author
    if grep -Fq "${Title}:${Author}" BookDB.txt
    then
        record=grep -c "${Title}:${Author}" BookDB.txt
        echo "Found '" $record "' record(s)"
    else
        echo "Book not found"
    fi
}
For $record, I am trying to count the number of lines found. Did I do the right thing? When I run this code, it just shows an error about a command -c.
When I did this:
echo "Found"
grep -c "${Title}" BookDB.txt
echo "record(s)"
It worked, but the output is
Found
1
record(s)
I would like them to be together.
Can I also add -i to grep -Fq in order to make the search case-insensitive?
Let's say I want to search for Book1 and Author1: if I enter 'ok' for the title and 'uth' for the author, is there any wildcard (like % in SQL) I can add so that the search matches in the middle of the title and author?
The expected output would be:
Found 1 record(s)
Book1,Author1,$10.50,50,5.
Is there any way I can change the : delimiter to ,?
And also add a $ to the 3rd column, which is the price?
Please help.
Changing record=grep -c "${Title}:${Author}" BookDB.txt to record=$(grep -c "${Title}:${Author}" BookDB.txt) will fix the error. record=$(cmd) assigns the output of the command cmd to the variable record. Without that, the shell interprets record=grep -c ... as a command -c preceded by an environment variable setting (record=grep).
BTW, since your DB format is column-oriented text data, awk should be a better tool. Sample code:
function search_book
{
    read -p $'Title: ' Title
    read -p $'Author: ' Author
    awk -F: '{if ($1 == "'"$Title"'" && $2 ~ "'"$Author"'") {count+=1; output=output "\n" $0} }
        END {
            if (count > 0) {print "found", count, "record(s)\n", output}
            else {print "Book not found"}
        }' BookDB.txt
}
As you can see, using awk makes it easier to change the delimiter (e.g. awk -F, for a comma delimiter), and also makes the program more robust (e.g. it restricts the matching to the first two fields). If you only need a fuzzy match instead of an exact match, you could change == to ~ in the condition.
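To also cover the formatting part of the question (comma delimiter, $ in front of the price), here is a minimal sketch; it uses fuzzy (~) matching on both fields, assumes the BookDB.txt layout above, and passes the shell variables in with -v instead of splicing quotes:
# print each match as Book,Author,$price,quantity,sold
awk -F: -v title="$Title" -v author="$Author" '
    $1 ~ title && $2 ~ author {
        printf "%s,%s,$%s,%s,%s\n", $1, $2, $3, $4, $5
    }' BookDB.txt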
The "unnamed command -c" error can be avoided by enclosing the right part of the assignment in backticks or "$()", e.g.:
record=`grep -ic "${Title}:${Author}" BookDB.txt`
record=$(grep -ic "${Title}:${Author}" BookDB.txt)
Also, this snippet shows that -i is perfectly fine. However, please note that both grep commands should use the same list of flags (-F is missing in the second one), except for -q, of course.
Anyway, performing grep twice is probably not the best way to go. What about...
record=`grep -ic "${Title}:${Author}" BookDB.txt 2>/dev/null`
if [ ! -z "$record" ]; then ...
... or something like that?
By the way: if you omit -F, you allow the user to operate with regular expressions. This would not only provide wildcards but also the possibility of more complex patterns. You could also add an option to your script that decides whether to use -F or not.
Last but not least: to modify the lines, in order to change the delimiter or manipulate the columns at all, you could look into the manual pages of awk(1) or cut(1), at least. Although I believe a more sophisticated language is more suitable here, e.g. perl(1) or python(1), especially if the script is to be extended with more features.
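For instance, as long as no field itself contains a colon, tr(1) is enough to swap the delimiter on the matched lines:
grep -i "${Title}:${Author}" BookDB.txt | tr ':' ','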
To add to the answer(s) above (this started as a comment, but it grew...):
The $() form is preferred:
- it allows nesting,
- and it greatly simplifies the use of " and ' (each "level" of nesting sees them at its own level, so to speak). This is tough to do with backticks, as nested quotes and single quotes become a nightmare of ` and \\ escapes, depending on the "level of subshell" they are to be interpreted in...
An example (trying to grep only once):
export results="$(grep -i "${Title}:${Author}" BookDB.txt)" ;
export nbresults=$(echo "${results}" | wc -l) ;
printf "Found %8s record(s)\n" "${nbresults}" ;
echo "${results}" ;
or, if there are too many results to fit in a variable:
export tmpresults="/tmp/results.$$"
grep -i "${Title}:${Author}" BookDB.txt > "${tmpresults}"
export nbresults=$(wc -l < "${tmpresults}") ;
printf "Found %8s record(s)\n" "${nbresults}" ;
cat "${tmpresults}" ;
rm -f "${tmpresults}" ;
Note: I use " a lot to illustrate that it can sometimes be needed (not in all the cases above!) to keep spaces, newlines, etc. (And on the wc -l line, the input redirection < keeps the filename out of nbresults, so that it contains only the number of lines.)

How to concatenate stdin and a string?

How do I concatenate stdin to a string, like this?
echo "input" | COMMAND "string"
and get
inputstring
A bit hacky, but this might be the shortest way to do what you asked in the question (use a pipe to pass the stdout of echo "input" as stdin to another process/command):
echo "input" | awk '{print $1"string"}'
Output:
inputstring
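Note that $1 is only the first whitespace-separated field; if the input may contain spaces, use $0 to keep the whole line:
echo "some input" | awk '{print $0 "string"}'
some inputstring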
What task are you exactly trying to accomplish? More context can get you more direction on a better solution.
Update - responding to comment:
@NoamRoss
The more idiomatic way of doing what you want is then:
echo 'http://dx.doi.org/'"$(pbpaste)"
The $(...) syntax is called command substitution. In short, it executes the commands enclosed in a new subshell, and substitutes their stdout output wherever the $(...) was invoked in the parent shell. So you would get, in effect:
echo 'http://dx.doi.org/'"rsif.2012.0125"
Use cat - to read from stdin, and put it in $() to throw away the trailing newline:
echo input | COMMAND "$(cat -)string"
However, why don't you drop the pipe and grab the output of the left side in a command substitution:
COMMAND "$(echo input)string"
I'm often using pipes, so this tends to be an easy way to prefix and suffix stdin:
echo -n "my standard in" | cat <(echo -n "prefix... ") - <(echo " ...suffix")
prefix... my standard in ...suffix
There are several ways of accomplishing this; I personally think the best is:
echo input | while read line; do echo $line string; done
Another is to substitute "$" (the end-of-line anchor) with " string" in a sed command:
echo input | sed "s/$/ string/g"
Why do I prefer the former? Because it concatenates the string to stdin instantly; for example, with the following command:
(echo input_one ;sleep 5; echo input_two ) | while read line; do echo $line string; done
you immediately get the first output:
input_one string
and then after 5 seconds you get the other echo:
input_two string
On the other hand, with "sed", everything inside the parentheses is executed first and only then is the output given to "sed", so the command
(echo input_one ;sleep 5; echo input_two ) | sed "s/$/ string/g"
will output both the lines
input_one string
input_two string
after 5 seconds.
This can be very useful when you are calling functions that take a long time to complete and you want to be continuously updated about their output.
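(For completeness: GNU sed has a -u switch that requests unbuffered processing, which restores the line-by-line behavior in the example above; a sketch, assuming GNU sed:)
(echo input_one; sleep 5; echo input_two) | sed -u "s/$/ string/"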
You can do it with sed:
seq 5 | sed '$a\6'
seq 5 | sed '$ s/.*/\0 6/'
In your example:
echo input | sed 's/.*/\0string/'
I know this is a few years late, but you can accomplish this with the xargs -J option (a BSD xargs feature, available on macOS; GNU xargs does not have it):
echo "input" | xargs -J "%" echo "%" "string"
And since it is xargs, you can do this on multiple lines of a file at once. If the file 'names' has three lines, like:
Adam
Bob
Charlie
You could do:
cat names | xargs -n 1 -J "%" echo "I like" "%" "because he is nice"
Also works:
seq -w 0 100 | xargs -I {} echo "string "{}
Will generate strings like:
string 000
string 001
string 002
string 003
string 004
...
The command you posted would take the string "input" and use it as COMMAND's stdin stream, which would not produce the results you are looking for unless COMMAND first printed out the contents of its stdin and then printed out its command-line arguments.
It seems like what you want to do is more close to command substitution.
http://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html#Command-Substitution
With command substitution you can have a commandline like this:
echo input `COMMAND "string"`
This will first evaluate COMMAND with "string" as its argument, and then expand the result of that command's execution onto the line, replacing what's between the backtick characters.
cat would be my choice: ls | cat - <(echo new line)
With perl
echo "input" | perl -ne 'print "prefix $_"'
Output:
prefix input
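A suffix variant along the same lines; chomp strips the trailing newline before the string is appended:
echo "input" | perl -pe 'chomp; $_ .= "string\n"'
Output:
inputstring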
A solution using sd (basically a modern sed; much easier to use IMO):
# replace '$' (end of string marker) with 'Ipsum'
# the `e` flag disables multi-line matching (treats all lines as one)
$ echo "Lorem" | sd --flags e '$' 'Ipsum'
Lorem
Ipsum  # no trailing newline here
You might observe that Ipsum appears on a new line and that the output is missing a trailing \n. The reason is that echo's output ends in a \n, and you didn't tell sd to add a new one. sd is technically correct, because it's doing exactly what you asked it to do and nothing else.
However this may not be what you want, so instead you can do this:
# replace '\n$' (new line, immediately followed by end of string) by 'Ipsum\n'
# don't forget to re-add the `\n` that you removed (if you want it)
$ echo "Lorem" | sd --flags e '\n$' 'Ipsum\n'
LoremIpsum
If you have a multi-line string, but you want to append to the end of each individual line:
$ ls
bar baz foo
$ ls | sd '\n' '/file\n'
bar/file
baz/file
foo/file
I want to prepend my SQL script with a "set" statement before running it.
So I echo the "set" instruction, then pipe it to cat. The cat command takes two parameters: stdin, marked as "-", and my SQL file; cat joins both into one output stream. Next, I pass the result to the mysql command to run it as a script.
echo "set @ZERO_PRODUCTS_DISPLAY='$ZERO_PRODUCTS_DISPLAY';" | cat - sql/test_parameter.sql | mysql
P.S. The mysql login and password are stored in the .my.cnf file.

Concatenate strings, files and program output in Bash

The use case is, in my case, CSS file concatenation, before it gets minimized. To concat two CSS files:
cat 1.css 2.css > out.css
To add some text at one single position, I can do
cat 1.css - 2.css <<SOMESTUFF > out.css
This will end in the middle.
SOMESTUFF
To add STDOUT from one other program:
sed 's/foo/bar/g' 3.css | cat 1.css - 2.css > out.css
So far so good. But I regularly come into situations where I need to mix several strings, files and even program output together, like copyright headers, files preprocessed by sed(1) and so on. I'd like to concatenate them in as few steps and temporary files as possible, while having the freedom of choosing the order.
In short, I'm looking for a way to do this in as few steps as possible in Bash:
command [string|file|output]+ > concatenated
# note the plus ;-) --------^
(Basically, having a cat that could handle multiple STDINs would be sufficient, I guess, like
<(echo "FOO") <(sed ...) <(echo "BAR") cat 1.css -echo1- -sed- 2.css -echo2-
But I fail to see how I can access those.)
This works:
cat 1.css <(echo "FOO") <(sed ...) 2.css <(echo "BAR")
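Process substitution (<(...)) makes each command's output appear as a file name argument, so cat can mix real files and program output in any order; a concrete instance, reusing the sed example from the question (the header text is illustrative):
cat 1.css <(echo "/* header */") <(sed 's/foo/bar/g' 3.css) 2.css > out.css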
You can do:
echo "$(command 1)" "$(command 2)" ... "$(command n)" > outputFile
You can add all the commands in a subshell, which is redirected to a file:
(
cat 1.css
echo "FOO"
sed ...
echo BAR
cat 2.css
) > output
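(A brace group does the same without forking a subshell; note that a semicolon or newline is required before the closing brace:)
{
cat 1.css
echo "FOO"
sed ...
echo BAR
cat 2.css
} > output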
You can also append to a file with >>. For example:
cat 1.css > output
echo "FOO" >> output
sed ... >> output
echo "BAR" >> output
cat 2.css >> output
(This potentially opens and closes the file repeatedly)
