grep search with filename as parameter - bash

I'm working on a shell script.
OUT=$1
Here, the OUT variable is my filename.
I'm using grep search as follows:
l=`grep "$pattern " -A 15 $OUT | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
The issue is that the filename parameter I must pass is test.log. However, I have the following files in the folder:
test.log
test.log.001
test.log.002
I would ideally like to pass the filename as test.log and have it search all the log files. I know the usual way is to use test.log.* on the command line, but I'm having difficulty replicating that in a shell script.
My efforts:
var=$'.*'
l=`grep "$pattern " -A 15 $OUT$var | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
However, I did not get the desired result.

Hopefully this will get you closer:
#!/bin/bash
# Leave the * outside the quotes so the glob expands to test.log, test.log.001, ...
for f in "$1"*; do
    grep "$pattern" -A15 "$f"
done | grep -w "$i" | awk 'END{print $8}'
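A minimal alternative sketch, assuming GNU or BSD grep: grep accepts several files at once, so the shell glob can do the work on its own. The -h flag suppresses the filename: prefixes so the awk field positions stay the same as for a single file.

OUT=$1
# Glob stays outside the quotes; -h keeps the output format identical to
# the single-file case, so $8 still refers to the same column.
l=$(grep -h "$pattern " -A 15 "$OUT"* | grep -w "$i" | awk '{print $8}' | tail -1 | tr '\n' ',')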

Related

How to remove any commands that begin with "echo" from history

I have tried the below:
history -d $(history | grep "echo.*" |awk '{print $1}')
But it is not deleting all the echo commands from the history. I want to delete any command that starts with echo, like:
echo "mamam"
echoaaa
echo "hello"
echooooo
You can use this to remove the echo entries:
for d in $(history | grep -Po '^\s*\K(\d+)(?= +echo)' | sort -nr); do history -d "$d"; done
The sort -nr matters: deleting from the highest event number down means each deletion does not renumber the entries still waiting to be deleted.
I would do:
history -d $(history | grep -E "^ *[0-9]+ *echo" | awk '{print $1}')
The history command produces a column of event numbers, each followed by the command. We match an echo that follows such an event number, and the awk then prints just the event number.
An alternative without resorting to awk would be:
history -d $(history | grep -E "^ *[0-9]+ *echo" | grep -Eow '[0-9]+')
Note that bash's history -d takes a single offset (or a start-end range in bash 5+), so these two one-liners only work as-is when there is exactly one match; with several matches, use a loop as above.
Another option is to clean the history file itself:
history -w                          # write the in-memory history to ~/.bash_history
sed -i '/^echo/d' ~/.bash_history   # drop every line that starts with echo
history -c                          # clear the in-memory history
history -r                          # re-read the cleaned file
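For repeated use, the loop above generalizes into a small function. histdel is a hypothetical name and this is a sketch, not a hardened tool; the argument is matched at the start of each command:

# Hypothetical helper: delete every history entry whose command starts
# with the given word, highest offset first so the numbering stays stable.
histdel() {
    local d
    for d in $(history | grep -Po "^\s*\K\d+(?= +$1)" | sort -nr); do
        history -d "$d"
    done
}

histdel echo    # removes echo "mamam", echoaaa, echooooo, ...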

Bash: Curl grep result as string variable

I have a bash script as below:
curl -s "$url" | grep "https://cdn" | tail -n 1 | awk -F[\",] '{print $2}'
which works fine; when I run it, I get the CDN URL:
https://cdn.some-domain.com/some-result/
When I put it in a variable:
myvariable=$(curl -s "$url" | grep "https://cdn" | tail -n 1 | awk -F[\",] '{print $2}')
and echo it like this:
echo "CDN URL: '$myvariable'"
I get a blank result: CDN URL: ''
Any idea what could be wrong? Thanks.
If your curl output carries a trailing DOS carriage return, that will botch the result, though not exactly as you describe. Still, maybe try this:
myvariable=$(curl -s "$url" | awk -F'[",]' '/https:\/\/cdn/{ sub(/\r$/, ""); url=$2 } END { print url }')
Notice how the grep, the tail -n 1, and the carriage-return removal (the equivalent of tr -d '\r') are all folded into the single Awk command. Tangentially, see "useless use of grep".
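To check whether a stray carriage return really is the culprit, a quick diagnostic (od -c is POSIX; myvariable is the variable from the question):

printf '%s' "$myvariable" | od -c | tail -n 3
# a trailing CR shows up as \r at the end of the dump; plain echo hides
# it because the CR just moves the cursor back to the start of the line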
The result could be blank if awk's split yields only one field on that line. You might try grep -o to return only the matched string; since the match then starts at the URL itself, the URL becomes the first field:
myvariable=$(curl -s "$url" | grep -oP 'https://cdn.*?[",].*' | tail -n 1 | awk -F'[",]' '{print $1}')
echo "$myvariable"

Grep Spellchecker

I am trying to write a simple shell script that takes a text file as input and checks all non-punctuated words against a dictionary (english.txt). It should return all non-matching (misspelled) words. I am using grep, but it does not seem to successfully match all the lines in english.txt. I have included my code below.
#!/bin/bash
cat "$1" |
tr ' \t' '\n\n' |        # put each whitespace-separated word on its own line
sed -e "/'/d" |          # drop words containing apostrophes
tr -d '[:punct:]' |      # strip remaining punctuation
tr -cd '[:alpha:]\n' |   # keep only letters and newlines
sed -e "/^$/d" |         # remove empty lines
grep -v -i -w -f english.txt
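One likely culprit, assuming english.txt holds one word per line: with -f, every dictionary line is treated as a basic regular expression, so entries containing metacharacters can misfire, and -w is looser than an exact whole-line comparison. Using -F (literal strings) with -x (whole-line match) makes each input word match only an exact dictionary entry; if the dictionary has DOS line endings, strip them first (english.unix.txt is a hypothetical cleaned copy):

tr -d '\r' < english.txt > english.unix.txt   # only needed for a CRLF dictionary
# final stage of the pipeline, with strict literal matching:
# -F  dictionary lines are literal strings, not regexes
# -x  the whole input word must equal a whole dictionary line
grep -v -i -F -x -f english.unix.txt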

No output when using awk inside bash script

My bash script is:
output=$(curl -s http://www.espncricinfo.com/england-v-south-africa-2012/engine/current/match/534225.html | sed -nr 's/.*<title>(.*?)<\/title>.*/\1/p')
score=echo"$output" | awk '{print $1}'
echo $score
The above script prints just a newline to my console, whereas the required output is:
$ curl -s http://www.espncricinfo.com/england-v-south-africa-2012/engine/current/match/534225.html | sed -nr 's/.*<title>(.*?)<\/title>.*/\1/p' | awk '{print $1}'
SA
So why am I not getting the output from my bash script when it works fine in the terminal? Am I using echo "$output" in the wrong way?
#!/bin/bash
output=$(curl -s http://www.espncricinfo.com/england-v-south-africa-2012/engine/current/match/534225.html | sed -nr 's/.*<title>(.*?)<\/title>.*/\1/p')
score=$( echo "$output" | awk '{ print $1 }' )
echo "$score"
The score variable was empty because the syntax was wrong: in score=echo"$output" | awk ..., the left side of the pipe is a bare variable assignment (no command runs, and it happens in a subshell), so nothing is fed to awk and score is never set in the parent shell. Command substitution, as above, captures the pipeline's output correctly.
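A minimal demonstration of why an assignment on the left of a pipe vanishes (hypothetical values, plain bash):

x=hello | cat               # assignment runs in a pipeline subshell; no command executes
echo "x is '${x:-unset}'"   # prints: x is 'unset'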

Use each line of piped output as parameter for script

I have an application (myapp) that gives me a multiline output
result:
abc|myparam1|def
ghi|myparam2|jkl
mno|myparam3|pqr
stu|myparam4|vwx
With grep and sed I can extract my parameters as below:
myapp | grep '|' | sed -e 's/^[^|]*//' | sed -e 's/|.*//'
But I then want each of these myparamx values passed as a parameter to a script, one invocation per value:
myscript.sh myparam1
myscript.sh myparam2
etc.
Any help greatly appreciated.
Please see xargs. For example:
myapp | grep '|' | sed -e 's/^[^|]*//' | sed -e 's/|.*//' | xargs -n 1 myscript.sh
Maybe this can help:
myapp | awk -F'|' '{ print $2 }' | while read -r line; do /path/to/myscript.sh "$line"; done
I like the xargs -n 1 solution from Dark Falcon, and while read is the classical tool for this kind of thing, but just for completeness:
myapp | awk -F'|' '{print "myscript.sh", $2}' | bash
As a side note, speaking of extracting the 2nd field, you could also use cut:
myapp | cut -d'|' -f2   # fields are numbered from 1, so -f2 is the second field
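One more variant, for completeness, needing no awk or cut at all: read can split on | itself when IFS is set for it (the _ names are throwaway placeholders):

# Split each line on '|' directly in read: fields 1 and 3 land in
# throwaway variables, and the middle field is passed to the script.
myapp | grep '|' | while IFS='|' read -r _ param _; do
    myscript.sh "$param"
done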
