expression using grep is giving all zeros - bash

I have an expression that I want to use to extract some lines from a text file and count them. I can grep them as follows:
$ cat medsCounts_totals.csv | grep -E 'NumMeds": 0' | wc -l
That works fine. Now I want to loop, building the search string for each count:
$ for i in {0..10}; do expr="NumMeds\": $i"; echo $expr; done
However, when I try to use $expr in the grep:
for i in {0..10}; do expr="NumMeds:\" $i"; cat medsCounts_totals.csv | grep -E "$expr" | wc -l ; done
I get nothing. How do I solve this problem in an elegant manner?

There is a typo in
for i in {0..10}; do expr="NumMeds:\" $i"; cat medsCounts_totals.csv | grep -E "$expr" | wc -l ; done
The colon and the escaped quote are swapped; it should be
"NumMeds\": $i"

Related

How to remove first & last character in bash string

#!/bin/bash
MA=$(bt-device -l | cut -d " " -f 3)
MAC=${MA:1: -1}
bluetoothctl connect $MAC
Expected result:
98:9E:63:18:00:88
Actual result:
(98:9E:63:18:00:88
A few alternatives:
$ echo 'Denny’s Tunez (98:9E:63:18:00:88)' | sed -En 's/^[^(]*\(([^)]*)\).*/\1/p'
98:9E:63:18:00:88
$ echo 'Denny’s Tunez (98:9E:63:18:00:88)' | cut -d'(' -f2 | cut -d')' -f1
98:9E:63:18:00:88
$ echo 'Denny’s Tunez (98:9E:63:18:00:88)' | awk -F'[)(]' '{print $2}'
98:9E:63:18:00:88
$ echo 'Denny’s Tunez (98:9E:63:18:00:88)' | grep -Eow '(..)(:..){5}'
98:9E:63:18:00:88
$ x='Denny’s Tunez (98:9E:63:18:00:88)'
$ y="${x//*\(/}"
$ y="${y//\)*}"
$ echo $y
98:9E:63:18:00:88
With GNU bash and its Parameter Expansion:
s="(98:9E:63:18:00:88)"
s="${s/#?/}" # remove first character
s="${s/%?/}" # remove last character
echo "$s"
Output:
98:9E:63:18:00:88
Using sed it can be done in a single step:
s='Denny’s Tunez (98:9E:63:18:00:88)'
echo "$s" | sed -E 's/.* \(|)//g'
98:9E:63:18:00:88
So for your example you can use:
mac=$(bt-device -l | sed -E 's/.* \(|)//g')
You can use parameter expansion:
offset and length
echo ${MA:1: -1}
prefix and suffix removal
tmp=${MA#(}
echo ${tmp%)}
parameter matching
tmp=${MA/#\(}
echo ${tmp/%\)}
Another approach is to whitelist what you do want:
echo "$MA" | tr -dC '0-9A-F:'

How to make grep not interpret special characters in my search string?

When executing ./test.sh 12.34, the grep should match 12.34 and not 12-34. How can this be accomplished?
#!/bin/sh
ip=$1
echo $ip
if netstat | grep ssh | grep $ip; then
    netstat | grep ssh | grep $ip
else
    echo 'No'
fi
You could use grep with the -F option:
From man grep:
-F, --fixed-strings
Interpret pattern as a set of fixed strings (i.e. force grep to behave as fgrep).
Your example:
grep -F "$ip"
grep uses regular expressions to match strings. The . is a special character in regex, so it needs to be escaped. There is a rather elegant way of doing this:
export escaped_ip_addr=$(echo "$ip_addr" | sed "s/\./\\\./g")
Which would make your final code:
#!/bin/sh
#test.sh
ip=$1
echo "$ip"
export escaped_ip=$(echo "$ip" | sed "s/\./\\\./g")
if netstat | grep ssh | grep "$escaped_ip"; then
    netstat | grep ssh | grep "$escaped_ip"
else
    echo 'No'
fi

Remove all chars that are not a digit from a string

I'm trying to make a small function that removes all the chars that are not digits.
123a45a ---> will become ---> 12345
I've come up with:
temp=$(echo $word | grep -o [[:digit:]])
echo $temp
But instead of 12345 I get 1 2 3 4 5. How do I get rid of the spaces?
Pure bash:
word=123a45a
number=${word//[^0-9]}
Here's a pure bash solution
var='123a45a'
echo ${var//[^0-9]/}
12345
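Wrapped up as the small function the question asks for, that might look like the following sketch (the function name digits_only is just illustrative):
digits_only() {
    printf '%s\n' "${1//[^0-9]/}"
}
digits_only "123a45a"   # prints 12345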
Is this what you are looking for?
kent$ echo "123a45a"|sed 's/[^0-9]//g'
12345
grep & tr
echo "123a45a"|grep -o '[0-9]'|tr -d '\n'
12345
I would recommend using sed or perl instead:
temp="$(sed -e 's/[^0-9]//g' <<< "$word")"
temp="$(perl -pe 's/\D//g' <<< "$word")"
Edited to add: If you really need to use grep, then this is the only way I can think of:
temp="$( grep -o '[0-9]' <<< "$word" \
| while IFS= read -r ; do echo -n "$REPLY" ; done
)"
. . . but there's probably a better way. (It uses grep -o, like your solution, then runs over the lines that it outputs and re-outputs them without line-breaks.)
Edited again to add: Now that you've mentioned that you can use tr instead, this is much easier:
temp="$(tr -cd 0-9 <<< "$word")"
What about using sed?
$ echo "123a45a" | sed -r 's/[^0-9]//g'
12345
Since, as I read it, you are only allowed to use grep and tr, this can do the trick:
$ echo "123a45a" | grep -o [[:digit:]] | tr -d '\n'
12345
In your case,
temp=$(echo $word | grep -o [[:digit:]] | tr -d '\n')
tr will also work:
echo "123a45a" | tr -cd '[:digit:]'
# output: 12345
Grep returns the result on different lines:
$ echo -e "$temp"
1
2
3
4
5
So you cannot remove those spaces during the filtering, but you can do it afterwards, by transforming $temp like this:
temp=`echo $temp | tr -d ' '`
$ echo "$temp"
12345

How to get "wc -l" to print just the number of lines without file name?

wc -l file.txt
outputs number of lines and file name.
I need just the number itself (not the file name).
I can do this
wc -l file.txt | awk '{print $1}'
But maybe there is a better way?
Try this way:
wc -l < file.txt
cat file.txt | wc -l
According to the man page (for the BSD version; I don't have a GNU version to check):
If no files are specified, the standard input is used and no file name is displayed. The prompt will accept input until receiving EOF, or [^D] in most environments.
To do this without the leading space, why not:
wc -l < file.txt | bc
Comparison of Techniques
I had a similar issue attempting to get a character count without the leading whitespace that wc produces, which led me to this page. After trying the answers here, the following are the results from my personal testing on a Mac (whose wc is the BSD version). Again, this is for a character count; for a line count you'd use wc -l. echo -n omits the trailing line break.
FOO="bar"
echo -n "$FOO" | wc -c # " 3" (x)
echo -n "$FOO" | wc -c | bc # "3" (√)
echo -n "$FOO" | wc -c | tr -d ' ' # "3" (√)
echo -n "$FOO" | wc -c | awk '{print $1}' # "3" (√)
echo -n "$FOO" | wc -c | cut -d ' ' -f1 # "" for -f < 8 (x)
echo -n "$FOO" | wc -c | cut -d ' ' -f8 # "3" (√)
echo -n "$FOO" | wc -c | perl -pe 's/^\s+//' # "3" (√)
echo -n "$FOO" | wc -c | grep -ch '^' # "1" (x)
echo $( printf '%s' "$FOO" | wc -c ) # "3" (√)
I wouldn't rely on the cut -f* method in general since it requires that you know the exact number of leading spaces that any given output may have. And the grep one works for counting lines, but not characters.
bc is the most concise, and awk and perl seem a bit overkill, but they should all be relatively fast and portable enough.
Also note that some of these can be adapted to trim surrounding whitespace from general strings, as well (along with echo `echo $FOO`, another neat trick).
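For example, the echo trick mentioned above trims surrounding whitespace because word splitting collapses it (a small sketch; the variable names are only illustrative):
padded="   3   "
trimmed=$(echo $padded)   # unquoted expansion word-splits; echo re-joins without the padding
echo "[$trimmed]"         # prints [3]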
How about
wc -l file.txt | cut -d' ' -f1
i.e. pipe the output of wc into cut (where delimiters are spaces and pick just the first field)
How about
grep -ch "^" file.txt
Obviously, there are a lot of solutions to this.
Here is another one though:
wc -l somefile | tr -d "[:alpha:][:blank:][:punct:]"
This only outputs the number of lines, but the trailing newline character (\n) is still present; if you don't want that either, replace [:blank:] with [:space:].
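So the newline-free variant would be something like this (a sketch, using the same somefile as above):
wc -l somefile | tr -d "[:alpha:][:space:][:punct:]"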
Another way to strip the leading whitespace without invoking an external command is to use arithmetic expansion $((exp)):
echo $(($(wc -l < file.txt)))
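The same expansion can be captured in a variable if you need the count for further arithmetic (a small sketch; file.txt is the file from the question):
lines=$(( $(wc -l < file.txt) ))   # arithmetic expansion drops the padding
echo "$lines"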
The best way would be to first find all the files in the directory and then use awk's NR (number of records) variable.
Below is the command:
find <directory path> -type f | awk 'END{print NR}'
Example: find /tmp/ -type f | awk 'END{print NR}'
This works for me, using the normal wc -l and sed to strip any character that is not a number.
wc -l big_file.log | sed -E "s/([a-z\-\_\.]|[[:space:]]*)//g"
# 9249133
