Count how many times a word appears in a specific column - bash

This is a file that I have named people.txt:
10001:Larry Simpson:65:NewYork:555666777
10002:Jonh Fin:91:Rome:333444555
10003:George Jas:86:Amsterdam:777888999
10004:Larry Simpson:65:NewYork:555666777
10005:Jonh Fin:91:Rome:333444555
I was trying to count how many people there are in a specific city that is given as an argument to the script.
The first thing I thought of was:
grep "$1:" people.txt | wc -l
The ":" was because we can have a city named Amster and another named Amsterdam.
But then I realized that we can have people named Amsterdam, so I tried this to search in cities column:
k=$(awk -F ":" -v loc=$1 -v max=0 ' {if ($4==loc) max++; print max}' people.txt)
echo $k
But now the output is like 0 0 1 1 1, and how can I get just the last number of that output?
I also tried with cut, but with -f we don't know how long that output is.
Desired output is just
1
Regards

Assuming $1 is equal to "NewYork":
awk -F: -v loc="$1" '$4==loc { cnt++ } END { print cnt}' people.txt
You need to use the END block to print the final count.
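For example, a minimal wrapper script around that one-liner (count_city.sh is a hypothetical name; the + 0 forces a numeric 0 instead of an empty line when the city is absent):
#!/bin/bash
# usage: ./count_city.sh Amsterdam
# prints how many rows of people.txt have the given city in the 4th field
awk -F: -v loc="$1" '$4 == loc { cnt++ } END { print cnt + 0 }' people.txt
With the sample file, ./count_city.sh Amsterdam prints 1 and ./count_city.sh NewYork prints 2.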

You can just do it with a single grep command:
grep -Ec "^([^:]*:){3}$1:" people.txt
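Here ([^:]*:){3} consumes the first three colon-delimited fields, so $1 is compared only against the city column, and -c makes grep print the count directly. With $1 set to Amsterdam the command expands to:
grep -Ec "^([^:]*:){3}Amsterdam:" people.txt
which prints 1 for the sample file.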

Related

Bash: Working with CSV file to build a loop and save the result

Using Bash, I want to get a list of email addresses from a CSV file, do a recursive grep search across a bunch of directories looking for a match in specific metadata XML files, and then tally up how many results I find for each address throughout the directory tree (i.e. update the tally field in the same CSV file).
accounts.csv looks something like this:
updated to more accurately reflect real-world data
email,date,bar,URL,"something else",tally
address#somewhere.com,21/04/2015,1.2.3.4,https://blah.com/,"blah blah",5
something#that.com,17/06/2015,5.6.7.8,https://blah.com/,"lah yah",0
another#here.com,7/08/2017,9.10.11.12,https://blah.com/,"wah wah",1
For example, if we put address#somewhere.com in $email from the list, run
grep -rl "${email}" --include=\*_meta.xml --only-matching | wc -l
on it and then add that result to the tally column.
At the moment I can get the first column of that CSV file (minus the heading/first line) using
awk -F"," '{print $1}' accounts.csv | tail -n +2
but I'm lost on how to do the looping and also how to write the result back to the CSV file...
So for instance, with another#here.com if we run
grep -rl "${email}" --include=\*_meta.xml --only-matching | wc -l
and the result is say 17, how can I update that line to become:
another#here.com,7/08/2017,9.10.11.12,https://blah.com/,"wah wah",17
Is this possible with maybe awk or sed?
This is where I'm up to:
#!/bin/bash
# make temporary list of email addresses
awk -F"," '{print $1}' accounts.csv | tail -n +2 > emails.tmp
# loop over each
while read email; do
# count how many uploads for current email address
grep -rl "${email}" --include=\*_meta.xml --only-matching | wc -l
done < emails.tmp
XML Metadata looks something like this:
<?xml version="1.0" encoding="UTF-8"?>
<metadata>
<identifier>SomeTitleNameGoesHere</identifier>
<mediatype>audio</mediatype>
<collection>opensource_movies</collection>
<description>example <br /></description>
<subject>testing</subject>
<title>Some Title Name Goes Here</title>
<uploader>another#here.com</uploader>
<addeddate>2017-05-28 06:20:54</addeddate>
<publicdate>2017-05-28 06:21:15</publicdate>
<curation>[curator]email#address.com[/curator][date]20170528062151[/date][comment]checked for malware[/comment]</curation>
</metadata>
how to do the looping and also the writing of the result back to the CSV file
awk does the looping automatically. You can change any field by assigning to it. So to change a tally field (the 6th in each line) you would do $6 = ....
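For instance, this hypothetical one-liner just resets every tally to 0, purely to illustrate the assignment (NR > 1 skips the header):
awk -F, -v OFS=, 'NR > 1 { $6 = 0 } 1' accounts.csv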
awk is a great tool for many scenarios. You can probably save a lot of time in the future by investing a few minutes in a short tutorial now.
The only non-trivial part is getting the output of grep into awk.
The following script increments each tally by the count of *_meta.xml files containing the given email address:
awk -F, -v OFS=, -v q=\' 'NR>1 {
cmd = "grep -rlFw " q $1 q " --include=\\*_meta.xml | wc -l";
cmd | getline c;
close(cmd);
$6 = c
} 1' accounts.csv
For simplicity we assume that filenames are free of linebreaks and email addresses are free of '.
To reduce possible false positives, I also added the -F and -w options to your grep command.
-F searches literal strings; without it, searching for a.b#c would give false positives for things like axb#c and a-b#c.
-w matches only whole words; without it, searching for b#c would give a false positive for ab#c. This isn't 100% safe, as a-b#c would still give a false positive, but without knowing more about the structure of your xml files we cannot fix this.
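A quick way to see the difference (the strings here are made up for the demonstration):
printf 'axb#c\na.b#c\n' | grep -c 'a.b#c'     # prints 2: the dot matches any character
printf 'axb#c\na.b#c\n' | grep -cF 'a.b#c'    # prints 1: only the literal string matches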
A pipeline to reduce the number of greps:
grep -rHo --include=\*_meta.xml -f <(awk -F, 'NR > 1 {print $1}' accounts.csv) \
| gawk -F, -v OFS=',' '
NR == FNR {
# store the filenames for each email
if (match($0, /^([^:]+):(.+)/, m)) tally[m[2]][m[1]]
next
}
FNR > 1 {$6 = length(tally[$1])}
1
' - accounts.csv
Here is a solution using a single awk command. It should be highly performant compared to the other solutions because it scans each XML file only once for all the email addresses found in the first column of the CSV file, and it does not invoke any external command or spawn a sub-shell anywhere.
This should work in any version of awk.
cat srch.awk
# function to escape regex meta characters
function esc(s, tmp) {
tmp = s
gsub(/[&+.]/, "\\\\&", tmp)
return tmp
}
BEGIN {FS=OFS=","}
# while processing csv file
NR == FNR {
# save escaped email address in array em skipping header row
if (FNR > 1)
em[esc($1)] = 0
# save each row in rec array
rec[++n] = $0
next
}
# this block will execute for each XML file
{
# loop over each email and add the count of matches to array em
# PS: gsub returns the number of substitutions
for (i in em)
em[i] += gsub(i, "&")
}
END {
# print header row
print rec[1]
# from 2nd row onwards split row into columns using comma
for (i=2; i<=n; ++i) {
split(rec[i], a, FS)
# 6th column is the count of occurrences from array em
print a[1], a[2], a[3], a[4], a[5], em[esc(a[1])]
}
}
Use it as:
awk -f srch.awk accounts.csv $(find . -name '*_meta.xml') > tmp && mv tmp accounts.csv
A script that processes accounts.csv line by line and writes the updated data to accounts.new.csv, so you can compare the two files.
#! /bin/bash
file_old=accounts.csv
file_new=${file_old/csv/new.csv}
delimiter=","
x=1
# Copy file
cp ${file_old} ${file_new}
while read -r line; do
# Skip first line
if [[ $x -gt 1 ]]; then
# Read data into variables
IFS=${delimiter} read -r address foo bar tally somethingelse <<< ${line}
cnt=$(find . -name '*_meta.xml' -exec grep -lo "${address}" {} \; | wc -l)
# Reset tally
tally=$cnt
# Change line number $x in new file
sed "${x}s/.*/${address} ${foo} ${bar} ${tally} ${somethingelse}/; ${x}s/ /${delimiter}/g" \
-i ${file_new}
fi
((x++))
done < ${file_old}
The input and output:
# Input
$ find . -name '*_meta.xml' -exec cat {} \; | sort | uniq -c
2 address#somewhere.com
1 something#that.com
$ cat accounts.csv
email,foo,bar,tally,somethingelse
address#somewhere.com,bar1,foo2,-1,blah
something#that.com,bar2,foo3,-1,blah
another#here.com,bar4,foo5,-1,blah
# output
$ ./test.sh
$ cat accounts.new.csv
email,foo,bar,tally,somethingelse
address#somewhere.com,bar1,foo2,2,blah
something#that.com,bar2,foo3,1,blah
another#here.com,bar4,foo5,0,blah

Counting the number of names in a category in a .csv with bash

I would like to count the number of students in a .csv file depending on the category.
Category 1 is the name, category 2 is the country, category 3 is the city.
The .csv file looks like this:
michael_s;jpa;NYC
john_d;chn;TXS
jim_h;usa;POP
I tried this in my .sh script, but it didn't work:
sort -k3 -t; students.csv
edit:
I am trying to make a bash script that counts students by city, and also something that can count just one city when it is given to the script, such as
cat students.csv | ./script.sh NYC
The terminal should then only display the students from NYC.
If I've understood you correctly, something like this?
cut -d";" -f3 mike.txt | sort | uniq -c
(Sorry, incorrect solution first time - updated now)
To count only one city:
cut -d";" -f3 mike.txt | grep "NYC" | wc -l
Depending on the size of the file, how often you'll be doing this etc. it may be sensible to look at other solutions, eg. awk. But this solution will work just fine.
The reason for the error message "sort: multi-character tab 'students.csv'" is that you haven't given the -t option a separator character. If you add a semicolon after -t, the sort will work as expected:
sort -k3 -t';' students.csv
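With the sample file, that prints the rows ordered by city:
michael_s;jpa;NYC
jim_h;usa;POP
john_d;chn;TXS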
There is always awk:
$ awk -F\; 'a[$1]++==0{c++}END{print c}' file
3
Once you describe your requirements more thoroughly (you say count the names, but you sort -k3; please update the OP), we can help you better.
Edited to match your update:
$ awk -F\; -v col=3 -v val=NYC '
(length(val) && $col==val) || length(val)==0 && a[$col]++==0 {
c++
}
END { print c }
' file
1
If you set -v val= to the value you are looking for and -v col= to the column number, it counts the occurrences of val in col. If you set col but not val, it counts the distinct values in col.
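For example, leaving val empty counts the distinct cities (column 3) in the sample file:
$ awk -F\; -v col=3 -v val= '
(length(val) && $col==val) || length(val)==0 && a[$col]++==0 {
c++
}
END { print c }
' file
3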

How to use grep -c to count occurrences of various strings in a file?

I have a bunch of files with data from a company and I need to count, let's say, how many people from certain cities there are. Initially I was doing it manually with
grep -c 'Chicago' file.csv
But now I have to look for a lot of cities and it would be time-consuming to do this manually every time. So I did some research and found this:
#!/bin/sh
for p in 'Chicago' 'Washington' 'New York'; do
grep -c '$p' 'file.csv'
done
But it doesn't work. It keeps giving me 0s as output and I'm not sure what is wrong. Anyway, basically what I need is an output with every result (just the values) given by grep in a column, so I can copy it directly to a spreadsheet. Ex.:
132
407
523
Thanks in advance.
The 0s come from the single quotes around '$p': single quotes prevent variable expansion, so grep searches for the literal string $p (use double quotes, "$p", there). But rather than running one grep per city, you should use sort + uniq for that:
$ awk '{print $<N>}' file.csv | sort | uniq -c
where N is the column number of the cities (I assume it is structured, as it's a CSV file).
For example, which shell how often used on my system:
$ awk -F: '{print $7}' /etc/passwd | sort | uniq -c
1 /bin/bash
1 /bin/sync
1 /bin/zsh
1 /sbin/halt
41 /sbin/nologin
1 /sbin/shutdown
$
From the title, it sounds like you want to count the number of occurrences of the string rather than the number of lines on which the string appears, but since you accept the grep -c answer I'll assume you actually only care about the latter. Do not use grep and read the file multiple times. Count everything in one pass:
awk '/Chicago/ {c++} /Washington/ {w++} /New York/ {n++}
END { print c; print w; print n }' input-file
Note that this will print a blank line instead of "0" for any string that does not appear, so you might want to initialize the counters. There are several ways to do that. I like:
awk '/Chicago/ {c++} /Washington/ {w++} /New York/ {n++}
END { print c; print w; print n }' c=0 w=0 n=0 input-file
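If the list of cities keeps growing, you can also pass the names in once and let awk build the counters, so the script never has to change. A sketch (the city list here is just the question's examples; it counts matching lines, like grep -c):
awk -v list='Chicago,Washington,New York' '
BEGIN { n = split(list, city, ",") }                           # build the array of cities once
{ for (i = 1; i <= n; i++) if (index($0, city[i])) cnt[i]++ }  # count lines containing each city
END { for (i = 1; i <= n; i++) print cnt[i] + 0 }              # one count per line, 0 if never seen
' file.csv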

Pipe awk output to add to variable inside loop

I might be going about this the wrong way, but I have tried every syntax I could think of, and the error below is the closest I could get.
I have a log file, in which I want to filter to a set of lines like so:
Files : 1 1 1 1 1
Files : 3 3 4 4 5
Files : 10 4 2 3 1
Files : 254 1 1 1 1
The code I have will get me to this point; however, I want to use awk to sum the first numeric column, in this instance giving 268 as the output (and then do the same for the other columns).
I have tried to pipe the awk output into a loop to perform the final step, but it won't add the values and throws an error. I thought it could be because awk hands the entries over as strings, but as bash isn't strongly typed that should not matter?
Anyway, the code is:
x=0;
iconv -f UTF-16 -t UTF-8 "./TestLogs/rbTest.log" | grep "Files :" | grep -v "*.*" | egrep -v "Files : [a-zA-Z]" |awk '{$1=$1}1' OFS="," | awk -F "," '{print $4}' | while read i;
do
$x=$((x+=i));
done
Error message:
-bash: 0=1: command not found
-bash: 1=4: command not found
-bash: 4=14: command not found
-bash: 14=268: command not found
I tried a couple of different addition syntaxes, but I feel this has more to do with what I am feeding it than with the addition itself.
This is currently just with integer values, but I would also like to do it with floats.
Any help is much appreciated, and I am sure there is a less convoluted way to achieve this; still learning.
You can do computations in awk itself:
awk '{for (c=3; c<=NF; c++) sum[c]+=$c} END{printf "Total : ";
for (c=3; c<=NF; c++) printf "%s%s", sum[c], ((c<NF)? OFS:ORS) }' file
Output:
Total : 268 9 8 9 8
Here sum is an associative array that holds the sum of each column from #3 onwards.
Command breakup:
for (c=3; c<=NF; c++) # Iterate from 3rd col to last col
sum[c]+=$c # Add each col value into an array sum with index of col #
END # Execute this block after last record
printf "Total : " # Print literal "Total : "
for (c=3; c<=NF; c++) # Iterate from 3rd col to last col
printf "%s%s", # Use printf to format the output as 2 strings (%s%s)
sum[c], # 1st one is sum for the given index
((c<NF)? OFS:ORS) # 2nd is conditional string. It will print OFS if it is not last
# col and will print ORS if it is last col.
(Not an answer, but a formatted comment)
I always get antsy when I see a long pipeline of greps and awks (and seds, etc)
... | grep "Files :" | grep -v "*.*" | egrep -v "Files : [a-zA-Z]" | awk '{$1=$1}1' OFS="," | awk -F "," '{print $4}'
Can be written as
... | awk '/Files : [^[:alpha:]]/ && !/\*/ {print $4}'
Are you using grep -v "*.*" to filter out lines with dots, or lines with asterisks? Because you're achieving the latter.
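If you do want to finish the sum in the shell, note what the error messages are telling you: $x=$((x+=i)) expands $x first, so bash ends up trying to run a command named 0=1. On top of that, a while loop at the end of a pipeline runs in a subshell, so x would be empty once the loop finishes. A sketch that avoids both problems, reusing the condensed awk filter above:
x=0
# feed the loop from process substitution so it runs in the current shell
while read -r i; do
    x=$(( x + i ))    # plain arithmetic assignment, no leading $ on the left-hand side
done < <(iconv -f UTF-16 -t UTF-8 "./TestLogs/rbTest.log" \
           | awk '/Files : [^[:alpha:]]/ && !/\*/ {print $4}')
echo "$x"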

Bash script to search csv file column and count how many times a value shows up

I am really new to bash and I was trying to search a csv file column for a value and count the matches. I found this online, but it prints the column, and I have been trying to count how many times an R shows up rather than print the whole thing.
awk -F "\"*,\"*" '{print $2}' $file
The csv file is like:
12345,R,N,N,Y,N,N,N,Bob Builder
I am looking for R in column 2. Can anybody point me in the right direction?
The following should do what you want (where file.csv is your csv file):
Case sensitive version:
cut -f 2 -d , file.csv | grep -c R
Case insensitive version:
cut -f 2 -d , file.csv | grep -ic R
Explanation
cut -f 2 -d , file.csv
This takes each line of file.csv and extracts the specified fields. The -f 2 option means extract field 2 and the -d , means use a ',' as the field delimiter. The output of this is then piped to grep.
grep -c R
This looks for lines containing 'R'. Since it is passed the output of the previous cut command, it is looking for an 'R' in field two. The -c option means count the number of matching lines.
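For instance, with the single sample line above this prints 1:
cut -f 2 -d , file.csv | grep -c R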
Using awk only:
awk -F "\",\"" '{if ($2 == "R") cnt++} END{print cnt}' file
For fun, a perl-only version that counts everything:
perl -F, -anle 'map{$cnt{$_}{$F[$_]}++}0..$#F;END{print $cnt{1}{R}}'
