How to search a text file based on columns using Bash Script - bash

I am creating a bash script contact list system and this is how it prints out.
=================
Menu
=================
Enter 1 for Insert new contact
Enter 2 for Print current contact list
Enter 3 for Search contact list
Enter 4 for Exit
Enter your selection:
When 2 is selected, basically it prints out the following:
Name Email Phone
Test test#aol.com 102-123-1234
Data data#yahoo.com 345-345-5555
Sally sally#yahoo.com 344-555-4930
To display this I use
$ awk -F, 'BEGIN{printf "%-12s %-15s %-12s\n","Name"," Email"," Phone"} {printf "%-12s %-15s %-12s\n",$1,$2,$3}' contacts.txt
I am having trouble with the option number 3 (searching contact list).
It prompts for:
Enter in data that you would like to search for: aol
Then the code behind is:
echo -e "Enter in data that you would like to search for: \c"
read search
grep "$search" contacts.txt
It prints out:
Test,test#aol.com,102-123-1234
This is because the text file contacts.txt stores the data in a comma separated list.
I want the search results to display in the columns like option number 2. So when "aol" is the search it should print out:
Name Email Phone
Test test#aol.com 102-123-1234
How would I do this?

Use read and IFS, e.g.:
echo -e "Enter in data that you would like to search for: \c"
read search
printf "%16s%16s%16s\n\n" Name Email Phone
grep "$search" contacts.txt | while IFS="," read -r name email phone etc ; do
    printf "%16s%16s%16s\n" "$name" "$email" "$phone"
done
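With the sample contacts.txt above and "aol" entered at the prompt, the output looks roughly like this (each field right-aligned in a 16-character column):

            Name           Email           Phone

            Test    test#aol.com    102-123-1234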

awk -v pattern="$search" '/Name/{print $0} $0~pattern{print $0}' input
will output:
Name Email Phone
Test test#aol.com 102-123-1234
What does it do?
The -v option creates an awk variable pattern and assigns it the value of $search.
/Name/ selects the title line.
$0~pattern selects the lines matching the search pattern.
A much simpler version would be
awk -v pattern="$search" '/Name/; $0~pattern' input
since print $0 is the default action.
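To get the same effect with the comma-separated contacts.txt from the question (which has no header line), the header can be printed from a BEGIN block instead and the column formatting from option 2 reused. This is a sketch combining the two, not part of the original answer:

echo -e "Enter in data that you would like to search for: \c"
read search
awk -F, -v pattern="$search" '
    BEGIN { printf "%-12s %-15s %-12s\n", "Name", "Email", "Phone" }   # header row, as in option 2
    $0 ~ pattern { printf "%-12s %-15s %-12s\n", $1, $2, $3 }          # print only matching records
' contacts.txt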

Related

How to extract phone number and Pin from each text line

Sample Text from the log file
2021/08/29 10:25:37 20210202GL1 Message Params [userid:user1] [timestamp:20210829] [from:TEST] [to:0214736848] [text:You requested for Pin reset. Your Customer ID: 0214736848 and PIN: 4581]
2021/08/27 00:03:18 20210202GL2 Message Params [userid:user1] [timestamp:20210827] [from:TEST] [to:0214736457] [text:You requested for Pin reset. Your Customer ID: 0214736457 and PIN: 6193]
2021/08/27 10:25:16 Thank you for joining our service; Your ID is 0214736849 and PIN is 5949
Other wording and formatting can change but ID and PIN don't change
Expected output for each line:
0214736848#4581
0214736457#6193
0214736849#5949
Below is what I have tried using bash, though I am currently only able to extract the numeric values:
while read p; do
    NUM=''
    counter=1;
    text=$(echo "$p" | grep -o -E '[0-9]+')
    for line in $text
    do
        if [ "$counter" -eq 1 ] #if is equal to 1
        then
            NUM+="$line" #concatenate string
        else
            NUM+="#$line" #concatenate string
        fi
        let counter++ #Increment counter
    done
    printf "$NUM\n"
done < logfile.log
Current output, though not what is expected:
2021#08#29#00#03#18#20210202#2#1#20210826#0214736457#0214736457#6193
2021#08#27#10#25#37#20210202#1#1#20210825#0214736848#0214736848#4581
2021#08#27#10#25#16#0214736849#5949
Another variation using gawk and 2 capture groups, matching 1 or more digits per group:
awk '
match($0, /ID: ([0-9]+) and PIN: ([0-9]+)/, m) {
print m[1]"#"m[2]
}
' file
Output
0214736848#4581
0214736457#6193
For the updated question, you could match either ":" or " is" if you want a more precise match, and the capture group values will then be in groups 2 and 4.
awk '
match($0, /ID(:| is) ([0-9]+) and PIN(:| is) ([0-9]+)/, m) {
print m[2]"#"m[4]
}
' file
Output
0214736848#4581
0214736457#6193
0214736849#5949
Using sed capture groups you can do:
sed 's/.* Your Customer ID: \([0-9]*\) and PIN: \([0-9]*\).*/\1#\2/g' file.txt
With your shown samples, please try the following awk code. You can do it simply by using different field separators: make "Customer ID: ", " and PIN: ", and "]$" the field separators, then print only the 2nd and 3rd fields joined with "#", as per the required output.
awk -v FS='Customer ID: | and PIN: |]$' '{print $2"#"$3}' Input_file
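To see how a line gets split, here is the same field-separator logic run on one sample line (abridged) fed through a pipe; this is purely illustrative and not part of the original answer:

echo '2021/08/29 10:25:37 20210202GL1 Message Params [text:You requested for Pin reset. Your Customer ID: 0214736848 and PIN: 4581]' |
awk -v FS='Customer ID: | and PIN: |]$' '{ print "ID  ->", $2; print "PIN ->", $3 }'
# prints:
# ID  -> 0214736848
# PIN -> 4581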
With bash and a regex:
while IFS='] ' read -r line; do
    [[ "$line" =~ ID:\ ([^\ ]+).*PIN:\ ([^\ ]+)] ]]
    echo "${BASH_REMATCH[1]}#${BASH_REMATCH[2]}"
done <file
Output:
0214736848#4581
0214736457#6193
Given the updated input in your question then using any sed in any shell on every Unix box:
$ sed 's/.* ID[: ][^0-9]*\([0-9]*\).* PIN[: ][^0-9]*\([0-9]*\).*/\1#\2/' file
0214736848#4581
0214736457#6193
0214736849#5949
Original answer:
Using any awk in any shell on every Unix box ($18 is the ID field and $21 is the PIN followed by a "]"; adding 0 converts it to a number, which drops the bracket):
$ awk -v OFS='#' '{print $18, $21+0}' file
0214736848#4581
0214736457#6193

How to search for a file that contains a specific "keyword: string" pair and print its whole contents using awk? [duplicate]

This question already has answers here:
How to find a file and delete containing some string in the body using awk command from multiple files?
(2 answers)
how to find a file name that contains a string "abcd " in the file content from multiple files in the directory using awk
(2 answers)
Closed 2 years ago.
I was given a project to store book records in different files in a directory.
If the user wants to search a book by its name or author, I want to search the files in a directory to find a string matching a keyword and print the whole file using awk.
And if the user wants to delete the file, I want to delete it.
I tried to do the searching part with the following code but it only prints the matching line, not the whole contents of the file.
Records directory contains 1.txt, 2.txt, 3.txt ...
Example
1.txt
author: jhon
title : book1
year : 2000
pages : 342
I would appreciate it if someone helps me out.
#!/bin/bash
BOOK=./Records/*.txt
# Ask the user what to look for.
echo -n -e "What field would you like to search author or title: "
read field
echo -n "In the field = \"$field\", what string should I find? "
read string
# Find the string in the selected field
case $field in
"author" ) # Search for a specific name
awk -v var=$string -F ":" '$2~ var {printf "Record: %d\n\t%s\n\t%s\n\t%s, %s, %s\n\t%s\n", NR=1 , $1, $2}' $BOOK
;;
"title" ) # Search for a specific name
awk -v var=$string -F ":" '$2 ~ var {printf "Record: %d\n\t%s\n\t%s\n\t%s, %s, %s\n\t%s\n", NR, $1, $2 }' $BOOK
;;
"*" ) # Search pattern not recognized
echo "I did not understand your field name";
;;
esac
exit 0
Not a full replacement of the script, just the idea, which you can extend:
...
case "$field" in
author) # Search for a specific name
grep -lF "author: $search_string" $BOOK | xargs cat
Search for the literal string in the files and print the filenames of the matches (grep -l), then pipe to cat to print the contents of those files...
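A fuller sketch along those lines, filling in the other branches of the question's case statement (variable names taken from the question, where the search string is read into $string; note the sample files write "title :" with a space before the colon):

case $field in
    "author" )
        # -l lists only the names of files containing a match, -F treats the string literally
        grep -lF "author: $string" $BOOK | xargs cat
        ;;
    "title" )
        grep -lF "title : $string" $BOOK | xargs cat
        ;;
    * )
        echo "I did not understand your field name"
        ;;
esac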

Shell script - How to group test file records based on column value and send email to corresponding recipients?

I have a comma separated csv file named "file1" with the details below; the headers are incident number, date, person name and email id. The requirement is to group records by person name and send an email listing all records under his or her name.
So in this example Sam, Mutthu, Andrew and Jordan will each receive one email, and in that email they will see all records under their name.
10011,5-Jan,Sam,Sam#companydomain.com
10023,8-Jan,Mutthu,Mutthu#companydomain.com
10010,8-Jan,Mutthu,Mutthu#companydomain.com
10026,15-Jan,Sam,Sam#companydomain.com
10050,10-Jan,Jordan,Jordan#companydomain.com
10021,12-Jan,Andrew,Andrew#companydomain.com
I have searched the forum for a solution but have not been able to work out which one to go with; all I can find is the command below, which creates separate files based on person name and will not fit our requirement.
awk -F\, '{print>$3}' file1
Talking about our existing script: it sends emails one by one using the command below, so it will send multiple emails to Mutthu and Sam, which we don't want.
/usr/sbin/sendmail -v $MAILTO $MAILCC |grep Sent >> "$HOME/maillog.txt"
Any Help will be appreciated
Thanks
Shan
As the question is tagged "bash", here is a possible solution as a pure shell script. Note that this was not tested.
#!/bin/bash
MAILCC=x#y.com
in_file="file1"
# 1. grep all lines containing a '#' (contained in email address)
# 2. Cut field 4 (the email address)
# 3. sort uniq (remove duplicate email addresses)
#
# Loop over that list
#
for email in $(grep '#' $in_file | cut -d, -f 4 | sort -u); do
    # only if $email is a non-empty string
    if [ -n "$email" ]; then
        # grep for email in source file and mail found lines
        {
            echo "From: sender#example.net"
            echo "To: $email"
            echo "Subject: Your test file records"
            echo ""
            grep "$email" $in_file | while read -r line; do
                echo "$line"
            done
        } | /usr/sbin/sendmail -v $email $MAILCC
    fi
done | grep Sent >>"$HOME/maillog.txt"
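For the sample file1 above, the message piped to sendmail for Sam would look roughly like this, with both of Sam's records in one mail (illustrative only):

From: sender#example.net
To: Sam#companydomain.com
Subject: Your test file records

10011,5-Jan,Sam,Sam#companydomain.com
10026,15-Jan,Sam,Sam#companydomain.com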
Here is an Awk script which does what you request. We collect the input into an array, where each element contains the lines for one recipient, and the key is the recipient's email address ($4). Finally we loop over the keys and send one message each.
awk -F , '{ a[$4] = a[$4] ORS $0 }
END {
for (user in a) {
cmd = "/usr/sbin/sendmail -v " user
print "From: sender#example.net" | cmd
print "To: " user | cmd
print "Subject: stuff from CSV" | cmd
# Neck between headers and email body
print "" | cmd
# Skip initial ORS
print substr(a[user],2) | cmd
close(cmd) } }' file.csv |
grep Sent >>"$HOME/maillog.txt"
I can't guess what's in MAILCC so I just left it off. If you always want to Cc: a static address, adding that back should be easy.
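For instance, if MAILCC holds a single static address, it could be passed in with awk's -v option and emitted both as a Cc: header and on the sendmail command line. This is an untested sketch along the same lines as above:

awk -F , -v cc="$MAILCC" '{ a[$4] = a[$4] ORS $0 }
    END {
      for (user in a) {
        cmd = "/usr/sbin/sendmail -v " user " " cc
        print "From: sender#example.net" | cmd
        print "To: " user | cmd
        print "Cc: " cc | cmd
        print "Subject: stuff from CSV" | cmd
        print "" | cmd
        print substr(a[user],2) | cmd
        close(cmd) } }' file.csv |
grep Sent >>"$HOME/maillog.txt"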

Output a record from an existing file based on a matching condition in bash scripting

I need to be able to output a record if a condition is true.
Suppose this is the existing file,
Record_ID,Name,Last Name,Phone Number
I am trying to output record if the last name matches. I collect user input to get last name and then perform the following operation.
read last_name
cat contact_records.txt | awk -F, '{if($3=='$last_name')print "match"; else print "no match";}'
This script outputs no match for every record within contact_records.txt
Your script has two problems:
First, $last_name is not expanded as a quoted string in the context of awk. For example, if "John" is to be queried, you are comparing $3 with the awk variable John rather than the string "John". This can be fixed by adding two double quotes as below:
read last_name
cat contact_records.txt | awk -F, '{if($3=="'$last_name'")print "match"; else print "no match";}'
Second, it actually scans the whole contact_records.txt and prints match/no match for each line it compares. For example, if contact_records.txt has 100 lines, with "John" in one of them, then querying whether John is in it with this script yields 1 "match" and 99 "no match" lines. This might not be what you want. Here's a fix:
read last_name
if [ `cat contact_records.txt | cut -d, -f 3 | grep -c "$last_name"` -eq 0 ]; then
echo "no match"
else
echo "match"
fi
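An alternative that sidesteps the quoting problem altogether is to pass the value in with awk's -v option and compare against it. Like the grep version, this prints a single match/no match for the whole file (a sketch, not from the original answer):

read last_name
awk -F, -v name="$last_name" '$3 == name { found = 1 } END { print (found ? "match" : "no match") }' contact_records.txt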

How to display text file in columns using bashscript in unix?

I am making a bash script contact list system. This is what it prints out.
=================
Menu
=================
Enter 1 for Insert new contact
Enter 2 for Print current contact list
Enter 3 for Search contact list
Enter 4 for Exit
Enter your selection:
For "1" it asks for name, email, and phone, reads them into variables, then stores them in a text file.
case "$answer" in
1) echo -e "Enter in a new contact name: \c"
read name
echo -e "Enter in new contact email address: \c"
read email
echo -e "Enter in new contact phone number: \c"
read phone
echo "$name, $email, $phone" >> contacts.txt ;;
Option 2 is where I am having trouble. I want to display the text in three columns so I can sort them by name, email, or phone number. This is my code for case 2.
2) cat contacts.txt ;;
Obviously it only spits out:
Test Name, Test#data.com, 123-123-1234
Blank Data, Data#aol.com, 234-555-5555
I want it to read:
Name Email Phone
Test Name Test#data.com 123-123-1234
Blank Data Data#aol.com 234-555-5555
How would I do that? And how would I be able to sort it later on?
Change
echo "$name, $email, $phone" >> contacts.txt ;;
to
echo "$name,$email,$phone" >> contacts.txt ;;
and try this:
(echo Name,Email,Phone; cat contacts.txt) | column -s , -t
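Sorting later on can be done on the comma-separated file before it is formatted, for example by name (a sketch using sort's field options; use -k 3,3 for the phone column instead):

(echo Name,Email,Phone; sort -t , -k 1,1 contacts.txt) | column -s , -t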
$ cat contacts.txt
Test Name, Test#data.com, 123-123-1234
Blank Data, Data#aol.com, 234-555-5555
$ awk -F, 'BEGIN{printf "%-12s %-15s %-12s\n","Name"," Email"," Phone"} {printf "%-12s %-15s %-12s\n",$1,$2,$3}' contacts.txt
Name Email Phone
Test Name Test#data.com 123-123-1234
Blank Data Data#aol.com 234-555-5555
How it works:
The printf statement allows custom formatting of output. Above, the format string %-12s %-15s %-12s\n was used. Taking %-12s as an example, the 12s part means that we want to format a string to a width of 12 columns. The minus sign means that we want that field left-justified.
Looking at each piece of the awk code separately:
-F,
This tells awk to use a comma as the field separator on each line.
BEGIN{printf "%-12s %-15s %-12s\n","Name"," Email"," Phone"}
The BEGIN block is executed before the first line of the file is read. It is used here to print the header.
printf "%-12s %-15s %-12s\n",$1,$2,$3
awk implicitly loops through every line in the file. For each line, we print out the first three fields as per the format statement.
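The sorting part of the question can be handled the same way with this approach: sort the comma-separated file first and pipe it into the awk formatter (a sketch; change -k 1,1 to -k 2,2 or -k 3,3 to sort by email or phone):

sort -t , -k 1,1 contacts.txt | awk -F, 'BEGIN{printf "%-12s %-15s %-12s\n","Name"," Email"," Phone"} {printf "%-12s %-15s %-12s\n",$1,$2,$3}'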
