How to retrieve specified parameters from a file (shell scripting)? - bash

Here is my query:
/path/newdir/newtext.csv
newtext.csv looks like this:
Record 1
line 1
line 2
Sample Number: 123456789 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 2
line 1
line 2
Sample Number: 363214563 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 3
line 1
line 2
Sample Number: 987654321 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
Assume there are 100 such records in newtext.csv.
Now I need the parameters for the entered input string, which is something like below.
Example Input Search String:
123456789
Example Output:
Sample Number: 123456789
Time In: 2012-05-29T10:21:06Z
Time Out: 2012-05-29T13:07:46Z
This is exactly what I need. Can you please help me?

For a plain input string*:
grep -F "InputString" -A27 inputFile.csv | sed -n '1p;19p;$p'
For an extended-regex pattern*:
grep -E "InputPattern" -A27 inputFile.csv | sed -n '1p;19p;$p'
Script:
user$ cat script.sh
#!/bin/bash
grep -F "$1" -A27 inputFile.csv | sed -n '1p;19p;$p'
user$ chmod +x script.sh
user$ ./script.sh "inputString"
Edit:
A solution that does not depend on line numbers:
#!/bin/bash
grep -F "$1" -A27 inputFile.csv | sed -n "/$1/p;/^Time\s[^:]*:/p"
* The input must be unique within the file.
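If the line offsets within a record ever vary, the fixed -A27 window breaks. Here is a window-free awk sketch (my assumptions, not part of the original answer: every record starts with its "Sample Number:" line, and printing only the first three fields strips the "(line no. N)" tails):
awk -v id="123456789" '
$1 == "Sample" && $2 == "Number:" { inrec = ($3 == id) }   # a new record starts here
inrec && /^(Sample Number|Time In|Time Out):/ { print $1, $2, $3 }
' newtext.csv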

Try this:
csv.txt (Input File)
Sample Number: 123456789 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
line 1
line 2
Sample Number: 363214563 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
line 1
line 2
Sample Number: 987654321 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
csv.sh (Code)
#!/bin/bash
echo "Enter your search string:"
read -r name
# pull the record, drop the "|" filler lines, strip the "(line no. N)" tails
grep -A 10 "$name" csv.txt | grep -v "|" | awk -F "(" '{print $1}'
Output
Enter your search string: 123456789
Sample Number: 123456789
Time In: 2012-05-29T10:21:06Z
Time Out: 2012-05-29T13:07:46Z
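One caveat (my observation, not part of the original answer): -A 10 must reach the "Time Out:" line, so it has to grow with the record. A range-based variant that anchors on the labels instead (the hardcoded search string is just illustrative):
awk '/Sample Number: 123456789/,/Time Out:/' csv.txt | grep -v '^|' | awk -F' \\(' '{print $1}'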

Related

How to read each cell of a column in csv and take each as input for jq in bash

I am trying to read each cell of the CSV and treat it as input for the jq command. Below is my code:
line.csv
Line
11
22
33
Code to read CSV:
while IFS= read -r line
do
echo "Line is : $line"
done < line.csv
Output:
Line is : 11
Line is : 22
jq Command
jq 'select(.scan.line == '"$1"') | .scan.line,"|", .scan.service,"|", .scan.comment_1,"|", .scan.comment_2,"|", .scan.comment_3' linescan.json | xargs
I have a linescan.json which has values for line, service, comment_1, comment_2, comment_3.
I want to read each value of the CSV and use it in the jq query where $1 is mentioned.
Given the input files and desired output:
line.csv
22,Test1
3389,Test2
10,Test3
linescan.json
{
"scan": {
"line": 3389,
"service": "Linetest",
"comment_1": "Line is tested1",
"comment_2": "Line is tested2",
"comment_3": "Line is tested3"
}
}
desired output:
Test2 | 3389 | Linetest | Line is tested1 | Line is tested2 | Line is tested3
Here's a solution with jq (note that --rawfile requires jq 1.6 or later):
jq -sr --rawfile lineArr line.csv '
(
$lineArr | split("\n") | del(.[-1]) | .[] | split(",")
) as [$lineNum,$prefix] |
.[] | select(.scan.line == ($lineNum | tonumber)) |
[
$prefix,
.scan.line,
.scan.service,
.scan.comment_1,
.scan.comment_2,
.scan.comment_3
] |
join(" | ")
' linescan.json
Update: with jq 1.5:
#!/bin/bash
jq -sr --slurpfile lineArr <(jq -R 'split(",")' line.csv) '
($lineArr | .[]) as [$lineNum,$prefix] |
.[] | select(.scan.line == ($lineNum | tonumber)) |
[
$prefix,
(.scan.line | tostring),
.scan.service,
.scan.comment_1,
.scan.comment_2,
.scan.comment_3
] |
join(" | ")
' linescan.json
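If you prefer driving jq from the bash loop you started with, here is a per-row sketch (assuming the two-column line.csv shown above; it is slower, since jq is rerun for every row):
while IFS=, read -r lineNum prefix; do
jq -r --arg prefix "$prefix" --argjson lineNum "$lineNum" '
select(.scan.line == $lineNum) |
[$prefix, (.scan.line | tostring), .scan.service,
.scan.comment_1, .scan.comment_2, .scan.comment_3] |
join(" | ")
' linescan.json
done < line.csv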

Inconsistency in output field separator

We have to find the difference (d) between the last two numbers and display the rows with the highest values of d in ascending order.
INPUT
1 | Latha | Third | Vikas | 90 | 91
2 | Neethu | Second | Meridian | 92 | 94
3 | Sethu | First | DAV | 86 | 98
4 | Theekshana | Second | DAV | 97 | 100
5 | Teju | First | Sangamithra | 89 | 100
6 | Theekshitha | Second | Sangamithra | 99 | 100
Required OUTPUT
4$Theekshana$Second$DAV$97$100$3
5$Teju$First$Sangamithra$89$100$11
3$Sethu$First$DAV$86$98$12
awk 'BEGIN{FS="|";OFS="$";}{
avg=sqrt(($5-$6)^2)
print $1,$2,$3,$4,$5,$6,avg
}' file | sort -nk7 -t "$" | tail -3
Output:
4 $ Theekshana $ Second $ DAV $ 97 $ 100$3
5 $ Teju $ First $ Sangamithra $ 89 $ 100$11
3 $ Sethu $ First $ DAV $ 86 $ 98$12
As you can see, there is a space before and after the $ sign, but for the last column (avg) there is no space. Please explain why this is happening.
2)
awk 'BEGIN{FS=" | ";OFS="$";}{
avg=sqrt(($5-$6)^2)
print $1,$2,$3,$4,$5,$6,avg
}' file | sort -nk7 -t "$" | tail -3
OUTPUT
4$|$Theekshana$|$Second$|$0
5$|$Teju$|$First$|$0
6$|$Theekshitha$|$Second$|$0
I have not mentioned | as the output field separator, but it still appears. Why is this happening? And why is the difference zero?
I am just 6 days into Unix, so please answer even if it's easy.
Your field separator is only the pipe symbol, so the surrounding whitespace is part of each field, and that is what you see in the output. When FS is longer than one character it is treated as a regex, in which the pipe has its special alternation meaning and needs to be escaped. So in your second case FS=" | " means "space or space" is the field separator, which is why the fields come out wrong and the difference is zero.
$ awk 'BEGIN {FS=" *\\| *"; OFS="$"}
{d=sqrt(($NF-$(NF-1))^2); $1=$1;
print d "\t" $0,d}' file | sort -n | tail -3 | cut -f2-
4$Theekshana$Second$DAV$97$100$3
5$Teju$First$Sangamithra$89$100$11
3$Sethu$First$DAV$86$98$12
A slight rewrite eliminates the dependency on the number of fields and fixes the format.
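A minimal demo of the two FS behaviours (illustrative, not from the original post):
$ echo 'a | b' | awk -F'|' '{print "[" $1 "][" $2 "]"}'
[a ][ b]
$ echo 'a | b' | awk -F' *\\| *' '{print "[" $1 "][" $2 "]"}'
[a][b]
With the single-character FS the blanks stay inside the fields; with the regex FS they are consumed by the separator.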

bash: looping and extracting a fragment of a txt file

I am dealing with the analysis of a big number of dlg text files located within the workdir. Each file has a table (usually located at a different position of the log) in the following format:
File 1:
CLUSTERING HISTOGRAM
____________________
________________________________________________________________________________
| | | | |
Clus | Lowest | Run | Mean | Num | Histogram
-ter | Binding | | Binding | in |
Rank | Energy | | Energy | Clus| 5 10 15 20 25 30 35
_____|___________|_____|___________|_____|____:____|____:____|____:____|____:___
1 | -5.78 | 11 | -5.78 | 1 |#
2 | -5.53 | 13 | -5.53 | 1 |#
3 | -5.47 | 17 | -5.44 | 2 |##
4 | -5.43 | 20 | -5.43 | 1 |#
5 | -5.26 | 19 | -5.26 | 1 |#
6 | -5.24 | 3 | -5.24 | 1 |#
7 | -5.19 | 4 | -5.19 | 1 |#
8 | -5.14 | 16 | -5.14 | 1 |#
9 | -5.11 | 9 | -5.11 | 1 |#
10 | -5.07 | 1 | -5.07 | 1 |#
11 | -5.05 | 14 | -5.05 | 1 |#
12 | -4.99 | 12 | -4.99 | 1 |#
13 | -4.95 | 8 | -4.95 | 1 |#
14 | -4.93 | 2 | -4.93 | 1 |#
15 | -4.90 | 10 | -4.90 | 1 |#
16 | -4.83 | 15 | -4.83 | 1 |#
17 | -4.82 | 6 | -4.82 | 1 |#
18 | -4.43 | 5 | -4.43 | 1 |#
19 | -4.26 | 7 | -4.26 | 1 |#
_____|___________|_____|___________|_____|______________________________________
The aim is to loop over all the dlg files and take from each table the single line corresponding to the widest cluster (the one with the biggest number of # marks in the Histogram column). In the above example this is the third line:
3 | -5.47 | 17 | -5.44 | 2 |##
Then I need to add this line to final_log.txt together with the name of the log file (which should be specified before the line). So in the end I should have something in the following format (for 3 different log files):
"Name of the file 1": 3 | -5.47 | 17 | -5.44 | 2 |##
"Name_of_the_file_2": 1 | -5.99 | 13 | -5.98 | 16 |################
"Name_of_the_file_3": 2 | -4.78 | 19 | -4.44 | 3 |###
A possible model of my BASH workflow would be:
#!/bin/bash
for f in ./*.dlg    # loop over all the dlg files in the workdir
do
file_name2=$(basename "$f")
file_name="${file_name2/.dlg}"
echo "Processing of $f..."
# take the name of the file and save it in the log
echo "$file_name" >> "$PWD/final_results.log"
# search for the beginning of the table inside each file and save it after the name
grep 'CLUSTERING HISTOGRAM' "$f" >> "$PWD/final_results.log"
# check whether it works
gedit "$PWD/final_results.log"
done
Here I need to replace the combination of echo and grep with something that takes the selected parts of the table.
You can use this one; it should be fast enough. Extra lines in your files besides the tables are not expected to be a problem.
grep "#$" *.dlg | sort -rk11 | awk '!seen[$1]++'
grep fetches all the histogram lines, which are then sorted in reverse order by the last field, putting the lines with the most # characters on top; finally awk keeps only the first line per file, removing the duplicates. Note that when grep parses more than one file it prints the filename at the beginning of each line by default, so if you test it on a single file, use grep -H.
Result should be like this:
file1.dlg: 3 | -5.47 | 17 | -5.44 | 2 |##########
file2.dlg: 3 | -5.47 | 17 | -5.44 | 2 |####
file3.dlg: 3 | -5.47 | 17 | -5.44 | 2 |#######
Here is a modification that keeps the first appearance in case a file has many equal max lines:
grep "#$" *.dlg | sort -k11 | tac | awk '!seen[$1]++'
We replaced the reverse flag of sort with the tac command, which reverses the stream, so for equal lines the initial order is preserved.
Second solution
Here is one using only awk:
awk -F"|" '/#$/ && $NF > max[FILENAME] {max[FILENAME]=$NF; row[FILENAME]=$0}
END {for (i in row) print i ":" row[i]}' *.dlg
Update: if you execute it from a different directory and want to keep only the basename of every file, remove the path prefix:
awk -F"|" '/#$/ && $NF > max[FILENAME] {max[FILENAME]=$NF; row[FILENAME]=$0}
END {for (i in row) {sub(".*/","",i); print i ":" row[i]}}'
Probably makes more sense as an Awk script.
This picks the first line with the widest histogram in the case of a tie within an input file.
#!/bin/bash
awk 'FNR == 1 { if(sel) print sel; sel = ""; max = 0 }
FNR < 9 { next }
length($10) > max { max = length($10); sel = FILENAME ":" $0 }
END { if (sel) print sel }' ./"$prot"/*.dlg
This assumes the histograms are always the tenth field; if your input format is even messier than the lump you show, maybe adapt to taste.
In some more detail, the first line triggers on the first line of each input file. If we have collected a previous line (meaning this is not the first input file), print that, and start over. Otherwise, initialize for the first input file. Set sel to nothing and max to zero.
The second line skips lines 1-8 which contain the header.
The third line checks if the current line's histogram is longer than max. If it is, update max to this histogram's length, and remember the current line in sel.
The last line is spillover for when we have processed all files. We never printed the sel from the last file, so print that too, if it's set.
If you mean to say we should find the lines between CLUSTERING HISTOGRAM and the end of the table, we should probably have more information about what the surrounding lines look like. Maybe something like this, though:
awk '/CLUSTERING HISTOGRAM/ { if (sel) print sel; looking = 1; sel = ""; max = 0 }
!looking { next }
looking > 1 && $1 != looking { looking = 0; nextfile }
$1 == looking { if (length($10) > max) { max = length($10); sel = FILENAME ":" $0 }; looking++ }
END { if (sel) print sel }' ./"$prot"/*.dlg
This sets looking to 1 when we see CLUSTERING HISTOGRAM, then increments it for every rank line it consumes, until the first line whose first field no longer matches the expected rank, which means the table has ended and we can skip to the next file.
I would suggest processing using awk:
for i in $FILES    # FILES is assumed to hold the list of dlg files
do
echo -n "\"$i\": "
awk 'BEGIN {
output="";
outputlength=0
}
/(^ *[0-9]+)/ { # process only lines that start with a number
if (length(substr($10, 2)) > outputlength) { # $10 is like |####; skip the leading | and keep the line with most hashes
output=$0;
outputlength=length(substr($10, 2))
}
}
END {
print output # output the resulting line
}' "$i"
done

shell - grep - how to get only lines that have a certain number of a character

Good morning.
I have the following lines:
1 | blah | 2 | 1993 | 86 | 0 | NA | 123 | 123
1 | blah | TheBeatles | 0 | 3058 | NA | NA | 11
And I want to get only the lines with 7 "|" and the same first field.
So the output for these two lines will be nothing, but for these two lines:
1 | blah | 2 | 1993 | 86 | 0 | NA | 123
1 | blah | TheBeatles | 0 | 3058 | NA | NA | 11
The output will be "error".
I'm getting the inputs from a file using the following command:
grep '.*|.*|.*|.*|.*|.*|.*|.*' < $1 | sort -nbsk1 | cut -d "|" -f1 | uniq -d |
while read line2; do
echo error
done
But this implementation would still print error even if I have more than 7 "|".
Any suggestions?
P.S. I can assume that there is a \n at the end of each line.
For printing lines containing only 7 |, try:
awk -F'|' 'NF == 8' filename
If you want to use bash to count the number of | in a given line, try:
line="1 | blah | 2 | 1993 | 86 | 0 | NA | 123 | 123";
count=${line//[^|]/};
echo ${#count};
With grep
grep '^\([^|]*|[^|]*\)\{7\}$'
Assuming zz.txt is:
$ cat zz.txt
1 | blah | 2 | 1993 | 86 | 0 | NA | 123 | 123
1 | blah | TheBeatles | 0 | 3058 | NA | NA | 11
$ cut -d\| -f1-8 zz.txt
The cut above will give you the output you need.
I would suggest that you use awk for this job:
BEGIN { FS = " *\\| *" }
NF == 8 && $1 == "1" { print $0 }
would do the job. (Note that awk string literals need double quotes, and the regex FS strips the blanks around each |, so the comparison against "1" works.)
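Here is a sketch that combines both requirements from the question, printing error once when two lines with exactly seven | share a first field (the file name is assumed to arrive as $1, as in the original command):
awk -F' *\\| *' 'NF == 8 && seen[$1]++ { print "error"; exit }' "$1"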

How can I write a shell script to display the below output?

newtext.csv looks like this:
Record 1
---------
line 1 line 2 Sample Number: 123456789 (line no. 3) | | | | | Time In: 2012-05-29T10:21:06Z (line no. 21) | | | Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 2
----------
line 1 line 2 Sample Number: 363214563 (line no. 3) | | | | | Time In: 2012-05-29T10:21:06Z (line no. 21) | | | Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 3
---------
line 1 line 2 Sample Number: 987654321 (line no. 3) | | | | | Time In: 2012-05-29T10:21:06Z (line no. 21) | | | Time Out: 2012-05-29T13:07:46Z (line no. 30)
Assume there are 100 such records in newtext.csv. Now I need the parameters for the entered input string, as shown below.
Input
Enter the search string:
123456789
Output
Sample Number is, Sample Number: 123456789
Connected Time is,Time In: 2012-05-29T10:21:06Z
Disconnected Time is, Time Out: 2012-05-29T13:07:46Z
This is exactly what I need. Can you please help me with a shell script for the above format?
OK, the input and the desired output are kinda weird, but it's still not difficult to get what you want; try the following:
var=123456789
awk -v "var=$var" --exec /dev/stdin newtext.csv <<'EOF'
($7 == var) {
printf("Sample Number is, Sample Number: %s\n", $7);
printf("Connected Time is, Time In: %s\n", $18);
printf("Disconnected Time is, Time Out: %s\n", $27);
}
EOF
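Note that --exec is specific to GNU awk. If counting whitespace-separated fields up to $18 and $27 feels fragile, here is a label-based sketch (my assumptions: the labels are always Sample Number:, Time In: and Time Out:, each immediately followed by its value):
var=123456789
awk -v var="$var" '
$0 ~ ("Sample Number: " var " ") {
for (i = 1; i <= NF; i++) {
if ($i == "Number:") printf("Sample Number is, Sample Number: %s\n", $(i+1));
if ($i == "In:") printf("Connected Time is, Time In: %s\n", $(i+1));
if ($i == "Out:") printf("Disconnected Time is, Time Out: %s\n", $(i+1));
}
}' newtext.csv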
