newtext.csv looks like below:
Record 1
---------
line 1 line 2 Sample Number: 123456789 (line no. 3) | | | | | Time In: 2012-05-29T10:21:06Z (line no. 21) | | | Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 2
----------
line 1 line 2 Sample Number: 363214563 (line no. 3) | | | | | Time In: 2012-05-29T10:21:06Z (line no. 21) | | | Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 3
---------
line 1 line 2 Sample Number: 987654321 (line no. 3) | | | | | Time In: 2012-05-29T10:21:06Z (line no. 21) | | | Time Out: 2012-05-29T13:07:46Z (line no. 30)
Assume there are 100 such records in newtext.csv. So now I need the parameters of the entered input string, something like below:
Input
Enter the search String :
123456789
Output
Sample Number is, Sample Number: 123456789
Connected Time is, Time In: 2012-05-29T10:21:06Z
Disconnected Time is, Time Out: 2012-05-29T13:07:46Z
This is exactly what I need. Can you please help me with shell scripting for the above-mentioned format?
OK, the input and the desired output are kinda weird, but it's still not difficult to get what you want. Try the following:
var=123456789
awk -v "var=$var" --exec /dev/stdin newtext.csv <<'EOF'
($7 == var) {
printf("Sample Number is, Sample Number: %s\n", $7);
printf("Connected Time is, Time In: %s\n", $18);
printf("Disconnected Time is, Time Out: %s\n", $27);
}
EOF
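Note that --exec is specific to GNU awk. If another awk is in play, a sketch of the same program passed inline should behave the same (same assumptions about the one-line record layout and field positions):
var=123456789
awk -v var="$var" '$7 == var {
    printf("Sample Number is, Sample Number: %s\n", $7);
    printf("Connected Time is, Time In: %s\n", $18);
    printf("Disconnected Time is, Time Out: %s\n", $27);
}' newtext.csv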
We have to find the difference (d) between the last two numbers and display the rows with the highest values of d, in ascending order.
INPUT
1 | Latha | Third | Vikas | 90 | 91
2 | Neethu | Second | Meridian | 92 | 94
3 | Sethu | First | DAV | 86 | 98
4 | Theekshana | Second | DAV | 97 | 100
5 | Teju | First | Sangamithra | 89 | 100
6 | Theekshitha | Second | Sangamithra | 99 |100
Required OUTPUT
4$Theekshana$Second$DAV$97$100$3
5$Teju$First$Sangamithra$89$100$11
3$Sethu$First$DAV$86$98$12
awk 'BEGIN{FS="|";OFS="$";}{
avg=sqrt(($5-$6)^2)
print $1,$2,$3,$4,$5,$6,avg
}' file | sort -nk7 -t "$" | tail -3
Output:
4 $ Theekshana $ Second $ DAV $ 97 $ 100$3
5 $ Teju $ First $ Sangamithra $ 89 $ 100$11
3 $ Sethu $ First $ DAV $ 86 $ 98$12
As you can see there is a space before and after the $ sign, but for the last column (avg) there is no space. Please explain why this is happening.
2)
awk 'BEGIN{FS=" | ";OFS="$";}{
avg=sqrt(($5-$6)^2)
print $1,$2,$3,$4,$5,$6,avg
}' file | sort -nk7 -t "$" | tail -3
OUTPUT
4$|$Theekshana$|$Second$|$0
5$|$Teju$|$First$|$0
6$|$Theekshitha$|$Second$|$0
I have not mentioned | as the output field separator, but it still appears. Why is this happening, and why is the difference zero too?
I am just six days old in Unix, so please answer even if it's easy.
Your field separator is only the pipe symbol, so the surrounding whitespace is part of the fields, and that's what you see in the output. Also, when FS is longer than a single character it is treated as a regex, in which the pipe has its special alternation meaning and needs to be escaped. Your second case, FS=" | ", therefore means 'space or space', i.e. a single space, is the field separator.
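To see that second pitfall in isolation, a minimal sketch: with FS=" | " every single blank splits a field, so the pipes become fields of their own and get glued back together with the $ output separator:
$ echo 'a | b' | awk 'BEGIN{FS=" | "; OFS="$"} {$1=$1; print}'
a$|$b
That is also why your difference is zero: $5 and $6 then hold pipes or name fragments instead of the two numbers.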
$ awk 'BEGIN {FS=" *\\| *"; OFS="$"}
{d=sqrt(($NF-$(NF-1))^2); $1=$1;
print d "\t" $0,d}' file | sort -n | tail -3 | cut -f2-
4$Theekshana$Second$DAV$97$100$3
5$Teju$First$Sangamithra$89$100$11
3$Sethu$First$DAV$86$98$12
A slight rewrite eliminates the dependency on fixed field numbers and fixes the format: the difference is taken from the last two fields ($NF and $(NF-1)), $1=$1 forces awk to rebuild the record with the new OFS, and the difference is prepended as a tab-separated sort key that cut -f2- strips off afterwards.
I'm trying to retrieve items from Node01.pc and put them in a table.
Example:
echo ${NodeCPU[0]} is able to print the item from the line.
But when I use printf or echo, it either breaks or does not display the output from the array item.
The formatting of the table seems to work, and it displays correctly only when the array items are not involved. Could it be that there's more to the file than I can see?
Node01.pc contains
192.168.0.99
2
70
16
80
4
4
100
4
VS122:NMAD:20:20:1:1
VS122:NAMD:20:20:1:1
RS123:FEM:10:20:1:1
QV999:BEM:20:20:1:1
But I only need lines 3, 5, 7, 9.
I'm not sure what the best way to do this is, or if I even need to store the items in arrays.
I thought about retrieving all the text from the text files and making a new file containing all the data, but I'm not sure how to do that.
This is the code that I have right now.
#!/bin/bash
Node01=($(cat Node01.pc))
Node02=($(cat Node02.pc))
Node03=($(cat Node03.pc))
Node04=($(cat Node04.pc))
Node05=($(cat Node05.pc))
NodeCPU=("${Node01[2]}" "${Node02[2]}" "${Node03[2]}" "${Node04[2]}" "${Node05[2]}")
NodeMEM=("${Node01[4]}" "${Node02[4]}" "${Node03[4]}" "${Node04[4]}" "${Node05[4]}")
NodeHDD=("${Node01[6]}" "${Node02[6]}" "${Node03[6]}" "${Node04[6]}" "${Node05[6]}")
NodeNET=("${Node01[8]}" "${Node02[8]}" "${Node03[8]}" "${Node04[8]}" "${Node05[8]}")
separator=----------------------
separator=$separator$separator
rows="%-10s| %-7s| %-7s| %-7s| %-7s\n"
TableWidth=140
printf "%-10s| %-7s| %-7s| %-7s| %-7s\n" NodeNumber CPU MEM HDD NET
printf "%.${TableWidth}s\n" "$seperator"
for((i=0;i<=4;i++))
do
printf "$rows" "$(( $i+1 ))" "${NodeCPU[i]}" "${NodeMEM[i]}" "${NodeHDD[i]}" "${NodeNET[i]}"
done
read
This is an example of what I want to display:
NodeNumber | CPU | MEM | HDD | NET
----------------------------------
1 | 10 | 20 | 20 | 40
2 | 10 | 20 | 20 | 40
3 | 10 | 20 | 20 | 40
4 | 10 | 20 | 20 | 40
5 | 10 | 20 | 20 | 40
EDIT: This is what I'm currently getting:
NodeNumber| CPU | MEM | HDD | NET
--------------------------------------------
| 4 | 70
| 5 | 90
| 6 | 100
| 6 | 70
| 40 | 40
The issue I'm having is with
printf "$rows" "$(( $i+1 ))" "${NodeCPU[i]}" "${NodeMEM[i]}" "${NodeHDD[i]}" "${NodeNET[i]}"
Why worry about all the separate arrays? Simply loop over all Node*.pc files in the current directory, read the contents of each file into an array with readarray, and then output the file count and element numbers 2, 4, 6, 8 of the array in the proper format (adjust the elements output as needed), e.g.
#!/bin/bash
cnt=1 ## file counter
## print heading
printf "NodeNumber | CPU | MEM | HDD | NET\n----------------------------------\n"
for i in Node*.pc; do ## loop over all Node*.pc files in directory
readarray -t node < "$i" ## read contents into array
## output count and elements 2, 4, 6, 8 in proper format
printf "%-11s| %-4s| %-4s| %-4s| %s\n" $((cnt++)) \
"${node[2]}" "${node[4]}" "${node[6]}" "${node[8]}"
done
Example Use/Output
With the example data shown copied to the file Node01.pc in the current directory, you would get:
$ bash node.sh
NodeNumber | CPU | MEM | HDD | NET
----------------------------------
1 | 70 | 80 | 4 | 4
(I called the script node.sh)
It would output the information from each file as separate lines numbered 1, 2, ... Look things over and let me know if this is what you intended. (You can also do the same thing faster with awk by setting FS="\n" and treating the lines as columns in a single record.)
You can do the same thing in awk with:
awk '
BEGIN {
RS=""; FS="\n"
printf "NodeNumber | CPU | MEM | HDD | NET\n----------------------------------\n"
}
NF >= 9 {
printf "%-11s| %-4s| %-4s| %-4s| %s\n",++cnt,$3,$5,$7,$9
}
' Node*.pc
(note: in awk the field numbers are 1-based, while in bash the array indexes are 0-based)
Output is the same.
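One additional guess about the original breakage, not confirmed by anything shown in the question: if the .pc files were written on Windows, every line ends in a carriage return, readarray keeps it in each element, and printf then overdraws the start of the line, which would produce exactly the kind of mangled table shown. Stripping CRs first is a cheap test:
readarray -t node < <(tr -d '\r' < "$i")   # as in the loop above, but drop any CR characters first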
I am dealing with the analysis of a big number of dlg text files located within the workdir. Each file has a table (usually located at a different position in the log) in the following format:
File 1:
CLUSTERING HISTOGRAM
____________________
________________________________________________________________________________
| | | | |
Clus | Lowest | Run | Mean | Num | Histogram
-ter | Binding | | Binding | in |
Rank | Energy | | Energy | Clus| 5 10 15 20 25 30 35
_____|___________|_____|___________|_____|____:____|____:____|____:____|____:___
1 | -5.78 | 11 | -5.78 | 1 |#
2 | -5.53 | 13 | -5.53 | 1 |#
3 | -5.47 | 17 | -5.44 | 2 |##
4 | -5.43 | 20 | -5.43 | 1 |#
5 | -5.26 | 19 | -5.26 | 1 |#
6 | -5.24 | 3 | -5.24 | 1 |#
7 | -5.19 | 4 | -5.19 | 1 |#
8 | -5.14 | 16 | -5.14 | 1 |#
9 | -5.11 | 9 | -5.11 | 1 |#
10 | -5.07 | 1 | -5.07 | 1 |#
11 | -5.05 | 14 | -5.05 | 1 |#
12 | -4.99 | 12 | -4.99 | 1 |#
13 | -4.95 | 8 | -4.95 | 1 |#
14 | -4.93 | 2 | -4.93 | 1 |#
15 | -4.90 | 10 | -4.90 | 1 |#
16 | -4.83 | 15 | -4.83 | 1 |#
17 | -4.82 | 6 | -4.82 | 1 |#
18 | -4.43 | 5 | -4.43 | 1 |#
19 | -4.26 | 7 | -4.26 | 1 |#
_____|___________|_____|___________|_____|______________________________________
The aim is to loop over all the dlg files and take from each table the single line corresponding to the widest cluster (the one with the biggest number of # marks in the Histogram column). In the above table this is the third line.
3 | -5.47 | 17 | -5.44 | 2 |##
Then I need to add this line to final_log.txt together with the name of the log file (which should be specified before the line). So in the end I should have something in the following format (for 3 different log files):
"Name of the file 1": 3 | -5.47 | 17 | -5.44 | 2 |##
"Name_of_the_file_2": 1 | -5.99 | 13 | -5.98 | 16 |################
"Name_of_the_file_3": 2 | -4.78 | 19 | -4.44 | 3 |###
A possible model of my BASH workflow would be:
#!/bin/bash
for f in "$PWD"/*.dlg   # loop over all dlg files in the working directory
do
file_name2=$(basename "$f")
file_name="${file_name2/.dlg}"
echo "Processing of $f..."
# take the name of the file and save it in the log
echo "$file_name" >> "$PWD"/final_results.log
# search for the beginning of the table inside each file and save it after the name
grep 'CLUSTERING HISTOGRAM' "$f" >> "$PWD"/final_results.log
# check whether it works
gedit "$PWD"/final_results.log
done
Here I need to replace the combination of echo and grep with something that extracts the selected line of the table.
You can use this one; it is expected to be fast enough. Extra lines in your files, besides the tables, are not expected to be a problem.
grep "#$" *.dlg | sort -rk11 | awk '!seen[$1]++'
grep fetches all the histogram lines, which are then sorted in reverse order by the last field, meaning the lines with the most # characters end up on top, and finally awk keeps only the first line per file. Note that when grep parses more than one file, -H is on by default, so the filename is printed at the beginning of each line; if you test it on a single file, use grep -H explicitly.
Result should be like this:
file1.dlg: 3 | -5.47 | 17 | -5.44 | 2 |##########
file2.dlg: 3 | -5.47 | 17 | -5.44 | 2 |####
file3.dlg: 3 | -5.47 | 17 | -5.44 | 2 |#######
Here is a modification to get the first appearance in case a file contains many equally wide max lines:
grep "#$" *.dlg | sort -k11 | tac | awk '!seen[$1]++'
We replaced the reverse flag of sort with the tac command, which reverses the stream, so now for any equal lines the initial order is preserved.
Second solution
Here is a version using only awk:
awk -F"|" '/#$/ && $NF > max[FILENAME] {max[FILENAME]=$NF; row[FILENAME]=$0}
END {for (i in row) print i ":" row[i]}' *.dlg
Update: if you execute it from a different directory and want to keep only the basename of every file, remove the path prefix:
awk -F"|" '/#$/ && $NF > max[FILENAME] {max[FILENAME]=$NF; row[FILENAME]=$0}
END {for (i in row) {sub(".*/","",i); print i ":" row[i]}}' *.dlg
Probably makes more sense as an Awk script.
This picks the first line with the widest histogram in the case of a tie within an input file.
#!/bin/bash
awk 'FNR == 1 { if(sel) print sel; sel = ""; max = 0 }
FNR < 9 { next }
length($10) > max { max = length($10); sel = FILENAME ":" $0 }
END { if (sel) print sel }' ./"$prot"/*.dlg
This assumes the histograms are always the tenth field; if your input format is even messier than the lump you show, maybe adapt to taste.
In some more detail, the first line triggers on the first line of each input file. If we have collected a previous line (meaning this is not the first input file), print that, and start over. Otherwise, initialize for the first input file. Set sel to nothing and max to zero.
The second line skips lines 1-8 which contain the header.
The third line checks if the current line's histogram is longer than max. If it is, update max to this histogram's length, and remember the current line in sel.
The last line is spillover for when we have processed all files. We never printed the sel from the last file, so print that too, if it's set.
If you mean that we should find the lines between CLUSTERING HISTOGRAM and the end of the table, we should probably have more information about what the surrounding lines look like. Maybe something like this, though:
awk '/CLUSTERING HISTOGRAM/ { if (sel) print sel; looking = 1; sel = ""; max = 0 }
!looking { next }
looking > 1 && $1 != looking { looking = 0; nextfile }
$1 == looking && length($10) > max { max = length($10); sel = FILENAME ":" $0 }
END { if (sel) print sel }' ./"$prot"/*.dlg
This sets looking to 1 when we see CLUSTERING HISTOGRAM, then counts it up along the rank numbers; the table ends at the first line whose rank no longer matches the count.
I would suggest processing using awk:
for i in $FILES
do
echo -n \""$i\": "
awk 'BEGIN {
output="";
outputlength=0
}
/(^ *[0-9]+)/ { # process only lines that start with a number
if (length(substr($10, 2)) > outputlength) { # if line has more hashes, store it
output=$0;
outputlength=length(substr($10, 2))
}
}
END {
print output # output the resulting line
}' "$i"
done
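A small usage note (my addition; FILES is not defined in the answer): with filenames free of whitespace you could set FILES=$(ls ./*.dlg), but globbing directly in the loop header is safer:
for i in ./*.dlg; do echo "$i"; done   # glob directly instead of relying on an unquoted $FILES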
Here is my query:
/path/newdir/newtext.csv
newtext.csv looks like below:
Record 1
line 1
line 2
Sample Number: 123456789 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 2
line 1
line 2
Sample Number: 363214563 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
Record 3
line 1
line 2
Sample Number: 987654321 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
Assume there are 100 such records in newtext.csv.
So now I need the parameters of the entered input string, as below:
Example Input Search String :
123456789
Example Output :
Sample Number: 123456789
Time In: 2012-05-29T10:21:06Z
Time Out: 2012-05-29T13:07:46Z
This is exactly what I need. Can you please help me?
For a plain input* string:
grep -F "InputString" -A27 inputFile.csv | sed -n '1p;19p;$p'
For a pattern (extended-regex)* string:
grep -E "InputPattern" -A27 inputFile.csv | sed -n '1p;19p;$p'
Script:
user$ cat script.sh
#!/bin/bash
grep -F "$1" -A27 inputFile.csv | sed -n '1p;19p;$p'
user$ chmod +x script.sh
user$ ./script.sh "inputString"
Edit:
A solution not based on line numbers:
#!/bin/bash
grep -F "$1" -A27 inputFile.csv |sed -n "/$1/p;/^Time\s[^:]*:/p"
* The input must be unique to the file.
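If even the 27-line window between Sample Number and Time Out is not guaranteed, here is a sketch that is independent of line offsets, assuming each record runs from its Sample Number line to its Time Out line and that the search string is unique (as noted above):
awk -v s="123456789" '
$0 ~ ("Sample Number: " s) { show = 1 }             # start of the matching record
show && /^(Sample Number|Time In|Time Out)/ { print }
show && /^Time Out/ { show = 0 }                    # end of the record
' newtext.csv
It prints the three lines verbatim, like the grep solutions do.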
Try this:
csv.txt (Input File)
Sample Number: 123456789 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
line 1
line 2
Sample Number: 363214563 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
line 1
line 2
Sample Number: 987654321 (line no. 3)
|
|
|
|
|
Time In: 2012-05-29T10:21:06Z (line no. 21)
|
|
|
Time Out: 2012-05-29T13:07:46Z (line no. 30)
csv.sh (Code)
echo "Enter your search string:"
read name
grep -A 10 "$name" csv.txt | grep -v "|" | awk -F "(" '{print $1}'
Output
Enter your search string: 123456789
Sample Number: 123456789
Time In: 2012-05-29T10:21:06Z
Time Out: 2012-05-29T13:07:46Z
I have a large space-delimited txt file which I split into 18 smaller files (each with its own number of columns). This split is based on a delimiter, i.e. whenever the timestamp hits midnight. So effectively I'll end up with 18 files of the form (note: ignore the dashes and pipes, I've used them to improve readability):
file1
time ----------- valueA - valueB
12:00:00 AM | 54.13 | 239.12
12:00:01 AM | 51.83 | 119.93
..
file18
time ---------- valueA - valueB - valueC - valueD
12:00:00 AM | 54.92 | 239.12 | 231.23 | 882.12
12:00:01 AM | 23.92 | 121.92 | 201.23 | 892.12
..
Once I split the file, I then perform some processing on each of the files using awk, so in short there are two stages: the 'split stage' and the 'processing stage'.
Unfortunately, the timestamp contained in the large txt file is in one of two formats: either the desirable 24-hour format "00:00:01" or the undesirable 12-hour format "12:00:01 AM".
As a result, I'm trying to convert all timestamps to the 24-hour format, and I'm not sure how to do this. I'm also not sure whether to attempt this at the split stage using bash or at the processing stage using awk. I know that the following command converts 12-hour to 24-hour time:
date --date="12:00:01 AM" +%T
However, I'm not sure how to incorporate this into my shell script, where I'm using 'while read line' at the split stage, or whether I should do the time conversion in awk (if that's possible) at the processing stage.
See the test below; is it helpful for you?
kent$ echo "12:00:00 AM | 54.92 | 239.12 | 231.23 | 882.12 "\
|awk -F'|' 'BEGIN{OFS="|"}{("date --date=\""$1"\" +%T") |getline $1;print }'
Output:
00:00:00| 54.92 | 239.12 | 231.23 | 882.12
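One caveat worth adding (my note, not part of the original answer): each getline here spawns a date process and leaves the command's pipe open, so on a long file the command should be closed after reading. A sketch with that fix:
awk -F'|' 'BEGIN{OFS="|"}
{
    cmd = "date --date=\"" $1 "\" +%T"   # build the conversion command for this line
    cmd | getline t                      # read the converted 24-hour time
    close(cmd)                           # close the pipe so descriptors are not exhausted
    $1 = t
    print
}' file
Running one date per line is slow on big files; with GNU awk you could instead parse the timestamp fields and use mktime/strftime, but the above stays closest to the original one-liner.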