How can I skip a line with awk - bash

I have a command like this:
tac $log | awk -v pattern="test" '$9 ~ pattern {print; exit}'
It prints the last line in which $9 contains the text "test".
Like this:
Thu Mar 26 20:21:38 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_223123.txt b _ o r spy ftp 0 * c
Thu Mar 26 20:21:39 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123123.txt b _ o r spy ftp 0 * c
Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c
Thu Mar 26 20:21:41 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123124.txt b _ o r spy ftp 0 * c
-->
Thu Mar 26 20:21:41 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123124.txt b _ o r spy ftp 0 * c
This command shows me the last line, but I need to skip lines that contain SAVED, so the output should be:
Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c
How can I do this?

To skip a line, you can match it, and use the next command.
$9 ~ /SAVED/ { next }
$9 ~ /\.txt$/ { print; exit }

You can add another condition with !~ to prevent lines matching a second pattern from being printed (I use pattern2 to keep it generic; of course you can hardcode SAVED there):
$9 ~ pattern && $9 !~ pattern2
All together:
$ tac "$log" | awk -v pattern="test" -v pattern2="SAVED" '$9 ~ pattern && $9 !~ pattern2 {print; exit}'
Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c

Use !~ to test if a line doesn't match a pattern.
tac "$log" | awk -v pattern="test" '$9 ~ pattern && $9 !~ /SAVED/ { print; exit; }'
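A self-contained way to try this, using two fabricated log lines in place of the real file (the newer entry is the SAVED one, so it must be skipped):

```shell
printf '%s\n' \
  'Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c' \
  'Thu Mar 26 20:21:41 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123124.txt b _ o r spy ftp 0 * c' |
tac |
awk -v pattern="test" -v pattern2="SAVED" '$9 ~ pattern && $9 !~ pattern2 {print; exit}'
# prints the /home/xxxyyy/... line
```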

Related

Get the longest logon time of a given user using awk

My task is to write a bash script, using awk, that finds the longest logon session of a given user ("still logged in" entries don't count) and prints the month, day, IP and logon time in minutes.
Sample run: ./scriptname.sh username1
Output of last username1:
username1 pts/ IP Apr 2 .. .. .. .. (00.03)
username1 pts/ IP Apr 3 .. .. .. .. (00.13)
username1 pts/ IP Apr 5 .. .. .. .. (12.00)
username1 pts/ IP Apr 9 .. .. .. .. (12.11)
Sample output:
Apr 9 IP 731
(note: 12 hours and 11 minutes is in total 731 minutes)
I have written this script, but a bunch of errors pop up, and I am really confused:
#!/bin/bash
usr=$1
last $usr | grep -v "still logged in" | awk 'BEGIN {max=-1;}
{
h=substr($10,2,2);
min=substr($10,5,2) + h/60;
}
(max < min){
max = min;
}
END{
maxh=max/60;
maxmin=max-maxh;
($maxh == 0 && $maxmin >=10){
last $usr | grep "00:$maxmin" | awk '{print $5," ",$6," ", $3," ",$maxmin}'
exit 1
}
($maxh == 0 $$ $maxmin < 10){
last $usr | grep "00:0$maxmin" | awk '{print $5," ",$6," ",$3," ",$maxmin}'
exit 1
}
($maxh < 10 && $maxmin == 0){
last $usr | grep "0$maxh:00" | awk '{print $5," ",$6," ",$3," ",$maxmin}'
exit 1
}
($maxh < 10 && $maxmin < 10){
last $usr | grep "0$maxh:0$maxmin" | awk '{print $5," ",$6," ",$3," ",$maxmin}'
exit 1
}
($maxh >= 10 && $maxmin < 10){
last $usr | grep "$maxh:0$maxmin" | awk '{print $5," ",$6," ",$3," ",$maxmin}'
exit 1
}
($maxh >=10 && $maxmin >= 10){
last $usr | grep "$maxh:$maxmin" | awk '{print $5," ",$6," ",$3," ",$maxmin}'
exit 1
}
}'
So a bit of explaining of how I imagined this would work:
After the initialization, I want to take the (hh:mm) column of the last $usr output, save the hours and minutes of every line, and find the biggest value in minutes (i.e. the longest logon time).
After finding the longest logon time (in minutes, stored in the variable max), I have to reformat the minutes back into hh:mm so I can grep for it: run last again, search only for the line(s) that contain the max logon time, and print the needed information in the month day IP minutes format using another awk.
Errors I get when running this code: a bunch of syntax errors wherever I use grep and awk inside the original awk.
awk is not shell. You can't directly call tools like last, grep and awk from awk any more than you could call them directly from a C program.
Using any awk in any shell on every Unix box. This assumes that if multiple rows share the max time you want all of them printed, and that if no timestamped rows are found you want something like No matching records printed (both are easy tweaks if not; just state your requirements for those cases and include them in the example in your question):
last username1 |
awk '
/still logged in/ {
next
}
{
split($NF,t,/[().]/)
cur = (t[2] * 60) + t[3]
}
cur >= max {
out = ( cur > max ? "" : out ORS ) $4 OFS $5 OFS $3 OFS cur
max = cur
}
END {
print (out ? out : "No matching records")
}
'
Apr 9 IP 731
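To check that script against the sample data from the question without needing last, the same awk program can be fed the lines directly:

```shell
printf '%s\n' \
  'username1 pts/ IP Apr 2 .. .. .. .. (00.03)' \
  'username1 pts/ IP Apr 3 .. .. .. .. (00.13)' \
  'username1 pts/ IP Apr 5 .. .. .. .. (12.00)' \
  'username1 pts/ IP Apr 9 .. .. .. .. (12.11)' |
awk '
/still logged in/ { next }
{ split($NF,t,/[().]/); cur = (t[2] * 60) + t[3] }   # t[2]=hours, t[3]=minutes
cur >= max { out = (cur > max ? "" : out ORS) $4 OFS $5 OFS $3 OFS cur; max = cur }
END { print (out ? out : "No matching records") }
'
# Apr 9 IP 731
```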
If gnu-awk is available, you might use a pattern with 2 capture groups for the numbers in the last field, and print the desired format in the END block.
Assuming file contains the example content and the last column holds the logon duration:
awk '
match($NF, /\(([0-9]+)\.([0-9]+)\)/, a) {
hm = (a[1] * 60) + a[2]
if(hm > max) {max = hm; line = $0;}
}
END {
n = split(line,a,/[[:space:]]+/)
print a[3], a[4], a[5], max
}
' file
Output
IP Apr 9 731
Testing the last command on my machine (Red Hat Linux 7.8), I got the following output:
user0022 pts/1 10.164.240.158 Sat Apr 25 19:32 - 19:47 (00:14)
user0022 pts/1 10.164.243.80 Sat Apr 18 22:31 - 23:31 (1+01:00)
user0022 pts/1 10.164.243.164 Sat Apr 18 19:21 - 22:05 (02:43)
user0011 pts/0 10.70.187.1 Thu Nov 21 15:26 - 18:37 (03:10)
user0011 pts/0 10.70.187.1 Thu Nov 7 16:21 - 16:59 (00:38)
astukals pts/0 10.70.187.1 Mon Oct 7 19:10 - 19:13 (00:03)
reboot system boot 3.10.0-957.10.1. Mon Oct 7 22:09 - 14:30 (156+17:21)
astukals pts/0 10.70.187.1 Mon Oct 7 18:56 - 19:08 (00:12)
reboot system boot 3.10.0-957.10.1. Mon Oct 7 21:53 - 19:08 (-2:-44)
IT pts/0 10.70.187.1 Mon Oct 7 18:50 - 18:53 (00:03)
IT tty1 Mon Oct 7 18:48 - 18:49 (00:00)
user0022 pts/1 30.30.30.168 Thu Apr 16 09:43 - 14:54 (05:11)
user0022 pts/1 30.30.30.59 Wed Apr 15 11:48 - 04:59 (17:11)
user0022 pts/1 30.30.30.44 Tue Apr 14 19:03 - 04:14 (09:11)
Found that the time format is DD+HH:MM, where the DD+ part appears only when DD is not zero.
Found there are additional technical entries (IT, system boot, reboot) that need to be filtered out.
Suggesting solution (parse the parenthesized duration from the last field so the optional DD+ prefix is handled, and sort descending so head -1 picks the longest):
last | awk '
/reboot|system|still/ || $1 == "IT" {next}
$NF ~ /^\(/ {
   d = $NF; gsub(/[()]/, "", d)          # "17:11" or "1+01:00"
   n = split(d, t, /[+:]/)               # t: [DD,] HH, MM
   print $5, $6, $3, t[n] + t[n-1] * 60 + (n == 3 ? t[1] * 1440 : 0)
}' | sort -rnk4 | head -1
Result:
Apr 18 10.164.243.80 1500

awk print date formats for all letters - lower and upper cases

I'm working on an awk one-liner to get the date command output for all possible format characters (upper and lower case), like below
a Tue | A Tuesday
b Apr | B April
c Tue Apr 14 17:33:37 2020 | C 20
d 14 | D 04/14/20
. . . .
. . . .
z +0530 | Z IST
The command below seems syntactically correct to me, but it throws an error.
seq 0 25 | awk ' { d="date \"+" printf("%c",$0+97) " %" printf("%c",$0+97) "\""; d | getline ; print } '
-bash: syntax error near unexpected token `)'
What is wrong with my attempt? Any other awk solution is also welcome.
bash can do this:
for c in {a..z}; do date "+$c %$c | ${c^} %${c^}"; done
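The ${c^} expansion used there (a bash 4+ feature) upper-cases the first character of the variable's value, which is what produces the uppercase format letter:

```shell
c=a
echo "$c ${c^}"   # ${c^} upper-cases the first character
# a A
```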
Could you please try the following (it builds the date commands without resorting to ASCII-number tricks):
awk -v s1="\"" '
BEGIN{
num=split("a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z",alphabets,",")
for(i=1;i<=num;i++){
print "date " s1 "+"alphabets[i] " %"alphabets[i] " | " toupper(alphabets[i]) " %"toupper(alphabets[i]) s1
}
}
' | bash
Logical explanation:
Starting the awk program and setting variable s1 to the value ".
Everything we are doing is in the BEGIN section of the code only.
Using split to create an array named alphabets where all the lowercase letters are stored with indices 1, 2, 3, ... and so on.
Now running a for loop from 1 to the length of the array alphabets.
Here the print command only prints the date commands (in the form they should actually run); it does not execute them.
Closing the awk command and piping its output to bash executes the commands and shows the output on the terminal.
Any time you find yourself considering using awk like a shell (i.e. as a tool to call other tools from) you really need to think hard about whether or not it's the right approach.
Using any awk in any shell, without the complications of having shell call awk to spawn a subshell to call date and then have getline try to read it and close the pipe, etc., as happens if you try to call date from awk:
$ awk 'BEGIN{for (i=0; i<=25; i++) print c=sprintf("%c",i+97), toupper(c)}' |
while read c C; do date "+$c %$c | $C %$C"; done
a Tue | A Tuesday
b Apr | B April
c Tue Apr 14 09:03:28 2020 | C 20
d 14 | D 04/14/20
e 14 | E E
f f | F 2020-04-14
g 20 | G 2020
h Apr | H 09
i i | I 09
j 105 | J J
k 9 | K K
l 9 | L L
m 04 | M 03
n
| N N
o o | O O
p AM | P P
q q | Q Q
r 09:03:28 AM | R 09:03
s 1586873008 | S 28
t | T 09:03:28
u 2 | U 15
v 14-Apr-2020 | V 16
w 2 | W 15
x 04/14/2020 | X 09:03:28
y 20 | Y 2020
z -0500 | Z CDT
You may want to have this:
awk -v q='"' 'BEGIN{for(i=0;i<=25;i++){
ch=sprintf("%c",i+97)
d="date +%s%s %%%s%s"
cmd=sprintf(d,q,ch,ch,q); cmd|getline v; close(cmd)
cmd=sprintf(d,q,toupper(ch),toupper(ch),q); cmd|getline v2; close(cmd)
print v "|" v2
}}'
Note
you don't need to feed awk with seq 0 25, you can use the BEGIN block
printf writes to the output; if you want the result as a string, use sprintf()
you should close() each command after execution, passing the exact command string that was piped to getline
you didn't implement the "uppercase" part
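The cmd | getline plus close() pattern in isolation (close() must receive the same command string that was used with the pipe):

```shell
awk 'BEGIN {
  cmd = "echo hello"     # any shell command
  cmd | getline line     # read its first line of output
  close(cmd)             # close the pipe so the command can be rerun later
  print line
}'
# hello
```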
Output:
a Tue|A Tuesday
b Apr|B April
c Tue 14 Apr 2020 03:02:33 PM CEST|C 20
d 14|D 04/14/20
e 14|E %E
f %f|F 2020-04-14
g 20|G 2020
h Apr|H 15
i %i|I 03
j 105|J %J
k 15|K %K
l 3|L %L
m 04|M 02
n |N 396667929
o %o|O %O
p PM|P pm
q 2|Q %Q
r 03:02:33 PM|R 15:02
s 1586869353|S 33
t |T 15:02:33
u 2|U 15
v %v|V 16
w 2|W 15
x 04/14/2020|X 03:02:33 PM
y 20|Y 2020
z +0200|Z CEST

awk Count number of occurrences

I made these awk commands in a shell script to count the total occurrences of the $4/$5 pairs.
awk -F" " '{if($4=="A" && $5=="G") {print NR"\t"$0}}' file.txt > ag.txt && cat ag.txt | wc -l
awk -F" " '{if($4=="C" && $5=="T") {print NR"\t"$0}}' file.txt > ct.txt && cat ct.txt | wc -l
awk -F" " '{if($4=="T" && $5=="C") {print NR"\t"$0}}' file.txt > tc.txt && cat tc.txt | wc -l
awk -F" " '{if($4=="T" && $5=="A") {print NR"\t"$0}}' file.txt > ta.txt && cat ta.txt | wc -l
The output is a number (####) in the shell. But I want to get rid of the > ag.txt && cat ag.txt | wc -l part and instead get output like AG = ####.
This is input format:
>seq1 284 284 A G 27 100 16 11 16 11
>seq1 266 266 C T 27 100 16 11 16 11
>seq1 185 185 T - 24 100 10 14 10 14
>seq1 194 194 T C 24 100 12 12 12 12
>seq1 185 185 T AAA 24 100 10 14 10 14
>seq1 194 194 A G 24 100 12 12 12 12
>seq1 185 185 T A 24 100 10 14 10 14
I want output like this in the shell or in a file, counting only the single-letter occurrences, not other patterns:
AG 2
CT 1
TC 1
TA 1
Yes, everything you're trying to do can likely be done within the awk script. Here's how I'd count lines based on a condition:
awk -F" " '$4=="A" && $5=="G" {n++} END {printf("AG = %d\n", n)}' file.txt
Awk scripts consist of condition { statement } pairs, so you can do away with the if entirely -- it's implicit.
n++ increments a counter whenever the condition is matched.
The magic condition END is true after the last line of input has been processed.
Is this what you're after? Why were you adding NR to your output if all you wanted was the line count?
Oh, and you might want to confirm whether you really need -F" ". By default, awk splits on runs of whitespace, and -F" " is treated the same way, so the option is redundant here.
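Running that one-liner against the sample input from the question (fed via a here-document here instead of file.txt) gives:

```shell
awk '$4=="A" && $5=="G" {n++} END {printf("AG = %d\n", n)}' <<'EOF'
>seq1 284 284 A G 27 100 16 11 16 11
>seq1 266 266 C T 27 100 16 11 16 11
>seq1 194 194 A G 24 100 12 12 12 12
>seq1 185 185 T A 24 100 10 14 10 14
EOF
# AG = 2
```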
UPDATE #1 based on the edited question...
If what you're really after is a pair counter, an awk array may be the way to go. Something like this:
awk '{a[$4 $5]++} END {for (pair in a) printf("%s %d\n", pair, a[pair])}' file.txt
Here's the breakdown.
The first statement runs on every line, and increments a counter that is an element of an array (a[]) whose key is built from $4 and $5.
In the END block, we step through the array in a for loop, and for each index, print the index name and the value.
The output will not be in any particular order, as awk does not guarantee array order. If that's fine with you, then this should be sufficient. It should also be pretty efficient, because its max memory usage is based on the total number of combinations available, which is a limited set.
Example:
$ cat file
>seq1 284 284 A G 27 100 16 11 16 11
>seq1 266 266 C T 27 100 16 11 16 11
>seq1 227 227 T C 25 100 13 12 13 12
>seq1 194 194 A G 24 100 12 12 12 12
>seq1 185 185 T A 24 100 10 14 10 14
$ awk '/^>seq/ {a[$4 $5]++} END {for (p in a) printf("%s %d\n", p, a[p])}' file
CT 1
TA 1
TC 1
AG 2
UPDATE #2 based on the revised input data and previously undocumented requirements.
With the extra data, you can still do this with a single run of awk, but of course the awk script is getting more complex with each new requirement. Let's try this as a longer one-liner:
$ awk 'BEGIN{v["G"]; v["A"]; v["C"]; v["T"]} $4 in v && $5 in v {a[$4 $5]++} END {for (p in a) printf("%s %d\n", p, a[p])}' i
CT 1
TA 1
TC 1
AG 2
This works by first (in the magic BEGIN block) defining an array, v[], to record "valid" records. The condition on the counter simply verifies that both $4 and $5 contain members of the array. All else works the same.
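The membership mechanics can be seen in isolation: merely referencing v["G"] creates the index (with an empty value), and the in operator tests index existence without creating anything:

```shell
awk 'BEGIN {
  v["G"]              # referencing the element creates index "G"
  x = ("G" in v)      # 1: the index exists
  y = ("X" in v)      # 0: never used as an index of v
  print x, y
}'
# 1 0
```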
At this point, with the script running onto multiple lines anyway, I'd probably separate this into a small file. It could even be a stand-alone script.
#!/usr/bin/awk -f
BEGIN {
v["G"]; v["A"]; v["C"]; v["T"]
}
$4 in v && $5 in v {
a[$4 $5]++
}
END {
for (p in a)
printf("%s %d\n", p, a[p])
}
Much easier to read that way.
And if your goal is to count ONLY the combinations you mentioned in your question, you can handle the array slightly differently.
#!/usr/bin/awk -f
BEGIN {
a["AG"]; a["TA"]; a["CT"]; a["TC"]
}
($4 $5) in a {
a[$4 $5]++
}
END {
for (p in a)
printf("%s %d\n", p, a[p])
}
This only validates things that already have array indices, which are NULL per BEGIN.
The parentheses in the increment condition are not required, and are included only for clarity.
Just count them all then print the ones you care about:
$ awk '{cnt[$4$5]++} END{split("AG CT TC TA",t); for (i=1;i in t;i++) print t[i], cnt[t[i]]+0}' file
AG 2
CT 1
TC 1
TA 1
Note that this will produce a count of zero for any of your target pairs that don't appear in your input, e.g. if you want a count of "XY"s too:
$ awk '{cnt[$4$5]++} END{split("AG CT TC TA XY",t); for (i=1;i in t;i++) print t[i], cnt[t[i]]+0}' file
AG 2
CT 1
TC 1
TA 1
XY 0
If that's desirable, check if other solutions do the same.
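The +0 in cnt[t[i]]+0 is what turns a never-incremented (empty) array element into a printable 0; a minimal illustration:

```shell
awk 'BEGIN {
  split("AG CT", t)           # t[1]="AG", t[2]="CT"
  cnt["AG"] = 2               # only AG was ever counted
  for (i = 1; (i in t); i++)
    print t[i], cnt[t[i]] + 0 # +0 forces numeric, so missing CT prints 0
}'
# AG 2
# CT 0
```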
Actually, this might be what you REALLY want, just to make sure $4 and $5 are single upper case letters:
$ awk '$4$5 ~ /^[[:upper:]]{2}$/{cnt[$4$5]++} END{for (i in cnt) print i, cnt[i]}' file
TA 1
AG 2
TC 1
CT 1

Extract date from log file

I have a log line like this:
Tue Dec 2 10:03:46 2014 1 10.0.0.1 0 /home/test4/TEST_LOGIN_201312021003.201412021003.23872.sqlLdr b _ i r test4 ftp 0 * c
And I can print date value of this line like this.
echo $log | awk '{print $9}' | grep -oP '(?<!\d)201\d{9}' | head -n 1
I have another log line like this, how can I print date value?
Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c
I tried my awk/grep solution, but it just prints 201 followed by the next nine digits wherever it first sees 201.
Sub folders and data name is the same:
2014/12/11/16/20 --> 11 Dec 2014 16:20 <-- blablabla_data-2014_12_11_16_20.txt
note: /home/DATA1 is not static. year/month/day/hour/minute is static.
As the format in the path is /.../YYYY/MM/DD/HH/MM/filename, you can match the date block with a grep expression of the form 201\d/\d{2}/\d{2}/\d{2}/\d{2}:
$ log="Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data2_11_16_20.txt b _ i r spy ftp 0 * c"
$ echo "$log" | grep -oP '(?<!\d)201\d/\d{2}/\d{2}/\d{2}/\d{2}'
2014/12/11/16/20
Then remove the slashes with tr:
$ echo "$log" | grep -oP '(?<!\d)201\d/\d{2}/\d{2}/\d{2}/\d{2}' | tr -d '/'
201412111620
sed can also work, if you are acquainted with it:
echo "Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c"|sed 's#.*[[:alnum:]]*/\([[:digit:]]\{4\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}\).*#\1#'
output
2014/12/11/16/20
To remove the "/" characters, pipe the same command to tr -d '/'. Full command line:
echo "Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c"|sed 's#.*[[:alnum:]]*/\([[:digit:]]\{4\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}\).*#\1#'|tr -d '/'
Output
201412111620
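For completeness, plain POSIX awk can do the same extraction with match() and substr() — a sketch assuming the same YYYY/MM/DD/HH/MM layout in the path:

```shell
log='Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c'
echo "$log" | awk '
match($0, /20[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9]/) {
  s = substr($0, RSTART, RLENGTH)   # "2014/12/11/16/20"
  gsub(/\//, "", s)                 # drop the slashes
  print s
}'
# 201412111620
```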

Shell script to find common values and write in particular pattern with subtraction math to range pattern

A shell script to get the common values of two files and write them to a new file as ranges, with the first value of each range reduced by 1.
$ cat file1
2
3
4
6
7
8
10
12
13
16
20
21
22
23
27
30
$ cat file2
2
3
4
8
10
12
13
16
20
21
22
23
27
The script I have so far (close, but not the desired output):
awk 'NR==FNR{x[$1]=1} NR!=FNR && x[$1]' file1 file2 | sort | awk 'NR==1 {s=l=$1; next} $1!=l+1 {if(l == s) print l; else print s ":" l; s=$1} {l=$1} END {if(l == s) print l; else print s ":" l; s=$1}'
Script out:
2:4
8
10
12:13
16
20:23
27
Desired output:
1:4
8
10
11:13
16
19:23
27
Similar to sputnick's answer, except this uses comm to find the intersection of the file contents.
comm -12 <(sort file1) <(sort file2) |
sort -n |
awk '
function print_range() {
if (start != prev)
printf "%d:", start-1
print prev
}
FNR==1 {start=prev=$1; next}
$1 > prev+1 {print_range(); start=$1}
{prev=$1}
END {print_range()}
'
1:4
8
10
11:13
16
19:23
27
Try doing this :
awk 'NR==FNR{x[$1]=1} NR!=FNR && x[$1]' file1 file2 |
sort |
awk 'NR==1 {s=l=$1; next}
$1!=l+1 {if(l == s) print l; else print s -1 ":" l; s=$1}
{l=$1}
END {if(l == s) print l; else print s -1 ":" l; s=$1}'
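To try the comm-based approach end-to-end, throwaway copies of the two files can be created on the fly (filenames here are just for the demo; comm needs lexically sorted input, hence the extra sort step):

```shell
tmp=$(mktemp -d)
printf '%s\n' 2 3 4 6 7 8 10 12 13 16 20 21 22 23 27 30 > "$tmp/file1"
printf '%s\n' 2 3 4 8 10 12 13 16 20 21 22 23 27        > "$tmp/file2"
sort "$tmp/file1" > "$tmp/f1"; sort "$tmp/file2" > "$tmp/f2"  # lexical sort for comm
comm -12 "$tmp/f1" "$tmp/f2" | sort -n | awk '
function print_range() { if (start != prev) printf "%d:", start-1; print prev }
FNR==1      { start = prev = $1; next }
$1 > prev+1 { print_range(); start = $1 }
            { prev = $1 }
END         { print_range() }
'
rm -r "$tmp"
# 1:4
# 8
# 10
# 11:13
# 16
# 19:23
# 27
```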
