awk print date formats for all letters - lower and upper cases - bash

I'm working on an awk one-liner to get the date command output for all possible format characters (upper and lower case), like below:
a Tue | A Tuesday
b Apr | B April
c Tue Apr 14 17:33:37 2020 | C 20
d 14 | D 04/14/20
. . . .
. . . .
z +0530 | Z IST
The command below seems syntactically correct to me, but it throws an error.
seq 0 25 | awk ' { d="date \"+" printf("%c",$0+97) " %" printf("%c",$0+97) "\""; d | getline ; print } '
-bash: syntax error near unexpected token `)'
What is wrong with my attempt? Any other awk solution is also welcome.

bash can do this:
for c in {a..z}; do date "+$c %$c | ${c^} %${c^}"; done
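A note on the ${c^} expansion the loop relies on: it upper-cases the first character of $c and needs bash 4 or newer. A minimal illustration:
c=a
echo "$c ${c^}"    # prints: a A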

Could you please try the following (without using the ASCII-numbers trick).
awk -v s1="\"" '
BEGIN{
  num=split("a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z",alphabets,",")
  for(i=1;i<=num;i++){
    print "date " s1 "+" alphabets[i] " %" alphabets[i] " | " toupper(alphabets[i]) " %" toupper(alphabets[i]) s1
  }
}
' | bash
Logical explanation:
Start the awk program and set the variable s1 to the value ".
Everything we are doing is in the BEGIN section of the code only.
Use split to create an array named alphabets in which all the lowercase letters are stored, with indices 1, 2, 3 and so on.
Run a for loop from 1 to the length of the array alphabets.
Inside the loop, print builds each date command exactly as it should be run; this step only prints the commands.
Closing the awk command and piping its output to bash executes the commands and shows the output on the terminal.
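To see exactly what gets executed, drop the trailing | bash; the first few generated commands look like this:
date "+a %a | A %A"
date "+b %b | B %B"
date "+c %c | C %C"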

Any time you find yourself considering using awk like a shell (i.e. as a tool to call other tools from), you really need to think hard about whether it's the right approach.
This uses any awk in any shell, without the complications you get when the shell calls awk, awk spawns a subshell to call date, and getline then has to read from and close the pipe, as happens if you call date from within awk:
$ awk 'BEGIN{for (i=0; i<=25; i++) print c=sprintf("%c",i+97), toupper(c)}' |
while read c C; do date "+$c %$c | $C %$C"; done
a Tue | A Tuesday
b Apr | B April
c Tue Apr 14 09:03:28 2020 | C 20
d 14 | D 04/14/20
e 14 | E E
f f | F 2020-04-14
g 20 | G 2020
h Apr | H 09
i i | I 09
j 105 | J J
k 9 | K K
l 9 | L L
m 04 | M 03
n
| N N
o o | O O
p AM | P P
q q | Q Q
r 09:03:28 AM | R 09:03
s 1586873008 | S 28
t | T 09:03:28
u 2 | U 15
v 14-Apr-2020 | V 16
w 2 | W 15
x 04/14/2020 | X 09:03:28
y 20 | Y 2020
z -0500 | Z CDT

You may want something like this:
awk -v q='"' 'BEGIN{
  for(i=0;i<=25;i++){
    ch=sprintf("%c",i+97)
    fmt="date +%s%s %%%s%s"
    cmd1=sprintf(fmt, q, ch, ch, q)                      # e.g. date +"a %a"
    cmd2=sprintf(fmt, q, toupper(ch), toupper(ch), q)    # e.g. date +"A %A"
    cmd1 | getline v
    cmd2 | getline v2
    print v "|" v2
    close(cmd1); close(cmd2)                             # close each pipe with the same string it was opened with
  }
}'
Note
you don't need to feed awk with seq 0 25; you can use the BEGIN block
printf writes output; if you want the result as a string, use sprintf()
you should close() the command after execution
you didn't implement the "uppercase" part
Output:
a Tue|A Tuesday
b Apr|B April
c Tue 14 Apr 2020 03:02:33 PM CEST|C 20
d 14|D 04/14/20
e 14|E %E
f %f|F 2020-04-14
g 20|G 2020
h Apr|H 15
i %i|I 03
j 105|J %J
k 15|K %K
l 3|L %L
m 04|M 02
n |N 396667929
o %o|O %O
p PM|P pm
q 2|Q %Q
r 03:02:33 PM|R 15:02
s 1586869353|S 33
t |T 15:02:33
u 2|U 15
v %v|V 16
w 2|W 15
x 04/14/2020|X 03:02:33 PM
y 20|Y 2020
z +0200|Z CEST
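To make the printf/sprintf and close() points from the notes concrete, here is a tiny stand-alone sketch (the date +%Y command is just an example):
awk 'BEGIN{
  cmd = "date +%Y"                 # command kept in a variable
  cmd | getline year               # getline reads one line of its output
  close(cmd)                       # close the pipe using the same string it was opened with
  s = sprintf("year=%s", year)     # sprintf returns the string; printf would print it directly
  print s
}'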

Related

How to append a character at the end of a specific line in a loop?

I want to read line numbers from a file and according to that insert characters in another file. This is what I got so far:
#!/bin/bash
character=:
line_number=1
sed $line_number's/$/ '$character'/' <readme >readme_new
line_number=3
sed $line_number's/$/ '$character'/' <readme_new >readme_newer
I would like to do that in a loop now.
TL;DR:
$: c='!'
$: sed "s#\$# s/\$/ $c/#" fibs >script
$: sed -i "$(<script)" infile
Broken out -
A file of line numbers:
$: cat fibs
1
2
3
5
8
13
21
a file to be edited:
$: cat infile
1 a
2 b
3 c
4 d
5 e
6 f
7 g
8 h
9 i
10 j
11 k
12 l
13 m
14 n
15 o
16 p
17 q
18 r
19 s
20 t
21 u
22 v
23 q
24 x
25 y
26 z
3 steps -- first set your character variable if you're using one.
$: c='!'
Then make a script from the line number file -
$: sed "s#\$# s/\$/ $c/#" fibs >script
which creates:
$: cat script
1 s/$/ !/
2 s/$/ !/
3 s/$/ !/
5 s/$/ !/
8 s/$/ !/
13 s/$/ !/
21 s/$/ !/
It's a simple sed to add a sed substitution command for each line number, and sends the resulting script to a file. A few tricks here include using double-quotes to allow the character embedding, and #'s to allow the replacement text to include /'s without creating leaning-toothpick syndrome from all the backslash quoting.
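For comparison, the same generation step written with / as the sed delimiter needs every inner slash escaped, which is exactly the leaning-toothpick problem the # delimiter avoids (a sketch, same fibs file):
$: sed "s/\$/ s\/\$\/ $c\//" fibs >script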
Then run it against your input -
$: sed -i "$(<script)" infile
This does the work: it pulls the script file's contents in for sed to use, generating:
1 a !
2 b !
3 c !
4 d
5 e !
6 f
7 g
8 h !
9 i
10 j
11 k
12 l
13 m !
14 n
15 o
16 p
17 q
18 r
19 s
20 t
21 u !
22 v
23 q
24 x
25 y
26 z
Let me know if you want to tweak it.
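If you really do want the explicit loop from your question, a minimal sketch that edits the file in place (same fibs and infile names; note it runs sed once per line number rather than once in total):
c='!'
while read -r n; do
  sed -i "${n}s/\$/ $c/" infile    # append " $c" to the end of line $n
done < fibs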

How can I skip line with awk

I have a command like this:
tac $log | awk -v pattern="test" '$9 ~ pattern {print; exit}'
It shows me the last line in which $9 contains test text.
Like this:
Thu Mar 26 20:21:38 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_223123.txt b _ o r spy ftp 0 * c
Thu Mar 26 20:21:39 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123123.txt b _ o r spy ftp 0 * c
Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c
Thu Mar 26 20:21:41 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123124.txt b _ o r spy ftp 0 * c
-- >
Thu Mar 26 20:21:41 2015 1 10.8.0.22 94 /home/SAVED/zzz_test_123124.txt b _ o r spy ftp 0 * c
This command shows me the last matching line. But I need to skip lines that contain SAVED, so the output should be:
Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c
How can I do this?
To skip a line, you can match it, and use the next command.
$9 ~ /SAVED/ { next }
$9 ~ /\.txt$/ { print; exit }
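Combined with your original command, it would look like this (same tac $log pipeline as in the question):
tac $log | awk -v pattern="test" '$9 ~ /SAVED/ {next} $9 ~ pattern {print; exit}'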
You can also add another condition with !~ to prevent this second pattern from being matched (I use pattern2 to make it more generic; of course you can hardcode SAVED there):
$9 ~ pattern && $9 !~ pattern2
All together:
$ tac $log | awk -v pattern="test" -v pattern2="SAVED" '$9 ~ pattern && $9 !~ pattern2 {print; exit}'
Thu Mar 26 20:21:40 2015 1 10.8.0.22 94 /home/xxxyyy/zzz_test_123123.txt b _ o r spy ftp 0 * c
Use !~ to test if a line doesn't match a pattern.
awk -v pattern="test" $9 ~ pattern && $9 !~ /SAVED/ { print; exit; }

How can I use sort by custom date in file?

I have log file like this:
Fri Jan 30 13:52:57 2015 1 10.1.1.1 0 /home/test1/MAIL_201401301353.201501301352.19721.sqlLdr b _ i r test1 ftp 0 * c
Fri Jan 30 13:52:58 2015 1 10.1.1.1 0 /home/test2/MAIL_201401301354.201501301352.12848.sqlLdr b _ i r test2 ftp 0 * c
Fri Jan 30 13:53:26 2015 1 10.1.1.1 0 /home/test3/MAIL_201401301352.201501301353.17772.sqlLdr b _ i r test3 ftp 0 * c
I need to sort by the date value. The date value is the first 2014... number in the file name.
I can find date value like this:
echo $log | awk '{print $9}' | grep -oP '(?<!\d)201\d{9}' | head -n 1
How can I sort by this date value (new to old)?
To sort this file you can use:
sort -t_ -nk2,2 file
Fri Jan 30 13:53:26 2015 1 10.1.1.1 0 /home/test3/MAIL_201401301352.201501301353.17772.sqlLdr b _ i r test3 ftp 0 * c
Fri Jan 30 13:52:57 2015 1 10.1.1.1 0 /home/test1/MAIL_201401301353.201501301352.19721.sqlLdr b _ i r test1 ftp 0 * c
Fri Jan 30 13:52:58 2015 1 10.1.1.1 0 /home/test2/MAIL_201401301354.201501301352.12848.sqlLdr b _ i r test2 ftp 0 * c
Details:
-n # numerical sort
-t # set field separator as _
-k2,2 # sort on 2nd field
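Since you want new to old, add -r to reverse the order:
sort -t_ -rnk2,2 file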

Extract date from log file

I have a log line like this:
Tue Dec 2 10:03:46 2014 1 10.0.0.1 0 /home/test4/TEST_LOGIN_201312021003.201412021003.23872.sqlLdr b _ i r test4 ftp 0 * c
And I can print date value of this line like this.
echo $log | awk '{print $9}' | grep -oP '(?<!\d)201\d{9}' | head -n 1
I have another log line like this, how can I print date value?
Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c
I tried my awk/grep solution, but it just prints 201 and the 9 digits that follow it wherever it sees 201.
The subfolders and the file name encode the same date:
2014/12/11/16/20 --> 11 Dec 2014 16:20 <-- blablabla_data-2014_12_11_16_20.txt
note: /home/DATA1 is not static. year/month/day/hour/minute is static.
As the format in the path is /.../YYYY/MM/DD/HH/MM/filename, you can match the date block with 201\d/\d{2}/\d{2}/\d{2}/\d{2} in the grep expression:
$ log="Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data2_11_16_20.txt b _ i r spy ftp 0 * c"
$ echo "$log" | grep -oP '(?<!\d)201\d/\d{2}/\d{2}/\d{2}/\d{2}'
2014/12/11/16/20
And eventually remove the slashes with tr:
$ echo "$log" | grep -oP '(?<!\d)201\d/\d{2}/\d{2}/\d{2}/\d{2}' | tr -d '/'
201412111620
sed can also work, if you are acquainted with it:
echo "Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c"|sed 's#.*[[:alnum:]]*/\([[:digit:]]\{4\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}\).*#\1#'
output
2014/12/11/16/20
To remove the "/" characters, pipe the same command to tr -d '/'.
Full command line
echo "Tue Dec 9 10:48:13 2014 1 10.0.0.1 80 /home/DATA1/2014/12/11/16/20/blablabla_data-2014_12_11_16_20.txt b _ i r spy ftp 0 * c"|sed 's#.*[[:alnum:]]*/\([[:digit:]]\{4\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}/[[:digit:]]\{2\}\).*#\1#'|tr -d '/'
Output
201412111620

Shell script to find common values and write in particular pattern with subtraction math to range pattern

I need a shell script to get the common values in two files and write them in a range pattern to a new file, and also have the first value of each range reduced by 1.
$ cat file1
2
3
4
6
7
8
10
12
13
16
20
21
22
23
27
30
$ cat file2
2
3
4
8
10
12
13
16
20
21
22
23
27
Script that works:
awk 'NR==FNR{x[$1]=1} NR!=FNR && x[$1]' file1 file2 | sort | awk 'NR==1 {s=l=$1; next} $1!=l+1 {if(l == s) print l; else print s ":" l; s=$1} {l=$1} END {if(l == s) print l; else print s ":" l; s=$1}'
Script out:
2:4
8
10
12:13
16
20:23
27
Desired output:
1:4
8
10
11:13
16
19:23
27
Similar to sputnick's, except using comm to find the intersection of the file contents.
comm -12 <(sort file1) <(sort file2) |
sort -n |
awk '
function print_range() {
if (start != prev)
printf "%d:", start-1
print prev
}
FNR==1 {start=prev=$1; next}
$1 > prev+1 {print_range(); start=$1}
{prev=$1}
END {print_range()}
'
1:4
8
10
11:13
16
19:23
27
Try doing this (the only change from your script is printing s - 1 instead of s, so the lower bound of each range is reduced by 1):
awk 'NR==FNR{x[$1]=1} NR!=FNR && x[$1]' file1 file2 |
sort |
awk 'NR==1 {s=l=$1; next}
$1!=l+1 {if(l == s) print l; else print s -1 ":" l; s=$1}
{l=$1}
END {if(l == s) print l; else print s -1 ":" l; s=$1}'
