The awk command below (copied and pasted from Stack Overflow) works fine from the command line, but doesn't print anything when aliased:
awk '/WORD/ {print $3}' log.log | awk 'BEGIN{c=0} length($0){a[c]=$0;c++}END{p5=(c/100*5); p5=p5%1?int(p5)+1:p5; print a[c-p5-1]}'
alias getperc="awk '/WORD/ {print \$3}' log.log | awk 'BEGIN{c=0} length(\$0){a[c]=$0;c++}END{p5=(c/100*5); p5=p5%1?int(p5)+1:p5; print a[c-p5-1]}'"
I am fairly new to using bash. What am I missing here?
Don't use aliases. They require an additional layer of quoting, which is troublesome (as here), and they prevent you from being able to usefully parameterize or add conditional logic to your code.
A simple transliteration to a function is:
getperc() { awk '/WORD/ {print $3}' log.log | awk 'BEGIN{c=0} length($0){a[c]=$0;c++}END{p5=(c/100*5); p5=p5%1?int(p5)+1:p5; print a[c-p5-1]}'; }
A slightly more capable one, which will still use log.log by default, but which will also let you provide an alternate input file name (as in getperc alternate.log) or pipe to your function (as in cat alternate.log | getperc):
getperc() {
[[ -t 0 || $1 ]] || set -- - # use "-" (stdin) as the input file if stdin is not a TTY and no file was given
# ...this will let you pipe to your function.
awk '/WORD/ {print $3}' "${1:-log.log}" | awk 'BEGIN{c=0} length($0){a[c]=$0;c++}END{p5=(c/100*5); p5=p5%1?int(p5)+1:p5; print a[c-p5-1]}'
}
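Usage, per the description above:
getperc                        # reads log.log by default
getperc alternate.log          # reads the named file instead
cat alternate.log | getperc    # reads from stdin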
I think the confusion is that bash expands $3 and $0 inside the double quotes when the alias is defined, treating them as its own parameters instead of leaving them for awk. You can verify this by trying the following in bash:
alias ech="echo {print \$3}"
it will print just
{print }
but now try
alias ech="echo {print \$\3}"
it will print what you expected
{print $3}
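Applied to your alias: every \$ you escaped survives, but the one unescaped $0 (in a[c]=$0) is expanded the moment the alias is defined, so awk ends up storing empty strings and the final print produces nothing. You can check what the shell actually stored with:
alias getperc    # or: type getperc
# the a[c]=$0 part will show up as something like a[c]=bash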
Let me know if this solves your problem
In my project, I have two files.
The content of userid is:
6534
4524
4522
6635
The content of userpwinfo.txt is:
nsgg315_RJ:x:4520:100::/home-gg/users/nsgg315_RJ:/bin/bash
nsgg316_ZJY:x:4521:100::/home-gg/users/nsgg316_ZJY:/bin/bash
nsgg317_CPA:x:4522:100::/home-gg/users/nsgg317_CPA:/bin/bash
nsgg318_ZRL:x:4523:100::/home-gg/users/nsgg318_ZRL:/bin/bash
nsgg319_YYM:x:4524:100::/home-gg/users/nsgg319_YYM:/bin/bash
Now I want to print the usernames whose IDs are in userid. I wrote a bash script like this:
for i in $(cat userid)
do
#username=`awk -F: '{if($3=="$i") print $1}' /root/userpwinfo.txt`
#username=`awk -F: '$3=="$i" {print $1}' /root/userpwinfo.txt`
#username=`awk -F: '{if($3~/$i/) print $1}' /root/userpwinfo.txt`
username=`awk -F: '{if($3==$i) print $1}' /root/userpwinfo.txt`
echo $username
done
Unfortunately, it shows nothing. The correct result should be:
nsgg319_YYM
nsgg317_CPA
I have tried this on the command line:
awk -F: '{if($3==4524) print $1}' /root/userpwinfo.txt
It is OK
Maybe if($3==$i) is wrong in the shell. Who can help me?
Your $i is a shell variable, but it's inside the single quotes, so awk will try to interpret it instead of the shell.
Try this:
username=`awk -F: '{if($3=='$i') print $1}' /root/userpwinfo.txt`
Note that the $i now sits between a closing and an opening ' mark, so it's outside the block that awk interprets and gets expanded by the shell instead.
Also note that if $i happens to be empty, your awk program becomes if($3==), which is invalid and will produce an error.
I'd also like to point out that awk is meant to be written as pattern/action pairs. You shouldn't need an if inside the action block unless you're doing something unusual, so your command would be more appropriately written as:
username=`awk -F: '($3=='$i'){print $1}' /root/userpwinfo.txt`
Note that even this is not a very good solution, but these changes give you plenty to think about for now. When you're more familiar with awk, come back and check the comments. ;)
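For reference, a cleaner variant (just a sketch, keeping your shell loop) is to pass the shell variable into awk with its -v option, so no quote-splicing is needed at all:
for i in $(cat userid); do
    username=$(awk -F: -v id="$i" '$3 == id {print $1}' /root/userpwinfo.txt)
    echo "$username"
done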
If the username is what you need from the 2 files, you could try:
$ cat userpwinfo.txt
nsgg315_RJ:x:4520:100::/home-gg/users/nsgg315_RJ:/bin/bash
nsgg316_ZJY:x:4521:100::/home-gg/users/nsgg316_ZJY:/bin/bash
nsgg317_CPA:x:4522:100::/home-gg/users/nsgg317_CPA:/bin/bash
nsgg318_ZRL:x:4523:100::/home-gg/users/nsgg318_ZRL:/bin/bash
nsgg319_YYM:x:4524:100::/home-gg/users/nsgg319_YYM:/bin/bash
$ cat userid.txt
6534
4524
4522
6635
$ awk -F":" ' { if( NR==FNR ) { a[$3]=$1; next } ; if(a[$1]) print a[$1] }' userpwinfo.txt userid.txt
nsgg319_YYM
nsgg317_CPA
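In case the NR==FNR idiom is unfamiliar, here is the same command spread out with comments (functionally the same, just easier to read):
awk -F":" '
  NR==FNR { a[$3] = $1; next }   # first file (userpwinfo.txt): remember username keyed by numeric id
  a[$1]   { print a[$1] }        # second file (userid.txt): print the name if the id was seen
' userpwinfo.txt userid.txt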
Sometimes I want a bash script that's mostly a help file. There are probably better ways to do things, but sometimes I want to just have a file called "awk_help" that I run, and it dumps my awk notes to the terminal.
How can I do this easily?
Another idea: use #!/bin/cat as the shebang. This will literally answer the title of your question, since the shebang line will be displayed as well.
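A minimal sketch of that idea (awk_help here is just a hypothetical file name):
#!/bin/cat
# Basic form of all awk commands
awk 'search_pattern { program actions }' file
Running ./awk_help (after chmod +x) simply prints the whole file, shebang line included.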
Turns out it can be done as pretty much a one-liner; thanks to @CharlesDuffy for the suggestions!
Just put the following at the top of the file and you're done (grep -v filters that line itself out of the output, and the exit keeps bash from trying to execute the notes below it):
cat "$BASH_SOURCE" | grep -v EZREMOVEHEADER; exit
So for my awk_help example, it'd be:
cat "$BASH_SOURCE" | grep -v EZREMOVEHEADER
# Basic form of all awk commands
awk 'search_pattern { program actions }' file
# advanced awk
awk 'BEGIN {init} search1 {actions} search2 {actions} END { final actions }' file
# awk boolean example for matching "(me OR you) OR (john AND ! doe)"
awk '( /me|you/ ) || (/john/ && ! /doe/ )' /path/to/file
# awk - print # of lines in file
awk 'END {print NR,"coins"}' coins.txt
# Sum up gold ounces in column 2, and find out value at $425/ounce
awk '/gold/ {ounces += $2} END {print "value = $" 425*ounces}' coins.txt
# Print the last column of each line in a file, using a comma (instead of space) as a field separator:
awk -F ',' '{print $NF}' filename
# Sum the values in the first column and pretty-print the values and then the total:
awk '{s+=$1; print $1} END {print "--------"; print s}' filename
# functions available
length($0) > 72, toupper,tolower
# count the # of times the word PASSED shows up in the file /tmp/out
cat /tmp/out | awk 'BEGIN {X=0} /PASSED/{X+=1; print $1 X}'
# awk regex operators
https://www.gnu.org/software/gawk/manual/html_node/Regexp-Operators.html
I found another solution that works on Mac and Linux and does exactly what one would hope.
Just use the following as your "shebang" line, and it'll output everything from line 2 on down:
test.sh
#!/usr/bin/tail -n+2
hi there
how are you
Running this gives you what you'd expect:
$ ./test.sh
hi there
how are you
And another possible solution: just use less as the interpreter; that way your file opens in a searchable pager.
#!/usr/bin/less
And this way you can grep it for something too, e.g.
$ ./test.sh | grep something
I have a file with multiple lines that start with the same keyword. I only want to modify one of them, and it's easy to distinguish the two: I want the one under the [dbinfo] section. The domain name is static, so I know that won't change. Here is what I have so far:
awk -F '=' '$1 ~ /^dbhost/ {print $NF};' myfile.txt
myfile.txt
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=ec0001.us-east-1.localdomain
dbname=ez098sf_default
dbpass=XXXXXX
You can use this awk command to first check for the presence of the [dbinfo] section and then modify the dbhost parameter:
awk -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=newhost
dbname=ez098sf_default
dbpass=XXXXXX
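If the goal is to update the file itself rather than just print the result, one common pattern (a sketch, with file.tmp as a throwaway name) is to write to a temporary file and move it back:
awk -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file > file.tmp && mv file.tmp file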
You want to utilize a little bit of a state machine here:
awk -F '=' '
$0 ~ /^\[.*\]/ {in_db_info=($0=="[dbinfo]")}
$0 ~ /^dbhost/{if (in_db_info) print $2;}' myfile.txt
You can also do it with sed:
sed '/\[dbinfo\]/,/\[/s/\(^dbhost=\).*/\1domain.com/' myfile.txt
I am trying to run a shell command from within awk for each line of a file, and the shell command needs one input argument. I tried to use system(), but it didn't recognize the input argument.
Each line of this file is the path of a file, and I want to run a command to process that file. So, for a simple example, I want to run the wc command for each line and pass $1 to wc.
awk '{system("wc $1")}' myfile
You are close; you just have to concatenate the awk variable into the command string:
awk '{system("wc "$1)}' myfile
You cannot capture the output of an awk system() call; you can only get its exit status. Use the getline/pipe or getline/variable/pipe constructs instead:
awk '{
cmd = "your_command " $1
while (cmd | getline line) {
do_something_with(line)
}
close(cmd)
}' file
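Applied to the wc example from the question, that construct would look something like this (just a sketch):
awk '{
    cmd = "wc " $1                  # build the command for this file name
    if ((cmd | getline line) > 0)   # read the single line of wc output
        print line
    close(cmd)                      # close so the command is re-run for the next file
}' myfile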
FYI here's how to use awk to process files whose names are stored in a file (providing wc-like functionality in this example):
gawk '
NR==FNR { ARGV[ARGC++]=$0; next }   # treat each line of "file" as the name of an input file
{ nW+=NF; nC+=(length($0) + 1) }
ENDFILE { if (NR != FNR) { print FILENAME, FNR, nW, nC; nW=nC=0 } }   # skip the list file itself
' file
The above uses GNU awk for ENDFILE. With other awks just store the values in an array and print in a loop in the END section.
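A rough portable sketch of that approach (POSIX awk, no ENDFILE; note that for-in iteration order is unspecified):
awk '
NR==FNR { ARGV[ARGC++] = $0; next }    # first file: add each listed name to the argument list
{ l[FILENAME] = FNR; w[FILENAME] += NF; c[FILENAME] += length($0) + 1 }
END { for (f in l) print f, l[f], w[f], c[f] }
' file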
I would suggest another solution:
awk '{print $1}' myfile | xargs wc
The difference is that it executes wc once with multiple arguments. This often works (for example, with the kill command).
Or use a pipe | as in bash, then retrieve the output into a variable with awk's getline, like this:
zcat /var/log/fail2ban.log* | gawk '/.*Ban.*/ {print $7};' | sort | uniq -c | sort | gawk '{ "geoiplookup " $2 "| cut -f2 -d: " | getline geoip; print $2 "\t\t" $1 " " geoip}'
That line will print all the banned IPs from your server along with their origin (country) using the geoip-bin package.
The last part of that one-liner is the part that concerns us:
gawk '{ "geoiplookup " $2 "| cut -f2 -d: " | getline geoip; print $2 "\t\t" $1 " " geoip}'
It simply says: run the command "geoiplookup 182.193.192.4 | cut -f2 -d:" ($2 gets substituted, as you may guess) and put the result of that command into geoip (that's the | getline geoip bit). Next, print $2 and $1 followed by whatever ended up in the geoip variable.
The complete example and the results can be found here, an article I wrote.
I am working on the following bash script:
# contents of dbfake file
1 100% file 1
2 99% file name 2
3 100% file name 3
#!/bin/bash
# cat out data
cat dbfake |
# select lines containing 100%
grep 100% |
# print the first and third columns
awk '{print $1, $3}' |
# echo out id and file name and log
xargs -rI % sh -c '{ echo %; echo "%" >> "fake.log"; }'
exit 0
This script works OK, but how do I print everything in column $3 and all the columns after it?
You can use cut instead of awk in this case:
cut -f1,3- -d ' '
awk '{ $2 = ""; print }' # remove col 2
If you don't mind a little whitespace:
awk '{ $2="" }1'
But note the UUOC (useless use of cat) and the grep, both of which awk can handle itself:
< dbfake awk '/100%/ { $2=""; print }' | ...
If you'd like to trim that whitespace:
< dbfake awk '/100%/ { $2=""; sub(FS "+", FS); print }' | ...
For fun, here's another way using GNU sed:
< dbfake sed -rn '/100%/s/^(\S+)\s+\S+(.*)/\1\2/p' | ...
All you need is:
awk 'sub(/.*100% /,"")' dbfake | tee "fake.log"
Others responded in various ways, but I want to point out that using xargs to multiplex output is a rather bad idea.
Instead, why don't you:
awk '$2=="100%" { sub("100%[[:space:]]*",""); print; print >>"fake.log"}' dbfake
That's all. You don't need grep, you don't need multiple pipes, and you definitely don't need to fork a shell for every line you're outputting.
You could do awk ...; print}' | tee fake.log, but there is not much point in forking tee if awk can handle it as well.
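Spelled out, that tee variant would be something like the following (note tee -a to keep the append behaviour of >>):
awk '$2=="100%" { sub("100%[[:space:]]*",""); print }' dbfake | tee -a fake.log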