I'm learning bash, and here's a short script to assign deciles to the second column of file $1.
The complicating bit is the use of awk within the script, which leads to "ambiguous redirect" errors when I run it.
I would have gotten this done in SAS by now, but I like the idea of two lines of code doing the job.
How can I communicate the total number of rows (${N}) to awk within the script? Thanks.
N=$(wc -l < $1)
cat $1 | sort -t' ' -k2gr,2 | awk '{$3=int((((NR-1)*10.0)/"${N}")+1);print $0}'
You can set an awk variable from the command line using -v.
N=$(wc -l < "$1" | tr -d ' ')
sort -t' ' -k2gr,2 "$1" | awk -v n="$N" '{$3=int((((NR-1)*10.0)/n)+1);print $0}'
I added tr -d to get rid of the leading spaces that some versions of wc (e.g., on macOS/BSD) put in their output.
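If you prefer not to pass the variable with -v, awk can also read exported environment variables through its ENVIRON array; here is a minimal sketch of the same pipeline under that approach (untested against your data):
export N=$(wc -l < "$1" | tr -d ' ')
sort -t' ' -k2gr,2 "$1" | awk '{$3=int((((NR-1)*10.0)/ENVIRON["N"])+1);print $0}'
Either way, the point is that shell variables are never expanded inside a single-quoted awk program, so they have to be handed over explicitly.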
I am piping the result of grep to awk and using the result as a pattern for another grep inside a here document (the EOF block; I'm not sure of the terminology), but awk gives me blank results. Below is the part of the bash script that gave me issues.
ssh "$USER"#logs << EOF
zgrep $wgr $loc$env/app*$date* | awk -F":" '{print $5 "::" $7}' | awk -F"," '{print $1}' | sort | uniq | while read -r rid ; do
zgrep $rid $loc$env/app*$date*;
done
EOF
I am really drawing a blank here because there is no error, and I'm out of ideas.
Samples:
I am grepping log files that look like the one below:
app-server.log.2020010416.gz:2020-01-04 16:00:00,441 INFO [redacted] (redacted) [rid:12345::12345-12345-12345-12345-12345,...
I am interested in the rid, and I can grep for it in the logs again:
zgrep $rid $loc$env/app*$date*
loc, env and date work properly, but they are set outside of the EOF block.
The script as a whole connects over ssh properly, but I am getting no result.
The immediate problem is that the dollar signs are evaluated by the local shell because you don't (and presumably cannot) quote the here document (because then $wgr and $loc etc. would also not be expanded by the shell).
The quick fix is to backslash the dollar signs, but in addition, I see several opportunities to get rid of inelegant or wasteful constructs.
ssh "$USER"#logs << EOF
zgrep "$wgr" "$loc$env/app"*"$date"* |
awk -F":" '{v = \$5 "::" \$7; split(v, f, /,/); print f[1]}' |
sort -u | xargs -I {} zgrep {} "$loc$env"/app*"$date"*
EOF
If you want to add decorations around the final zgrep, probably revert to the while loop you had; but of course, you need to escape the dollar sign in that, too:
ssh "$USER"#logs << EOF
zgrep "$wgr" "$loc$env/app"*"$date"* |
awk -F":" '{v = \$5 "::" \$7; split(v, f, /,/); print f[1]}' |
sort -u |
while read -r rid; do
echo Dancing hamsters "\$rid" more dancing hamsters
zgrep "\$rid" "$loc$env"/app*"$date"*
done
EOF
Again, any unescaped dollar sign is evaluated by your local shell even before the ssh command starts executing.
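To see the difference, here is a minimal sketch (somehost is just a placeholder): in an unquoted here document, a bare dollar sign is expanded before ssh even runs, while a backslashed one survives to the remote side.
ssh somehost <<EOF
echo "local expansion:  $PWD"    # expanded by your local shell before ssh runs
echo "remote expansion: \$PWD"   # the backslash delays expansion to the remote shell
EOF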
Could you please try the following. Fair warning: I couldn't test it since I lack samples. With this approach we don't need to escape things inside the ssh here document.
##Configure/define your shell variables(wgr, loc, env, date, rid) here.
printf -v var_wgr %q "$wgr"
printf -v var_loc %q "$loc"
printf -v var_env %q "$env"
printf -v var_date %q "$date"
ssh -T -p your_port user@"$host" "bash -s $var_wgr $var_loc $var_env $var_date" <<'EOF'
# retrieve the values from the positional parameters that "bash -s" receives
wgr=$1 loc=$2 env=$3 date=$4
zgrep "$wgr" "$loc$env"/app*"$date"* | awk -F":" '{print $5 "::" $7}' | awk -F"," '{print $1}' | sort | uniq | while read -r rid ; do
zgrep "$rid" "$loc$env"/app*"$date"*
done
EOF
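The trick here is printf %q, which quotes a value so that the remote shell re-parses it safely. A quick illustration with a made-up value:
$ printf '%q\n' 'app log 2020*'
app\ log\ 2020\*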
I'm new to the Linux shell. I want to count the words in a file that start with "tol". I know there are tools to do this, such as awk, but I'm wondering if I could do it using grep or wc or other commands? awk seems intimidating to me. Thanks.
I tried grep and wc, like this:
grep tol test.txt | wc -w
But grep will give me the whole line.
If I tried the following:
grep '^tol$*' test.txt | wc -w
It only counts the lines that begin with tol.
How can I grep the words starting with tol?
Something like this:
grep -o '\<tol[[:alpha:]]*\>' test.txt | wc -w
\< - matches the beginning of a word,
\> - matches the end of a word.
[[:alpha:]] - avoids matching combinations like tol123 (you said you need only words).
-o - to show only matches, not the entire line.
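A quick sanity check with made-up sample words (note that tol123 is excluded, as intended):
$ echo "tolerance topaz tol123 tolstoy" | grep -o '\<tol[[:alpha:]]*\>' | wc -w
2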
You can do the same fairly simply with awk, e.g.
awk '{for(i=1;i<=NF;i++) $i~/^tol/ && n++} END {print n}'
Example
$ echo -e "tolerance topaz tolstoy\nbats toluene toledo" |
> awk '{for(i=1;i<=NF;i++) $i~/^tol/ && n++} END {print n}'
4
Another option is to translate all whitespace characters into linefeeds so that each word starts on a new line, then grep can count them itself:
echo -e "tolerance topaz\ttolstoy\nbats toluene toledo" | tr '[:space:]' '\n' | grep -c "^tol"
4
Or, if using a file called words.txt:
tr '[:space:]' '\n' < words.txt | grep -c "^tol"
This question already has answers here:
Get just the integer from wc in bash
(19 answers)
Closed 8 years ago.
I want to get only the number of lines in a file:
so I do:
$wc -l countlines.py
9 countlines.py
I do not want the filename, so I tried
$wc -l countlines.py | cut -d ' ' -f1
but this just echoes an empty line.
I just want the number 9 to be printed.
Use stdin and you won't have an issue with wc printing the filename:
wc -l < countlines.py
You can also use awk to count lines.
awk 'END { print NR }' countlines.py
where countlines.py is the file you want to count
If your file doesn't end with a \n (newline), wc -l gives a wrong result. Try it with the following simulated example:
echo "line1" > testfile #correct line with a \n at the end
echo -n "line2" >> testfile #added another line - but without the \n
Now the command
$ wc -l < testfile
1
returns 1, because wc counts the number of newline (\n) characters in a file.
Therefore, to count lines (and not \n characters) in a file, you should use
grep -c '' testfile
i.e., match the empty string in the file (which is true for every line) and count the matches with -c. For the above testfile it returns the correct 2.
Additionally, if you want to count only the non-empty lines, you can do it with
grep -c '.' file
Don't trust wc :)
PS: one of the strangest uses of wc is
grep 'pattern' file | wc -l
instead of
grep -c 'pattern' file
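A quick illustration with made-up input; both report the same number, but grep -c does it without spawning an extra wc process:
$ printf 'alpha\nbeta\nalphabet\n' | grep -c 'alpha'
2
$ printf 'alpha\nbeta\nalphabet\n' | grep 'alpha' | wc -l
2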
cut is being confused by the leading whitespace.
I'd use awk to print the 1st field here:
% wc -l countlines.py | awk '{ print $1 }'
As an alternative, wc won't print the file name if its input is piped from stdin:
$ cat countlines.py | wc -l
9
Yet another way:
cnt=$(wc -l < countlines.py )
echo "total is $cnt "
Redirecting the file into wc keeps the file name out of the output; then translate away the whitespace:
wc -l <countlines.py |tr -d ' '
Use awk like this:
wc -l countlines.py | awk '{print $1}'
I can make the following line work on ksh
for user in $( awk -F: '{ print $1}' /etc/passwd); do last $user | head -1 ; done | tr -s "\n" |sort
But I'd like to make it work in UNIX sh and UNIX csh. (In Linux sh it runs fine, but Linux is not UNIX...)
I know there are limitations here, since each UNIX variant seems to have its own variations on the syntax.
Update: sorry, there are some restrictions here:
I can't write to the disk, so I can't save scripts.
How do I write this in csh?
This awk script seems to be equivalent to your loop above:
{
    cmd = "last " $1
    if ((cmd | getline result) > 0)   # only print when last produced output
        printf "%s\n", result         # add back the newline that getline strips
    close(cmd)                        # close the pipe so we don't run out of file descriptors
}
use it like this:
awk -F: -f script_above.awk /etc/passwd
Pipe the output to sort
As a one-liner:
$ awk -F: '{cmd = "last "$1; if ((cmd | getline result) > 0) printf "%s\n", result; close(cmd)}' /etc/passwd
This might do the trick for you; it should be POSIX compliant:
last | awk 'FNR==NR{split($0,f,/:/);a[f[1]];next}($1 in a)&&++b[$1]==1' /etc/passwd - | sort
(While reading /etc/passwd, the FNR==NR block records each username in the array a; then, reading last's output from stdin (-), it prints only the first line seen for each of those users.)
You don't really need Awk for this.
while IFS=: read user _; do
last "$user" | head -n 1
done </etc/passwd # | grep .
Instead of reinventing it in csh, how about
sh -c 'while IFS=: read user _; do last "$user" | head -n 1; done </etc/passwd'
You will get empty output for users who have not logged in since wtmp was rotated; maybe add a | grep . to weed those out. (I added it commented out above.)
To reiterate, IFS=: sets the shell's internal field separator to a colon, so that read will split the password file on that.
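A tiny illustration of that splitting, with a made-up passwd-style line:
$ printf 'alice:x:1000:1000:Alice:/home/alice:/bin/sh\n' | while IFS=: read -r user _; do echo "$user"; done
alice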
Just use the simple command:
lastlog
I have written a script that finds the hash value of each word in a dictionary and outputs it in the form "word:md5sum". I then have a file of names, and I would like to pair each name with every hash value, i.e.
tom:word1hash
tom:word2hash
.
.
bob:word1hash
and so on. Everything works fine, but I cannot figure out the substitution. Here is my script.
#!/bin/bash
#/etc/dictionaries-common/words
cat words.txt | while read line; do echo -n "$line:" >> dbHashFile.txt
echo "$line" | md5sum | sed 's/[ ]-//g' >> dbHashFile.txt; done
cat users.txt | while read name
do
cat dbHashFile.txt >> nameHash.txt;
awk '{$1="$name"}' nameHash.txt;
cat nameHash.txt >> dbHash.txt;
done
The line
awk '{$1="$name"}' nameHash.txt;
is where I attempt to do the substitution.
Thank you for your help.
Try replacing the entire contents of the last loop (both cats and the awk) with:
awk -v name="$name" -F ':' '{ print name ":" $2 }' dbHashFile.txt >>dbHash.txt
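For context, with that change the second half of the script could look something like this (keeping the users.txt loop from the question):
while read -r name
do
    awk -v name="$name" -F ':' '{ print name ":" $2 }' dbHashFile.txt >> dbHash.txt
done < users.txt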