I have paths where there could be one \ or multiple:
C:\folder\file.log
C:\folder\folder\file.log
C:\folder\folder\folder\file.log
I want to get this:
file.log
This works, but it's static because the field number in print is hard-coded:
cat C:\folder\file.log | awk -F "\\" "{print $3}"
cat C:\folder\folder\file.log | awk -F "\\" "{print $4}"
cat C:\folder\folder\folder\file.log | awk -F "\\" "{print $5}"
How can I make awk always grab the data after the last \?
You need the $NF special variable: NF gives you the number of fields in your input, so $NF is always the last field.
echo C:\folder\file.log | awk -F "\\" "{print $NF}"
with grep:
grep -o '[^\\]*$' file
If you have awk, do you also have "basename"?
And as pointed out above, Windows has similar capabilities built in.
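Putting the answers together, a small sketch (assuming a POSIX shell with awk and a grep that supports -o; the sample path is from the question):

```shell
# Grab everything after the last backslash, three ways.
path='C:\folder\folder\file.log'

# awk: split on backslash and print the last field
printf '%s\n' "$path" | awk -F '\\' '{print $NF}'   # file.log

# grep -o: match the run of non-backslash characters at the end
printf '%s\n' "$path" | grep -o '[^\\]*$'           # file.log

# pure shell: strip the longest prefix ending in a backslash
echo "${path##*\\}"                                 # file.log
```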
Related
I have the text "handler: xyz.lambda_handler" in a file, and I want "xyz.lambda_handler" (i.e. the text after "handler:") as output, using a shell script. How can I do this?
I have tried
awk -F '${handler}' '{print $1}' filename | awk '{print $2}'
grep handler filename
commands, but I'm not getting the correct output described in the question.
I combined two commands and got my answer:
grep Handler: filename | awk -F '${handler}' '{print $1}' | awk '{print $2}'
grep givepattern givefilename | awk -F '${givepattern}' '{print $1}' | awk '{print $2}'
To print only the matched parts of a matching line, use grep's -o option:
grep -o 'xyz\.lambda_handler' filename
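If the goal is the text after "handler:" rather than a hard-coded value, one awk invocation is enough (a sketch; /tmp/demo.txt is a stand-in for your filename):

```shell
# Print the second whitespace-separated field of any line containing "handler:"
printf 'handler: xyz.lambda_handler\n' > /tmp/demo.txt
awk '/handler:/ {print $2}' /tmp/demo.txt   # xyz.lambda_handler
```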
I have a list of names in a file that I need to create directories from. The list looks like:
Ada Lovelace
Jean Bartik
Leah Culver
I need the folders to be named exactly the same, preserving the whitespace(s). But with
awk '{print $0}' myfile | xargs mkdir
I create separate folders for each word
Ada
Lovelace
Jean
Bartik
Leah
Culver
The same happens with
awk '{print $1 " " $2}' myfile | xargs mkdir
Where is the error?
Using GNU xargs, you can use the -d option to set the delimiter to \n only. This way you can avoid awk altogether.
xargs -d '\n' mkdir -p < file
If you don't have GNU xargs, then you can use tr to convert all \n to \0 first:
tr '\n' '\0' < file | xargs -0 mkdir
@birgit: try the following; it is based entirely on the sample Input_file you provided.
awk -vs1="\"" 'BEGIN{printf "mkdir ";}{printf("%s%s%s ",s1,$0,s1);} END{print ""}' Input_file | sh
awk '{ system ( sprintf( "mkdir \"%s\"", $0)) }' YourFile
# OR
awk '{ print "mkdir \"" $0 "\"" | "/bin/sh" }' YourFile
# OR for 1 subshell
awk '{ Cmd = sprintf( "%s%smkdir \"%s\"", Cmd, (NR==1?"":"\n"), $0) } END { system ( Cmd ) }' YourFile
The last version is better because it creates only one subshell.
If there is a huge number of folders (hitting the shell's argument-length limit), you could loop and build several smaller commands instead.
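The awk-to-sh pipelines above are fragile when names contain quotes; a plain shell read loop sidesteps the quoting entirely (a sketch, assuming the names live in myfile):

```shell
# Create one directory per line, whitespace preserved.
while IFS= read -r name; do
  mkdir -p -- "$name"
done < myfile
```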
I need to run a hadoop command to list all live nodes, then reformat the output with an awk command and finally store the result in a variable. awk uses a different delimiter each time I call it:
hadoop job -list-active-trackers | sort | awk -F. '{print $1}' | awk -F_ '{print $2}'
It outputs results like this:
hadoop-dn-11
hadoop-dn-12
...
Then I put the whole command in a variable to print out the result line by line:
var=$(sudo -H -u hadoop bash -c "hadoop job -list-active-trackers | sort | awk -F "." '{print $1}' | awk -F "_" '{print $2}'")
printf %s "$var" | while IFS= read -r line
do
echo "$line"
done
The awk -F didn't work; it outputs the result as:
tracker_hadoop-dn-1.xx.xsy.interanl:localhost/127.0.0.1:9990
tracker_hadoop-dn-1.xx.xsy.interanl:localhost/127.0.0.1:9390
Why doesn't the awk with -F work correctly, and how can I fix it?
var=$(sudo -H -u hadoop bash -c "hadoop job -list-active-trackers | sort | awk -F "." '{print $1}' | awk -F "_" '{print $2}'")
Because you're enclosing the whole command in double quotes, your shell expands the variables $1 and $2 before launching sudo. This is what the sudo command looks like (I'm assuming $1 and $2 are empty):
sudo -H -u hadoop bash -c "hadoop job -list-active-trackers | sort | awk -F . '{print }' | awk -F _ '{print }'"
So, you see your awk commands are printing the whole line instead of just the first and 2nd fields respectively.
This is merely a quoting challenge:
var=$(sudo -H -u hadoop bash -c 'hadoop job -list-active-trackers | sort | awk -F "." '\''{print $1}'\'' | awk -F "_" '\''{print $2}'\')
A bash single-quoted string cannot contain single quotes, so that's why you see ...'\''... -- to close the string, concatenate an escaped literal single quote, then re-open the string.
Another way is to escape the vars and inner double quotes:
var=$(sudo -H -u hadoop bash -c "hadoop job -list-active-trackers | sort | awk -F \".\" '{print \$1}' | awk -F \"_\" '{print \$2}'")
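The pitfall is easy to reproduce without hadoop (a sketch; the tracker line is invented):

```shell
line='tracker_hadoop-dn-11.xx.xsy.internal'

# Double quotes: the outer shell expands $1 to nothing before bash -c runs,
# so the inner awk program becomes '{print }' and prints the whole line.
printf '%s\n' "$line" | bash -c "awk -F . '{print $1}'"

# Escaping as \$1 lets the $1 reach awk intact.
printf '%s\n' "$line" | bash -c "awk -F . '{print \$1}'"   # tracker_hadoop-dn-11
```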
I have a file with very simple syntax:
cat /tmp/test
ARCH=""prtconf -b | awk '/^name:/ {print $2}'
I tried to grep it:
cat /tmp/test | grep "prtconf -b | awk '/^name:/ {print $2"
ARCH=""prtconf -b | awk '/^name:/ {print $2}'
Let's make the grep string a little longer by adding } to the end:
cat /tmp/test | grep "prtconf -b | awk '/^name:/ {print $2"}
Nothing is found.
Why does grep stop working when I add } to the end of the pattern?
OS is Solaris 10U11
In double quotes, $2 refers to a shell positional parameter, so here the shell substitutes a blank into the pattern. You need to escape the $ with a backslash, like \$:
cat /tmp/test | grep "prtconf -b | awk '/^name:/ {print \$2}"
Without the } your pattern worked because it matched the literal text prtconf -b | awk '/^name:/ {print  that is actually in your input. Once you add } (with $2 expanding to nothing), grep looks for prtconf -b | awk '/^name:/ {print }, which isn't in your file, so nothing is shown.
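When the pattern is a fixed string, grep -F (fgrep on older Solaris) avoids both the $ expansion and regex metacharacters; a sketch recreating the file from the question:

```shell
# Recreate the file content, then match it as a literal string.
printf '%s\n' "ARCH=\"\"prtconf -b | awk '/^name:/ {print \$2}'" > /tmp/test
# -F treats the pattern literally; \$ keeps the dollar away from the shell.
grep -F "awk '/^name:/ {print \$2}'" /tmp/test
```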
I have a simple command (part of a bash script) that I'm piping through awk, but I can't seem to suppress the final record separator without then piping to sed. (Yes, I have many choices, and mine is sed.) Is there a simpler way that avoids the last pipe?
dolls=$(egrep -o 'alpha|echo|november|sierra|victor|whiskey' /etc/passwd \
| uniq | awk '{IRS="\n"; ORS=","; print}' | sed s/,$//);
Without the sed, this produces output like echo,sierra,victor, and I'm just trying to drop the last comma.
You don't need awk, try:
egrep -o ....uniq|paste -d, -s
Here is another example:
kent$ echo "a
b
c"|paste -d, -s
a,b,c
Also, I think your chained command could be simplified; awk could do everything in a one-liner.
Instead of egrep, uniq, awk, sed etc., all of this can be done in one single awk command:
awk -F":" '!($1 in a){l=l $1 ","; a[$1]} END{sub(/,$/, "", l); print l}' /etc/passwd
Here is a small and quite straightforward one-liner in awk that suppresses the final record separator:
echo -e "alpha\necho\nnovember" | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=","
Gives:
alpha,echo,november
So, your example becomes:
dolls=$(egrep -o 'alpha|echo|november|sierra|victor|whiskey' /etc/passwd | uniq | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=",");
The benefit of using awk over paste or tr is that this also works with a multi-character ORS.
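A quick sketch with a two-character separator, which paste -d cannot produce:

```shell
# Buffer each line in s; print the previous one with ORS=", ",
# then print the last one with ORS cleared so nothing trails it.
printf 'alpha\necho\nnovember\n' |
  awk 'y {print s} {s=$0; y=1} END {ORS=""; print s}' ORS=', '
# alpha, echo, november
```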
Since you tagged it bash here is one way of doing it:
#!/bin/bash
# Read the /etc/passwd file in to an array called names
while IFS=':' read -r name _; do
names+=("$name");
done < /etc/passwd
# Assign the content of the array to a variable
dolls=$( IFS=, ; echo "${names[*]}")
# Display the value of the variable
echo "$dolls"
One more terse option uses mawk's paragraph mode: with RS= the whole input is one record, FS='\n' makes each line a field, and assigning to NF rebuilds $0 with the , output separator (the _==$NF term just drops a trailing empty field, and the non-zero result triggers the default print):
echo "a
b
c" |
mawk 'NF-= _==$NF' FS='\n' OFS=, RS=
a,b,c