I have these lines in my text file:
msg_wdraw[] = "whatever a sentence here,"
"This is the second part of this text1 ."
msg_sp2million[] = "whatever a sentence here,"
"This is the second part of this text2."
I need the text of msg_sp2million, up to the final period ".", and want to print it out, i.e.:
"whatever a sentence here,"
"This is the second part of this text2."
I tried this: sed -n "/msg_sp2million/,/./p" filename.txt
However, this sed command also returns the value of msg_wdraw (the first variable).
I also tried awk, grep, and other sed variants, but failed eventually.
How can I fix this problem? And why does this return not only the value of msg_sp2million but also the value of msg_wdraw?
Please help.
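One likely culprit: in a sed address range, /./ matches any line that contains at least one character, so it is far too loose an end pattern. Escaping the dot and anchoring it at end of line gives a usable end condition for the sample data. A sketch (the data is recreated from the question; the end pattern assumes the wanted value always ends in `."`):

```shell
# recreate the sample file from the question
cat > filename.txt <<'EOF'
msg_wdraw[] = "whatever a sentence here,"
"This is the second part of this text1 ."
msg_sp2million[] = "whatever a sentence here,"
"This is the second part of this text2."
EOF

# escape the dot and anchor the end pattern at end-of-line,
# so the range only closes on a line that literally ends in ."
sed -n '/msg_sp2million/,/\."$/p' filename.txt
# -> msg_sp2million[] = "whatever a sentence here,"
#    "This is the second part of this text2."
```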
Maybe something like this:
awk '/msg_sp2million/{ split($0,a,"="); print a[length(a)]; getline; print}' file.txt
Match regexp, print what comes after the =, get next line, and print that too.
Returns:
"whatever a sentence here,"
"This is the second part of this text2."
Using a simple awk command:
awk -F '= *' -v RS='.' -v ORS='."\n' '$1 ~ /msg_sp2million/ {sub(/" *\n */, "\" ", $2);
print $2}' file
"whatever a sentence here," "This is the second part of this text2."
I'm unable to add my solution (a POSIX-compliant derivative of qwwqwwq's solution, referred to as qww below) as a comment. So, qww's solution works, but ONLY in GNU awk from a certain version onward (apparently 3.1.5; see also http://awk.freeshell.org/AwkFeatureComparison).
Tip: Try
awk -W posix '/msg_sp2million/{ split($0,a,"="); print a[length(a)]; getline; print}' file.txt
in a non-GNU environment and you will almost certainly get an error message, e.g. about using an array in a scalar context.
The following solution should also work on an HP-UX workstation
(the -W posix may of course be omitted, but it is invaluable while in the testing stage):
awk -W posix '/msg_sp2million/{ amount=split($0,a,"="); print a[amount]; getline; print}' file.txt
Related
I have this awk command:
echo www.host.com |awk -F. '{$1="";OFS="." ; print $0}' | sed 's/^.//'
which extracts the domain from the hostname:
host.com
that command works on CentOS 7 (awk 4.0.2), but it does not work on Ubuntu 19.04 (awk 4.2.1) nor on Alpine (gawk 5.0.1), where the output is:
host com
How can I fix that awk expression so that it works in recent awk versions?
For your provided samples, please try the following. It matches the regex from the very first . to the end of the line, then prints everything after that first dot.
echo www.host.com | awk 'match($0,/\..*/){print substr($0,RSTART+1,RLENGTH-1)}'
OP's code fix: in case the OP wants to stay with their own attempted code, the following may help. There are 2 points here: 1st, no other command is needed alongside awk for the processing; 2nd, the values of FS and OFS should be set once in the BEGIN section rather than on every line.
echo www.host.com | awk 'BEGIN{FS=OFS="."} {$1="";sub(/\./,"");print}'
To get the domain, use:
$ echo www.host.com | awk 'BEGIN{FS=OFS="."}{print $(NF-1),$NF}'
host.com
Explained:
awk '
BEGIN { # before processing the data
FS=OFS="." # set input and output delimiters to .
}
{
print $(NF-1),$NF # then print the next-to-last and last fields
}'
It also works if you have arbitrarily long fqdns:
$ echo if.you.have.arbitrarily.long.fqdns.example.com |
awk 'BEGIN{FS=OFS="."}{print $(NF-1),$NF}'
example.com
And yeah, funny, your version really works with 4.0.2. And awk version 20121220.
Update:
Updated with some content-checking features, see comments. Are there domains that go deeper than three levels?
$ echo and.with.peculiar.fqdns.like.co.uk |
awk '
BEGIN {
FS=OFS="."
pecs["co\034uk"]   # "\034" is awk's default SUBSEP, matching the ($(NF-1),$NF) in pecs test below
}
{
print (($(NF-1),$NF) in pecs?$(NF-2) OFS:"")$(NF-1),$NF
}'
like.co.uk
You got 2 very good awk answers, but I believe this should be handled with cut because of the simplicity it offers in getting all fields starting from a known position:
echo 'www.host.com' | cut -d. -f2-
host.com
Options used are:
-d.: Set delimiter as .
-f2-: Extract all the fields starting from position 2
What you are observing is the result of a bug in GNU awk which was fixed in release 4.2.1. The changelog states:
2014-08-12 Arnold D. Robbins
OFS being set should rebuild $0 using previous OFS if $0 needs to be
rebuilt. Thanks to Mike Brennan for pointing this out.
* awk.h (rebuild_record): Declare.
* eval.c (set_OFS): If not being called from var_init(), check if $0 needs rebuilding. If so, parse the record fully and rebuild it. Make OFS point to a separate copy of the new OFS for next time, since OFS_node->var_value->stptr was already updated at this point.
* field.c (rebuild_record): Is now extern instead of static. Use OFS and OFSlen instead of the value of OFS_node.
When reading the code in the OP, it states:
awk -F. '{$1="";OFS="." ; print $0}'
which, according to POSIX does the following:
-F.: set the field separator FS to represent the <dot>-character
read a record
Perform field splitting with FS="."
$1="": redefine field 1 and rebuild record $0 using OFS. At this time, OFS is a single space. If the record $0 was www.foo.com, it now reads _foo_com (underscores represent spaces). The number of fields is recomputed and is now just one, as the record no longer contains the "."-separator.
OFS=".": redefine the output field separator OFS to be the <dot>-character. This is where the bug occurred: GNU awk knew that a rebuild needed to happen, but (before the fix) performed it with the new OFS instead of the old OFS.
print $0: print the record $0, which is now _foo_com.
The minimal change to your program would be:
awk -F. '{OFS="."; $1=""; print $0}'
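Note that this minimal change still leaves the empty first field in place, so the output keeps a leading separator and the original sed 's/^.//' from the question is still needed. A quick sketch:

```shell
# OFS is set before the field assignment, so the rebuild uses "."
# but the empty $1 leaves a leading dot:
echo www.host.com | awk -F. '{OFS="."; $1=""; print $0}'
# -> .host.com

# the trailing sed strips that leading character:
echo www.host.com | awk -F. '{OFS="."; $1=""; print $0}' | sed 's/^.//'
# -> host.com
```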
The clean change would be:
awk 'BEGIN{FS=OFS="."}{$1="";print $0}'
The perfect change would be to replace the awk and sed with the cut solution from Anubahuva's answer.
If you have the name in a shell variable, you could use:
var=www.foo.com
echo ${var#*.}
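This is plain POSIX parameter expansion: # strips the shortest matching prefix, while ## strips the longest. A quick sketch (variable name taken from the answer above):

```shell
var=www.foo.com
echo "${var#*.}"    # shortest leading match of *. removed
# -> foo.com
echo "${var##*.}"   # longest leading match of *. removed
# -> com
```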
file.csv:
XA90;"standard"
XA100;"this is
the multi-line"
XA110;"other standard"
I want to grep the "XA100" entry like this:
grep XA100 file.csv
to obtain this result:
XA100;"this is
the multi-line"
but grep returns only one line:
XA100;"this is
The file contains 3 entries.
The "XA100" entry contains a multi-line field.
And grep doesn't seem to be the right tool to "grep" a CSV file containing multi-line fields.
Do you know a way to get the job done?
Edit: the real-world file contains many columns. The searched-for term can be in any column (not at the beginning of the line, nor at the beginning of a field). All fields are encapsulated by ". Any field can contain a multi-line value, from 1 line to any number of lines, and this cannot be predicted.
Give this line a try:
awk '/^XA100;/{p=1}p;p&&/"$/{p=0}' file
I extended your example a bit:
kent$ cat f
XA90;"standard"
XA100;"this is
the
multi-
line"
XA110;"other standard"
kent$ awk '/^XA100;/{p=1}p;p&&/"$/{p=0}' f
XA100;"this is
the
multi-
line"
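Spelled out, the one-liner is three pattern-action rules driven by a flag p (same logic, just formatted; the sample file is recreated from the extended example above):

```shell
# recreate the extended sample file f from above
printf '%s\n' 'XA90;"standard"' 'XA100;"this is' 'the' 'multi-' 'line"' \
  'XA110;"other standard"' > f

awk '
  /^XA100;/ { p = 1 }      # start of the wanted record: raise the flag
  p                        # while the flag is up, print the current line
  p && /"$/ { p = 0 }      # a line ending in " closes the record
' f
```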
In the comments you mention: "In the real world file, each line starts with "." I assume they also end with ", and present you this:
Test file:
$ cat file
"single line"
"multi-
lined"
Code and outputs:
$ awk 'BEGIN{RS=ORS="\"\n"} /single/' file
"single line"
$ awk 'BEGIN{RS=ORS="\"\n"} /m/' file
"multi-
lined"
You can also parametrize the search:
$ awk -v s="multi" 'BEGIN{RS=ORS="\"\n"} match($0,s)' file
"multi-
lined"
Try:
Solution 1 (GNU awk, for its multi-character RS):
awk -v RS="XA" 'NR==3{gsub(/$\n$/,"");print RS $0}' Input_file
This makes the record separator the string XA, looks for the 3rd record, globally substitutes away the extra newline at the end of the record, and then prints the record separator followed by the current record.
Solution 2:
awk '/XA100/{print; while((getline) > 0 && $0 !~ /^XA/) print}' Input_file
This looks for the string XA100, prints the matching line, and then uses getline to keep reading and printing lines until one starts with XA; checking getline's return value stops the loop safely at end of file.
If this file was exported from MS Excel or similar, then lines end with \r\n while the newlines inside quotes are just \ns, so all you need is:
$ awk -v RS='\r\n' '/XA100/' file
XA100;"this is
the multi-line"
The above uses GNU awk for multi-char RS. On some platforms, e.g. cygwin, you'll have to add -v BINMODE=3 so gawk sees the \rs rather than them getting stripped by underlying C primitives.
Otherwise, it's extremely hard to parse CSV files in general without a real CSV parser (which awk currently doesn't have but is in the works for GNU awk) but you could do this (again with GNU awk for multi-char RS):
$ cat file
XA90;"standard"
XA100;"this is
the multi-line"
XA110;"other standard"
$ awk -v RS="\"[^\"]*\"" -v ORS= '{gsub(/\n/," ",RT); print $0 RT}' file
XA90;"standard"
XA100;"this is the multi-line"
XA110;"other standard"
to replace all newlines within quotes with blank chars and then process it as regular 1-line-per-record file.
Using PS's response, this works for the small example:
sed 's/^X/\n&/' file.csv | awk -v RS= '/XA100/ {print}'
For my real-world CSV file, with many columns, with the searched term possibly anywhere, with an unknown number of multi-line fields, with " characters escaped as "", with continuation lines beginning with ", and with all fields encapsulated by ", this works. Note the exclusion of a second " character in the sed part:
sed 's/^"[^"]/\n&/' file.csv | awk -v RS= '/RESEARCH_TERM/ {print}'
because the first column of an entry cannot start with "". The first column always looks like "XXXXXXXXX", where X is any character but ".
Thank you all for so many responses; other solutions may also work, depending on the CSV file format you use.
I'm on a Mac, and I want to find a field in a CSV file adjacent to a search string
This is going to be a single file with a hard path; here's a sample of it:
84:a5:7e:6c:a6:b0, AP-ATC-151g84
84:a5:7e:6c:a6:b1, AP-A88-131g84
84:a5:7e:73:10:32, AP-AG7-133g56
84:a5:7e:73:10:30, AP-ADC-152g81
84:a5:7e:73:10:31, AP-D78-152e80
so if my search string is "84:a5:7e:73:10:32"
I want to get returned "AP-AG7-133g56"
I had been working within an Applescript, but maybe a shell script will do.
I just need the proper syntax for opening the file and having awk search it. Again, I'm weak conceptually on how shell commands run, how they must be executed, etc.
This errors, giving me "command not found":
set the_file to "/Users/Paw/Desktop/AP-Decoder 3.app/Contents/Resources/BSSIDtable.csv"
set the_val to "70:56:81:cb:a2:dc"
do shell script "'awk $1 ~ the_val {print $2} the_file'"
Thank you for coddling me...
This is relatively simple:
awk '$1 == "70:56:81:cb:a2:dc," {print "The answer is "$2}' 'BSSIDtable.csv'
(the "The answer is " text can be omitted if you wish to see only the data, but this shows how to get more user-friendly output if desired).
The comma is included since awk uses white space for separators so the comma becomes part of column 1.
If the thing you're looking for is in a shell variable, you can use -v to provide that to awk as an awk variable:
lookfor="70:56:81:cb:a2:dc,"
awk -v mac="$lookfor" '$1 == mac {print "The answer is "$2}' 'BSSIDtable.csv'
As an aside, your AppleScript solution is probably not working because the $1/$2 are being interpreted as shell variable rather than awk variables. If you insist on using AppleScript, you will have to figure out how to construct a shell command that quotes the awk commands correctly.
My advice is to just use the shell directly, the number of people proficient in that almost certainly far outnumber those proficient in AppleScript :-)
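Alternatively, making the field separator comma-plus-space keeps the trailing comma out of $1, so the MAC address can be compared as-is. A sketch (sample rows recreated from the question):

```shell
# recreate a couple of rows from the question's sample file
printf '%s\n' \
  '84:a5:7e:73:10:32, AP-AG7-133g56' \
  '84:a5:7e:73:10:30, AP-ADC-152g81' > BSSIDtable.csv

# -F', ' splits on comma+space, so $1 is the bare MAC address
awk -F', ' -v mac='84:a5:7e:73:10:32' '$1 == mac {print $2}' BSSIDtable.csv
# -> AP-AG7-133g56
```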
If sed is available (it normally is on a Mac, even if not tagged in the OP):
Simple, but reads the whole file:
sed -n 's/84:a5:7e:73:10:32,[[:blank:]]*//p' YourFile
Quit after the first occurrence (so on average ~50% faster on a huge file):
sed -n -e '/84:a5:7e:73:10:32,[[:blank:]]*/!b' -e 's///p;q' YourFile
awk
awk '/^84:a5:7e:73:10:32/ {print $2}' YourFile
# OR using a variable for batch interaction (-F', ' keeps the trailing comma out of $1)
awk -F', ' -v Src='84:a5:7e:73:10:32' '$1 == Src {print $2}' YourFile
# OR assuming the case is unknown (IGNORECASE is GNU-awk-specific)
awk -F', ' -v Src='84:a5:7e:73:10:32' 'BEGIN{IGNORECASE=1} $1 == Src {print $2}' YourFile
By default a bare regex is tested against the whole record $0; the ^ anchors the match to the start of the line, i.e. to the first field's content.
I have two questions. The first is that sometimes, when I am coding in Unix and I type a command incorrectly, I get a new line without my prompt, and no matter what I type, nothing happens until I exit and re-enter. Does anyone know why this happens?
Secondly,
I have a file whose lines consist of: filename, a space, then data.
I need to get the data. I heard that I should use awk or sed, but I am not sure how to do it. Any help is welcome.
Dennis has already answered your first question well. (Note: please ask only one question at a time!)
For your second question, it can be done much more simply.
awk '{ print $2 }' yourfile
By default, awk uses space as its column delimiter, so this simply tells awk to print out the second column. If you want the output sent to a new file, then just do this:
awk '{ print $2 }' yourfile > newfile
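For example, with a throwaway file in the "filename space data" shape described in the question (file names and data are assumed for illustration):

```shell
# build a two-line sample in the described shape
printf '%s\n' 'a.txt hello' 'b.txt world' > yourfile

# print the second whitespace-delimited column of every line
awk '{ print $2 }' yourfile
# -> hello
#    world
```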
First question:
Enter echo " (with an unclosed quote) and it will happen: the shell supports multi-line commands and is waiting for the closing quote.
Example:
echo "
is a multi-line
command"
Type " and press Enter to terminate.
Second question:
Here's a link to a nice AWK tutorial: Awk - A Tutorial and Introduction
Basically, you use
awk '{ print "echo " $2 }' filename | sh
for example, to echo all the data.
$2 accesses the second chunk of information on each line (chunks are separated by spaces).
print "echo " $2 will cause awk to output echo data.
Last, you pipe to sh to execute the command of awk's output.
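Putting the pieces together with an assumed two-line input file: awk emits one echo command per line, and sh then runs those commands:

```shell
# sample input in the "filename space data" shape
printf '%s\n' 'a.txt one' 'b.txt two' > filename

# awk emits the text "echo one" and "echo two"; sh executes it
awk '{ print "echo " $2 }' filename | sh
# -> one
#    two
```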
I have a file with fields separated by pipe characters and I want to print only the second field. This attempt fails:
$ cat file | awk -F| '{print $2}'
awk: syntax error near line 1
awk: bailing out near line 1
bash: {print $2}: command not found
Is there a way to do this?
Or just use one command:
cut -d '|' -f FIELDNUMBER
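For instance, with FIELDNUMBER set to 2 (sample line assumed for illustration):

```shell
# -d '|' sets the delimiter; -f 2 selects the second field
printf 'one|two|three\n' | cut -d '|' -f 2
# -> two
```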
The key point here is that the pipe character (|) must be escaped from the shell. Use "\|" or "'|'" to protect it from shell interpretation and allow it to be passed to awk on the command line.
Reading the comments, I see that the original poster presented a simplified version of the original problem, which involved filtering the file before selecting and printing the fields. A pass through grep was used and the result piped into awk for field selection. That accounts for the wholly unnecessary cat file that appears in the question (it replaces the grep <pattern> file).
Fine, that will work. However, awk is largely a pattern matching tool on its own, and can be trusted to find and work on the matching lines without needing to invoke grep. Use something like:
awk -F\| '/<pattern>/{print $2;}{next;}' file
The /<pattern>/ bit tells awk to perform the action that follows on lines that match <pattern>.
The lost-looking {next;} is a default action skipping to the next line in the input. It does not seem to be necessary, but I have this habit from long ago...
The pipe character needs to be escaped so that the shell doesn't interpret it. A simple solution:
$ awk -F\| '{print $2}' file
Another choice would be to quote the character:
$ awk -F'|' '{print $2}' file
Another way using awk
awk 'BEGIN { FS = "|" } ; { print $2 }'
Note that as written this command names no input file, so it reads from standard input and prints nothing until given data. Either pipe the file in with 'cat file' or simply list the file after the awk program.