Sorry for the wrong phrasing of the question.
I am new to Stack Overflow and completely new to Pig, and I am trying to experiment on my own.
I have a scenario where I need to process a words.txt file and a data.txt file.
words.txt
word1
word2
word3
word4
data.txt
{"created_at":"18:47:31,Sun Sep 30 2012","text":"RT #Joey7Barton: ..give a word1 about whether the americans wins a Ryder cup. I mean surely he has slightly more important matters. #fami ...","user_id":450990391,"id":252479809098223616}
I need to get the output as
(word1_epochtime){complete data which matched in text attribute}
i.e.
(word1_1234567890){"created_at":"18:47:31,Sun Sep 30 2012","text":"RT #Joey7Barton: ..give a word1 about whether the americans wins a Ryder cup. I mean surely he has slightly more important matters. #fami ...","user_id":450990391,"id":252479809098223616}
I have got the output as
(word1){"created_at":"18:47:31,Sun Sep 30 2012","text":"RT #Joey7Barton: ..give a word1 about whether the americans wins a Ryder cup. I mean surely he has slightly more important matters. #fami ...","user_id":450990391,"id":252479809098223616}
by using this script.
words = LOAD 'words.txt' AS (word:chararray);
data = LOAD 'data.txt' AS (created_at:chararray, text:chararray, user_id:long, id:long);
c = CROSS words, data;
d = FILTER c BY (data::text MATCHES CONCAT(CONCAT('.*', words::word), '.*'));
e = FOREACH (GROUP d BY words::word) GENERATE group, d;
and I got the epoch time with the words as
time = FOREACH words GENERATE CONCAT(CONCAT(word, '_'), (chararray)ToUnixTime(CurrentTime()));
But I am unable to CONCAT the words with the time.
How can I get the output as
(word1_time){data}
Please feel free to suggest an approach for the above.
Thank you.
I think I got the output.
Here is the script that I have written.
d = FILTER c BY (data::text MATCHES CONCAT(CONCAT('.*', words::word), '.*'));
e = FOREACH d GENERATE CONCAT(CONCAT(words::word, '_'), (chararray)ToUnixTime(CurrentTime())) AS epochtime, data::text AS text;
f = FOREACH (GROUP e BY epochtime) GENERATE group, e.text;
DUMP f;
Per this reference, CONCAT takes two fields as input. I think the problem in your case is that (chararray)ToUnixTime(CurrentTime()) is not a field. You could first GENERATE a field that holds the current timestamp value and then use that field in your CONCAT call.
Related
I have a big file, results.txt, from which I want to take certain lines and put them into another file. The data I want to take out are two variables, omega and alpha. However, in results.txt there are two occurrences of omega and alpha for each set of data, and I only want the second set. I am not sure how to proceed. I know I should use sed, but the only help I have found is about replacing lines with sed. Any help would be appreciated. Thank you very much.
--- Sorry I was on mobile when I asked the question. Didn't know how to insert code. ---
So my file looks something like
Very big list of useless output
.
.
.
Results 1:
Omega = 121
Distance = 18.7037218936
Alpha = -1.05958217593e-05
Result 5 = 18983
Result 6 = 1231.903
-------------------------
Results 1:
Omega = 121
Distance = 18.7037218936
Alpha = -1.05958217593e-05
Result 5 = 18983
Result 6 = 1231.903
-------------------------
Second useless output for the next data set
.
.
.
The next data set begins after both sets of results. I have 600 data sets. I want to print Omega and Alpha from the second set of results of each data set to some other file, preferably in two columns, though I don't know if that is possible.
I have tried using sed, but the documentation I have found only talks about replacing words I searched for. Thanks for any help!
Made a test file for you:
$ cat > results.txt
foo
alpha 1
omega 1
foo
alpha 2
omega 2
foo
$ tac results.txt|grep -m 1 alpha; tac results.txt |grep -m 1 omega
alpha 2
omega 2
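Adapted to the format shown in the question, an awk sketch along these lines should print Omega and Alpha from the second results block of each data set in two columns (untested; it assumes each data set contains exactly two results blocks, each followed by the dashed separator line, and the output file name is just an example):
awk '/^Results/ {n++} n==2 && /^Omega/ {o=$3} n==2 && /^Alpha/ {a=$3} /^---/ && n==2 {print o, a; n=0}' results.txt > omega_alpha.txt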
I am working on a latex file from which I need to pick out the references marked by \citep{}. This is what I am doing using sed.
cat file.tex | grep citep | sed 's/.*citep{\(.*\)}.*/\1/g'
Now this one works if there is only one pattern in a line. If there is more than one \citep pattern in a line, it fails. It also fails when there is only one pattern but more than one closing bracket }. What should I do so that it works for all the patterns in a line and only captures up to the closing bracket I am looking for?
I am working in bash, and a part of the file looks like this:
of the Asian crust further north \citep{TapponnierM76, WangLiu2009}. This has led to widespread deformation both within and
\citep{BilhamE01, Mitraetal2005} and by distributed seismicity across the region (Fig. \ref{fig1_2}). Recent GPS Geodetic
across the Dawki fault and Naga Hills, increasing eastwards from $\sim$3~mm/yr to $\sim$13~mm/yr \citep{Vernantetal2014}.
GPS velocity vectors \citep{TapponnierM76, WangLiu2009}. Sikkim Himalaya lies at the transition between this relatively simple
this transition includes deviation of the Himalaya from a perfect arc beyond 89\deg\ longitude \citep{BendickB2001}, reduction
\citep{BhattacharyaM2009, Mitraetal2010}. Rivers Tista, Rangit and Rangli run through Sikkim eroding the MCT and Ramgarh
thrust to form a mushroom-shaped physiography \citep{Mukuletal2009,Mitraetal2010}. Within this sinuous physiography,
\citep{Pauletal2015} and also in accordance with the findings of \citet{Mitraetal2005} for northeast India. In another study
field results corroborate well with seismic studies in this region \citep{Actonetal2011, Arunetal2010}. From studies of
For one line, I get an answer like this
BilhamE01, TapponnierM76} and by distributed seismicity across the region (Fig. \ref{fig1_2
whereas I am looking for
BilhamE01, TapponnierM76
Another example, with more than one \citep pattern, gives output like this
Pauletal2015} and also in accordance with the findings of \citet{Mitraetal2005} for northeast India. In another study
whereas I am looking for
Pauletal2015 Mitraetal2005
Can anyone please help?
It's a greedy match; change the regex to match up to the first closing brace:
.*citep{\([^}]*\)}
test
$ echo "\citep{string} xyz {abc}" | sed 's/.*citep{\([^}]*\)}.*/\1/'
string
Note that it will only match one instance per line.
If you are using grep anyway, you can as well stick with it (assuming GNU grep):
$ echo $str | grep -oP '(?<=\\citep{)[^}]+(?=})'
BilhamE01, TapponierM76
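To run the same thing over the whole file and get one citation key per line, something like this should work (a sketch; it assumes the keys themselves never contain commas or braces):
grep -oP '(?<=\\citep{)[^}]+(?=})' file.tex | tr ',' '\n' | tr -d ' ' | sort -u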
For what it's worth, this can be done with sed:
echo "\citep{string} xyz {abc} \citep{string2},foo" | \
sed 's/\\citep{\([^}]*\)}/\n\1\n\n/g; s/^[^\n]*\n//; s/\n\n[^\n]*\n/, /g; s/\n.*//g'
output:
string, string2
But wow, is that ugly. The sed script is more easily understood in this form, which happens to be suitable to be fed to sed via a -f argument:
# change every \citep{string} to <newline>string<newline><newline>
s/\\citep{\([^}]*\)}/\n\1\n\n/g
# remove any leading text before the first wanted string
s/^[^\n]*\n//
# replace text between wanted strings with comma + space
s/\n\n[^\n]*\n/, /g
# remove any trailing unwanted text
s/\n.*//
This makes use of the fact that sed can match and sub the newline character, even though reading a new line of input will not result in a newline initially appearing in the pattern space. The newline is the one character that we can be certain will appear in the pattern space (or in the hold space) only if sed puts it there intentionally.
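A tiny demonstration of that point (assuming GNU sed, which understands \n both in the regex and in the replacement):
$ printf 'one two\n' | sed 's/ /\n/; s/\n/, /'
one, two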
The initial substitution is purely to make the problem manageable by simplifying the target delimiters. In principle, the remaining steps could be performed without that simplification, but the regular expressions involved would be horrendous.
This does assume that the string in every \citep{string} contains at least one character; if the empty string must be accommodated, too, then this approach needs a bit more refinement.
Of course, I can't imagine why anyone would prefer this to @Lev's straight grep approach, but the question does ask specifically for a sed solution.
f.awk
BEGIN {
pat = "\\citep"
latex_tok = "\\\\[A-Za-z_][A-Za-z_]*" # match \aBcD
}
{
f = f $0 # store the content of the input file as a string
}
function store(args, n, k, i) { # store `keys' in `d'
gsub("[ \t]", "", args) # remove spaces
n = split(args, keys, ",")
for (i=1; i<=n; i++) {
k = keys[i]
d[k]
}
}
function ntok() { # next token
if (match(f, latex_tok)) {
tok = substr(f, RSTART ,RLENGTH)
f = substr(f, RSTART+RLENGTH-1 )
return 1
}
return 0
}
function parse( i, rc, args) {
for (;;) { # infinite loop
while ( (rc = ntok()) && tok != pat ) ;
if (!rc) return
i = index(f, "{")
if (!i) return # see `pat' but no '{'
f = substr(f, i+1)
i = index(f, "}")
if (!i) return # no matching '}'
# extract `args' from \citep{`args'}
args = substr(f, 1, i-1)
store(args)
}
}
END {
parse()
for (k in d)
print k
}
f.example
of the Asian crust further north \citep{TapponnierM76, WangLiu2009}. This has led to widespread deformation both within and
\citep{BilhamE01, Mitraetal2005} and by distributed seismicity across the region (Fig. \ref{fig1_2}). Recent GPS Geodetic
across the Dawki fault and Naga Hills, increasing eastwards from $\sim$3~mm/yr to $\sim$13~mm/yr \citep{Vernantetal2014}.
GPS velocity vectors \citep{TapponnierM76, WangLiu2009}. Sikkim Himalaya lies at the transition between this relatively simple
this transition includes deviation of the Himalaya from a perfect arc beyond 89\deg\ longitude \citep{BendickB2001}, reduction
\citep{BhattacharyaM2009, Mitraetal2010}. Rivers Tista, Rangit and Rangli run through Sikkim eroding the MCT and Ramgarh
thrust to form a mushroom-shaped physiography \citep{Mukuletal2009,Mitraetal2010}. Within this sinuous physiography,
\citep{Pauletal2015} and also in accordance with the findings of \citet{Mitraetal2005} for northeast India. In another study
field results corroborate well with seismic studies in this region \citep{Actonetal2011, Arunetal2010}. From studies of
Usage:
awk -f f.awk f.example
Expected output:
BendickB2001
Arunetal2010
Pauletal2015
Mitraetal2005
BilhamE01
Mukuletal2009
TapponnierM76
WangLiu2009
BhattacharyaM2009
Mitraetal2010
Actonetal2011
Vernantetal2014
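Note that awk's for (k in d) loop visits keys in no particular order, so the list above may come out in any order; pipe it through sort if you want deterministic output:
awk -f f.awk f.example | sort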
My column has a first name and a last name separated by a space. I want to use a Pig function to split them into two different columns. I am thinking of the STRSPLIT function, but I don't know how to use it.
Could anyone help me with this simple question?
You can try something like this; sample code is below.
Here is what I am doing:
1. Read each line as a single column.
2. Apply the STRSPLIT function using a space as the delimiter.
3. Store the first name and last name in two different columns.
input.txt
Pearson Charles
James Michael
Smith Linda
PigScript:
A = LOAD 'input.txt' AS line;
B = FOREACH A GENERATE FLATTEN(STRSPLIT(line,'\\s+',2)) AS (firstname:chararray,lastname:chararray);
C = FOREACH B GENERATE firstname,lastname;
DUMP C;
Output:
(Pearson,Charles)
(James,Michael)
(Smith,Linda)
Check this link for more info:
http://pig.apache.org/docs/r0.13.0/func.html#strsplit
I have a fail2ban.log from which I want to grab specific fields from the 'Ban' lines. I can grab the data I need with a regex one field at a time, but I am not able to combine them. A typical fail2ban log file has many lines; I'm interested in lines like this:
2012-05-02 14:47:40,515 fail2ban.actions: WARNING [ssh-iptables] Ban 84.xx.xx.242
xx = numbers (digits)
I want to grab: a) Date and Time, b) Ban (keyword), c) IP address
Here is my regex:
IP = (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
date & time = ^(\d{4}\W\d{2}\W\d{2}\s\d{2}\W\d{2}\W\d{2})
My problem is how to combine these three together. I tried something like this:
^(?=^\d{4}\W\d{2}\W\d{2}\s\d{2}\W\d{2}\W\d{2})(?=\.*d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$)(?=^(?Ban).)*$).*$
but it does not work as I wanted it to.
To give a clearer example, here is what I want:
greyjewel:FailMap atma$ cat fail2ban.log |grep Ban|awk -F " " '{print $1, $2, $7}'|tail -n 3
2012-05-02 14:47:40,515 84.51.18.242
2012-05-03 00:35:44,520 202.164.46.29
2012-05-03 17:55:03,725 203.92.42.6
Best Regards
A pretty direct translation of the example
ruby -alne 'BEGIN {$,=" "}; print $F.values_at(0,1,-1) if /Ban/' fail2ban.log
And because I figure you must want them from within Ruby
results = File.foreach("input").grep(/Ban/).map { |line| line.chomp.split.values_at 0, 1, -1 }
If the field placement doesn't change, you don't even need a regex here:
log_line =
'2012-05-02 14:47:40,515 fail2ban.actions: WARNING [ssh-iptables] Ban 84.12.34.242'
date, time, action, ip = log_line.split.values_at(0,1,-2,-1)
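If you would still rather have a single combined regex, a sed sketch along these lines should print the same date, time, and IP columns straight from the shell (assuming GNU or BSD sed with -E, and that Ban lines end with the IP address as in the sample):
sed -nE 's/^([0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]+).* Ban ([0-9]+(\.[0-9]+){3})$/\1 \2/p' fail2ban.log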
Problem
I need to insert text of arbitrary length (in number of lines) into a template while maintaining an exact total number of lines.
Sample source data file:
You have a hold available for pickup as of 2012-01-13:
Title: Really Long Test Title Regarding Random Gibberish. Volume 1, A-B, United States
and affiliated territories, United Nations, countries of the world
Author: Barrel Roll Morton
Title: How to Compromise Free Speech Using Everyday Tools. Volume XXVI
Author: Lamar Smith
#end-of-record
You have a hold available for pickup as of 2012-01-13:
Title: Selling Out Democracy For Fun and Profit. Volume 1, A-B, United States
Author: Lamar Smith
Copy: 12
#end-of-record
Sample Template (simplified for brevity):
<%CUST-NAME%>
<%CUST-ADDR%>
<%CUST-CTY-ZIP%>
<%TITLES GO HERE%>
<%STORE-NAME%>
<%STORE-ADDR%>
<%STORE-CTY-ZIP%>
At this point I use bash's 'mapfile' to load the source file record by record, using the #end-of-record marker as the delimiter. So far so good. Then I pull the predictable parts of each record according to the line on which they occur and process that info using a series of sed search-and-replace statements. A rough sketch of the record-splitting step is below.
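This is just an illustration, not the exact script (current-record.txt is a stand-in for wherever each record is handed to the sed steps):
record=""
while IFS= read -r line; do
    if [[ $line == "#end-of-record" ]]; then
        printf '%s' "$record" > current-record.txt   # process one record, then reset
        record=""
    else
        record+="$line"$'\n'
    fi
done < test-match.txt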
The Hang-Up
So the problem is the unknown number of 'title' records that could occur. How can I accommodate an unknown number of titles and always have output of precisely 65 lines?
Given that title records always occur starting on line 8, I can pull the titles easily with:
sed -n '8,$p' test-match.txt
However, how can I insert this within the allotted space, e.g. between <%CUST-CTY-ZIP%> and <%STORE-NAME%>, without pushing the store info out of place in the template?
My idea so far:
- First, send the customer info through:
Ex.
sed 's/<%CUST-NAME%>/Benedict Arnold/' template.txt
- Append the title records
???
- Then the store/location info
sed "s/<%STORE-NAME%>/Smith's House of Greasy Palms/" template.txt
I have code and functions for this stuff if interested, but this post is 'windy' as it is.
I just need help with inserting the title records while maintaining the position of the following text and keeping the total line count at 65.
UPDATE
I've decided to change tactics. I'm going to create placeholders in the template for all available lines between the customer and store info, then:
Test whether the line is null in the source.
If yes, replace the placeholder with nothing, leaving the line ending, so the line number is maintained.
If not null, again replace the placeholder with the text, maintaining the line number and line endings in the template.
Eventually, I plan to invest some time looking closer at Triplee's suggestion regarding Perl. The Perl way really does look simpler and easier to maintain if I'm going to be stuck with this project long term.
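A rough bash sketch of that placeholder idea (the <%TITLE-NN%> names and the 50-slot count are made up for illustration; it also assumes GNU sed's -i and that the title lines contain no characters special to sed):
mapfile -t titles < <(sed -n '8,$p' test-match.txt)   # titles start on line 8, as above
cp template.txt output.txt
for i in $(seq 1 50); do                              # one pass per reserved title line
    slot=$(printf '<%%TITLE-%02d%%>' "$i")            # hypothetical placeholder name
    sed -i "s|$slot|${titles[i-1]:-}|" output.txt     # empty replacement keeps the blank line
done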
This might work for you:
cat <<! >titles.txt
> 1
> 2
> 3
> 4
> 5
> 6
> 7
> Title 1
> Title 2
> Title 3
> Title 4
> Title 5
> Title 6
> !
cat <<! >template.txt
> <%CUST-NAME%>
> <%CUST-ADDR%>
> <%CUST-CTY-ZIP%>
>
> <%TITLES GO HERE%>
>
> <%STORE-NAME%>
> <%STORE-ADDR%>
> <%STORE-CTY-ZIP%>
> !
sed '1,7d;:a;$!{N;ba};:b;G;s/\n[^\n]*//5g;tc;bb;:c;s/\n/\\n/g;s|.*|/<%TITLES GO HERE%>/c\\&|' titles.txt |
sed -f - template.txt
<%CUST-NAME%>
<%CUST-ADDR%>
<%CUST-CTY-ZIP%>
Title 1
Title 2
Title 3
Title 4
Title 5
<%STORE-NAME%>
<%STORE-ADDR%>
<%STORE-CTY-ZIP%>
This pads/squeezes the titles to 5 lines (s/\n[^\n]*//5g); if you want fewer or more, change the 5 to the number desired.
This will give you five lines of output regardless of the number of lines in titles.txt:
sed -n '$s/$/\n\n\n\n\n/;8,$p' test-match.txt | head -n 5
Another version:
sed -n '8,$N; ${s/$/\n\n\n\n\n/;s/\(\([^\n]*\n\)\{4\}\).*/\1/p}' test-match.txt
Use one less than the number of lines you want (4 in this example will cause 5 lines of output).
Here's a quick proof of concept using Perl formats. If you are unfamiliar with Perl, I guess you will need some additional help with how to get the values from two different files, but it's quite doable, of course. Here, the data is simply embedded into the script itself.
I set the $titles format to 5 lines instead of the proper value (58 or something?) in order to make this easier to try out in a terminal window, and to demonstrate that the output is indeed truncated when it is longer than the allocated space.
#!/usr/bin/perl
use strict;
use warnings;
use vars (qw($cust_name $cust_addr $cust_cty_zip $titles
$store_name $store_addr $store_cty_zip));
my $fmtline = '#' . '<' x 78;
my $titlefmtline = '^' . '<' x 78;
my $empty = '';
my $fmt = join ("\n$fmtline\n", 'format STDOUT = ',
'$cust_name', '$cust_addr', '$cust_cty_zip', '$empty') .
("\n$titlefmtline\n" . '$titles') x 5 . #58
join ("\n$fmtline\n", '', '$empty',
'$store_name', '$store_addr', '$store_cty_zip');
#print $fmt;
eval "$fmt\n.\n";
$titles = <<____HERE;
Title: Really Long Test Title Regarding Random Gibberish. Volume 1, A-B, United States
and affiliated territories, United Nations, countries of the world
Author: Barrel Roll Morton
Title: How to Compromise Free Speech Using Everyday Tools. Volume XXVI
Author: Lamar Smith
____HERE
# Preserve line breaks -- ^<< will fill lines, but preserves line breaks on \r
$titles =~ s/\n/\r\n/g;
while (<DATA>) {
chomp;
($cust_name, $cust_addr, $cust_cty_zip, $store_name, $store_addr, $store_cty_zip)
= split (",");
write STDOUT;
}
__END__
Charlie Bravo,23 Alpa St,Delta ND 12345,Spamazon,98 Spamway,Atlanta GA 98765
The use of $empty to get an empty line is pretty ugly, but I wanted to keep the format as regular as possible. I'm sure it could be avoided, but at the cost of additional code complexity IMHO.
If you are unfamiliar with Perl, the use strict is a complication, but a practical necessity; it requires you to declare your variables either with use vars or my. It is a best practice which helps immensely if you try to make changes to the script.
Here documents with <<HERE work like in shell scripts; it allows you to create a multi-line string easily.
The x operator is for repetition; 'string' x 3 is 'stringstringstring' and ("list") x 3 is ("list", "list", "list"). The dot operator is string concatenation; that is, "foo" . "bar" is "foobar".
Finally, the DATA filehandle allows you to put arbitrary data in the script file itself after the __END__ token which signals the end of the program code. For reading from standard input, use <> instead of <DATA>.