Cannot extract values from a map in apache pig - hadoop

I have a simple relation, v, in Apache Pig:
dump v;
(151364,[ 'ref'#'R813','highway'#'secondary', 'name:ga'#'Lána Chairdif', 'name'#'Cardiff Lane'],(31015271, 31053762))
(151368,[ 'ref'#'N1', 'oneway'#'yes','designation'#'Buses Only', 'highway'#'trunk', 'motor_vehicle'#'designated', 'name:ga'#'Cearnóg Pharnell Thoir', 'maxspeed'#'30', 'name'#'Parnell Square East'],(389365, 540403072))
(151596,[ 'name:en'#'Liffey', 'boundary'#'administrative', 'name:ga'#'An Life','admin_level'#'8', 'name'#'Liffey', 'waterway'#'river'],(1347749, 1426049020, 1347745, 1426049019, 1347742, 900075612))
(367947,[ 'maxspeed'#'80', 'ref'#'L2223','highway'#'tertiary'],(13259933, 2384217, 335978958))
(367952,['created_by'#'YahooApplet 1.0', 'name'#'Charnwood Avenue', 'highway'#'residential'],(2384386, 25963471, 14949594, 2384385, 6146344, 2384254))
(508603,[ 'ref'#'L3018','highway'#'tertiary', 'maxspeed'#'50', 'name'#'Shelerin Road'],(2854184, 2854168, 335978984, 2853307, 2384254, 335978978, 335978975, 2655735, 2655703, 392675957, 11676198, 920037194, 244531387, 2655952, 11675077))
(727153,[ 'ref'#'N8','highway'#'trunk', 'name'#'Merchants' Quay'],(354153, 453344873))
(727157,['highway'#'unclassified', 'oneway'#'yes', 'maxspeed'#'30', 'name'#'Kyle Street'],(354168, 354167))
(727159,['highway'#'unclassified', 'oneway'#'yes', 'maxspeed'#'30', 'name'#'North Main Street'],(354178, 465226768, 354167, 413995429, 72219131, 685537307, 1232381779, 354164))
(727161,[ 'maxspeed'#'30','highway'#'pedestrian', 'name'#'Maylor Street'],(1486492976, 1515360721, 1515360722, 1515345383, 1515344226, 1515344227, 1515344228, 1515344231))
On @orangeoctopus's advice, I have tried regenerating my data without any ' in the key names, and I have this data:
(151364,[ ref#'R813', name:ga#'Lána Chairdif', name#'Cardiff Lane',highway#'secondary'],(31015271, 31053762))
(151368,[ motor_vehicle#'designated', name#'Parnell Square East', highway#'trunk', oneway#'yes',designation#'Buses Only', maxspeed#'30', name:ga#'Cearnóg Pharnell Thoir', ref#'N1'],(389365, 540403072))
(151596,[ name:en#'Liffey', boundary#'administrative', waterway#'river', name:ga#'An Life',admin_level#'8', name#'Liffey'],(1347749, 1426049020, 1347745, 1426049019, 1347742, 900075612))
(367947,[highway#'tertiary', maxspeed#'80', ref#'L2223'],(13259933, 2384217, 335978958))
(367952,[ name#'Charnwood Avenue',created_by#'YahooApplet 1.0', highway#'residential'],(2384386, 25963471, 14949594, 2384385, 6146344, 2384254))
(508603,[ maxspeed#'50', ref#'L3018', name#'Shelerin Road',highway#'tertiary'],(2854184, 2854168, 335978984, 2853307, 2384254, 335978978, 335978975, 2655735, 2655703, 392675957, 11676198, 920037194, 244531387, 2655952, 11675077))
(727153,[highway#'trunk', name#'Merchants' Quay', ref#'N8'],(354153, 453344873))
(727157,[ oneway#'yes', maxspeed#'30', name#'Kyle Street',highway#'unclassified'],(354168, 354167))
(727159,[ oneway#'yes', maxspeed#'30', name#'North Main Street',highway#'unclassified'],(354178, 465226768, 354167, 413995429, 72219131, 685537307, 1232381779, 354164))
(727161,[highway#'pedestrian', name#'Maylor Street', maxspeed#'30'],(1486492976, 1515360721, 1515360722, 1515345383, 1515344226, 1515344227, 1515344228, 1515344231))
In both cases v has the same schema/structure:
grunt> describe v;
2012-01-09 22:55:34,271 [main] WARN org.apache.pig.PigServer - Encountered Warning IMPLICIT_CAST_TO_CHARARRAY 1 time(s).
v: {id: int,tags: map[ ],nodes: (null)}
Then I try to extract out just one value from the tags map:
grunt> w = foreach v generate tags#'ref';
dump w;
But it only gives me empty data, even though some elements do have a 'ref' tag:
()
()
()
()
()
()
()
()
()
()
With the old 'quoted' keys I tried (as per @orangeoctopus's solution):
w = foreach v generate tags#'\'ref\'';
And that gave me the same 'empty' data, and didn't work. (I also tried other combinations of ' and ", like "'ref'"/'"ref"'/etc., but everything except '\'ref\'' was invalid Pig Latin syntax.)
What's going on? If I try to filter based on the tag value (e.g. filter v by tags#'highway' != ''), I get nothing, which is consistent with the above problem of not being able to extract data from the map. Am I doing something wrong?

Very tricky!
Your problem is that your literal data includes single quotes. Your string is not ref (3 characters long), it is 'ref' (5 characters long). I realized this because the dump of a map containing strings does not typically have the quotes there.
Therefore, you need to include those quotes in the key you look up (escaping them with \):
grunt> w = foreach v generate tags#'\'ref\'';
Your other option would be to change the way your data is being loaded so that the strings don't include the single quotes, i.e. strip them out. PigStorage doesn't do this for free, but you could use something like REPLACE or your own UDF to do this.
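For the preprocessing route, here is a minimal Python sketch (an illustration, not part of the original answer; the file names are hypothetical). It removes only the quotes that directly wrap map keys and values, so interior apostrophes such as the one in Merchants' Quay survive:
import re

# drop a single quote only when it directly wraps a map key or value:
# right after '[', '#', ',' or whitespace, or right before '#', ',' or ']'
pattern = re.compile(r"(?<=[\[#,\s])'|'(?=[#,\]])")

with open('raw_data.txt') as src, open('clean_data.txt', 'w') as dst:
    for line in src:
        dst.write(pattern.sub('', line))
Once the quotes are gone, the plain key lookup tags#'ref' works as expected.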

Are you loading the data correctly, too? It is odd that there is a space after the [ and before the ] when you dump your map.
Also, it is simpler to drop all the quotes from the keys and values in the input data. For example:
Input file
151364 [ref#R813,highway#secondary]
Pig
a = LOAD 'data.txt' AS (id:INT, m:MAP[]);
DUMP a;
b = FOREACH a GENERATE m#'ref';
DUMP b;
Output
(151364,[highway#secondary,ref#R813])
(R813)

Related

With ruamel.yaml how can I conditionally convert flow maps to block maps based on line length?

I'm working on a ruamel.yaml (v0.17.4) based YAML reformatter (using the RoundTrip variant to preserve comments).
I want to allow a mix of block- and flow-style maps, but in some cases, I want to convert a flow-style map to use block-style.
In particular, if the flow-style map would be longer than the max line length^, I want to convert that to a block-style map instead of wrapping the line somewhere in the middle of the flow-style map.
^ By "max line length" I mean the best_width that I configure by setting something like yaml.width = 120 where yaml is a ruamel.yaml.YAML instance.
What should I extend to achieve this? The emitter is where the line length gets calculated so wrapping can occur, but I suspect that is too late to convert between block- and flow-style. I'm also concerned about losing comments when I switch the styles. Here are some possible extension points; can you give me a pointer on where I'm most likely to have success?
Emitter.expect_flow_mapping() probably too late for converting flow->block
Serializer.serialize_node() probably too late as it consults node.flow_style
RoundTripRepresenter.represent_mapping() maybe? but this has no idea about line length
I could also walk the data before calling yaml.dump(), but this has no idea about line length.
So, where should I hook in, and how can I adjust the flow_style, depending on whether a flow-style map would trigger line wrapping?
I think the most accurate approach, when you encounter a flow-style mapping during the dumping process, is to first emit it to a buffer, take the length of that buffer, and, if that length combined with the column you are at is too long, actually emit block style.
Any attempt to guesstimate the length of the output without actually trying to write that part of the tree is going to be hard, if not impossible, without doing the actual emit. Among other things, the dumping process actually dumps scalars and reads them back to make sure no quoting needs to be forced (e.g. when you dump a string that reads back like a date). It also handles single key-value pairs in a list in a special way ([1, a: 42, 3] instead of the more verbose [1, {a: 42}, 3]). So a simple calculation of the length of the scalars that are the keys and values, plus the separating commas, colons and spaces, is not going to be precise.
A different approach is to dump your data with a large line width and parse the output and make a set of line numbers for which the line is too long according to the width that you actually want to use. After loading that output back you can walk over the data structure recursively, inspect the .lc attribute to determine the line number on which a flow style mapping (or sequence) started and if that line number is in the set you built beforehand change the mapping to block style. If you have nested flow-style collections, you might have to repeat this process.
If you run the following, the initial dumped value for quote will be on one line. The change_to_block method as presented changes all mappings/sequences that are too long and that start on one of those over-long lines.
import sys
import ruamel.yaml

yaml_str = """\
movie: bladerunner
quote: {[Batty, Roy]: [
    I have seen things you people wouldn't believe.,
    Attack ships on fire off the shoulder of Orion.,
    I watched C-beams glitter in the dark near the Tannhäuser Gate.,
  ]}
"""

class Blockify:
    def __init__(self, width, only_first=False, verbose=0):
        self._width = width
        self._yaml = None
        self._only_first = only_first
        self._verbose = verbose

    @property
    def yaml(self):
        if self._yaml is None:
            self._yaml = y = ruamel.yaml.YAML(typ=['rt', 'string'])
            y.preserve_quotes = True
            y.width = 2**16
        return self._yaml

    def __call__(self, d):
        pass_nr = 0
        changed = [True]
        while changed[0]:
            changed[0] = False
            try:
                s = self.yaml.dumps(d)
            except AttributeError:
                print("use 'pip install ruamel.yaml.string' to install plugin that gives 'dumps' to string")
                sys.exit(1)
            if self._verbose > 1:
                print(s)
            too_long = set()
            max_ll = -1
            for line_nr, line in enumerate(s.splitlines()):
                if len(line) > self._width:
                    too_long.add(line_nr)
                if len(line) > max_ll:
                    max_ll = len(line)
            if self._verbose > 0:
                print(f'pass: {pass_nr}, lines: {sorted(too_long)}, longest: {max_ll}')
                sys.stdout.flush()
            new_d = self.yaml.load(s)
            self.change_to_block(new_d, too_long, changed, only_first=self._only_first)
            d = new_d
            pass_nr += 1
        return d, s

    @staticmethod
    def change_to_block(d, too_long, changed, only_first):
        if isinstance(d, dict):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            # don't change keys if any value is changed
            for v in d.values():
                Blockify.change_to_block(v, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
            if changed[0]:  # don't change keys if value has changed
                return
            for k in d:
                Blockify.change_to_block(k, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
        if isinstance(d, (list, tuple)):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            for elem in d:
                Blockify.change_to_block(elem, too_long, changed, only_first)
                if only_first and changed[0]:
                    return

blockify = Blockify(96, verbose=2)  # set verbose to 0, to suppress progress output
yaml = ruamel.yaml.YAML(typ=['rt', 'string'])
data = yaml.load(yaml_str)
blockified_data, string_output = blockify(data)
print('-' * 32, 'result:', '-' * 32)
print(string_output)  # string_output has no final newline
which gives:
movie: bladerunner
quote: {[Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]}
pass: 0, lines: [1], longest: 186
movie: bladerunner
quote:
  [Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]
pass: 1, lines: [2], longest: 179
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
pass: 2, lines: [], longest: 67
-------------------------------- result: --------------------------------
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
Please note that when using ruamel.yaml<0.18, the sequence [Batty, Roy] will never be converted to block style,
because the tuple subclass CommentedKeySeq never gets a line number attached.

Bug in my Pig Latin script

I'm trying to do a median operation on a file in Pig. The file looks like this:
NewYork,-1
NewYork,-5
NewYork,-2
NewYork,3
NewYork,4
NewYork,13
NewYork,11
Amsterdam,12
Amsterdam,11
Amsterdam,2
Amsterdam,1
Amsterdam,-1
Amsterdam,-4
Mumbai,1
Mumbai,4
Mumbai,5
Mumbai,-2
Mumbai,9
Mumbai,-4
The file is loaded and the data inside it is grouped as follows:
wdata = load 'weatherdata' using PigStorage(',') as (city:chararray, temp:int);
wdata_g = group wdata by city;
I'm trying to get the median of all the temperatures for each city as follows:
wdata_tempmedian = foreach wdata_g { tu = wdata.temp as temp; ord = order tu by temp generate group, Median(ord); }
I am ordering the data because it needs to be sorted to find the median.
But I'm getting the following error message, and I can't figure out the mistake:
[main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: <line 3, column 53> mismatched input 'as' expecting SEMI_COLON
Any help is much appreciated.
The `as` keyword is not valid in a nested FOREACH projection, and you are missing a ';' after ordering the temperatures:
wdata_tempmedian = FOREACH wdata_g {
    tu = wdata.temp;
    ord = ORDER tu BY temp;
    GENERATE group, Median(ord);
}
OR, without the intermediate projection:
wdata_tempmedian = FOREACH wdata_g {
    ord = ORDER wdata BY temp;
    GENERATE group, Median(ord.temp);
}
Note: I am assuming you are using DataFu, since Pig does not have a built-in Median function. Ensure the jar is correctly registered:
register /path/datafu-pig-incubating-1.3.1.jar
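With the jar registered, the short name Median used above also needs an alias; a one-line sketch, assuming DataFu's datafu.pig.stats.Median (which expects its input bag to be sorted):
DEFINE Median datafu.pig.stats.Median();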

libsvm: read vectors from word2vec

Is there an easy way to use w2v's output vectors in libsvm?
There are two output formats for w2v: binary and text. In the text format, each line begins with a word followed by its space-separated vector, e.g.:
something -0.197045 -0.292196 -0.107292 -0.168469 0.114897 -0.006383 -0.000056 0.068514 -0.079548 0.251488 0.185607 0.248675 -0.058647 0.062771 0.129014 -0.024715 -0.168974 -0.035367 -0.009597 0.090379 0.030133 0.017338 0.062264 -0.219165 -0.214198 0.226869 -0.058710 0.034563 -0.046304 0.2
Found a way with ruby:
First require the libsvm wrapper:
require 'libsvm'
read the vectors file (assuming textual form):
lines = File.readlines('vectors.txt')
insert into a hash (skipping the first line, which is a header with the vocabulary size and vector dimensionality):
words = {}
lines[1..-1].each{ |l| sp = l.strip.split; words[sp[0]] = sp[1..-1].map(&:to_f) }
and finally use libsvm:
examples = words.values.map { |ary| Libsvm::Node.features(ary) }
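Alternatively, if the goal is a file that the libsvm command-line tools can consume directly, here is a minimal Python sketch (an illustration, not from the original answer; the file names and the constant label 0 are placeholders) that rewrites the word2vec text output into libsvm's sparse "label index:value" format:
# convert word2vec text vectors into the libsvm file format; every word
# gets the placeholder label 0, since w2v output carries no class labels
with open('vectors.txt') as src, open('vectors.libsvm', 'w') as dst:
    next(src)  # skip the header line: "<vocab size> <dimensions>"
    for line in src:
        word, *values = line.split()
        features = ' '.join(f'{i}:{v}' for i, v in enumerate(values, 1))
        dst.write(f'0 {features}\n')
Keep a separate list of the words if you need to map rows back to them, since the libsvm format has no place for the word itself.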

Python: Can I grab the specific lines from a large file faster?

I have two large files. One of them is an info file (about 270MB and 16,000,000 lines) like this:
1101:10003:17729
1101:10003:19979
1101:10003:23319
1101:10003:24972
1101:10003:2539
1101:10003:28242
1101:10003:28804
The other is a standard FASTQ format file (about 27G and 280,000,000 lines) like this:
@ST-E00126:65:H3VJ2CCXX:7:1101:1416:1801 1:N:0:5
NTGCCTGACCGTACCGAGGCTAACCCTAATGAGCTTAATCAAGATGATGCTCGTTATGG
+
AAAFFKKKKKKKKKFKKKKKKKFKKKKAFKKKKKAF7AAFFKFAAFFFKKF7FF<FKK
@ST-E00126:65:H3VJ2CCXX:7:1101:10003:75641 1:N:0:5
TAAGATAGATAGCCGAGGCTAACCCTAATGAGCTTAATCAAGATGATGCTCGTTATGG
+
AAAFFKKKKKKKKKFKKKKKKKFKKKKAFKKKKKAF7AAFFKFAAFFFKKF7FF<FKK
The FASTQ file uses four lines per sequence. Line 1 begins with a '@' character and is followed by a sequence identifier. For each sequence, this part of Line 1 is unique:
1101:1416:1801 and 1101:10003:75641
And I want to grab Line 1 and the next three lines from the FASTQ file according to the info file. Here is my code:
import gzip
import re

count = 0
with open('info_path') as info, open('grab_path', 'w') as grab:
    for i in info:
        sample = i.strip()
        with gzip.open('fq_path') as fq:
            for j in fq:
                count += 1
                if count % 4 == 1:
                    line = j.strip()
                    m = re.search(sample, j)
                    if m != None:
                        grab.writelines(line + '\n' + fq.next() + fq.next() + fq.next())
                        count = 0
                        break
And it works, but because both of these files have millions of lines, it's inefficient (running for one day only got through 20,000 lines).
UPDATE on July 6th:
I found that the info file can be read into memory (thanks @tobias_k for reminding me), so I created a dictionary whose keys are the info lines and whose values are all 0. After that, I read the FASTQ file four lines at a time, use the identifier part as the key, and if the value is 0, return the 4 lines. Here is my code:
import gzip

dic = {}
with open('info_path') as info:
    for i in info:
        sample = i.strip()
        dic[sample] = 0
with gzip.open('fq_path') as fq, open('grap_path', "w") as grab:
    for j in fq:
        if j[:10] == '@ST-E00126':
            line = j.split(':')
            match = line[4] + ':' + line[5] + ':' + line[6][:-2]
            if dic.get(match) == 0:
                grab.writelines(j + fq.next() + fq.next() + fq.next())
This way is much faster; it takes 20 mins to get all the matched lines (about 64,000,000 lines). I have also thought about sorting the FASTQ file first by external sort. Splitting the file so that it can be read into memory is OK; my trouble is how to keep the next three lines together with the identifier line while sorting. Google's answer is to linearize these four lines first (see the sketch below), but it will take 40 mins to do so.
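For reference, a minimal sketch of that linearization step (an illustration; the paths are hypothetical). Each 4-line FASTQ record becomes one tab-separated line, so an external sort can reorder records without separating a header from its following three lines:
import gzip
from itertools import islice

with gzip.open('fq_path', 'rt') as fq, open('linear_path', 'w') as out:
    while True:
        record = list(islice(fq, 4))  # one FASTQ record = 4 lines
        if not record:
            break
        out.write('\t'.join(line.rstrip('\n') for line in record) + '\n')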
Anyway thanks for your help.
You can sort both files by the identifier (the 1101:1416:1801) part. Even if files do not fit into memory, you can use external sorting.
After this, you can apply a simple merge-like strategy: read both files together and do the matching in the meantime. Something like this (pseudocode):
entry1 = readFromFile1()
entry2 = readFromFile2()
while (none of the files ended)
    if (entry1.id == entry2.id)
        record match
    else if (entry1.id < entry2.id)
        entry1 = readFromFile1()
    else
        entry2 = readFromFile2()
This way entry1.id and entry2.id are always close to each other and you will not miss any matches. At the same time, this approach requires iterating over each file once.
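Concretely, a runnable Python sketch of that merge (assuming the identifiers have already been extracted from both files, one per line, and sorted with the same collation; file names are hypothetical):
def merge_matches(info_path, fastq_ids_path):
    matches = []
    with open(info_path) as f1, open(fastq_ids_path) as f2:
        id1, id2 = f1.readline().strip(), f2.readline().strip()
        while id1 and id2:  # stop when either file is exhausted
            if id1 == id2:
                matches.append(id1)  # record the match
                id1 = f1.readline().strip()
                id2 = f2.readline().strip()
            elif id1 < id2:
                id1 = f1.readline().strip()
            else:
                id2 = f2.readline().strip()
    return matches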

Handle thorn delimiter in pig

My source is a log file having "þ" as the delimiter. I am trying to read this file in Pig. Please look at the options I tried.
Option 1 :
Using PigStorage("þ") - this doesn't work out, as it can't handle unicode characters.
Option 2 :
I tried reading the lines as strings and splitting each line with "þ". This also doesn't work out, as STRSPLIT left out the last field because it has "\n" at the end.
I can see multiple questions on the web, but I am unable to find a solution.
Kindly direct me with this.
Thorn Details :
http://www.fileformat.info/info/unicode/char/fe/index.htm
Is this the solution you are expecting?
input.txt:
helloþworldþhelloþworld
helloþworldþhelloþworld
helloþworldþhelloþworld
helloþworldþhelloþworld
helloþworldþhelloþworld
PigScript:
A = LOAD 'input.txt' as line;
B = FOREACH A GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'(.*)þ(.*)þ(.*)þ(.*)'));
dump B;
Output:
(hello,world,hello,world)
(hello,world,hello,world)
(hello,world,hello,world)
(hello,world,hello,world)
(hello,world,hello,world)
Added 2nd option with different datatypes:
input.txt
helloþ1234þ1970-01-01T00:00:00.000+00:00þworld
helloþ4567þ1990-01-01T00:00:00.000+00:00þworld
helloþ8901þ2001-01-01T00:00:00.000+00:00þworld
helloþ9876þ2014-01-01T00:00:00.000+00:00þworld
PigScript:
A = LOAD 'input.txt' as line;
B = FOREACH A GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'(.*)þ(.*)þ(.*)þ(.*)')) as (f1:chararray,f2:long,f3:datetime,f4:chararray);
DUMP B;
DESCRIBE B;
Output:
(hello,1234,1970-01-01T00:00:00.000+00:00,world)
(hello,4567,1990-01-01T00:00:00.000+00:00,world)
(hello,8901,2001-01-01T00:00:00.000+00:00,world)
(hello,9876,2014-01-01T00:00:00.000+00:00,world)
B: {f1: chararray,f2: long,f3: datetime,f4: chararray}
Another thorn symbol, Ã¾ (the two-character sequence you get when þ is mis-decoded):
input.txt
1077Ã¾04-01-2014þ04-30-2014þ0þ0.0þ0
1077Ã¾04-01-2014þ04-30-2014þ0þ0.0þ0
1077Ã¾04-01-2014þ04-30-2014þ0þ0.0þ0
PigScript:
A = LOAD 'jinput.txt' as line;
B = FOREACH A GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'(.*)Ã¾(.*)þ(.*)þ(.*)þ(.*)þ(.*)')) as (f1:long,f2:datetime,f3:datetime,f4:int,f5:double,f6:int);
DUMP B;
describe B;
Output:
(1077,04-01-2014,04-30-2014,0,0.0,0)
(1077,04-01-2014,04-30-2014,0,0.0,0)
(1077,04-01-2014,04-30-2014,0,0.0,0)
B: {f1: long,f2: datetime,f3: datetime,f4: int,f5: double,f6: int}
This should work (replace the unicode code point with the one that's working for you, this is for capital thorn):
A = LOAD 'input' USING TextLoader() AS (f1:chararray);
B = FOREACH A GENERATE STRSPLIT(f1, '\\u00DE', -1);
I don't see why the last field should be left out.
Somehow, this does not work:
A = LOAD 'input' USING PigStorage('\u00DE');
