Merging two part files in Hadoop with the header on only the first line

How can I merge two or more part files in Hadoop into a single file, such that the merged output contains all the data but only one header, on the first line of the output?
File 1
column1|column2|column3
20000|newyork|john
30000|sydney|joseph
File n
column1|column2|column3
60000|delhi|mike
30000|sydney|joseph
Merged output should be
column1|column2|column3
20000|newyork|john
30000|sydney|joseph
60000|delhi|mike
30000|sydney|joseph
Is there an easy way to do this using the hadoop fs -cat command, or by any other method?

Method 1:
Keeping exactly one header on top is fairly complicated without creating an index or rank, since in Pig a collection of tuples is unordered. Here's what a Pig job looks like, using RANK and ORDER BY to place the header on top.
header_ranked.pig
HEADER = LOAD 'header.txt' USING PigStorage('|') AS (b0:int,b1:chararray,b2:chararray,b3:chararray);
H1 = LOAD 'header_test' USING PigStorage('|') AS (c1:chararray,c2:chararray,c3:chararray);
F_H1 = FILTER H1 BY NOT (c1 MATCHES 'column1' AND c2 MATCHES 'column2' AND c3 MATCHES 'column3');
R_H1 = RANK F_H1 by c1 DESC DENSE;
U = UNION R_H1, HEADER;
O = ORDER U by rank_F_H1;
F = FOREACH O GENERATE c1,c2,c3;
dump F;
The two sample files, each containing 2 records and a header line, were placed in a directory called header_test. Additionally, in order for this program to work, I had to create a header file in the following format:
header.txt
0|column1|column2|column3
Walking through the code: the file containing the header (slightly modified to include an additional column, which is the rank value of 0) is loaded into the HEADER alias.
Next, the actual data is loaded into the H1 alias; the load grabs all files under the header_test directory.
F_H1 filters out all headers from the data. If you had 20 files that were loaded into H1 from the header_test directory, those 20 headers would now be filtered out of the data.
R_H1 creates a rank on the filtered data, in descending order and without skipping any numbers.
U effectively concatenates the ranked filtered data with the 0|column1|column2|column3 header line.
O orders the data by the rank, so that the header (which has a rank of 0) appears on top.
And finally, F gets rid of the ranking, leaving the clean tuples.
Results
(column1,column2,column3)
(60000,delhi,mike)
(30000,sydney,joseph)
(30000,sydney,joseph)
(20000,newyork,john)
Method 2:
Basically, leave the header on one file, strip it from the rest, and then mash them together. Not sure it'll stay sorted, though; I haven't tested it thoroughly.
H1 = LOAD 'header_test/header1.txt' USING PigStorage('|') AS (c1:chararray,c2:chararray,c3:chararray);
H2 = LOAD 'header_test/header2.txt' USING PigStorage('|') AS (d1:chararray,d2:chararray,d3:chararray);
F_H2 = FILTER H2 BY NOT (d1 MATCHES 'column1' AND d2 MATCHES 'column2' AND d3 MATCHES 'column3');
U = UNION H1, F_H2;
dump U;
Results
(column1,column2,column3)
(20000,newyork,john)
(30000,sydney,joseph)
(60000,delhi,mike)
(30000,sydney,joseph)
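As for the hadoop fs -cat idea from the original question: a minimal shell sketch (untested at scale; the output path and the exact header line are assumptions) would concatenate all part files and keep only the first header:
hadoop fs -cat /path/to/header_test/part-* | awk 'NR==1 || $0 != "column1|column2|column3"' > merged.txt
The awk filter always prints the first line (the first file's header) and drops every later line that exactly matches the header.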

Related

File diff of large files

I need to code this task in Java.
I have 2 large files, around 5GB each, containing text data in multiple rows. Each row is a line of comma-separated fields, for example "name,empId,designation,address,...", with up to 30 fields. I need to read these 2 files and write the records to another file with an additional field which specifies whether the given data row is Changed, Not Changed, Added, or Deleted.
For example
File1
Tom,E100,Engineer
Rick,E200,Engineer
File2
Tom,E100,Manager
Paul,E300,Clerk
ResultFile
Tom,E100,Manager,Changed
Paul,E300,Clerk,Added
Rick,E200,Engineer,Deleted
The approach I used is to create a map from the data of file1, using empId as the key and the entire data row as the value (assuming empId is unique), and then read each record from file2 and check it against the data in the map (I am not reading the entire content of file2 into memory, only file1, to create the map). I am using BufferedReader/BufferedWriter for reading and writing.
This approach works fine, but only for small data files. Given data files that run into GBs, my program runs out of memory very quickly while trying to create the map.
What would be the right approach to achieve this task both in terms of memory and speed of execution?
Thanks,
LX
A different approach could be to do an external sort on each file based on the key, and then iterate them in parallel.
High level pseudo code:
sort(file1)
sort(file2)
iter1 = file1.begin()
iter2 = file2.begin()
while (iter1 != file1.end() && iter2 != file2.end()):
    element1 = iter1.getElement()
    element2 = iter2.getElement()
    if element1.key() == element2.key():
        // same element, check if changed or not changed
        iter1 = iter1.next()
        iter2 = iter2.next()
    else if element1.key() < element2.key():
        // element1 is not in file2, so it is removed
        iter1 = iter1.next()
    else:
        // element2 is in file2 but not in file1, so it's added
        iter2 = iter2.next()
while (iter1 != file1.end()):
    element1 = iter1.getElement()
    // element1 is removed
    iter1 = iter1.next()
while (iter2 != file2.end()):
    element2 = iter2.getElement()
    // element2 is added
    iter2 = iter2.next()
This requires sorting, which can be done with a small memory footprint using an external sort, and the subsequent loops also use a constant amount of memory.
Complexity is O(m log m + n log n), where m and n are the sizes of the files.
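Since the question asks for Java, here is a minimal sketch of the merge phase, assuming both files were already externally sorted by empId with the same byte-wise ordering (e.g. LC_ALL=C sort -t, -k2,2); the file names and the position of the empId field are illustrative assumptions:
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

// Merge-join over two files pre-sorted by empId.
// Assumes lines are "name,empId,designation,..." with empId unique within a file.
public class SortedDiff {

    // empId is assumed to be the second comma-separated field
    private static String key(String line) {
        return line.split(",", -1)[1];
    }

    public static void main(String[] args) throws IOException {
        try (BufferedReader r1 = new BufferedReader(new FileReader("file1.sorted"));
             BufferedReader r2 = new BufferedReader(new FileReader("file2.sorted"));
             BufferedWriter out = new BufferedWriter(new FileWriter("result.txt"))) {
            String l1 = r1.readLine();
            String l2 = r2.readLine();
            while (l1 != null && l2 != null) {
                int cmp = key(l1).compareTo(key(l2));
                if (cmp == 0) {
                    // same empId in both files: changed or not changed
                    out.write(l2 + (l1.equals(l2) ? ",Not Changed" : ",Changed"));
                    out.newLine();
                    l1 = r1.readLine();
                    l2 = r2.readLine();
                } else if (cmp < 0) {
                    // empId only in file1: deleted
                    out.write(l1 + ",Deleted");
                    out.newLine();
                    l1 = r1.readLine();
                } else {
                    // empId only in file2: added
                    out.write(l2 + ",Added");
                    out.newLine();
                    l2 = r2.readLine();
                }
            }
            // drain whichever file still has records
            for (; l1 != null; l1 = r1.readLine()) { out.write(l1 + ",Deleted"); out.newLine(); }
            for (; l2 != null; l2 = r2.readLine()) { out.write(l2 + ",Added"); out.newLine(); }
        }
    }
}
Each pass holds only one line per file in memory, so memory use stays constant regardless of file size.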

Pig script to split a large txt file into parts based on a specified word

I am trying to build a Pig script that takes in a textbook file, divides it into chapters, and then compares the words in each chapter, returning only the words that show up in all chapters, along with their counts. The chapters are delimited fairly easily by CHAPTER - X.
Here's what I have so far:
lines = LOAD '../../Alice.txt' AS (line:chararray);
lineswithoutspecchars = FOREACH lines GENERATE REPLACE(line,'([^a-zA-Z\\s]+)','') as line;
words = FOREACH lineswithoutspecchars GENERATE FLATTEN(TOKENIZE(line)) as word;
grouped = GROUP words BY word;
wordcount = FOREACH grouped GENERATE group, COUNT(words);
DUMP wordcount;
Sorry that this question is probably way too simple compared to what I normally ask on Stack Overflow; I googled around for it, but perhaps I am not using the correct keywords. I am brand new to Pig and trying to learn it for a new job assignment.
Thanks in advance!
A bit lengthy, but you will get the result. You could cut down unnecessary relations based on your file, though. I have provided appropriate comments in the script.
Input File:
Pig does not know whether integer values in baseball are stored as ASCII strings, Java
serialized values, binary-coded decimal, or some other format. So it asks the load func-
tion, because it is that function’s responsibility to cast bytearrays to other types. In
general this works nicely, but it does lead to a few corner cases where Pig does not know
how to cast a bytearray. In particular, if a UDF returns a bytearray, Pig will not know
how to perform casts on it because that bytearray is not generated by a load function.
CHAPTER - X
In a strongly typed computer language (e.g., Java), the user must declare up front the
type for all variables. In weakly typed languages (e.g., Perl), variables can take on values
of different type and adapt as the occasion demands.
CHAPTER - X
In this example, remember we are pretending that the values for base_on_balls and
ibbs turn out to be represented as integers internally (that is, the load function con-
structed them as integers). If Pig were weakly typed, the output of unintended would
be records with one field typed as an integer. As it is, Pig will output records with one
field typed as a double. Pig will make a guess and then do its best to massage the data
into the types it guessed.
Pig Script:
A = LOAD 'file' AS (line:chararray);
B = FOREACH A GENERATE REPLACE(line,'([^a-zA-Z\\s]+)','') AS line;
-- We need to split on CHAPTER X, but the above load gives us a tuple for each newline,
-- so group everything and convert that bag to a string, which gives a single tuple with _ as the delimiter.
C = GROUP B ALL;
D = FOREACH C GENERATE BagToString(B) AS (line:chararray);
-- Now we don't have any commas, so convert our delimiter CHAPTER X to a comma. We do this because
-- if we pass this to TOKENIZE, it splits the string into separate columns, which is useful for RANK.
E = FOREACH D GENERATE REPLACE(line,'_CHAPTER X_',',') AS (line:chararray);
F = FOREACH E GENERATE REPLACE(line,'_',' ') AS (line:chararray); -- remove the delimiter created by BagToString
-- Create separate columns.
G = FOREACH F GENERATE FLATTEN(TOKENIZE(line,',')) AS (line:chararray);
-- We need to rank each chapter, which makes counting each word per chapter easy.
H = RANK G;
J = FOREACH H GENERATE rank_G,FLATTEN(TOKENIZE(line)) AS (line:chararray);
J1 = GROUP J BY (rank_G, line);
J2 = FOREACH J1 GENERATE COUNT(J) AS (cnt:long),FLATTEN(group.line) AS (word:chararray),FLATTEN(group.rank_G) AS (rnk:long);
-- J2 now has no duplicate word within a chapter.
-- So if we group by word and filter for a count greater than 2, we can be sure the word is present in all 3 chapters.
J3 = GROUP J2 BY word;
J4 = FOREACH J3 GENERATE SUM(J2.cnt) AS (sumval:long),COUNT(J2) AS (cnt:long),FLATTEN(group) AS (word:chararray);
J5 = FILTER J4 BY cnt > 2;
J6 = FOREACH J5 GENERATE word,sumval;
dump J6;
-- Result in the order word,count across chapters.
Output:
(a,8)
(In,5)
(as,6)
(the,9)
(values,4)

Key renumbering in MapReduce

I am new to Hadoop and I am working on a program where the input of the map function is a file whose keys are like this:
ID: value:
3 sd
37 g
5675 gk
68 oi
My file is about 10 gigabytes and I want to change these IDs, renumbering them in descending order. I don't want to change the values.
My output must be like this:
ID: value:
5675 sd
68 g
37 gk
3 oi
I want to do this work on a cluster of nodes. How can I do that?
I think I need a global variable, and I can't have one in a cluster. What can I do?
You can do one map/reduce to sort the IDs; then you'd have a file with the IDs in descending order.
You can then write a second map/reduce that joins that file with the unsorted file, where each mapper emits an enumerator (which can be calculated from the split size, to facilitate multiple maps). The mapper that goes over the first file will emit "1 sd", "2 g", etc., and the mapper that processes the IDs file will emit "1 5675", "2 68". The reducer will then join the files.
Here's an (untested) Pig 0.11 script that does something along these lines:
A = load 'data' AS (id:chararray,value:chararray);
ID_RAW= FOREACH A GENERATE id;
DATA_RAW = FOREACH A GENERATE value;
ID_SORT= RANK ID_RAW BY id DESC DENSE;
DATA_SORT = RANK DATA_RAW DENSE;
ID_DATA = JOIN ID_SORT by $0, DATA_SORT by $0;
RESULT = FOREACH ID_DATA GENERATE ID_SORT::id,DATA_SORT::value;
STORE RESULT INTO 'output';
Before I say this, I like Arnon's answer for using Hadoop.
But since this is a small file (10G is not that big) and you only need to run it once, I would personally just write a small script.
Assuming a tab-delimited file (note the numeric, reversed sort on the ID column, to get descending order):
sort -k1,1nr myfile.txt > myfile.sorted.txt
paste myfile.sorted.txt myfile.txt | cut -f1,4 > newFile.txt
That might take a long time, certainly longer than using Hadoop, but it is simple and it works.

Apache Pig not parsing a tuple fully

I have a file called data that looks like this: (note there are tabs after the 'personA')
personA (1, 2, 3)
personB (2, 1, 34)
And I have an Apache pig script like this:
A = LOAD 'data' AS (name: chararray, nodes: tuple(a:int, b:int, c:int));
C = foreach A generate nodes.$0;
dump C;
The output of which makes sense:
(1)
(2)
However if I change the schema of the script to be like this:
A = LOAD 'data' AS (name: chararray, nodes: tuple());
C = foreach A generate nodes.$0;
dump C;
Then the output I get is this:
(1, 2, 3)
(2, 1, 34)
It looks like the first (and only) element in this tuple is a bytearray; i.e., it's not parsing the input text 1, 2, 3 into a tuple.
In the future, my input will have an unknown and variable number of elements in the nodes item, so I can't just write out a:int, ….
Is there any way to get Pig to parse the input tuple as a tuple without having to write out the full schema?
Pig does not accept what you are passing in as valid. The default load function, PigStorage, only accepts delimited files (tab-delimited by default). It is not smart enough to parse the tuple construct with the parentheses and commas you have in the text. Your options are:
Reformat your file to be tab delimited: personA 1 2 3
Read the file in line by line with TextLoader, then write some sort of UDF that parses the line and returns the data in the form you want.
Write your own custom loader.
This is no longer a limitation. Pig parses the tuples in the input file, treating the comma as the field separator. I tried it in Apache Pig version 0.15.0.
A = LOAD 'data' AS (name: chararray, nodes: tuple());
C = foreach A generate nodes.$0;
dump C;
Output I get is:
(1)
(2)
Here is another way of tackling this issue, although I know the answers above are more efficient.
data = LOAD 'data' USING PigStorage() AS (name:chararray, field2:chararray);
data = FOREACH data GENERATE name, REPLACE(REPLACE(field2, '\\(',''),'\\)','') AS field2;
data = FOREACH data GENERATE name, STRSPLIT(field2, '\\,') AS fieldTuple;
data = FOREACH data GENERATE name, fieldTuple.$0,fieldTuple.$1, fieldTuple.$2 ;
1) Load field2 as chararray
2) Remove the parentheses
3) Split field2 by comma (this gives you a tuple with 3 fields in it)
4) Get the values by index
I know it is hacky; just wanted to provide another way of doing this.

How can I use the map datatype in Apache Pig?

I'd like to use Apache Pig to build a large key -> value mapping, look things up in the map, and iterate over the keys. However, there does not even seem to be syntax for doing these things; I've checked the manual, wiki, sample code, Elephant book, Google, and even tried parsing the parser source. Every single example loads map literals from a file... and then never uses them. How can you use Pig's maps?
First, there doesn't seem to be a way to load a 2-column CSV file into a map directly. If I have a simple map.csv:
1,2
3,4
5,6
And I try to load it as a map:
m = load 'map.csv' using PigStorage(',') as (M: []);
dump m;
I get three empty tuples:
()
()
()
So I try to load tuples and then generate the map:
m = load 'map.csv' using PigStorage(',') as (key:chararray, val:chararray);
b = foreach m generate [key#val];
ERROR 1000: Error during parsing. Encountered " "[" "[ "" at line 1, column 24.
...
Many variations on the syntax also fail (e.g., generate [$0#$1]).
OK, so I munge my map into Pig's map literal format as map.pig:
[1#2]
[3#4]
[5#6]
And load it up:
m = load 'map.pig' as (M: []);
Now let's load up some keys and try lookups:
k = load 'keys.csv' as (key);
dump k;
3
5
1
c = foreach k generate m#key; /* Or m[key], or... what? */
ERROR 1000: Error during parsing. Invalid alias: m in {M: map[ ]}
Hrm, OK, maybe since there are two relations involved, we need a join:
c = join k by key, m by /* ...um, what? */ $0;
dump c;
ERROR 1068: Using Map as key not supported.
c = join k by key, m by m#key;
dump c;
Error 1000: Error during parsing. Invalid alias: m in {M: map[ ]}
Fail. How do I refer to the key (or value) of a map? The map schema syntax doesn't seem to let you even name the key and value (the mailing list says there's no way to assign types).
Finally, I'd just like to be able to find all the keys in my map:
d = foreach m generate ...oh, forget it.
Is Pig's map type half-baked? What am I missing?
Currently, Pig maps need the key to be a chararray (string) constant that you supply, not a variable which contains a string. So in map#key, the key has to be a constant string that you supply (e.g. map#'keyvalue').
The typical use case for this is to load a complex data structure in which one of the elements is a key/value pair; then, later, in a FOREACH statement, you can refer to a particular value based on the key you are interested in.
http://pig.apache.org/docs/r0.9.1/basic.html#map-schema
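As a tiny illustration of the constant-key rule (the file and field names here are made up):
requests = LOAD 'requests.txt' AS (headers:map[]);
ips = FOREACH requests GENERATE headers#'clientIp'; -- constant key: allowed
-- bad = FOREACH requests GENERATE headers#somekey; -- variable key: not allowed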
In Pig version 0.10.0 there is a new function available called "TOMAP" (http://pig.apache.org/docs/r0.10.0/func.html#tomap) that converts its odd (chararray) parameters to keys and its even parameters to values. Unfortunately, I haven't found it to be that useful, since I typically deal with arbitrary dicts of varying lengths and keys.
I would find a TOMAP function that took a tuple as a single argument, instead of a variable number of parameters, to be much more useful.
This isn't a complete solution to your problem, but the availability of TOMAP gives you some more options for constructing a real solution.
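For what it's worth, a minimal sketch of TOMAP on the map.csv from the question; it builds a single-entry map per row, so it mainly demonstrates the construction (each row's map contains only its own key):
kv = LOAD 'map.csv' USING PigStorage(',') AS (key:chararray, val:chararray);
maps = FOREACH kv GENERATE TOMAP(key, val) AS M:map[];
v = FOREACH maps GENERATE M#'1'; -- constant-key lookup: 2 for the (1,2) row, null elsewhere
dump v;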
Great question!
I personally do not like maps in Pig. They have a place in traditional programming languages like Java, C#, etc., where it's really handy and fast to look up a key in a map. On the other hand, maps in Pig have very limited features.
As you rightly pointed out, one cannot look up a variable key in a map in Pig. The key needs to be a constant, e.g. myMap#'keyFoo' is allowed but myMap#$SOME_VARIABLE is not.
If you think about it, you do not need a map in Pig. One usually loads data from some source, transforms it, joins it with some other dataset, filters it, transforms it, and so on. JOIN actually does a good job of looking up variable keys in the data.
e.g. data1 has 2 columns, A and B, and data2 has 3 columns, X, Y, and Z. If you join data1 BY A with data2 BY Z, JOIN does the work of a map (in the traditional-language sense), mapping the value of column Z to the value of column B (via column A). So data1 essentially represents a map A -> B.
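A sketch of that JOIN-as-lookup idea, with the hypothetical data1/data2 from the paragraph above:
data1 = LOAD 'data1' AS (A:chararray, B:chararray);
data2 = LOAD 'data2' AS (X:chararray, Y:chararray, Z:chararray);
joined = JOIN data2 BY Z, data1 BY A; -- look up each Z "key" to retrieve its B "value"
result = FOREACH joined GENERATE X, Y, data1::B;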
So why do we need Map in Pig?
Usually, Hadoop data are dumps of different data sources from traditional languages. If the original data sources contain maps, the HDFS data will contain corresponding maps.
How can one handle the Map data?
There are really 2 use cases:
Map keys are constants.
e.g. HTTP request header data contains time, server, and clientIp as the keys in the map. To access the value of a particular key, one can access it with a constant key, e.g. header#'clientIp'.
Map keys are variables.
In these cases, you would most probably want to JOIN the map keys with some other data set. I usually convert the map to a bag using a MapToBag UDF, which converts map data into a bag of 2-field tuples (key, value). Once the map data is converted to a bag of tuples, it's easy to join it with other data sets; a sketch follows below.
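Since MapToBag is the answerer's own UDF rather than a Pig builtin, the following is only a hypothetical sketch of that pipeline's shape:
-- hypothetical: REGISTER the jar providing the MapToBag UDF first
data = LOAD 'requests.txt' AS (headers:map[]);
pairs = FOREACH data GENERATE FLATTEN(MapToBag(headers)) AS (key:chararray, value:chararray);
lookups = LOAD 'keys.txt' AS (k:chararray);
joined = JOIN pairs BY key, lookups BY k; -- variable-key lookup via JOIN instead of headers#k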
I hope this helps.
1) If you want to load map data, it should look like "[programming#SQL,rdbms#Oracle]"
2) If you want to load tuple data, it should look like "(first_name_1234,middle_initial_1234,last_name_1234)"
3) If you want to load bag data, it should look like "{(project_4567_1),(project_4567_2),(project_4567_3)}"
My file pigtest.csv looks like this:
1234|emp_1234#company.com|(first_name_1234,middle_initial_1234,last_name_1234)|{(project_1234_1),(project_1234_2),(project_1234_3)}|[programming#SQL,rdbms#Oracle]
4567|emp_4567#company.com|(first_name_4567,middle_initial_4567,last_name_4567)|{(project_4567_1),(project_4567_2),(project_4567_3)}|[programming#Java,OS#Linux]
My schema:
a = LOAD 'pigtest.csv' using PigStorage('|') AS (employee_id:int, email:chararray, name:tuple(first_name:chararray, middle_name:chararray, last_name:chararray), project_list:bag{project: tuple(project_name:chararray)}, skills:map[chararray]) ;
b = FOREACH a GENERATE employee_id, email, name.first_name, project_list, skills#'programming' ;
dump b;
I think you need to think in terms of relations, where the map is just one field of one record. Then you can apply operations on the relations, like joining the two sets, data and mapping:
Input
$ cat data.txt
1
2
3
4
5
$ cat mapping.txt
1 2
2 4
3 6
4 8
5 10
Pig
mapping = LOAD 'mapping.txt' AS (key:CHARARRAY, value:CHARARRAY);
data = LOAD 'data.txt' AS (value:CHARARRAY);
-- list keys
mapping_keys =
FOREACH mapping
GENERATE key;
DUMP mapping_keys;
-- join mapping to data
mapped_data =
JOIN mapping BY key, data BY value;
DUMP mapped_data;
Output
> # keys
(1)
(2)
(3)
(4)
(5)
> # mapped data
(1,2,1)
(2,4,2)
(3,6,3)
(4,8,4)
(5,10,5)
This answer could also help you if you just want to do a simple look up:
pass-a-relation-to-a-pig-udf-when-using-foreach-on-another-relation
You can load up any data, then convert and store it in key/value format, and read it back later:
data = LOAD 'somedata.csv' USING PigStorage(',');
STORE data INTO 'folder' USING PigStorage('#');
Then read it back as map data.
