Map job which splits file into small ones and generates names of these files on reduce stage - hadoop

Given a big file A, I need to iterate over the records of that file and for each record:
extract the value of a certain field (status)
add the record to a file named "status_" + value
emit that status value to the reducer
so the output would contain a set of files with records grouped by status, and a file with the list of statuses.
Ideally, it should place the per-status files under 'output_dir/statuses/status_nnn' (where nnn is the actual status value),
and 'output_dir/status_list' would contain the statuses, one per line.
Is it possible to do this with Hadoop? I found out how to generate a filename per record with this example, but I'm not sure how to separate the records and enumerate the statuses.
I don't know in advance which statuses could be in those records.

In the map phase you can do 2 emits per record: <status, record> and <list_statuses, status>. The 'list_statuses' must be a unique key you choose in advance. Then in the reduce phase your behaviour depends on the key: if it equals your special key, you emit the file with the statuses (this reduce call can store all statuses in a Set, for example); otherwise you generate the <status, record> file.
Does this make sense to you?
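Here is a minimal sketch of that two-emit approach using the new MapReduce API and MultipleOutputs to write the per-status files; the comma delimiter, the status field position, the sentinel key and the output names are assumptions for illustration:

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class StatusSplit {

    // Sentinel key chosen in advance; assumed not to collide with a real status.
    public static final String LIST_KEY = "__status_list__";

    public static class StatusMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String status = line.toString().split(",")[2];    // assumed: status is the 3rd field
            ctx.write(new Text(status), line);                 // emit <status, record>
            ctx.write(new Text(LIST_KEY), new Text(status));   // emit <list_statuses, status>
        }
    }

    public static class StatusReducer extends Reducer<Text, Text, NullWritable, Text> {
        private MultipleOutputs<NullWritable, Text> mos;

        @Override
        protected void setup(Context ctx) {
            mos = new MultipleOutputs<>(ctx);
        }

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            if (LIST_KEY.equals(key.toString())) {
                Set<String> seen = new HashSet<>();            // dedupe the statuses
                for (Text v : values) {
                    if (seen.add(v.toString())) {
                        mos.write(NullWritable.get(), v, "status_list");
                    }
                }
            } else {
                for (Text v : values) {
                    mos.write(NullWritable.get(), v, "statuses/status_" + key);
                }
            }
        }

        @Override
        protected void cleanup(Context ctx) throws IOException, InterruptedException {
            mos.close();
        }
    }
}

Hadoop appends the usual part suffix to each base path, so the files come out as output_dir/statuses/status_nnn-r-00000 and output_dir/status_list-r-00000.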

Related

Spring Batch, read whole csv file before reading line by line

I want to read a CSV file, enrich each row with some data from an external system and then write the new enriched CSV to some directory.
Now, to get the data from the external system, I need to pass each row one by one and get the new columns back.
But to query the external system with each row, I need to pass a value which I first have to get from the external system by sending all the values of a particular column.
e.g. my CSV file is:
name, value, age
10,v1,12
11,v2,13
So to enrich it, I first need to compute the total age, i.e. 12 + 13, fetch a value for that total from the external system, and then send that total with each row to the external system to get the enriched value.
I am doing it using Spring Batch, but with FlatFileItemReader I can read only one line at a time. How would I refer to the whole column before that?
Please help.
Thanks
There are two ways to do this.
OPTION 1
Go for this option if you are okay with storing all the records in memory. It totally depends on how many records you need to calculate the total age.
Reader (custom reader):
Write the logic to read one line at a time.
Return null from read() only when all the lines needed to calculate the total age have been read (see the sketch below).
NOTE: a reader loops on the read() method until it returns null.
Processor: you will get the full list of records; calculate the total age.
Connect to the external system and get the value. Form the records which need to be written and return them from the process method.
NOTE: you can return all the records modified with the new field, or merge them into a single record. What you do here is entirely your choice.
Writer: write the records.
OPTION 2
Go for this if option 1 is not feasible.
Step 1: read all the lines, calculate the total age and pass the value to the next step.
Step 2: read all the lines again, apply the required update to each record and write them out.
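A rough sketch of OPTION 1's custom reader, which hands the whole file to the processor in one go; the Record type, the input file name and the CSV layout follow the example above and are otherwise assumptions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.item.ItemReader;

class Record {
    String name;
    String value;
    int age;
}

public class WholeFileReader implements ItemReader<List<Record>> {

    private boolean alreadyRead = false;

    @Override
    public List<Record> read() throws Exception {
        if (alreadyRead) {
            return null;                      // second call: signal "no more items"
        }
        alreadyRead = true;

        List<Record> records = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader("input.csv"))) {
            br.readLine();                    // skip the header line
            String line;
            while ((line = br.readLine()) != null) {
                String[] parts = line.split(",");
                Record r = new Record();
                r.name = parts[0].trim();
                r.value = parts[1].trim();
                r.age = Integer.parseInt(parts[2].trim());
                records.add(r);
            }
        }
        return records;                       // the processor receives the full list
    }
}

An ItemProcessor<List<Record>, List<Record>> can then sum the ages, call the external system once for the total, enrich every record, and pass the whole list on to the writer.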

Pentaho Data Integration (DI) Get Last File in a Directory of a SFTP Server

I am doing a transformation in Pentaho Data Integration and I have a list of files in a directory of my SFTP server. These files are named in the FILE_YYYYMMDDHHIISS.txt format; my directory looks like this:
mydirectory
FILE_20130701090000.txt
FILE_20130701170000.txt
FILE_20130702090000.txt
FILE_20130702170000.txt
FILE_20130703090000.txt
FILE_20130703170000.txt
My problem is that I need to get the last file of this list according to its creation date, to pass it to another transformation step...
How can I do this in Pentaho Data Integration?
In fact this is quite simple because your file names can be sorted textually, and the max of the sorted list will be your most recent file.
Since the list of files is likely short, you can use a Memory Group by step. A grouping step needs a separate column by which to aggregate. If you only have one column and you want to find the max over the entire set, you can add a grouping column with an Add Constants step, configured to add a column containing, say, the integer 1 in every row.
Configure your Memory Group by to group on the column of 1s, and use the file name column as the subject. Then simply select the Maximum grouping type. This will produce a single row with your grouping column, the file name field removed, and the aggregate column containing your max file name.
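Outside of PDI, the same "textual max = newest file" idea looks like this in plain Java, just to make the sorting argument concrete; the file names follow the listing above:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LatestFile {
    public static void main(String[] args) {
        List<String> files = Arrays.asList(
                "FILE_20130701090000.txt",
                "FILE_20130703170000.txt",
                "FILE_20130702090000.txt");

        // The timestamp is zero-padded and ordered year -> second, so
        // lexicographic order matches chronological order.
        String latest = Collections.max(files);
        System.out.println(latest);   // FILE_20130703170000.txt
    }
}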

Hadoop map-reduce : Order of records while grouping

I have a record in each line of input and each record has around 10 fields. First, I group the records by three fields (field1, field2, field3), so one mapper/reducer is responsible for one unique group (based on the three fields). Within each group, I sort the records by another integer field, timestamp, and I tag each record in the group with the same tag aTag by adding another field.
Let's say that in mapper#1, I tag a sorted group as aTag and in mapper#2, I tag another group (a different group, because I initially grouped the records based on the three fields) with the same tag aTag.
Now, if I group the records based on the tag field (i.e., grouping the groups from different mappers), I notice that the ordering within each group is no longer preserved. I was expecting that since each mapper has a group with all records having the same tag, grouping by the tag name should just involve fetching the relevant groups from the other mappers and concatenating them without re-ordering each individual group.
Is it because I am trying to store the records in gzip format and hence it tries to re-order the records for better compression? Also I would like to know how to preserve the order after grouping by the tag name.
It seems that you are trying to implement the sort step of MapReduce yourself in local memory, but the framework then completely ignores what you did and re-sorts the items in each group anyway. The proper way to fix this is to specify a comparator on the keys, so that within each partition the merged input to the reducer is ordered according to that comparison function. This means that:
You don't have to do the sorting yourself
You don't run out of memory on one machine trying to sort a really large group.
In your case, you'd want to add the timestamp to the set of keys, tell it to partition on the first three fields, and tell it to sort on the timestamp.
For more information, see Where is Sort used in MapReduce phase and why?
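A minimal sketch of that setup, assuming a Text composite key of the form "field1|field2|field3|timestamp" (the delimiter, the class names and the Text value type are assumptions for illustration):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Partitioner;

public class SecondarySortParts {

    // Everything before the last '|' is the "natural" grouping key (field1|field2|field3).
    static String naturalKey(Text key) {
        String s = key.toString();
        return s.substring(0, s.lastIndexOf('|'));
    }

    // Partition on the natural key only, so a whole group lands on one reducer.
    public static class NaturalKeyPartitioner extends Partitioner<Text, Text> {
        @Override
        public int getPartition(Text key, Text value, int numPartitions) {
            return (naturalKey(key).hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

    // Group reduce() calls on the natural key, ignoring the trailing timestamp.
    public static class NaturalKeyGroupingComparator extends WritableComparator {
        protected NaturalKeyGroupingComparator() {
            super(Text.class, true);
        }

        @Override
        public int compare(WritableComparable a, WritableComparable b) {
            return naturalKey((Text) a).compareTo(naturalKey((Text) b));
        }
    }
}

With the default Text sort order, keys inside a group arrive at the reducer ordered by the trailing timestamp as long as it is zero-padded to a fixed width. The job is then wired up with job.setPartitionerClass(SecondarySortParts.NaturalKeyPartitioner.class) and job.setGroupingComparatorClass(SecondarySortParts.NaturalKeyGroupingComparator.class); a custom sort comparator can be supplied via job.setSortComparatorClass(...) if the default ordering is not what you want.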

retrieving unique results from a column in a text file with Hadoop MapReduce

I have the data set below. I want to get a unique list of the first column as the output: {9719, 382, ..}. There are integers at the end of each line, so checking whether a token starts and ends with a number is not enough, and I couldn't think of a solution. Can you show me how to do it? I'd really appreciate it if you show it in detail (what to do in the map step and what to do in the reduce step).
id - - [date] "URL"
In your mapper you should parse each line and write out the token that you are interested in from the beginning of the line (e.g. 9719) as the Key in a Key-Value pair (the Value is irrelevant in this case). Since the keys are sorted and grouped before they reach the reducer, all you need to do in the reducer is emit each unique key once; the values can be ignored.
The WordCount example app that is packaged with Hadoop is very close to what you need.
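A minimal sketch of that job; the whitespace split and the log layout shown above are the only assumptions:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class UniqueIds {

    public static class IdMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String id = line.toString().split("\\s+")[0];   // first token of the line
            ctx.write(new Text(id), NullWritable.get());
        }
    }

    public static class IdReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        protected void reduce(Text id, Iterable<NullWritable> values, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(id, NullWritable.get());              // one line per unique id
        }
    }
}

The same reducer class can also be registered as a combiner to cut down the amount of data shuffled.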

Mapping through two data sets with Hadoop

Suppose I have two key-value data sets--Data Sets A and B, let's call them. I want to update all the data in Set A with data from Set B where the two match on keys.
Because I'm dealing with such large quantities of data, I'm using Hadoop to MapReduce. My concern is that to do this key matching between A and B, I need to load all of Set A (a lot of data) into the memory of every mapper instance. That seems rather inefficient.
Would there be a recommended way to do this that doesn't require repeating the work of loading in A every time?
Some pseudocode to clarify what I'm currently doing:
Load in Data Set A        # This seems like the expensive step to always be doing
For each key/value in Data Set B:
    If key is in Data Set A:
        Update Data Set A
According to the documentation, the MapReduce framework includes the following steps:
Map
Sort/Partition
Combine (optional)
Reduce
You've described one way to perform your join: loading all of Set A into memory in each Mapper. You're correct that this is inefficient.
Instead, observe that a large join can be partitioned into arbitrarily many smaller joins if both sets are sorted and partitioned by key. MapReduce sorts the output of each Mapper by key in step (2) above. Sorted Map output is then partitioned by key, so that one partition is created per Reducer. For each unique key, the Reducer will receive all values from both Set A and Set B.
To finish your join, the Reducer needs only to output the key and either the updated value from Set B, if it exists; otherwise, output the key and the original value from Set A. To distinguish between values from Set A and Set B, try setting a flag on the output value from the Mapper.
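A rough sketch of that flag-on-value approach: each mapper prefixes its output value with "A:" or "B:" (the prefixes and the Text-based records are assumptions), and the reducer prefers B's value when one exists:

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UpdateJoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        String original = null;   // value from Set A, if any
        String updated = null;    // value from Set B, if any
        for (Text v : values) {
            String s = v.toString();
            if (s.startsWith("A:")) {
                original = s.substring(2);
            } else if (s.startsWith("B:")) {
                updated = s.substring(2);
            }
        }
        // Prefer the update from Set B; otherwise keep A's original value.
        if (updated != null) {
            ctx.write(key, new Text(updated));
        } else if (original != null) {
            ctx.write(key, new Text(original));
        }
    }
}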
All of the answers posted so far are correct - this should be a Reduce-side join... but there's no need to reinvent the wheel! Have you considered Pig, Hive, or Cascading for this? They all have joins built-in, and are fairly well optimized.
This video tutorial by Cloudera gives a great description of how to do a large-scale Join through MapReduce, starting around the 12 minute mark.
Here are the basic steps he lays out for joining records from file B onto records from file A on key K, with pseudocode. If anything here isn't clear, I'd suggest watching the video as he does a much better job explaining it than I can.
In your Mapper:
K from file A:
    tag K to identify as Primary Key
    emit <K, value of K>
K from file B:
    tag K to identify as Foreign Key
    emit <K, record>
Write a Sorter and Grouper which will ignore the PK/FK tagging, so that your records are sent to the same Reducer regardless of whether they are a PK record or an FK record, and are grouped together.
Write a Comparator which will compare the PK and FK keys and send the PK first.
The result of this step will be that all records with the same key will be sent to the same Reducer and be in the same set of values to be reduced. The record tagged with PK will be first, followed by all records from B which need to be joined. Now, the Reducer:
value_of_PK = values[0]                // First value is the value of your primary key
for value in values[1:]:
    value.replace(FK, value_of_PK)     // Replace the foreign key with the key's value
    emit <key, value>
The result of this will be file B, with all occurrences of K replaced by the value of K in file A. You can also extend this to effect a full inner join, or to write out both files in their entirety for direct database storage, but those are pretty trivial modifications once you get this working.
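A rough Java rendering of that reducer, assuming the custom sort/grouping above already guarantees that the PK record arrives first in each group and that B's records contain a literal "<FK>" placeholder (both assumptions for illustration):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TaggedJoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        Iterator<Text> it = values.iterator();
        if (!it.hasNext()) {
            return;
        }
        String valueOfPK = it.next().toString();       // first value: the PK's value from file A
        while (it.hasNext()) {
            String record = it.next().toString();      // remaining values: records from file B
            ctx.write(key, new Text(record.replace("<FK>", valueOfPK)));
        }
    }
}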
