MinMax algorithm implementation in map-reduce paradigm - hadoop

I have some data in HBase tables (a few billion rows) that I need to process to score the stored documents. What algorithms can be implemented and applied in the MapReduce paradigm?
I tried to deploy a MinMax algorithm, but by its nature all the data gets shifted to a single node in the reducer phase (to find the global min and max values). Because of that, a GC overhead limit exceeded error is raised, which was to be expected, since a single node cannot hold so much data in memory in one go.
Is there any other option for HBase document ranking (scoring) in the MapReduce paradigm?
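One way to avoid funneling every record into a single reducer is to have each mapper emit only the partial (min, max) of its own split; the lone reducer then folds a handful of pairs instead of billions of rows. Below is a minimal pure-Python sketch of the idea (not the Hadoop API; the record layout and the "score" field are assumptions for illustration):

```python
from functools import reduce

def mapper(records):
    """Emit one (min, max) pair per input split instead of every score."""
    scores = [r["score"] for r in records]
    yield "minmax", (min(scores), max(scores))

def reducer(pairs):
    """Fold the per-split pairs into the global (min, max)."""
    return reduce(lambda a, b: (min(a[0], b[0]), max(a[1], b[1])), pairs)

# Simulated splits: each mapper only ever sees its own slice of the table.
splits = [
    [{"score": 3.2}, {"score": 7.9}],
    [{"score": 0.5}, {"score": 4.4}],
    [{"score": 9.1}, {"score": 2.0}],
]
partials = [pair for split in splits for _, pair in mapper(split)]
global_min, global_max = reducer(partials)
```

In a real job the same folding logic can also run as a Combiner on each mapper node, so the single reducer receives one tiny pair per mapper rather than the full data set; a second pass can then apply min-max normalization, (score - min) / (max - min), to each document.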

Related

Data not being partitioned in optimal way on executors when using Spark's ALS MLlib algorithm

I am using the ALS algorithm in Spark MLlib. Whenever I run my code, the data is cached on just one or two executors. I have 8 executors running, but caching on just one node decreases the performance of my jobs drastically. The cached data is what ALS uses internally, i.e. itemBlocks, userBlocks, userFactors, etc. I have set the number of partitions and am using the implicit-preferences option of the model.
My data is quite small in some use cases, e.g. 40-50 MB, but I still can't understand why it would be cached on just one node. Also, sometimes 3-4 executors are used and sometimes 1-2; the partitioning seems very random.

Hadoop Performance When Retrieving Data Only

We know that the performance of Hadoop may be increased by adding more data nodes. My question is: if we want to retrieve the data only, without processing or analyzing it, will adding more data nodes be useful? Or won't it increase performance at all, because we have retrieve operations only, without any computations or MapReduce jobs?
I will try to answer in parts:
If you only retrieve information from a Hadoop cluster or HDFS, it is similar to the cat command in Linux: you are only reading data, not processing it.
If you want calculations like SUM, AVG, or any other aggregate function on top of your data, then the concept of REDUCE applies, and MapReduce comes into the picture.
So Hadoop is useful, or worth it, when your data is huge and you also do calculations. There is no performance benefit in reading a small amount of data from HDFS versus a large amount (think of storing your data in an RDBMS and only running select * statements on a daily basis), but when your data grows exponentially and you want to do calculations, your RDBMS query will take a long time to execute.
For MapReduce to work efficiently on huge data sets, you need a good number of nodes and computing power, depending on your use case.

What can Hadoop MapReduce achieve?

I am reading Hadoop MapReduce tutorials and have come up with the following shallow understanding. Could anyone help confirm whether my understanding is correct?
MapReduce is a way to aggregate data
in a distributed environment
with non-structured data in very large files
using Java, Python, etc.
to produce results similar to what could be done in an RDBMS using SQL aggregate functions:
select count, sum, max, min, avg, k2
from input_file
group by k2
The map() method basically pivots horizontal data v1 (a line from the input file) into vertical rows, each row holding a string key and a numeric value.
The grouping happens in the shuffle and partition stage of the data flow.
The reduce() method is responsible for computing/aggregating the data.
MapReduce jobs can be combined/nested, just as SQL statements can be nested, to produce complex aggregation output.
Is that correct?
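Your reading of the aggregation flow can be checked against a small pure-Python simulation of the three stages; the comma-separated line format and field names below are assumptions for illustration, not anything Hadoop prescribes:

```python
from collections import defaultdict

def map_line(line):
    """map(): pivot one horizontal line into a (key, value) pair."""
    k2, value = line.split(",")
    return k2, float(value)

def shuffle(pairs):
    """Group values by key, as the shuffle/partition stage would."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_group(key, values):
    """reduce(): compute the SQL-style aggregates for one key."""
    return {
        "k2": key,
        "count": len(values),
        "sum": sum(values),
        "max": max(values),
        "min": min(values),
        "avg": sum(values) / len(values),
    }

lines = ["a,1", "b,2", "a,3", "b,4"]
results = {k: reduce_group(k, vs) for k, vs in shuffle(map(map_line, lines)).items()}
```

The output of this sketch is one row per distinct k2, which is exactly what the select ... group by k2 query above would return.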
With Hive on top of Hadoop, MR code is generated by the HiveQL process engine.
Therefore, from a coding perspective, MR coding in Java will gradually be replaced by high-level HiveQL.
Is that true?
Have a look at this post for a comparison between RDBMS & Hadoop:
1. Unlike an RDBMS, Hadoop can handle petabytes of data distributed over thousands of nodes of commodity hardware. The efficiency of the MapReduce algorithm lies in data locality during processing.
2. An RDBMS can handle structured data only, unlike Hadoop, which can handle structured, unstructured, and semi-structured data.
Your understanding is correct regarding aggregation, grouping and partitioning.
You have provided an example only for processing structured data.
HiveQL is converted into a series of MapReduce jobs. Performance-wise, HiveQL jobs will be slower than raw MapReduce jobs. HiveQL can't handle all types of data, as explained above, and hence it can't replace MapReduce jobs written in Java.
HiveQL will co-exist with MapReduce jobs in other languages. If performance is the key criterion for your MapReduce job, you have to consider a Java MapReduce job as the alternative. If you need MapReduce jobs for semi-structured and unstructured data, you have to consider alternatives to HiveQL MapReduce jobs.

Why are group by operations considered expensive in mapreduce?

Can anyone explain to me how jobs are mapped and reduced in Hadoop, and why group-by operations are considered expensive?
I wouldn't really say expensive, but it does affect performance: for an order by or sort, the processing needed to order the records is much greater. The work done by the comparator and partitioner becomes huge when millions or billions of records are being sorted.
I hope I could answer your question.
Performance in Hadoop is affected by two main factors:
1- Processing: the execution time spent processing the map and reduce tasks on the cluster nodes.
2- Communication: shuffling data; some operations need to send data from one node to another for processing (like a global sort).
Group-by affects both factors. In the shuffle phase, up to half of the data might be moved between the nodes.
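The communication cost comes from the fact that every (key, value) pair must travel to the one reducer responsible for its key, no matter which mapper node produced it. A small Python sketch of the default hash-partitioning idea (Hadoop's actual HashPartitioner is Java; this only illustrates the routing logic):

```python
def partition(key, num_reducers):
    """Default HashPartitioner logic: key hash modulo reducer count."""
    return hash(key) % num_reducers

pairs = [("apple", 1), ("banana", 1), ("apple", 1), ("cherry", 1)]
num_reducers = 4

# Every pair is routed to its key's reducer regardless of which mapper
# node produced it; this network transfer is the expensive part.
buckets = {}
for k, v in pairs:
    buckets.setdefault(partition(k, num_reducers), []).append((k, v))
```

All pairs sharing a key are guaranteed to land in the same bucket, which is exactly what makes group-by correct and also exactly what forces the shuffle across the network.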

Where to add specific function in the Map/Reduce Framework

I have a general question about the MapReduce framework.
I have a task which can be separated into several partitions. For each partition, I need to run a computation-intensive algorithm.
Then, according to the MapReduce framework, it seems that I have two choices:
Run the algorithm in the map stage, so that in the reduce stage there is no work needed except collecting the results of each partition from the map stage and summarizing them.
In the map stage, just divide and send the partitions (with data) to the reduce stage. In the reduce stage, run the algorithm first, then collect and summarize the results from each partition.
Correct me if I misunderstand.
I am a beginner; I may not understand MapReduce very well. I only have basic parallel-computing concepts.
You're actually a bit confused. In a broad and general sense, the map portion takes the task and divides it among some n nodes. Each of those n nodes receives a fraction of the whole task and does something with its piece. When the nodes finish computing their steps on their data, the reduce operation reassembles the results.
The REAL power of map-reduce is how scalable it is.
Given a dataset D running on a map-reduce cluster m with n nodes, each node is mapped a 1/n piece of the task, and the cluster m then reduces those pieces into a single result. Now take a node q of m to itself be a cluster with p nodes under it. If m assigns q its 1/n share of D, q can in turn map that share into 1/(n·p) pieces across its p nodes, then reduce them back at q, which supplies its result back to m.
Make sense?
In MapReduce, you have a Mapper and a Reducer. You also have a Partitioner and a Combiner.
Hadoop has a distributed file system (HDFS) that partitions (or splits, you might say) a file into blocks of BLOCK SIZE. These blocks are placed on different nodes. So, when a job is submitted to the MapReduce framework, it divides the job such that there is a Mapper for every input split (for now, say an input split is one partitioned block). Since these blocks are distributed onto different nodes, the Mappers also run on different nodes.
In the Map stage,
The file is divided into records by the RecordReader; the definition of a record is controlled by the InputFormat we choose. Every record is a key-value pair.
The map() of our Mapper is run for every such record. The output of this step is again key-value pairs.
The output of our Mapper is partitioned using the Partitioner we provide, or the default HashPartitioner. By partitioning, I mean deciding which key (and its corresponding values) goes to which Reducer (if there is only one Reducer, this step makes no difference anyway).
Optionally, you can also combine/minimize the output being sent to the reducers, using a Combiner. Note that the framework does not guarantee how many times a Combiner will be called; it is only an optimization.
This is where the algorithm on your data is usually written. Since these tasks run in parallel, the map stage is a good candidate for computation-intensive tasks.
After all the Mappers finish running on all nodes, the intermediate data (the data at the end of the map stage) is copied to the corresponding reducers.
In the reduce stage, the reduce() of our Reducer is run on each record of data from the Mappers. Here a record comprises a key and its corresponding values, not necessarily just one value. This is where you generally run your summarization/aggregation logic.
When you write your MapReduce job, you usually think about what can be done with each record of data in both the Mapper and the Reducer. A MapReduce program can contain just a Mapper with map() implemented and a Reducer with reduce() implemented. This way you can focus on what you want to do with the data and not worry about parallelizing; you don't have to worry about how the job is split, because the framework does that for you. However, you will have to learn about it sooner or later.
I would suggest you go through Apache's MapReduce tutorial or Yahoo's Hadoop tutorial for a good overview. I personally like Yahoo's explanation of Hadoop, but Apache's details are good, and their explanation using the word-count program is very nice and intuitive.
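The word-count program mentioned above can be condensed into a pure-Python simulation of the stages described earlier (map, combine, shuffle, reduce); this is only a sketch of the data flow, not the Hadoop API:

```python
from collections import defaultdict

def map_fn(record):
    """map(): emit one (word, 1) pair per word in the record."""
    for word in record.split():
        yield word, 1

def combine(pairs):
    """Combiner: pre-sum counts on the mapper side to shrink the shuffle."""
    local = defaultdict(int)
    for k, v in pairs:
        local[k] += v
    return local.items()

def reduce_fn(key, values):
    """reduce(): final sum over all values shuffled to this key."""
    return key, sum(values)

records = ["the quick brown fox", "the lazy dog", "the fox"]
shuffled = defaultdict(list)          # shuffle: group values by key
for rec in records:                   # each record plays one input split
    for k, v in combine(map_fn(rec)):
        shuffled[k].append(v)
counts = dict(reduce_fn(k, vs) for k, vs in shuffled.items())
```

Note how the Combiner is purely an optimization here: dropping the combine() call and shuffling the raw (word, 1) pairs produces the same counts, just with more data moved to the "reducers".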
Also, for:
"I have a task which can be separated into several partitions. For each partition, I need to run a computation-intensive algorithm."
The Hadoop distributed file system splits data onto multiple nodes, and the MapReduce framework assigns a map task to every split. So, in Hadoop, the process goes and executes where the data resides. You cannot define the number of map tasks to run; the data does. You can, however, specify/control the number of reduce tasks.
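That last point can be made concrete with back-of-the-envelope arithmetic: the number of map tasks falls out of the input size and the split size, while the reducer count is chosen by the job. A hedged sketch (128 MB is a typical, but configurable, HDFS block size, and one split per block is the common case, not a guarantee):

```python
BLOCK_SIZE = 128 * 1024 * 1024  # typical HDFS block size; configurable

def num_map_tasks(file_size_bytes, block_size=BLOCK_SIZE):
    """One map task per input split, roughly one split per block."""
    return max(1, -(-file_size_bytes // block_size))  # ceiling division

# A 1 GB input on 128 MB blocks yields 8 map tasks; the reducer count,
# by contrast, is whatever the job configuration asks for.
maps_for_1gb = num_map_tasks(1024 ** 3)
```

So to change how many mappers run, you change the input (or the split size), whereas reducers are set directly in the job configuration.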
I hope I have comprehensively answered your question.
