How does the task tracker get the data for a map task from another node if the data is not local? - hadoop

How does the task tracker get its data for a map task from another node when the data is not local?
Does it talk directly to the data node on the machine containing the data, or does it talk to its own local data node, which in turn talks to the other one?
Thanks,
Suresh.

The task tracker itself doesn't get the data - it launches (or reuses) a JVM to run a map task. The map task uses the DFS FileSystem client to query the name node for the block locations of the file it is to process. The client then connects to a data node holding a replica of the block to actually read the file contents (as a stream).
If you want to delve deeper, the source is an excellent place to get a good understanding - check out DFSClient and its inner class DFSInputStream (especially the bestNode method):
http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.2/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java?view=markup
The class starts around line 1443
openInfo() method: around line 1494
chooseDataNode() method: around line 1800
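To make the flow above concrete, here is a minimal sketch (my own, not from the answer; the file path is hypothetical) using the public FileSystem API, which delegates to DFSClient/DFSInputStream under the hood: it asks the name node for the block locations and then opens a stream that reads from one of the data nodes holding a replica.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());   // fs.defaultFS points at HDFS
        Path file = new Path("/user/suresh/input.txt");        // hypothetical input file

        // Step 1: the name node answers this query with the datanodes holding each block.
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset()
                    + " -> hosts " + String.join(",", block.getHosts()));
        }

        // Step 2: opening the stream is what actually connects to a datanode; DFSInputStream
        // picks a replica (bestNode in 0.20.x) and streams the bytes, local or not.
        FSDataInputStream in = fs.open(file);
        byte[] buf = new byte[4096];
        int n = in.read(buf);
        in.close();
        System.out.println("read " + n + " bytes from the first block");
    }
}

If the map task happens to run on a node that holds a replica, the read is local; otherwise the same stream simply pulls the block over the network from a remote data node - the task tracker is not involved in the transfer.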

Related

Getting duplicates with NiFi HBase_1_1_2_ClientMapCacheService

I need to remove duplicates from a flow I've developed; it can receive the same ${filename} multiple times. I tried using HBase_1_1_2_ClientMapCacheService with DetectDuplicate (I am using NiFi v1.4), but found that it lets a few duplicates through. If I use DistributedMapCache (ClientService and Server), I do not get any duplicates. Why would I receive some duplicates with the HBase cache?
As a test, I listed a directory (ListSFTP) with 20,000 files on all cluster nodes (4 nodes) and passed the flow files to DetectDuplicate (using the HBase cache service). It routed 20,020 to "non-duplicate", and interestingly the table actually has 20,000 rows.
Unfortunately I think this is due to a limitation in the operations offered by HBase.
The DetectDuplicate processor relies on an operation, getAndPutIfAbsent, which is expected to return the original value and then set the new value if it wasn't there. For example, the first time through it would return null and set the new value, indicating the entry wasn't a duplicate.
HBase doesn't natively support this operation, so the implementation of this method in the HBase map cache client does this:
// inside getAndPutIfAbsent(): two separate round-trips to HBase
V got = get(key, keySerializer, valueDeserializer);
boolean wasAbsent = putIfAbsent(key, value, keySerializer, valueSerializer);
if (!wasAbsent) return got;   // someone already had the key: looks like a duplicate
else return null;             // we inserted it: looks like a non-duplicate
Because these are two separate calls, there is a possible race condition.
Imagine node 1 executes the get and sees null, but before it calls putIfAbsent, node 2 performs its own get and putIfAbsent. When node 1 then calls putIfAbsent it gets false, because node 2 just populated the cache, so node 1 returns the null value from its original get. Both nodes therefore look like non-duplicates to DetectDuplicate.
The DistributedMapCacheServer, by contrast, locks the entire cache per operation, so it can provide an atomic getAndPutIfAbsent.
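For reference, here is a minimal sketch (my own code, not NiFi's; the table and column names are hypothetical) of the closest atomic primitive HBase 1.x does offer, Table.checkAndPut. It can insert a row only if a cell is absent, but it returns just a boolean rather than the previous value, which is exactly why the NiFi client has to compose a get with a putIfAbsent and ends up with the race described above.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutIfAbsentSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("nifi_cache"))) {   // hypothetical table

            byte[] row  = Bytes.toBytes("some-filename");
            byte[] fam  = Bytes.toBytes("f");
            byte[] qual = Bytes.toBytes("q");
            Put put = new Put(row).addColumn(fam, qual, Bytes.toBytes("seen"));

            // Passing null as the expected value means "only put if the cell does not exist".
            // The check-and-put is atomic on the region server, but we never learn the old value,
            // so there is no single call that behaves like getAndPutIfAbsent.
            boolean inserted = table.checkAndPut(row, fam, qual, null, put);
            System.out.println(inserted ? "non-duplicate" : "duplicate");
        }
    }
}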

HDFS exclude datanodes in AddBlockRequestProto

I am implementing datanode failover for writing in HDFS, so that HDFS can still write a block when the first datanode of the block fails.
The algorithm is: first, the failed datanode is identified; then, a new block is requested. The HDFS protobuf API provides ExcludeNodes, which I use to tell the NameNode not to allocate the new block there. failedDatanodes holds the identified failed datanodes, and they are correct in the logs.
req := &hdfs.AddBlockRequestProto{
    Src:          proto.String(bw.src),
    ClientName:   proto.String(bw.clientName),
    ExcludeNodes: failedDatanodes,
}
But the namenode still allocates the block on the failed datanodes.
Does anyone know why? Did I miss anything here?
Thank you.
I found the solution: first abandon the block, then request a new one. In the previous design, the newly requested block could not replace the old one.

Hadoop passing variables from reducer to main

I am working on a MapReduce program. I'm trying to pass parameters to the context configuration in the reduce method using the setLong method, and then read them in main after the job completes.
In the reducer:
context.getConfiguration().setLong(key, someLong);
In main, after the job completes, I try to read the value using:
long val = job.getConfiguration().getLong(key, -1);
but I always get -1.
When I read the value inside the reducer I see that the value is set and I get the correct answer.
Am I missing something?
Thank you
You can use counters: set and update their values in the reducers, and then access them in your client application (main).
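A minimal sketch of the counter approach (my own code, with hypothetical names such as MyCounters and SOME_LONG): the reducer increments a custom counter instead of calling setLong on the configuration, and main reads the aggregated value back after waitForCompletion.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class CounterExample {

    // Hypothetical counter for the value you want to ship back to the client.
    public enum MyCounters { SOME_LONG }

    public static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
            }
            // Instead of context.getConfiguration().setLong(...), accumulate into a counter.
            context.getCounter(MyCounters.SOME_LONG).increment(sum);
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setJarByClass(CounterExample.class);
        job.setReducerClass(MyReducer.class);
        // ... set mapper class, input/output formats and paths as usual ...

        if (job.waitForCompletion(true)) {
            // The framework aggregates counters from all tasks and returns them to the client.
            long val = job.getCounters().findCounter(MyCounters.SOME_LONG).getValue();
            System.out.println("value from reducers: " + val);
        }
    }
}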
You can pass configuration from main to the map and reduce tasks, but you cannot pass it back. Configuration is propagated like this:
A configuration file is generated on the MapReduce client based on the configuration you set in main, and it is pushed to an HDFS path shared only by the job. The file is read-only.
When a map or reduce task is launched, the configuration file is pulled from that HDFS path, and the task initializes its configuration from the file.
If you want to pass values back, you can use a separate HDFS file: write it from the reducer and read it after the job completes.
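A minimal sketch of that side-file idea (my own code; the /tmp/myjob-results path and field names are hypothetical): each reduce task writes its value to its own HDFS file in cleanup(), and a helper reads the files back after the job finishes.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SideFileReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    private static final Path RESULT_DIR = new Path("/tmp/myjob-results");  // hypothetical directory
    private long someLong = 0;

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        for (LongWritable v : values) {
            someLong += v.get();
        }
        context.write(key, new LongWritable(someLong));
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        FileSystem fs = FileSystem.get(context.getConfiguration());
        // One file per reduce task, so parallel reducers don't overwrite each other.
        Path out = new Path(RESULT_DIR, context.getTaskAttemptID().getTaskID().toString());
        try (FSDataOutputStream os = fs.create(out, true)) {
            os.writeLong(someLong);
        }
    }

    // Call this from main after job.waitForCompletion(true) returns true.
    public static long readResults(Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        long total = 0;
        for (FileStatus status : fs.listStatus(RESULT_DIR)) {
            try (FSDataInputStream in = fs.open(status.getPath())) {
                total += in.readLong();
            }
        }
        return total;
    }
}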

How does communication between datanodes work in a Hadoop cluster?

I am new to Hadoop and help with this question is appreciated.
The replication of blocks in a cluster is handled by the individual data nodes that have a copy of the block, but how does this transfer take place without involving the namenode?
I found that SSH is set up from slaves to master and master to slaves, but not from slave to slave.
Could someone explain?
Is it through the Hadoop data transfer protocol, like client-to-DN communication?
http://blog.cloudera.com/blog/2013/03/how-to-set-up-a-hadoop-cluster-with-network-encryption/
After digging into the Hadoop source code, I found that datanodes use the BlockSender class to transfer block data. Under the hood it is a plain Socket.
Below is how I traced this (Hadoop version 1.1.2 is used here).
DataNode.java line 946 is the offerService method, which is the main service loop.
In that loop the datanode sends a heartbeat to the namenode, mainly to say that it is alive; the return value is a set of commands which the datanode will process, and this is where block copying happens.
Digging into processCommand we arrive at line 1160; there is a comment there from which we can be quite sure that transferBlocks is what we want.
Digging into transferBlocks we arrive at line 1257, a private method. At the end of the method:
new Daemon(new DataTransfer(xferTargets, block, this)).start();
So we know the datanode starts a new thread to do the block copy.
Look at DataTransfer at line 1424 and check its run method.
Near the end of the run method we find the following snippet:
// send data & checksum
blockSender.sendBlock(out, baseStream, null);
From the code above we can see that BlockSender does the actual work.
That is as far as I dug; it is up to you to explore further, for example BlockReader.
Whenever a block has to be written in HDFS, the NameNode allocates space for the block on some datanode. It also allocates space on other datanodes for the replicas of the block. It then instructs the first datanode to write the block and also to replicate it to the other datanodes where space was allocated for the replicas.
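To see that pipeline from the client side, here is a minimal sketch (my own code; the path, replication factor and block size are just illustrative) using the public FileSystem API. The client streams to only the first datanode chosen by the NameNode; that datanode forwards the data to the remaining replicas over plain sockets (the BlockSender path found above).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WritePipelineSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());   // reads core-site.xml / hdfs-site.xml

        // Ask for 3 replicas and a 64 MB block size. The NameNode picks the target datanodes;
        // the client streams packets to the first one, and that datanode forwards them to the
        // next replica in the pipeline, so datanode-to-datanode traffic never goes through SSH.
        short replication = 3;
        long blockSize = 64L * 1024 * 1024;
        FSDataOutputStream out = fs.create(new Path("/tmp/pipeline-demo.txt"),
                true, 4096, replication, blockSize);
        out.writeUTF("hello pipeline");
        out.close();   // returns once enough replicas have acknowledged the data
    }
}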

MRJob and mapreduce task partitioning over Hadoop

I am trying to perform a mapreduce job using the Python MRJob lib and am having some issues getting it to properly distribute across my Hadoop cluster. I believe I am simply missing a basic principle of mapreduce. My cluster is a small, one master one slave test cluster. The basic idea is that I'm just requesting a series of web pages with parameters, doing some analysis on them and returning back some properties on the web page.
The input to my map function is simply a list of URLs with parameters such as the following:
http://guelph.backpage.com/automotive/?layout=bla&keyword=towing
http://guelph.backpage.com/whatever/?p=blah
http://semanticreference.com/search.html?go=Search&q=red
http://copiahcounty.wlbt.com/h/events?ename=drupaleventsxmlapi&s=rrr
http://sweetrococo.livejournal.com/34076.html?mode=ffff
So the key-value pairs for the initial input are just key: None, value: URL.
The following is my map function:
def mapper(self, key, url):
    '''Yield domain as the key, and (url, query parameter) tuple as the value'''
    parsed_url = urlparse(url)
    domain = parsed_url.scheme + "://" + parsed_url.netloc + "/"
    if self.myclass.check_if_param(parsed_url):
        parsed_url_query = parsed_url.query
        url_q_dic = parse_qs(parsed_url_query)
        for query_param, query_val in url_q_dic.iteritems():
            # yielding a tuple in mrjob will yield a list
            yield domain, (url, query_param)
Pretty simple: I'm just checking to make sure the URL has a parameter, and yielding the URL's domain as the key and a tuple of (URL, query parameter) as the value, which MRJob kindly transforms into a list to pass to the reducer, which is the following:
def reducer(self, domain, url_query_params):
    final_list = []
    for url_query_param in url_query_params:
        url_to_list_props = url_query_param[0]
        param_to_list_props = url_query_param[1]
        # set our target that we will request and do some analysis on
        self.myclass.set_target(url_to_list_props, param_to_list_props)
        # perform a bunch of requests and do analysis on the URL requested
        props_list = self.myclass.get_props()
        for prop in props_list:
            final_list.append(prop)
    # index this stuff to a central db
    MapReduceIndexer(domain, final_list).add_prop_info()
    yield domain, final_list
My problem is that only one reducer task is run. I would expect the number of reducer tasks to be equal to the number of unique keys emitted by the mapper. The end result with the above code is that I have one reducer which runs on the master, while the slave sits idle and does nothing, which is obviously not ideal. I notice that in my output a few mapper tasks are started, but always only one reducer task. Other than that, the job runs smoothly and all works as expected.
My question is... what the heck am I doing wrong? Am I misunderstanding the reduce step or screwing up my key-value pairs somewhere? Why are there not multiple reducers running on this job?
Update: OK, so following the answer given I increased mapred.reduce.tasks (it was at the default, which I now realize is 1). This was indeed why I was getting one reducer. I now see 3 reduce tasks being performed simultaneously. I now have an import error on my slave that needs to be resolved, but at least I am getting somewhere...
The number of reducers is totally unrelated to the form of your input data. For MRJob it looks like you need bootstrap options.
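For comparison, a minimal sketch in the plain Hadoop Java API (mine, not MRJob code): the reducer count is an explicit job setting, and the partitioner spreads keys across that fixed number of reduce tasks; it is never derived from how many distinct keys the mappers emit. mapred.reduce.tasks, used in the update above, is the same knob.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "reducer-count-demo");

        // Same knob as mapred.reduce.tasks (mapreduce.job.reduces in newer versions):
        // the partitioner hashes each key into one of these 3 reduce tasks.
        job.setNumReduceTasks(3);

        // ... set mapper/reducer classes and input/output paths as usual, then:
        // job.waitForCompletion(true);
    }
}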
