Simulating MapReduce using Cloudera - hadoop

I want to use Cloudera to simulate a Hadoop job on a single machine (with several VMs, of course). I have 2 questions:
1) Can I change the replication policy of HDFS in Cloudera?
2) Can I see the CPU usage of each VM?

You can use hadoop fs -setrep to change the replication factor of any file. You can also change the default replication factor by adding the following to hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
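For example, to set an existing file's replication factor to 2 from the command line (the path below is only an illustration; -w waits until replication completes):
hadoop fs -setrep -w 2 /user/example/data.txt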
You'll have to log into each box and use top to see the CPU usage of each VM. There is nothing out of the box in Hadoop that lets you see this.

I found out that I can change the data replication policy by modifying "ReplicationTargetChooser.java".

Related

Do configuration properties in hdfs-site.xml apply to the NameNode in Hadoop?

I recently set up a test cluster for Hadoop: one master and two slaves.
The master is NOT a DataNode (although some use the master node as both master and slave).
So basically I have 2 DataNodes. The default replication factor is 3.
Initially, I did not change any configuration in conf/hdfs-site.xml and was getting the error "could only be replicated to 0 nodes, instead of 1".
I then changed the configuration in conf/hdfs-site.xml on both my master and slaves as follows:
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
and lo! everything worked fine.
My question is: does this configuration apply to the NameNode or to the DataNodes? I changed hdfs-site.xml on all my DataNodes and on the NameNode.
If my understanding is correct, the NameNode allocates blocks to the DataNodes, so the replication setting matters on the master/NameNode and is probably not needed on the DataNodes. Is this correct?
I am also confused about the actual purpose of the different XML files in the Hadoop framework. From my limited understanding:
1) core-site.xml - configuration parameters for the entire framework, such as where the log files should go, what the default name of the filesystem is, etc.
2) hdfs-site.xml - applies to individual DataNodes: the replication factor, the data directory on the DataNode's local filesystem, the block size, etc.
3) mapred-site.xml - applies to the DataNodes and provides the configuration for the TaskTracker.
Please correct me if this is wrong. These configuration files are not well explained in the tutorials I followed, so my understanding comes from looking at the default files.
This is my understanding and I may be wrong.
hdfs-site.xml - holds the properties of HDFS (the Hadoop Distributed File System)
mapred-site.xml - holds the properties of MapReduce
core-site.xml - holds the properties that touch both HDFS and MapReduce
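As a rough illustration (the hostname, port, and values below are only examples, not taken from the question), the filesystem URI goes in core-site.xml, while HDFS-specific settings such as replication go in hdfs-site.xml:
<!-- core-site.xml -->
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
<!-- hdfs-site.xml -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>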
This error is usually caused by insufficient space.
Please check the total capacity of your cluster and the used/remaining ratio using
hdfs dfsadmin -report
Also check dfs.datanode.du.reserved in hdfs-site.xml; the error can occur if this value is larger than your remaining capacity.
Look for other possible causes explained here.
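For reference, dfs.datanode.du.reserved is set in hdfs-site.xml like this (the value is in bytes per disk; the 10 GB shown here is only an illustration):
<property>
<name>dfs.datanode.du.reserved</name>
<value>10737418240</value>
</property>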

Hadoop HA Namenode remote access

I'm configuring the Hadoop 2.2.0 stable release with an HA NameNode, but I don't know how to configure remote access to the cluster.
I have the HA NameNode configured with manual failover and I have defined dfs.nameservices; I can access HDFS with the nameservice from all the nodes included in the cluster, but not from outside.
I can perform operations on HDFS by contacting the active NameNode directly, but I don't want that; I want to contact the cluster and then be redirected to the active NameNode. I think this is the normal configuration for an HA cluster.
Does anyone know how to do that?
(Thanks in advance...)
You have to add more values to hdfs-site.xml:
<property>
<name>dfs.ha.namenodes.myns</name>
<value>machine-98,machine-99</value>
</property>
<property>
<name>dfs.namenode.rpc-address.myns.machine-98</name>
<value>machine-98:8100</value>
</property>
<property>
<name>dfs.namenode.rpc-address.myns.machine-99</name>
<value>machine-145:8100</value>
</property>
<property>
<name>dfs.namenode.http-address.myns.machine-98</name>
<value>machine-98:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.myns.machine-99</name>
<value>machine-145:50070</value>
</property>
You need to contact one of the NameNodes (as you're currently doing) - there is no single cluster endpoint to contact.
The Hadoop client code knows the addresses of the two NameNodes (from core-site.xml) and can identify which is the active and which is the standby. There might be a way to interrogate a ZooKeeper node in the quorum to identify the active / standby (maybe, I'm not sure), but you might as well check one of the NameNodes - you have a 50/50 chance it's the active one.
I'd have to check, but you might be able to query either if you're just reading from HDFS.
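If you just want to know which NameNode is currently active, the haadmin tool can report it; the argument is the logical NameNode ID from dfs.ha.namenodes.* (machine-98 below follows the example configuration above):
hdfs haadmin -getServiceState machine-98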
For the active NameNode you can always ask ZooKeeper.
You can get the active NameNode from the ZK path below.
/hadoop-ha/namenodelogicalname/ActiveStandbyElectorLock
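As a sketch, you can inspect that znode with the ZooKeeper CLI (the nameservice name "myns" and the ZooKeeper host are assumptions); the data stored there contains the hostname of the active NameNode:
zkCli.sh -server zk-host:2181 get /hadoop-ha/myns/ActiveStandbyElectorLock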
There are two ways to resolve this situation in Java code:
1) load core-site.xml and hdfs-site.xml in your code via conf.addResource
2) set the Hadoop configuration in your code via conf.set
An example using conf.set is shown below.
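A minimal sketch, assuming the nameservice and NameNode IDs from the example configuration above (myns, machine-98, machine-99):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaHdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the logical nameservice instead of a single NameNode.
        conf.set("fs.defaultFS", "hdfs://myns");
        conf.set("dfs.nameservices", "myns");
        conf.set("dfs.ha.namenodes.myns", "machine-98,machine-99");
        conf.set("dfs.namenode.rpc-address.myns.machine-98", "machine-98:8100");
        conf.set("dfs.namenode.rpc-address.myns.machine-99", "machine-145:8100");
        // Lets the client discover which NameNode is active and fail over automatically.
        conf.set("dfs.client.failover.proxy.provider.myns",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}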

Coexistence of Hadoop MR1 and MR2

Is it possible to run both Hadoop MR1 and MR2 together in the same cluster (at least in theory)?
If yes, how can I do that?
In theory, you can do it as follows:
run the DataNode, TaskTracker and NodeManager on one machine
run the NameNode, SecondaryNameNode and ResourceManager on other machines
all processes on different ports
However, this is not recommended; see the Cloudera blog:
"Make sure you are not trying to run MRv1 and YARN on the same set of nodes at the same time. This is not supported; it will degrade performance and may result in an unstable cluster deployment."
In theory, yes.
Unpack the tarball into 2 different locations, owned by different users.
In both of them, change all mapred/yarn related ports to mutually exclusive sets.
Run the datanodes from only one of the locations.
Start the mapred/yarn related daemons in both locations.
Do post here if it works.
Also, the dfs name dir and data dir should be different for MR1 and MR2:
<property>
<name>dfs.name.dir</name>
<value>/home/userx/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/userx/hdfs/data</value>
</property>
It seems that for MapR this is not only theory but practice; check this link.
You don't need to run both; just run Hadoop 2.0, which provides full backward compatibility for MapReduce applications written for Hadoop 1.0.
There are a few minor changes in the API; please look at the link to check whether any of the changes affect your applications.

Using s3 as fs.default.name or HDFS?

I'm setting up a Hadoop cluster on EC2 and I'm wondering how to do the DFS. All my data is currently in S3 and all map/reduce applications use S3 file paths to access the data. Now I've been looking at how Amazon's EMR is set up, and it appears that for each jobflow a namenode and datanodes are set up. Now I'm wondering if I really need to do it that way, or if I could just use S3(N) as the DFS? If doing so, are there any drawbacks?
Thanks!
In order to use S3 instead of HDFS, fs.default.name in core-site.xml needs to point to your bucket:
<property>
<name>fs.default.name</name>
<value>s3n://your-bucket-name</value>
</property>
It's recommended that you use S3N and NOT the plain S3 implementation, because data written via S3N is readable by any other application (and by yourself :)).
Also, in the same core-site.xml file you need to specify the following properties:
fs.s3n.awsAccessKeyId
fs.s3n.awsSecretAccessKey
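In XML form that looks like this (the key values are placeholders):
<property>
<name>fs.s3n.awsAccessKeyId</name>
<value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
<name>fs.s3n.awsSecretAccessKey</name>
<value>YOUR_SECRET_ACCESS_KEY</value>
</property>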
Any intermediate data of your job goes to HDFS, so yes, you still need a namenode and datanodes.
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml
fs.default.name is deprecated; fs.defaultFS is the preferred property name.
I was able to get the s3 integration working using
<property>
<name>fs.default.name</name>
<value>s3n://your-bucket-name</value>
</property>
in core-site.xml, and I can list the files using the hdfs ls command. But should we also have a NameNode and separate DataNode configurations? I'm still not sure how the data would get partitioned across the DataNodes.
Should we have local storage for the NameNode and DataNodes?

Is it possible to run Hadoop in Pseudo-Distributed operation without HDFS?

I'm exploring the options for running a hadoop application on a local system.
As with many applications the first few releases should be able to run on a single node, as long as we can use all the available CPU cores (Yes, this is related to this question). The current limitation is that on our production systems we have Java 1.5 and as such we are bound to Hadoop 0.18.3 as the latest release (See this question). So unfortunately we can't use this new feature yet.
The first option is to simply run hadoop in pseudo distributed mode. Essentially: create a complete hadoop cluster with everything on it running on exactly 1 node.
The "downside" of this form is that it also uses a full fledged HDFS. This means that in order to process the input data this must first be "uploaded" onto the DFS ... which is locally stored. So this takes additional transfer time of both the input and output data and uses additional disk space. I would like to avoid both of these while we stay on a single node configuration.
So I was thinking: Is it possible to override the "fs.hdfs.impl" setting and change it from "org.apache.hadoop.dfs.DistributedFileSystem" into (for example) "org.apache.hadoop.fs.LocalFileSystem"?
If this works the "local" hadoop cluster (which can ONLY consist of ONE node) can use existing files without any additional storage requirements and it can start quicker because there is no need to upload the files. I would expect to still have a job and task tracker and perhaps also a namenode to control the whole thing.
Has anyone tried this before?
Can it work or is this idea much too far off the intended use?
Or is there a better way of getting the same effect: Pseudo-Distributed operation without HDFS?
Thanks for your insights.
EDIT 2:
This is the config I created for Hadoop 0.18.3 in conf/hadoop-site.xml, using the answer provided by bajafresh4life.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>file:///</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>localhost:33301</value>
</property>
<property>
<name>mapred.job.tracker.http.address</name>
<value>localhost:33302</value>
<description>
The job tracker http server address and port the server will listen on.
If the port is 0 then the server will start on a free port.
</description>
</property>
<property>
<name>mapred.task.tracker.http.address</name>
<value>localhost:33303</value>
<description>
The task tracker http server address and port.
If the port is 0 then the server will start on a free port.
</description>
</property>
</configuration>
Yes, this is possible, although I'm using 0.19.2. I'm not too familiar with 0.18.3, but I'm pretty sure it shouldn't make a difference.
Just make sure that fs.default.name is set to the default (which is file:///), and mapred.job.tracker is set to point to where your jobtracker is hosted. Then start up your daemons using bin/start-mapred.sh. You don't need to start up the namenode or datanodes. At this point you should be able to run your map/reduce jobs using bin/hadoop jar ...
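As a sketch, the resulting workflow looks roughly like this (the examples jar name and the input/output paths are assumptions for a 0.18.x release; with file:/// as the default filesystem, both paths refer to the local disk):
bin/start-mapred.sh
bin/hadoop jar hadoop-0.18.3-examples.jar wordcount /data/input /data/output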
We've used this configuration to run Hadoop over a small cluster of machines using a Netapp appliance mounted over NFS.
