Hadoop config error - hadoop

I am trying to run a multi-node Hadoop cluster over a LAN.
My master runs as both NameNode and DataNode,
and another machine runs as a DataNode.
When I started Hadoop from the master and ran jps on the master and the slave, I got
master > NameNode
DataNode
SecondaryNameNode
JobTracker
TaskTracker
Jps
and on slave
slave > DataNode
TaskTracker
Jps
but after a while all I get is :(
slave > Jps
so I checked the DataNode log on the slave and I am getting this error:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol version mismatch. (client = 25, server = 26)
Is it because of different versions of Hadoop on the master and slave?

You need to install the same version across the whole cluster.

Yes, it is because the master and slave are speaking different protocol versions.
In this case, the slave will not be able to communicate with the master machine.
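A quick way to confirm a mismatch is to compare the Hadoop build on each machine; a minimal check, run on both the master and the slave (the version and build strings should match exactly across the cluster):
# Run this on the master and on every slave and compare the output.
hadoop version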

Related

Adding a node to hadoop cluster without restarting master

I have created a Hadoop cluster and want to add a new node to the cluster, running as a slave, without restarting the master node.
How can this be achieved?
DataNodes and NodeManagers can be added without restarting the NameNode(s) or ResourceManager(s).
More specifically, the following commands need to be run on the machines hosting those services (a fuller sketch of the workflow follows below):
Namenode
hdfs dfsadmin -refreshNodes
ResourceManager
yarn rmadmin -refreshNodes
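A rough sketch of the full add-a-node workflow, assuming a Hadoop 2.x layout; the hostname new-slave, the etc/hadoop paths, and the use of include files are assumptions, not details from the question:
# 1. On the master(s): add the new host to the slaves file (and to the
#    dfs.hosts / YARN include files, if you use them).
echo new-slave >> $HADOOP_HOME/etc/hadoop/slaves
# 2. On the new node: start the worker daemons directly.
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
# 3. On the NameNode and ResourceManager hosts: tell the running services
#    to re-read their host lists, without any restart.
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes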

Datanode is not starting in hadoop-hbase start?

I am running the following script to start all the HBase and Hadoop processes in my HBase setup in a virtual machine:
#!/bin/sh
start-dfs.sh
start-yarn.sh
start-hbase.sh
#hbase-daemon.sh start rest
hbase-daemon.sh start thrift
Earlier all the processes used to run properly. But recently I force-shut-down my virtual machine without stopping the HBase and Hadoop processes, and then my DataNode process stopped. Later I formatted my NameNode, following a suggestion I found online. Now my NameNode comes up properly, but the DataNode process does not. When I check the running Java processes (jps), the DataNode process is missing:
4672 NodeManager
5474 ThriftServer
4098 NameNode
4408 SecondaryNameNode
5723 Jps
4555 ResourceManager
5372 HRegionServer
5246 HMaster
5182 HQuorumPeer
But earlier the DataNode process used to come up properly. Is it because I formatted my NameNode? Do I need to change any config data or something else as well?
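Not a definitive answer, but a common cause after reformatting the NameNode is a clusterID mismatch between the NameNode and the DataNode's storage directory. A minimal diagnostic sketch; the log and data directory paths are assumptions, so substitute your own dfs.datanode.data.dir:
# Look for an 'Incompatible clusterIDs' message in the DataNode log.
grep -i clusterID $HADOOP_HOME/logs/hadoop-*-datanode-*.log
# Compare the clusterID stored by the DataNode with the NameNode's.
cat /tmp/hadoop-$USER/dfs/data/current/VERSION
# On a throwaway VM with no data worth keeping, wiping the DataNode storage
# directory and restarting the DataNode lets it re-register cleanly.
rm -rf /tmp/hadoop-$USER/dfs/data
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode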

Two hadoop nodes on same machine while a second machine not joining the cluster

I have a test cluster of two machines, both with Hadoop installed. I've configured the Hadoop cluster, but in the admin UI (as in the picture below) I see that two nodes are running on the same master machine, and that the other machine has no Hadoop node.
On the master machine the following services are running:
~$ jps
26310 ResourceManager
27593 Jps
26216 DataNode
26135 NameNode
26557 NodeManager
26701 JobHistoryServer
On the slave machine:
~$ jps
2614 DataNode
2920 Jps
2707 NodeManager
I don't know why the slave is not joining the cluster (it was before). I tried shutting down all services on both machines, formatting HDFS, and then restarting everything, but that didn't help. Any help figuring out what's causing that behavior is appreciated.
Fixed: the two machines had the same hostname! So I just renamed the slave.
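For anyone hitting the same thing, a minimal sketch of checking and fixing the hostnames, assuming a systemd-based Linux; the name hadoop-slave1 is just an example:
# On each machine: confirm the hostnames really are distinct.
hostname -f
# On the slave: give it a unique name.
sudo hostnamectl set-hostname hadoop-slave1
# Then make sure /etc/hosts on both machines maps each hostname to the
# correct IP address, and restart the Hadoop daemons so they re-register.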

Hadoop: pseudo cluster, adding datanode

I am trying to install multiple pseudo-distributed nodes for an experimental cluster. The reason is simple: I only have one machine in my office.
Therefore, I followed this guide, and especially Matt's answer:
http://search-hadoop.com/m/sApJY1zWgQV/
I created an additional folder conf2.
1.1. In hadoop-env.sh, I edited HADOOP_IDENT_STRING to ${USER}_02
1.2. I changed the data.dir in hdfs-site.xml
1.3. In hdfs-site.xml I changed the ports of:
dfs.datanode.address (default 0.0.0.0:50010)
dfs.datanode.ipc.address (default 0.0.0.0:50020)
dfs.datanode.http.address (default 0.0.0.0:50075)
dfs.datanode.https.address (default 0.0.0.0:50475)
I tried the command "./hadoop-daemons.sh --config ../conf2 start datanode"
on my current single-node Hadoop system.
The error is still: "localhost: datanode running as process 42855. Stop it first."
The jps command says:
:~/hadoop/bin$ jps
2255 Jps
43412 SecondaryNameNode
43853 TaskTracker
42855 DataNode
43544 JobTracker
42537 NameNode
Does anyone have an idea how I could trick my Hadoop system into accepting the additional DataNode now?
Thanks a lot.
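Not a full answer, but a sketch of the overrides a second conf directory typically needs so the two DataNodes do not collide on ports, data directories, or PID files; all paths and port numbers below are illustrative assumptions:
# Additions to conf2/hadoop-env.sh -- a separate identity, PID and log
# directory, so hadoop-daemon.sh does not find the first DataNode's PID file
# (the cause of "datanode running as process ... Stop it first").
cat >> conf2/hadoop-env.sh <<'EOF'
export HADOOP_IDENT_STRING=${USER}_02
export HADOOP_PID_DIR=/tmp/hadoop-${USER}_02/pids
export HADOOP_LOG_DIR=/tmp/hadoop-${USER}_02/logs
EOF
# conf2/hdfs-site.xml -- a distinct data directory and ports (example values,
# not a complete configuration).
cat > conf2/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.data.dir</name><value>/tmp/hadoop-data2</value></property>
  <property><name>dfs.datanode.address</name><value>0.0.0.0:50110</value></property>
  <property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50120</value></property>
  <property><name>dfs.datanode.http.address</name><value>0.0.0.0:50175</value></property>
</configuration>
EOF
# Start the second DataNode with the single-node script (hadoop-daemon.sh,
# not hadoop-daemons.sh), pointing at the alternate config directory:
./hadoop-daemon.sh --config ../conf2 start datanode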

cluster not working with cdh4 tarball installation

I am trying to install CDH4 using the tarball version, but I am facing issues. The steps I took are below:
I downloaded the tarball from https://ccp.cloudera.com/display/SUPPORT/CDH4+Downloadable+Tarballs
I first untarred the hadoop-0.20-mapreduce-0.20.2+1341 tar file.
I made configuration changes in
hadoop-0.20-mapreduce-0.20.2+1341, since I wanted MRv1, not YARN.
The first thing, as mentioned in the CDH4 installation guide, was to configure HDFS.
I made the relevant changes in:
core-site.xml
hdfs-site.xml
mapred-site.xml
masters --- which is my namenode
slaves ---- my datanodes
I copied the Hadoop configuration to all the nodes in the cluster,
and did a NameNode format.
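For context, the HDFS/MRv1 configuration step above usually comes down to a handful of properties. A minimal sketch; the hostname master-node and the directory paths are placeholders, not values from the question:
# conf/core-site.xml -- where the NameNode listens.
cat > conf/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.default.name</name><value>hdfs://master-node:8020</value></property>
</configuration>
EOF
# conf/hdfs-site.xml -- NameNode and DataNode storage directories.
cat > conf/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.name.dir</name><value>/data/dfs/nn</value></property>
  <property><name>dfs.data.dir</name><value>/data/dfs/dn</value></property>
</configuration>
EOF
# conf/mapred-site.xml -- MRv1 JobTracker address.
cat > conf/mapred-site.xml <<'EOF'
<configuration>
  <property><name>mapred.job.tracker</name><value>master-node:8021</value></property>
</configuration>
EOF
# Format the NameNode once, after the configuration is in place.
bin/hadoop namenode -format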
After the format I had to start the cluster, but in the bin folder I could not
find the start-all.sh script, so I started with the command
bin/start-mapred.sh
In the logs it shows that the JobTracker started and TaskTrackers started on the slave nodes,
but when I do a jps
I can see only
JobTracker
Jps
Going further, I started the DataNode on the datanode machine with the command
bin/hadoop-daemon.sh start datanode
and it shows the DataNode started.
The NameNode is not getting started, and the TaskTrackers are not getting started.
When I checked my logs I could see:
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH
I am not sure what is stopping my cluster from working.
Earlier I had CDH3 running, so I stopped the CDH3 cluster. Then I started installing CDH4. I also changed all the directories in hdfs-site.xml, i.e. I pointed it to new, empty directories for the namenode and datanode and did not use the ones defined for CDH3.
But still nothing seems to help.
I also turned off the firewall (I do have root access), but that too did not work for me.
Any help on the above would be greatly appreciated.
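One hedged diagnostic for the "webapps/hdfs not found in CLASSPATH" error: the NameNode looks for a webapps/hdfs directory on its classpath, so it is worth checking whether the tarball you are starting it from actually ships one (the path below is the one from the question):
# If this prints nothing, the NameNode is being started from a tarball that
# does not include the HDFS web application, and HDFS needs to be run from a
# Hadoop build that does (or the classpath pointed at one).
find /home/hadoop-2.0.0-mr1-cdh4.2.0 -type d -path '*webapps/hdfs'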
Thank you for the kind reply, but
I do not have a
start-dfs.sh file in the bin folder.
The only files in my /home/hadoop-2.0.0-mr1-cdh4.2.0/bin folder are:
start-mapred.sh
stop-mapred.sh
hadoop-daemon.sh
hadoop-daemons.sh
hadoop-config.sh
rcc
slaves.sh
hadoop
The commands I am using now are as below.
For starting the datanode:
for x in /home/hadoop-2.0.0-mr1-cdh4.2.0/bin/hadoop-* ; do $x start datanode ; done ;
For starting the namenode:
bin/start-mapred.sh
I am still working on the same issue.
Hi, sorry for the misunderstanding above. The following commands can be run to start your datanodes and namenode.
To start namenode:
hadoop-daemon.sh start namenode
To start datanode:
hadoop-daemons.sh start datanode
To start secondarynamenode:
hadoop-daemons.sh --hosts masters start secondarynamenode
The JobTracker daemon will get started on your master node, and TaskTracker daemons will get started on each of your datanodes, after you run the command
bin/start-mapred.sh
In this Hadoop cluster setup, only the JobTracker daemon will be shown by the jps command on the master node, and on each of your datanodes you can see the TaskTracker daemon running with jps.
Then you have to start HDFS by running the following command on your master node:
bin/start-dfs.sh
This command will start the NameNode daemon on your namenode machine (in this configuration, your master node itself, I believe) and DataNode daemons on each of your slave nodes.
Now you can run jps on each of your datanodes and it will give this output:
TaskTracker
DataNode
Jps
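Putting the pieces above together, a rough start sequence for this MRv1 tarball layout; the commands are the same ones listed above, just in order:
# On the master: start the HDFS daemons first...
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemons.sh start datanode
bin/hadoop-daemons.sh --hosts masters start secondarynamenode
# ...then MapReduce (JobTracker on the master, TaskTrackers on the slaves).
bin/start-mapred.sh
# Verify with jps on the master and on each slave.
jps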
I think this link will be useful:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
