I am a beginner to Hadoop and HDFS. I have a situation where I need to connect 3 different PCs: one holding a file, one running NiFi, and one running Hadoop+HDFS.
Machine 1: will have a .csv file.
Machine 2 (personal laptop): will have my NiFi running on it.
Machine 3 (running at my office): will have Hadoop+HDFS on it.
Now I would like to send the .csv file from machine 1 to my database running on machine 3, using the NiFi instance running on machine 2.
I connect to machine 3 over an SSH connection, which basically goes through a router at my office.
Question: How can I connect from machine 2 (which has NiFi) to machine 3, so that NiFi can send the file to my Hadoop/HBase?
Should I use a public key in the configuration, or should I use a different setup or server?
My configuration files for Hadoop and HDFS are as follows.
hbase-site.xml
<configuration>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/zookeeper</value>
</property>
<property>
<name>hbase.wal.provider</name>
<value>filesystem</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>./tmp</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hdoop/tmpdata</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hdfs/datanode</value>
</property>
</configuration>
Look into the configuration files and let me know where I need to change the properties. Also note that so far I have installed HDFS in pseudo-distributed mode on machine 3.
Pseudo-distributed and fully distributed aren't any different.
You say only machine 3 has HDFS. Therefore only it needs to be running a Namenode and Datanode, set up in a distributed fashion, meaning that external clients will be able to communicate with it.
More specifically, no config file should be using localhost; use a LAN IP or hostname instead.
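For example, assuming machine 3 is reachable from machine 2 under a hostname like hadoop-m3 (a placeholder you would map in /etc/hosts or DNS), the relevant entries would look roughly like this instead of localhost:
core-site.xml on machine 3:
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-m3:9000</value>
</property>
hbase-site.xml on machine 3:
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-m3:9000/hbase</value>
</property>
On the NiFi side (machine 2), a PutHDFS processor would then point its Hadoop Configuration Resources property at copies of that core-site.xml and hdfs-site.xml, so NiFi resolves the same hadoop-m3 address rather than its own localhost.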
Related
I am trying to set up a Hadoop cluster over two nodes. start-dfs.sh on my master node opens a window that closes shortly afterwards; it logs that the namenode launched correctly, but the datanode did not, and it logs the following:
Problem binding to [slave-VM1:9005] java.net.BindException: Cannot assign requested address: bind; For more details see: http://wiki.apache.org/hadoop/BindException
I have set
ssh-keygen -t rsa -P ''
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(I have also set the administrators_authorized_keys file with the right public key. ssh user@remotemachine also works and gives access to the slave.)
Here's my full Hadoop configuration set on both master and slave machines (Windows):
hdfs-site.xml :
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/C:/Hadoop/hadoop-3.2.2/data/namenode</value>
</property>
<property>
<name>dfs.datanode.https.address</name>
<value>slaveVM1:50475</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/C:/Hadoop/hadoop-3.2.2/data/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
core-site.xml :
<configuration>
<property>
<name>dfs.datanode.http.address</name>
<value>slaveVM1:9005</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://masterVM2:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/C:/Hadoop/hadoop-3.2.2/hadoopTmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://masterVM2:8020</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>masterVM2:9001</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>%HADOOP_HOME%/share/hadoop/mapreduce/*,%HADOOP_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_HOME%/share/hadoop/common/*,%HADOOP_HOME%/share/hadoop/common/lib/*,%HADOOP_HOME%/share/hadoop/yarn/*,%HADOOP_HOME%/share/hadoop/yarn/lib/*,%HADOOP_HOME%/share/hadoop/hdfs/*,%HADOOP_HOME%/share/hadoop/hdfs/lib/*</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.acl.enable</name>
<value>0</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
PS: I am administrator on both machines, and I set HADOOP_CONF_DIR to C:\Hadoop\hadoop-3.2.2\etc\hadoop.
I also set the slave IP in the HADOOP_CONF_DIR slaves file.
PS: if I remove this block:
<property>
<name>dfs.datanode.https.address</name>
<value>slave:50475</value>
</property>
from hdfs-site.xml
Then both the datanode and namenode launch on the master node.
hosts :
*.*.*.* slaveVM1
*.*.*.* masterVM2
(The *.*.*.* entries are the IPs of the respective machines; all other entries are commented out.)
This usually happens
BindException: Cannot assign requested address: bind;
when the port is already in use. That could mean the application was already started, was started previously and didn't shut down properly, or another application is using that port. Try rebooting (a heavy-handed but reasonably effective way of clearing ports).
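If a reboot feels too heavy-handed, a quick check (sketched for Windows, since the paths above are Windows ones; 9005 is the port from the error) is to see what is holding the port:
netstat -ano | findstr :9005
REM note the PID in the last column, then identify the process:
tasklist /FI "PID eq <pid-from-netstat>"
If that shows a stray Java process from an earlier start-dfs attempt, killing it (or rebooting) frees the port.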
I have two VMs set up with Ubuntu 12.04. I am trying to set up multi-node Hadoop, but after executing hadoop/sbin/start-dfs.sh I see the following processes on my master:
20612 DataNode
20404 NameNode
20889 SecondaryNameNode
21372 Jps
However, there is nothing on the slave. Also, when I do hdfs dfsadmin -report, I only see:
Live datanodes (1):
Name: 10.222.208.221:9866 (master)
Hostname: master
I checked the logs; start-dfs.sh does not even try to start a datanode on my slave.
I am using following configuration:
#/etc/hosts
127.0.0.1 localhost
10.222.208.221 master
10.222.208.68 slave-1
I changed the hostname in /etc/hostname on the respective systems.
Also, I am able to ping slave-1 from the master system and vice versa.
/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///hadoop/data/namenode</value>
<description>NameNode directory</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///hadoop/data/datanode</value>
<description>DataNode directory</description>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
I have also added master and slave-1 in /hadoop/etc/master and /hadoop/etc/slaves on both my master and slave systems.
I have also tried cleaning data/* and then running hdfs namenode -format before start-dfs.sh, but the problem persists.
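For reference, the clean-and-reformat sequence I mean is roughly this (data directories taken from my hdfs-site.xml above):
stop-dfs.sh
rm -rf /hadoop/data/namenode/* /hadoop/data/datanode/*
hdfs namenode -format
start-dfs.sh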
Also, my network adapter setting is marked as Bridged adapter.
Any possible reason the datanode is not starting on the slave?
I can't claim to have the answer, but I found this: "start-all.sh" and "start-dfs.sh" from master node do not start the slave node services?
Changing my slaves file to a workers file made everything click into place.
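In other words, a minimal sketch (assuming a Hadoop 3.x layout, where the workers file replaced slaves, and that HADOOP_HOME points at the install directory):
cd $HADOOP_HOME/etc/hadoop
# Hadoop 3.x reads the worker list from 'workers', not 'slaves'
mv slaves workers
The file contents stay the same: one worker hostname per line.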
It seems you are using hadoop-2.x.x or above, so try this configuration. Note that the masters file (hadoop-2.x.x/etc/hadoop/masters) is not available by default from hadoop-2.x.x onwards.
hadoop-2.x.x/etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
~/etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///hadoop/data/namenode</value>
<description>NameNode directory</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///hadoop/data/datanode</value>
<description>DataNode directory</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
~/etc/hadoop/mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
~/etc/hadoop/yarn-site.xml:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
~/etc/hadoop/slaves
slave-1
Copy all of the above configuration files from the master and replace them on the slave under hadoop-2.x.x/etc/hadoop/.
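For example, something along these lines run from the master (the user and the hadoop-2.x.x path are placeholders for your actual account and install directory):
scp hadoop-2.x.x/etc/hadoop/*-site.xml hadoop-2.x.x/etc/hadoop/slaves user@slave-1:hadoop-2.x.x/etc/hadoop/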
I am trying to run a word count example. My current testing setup is:
NameNode and ResourceManager on one machine (10.38.41.134).
DataNode and NodeManager on another (10.38.41.135).
They can ssh between them without passwords.
When reading the logs, I don't get any warnings, except a security warning (I didn't set it up for testing) and a containermanager.AuxServices 'mapreduce_shuffle' warning. Upon submitting the example job, nodes react to it and output logs, which suggests that they can communicate well. NodeManager outputs memory usage, but the job doesn't budge.
Where should I even start looking for problems? Everything else I could find is either old or irrelevant. I followed the official cluster setup tutorial for version 2.5.1, which left way too many questions unanswered.
My conf files are as follows:
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.38.41.134:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.rpc-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.servicerpc-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
<value>NEVER</value>
<description>
</description>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>The runtime framework for executing MapReduce jobs.
Can be one of local, classic or yarn.
</description>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>300</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>10.38.41.134:50030</value>
</property>
</configuration>
Everything else is default.
I suggest you first try to get it working with a single server cluster so it's easier to debug.
When that is working, continue with two nodes.
As already suggested, memory might be an issue. Without tweaking the settings, it seems around 2GB is the minimum, and I'd recommend at least 4GB per server. Also remember to check the job's logs (under logs/userlogs, especially syslog).
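As a rough sketch of what that tuning can look like (the numbers below assume roughly 4GB per node and are only a starting point, not recommended values for every setup):
yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4096</value>
</property>
mapred-site.xml:
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>1024</value>
</property>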
P.S. I share your frustration about old / non-relevant documentation.
I am having trouble connecting to my NameNode server from another server on the cluster. The NameNode starts fine and I can get to the NameNode dashboard by browsing to http://localhost:50070, but trying to browse to http://hadoop-cluster-1:50070, or even using the IP address, doesn't work. I am able to ping hadoop-cluster-1 and the IP address. I am also able to traceroute the port and host, all from the server on which I get an "Unable to connect" in Firefox. See below for the config files.
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-cluster-1.com:8020</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>8192</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-cluster-1:8020</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-cluster-1:8020</value>
<description>The port where the NameNode runs the HDFS protocol.
Combined with the NameNode's hostname to build its address.
</description>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>hadoop-cluster-1:8020</value>
<description>
RPC address that handles all client requests. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name, e.g. dfs.namenode.rpc-address.ns1, dfs.namenode.rpc-address.EXAMPLENAMESERVICE. The value of this property will take the form of nn-host1:rpc-port.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/nn1,file:///data/nn2</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>131072</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/data1</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
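One thing worth checking here (a diagnostic sketch, not a confirmed fix): whether the NameNode web UI is actually listening on all interfaces or only on the loopback address. On the NameNode host:
netstat -tlnp | grep 50070
If that shows 127.0.0.1:50070 rather than 0.0.0.0:50070 or the machine's own IP, adding something like the following to hdfs-site.xml and restarting the NameNode should make the UI reachable from other hosts (a firewall blocking port 50070 would give the same "Unable to connect" symptom):
<property>
<name>dfs.namenode.http-address</name>
<value>0.0.0.0:50070</value>
</property>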
I run HBase in distributed mode. HBase starts region server Java processes on all nodes, but the web UI doesn't show them:
http://s1.ipicture.ru/uploads/20120517/16DXTnsU.png
here is hbase-site.xml
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>10.3.6.44</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/hdfs/zookeeper</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://10.3.6.44:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
BTW, the Hadoop cluster is running normally and sees all the datanodes.
Thanks very much for your help.
The problem was with DNS and the hosts file.
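For anyone hitting the same thing: what matters is that every node resolves every other node's hostname to its real LAN IP, and that a node's own hostname is not mapped to a loopback address (the 127.0.1.1 line Ubuntu adds is a common culprit). A minimal /etc/hosts sketch (10.3.6.44 is the master address from the config above; the other entries and all hostnames are placeholders):
10.3.6.44 hbase-master
10.3.6.45 regionserver-1
10.3.6.46 regionserver-2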
Add this property to your hbase-site.xml file and see if it works for you:
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>