http://localhost:9870 does not work (Hadoop)

I am a beginner learning Hadoop and I use the latest version of Ubuntu. I have an error when browsing localhost:9870: the browser says that the page does not exist.

First, you need to check that your Hadoop daemons are running by entering the command jps. Here my namenode is also configured as a datanode.
Second, check whether the NameNode Java process is listening on port 9870 by entering the command:
netstat -an | grep 9870
Third, check the dfs.namenode.http-address property in hdfs-site.xml (an example entry is shown after this list).
Finally, make sure you have disabled the firewall.
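A minimal hdfs-site.xml entry for the third step might look like the following; 0.0.0.0:9870 is the usual Hadoop 3.x default, so adjust the host and port if your web UI is bound elsewhere:
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:9870</value>
</property>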

$ bin/hdfs namenode -format
After running -format I can restart HDFS. I don't know why that should be necessary.

Related

Cannot connect to slave1:8088 in Hadoop 2.7.2

I am new to Hadoop and I have installed Hadoop 2.7.2 on two machines, master and slave1. I have followed this tutorial. It was not mentioned in the tutorial, but I have also edited the JAVA_HOME and HADOOP_CONF_DIR variables in hadoop-env.sh. In the end I have Hadoop installed on both machines. On master, NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager are running, and on slave1, DataNode and NodeManager are running.
I am able to go to master:8088 in the browser, but when I go to http://master:8088/cluster/nodes, only the master node is listed. I am not able to go to isci17:8088, and it is not shown as a live node. Why could that be?
Port 8088 is the ResourceManager web UI port, so if it is running on master you probably won't have it on the slave.
You should also be able to go to the NameNode web UI on port 50070 on your name node to see its status, for example http://master:50070/, and to the MapReduce JobHistory Server at http://hostname:19888/ for a web UI.
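A quick way to test whether a given web UI is actually answering (the hostnames and ports below are the ones from this question, so adjust them to your cluster) is to request it with curl and look only at the HTTP status code:
# print just the HTTP status code for each web UI (000 means no connection at all)
curl -s -o /dev/null -w "%{http_code}\n" http://master:8088/
curl -s -o /dev/null -w "%{http_code}\n" http://master:50070/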
If you have access to a terminal session, you can run the following command on each server as a root/sudo user to see which ports are listening on which server:
sudo lsof -i tcp | grep -i LISTEN
You can also run Hadoop CLI commands that will give you info. You can run the following to check Hadoop's ports in a terminal session:
hdfs portmap
Other health checks on the command line:
hdfs classpath
hdfs getconf -namenodes
hdfs dfsadmin -report -live
hdfs dfsadmin -report -dead
hdfs dfsadmin -printTopology
Depending on whether the hadoop CLI command works out of the box, you might have to find the executable and run ./hdfs. Also, depending on your distro/version, you might have to replace the command hdfs with the command hadoop.
If you want to see your cluster configuration, check your /etc/hadoop/conf folder along with /etc/hadoop/hive. You will find about 5-10 *-site.xml files. These configuration files contain your cluster's configuration, with the hostnames and ports.
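If you only want to confirm a single setting without opening those files, the hdfs getconf command mentioned above can read a property directly; dfs.namenode.http-address is used here purely as an example key:
hdfs getconf -confKey dfs.namenode.http-address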

Cannot get the namenode for Hadoop running in the browser

It is my first time installing Hadoop on Linux (Fedora distro) running in a VM (using Parallels on my Mac). I followed every step in this video, including the textual version of it. Then, when I open localhost (or the equivalent value from hostname) on port 50070, I get the following message:
...can't establish a connection to the server at localhost:50070
When I run the jps command, by the way, I don't have the datanode and namenode, unlike at the end of the textual version of the tutorial. Mine has only the following processes running:
6021 NodeManager
3947 SecondaryNameNode
5788 ResourceManager
8941 Jps
When I run the hadoop namenode command, I get some of the following [redacted] errors:
Cannot access storage directory /usr/local/hadoop_store/hdfs/namenode
16/10/11 21:52:45 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
By the way, I tried to access the above-mentioned directories and they exist.
Any hint for this newbie? ;-)
You need to give read and write permission on the directory /usr/local/hadoop_store/hdfs/namenode to the user with which you are running the services.
Once done, you should run the format command using hadoop namenode -format.
Then try to start your services.
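A rough sketch of how that could look, assuming the daemons run as a user named hduser in a group named hadoop (substitute whatever user and group you actually use):
# assumption: daemons run as user hduser, group hadoop -- adjust to your setup
sudo chown -R hduser:hadoop /usr/local/hadoop_store/hdfs/namenode
sudo chmod -R 750 /usr/local/hadoop_store/hdfs/namenode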
Delete the files under /app/hadoop/tmp/*, then try formatting the namenode again and run start-dfs.sh and start-yarn.sh.
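Put together, the clean-up could look roughly like this, assuming hadoop.tmp.dir points at /app/hadoop/tmp and the Hadoop scripts are on your PATH:
# wipe the old temporary data (assumes hadoop.tmp.dir = /app/hadoop/tmp)
rm -rf /app/hadoop/tmp/*
# reformat the namenode, then bring HDFS and YARN back up
hadoop namenode -format
start-dfs.sh
start-yarn.sh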

How do I check the status of different daemons in Hadoop?

I am struggling to find out how to determine the status of the different daemons in Hadoop. I know the status of only two daemons (namenode, jobtracker) since they are shown by default in Cloudera. How do I find the rest?
Use jps (it needs the JDK installed) to see all the daemons.
Besides jps, you can scan the results of the command below to check whether the processes are running:
ps -ef | grep hadoop
You can also use the following once the server is running:
./hadoop dfsadmin -report
Another place to look is the web interfaces of the namenode and jobtracker, for more details.
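If you would rather script the check than read the jps output by eye, a minimal sketch along these lines could work; the daemon names listed are only an example (use JobTracker/TaskTracker on Hadoop 1, ResourceManager/NodeManager on YARN):
#!/bin/bash
# report whether each expected Hadoop daemon shows up in jps output
# adjust the list to the daemons this node is supposed to run
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if jps | grep -qw "$daemon"; then
    echo "$daemon is running"
  else
    echo "$daemon is NOT running"
  fi
done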

Name node is not displaying when I hit the JPS command

17223 JobTracker
16897 DataNode
17518 Jps
17451 TaskTracker
17129 SecondaryNameNode
8571 FsShell
Name node is not displaying
It seems like you are using the same user to start all the daemons, so if the namenode is not showing up in the jps output, the namenode daemon probably got killed or was not started properly. You can use the following command to check whether the namenode process is running:
ps aux | grep -i namenode
If it is not running, you may need to format your namenode before starting the HDFS service: stop all HDFS daemons using the stop-dfs.sh script, then format your namenode using the command below, and start HDFS using the start-dfs.sh script.
hadoop namenode -format
Go through the SO post below if you are hitting this situation:
Hadoop namenode needs to be formatted after every computer start
If you are looking to check all running JVMs on the host via 'jps', you need to run the command as root. Otherwise, 'jps' will only show JVMs running as your currently logged-in user.
Please see this link for more:
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/1dlxmB_GVuU
You should always check the logs. :)

HBase connection exception

I am trying to run HBase in pseudo-distributed mode, but it doesn't work after I set up hbase-site.xml.
Each time I try to run a command inside the hbase shell I get this error:
ERROR:
org.apache.hadoop.hbase.ZooKeeperConnectionException:
org.apache.hadoop.hbase.ZooKeeperConnectionException:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = connectionLoss for
/hbase
I set up SSH and made sure all the ports are correct.
Moreover, I cannot stop HBase through ./bin/stop-hbase.sh. I only get the following output:
stopping hbase........................................................
Pseudo-distributed means that you are running all of the processes on one machine. You need to check that all of the required processes are running:
Hadoop:
NameNode
DataNode
JobTracker
TaskTracker
Zookeeper:
HQuorumPeer
HBase:
HMaster
RegionServer
You also need to ensure that your hbase-site.xml contains the correct entries for ZooKeeper, defining the hostname and the port. The HBase FAQ and wiki are really quite good. What are you missing from there?
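For reference, the ZooKeeper entries in hbase-site.xml usually look something like this; localhost and 2181 are the common single-machine defaults, so treat the values as placeholders for your own hostname and port:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>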
It's because the HBase documentation has you set up your HDFS settings to point to port 8020, but the Hadoop instructions configure HDFS for port 9000.
Change the hbase-site.xml settings that HBase recommends to point to port 9000 instead:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
  <description>The directory shared by RegionServers.</description>
</property>
I had a similar issue and got the same error message as above. In my case HMaster was not running. Using
sudo start-hbase.sh
resolved the issue.
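To confirm that HMaster actually came up afterwards, a quick check in the spirit of the jps advice above could be:
jps | grep HMaster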
I just fixed the problem by deleting the hbase.rootdir and hbase.zookeeper.property.dataDir folders. For example:
more conf/hbase-site.xml
gives me:
<property>
  <name>hbase.rootdir</name>
  <value>file:///somepath/hbase/testuser/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/somepath/hbase/testuser/zookeeper</value>
</property>
then remove the old data:
rm -fr /somepath/hbase/testuser/hbase
mkdir -p /somepath/hbase/testuser/hbase
rm -fr /somepath/hbase/testuser/zookeeper
mkdir -p /somepath/hbase/testuser/zookeeper
then to start it:
bin/start-hbase.sh
and finally I could connect to the local instance:
./bin/hbase shell
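Once the shell is open, a simple command such as status or list should now run without the ZooKeeper connection error:
status   # prints a short summary of the cluster
list     # lists the existing tables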
