When I try to start Hadoop on the master node I get the following output, and the NameNode is not starting.
[hduser@dellnode1 ~]$ start-dfs.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-dellnode1.library.out
dellnode1.library: datanode running as process 5123. Stop it first.
dellnode3.library: datanode running as process 4072. Stop it first.
dellnode2.library: datanode running as process 4670. Stop it first.
dellnode1.library: secondarynamenode running as process 5234. Stop it first.
[hduser@dellnode1 ~]$ jps
5696 Jps
5123 DataNode
5234 SecondaryNameNode
"Stop it first".
First call stop-all.sh
Type jps
Call start-all.sh (or start-dfs.sh and start-mapred.sh)
Type jps (if namenode don't appear type "hadoop namenode" and check error)
Note that running stop-all.sh is deprecated on newer versions of Hadoop. You should instead use:
stop-dfs.sh
and
stop-yarn.sh
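For reference, a minimal full restart on a newer Hadoop, assuming the sbin scripts are on your PATH:
stop-dfs.sh
stop-yarn.sh
jps              # verify no Hadoop daemons are left running
start-dfs.sh
start-yarn.sh
jps              # NameNode, DataNode, etc. should now appear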
Today, while executing Pig scripts, I got the same error mentioned in the question:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-namenode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-datanode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-jobtracker-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-tasktracker-localhost.localdomain.out
So, the answer is:
[training@localhost bin]$ stop-all.sh
and then type:
[training@localhost bin]$ start-all.sh
The issue will be resolved, and you can run the Pig script with MapReduce!
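Separately, the repeated warning "/home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory" suggests that line 10 of ~/.bashrc points at a JDK directory that does not exist. A minimal sketch of what that line should look like, assuming the JDK actually lives under /usr/java (adjust the path to your installation):
export JAVA_HOME=/usr/java/jdk1.7.0_10   # assumption: use your real JDK location
export PATH=$PATH:$JAVA_HOME/bin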
On a Mac (if you installed using Homebrew), where 3.0.0 is the Hadoop version. On Linux, change the installation path accordingly (only the /usr/local/Cellar/ part will change):
/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh
/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh
/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh
Better for pro users: put this alias at the end of your ~/.bashrc or ~/.zshrc (if you are a zsh user). Then just type hstop from your command line every time you want to stop Hadoop and all the related processes:
alias hstop="/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh"
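After adding the alias, reload your shell configuration so hstop is available in the current session:
source ~/.bashrc    # or: source ~/.zshrc
hstop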
Related
I have installed Hadoop 2.7.3 on my Ubuntu 16.10. I want to create a multi-node cluster, and I have done all the steps up to formatting the NameNode, but the "hadoop-daemon.sh start namenode" command is not working. When I type this command it shows "hadoop-daemon.sh: command not found".
You need to execute the scripts with their path from the Hadoop installation directory: sbin/start-dfs.sh or sbin/start-yarn.sh.
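Alternatively, "hadoop-daemon.sh: command not found" just means the sbin directory is not on your PATH. A minimal sketch, assuming Hadoop is installed under /usr/local/hadoop (adjust to your actual install path):
export HADOOP_HOME=/usr/local/hadoop          # assumption: your install path may differ
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
hadoop-daemon.sh start namenode               # now resolves without the sbin/ prefix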
It is my first time installing Hadoop, on Linux (a Fedora distro) running in a VM (using Parallels on my Mac). I followed every step in this video, including the textual version of it. Then, when I opened localhost (or the equivalent value from hostname) on port 50070, I got the following message:
...can't establish a connection to the server at localhost:50070
By the way, when I run the jps command I don't have the DataNode and NameNode, unlike at the end of the textual version of the tutorial. Mine has only the following processes running:
6021 NodeManager
3947 SecondaryNameNode
5788 ResourceManager
8941 Jps
When I run the hadoop namenode command, I get the following [redacted] error:
Cannot access storage directory /usr/local/hadoop_store/hdfs/namenode
16/10/11 21:52:45 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
By the way, I tried to access the above-mentioned directory, and it exists.
Any hint for this newbie? ;-)
You need to give read and write permission on the directory /usr/local/hadoop_store/hdfs/namenode to the user with which you are running the services.
Once that is done, run the format command: hadoop namenode -format
Then try to start your services.
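A sketch of that whole sequence, assuming the services run as hduser in group hadoop (substitute your actual user and group):
sudo chown -R hduser:hadoop /usr/local/hadoop_store/hdfs/namenode
sudo chmod -R 750 /usr/local/hadoop_store/hdfs/namenode    # owner full access, group read/execute
hadoop namenode -format
start-dfs.sh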
Delete the files under /app/hadoop/tmp/*
and try formatting the NameNode again, then run start-dfs.sh and start-yarn.sh.
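For example (warning: formatting the NameNode wipes all HDFS metadata, so only do this if you can afford to lose the data):
rm -rf /app/hadoop/tmp/*
hadoop namenode -format
start-dfs.sh
start-yarn.sh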
I would like to see whether the HDFS file system for Hadoop is working properly. I know that jps lists the daemons that are running, but I don't actually know which daemons to look for.
I ran the following commands:
$HADOOP_PREFIX/sbin/hadoop-daemon.sh start namenode
$HADOOP_PREFIX/sbin/hadoop-daemon.sh start datanode
$HADOOP_PREFIX/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_PREFIX/sbin/yarn-daemon.sh start nodemanager
Only NameNode, ResourceManager, and NodeManager appeared when I entered jps.
Which daemons are supposed to be running in order for HDFS/Hadoop to function? Also, what could you do to fix HDFS if it is not running?
Use any of the following approaches to check your daemons' status (a combined check is sketched after this list):
The jps command lists all active daemons.
But the one below is the most appropriate:
hadoop dfsadmin -report
This lists the details of the DataNodes, which are, in a sense, your HDFS.
cat any file available in an HDFS path.
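Putting those together, a quick health check might look like this (the file path below is just a placeholder):
jps                                  # are all expected daemons listed?
hadoop dfsadmin -report              # live DataNodes, capacity, and usage
hadoop fs -ls /                      # can the client list the HDFS root?
hadoop fs -cat /path/to/some/file    # placeholder path: can data be read back?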
So, I spent two weeks validating my setup (it was fine) and finally found this command:
sudo -u hdfs jps
Initially my plain jps command was showing only one process, but Hadoop 2.6 under Ubuntu 14.04 LTS was up. I had been using sudo to run the startup scripts, and jps only lists the JVMs of the user invoking it, so daemons started as another user do not appear.
Here is the startup sequence that works, with jps listing multiple processes:
sudo su hduser
/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh
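Run jps from that same hduser shell to confirm:
jps    # should now show NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager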
I have installed a Hadoop cluster running Hadoop 0.23.9. I applied the HDFS-1943 patch, and now I can start the NameNode and DataNodes (start-dfs.sh is working for me).
However, when I start the YARN daemons (running start-yarn.sh), it shows the following error, the same as before:
[root@dbnode1 sbin]# ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hchen/hadoop-0.23.9/logs/yarn-root-resourcemanager-dbnode1.out
datanode: starting nodemanager, logging to /home/hchen/hadoop-0.23.9/logs/yarn-root-nodemanager-dbnode2.out
datanode: Unrecognized option: -jvm
datanode: Error: Could not create the Java Virtual Machine.
datanode: Error: A fatal exception has occurred. Program will exit.
I have already installed the patch, and start-dfs.sh works. Why does start-yarn.sh not work?
Run HDFS as a non-root user with the appropriate permissions. See the HDFS-1943 JIRA (linked below) for more details.
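A hedged sketch of the workaround, assuming a dedicated hdfs user (create one if it does not exist) and the installation path from the log above:
sudo useradd hdfs                                # assumption: dedicated service user
sudo chown -R hdfs /home/hchen/hadoop-0.23.9     # hand the installation to that user
sudo -u hdfs /home/hchen/hadoop-0.23.9/sbin/start-dfs.sh
sudo -u hdfs /home/hchen/hadoop-0.23.9/sbin/start-yarn.sh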
I am using hadoop-0.20.203. After making the required changes, when I start HDFS it throws the following warning during startup:
root@master:/usr/local/hadoop-0.20.203# bin/start-all.sh
starting namenode, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-namenode-master.out
master: starting datanode, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-datanode-master.out
master: Unrecognized option: -jvm
master: Error: Could not create the Java Virtual Machine.
master: Error: A fatal exception has occurred. Program will exit.
master: starting secondarynamenode, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-secondarynamenode-master.out
starting jobtracker, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-jobtracker-master.out
master: starting tasktracker, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-tasktracker-master.out
Run the script normally as a non-root user. Ensure your non-root user has the appropriate permissions. Refer to this bug report for more information:
https://issues.apache.org/jira/browse/HDFS-1943
Use sudo bin/start-all.sh and see if it helps. Ideally, though, you should avoid running Hadoop as the root user.
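Either way, to see which user the daemons are actually running as (the first column of the output):
ps -ef | grep '[h]adoop'    # the [h] keeps grep itself out of the results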