Error while starting resource manager daemon in Hadoop 2.6

This is the error I get while starting resource manager daemon in Hadoop 2.6:
[hadoop@master hadoop]$ sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh: line 60: [: master.out: integer expression expected
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master.phd.com
Error: Could not find or load main class master.log

Related

Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error

vijay@ubuntu:~$ start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as vijay in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
localhost: namenode is running as process 22733. Stop it first and ensure /tmp/hadoop-vijay-namenode.pid file is empty before retry.
Starting datanodes
localhost: datanode is running as process 22866. Stop it first and ensure /tmp/hadoop-vijay-datanode.pid file is empty before retry.
Starting secondary namenodes [ubuntu]
ubuntu: secondarynamenode is running as process 23072. Stop it first and ensure /tmp/hadoop-vijay-secondarynamenode.pid file is empty before retry.
Starting resourcemanager
Starting nodemanagers
vijay@ubuntu:~$ jps
23072 SecondaryNameNode
22866 DataNode
22733 NameNode
24447 Jps
I am facing a Hadoop web console error.
I currently have Java version "19.0.1" (2022-10-18) and Hadoop 3.3.4 installed.
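The warnings above point to stale PID files in /tmp. A minimal cleanup sketch before retrying (the user name "vijay" and the PID-file paths are taken from the messages above; adjust to your setup):
$HADOOP_HOME/sbin/stop-all.sh      # stop whatever is still running
rm -f /tmp/hadoop-vijay-*.pid      # remove the stale PID files the warnings mention
$HADOOP_HOME/sbin/start-all.sh     # start everything again
jps                                # verify the expected daemons appear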

Error in loading hadoop distributed file system

I installed Hadoop 3.3.4 on Ubuntu 20. I ran the command to start Hadoop, i.e.
samar@pc:~$ $HADOOP_HOME/sbin/start-all.sh
It then showed this output:
WARNING: Attempting to start all Apache Hadoop daemons as samar in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [pc]
Starting resourcemanager
Starting nodemanagers
But when I tried to access the HDFS with the command
samar@pc:~$ hdfs dfs -ls
it gave this message:
ls: Call From pc/127.0.1.1 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
and the output of jps was:
10485 Jps
10101 NodeManager
9946 ResourceManager
9739 SecondaryNameNode
9533 DataNode
The NameNode did not start successfully (9000 is the NameNode's service port).
Are there more logs?
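A quick way to dig further, assuming a default $HADOOP_HOME layout (a sketch, not taken from the question):
tail -n 100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # the NameNode log usually shows why startup failed
hdfs --daemon start namenode                            # try starting the NameNode on its own
ss -lnt | grep 9000                                     # check whether anything is now listening on the NameNode port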

How to install Hadoop on M1 Mac

I followed several tutorials, and every time I start Hadoop I get this:
feiyechen@FEIYEdeMac-mini ~ % start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as feiyechen in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
localhost: datanode is running as process 55832. Stop it first and ensure /tmp/hadoop-feiyechen-datanode.pid file is empty before retry.
Starting secondary namenodes [FEIYEdeMac-mini.local]
FEIYEdeMac-mini.local: secondarynamenode is running as process 55966. Stop it first and ensure /tmp/hadoop-feiyechen-secondarynamenode.pid file is empty before retry.
2022-01-28 20:35:24,311 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
feiyechen@FEIYEdeMac-mini ~ % jps
55832 DataNode
57838 Jps
55966 SecondaryNameNode
57247 NameNode
The tutorial said I should get these after running jps.
I only have 4 items: DataNode, Jps, SecondaryNameNode, NameNode. Does that mean I failed?
It means you have a running HDFS installation, but not YARN.
You should be able to run start-yarn.sh separately if you want the ResourceManager + NodeManager.
Otherwise, there are log files created for both the YARN processes that would include information about why they are failing.
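For example (a sketch assuming a standard $HADOOP_HOME layout):
$HADOOP_HOME/sbin/start-yarn.sh                       # start YARN on its own
jps                                                   # ResourceManager and NodeManager should now appear
tail -n 100 $HADOOP_HOME/logs/*resourcemanager*.log   # if not, the YARN logs explain why
tail -n 100 $HADOOP_HOME/logs/*nodemanager*.log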

Spark-shell --master yarn stuck

I installed Hadoop and Spark via Homebrew
$ brew list --versions | grep spark
apache-spark 2.2.0
$ brew list --versions | grep hadoop
hadoop 2.8.1 2.8.2 hdfs
where Hadoop 2.8.2 is what I am using.
I followed this post to configure Hadoop. I also followed this post to configure spark.yarn.archive as:
spark.yarn.archive hdfs://localhost:9000/user/panc25/spark-jars.zip
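(For reference, an archive like the one above is typically built by zipping the Spark jars directory and uploading it to the HDFS path named in spark.yarn.archive; a sketch, with paths assumed rather than taken from the post:)
cd $SPARK_HOME/jars
zip -r -0 /tmp/spark-jars.zip .                                 # store-only zip of all Spark jars
hdfs dfs -mkdir -p /user/panc25
hdfs dfs -put /tmp/spark-jars.zip /user/panc25/spark-jars.zip   # upload to the location used above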
The following are my Hadoop/Spark-related environment settings in my .bash_profile:
# ---------------------
# Hadoop
# ---------------------
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.8.2
export YARN_CONF_DIR=$HADOOP_HOME/libexec/etc/hadoop/
alias hadoop-start="$HADOOP_HOME/sbin/start-dfs.sh;$HADOOP_HOME/sbin/start-yarn.sh"
alias hadoop-stop="$HADOOP_HOME/sbin/stop-yarn.sh;$HADOOP_HOME/sbin/stop-dfs.sh"
# ---------------------
# Apache Spark
# ---------------------
export SPARK_HOME=/usr/local/Cellar/apache-spark/2.2.0/libexec
export PATH=$SPARK_HOME/../bin:$SPARK_HOME/sbin:$PATH
I can successfully start Hadoop (HDFS + YARN):
$ hadoop-start
17/11/12 17:08:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/hadoop-panc25-namenode-mbp13mid2017.local.out
localhost: starting datanode, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/hadoop-panc25-datanode-mbp13mid2017.local.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/hadoop-panc25-secondarynamenode-mbp13mid2017.local.out
17/11/12 17:08:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/yarn-panc25-resourcemanager-mbp13mid2017.local.out
localhost: starting nodemanager, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/yarn-panc25-nodemanager-mbp13mid2017.local.out
$ jps
92723 NameNode
93188 Jps
93051 ResourceManager
93149 NodeManager
92814 DataNode
92926 SecondaryNameNode
However, when I start spark-shell --master yarn, it seems to freeze and I don't know what is going on.
What is wrong?
BTW, I can visit the Spark UI at http://localhost:4040/, but all pages are blank.
I experienced a similar issue; it was caused by the fact that I forgot to append /conf to the HADOOP_CONF_DIR env variable (/etc/hadoop/conf).
In my case I was running the Spark 2.1 Cloudera distribution and had specified HADOOP_CONF_DIR=/etc/hadoop/conf/:/etc/hive/conf/. For some reason it was getting stuck, so I changed it to HADOOP_CONF_DIR=/etc/hadoop/conf/ and it worked. Still looking for the root cause!
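A minimal sketch of that fix (the /etc/hadoop/conf path assumes a CDH-style layout, as in the answers above):
export HADOOP_CONF_DIR=/etc/hadoop/conf/   # point Spark at the Hadoop client configs only
export YARN_CONF_DIR=/etc/hadoop/conf/
spark-shell --master yarn --deploy-mode client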

Hadoop error when starting Hadoop

Hi, I can't resolve my problem when running Hadoop with start-all.sh:
rochdi@127:~$ start-all.sh
/usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer
expression expected
starting namenode, logging to
/usr/local/hadoop/libexec/../logs/hadoop-rochdi-namenode-127.0.0.1
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [:
localhost: integer expression expected
localhost: starting datanode, logging to
/usr/local/hadoop/libexec/../logs/hadoop-rochdi-datanode-127.0.0.1
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [:
localhost: integer expression expected
localhost: starting secondarynamenode, logging to
/usr/local/hadoop/libexec/../logs/hadoop-rochdi-secondarynamenode-127.0.0.1
/usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer
expression expected
starting jobtracker, logging to
/usr/local/hadoop/libexec/../logs/hadoop-rochdi-jobtracker-127.0.0.1
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [:
localhost: integer expression expected
localhost: starting tasktracker, logging to
/usr/local/hadoop/libexec/../logs/hadoop-rochdi-tasktracker-127.0.0.1
localhost: Error: could not find or load main class
localhost
My PATH:
rochdi@127:~$ echo "$PATH"
/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/hadoop/bin:/usr/local/hadoop/lib
Before the error appeared, I changed my hosts file as follows:
127.0.0.1 localhost
127.0.1.1 ubuntu.local ubuntu
and I configured my .bashrc file as:
export HADOOP_PREFIX=/usr/local/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
and the jps command output:
rochdi@127:~$ jps
3427 Jps
Help me please.
I resolved the problem by changing my hostname, and all the nodes start, but when I stop them I get this message:
rochdi@acer:~$ jps
4605 NameNode
5084 SecondaryNameNode
5171 JobTracker
5460 Jps
5410 TaskTracker
rochdi@acer:~$ stop-all.sh
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode
After extracting the Hadoop tar file, open the ~/.bashrc file and add the following at the end of the file:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
Then edit the file $HADOOP_HOME/etc/hadoop/core-site.xml, add the following config, and start Hadoop:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
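After that, the usual next steps on a fresh installation (assuming the Hadoop 2.x+ layout used in this answer, with $HADOOP_HOME/sbin on the PATH) are a one-time NameNode format and a fresh start:
hdfs namenode -format   # one-time only: initializes (and wipes) the HDFS metadata
start-dfs.sh            # NameNode, DataNode, SecondaryNameNode
start-yarn.sh           # ResourceManager, NodeManager
jps                     # verify the daemons are up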
If the problem persists, use this link: click here.
Check your hosts file and *-site.xml files for host names. This error occurs when the hostnames are not defined properly.
Use the server IP address instead of localhost in core-site.xml, and check your entries in /etc/hosts and the slaves file.
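A quick way to verify that name resolution (the hostname "master" below is a placeholder; the slaves path assumes a Hadoop 2.x layout):
getent hosts "$(hostname)"           # the machine's own name should resolve to the expected address
getent hosts master                  # same for any hostname used in core-site.xml
cat /etc/hosts                       # entries here must match the names used in the *-site.xml files
cat $HADOOP_HOME/etc/hadoop/slaves   # worker hostnames must also resolve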
The OP posted that the main error was fixed by changing the hostname (answered Dec 6 '13 at 14:19). This suggests issues with the /etc/hosts file and the 'slaves' file on the master. Remember that each host name in the cluster must match the values in those files. When an XML file is wrongly configured, it normally throws up connectivity issues between the ports of the services.
Judging from the message "no ${SERVICE} to stop", the previous start-all.sh most probably left the Java processes orphaned. The solution is to stop each process manually, e.g.
$ kill -9 4605
and then execute the start-all.sh command again.
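A sketch of that cleanup (daemon names taken from the jps output above; the PID-file path is the usual default under /tmp):
for proc in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  jps | awk -v p="$proc" '$2 == p {print $1}' | xargs -r kill -9   # kill any leftover daemon of that name
done
rm -f /tmp/hadoop-*-*.pid   # remove stale PID files so the stop scripts do not get confused
start-all.sh                # then start everything again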
It's important to mention that this is an old question; Hadoop now has versions 2 and 3, and I strongly recommend using one of the latest versions.
