Hadoop command error - hadoop

I have installed Hadoop 2.4.0 on a single-node cluster. After starting dfs and yarn, running jps shows the following services:
6584 ResourceManager
5976 NameNode
6706 NodeManager
6407 SecondaryNameNode
6148 DataNode
7471 Jps
When I try to execute the following command I get the error
hduser#dhruv-VirtualBox:/usr/local/hadoop$ bin/hdfs dfs -mkdir /hello
OpenJDK 64-Bit Server VM warning: You have loaded library
/usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which might
have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>',
or link it with '-z noexecstack'.
14/10/22 12:21:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Can somebody please suggest what is wrong and how to rectify it?
Thanks
Dhruv

You can ignore that if you want. It means that you are running 32-bit native libraries on a 64-bit runtime.
If the log still annoys you, then you need to rebuild those native libraries in a 64-bit environment.
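If you just want to quiet the messages down without rebuilding, one option (a sketch; execstack comes from the execstack package on Ubuntu, and the log4j.properties path assumes the stock layout under $HADOOP_HOME/etc/hadoop) is to do what the first warning suggests and to lower the log level of the class that emits the second one:
# clear the executable-stack flag the JVM is complaining about
execstack -c /usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0
# hide the NativeCodeLoader warning
echo "log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR" >> $HADOOP_HOME/etc/hadoop/log4j.properties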

Related

Spark-shell --master yarn stuck

I installed Hadoop and Spark via Homebrew
$ brew list --versions | grep spark
apache-spark 2.2.0
$ brew list --versions | grep hadoop
hadoop 2.8.1 2.8.2 hdfs
where Hadoop 2.8.2 is what I am using.
I followed this post to configure Hadoop. I also followed this post to configure spark.yarn.archive as:
spark.yarn.archive hdfs://localhost:9000/user/panc25/spark-jars.zip
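(For reference, spark.yarn.archive is expected to point at an archive of Spark's jars in HDFS. A rough sketch of how such an archive could be built and uploaded, reusing the /user/panc25 path from the line above; adjust to your own install:)
cd $SPARK_HOME/jars
zip -q ../spark-jars.zip *                        # bundle every Spark jar at the root of the zip
hdfs dfs -mkdir -p /user/panc25
hdfs dfs -put ../spark-jars.zip /user/panc25/spark-jars.zip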
The following are my Hadoop/Spark-related environment settings in my .bash_profile:
# ---------------------
# Hadoop
# ---------------------
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.8.2
export YARN_CONF_DIR=$HADOOP_HOME/libexec/etc/hadoop/
alias hadoop-start="$HADOOP_HOME/sbin/start-dfs.sh;$HADOOP_HOME/sbin/start-yarn.sh"
alias hadoop-stop="$HADOOP_HOME/sbin/stop-yarn.sh;$HADOOP_HOME/sbin/stop-dfs.sh"
# ---------------------
# Apache Spark
# ---------------------
export SPARK_HOME=/usr/local/Cellar/apache-spark/2.2.0/libexec
export PATH=$SPARK_HOME/../bin:$SPARK_HOME/sbin:$PATH
I can successfully start Hadoop (hdfs + yarn):
$ hadoop-start
17/11/12 17:08:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/hadoop-panc25-namenode-mbp13mid2017.local.out
localhost: starting datanode, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/hadoop-panc25-datanode-mbp13mid2017.local.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/hadoop-panc25-secondarynamenode-mbp13mid2017.local.out
17/11/12 17:08:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/yarn-panc25-resourcemanager-mbp13mid2017.local.out
localhost: starting nodemanager, logging to /usr/local/Cellar/hadoop/2.8.2/libexec/logs/yarn-panc25-nodemanager-mbp13mid2017.local.out
$ jps
92723 NameNode
93188 Jps
93051 ResourceManager
93149 NodeManager
92814 DataNode
92926 SecondaryNameNode
However, when I start spark-shell --master yarn it seems to freeze and I don't know what is going on.
What is wrong?
BTW, I could visit the SparkUI http://localhost:4040/, but all pages are blank.
I experienced a similar issue; it was caused by the fact that I forgot to append /conf to the HADOOP_CONF_DIR env variable (/etc/hadoop/conf).
In my case I was running the Spark 2.1 Cloudera distribution and had specified HADOOP_CONF_DIR=/etc/hadoop/conf/:/etc/hive/conf/. For some reason it was getting stuck, so I changed it to HADOOP_CONF_DIR=/etc/hadoop/conf/ and it worked. Still looking for the root cause!
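In other words, before launching the shell, point HADOOP_CONF_DIR at the single directory that actually contains the *-site.xml files. A sketch (the /etc/hadoop/conf path matches the Cloudera layout from this answer; the Homebrew layout in the question would be $HADOOP_HOME/libexec/etc/hadoop):
export HADOOP_CONF_DIR=/etc/hadoop/conf    # directory holding core-site.xml, hdfs-site.xml, yarn-site.xml
export YARN_CONF_DIR=$HADOOP_CONF_DIR
spark-shell --master yarn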

Starting hadoop Daemons issues

I have installed Hadoop 2.6.0 on my Ubuntu 12.04. When I start/stop the dfs daemons with start-dfs.sh/stop-dfs.sh, it shows the error below. Please help me overcome this issue.
no namenode to stop
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
16/05/04 10:40:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Thanks,
Judging by the errors mentioned, it looks like the namenode server was not started. Can you please share your cluster details?
Meanwhile, you can check [the link for how to set up a Hadoop cluster][1] and compare it with your setup.
[1]: http://bigdatahandler.com/hadoop-hdfs/hadoop-multi-node-cluster-setup/
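A quick way to see why the namenode is not coming up is to look at its log and to confirm that core-site.xml actually defines the filesystem address. A minimal sketch, assuming $HADOOP_HOME is set:
# the last lines of the namenode log usually contain the real failure reason
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log
# confirm the filesystem URI is configured
grep -A1 -e fs.defaultFS -e fs.default.name $HADOOP_HOME/etc/hadoop/core-site.xml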

Command "hadoop fs -ls ." does not work

I think I have installed hadoop correctly. If I do jps I can see the namenode and datanode, no problem.
When I type hadoop fs -ls . I get the error:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/db/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/08/08 12:42:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: '.': No such file or directory
When I type hadoop dfs -ls . I get the error:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/db/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/08/08 12:43:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: '.': No such file or directory
And when I type hadoop hdfs -ls . I get the error:
Error: Could not find or load main class hdfs
This is regardless of whether I put '.' or '/' or whatever directory I'm in.
What does this all mean? How can I get normal, expected output? What am I missing?
Use
hdfs dfs -ls ...
I don't think there is such a thing as hadoop hdfs.
Use the command as follows:
bin/hadoop fs -ls /
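The ls: '.': No such file or directory part usually just means your HDFS home directory has not been created yet ('.' resolves to /user/<your-username>). A sketch of creating it, assuming for illustration that your Unix user is hduser:
hdfs dfs -mkdir -p /user/hduser   # create the HDFS home directory
hdfs dfs -ls /                    # the root always exists
hdfs dfs -ls .                    # now resolves to /user/hduser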

Hadoop 2.2 - datanode doesn't start up

I had Hadoop 2.4 this morning (see my previous 2 questions). Now I removed it and installed 2.2 as I had issues with 2.4, and also as I think 2.2 is the latest stable release. Now I followed the tutorial here:
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
I am pretty sure I did everything right but I am facing similar issues again.
When I run jps it is obvious that the data node is not starting up.
What am I doing wrong again?
hduser#test02:~$ start-dfs.sh
14/06/06 18:12:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-test02.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-test02.out
localhost: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
localhost: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-test02.out
0.0.0.0: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/06 18:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser#test02:~$ jps
2201 Jps
hduser#test02:~$ jps
2213 Jps
hduser#test02:~$ start-yarn
start-yarn: command not found
hduser#test02:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-test02.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-test02.out
hduser#test02:~$ jps
2498 NodeManager
2264 ResourceManager
2766 Jps
hduser#test02:~$ jps
2784 Jps
2498 NodeManager
2264 ResourceManager
hduser#test02:~$ jps
2498 NodeManager
2264 ResourceManager
2796 Jps
hduser#test02:~$
My problem was that I took these instructions from the tutorial too literally.
Paste following between <configuration>
fs.default.name
hdfs://localhost:9000
I suspected this was wrong while doing it but still I did it.
It seemed incorrect as the core-site.xml file is in XML format.
So actually, it needs to look like this.
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
Changing it to this fixed my problem.
I had similar issues with the DataNode not starting up. What I did was reformat the namenode and then restart the cluster. Running jps afterwards confirmed that the DataNode had started.
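A sketch of that sequence (note that formatting the namenode wipes HDFS metadata, so only do this on a cluster whose data you can afford to lose; if the DataNode still refuses to start afterwards, its data directory may be holding an old clusterID and may need to be cleared as well):
stop-dfs.sh
hdfs namenode -format
start-dfs.sh
jps                      # NameNode and DataNode should both be listed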
This can be caused by placing the HDFS directory in your "home" directory (on a Linux box), since the OS affects those folders when starting up and shutting down (not exactly sure how, but to prevent this problem in the future, move the HDFS directory out of your home directory).
Please let me know if this works.
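If you want to try that, a hedged example of pinning the HDFS directories to an explicit location outside your home directory in hdfs-site.xml (the /usr/local/hadoop_store path is only an illustration; create it and chown it to your hadoop user first):
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/local/hadoop_store/hdfs/datanode</value>
</property>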

Hortonworks Data node install: Exception in secureMain

I am trying to install a Hortonworks Hadoop single-node cluster. I am able to start the namenode and secondary namenode, but the datanode fails with the following error. How do I solve this issue?
2014-04-04 18:22:49,975 FATAL datanode.DataNode (DataNode.java:secureMain(1841)) - Exception in secureMain
java.lang.RuntimeException: Although a UNIX domain socket path is configured as /var/lib/hadoop-hdfs/dn_socket, we cannot start a localDataXceiverServer because libhadoop cannot be loaded.
See the Native Libraries Guide. Make sure libhadoop.so is available under $HADOOP_HOME/lib/native. Look in the logs for this message:
INFO util.NativeCodeLoader - Loaded the native-hadoop library
If instead you find
INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
then it means libhadoop.so is not available, and you'll have to investigate why. Alternatively, you can turn off HDFS short-circuit reads if you wish, or enable the legacy short-circuit implementation instead using dfs.client.use.legacy.blockreader.local, to remove the libhadoop dependency. But I reckon it would be better to find out what the problem with your library is.
Make sure you read and understand the articles linked before asking further questions.
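For reference, the short-circuit settings mentioned above live in hdfs-site.xml. A sketch of the two alternatives (either disable short-circuit reads entirely, or keep them but fall back to the legacy block reader, which does not need libhadoop):
<!-- Option 1: turn short-circuit reads off -->
<property>
<name>dfs.client.read.shortcircuit</name>
<value>false</value>
</property>
<!-- Option 2: use the legacy short-circuit reader instead -->
<property>
<name>dfs.client.use.legacy.blockreader.local</name>
<value>true</value>
</property>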
