Hadoop command `hadoop fs -ls` gives ConnectionRefused error - hadoop

When I run a hadoop command like hadoop fs -ls, I get the following error/warnings:
16/08/04 11:24:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From master/172.17.100.54 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Am I doing anything wrong with the hadoop path?

The Hadoop Native Libraries Guide says it is something to do with the installation. Please check the documentation to resolve this.
Native Hadoop Library
Hadoop has native implementations of certain components for performance reasons and for non-availability of Java implementations. These components are available in a single, dynamically-linked native library called the native hadoop library. On the *nix platforms the library is named libhadoop.so.
Please note the following:
It is mandatory to install both the zlib and gzip development packages on the target platform in order to build the native hadoop library; however, for deployment it is sufficient to install just one package if you wish to use only one codec.
It is necessary to have the correct 32/64 libraries for zlib, depending on the 32/64 bit jvm for the target platform, in order to build and deploy the native hadoop library.
Runtime
The bin/hadoop script ensures that the native hadoop library is on the library path via the system property: -Djava.library.path=<path>
During runtime, check the hadoop log files for your MapReduce tasks.
If everything is all right, then: DEBUG util.NativeCodeLoader - Trying to load the custom-built native-hadoop library... INFO util.NativeCodeLoader - Loaded the native-hadoop library
If something goes wrong, then: INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
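As a concrete illustration (a minimal sketch; the $HADOOP_HOME/lib/native path and the logs directory are assumptions based on a standard tarball layout), you can point java.library.path at the native directory yourself and then confirm in the logs whether libhadoop was picked up:
# Assumed layout: native libs under $HADOOP_HOME/lib/native, logs under $HADOOP_HOME/logs
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
# After re-running a job or daemon, look for the NativeCodeLoader messages quoted above
grep -i nativecodeloader $HADOOP_HOME/logs/*.log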
Check
NativeLibraryChecker is a tool to check whether native libraries are loaded correctly. You can launch NativeLibraryChecker as follows:
$ hadoop checknative -a
14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib/x86_64-linux-gnu/libz.so.1
snappy: true /usr/lib/libsnappy.so.1
lz4: true revision:99
bzip2: false
The second thing: Connection refused is related to your setup. Please double-check your setup.
Also see the following as pointers:
Hadoop cluster setup - java.net.ConnectException: Connection refused
Hadoop - java.net.ConnectException: Connection refused
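As a quick sanity check on the connection-refused side (a sketch; the master:9000 address comes from the error above and may differ from your core-site.xml), verify that the NameNode is actually up and listening:
# Is the NameNode process running?
jps | grep -i namenode
# What address does HDFS think the NameNode is on?
hdfs getconf -confKey fs.defaultFS
# Is anything listening on port 9000?
netstat -tlnp | grep 9000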

Related

Is NativeLoader supported on Windows?

I have built Hadoop 2.7.3 from source, and everything succeeded. I am using a prebuilt Spark 2.0 binary with Hadoop 2.7 support. When I start spark-shell, I get this warning:
16/09/23 14:53:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop checknative -a gives me
16/09/23 14:59:47 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
16/09/23 14:59:47 WARN zlib.ZlibFactory: Failed to load/initialize native-zlib library
Native library checking:
hadoop: true D:\hadoop-2.7.3\bin\hadoop.dll
zlib: false
snappy: false
lz4: true revision:99
bzip2: false
openssl: false build does not support openssl.
winutils: true D:\hadoop-2.7.3\bin\winutils.exe
16/09/23 14:59:47 INFO util.ExitUtil: Exiting with status 1
Do I have to get native builds for all the libraries? I checked the Hadoop build instructions, and I could not find any information about building the other libraries.
Or maybe there is some misconfiguration in my Spark setup, but I could not figure out what. I have these environment variables set for my Spark:
set HADOOP_HOME=D:/hadoop-2.7.3
set HADOOP_CONF_DIR=%HADOOP_HOME%/etc/hadoop
set SPARK_HOME=D:/spark-2.0.0-bin-hadoop2.7
set HADOOP_COMMON_LIB_NATIVE_DIR=%HADOOP_HOME%/bin
set SPARK_LOCAL_IP=

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

alpesh@alpesh-Inspiron-3647:~/hadoop-2.7.2/sbin$ hadoop fs -ls
16/07/05 13:59:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
It also shows me the output below:
hadoop checknative -a
16/07/05 14:00:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib: false
snappy: false
lz4: false
bzip2: false
openssl: false
16/07/05 14:00:42 INFO util.ExitUtil: Exiting with status 1
Please help me to solve this.
The library you are using was compiled for 32-bit, and you are using a 64-bit version. Open your .bashrc file, where the Hadoop configuration exists, and go to this line:
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
and replace it with
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
To get rid of this error:
Suppose the jar file is at /home/cloudera/test.jar and the class file is at /home/cloudera/workspace/MapReduce/bin/mapreduce/WordCount, where mapreduce is the package name.
The input file mytext.txt is at /user/process/mytext.txt and the output location is /user/out.
We should run this MapReduce program in the following way:
$ hadoop jar /home/cloudera/test.jar mapreduce.WordCount /user/process /user/out
Add these properties to the bash profile of the Hadoop user and the issue will be solved:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_COMMON_LIB_NATIVE_DIR"
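After adding them, reload the profile and re-run the native-library check (a sketch, assuming the native libs really do live under $HADOOP_HOME/lib/native):
source ~/.bashrc
hadoop checknative -a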
It's just a warning: Hadoop cannot find the native library, either because it was not compiled for your platform or because it does not exist.
If I were you, I would simply suppress it.
To do that, add the following to the corresponding log4j configuration file:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
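For example (a sketch; the etc/hadoop/log4j.properties location is the usual one and may differ in your installation):
echo "log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR" >> $HADOOP_HOME/etc/hadoop/log4j.properties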

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable with hadoop-2.6.0

I have started working with Hadoop; I am a beginner. I have successfully installed hadoop-2.6.0 on Ubuntu 15.04 64-bit.
Commands like start-all.sh, start-dfs.sh, etc. are working nicely.
I am facing a problem when I try to move files from the local file system to HDFS.
For example, with the copyFromLocal command:
hadoop dfs -copyFromLocal ~/Hadoop/test/text2.txt ~/Hadoop/test_hds/input.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/04 23:18:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
copyFromLocal: Call From royaljay-Inspiron-N4010/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The same problem with the put command:
hadoop dfs -put ~/test/test/test1.txt hd.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/03 20:49:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: Cannot create file/user/hduser/hd.txt.COPYING. Name node is in safe mode.
I have found many solutions, but none of them work.
If anyone has an idea about this, please tell me.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
You should not use hadoop dfs; instead, use the following command:
hdfs dfs -copyFromLocal ...
Do not use ~; instead, give the full path, like /home/hadoop/Hadoop/test/text2.txt.
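For example (a sketch; /home/hduser and the /user/hduser HDFS home directory are assumptions based on the hduser name that appears in the error above):
hdfs dfs -mkdir -p /user/hduser
hdfs dfs -copyFromLocal /home/hduser/Hadoop/test/text2.txt /user/hduser/input.txt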
Call From royaljay-Inspiron-N4010/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
127.0.1.1 will cause loopback problems. Remove the line with 127.0.1.1 from /etc/hosts.
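One way to do that (a sketch; back up the file first, and the hostname is the one from the question):
sudo cp /etc/hosts /etc/hosts.bak
# drop the "127.0.1.1 royaljay-Inspiron-N4010" line
sudo sed -i '/^127\.0\.1\.1/d' /etc/hosts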
NOTE: For copying files from local filesystem to HDFS, try using -put command instead of -copyFromLocal.

Hortonworks Data node install: Exception in secureMain

I am trying to install a Hortonworks Hadoop single-node cluster. I am able to start the namenode and the secondary namenode, but the datanode fails with the following error. How do I solve this issue?
2014-04-04 18:22:49,975 FATAL datanode.DataNode (DataNode.java:secureMain(1841)) - Exception in secureMain
java.lang.RuntimeException: Although a UNIX domain socket path is configured as /var/lib/hadoop-hdfs/dn_socket, we cannot start a localDataXceiverServer because libhadoop cannot be loaded.
See the Native Libraries Guide. Make sure libhadoop.so is available in $HADOOP_HOME/lib/native. Look in the logs for this message:
INFO util.NativeCodeLoader - Loaded the native-hadoop library
If instead you find
INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
then it means libhadoop.so is not available, and you'll have to investigate why. Alternatively, you can turn off HDFS short-circuit reads if you wish, or enable the legacy short-circuit implementation instead using dfs.client.use.legacy.blockreader.local, to remove the libhadoop dependency. But I reckon it would be better to find out what the problem with your library is.
Make sure you read and understand the articles linked before asking further questions.
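As a quick check on the datanode host (a sketch; the native-library location varies between the Hortonworks packages and a plain tarball install):
# Confirm libhadoop.so is actually present somewhere Hadoop will look
ls -l /usr/lib/hadoop/lib/native/libhadoop.so* $HADOOP_HOME/lib/native/libhadoop.so* 2>/dev/null
# "hadoop: true ..." in the output means libhadoop loads
hadoop checknative -a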

Hadoop compression : "Loaded native gpl library" but "Failed to load/initialize native-lzo library"

After several tries at installing LZO compression for Hadoop, I need help because I really have no idea why it doesn't work.
I'm using Hadoop 1.0.4 on CentOS 6. I tried http://opentsdb.net/setup-hbase.html, https://github.com/kevinweil/hadoop-lzo, and some others, but I'm still getting this error:
13/07/03 19:52:23 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/07/03 19:52:23 WARN lzo.LzoCompressor: java.lang.NoSuchFieldError: workingMemoryBuf
13/07/03 19:52:23 ERROR lzo.LzoCodec: Failed to load/initialize native-lzo library
even though the native GPL library is loaded. I've updated my mapred-site and core-site according to the links above, and I've copied the libs to the right paths (still according to those links).
The real problem is that the LZO test works on the namenode:
13/07/03 18:55:47 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/07/03 18:55:47 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev ]
I've tried setting several paths in hadoop-env.sh, but there seems to be no right solution...
So, if you have any idea or link, I'm really interested.
[edit] After a week, I'm still trying to make it functional.
I've tried sudhirvn.blogspot.fr/2010/08/hadoop-lzo-installation-errors-and.html, but removing all the LZO and gplcompression libraries and then doing a fresh install was no better.
Is this due to my Hadoop core version? Is it possible to have hadoop-core-0.20 and hadoop-core-1.0.4 at the same time? Should I compile LZO against Hadoop 0.20 in order to use LZO?
By the way, I already tried compiling hadoop-lzo like this:
CLASSPATH=/usr/lib/hadoop/hadoop-core-1.0.4.jar CFLAGS=-m64 CXXFLAGS=-m64 ant compile-native tar
If it helps, the full error is:
INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
WARN lzo.LzoCompressor: java.lang.NoSuchFieldError: workingMemoryBuf
ERROR lzo.LzoCodec: Failed to load/initialize native-lzo library
INFO lzo.LzoIndexer: [INDEX] LZO Indexing file test/table.lzo, size 0.00 GB...
WARN snappy.LoadSnappy: Snappy native library is available
INFO util.NativeCodeLoader: Loaded the native-hadoop library
INFO snappy.LoadSnappy: Snappy native library loaded
Exception in thread "main" java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzopCodec.createDecompressor(LzopCodec.java:87)
at com.hadoop.compression.lzo.LzoIndex.createIndex(LzoIndex.java:229)
at com.hadoop.compression.lzo.LzoIndexer.indexSingleFile(LzoIndexer.java:117)
at com.hadoop.compression.lzo.LzoIndexer.indexInternal(LzoIndexer.java:98)
at com.hadoop.compression.lzo.LzoIndexer.index(LzoIndexer.java:52)
at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
I really want to use lzo because I have to deal with very large files on a rather small cluster (5 nodes). Having splittable compressed files could make it run really fast.
Every remark or idea is welcome.
I was having the exact same issue and finally resolved it by randomly choosing a datanode, and checking whether lzop was installed properly.
If it wasn't, I did:
sudo apt-get install lzop
Assuming you are using Debian-based packages.
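A quick way to confirm on each node (assuming lzop ends up on the PATH after the install):
which lzop && lzop --version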
I had this same issue on my OS X machine. The problem was solved when I removed hadoop-lzo.jar (0.4.16) from my classpath and put the hadoop-gpl-compression jar in instead.
