I am trying to install a Hortonworks Hadoop single-node cluster. I am able to start the namenode and secondary namenode, but the datanode fails with the following error. How do I solve this issue?
2014-04-04 18:22:49,975 FATAL datanode.DataNode (DataNode.java:secureMain(1841)) - Exception in secureMain
java.lang.RuntimeException: Although a UNIX domain socket path is configured as /var/lib/hadoop-hdfs/dn_socket, we cannot start a localDataXceiverServer because libhadoop cannot be loaded.
See the Native Libraries Guide. Make sure libhadoop.so is available in $HADOOP_HOME/lib/native. Look in the logs for this message:
INFO util.NativeCodeLoader - Loaded the native-hadoop library
If instead you find
INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
then libhadoop.so is not available, and you'll have to investigate why. Alternatively, you can turn off HDFS short-circuit reads if you wish, or enable the legacy short-circuit implementation instead via dfs.client.use.legacy.blockreader.local, to remove the libhadoop dependency (a sketch follows). But I reckon it would be better to find out what the problem with your library is.
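For reference, a minimal sketch of those two workarounds (the property names are the standard HDFS ones; the values shown in the comments are assumptions you would put in your hdfs-site.xml, and the getconf calls just verify what the client actually sees):
# Option A: turn short-circuit reads off entirely (hdfs-site.xml)
#   dfs.client.read.shortcircuit = false
# Option B: keep short-circuit reads but use the legacy implementation,
#   which does not need libhadoop (hdfs-site.xml)
#   dfs.client.read.shortcircuit = true
#   dfs.client.use.legacy.blockreader.local = true
# Verify the effective values:
hdfs getconf -confKey dfs.client.read.shortcircuit
hdfs getconf -confKey dfs.client.use.legacy.blockreader.local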
Make sure you read and understand the articles linked before asking further questions.
Related
I am using Rancher to manage an environment, and I am running Hadoop + YARN (Experimental) for Flink, plus ZooKeeper, in Rancher.
I am trying to configure HDFS in flink-conf.yaml.
These are the changes I made related to HDFS:
fs.hdfs.hadoopconf: /etc/hadoop
recovery.zookeeper.storageDir: hdfs://:8020/flink/recovery
state.backend.fs.checkpointdir: hdfs://:8020/flink/checkpoints
And I get an error that says:
2016-09-06 14:10:44,965 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
What did I do wrong?
Best regards
When I run a hadoop command like hadoop fs -ls, I get the following error/warnings:
16/08/04 11:24:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From master/172.17.100.54 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Am I doing anything wrong with the hadoop path?
The Hadoop Native Libraries Guide says this is something to do with the installation; please check the documentation to resolve it.
Native Hadoop Library
Hadoop has native implementations of certain components for performance reasons and for non-availability of Java implementations. These components are available in a single, dynamically-linked native library called the native hadoop library. On the *nix platforms the library is named libhadoop.so.
Please note the following:
It is mandatory to install both the zlib and gzip development packages on the target platform in order to build the native hadoop library; however, for deployment it is sufficient to install just one package if you wish to use only one codec.
It is necessary to have the correct 32/64 libraries for zlib, depending on the 32/64 bit jvm for the target platform, in order to build and deploy the native hadoop library.
Runtime
The bin/hadoop script ensures that the native hadoop library is on the library path via the system property: -Djava.library.path=<path>
During runtime, check the hadoop log files for your MapReduce tasks.
If everything is all right, then:
DEBUG util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...
INFO util.NativeCodeLoader - Loaded the native-hadoop library
If something goes wrong, then:
INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Check
NativeLibraryChecker is a tool to check whether the native libraries are loaded correctly. You can launch NativeLibraryChecker as follows:
$ hadoop checknative -a
14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib/x86_64-linux-gnu/libz.so.1
snappy: true /usr/lib/libsnappy.so.1
lz4: true revision:99
bzip2: false
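If checknative reports hadoop: false, one common cause is simply that the JVM is never given the native directory on its library path. Here is a minimal sketch of the usual workaround, assuming the default $HADOOP_HOME/lib/native layout (adjust the paths to your install):
# Confirm the library exists and matches your JVM architecture (32 vs 64 bit)
ls -l $HADOOP_HOME/lib/native/
file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0
# Point Hadoop at it explicitly, e.g. in etc/hadoop/hadoop-env.sh
export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
# Re-check
hadoop checknative -a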
The second thing, Connection refused, is related to your setup; please double-check your configuration (there is a quick diagnostic sketch after the links below).
Also see the below as pointers:
Hadoop cluster setup - java.net.ConnectException: Connection refused
Hadoop - java.net.ConnectException: Connection refused
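As a quick sketch of how to narrow down the Connection refused part, assuming the NameNode is expected on master:9000 as in your error message:
# Are the HDFS daemons actually running?
jps
# Is anything listening on the NameNode RPC port?
netstat -tlnp | grep 9000    # or: ss -tlnp | grep 9000
# Does the hostname resolve to the address you expect (not 127.0.1.1)?
getent hosts master
# Can the client machine reach the port at all?
telnet master 9000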
I have started working with Hadoop; I am a beginner. I have successfully installed hadoop-2.6.0 on Ubuntu 15.04 64-bit.
Commands like start-all.sh, start-dfs.sh, etc. work nicely.
I am facing a problem when I try to move files from the local file system to HDFS.
For example, with the copyFromLocal command:
hadoop dfs -copyFromLocal ~/Hadoop/test/text2.txt ~/Hadoop/test_hds/input.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/04 23:18:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
copyFromLocal: Call From royaljay-Inspiron-N4010/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The same problem with other commands, e.g. put:
hadoop dfs -put ~/test/test/test1.txt hd.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/03 20:49:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: Cannot create file/user/hduser/hd.txt.COPYING. Name node is in safe mode.
I have found many solutions, but none of them works.
If anyone has an idea about this, please tell me.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
You should not use hadoop dfs; instead, use the following command:
hdfs dfs -copyFromLocal ...
Do not use ~; instead, give the full path, like /home/hadoop/Hadoop/test/text2.txt (an example follows).
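For example, a full-path version of your command could look like this (the local and HDFS paths are assumptions based on your example; adjust them to your setup):
hdfs dfs -mkdir -p /user/hduser/test_hds
hdfs dfs -copyFromLocal /home/hadoop/Hadoop/test/text2.txt /user/hduser/test_hds/input.txt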
Call From royaljay-Inspiron-N4010/127.0.1.1 to localhost:9000 failed
on connection exception: java.net.ConnectException: Connection
refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
127.0.1.1 will cause loopback problems. Remove the line with 127.0.1.1 from /etc/hosts.
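If you want a one-liner for that edit, something like this should work (it assumes the only lines starting with 127.0.1.1 are the ones you want gone, and you should back the file up first):
sudo cp /etc/hosts /etc/hosts.bak
sudo sed -i '/^127\.0\.1\.1/d' /etc/hosts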
NOTE: For copying files from the local filesystem to HDFS, try using the -put command instead of -copyFromLocal.
I have installed Hadoop 2.4.0 on a single-node cluster. After starting DFS and YARN and executing jps, I see the following services running:
6584 ResourceManager
5976 NameNode
6706 NodeManager
6407 SecondaryNameNode
6148 DataNode
7471 Jps
When I try to execute the following command, I get an error:
hduser#dhruv-VirtualBox:/usr/local/hadoop$ bin/hdfs dfs -mkdir /hello
OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/10/22 12:21:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Can somebody please suggest what is wrong and how to rectify it?
Thanks
Dhruv
You can ignore that if you want. It means that you are running 32-bit native libraries on a 64-bit runtime.
If the warning still bothers you, you will need to rebuild those native libraries in a 64-bit environment; a rough sketch follows.
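As a rough sketch: first confirm the architecture of the bundled library, and if it really is 32-bit, rebuild the natives from the Hadoop source tarball on a 64-bit machine. The file names and output path below are assumptions for a 2.4.0 source build; the -Pdist,native profile is the standard way to build the native libraries.
# Check what you actually have
file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0
# Rebuild from source on a 64-bit box (needs JDK, Maven, protobuf, cmake, zlib dev headers)
tar xzf hadoop-2.4.0-src.tar.gz && cd hadoop-2.4.0-src
mvn package -Pdist,native -DskipTests -Dtar
# Then copy hadoop-dist/target/hadoop-2.4.0/lib/native/* over $HADOOP_HOME/lib/native/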
After several attempts at installing LZO compression for Hadoop, I need help, because I really have no idea why it doesn't work.
I'm using hadoop 1.0.4 on CentOS 6. I tried http://opentsdb.net/setup-hbase.html, https://github.com/kevinweil/hadoop-lzo, and some others, but I'm still getting this error:
13/07/03 19:52:23 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/07/03 19:52:23 WARN lzo.LzoCompressor: java.lang.NoSuchFieldError: workingMemoryBuf
13/07/03 19:52:23 ERROR lzo.LzoCodec: Failed to load/initialize native-lzo library
even though the native GPL library is loaded. I've updated my mapred-site and core-site according to those links, and I've copied the libs into the right paths (still according to those links).
The real problem is that the LZO test works on the namenode:
13/07/03 18:55:47 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/07/03 18:55:47 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev ]
I've tried setting several paths in hadoop-env.sh, but there seems to be no right solution...
So, if you have any idea or link...? I'm really interested.
[edit] After a week, I'm still trying to make it functional.
I've tried sudhirvn.blogspot.fr/2010/08/hadoop-lzo-installation-errors-and.html, but removing all the LZO and gplcompression libraries and then doing a fresh install was no better.
Is that due to my hadoop-core version? Is it possible to have hadoop-core-0.20 and hadoop-core-1.0.4 at the same time? Should I compile LZO against a 0.20 Hadoop in order to use it?
By the way, I have already tried compiling hadoop-lzo like this:
CLASSPATH=/usr/lib/hadoop/hadoop-core-1.0.4.jar CFLAGS=-m64 CXXFLAGS=-m64 ant compile-native tar
If it helps, the full error is:
INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
WARN lzo.LzoCompressor: java.lang.NoSuchFieldError: workingMemoryBuf
ERROR lzo.LzoCodec: Failed to load/initialize native-lzo library
INFO lzo.LzoIndexer: [INDEX] LZO Indexing file test/table.lzo, size 0.00 GB...
WARN snappy.LoadSnappy: Snappy native library is available
INFO util.NativeCodeLoader: Loaded the native-hadoop library
INFO snappy.LoadSnappy: Snappy native library loaded
Exception in thread "main" java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzopCodec.createDecompressor(LzopCodec.java:87)
at com.hadoop.compression.lzo.LzoIndex.createIndex(LzoIndex.java:229)
at com.hadoop.compression.lzo.LzoIndexer.indexSingleFile(LzoIndexer.java:117)
at com.hadoop.compression.lzo.LzoIndexer.indexInternal(LzoIndexer.java:98)
at com.hadoop.compression.lzo.LzoIndexer.index(LzoIndexer.java:52)
at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
I really want to use lzo because I have to deal with very large files on a rather small cluster (5 nodes). Having splittable compressed files could make it run really fast.
Every remark or idea is welcome.
I was having the exact same issue and finally resolved it by randomly choosing a datanode, and checking whether lzop was installed properly.
If it wasn't, I did:
sudo apt-get install lzop
Assuming you are using Debian-based packages.
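A quick way to check each node is to loop over them via ssh (the hostnames here are placeholders for your datanodes):
for host in datanode1 datanode2 datanode3; do
  ssh "$host" 'which lzop && lzop --version | head -1'
done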
I had the same issue on my OS X machine. The problem was solved when I removed hadoop-lzo.jar (0.4.16) from my classpath and put the hadoop-gpl-compression jar there instead.