Hadoop is not starting any datanode

I've configured hadoop-2.2.0 on Ubuntu/Linux, but when I tried to run it via start-dfs.sh and start-yarn.sh it gave me this error:
Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
And when I go to localhost:50070/nn_browsedfscontent.jsp, it gives me the following error:
Can't browse the DFS since there are no live nodes available to redirect to.
So I followed this link to build Hadoop from source, but the problem still persists. Help needed!

Try hadoop-daemon.sh start namenode and then hadoop-daemon.sh start datanode, then check your browser at localhost:50070.
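A minimal sketch of that suggestion, assuming the Hadoop sbin/bin directories are on your PATH (jps ships with the JDK and is a handy way to confirm the daemons actually came up):

```shell
# Start the HDFS daemons one at a time (Hadoop 2.x daemon script)
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode

# Verify both JVMs are actually running;
# the output should list NameNode and DataNode
jps

# Then open http://localhost:50070 in a browser
```

If jps shows the NameNode but no DataNode, check the datanode log under $HADOOP_HOME/logs for the actual startup failure.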

Related

How can I solve a Hadoop error while installing on the Mac terminal?

I am getting this error in Hadoop while trying to create a directory:
2023-02-07 23:43:38,731 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: `/BigDataFirstName': Input/output error
Please explain what to do, step by step if possible. I tried some suggestions from the internet, but they didn't work. For example:
$ hadoop-daemon.sh stop namenode
$ hadoop-daemon.sh stop datanode
and then starting them again.
The first line is just a warning; you can ignore it.
The second line simply says you're unable to create that directory. You'll need to first format the namenode using hdfs namenode -format (if you haven't already), and only then start the namenode, then the datanode.
If you stop your machine without shutting down the namenode cleanly, its metadata may become corrupt and the namenode may not start correctly, causing further errors when you try to use hdfs commands.
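A sketch of that order of operations. Note that formatting erases any existing HDFS metadata, so only do it on a fresh install; the --daemon syntax below assumes Hadoop 3.x, which matches the 2023 log timestamp in the question:

```shell
# One-time, fresh installs only: format the namenode
# (this destroys any existing HDFS metadata!)
hdfs namenode -format

# Start the daemons in order: namenode first, then datanode
hdfs --daemon start namenode
hdfs --daemon start datanode

# Directory creation should now work without the I/O error
hdfs dfs -mkdir /BigDataFirstName
hdfs dfs -ls /
```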

Why are hdfs dfs commands stuck?

I have recently installed Hadoop 3.2.0 on Linux and am trying to issue commands like hdfs dfs -ls /. But I don't see any output at all; it seems to be stuck somewhere.
I also get the warning message:
'Unable to load native-hadoop library for your platform... using builtin-java classes where applicable'
But that is all. Please let me know how to resolve this.
$ hadoop fs -ls /
(no output; the command hangs)

Spark with HDFS without YARN

I am trying to configure HDFS for Spark. Simply running spark-submit with --master spark://IP:7077 --deploy-mode cluster ... ends up with
16/04/08 10:16:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
and then it stops working.
I downloaded and launched a Hadoop cluster; for testing purposes it is only one machine. I also set environment variables, although I think I forgot some of them. Specifically, I set:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/spark/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/
Could you try to help me? I followed this guide: http://www.ccs.neu.edu/home/cbw/spark.html

Hadoop file system commands not found

I have installed Hadoop 2.6.0 on my laptop which runs Ubuntu 14.04 lts.
Below is the link I followed for Hadoop installation: https://github.com/ev2900/YouTube_Vedio/blob/master/Install%20Hadoop%202.6%20on%20Ubuntu%2014.04%20LTS/Install%20Hadoop%202.6%20--%20Virtual%20Box%20Ubuntu%2014.04%20LTS.txt
After installation, I ran two commands:
hadoop namenode -format - it works fine
hadoop fs -ls - it gives the following error:
15/11/15 16:15:28 WARN
util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
ls: `.': No such file or directory
Please help me solve this error.
The 15/11/15 16:15:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable message is a perpetual annoyance, not an error, so don't worry about that.
The ls: '.': No such file or directory error means that you haven't created your HDFS home directory yet, so you're trying to ls a folder that doesn't exist. Do the following (as the HDFS superuser) to create your home folder, and ensure it has the correct permissions (which depends on what specifically you want to do re: groups etc.):
hdfs dfs -mkdir -p /user/'your-username'
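Spelled out, assuming your login name is alice (substitute your own) and HDFS runs as the hdfs user, a minimal sketch:

```shell
# Create the home directory as the HDFS superuser
sudo -u hdfs hdfs dfs -mkdir -p /user/alice

# Hand ownership to the actual user so they can write to it
sudo -u hdfs hdfs dfs -chown alice:alice /user/alice

# Relative paths such as plain `hadoop fs -ls` now resolve
# to /user/alice instead of failing with "No such file or directory"
hadoop fs -ls
```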

Namenode cannot start

I am trying to upgrade HDFS from 1.2.1 to version 2.6. However, whenever I run the start-dfs.sh -upgrade command, I get the error below:
hduser#Cluster1-NN:/usr/local/hadoop2/hadoop-2.6.0/etc_bkp/hadoop$ $HADOOP_NEW_HOME/sbin/start-dfs.sh -upgrade
15/05/17 12:45:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [nn]
Error: Please specify one of --hosts or --hostnames options and not both.
nn: starting datanode, logging to /var/hadoop/logs/hadoop-hduser-datanode-Cluster1-NN.out
dn1: starting datanode, logging to /var/hadoop/logs/hadoop-hduser-datanode-Cluster1-DN1.out
dn2: starting datanode, logging to /var/hadoop/logs/hadoop-hduser-datanode-Cluster1-DN2.out
Starting secondary namenodes [0.0.0.0]
Error: Please specify one of --hosts or --hostnames options and not both.
Please let me know if any of you experts have come across such error.
I got the same problem on Arch Linux with a newly installed Hadoop 2.7.1. I'm not sure whether my case is the same as yours, but my experience should help. I just commented out the line HADOOP_SLAVES=/etc/hadoop/slaves in /etc/profile.d/hadoop.sh and logged in again. Both accessing HDFS and running streaming jobs now work for me.
The cause is that the Arch-specific script /etc/profile.d/hadoop.sh declares the $HADOOP_SLAVES environment variable, and start-dfs.sh calls hadoop-daemons.sh with --hostnames arguments. That combination confuses libexec/hadoop-config.sh, which refuses to accept both.
You may want to type echo $HADOOP_SLAVES as the hadoop user. If there is non-empty output, check your .bashrc and/or other shell startup scripts. Hope that helps :)
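A quick way to run that check, sketched below; the file paths are the Arch defaults mentioned above, so adjust them for your distribution:

```shell
# As the hadoop user: anything other than "<unset>" means some
# startup script is setting HADOOP_SLAVES behind your back
echo "${HADOOP_SLAVES:-<unset>}"

# Find where it is being exported, if anywhere
grep -rn "HADOOP_SLAVES" ~/.bashrc ~/.profile /etc/profile.d/ 2>/dev/null
```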
Maybe some Hadoop library is missing. Can you show the detailed information from the namenode logs?
