I am trying to upgrade HDFS from 1.2.1 to version 2.6. However, whenever I run the start-dfs.sh -upgrade command, I get the error below:
hduser@Cluster1-NN:/usr/local/hadoop2/hadoop-2.6.0/etc_bkp/hadoop$ $HADOOP_NEW_HOME/sbin/start-dfs.sh -upgrade
15/05/17 12:45:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [nn]
Error: Please specify one of --hosts or --hostnames options and not both.
nn: starting datanode, logging to /var/hadoop/logs/hadoop-hduser-datanode-Cluster1-NN.out
dn1: starting datanode, logging to /var/hadoop/logs/hadoop-hduser-datanode-Cluster1-DN1.out
dn2: starting datanode, logging to /var/hadoop/logs/hadoop-hduser-datanode-Cluster1-DN2.out
Starting secondary namenodes [0.0.0.0]
Error: Please specify one of --hosts or --hostnames options and not both.
Please let me know if any of you experts have come across such an error.
I got the same problem on Arch Linux with a newly installed Hadoop 2.7.1. I'm not sure whether my case is the same as yours, but my experience should help. I just commented out the line HADOOP_SLAVES=/etc/hadoop/slaves in /etc/profile.d/hadoop.sh and re-logged in. Both accessing HDFS and running streaming jobs work for me.
The cause is that the Arch-specific script /etc/profile.d/hadoop.sh declares the $HADOOP_SLAVES environment variable, while start-dfs.sh calls hadoop-daemons.sh with the --hostnames argument. Having both set confuses libexec/hadoop-config.sh.
You may want to run echo $HADOOP_SLAVES as the hadoop user. If the output is non-empty, check your .bashrc and/or other shell startup scripts. Hope that helps :)
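For example, a quick way to check and fix this (assuming the Arch-style /etc/profile.d/hadoop.sh is the culprit; adjust the file name to your distro) might be:

# Run as the hadoop user: if this prints a path, some startup script sets it
echo "$HADOOP_SLAVES"

# Comment out the offending line, then log out and back in so the change takes effect
sudo sed -i 's|^\(export \)\?HADOOP_SLAVES=|#&|' /etc/profile.d/hadoop.sh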
Maybe some Hadoop library is missing. Can you show detailed information from the namenode logs?
I am getting this error in Hadoop while trying to create a directory.
2023-02-07 23:43:38,731 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: `/BigDataFirstName': Input/output error
Please explain what to do, step by step if possible.
I tried some things from the internet, but they didn't work, like:
$ hadoop-daemon.sh stop namenode
$ hadoop-daemon.sh stop datanode
and then starting them again.
You can ignore the first line.
The second line simply says you were unable to create that directory. You need to first format the namenode with hdfs namenode -format (if you haven't already), and only then start the namenode, then the datanode.
If you stop your machine without stopping the namenode cleanly, it may become corrupt and not start correctly, causing other issues when you try to use hdfs commands.
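As a rough sequence, assuming a recent Hadoop 3.x single-node setup (the directory name is taken from the question; adjust everything else to your install):

hdfs namenode -format              # only on a fresh namenode: this wipes existing HDFS metadata
hdfs --daemon start namenode
hdfs --daemon start datanode
jps                                # NameNode and DataNode should both appear
hdfs dfs -mkdir /BigDataFirstName  # should now succeed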
I set up and configured a multi-node Hadoop cluster on Ubuntu 16.04 with Hadoop 3.0.2. The following appears when I start it:
Starting namenodes on [master]
Starting datanodes
localhost: ERROR: Cannot set priority of datanode process 2984
Starting secondary namenodes [master]
master: ERROR: Cannot set priority of secondarynamenode process 3175
2018-07-17 02:19:39,470 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
Can anyone tell me where things went wrong?
I had the same error and fixed it by ensuring that the datanode and namenode locations have the right permissions and are owned by the user starting the Hadoop daemons.
Check that:
The directory path properties in hdfs-site.xml under $HADOOP_CONF_DIR point to valid locations:
dfs.namenode.name.dir
dfs.datanode.data.dir
dfs.namenode.checkpoint.dir
The Hadoop user has write permission for these paths.
If write permission is missing for any of these paths, the processes may fail to start and the error you see can occur.
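As a rough illustration of what to check (the paths and the hduser:hadoop owner below are only examples; read the real values from your own hdfs-site.xml):

# Print the configured locations (works on Hadoop 2.x and later)
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir
hdfs getconf -confKey dfs.namenode.checkpoint.dir

# Make sure the user that starts the daemons owns them and can write to them
sudo chown -R hduser:hadoop /usr/local/hadoop_store/hdfs
sudo chmod -R 750 /usr/local/hadoop_store/hdfs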
I had the same error and tried the above method, but it didn't work.
I set XXX_USER in all the xxx-env.sh files and got the same result.
Finally I set HADOOP_SHELL_EXECNAME="root" in ${HADOOP_HOME}/bin/hdfs, and the error disappeared.
The default value of HADOOP_SHELL_EXECNAME is "hdfs".
I had the same error after I renamed my Ubuntu home directory; I had to edit core-site.xml and change the value of the hadoop.tmp.dir property to the new path.
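A quick way to check this (assuming a standard Hadoop 2.x/3.x layout; the path below is only an example):

# See what hadoop.tmp.dir currently points at
grep -A1 hadoop.tmp.dir $HADOOP_HOME/etc/hadoop/core-site.xml

# Confirm the directory really exists under the renamed home directory
ls -ld /home/newusername/hadoop_tmp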
Just append "native" to the java.library.path in your HADOOP_OPTS, like this:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
or, if HADOOP_OPTS is not set anywhere else, simply:
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
I had the same issue. You just need to check the hadoop/logs directory, look for the datanode .log file, run more nameofthefile.log, and check for errors. Mine was a configuration problem; I fixed it and it worked.
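For example, something along these lines (the log directory and file name pattern depend on your setup):

cd $HADOOP_HOME/logs                    # or wherever HADOOP_LOG_DIR points
ls *datanode*.log                       # find the datanode log
tail -n 100 hadoop-*-datanode-*.log     # look for the first ERROR or FATAL line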
I've configured hadoop-2.2.0 on Ubuntu/Linux, but when I tried to run it via start-dfs.sh and start-yarn.sh it gave me this error:
Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
And when I go to localhost:50070/nn_browsedfscontent.jsp then it gives me the following error:
Can't browse the DFS since there are no live nodes available to redirect to.
So I followed this link to build hadoop from source but the problem still persists. Help needed!
Try hadoop-daemon.sh start namenode and then hadoop-daemon.sh start datanode, and check your browser at localhost:50070.
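On a Hadoop 2.2 install that would be roughly the following (assuming the scripts live in sbin):

$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
jps                                      # NameNode and DataNode should both be listed
# then open http://localhost:50070 in a browser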
I have started configuring the Hadoop 2.1.0-beta version for a single node. I followed the steps in Michael Noll's tutorial (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#configuring-single-node-clusters-first). Everything I did and configured went well. As a result of jps, I saw that the NameNode, DataNode, and Secondary NameNode started fine. Then I found out that there is no start-mapred.sh script, so I tried starting the jobtracker using hadoop-daemons.sh (hadoop-daemon.sh --config /home/nayan/dev/hadoop/etc/hadoop/ start jobtracker), and it failed with the message "Sorry, the jobtracker command is no longer supported. You may find similar functionality with the "yarn" shell command.". I do not know what configuration changes (if any) I need to make. I made changes in the "yarn-site.xml" file, as suggested in Hadoop: The Definitive Guide, but could not proceed further. Where can I find out about YARN? I checked the Apache site but could not figure it out.
You need to check your configuration XML files. Sometimes, if there is a problem in the XML, some daemons won't start.
Then try using ./start-all.sh and check with jps.
You can use start-yarn.sh to start the ResourceManager and NodeManager daemons, which take over the role of the old JobTracker.
I usually start everything using these two commands
./start-dfs.sh
./start-yarn.sh
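On a single-node Hadoop 2.x setup the whole sequence looks roughly like this (run from $HADOOP_HOME/sbin):

./start-dfs.sh     # NameNode, DataNode, SecondaryNameNode
./start-yarn.sh    # ResourceManager, NodeManager
jps                # all five daemons should show up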
You should use start-dfs.sh for the HDFS daemons and start-yarn.sh for the ResourceManager and NodeManager daemons; both scripts are in Hadoop's sbin directory.
./start-dfs.sh or start-dfs.sh will start only the HDFS components, while ./start-yarn.sh or start-yarn.sh will start the YARN components such as the NodeManager, ResourceManager, etc. If you don't want to start both components separately, try using this command:
./start-all.sh or start-all.sh (this is a deprecated command, though).
To answer your question, use ./start-yarn.sh.
Cheers!
In a YARN (Hadoop 2.x) environment you first have to start the YARN daemons.
So start with this:
at /hadoop_installed_path/sbin$ ./start-yarn.sh
Once the YARN daemons have started, you can start the DFS daemons:
at /hadoop_installed_path/sbin$ ./start-dfs.sh
1. You should check all the steps in Hadoop: The Definitive Guide.
If everything is set up properly, then use start-all.sh
and then run jps.
2. Sometimes you have to close the console for your changes to take effect, so close the console, reopen it, and then try jps again.
Hope this will help.
I am a student interested in Hadoop, and I started to explore it recently.
I tried adding an additional DataNode in pseudo-distributed mode but failed.
I am following the Yahoo developer tutorial, so the version of Hadoop I am using is hadoop-0.18.0.
I tried to start it up using two methods I found online:
Method 1 (link)
I have a problem with this line:
bin/hadoop-daemon.sh --script bin/hdfs $1 datanode $DN_CONF_OPTS
--script bin/hdfs doesn't seem to be valid in the version I am using. I changed it to --config $HADOOP_HOME/conf2, with all the configuration files in that directory, but when the script is run it gives the error:
Usage: Java DataNode [-rollback]
Any idea what this error means? The log files are created, but the DataNode did not start.
Method 2 (link)
Basically, I duplicated the conf folder into a conf2 folder, making the necessary changes documented on the website to hadoop-site.xml and hadoop-env.sh. Then I ran the command:
./hadoop-daemon.sh --config ..../conf2 start datanode
It gave the error:
datanode running as process 4190. stop it first.
So I guess this is the first DataNode that was started, and the command failed to start another DataNode.
Is there anything I can do to start an additional DataNode in the Yahoo VM Hadoop environment? Any help/advice would be greatly appreciated.
Hadoop start/stop scripts use /tmp as the default directory for storing the PIDs of already started daemons. In your situation, when you start the second datanode, the startup script finds the /tmp/hadoop-someuser-datanode.pid file from the first datanode and assumes that the datanode daemon is already started.
The plain solution is to set the HADOOP_PID_DIR env variable to something else (but not /tmp). Also, do not forget to update all the network port numbers in conf2.
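A minimal sketch of that plain solution (the directory names below are only examples; the conf2 port settings must be edited separately):

# Keep the second datanode's PID file out of /tmp so it does not clash with the first one
export HADOOP_PID_DIR=$HOME/hadoop-dn2-pids
mkdir -p $HADOOP_PID_DIR

# conf2/hadoop-site.xml must also point dfs.data.dir at a different directory and use
# unused ports for dfs.datanode.address, dfs.datanode.http.address and dfs.datanode.ipc.address
$HADOOP_HOME/bin/hadoop-daemon.sh --config $HADOOP_HOME/conf2 start datanode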
The smart solution is to start a second VM with a Hadoop environment and join them into a single cluster. That is the way Hadoop is intended to be used.