Error in Hadoop 2.2 while starting on Windows - hadoop

I am trying to install Hadoop on Windows 7. I have installed Cygwin, and when I run ./start-dfs.sh I get the following error:
Error: Could not find or load main class org.apache.hadoop.hdfs.tools.GetConf
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-kalai-namenode kalai-PC.out
localhost: Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-kalai-datanode-kalai-PC.out
localhost: Error: Could not find or load main class org.apache.hadoop.hdfs.server.datanode.DataNode
Error: Could not find or load main class org.apache.hadoop.hdfs.tools.GetConf
Can anyone let me know what I'm doing wrong here?

The above issue cleared up when I used a Command Prompt with admin privileges to format the namenode and start the services:
Remove the C:\tmp and C:\data directories manually.
Open cmd.exe with "Run as Administrator".
Format the namenode and start the services.
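For reference, a minimal sketch of that sequence from the elevated shell (the /usr/local/hadoop path is taken from the log above; adjust to your own install location):
# from a cmd.exe or Cygwin shell opened with "Run as Administrator"
cd /usr/local/hadoop
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh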

Related

start-all.sh: command not found. How do I fix this?

I tried installing Hadoop using this tutorial: link (the video is timestamped to where the problem occurs).
However, after formatting the namenode (hdfs namenode -format) I don't get the "name" folder in /abc.
Also, start-all.sh and the other /sbin commands don't work.
P.S. I did try installing Hadoop as a single node, which didn't work, so I removed it and redid everything as a two-node setup, which meant I had to reformat the namenode. I don't know if that affected this somehow.
EDIT 1: I fixed start-all.sh not being found; there was a mistake in my .bashrc that I corrected.
However, I still get these error messages when running start-all.sh or start-dfs.sh:
hadoop@linux-virtual-machine:~$ start-dfs.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:a37ThJJRRW+AlDso9xrOCBHzsFCY0/OgYet7WczVbb0.
Are you sure you want to continue connecting (yes/no)? no
0.0.0.0: Host key verification failed.
EDIT 2: Fixed the above error by changing the permissions on the hadoop folder (in my case both hadoop-2.10.0 and hadoop).
start-all.sh works perfectly, but the namenode doesn't show up.
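For reference, the kind of permission change EDIT 2 describes would look roughly like this (a sketch; the hadoop user name is an assumption and the path is taken from the log above):
sudo chown -R hadoop:hadoop /usr/local/hadoop-2.10.0
sudo chmod -R u+rwX /usr/local/hadoop-2.10.0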
It's not clear how you set up your PATH variable, or in what way the scripts are not "working". Did you chmod +x them to make them executable? Is there any log output from them at all?
The start-all script is available in the sbin directory of wherever you downloaded Hadoop, so just /path/to/sbin/start-all.sh is all you really need.
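If you would rather have the scripts on your PATH, a hedged sketch of the .bashrc lines (the /usr/local/hadoop-2.10.0 path is taken from your log; adjust to your extract location):
export HADOOP_HOME=/usr/local/hadoop-2.10.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Then run source ~/.bashrc (or open a new shell) so the change takes effect.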
Yes, the namenode needs to be formatted on a fresh cluster. The official Apache guide is the most up-to-date source and works fine for most setups.
Otherwise, I would suggest you look at Apache Ambari, which can automate your installation. Or just use a sandbox provided by Cloudera, or one of the many Docker containers that already exist for Hadoop if you don't care about fully "installing" it.

Error running pseudo-distributed hbase

I installed Hadoop and HBase on Mac OS X 10.9 through Homebrew. The version of Hadoop is 2.5.1, and the version of HBase is 0.98.6.1.
After I started HDFS and tried to start HBase, I got these errors:
Error: Could not find or load main class org.apache.hadoop.hbase.util.HBaseConfTool
Error: Could not find or load main class org.apache.hadoop.hbase.zookeeper.ZKServerTool
starting master, logging to /usr/local/Cellar/hbase/0.98.6.1/logs/hbase-lsphate-master-Ethans-MacBook-Pro.local.out
Error: Could not find or load main class org.apache.hadoop.hbase.master.HMaster
localhost: starting regionserver, logging to /usr/local/Cellar/hbase/0.98.6.1/logs/hbase-lsphate-regionserver-Ethans-MacBook-Pro.local.out
localhost: Error: Could not find or load main class org.apache.hadoop.hbase.regionserver.HRegionServer
Does anyone have a suggestion for this error? I've googled it and tried every solution I could find, but none of them worked.
Your HBASE_HOME might not be pointing to the correct location. Try exporting HBASE_HOME and HBASE_CONF_DIR like this:
export HBASE_HOME=/usr/local/Cellar/hbase/0.98.6.1/libexec
export HBASE_CONF_DIR=$HBASE_HOME/conf
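To make this survive new shells, a sketch (assuming the Homebrew paths above and a Bash login shell) is to append the exports to ~/.bash_profile and restart HBase:
echo 'export HBASE_HOME=/usr/local/Cellar/hbase/0.98.6.1/libexec' >> ~/.bash_profile
echo 'export HBASE_CONF_DIR=$HBASE_HOME/conf' >> ~/.bash_profile
source ~/.bash_profile
$HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh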
Thanks.

Hadoop Fail to start - Unrecognized option: -jvm

I am using hadoop-0.20.203. After making the required changes, when I start HDFS it throws the following warning during startup:
root@master:/usr/local/hadoop-0.20.203# bin/start-all.sh
starting namenode, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-namenode-master.out
master: starting datanode, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-datanode-master.out
master: Unrecognized option: -jvm
master: Error: Could not create the Java Virtual Machine.
master: Error: A fatal exception has occurred. Program will exit.
master: starting secondarynamenode, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-secondarynamenode-master.out
starting jobtracker, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-jobtracker-master.out
master: starting tasktracker, logging to /usr/local/hadoop-0.20.203/bin/../logs/hadoop-root-tasktracker-master.out
Run the script as a normal non-root user, and make sure that user has the appropriate permissions. Refer to this bug report for more information:
https://issues.apache.org/jira/browse/HDFS-1943
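A sketch of that setup (the hadoop user name is an assumption; the install path is taken from the output above):
sudo useradd -m hadoop                                   # dedicated non-root user (name is an assumption)
sudo chown -R hadoop:hadoop /usr/local/hadoop-0.20.203
sudo su - hadoop -c '/usr/local/hadoop-0.20.203/bin/start-all.sh'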
Use sudo bin/start-all.sh and see if it helps. Ideally, you should avoid running Hadoop as the root user.

error in namenode starting

When I try to start Hadoop on the master node I get the following output, and the namenode is not starting.
[hduser@dellnode1 ~]$ start-dfs.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-dellnode1.library.out
dellnode1.library: datanode running as process 5123. Stop it first.
dellnode3.library: datanode running as process 4072. Stop it first.
dellnode2.library: datanode running as process 4670. Stop it first.
dellnode1.library: secondarynamenode running as process 5234. Stop it first.
[hduser@dellnode1 ~]$ jps
5696 Jps
5123 DataNode
5234 SecondaryNameNode
"Stop it first".
First call stop-all.sh
Type jps
Call start-all.sh (or start-dfs.sh and start-mapred.sh)
Type jps (if namenode don't appear type "hadoop namenode" and check error)
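A sketch of the full sequence (assuming the Hadoop bin directory is already on the PATH, as in the question):
stop-all.sh
jps               # only Jps should remain
start-dfs.sh
start-mapred.sh
jps               # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker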
Note that on newer versions of Hadoop, running "stop-all.sh" is deprecated. You should instead use:
stop-dfs.sh
and
stop-yarn.sh
Today, while executing Pig scripts, I got the same error mentioned in the question:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-namenode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-datanode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-jobtracker-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-tasktracker-localhost.localdomain.out
So, the answer is:
[training@localhost bin]$ stop-all.sh
and then type:
[training@localhost bin]$ start-all.sh
The issue will be resolved. Now you can run the Pig script with MapReduce!
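Separately, the "/jdk1.7.0_10/bin: No such file or directory" lines suggest that line 10 of .bashrc builds PATH from a JDK location that does not exist. A hedged sketch of what that line probably intends (the /usr/java/jdk1.7.0_10 location is an assumption; point it at wherever your JDK actually lives):
export JAVA_HOME=/usr/java/jdk1.7.0_10    # assumed JDK location
export PATH=$PATH:$JAVA_HOME/bin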
On Mac (if you installed using Homebrew), where 3.0.0 is the Hadoop version. On Linux, change the installation path accordingly (only the /usr/local/Cellar/ part will change):
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh
Better, for pro users: put this alias at the end of your ~/.bashrc or ~/.zshrc (if you are a zsh user), and just type hstop from your command line every time you want to stop Hadoop and all the related processes.
alias hstop="/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh"
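After adding the alias, reload the shell config and run it:
source ~/.bashrc    # or ~/.zshrc if you use zsh
hstop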

Unable to start daemons using start-dfs.sh

We are using the CDH4.0.0 distribution from Cloudera. We are unable to start the daemons using the command below.
>start-dfs.sh
Starting namenodes on [localhost]
hduser@localhost's password:
localhost: mkdir: cannot create directory `/hduser': Permission denied
localhost: chown: cannot access `/hduser/hduser': No such file or directory
localhost: starting namenode, logging to /hduser/hduser/hadoop-hduser-namenode-canberra.out
localhost: /home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0/sbin/hadoop-daemon.sh: line 150: /hduser/hduser/hadoop-hduser-namenode-canberra.out: No such file or directory
localhost: head: cannot open `/hduser/hduser/hadoop-hduser-namenode-canberra.out' for reading: No such file or directory
Looks like you're using tarballs?
Try setting an override for the default HADOOP_LOG_DIR location in your etc/hadoop/hadoop-env.sh config file, like so:
export HADOOP_LOG_DIR=/path/to/hadoop/extract/logs/
And then retry sbin/start-dfs.sh, and it should work.
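If the chosen log directory does not exist yet, creating it and making it writable by the user that starts the daemons avoids the same mkdir failure. A sketch, reusing the placeholder path above and assuming the hduser account from the question:
mkdir -p /path/to/hadoop/extract/logs
chown -R hduser:hduser /path/to/hadoop/extract/logs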
In packaged environments, the start-stop scripts are tuned to provide a unique location for each type of service, via the same HADOOP_LOG_DIR env-var, so they do not have the same issue you're seeing.
If you are using packages instead, don't use these scripts and instead just do:
service hadoop-hdfs-namenode start
service hadoop-hdfs-datanode start
service hadoop-hdfs-secondarynamenode start
