Exception in thread "main" while formatting namenode in hadoop - hadoop

satya@ubuntu:~/hadoop/bin$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hdfs/server/namenode/NameNode : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.hadoop.hdfs.server.namenode.NameNode. Program will exit.

This error (Unsupported major.minor version) generally appears when code is compiled with a higher JDK and then run on a lower one. Here 51 corresponds to JDK 7 (for more version mappings visit this link), so the class the Java 6 runtime tried to load was compiled for Java 7. Install JDK 1.7 and point the JAVA_HOME environment variable at it in hadoop-env.sh.
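If you want to confirm which class version the NameNode class was actually compiled for, you can read the class file header directly (the 7th and 8th bytes of a .class file hold the big-endian major version). A minimal sketch; the jar path is a placeholder for wherever your HDFS jar really lives:
java -version
unzip -p /path/to/hadoop/share/hadoop/hdfs/hadoop-hdfs-*.jar org/apache/hadoop/hdfs/server/namenode/NameNode.class | head -c 8 | od -An -t u1
An output ending in 0 51 means class version 51, i.e. compiled for Java 7; if java -version reports 1.6 at the same time, the mismatch described above is confirmed.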

The default Java version and the Java version Hadoop uses should match. Do this:
java -version
Then open hadoop-env.sh (it can be found in the Hadoop config folder) and look for JAVA_HOME; that Java version and the default Java version should match.
NOTE: Set JAVA_HOME to point to the JDK folder, not to its bin folder.
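A minimal sketch of that check and fix, assuming an OpenJDK 7 install under /usr/lib/jvm; both the JDK path and the location of hadoop-env.sh (conf/ on 1.x, etc/hadoop/ on 2.x) are assumptions about your layout:
java -version
grep JAVA_HOME ~/hadoop/conf/hadoop-env.sh
Then edit hadoop-env.sh so the line reads:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
i.e. the JDK directory itself, not .../bin and not .../bin/java.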

It would help if you showed your Hadoop version, but for Hadoop 2 I think you can try the new format command:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format [-clusterid cid] [-force] [-nonInteractive]
So in your case, type:
satya@ubuntu:~/hadoop/bin$ hdfs namenode -format
(I'm referring to Hadoop 2.7.0, which should apply to your situation.)

I also ran into this problem. When I typed:
$ hadoop classpath
I found that the HDFS classpath was wrong. Then I did:
vi ~/.bashrc
and added:
export HADOOP_HDFS_HOME=$HADOOP_HOME
It works; hope it helps.
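A quick way to verify that change, assuming HADOOP_HOME is already exported in your environment (that variable being set is an assumption about your setup):
source ~/.bashrc
echo $HADOOP_HDFS_HOME
hadoop classpath
The echo should print the same path as $HADOOP_HOME, and the HDFS directories should now show up in the classpath output.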

Related

Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/fs/FsShell : Unsupported major.minor version 51.0

I am trying to execute the below command on Hadoop:
hadoop fs -ls /
but it is returning the error:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/fs/FsShell : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
Could not find the main class: org.apache.hadoop.fs.FsShell. Program will exit.
I have tried updating Java but it still gives me the same error.
Note: The same command works on the other nodes but not on 2 of the cluster nodes.
Try updating the JDK to version 1.7. Perhaps you updated the JRE, not the JDK.
If you use CDH, you may need to change the Java version to jdk1.7.0_67-cloudera. After I changed JAVA_HOME from /usr/java/jdk1.6.0_31 to /usr/java/jdk1.7.0_67-cloudera, the problem was solved.
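A minimal sketch of that switch, assuming the Cloudera-bundled JDK is installed under /usr/java (list the directory to confirm the exact name on your nodes):
ls /usr/java/
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
$JAVA_HOME/bin/java -version
Put the export into hadoop-env.sh (or your shell profile) as well, so the Hadoop daemons and clients pick up the same JDK.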
Unsupported major.minor version 51.0
You need Java 7 (or higher) to run this.
See: https://www.java.com/de/download/faq/java_7.xml
If you already installed Java 7 (or higher) then execute it with:
C:\Program Files\Java\<Java Version (jre7/jre8)>\bin\java.exe -jar <Path To .Jar>
The command is not working because it is picking up a newer version of the Hadoop jars available on the nodes instead of the jars of the installed version.
It was pointing to jars placed in
/usr/lib/hadoop
Then I tried to execute it from the installation directory as below:
/opt/cloudera/parcels/CDH/lib/hadoop/bin/hadoop fs -ls /
It worked for me.
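To check which hadoop wrapper script, and therefore which set of jars, your shell is actually picking up, something like the following helps; the parcel path is the standard CDH layout and may differ on your cluster:
which hadoop
readlink -f $(which hadoop)
hadoop classpath
If the resolved path is /usr/lib/hadoop/... instead of /opt/cloudera/parcels/CDH/lib/hadoop/..., either call the parcel binary with its full path as above or put the parcel bin directory earlier in PATH.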

Running wordcount Hadoop example on Windows using Hadoop 2.6.0

I am new to Hadoop and learnt that with the 2.x versions I can try Hadoop on my local Windows 7 64-bit machine.
I installed Hadoop 2.6.0 and installed Cygwin.
I could execute bin/hadoop version, but I get the below error while executing the jar command:
Note: I have also placed the winutils.jar in the bin, from hadoop-common-2.2.0.jar. Please help; I am not able to get rid of this error. I have also entered the input and output parameters, and it still fails.
$ bin/hadoop jar /Hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount
15/02/03 12:40:45 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
at org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:438)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:484)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Usage: wordcount <in> [<in>...] <out>
I could run the below command as well:
$ bin/hadoop jar /Hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar
This used to be a known issue earlier. However, if you are able to run the program through the jar, something else is probably at fault.
If the same thing works for you from Java code, you can edit the jar to remove the code where the exception is being raised.
To be doubly sure, check whether the bin directory contains winutils.exe and hadoop.dll.
If they are not present, chances are that someone else has faced a similar issue and published the files. They are created when Hadoop is built from source code on the OS.
It seems that you have installed Hadoop 2.6.0 together with winutils from an older Hadoop version. You must install the winutils build matching your current Hadoop version. Try downloading winutils from this GitHub repo: https://github.com/steveloughran/winutils/tree/master/hadoop-2.6.0/bin
Finally, replace your bin directory with the winutils bin directory.
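A minimal sketch of wiring this up from the Cygwin shell; the install location C:\Hadoop\hadoop-2.6.0 and the download location ~/Downloads are assumptions, so adjust both to your machine:
export HADOOP_HOME='C:\Hadoop\hadoop-2.6.0'
cp ~/Downloads/winutils.exe ~/Downloads/hadoop.dll /cygdrive/c/Hadoop/hadoop-2.6.0/bin/
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output
The "null\bin\winutils.exe" in the stack trace is the telltale sign that HADOOP_HOME (or the hadoop.home.dir system property) was not set when the job was launched; the /input and /output arguments are placeholders.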

org.apache.nutch.crawl.Crawl missing in nutch 1.9 on hadoop 1.2.1

I have installed fully distributed Hadoop 1.2.1. I was trying to integrate Nutch with the steps below:
Download apache-nutch-1.9-src.zip
Add value http.agent.name into nutch-site.xml
Copy hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, masters, slaves into $NUTCH_HOME/conf
compile using ant runtime
create urls/seed.txt and put on hadoop dfs
edit $NUTCH_HOME/conf/regex-urlfilter.txt
Test crawl using command:
bin/hadoop -jar nutch-1.9.job org.apache.nutch.crawl.Crawl urls -dir urls -depth 1 -topN 5
and get this error:
Exception in thread "main" java.lang.ClassNotFoundException: org.apache.nutch.crawl.Crawl
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
I tried extracting nutch-1.9.job and I did not find the class Crawl in org/apache/nutch/crawl.
Do I need to configure something?
Crawl.java was removed in version 1.8. You can use the crawl shell script for all crawling instead (see the sketch below).
The deprecated class o.a.n.crawl.Crawler is still in the code base: https://issues.apache.org/jira/browse/NUTCH-1621
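A hedged sketch of the script-based equivalent; the argument order shown (seed dir, crawl ID, Solr URL, number of rounds) is what I recall for the 1.9-era bin/crawl, and the Solr URL is a placeholder, so check the usage message the script prints for your exact version:
bin/crawl urls crawl1 http://localhost:8983/solr/ 1
Run it from the runtime/deploy directory so it submits nutch-1.9.job to your Hadoop cluster instead of crawling locally.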

Problems running Manning's Hadoop in Practice 4.1 MapReduce code on Hadoop 1.0.3

I am attempting to run the 4.1 example code from Manning's "Hadoop in Practice" at http://www.manning.com/lam/
I am running Ubuntu 10.04 with Hadoop 1.0.3 and Java 6.
Following the examples from http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/, I used the wordcount example to verify the installation.
I then tried running the 4.1 example using:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar MyJob.jar MyJob /user/hduser/4.1/input /user/hduser/4.1output
I get the error:
Exception in thread "main" java.lang.ClassNotFoundException: MyJob
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
The public run method in the example that works and the one in Manning's code appear to be different.
I appreciate your assistance!
Give the complete path of the jar. For example, if MyJob.jar is present inside your home directory, then: hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar /home/hduser/MyJob.jar MyJob /user/hduser/4.1/input /user/hduser/4.1output
I had the same problem with Hadoop 1.0.3.16 and Java 6, but I managed to get Manning example 4.1 working by adding job.setJar("/path/to/MyJob.jar"); after job.setJobName("MyJob");. I thought of making this change because I was getting the warning: WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String). Do you get the same warning, Tariq?
I also tried adding job.setJarByClass(MyJob.class); instead, but this did not work.
Cheers, Alex

hcatalog with mapreduce

I get the following error while executing a MapReduce program.
I have placed all the jars in the hadoop/lib directory and have also passed the jars with -libjars.
This is the command I am executing:
$HADOOP_HOME/bin/hadoop --config $HADOOP_HOME/conf jar /home/shash/distinct.jar HwordCount -libjars $LIB_JARS WordCount HWordCount2
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hcatalog.mapreduce.HCatOutputFormat
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:996)
at org.apache.hadoop.mapreduce.JobContext.getOutputFormatClass(JobContext.java:248)
at org.apache.hadoop.mapred.Task.initialize(Task.java:501)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:306)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: java.lang.ClassNotFoundException: org.apache.hcatalog.mapreduce.HCatOutputFormat
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:943)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:994)
... 8 more
Make sure LIB_JARS is a comma-separated list (as opposed to colon-separated like CLASSPATH)
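For example; the jar names and locations below are placeholders for whatever your job actually depends on:
export LIB_JARS=/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar,/usr/lib/hive/lib/hive-metastore.jar
export HADOOP_CLASSPATH=/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar:/usr/lib/hive/lib/hive-metastore.jar
-libjars (comma-separated) ships the jars to the map and reduce tasks, which is what the ClassNotFoundException inside MapTask above points to, while HADOOP_CLASSPATH (colon-separated) only affects the client JVM that submits the job.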
Applies To: CDH 5.0.x, CDH 5.1.x, CDH 5.2.x, CDH 5.3.x, Sqoop
Cause: Sqoop cannot pick up the HCatalog libraries because Cloudera Manager does not set the HIVE_HOME environment variable; it needs to be set manually.
This problem is tracked in the following JIRA:
https://issues.apache.org/jira/browse/SQOOP-2145
The fix for this JIRA has been included in CDH since version 5.4.0.
Workaround (applicable to CDH versions lower than 5.4.0): execute the commands below in the shell before calling the Sqoop command, or add them to /etc/sqoop/conf/sqoop-env.sh (create it if it does not already exist):
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive (for parcel installation)
export HIVE_HOME=/usr/lib/hive (for package installation)
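A minimal sketch of making that workaround persistent on a parcel-based node; the sqoop-env.sh path is the one mentioned above, and sudo/tee is just one way to append to a root-owned file:
echo 'export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive' | sudo tee -a /etc/sqoop/conf/sqoop-env.sh
. /etc/sqoop/conf/sqoop-env.sh
echo $HIVE_HOME
The last echo is just a sanity check that the variable is now set in the current shell before you rerun the Sqoop command.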
