Hadoop: ClassNotFound error at MapReduce

Just to state my setup before posing the question,
Hadoop Version : 1.0.3
The default WordCount example runs fine. But then I created a new WordCount program following this page: http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html
I compiled and jarred it much as the tutorial describes. But when I ran it using:
/usr/local/hadoop$ bin/hadoop jar wordcount.jar org.myorg.WordCount ../Space/input/ ../Space/output
I got the following error:
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.myorg.WordCount$Map
The whole error log is pasted here: http://pastebin.com/GNbsfpg3
Where did I go wrong?

There are some clues in the error messages:
12/07/14 18:09:38 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/07/14 18:09:38 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
You'll need to share your driver code with us (where you create and configure the job), but it appears you are not configuring the 'job jar': the job client is never told which jar your code is bundled into, so when the map instances actually run, the classes cannot be found.
You probably want something like:
jobConf.setJarByClass(org.myorg.WordCount.class);

I had exactly the same problem, and fixed it by adding the following to the main code:
jobConf.setJarByClass(org.myorg.WordCount.class);
Here is the full main function:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

Configuration conf = new Configuration();
Job job = new Job(conf, "wordcount");
// setJarByClass tells Hadoop which jar to ship to the task nodes
job.setJarByClass(org.myorg.WordCount.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);

I also got the above error.
I copied the jar file to every node in the cluster and set the classpath so that each slave node could access it.
That worked for me; it might help you too.
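For instance, something along these lines (host names and paths are illustrative, assuming a Hadoop 1.x layout):
# Copy the job jar onto each slave; alternatively, extend HADOOP_CLASSPATH
# in conf/hadoop-env.sh on every node so the jar is found.
for host in slave1 slave2; do
  scp wordcount.jar $host:/usr/local/hadoop/lib/
done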

Before calling runJob, set the conf like this:
conf.setJar("Your jar file name");
It may work; try it!
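For context, a minimal sketch with the old mapred API, where runJob and setJar live (the jar file name is illustrative):
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

JobConf conf = new JobConf(WordCount.class);
conf.setJar("wordcount.jar");   // point the framework at the bundled classes
// ... set mapper, reducer, input/output paths ...
JobClient.runJob(conf);         // submits the job and waits for it to finish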

Related

Hadoop - Input directory issue

The main problem is that the program throws:
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://quickstart.cloudera:8020/user/davide/wordcount/input already exists
The command I run to launch the job is the following:
hadoop jar wordcount.jar org.wordcount.WordCount /user/davide/wordcount/input /user/davide/wordcount/output
which seems correct (the output directory does not exist, as Hadoop requires).
In the java file the paths seem to be correctly set:
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
I tried several solutions, but couldn't figure out what the problem is.
Thanks in advance.
The problem lies in your argument numbering: args[0] is actually org.wordcount.WordCount, so you need to use args[1] for the input and args[2] for the output. (This happens when the jar's manifest already declares a Main-Class, so the class name on the command line is passed through as an ordinary argument.) Notice that the error says Output directory hdfs://quickstart.cloudera:8020/user/davide/wordcount/input already exists - the job is trying to use the input folder as its output.
To fix this:
FileInputFormat.addInputPath(job, new Path(args[1]));
FileOutputFormat.setOutputPath(job, new Path(args[2]));
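A quick way to confirm this kind of argument shift is to dump what the driver actually receives (a debugging sketch):
// If args[0] prints as the class name, the jar's manifest declares a
// Main-Class and the paths have shifted to args[1] and args[2].
for (int i = 0; i < args.length; i++) {
    System.err.println("args[" + i + "] = " + args[i]);
}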

Hadoop WordCount MapReduce: Getting invalid argument error for setInputFormatClass

I am trying to run a wordcount program, but I am getting an error for the code below:
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
Error:- "The method setInputFormatClass(Class)
in the type Job is not applicable for the arguments
(Class)"
The likely problem (without seeing all of your code) is that you've mixed the two MapReduce APIs, mapred (the old one) and mapreduce (the new one).
Check the imports for the two classes. I'm guessing yours probably looks like:
org.apache.hadoop.mapred.TextInputFormat
When it should be:
org.apache.hadoop.mapreduce.lib.input.TextInputFormat
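Similarly, if you use TextOutputFormat, make sure it also comes from the new API; the formats matching org.apache.hadoop.mapreduce.Job are imported like this:
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;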

Mapreduce queue Setup

I have a jar called WordCountMain.jar. I would like to run it with the hadoop command on a multi-node cluster.
My user ID is tied to the queue named "omega", so when I run the jar with the command below, I get an error indicating that my ID does not have SUBMIT_JOB access:
hadoop jar WordCountMain.jar /user/cloudera/inputs/words.txt /user/cloudera/output
The command above does not work on the multi-node cluster, though it works on a single-node CDH3 cluster.
How do I include the queue name when running the jar?
Configuration conf = new Configuration();
Job job = new Job(conf,"word count");
job.getConfiguration().set("mapreduce.job.queuename","omega");
job.setJarByClass(WordCountCombinerMain.class);
Path inputFilePath = new Path(args[0]);
Path outputFilePath = new Path(args[1]);
FileInputFormat.addInputPath(job, inputFilePath);
FileOutputFormat.setOutputPath(job, outputFilePath);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setMapperClass(CWordCountMapper.class);
job.setCombinerClass(CWordCountCombiner1.class);
job.setReducerClass(CWordCountCombiner1.class);
//job.setReducerClass(CwordCountReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
// waitForCompletion() submits the job and blocks until it finishes,
// so a separate job.submit() afterwards is redundant (and would throw).
job.waitForCompletion(true);
But I am getting the error below. It says that my MapReduce job is being submitted to the default queue. Can someone help me with this?
ERROR ipc.RPC: FailoverProxy: Failing this Call: submitJob for error(RemoteException): org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: User mytra cannot perform operation SUBMIT_JOB on queue default
Try these possible solutions in your driver class:
Solution1: configuration.set("mapred.job.queue.name", "omega");
Solution2:
String queueName= "omega";
job.getConfiguration().set("mapreduce.job.queuename", queueName);
You can use
-Dmapred.job.queue.name=yourpoolname or -Dmapreduce.job.queuename=yourpoolname
as a parameter to submit Jobs to different queues.
Be aware that mapred.job.queue.name is a deprecated property name; the new name is mapreduce.job.queuename as of Hadoop 2.4.1.
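Note that the -D generic options are parsed by GenericOptionsParser, which only runs when your driver goes through ToolRunner. A minimal driver sketch, assuming the new API (the class name WordCountDriver is illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already carries any -D overrides from the command line
        Job job = new Job(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);
        // ... set mapper, reducer, key/value classes and paths as above ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}
With that in place, the queue can be chosen at submit time (generic options go before the path arguments):
hadoop jar WordCountMain.jar WordCountDriver -Dmapreduce.job.queuename=omega /user/cloudera/inputs/words.txt /user/cloudera/output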

Error when running Hadoop Map Reduce for map-only job

I want to run a map-only job in Hadoop MapReduce, here's my code:
Configuration conf = new Configuration();
Job job = new Job(conf);
job.setJobName("import");
job.setMapperClass(Map.class);//Custom Mapper
job.setInputFormatClass(TextInputFormat.class);
job.setNumReduceTasks(0);
TextInputFormat.setInputPaths(job, new Path("/home/jonathan/input"));
But I get the error:
13/07/17 18:22:48 ERROR security.UserGroupInformation: PriviledgedActionException
as: jonathan cause:org.apache.hadoop.mapred.InvalidJobConfException:
Output directory not set.
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException:
Output directory not set.
Then I tried to use this:
job.setOutputFormatClass(org.apache.hadoop.mapred.lib.NullOutputFormat.class);
But it gives me a compilation error:
java: method setOutputFormatClass in class org.apache.hadoop.mapreduce.Job
cannot be applied to given types;
required: java.lang.Class<? extends org.apache.hadoop.mapreduce.OutputFormat>
found: java.lang.Class<org.apache.hadoop.mapred.lib.NullOutputFormat>
reason: actual argument java.lang.Class
<org.apache.hadoop.mapred.lib.NullOutputFormat> cannot be converted to
java.lang.Class<? extends org.apache.hadoop.mapreduce.OutputFormat>
by method invocation conversion
What am I doing wrong?
Map-only jobs still need an output location specified. As the error says, you're not specifying this.
I think you mean that your job produces no output at all. Hadoop still wants you to specify an output location, though nothing need be written.
You want org.apache.hadoop.mapreduce.lib.output.NullOutputFormat, not org.apache.hadoop.mapred.lib.NullOutputFormat - that is what the second error indicates, though it's subtle.
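A minimal sketch of the fix, staying entirely in the new API:
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

// NullOutputFormat satisfies the "output directory" requirement of a
// map-only job without writing anything.
job.setNumReduceTasks(0);
job.setOutputFormatClass(NullOutputFormat.class);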

Hadoop: Input and Output paths in AWS EMR job

I am trying to run a Hadoop job in Amazon Elastic MapReduce. I have my data and jar located in AWS S3. When I set up the job flow, I pass the JAR arguments as:
s3n://my-hadoop/input s3n://my-hadoop/output
Below is my Hadoop main function:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "MyMR");
    job.setJarByClass(MyMR.class);
    job.setMapperClass(MyMapper.class);
    job.setReducerClass(CountryReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
However, my job flow fails with the following log in stderr:
Exception in thread "main" java.lang.ClassNotFoundException: s3n://my-hadoop/input
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.util.RunJar.main(RunJar.java:180)
So how do I specify my input and output paths in aws emr?
This is a classic error of not defining the main class when creating an executable jar. When the jar has no knowledge of its main class, the first argument is taken to be the main class, hence the error here.
So make sure that when you create the executable jar, you specify the main class in the manifest, for example as shown below.
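For instance, the jar tool can write the Main-Class entry for you (jar, class, and directory names are illustrative, taken from the command below):
jar cfe myjar.jar com.somecompany.MyMainClass -C classes/ .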
OR
Alternatively, use args[1] and args[2] for the input and output respectively, and execute the Hadoop step something like the following:
ruby elastic-mapreduce -j $jobflow --jar s3:/my-jar-location/myjar.jar --arg com.somecompany.MyMainClass --arg s3:/input --arg s3:/output
I ran into the same problem. It's because you need 3 arguments (rather than 2) when you submit a custom jar file: the first is your main class name, the second is the input path, and the third is the output path.
You have probably solved this by now, anyway.
