I just started working with MapReduce, and I'm running into a weird bug that I haven't been able to resolve through Google. I'm writing a basic program using ArrayWritable, but when I run it, I get the following error during the Reduce phase:
java.lang.RuntimeException:
java.lang.NoSuchMethodException:org.apache.hadoop.io.ArrayWritable.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:62)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:1276)
at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1214)
at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:250)
at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:246)
at PageRank$Reduce.reduce(Unknown Source)
at PageRank$Reduce.reduce(Unknown Source)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:522)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
I'm using Hadoop 1.2.1. Here's my code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.join.*;
import java.io.IOException;
import java.util.Iterator;
public class TempClass {
public static class MapClass extends MapReduceBase
implements Mapper<LongWritable, Text, Text, ArrayWritable> {
public void map(LongWritable key, Text value,
OutputCollector<Text, ArrayWritable> output,
Reporter reporter) throws IOException {
String[] arr_str = new String[]{"a","b","c"};
for(int i=0; i<3; i++)
output.collect(new Text("my_key"), new ArrayWritable(arr_str));
}
}
public static class Reduce extends MapReduceBase
implements Reducer<Text, ArrayWritable, Text, ArrayWritable> {
public void reduce(Text key, Iterator<ArrayWritable> values,
OutputCollector<Text, ArrayWritable> output,
Reporter reporter) throws IOException {
ArrayWritable tmp;
while(values.hasNext()){
tmp = values.next();
output.collect(key, tmp);
}
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
JobConf job = new JobConf(conf, TempClass.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(ArrayWritable.class);
job.setOutputFormat(TextOutputFormat.class);
job.setInputFormat(TextInputFormat.class);
job.setMapperClass(MapClass.class);
job.setReducerClass(Reduce.class);
FileInputFormat.setInputPaths( job, new Path( args[0] ) );
FileOutputFormat.setOutputPath( job, new Path( args[1] ) );
job.setJobName( "TempClass" );
JobClient.runJob(job);
}
}
If I comment out the lines below (in the Reduce class):
//while(values.hasNext()){
// tmp = values.next();
output.collect(key, tmp);
//}
everything works fine. Do you have any ideas?
A Writable for arrays containing instances of a class. The elements of
this writable must all be instances of the same class. If this
writable will be the input for a Reducer, you will need to create a
subclass that sets the value to be of the proper type. For example:
public class IntArrayWritable extends ArrayWritable {
    public IntArrayWritable() {
        super(IntWritable.class);
    }
}
That is from the docs of ArrayWritable. In general, a Writable must have a no-argument constructor so that Hadoop can create an instance of it by reflection when it deserializes your values.
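That is exactly what your stack trace is pointing at: during the shuffle, Hadoop instantiates the value class reflectively and then fills it from the serialized bytes. Roughly, it does something like the sketch below (illustrative only, not the exact Hadoop source; hadoopConf and dataInput stand in for the job configuration and the input stream):
// ReflectionUtils.newInstance needs a no-argument constructor, which plain
// ArrayWritable does not have -- hence the NoSuchMethodException for ArrayWritable.<init>().
ArrayWritable value = ReflectionUtils.newInstance(ArrayWritable.class, hadoopConf);
value.readFields(dataInput); // populates the freshly created instance from the serialized bytes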
I just modified your code to:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
public class TempClass {
public static class TextArrayWritable extends ArrayWritable {
public TextArrayWritable() {
super(Text.class);
}
public TextArrayWritable(String[] strings) {
super(Text.class);
Text[] texts = new Text[strings.length];
for (int i = 0; i < strings.length; i++) {
texts[i] = new Text(strings[i]);
}
set(texts);
}
}
public static class MapClass extends MapReduceBase implements
Mapper<LongWritable, Text, Text, ArrayWritable> {
public void map(LongWritable key, Text value,
OutputCollector<Text, ArrayWritable> output, Reporter reporter)
throws IOException {
String[] arr_str = new String[] {
"a", "b", "c" };
for (int i = 0; i < 3; i++)
output.collect(new Text("my_key"), new TextArrayWritable(
arr_str));
}
}
public static class Reduce extends MapReduceBase implements
Reducer<Text, TextArrayWritable, Text, TextArrayWritable> {
public void reduce(Text key, Iterator<TextArrayWritable> values,
OutputCollector<Text, TextArrayWritable> output,
Reporter reporter) throws IOException {
TextArrayWritable tmp;
while (values.hasNext()) {
tmp = values.next();
output.collect(key, tmp);
}
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
JobConf job = new JobConf(conf, TempClass.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(TextArrayWritable.class);
job.setOutputFormat(TextOutputFormat.class);
job.setInputFormat(TextInputFormat.class);
job.setMapperClass(MapClass.class);
job.setReducerClass(Reduce.class);
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setJobName("TempClass");
JobClient.runJob(job);
}
}
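One extra note from me (not part of the original fix): ArrayWritable does not override toString(), so with TextOutputFormat the value column of the output file will look like TempClass$TextArrayWritable@..., not like your strings. If you want human-readable output, you could additionally override toString() inside TextArrayWritable, for example:
@Override
public String toString() {
    // toStrings() is ArrayWritable's helper that converts the stored Writables to a String[]
    return java.util.Arrays.toString(toStrings());
}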
Related
I'm trying to run a basic word-count MapReduce example using OutputCollector, but I'm getting exceptions:
INFO mapreduce.Job: Job job_local1048833344_0001 failed with state FAILED due to: NA
java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable ...
Here is the code I'm trying to run:
import java.io.*;
import java.util.StringTokenizer;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.io.ObjectWritable;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCountOutputCollector {
public static class WordCountOutputCollectorMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
output.collect(word, one);
}
}
}
public static class WordCountOutputCollectorReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
int sum = 0;
while (values.hasNext()) {
sum += values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
System.err.println("Usage: wordcount <in> <out>");
System.exit(2);
}
Job job = new Job(conf, "word count outputcollector");
job.setJarByClass(WordCountOutputCollector.class);
job.setMapperClass(WordCountOutputCollectorMapper.class);
job.setCombinerClass(WordCountOutputCollectorReducer.class);
job.setReducerClass(WordCountOutputCollectorReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
//conf.setInputFormat(TextInputFormat.class);
//conf.setOutputFormat(TextOutputFormat.class);
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
//JobClient.runJob(conf);
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
Try this: hadoop-wordcount
import java.io.IOException;
import java.io.PrintStream;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount
{
public static class Map
extends Mapper<LongWritable, Text, Text, IntWritable>
{
private static final IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable paramLongWritable, Text paramText, Mapper<LongWritable, Text, Text, IntWritable>.Context paramMapper)
throws IOException, InterruptedException
{
StringTokenizer localStringTokenizer = new StringTokenizer(paramText.toString());
while (localStringTokenizer.hasMoreTokens())
{
this.word.set(localStringTokenizer.nextToken());
paramMapper.write(this.word, one);
}
}
}
public static class Reduce
extends Reducer<Text, IntWritable, Text, IntWritable>
{
private IntWritable result = new IntWritable();
public void reduce(Text paramText, Iterable<IntWritable> paramIterable, Reducer<Text, IntWritable, Text, IntWritable>.Context paramReducer)
throws IOException, InterruptedException
{
int i = 0;
for (IntWritable localIntWritable : paramIterable) {
i += localIntWritable.get();
}
this.result.set(i);
paramReducer.write(paramText, this.result);
}
}
public static void main(String[] paramArrayOfString)
throws Exception
{
Configuration localConfiguration = new Configuration();
String[] arrayOfString = new GenericOptionsParser(localConfiguration, paramArrayOfString).getRemainingArgs();
if (arrayOfString.length != 2)
{
System.err.println("Usage: WordCount <in> <out>");
System.exit(2);
}
Job localJob = new Job(localConfiguration, "wordcount");
localJob.setJarByClass(WordCount.class);
localJob.setMapperClass(WordCount.Map.class);
localJob.setReducerClass(WordCount.Reduce.class);
localJob.setCombinerClass(WordCount.Reduce.class);
localJob.setOutputKeyClass(Text.class);
localJob.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(localJob, new Path(arrayOfString[0]));
FileOutputFormat.setOutputPath(localJob, new Path(arrayOfString[1]));
System.exit(localJob.waitForCompletion(true) ? 0 : 1);
}
}
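To run it (a usage sketch; the jar name and HDFS paths below are placeholders, not from the original answer):
hadoop jar wordcount.jar WordCount /input /output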
I think it's mainly because the map output has not been converted into Text.
Try to uncomment the code below:
conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);
JobClient.runJob(conf);
Write a MapReduce program to print the most frequently occurring words in a text document.
The threshold value can be fixed, and any word whose frequency exceeds the threshold needs to be output.
E.g.: if threshold=100 and "is" occurs 150 times in the document, it has to be printed in the output.
Program:
package org.myorg;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class WordCount {
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
context.write(word, one);
}
}
}
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
context.write(key, new IntWritable(sum));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = new Job(conf, "wordcount");
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);
}
}
Here's the complete code,
Driver Class
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class FrequentWordClassDriver extends Configured implements Tool{
@Override
public int run(String[] args) throws Exception {
if(args.length != 2){
return -1;
}
JobConf conf = new JobConf(getConf(), FrequentWordClassDriver.class);
conf.setJobName(this.getClass().getName());
FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
conf.setMapperClass(FrequentWordClassMapper.class);
conf.setReducerClass(FrequentWordClassReducer.class);
conf.setMapOutputKeyClass(Text.class);
conf.setMapOutputValueClass(IntWritable.class);
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
JobClient.runJob(conf);
return 0;
}
public static void main(String[] args) throws Exception{
int exitCode = ToolRunner.run(new FrequentWordClassDriver(), args);
System.exit(exitCode);
}
}
Mapper Class
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
public class FrequentWordClassMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable>{
@Override
public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
String line = value.toString();
for(String phrase : line.split(" ")){
output.collect(new Text(phrase.toUpperCase()), new IntWritable(1));
}
}
}
Reducer Class
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
public class FrequentWordClassReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable>{
@Override
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException{
int wordcount = 0, threshold = 100;
while(values.hasNext()){
IntWritable value = values.next();
wordcount +=value.get();
}
if(wordcount >= threshold){
output.collect(key, new IntWritable(wordcount));
}
}
}
The Driver, Mapper and Reducer classes are fairly simple and self-explanatory. The mapper class splits each sentence into words and sends them to the reducer class in the format <word, 1>. The reducer class receives the data in the format <word, [1, 1, 1, 1]>, aggregates it to count the occurrences of each word, and if the count for a word is greater than or equal to the threshold value it emits that word as output.
Hope this helps.
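If you later want the threshold to be configurable rather than hard-coded in the reducer, one option (my sketch, not part of the original answer; the property name wordcount.threshold is made up) is to pass it through the job configuration:
// In FrequentWordClassDriver.run(), before submitting the job:
conf.setInt("wordcount.threshold", 100);

// In FrequentWordClassReducer (also import org.apache.hadoop.mapred.JobConf), read it back by overriding configure():
private int threshold;

@Override
public void configure(JobConf job) {
    threshold = job.getInt("wordcount.threshold", 100); // fall back to 100 if the property is not set
}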
It's very simple.
Have a look at the traditional word-count example; you can use the same code.
After setting the Reducer class, add the line below (if you want your output in a single reduce file):
job.setNumReduceTasks(1);
Then add your condition in the reduce method.
Before context.write(key, result);, add your condition:
if ( sum > threshold) {
context.write(key, result);
}
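For context, here is a minimal sketch of the complete reduce method with the condition folded in (the threshold is hard-coded to 100 purely for illustration, and a fresh IntWritable is emitted instead of the reused result field):
public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
        sum += val.get(); // add up the 1s emitted by the mapper
    }
    if (sum > 100) {      // only emit words above the threshold
        context.write(key, new IntWritable(sum));
    }
}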
You can also achieve this by using counters.
You can increment one counter per word in the reducer:
public void reduce(Text word, Iterable<IntWritable> count,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : count) {
sum += val.get();
}
context.getCounter("WORDS", word.toString()).increment(sum); // "WORDS" is an arbitrary counter group name I added; getCounter takes a (group, name) pair
}
And then, in your driver program, you can read the counters back using
Counters counters=job.getCounters();
This way you can run multiple mappers and reducers without compromising performance.
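A hedged sketch of what reading the counters back in the driver could look like once the job has finished, assuming the reducer incremented counters in a group named "WORDS" (the group name is arbitrary) and a threshold of 100:
Counters counters = job.getCounters();
for (Counter counter : counters.getGroup("WORDS")) { // iterate every per-word counter in the group
    if (counter.getValue() >= 100) {                 // apply the threshold on the driver side
        System.out.println(counter.getName() + "\t" + counter.getValue());
    }
}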
I am trying to run the MaxTemperature example from MapReduce, but I could not find MaxTemperature.jar among the Hadoop MapReduce examples. Can someone help me find the jar file, or explain how I can run this program and see the output?
Try this:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class Temp {
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
// private final static IntWritable one = new IntWritable();
// private Text word = new Text();
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
{
String line = value.toString();
String year=line.substring(0,4);
//StringTokenizer tokenizer = new StringTokenizer(line);
// while (tokenizer.hasMoreTokens()) {
// word.set(tokenizer.nextToken());
// output.collect(word, one);
int temp = Integer.parseInt(line.substring(6,8)); // parse the temperature field
context.write(new Text(year), new IntWritable(temp));
}
}
/*
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
// int max=Integer.MIN_VALUE;
int sum=0;
while (values.hasNext()) {
sum += values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}*/
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable>
{
@Override
public void reduce(Text key, Iterable<IntWritable> values,Context context)
throws IOException, InterruptedException
{
int maxValue = Integer.MIN_VALUE;
for (IntWritable value : values) {
maxValue = Math.max(maxValue, value.get());
}
context.write(key, new IntWritable(maxValue));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
// Use the new (org.apache.hadoop.mapreduce) Job API so the driver matches the Mapper/Reducer above
Job job = new Job(conf, "Temp");
job.setJarByClass(Temp.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setCombinerClass(Reduce.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
Make a jar file of this program and execute the command:
hadoop jar Temp.jar Temp /hdfs_input_path /hdfs_output_path
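Note that the mapper assumes fixed-width records: characters 0-3 of each line are the year and characters 6-7 are the temperature (substring(6,8) reads two characters). A hypothetical input line in that layout, and the (year, temperature) pair the mapper would emit, looks like:
1901  45      ->  (1901, 45)
1902  78      ->  (1902, 78)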
I am coding a MapReduce job to find the occurrences of a search string (passed as a command-line argument) in an input file stored in HDFS, using the old API.
Below is my Driver class -
public class StringSearchDriver
{
public static void main(String[] args) throws IOException
{
JobConf jc = new JobConf(StringSearchDriver.class);
jc.set("SearchWord", args[2]);
jc.setJobName("String Search");
FileInputFormat.addInputPath(jc, new Path(args[0]));
FileOutputFormat.setOutputPath(jc, new Path(args[1]));
jc.setMapperClass(StringSearchMap.class);
jc.setReducerClass(StringSearchReduce.class);
jc.setOutputKeyClass(Text.class);
jc.setOutputValueClass(IntWritable.class);
JobClient.runJob(jc);
}
}
Below is my Mapper Class -
public class StringSearchMap extends MapReduceBase implements
Mapper<LongWritable, Text, Text, IntWritable>
{
String searchWord;
public void configure(JobConf jc)
{
searchWord = jc.get("SearchWord");
}
@Override
public void map(LongWritable key, Text value,
OutputCollector<Text, IntWritable> out, Reporter reporter)
throws IOException
{
String[] input = value.toString().split(" ");
for(String word:input)
{
if (word.equalsIgnoreCase(searchWord))
out.collect(new Text(word), new IntWritable(1));
}
}
}
When I run the job (the command-line string passed is "hi"), I get the error below:
14/09/21 22:35:41 INFO mapred.JobClient: Task Id : attempt_201409212134_0005_m_000001_2, Status : FAILED
java.lang.ClassCastException: interface javax.xml.soap.Text
at java.lang.Class.asSubclass(Class.java:3129)
at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:795)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:964)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:422)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Please suggest.
You auto-imported the wrong class.
Instead of import org.apache.hadoop.io.Text, you imported javax.xml.soap.Text.
You can find a sample of the same wrong import in this blog.
One more point: it is better to adopt the new API.
EDIT
I used the new API:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
/**
* @author Unmesha sreeveni
* Date: 23 sep 2014
*/
public class StringSearchDriver extends Configured implements Tool {
public static class Map extends
Mapper<LongWritable, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
Configuration conf = context.getConfiguration();
String line = value.toString();
String searchString = conf.get("word");
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
String token = tokenizer.nextToken();
if(token.equals(searchString)){
word.set(token);
context.write(word, one);
}
}
}
}
public static class Reduce extends
Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values,
Context context) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
context.write(key, new IntWritable(sum));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
int res = ToolRunner.run(conf, new StringSearchDriver(), args);
System.exit(res);
}
@Override
public int run(String[] args) throws Exception {
// TODO Auto-generated method stub
if (args.length != 3) {
System.out
.printf("Usage: Search String <input dir> <output dir> <search word> \n");
System.exit(-1);
}
String source = args[0];
String dest = args[1];
String searchword = args[2];
Configuration conf = new Configuration();
conf.set("word", searchword);
Job job = new Job(conf, "Search String");
job.setJarByClass(StringSearchDriver.class);
FileSystem fs = FileSystem.get(conf);
Path in =new Path(source);
Path out =new Path(dest);
if (fs.exists(out)) {
fs.delete(out, true);
}
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, in);
FileOutputFormat.setOutputPath(job, out);
boolean success = job.waitForCompletion(true);
return (success ? 0 : 1);
}
}
This works.
For Text, the required Hadoop package is org.apache.hadoop.io.
Check your imports:
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
I am a newbie to Hadoop, but I have read the Yahoo tutorial and have already written a few MapReduce jobs. All my previous jobs used TextInputFormat, but I now need to change that to KeyValueInputFormat. The problem is that KeyValueInputFormat.class cannot be found in Hadoop 0.20.2.
I am attaching my code below (it is the word-count example with only the input format changed):
package org.myorg;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
public class WordCount {
public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
output.collect(word, one);
}
}
}
public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
int sum = 0;
while (values.hasNext()) {
sum += values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}
public static void main(String[] args) throws Exception {
JobConf conf = new JobConf(WordCount.class);
conf.setJobName("wordcount");
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
conf.setMapperClass(Map.class);
conf.setCombinerClass(Reduce.class);
conf.setReducerClass(Reduce.class);
conf.setInputFormat(keyValueInputFormat.class); //The modified input format
conf.setOutputFormat(TextOutputFormat.class);
FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
JobClient.runJob(conf);
}
}
There is a KeyValueTextInputFormat in org.apache.hadoop.mapreduce.lib.input.
Some of the old tutorials are based on older versions of the Hadoop API; I recommend that you go through some of the newer tutorials.
This is what I get when I go to the source of KeyValueTextInputFormat:
package org.apache.hadoop.mapreduce.lib.input;
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
public class KeyValueTextInputFormat extends FileInputFormat<Text, Text> {
public KeyValueTextInputFormat() {
//compiled code
throw new RuntimeException("Compiled Code");
}
protected boolean isSplitable(JobContext context, Path file) {
//compiled code
throw new RuntimeException("Compiled Code");
}
public RecordReader<Text, Text> createRecordReader(InputSplit genericSplit, TaskAttemptContext context) throws IOException {
//compiled code
throw new RuntimeException("Compiled Code");
}
}
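One additional note from me (an assumption beyond the original answer): since your driver uses the old JobConf API, you could also stay on the old API and use org.apache.hadoop.mapred.KeyValueTextInputFormat, which ships with 0.20.2. The change in your driver would be roughly:
import org.apache.hadoop.mapred.KeyValueTextInputFormat; // note the class name: KeyValueTextInputFormat, not KeyValueInputFormat

conf.setInputFormat(KeyValueTextInputFormat.class);
// With this input format the mapper key is the text before the first tab (a Text),
// so the Mapper generics would change from <LongWritable, Text, ...> to <Text, Text, ...>.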