Writing text output from map function in Hadoop

Input:
a,b,c,d,e
q,w,34,r,e
1,2,3,4,e
In the mapper, I grab the value of the last field and want to emit (e, (a,b,c,d)), i.e. (key, (rest of the fields from the line)).
Help appreciated.
Current code:
public static class Map extends Mapper<LongWritable, Text, Text, Text> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString(); // reads the input line by line
        String[] attr = line.split(","); // extract each attribute value from the csv record
        context.write(attr[argno - 1], line); // gives error, seems to like only integer? how to override this?
    }
}
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // further processing: loads the chunk into a 2D ArrayList for processing
    }
}
public static void main(String[] args) throws Exception {
    String line;
    String[] arguments;
    Configuration conf = new Configuration();

    // compute the total number of attributes in the file
    FileReader infile = new FileReader(args[0]);
    BufferedReader bufread = new BufferedReader(infile);
    line = bufread.readLine();
    arguments = line.split(","); // split the fields separated by comma
    conf.setInt("argno", arguments.length); // save the attribute count in the configuration

    Job job = new Job(conf, "nb");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    job.setMapperClass(Map.class); /* The method setMapperClass(Class<? extends Mapper>) in the type Job is not applicable for the arguments (Class<Map>) */
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
}
Please note the errors I'm facing (see the comments in the code).

This is simple. First parse your string to get the key and pass the rest of the line as the value. Then use the identity reducer (see the note after the code), which will group all values with the same key together as a list in your output. It should be in the same format.
So your map function will output:
e, (a,b,c,d,e)
e, (q,w,34,r,e)
e, (1,2,3,4,e)
Then after the identity reduce it should output:
e, {a,b,c,d,e; q,w,34,r,e; 1,2,3,4,e}
public static class Map extends Mapper<LongWritable, Text, Text, Text> {
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString(); // one CSV record per call
        String[] attr = line.split(","); // extract each attribute value from the csv record
        int argno = context.getConfiguration().getInt("argno", attr.length); // attribute count saved by the driver
        word.set(attr[argno - 1]); // the last field becomes the key
        context.write(word, value); // key and value must both be Writables (Text), not String
    }
}
public static void main(String[] args) throws Exception {
    String line;
    String[] arguments;
    Configuration conf = new Configuration();

    // compute the total number of attributes in the file
    FileReader infile = new FileReader(args[0]);
    BufferedReader bufread = new BufferedReader(infile);
    line = bufread.readLine();
    arguments = line.split(","); // split the fields separated by comma
    conf.setInt("argno", arguments.length); // save the attribute count in the configuration

    Job job = new Job(conf, "nb");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    job.setMapperClass(Map.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
}
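A note on the identity reducer mentioned above: in the new API the base org.apache.hadoop.mapreduce.Reducer already behaves as an identity reducer, so the driver can simply register it (or omit setReducerClass entirely, since it is the default). A minimal sketch:

// The base Reducer emits every (key, value) pair of a group unchanged,
// which is exactly the identity behaviour the answer relies on.
job.setReducerClass(Reducer.class);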

Found alternate logic. Implemented, tested and verified.

Related

First Hadoop program using map and reducer

I'm trying to run my first Hadoop program. My input file looks something like this:
1 54875451 2015 LA89LP
2 47451451 2015 LA89LP
3 878451 2015 LA89LP
4 54875 2015 LA89LP
5 2212 2015 LA89LP
When I run it I get map 100%, reduce 0%, and a java.lang.Exception: java.util.NoSuchElementException with a long stack trace, including:
java.util.NoSuchElementException
java.util.StringTokenizer.nextToken(StringTokenizer.java:349)
I don't really understand why. Any help is really appreciated.
My Mapper and Reducer look like this:
public class Draft {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, Text> {
        private Text word = new Text();
        private Text word2 = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                String id = itr.nextToken();
                String price = itr.nextToken();
                String dateTransfer = itr.nextToken();
                String postcode = itr.nextToken();
                word.set(postcode);
                word2.set(price);
                context.write(word, word2);
            }
        }
    }

    public static class MaxReducer extends Reducer<Text, Text, Text, Text> {
        private Text word = new Text();
        private Text word2 = new Text();

        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            String max = "0";
            HashSet<String> S = new HashSet<String>();
            for (Text val : values) {
                String d = key.toString();
                String price = val.toString();
                if (S.contains(d)) {
                    if (Integer.parseInt(price) > Integer.parseInt(max)) max = price;
                } else {
                    S.add(d);
                    max = price;
                }
            }
            word.set(key.toString());
            word2.set(max);
            context.write(word, word2);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Draft");
        job.setJarByClass(Draft.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(MaxReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class); // output key type for the job (reducer output)
        job.setOutputValueClass(Text.class); // output value type for the job (reducer output)
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
This error occurs when some of your records have fewer than 4 fields. Your code in the mapper assumes that each record contains 4 fields: id, price, dateTransfer and postcode.
But some of the records may not contain all 4 fields.
For example, if the record is:
1 54875451 2015
then, following line will throw an exception (java.util.NoSuchElementException):
String postcode = itr.nextToken();
You are trying to assign postcode (which is assumed to be the 4th field), but there are only 3 fields in the input record.
To overcome this problem, you need to change the string tokenizer code in your map() method. Since you are emitting only postcode and price from map(), you can change your code as below (a fuller sketch follows the snippet):
String[] tokens = value.toString().split(" ");
String price = "";
String postcode = "";

if (tokens.length >= 2)
    price = tokens[1];

if (tokens.length >= 4)
    postcode = tokens[3];

if (!price.isEmpty()) {
    word.set(postcode);
    word2.set(price);
    context.write(word, word2);
}
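Folding that guard back into the question's map(), a complete method might look like the sketch below. It reuses the word and word2 fields from the question's TokenizerMapper, assumes the whitespace-separated layout id, price, dateTransfer, postcode, and simply skips records with fewer than four fields (slightly stricter than the snippet above):

public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
    // Splitting on whitespace yields a short array for malformed records
    // instead of throwing NoSuchElementException like StringTokenizer does.
    String[] tokens = value.toString().split("\\s+");

    if (tokens.length < 4) {
        return; // record is missing fields; skip it
    }

    String price = tokens[1];    // 2nd field
    String postcode = tokens[3]; // 4th field

    word.set(postcode);
    word2.set(price);
    context.write(word, word2);
}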

Getting error: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

I have written a MapReduce job for log file analysis. My mapper outputs Text as both key and value, and I have explicitly set the map output classes in my driver class.
But I still get the error: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
public class CompositeUserMapper extends Mapper<LongWritable, Text, Text, Text> {

    IntWritable a = new IntWritable(1);
    // Text txt = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        Pattern p = Pattern.compile("\\b\\d{8}\\b");
        Matcher m = p.matcher(line);
        String userId = "";
        String CompositeId = "";
        if (m.find()) {
            userId = m.group(1);
        }
        CompositeId = line.substring(line.indexOf("compositeId :") + 13).trim();
        context.write(new Text(CompositeId), new Text(userId));
        // TODO Auto-generated method stub
        super.map(key, value, context);
    }
}
My Driver class is as below:-
public class CompositeUserDriver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        CompositeUserDriver wd = new CompositeUserDriver();
        int res = ToolRunner.run(wd, args);
        System.exit(res);
    }

    public int run(String[] arg0) throws Exception {
        // TODO Auto-generated method stub
        Job job = new Job();
        job.setJarByClass(CompositeUserDriver.class);
        job.setJobName("Composite UserId Count");
        FileInputFormat.addInputPath(job, new Path(arg0[0]));
        FileOutputFormat.setOutputPath(job, new Path(arg0[1]));
        job.setMapperClass(CompositeUserMapper.class);
        job.setReducerClass(CompositeUserReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        return job.waitForCompletion(true) ? 0 : 1;
        // return 0;
    }
}
Please advise how I can sort this problem out.
Remove the super.map(key, value, context); line from your mapper code: it calls the map method of the parent class, which is an identity mapper that emits the key and value passed to it. In this case the key is the LongWritable byte offset from the beginning of the file, which does not match the Text output key you declared, hence the type mismatch.
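For reference, a minimal sketch of the map() with the offending call removed (the regex and substring logic are kept from the question; m.group() is used here because the pattern has no capturing group):

@Override
protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    String line = value.toString();

    // User id assumed to be an 8-digit number somewhere in the line.
    Matcher m = Pattern.compile("\\b\\d{8}\\b").matcher(line);
    String userId = m.find() ? m.group() : "";

    // Composite id follows the "compositeId :" marker.
    String compositeId = line.substring(line.indexOf("compositeId :") + 13).trim();

    // Emit Text/Text, matching the declared output types; no super.map(...) call,
    // which would re-emit the LongWritable offset as the key.
    context.write(new Text(compositeId), new Text(userId));
}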

Hadoop: not all values get assembled for one key

I have some data that I would like to aggregate by key using Mapper code and then perform something on all values that belong to a key using Reducer code. For example if I have:
key = 1, val = 1,
key = 1, val = 2,
key = 1, val = 3
I would like to get key=1, val=[1,2,3] in my Reducer.
The thing is, I get something like
key = 1, val=[1,2]
key = 1, val=[3]
Why is that so?
I thought that all the values for one specific key would be assembled in one reducer call, but now it seems there can be more than one (key, val[]) pair, since there can be multiple reducers. Is that so?
Should I set the number of reducers to 1 (see the sketch after the code below)?
I'm new to Hadoop so this confuses me.
Here's the code
public class SomeJob {

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = new Job();
        job.setJarByClass(SomeJob.class);
        FileInputFormat.addInputPath(job, new Path("/home/pera/data/input/some.csv"));
        FileOutputFormat.setOutputPath(job, new Path("/home/pera/data/output"));
        job.setMapperClass(SomeMapper.class);
        job.setReducerClass(SomeReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.waitForCompletion(true);
    }
}
public class SomeMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] parts = line.split(";");
        context.write(new Text(parts[0]), new Text(parts[4]));
    }
}
public class SomeReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        String properties = "";
        for (Text value : values) {
            properties += value + " ";
        }
        context.write(key, new Text(properties));
    }
}
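As for the reducer-count question: all values for a given key are always delivered to a single reduce() call, but with more than one reduce task different keys land in different output files. If a single output file is wanted, the knob the question mentions is set in the driver; a sketch (not necessarily the explanation for the split values seen above):

// Run a single reduce task so all keys end up in one part file.
// Even with several reduce tasks, all values for one key still arrive
// at the same reduce() invocation.
job.setNumReduceTasks(1);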

Writing a value to file without moving to reducer

I have an input of records like this,
a|1|Y,
b|0|N,
c|1|N,
d|2|Y,
e|1|Y
Now, in the mapper, I have to check the value of the third column. If it is 'Y', that record has to be written directly to the output file without being sent to the reducer; records with the value 'N' have to go to the reducer for further processing.
So,
a|1|Y,
d|2|Y,
e|1|Y
should not go to reducer but
b|0|N,
c|1|N
should go to reducer and then to output file.
How can I do this?
What you can probably do is use MultipleOutputs to separate out the 'Y' and 'N' records into two different files from the mappers.
Next, you run separate jobs on the two newly generated 'Y' and 'N' data sets.
For the 'Y' set, set the number of reducers to 0 so that no reducers are used. For the 'N' set, process it the way you want using reducers.
Hope this helps.
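A minimal sketch of the splitting step with MultipleOutputs (the mapper name and the named outputs "yRecords" and "nRecords" are illustrative, not from the original answer):

public static class SplitMapper extends Mapper<LongWritable, Text, NullWritable, Text> {

    private MultipleOutputs<NullWritable, Text> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<NullWritable, Text>(context);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // "a|1|Y," -> ["a", "1", "Y,"]; startsWith copes with the trailing comma.
        String[] parts = value.toString().split("\\|");
        if (parts.length >= 3 && parts[2].trim().startsWith("Y")) {
            mos.write("yRecords", NullWritable.get(), value); // 'Y' rows go to their own file
        } else {
            mos.write("nRecords", NullWritable.get(), value); // 'N' rows feed the follow-up job
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        mos.close(); // flush the side files
    }
}

In the driver, each named output would be registered with MultipleOutputs.addNamedOutput(job, "yRecords", TextOutputFormat.class, NullWritable.class, Text.class), and likewise for "nRecords".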
See if this works,
public class Xxxx {

    public static class MyMapper extends Mapper<LongWritable, Text, LongWritable, Text> {

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            FileSystem fs = FileSystem.get(context.getConfiguration());
            Random r = new Random();
            FileSplit split = (FileSplit) context.getInputSplit();
            String fileName = split.getPath().getName();
            FSDataOutputStream out = fs.create(new Path(fileName + "-m-" + r.nextInt()));
            String parts[];
            String line = value.toString();
            String[] splits = line.split(",");
            for (String s : splits) {
                parts = s.split("\\|");
                if (parts[2].equals("Y")) {
                    out.writeBytes(line);
                } else {
                    context.write(key, value);
                }
            }
            out.close();
            fs.close();
        }
    }

    public static class MyReducer extends Reducer<LongWritable, Text, LongWritable, Text> {

        public void reduce(LongWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            for (Text t : values) {
                context.write(key, t);
            }
        }
    }

    /**
     * @param args
     * @throws IOException
     * @throws InterruptedException
     * @throws ClassNotFoundException
     */
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // TODO Auto-generated method stub
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");
        conf.set("mapred.job.tracker", "localhost:9001");
        Job job = new Job(conf, "Xxxx");
        job.setJarByClass(Xxxx.class);
        Path outPath = new Path("/output_path");
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        FileInputFormat.addInputPath(job, new Path("/input.txt"));
        FileOutputFormat.setOutputPath(job, outPath);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
In your map function you will get the input line by line. Split it using | as the delimiter (with the String.split() method, to be exact).
It will look like this:
String[] line = value.toString().split("\\|");
Access the third element of this array as line[2].
Then, using a simple if-else statement, emit only the records with value 'N' for further processing, as sketched below.
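A sketch of that check inside map(), assuming Text/Text map output and that the 'Y' records are handled outside the reducer (for example via the MultipleOutputs idea above):

public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    // "a|1|Y" -> ["a", "1", "Y"]; the pipe must be escaped because split() takes a regex.
    String[] fields = value.toString().split("\\|");

    if (fields.length >= 3 && fields[2].trim().startsWith("Y")) {
        // 'Y' record: do not forward it to the reducer (write it to a side output instead).
    } else {
        // 'N' record: forward to the reducer for further processing.
        context.write(new Text(fields[0]), value);
    }
}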

Hadoop MapReduce - I obtain the same value for all keys

I have a problem with MapReduce. Given as input a list of songs ("Songname"#"UserID"#"boolean"), I must produce a song list that specifies how many different users listened to each song, i.e. an output of ("Songname", "timelistening").
I used a Hashtable to keep only one (song, user) pair.
With short files it works well, but when I give it a list of about 1,000,000 records as input, it returns the same value (20) for every song.
This is my mapper:
public static class CanzoniMapper extends Mapper<Object, Text, Text, IntWritable> {
    private IntWritable userID = new IntWritable(0);
    private Text song = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String[] caratteri = value.toString().split("#");
        if (caratteri[2].equals("1")) {
            song.set(caratteri[0]);
            userID.set(Integer.parseInt(caratteri[1]));
            context.write(song, userID);
        }
    }
}
This is my reducer:
public static class CanzoniReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        Hashtable<IntWritable, Text> doppioni = new Hashtable<IntWritable, Text>();
        for (IntWritable val : values) {
            doppioni.put(val, key);
        }
        result.set(doppioni.size());
        doppioni.clear();
        context.write(key, result);
    }
}
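One thing worth flagging, independent of the data issue described below: Hadoop reuses the same IntWritable instance for every element of the values iterator, so collecting the Writable objects themselves is fragile. A sketch of the same reduce() that copies the primitive value into a HashSet instead (reusing the question's result field):

public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    // The framework recycles the IntWritable object on each iteration,
    // so store the int itself rather than the Writable instance.
    HashSet<Integer> distinctUsers = new HashSet<Integer>();
    for (IntWritable val : values) {
        distinctUsers.add(val.get());
    }
    result.set(distinctUsers.size()); // number of distinct users who listened to this song
    context.write(key, result);
}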
and main:
Configuration conf = new Configuration();
Job job = new Job(conf, "word count");
job.setJarByClass(Canzoni.class);
job.setMapperClass(CanzoniMapper.class);
//job.setCombinerClass(CanzoniReducer.class);
//job.setNumReduceTasks(2);
job.setReducerClass(CanzoniReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
Any ideas?
Maybe I solved it. It's an input problem. There were too many records compared to the number of songs, so in these records' list each song was listed at least once by each user.
In my test I had 20 different users, so naturally the result gives me 20 for each song.
I must increase the number of different songs.
