What's the Input key of MapReduce by default? - hadoop

I'm using MapReduce on Hadoop 2.6.0, and I want to skip the first six lines of my data file, so I use
if (key.get() < 6)
    return;
else {
    // do ...
}
in my map() function.
But it did not work. I found that the input key of map() is not the line number of the file; it looks like the running total of the lengths of the preceding lines. Why? This doesn't match what many books say.

If you look at the code, the key is the actual byte offset into the file, not the line number.
If you want to skip the first n lines of your file, you probably have to write your own InputFormat / RecordReader, or keep a line counter in the mapper logic, like this:
private int lines = 0;

public void map(LongWritable key, Text value, ...) {
    if (++lines <= 6) { return; }  // skip the first six lines this mapper sees
    // ...
}
This obviously doesn't work if the text file is split (i.e., you have more than one mapper). So writing a dedicated InputFormat is the cleanest way to solve this problem.
Another trick would be to measure how many bytes the first n lines take up in that specific file and then simply skip that many bytes at the start.
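For example, here is a minimal sketch of that second trick using the newer mapreduce API. HEADER_BYTES is a hypothetical constant you would measure for your specific file (the total size in bytes of the first six lines), and the output types here are just placeholders:
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SkipHeaderMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    // Hypothetical: the measured byte length of the first six lines of this file.
    private static final long HEADER_BYTES = 512L;

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        if (key.get() < HEADER_BYTES) {
            return;  // the byte-offset key is still inside the header lines
        }
        // do ... (process the line)
        context.write(value, NullWritable.get());
    }
}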

Related

How would I convert a txt file containing a lot of symbols into an array?

So I just have a quick question. The program is supposed to create a character array and fill it with the content of a text file containing a lot of random symbols like &, ?, !, letters, and numbers. I am not allowed to create separate arrays and then put them into the 2D array; I have to fill the 2D array directly. How would I go about doing so? I already know the number of rows and columns because the file tells me at the top, before all the symbols. Here's what I have so far:
char[][] charArray = new char[a][b];
for (int z = 0; z < charArray.length; z++)
{
    for (int y = 0; y < charArray[y].length; y++)
    {
        charArray[y] = fileReader.next();
    }
}
So a is the number of rows, and b is the number of columns to read. When I run the program, it says it expected a char[] and found a String; the error is an "incompatible types" error.
Also, PS: fileReader is my Scanner for reading from the file. Thanks!
First of all, you need to use more descriptive names for your variables. For example, why name the variable a when a really represents the number of rows in the file? Instead, use numRows (and likewise for b, use numCols). Also, you really should name your scanner scanner. There is a FileReader class and your fileReader variable name is misleading---it makes everyone think you're using a FileReader instead of a Scanner. Finally, the brackets used to declare an array type in Java are normally placed adjacent to the type name, as in char[][] instead of char [][]. This does not change the way the code executes, but it conforms better to common convention.
Now, to your problem. You stated that the number of rows/columns are declared at the beginning of the file. This solution assumes the file does in fact contain numRows rows and numCols columns. Basically, next returns a String. You can use String.toCharArray to convert the String to a char[]. Then you simply copy the characters to the appropriate position in your charArray.
Scanner scanner = new Scanner(theFile);
char[][] charArray = new char[numRows][numCols];
for (int i = 0; i < numRows; i++) {
    final char[] aLine = scanner.next().toCharArray();
    for (int j = 0; j < aLine.length; j++) {
        charArray[i][j] = aLine[j];
    }
}
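Since the row and column counts are written at the top of the file, you could read them with the same scanner before the loop. This is only a sketch and assumes the two counts are the first two tokens in the file, which the question doesn't spell out:
Scanner scanner = new Scanner(theFile);
int numRows = scanner.nextInt();   // assumed: first token is the row count
int numCols = scanner.nextInt();   // assumed: second token is the column count
char[][] charArray = new char[numRows][numCols];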

How to see if a string exists in a huge (>19GB) sorted file?

I have files that can be 19GB or greater, they will be huge but sorted. Can I use the fact that they are sorted to my advantage when searching to see if a certain string exists?
I looked at something called sgrep but I'm not sure if it's what I'm looking for. For example, I will have a 19 GB text file with millions of rows like
ABCDEFG,1234,Jan 21,stackoverflow
and I want to search just the first column of these millions of rows to see if ABCDEFG exists in this huge text file.
Is there a more efficient way than just grepping the file for the string and seeing if a result comes back? I don't even need the line; I just need a boolean, true/false, telling me whether it is in the file.
Actually, sgrep is what I was looking for. The reason I got confused is that "structured grep" has the same name as "sorted grep" and I was installing the wrong package. sgrep is amazing.
I don't know if there are any utilities that would help you out of the box, but it would be pretty straightforward to write an application specific to your problem. A binary search would work well, and should yield your result within 20-30 queries against the file.
Let's say your lines are never more than 100 characters, and the file is B bytes long.
Do something like this in your favorite language:
sub file_has_line(file, target) {
    a = 0
    z = file.length
    while (a < z) {
        m = (a + z) / 2
        chunk = file.read(m, 200)
        // That is, read 200 bytes, starting at offset m.
        line = chunk.split(/\n/)[1]
        // Split the chunk on newlines and keep the second piece,
        // i.e. the first complete line after offset m.
        // (In practice, compare just the first column of that line.)
        if (line == target)
            return true
        elsif (line < target)
            a = m + 1    // the target, if present, is further down the file
        else
            z = m - 1    // the target, if present, is earlier in the file
    }
    return false
}
If you're only doing a single lookup, this will dramatically speed up your program. Instead of reading ~20 GB, you'll be reading ~20 KB of data.
You could try to optimize this a bit by extrapolating that "Xerox" is going to be at 98% of the file and starting the midpoint there...but unless your need for optimization is quite extreme, you really won't see much difference. The binary search will get you that close within 4 or 5 passes, anyway.
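If it helps, here is roughly the same binary search written out in Java against a RandomAccessFile. It's only a sketch: it assumes plain ASCII data with \n line endings, comma-separated columns as in the example row, and a file sorted on the first column; the file name is a placeholder.
import java.io.IOException;
import java.io.RandomAccessFile;

public class SortedFileSearch {

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("huge_sorted.txt", "r")) {
            System.out.println(fileHasKey(file, "ABCDEFG"));
        }
    }

    static boolean fileHasKey(RandomAccessFile file, String target) throws IOException {
        // The search below always discards the (possibly partial) line it lands in,
        // so check the very first line of the file separately.
        file.seek(0);
        String first = file.readLine();
        if (first != null && keyOf(first).equals(target)) return true;

        long lo = 0;                 // invariant: the target line, if any, starts in (lo, hi]
        long hi = file.length();
        while (hi - lo > 1) {
            long mid = lo + (hi - lo) / 2;
            String line = firstLineAfter(file, mid);
            if (line == null) { hi = mid; continue; }   // mid landed inside the last line
            int cmp = keyOf(line).compareTo(target);
            if (cmp == 0) return true;
            if (cmp < 0) lo = mid;   // this line and everything before it is too small
            else hi = mid;           // this line is too big, so the target starts at or before mid
        }
        String line = firstLineAfter(file, lo);          // the one remaining candidate
        return line != null && keyOf(line).equals(target);
    }

    // The first complete line that starts after byte offset pos.
    static String firstLineAfter(RandomAccessFile file, long pos) throws IOException {
        file.seek(pos);
        file.readLine();             // skip the remainder of the line containing pos
        return file.readLine();
    }

    // The rows look like "ABCDEFG,1234,...", so the key is the first column.
    static String keyOf(String line) {
        return line.split(",", 2)[0];
    }
}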
If you're doing lots of lookups (I just saw your comment that you will be), I would look to pump all that data into a database where you can query at will.
So if you're doing 100,000 lookups, but this is a one-and-done process where having it in a database has no ongoing value, you could take another approach...
Sort your list of targets, to match the sort order of the log file. Then walk through each in parallel. You'll still end up reading the entire 20 GB file, but you'll only have to do it once and then you'll have all your answers. Something like this:
sub file_has_lines(file, target_array) {
    target_array = target_array.sort
    target = target_array.shift()    // take the targets in ascending order
    line = file.readln()
    hits = []
    while (not file.eof() and target != nil) {
        if (line < target)
            line = file.readln()             // the file is behind; advance the file
        elsif (line > target)
            target = target_array.shift()    // this target is not in the file; try the next one
        else                                 // line == target
            hits.push(line)
            line = file.readln()
    }
    return hits
}
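Here is a rough Java sketch of that parallel walk, under the same assumptions as above (file sorted by its first column, comma-separated, targets small enough to sort in memory); the path argument is a placeholder:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedMergeLookup {

    static List<String> fileHasLines(String path, List<String> targets) throws IOException {
        List<String> sorted = new ArrayList<>(targets);
        Collections.sort(sorted);                    // match the sort order of the file
        List<String> hits = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line = in.readLine();
            int t = 0;
            while (line != null && t < sorted.size()) {
                int cmp = line.split(",", 2)[0].compareTo(sorted.get(t));
                if (cmp < 0) {
                    line = in.readLine();            // the file is behind; advance the file
                } else if (cmp > 0) {
                    t++;                             // this target is not in the file; try the next one
                } else {
                    hits.add(line);                  // match: record it and advance the file
                    line = in.readLine();
                }
            }
        }
        return hits;
    }
}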

Ensemble SVM in map reduce

I am building an SVM model for each dataset in MapReduce (I am using the LibSVM library for that). I also have the testing result of each model.
Each testing result file contains the following (it gives the prediction for each test instance):
+1
-1
+1
+1
..
..
+1
I have 5 such testing files. Now I want to combine the testing results using majority voting in MapReduce.
In the map phase, I want to use the line number as the key. How can I get the line number in the map phase for all the testing files?
I don't know if you need MapReduce for this task, but if you do need to do it in MapReduce, I would just use a map-only job, and even that without an output file, using just two counters (I didn't find a decrCounter method, and incrCounter cannot take negative values). Here is simple pseudocode for that:
enum MyCounter { POSITIVES, NEGATIVES }

public void map(LongWritable key, Text value, Reporter reporter) {
    if (value.toString().equals("+1")) {
        reporter.incrCounter(MyCounter.POSITIVES, 1);
    } else {
        reporter.incrCounter(MyCounter.NEGATIVES, 1);
    }
}
Then, if POSITIVES > NEGATIVES, +1 wins!
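For completeness, a rough sketch of how the driver could read those two counters back after the job finishes. It uses the older org.apache.hadoop.mapred API to match the Reporter-style mapper above; MyJob is a placeholder class name, and the input/mapper wiring is omitted:
JobConf conf = new JobConf(MyJob.class);
conf.setNumReduceTasks(0);                        // map-only job
conf.setOutputFormat(NullOutputFormat.class);     // no output file needed
RunningJob job = JobClient.runJob(conf);
long positives = job.getCounters().getCounter(MyCounter.POSITIVES);
long negatives = job.getCounters().getCounter(MyCounter.NEGATIVES);
System.out.println(positives > negatives ? "+1 wins" : "-1 wins");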
If you don't need MapReduce, you can just count the lines of all the files, e.g. using the wc -l command on Linux, and then count the lines that have +1, e.g. using grep -c.

Need help in writing Map/Reduce job to find average

I'm fairly new to Hadoop Map/Reduce. I'm trying to write a Map/Reduce job to find average time taken by n processes, given an input text file as below:
ProcessName Time
process1 10
process2 20
processn 30
I went through few tutorials but I'm still not able to get a thorough understanding. What should my mapper and reducer classes do for this problem? Will my output always be a text file or is it possible to directly store the average in some sort of a variable?
Thanks.
Your Mappers read the text file and apply the following map function on every line
map: (key, value)
    time = value[2]    // the Time column of the line
    emit("1", time)
All map calls emit the key "1" which will be processed by one single reduce function
reduce: (key, values)
    result = sum(values) / n
    emit("1", result)
Since you're using Hadoop, you have probably seen StringTokenizer used in the map function; you can use it to extract only the time from each line. You also need some way to compute n (the number of processes); for example, you could use a Counter in another job that just counts lines.
Update
If you were to execute this job, for each line a tuple would have to be sent to the reducer, potentially clogging the network if you run a Hadoop cluster on multiple machines.
A more clever approach can compute the sum of the times closer to the inputs, e.g. by specifying a combiner:
combine: (key, values)
    emit(key, sum(values))
This combiner is then executed on the results of all map functions of the same machine, i.e., without networking in between.
The reducer would then only get as many tuples as there are machines in the cluster, rather than as many as lines in your log files.
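In Java (newer API), such a combiner could look roughly like the sketch below. The key/value types (a Text key "1", IntWritable times) are assumptions made to match the pseudocode, and note this only helps if n is obtained separately (e.g. via the Counter mentioned above), because pre-summing hides the per-line counts:
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable sum = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int total = 0;
        for (IntWritable value : values) {
            total += value.get();    // pre-sum the times emitted on this machine
        }
        sum.set(total);
        context.write(key, sum);
    }
}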
Your mapper maps your inputs to the value that you want to take the average of. So let's say that your input is a text file formatted like
ProcessName Time
process1 10
process2 20
.
.
.
Then you would need to take each line in your file, split it, grab the second column, and output the value of that column as an IntWritable (or some other Writable numeric type). Since you want to take the average of all times, not grouped by process name or anything, you will have a single fixed key. Thus, your mapper would look something like
private final IntWritable one = new IntWritable(1);
private final IntWritable output = new IntWritable();

protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    String[] fields = value.toString().split("\t");
    output.set(Integer.parseInt(fields[1]));
    context.write(one, output);
}
Your reducer takes these values, and simply computes the average. This would look something like
private final DoubleWritable average = new DoubleWritable();

protected void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    int count = 0;
    for (IntWritable value : values) {
        sum += value.get();
        count++;
    }
    average.set(sum / (double) count);
    context.write(key, average);
}
I'm making a lot of assumptions here, about your input format and what not, but they are reasonable assumptions and you should be able to adapt this to suit your exact needs.
Will my output always be a text file or is it possible to directly store the average in some sort of a variable?
You have a couple of options here. You can post-process the output of the job (written to a single file), or, since you're computing a single value, you can store the result in a counter, for example.
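If it helps, here is a rough sketch of a driver that wires the mapper and reducer above together; AverageDriver, AverageMapper and AverageReducer are placeholder class names:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AverageDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "average time");
        job.setJarByClass(AverageDriver.class);
        job.setMapperClass(AverageMapper.class);
        job.setReducerClass(AverageReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(DoubleWritable.class);
        job.setNumReduceTasks(1);            // a single reducer sees every value
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}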

Processing a bunch of strings efficiently

I need to read data from a file in chunks of 128 MB and then do some processing on each line. The naive way is to use split to turn the chunk into a collection of lines and then process each line, but that may not be efficient, since it creates a collection that only holds a temporary result, which could be costly. Is there a way to do this with better performance?
The file is huge, so I kicked off several threads; each thread picks up a 128 MB chunk. In the following snippet, rawString is one 128 MB chunk.
randomAccessFile.seek(start)
randomAccessFile.read(byteBuffer)
val rawString = new String(byteBuffer)
val lines = rawString.split("\n")
for (line <- lines) {
  ...
}
It'd be better to read text line by line:
import scala.io.Source
for(line <- Source.fromFile("file.txt").getLines()) {
...
}
I'm not sure what you're going to do with the trailing bits of lines at the beginning and end of the chunk; I'll leave that to you to figure out. This solution captures everything delimited on both sides by \n.
Anyway, assuming that byteBuffer is actually an array of bytes and not a java.nio.ByteBuffer, and that you're okay with just handling Unix line encodings, you would want to
def lines(bs: Array[Byte]): Array[String] = {
  // First pass: record the index of every '\n' in the chunk.
  val xs = Array.newBuilder[Int]
  var i = 0
  while (i < bs.length) {
    if (bs(i) == '\n') xs += i
    i += 1
  }
  val ix = xs.result()
  // Second pass: build a String from the bytes between each pair of consecutive newlines.
  val ss = new Array[String](0 max (ix.length - 1))
  i = 1
  while (i < ix.length) {
    ss(i - 1) = new String(bs, ix(i - 1) + 1, ix(i) - ix(i - 1) - 1)
    i += 1
  }
  ss
}
Of course this is rather long and messy code, but if you're really worried about performance this sort of thing (heavy use of low-level operations on primitives) is the way to go. (This also takes only ~3x the memory of the chunk on disk instead of ~5x (for mostly/entirely ASCII data) since you don't need the full string representation around.)
