How to write a MapReduce program for counting lines in a text file? - hadoop

I have a .dat file with n lines, each line containing multiple fields separated by '|'. I would like to write a MapReduce program to count the number of lines for a particular field (the same thing I can do in Hive using count(column_name)). I am very new to MapReduce programming. Any help would be appreciated.

You should first work through the "word count" example; once you understand it, you will know how to deal with your problem.
Here is the example: http://kickstarthadoop.blogspot.com/2011/04/word-count-hadoop-map-reduce-example.html
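Building on that word-count pattern, here is a minimal sketch (not taken from the linked post) of a job that counts the lines in which one '|'-separated field is non-empty, which mirrors Hive's count(column_name). The class names, the FIELD_INDEX constant, and the input/output paths taken from args are illustrative assumptions.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FieldLineCount {

    public static class FieldMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final int FIELD_INDEX = 2;                 // hypothetical: which '|'-separated column to count
        private static final LongWritable ONE = new LongWritable(1);
        private final Text countKey = new Text("lines_with_field");

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split on '|'; the -1 limit keeps trailing empty fields so the index check is honest.
            String[] fields = value.toString().split("\\|", -1);
            if (fields.length > FIELD_INDEX && !fields[FIELD_INDEX].isEmpty()) {
                context.write(countKey, ONE);                     // like Hive's count(column): missing/empty values are skipped
            }
        }
    }

    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
            }
            context.write(key, new LongWritable(sum));            // single output line: the total count
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "field line count");
        job.setJarByClass(FieldLineCount.class);
        job.setMapperClass(FieldMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // input .dat file or directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output directory (must not already exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If you want a per-value breakdown instead of a single total, emit fields[FIELD_INDEX] as the key instead of the constant countKey, exactly as word count emits each word.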

Related

Line count in csv doesn't match

I have a large CSV with a large number of columns. I am trying to count the number of lines using
File.open(file).readlines.to_a.compact.count.to_i
It displays 57 although there are only 56 rows. Upon close examination I found that a part of one line is wrapped to form the next line. How to get the correct count?
You need to show an example of the incoming data if you want us to help beyond generic answers.
To fix the problem, you have to be able to identify the line. We can't help you there because it could look like anything. Making a wild guess, I'd say that one of the columns had an embedded new-line in it, which forces the line to wrap.
If the file is a true CSV file, that column should be wrapped in double-quotes, so you could search the file for lines that do NOT end with whatever data type should be in the last column, then read the next line, join them, then rewrite the file. But, again, we have nothing to work with, because your file's format could be a huge number of different things.
Your best bet is to use the CSV class that comes with Ruby, and let it read the file, instead of trying to treat it like a text file. CSV files are text, but they are formatted to maintain the columns and rows, so using the CSV class will give you a better chance of getting at the data.
Looking at your code:
There are a number of ways to count the number of lines in a file, including the easiest which is:
`wc -l /path/to/file`.to_i
if you're using *nix.
Using File.open(file).readlines.to_a is horribly redundant and not fast or scalable if your file is big.
readlines returns an array.
to_a returns an array.
Why turn the array into an array?
readlines loads an entire file into memory, then splits it on line ends into an array. That process can be a lot slower than simply reading the file line-by-line and incrementing a counter, plus "slurping" can make your program crawl if the file is larger than available memory.
See "Why is "slurping" a file not a good practice?" for more information.
compact removes nils from an array. readlines should never return any nils so compact will iterate over the array looking for something that shouldn't exist.
count returns an integer.
to_i converts the receiver to an integer.
In other words, to_i is turning an integer into an integer. Why?
If you want to do it in Ruby instead of using wc -l, do something simple and fast:
lines_in_file = 0
File.foreach(some_file) { lines_in_file += 1 }
After running that, lines_in_file will contain the number of lines read. Memory won't be impacted and it'll run like blue blazes on huge files.

Process variable numbers of lines in a Record using mapreduce

I have a file to process that contains records made up of a variable number of lines.
For example I have the following file:-
100,abc,123
101,abc,123
120,abc,123
100,abc,123
111,abc,123
123,abc,123
120,abc,123
100,abc,123
111,abc,123
120,abc,123
100,abc,123
114,abc,123
120,abc,123
Each consecutive group above, from a 100 line through the following 120 line, is one record.
So each record starts with 100 and ends with 120, but contains a variable number of lines (3 or 4 here). Now I know this could be solved with a custom InputFormat and a custom RecordReader, where I can reuse LineRecordReader to handle the variable number of lines. The problem with that approach is that a record (from its 100 line to its 120 line) may itself be too large to hand to a single map call as one record, and in such cases it will fail. So I need a better solution that uses the default InputFormat and RecordReader and does the work in the mapper or reducer. More than one job is also welcome if that solves the problem.

VBA read large text file line by line in reverse order

VBA question
There is a large log file (around 500,000 lines), I need to read it line by line in reverse order, i.e. from the last line to the first line.
I know I can use FileSystemObject from the Microsoft Scripting Runtime reference, but the ReadLine method of TextStream has no reverse option.
The only way I can think of is to keep a counter and skip all the previous lines for each line I read, but that is clearly not good enough. Any suggested code or algorithm would be much appreciated.
If your log is a kind of database with a field that determines the order (a date field or a line-number field), you could try an ADO solution with an SQL query that reads the log in reverse order (ORDER BY ... DESC). That way you can read from last to first. Or, more generally, try ADO.
A file is not line-based, or even character-based; it's just bytes, so there is no way to read lines in reverse order directly. How the text is separated into lines is only determined by where the line break characters sit in the text.
You can read lines from the beginning and store them in a rotating buffer, so that you have for example the last 1000 lines in the buffer when you reach the end of the file. That way you have a certain number of lines that you can access from your buffer without having to read the entire file for every single line.
After that you know how many lines there are in the file, so when you need to refill the buffer you can just skip a certain number of lines and read the following lines into the buffer.
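Here is a minimal sketch of that rotating-buffer idea. It is written in Java purely to illustrate the algorithm (the thread is about VBA, so treat it as pseudocode for the approach); the file path and buffer size are placeholder assumptions.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

public class TailBuffer {
    // Read the file once, keeping only the last `bufferSize` lines in a rotating buffer,
    // so they can then be walked from last to first without holding the whole file in memory.
    public static Deque<String> lastLines(String path, int bufferSize) throws IOException {
        Deque<String> buffer = new ArrayDeque<>(bufferSize);
        try (BufferedReader reader = Files.newBufferedReader(Paths.get(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (buffer.size() == bufferSize) {
                    buffer.removeFirst();          // drop the oldest buffered line
                }
                buffer.addLast(line);
            }
        }
        return buffer;
    }

    public static void main(String[] args) throws IOException {
        // "app.log" and 1000 are placeholders; when the buffer is exhausted, re-read the file,
        // skip the lines you have already handled, and refill the buffer with the block before them.
        Iterator<String> newestFirst = lastLines("app.log", 1000).descendingIterator();
        newestFirst.forEachRemaining(System.out::println);
    }
}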

Finding date in file, getting data after it

Help me brainstorm how I would solve this problem.
I have a file of dates with corresponding data, the format looks like this:
Date,data,data,data,data,data
Date,data,data,data,data,data
It's a plain csv file, only commas being used.
I need to be able to select a beginning date. And then get the data for the next 20 days beginning with the date selected.
Date format:
2007.05.21 (y,m,d)
So I think it would be best to search for the date, either loading the entire file into memory first or reading it line by line. The file is only 1 megabyte, but I might want to do this with a 100 megabyte file as well. Is that still small enough?
Also I will want to do this very many times. I think I may want to keep the file in memory for the entire run of the program. So I can repeatedly access it.
After finding the date, I need to be able to get column 2 of day 1, column 4 of day 4, etc. There is always the same number of columns for each day, so if this is loaded into some kind of array I can always know at what index the next day (and the one after that) starts.
Any help would be greatly appreciated. Also any code examples provided would really help. This is not a homework problem or anything like that and I'm really new to programming.
You can use the CSV library to parse your file line by line, like this:
require 'csv'
require 'date'

date_to_search = Date.new(2009, 10, 10)
CSV.foreach('yourfilename.txt', :col_sep => ',') do |row|
  # row is an array of strings which you can parse
  cur_date = Date.parse(row[0])
  if cur_date == date_to_search
    # you are set to read the next 19 lines
    # keep a counter and increment it after parsing each line (row here)
  end
  # compare and check whether you need this line (and the next 19)
  # other calculations
end
As your data is sorted, Binary Search is what you want to use.
Simply put, you look up an element near the middle of your CSV, compare its date to the one you're looking for, and continue recursively in the matching half of the file (See the Wikipedia link for details).
Binary search has a runtime complexity of O(log n), which means that the number of read operations on a file containing 1,000,000 lines (Reasonable estimation for 100 MB) will never (under normal circumstances, that is, lines of different length are equally distributed) exceed 20.
Therefore, there is no need to keep the file in memory, quite the contrary. The operating system's disk cache will do the task of accelerating consecutive operations for you without running into memory shortage.
To read and process a line, you first need to find its first character, which is either the first character after a newline (\n) or the beginning of the file. Reading multiple lines can be achieved similarly.
To parse a line, I suggest you split the line at the separation characters and/or the date's dots. This is, of course, only appropriate if the CSV comes from a trustworthy source and never changes its layout.
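To make the offset arithmetic concrete, here is a sketch of that binary search, written in Java rather than the Ruby used above purely for illustration. The file name, the yyyy.MM.dd date format from the question, and the assumption of Unix line endings and a trustworthy layout are all illustrative.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class CsvDateSearch {
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    // Binary search over byte offsets in a date-sorted CSV: jump to the middle, align to the
    // start of the next line, compare that line's date with the target, and halve the window.
    // Returns the byte offset of the first line whose date is >= target, or -1 if there is none.
    public static long findFirstOnOrAfter(RandomAccessFile file, LocalDate target) throws IOException {
        long lo = 0, hi = file.length(), answer = -1;
        while (lo < hi) {
            long mid = (lo + hi) / 2;
            long lineStart = startOfNextLine(file, mid);
            if (lineStart >= hi) {                      // no line starts between mid and hi: search the left half
                hi = mid;
                continue;
            }
            file.seek(lineStart);
            String line = file.readLine();
            LocalDate date = LocalDate.parse(line.split(",")[0], FMT);
            if (date.isBefore(target)) {
                lo = file.getFilePointer();             // everything up to the end of this line is too early
            } else {
                answer = lineStart;                     // candidate; keep looking for an earlier match
                hi = mid;
            }
        }
        return answer;
    }

    // First character after the next '\n' at or beyond pos (position 0 is already a line start).
    private static long startOfNextLine(RandomAccessFile file, long pos) throws IOException {
        if (pos == 0) return 0;
        file.seek(pos - 1);
        int c;
        while ((c = file.read()) != -1 && c != '\n') {
            // skip the remainder of the line that contains pos
        }
        return file.getFilePointer();
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("prices.csv", "r")) {   // placeholder file name
            long offset = findFirstOnOrAfter(file, LocalDate.of(2007, 5, 21));
            if (offset >= 0) {
                file.seek(offset);
                for (int day = 0; day < 20; day++) {     // the 20 days asked for in the question
                    String row = file.readLine();
                    if (row == null) break;
                    System.out.println(row);
                }
            }
        }
    }
}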

Hadoop custom split of TextFile

I have a fairly large text file that I would like to convert into a SequenceFile. Unfortunately, the file consists of Python code with logical lines running over several physical lines. For example,
print "Blah Blah\
... blah blah"
Each logical line is terminated by a NEWLINE. Could someone clarify how I could possibly generate Key, Value pairs in Map-Reduce where each Value is the entire logical line?
I can't find where this was asked before, but you just have to iterate over the lines in a simple MapReduce job and append them to a StringBuilder. Flush the StringBuilder to the context whenever you want to begin a new record. The trick is to set up the StringBuilder as a field of your mapper class, not as a local variable.
Here it is:
Processing paragraphs in text files as single records with Hadoop
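A minimal sketch of what that answer describes, assuming the continuation rule from the question (a physical line that is part of a longer logical line ends with a backslash). The class name and the choice of key are illustrative, and it only behaves correctly when a logical line does not straddle an input split.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LogicalLineMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    private final StringBuilder buffer = new StringBuilder();   // field of the mapper, not a local variable
    private long recordStart = -1;                              // byte offset where the current logical line began

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        if (buffer.length() == 0) {
            recordStart = key.get();
        }
        if (line.endsWith("\\")) {
            buffer.append(line, 0, line.length() - 1);          // continuation: keep accumulating
        } else {
            buffer.append(line);
            context.write(new LongWritable(recordStart), new Text(buffer.toString()));
            buffer.setLength(0);                                // flush: the next physical line starts a new record
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        if (buffer.length() > 0) {                              // emit a trailing, unterminated logical line
            context.write(new LongWritable(recordStart), new Text(buffer.toString()));
        }
    }
}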
You should create your own variation on TextInputFormat. In there you make a new RecordReader that skips lines until it sees the start of a logical line.
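Here is a sketch of that variation, again assuming that a physical line continues onto the next one when it ends with a backslash. The class names are made up, and splitting is disabled so a logical line never crosses a split boundary.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class LogicalLineInputFormat extends TextInputFormat {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
        return new LogicalLineRecordReader();
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;   // keep each file in one split so a logical line is never cut in half
    }

    public static class LogicalLineRecordReader extends RecordReader<LongWritable, Text> {
        private final LineRecordReader lines = new LineRecordReader();
        private LongWritable key;
        private final Text value = new Text();

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException, InterruptedException {
            lines.initialize(split, context);
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            StringBuilder logical = new StringBuilder();
            key = null;
            while (lines.nextKeyValue()) {
                if (key == null) {
                    key = new LongWritable(lines.getCurrentKey().get());   // offset of the first physical line
                }
                String physical = lines.getCurrentValue().toString();
                if (physical.endsWith("\\")) {
                    logical.append(physical, 0, physical.length() - 1);    // drop the backslash, keep reading
                } else {
                    logical.append(physical);
                    value.set(logical.toString());
                    return true;
                }
            }
            if (key != null) {                                             // file ended inside a continuation
                value.set(logical.toString());
                return true;
            }
            return false;
        }

        @Override public LongWritable getCurrentKey() { return key; }
        @Override public Text getCurrentValue() { return value; }
        @Override public float getProgress() throws IOException { return lines.getProgress(); }
        @Override public void close() throws IOException { lines.close(); }
    }
}

A driver would then set job.setInputFormatClass(LogicalLineInputFormat.class), and each map() call would receive one whole logical line as its value.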
Preprocess the input file to remove the newlines. What is your goal in creating the SequenceFile?
