Running input files in batches for a MapReduce job
My input directory contains around 1000 files, and each file contains a few GB of data.
For example, /MyFolder/MyResults/in_data/20140710/ contains 1000 files.
When I give the input path as /MyFolder/MyResults/in_data/20140710, it takes all 1000 files to process.
I would like to run the job taking only 200 files at a time. How can we do this?
Here is the command I execute:
hadoop jar wholefile.jar com.form1.WholeFileInputDriver -libjars myref.jar -D mapred.reduce.tasks=15 /MyFolder/MyResults/in_data/20140710/ <<Output>>
Can anyone help me with how to run the job on the input files in batches?
Thanks in advance
-Vim
A simple way would be to modify your driver to take only 200 files as input out of all the files in that directory. Something like this:
FileSystem fs = FileSystem.get(new Configuration());
// List every file in the input directory, then add only the first 200 as input paths.
FileStatus[] files = fs.globStatus(new Path("/MyFolder/MyResults/in_data/20140710/*"));
for (int i = 0; i < 200; i++) {
    FileInputFormat.addInputPath(job, files[i].getPath());
}
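If you want the driver to walk through all 1000 files in chunks of 200 on its own, a rough sketch follows (my own example, not tested against your job: the class name, output-directory layout, and the job-setup comment are placeholders for whatever WholeFileInputDriver already configures). It submits one job per batch of 200 input files:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BatchedDriver {
    private static final int BATCH_SIZE = 200;

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileStatus[] files = fs.globStatus(
                new Path("/MyFolder/MyResults/in_data/20140710/*"));

        for (int start = 0; start < files.length; start += BATCH_SIZE) {
            Job job = Job.getInstance(conf, "batch-starting-at-" + start);
            job.setJarByClass(BatchedDriver.class);
            // ... set your mapper, reducer and input/output formats here,
            //     exactly as your existing driver does ...

            int end = Math.min(start + BATCH_SIZE, files.length);
            for (int i = start; i < end; i++) {
                FileInputFormat.addInputPath(job, files[i].getPath());
            }
            // One output directory per batch so the runs do not collide,
            // e.g. <Output>/batch_0, <Output>/batch_200, ...
            FileOutputFormat.setOutputPath(job, new Path(args[0], "batch_" + start));
            job.waitForCompletion(true); // run the batches one after another
        }
    }
}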
In an HDFS cluster, I receive multiple files on a daily basis, which can be of 3 types:
1) product_info_timestamp
2) user_info_timestamp
3) user_activity_timestamp
The number of files received can be anything, but they will belong to one of these 3 categories only.
I want to merge all the files belonging to one category (after checking whether they are less than 100 MB) into a single file.
For example, 3 files named product_info_* should be merged into one file named product_info.
How do I achieve this?
You can use getmerge to achieve this, but the result will be stored on your local node (edge node), so you need to be sure you have enough space there.
hadoop fs -getmerge /hdfs_path/product_info_* /local_path/product_inf
You can move it back to HDFS with put:
hadoop fs -put /local_path/product_inf /hdfs_path
You can use a Hadoop archive (.har file) or a sequence file. Both are very simple to use - just google "hadoop archive" or "sequence file".
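For the sequence file option, here is a rough sketch of my own (not from the original answer; it assumes a Hadoop 2.x client, and the paths are placeholders) that packs every small product_info_* file into one SequenceFile, keyed by the original file name:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class PackToSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(new Path("/hdfs_path/product_info.seq")),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (FileStatus st : fs.globStatus(new Path("/hdfs_path/product_info_*"))) {
                // Each source file is under 100 MB per the question, so buffering
                // it in memory before appending is acceptable here.
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                try (InputStream in = fs.open(st.getPath())) {
                    IOUtils.copyBytes(in, buf, conf, false);
                }
                // key = original file name, value = raw file contents
                writer.append(new Text(st.getPath().getName()),
                              new BytesWritable(buf.toByteArray()));
            }
        }
    }
}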
Another set of commands along similar lines to those suggested by @SCouto:
hdfs dfs -cat /hdfs_path/product_info_* > /local_path/product_info_combined.txt
hdfs dfs -put /local_path/product_info_combined.txt /hdfs_path/
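Both approaches above route the data through the local (edge) node's disk. If that is a concern, a rough alternative of my own (not from the thread; it assumes a Hadoop 2.x client and the same product_info_* naming) is to stream the matching files straight into a new HDFS file with the FileSystem API:

import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsMerge {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Create the merged target once, then append each matching source file to it.
        try (FSDataOutputStream out = fs.create(new Path("/hdfs_path/product_info"))) {
            for (FileStatus st : fs.globStatus(new Path("/hdfs_path/product_info_*"))) {
                try (InputStream in = fs.open(st.getPath())) {
                    IOUtils.copyBytes(in, out, conf, false); // false: keep 'out' open
                }
            }
        }
    }
}

The bytes still travel through the machine running this code, but they are never written to its local disk.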
We have files with raw logs in HDFS; each individual log entry is one line, since these logs are line-separated.
Our requirement is to add a text (' 12345', for example) at the end of every log entry in these files, using Pig, a Hadoop command, or any other MapReduce-based tool.
Please advise.
Thanks
AJ
Load the files so that each log entry goes into one field, i.e. line:chararray, and use CONCAT to add the text to each line. Store the result into a new log file. If you want individual output files, you will have to parameterize the script to load each file and store it into a new file instead of using a wildcard load.
Log = LOAD '/path/wildcard/*.log' USING TextLoader() AS (line:chararray);
Log_Text = FOREACH Log GENERATE CONCAT(line, 'Your Text') AS newline;
STORE Log_Text INTO '/path/NewLog.log';
If your files aren't extremely large, you can do that with a single shell command.
hdfs dfs -cat /user/hdfs/logfile.log | sed 's/$/12345/g' |\
hdfs dfs -put - /user/hdfs/newlogfile.txt
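For the "any other MapReduce-based tool" part of the question, here is a minimal map-only MapReduce sketch (my own example, not from the answers above; the class name is a placeholder and ' 12345' is the suffix from the question):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AppendSuffixJob {

    public static class AppendMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {
        private final Text out = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            out.set(value.toString() + " 12345"); // append the suffix to every line
            context.write(NullWritable.get(), out);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "append-suffix");
        job.setJarByClass(AppendSuffixJob.class);
        job.setMapperClass(AppendMapper.class);
        job.setNumReduceTasks(0); // map-only: mapper output goes straight to HDFS
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With zero reducers each mapper writes its own part file, so you get one output file per input split.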
I have multiple text files.
Their total size exceeds the largest disk size available to me (~1.5 TB).
A Spark program reads a single input text file from HDFS, so I need to combine those files into one. (I cannot rewrite the program code; I am given only the *.jar file for execution.)
Does HDFS have such a capability? How can I achieve this?
What I understood from your question is that you want to concatenate multiple files into one. Here is a solution which might not be the most efficient way of doing it, but it works. Suppose you have two files, file1 and file2, and you want to get a combined file called ConcatenatedFile. Here is the script for that:
hadoop fs -cat /hadoop/path/to/file/file1.txt /hadoop/path/to/file/file2.txt | hadoop fs -put - /hadoop/path/to/file/Concatenate_file_Folder/ConcatenateFile.txt
Hope this helps.
HDFS by itself does not provide such a capability. All out-of-the-box features (like hdfs dfs -text * with pipes, or FileUtil's copy methods) transfer all the data through your client machine.
In my experience, we have always used our own MapReduce jobs to merge many small files in HDFS in a distributed way.
So you have two solutions:
1) Write your own simple MapReduce/Spark job to combine text files with your format (a sketch for this option follows this list).
2) Find an already implemented solution for this kind of purpose.
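For option 1), a minimal Spark driver in Java (my own sketch, with placeholder paths; it assumes plain text input and that a single output part file is acceptable):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class CombineTextFiles {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("combine-text-files");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            sc.textFile(args[0])        // e.g. hdfs:///input/dir/*
              .coalesce(1)              // one partition -> one output part file
              .saveAsTextFile(args[1]); // output directory on HDFS
        }
    }
}

Note that coalesce(1) means the final write goes through a single task, so it can be slow for inputs in the terabyte range.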
About solution #2: there is a simple project, FileCrush, for combining text or sequence files in HDFS. It might be suitable for you; check it out.
Example of usage:
hadoop jar filecrush-2.0-SNAPSHOT.jar crush.Crush -Ddfs.block.size=134217728 \
--input-format=text \
--output-format=text \
--compress=none \
/input/dir /output/dir 20161228161647
I had problems running it without these options (especially -Ddfs.block.size and the output file date prefix 20161228161647), so make sure you run it properly.
You can do a Pig job:
A = LOAD '/path/to/inputFiles' as (SCHEMA);
STORE A into '/path/to/outputFile';
Doing an hdfs cat and then putting the result back into HDFS means that all this data is processed on the client node, which will degrade your network.
I want to run Hadoop MapReduce on a small part of my text file.
One of my tasks is failing. I can read in the log:
Processing split: hdfs://localhost:8020/user/martin/history/history.xml:3556769792+67108864
Can I execute MapReduce once again on this file, from offset 3556769792 to 3623878656 (3556769792 + 67108864)?
One way to do this is to copy the part of the file starting at the given offset and add it back into HDFS. From that point, simply run the MapReduce job only on this block.
1) Copy 67108864 bytes of the file starting from offset 3556769792:
dd if=history.xml bs=1 skip=3556769792 count=67108864 > history_offset.xml
2) Import it into HDFS:
hadoop fs -copyFromLocal history_offset.xml offset/history_offset.xml
3) Run MapReduce again:
hadoop jar myJar.jar 'offset' 'offset_output'
Does anyone know of a tool that can "crunch" the output files of Apache Hadoop into fewer files or a single file? Currently I am downloading all the files to a local machine and then concatenating them into one file. So does anyone know of an API or a tool that does the same?
Thanks in advance.
Limiting the number of output files means limiting the number of reducers. You could do that with the help of the mapred.reduce.tasks property from the Hive shell. Example:
hive> set mapred.reduce.tasks = 5;
But it might affect the performance of your query. Alternatively, you could use the getmerge command from the HDFS shell once you are done with your query. This command takes a source directory and a destination file as input and concatenates the files in src into the destination local file.
Usage :
bin/hadoop fs -getmerge <src> <localdst>
HTH
See https://community.cloudera.com/t5/Support-Questions/Hive-Multiple-Small-Files/td-p/204038
set hive.merge.mapfiles=true; -- Merge small files at the end of a map-only job.
set hive.merge.mapredfiles=true; -- Merge small files at the end of a map-reduce job.
set hive.merge.size.per.task=???; -- Size (bytes) of merged files at the end of the job.
set hive.merge.smallfiles.avgsize=???; -- File size (bytes) threshold
-- When the average output file size of a job is less than this number,
-- Hive will start an additional map-reduce job to merge the output files
-- into bigger files. This is only done for map-only jobs if hive.merge.mapfiles
-- is true, and for map-reduce jobs if hive.merge.mapredfiles is true.