How to test Hadoop MapReduce with HDFS?

I am using MRUnit to write unit tests for my MapReduce jobs.
However, I am having trouble including HDFS in that mix. My MR job needs a file from HDFS. How do I mock out the HDFS part in an MRUnit test case?
Edit:
I know that I can specify inputs/expected outputs for my MR code in the test infrastructure. However, that is not what I want. My MR job needs to read another file containing domain data to do its work. This file is in HDFS. How do I mock out this file?
I tried using Mockito, but it didn't work. The reason is that FileSystem.open() returns an FSDataInputStream, which implements several interfaces beyond java.io.InputStream. It was too painful to mock them all out, so I hacked around it in my code by doing the following:
if (System.getProperty("junit_running") != null) {
    // Under JUnit, read the domain data from the test classpath instead of HDFS.
    inputStream = this.getClass().getClassLoader().getResourceAsStream("domain_data.txt");
    br = new BufferedReader(new InputStreamReader(inputStream));
} else {
    Path pathToRegionData = new Path("/domain_data.txt");
    LOG.info("checking for existence of region assignment file at path: " + pathToRegionData.toString());
    if (!fileSystem.exists(pathToRegionData)) {
        LOG.error("domain file does not exist at path: " + pathToRegionData.toString());
        throw new IllegalArgumentException("region assignments file does not exist at path: " + pathToRegionData.toString());
    }
    inputStream = fileSystem.open(pathToRegionData);
    br = new BufferedReader(new InputStreamReader(inputStream));
}
This solution is not ideal because I had to put test-specific code into my production code. I am still waiting to see if there is a more elegant solution out there.
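One alternative to the junit_running flag (a sketch, not from the original post) is to make the domain-data location a configuration property and resolve it with Path.getFileSystem(). A test can then point the property at a local file:// path, so nothing needs to be mocked. The property name domain.data.path below is hypothetical, and the snippet assumes it runs somewhere with access to the mapper's context:
// Sketch: resolve the domain-data file through a configurable path.
// "domain.data.path" is a hypothetical property name used only for illustration.
Configuration conf = context.getConfiguration();
Path domainDataPath = new Path(conf.get("domain.data.path", "/domain_data.txt"));

// getFileSystem() returns HDFS for hdfs:// paths and the local filesystem for
// file:// paths, so a unit test can simply set the property to a local file.
FileSystem fs = domainDataPath.getFileSystem(conf);
inputStream = fs.open(domainDataPath);
br = new BufferedReader(new InputStreamReader(inputStream));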

Please follow this short tutorial for MRUnit.
https://github.com/malli3131/HadoopTutorial/blob/master/MRUnit/Tutorial
In an MRUnit test case, you supply the data inside the testMapper() and testReducer() methods, so there is no need to read input from HDFS in an MRUnit test. Only real MapReduce jobs require input data from HDFS.
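For the mapper logic itself, a minimal MRUnit test looks roughly like this (MyMapper, the key/value types, and the expected output are placeholders for your own job):
// Minimal MRUnit sketch; MyMapper and the key/value types are placeholders.
MapDriver<LongWritable, Text, Text, IntWritable> mapDriver =
        MapDriver.newMapDriver(new MyMapper());

mapDriver.withInput(new LongWritable(0), new Text("some input line"))
         .withOutput(new Text("some"), new IntWritable(1))
         .runTest();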

Related

How to read multiple files, process and write separately using spring batch

I want to read multiple files, name*.txt, and process them.
For that I am using a MultiResourceItemReader.
It reads all the files and processes and writes them in one go. I want to read, process, and write each file separately.
The code:
@Bean
public MultiResourceItemReader<POJO> multiResourceItemReader() throws IOException {
    MultiResourceItemReader<POJO> resourceItemReader = new MultiResourceItemReader<POJO>();
    ClassLoader cl = this.getClass().getClassLoader();
    ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(cl);
    Resource[] resources = resolver.getResources("file:" + filePath);
    resourceItemReader.setResources(resources);
    resourceItemReader.setDelegate(reader());
    return resourceItemReader;
}
That's how the MultiResourceItemReader is designed to work. In your case, you can create a job instance per file.
There are many advantages to making one thing do one thing and do it well; one of them in your use case is restartability: if one of the jobs fails, you only restart the failed one.
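A sketch of that approach, launching one job execution per file and passing the file as an identifying job parameter (jobLauncher, perFileJob, resolver, and filePath are assumed to be wired elsewhere; the parameter name input.file is illustrative):
// Sketch: one Spring Batch job execution per file.
// jobLauncher and perFileJob are assumed to be injected beans.
public void launchPerFileJobs() throws Exception {
    Resource[] resources = resolver.getResources("file:" + filePath);
    for (Resource resource : resources) {
        JobParameters params = new JobParametersBuilder()
                .addString("input.file", resource.getFile().getAbsolutePath())
                .toJobParameters();
        jobLauncher.run(perFileJob, params);
    }
}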

Sequence file reading issue using spark Java

I am trying to read a sequence file generated by Hive using Spark. When I try to access the file, I get org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException.
I have tried the usual workarounds for this issue, such as making the class serializable, but I still face it. I am writing the code snippet here; please let me know what I am missing.
Is it the BytesWritable data type or something else that is causing the issue?
JavaPairRDD<BytesWritable, Text> fileRDD = javaCtx.sequenceFile("hdfs://path_to_the_file", BytesWritable.class, Text.class);
List<String> result = fileRDD.map(new Function<Tuple2<BytesWritable, Text>, String>() {
    public String call(Tuple2<BytesWritable, Text> row) {
        return row._2.toString() + "\n";
    }
}).collect();
Here is what was needed to make it work.
Because we use HBase to store our data and this reducer outputs its result to an HBase table, Hadoop is telling us that it doesn't know how to serialize our data. That is why we need to help it: inside setUp, set the io.serializations variable.
You can do the same in Spark accordingly:
conf.setStrings("io.serializations", new String[]{hbaseConf.get("io.serializations"), MutationSerialization.class.getName(), ResultSerialization.class.getName()});
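In Spark, one place to set this is the Hadoop configuration the context uses for sequenceFile(), before the file is read (a sketch; hbaseConf is assumed to be the HBase configuration object from the snippet above):
// Sketch: register the HBase serializations on the Hadoop configuration used by Spark.
// hbaseConf is assumed to be an existing HBase configuration object.
Configuration hadoopConf = javaCtx.hadoopConfiguration();
hadoopConf.setStrings("io.serializations",
        hbaseConf.get("io.serializations"),
        MutationSerialization.class.getName(),
        ResultSerialization.class.getName());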

HBase map/reduce dependency issue

Overview
I developed a REST API service based on the RESTEasy framework. In the service, I store data to an HBase database, then execute a map/reduce process triggered by some condition (e.g. the insertion of one record).
Requirements
In the Map class, I import some third-party libraries. I do not want to package those libraries into the war file.
TableMapReduceUtil.initTableMapperJob(
        HBaseInitializer.TABLE_DATA, // input HBase table name
        scan,                        // Scan instance to control CF and attribute selection
        LuceneMapper.class,          // mapper
        null,                        // mapper output key
        null,                        // mapper output value
        job);
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/qin/luceneFile"));
job.submit();
Problem
If I package all the libraries into the war file, which is deployed to the Jetty container, it works well. If I do not package the third-party libraries into the war, but instead upload them to HDFS and add them to the classpath, it does not work, like below:
conf.set("fs.defaultFS","hdfs://master:9000");
FileSystem hdfs = FileSystem.get(conf);
Path classpathFilesDir = new Path("bjlibs");
FileStatus[] jarFiles = hdfs.listStatus(classpathFilesDir);
for (FileStatus fs : jarFiles) {
Path disqualified = new Path(fs.getPath().toUri().getPath());
DistributedCache.addFileToClassPath(disqualified, conf);
}
hdfs.close();
Try TableMapReduceUtil.addHBaseDependencyJars().
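A minimal sketch of that call, applied to the same job before it is submitted (addDependencyJars() is the related helper that also ships the job's own classpath dependencies; whether you need both depends on your setup):
// Sketch: ship the HBase dependency jars with the job instead of bundling them in the war.
TableMapReduceUtil.addHBaseDependencyJars(job.getConfiguration());

// Optionally also ship the job's own dependencies from the classpath.
TableMapReduceUtil.addDependencyJars(job);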

Hive setup()-like functionality similar to Mapper setup()?

I want to replace a Hadoop job with Hive. My challenge is that in Hadoop I use setup() to build a kdtree by reading reference data (points of interest) from the distributed cache. I then use the kdtree in map() to evaluate the distance of the target data against it.
In Hive, I wanted to use a UDF with an evaluate() method to determine the distance, but I don't know how to set up the kdtree with the reference data. Is this possible?
I probably don't have the entire answer, so I'm just going to throw out some ideas that might be of help.
You can add files to the distributed cache in Hive using ADD FILE ...
Hive 11+ (I think) should let you access the distributed cache in GenericUDF.initialize:
https://issues.apache.org/jira/browse/HIVE-1016 which references...
https://issues.apache.org/jira/browse/HIVE-3628
So when you initialize the UDF, you might be able to build your kdtree by accessing the file you added in the distributed cache.
As climbage says, the ADD FILE command adds the file to the distributed cache.
You can access the distributed cache in your UDF simply by opening a file in the current working directory,
i.e. open(new File(System.getProperty("user.dir") + "/myfile"));
You can use a ConstantObjectInspector to access the filename in the initialize method of GenericUDF, where you can open the file and read it into your in-memory data structure.
The distributed_map UDF of Brickhouse does something similar ( https://github.com/klout/brickhouse/blob/master/src/main/java/brickhouse/udf/dcache/DistributedMapUDF.java )
Something like
public ObjectInspector initialize(ObjectInspector[] inspArr) {
    ConstantObjectInspector fileNameInsp = (ConstantObjectInspector) inspArr[0];
    String fileName = fileNameInsp.getWritableConstantValue().toString();
    FileInputStream inFile = new FileInputStream("./" + fileName);
    doStuff(inFile);
    .....
}

Files not stored in Distributed Cache

I am using the DistributedCache, but there are no files in the cache after the code has run.
I have referred to other similar questions, but the answers do not solve my issue.
Please find the code below:
Configuration conf = new Configuration();
Job job1 = new Job(conf, "distributed cache");
Configuration conf1 = job1.getConfiguration();
DistributedCache.addCacheFile(new Path("File").toUri(), conf1);
System.out.println("distributed cache file "+DistributedCache.getLocalCacheFiles(conf1));
This prints null.
The same call inside the mapper also returns null. Please let me know your suggestions.
Thanks
Try getCacheFiles() instead of getLocalCacheFiles().
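The difference, roughly: getCacheFiles() returns the URIs registered on the configuration, while getLocalCacheFiles() only returns paths once the files have been localized on a task node, i.e. inside a running task. A small sketch on the driver side, reusing conf1 from the question:
// Sketch: on the driver, the registered cache URIs are visible via getCacheFiles();
// getLocalCacheFiles() only yields localized paths inside a running task.
URI[] cached = DistributedCache.getCacheFiles(conf1);
System.out.println("registered cache files: " + Arrays.toString(cached));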
I believe this is (at least partly) due to what Chris White wrote here:
After you create your Job object, you need to pull back the Configuration object, as Job makes a copy of it, and configuring values in conf2 after you create the job will have no effect on the job itself. Try this:
job = new Job(new Configuration());
Configuration conf2 = job.getConfiguration();
job.setJobName("Join with Cache");
DistributedCache.addCacheFile(new URI("hdfs://server:port/FilePath/part-r-00000"), conf2);
I guess if it still does not work, there is another problem somewhere, but that doesn't mean that Chris White's point is not correct.
When distributing, don't forget the local link name, preferably using a relative path:
The URI is of the form hdfs://host:port/absolute-path#local-link-name
When reading:
if you don't use the distributed cache facilities, you are supposed to use HDFS's FileSystem to access hdfs://host:port/absolute-path
if you use the distributed cache, then you have to use standard Java file utilities to access the local-link-name
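A short sketch of both sides, reusing the URI from the answer above (the link name myCachedFile is illustrative):
// Sketch: register the cache file with a local link name (link name is illustrative).
DistributedCache.addCacheFile(
        new URI("hdfs://server:port/FilePath/part-r-00000#myCachedFile"), conf2);

// Inside the task (e.g. in setup()), open it through the link name with plain Java I/O.
BufferedReader br = new BufferedReader(new FileReader("myCachedFile"));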
The cache file needs to be in the Hadoop FileSystem. You can do this:
void copyFileToHDFS(JobConf jobConf, String from, String to) {
    try {
        FileSystem aFS = FileSystem.get(jobConf);
        aFS.copyFromLocalFile(false, true, new Path(from), new Path(to));
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
Once the files are copied you can add them to the cache, like so:
void fillCache(JobConf jobConf) throws IOException, URISyntaxException {
    Job job;
    copyFileToHDFS(jobConf, fromLocation, toLocation);
    job = Job.getInstance(jobConf);
    job.addCacheFile(new URI(toLocation));

    JobConf newJobConf = new JobConf(job.getConfiguration());
}
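On the task side, with the new API the registered URIs can then be read back from the context (a sketch; how you open the file depends on whether you registered it with a link name):
// Sketch (new API): inside the mapper, the registered cache URIs are available on the context.
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    URI[] cacheFiles = context.getCacheFiles();
    // Open the file here, e.g. via its #link-name with plain Java I/O,
    // or via FileSystem for the full HDFS path.
}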
