How to specify multiple jar files in Oozie - hadoop

I need a solution for the following problem:
My project has two jars: one jar contains all the bean classes (Employee, etc.), and the other contains the MR jobs, which use the bean classes from the first jar. When I try to run an MR job as a simple Java program, I get a ClassNotFoundException (com.abc.Employee is not found, as it is in the other jar). Can anyone tell me how to solve this? In a real project there may be many jars, not just one or two, so how do I specify all of those jars?

You should have a lib folder in the HDFS directory where you are storing your Oozie workflow. You can place both jar files in this folder, and Oozie will ensure both are on the classpath when your MR job executes:
hdfs://namenode:8020/path/to/oozie/app/workflow.xml
hdfs://namenode:8020/path/to/oozie/app/lib/first.jar
hdfs://namenode:8020/path/to/oozie/app/lib/second.jar
See Workflow Application Deployment for more details.
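Nothing extra is needed in the workflow definition itself to pick up those jars. In a minimal map-reduce action like the sketch below (the action name, mapper class, and the omitted job properties are illustrative, not from the question), the job classes from second.jar can freely reference the bean classes in first.jar:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="two-jar-wf">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <!-- mapper lives in second.jar and uses beans from first.jar;
                     input/output dirs and other job properties omitted for brevity -->
                <property>
                    <name>mapred.mapper.class</name>
                    <value>com.abc.EmployeeMapper</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>MR job failed</message>
    </kill>
    <end name="end"/>
</workflow-app>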
If you use the same jars in a number of Oozie workflows, you can place these common jars (HBase jars, for example) in a directory in HDFS, and then set an Oozie property to include that folder's jars. See HDFS Share Libraries for more details.
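For example, a job.properties along these lines (the HDFS path is illustrative) pulls in both the shared directory and the system sharelib:

# directory of common jars shared across workflows (path is an example)
oozie.libpath=hdfs://namenode:8020/user/shared/libs
# also put the Oozie system sharelib on the classpath
oozie.use.system.libpath=true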

Related

How to add the Mahout XML classifier jar to a Hadoop cluster, as I don't want to add that library to the Hadoop classpath

I am parsing an XML file using the XmlInputFormat class, which is present in the mahout-examples jar, but while running the map reduce jar file I get the error below:
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.mahout.classifier.bayes.XmlInputFormat not found
Please let me know how I can make these jars available when running on a multi-node Hadoop cluster.
Include all the mahout-examples jars in the -libjars command line option of the hadoop jar ... command. The jars will be placed in the distributed cache and made available to all of the job's task attempts. More specifically, you will find each jar in one of the ${mapred.local.dir}/taskTracker/archive/${user.name}/distcache/… subdirectories on the local nodes.
Please refer to this link for more details.
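As a sketch (the driver class, jar names, and paths here are illustrative, not from the question), the invocation looks like this:

hadoop jar my-xml-job.jar com.example.XmlDriver \
    -libjars /path/to/mahout-examples-0.9-job.jar \
    /input/xml /output/xml

Note that -libjars is only honored when the driver parses its arguments with GenericOptionsParser, most simply by implementing Tool and launching through ToolRunner, and that generic options must come before the application arguments.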

How do I specify multiple libpaths in an Oozie job?

My Oozie job uses 2 jars, x.jar and y.jar, and the following is my job.properties file:
oozie.libpath=/lib
oozie.use.system.libpath=true
This works perfectly when both jars are present at the same location on HDFS, at /lib/x.jar and /lib/y.jar.
Now I have the 2 jars placed at different locations, /lib/1/x.jar and /lib/2/y.jar.
How can I rewrite my job.properties so that both jars are used while running the map reduce job?
Note: I have already referenced the answer to How to specify multiple jar files in oozie, but this does not solve my problem.
Found the answer at
http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/
It turns out that I can specify multiple paths, separated by commas, in the job.properties file:
oozie.libpath=/path/to/jars,another/path/to/jars
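Applied to the layout in the question, job.properties becomes:

# comma-separated list; jars from both directories end up on the job classpath
oozie.libpath=/lib/1,/lib/2
oozie.use.system.libpath=true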

Can I run a JAR file which includes another JAR file under its lib folder in HDInsight?

Is it possible to run a JAR file in HDInsight which includes another JAR file under the lib folder?
JAR file
├── /folder1/subfolder1/myApp/…
│   └── .class file
└── lib/dependency.jar   // library (jar file)
Thank you!
On HDInsight, we should be able to run a Java MapReduce JAR which has a dependency on another JAR. There are a few ways to do this, but typically not by copying the second JAR under the lib folder on the headnode.
The reasons are: depending on where the dependency is needed, you may have to copy the JAR under the lib folder of all worker nodes and headnodes, which becomes a tedious task. Also, this change will be erased when the node gets re-imaged by Azure, and hence it is not a supported approach.
Now, there are two types of dependencies:
1. The MapReduce driver class depends on another external JAR.
2. A map or reduce task depends on another JAR, where the map or reduce function calls an API on the external JAR.
Scenario #1 (MapReduce driver class depends on another JAR):
We can use one of the following options:
a. Copy your dependency JAR to a local folder (like d:\test on Windows HDInsight) on the headnode, and then use RDP to append this path to the HADOOP_CLASSPATH environment variable on the headnode. This is suitable for dev/test, running jobs directly from the headnode, but it won't work with remote job submission, so it is not suitable for production scenarios.
b. Use a 'fat' (uber) JAR that includes all the dependent jars inside your JAR; you can build one with the Maven Shade plugin (example here). Both options are sketched below.
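A quick sketch of both options (the paths, jar name, and plugin version are illustrative, not from the answer). For option a, on a Windows headnode:

set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;d:\test\dependency.jar

For option b, a minimal Shade plugin section in the job's pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <!-- rebuild the project JAR at package time with all runtime dependencies inside -->
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The fat JAR avoids any classpath configuration on the cluster, at the cost of a bigger artifact to upload with each change.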
Scenario #2 (map or reduce function calls an API on an external JAR):
Basically, use the -libjars option.
If you want to run the MapReduce JAR from the Hadoop command line:
a. Copy the MapReduce JAR to a local path (like d:\test).
b. Copy the dependent JAR to WASB (Azure Blob storage).
Example of running a MapReduce JAR with a dependency:
hadoop jar D:\Test\BlobCount-0.0.1-SNAPSHOT.jar css.ms.BlobCount.BlobCounter -libjars wasb://mycontainername@azimwasb.blob.core.windows.net/mrdata/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar -DStorageAccount=%StorageAccount% -DStorageKey=%StorageKey% -DContainer=%Container% /mcdpoc/mrinput /mcdpoc/mroutput
The example uses HDInsight on Windows; you can take a similar approach on HDInsight Linux as well.
Using PowerShell or the .NET SDK (remote job submission): with PowerShell, you can use the -LibJars parameter to refer to dependent jars.
You can review the following documentation, which has various examples of using PowerShell, SSH, etc.:
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-use-mapreduce/
I hope it helps!
Thanks,
Azim

Confused between -libjars and -archives to distribute side data to task nodes

I have a MapReduce job which uses a third-party jar. For shipping a jar file to the task nodes, I know that there are 2 ways to do it: hadoop jar -archives /custom.jar or hadoop jar -libjars /custom.jar, provided my job uses GenericOptionsParser.
My question is: which is the better option to choose, given that jar files can be passed by both the -archives and -libjars options?
-libjars is best suited to shipping jars, as the documentation says; the jars are placed on the task classpath as-is. -archives is general-purpose: the option unarchives the files on the task node, which is not what you want for a jar, since you never want the jar to be unzipped. -archives is mostly for shipping other kinds of files and making them available on the task node.
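To make the difference concrete, a sketch (the jar, driver, and path names are placeholders):

# the jar travels via the distributed cache and lands on the task classpath as-is
hadoop jar my-job.jar com.example.MyDriver -libjars /path/to/custom.jar /input /output

# the archive is unpacked into the task's working directory (useful for zip/tar side data)
hadoop jar my-job.jar com.example.MyDriver -archives /path/to/sidedata.zip /input /output

As the question notes, both options are handled by GenericOptionsParser, so the driver must be run through ToolRunner (or parse generic options itself).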

Hadoop job DocumentDB dependency jar file

I have a Hadoop job which gets its input from Azure DocumentDB. I have put the DocumentDB dependency jar files under a directory called 'lib'. However, when I run the job it gives me a ClassNotFoundException for one of the classes in the jar file. I also tried adding the jar files using the -libjars option, but that didn't work either. Does anyone have any idea what could be wrong?
