Replicate updated jar file to each slave node on Spark - hadoop

I have an Apache Spark cluster consisting of a master and multiple slave nodes. In the jars folder of each node I require the jar file for a program I run on Spark.
There are regular updates to this jar, so I find myself constantly copying the updated file to every node.
Is there a quick and easy way to replicate an updated jar from the master to all slave nodes, or some other way of distributing it each time the jar changes?

When you run your Spark job with spark-submit, use the --jars option to specify the path to the jar file(s) your application needs.
Jars listed in --jars are automatically shipped to the cluster, so you only need the jar on the node you submit from (the master).
Read about how to use this option in the spark-submit documentation.
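For example, a submission could look like the following; the class name, master URL, and jar paths are placeholders for your own setup:
spark-submit \
  --class com.example.MyApp \
  --master spark://master-host:7077 \
  --jars /path/to/dependency.jar \
  /path/to/myapp.jar arg1 arg2
A comma-separated list can be given to --jars if the application depends on several jars.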

Related

Running hadoop jar command in edge Node

I am new to Hadoop and have a couple of questions about running the hadoop jar command from an edge node (http://www.dummies.com/programming/big-data/hadoop/edge-nodes-in-hadoop-clusters/):
hadoop jar ${JAR_FILE} {CLASS_NAMEWithPackage}
1. After running the above command, why is the jar extracted into the -Djava.io.tmpdir directory on the edge node? Every time I run this command I get a directory such as hadoop-unjar7637059002474165348 in the temp dir that contains the extracted jar. Is this expected? I thought hadoop jar submits the whole jar to YARN, so I don't understand why it is extracted into a temp folder.
2. After extracting the jar on the edge node, is the program expected to remove the extracted jar directory (in this case hadoop-unjar7637059002474165348)?
Thanks!
You can probably look at this answer and this question for why your jars are extracted on the edge node (client node) when you run the hadoop jar command: it is done to support the 'jar-within-jar' idea when running your jar from the client node. Pushing jars to HDFS and YARN happens after that, but before any of that can happen your jar has to be executed to begin with. In your case you may or may not have a jar-within-jar, but the concept is supported either way.
As for auto-deletion, the extracted directory is probably not deleted automatically.
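If the leftover hadoop-unjar* directories start piling up in the temp dir, something along these lines (adjust the temp path and age threshold to your environment) can be scheduled to clean them out:
find /tmp -maxdepth 1 -type d -name 'hadoop-unjar*' -mtime +7 -exec rm -rf {} +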

How to execute a MapReduce program (e.g. wordcount) from HDFS and see the output?

I am new to Hadoop. I have a simple wordcount program in Eclipse which takes input files and then shows the output, but I need to execute the same program from HDFS. I have already created a JAR file for the wordcount program.
Can anyone please let me know how to proceed?
You need to have a cluster set up, even if it is a single-node cluster. Then you can run your .jar from the hadoop command line:
jar
Runs a jar file. Users can bundle their MapReduce code in a jar file and execute it using this command.
Usage: hadoop jar <jar> [mainClass] args...
The streaming jobs are run via this command. Examples can be referred from the Streaming examples.
The word count example is also run using the jar command. It can be referred from the Wordcount example.
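For instance, the word count example bundled with Hadoop can be run straight from the examples jar; the exact jar path and version depend on your installation:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output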
Initially you need to set up a Hadoop cluster, as discussed by Remus.
Single Node Setup and Multi Node Setup are two good ways to start.
Once the setup is done, start the Hadoop daemons and copy the input files into an HDFS directory.
Prepare the jar of your program.
Run the jar from the terminal using hadoop jar <your jar name> <your main class> <input path> <output directory path>
(The jar arguments depend on your program.)
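Putting it together, a run might look like the sketch below; the jar name, main class, and HDFS paths are placeholders, and the output directory must not exist before the job runs:
hadoop fs -mkdir -p /user/me/wordcount/input
hadoop fs -put localfile.txt /user/me/wordcount/input
hadoop jar wordcount.jar WordCount /user/me/wordcount/input /user/me/wordcount/output
hadoop fs -cat /user/me/wordcount/output/part-r-00000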

How does zookeeper determine the 'java.library.path' for a hadoop job?

I am running Hadoop jobs on a distributed cluster using Oozie, and I set 'oozie.libpath' for the Oozie jobs.
Recently I deleted a few older-version jar files from the library path Oozie uses and replaced them with newer versions. However, when I run my Hadoop job, both the older and the newer versions of the jar files get loaded, and the MapReduce job still uses the older version.
I am not sure where ZooKeeper is loading the jar files from. Are there any default settings it loads the jar files from? There is only one library path in my HDFS and it does not contain those jar files.
I found what was going wrong: the wrong jar was being shipped with my job file. I should have checked there first.
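If you hit the same issue, it is worth listing both the lib directory next to your workflow.xml and whatever oozie.libpath points to, since jars from both end up on the job's classpath; the paths below are placeholders:
hadoop fs -ls /user/me/apps/myworkflow/lib
hadoop fs -ls /user/me/share/lib
In job.properties, the relevant settings typically look something like this:
oozie.wf.application.path=${nameNode}/user/me/apps/myworkflow
oozie.libpath=${nameNode}/user/me/share/lib
oozie.use.system.libpath=true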

Location of Hadoop Libjars

I am running a Hadoop job on a cluster and passing some jars using the -libjars option. I am not sure where I can find these jars on the cluster. Also, are these jars copied from the local machine to the cluster, and if so, where on the cluster can I find them?
According to Hadoop: The Definitive Guide:
Copies the specified JAR files from the local filesystem (or any filesystem if a scheme is specified) to the shared filesystem used by the jobtracker (usually HDFS), and adds them to the MapReduce task's classpath. This option is a useful way of shipping JAR files that a job is dependent on.
So, the specified files are copied from the local filesystem to HDFS and then onto the classpath of the mapper/reducer nodes. These files are replicated mapreduce.client.submit.file.replication times, which defaults to 10. The reason they are replicated more than the usual 3 times is that the files have to be distributed to all the required nodes.
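As a sketch, a -libjars invocation typically looks like the line below (jar names, driver class, and paths are placeholders). Note that -libjars is a generic option handled by GenericOptionsParser, so the driver should be run through ToolRunner for it to take effect:
hadoop jar myjob.jar com.example.MyDriver -libjars /local/deps/dep1.jar,/local/deps/dep2.jar /input /output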

Hadoop Mapreduce with two jars (one of the jars is needed on namenode only)

The MapReduce task is a very simple wordcount implemented in Java (please see http://wiki.apache.org/hadoop/WordCount).
After the last line, "job.waitForCompletion(true);", I add some code implemented in Jython.
This means the Jython libraries are only needed on the namenode.
However, I added all the Jython libraries to a single jar and then executed it:
hadoop jar wordcount.jar in out
The wordcount completes without any problem.
The problem I want to solve is that the heavy Jython libraries are not needed on the slave nodes (mappers and reducers); the jar is almost 15 MB, of which more than 14 MB is Jython.
Can I split them and get the same results?
Nobody answered this question, so here is how I solved it, even if it is not the best way.
Simply copy jython.jar to /usr/local/hadoop (or wherever Hadoop is installed), which is on Hadoop's default classpath, and build your jar without jython.jar.
If you need a very big library for the MapReduce task itself, then:
upload jython.jar to HDFS
hadoop fs -put jython.jar Lib/jython.jar
and add the following line to your main code (the old-API DistributedCache call takes the path plus the job configuration):
DistributedCache.addFileToClassPath(new Path("Lib/jython.jar"), job.getConfiguration());
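For context, here is a minimal sketch of what the driver could look like with that call in place. It is essentially the linked WordCount example plus the distributed-cache line, assumes the old org.apache.hadoop.filecache.DistributedCache API, and uses illustrative class names and paths:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);          // emit (word, 1) for every token
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();                  // sum the counts for each word
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Ship the big library (already uploaded to HDFS under Lib/) to the task
    // nodes' classpath via the distributed cache, instead of bundling it
    // inside the job jar. Only needed if the map/reduce tasks use it.
    DistributedCache.addFileToClassPath(new Path("Lib/jython.jar"), job.getConfiguration());

    boolean ok = job.waitForCompletion(true);
    // Any Jython-based post-processing runs here, on the submitting node only.
    System.exit(ok ? 0 : 1);
  }
}

On Hadoop 2 and later, job.addFileToClassPath(new Path("Lib/jython.jar")) is the non-deprecated equivalent of the DistributedCache call.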
