I couldn't find an answer to my issue while sifting through some Hadoop guides: I am submitting a number of Hadoop jobs (up to 200) in one go via a shell script on a client machine. Each job is started by means of a JAR (which is quite large; approx. 150 MB). Right after submitting the jobs, the client machine has a very high CPU load (every core at 100%) and the RAM fills up quite fast, to the point that the client is no longer usable. I thought that the computation of each job was done entirely within the Hadoop framework, and that only some status information was exchanged between the cluster and the client while a job is running.
So why is the client under such heavy load? Am I submitting Hadoop jobs the wrong way? Is each JAR too big?
Thanks in advance.
It is not about the JAR. The client side computes the InputSplits.
So when each job has a large number of input files, the client machine can come under a lot of load.
But I suspect that when submitting 200 jobs, the RPC handlers on the JobTracker run into problems. How many RPC handlers are active on the JobTracker?
In any case, I would batch the submissions, 10 or 20 jobs at a time, and wait for their completion; a sketch of that is below. I assume you're using the default FIFO scheduler? If so, you won't benefit from submitting all 200 jobs at once anyway.
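For illustration, here is a minimal sketch of batched submission using the org.apache.hadoop.mapreduce.Job API; how each Job gets configured is left out, and the batch size and polling interval are arbitrary choices:

```java
import java.util.List;
import org.apache.hadoop.mapreduce.Job;

public class BatchSubmitter {
    private static final int BATCH_SIZE = 10;

    // jobs: fully configured Job instances (configuration not shown)
    public static void runInBatches(List<Job> jobs) throws Exception {
        for (int start = 0; start < jobs.size(); start += BATCH_SIZE) {
            List<Job> batch =
                jobs.subList(start, Math.min(start + BATCH_SIZE, jobs.size()));
            for (Job job : batch) {
                job.submit();                  // non-blocking submission
            }
            for (Job job : batch) {
                while (!job.isComplete()) {    // wait for the whole batch
                    Thread.sleep(5000);
                }
            }
        }
    }
}
```

This keeps at most BATCH_SIZE jobs' worth of InputSplit computation and submission state alive on the client at any one time.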
I am working on a Spark program that monitors each executor's performance, e.g. marking down when an executor starts working and when it finishes its job. I am thinking of two ways to do that:
First, develop the program so that when an executor starts work, it marks down the current time to a file, and when it finishes, it marks down that time to the same file. In the end, all these "log" files will be spread across the whole cluster network, everywhere except the driver machine.
Second, since executors report to the driver periodically, each time the driver receives a message from an executor, if the message contains "start" or "finish" information, have the driver record everything.
Is that possible?
There are many ways to monitor executor performance as well as application performance.
The best way is to monitor with the help of the Spark Web UI and the open-source monitoring tools that are available (e.g. Ganglia).
You need to monitor your application to see whether your cluster is under-utilized, and how many resources are used by the application you have created.
Monitoring can be done using various tools, e.g. Ganglia, from which you can find CPU, memory, and network usage. Based on observations of CPU and memory usage, you can get a better idea of what kind of tuning your application needs.
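If you want to record executor start/finish events programmatically (the asker's second approach), Spark's listener API on the driver is the natural fit. Here is a minimal sketch, assuming a recent Spark version where org.apache.spark.scheduler.SparkListener is an abstract class that can be extended directly from Java; older versions expose the same events differently:

```java
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerExecutorAdded;
import org.apache.spark.scheduler.SparkListenerExecutorRemoved;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

// Runs inside the driver; Spark invokes these callbacks as events arrive.
public class ExecutorTimingListener extends SparkListener {
    @Override
    public void onExecutorAdded(SparkListenerExecutorAdded added) {
        System.out.printf("executor %s added at %d%n",
                added.executorId(), added.time());
    }

    @Override
    public void onExecutorRemoved(SparkListenerExecutorRemoved removed) {
        System.out.printf("executor %s removed at %d%n",
                removed.executorId(), removed.time());
    }

    @Override
    public void onTaskEnd(SparkListenerTaskEnd end) {
        System.out.printf("task %d on executor %s finished at %d%n",
                end.taskInfo().taskId(), end.taskInfo().executorId(),
                end.taskInfo().finishTime());
    }
}

// Registration on the driver (sc is your SparkContext):
//   sc.addSparkListener(new ExecutorTimingListener());
```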
Hope this helps!
This is in relation to my previous post (here) regarding the OOM I'm experiencing on a driver after running some Spark steps.
I have a cluster with 2 nodes in addition to the master, running the job as client. It's a small job that is not very memory intensive.
I've paid particular attention to the Hadoop processes via htop; they are the user-generated ones and also the highest memory consumers. The main culprit is the amazon.emr.metric.server process, followed by the state pusher process.
As a test I killed the process, and the memory shown by Ganglia dropped quite drastically, after which I was able to run 3-4 consecutive jobs before the OOM happened again. This behaviour repeats if I manually kill the process.
My question really is regarding the default behaviour of these processes and whether what I'm witnessing is the norm or whether something crazy is happening.
I have a Spark project running on a 4-core, 16 GB instance (acting as both master and worker). Can anyone tell me what things to keep monitoring so that my cluster/jobs never go down?
I have created a small list of items; please extend the list if you know more:
Monitor the Spark master and workers for failures
Monitor HDFS for filling up or going down
Monitor network connectivity between master and workers
Monitor Spark jobs for getting killed
That's a good list. But in addition to those, I would also monitor the status of the receivers of the streaming application (assuming you are using some non-HDFS source of data), i.e. whether they are connected or not. To be honest, this was tricky to do with older versions of Spark Streaming, as the instrumentation to get the receiver status didn't quite exist. However, with Spark 1.0 (to be released very soon), you can use the org.apache.spark.streaming.scheduler.StreamingListener interface to get events regarding the status of a receiver.
A sneak peek at the to-be-released Spark 1.0 docs is at
http://people.apache.org/~tdas/spark-1.0.0-rc10-docs/streaming-programming-guide.html
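For illustration (this is not part of the original answer), a receiver-status listener might look roughly like the sketch below. The event and field names follow the scheduler package's listener API but should be checked against your Spark version; depending on the Spark/Scala combination, Java implementers may also need to supply no-op overrides for the batch-related callbacks:

```java
import org.apache.spark.streaming.scheduler.StreamingListener;
import org.apache.spark.streaming.scheduler.StreamingListenerReceiverError;
import org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted;
import org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped;

// Logs receiver status changes; runs on the driver.
public class ReceiverStatusListener implements StreamingListener {
    @Override
    public void onReceiverStarted(StreamingListenerReceiverStarted started) {
        System.out.println("receiver started: " + started.receiverInfo());
    }

    @Override
    public void onReceiverError(StreamingListenerReceiverError error) {
        System.err.println("receiver error: " + error.receiverInfo());
    }

    @Override
    public void onReceiverStopped(StreamingListenerReceiverStopped stopped) {
        System.out.println("receiver stopped: " + stopped.receiverInfo());
    }
}

// Attach it to the streaming context (ssc):
//   ssc.addStreamingListener(new ReceiverStatusListener());
```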
A couple of days ago Yahoo posted about the Storm-on-YARN project (http://developer.yahoo.com/blogs/ydn/storm-yarn-released-open-source-143745133.html), which makes it possible to run Storm on YARN.
That's a big improvement. However, I have two questions regarding running tasks like Storm on YARN. Tasks like Storm don't have a limit on execution time: when you run Storm, you expect it to work for days or months, listening to a queue or whatever.
In other words, there is a class of tasks that have no limit on execution time (and I'd like to report 0% progress).
1) What about timeouts? A regular M/R job is killed when it hangs; how do you prevent that? I walked through the code but didn't find any special handling.
2) Also, MR1 had a queue where jobs waited for execution: when the cluster finished one job, it picked up the next job from the queue. What about YARN? If I push an endless Storm-like job A, and then job B, will job B ever be executed?
Sorry if my questions seem ridiculous; maybe I'm missing or misunderstanding something.
Hadoop's JobTracker was (is) responsible for both cluster resources and the application lifecycle. YARN is only responsible for managing cluster resources; the application lifecycle is the responsibility of the application.
This change means that YARN can be used to manage any distributed paradigm. MR2 is of course the initial implementation (map/reduce over YARN), but you can see other implementations, like the Storm-on-YARN project you mentioned or Hortonworks' intention to integrate SQL into Hadoop, etc.
You can take a look at a library called Weave from Continuuity that provides a simple API for building distributed apps on YARN.
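On the timeout question specifically: in YARN, the ApplicationMaster decides when the application is finished, and the ResourceManager only expects it to keep heartbeating; there is no notion of a job "taking too long". A minimal sketch of such a heartbeat loop using the AMRMClient API (the registration arguments are placeholders) might look like this:

```java
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Skeleton ApplicationMaster for a long-running (Storm-like) application.
// YARN only notices an AM that stops heartbeating, so an endless
// allocate() loop is a legitimate way to run a service.
public class LongRunningAM {
    public static void main(String[] args) throws Exception {
        AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
        rm.init(new YarnConfiguration());
        rm.start();

        // Host, port, and tracking URL are placeholders here.
        rm.registerApplicationMaster("", 0, "");

        while (true) {                        // runs for days or months
            // Heartbeat to the ResourceManager; 0.0f = "0% progress",
            // which is perfectly fine for a service-style application.
            AllocateResponse response = rm.allocate(0.0f);
            // ... inspect response, request and launch worker containers ...
            Thread.sleep(1000);
        }
    }
}
```

As for job B: whether it runs alongside an endless job A depends on the scheduler configuration. With separate queues under the Capacity or Fair scheduler it can get its own share of resources, whereas under plain FIFO it would wait behind A.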
I'm thinking of learning Hadoop but am not sure it'll solve my problem. Basically, I have a job with a queue and a bunch of workers. Each worker does a small amount of work and then either saves the result (if successful) or sends it back to the queue for further processing. My problem is scalable but limited by network bandwidth (on EC2), which will never keep up with multiple CPUs crunching the data. I thought maybe I could run my jobs in Java on a Hadoop cluster and have Hadoop distribute the work via a queue. Would that be a better approach? Am I correct in assuming Hadoop can manage a queue and try to run jobs as locally as possible, to minimize bandwidth usage and maximize CPU usage? My program is very CPU-bound, but most of my recent performance problems come from passing work over the network (I want to keep the work as local as possible). The difference between the Hadoop tutorials I've seen and my problem is that in the tutorials all the work is known in advance, while my program constantly generates new work for itself (until it's finally done). Would this work, and would it help me reduce the impact of passing messages over the network?
Sorry, I'm new to Hadoop and just wanted to know whether it could solve my problem.
Hadoop is all about running jobs in a batch-like mode over a large data set. It's hard to get it to exhibit queue-like behaviour, but not impossible. There is Apache ZooKeeper, which gives you the synchronization primitives to build a queue if you need one; a rough sketch of that recipe follows.
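For illustration, here is a bare-bones version of the classic ZooKeeper queue recipe (producers append sequential znodes, consumers claim the lowest-numbered child); the queue path is assumed to exist, and error handling and watches are stripped out:

```java
import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkQueue {
    private final ZooKeeper zk;
    private final String root = "/queue";   // assumed to exist already

    public ZkQueue(ZooKeeper zk) { this.zk = zk; }

    public void put(byte[] task) throws KeeperException, InterruptedException {
        // Sequential znodes give FIFO ordering: task-0000000000, task-0000000001, ...
        zk.create(root + "/task-", task,
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
    }

    public byte[] take() throws KeeperException, InterruptedException {
        while (true) {
            List<String> children = zk.getChildren(root, false);
            if (children.isEmpty()) { Thread.sleep(100); continue; }
            Collections.sort(children);            // lowest sequence number first
            String head = root + "/" + children.get(0);
            try {
                byte[] data = zk.getData(head, false, null);
                zk.delete(head, -1);               // the delete is the claim
                return data;
            } catch (KeeperException.NoNodeException raced) {
                // another consumer claimed it first; retry with the next child
            }
        }
    }
}
```

Note the race in take(): two consumers can read the same head node, which is why the delete doubles as the claim; a real implementation would use watches instead of the sleep-poll.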
There are plenty of tools to solve the problem it looks like you are trying to solve. I suggest taking a look at RabbitMQ; and if you use Python, Celery is quite fantastic.