Spark program to monitor executors' performance - hadoop

I am working on a Spark program that monitors each executor's performance, for example recording when an executor starts working and when it finishes its job. I can think of two ways to do that:
First, develop the program so that when an executor starts work, it writes the current time to a file, and when it finishes, it writes that time to the same file. In the end, the "log" files will be spread across the whole cluster network, everywhere except the driver machine.
Second, since executors report to the driver periodically, each time the driver receives a message from an executor, if the message contains "start" or "finish" information, have the driver record everything.
Is that possible?
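For what it's worth, the second approach maps naturally onto Spark's SparkListener mechanism: the driver already receives task and executor events, and a registered listener can record them. Below is a plain-Java sketch of just the record-keeping part; the `ExecutorTimeline` class and its method names are made up for illustration, and in a real program you would invoke them from a SparkListener's callbacks on the driver.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical bookkeeping class: records when each executor starts and
// finishes work. In a real Spark program these methods would be called
// from a SparkListener registered on the driver.
public class ExecutorTimeline {
    private final Map<String, Long> startMillis = new ConcurrentHashMap<>();
    private final Map<String, Long> endMillis = new ConcurrentHashMap<>();

    public void markStart(String executorId) {
        // Keep only the first "start" seen per executor.
        startMillis.putIfAbsent(executorId, System.currentTimeMillis());
    }

    public void markFinish(String executorId) {
        // The last "finish" seen per executor wins.
        endMillis.put(executorId, System.currentTimeMillis());
    }

    // Duration in milliseconds, or -1 if we never saw both events.
    public long durationMillis(String executorId) {
        Long s = startMillis.get(executorId);
        Long e = endMillis.get(executorId);
        return (s == null || e == null) ? -1 : e - s;
    }

    public static void main(String[] args) {
        ExecutorTimeline t = new ExecutorTimeline();
        t.markStart("executor-1");
        t.markFinish("executor-1");
        System.out.println("executor-1 duration >= 0: " + (t.durationMillis("executor-1") >= 0));
        System.out.println("unseen executor: " + t.durationMillis("executor-42"));
    }
}
```

Since everything is collected on the driver, this avoids the scattered per-machine log files of the first approach.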

There are many ways to monitor executor performance as well as application performance.
The best options are to monitor with the help of the Spark Web UI and other open-source monitoring tools, such as Ganglia.
You need to monitor whether your cluster is under-utilized or not, and how many resources are used by the application you have created.
Monitoring can be done using various tools, e.g. Ganglia. From Ganglia you can find CPU, memory and network usage. Based on the observed CPU and memory usage, you can get a better idea of what kind of tuning your application needs.
Hope this helps!

Related

How do I find a runaway Marathon process

I have a mesos / marathon system, and it is working well for the most part. There are upwards of 20 processes running, most of them using only part of a CPU. However, sometimes (especially during development), a process will spin up and start using as much CPU as is available. I can see on my system monitor that there is a pegged CPU, but I can't tell what marathon process is causing it.
Is there a monitor app showing CPU usage for marathon jobs? Something that shows it over time. This would also help with understanding scaling and CPU requirements. Tracking memory usage would be good, but secondary to CPU.
It seems that you haven't configured any isolation mechanism on your agent (slave) nodes. mesos-slave comes with an --isolation flag that defaults to posix/cpu,posix/mem, which means isolation at the process level (pretty much no isolation at all). Using cgroups/cpu,cgroups/mem isolation ensures that a task will be killed by the kernel if it exceeds its memory limit. Memory is a hard constraint that can be easily enforced.
Restricting CPU is more complicated. If you have a machine that offers 8 CPU cores to Mesos and each of your tasks is set to require cpu=2.0, you'll be able to run at most 4 tasks there. That's easy, but at any given moment any of your 4 tasks might utilize all idle cores, so if one of your jobs is misbehaving, it might affect the other jobs running on the same machine. For restricting CPU utilization, see the Completely Fair Scheduler (or the related question How to understand CPU allocation in Mesos? for more details).
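Concretely, switching the agent to cgroups isolation is just a matter of the flags it is started with. A minimal sketch (the master address is a placeholder, and flag spellings can vary between Mesos versions):

```shell
# Start the agent with cgroups-based CPU and memory isolation
# instead of the default posix/cpu,posix/mem (process-level only).
mesos-slave --master=zk://master.example.org:2181/mesos \
            --isolation=cgroups/cpu,cgroups/mem
```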
Regarding monitoring, there are many possibilities available; choose an option that suits your requirements. You can combine several of these solutions; some are open-source, others are enterprise-level (in random order):
collectd for gathering stats, Graphite for storing, Grafana for visualization
Telegraf for gathering stats, InfluxDB for storing, Grafana for visualization
Prometheus for storing and gathering data, Grafana for visualization
Datadog for a cloud based monitoring solution
Sysdig platform for monitoring and deep insights

Hadoop Performance Monitoring tools for Windows

Are there any tools for monitoring performance on a Hadoop cluster on Windows? We installed Hortonworks HDP 2.2.0 on a Windows single-node cluster and tested our jar; we were able to process 5 million records in 26 minutes. Now we have set up a cluster with 4 slave machines and 1 name node. Though each machine has 8 GB of RAM, we are just doing a proof of concept, and we see no improvement in processing time on the cluster. Are there any tools that can point out the problem? All the available ones are written for Linux.
Thanks,
Kishore.
5 million records doesn't sound like a lot to throw on Hadoop. What's the size of your data in GB?
I don't know any Hadoop monitoring tools for Windows, but you should start with the basics: is your data splittable? Have a look at the resource manager's view - how many containers did your map-reduce app get? Were they distributed across all machines? (The capacity scheduler tends not to distribute the load across several machines if it can fit all of it on one.) What about CPU usage per task attempt, and I/O per task attempt?
You should also store, compare and analyze Windows performance counters - CPU, I/O, network - to see if you have any bottlenecks.
You may not need Windows-native tools to surface the kinds of performance metrics you are looking for. If you're after performance metrics from YARN, MapReduce, or HDFS, you can collect metrics from each of those technologies out of the box from a web interface/HTTP endpoint exposed by each tech in question.
With HDFS, for example, you can collect metrics from the NameNode and DataNodes via HTTP. In addition, you can access the full suite of metrics via JMX, though that option requires a little more configuration.
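As a sketch of what consuming those endpoints looks like: the NameNode's JMX servlet (typically http://namenode:50070/jmx on Hadoop 2.x) returns JSON, and even without a JSON library you can pull out a single metric with a regular expression. The sample payload below is abbreviated and made up for illustration, not a real NameNode response.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract one numeric metric from a JMX-style JSON payload. In a real
// setup the payload would come from an HTTP GET against something like
// http://namenode:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem
public class JmxMetric {
    static long readLongMetric(String json, String name) {
        Matcher m = Pattern
            .compile("\"" + Pattern.quote(name) + "\"\\s*:\\s*(\\d+)")
            .matcher(json);
        if (!m.find()) throw new IllegalArgumentException("metric not found: " + name);
        return Long.parseLong(m.group(1));
    }

    public static void main(String[] args) {
        // Abbreviated, illustrative sample of a JMX servlet response.
        String sample = "{ \"beans\" : [ { \"name\" : \"Hadoop:service=NameNode,name=FSNamesystem\", "
                + "\"CapacityTotal\" : 52844687360, \"CapacityUsed\" : 1975717888 } ] }";
        System.out.println("CapacityUsed = " + readLongMetric(sample, "CapacityUsed"));
        // CapacityUsed = 1975717888
    }
}
```

A real collector would fetch the URL on a schedule and ship the numbers to whatever storage/graphing stack you use; for anything beyond a quick probe, a proper JSON parser is the better choice.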
I wrote a guide to collecting Hadoop performance metrics with native tools which you might find useful. It details methods for collecting metrics for MapReduce, YARN, HDFS, and ZooKeeper.

Spark Streaming: What are things we should monitor to keep the streaming running?

I have a Spark project running on a 4-core, 16 GB (both master/worker) instance. Can anyone tell me what things to keep monitoring so that my cluster/jobs never go down?
I have created a small list which includes the following items, please extend the list if you know more:
Monitor the Spark Master/Worker for failures
Monitor HDFS for filling up / going down
Monitor network connectivity for master/workers
Monitor Spark jobs for getting killed
That's a good list. But in addition to those, I would actually monitor the status of the receivers of the streaming application (assuming you are using some non-HDFS source of data), that is, whether they are connected or not. To be honest, this was tricky to do with older versions of Spark Streaming, as the instrumentation to get the receiver status didn't quite exist. However, with Spark 1.0 (to be released very soon), you can use the org.apache.spark.streaming.StreamingListener interface to get events regarding the status of the receivers.
A sneak peek at the to-be-released Spark 1.0 docs is at
http://people.apache.org/~tdas/spark-1.0.0-rc10-docs/streaming-programming-guide.html

How to Monitor Resource Utilization?

Is there a tool which logs system resource utilization (CPU, memory, I/O and network) over a period of time and generates a graph?
I need to monitor the system and identify the periods in which resources are being heavily utilized.
If any of you have experience with this kind of tool, kindly suggest one.
Thanks in advance.
Besides third-party tools, there is Windows Performance Monitor, which can help. It shows real-time graphs and can save the performance information into files that you can open and analyze later.
It provides multiple metrics for CPU, memory, I/O and network utilization, and shows an instance for each processor on the machine. It can also be used to monitor remote machines.
You can also create data collector sets, to have all monitored counters in a single component.
Performance Monitoring Getting Started Guide
Create a Data Collector Set to Monitor Performance Counters
I think this tool will help you
System-Resources-Monitoring
System Monitoring

Java - how to determine the current load

How would I determine the current server load? Do I need to use JMX here to get the CPU time, or is there another way to determine this?
I basically want background jobs to run only when the server is idle. I will use Quartz to fire the job every 30 minutes, check the server load, then proceed if it is low or halt if it is busy.
Once I can determine how to measure the load (CPU time, memory usage), I can measure it at various points to decide how I want to configure the server.
Walter
Tricky to do in a portable way; it would likely depend considerably on your platform.
An alternative is to configure your Quartz jobs to run in low-priority threads. Quartz allows you to configure the thread factory, and if the server is busy, then the thread should be shuffled to the back of the pack until it can be run without getting in the way.
Also, if the load spikes in the middle of the job, then the VM will automatically throttle your batch job until the load drops again. It should be self-regulating, which you wouldn't get by manual introspection of the current load.
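A minimal sketch of such a thread factory, using plain java.util.concurrent (Quartz itself can alternatively be pointed at a low thread priority through its thread pool configuration, but the idea is the same):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

// A thread factory that hands out minimum-priority daemon threads,
// so background jobs yield to request-serving threads under load.
public class LowPriorityThreadFactory implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "background-job");
        t.setPriority(Thread.MIN_PRIORITY); // scheduler hint: run last
        t.setDaemon(true);                  // don't block JVM shutdown
        return t;
    }

    public static void main(String[] args) {
        Thread t = new LowPriorityThreadFactory().newThread(() -> {});
        System.out.println("priority=" + t.getPriority() + " daemon=" + t.isDaemon());
        // priority=1 daemon=true

        // Typical use: back an executor with the factory.
        ExecutorService pool = Executors.newFixedThreadPool(2, new LowPriorityThreadFactory());
        pool.shutdown();
    }
}
```

Note that thread priority is only a hint to the OS scheduler; how strongly it is honored varies by platform.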
I think you've answered your own question. If you want a pure Java solution, then the best that you can do is the information returned by the ThreadMXBean.
You can find out how many threads there are, how many processors the host machine has and how much time has been used by each thread, and calculate CPU load from that.
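A sketch of pulling those numbers from the standard MXBeans. Note that getSystemLoadAverage() can return a negative value on platforms where load average is unavailable (notably Windows), so that case should be handled explicitly; the 0.75-per-core idle threshold below is an arbitrary example, not a standard.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

public class LoadCheck {
    // Decide whether the server is "idle enough" to run a batch job.
    // The 0.75-per-core threshold is an illustrative choice.
    static boolean isIdle(double loadAverage, int processors) {
        if (loadAverage < 0) return true; // load unavailable (e.g. Windows): assume idle
        return loadAverage < 0.75 * processors;
    }

    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int cpus = os.getAvailableProcessors();
        double load = os.getSystemLoadAverage(); // 1-minute average, or negative if unsupported
        System.out.println("processors=" + cpus + " load=" + load
                + " threads=" + threads.getThreadCount()
                + " idle=" + isIdle(load, cpus));
    }
}
```

A Quartz job could call something like isIdle() at the start of each firing and simply return early when the box is busy.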