Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I deployed Hadoop 2 (HDP2) and would like to get the disk I/O metrics of my slave nodes in Ganglia. I haven't found any relevant metric so far.
Which metric do you suggest using, or adding to Ganglia?
Thanks
You can use the Ganglia Python module Disk-stat, provided here.
Cloudera Manager can help monitor cluster status, including disk I/O.
As stated by @vishnu: the Ganglia Python module Disk-stat, provided here.
You can also use gmetric scripts that you would run from cron, from here.
Finally, you can use collectl and export disk metrics to Ganglia; see here.
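To illustrate the gmetric-from-cron approach, here is a minimal sketch that parses `/proc/diskstats` and builds `gmetric` command lines. The metric names (`disk_sda_reads`, etc.) and the choice of counters are my own assumptions, not anything HDP2-specific; adjust names, units, and devices for your cluster.

```python
# Sketch of a cron-driven disk-metrics collector for Ganglia.
# The gmetric metric names below are illustrative assumptions.
import shlex

def parse_diskstats(lines):
    """Return {device: counters} from /proc/diskstats-formatted lines."""
    stats = {}
    for line in lines:
        f = line.split()
        if len(f) < 10:
            continue
        # Field layout (kernel iostats docs): 0 major, 1 minor, 2 device,
        # 3 reads completed, 5 sectors read, 7 writes completed, 9 sectors written
        stats[f[2]] = {
            "reads": int(f[3]),
            "sectors_read": int(f[5]),
            "writes": int(f[7]),
            "sectors_written": int(f[9]),
        }
    return stats

def gmetric_commands(stats, devices=("sda",)):
    """Build gmetric command lines for the chosen devices."""
    cmds = []
    for dev in devices:
        for name, value in sorted(stats.get(dev, {}).items()):
            cmds.append(
                "gmetric --name %s --value %d --type uint32 --units count"
                % (shlex.quote("disk_%s_%s" % (dev, name)), value)
            )
    return cmds

if __name__ == "__main__":
    import os
    if os.path.exists("/proc/diskstats"):
        with open("/proc/diskstats") as fh:
            for cmd in gmetric_commands(parse_diskstats(fh)):
                print(cmd)
```

A cron entry would pipe those command lines to a shell every minute or so; note these are raw counters, so you may prefer to compute deltas between runs before emitting them.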
Closed 3 years ago.
I want to monitor different servers for my enterprise, and I'm going to use Prometheus for that. I want to know whether I can monitor them this way.
For example: I want to install Prometheus on one server and monitor the rest of the servers using agents, but I can't find any documentation about Prometheus agents.
Thanks for your help.
Yes, you can. You need to install Prometheus on a server (see how to install it here) and install "exporters" on the other servers (see a list of exporters here). You could start by installing the Node Exporter (see a good guide here). The Node Exporter exposes a wide variety of hardware- and kernel-related metrics to Prometheus.
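To make the server/exporter split concrete, a minimal `prometheus.yml` for scraping Node Exporter on two machines might look like the sketch below. The hostnames are placeholders; 9100 is the Node Exporter's default port.

```yaml
# Minimal prometheus.yml sketch -- hostnames are placeholders.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "server1.example.com:9100"   # node_exporter default port
          - "server2.example.com:9100"
```

Prometheus pulls metrics from these targets over HTTP, so unlike traditional agent-based monitoring there is no push agent to configure; each exporter just serves a `/metrics` endpoint.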
Closed 4 years ago.
How do I move Ignite cached data from one cluster to another?
You need to copy your persistence files. By default, they're stored in the working directory - ${IGNITE_HOME}/work. Copy that directory from the nodes of the first cluster to the nodes of the second.
Note that you need to have the same number of nodes and the same configuration for this to work.
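A rough sketch of the copy step, with temp directories standing in for the real `${IGNITE_HOME}` paths (the `db/node01/cache-myCache` layout below is illustrative, not a guaranteed on-disk format):

```shell
# Sketch: move Ignite native-persistence files between clusters.
# Real paths depend on your IGNITE_HOME; temp dirs stand in for them here.
set -e

SRC=$(mktemp -d)   # old cluster's ${IGNITE_HOME}/work
DST=$(mktemp -d)   # new cluster's ${IGNITE_HOME}

# What a persistence directory roughly looks like (illustrative layout):
mkdir -p "$SRC/db/node01/cache-myCache"
echo "partition data" > "$SRC/db/node01/cache-myCache/part-0.bin"

# 1. Deactivate and stop all nodes on both clusters first.
# 2. Copy the whole work directory verbatim, preserving attributes:
cp -a "$SRC" "$DST/work"

# 3. Start the new cluster's nodes (same node count, same configuration).
ls "$DST/work/db/node01/cache-myCache"
```

The key point from the answer above carries over: copy the directory as-is, node for node, with the clusters stopped; partial copies or mismatched node counts will not work.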
Closed 8 years ago.
I'm running Storm and Trident (on Storm) in a local cluster. My goal is to compare them, and I'd like to do it by comparing execution time. How can I see the working time of every bolt in Storm and in Trident?
The Storm UI that runs on the Nimbus server can show you this. If you don't already have it running, check out these instructions for details on how to run it (they should be at the bottom of the page).
If you're trying to do this test in local mode, though, and don't have the UI, I'd recommend you not even bother. Local mode is not really representative of what kind of performance you'll see on a cluster once all your workers have started and are processing tuples.
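For reference, pointing the UI at your cluster mostly comes down to a couple of lines in `storm.yaml`; the host name below is a placeholder, and the key is `nimbus.host` on older Storm releases versus `nimbus.seeds` on newer ones.

```yaml
# storm.yaml sketch -- hostname is a placeholder.
nimbus.seeds: ["nimbus.example.com"]   # nimbus.host on older Storm versions
ui.port: 8080                          # default Storm UI port
```

With that in place, `storm ui` on the Nimbus host serves per-topology pages that break down execute latency, process latency, and tuple counts per bolt.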
Closed 9 years ago.
Storm can be used to process big data. We have information about Storm, but we don't know whether it can be configured on the Windows operating system. If anyone could answer, it would be very useful to us.
Thank you
Yes, you can. Check out this article and see whether it helps: Running Apache Storm on Windows.
Closed 7 years ago.
I haven't had extensive use of a VPS before. I was wondering: if I purchase one node from Linode, can I deploy multiple instances on it, similar to Amazon EC2? Or would I have to purchase another Linode separately?
Thanks!
Yes, you would have to purchase another Linode. You're paying for guaranteed resources in a particular location, so each instance is a separate Linode.
I understand what you're trying to say: as of now, multiple deployments under a single Linode VPS instance are not possible; each instance requires its own Linode.