I want to monitor several servers for my enterprise and I'm planning to use Prometheus for that. Is that possible?
For example: I want to install Prometheus on one server and monitor the rest of the servers using agents, but I can't find any documentation about Prometheus agents.
Thanks for your help.
Yes, you can. You need to install Prometheus on one server (see how to install it here) and install "exporters" on the other servers (see a list of exporters here). You could start by installing the Node Exporter (see a good guide here). The Node Exporter exposes a wide variety of hardware- and kernel-related metrics to Prometheus.
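To make the pull model concrete: an exporter is simply an HTTP endpoint that serves metrics in the Prometheus text format, and the Prometheus server scrapes it on a schedule. Below is a minimal sketch of a toy exporter written with the official prometheus_client Python library; the metric name and port are arbitrary choices for illustration, while the Node Exporter does the same thing for real hardware and kernel metrics.

```python
# Minimal sketch of a custom exporter using the official prometheus_client
# library (pip install prometheus_client). The metric name and port 9100
# are illustrative choices; node_exporter serves real hardware/kernel
# metrics on the same principle.
import time
from prometheus_client import Gauge, start_http_server

uptime_seconds = Gauge("demo_uptime_seconds",
                       "Seconds since this demo exporter started")

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://<this-host>:9100/metrics
    started = time.time()
    while True:
        uptime_seconds.set(time.time() - started)
        time.sleep(5)
```

On the Prometheus server you then list each exporter's host and port as a target under scrape_configs in prometheus.yml, and the metrics appear in the Prometheus UI shortly after.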
I'm running Storm and Trident-Storm on a local cluster. My goal is to compare them by execution time. How can I see the processing time of every bolt in Storm and in Trident-Storm?
The Storm UI that runs on the nimbus server can show you this. If you don't already have it running, check out these instructions for details on how to run it (they should be at the bottom of the page).
If you're trying to do this test in local mode, though, and don't have the UI, I'd recommend you not even bother. Local mode is not really representative of what kind of performance you'll see on a cluster once all your workers have started and are processing tuples.
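If you would rather pull those numbers programmatically than read them off the UI page, newer Storm versions also expose the UI data over a REST API. The sketch below is a rough illustration that assumes the UI is reachable on localhost:8080; the field names follow the documented /api/v1/topology endpoints, but double-check them against your Storm version.

```python
# Rough sketch: list per-bolt execute latencies via the Storm UI REST API.
# Assumes the UI runs on localhost:8080; adjust host/port for your cluster.
import requests

UI = "http://localhost:8080/api/v1"

summary = requests.get(UI + "/topology/summary").json()
for topo in summary["topologies"]:
    detail = requests.get(UI + "/topology/" + topo["id"]).json()
    print(topo["name"])
    for bolt in detail.get("bolts", []):
        # executeLatency: average time (ms) a tuple spends in the bolt's execute()
        print("  %s: execute latency %s ms" % (bolt["boltId"], bolt["executeLatency"]))
```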
Apache Storm can be used to process big data. We have read up on Storm, but we don't know whether it can be set up on the Windows operating system. If anyone could answer, it would be very useful to us.
Thank you
Yes, you can. Check out this article and see whether it helps you: Running Apache Storm on Windows.
I deployed Hadoop 2 (HDP2) and would like to get the disk I/O metrics of my slave nodes into Ganglia. I haven't found any relevant metric so far.
Which metrics do you suggest using or adding to Ganglia?
Thanks
You can use the Ganglia Python module diskstat provided here.
Cloudera Manager can help monitor the cluster status, including disk I/O.
As stated by @vishnu, you can use the Ganglia Python module diskstat provided here (a minimal sketch of such a module follows below).
You can also use gmetric scripts that you would run from cron, from here.
Finally, you can use collectl and export disk metrics to Ganglia; see here.
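For orientation, a gmond Python module like the diskstat one above is just a file dropped into gmond's python_modules directory that defines metric_init and metric_cleanup. The sketch below is a simplified, hypothetical example that reports sectors read for a single device from /proc/diskstats; the device name, metric name, and update interval are assumptions you would adapt.

```python
# Hypothetical, simplified gmond Python metric module in the spirit of the
# diskstat module linked above: reports sectors read for one device.
DEVICE = "sda"  # assumption: adapt to your disk

def _sectors_read():
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == DEVICE:
                return int(fields[5])  # field 6 of /proc/diskstats: sectors read
    return 0

def read_sectors_callback(name):
    return _sectors_read()

def metric_init(params):
    # gmond calls this once and expects a list of metric descriptors.
    return [{
        "name": "diskstat_%s_sectors_read" % DEVICE,
        "call_back": read_sectors_callback,
        "time_max": 60,
        "value_type": "uint",
        "units": "sectors",
        "slope": "positive",
        "format": "%u",
        "description": "Sectors read from %s" % DEVICE,
        "groups": "disk",
    }]

def metric_cleanup():
    pass
```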
I'm looking for a cloud solution that allows me to deploy a Linux virtual host remotely and use it for security testing (i.e. port scanning, etc.). When not in use, it could perhaps act as a honeypot. I really like Amazon's pay-for-what-you-use approach. Has anyone here used Amazon's services in a similar fashion?
Any suggestions?
I haven't, but my comment on Amazon's services is that they can rack up costs very quickly, and it is hard to control those costs because there are too many variables.
Unless you need high resilience, I would recommend simply using a VPS.
Also make sure that, whoever you use, you carefully check the terms and conditions as most providers will not be happy about you doing port scanning from their service.
I haven't used a VPS extensively before. I was wondering: if I purchase one node from Linode, can I deploy multiple instances on it, similar to Amazon EC2? Or would I have to purchase another Linode separately?
Thanks!
Yes, you would: you're paying for guaranteed resources in a particular location, so another Linode it is.
I understand what you're trying to say; deploying multiple instances under a single Linode VPS instance is not possible as of now.