From your experience with process monitoring in Ruby, what would you recommend as the best process monitor? These are some of the features I'm interested in:
Efficient memory management without memory leaks
Monitor processes that are consuming a huge amount of RAM and automatically restart them
Optimum uptime, i.e. automatically restart processes when they die for some reason
Easy debugging, i.e. the process should still be able to log to a log file
I have been using the Eye gem in one of our production apps, and it has been running for the past 3 years. We haven't experienced any memory issues with it, although we don't do heavy computational tasks with it.
Eye was inspired by God and Bluepill. So far, I haven't experienced any memory leaks with Eye. The Eye process itself is super lightweight; it uses just a few kilobytes of memory and less than 1% of CPU.
Eye also gives you various features such as easy debugging, per-process memory monitoring, CPU monitoring, nested process configuration, mask matching, etc.
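For reference, here is a minimal sketch of an Eye config with a memory check; the application name, paths and thresholds are placeholders, not taken from a real app:

    # my_app.eye -- illustrative values only
    Eye.application "my_app" do
      working_dir "/var/apps/my_app"
      stdall "log/eye_processes.log"          # processes keep logging to a file

      process :worker do
        pid_file "tmp/pids/worker.pid"
        start_command "bundle exec ruby worker.rb"
        daemonize true

        # restart the worker if it stays above 300 MB for 3 consecutive checks
        check :memory, every: 30.seconds, below: 300.megabytes, times: 3
      end
    end

Load it with "eye load my_app.eye"; "eye info" then shows the monitored processes. A process that dies is restarted automatically, which covers the uptime requirement, and the memory check handles the runaway-RAM case.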
Eye is awesome, I do recommend it.
I have an embedded Linux system with two threads that must run in real time (or soft real time). When using SCHED_OTHER, I noticed a lot of jitter, but the two threads always executed within their allocated time.
I have applied the RT patch with PREEMPT_RT enabled, and running those two threads with SCHED_FIFO (at a high thread priority of ~80) gives a lot less jitter. Overall it's a lot better, except that once in a while both threads miss their deadline: instead of executing every 10 ms or so, they may not get scheduled for almost a second!
I wanted to ask which tool is best for debugging Linux scheduling (under RT) on an embedded Linux OS. ftrace came to mind, but I don't know whether it is the best or only tool. My goal is to find out why the two threads occasionally don't get scheduled for such a long time.
UPDATE: I've been running ftrace today with wakeup_rt. The wakeup_rt tracer didn't get the job done: the maximum latency it recorded was 5 ms, while my thread can run up to 1000 ms late. Maybe it is not a scheduler issue? What other tracer in ftrace would you recommend?
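(As a sketch, one alternative to the wakeup_rt tracer is recording the raw scheduler tracepoints, e.g. with trace-cmd, and looking at what was running around a missed deadline; the sched:* events below are the standard tracepoints and the 30-second duration is arbitrary:)

    # record context switches and wakeups system-wide for 30 seconds
    trace-cmd record -e sched:sched_switch -e sched:sched_wakeup sleep 30

    # inspect what ran (or pre-empted the RT threads) around the late wakeups
    trace-cmd report | less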
Try using rt-app, which is used by the scheduler developers.
You might also want to use SCHED_DEADLINE instead of SCHED_FIFO to reduce your jitter.
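As a rough sketch (not a drop-in fix), a periodic thread can switch itself to SCHED_DEADLINE with the raw sched_setattr(2) syscall on Linux 3.14 or later; the 2 ms runtime / 10 ms deadline / 10 ms period below are placeholder values matching the roughly-10 ms cadence described in the question:

    /* Sketch: put the calling thread under SCHED_DEADLINE via the raw
     * sched_setattr(2) syscall (Linux >= 3.14; needs CAP_SYS_NICE/root).
     * Size sched_runtime to the thread's worst-case execution time. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef SCHED_DEADLINE
    #define SCHED_DEADLINE 6
    #endif

    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;       /* used by SCHED_OTHER */
        uint32_t sched_priority;   /* used by SCHED_FIFO/RR */
        uint64_t sched_runtime;    /* SCHED_DEADLINE: CPU budget per period, ns */
        uint64_t sched_deadline;   /* relative deadline, ns */
        uint64_t sched_period;     /* period, ns */
    };

    int main(void)
    {
        struct sched_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size           = sizeof(attr);
        attr.sched_policy   = SCHED_DEADLINE;
        attr.sched_runtime  =  2 * 1000 * 1000;  /*  2 ms of CPU time...    */
        attr.sched_deadline = 10 * 1000 * 1000;  /*  ...due within 10 ms... */
        attr.sched_period   = 10 * 1000 * 1000;  /*  ...every 10 ms         */

        if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {  /* 0 = this thread */
            perror("sched_setattr");
            return 1;
        }

        /* periodic real-time work loop would go here */
        return 0;
    }

If the requested bandwidth cannot be guaranteed, the call fails (admission control), which makes missed slots easier to reason about than with plain SCHED_FIFO priorities.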
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
Improve this question
I have developed an app to analyze videos using OpenCV and Visual Studio 2013. I was planning to run this app in Azure, assuming that it would run faster in the cloud. However, to my surprise, the app ran slower than on my desktop, taking about twice the time even when I configured the Azure instance with 8 cores. It is a 64-bit app, compiled with appropriate compiler optimizations. Can someone help me understand why I am losing time in the cloud, and whether there is a way to improve the timing there?
The app takes as input a video (locally in each case) and outputs a flat file with the analysis data.
I am not sure why people are voting to close this question. It is very much about programming, and if possible, please help me pinpoint the problem.
There are only going to be three reasons for this:
Disk IO speed
CPU Speed
Memory Speed
Take a look here to see someone who actually compared on-premise performance with the cloud: Azure compute power: Extra Large VM slow
Basically, the clock speed is most likely lower (around 1.6 GHz), and disk IO, while local, is normally capped at 300 or 500 IOPS, which is only just higher than 15k RPM drives and nowhere near SSD level.
I am not sure about memory speed. While you can keep adding cores, most programs, even ones optimized for multiple cores, have a lot of dependencies on single threads, which slows the whole operation down. Increased clock speed is what can make a large difference.
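To put rough numbers on that last point (Amdahl's law, with a made-up serial fraction): if half of the video analysis is effectively single-threaded, the best an 8-core machine can manage is

    speedup = 1 / (0.5 + 0.5/8) ≈ 1.8x

so a cloud core clocked at around 1.6 GHz versus a faster desktop core can easily cancel out whatever the extra cores buy you.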
Given toolkits like Linux-HA, cluster layers on top like Corosync, file-system replicators like DRBD, and various other bits and pieces, developers have the components available to build highly available, robust systems.
High-availability architecture-level patterns are often fairly easy to describe, but I'm looking for the level(s) below that.
While each of these toolkit parts seems to be fairly well documented, and some of them show how to use them in a robust application, they don't show examples of an end-to-end or multi-resource application.
So, what are the concrete steps, patterns, recipes, etc. that should be followed in order for developed code to play nice in an environment like this?
What books, web tutorials, etc. should I point a team to in order to refactor a working single-box custom TCP server (for example) so that it runs under cluster control, writes to shared file-system space, and works in such a way that when it fails over, it has a chance to recover and keep working?
I currently analyse inefficient Firefox addons by uninstalling them and observing empirically, over a long run, whether the addon was the problem or not. However, this way of finding inefficient addons is very time-consuming.
I would like exact, numerical ways to see:
the CPU consumption for each addon independently in Firefox
the CPU consumption of two different addons at the same time in Firefox (note that it is not practical to keep just two addons installed at a time and measure their impact over the long term)
To keep tests simple, it is apparently enough to measure only CPU consumption, not memory consumption.
Is there any tool to measure CPU consumption for any combination of two addons from a set?
No, unfortunately there is no such tool. The closest things are various profiling tools (like Venkman), which can show you the time spent in various JS functions, but aggregating that data to determine whether an extension is inefficient will be tricky.
Mozilla also uses dtrace on Mac (with special builds of Firefox and special dtrace scripts) to analyze performance. I imagine it could be adapted for this too.
There is a Firefox add-on to see memory usage: about:addons-memory.
Install the addon and open the page about:addons-memory; it will display the memory usage of all installed addons (including Firefox's native addons).
You might also be interested in Tab Memory Usage, which displays memory usage for each open tab.
When requesting hardware for a WebLogic server, what hardware would best improve its performance? Should I give it lots of memory, CPU, fast hard drives? The OS is going to be Red Hat 4, either Standard or Enterprise.
Memory is cheap. Give it as much as you can. 4 gigs is what, $50?
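Note that the JVM won't touch the extra RAM unless its heap is raised to match; roughly, in WebLogic's domain start scripts (the exact variable differs by version, and the sizes here are placeholders only):

    # setDomainEnv.sh -- illustrative values; older releases set MEM_ARGS instead
    USER_MEM_ARGS="-Xms2048m -Xmx2048m"
    export USER_MEM_ARGS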
Depends on the applications you deploy on it. It's impossible to answer, given that we know nothing about the apps deployed, the hardware you have, and the myriad settings available on a typical Java EE app server.
More is always better. Supply the most memory, the biggest hard drives, the fastest multicore processors you can afford.
That of course depends a lot on what type of applications you run on the server. I know that our WebLogic portal eats quite a lot of memory (10+ gigs), while other apps make do with a lot less.