Tools for scheduler debugging in Linux [closed] - linux-kernel

I have an embedded Linux system with two threads that must run in real time (or soft real time). Under SCHED_OTHER, I noticed a lot of jitter, but the two threads always executed within their allotted time.
I have applied the RT patch with PREEMPT_RT enabled. Running those two threads with SCHED_FIFO (at a high thread priority of ~80) leads to much less jitter; it is overall much better, except that once in a while both threads miss their deadline. Instead of executing every 10 ms or so, they may not get scheduled for almost a second!
I wanted to ask which tool is best for debugging Linux scheduling (under RT) on an embedded Linux OS. ftrace came to mind, but I don't know whether it is the best, or the only, tool. My goal is to find out why the two threads occasionally go unscheduled for such a long time.
UPDATE: I've been running ftrace today with wakeup_rt. As a tracer it didn't get the job done: the maximum latency it recorded was 5 ms, while my thread can run up to 1000 ms late. Maybe it is not a scheduler issue? Which other ftrace tracer would you recommend?

Try rt-app, which is the tool used by the scheduler developers themselves.
You might also want to use SCHED_DEADLINE instead of SCHED_FIFO to reduce your jitter; a sketch of switching a thread over follows below.
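As a minimal sketch, here is one way to put the calling thread under SCHED_DEADLINE via the sched_setattr() syscall. It assumes a kernel >= 3.14 and headers that define SYS_sched_setattr; glibc has historically provided no wrapper, so the struct is declared by hand. The 10 ms period matches the cadence described in the question, while the 2 ms runtime budget is an assumption you would tune to your actual worst-case execution time.

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef SCHED_DEADLINE
    #define SCHED_DEADLINE 6
    #endif

    /* Declared by hand: glibc has no wrapper for sched_setattr(). */
    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;      /* for SCHED_OTHER/SCHED_BATCH */
        uint32_t sched_priority;  /* for SCHED_FIFO/SCHED_RR */
        uint64_t sched_runtime;   /* for SCHED_DEADLINE, all in ns */
        uint64_t sched_deadline;
        uint64_t sched_period;
    };

    int main(void)
    {
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_DEADLINE,
            .sched_runtime  =  2 * 1000 * 1000,  /* 2 ms CPU budget (assumed) */
            .sched_deadline = 10 * 1000 * 1000,  /* due within 10 ms          */
            .sched_period   = 10 * 1000 * 1000,  /* released every 10 ms      */
        };

        /* pid 0 = calling thread; needs root or CAP_SYS_NICE. */
        if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
            perror("sched_setattr");
            return 1;
        }

        /* ... periodic real-time work loop goes here ... */
        return 0;
    }

A nice property of SCHED_DEADLINE for your case: the kernel admission-controls deadline tasks, so if the two threads cannot be guaranteed their budgets, sched_setattr() fails up front instead of silently starving them for a second at a time.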

Related

Best Process Monitor In Ruby [closed]

From your experience with process monitoring in Ruby, what would you recommend as the best process monitor? These are some of the features I'm interested in:
Efficient memory management without memory leaks
Monitor processes that are consuming a huge amount of RAM and automatically restart them
Optimal uptime, i.e. automatically restart processes when they die for some reason
Easy debugging, i.e. the process should still be able to log to a log file
I have used the Eye gem in one of our production apps, where it has been running for the past 3 years. We haven't experienced any memory issues with it, although we don't do heavy computational tasks with it.
Eye was inspired by God and Bluepill. So far I haven't experienced any memory leaks with Eye. The Eye process itself is super lightweight, using just a few kilobytes of memory and less than 1% of CPU.
Eye also gives you various features such as easy debugging, memory monitoring for processes, CPU monitoring, nested process configuration, mask matching, etc.
Eye is awesome; I do recommend it.

Why does Go take so much CPU to build a package? [closed]

I have downloaded a medium-sized Go package from GitHub. When compiling it from source, my computer slows down, because there is more than one Go compile process running and together they take up much of the CPU.
How does Go compile concurrently?
Is there a parameter to tune the number of CPUs it uses when compiling?
Go uses a lot of CPU because it is trying to build as fast as possible, like any other compiler. It may also be that you're using a package that uses cgo, which can drastically increase compile times, as compiling medium-to-large C libraries is often quite intensive.
You can control the number of processes Go uses by setting the GOMAXPROCS environment variable, e.g. GOMAXPROCS=1 go get ... to limit Go to a single process (and thus a single CPU core). This does not, however, affect the number of processes used by any external compilers that cgo may invoke.
If you need further CPU control, on Unix-based systems you can use the nice command to lower the build's priority so that other programs get the CPU first, making your computer less sluggish.

Cloud performance vs Desktop [closed]

I have developed an app to analyze videos using OpenCV and Visual Studio 2013. I was planning to run this app in Azure, assuming it would run faster in the cloud. However, to my surprise, the app ran slower than on my desktop, taking about twice the time, even when I configured the Azure instance with 8 cores. It is a 64-bit app, compiled with appropriate compiler optimizations. Can someone help me understand why I am losing time in the cloud, and whether there is a way to improve the timing there?
The app takes a video as input (stored locally in each case) and outputs a flat file with the analysis data.
I am not sure why people are voting to close this question. It is very much about programming, and if possible, please help me pinpoint the problem.
There are only going to be three reasons for this:
Disk IO speed
CPU speed
Memory speed
Take a look here to see someone who actually compared on-premise performance with the cloud: Azure compute power: Extra Large VM slow
Basically, the clock speed is most likely lower (around 1.6 GHz), and disk IO, while local, is normally capped at 300 or 500 IOPS, which is only just higher than a 15k RPM drive and nowhere near SSD level.
I am not sure about memory speed. While you can keep adding cores, most programs, even ones optimized for multiple cores, have many single-threaded dependencies that slow the whole operation down. A higher clock speed is what can make a large difference. A quick way to rule disk IO in or out is to time a raw sequential read on both machines, as in the sketch below.
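A minimal sketch, assuming a POSIX environment and a pre-existing large test file (testfile.bin is a placeholder path): it measures sequential-read throughput so you can compare the desktop against the Azure VM directly. Use a file larger than RAM, or drop the page cache first, so caching doesn't flatter the result.

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[1 << 20];          /* read in 1 MiB chunks */
        const char *path = "testfile.bin"; /* placeholder: a file larger than RAM */
        long long total = 0;
        struct timespec t0, t1;

        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0) break;             /* 0 = EOF, -1 = error */
            total += n;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("read %lld bytes in %.2f s (%.1f MB/s)\n",
               total, secs, total / secs / 1e6);
        return 0;
    }

If the cloud number comes out far below the desktop's, the IOPS cap is the likely culprit; if the read speeds match, the lower per-core clock speed is the better suspect.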

To analyse CPU consumption of Firefox addons in SO [closed]

I currently identify inefficient Firefox addons by uninstalling them and observing empirically, over a long run, whether the addon was the problem. This way of finding inefficient addons is very time-consuming.
I would like exact, numerical ways to see:
the CPU consumption of each addon independently in Firefox
the CPU consumption of two different addons at the same time in Firefox (note that it is not practical to keep just two addons in the browser at a time and measure over the long term)
Measuring only CPU, and not memory consumption at all, is apparently enough to keep the tests simple.
Is there any tool to measure the CPU consumption of any combination of two addons from a set?
No, unfortunately there is no such tool. The closest things are various profiling tools (like Venkman), which can show you the time spent in various JS functions, but aggregating that data to determine whether an extension is inefficient will be tricky.
Mozilla also uses dtrace on Mac (with special builds of Firefox and special dtrace scripts) to analyze performance. I imagine it could be adapted for this too.
There is a Firefox add-on to see memory usage: about addons memory.
Install the addon and open the page about:addons-memory; it will display the memory usage of every installed addon (including Firefox's native addons).
You might also be interested in tab memory usage, which displays the memory usage of each open tab.

What hardware changes will affect WebLogic performance the most? [closed]

When requesting hardware for a WebLogic server, what hardware would best improve its performance? Should I give it lots of memory, CPU, fast hard drives? The OS is going to be Red Hat 4, either Standard or Enterprise.
Memory is cheap. Give it as much as you can. 4 gigs is what, $50?
It depends on the applications you deploy on it. The question is impossible to answer given that we know nothing about the apps deployed, the hardware you have, or the myriad settings available on a typical Java EE app server.
More is always better. Supply the most memory, the biggest hard drives, the fastest multicore processors you can afford.
That of course depends a lot on what type of applications you run on the server. I know that our WebLogic portal eats quite a lot of memory (10+ gigs) while other apps make do with a lot less.
