How to control the CPU usage of an app on OS X? - performance

I'm running an application right now that seems to be running at full throttle, but even though the fan seems to be spinning at its max and Activity Monitor reports that the application is using 100% of the processor, I suspect that at most it is using 100% of only one of the two cores on my machine.
How can I tell OS X to let an application use 100%, or as much as the OS can allow, of the processing power of my computer? I have tried terminal commands like "nice" and "renice" to set the priority of this process, but I still can't get it to run at full throttle.
I would also like to know how to do the opposite: set a limit on the processor usage of an app, for example, set app X to run at 20%.
Is this possible to do without modifying the code of the app?

The answer to this depends upon whether your application is multi-threaded or not. If it is a single-threaded application (which it is, unless you have specifically made it multi-threaded), then the process will run on one core of your multi-core hardware at a time. There is nothing you can do about this; it's a function of the underlying operating system.
If your program is multi-threaded, then it is possible to have different threads executing on separate cores. This will increase the overall CPU usage of the process and allow figures greater than 100%.
You cannot, however, force the machine to use 'all' of the processing power available, but you can influence it with nice.
To reduce the amount of processor time the process gets, you can use nice to lower its priority. If you are root, you can also use nice to increase the priority of your process.
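If you would rather adjust the priority programmatically than from the shell, the same nice value can be set with the POSIX setpriority() call. A minimal sketch, assuming a POSIX system (OS X or Linux) and a hypothetical target PID; as with renice, only root can raise priority (set a negative nice value):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/resource.h>

    int main(void)
    {
        pid_t target = 12345;   /* hypothetical PID of the process to renice */

        errno = 0;
        int current = getpriority(PRIO_PROCESS, target);
        if (errno != 0) {
            fprintf(stderr, "getpriority: %s\n", strerror(errno));
            return 1;
        }
        printf("current nice value: %d\n", current);

        /* A higher nice value means a lower scheduling priority; only root
           may set a negative (higher-priority) value. */
        if (setpriority(PRIO_PROCESS, target, 10) != 0) {
            fprintf(stderr, "setpriority: %s\n", strerror(errno));
            return 1;
        }
        return 0;
    }

Note that, like nice itself, this only changes how the scheduler arbitrates between competing processes; it does not make a single-threaded program use more than one core.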

Related

Do Operating Systems slow down program execution?

This question is about operating systems in general. Is there any necessary mechanism in the implementation of operating systems that impacts the flow of instructions my program sends to the CPU?
For example, if my program were set to maximum priority in the OS, would it perform exactly the same as when run without an OS?
Is there any necessary mechanism in the implementation of operating systems that impacts the flow of instructions my program sends to the CPU?
Not strictly necessary mechanisms (depending on how you define "OS"), but typically there are IRQs, exceptions, and task switches.
IRQs are used by devices to ask the OS (their device driver) for attention, interrupting the flow of instructions your program sends to the CPU. The alternative is polling, which wastes a huge amount of CPU time checking whether a device needs attention when it probably doesn't. Because applications need to use devices (file IO, keyboard, video, etc.) and wasting CPU time is bad, IRQs significantly improve the performance of applications.
Exceptions (like IRQs) also interrupt the normal flow of instructions. They occur when the normal flow of instructions can't continue, either because your program crashed or because your program needs something. The most common cause of exceptions is virtual memory (e.g. using swap space to let the application use more memory than physically exists so that it can work at all; the exception tells the OS that your program tried to access memory that has to be fetched from disk first). In general this also improves performance, for multiple reasons: "can't execute because there's not enough RAM" can be considered "zero performance", and various tricks reduce RAM consumption and increase the amount of RAM that can be used for things like caching files, which improves file IO speed.
Task switches are the basis of multi-tasking (e.g. being able to run more than one application at a time). If there are more tasks that want CPU time than there are CPUs, then the OS (scheduler) may, depending on task priorities and scheduler design, switch between them so that all the tasks get some CPU time. However, most applications spend most of their time waiting for something (e.g. waiting for the user to press a key) and don't need CPU time while waiting; and if the OS is only running one task, the scheduler does nothing (there are no task switches because there's no other task to switch to). In other words, if the OS supports multi-tasking but you're only running one task, it makes no difference.
Note that in some cases, IRQs and/or tasks are also used to "opportunistically" do work in the background (when hardware has nothing better to do) to improve performance (e.g. pre-fetch, pre-process and/or pre-calculate data before it's needed so that the resulting data is available instantly when it is needed).
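If you want to see how often your own program is actually switched out by the scheduler, most POSIX systems will tell you. A small sketch (assuming Linux or OS X) using getrusage(), which reports voluntary context switches (the program blocked, e.g. waiting for IO or a key press) and involuntary ones (the scheduler preempted it to run something else):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Burn a little CPU so there is something to measure. */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            x += i;

        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) != 0) {
            perror("getrusage");
            return 1;
        }
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }

On an otherwise idle machine the involuntary count stays small, which is the point made above: the mechanisms exist, but with nothing competing for the CPU they cost your program very little.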
For example, if my program were set to maximum priority in the OS, would it perform exactly the same as when run without an OS?
It's best to think of it as many layers: hardware and devices (CPU, etc.), with the kernel and device drivers on top, and applications on top of that. If you remove any of the layers, nothing works (e.g. how can an application read and write files when there's no file system and no disk device drivers?).
If you shift all of the functionality that an OS provides into the application (e.g. a statically linked library that can make an application boot on bare metal), then if the functionality is the same, the performance will be the same.
You can only improve performance by reducing functionality. For example, if you get rid of security you'll improve performance (temporarily, until your application becomes part of an attacker's botnet and performance becomes significantly worse due to all the bitcoin mining it's doing). In a similar way, you can get rid of flexibility (reboot the computer when you plug in a different USB flash stick), or fault tolerance (trash all of your data without any warning when the storage devices start failing because software assumed hardware is permanently perfect).

Does Process Overhead Make Linux a Better Node.js Host than Windows?

My understanding is that node.js is designed to scale by adding processes rather than by spawning threads in a process. In fact, from watching an awesome introductory video by Ryan Dahl, I get the idea that spawning threads is forbidden in node.js. I like the simplicity of this approach, but I am concerned that there might be a downside when running on Windows, since process creation is more expensive on Windows than on Linux.
Given modern hardware and the fact that node.js processes can be expected to be relatively long running, does process overhead still create a significant advantage for Linux when considering hosting node.js? To put it in concrete terms, if we assume an organization that is using the Windows stack only, but is planning a big move onto node.js, is there a point in considering a new OS because of this issue?
No. Node.js runs in only one process and doesn't spawn processes during execution.
The reason you might have gotten the impression that node uses processes to scale is that you can add a process per CPU core to let node take advantage of your multicore computer (you'll need a load-balancer-like solution for this, though). Still: you don't spawn processes on the fly. So yes, you can run node perfectly fine on Windows (or Azure) without much of a performance hit (if any).
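The process-per-core idea itself is not node-specific, so as an illustration only, here is a minimal sketch in C (assuming a POSIX system): detect the number of online CPUs and fork one long-running worker per core. In node the cluster module, or separate node processes behind a load balancer, plays the role of the fork loop, and the worker body here is just a placeholder:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Placeholder for the real work; in node this would be the server loop. */
    static void worker_loop(int id)
    {
        printf("worker %d running as pid %d\n", id, (int)getpid());
        for (;;)
            pause();    /* a real worker would serve requests here */
    }

    int main(void)
    {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* number of online CPUs */
        if (cores < 1)
            cores = 1;

        /* One long-lived worker per core, started up front,
           not spawned on the fly per request.              */
        for (long i = 0; i < cores; i++) {
            pid_t pid = fork();
            if (pid == 0)
                worker_loop((int)i);    /* child never returns */
            else if (pid < 0)
                perror("fork");
        }

        while (wait(NULL) > 0)          /* parent waits for the workers */
            ;
        return 0;
    }

The point of the sketch is the shape of the approach: the expensive process creation happens once at startup, which is why, even on Windows where it costs more, it is not a recurring penalty.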

Is there any way to find out and/or limit GPU usage by process in Windows?

I'd like to launch CPU and GPU intensive process on some machines, but these processes must not interfere with user's tasks. So I need to limit or at least detect GPU usage by my processes. These processes are closed-source, so I can't watch GPU usage from inside.
The answer to your subject-line question is: yes (on Windows Vista and up); use Process Explorer from Microsoft to monitor per-process GPU usage. NVIDIA's Parallel Nsight can do this as well. Now, the body of your question sounds like you want to do this remotely; unfortunately I'm not aware of a way to do that. Still, hopefully this will be of some use to you.
Edit to add: if you fire up Process Explorer, I don't think it shows the GPU stats by default; to get them, right-click on the column headers and add them.
The GPU is a resource that can only be used by one program at a time. If another process is using the GPU, then you can't get access to it.
A program may run multiple GPU kernels at the same time, but it's up to that program how those get run. There's no real concept of scheduling like there is with the OS and CPU processes.
Some vendors may have a way for you to check on the status of the device, like # cores in use, heat, fan speed, etc, but that won't let you change what's happening on it, and it will be specific to each vendor/device.

Will a task be completed faster if it is the active window?

I have heard a myth that a job will finish faster if it is kept as the active window, rather than being in the background or minimized.
Is there any truth to this? Does the CPU give precedence to tasks whose window is active?
Thanks,
On Windows the foreground application gets a priority boost. This is to help it maintain responsiveness to the user and makes sure that when it's ready to run after waiting for some I/O event, it'll get to run next, ahead of most other applications that might be waiting to run.
There is also a potential for a longer quantum for foreground applications.
I don't know how much faster an application would complete if it's run in the foreground instead of the background - there are so many factors that would go into this (particularly I/O). The intent is to make the application more responsive.
This is all configurable to some extent (maybe only on server SKUs):
http://support.microsoft.com/kb/259025
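The boost described above is applied by Windows automatically, but if you want to hand a specific process the same kind of advantage yourself, the priority class can be set programmatically. A minimal sketch using the Win32 SetPriorityClass call on the current process; this is illustrative only and is not what the scheduler's foreground boost does internally:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Raise the current process one step above normal; when both are
           ready to run, the scheduler will prefer its threads over those
           of NORMAL_PRIORITY_CLASS processes.                            */
        if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS)) {
            fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
            return 1;
        }
        printf("priority class is now 0x%lx\n",
               GetPriorityClass(GetCurrentProcess()));
        return 0;
    }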
It depends on your setup. On a default Windows desktop operating system, this is true. On a Windows server operating system (like Windows 2003) this is not true.
You can change the setting by going into System Properties and clicking on the Performance tab. The exact layout differs depending on the Windows version, but you should see (or be able to see by clicking an Advanced sub-tab or finding "Scheduler") either a radio/combo choice between "Workstation" and "Server" configuration, or a choice between prioritizing Programs or Background Services. In both cases these are the same thing (just different language: the Server/Workstation wording is from Windows 2000, while Programs/Services was created for the more consumer-oriented XP): they determine whether the scheduler gives extra importance to the thread of the topmost window, or whether all threads are treated equally (based on the thread priority property).
Windows allows you to give the "foreground" task a priority advantage, so it may be no myth. You can also set it the other way, to give "service" tasks the priority advantage instead, so it depends on the install.
Note that this only affects the priority... if there are no other tasks running, it won't run noticeably different in either case. It's only when there is another application that needs CPU time that you might notice the difference.
That's partially true on Windows. Windows will assign a GUI application whose window is on top a slightly higher priority. So if there are other tasks with normal or lower priority, the program could really run a bit faster, at the expense of other programs running a bit slower.
There's a catch, however. When you start a compilation in the Visual Studio IDE, the IDE spawns a separate process for the compilation and only redirects its output to its own window. Since the compilation process has no window of its own, it will not gain the boost.

Is there any way of throttling CPU/Memory of a process?

Problem: I have a developer's machine (read: fast, lots of memory), but the user has a user's machine (read: slow, not very much memory).
I can simulate a slow network using Fiddler (http://www.fiddler2.com/fiddler2/)
I can look at how CPU is used over time for a process using Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx).
Is there any way I can restrict the amount of CPU a process can have, or the amount of memory a process can have in order to simulate a users machine more effectively? (In order to isolate performance problems for instance)
I suppose I could use a VM, but I'm looking for something a bit lighter.
I'm using Windows XP, but a solution for any Windows machine would be welcome. Thanks.
The Platform SDK used to come with stress tools for doing just this back in the good old days (STRESS.EXE and CPUSTRESS.EXE in the SDK), and they might still be there: check your Platform SDK and/or Visual Studio installation for these two files (unfortunately I have neither the PSDK nor VS installed on the machine I'm typing from).
Other tools:
memory: performance & reliability (e.g. handling failed memory allocation): can use EatMem
CPU: performance & reliability (e.g. race conditions): can use CPU Burn, Prime95, etc
handles (GDI, User): reliability (e.g. handling failed GDI resource allocation): there may be no ready-made tool and you may have to write your own, but running out of GDI handles (buggy GTK apps would usually eat them all away until all other apps on the system started falling dead like flies) is a real test for any Windows app
disk: performance & reliability (e.g. handling disk full): DiskFiller, etc.
AppVerifier has a low-resource simulation feature.
You could also try setting the priority of your process to be very low.
You can run MemAlloc to chew up RAM, possibly a few copies at once.
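If you cannot find those tools, the 'chew up RAM' trick is simple enough to write yourself. A minimal sketch of the idea behind MemAlloc/EatMem: allocate memory in fixed-size chunks, touch every page so it is actually committed, and hold it until you exit (the 512 MB target is just an example figure):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK_BYTES  (16UL * 1024 * 1024)    /* allocate in 16 MB chunks */
    #define TARGET_BYTES (512UL * 1024 * 1024)   /* example target: 512 MB   */

    int main(void)
    {
        size_t held = 0;
        while (held < TARGET_BYTES) {
            char *chunk = malloc(CHUNK_BYTES);
            if (chunk == NULL) {
                fprintf(stderr, "allocation failed after %lu MB\n",
                        (unsigned long)(held >> 20));
                break;
            }
            /* Touch every page so the memory is committed, not just reserved. */
            memset(chunk, 0xAB, CHUNK_BYTES);
            held += CHUNK_BYTES;
        }
        printf("holding %lu MB; press Enter to release and exit\n",
               (unsigned long)(held >> 20));
        getchar();      /* memory is released when the process exits */
        return 0;
    }

Run one or more copies alongside the application under test to squeeze its available memory, which is exactly the "possibly a few copies at once" suggestion above.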
I found a related question:
Set Windows process (or user) memory limit
The accepted answer for that question points to the Windows API call SetProcessWorkingSetSize, so it's not exactly a tool, but it is an API that can limit the amount of memory (strictly, the working set) that a process can use.
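For completeness, calling it is straightforward. A minimal sketch that caps the working set of the current process, i.e. how much of it Windows keeps resident in physical RAM before paging the rest out; the 32 MB and 64 MB figures are placeholders:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T minWorkingSet = 32 * 1024 * 1024;   /* placeholder: 32 MB */
        SIZE_T maxWorkingSet = 64 * 1024 * 1024;   /* placeholder: 64 MB */

        /* Windows keeps at least the minimum resident and trims the process
           back toward the maximum when physical memory is under pressure.  */
        if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                      minWorkingSet, maxWorkingSet)) {
            fprintf(stderr, "SetProcessWorkingSetSize failed: %lu\n",
                    GetLastError());
            return 1;
        }
        printf("working set limited to %lu-%lu MB\n",
               (unsigned long)(minWorkingSet >> 20),
               (unsigned long)(maxWorkingSet >> 20));
        return 0;
    }

Note this limits resident RAM rather than total allocations; the process can still commit more and will simply page more, which can approximate the "slow user machine" behaviour you want to simulate.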
In terms of changing the amount of CPU resources a process can use, if you don't mind the granularity of per-core limiting of resources, Task Manager can change the processor affinity of a process.
In Task Manager, right-click a process and select "Set Affinity...", then select the processor cores that the process can be assigned to.
If the development machine has many cores but the user machine only has one, then, rather than allowing the process to run on all the available cores, set the process' processor affinity to only one core.
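If you need the same restriction to be repeatable or scripted, the affinity can also be set from code. A minimal sketch that pins the current process to core 0 with the Win32 SetProcessAffinityMask call, equivalent to ticking a single box in the Task Manager dialog:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bit 0 set: the process may only be scheduled on CPU core 0,
           roughly simulating a single-core user machine.              */
        DWORD_PTR singleCoreMask = 0x1;

        if (!SetProcessAffinityMask(GetCurrentProcess(), singleCoreMask)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                    GetLastError());
            return 1;
        }
        printf("process restricted to CPU core 0\n");
        return 0;
    }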
It has nothing to do with SetProcessWorkingSetSize.
Just use the internal Win32 kernel APIs to restrict CPU usage.
