Can the Linux scheduler in the kernel run simultaneously on multiple cores? Or is there only a single scheduler which runs on each processor as and when needed?
The OS usually boots on a specific processor, core 0 by default. However, the kernel (or the scheduler) can start kernel threads to handle OS operations, and from the scheduler's perspective each CPU has its own separate data structure (i.e., a run-queue). There is a single scheduler, but it is not a single process.
Related
This question is about operating systems in general. Is there any necessary mechanism in the implementation of operating systems that impacts the flow of instructions my program sends to the CPU?
For example, if my program were set to maximum priority in the OS, would it perform exactly the same as when run without an OS?
Is there any necessary mechanism in the implementation of operating systems that impacts the flow of instructions my program sends to the CPU?
Not strictly necessary mechanisms (depending on how you define "OS"); but typically there are IRQs, exceptions and task switches.
IRQs are used by devices to ask the OS (their device driver) for attention, interrupting the flow of instructions your program sends to the CPU. The alternative is polling, which wastes a huge amount of CPU time checking whether the device needs attention when it probably doesn't. Because applications need to use devices (file I/O, keyboard, video, etc.) and wasting CPU time is bad, IRQs significantly improve the performance of applications.
Exceptions (like IRQs) also interrupt the normal flow of instructions. They occur when the normal flow of instructions can't continue, either because your program crashed or because your program needs something. The most common cause of exceptions is virtual memory (e.g. using swap space to let an application have more memory than physically exists, where the exception tells the OS that your program tried to access memory that has to be fetched from disk first). In general this also improves performance, for multiple reasons: "can't execute because there's not enough RAM" can be considered "zero performance", and various tricks reduce RAM consumption and increase the amount of RAM that can be used for things like caching files, which improves file I/O speed.
Task switches are the basis of multi-tasking (e.g. being able to run more than one application at a time). If there are more tasks that want CPU time than there are CPUs, then the OS (scheduler) may (depending on task priorities and scheduler design) switch between them so that all the tasks get some CPU time. However, most applications spend most of their time waiting for something to do (e.g. waiting for the user to press a key) and don't need CPU time while waiting; and if the OS is only running one task then the scheduler does nothing (no task switches, because there's no other task to switch to). In other words, if the OS supports multi-tasking but you're only running one task, then it makes no difference.
Note that in some cases, IRQs and/or tasks are also used to "opportunistically" do work in the background (when hardware has nothing better to do) to improve performance (e.g. pre-fetch, pre-process and/or pre-calculate data before it's needed so that the resulting data is available instantly when it is needed).
For example, if my program were set to maximum priority in the OS, would it perform exactly the same as when run without an OS?
It's best to think of it as many layers: hardware and devices (CPU, etc.) at the bottom, the kernel and device drivers on top of that, and applications on top of those. If you remove any of the layers, nothing works (e.g. how can an application read and write files when there's no file system and no disk device driver?).
If you shift all of the functionality that an OS provides into the application (e.g. a statically linked library that can make an application boot on bare metal), then if the functionality is the same, the performance will be the same.
You can only improve performance by reducing functionality. For example, if you get rid of security you'll improve performance (temporarily, until your application becomes part of an attacker's botnet and performance becomes significantly worse due to all the bitcoin mining it's doing). In a similar way, you can get rid of flexibility (reboot the computer when you plug in a different USB flash stick), or fault tolerance (trash all of your data without any warning when the storage devices start failing because software assumed hardware is permanently perfect).
I use Debian. When I run a parent process from a user in a terminal, the forked processes run on different processor cores. But when the parent process is run from rc.local with root permissions, all forked processes run on the same processor core as the parent. How can I make processes running as root be distributed across the processor cores, just as when they are run under a standard user?
I just want to know whether this statement is true or false:
"The operating system can only function if it is the executable that has the time slice." If true/false, why? Thank you for your help.
Your question becomes relevant for a single-processor, single-core machine, where only one task can execute at a time.
The operating system is just a collection of routines and services that facilitate user applications. Say app1 needs more memory, or an app needs I/O: the OS gets involved. When an app needs the OS's attention, there is a specific mechanism for telling the OS what it wants: system calls. While one of the OS's tasks is executing, no user app is executing; when the OS completes its task, it assigns the CPU back to the app. So in this scenario the OS is event driven: on certain events, control is handed over to the OS.
By the above rationale: no, the OS does not need a CPU time slice of its own for its execution.
Is there any way to prevent the Linux kernel from migrating threads to other CPUs?
Using hwloc (which in turn uses pthread_setaffinity_np), I bind threads to cores. However, sometimes I see that the kernel starts expensive migration tasks. Is there any way I can prevent the kernel from doing this? I have not found any flags in hwloc or the pthreads library, nor did setting kernel/sched_nr_migrate to 0 result in the desired behavior.
Any suggestions are highly appreciated. Thanks
You can set the CPU affinity of the process (not the thread), and from what I understand the kernel will try very hard to respect that. If you want all the threads that a particular process spawns to run on the same CPU, then this is an acceptable solution.
Here's an article from IBM that gives some additional background and specific system calls:
http://www.ibm.com/developerworks/library/l-affinity/index.html
I'm running an application right now which seems to be running at full throttle, but even though the fan seems to be spinning at its max and Activity Monitor reports that the application is using 100% of the processor, I suspect that it is using at most 100% of only one of the two cores on my machine.
How can I tell OS X to allow an application to use 100%, or as much as the OS can allow, of the processing power of my computer? I have tried terminal commands like "nice" and "renice" to set the priority of this process, but still can't get it to run at full throttle.
I would also like to know how to do the opposite: set a limit on the processor usage of an app, for example set app X to run at 20%.
Is this possible to do without modifying the code of the app?
The answer to this depends upon whether your application is multi-threaded or not. If it is a single-threaded application (which it is, unless you have specifically made it multi-threaded), then the process will run on one core of your multi-core hardware. There is nothing you can do about this; it's a function of the underlying operating system.
If your program is multi-threaded then it is possible to have different threads executing on separate cores. This will increase the overall usage of the process and allow figures greater than 100%.
You cannot, however, force the machine to use all of the processing power available, but you can influence it with nice.
To reduce the amount of processor used, you can use nice to lower the priority of the process. If you are root, you can also use nice (or renice) to increase the priority of your process.