What system process is responsible for executing a system call when a user process invokes one and the CPU switches to supervisor mode?
Are system calls scheduled by the thread scheduler (can the CPU switch to executing another system call after receiving an interrupt)?
What system process is responsible for executing a system call?
The system call wrapper (the function you call to perform the system call; yes, it's just a wrapper, not the actual system call) takes the parameters and passes them in the appropriate registers (or on the stack, depending on the implementation). Next it puts the number of the system call you are requesting into EAX (assuming x86), and finally it executes the INT 0x80 assembly instruction, which tells the OS that an interrupt has occurred and that this interrupt is a system call that needs to be served; which system call to serve is available in EAX, and the parameters are in the registers.
(Modern implementations have stopped using INT because it is expensive in performance; they use SYSENTER and SYSEXIT instead. The mechanism above is still almost the same, though.)
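For illustration, here is a minimal sketch (assuming 32-bit x86 Linux and GCC inline assembly) of what such a wrapper boils down to for the write call:

    /* Minimal sketch, assuming 32-bit x86 Linux and GCC inline asm:
       invoke write(2) directly, the way the libc wrapper does. */
    long raw_write(int fd, const void *buf, unsigned long count)
    {
        long ret;
        /* EAX = system call number (4 = write on 32-bit x86 Linux);
           EBX, ECX, EDX carry the arguments; INT 0x80 traps into the kernel. */
        __asm__ volatile ("int $0x80"
                          : "=a" (ret)
                          : "a" (4L), "b" ((long)fd), "c" ((long)buf), "d" ((long)count)
                          : "memory");
        return ret;
    }

    int main(void)
    {
        raw_write(1, "hello\n", 6);  /* fd 1 = stdout */
        return 0;
    }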
From the perspective of the scheduler, it makes no difference whether you perform a system call or not. The point is that once you ask the OS for a service (via the x86 instruction INT, or SYSENTER/SYSEXIT), the CPU's mode flag changes to the privileged setting, the kernel performs the task you asked for on behalf of your process, and once it is done it sets the flag back and returns execution to the next instruction.
So, from the scheduler's point of view, there is no difference between executing a system call and executing anything else.
A few notes:
- What I described above is a general mechanism; I am not sure whether Windows implements it exactly this way, but if it doesn't, it does something of a similar fashion.
- Many system calls perform blocking tasks (like I/O handling). For better CPU utilization, if your process asks for a blocking system call, the scheduler makes it wait in the wait queue until what it requested is ready, and other processes run on the CPU in the meantime. But do not confuse this with anything else: the OS did not "schedule system calls".
The scheduler's job is to organize tasks, and from its perspective a system call is just a routine that the process is executing.
A final note: some system calls are atomic, meaning they must be performed without any interruption to their execution. If such a system call is interrupted, it will be asked to restart its execution once the cause of the interrupt is over; still, this is far from the scheduling concept.
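The caller-visible side of that restart behaviour is the classic EINTR retry loop; a minimal POSIX sketch:

    /* Minimal sketch (POSIX): a slow call interrupted by a signal fails
       with EINTR and is simply restarted by the caller. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t read_retry(int fd, void *buf, size_t count)
    {
        ssize_t n;
        do {
            n = read(fd, buf, count);
        } while (n == -1 && errno == EINTR);  /* interrupted: try again */
        return n;
    }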
First question: it depends. Some system calls are handled by services that are already running as processes (say, a network call). Some system calls result in a new process being created and then scheduled for execution.
Last question: yes, Windows is a multiprocessing system. The process scheduler decides when a thread runs and for how long, and hardware interrupts can cause the running process to release the CPU, or can hand the CPU to an idle process that the hardware is now ready to serve.
In Windows (at least Windows 7 and later, though I think it was true in the past too), a lot of the system services run in processes called svchost. A good application for seeing what is running where is Process Explorer from Sysinternals. It is like Task Manager on steroids and will show you all the threads that a given process owns. For finer-grained "I called this DOS command, what happened?" details, you would probably want to use a debugging tool where you can step through your call.
Generally, though, you don't have to concern yourself with these things. When you make a system call, the system knows you aren't ready to continue processing until whatever process is handling that request has returned. Your request might get the CPU right after your process releases it, or it might get the CPU two days from now, but as far as the OS is concerned (or your program should be concerned) it doesn't matter: execution stops and waits for a result, unless you are running multithreaded, and then it gets really complicated.
Is this based on context switching that schedules processes on the CPU? I'm a bit lost understanding how this works.
A system call is not based on a context switch; a context switch is the exchange of the process running on the CPU. Which call gets invoked is decided by the system call number, which is used as an index into the system call table; only the process's context changes from user mode to kernel mode. I always suggest reading Understanding the Linux Kernel, an excellent book.
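A toy sketch of that "number as index" idea (the handler names are hypothetical, not the actual kernel code):

    /* Toy sketch: the trap handler uses the system call number as an
       index into a table of handler functions. */
    typedef long (*syscall_fn)(long, long, long);

    static long sys_read(long fd, long buf, long count)  { (void)fd; (void)buf; (void)count; return 0; }
    static long sys_write(long fd, long buf, long count) { (void)fd; (void)buf; (void)count; return 0; }

    static const syscall_fn syscall_table[] = { sys_read, sys_write };

    long dispatch(long nr, long a1, long a2, long a3)
    {
        /* validate the number, then index the table */
        if (nr < 0 || nr >= (long)(sizeof syscall_table / sizeof syscall_table[0]))
            return -1;  /* a real kernel returns -ENOSYS */
        return syscall_table[nr](a1, a2, a3);
    }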
I would like to reserve one core for my application. In my searches I found dwProcessAffinityMask, which lets me limit my process to the cores I want, but this does not prevent threads of other processes from running on "my" core as well.
Is there a way to disallow a specific core/processor from being used by any (system-wide) process/thread except my process/thread?
Even if it were possible to set the SystemAffinityMask, this would not help, because it would also prohibit my own process/thread from executing on that processor/core.
If your goal is to ensure that your process gets to run in a timely manner, just set a high priority for your process (for instance HIGH_PRIORITY_CLASS) using SetPriorityClass. Unless the system is running other equally high-priority work (of which there is little on a typical machine), your work will get to run immediately when it's ready to execute.
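A minimal sketch of that call:

    /* Minimal sketch (Win32): raise the current process's priority class
       so it preempts normal-priority work whenever it is ready to run. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS)) {
            fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... time-sensitive work runs here ... */
        return 0;
    }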
I am spawning a few threads inside an ioctl call to my driver. I am also assigning kernel affinity to my driver. I want to ensure that one of the threads does not get scheduled out until a particular event is flagged by the other thread. Is there any way to prevent the Windows scheduler from switching out my thread? Using _disable() may hang the system, as the event may take a couple of seconds.
Environment is Windows 7, 64-bit.
Thanks,
What you are probably after is a spin lock. However, this is probably a bad idea unless you can guarantee that your driver/application always runs on a multiprocessor system, and even then it is still very bad practice. On a single-processor system, if a thread spin-locks, the other thread that is supposed to signal the spin-locked thread will never be scheduled, and so it can't signal your event. Spin locks are meant to be used sparingly, and only when the lock is held for a very short time, never a couple of seconds.
It sounds like you need to use an event or another signalling mechanism to synchronise your threads and let the Windows scheduler do its job. If you need to respond to an event very quickly, then interrupts or a Deferred Procedure Call (DPC) could be used instead.
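A minimal kernel-mode sketch of the event approach (WDM; the names are placeholders), where the waiting thread sleeps instead of spinning so the scheduler can run the signalling thread:

    /* Minimal sketch (kernel-mode WDM): block on a KEVENT instead of
       spinning; the other thread signals it when the work is done. */
    #include <ntddk.h>

    KEVENT g_DoneEvent;  /* shared between the two threads */

    VOID InitSync(VOID)
    {
        KeInitializeEvent(&g_DoneEvent, NotificationEvent, FALSE);
    }

    /* Thread A: sleeps here; other threads run on the CPU meanwhile. */
    VOID WaitingThread(PVOID Context)
    {
        UNREFERENCED_PARAMETER(Context);
        KeWaitForSingleObject(&g_DoneEvent, Executive, KernelMode, FALSE, NULL);
        /* ... continue once thread B has flagged the event ... */
    }

    /* Thread B: flags the event when its (possibly seconds-long) work is done. */
    VOID SignallingThread(PVOID Context)
    {
        UNREFERENCED_PARAMETER(Context);
        /* ... long-running work ... */
        KeSetEvent(&g_DoneEvent, IO_NO_INCREMENT, FALSE);
    }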
I'm very interested in the answer to another question regarding watchdog timers for Windows services (see here). That answer stated:
I have also used an internal watchdog system running in another thread. That thread looks at the main thread for activity like log output or a toggling event. If the activity is not seen, the service is considered hung and I shut down the service.
In this case you can configure Windows to auto-restart a stopped service, and that might clear the problem (as long as it's not an internal logic bug).
Also, the services I work with write text logs. In addition, for services that are about to "sleep for a bit", I log the time of the next wake-up. I use MTAIL to watch a log for output.
Could anyone give some sample code showing how to use an internal watchdog running in another thread? I currently have a task to develop a Windows service that can restart itself in case it fails, hangs, etc.
I really appreciate your help.
I'm not a big fan of running a watchdog as a thread in the process you're watching. That means if the whole process hangs for some reason, the watchdog won't work.
Watchdogs are an idea lifted from the hardware world, and the hardware people had it right: use an external circuit that is as simple as possible (so it can be provably correct). A typical watchdog simply ran a timer and, if the process hadn't done something before the timer expired (like accessing a memory location the watchdog was watching), the whole thing was reset. When the watchdog was "kicked", it would restart the timer.
The act of the process kicking the watchdog protected that process from summary termination.
My advice would be to write a very simple stand-alone program which just monitors an event (such as a file's last-modified time changing). If that event doesn't occur within the required time, kill the process being watched (and let Windows restart it).
Then have your watched program periodically rewrite that file.
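A minimal sketch of the watchdog side (Win32; the heartbeat path and service name are hypothetical placeholders):

    /* Minimal sketch (Win32): stand-alone watchdog that checks a heartbeat
       file's last-write time and kills the service when it goes stale. */
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *heartbeat = "C:\\ProgramData\\MyService\\heartbeat.txt"; /* placeholder path */
        const ULONGLONG timeoutMs = 60 * 1000;  /* consider the service hung after 60 s */

        for (;;) {
            WIN32_FILE_ATTRIBUTE_DATA fad;
            FILETIME now;
            GetSystemTimeAsFileTime(&now);

            if (GetFileAttributesExA(heartbeat, GetFileExInfoStandard, &fad)) {
                ULONGLONG last = ((ULONGLONG)fad.ftLastWriteTime.dwHighDateTime << 32)
                               | fad.ftLastWriteTime.dwLowDateTime;
                ULONGLONG cur  = ((ULONGLONG)now.dwHighDateTime << 32)
                               | now.dwLowDateTime;
                if ((cur - last) / 10000 > timeoutMs) {  /* FILETIME ticks are 100 ns */
                    printf("heartbeat stale, killing service\n");
                    /* force-kill so the SCM sees a failure and applies
                       the service's recovery (restart) settings */
                    system("taskkill /F /FI \"SERVICES eq MyService\"");
                }
            }
            Sleep(5000);  /* poll every 5 seconds */
        }
        return 0;
    }

The watched service then only needs to rewrite the heartbeat file from a point in its main loop that is guaranteed to be exercised when it is healthy.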
Other approaches you might want to consider, besides regularly modifying the last-write time of a file, would be to create a proper performance counter or even a WMI object. We do the latter in our build infrastructure; the "trick" is to find a meaningful unit of work in the service being monitored and pulse your "heartbeat" each time a unit is finished.
The advantage of WMI or perf counters over the file approach is that you become visible to a whole bunch of professional MIS/management tools. This can add a lot of value.
You can configure the service, from its properties, to restart itself in case of failure:
Services -> right-click your service -> Properties -> Recovery tab -> First failure: Restart the Service -> Second failure: Restart the Service -> Subsequent failures: Restart the Service
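The same recovery settings can also be applied programmatically; a minimal sketch using ChangeServiceConfig2 (the service name is a placeholder):

    /* Minimal sketch (Win32): set "restart on failure" for a service via
       SERVICE_CONFIG_FAILURE_ACTIONS. */
    #include <windows.h>

    BOOL EnableAutoRestart(void)
    {
        SC_HANDLE scm = OpenSCManagerA(NULL, NULL, SC_MANAGER_ALL_ACCESS);
        if (!scm) return FALSE;

        SC_HANDLE svc = OpenServiceA(scm, "MyService", SERVICE_ALL_ACCESS);
        if (!svc) { CloseServiceHandle(scm); return FALSE; }

        SC_ACTION actions[3] = {
            { SC_ACTION_RESTART, 60000 },  /* first failure: restart after 60 s */
            { SC_ACTION_RESTART, 60000 },  /* second failure */
            { SC_ACTION_RESTART, 60000 },  /* subsequent failures */
        };
        SERVICE_FAILURE_ACTIONSA sfa = {0};
        sfa.dwResetPeriod = 86400;  /* reset the failure count after one day */
        sfa.cActions = 3;
        sfa.lpsaActions = actions;

        BOOL ok = ChangeServiceConfig2A(svc, SERVICE_CONFIG_FAILURE_ACTIONS, &sfa);
        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return ok;
    }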
I have a massive number of shell commands being executed with root/admin privileges through Authorization Services' "AuthorizationExecuteWithPrivileges" call. The issue is that after a while (10-15 seconds, maybe 100 shell commands) the program stops responding, with this error in the debugger:
couldn't fork: errno 35
And then, while the app is running, I cannot launch any more applications. I researched this issue, and apparently it means that there are no more threads available for the system to use. However, I checked using Activity Monitor, and my app is only using 4-5 threads.
To fix this problem, I think what I need to do is move the shell commands to a separate thread (away from the main thread). I have never used threading before, and I'm unsure where to start (I couldn't find any comprehensive examples).
Thanks
As Louis Gerbarg already pointed out, your question has nothing to do with threads. I've edited your title and tags accordingly.
I have a massive number of shell commands being executed with root/admin privileges through Authorization Services' "AuthorizationExecuteWithPrivileges" call.
Don't do that. That function only exists so you can restore the root:admin ownership and the setuid mode bit to the tool that you want to run as root.
The idea is that you should factor the code that needs to run as root out into a completely separate program from the part that doesn't, so that the part that needs root can have it (through the setuid bit) and the part that doesn't need root can go without it (by not having setuid).
A code example is in the Authorization Services Programming Guide.
The issue is that after a while (10-15 seconds, maybe 100 shell commands) the program stops responding with this error in the debugger:
couldn't fork: errno 35
Yeah. You can only run a couple hundred processes at a time. This is an OS-enforced limit.
It's a soft limit, which means you can raise it, but only up to the hard limit, which you cannot raise. See the output of limit and limit -h (in zsh; I don't know about other shells).
You need to wait for processes to finish before running more processes.
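In code terms, that means waiting on each child before spawning the next; a minimal POSIX sketch:

    /* Minimal sketch (POSIX): run a command and wait for it to finish
       before spawning the next one, so processes don't pile up. */
    #include <sys/wait.h>
    #include <unistd.h>

    int run_command(const char *path, char *const argv[])
    {
        pid_t pid = fork();
        if (pid == 0) {
            execv(path, argv);  /* child: become the command */
            _exit(127);         /* exec failed */
        }
        int status = -1;
        if (pid > 0)
            waitpid(pid, &status, 0);  /* parent: wait, which also reaps the child */
        return status;
    }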
And then while the app is running, I cannot launch any more applications.
Because you are already running as many processes as you're allowed to. That x-hundred-process limit is per-user, not per-process.
I researched this issue and apparently it means that there are no more threads available for the system to use.
No, it does not.
The errno error codes are used for many things. EAGAIN (35, “resource temporarily unavailable”) may mean no more threads when set by a system call that starts a thread, but it does not mean that when set by another system call or function.
The error message you quoted explicitly says that it was set by fork, which is the system call to start a new process, not a new thread. In that context, EAGAIN means “you are already running as many processes as you can”. See the fork manpage.
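A minimal POSIX sketch of detecting exactly that condition:

    /* Minimal sketch (POSIX): distinguish fork's EAGAIN ("process limit
       reached") from other failures. */
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    void try_spawn(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);  /* child: nothing to do in this sketch */
        if (pid == -1 && errno == EAGAIN)
            fprintf(stderr, "process limit reached; reap children before forking again\n");
    }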
However, I checked using Activity Monitor and my app is only using 4-5 threads.
See?
To fix this problem, I think what I need to do is separate the shell commands into a separate thread (away from the main thread).
Starting one process per thread will only help you run out of processes much faster.
I have never used threading before …
It sounds like you still haven't, since the function you're referring to starts a process, not a thread.
This is not about threads (at least not threads in your application). This is about system resources. Each of those forked processes is consuming at least 1 kernel thread (maybe more), some vnodes, and a number of other things. Eventually the system will not allow you to spawn more processes.
The first limits you hit are administrative limits. The system can support more, but exceeding them may cause degraded performance and other issues. You can usually raise these limits through various mechanisms, like sysctls. In general, doing that is a bad idea unless you have a particular (special) workload that you know will benefit from specific tweaks.
Chances are that raising those limits will not fix your issue. While adjusting them may let you run a little longer, to actually fix the problem you need to figure out why the resources are not being returned to the system. Based on what you described above, I would guess that your forked processes are never exiting.
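If the children have in fact exited but were never waited on, they remain as zombies and still count against the limit; a minimal POSIX sketch of reaping them without blocking:

    /* Minimal sketch (POSIX): reap any finished children without blocking,
       so zombies don't accumulate against the per-user process limit. */
    #include <sys/wait.h>

    void reap_children(void)
    {
        /* WNOHANG: return immediately when no child has exited yet */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }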