What I'd like to do is to code a process like the following:
while(forever)
{
do something;
relinquish current cpu time slice;
}
Is there some call to make in Linux that simply gives up your time slice, so that the forever loop will not hog an entire CPU? I'm sure I could make some other system call instead, but that involves possibly unnecessary kernel/user work; I just want to say "I am done with my time slice" and let the scheduler pick something else.
This type of call could also be very nice in a realtime environment.
The sched_yield(2) system call does exactly what you want. Just #include <sched.h> and call sched_yield();.
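For completeness, a minimal sketch of such a loop; do_something() here is just a stand-in for whatever work the loop actually performs:

#include <sched.h>
#include <stdio.h>

/* stand-in for whatever work the loop actually performs */
static void do_something(void)
{
    puts("working...");
}

int main(void)
{
    for (;;) {
        do_something();
        sched_yield();   /* done with this time slice; let the scheduler run someone else */
    }
}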
I have read about user space and kernel space and how a program's execution path can take it from user space to kernel space. I suppose an example of this is if my program runs something like this:
Poco::Net::SocketAddress sender;
char buffer[64000];
.
.
.
socket.receiveFrom(buffer, sizeof(buffer), sender);
Since this call requires accessing the network card, I think it should go into kernel space.
My question is: what happens as the program makes the socket.receiveFrom(...) call?
Does the thread go to sleep and give up its core, since it is going into kernel space, and only get woken up when the char buffer has been written?
Or does the thread go directly to kernel space and come back to user space after writing into the char buffer?
No. The thread goes on to execute kernel code, with kernel permissions (ring 0 on x86). The thread might go to "sleep" inside the kernel (i.e. the CPU might go and execute a different program, or go idle; this depends on what the scheduler decides). However, it might not go to sleep at all if, for example, data is already available from the network card. From a user perspective, you know that when the call returns your data is in the buffer, and you should expect that the call may take a while.
It depends on the scheduler. You might get an interrupt at any time and execute something else. But generally, yes, you go to the kernel and back.
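For reference, a plain-C sketch of the same kind of blocking receive (assuming a UDP socket; the port number is an arbitrary example and error checking is omitted) shows where that user-to-kernel transition happens:

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    char buffer[64000];
    struct sockaddr_in addr = { 0 };
    struct sockaddr_in sender;
    socklen_t sender_len = sizeof(sender);

    int fd = socket(AF_INET, SOCK_DGRAM, 0);     /* UDP socket */
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                /* arbitrary example port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* The thread enters the kernel here.  If a datagram is already queued,
     * recvfrom() returns almost immediately; otherwise the task sleeps in
     * the kernel until the network stack has data for this socket. */
    ssize_t n = recvfrom(fd, buffer, sizeof(buffer), 0,
                         (struct sockaddr *)&sender, &sender_len);

    printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}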
I am aware that I certainly can't use msleep or usleep or any such function for introducing delays in a kernel ISR routine.
I have a kernel driver which has certain ISRs defined inside it. In one of the ISR blocks I have to insert a delay on the order of milliseconds. Let's say:
{
//A
//here I need sleep
//B
}
Can I use something like:
{
//A
for(i=0;i<1000;i++);
//B
}
Let's say my processor is running at 1 GHz; will the above for loop give me a delay of 1000 usecs, i.e. 1 ms?
You must not sleep inside an interrupt handler.
Furthermore, you should not wait for a long time inside an interrupt handler; this would block all processes and all other interrupts on the same CPU.
If your driver needs to do two things at different times, it should use a second interrupt or a timer to do the second thing.
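A hedged sketch of that pattern, deferring part B to a kernel timer a couple of milliseconds after the interrupt; do_part_a(), do_part_b(), my_isr() and MY_DELAY_MS are placeholder names, and the timer_setup() API of recent kernels is assumed:

#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

#define MY_DELAY_MS 2   /* placeholder delay between part A and part B */

static struct timer_list my_timer;

static void do_part_a(void) { /* ... */ }
static void do_part_b(void) { /* ... */ }

/* Runs later, in softirq context, once the delay has expired. */
static void my_timer_fn(struct timer_list *t)
{
    do_part_b();
}

/* Registered with request_irq() elsewhere in the driver. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    do_part_a();
    /* Do NOT delay here; schedule part B for later instead. */
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(MY_DELAY_MS));
    return IRQ_HANDLED;
}

/* Somewhere in the driver's init path:           */
/*     timer_setup(&my_timer, my_timer_fn, 0);    */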
I would be interested to hear about the reasons for having an intentional delay in an ISR. Generally speaking, it's a no-no. If you think you need one, then most probably it means that you need to rethink your code design.
As for introducing microscopic delays, one thing that I have used is cpu_relax(). This function is also used in the kernel to implement udelay() and ndelay() on some CPU architectures. I would advise you to take a look and see where and how this function is used in the Linux kernel. That might give you some ideas for your specific situation.
The functions udelay() and ndelay() implement busy-waiting delays, so you may use them in an ISR, as suggested by Tsyvarev.
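If a very short busy-wait really is unavoidable, a minimal sketch using udelay() from <linux/delay.h> could look like this (the 100 µs value is only an example; for anything approaching milliseconds, defer the work instead, as in the timer sketch above):

#include <linux/delay.h>
#include <linux/interrupt.h>

static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* A */
    udelay(100);        /* busy-waits ~100 microseconds; keep this as short as possible */
    /* ndelay(500);        busy-waits in nanoseconds, if finer granularity is needed    */
    /* B */
    return IRQ_HANDLED;
}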
On an Ubuntu 10.04 Linux kernel, if I insmod a module which runs
while(1);
in its init_module function, the entire system stops.
However, if I load a .sys file in Windows 7 which runs while(1); in its DriverEntry function, the system gets slow but still works.
Can someone explain why the two systems differ and what is happening inside the kernel?
I think that in the first case (infinite loop in init_module) there is no reason for the system to stop, because even if I put while(1); in init_module, it is running in the context of the insmod user application program, so the infinite loop should still be preempted by hardware interrupts and scheduled like anything else. This is just my opinion; I want to know the details if I am wrong.
init_module() is a system call, it runs in kernel space and not in user space.
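To see this on the Linux side, here is a minimal module sketch whose init function reports the context it runs in (demo_init is just an illustrative name, and the while(1) is left commented out for obvious reasons):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>

static int __init demo_init(void)
{
    /* Kernel mode, but in the process context of the insmod process:
     * this normally prints "insmod" (or "modprobe"). */
    pr_info("init runs in kernel space on behalf of \"%s\" (pid %d)\n",
            current->comm, current->pid);
    /* while (1); */   /* would spin in kernel mode and effectively hang the machine */
    return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");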
From what you have observed, it looks like the NT kernel performs module initialization in parallel, whereas the Linux kernel does it sequentially. It might have to do with their respective architectures, NT being a hybrid kernel and Linux being monolithic.
Adding to Frédéric's answer: on Windows the DriverEntry function runs at IRQL PASSIVE_LEVEL (same as virtually all user mode code, all if we exclude APCs). Which means that it can be interrupted by any code running at a higher IRQL at any point. So what you probably encounter here is that the thread that goes into the infinite loop is still being scheduled (thus consuming CPU time), but due to its (low) IRQL it isn't able to starve the system threads or much of the other code that is running. It will, however, be able to starve user mode threads. The effect can be anything from a slowdown to a perceived hanging system.
Suppose, in a two-process environment, one process is scheduled for execution by the kernel and it demands some data which is not available in RAM. The CPU will indicate to the kernel that the data is not available, and the process will be suspended. The kernel then loads the second process for execution on the CPU, goes looking for the data in secondary storage (say, the swap space on disk), gets it, swaps it into main memory in place of data that is currently inactive, and puts the first process back in the ready queue for execution.
We know that everything in a computer system is manipulated by the CPU alone, so if the CPU is busy continuously executing process code, then who is executing the kernel code that performs these tasks?
Please let me know if I have explained the scenario clearly.
At any point in time, a CPU will be doing one of the following:
Running a process in user mode.
Running in kernel mode on behalf of a process, to execute a privileged instruction or access hardware (for example when the read/write system call is issued).
Running in response to a hardware interrupt, i.e. running in interrupt context (not associated with any process in particular), and yes, in kernel mode.
Running kernel threads to serve deferred work such as softirqs (tasklets/softirqs).
Running the CPU idle thread if there is nothing else to execute.
If you are asking in particular about scheduling, then:
Suppose a process is running and it issues a read call to retrieve data from the hard disk. The process is then removed from the CPU and the kernel invokes the schedule() function. In detail: the process issues the read system call, which results in switching from user mode to kernel mode; the kernel, running on behalf of the process, prepares the hard disk read operation and then calls schedule().
Suppose a hardware interrupt arrives: the currently running process is interrupted, and the interrupt service handler for that interrupt begins to execute in kernel mode (obviously).
Basically, the kernel runs in between user processes!
Clear now?
The kernel runs either as a result of a hardware interrupt, or as a result of being invoked by a process to do something. In both cases the code which was executing at that moment stops running until the kernel finishes its job.
It is similar to a function call: when function A calls function B, function A has to wait until function B is done doing what it does, and returns control to function A. You do not need multiple CPUs, or any kind of magic to accomplish this.
The CPU is not continuously executing process code. The CPU is interrupted to perform various operations. Interrupts can occur for various reasons: a resource becomes available, a previous action completes, or simply a timer goes off.
I recommend this series of videos for more in-depth information: http://academicearth.org/courses/operating-systems-and-system-programming
I'm writing a kernel module which uses a customized print-on-screen system. Basically each time a print is involved the string is inserted into a linked list.
Every X seconds I need to process the list and perform some operations on the strings before printing them.
Basically I have two choices to implement such a filter:
1) A timer (which re-arms itself at the end)
2) A kernel thread which sleeps for X seconds
While the filter is doing its work, nothing else may use the linked list, and, of course, while a string is being inserted the filter function must wait.
AFAIK a timer runs in interrupt context, so it cannot sleep, but what about kernel threads? Can they sleep? If so, is there some reason not to use them in my project? What other solution could be used?
To summarize: my filter function has got only 3 requirements:
1) Must be able to printk
2) While it is using the list, anything else trying to access the list must block until the filter function finishes execution
3) Must run every X seconds (not a realtime requirement)
kthreads are allowed to sleep. (However, not all kthreads offer sleepful execution to all clients. softirqd for example would not.)
But then again, you could also use spinlocks (and their associated cost) and do without the extra thread (that's basically what the timer approach does, using spin_lock_bh()). It's a tradeoff, really.
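A hedged sketch of the kthread variant, assuming the code that inserts strings also runs in process context and can take a mutex (if insertions can happen in atomic context, a spinlock would be needed instead, as noted above); filter_thread_fn, string_list and INTERVAL_SECS are placeholder names:

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/err.h>

#define INTERVAL_SECS 5                     /* placeholder for "every X seconds" */

static LIST_HEAD(string_list);              /* the shared linked list                      */
static DEFINE_MUTEX(string_list_lock);      /* code inserting strings must hold this too   */
static struct task_struct *filter_task;

static int filter_thread_fn(void *data)
{
    while (!kthread_should_stop()) {
        mutex_lock(&string_list_lock);
        /* ... walk string_list, filter the entries, printk() them ... */
        mutex_unlock(&string_list_lock);

        ssleep(INTERVAL_SECS);              /* sleeping is fine here: process context */
    }
    return 0;
}

static int __init filter_init(void)
{
    filter_task = kthread_run(filter_thread_fn, NULL, "print_filter");
    return IS_ERR(filter_task) ? PTR_ERR(filter_task) : 0;
}

static void __exit filter_exit(void)
{
    kthread_stop(filter_task);              /* may take up to INTERVAL_SECS to return */
}

module_init(filter_init);
module_exit(filter_exit);
MODULE_LICENSE("GPL");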
each time a print is involved the string is inserted into a linked list
I don't really know if you meant print or printk. But if you're talking about printk(), you would need to allocate memory, and you are in trouble because printk() may be called in an atomic context. That leaves you the option of using a circular buffer (and thus you should be tolerant of dropping some strings, because you might not have enough memory to save all of them).
Every X seconds I need to process the list and perform some operations on the strings before printing them.
In that case, I would not even do a kernel thread: I would do the processing in print() if not too costly.
Otherwise, I would create a new system call:
sys_get_strings() or something, which would dump the whole linked list into userspace (and remove entries from the list as they are copied).
This way the whole behavior is controlled by userspace. You could create a daemon that calls the syscall every X seconds. You could also do all the costly processing in userspace.
You could also create a new device, say /dev/print-on-screen:
dev_open would allocate the memory, and print() would no longer be a no-op, but would feed the data into the device's pre-allocated memory (to cover the case where print() is used in atomic context and the like).
dev_release would throw everything out
dev_read would get you the strings
dev_write could do something on your print-on-screen system
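A hedged sketch of that device-based approach, using the kernel's misc-device framework; the device name, buffer size and handlers are illustrative, and locking, multiple-open handling and the write path are omitted:

#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>

#define PRINT_BUF_SIZE (64 * 1024)      /* illustrative pre-allocated buffer size */

static char  *print_buf;                /* filled elsewhere by the print() path   */
static size_t print_len;                /* how much of print_buf is in use        */

static int pos_open(struct inode *inode, struct file *file)
{
    /* dev_open: allocate the memory the print() path will write into */
    print_buf = vzalloc(PRINT_BUF_SIZE);
    return print_buf ? 0 : -ENOMEM;
}

static int pos_release(struct inode *inode, struct file *file)
{
    /* dev_release: throw everything out */
    vfree(print_buf);
    print_buf = NULL;
    print_len = 0;
    return 0;
}

static ssize_t pos_read(struct file *file, char __user *ubuf,
                        size_t count, loff_t *ppos)
{
    /* dev_read: hand the accumulated strings to userspace */
    return simple_read_from_buffer(ubuf, count, ppos, print_buf, print_len);
}

static const struct file_operations pos_fops = {
    .owner   = THIS_MODULE,
    .open    = pos_open,
    .release = pos_release,
    .read    = pos_read,
};

static struct miscdevice pos_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "print-on-screen",         /* appears as /dev/print-on-screen */
    .fops  = &pos_fops,
};

module_misc_device(pos_dev);
MODULE_LICENSE("GPL");

A userspace daemon could then open the device once and read() from it every X seconds, keeping all the costly processing out of the kernel.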