How to implement a kernel thread that never sleeps?

Problem
I need a kernel thread that is able to work for prolonged periods of time without yielding, basically fully dedicating a CPU core to it on demand:
int my_kthread(void *arg)
{
    while (!kthread_should_stop()) {
        do_some_work();
        if (sleeping_enabled)
            msleep(1000);
        else {
            // What to do here to avoid lockup warnings
            // and ensure system stability?
        }
    }
    return 0;
}
Background
The thread is created like this when the module that I am working on is loaded:
my_task = kthread_run(&my_kthread, (void *)some_data, "My KThread");
set_cpus_allowed(my_task, *cpumask_of(10)); // Pin thread to core #10
and stopped like this when the module is unloaded:
kthread_stop(my_task);
Everything works just fine when sleeping_enabled is true.
Otherwise, soon after the thread is started, the kernel complains of the apparent lockup.
At first, I merely aimed to avoid the various warnings such as
BUG: soft lockup - CPU#10 stuck for 22s!
and
INFO: rcu_sched detected stalls on CPUs/tasks: { 10} (detected by 15, t=30567 jiffies)
since they tend to flood my console with dumps for all >20 cores, and the "lockup" is desired behavior.
I tried poking the watchdog like this:
if (sleeping_enabled)
    msleep(1000);
else
    touch_softlockup_watchdog();
in combination with suppressing RCU stall warnings (echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress), and pretty much got what I wanted: a never-sleeping thread that does its work, with no spam in the console.
However, not only does this "solution" feel like cheating, it seems I am completely breaking something by hogging that one core: when unloading the module via rmmod, the whole system freezes. The console starts periodically dumping soft lockups on all cores, with this call trace:
[<ffffffff810c96b0>] ? queue_stop_cpus_work+0xd0/0xd0
[<ffffffff810c9903>] cpu_stopper_thread+0xe3/0x1b0
[<ffffffff8108639a>] ? finish_task_switch+0x4a/0xf0
[<ffffffff8169e654>] ? __schedule+0x3c4/0x700
[<ffffffff81080e98>] ? __wake_up_common+0x58/0x90
[<ffffffff810c9820>] ? __stop_cpus+0x80/0x80
[<ffffffff81077e93>] kthread+0x93/0xa0
[<ffffffff816a9724>] kernel_thread_helper+0x4/0x10
[<ffffffff81077e00>] ? flush_kthread_worker+0xb0/0xb0
[<ffffffff816a9720>] ? gs_change+0x13/0x13
Meanwhile, my kernel thread continues running (as evidenced by some console messages that it prints out every now and then), so it never saw kthread_should_stop() return true.
Unloading worked correctly and stopped the thread before I switched to not sleeping at all. Now, I am unable to make iterative modifications without having to reboot.
Note that I have simplified the description here a lot. I am trying to add such a thread (to poll some hardware registers and log their changes) to a GPU driver, so there may be module-dependent reasons for the freeze on unload. However, this does not change my general question about how to best implement a thread that never sleeps.

I think this question is similar to the earlier question "whole one core dedicated to single process"; please check the replies there.
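Beyond that, one detail worth noting (not from the linked replies, so treat it as a suggestion to verify): a busy-looping kthread can stay stoppable without ever sleeping by inserting a voluntary preemption point into the loop. A minimal sketch:

#include <linux/kthread.h>
#include <linux/sched.h>

static int my_kthread(void *arg)
{
    while (!kthread_should_stop()) {
        do_some_work();
        /*
         * Don't sleep, but offer the scheduler a preemption point.
         * On recent kernels this also reports an RCU quiescent state,
         * which quiets the rcu_sched stall warnings.
         */
        cond_resched();
    }
    return 0;
}

If nothing else is runnable on that core, cond_resched() returns almost immediately, so the core stays effectively dedicated; it also gives the cpu-stopper used during kthread_stop()/rmmod a chance to run, which may be exactly what was blocked in the call trace above.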

Related

Call to ExAllocatePoolWithTag never returns

I am having some issues with my virtual HBA driver on Windows Server 2016. I ran the HLK crashdump support test; 3 times out of 10 the test failed. In those 3 failing tests, the crashdump hangs at 0% while taking a Complete dump, Kernel dump, or minidump.
By kernel debugging my code, I found that the call to ExAllocatePoolWithTag() for buffer allocation never actually returns.
Below is the statement which never returns.
pDeviceExtension->pcmdbuf = (struct mycmdrsp *)ExAllocatePoolWithTag(NonPagedPoolCacheAligned, pcmdqSignalSize, ((ULONG)'TA1'));
I searched the web regarding this. However, all of the pages I found focus on this function returning NULL, whereas in my case the call never returns at all.
Any help on how to move forward would be highly appreciated.
Thanks in advance.
You can't allocate memory in crash dump mode. You're running at HIGH_LEVEL with interrupts disabled and so you're calling this API at the wrong IRQL.
The typical solution for a hardware adapter is to set the RequestedDumpBufferSize in the PORT_CONFIGURATION_INFORMATION structure during the normal HwFindAdapter call. Then when you're called again in crash dump mode you use the CrashDumpRegion field to get your dump buffer allocation. You then need to write your own "crash dump mode only" allocator to allocate buffers out of this memory region.
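What such a "crash dump mode only" allocator might look like, sketched as a simple bump allocator over the reserved region (names are illustrative; the RequestedDumpBufferSize/CrashDumpRegion plumbing follows the description above and should be checked against your storport.h):

#include <storport.h>

/* Illustrative bump allocator over the pre-reserved crash-dump region. */
static UCHAR  *g_DumpBase;  /* start of the region handed back in dump mode */
static SIZE_T  g_DumpSize;  /* size requested via RequestedDumpBufferSize */
static SIZE_T  g_DumpUsed;

VOID DumpAllocInit(PVOID Base, SIZE_T Size)
{
    g_DumpBase = (UCHAR *)Base;
    g_DumpSize = Size;
    g_DumpUsed = 0;
}

PVOID DumpAlloc(SIZE_T Bytes)
{
    PVOID Ptr;

    Bytes = (Bytes + 63) & ~((SIZE_T)63);  /* keep allocations cache-aligned */
    if (g_DumpUsed + Bytes > g_DumpSize)
        return NULL;                       /* region exhausted */
    Ptr = g_DumpBase + g_DumpUsed;
    g_DumpUsed += Bytes;
    return Ptr;
}

No freeing is implemented; in the stripped-down dump environment a bump allocator like this is usually enough.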
It's a huge pain, especially given that it's difficult/impossible to know how much memory you're ultimately going to need. I usually calculate some minimal configuration overhead (e.g. 1 channel, 8 I/O requests at a time, etc.) and then add in a registry-configurable slush. The only benefit is that the environment is stripped down, so you don't need to be in your all singing, all dancing configuration.

Kernel freeze : How to debug it?

I have an embedded board running a kernel module of thousands of lines, which freezes at random times on random, complex use cases. What are my options for trying to debug it?
I have already tried the Magic SysRq keys, but they do not work. I guess the explanation is that I am stuck in a loop or a deadlock in code where hardware interrupts are disabled?
Thanks,
Eva.
Typically, embedded boards have a watchdog. You should enable this timer and use a watchdog user process to kick the watchdog hardware. Use nice on the watchdog process so that higher-priority tasks must relinquish the CPU; this gives clues as to the issue. If the device does not reset with a watchdog active, then it may be that only the network or serial port has stopped communicating, i.e. the kernel has not locked up; the issue is that there is no user-visible activity. The watchdog is also useful if/when this type of issue occurs in the field.
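On Linux targets this user process is typically a trivial feeder for the standard /dev/watchdog device. A minimal sketch (run it under nice, as suggested, so a runaway higher-priority task starves it and the hardware resets the board):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);

    if (fd < 0)
        return 1;
    for (;;) {
        write(fd, "k", 1);  /* any write kicks the hardware watchdog */
        sleep(1);           /* must be shorter than the watchdog timeout */
    }
}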
For a kernel lockup, the kernel's lockup-watchdog features may be useful. These will work if you have an infinite loop/deadlock as speculated. However, if this is custom hardware, it is also possible that SDRAM or a peripheral device latches up and causes abnormal bus activity. That will stop the CPU from fetching proper code; obviously, it is tough for Linux to recover from this.
You can combine the watchdog with some fallow memory used as a trace buffer. The memmap= and mem= kernel parameters can limit the memory used by the kernel. A driver/device using this memory can be written that saves trace points that survive a reboot. The fallow memory's ring buffer is then dumped when a watchdog reset is detected on the next kernel boot.
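A minimal sketch of such a trace-point driver, assuming 1 MiB was held back from the kernel with mem=/memmap= (the physical address and record layout below are made up for illustration):

#include <linux/types.h>
#include <linux/io.h>
#include <linux/errno.h>

#define TRACE_PHYS 0x3ff00000UL  /* hypothetical start of the fallow region */
#define TRACE_SIZE 0x00100000UL  /* 1 MiB held back from the kernel */

struct trace_rec {
    u32 seq;   /* monotonically increasing sequence number */
    u32 tag;   /* caller-supplied trace point id */
};

static void __iomem *trace_base;
static u32 trace_seq;

static int trace_init(void)
{
    /* ioremap works for this trick on most arches; some kernels
     * may want memremap() for RAM instead. */
    trace_base = ioremap(TRACE_PHYS, TRACE_SIZE);
    return trace_base ? 0 : -ENOMEM;
}

static void trace_point(u32 tag)
{
    struct trace_rec rec = { .seq = trace_seq++, .tag = tag };
    size_t off = (rec.seq * sizeof(rec)) % TRACE_SIZE;

    memcpy_toio(trace_base + off, &rec, sizeof(rec));  /* ring-buffer write */
}

On the next boot, an early driver (or even the boot loader) can read the region back and dump the most recent records.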
It is also useful to register thread notifiers that do a printk on context switches, if the issue is repeatable (or to discover how to make the event repeatable). Once you determine a sequence of events that leads to the lockup, you can use a scope or logic analyzer to do some final diagnosis; or it may be evident which peripheral is the issue at that point.
You may also set panic=-1 and reboot=... on the kernel command line. The kdump facilities are useful if you only have a code problem.
Related: kernel trap (at web archive). The link may no longer be available, but it isn't important to this answer.

infinite loop in linux/windows kernel module

In the Ubuntu 10.04 Linux kernel, if I insmod a module that runs
while(1);
in its init_module function, the entire system stops.
However, if I load a .sys file in Windows 7
that runs while(1); in its DriverEntry function,
the system gets slow but still works.
Can someone explain why the two systems differ,
and what is happening inside each kernel?
I think that in the first case (the infinite loop in init_module), there is no reason for the system to stop: even though I put while(1); in init_module, it runs in the context of the insmod user application program, so the looping flow should still be preempted away when a hardware interrupt fires and the scheduler runs.
This is just my opinion; I want to know the details if I am wrong...
init_module() is a system call; it runs in kernel space, not in user space.
From what you have observed, it looks like the NT kernel performs module initialization in parallel, whereas the Linux kernel does it sequentially. It might have to do with their respective architectures, NT being a hybrid kernel and Linux being monolithic.
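To make the Linux side concrete, here is a minimal module that reproduces the hang from the question (a sketch only; build it against your kernel headers and expect to have to power-cycle a non-preemptible kernel after insmod):

#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init loop_init(void)
{
    /* insmod blocks here inside the init_module() system call,
     * in kernel space, never returning to user space. */
    for (;;)
        ;
    return 0;  /* never reached */
}

module_init(loop_init);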
Adding to Frédéric's answer: on Windows, the DriverEntry function runs at IRQL PASSIVE_LEVEL (the same as virtually all user-mode code, if we exclude APCs). This means it can be interrupted at any point by any code running at a higher IRQL. So what you probably encounter here is that the thread that goes into the infinite loop is still being scheduled (thus consuming CPU time), but due to its (low) IRQL it isn't able to starve the system threads or much of the other code that is running. It will, however, be able to starve user-mode threads. The effect can be anything from a slowdown to a perceived hanging system.

Why is a thread's status running but it doesn't use any CPU?

Today I found a very strange problem.
I was running Red Hat Enterprise Linux 6, and the CPU was an Intel E31275 (4 cores, 8 threads). I found that one kernel thread (call it my_thread) didn't work correctly.
With the "ps" command, I found that the status of my_thread was always running:
ps ax
5545 ? R 3:14 [my_thread]
15774 ttyS0 Ss 0:00 -bash
...
But its running time stayed at 3:14. Since it was running, why didn't the total time increase?
From the proc file /proc/5545/sched, I found that all the statistics for this thread, including the wakeup count (se.nr_wakeups), stayed the same, too.
From /proc/5545/stack, I found this thread called this function and never returned:
interruptible_sleep_on_timeout(&q, 3*HZ);
In theory this function should return every 3 seconds if no other thread wakes the thread up. Each time the function returned, se.nr_wakeups in /proc/5545/sched should have increased by 1. But this never happened once the thread got into this state.
Does any one have some ideas? Is it possible that interruptible_sleep_on_timeout() never returns?
Update:
I found that the problem doesn't occur if I set CPU affinity for this thread: if I pin it to a dedicated core, then everything is OK. Are there any problems with SMP scheduling?
Update again:
After I disabled hyper-threading in the BIOS, I have not seen the problem since.
First off, R indicates that the thread is not in the running state but is runnable. That is, it does not mean it is executing; it means it is in a state where the scheduler is allowed to pick it for running. There is a big difference between the two.
In a similar sense, interruptible_sleep_on_timeout(&q, 3*HZ); will not run the thread after 3 seconds (3*HZ jiffies), but rather make it available for running after that time; and indeed you see it in "ps" as available for running, so possibly the timeout has indeed occurred.
Since you did not say anything about the kernel thread in question, I don't even know whether it is your own code or standard kernel code, so I cannot really answer in detail.
One possible reason for the situation you described is that some other thread (user or kernel) has a higher priority than your thread, so the scheduler never picks yours for running. If so, that other thread is probably running at a real-time priority (SCHED_FIFO or SCHED_RR).
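Regarding the update about CPU affinity: for reference, this is roughly how a kernel thread can be pinned to a single core from module code (a sketch; my_thread_fn stands in for the real thread body):

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

static struct task_struct *my_task;

static int my_thread_fn(void *arg)  /* placeholder for the real work loop */
{
    while (!kthread_should_stop())
        schedule_timeout_interruptible(3 * HZ);
    return 0;
}

static int start_pinned_thread(void)
{
    my_task = kthread_create(my_thread_fn, NULL, "my_thread");
    if (IS_ERR(my_task))
        return PTR_ERR(my_task);
    kthread_bind(my_task, 2);  /* bind to CPU 2 before the first wakeup */
    wake_up_process(my_task);
    return 0;
}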

How to determine which task is dead?

I have an embedded system that has multiple (>20) tasks running at different priorities. I also have a watchdog task that runs to check that the other tasks are not stuck. My watchdog works: every once in a blue moon, it reboots the system because a task did not check in.
How do I determine which task died?
I can't simply blame the task that has gone longest without kicking the watchdog, because it might have been held off by a higher-priority task that is not yielding.
Any suggestions?
A per-task watchdog requires that the higher priority tasks yield for an adequate time so that all may kick the watchdog. To determine which task is at fault, you'll have to find the one that's starving the others. You'll need to measure task execution times between watchdog checks to locate the actual culprit.
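One way to do that measurement is to hook the context switch and accumulate per-task run time between watchdog checks. A sketch (the hook and timer names are made up; most RTOSes provide some switch hook or trace callback):

#include <stdint.h>

#define NUM_TASKS 20              /* match your task count */

extern uint32_t timer_now(void);  /* free-running tick counter (made up) */

static uint32_t task_runtime[NUM_TASKS];  /* ticks used since last check */
static uint32_t last_switch_time;
static int current_task;

/* Called by the OS on every context switch (hook name is made up). */
void on_context_switch(int next_task)
{
    uint32_t now = timer_now();

    task_runtime[current_task] += now - last_switch_time;
    last_switch_time = now;
    current_task = next_task;
}

When the watchdog trips, the largest entry in task_runtime points at the task that starved the others; clear the array at each watchdog check.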
Is this pre-emptive? I gather so, since otherwise a watchdog task would not run if one of the others had gotten stuck.
You make no mention of the OS, but if a watchdog task can check whether a single task has checked in, there must be separate channels of communication between each task and the watchdog.
You'll probably have to modify the watchdog to somehow dump the task number of the one that hasn't checked in and dump the task control blocks and memory so you can do a post-mortem.
Depending on the OS, this could be easy or hard.
I spent the last few weeks working on a watchdog reset problem myself. Fortunately for me, the ramdump files (in an ARM development environment) contained an interrupt-handler trace buffer holding the PC and SLR at each interrupt. From that trace buffer I could find out exactly which part of the code was running before the watchdog reset.
I think that if you store the PC and SLR at each interrupt in the same way, you can precisely identify the culprit task.
Depending on your system and OS, there may be different approaches. One very low-level approach I have used is to blink an LED while each of the tasks is running. You may need to put a scope on the LEDs to see very fast task switching.
For an interrupt-driven watchdog, you'd just make the task switcher update the currently running task number each time it is changed, allowing you to identify which one didn't yield.
However, you suggest you wrote the watchdog as a task yourself, so before rebooting, surely the watchdog can identify the starved task? You can store this in memory that persists beyond a warm reboot, or send it over a debug interface. The problem with this is that the starved task is probably not the problematic one: you'll probably want to know the last few task switches (and times) in order to identify the cause.
A simplistic, back of the napkin approach would be something like this:
#include <stdint.h>

volatile int8_t wd_tickle[NUM_TASKS];

void taskA_main(void)
{
    ...
    // main loop
    while (1) {
        ...
        wd_tickle[TASKA_NUM]++;
    }
}

// ... tasks B, C, D... follow a similar pattern

void watchdog_task(void)
{
    for (int i = 0; i < NUM_TASKS; i++) {
        if (0 == wd_tickle[i]) {
            // Egads! The task didn't kick us! Record the task
            // number somewhere persistent, then reset.
        }
        wd_tickle[i] = 0;  // re-arm the counter for the next period
    }
}
How is your system working exactly? I always use a combination of software and hardware watchdogs. Let me explain...
My example assumes you're working with a preemptive real-time kernel and you have watchdog support in your CPU/microcontroller. This watchdog will perform a reset if it was not kicked within a certain period of time. You want to check two things:
1) The periodic system timer ("RTOS clock") is running (if not, functions like "sleep" would no longer work and your system is unusable).
2) All threads can run within a reasonable period of time.
My RTOS (www.lieron.be/micror2k) provides the possibility to run code in the RTOS clock interrupt handler. This is the only place where you refresh the hardware watchdog, so you're sure the clock is running all the time (if not the watchdog will reset your system).
In the idle thread (always running at lowest priority), a "software watchdog" is refreshed. This is simply setting a variable to a certain value (e.g. 1000). In the RTOS clock interrupt (where you kick the hardware watchdog), you decrement and check this value. If it reaches 0, it means that the idle thread has not run for 1000 clock ticks and you reboot the system (can be done by looping indefinitely inside the interrupt handler to let the hardware watchdog reboot).
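A minimal sketch of that idle-thread software watchdog (hw_watchdog_kick() is a placeholder for your platform's refresh call):

#define IDLE_WD_RELOAD 1000  /* clock ticks the idle thread may be starved */

extern void hw_watchdog_kick(void);  /* platform-specific refresh (made up) */

static volatile int idle_wd = IDLE_WD_RELOAD;

void idle_thread(void)  /* always runs at the lowest priority */
{
    for (;;)
        idle_wd = IDLE_WD_RELOAD;  /* proves the idle thread got CPU time */
}

void rtos_clock_isr(void)
{
    hw_watchdog_kick();  /* the clock tick itself is still alive */

    if (--idle_wd <= 0) {
        /* The idle thread has been starved for 1000 ticks: spin here
         * and let the hardware watchdog reset the system. */
        for (;;)
            ;
    }
}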
Now for your original question. I assume the system clock keeps running, so it's the software watchdog that resets the system. In the RTOS clock interrupt handler, you can do some "statistics gathering" in case the software watchdog situation occurs. Instead of resetting the system, you can see what thread is running at each clock tick (after the problem occurs) and try to find out what's going on. It's not ideal, but it will help.
Another option is to add several software watchdogs at different priorities. Have the idle thread set VariableA to 1000 and have a (dedicated) medium-priority thread set VariableB. In the RTOS clock interrupt handler, you check both variables. With this information you know whether the looping thread has a priority higher than "medium" or lower than "medium". If you wish, you can add a 3rd or 4th or however many software watchdogs you like. Worst case, add a software watchdog for each priority that's used (it will cost you as many extra threads, though).
