I'd like to list PIDs of user-processes opening my TTY driver, to be able to kill them under some conditions.
How can I get the PID of client user-processes, from my kernel module?
When a user process calls a syscall into your driver, you are running in that user thread's context. Just read current->pid and save it.
When a user process makes a system call into your driver, it does so in the context of the process that issued the call. You should thus be able to use the global current task pointer, i.e.

pid_t mypid = current->pid;
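A minimal sketch of how this might look inside a driver callback (the driver name and pr_info message are made up for illustration; the point is only that current is valid in syscall context):

```c
/* Sketch: reading the caller's PID inside a driver's ioctl handler.
 * "mytty" is a hypothetical driver name. */
#include <linux/fs.h>
#include <linux/sched.h>

static long mytty_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
        pid_t caller = current->pid;  /* PID of the process issuing the syscall */

        pr_info("mytty: ioctl %u from pid %d (%s)\n",
                cmd, caller, current->comm);
        return 0;
}
```

You could save that PID in your per-open state (e.g. filp->private_data) and later send it a signal with kill_pid() when your kill condition triggers.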
Related
Suppose I have a character device driver in Linux that allocates some memory in the kernel to store some state against every open file descriptor.
Some process opens a fd on the driver and through some ioctls the process also has provided initialization parameters for this state.
Now the process forks. All the file descriptors will also be created for the child process.
How will the fd specific state be duplicated? AFAIK do_fork only duplicates the data structure the kernel knows about.
Will the child process have to re-initialize the fd or it will end up sharing the state with the parent process?
Open file description state is not duplicated on fork or dup: the child's descriptor refers to the same struct file, so all such state (including anything your driver attached to it) is shared between parent and child rather than re-initialized.
I was initially trying to use getpid() in my kernel module for OS X/macOS. Is there a way to get the PID (process ID) of the process in whose context my kext is running in the kernel? Is there an existing function or variable that I can use?
To get the PID of the process with which the currently running kernel thread is associated, call the proc_selfpid() function; you'll need to #include <sys/proc.h> in your kext's code to get the prototype. The PID will of course only correspond to a user process if your code is running in the context of some kind of callback for a syscall.
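A short sketch of how a kext might use this (proc_selfname(), also declared in <sys/proc.h>, gives the matching process name; the function name log_caller is made up):

```c
/* Sketch for an XNU kext: log the PID and name of the calling process. */
#include <sys/proc.h>
#include <libkern/libkern.h>

static void log_caller(void)
{
    pid_t pid = proc_selfpid();        /* PID of the current process context */
    char name[32] = {0};

    proc_selfname(name, sizeof(name)); /* short name of that process */
    printf("kext called from pid %d (%s)\n", (int)pid, name);
}
```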
Out of these three steps, is this the right order, or do I need to switch any?
1) Save current state data
2) Turn on kernel mode
3) Determine cause of interrupt
So, let me try to help you figure out the correct order.
Only the kernel can switch a context as only the kernel has access to the necessary data and can for example change the page tables for the other process' address space.
To determine whether to do a context switch or not, the kernel needs to analyse some "inputs". A context switch might be done for example because the timer interrupt fired and the time slice of a process is over or because the process started doing some IO.
Only the kernel can save the state of a user process, because the process would clobber its own state in the act of trying to store it. The kernel, however, knows that whenever it is running, the user process is currently interrupted (e.g. because of a hardware interrupt, or because the process voluntarily entered the kernel, say for a system call).
The current context of a process is saved partly by the hardware (processor) and the rest by the software (kernel).
Control is then transferred from the user process to the kernel: the hardware loads a new eip, esp and the other saved kernel context from the Task State Segment (TSS).
Then, based on the interrupt or trap number, the request is dispatched to the appropriate handler.
I'm working in kernel space and I want to find out when an application has stopped or crashed.
When I receive an ioctl call, I can get the struct task_struct where I have a lot of information regarding the process of the application.
My problem is that I want to periodically check if the process is still alive or better yet, to have some asynchronous call when the process is killed.
My test environment was on QEMU and after a while in the application I've run a system("kill -9 pid"). Meanwhile in the kernel I've had a periodical check on task_struct with:
volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */
static inline int pid_alive(struct task_struct *p)
The problem is that my task_struct pointer seems to be unmodified. Normally I would say that each process has a task_struct and that it of course corresponds to the process state. Otherwise I don't see the point of "volatile long state".
What am I missing? Is it because I'm testing on QEMU, or because I'm checking the task_struct in a while(1) loop with an msleep of 100? Any help would be appreciated.
I would be partially happy if I could receive the pid of the application when the app is closing the file descriptor of the module ("/dev/driver").
Thanks!
You cannot hold on to the task_struct pointer and refer to it later. If the process has been killed, the pointer is no longer valid - that task_struct is gone. You also should not be using PID values within the kernel to refer to processes. PID values are re-used, so you might not even be talking about the same process.
Your driver can supply a .release callback, which will be called when your driver file is closed, including if the process is terminated or killed. You can access current from this callback. Note that if a process opens your file and then forks, the process calling .release could well be different from the process that called .open. Your driver must be able to handle this.
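A sketch of such a .release callback (driver name "mydrv" is made up; the shape of the file_operations hookup is standard):

```c
/* Sketch: .release runs when the LAST reference to the open file goes
 * away -- including when the owning process exits or is killed. */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/slab.h>

static int mydrv_release(struct inode *inode, struct file *filp)
{
        /* current is whichever task dropped the last reference; after a
         * fork this may not be the task that called open(). */
        pr_info("mydrv: released by pid %d (%s)\n",
                current->pid, current->comm);

        kfree(filp->private_data);   /* free per-open state, if any */
        return 0;
}

static const struct file_operations mydrv_fops = {
        .owner   = THIS_MODULE,
        .release = mydrv_release,
        /* .open, .unlocked_ioctl, ... */
};
```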
It has been a long time since I mucked around inside the kernel. It seems to me if your process actually dies, then your best bet would be to put hooks into the code that tears down processes. If it doesn't die but gets caught in a non-responsive loop, you'd probably be better off causing an application level core dump.
A solution that worked beautifully in my operating systems homework is to use a kprobe to detect when do_exit is called. What's beautiful is that do_exit will always be called, no matter how the process is closed. I think even in the case of a kernel oops this one will still be called.
You should also hook into _do_fork, just in case.
Oh, and look at the .release callback mentioned in the other answer (do note that dup2 and fork will cause unexpected behavior -- you will only be notified when the last of the copies created by these two is closed).
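A minimal version of the kprobe approach might look like this (module and handler names are made up; register_kprobe() attaches by symbol name, so do_exit need not be exported):

```c
/* Sketch: a kprobe that fires whenever any task enters do_exit(). */
#include <linux/kprobes.h>
#include <linux/module.h>
#include <linux/sched.h>

static int exitwatch_pre(struct kprobe *p, struct pt_regs *regs)
{
        pr_info("do_exit: pid %d (%s) is exiting\n",
                current->pid, current->comm);
        return 0;
}

static struct kprobe kp = {
        .symbol_name = "do_exit",
        .pre_handler = exitwatch_pre,
};

static int __init exitwatch_init(void)
{
        return register_kprobe(&kp);
}

static void __exit exitwatch_exit(void)
{
        unregister_kprobe(&kp);
}

module_init(exitwatch_init);
module_exit(exitwatch_exit);
MODULE_LICENSE("GPL");
```

In the handler you would compare current->pid (or current->comm) against the process you care about before acting.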
I'm wondering if there is a hook that could be used in a Linux Kernel Module that is fired when a user space application/process is killed ?
You could first register for a notifier chain within your kernel module.
Inside get_signal_to_deliver() (kernel/signal.c), any process which has just (this being a relative term, IMHO) been killed has its PF_SIGNALED flag set. Here you could check the name of the current process via its comm field, like so:
char tcomm[sizeof(current->comm)];
get_task_comm(tcomm, current);
If it is indeed the process under question, you could just fire the notification chain which will awaken your module which has been waiting on that chain.
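A sketch of the module side of that notifier chain (the chain name, notifier callback, and the convention of passing the PID as the action value are all made up for illustration; the hook in signal.c would call blocking_notifier_call_chain() on this chain):

```c
/* Sketch: a blocking notifier chain the module listens on; a hook
 * elsewhere in the kernel fires it when the watched process is killed. */
#include <linux/module.h>
#include <linux/notifier.h>

static BLOCKING_NOTIFIER_HEAD(proc_killed_chain);

static int on_proc_killed(struct notifier_block *nb,
                          unsigned long pid, void *data)
{
        pr_info("process %lu was signalled\n", pid);
        return NOTIFY_OK;
}

static struct notifier_block killed_nb = {
        .notifier_call = on_proc_killed,
};

static int __init mymod_init(void)
{
        return blocking_notifier_chain_register(&proc_killed_chain, &killed_nb);
}

static void __exit mymod_exit(void)
{
        blocking_notifier_chain_unregister(&proc_killed_chain, &killed_nb);
}

module_init(mymod_init);
module_exit(mymod_exit);
MODULE_LICENSE("GPL");
```

Note that this requires patching the kernel itself to fire the chain, which is why the kprobe or .release approaches from the earlier answers are usually less invasive.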