I am currently developing a Linux kernel module.
My module has a callback for the net_dev_xmit tracepoint event.
One of the parameters of this tracepoint is the struct sk_buff *skb.
My question is the following: how can one retrieve the task_struct of the
thread that generated the packet the kernel is about to emit?
Is there any guarantee that this thread is the "current" one?
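For context, there is no such guarantee in general: net_dev_xmit fires in the context of whoever is driving the transmit path, which is the sending task only when the skb is transmitted synchronously from that task's context. If the packet was queued (qdisc, softirq), current may be ksoftirqd or an unrelated task. A minimal probe sketch, assuming the tracepoint is exported to modules, that logs current at transmit time:

```c
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <trace/events/net.h>

/* Probe signature must match the tracepoint's TP_PROTO, with a
 * leading void *data argument. Note: "current" here is whoever is
 * running the transmit path, not necessarily the packet's originator. */
static void probe_net_dev_xmit(void *data, struct sk_buff *skb,
			       int rc, struct net_device *dev,
			       unsigned int skb_len)
{
	pr_info("net_dev_xmit on %s by pid %d (%s)\n",
		dev->name, current->pid, current->comm);
}

static int __init probe_mod_init(void)
{
	return register_trace_net_dev_xmit(probe_net_dev_xmit, NULL);
}

static void __exit probe_mod_exit(void)
{
	unregister_trace_net_dev_xmit(probe_net_dev_xmit, NULL);
	/* Wait for in-flight probe calls before the module text goes away. */
	tracepoint_synchronize_unregister();
}

module_init(probe_mod_init);
module_exit(probe_mod_exit);
MODULE_LICENSE("GPL");
```

If you need the originating task reliably, a better anchor is usually skb->sk (the owning socket), captured at a point earlier in the stack where you are still in the sender's context.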
In the Linux kernel, I need to periodically check the state of a switch chip by calling mdiobus_read() (drivers/net/phy/mdio_bus.c).
I tried to use the Linux timer API (add_timer()), but I found out that the callback is called in interrupt context. Indeed, there is a warning in the comment on mdiobus_read():
NOTE: MUST NOT be called from interrupt context,
because the bus read/write functions may wait for an interrupt
to conclude the operation.
So, how can I periodically call mdiobus_read()?
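One common approach is a delayed workqueue: the work callback runs in process context, where sleeping, and therefore mdiobus_read(), is allowed. A hedged sketch; the mii_bus pointer, PHY address, register, and interval below are hypothetical placeholders:

```c
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/phy.h>

#define POLL_INTERVAL_MS 1000
#define PHY_ADDR 0	/* hypothetical PHY address on the bus */
#define PHY_REG  1	/* e.g. MII_BMSR */

/* Hypothetical: obtained elsewhere, e.g. via of_mdio_find_bus(). */
static struct mii_bus *bus;
static struct delayed_work poll_work;

static void poll_fn(struct work_struct *work)
{
	/* Process context: the MDIO read may sleep here. */
	int val = mdiobus_read(bus, PHY_ADDR, PHY_REG);

	if (val >= 0)
		pr_info("phy reg %d = 0x%04x\n", PHY_REG, val);

	/* Re-arm for the next poll. */
	schedule_delayed_work(&poll_work, msecs_to_jiffies(POLL_INTERVAL_MS));
}

static int __init poll_init(void)
{
	INIT_DELAYED_WORK(&poll_work, poll_fn);
	schedule_delayed_work(&poll_work, msecs_to_jiffies(POLL_INTERVAL_MS));
	return 0;
}

static void __exit poll_exit(void)
{
	cancel_delayed_work_sync(&poll_work);
}

module_init(poll_init);
module_exit(poll_exit);
MODULE_LICENSE("GPL");
```

Unlike a timer callback, the work item runs on a kernel worker thread, so the "MUST NOT be called from interrupt context" warning does not apply.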
I want to call IShellFolder.EnumObjects from a background thread to avoid freezing the GUI thread in case enumeration takes an extensive amount of time.
The IShellFolder interface is obtained (and used) in the GUI thread. What is the proper way to pass its pointer to the background thread?
Marshal the interface pointer using CoMarshalInterThreadInterfaceInStream?
Directly pass the interface pointer to another thread?
What about the returned PIDLs - can they be safely exchanged between the threads?
Both threads use the STA threading model.
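Since both threads are STAs, the raw interface pointer must not be used directly from the worker thread; it has to be marshaled, e.g. with CoMarshalInterThreadInterfaceInStream / CoGetInterfaceAndReleaseStream (or the global interface table). PIDLs, by contrast, are plain CoTaskMem-allocated data, not COM objects, so they can be passed between threads freely and released with ILFree. A sketch with error handling trimmed and thread creation left out:

```cpp
#include <windows.h>
#include <shlobj.h>

// GUI thread: marshal the IShellFolder you already hold into a stream.
// The returned stream pointer is what you hand to the worker thread.
IStream* MarshalFolder(IShellFolder* folder)
{
    IStream* stream = nullptr;
    HRESULT hr = CoMarshalInterThreadInterfaceInStream(IID_IShellFolder,
                                                       folder, &stream);
    return SUCCEEDED(hr) ? stream : nullptr;
}

// Worker thread entry point; `param` is the IStream* from above.
DWORD WINAPI EnumWorker(LPVOID param)
{
    // Each STA thread must initialize COM for itself.
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    IShellFolder* folder = nullptr;
    HRESULT hr = CoGetInterfaceAndReleaseStream(
        static_cast<IStream*>(param), IID_IShellFolder,
        reinterpret_cast<void**>(&folder));
    if (SUCCEEDED(hr)) {
        IEnumIDList* e = nullptr;
        if (SUCCEEDED(folder->EnumObjects(nullptr,
                SHCONTF_FOLDERS | SHCONTF_NONFOLDERS, &e)) && e) {
            // Fetch PIDLs with e->Next(); they are plain memory blocks
            // and may be handed back to the GUI thread; free with ILFree().
            e->Release();
        }
        folder->Release();
    }
    CoUninitialize();
    return 0;
}
```

Passing the raw pointer directly between STA threads skips the proxy COM needs to keep the object on its home apartment, which is why the marshaling step is required.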
Note that I am using Python, but this could apply to any other glib bindings.
I have a class that sets up several socket connections via glib.io_add_watch() with a callback method called foo(). In addition, I have a glib.idle_add() callback to a method called bar(). foo() creates or updates a list (a class member) of elements that can be any value, including None. bar() removes any None items from that list -- we're done with those, we no longer care. In effect, it cleans things up.
Does glib guarantee that only one callback will be called at any one time per thread?
If I were to run this code so that foo() is in thread one and bar() in thread two, there would be a race condition. I assume a simple mutex would solve this, but is there a more efficient way to do it?
Callbacks added via g_io_add_watch and g_idle_add are executed in the thread running the main loop, regardless of which thread they were added from.
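A minimal PyGObject sketch of that guarantee (the socket details are hypothetical): both callbacks are invoked by whichever thread iterates the main loop, one at a time, so the shared list needs no lock as long as only these callbacks touch it:

```python
# Sketch assuming PyGObject is installed; socket setup is illustrative only.
import socket

from gi.repository import GLib

items = []  # shared list; only touched from main-loop callbacks
sock = socket.socket()  # hypothetical, would be connected in real code


def foo(fd, condition):
    """I/O watch callback: runs in the main-loop thread."""
    data = sock.recv(4096)
    items.append(data or None)
    return True  # keep the watch installed


def bar():
    """Idle callback: also runs in the main-loop thread, never
    concurrently with foo(), so no mutex is needed."""
    items[:] = [i for i in items if i is not None]
    return True  # keep being called while the loop is idle


GLib.io_add_watch(sock.fileno(), GLib.IO_IN, foo)
GLib.idle_add(bar)
GLib.MainLoop().run()
```

A mutex only becomes necessary if some other thread mutates the list directly instead of handing work to the main loop (e.g. via GLib.idle_add).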
I want to use the inverted-call model of ioctl. I mean I want to wake some user-space thread when a particular activity is detected by the driver. For example:
1. I register a callback for a particular interrupt in my kernel-mode driver.
2. Whenever I get an interrupt, I want to schedule some user-space thread which the user had registered using an ioctl.
Can I use a DPC, APC, or IRP to do so? I do know that one should not/cannot defer driver work to user space. What I want is to run some independent activities in user space when a particular hardware event happens.
Thanks
Creating user-mode threads from a driver is really bad practice, and you can't simply transfer control from kernel mode to user mode. You must create worker threads in the user application and wait in those threads for an event. There are two main approaches to waiting.
1) You can wait on an event whose handle you pass to the driver in an ioctl. At some point the driver signals the event, and the thread wakes up and processes it. This is the main and simplest approach.
2) You can post an ioctl synchronously and have the driver pend the IRP -> the thread blocks in the DeviceIoControl call. When the event occurs, the driver completes the IRP, and the thread wakes up and does its processing.
Whenever I get an interrupt, I want to schedule some user space threads which user had registered using ioctl.
You must first get down to a safe IRQL (below DISPATCH_LEVEL): interrupt -> DPC pushes into a queue -> worker thread, because, for example, you can't signal an event at high IRQL.
Read this:
http://www.osronline.com/article.cfm?id=108
and Walter Oney's book.
You don't need to queue a work item or do anything too fancy with posting events down. The scheduler is callable at DISPATCH_LEVEL, so a DPC is sufficient for signalling anyone.
Just use a normal inverted call:
1) App sends down an IOCTL (if more than one thread must be signalled, it must use FILE_FLAG_OVERLAPPED and async I/O).
2) Driver puts the resulting IRP into a driver-managed queue after setting cancel routines, etc., marks the IRP pending, and returns STATUS_PENDING.
3) Interrupt arrives... Queue a DPC from your ISR (or if this is usb or some other stack, you may already be at DISPATCH_LEVEL).
4) Remove the request from the queue and call IoCompleteRequest.
Use KMDF for steps 2 and 4. There's a lot of stuff you can screw up with queuing IRPs, so it's best to use well-tested code for that.
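The user-mode side of the inverted call can be sketched like this (the device name and IOCTL code are made up for illustration):

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Hypothetical IOCTL code; must match the driver's definition. */
#define IOCTL_MYDEV_WAIT_EVENT \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    /* FILE_FLAG_OVERLAPPED so wait requests can stay outstanding
     * (and several can be in flight if more threads need signalling). */
    HANDLE dev = CreateFileW(L"\\\\.\\MyDevice",
                             GENERIC_READ | GENERIC_WRITE, 0, NULL,
                             OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (dev == INVALID_HANDLE_VALUE)
        return 1;

    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);
    BYTE out[64];
    DWORD got = 0;

    for (;;) {
        /* The driver pends this IRP (STATUS_PENDING) until its DPC,
         * run after the interrupt, completes it. */
        if (!DeviceIoControl(dev, IOCTL_MYDEV_WAIT_EVENT, NULL, 0,
                             out, sizeof(out), NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            break;

        /* Blocks here until the driver calls IoCompleteRequest. */
        if (!GetOverlappedResult(dev, &ov, &got, TRUE))
            break;
        printf("hardware event, %lu bytes of data\n", got);
    }
    CloseHandle(ov.hEvent);
    CloseHandle(dev);
    return 0;
}
```

The loop immediately re-sends the IOCTL after each completion so there is always a request parked in the driver's queue, ready for the next interrupt.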
Does GCD assure that all the blocks running on the same queue always run on the same thread?
If I create a dispatch queue and dispatch_async blocks to this queue, do all the blocks dispatched to that queue run on the same thread?
Since I'm working on a project that uses the ABAddressBook framework, and the documentation says that ABAddressBookRef and ABRecordRef can't be used between threads, I wonder: if all the blocks in the queue are on the same thread, I could create only one ABAddressBookRef for that queue.
The only queue bound to a specific thread is the main queue, which is bound to the main (UI) thread.
If the only requirement is not to concurrently access the object, using a serial queue should work fine.
If the object instead relies on thread-local state, you will have to force all manipulation to a specific thread. The easiest would be to target your serial queue to the main thread, but that only works if you know you're not going to be stuck for long in the block; otherwise, you will hang your UI. In that case, you'll have to create your own handler thread and send the work over there.
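The serial-queue approach can be sketched with the C libdispatch API; here a plain counter stands in for the shared object (a real ABAddressBookRef would additionally need the thread-affinity caveat above checked):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    /* One serial queue guarding one shared resource. Blocks run one
     * at a time, but GCD may use different worker threads for them. */
    dispatch_queue_t q = dispatch_queue_create("com.example.shared",
                                               DISPATCH_QUEUE_SERIAL);
    __block int counter = 0; /* stands in for the shared object's state */

    for (int i = 0; i < 100; i++) {
        dispatch_async(q, ^{
            counter++; /* serialized: no two blocks touch it at once */
        });
    }

    /* Synchronous block drains the queue before we read the result. */
    dispatch_sync(q, ^{
        printf("counter = %d\n", counter);
    });
    dispatch_release(q);
    return 0;
}
```

The serialization guarantee is per queue, not per thread, which is exactly why it is enough for "no concurrent access" but not for "always the same thread".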
From the dispatch_queue_create(3) Mac OS X manual page:
Queues are not bound to any specific thread of execution
There is no guarantee that all blocks sent to a serial queue are executed on the same thread. And I couldn't find any source code combining ABAddressBookCreate with a GCD serial queue...
When the documentation says that something can't be used between threads, it only means that the API can't be used concurrently from different threads at the same time. The API itself doesn't remember anything special about the calling thread or force it to be the same each time.
Before GCD, you would serialize access to a shared resource with @synchronized. As you suggest yourself, creating a serial queue for this is another way of serializing access to the resource, and it is more efficient.