one tasklet used by different drivers - linux-kernel

Is it possible to define a single tasklet in one module and "export" it for use by others? I wonder if this is theoretically possible, and what about synchronization and ordered access to the tasklet? Or is the whole idea just a bad one?
Thanks.

Sure, there's no reason you couldn't. I can't see why it would be a good idea, but there's nothing stopping you. The tasklet framework makes certain guarantees, one of which is that the tasklet will never run on more than one CPU at a time, so there's no real synchronization issue.
However, there is also no "ordered access" to the tasklet in the sense that you can queue up work for it. If you call tasklet_schedule while the tasklet is already running, the tasklet will be executed again, but its execution may be deferred to the ksoftirqd thread.
You should probably read the LDD3 section on tasklets at http://www.makelinux.net/ldd3/chp-7-sect-5.shtml
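Mechanically, "exporting" a tasklet is just exporting its symbol like any other. A minimal sketch, assuming two modules and the classic (pre-5.9) tasklet API; all the names here are made up for illustration:

```c
/* module_a.c -- defines the tasklet and exports it (classic pre-5.9 API) */
#include <linux/module.h>
#include <linux/interrupt.h>

static void shared_func(unsigned long data)
{
        pr_info("shared tasklet ran\n");
}

DECLARE_TASKLET(shared_tasklet, shared_func, 0);
EXPORT_SYMBOL_GPL(shared_tasklet);

/* module_b.c -- any other module can now schedule the same tasklet */
extern struct tasklet_struct shared_tasklet;   /* normally declared in a shared header */

static void on_some_event(void)
{
        tasklet_schedule(&shared_tasklet);
}
```

Note that on kernels 5.9 and later the handler signature changed to take a struct tasklet_struct pointer and DECLARE_TASKLET takes the callback directly; the export mechanism itself is unchanged.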

Related

What is the use-case for TryEnterCriticalSection?

I've been using Windows CRITICAL_SECTION since the 1990s and I've been aware of the TryEnterCriticalSection function since it first appeared. I understand that it's supposed to help me avoid a context switch and all that.
But it just occurred to me that I have never used it. Not once.
Nor have I ever felt I needed to use it. In fact, I can't think of a situation in which I would.
Generally when I need to get an exclusive lock on something, I need that lock and I need it now. I can't put it off until later, and I certainly can't just say, "oh well, I won't update that data after all". So I need EnterCriticalSection, not TryEnterCriticalSection.
So what exactly is the use case for TryEnterCriticalSection?
I've Googled this, of course. I've found plenty of quick descriptions of how to use it, but almost no real-world examples of why. I did find this example from Intel that, frankly, doesn't help much:
CRITICAL_SECTION cs;
void threadfoo()
{
    while (TryEnterCriticalSection(&cs) == FALSE)
    {
        // some useful work
    }
    // Critical Section of Code
    LeaveCriticalSection(&cs);
    // other work
}
What exactly is a scenario in which I can do "some useful work" while I'm waiting for my lock? I'd love to avoid thread-contention but in my code, by the time I need the critical section, I've already been forced to do all that "useful work" in order to get the values that I'm updating in shared data (for which I need the critical section in the first place).
Does anyone have a real-world example?
As an example you might have multiple threads that each produce a high volume of messages (events of some sort) that all need to go on a shared queue.
Since there's going to be frequent contention on the lock on the shared queue, each thread can have a local queue and then, whenever the TryEnterCriticalSection call succeeds for the current thread, it copies everything it has in its local queue to the shared one and releases the CS again.
In C++11 there is std::lock, which employs a deadlock-avoidance algorithm. In C++17 this was elaborated into the std::scoped_lock class.
The algorithm tries to lock the mutexes in one order, then in another, until it succeeds, and it takes a try_lock operation to implement this approach.
Having a try_lock method is what the C++ Lockable named requirement calls for, whereas mutexes with only lock and unlock are merely BasicLockable.
So if you build a C++ mutex on top of CRITICAL_SECTION and want it to satisfy Lockable, or if you want to implement deadlock avoidance directly on CRITICAL_SECTION, you'll need TryEnterCriticalSection.
Additionally, you can implement a timed mutex on top of TryEnterCriticalSection: do a few iterations of TryEnterCriticalSection, then call Sleep with an increasing delay, until TryEnterCriticalSection succeeds or the deadline expires. It is not a very good idea, though. In practice, timed mutexes based on user-space Windows synchronization objects are implemented on SleepConditionVariableSRW, SleepConditionVariableCS, or WaitOnAddress.
Because Windows critical sections are recursive, TryEnterCriticalSection also lets a thread check whether it already owns a CS without any risk of stalling.
Another case: if you have a thread that occasionally needs to perform some locked work but usually does something else, it can call TryEnterCriticalSection and only perform the locked work when it actually gets the lock.

How to update the configuration of an apache nifi processor without stopping it?

Good morning. I'm using Apache NiFi and I wonder if anyone knows a way to change the configuration of a processor without having to stop it, or some viable alternative that prevents loss of information.
Thanks
The configuration of a processor cannot be changed while the processor is running, and this is intentional. It provides a guarantee to the developer of a processor: in the onTrigger method, all the properties are guaranteed to have the same values that passed validation when the processor was started.
If you can describe your use-case more we might be able to come up with alternative approaches.
There is an alternative solution: duplicate the processor and update the duplicate's configuration to the desired one. Connect the duplicate's output to the next processor, then stop the original, reconnect its queued connection to the duplicate, and turn the duplicate on.
One way or another the data flow has to be interrupted, but this way the changes that take more time to make in the processor can be made in the duplicate first, reducing the impact of the interruption as much as possible.
regards

The best way to store restart information in spring-batch readers, processors, writers and tasklets

Currently I'm designing my first batch application with Spring Batch, using several tasklets and my own readers, writers, and processors, primarily doing input-data checks and TIFF-file handling (split, merge, etc.) depending on the input data, i.e. document metadata with the accompanying image files. I want to store and use restart information persisted in the batch_step_execution_context in the Spring Batch job repository. Unfortunately I did not find many examples of where and how best to do this. I want to make the application restartable so that it can continue, after error correction, at the point where it left off.
What I have done so far, checking in each case whether the step information was persisted after an exception:
Implemented ItemStream in a CustomItemWriter, using update() and open() to store and restore information to/from the step execution context, e.g. executionContext.putLong("count", count). Works well.
Used StepListeners and found that context information written in beforeStep() was persisted. Also works.
I would appreciate help that gives or points to some examples, a "restart tutorial", or sources explaining how to do this in readers, processors, writers, and tasklets. Does it even make sense in readers and processors? I'm aware that handling restart information may also depend on the commit interval, restartable flags, etc.
Remark: maybe I need a deeper understanding of Spring Batch concepts beyond what I have read and tried so far; hints in that direction are also welcome. I consider myself intermediate level, lacking the details needed to let my application use some of the comforts of Spring Batch.

Cocoa lock that does not use CPU power

I need a lock in Cocoa that does not eat a whole CPU while waiting when the lock is already held somewhere else, i.e. something that blocks in the kernel scheduler.
It sounds like you're trying to find a lock that's not a spin lock. EVERY lock must use some CPU, or else it couldn't function. :-)
NSLock is the most obvious in Cocoa. It has a simple -lock, -unlock interface and uses pthread mutexes in its implementation. There are a number of more sophisticated locks in Cocoa for more specific needs: NSRecursiveLock, NSCondition, NSDistributedLock, etc.
There is also the @synchronized directive, which is even simpler to use but has some additional overhead.
GCD also has a counted semaphore object if you're looking for something like that.
My recommendation is that, instead of locks, you look at using NSOperations and an NSOperationQueue where you -setMaxConcurrentOperationCount: to 1 to access the shared resource. By using a single-wide operation queue, you can guarantee that only one thing at a time will make use of a resource, while still allowing for multiple threads to do so.
This avoids the need for locks, and since everything is done in user space, can provide much better performance. I've replaced almost all of my locking around shared resources with this technique, and have been very pleased with the results.
Do you mean "lock" as in a mutex between threads, or a mutex between processes, or a mutex between disparate resources on a network, or...?
If it's between threads, you use NSLock. If it's between processes, then you can use POSIX named semaphores.
If you really want kernel locks and know what you are doing, you can use
<libkern/OSAtomic.h>
Be sure to always use the "barrier" variants. These are faster, and much more dangerous, than POSIX locks. If you can target 10.6 with new code, then GCD is a great way to go. There is a great podcast on using the kernel synchronization primitives at: http://www.mac-developer-network.com/shows/podcasts/lnc/lnc032/

OS X inter thread communication question

I am developing a multi-threaded application in Cocoa. The main thread takes values from the user, and when a button is clicked I invoke a secondary thread in which a long calculation takes place. Now from this thread I have to return the output of every step of the calculation to the main thread. I want to periodically send data from one thread to the other. I can't find any simple example that does this. Any ideas?
There are a number of ways to do this, in rough order of complexity (easiest first):
use NSObject's performSelectorOnMainThread:withObject:waitUntilDone: which is pretty self explanatory.
use performSelector:onThread:withObject:waitUntilDone:, which will let you go the other way
use an NSNotification (posted through NSNotificationCenter), though you can easily run into a race condition if you're not careful
Use NSPorts to send data back and forth
Check out the doc that Abizer mentioned for details on all of these.
performSelectorOnMainThread:withObject:waitUntilDone: is often the easiest way to update the UI with a background thread's progress. You could also create your own storage area that's safe to access between threads using NSLock or a similar mechanism, or even use distributed objects (which also works between processes or over a network).
Then there's NSOperationQueue and NSOperation, which do help a lot to simplify multi-threaded programming, although a lot of programmers have been avoiding them since they could cause a crash in certain circumstances under Leopard.
Have a look at the Apple docs for this.
You may need to create an ADC member account, but this is free
Multi-threaded Cocoa Programs
