I am planning to use the State Machine Workflow of Windows Workflow Foundation.
The state machine will receive events from two separate threads; based on its current state and the incoming event, it will change state and execute actions.
My question is: is the Windows Workflow state machine thread safe? That is, does it guarantee the correct state change when two threads access it at the same time?
Workflow execution follows single-threaded apartment conventions; that is, one particular instance of a workflow can only be executed by one thread at a time within any runtime. This is by design.
The workflow runtime uses an internal scheduling queue to execute operations for workflow instances. Two threads invoking operations on the same workflow instance are first serialized onto the scheduler queue, then invoked in sequence, either by a new thread scheduled by the runtime (default scheduling) or by the thread donated by the calling context for each operation (manual scheduling).
When using the persistence service, the workflow runtime also ensures that the database version is synchronized: a workflow runtime running in another process or on another machine cannot load a workflow instance from persistence while it is open in another runtime.
This means that you don't have to be concerned with thread safety in code executing within a workflow model (e.g. you don't have to lock property setters), and you don't have to be concerned with race conditions.
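The scheduler-queue behaviour described above can be sketched with an analogous pattern, here in Java purely for illustration: a single-threaded executor stands in for the runtime's per-instance queue, and all names are made up, not part of the Workflow Foundation API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerializedInstance {
    // One single-threaded queue per "workflow instance": operations posted
    // from any thread run strictly one at a time, in submission order.
    private final ExecutorService queue = Executors.newSingleThreadExecutor();
    private int state = 0; // no lock needed: only the queue thread touches it

    public void post(Runnable op) {
        queue.submit(op);
    }

    public static void main(String[] args) throws InterruptedException {
        SerializedInstance wf = new SerializedInstance();
        for (int i = 0; i < 1000; i++) {
            wf.post(() -> wf.state++); // safe even if called from many threads
        }
        wf.queue.shutdown();
        wf.queue.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(wf.state); // prints 1000: no lost updates
    }
}
```

Because everything funnels through one thread, the unsynchronized `state++` is safe, which is the same reason workflow code doesn't need its own locking.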
What's your interpretation of this passage from the Microsoft documentation for (for example) the State activity class in System.Workflow.Activities:
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Similar passages appear on many relevant classes. My inference is "no", it is not thread safe for the usage you're intending.
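Given that guarantee, code calling instance members from two threads has to do its own locking. A minimal sketch of the pattern, in Java for illustration (the list stands in for any object whose instance members are not thread safe; all names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class GuardedAccess {
    private final Object gate = new Object();
    // Stand-in for any object whose instance members are not thread safe.
    private final List<String> events = new ArrayList<>();

    public void raise(String event) {
        synchronized (gate) { // both producer threads funnel through one lock
            events.add(event);
        }
    }

    public int count() {
        synchronized (gate) { return events.size(); }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedAccess g = new GuardedAccess();
        Runnable producer = () -> { for (int i = 0; i < 10000; i++) g.raise("e"); };
        Thread a = new Thread(producer), b = new Thread(producer);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(g.count()); // prints 20000: no lost adds
    }
}
```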
The Windows Antimalware Scan Interface (AMSI) contains abstractions which can be used to call the currently active virus scanner in Windows:
https://learn.microsoft.com/en-us/windows/desktop/amsi/antimalware-scan-interface-functions
There are two functions related to initialization:
AmsiInitialize
AmsiUninitialize
AmsiInitialize returns "a handle of type HAMSICONTEXT that must be passed to all subsequent calls to the AMSI API".
After initialization is complete, I can use AmsiScanBuffer to scan a buffer for malware.
My question:
Can I use the same context concurrently from many threads in my application, or do I need to create one per thread from which I'm going to call the methods?
Reading the documentation for AmsiUninitialize, it tells me that "when the app is finished with the AMSI API it must call AmsiUninitialize". This tells me that the context can be used for many calls, but it doesn't tell me anything about thread safety or concurrency.
Generally, API calls that are not specifically marked as thread-safe are not (this is usually true for any library). The easiest solution is to open an AMSI handle per thread.
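A handle-per-thread policy can be sketched generically; here in Java with ThreadLocal, since AMSI itself is a native Windows API. The integer "handle" below is only a stand-in for an HAMSICONTEXT, and every name is illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadHandle {
    // Stand-in for handing out native context handles (e.g. HAMSICONTEXT).
    static final AtomicInteger nextId = new AtomicInteger();

    // Each thread lazily initializes its own handle on first use,
    // so no handle is ever shared between threads.
    static final ThreadLocal<Integer> context =
        ThreadLocal.withInitial(nextId::incrementAndGet);

    static int scan(byte[] buffer) {
        int handle = context.get(); // this thread's private handle
        // ... would call the scan function with `handle` here ...
        return handle;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] seen = new int[2];
        Thread t1 = new Thread(() -> seen[0] = scan(new byte[0]));
        Thread t2 = new Thread(() -> seen[1] = scan(new byte[0]));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(seen[0] != seen[1]); // prints true: distinct handles
    }
}
```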
(P.S. This only works with Windows Defender as far as I've tested.)
I want to use the spring state machine as the main processor of my application. I want to start the application, do the bootstrapping as an action of the initial state and tear down as an action of the end state. In the middle the application should wait for events.
So, I started by doing as shown in
http://docs.spring.io/spring-statemachine/docs/current/reference/html/developing-your-first-spring-statemachine-application.html
Everything works as described except that after exiting the run method the entire application stops and does not listen to further events.
How can this behavior be achieved? Is there a blueprint/template available? I didn't find one. Similar to a web component listening for requests, I want the state machine to wait for configured events. My application runs on a Raspberry Pi, and those events are triggered by external actions like "button pressed" or "a connected device delivers a measurement result".
Next to my main question, I asked myself whether Spring State Machine will work correctly in my environment: I use Pi4J for hardware interaction. This framework usually uses its own threads for watching for hardware events. How will concurrent events be treated? Are actions always run synchronously in the thread triggering the event, or is there a separate thread pool?
Thanks,
Steve
This is a normal Spring Boot question, as the app will exit if nothing is keeping it alive. With Boot apps you usually have a web layer, and a thread from there keeps the app alive.
The state machine docs have more info on how to configure the executor to be threaded. By default, execution happens in the same thread.
Pi4J is a good question, as I'm not that familiar with its threading. I know that many bugs have been fixed; it used to create a lot of threads the user had no control over, and that is probably still partly the case. There has been some development on Pi4J to allow the user to define thread factories, which in theory could also be passed to the Spring TaskExecutor used by the state machine.
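One way to keep the app alive and move event handling off the hardware threads is to hand events to a dedicated non-daemon thread. A minimal sketch with no Spring or Pi4J dependency (all names and event strings are made up):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventPump {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Hardware callbacks (running on Pi4J's own threads) only enqueue;
    // they never touch the state machine directly.
    public void onHardwareEvent(String e) {
        events.add(e);
    }

    public static void main(String[] args) throws InterruptedException {
        EventPump pump = new EventPump();
        // A non-daemon thread keeps the JVM alive and processes events
        // one at a time, so concurrent hardware events are serialized.
        Thread machine = new Thread(() -> {
            try {
                while (true) {
                    String e = pump.events.take(); // blocks while idle
                    if (e.equals("SHUTDOWN")) break;
                    System.out.println("handled " + e);
                }
            } catch (InterruptedException ignored) { }
        });
        machine.start();
        pump.onHardwareEvent("BUTTON_PRESSED"); // as if from a Pi4J thread
        pump.onHardwareEvent("SHUTDOWN");
        machine.join();
    }
}
```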
A shared resource is used by two applications, process A and process B. To avoid a race condition, we decided to disable context switching while executing the portion of code dealing with the shared resource, and to re-enable process switching after exiting that portion.
But I don't know how to prevent switching to another process while executing the shared-resource part, and then re-enable process switching after exiting it.
Or is there a better method to avoid the race condition?
Regards,
Learner
But I don't know how to prevent switching to another process while executing the shared-resource part, and then re-enable process switching after exiting it.
You can't do this directly. You can do what you want with kernel help, for example by waiting on a mutex, or by using one of the other ways to do IPC (interprocess communication).
If that's not "good enough", you could even make your own kernel driver that has the semantics you want. The kernel can move processes between "sleeping" and "running". But you should have good reasons why existing methods don't work before thinking about writing your own kernel driver.
Or is there a better method to avoid the race condition?
Avoiding race conditions is all about trade-offs. The kernel has many different IPC methods, each with different characteristics. Get a good book on IPC, and look into how things like Postgres scale to many processors.
For all user-space applications, and the vast majority of kernel code, you cannot disable context switching. The reason is that context switching is not the responsibility of the application but of the operating system.
In the scenario you mentioned, you should use a mutex. All processes must follow the convention that before accessing the shared resource they acquire the mutex, and after they are done accessing it they release the mutex.
Let's say an application accessing the shared resource has acquired the mutex and is doing some processing, and the operating system performs a context switch, stopping the application from processing the shared resource. The OS can schedule other processes that want to access the shared resource, but they will be in a waiting state, waiting for the mutex to be released, and none of them will touch the shared resource. After a certain number of context switches, the OS will again schedule the original application, which will continue processing the shared resource. This continues until the original application finally releases the mutex. Then some other process will start accessing the shared resource in an orderly fashion, as designed.
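Cross-process mutual exclusion can be sketched in Java, for example, with a file lock; the OS arbitrates between processes regardless of context switches. The lock-file path here is hypothetical, and a POSIX mutex in shared memory or a named semaphore would serve the same purpose:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CrossProcessMutex {
    public static void main(String[] args) throws IOException {
        Path lockFile = Path.of("/tmp/shared-resource.lock"); // hypothetical path
        try (FileChannel ch = FileChannel.open(lockFile,
                 StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = ch.lock()) { // blocks until no other process holds it
            // ... critical section: access the shared resource ...
            System.out.println("holding lock: " + lock.isValid());
        } // lock released automatically, even if this process was preempted meanwhile
    }
}
```

If this process is context-switched out while holding the lock, process B simply blocks in `ch.lock()` until the lock is released; nothing needs to disable scheduling.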
If you want more authoritative and detailed explanations of whats and whys of similar scenarios, you can watch this MIT lesson, for example.
Hope this helps.
I would suggest looking into named semaphores, see sem_overview(7). They will allow you to ensure mutual exclusion in your critical sections.
I have searched the Internet but failed to find a satisfactory answer. What is the threading model of an OSGi container? Does it simply spawn a new thread for each registered bundle, for example? Any reference regarding the threading model would be great.
You have not found anything because there is no such thing as an "OSGi threading model". Bundles simply exist and don't "have threads" unless they start them.
The OSGi framework follows a synchronous model, i.e. everything happens in a strict order. Bundles are not executed in threads (but they have their own classloader instances). There are some exceptions, though. For example, when an event is raised via the postEvent method, delivery of the event is done asynchronously, usually implemented as a separate thread in many framework implementations.
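The synchronous-versus-asynchronous delivery distinction (sendEvent vs. postEvent in OSGi's EventAdmin) can be sketched in plain Java with no OSGi dependency; all class and method names below are made up:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class AsyncEventDelivery {
    private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();
    private final ExecutorService delivery = Executors.newSingleThreadExecutor();

    public void addListener(Consumer<String> l) { listeners.add(l); }

    // Like sendEvent: listeners run on the caller's thread, before this returns.
    public void sendEvent(String e) {
        listeners.forEach(l -> l.accept(e));
    }

    // Like postEvent: returns immediately; delivery happens on another thread.
    public void postEvent(String e) {
        delivery.submit(() -> listeners.forEach(l -> l.accept(e)));
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncEventDelivery bus = new AsyncEventDelivery();
        bus.addListener(e -> System.out.println("got " + e));
        bus.sendEvent("sync");   // prints "got sync" before returning
        bus.postEvent("async");  // printed later, by the delivery thread
        bus.delivery.shutdown();
        bus.delivery.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```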
When you start a bundle, the code in its activator is executed in one thread, similar to a 'main' thread. When that thread completes its execution, the bundle changes from the 'Starting' state to the 'Active' state. So it is better to execute time-consuming code in another thread, started from the main thread.
When a service method is called by a service consumer, the code in the service method executes on the service consumer's thread.
I didn't find any difference between static variables and local variables in the service method.
Besides some special cases (events/listeners), the application threads are neither managed nor restricted. You can use threading freely. You do need to be aware that some operations in the bundle lifecycle must therefore be thread safe, and you need to be very careful to tear down threads cleanly. You also need to be careful not to block OSGi operations needlessly long.
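A sketch of the activator pattern described above, without the actual OSGi classes: start() returns quickly after launching a worker, and stop() tears the thread down cleanly, mirroring the shape of BundleActivator.start/stop (all names are made up):

```java
public class LongStartWorker {
    private Thread worker;
    private volatile boolean running;

    // Analogous to BundleActivator.start(): return quickly,
    // do the time-consuming work on a separate thread.
    public void start() {
        running = true;
        worker = new Thread(() -> {
            while (running) {
                // ... time-consuming or blocking work goes here ...
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
        });
        worker.start();
    }

    // Analogous to BundleActivator.stop(): signal, interrupt, and join,
    // so no stray thread outlives the bundle.
    public void stop() throws InterruptedException {
        running = false;
        worker.interrupt();
        worker.join();
        System.out.println("worker stopped: " + !worker.isAlive());
    }

    public static void main(String[] args) throws InterruptedException {
        LongStartWorker activator = new LongStartWorker();
        activator.start();  // returns immediately
        Thread.sleep(50);   // bundle is "active" while the worker runs
        activator.stop();   // prints "worker stopped: true"
    }
}
```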
The workflow is being published as a wcf service, and I need to guarantee that workflows execute sequentially. Is there a way--in code or in the config--to guarantee the runtime doesn't launch two workflows concurrently?
There is no way to configure the runtime to limit the number of workflows in progress.
Consider, though, that it's the responsibility of the workflow itself to control flow. Hence the workflow itself should have a means to determine whether another instance of itself is currently in progress.
I would consider creating an Activity that would transactionally attempt to update a DB record to the effect that an instance of this workflow is in progress. If it finds that another is currently in progress it could take the appropriate action. It could fail or it could queue itself using an EventActivity to be alerted when the previous workflow has completed.
You probably will need to check at workflow start for another running instance.
If found, cancel it.
I don't agree that this needs to be handled at the WorkflowRuntime level. I like the idea of a custom Activity, sort of a MutexActivity: a CompositeActivity with a DB backend. The first execution would record in the database that it holds the mutex. Subsequent calls would queue up their workflow IDs and then go idle. When the MutexActivity completes, it would release the mutex, load up the next workflow in the queue, and invoke the contained child activities.
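The claim/queue/release logic of such a MutexActivity can be sketched in memory (Java for illustration; the AtomicBoolean stands in for the transactional DB record, and all names are made up):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

public class WorkflowMutex {
    // Stand-in for the transactional DB record the answer describes.
    private final AtomicBoolean inProgress = new AtomicBoolean(false);
    private final Queue<String> waiting = new ArrayDeque<>();

    // First workflow to arrive claims the mutex; later ones queue their IDs
    // and go idle (returning false here models that).
    public synchronized boolean tryEnter(String workflowId) {
        if (inProgress.compareAndSet(false, true)) return true;
        waiting.add(workflowId);
        return false;
    }

    // On completion, hand the mutex to the next queued workflow, if any;
    // otherwise mark it free.
    public synchronized String release() {
        String next = waiting.poll();
        if (next == null) inProgress.set(false);
        return next; // the workflow ID to wake up, or null
    }

    public static void main(String[] args) {
        WorkflowMutex m = new WorkflowMutex();
        System.out.println(m.tryEnter("wf-1")); // true: wf-1 claimed the mutex
        System.out.println(m.tryEnter("wf-2")); // false: wf-2 queued, goes idle
        System.out.println(m.release());        // wf-2: next workflow to run
    }
}
```

In the real activity the claim and release would be transactional database updates, so the guarantee also holds across hosts sharing the database.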