Can I be sure that the code I write is always executed in the same thread? - windows

I normally work on single-threaded applications and have generally never really bothered with dealing with threads. My understanding of how things work - which certainly may be wrong - is that as long as we're always dealing with single-threaded code (i.e. no forks or anything like that) it will always be executed in the same thread.
Is this assumption correct? I have a fuzzy idea that UI libraries/frameworks may spawn off threads of their own to handle GUI stuff (which accounts for the fact that the Windows task manager tells me that my 'single threaded' application is actually running on 10 threads) but I'm guessing that this shouldn't affect me?
How does this apply to COM? For instance, if I were to create an instance of a COM component in my code, and that COM component writes some information to a thread-based location (using System.Threading.Thread.SetData, for instance), will my application be able to get hold of that information?
So in summary:
In single-threaded code, can I be sure that whatever I store in a thread-based location will be retrievable from anywhere else in the code?
If that single-threaded code were to create an instance of a COM component which stores some information in a thread-based location, will that be similarly retrievable from anywhere else?

UI usually has the opposite constraint (sadly): it's single threaded and everything must happen on that thread.
The easiest way to check whether you are always in the same thread (for, say, a function) is to have an integer variable initialized to -1, and a check function like this (say you are in C#):
private int m_ThreadId = -1;

void AssertSingleThread()
{
    if (m_ThreadId < 0) m_ThreadId = Thread.CurrentThread.ManagedThreadId;
    Debug.Assert(m_ThreadId == Thread.CurrentThread.ManagedThreadId);
}
That said:
I don't really understand question #1. Why store in a thread-based location if your purpose is to have global scope?
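Still, to answer it directly: in single-threaded code a thread-based slot behaves just like a global, because the only thread that ever reads it is the one that wrote it. Here is a minimal C# sketch of what the asker describes (the slot name "demo" is purely illustrative); the value is visible from any method on the main thread, but a second thread sees nothing:
using System;
using System.Threading;

class ThreadSlotDemo
{
    static LocalDataStoreSlot s_slot = Thread.AllocateNamedDataSlot("demo");

    static void Main()
    {
        Thread.SetData(s_slot, "stored on the main thread");
        ReadSlot();                    // same thread: prints the stored value

        var other = new Thread(ReadSlot);
        other.Start();                 // different thread: prints "(nothing)"
        other.Join();
    }

    static void ReadSlot()
    {
        object value = Thread.GetData(s_slot);
        Console.WriteLine(value ?? "(nothing)");
    }
}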
About the second question, most COM code runs on a single thread and, most often, on the thread where your UI message processing lives - this is because most COM code is designed to be compatible with VB6, which is single-threaded.
The reason your program has about 10 threads is that both Windows (if you use some of its features, like completion ports or some kinds of timers) and the CLR (for example for the GC or, again, some types of timers) may create threads in your process space (technically, any program with enough privileges can, too).
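You can see this from inside the process itself; here is a minimal sketch (the exact count will vary with the machine, OS, and CLR version):
using System;
using System.Diagnostics;

class ThreadCountDemo
{
    static void Main()
    {
        // Even a "single-threaded" console app normally shows several OS
        // threads here (GC, finalizer, etc.) created by the CLR and Windows.
        int osThreads = Process.GetCurrentProcess().Threads.Count;
        Console.WriteLine("OS threads in this process: " + osThreads);
    }
}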

Think about a model in which a single dataStore class runs in your mainThread and all threads read and write their instance variables through it. This will avoid a lot of the problems that arise from threads accessing data all over the shop.
Simple idea, until you reach the fun part of threading: concurrency and synchronization. Simply put, if you have two threads that want to read and write the same variable inside dataStore at the same time, you have a problem.
Java handles this by allowing you to declare a variable or method synchronized, allowing only one thread access at a time.
I believe some .NET objects have Lock and Synchronized methods defined on them, but I know no more than this.
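In C#, the usual tool is the lock statement (built on Monitor). Here is a minimal sketch of the dataStore idea, guarding one value with a private lock object (the class and member names are purely illustrative):
class DataStore
{
    private readonly object _sync = new object();
    private int _value;

    // Only one thread at a time can be inside either accessor.
    public int Value
    {
        get { lock (_sync) { return _value; } }
        set { lock (_sync) { _value = value; } }
    }
}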

Related

CoInitializeEx(COINIT_MULTITHREADED) and Goroutines using WMI

We have a monitoring agent written in Go that uses a number of goroutines to gather system metrics from WMI. We recently discovered that the program was leaking memory when the Go binary is run on Server 2016 or Windows 10 (and possibly on other OSes using WMF 5.1). After creating a minimal test case to reproduce the issue, it seems that the leak only occurs if you make a large number of calls to the ole.CoInitializeEx method (possibly something changed in WMF 5.1, but we could not reproduce the issue using the Python comtypes package on the same system).
We are using COINIT_MULTITHREADED for multithread apartment (MTA) in our application, and my question is this: Because we are issuing OLE/WbemScripting calls from various goroutines, do we need to call ole.CoInitializeEx just once on startup or once in each goroutine? Our query code already uses runtime.LockOSThread to prevent the scheduler from running the method on different OS threads, but the MSDN remarks on CoInitializeEx seem to indicate it must be called at least once on each thread. I am not aware of any way to make sure new goroutines run on an already initialized OS thread, so multiple calls to CoInitializeEx seemed like the correct approach (and worked fine for the last few years).
We have already refactored the code to do all the WMI calls on a dedicated background worker, but I am curious to know if our original code should work using only one CoInitializeEx at startup instead of once for every goroutine.
AFAIK, since the Win32 API is defined only in terms of native OS threads, a call to CoInitialize[Ex]() only ever affects the thread it was completed on.
Since the Go runtime uses free M×N scheduling of goroutines onto OS threads, and those threads are created and deleted as needed at runtime in a manner completely transparent to the goroutines, the only way to make sure a CoInitialize[Ex]() call has any lasting effect on the goroutine it was performed on is to first bind that goroutine to its current OS thread by calling runtime.LockOSThread(), and to do this for every goroutine intended to make COM calls.
Please note that this basically creates a 1×1 mapping between goroutines and OS threads, which defeats much of the purpose of goroutines to begin with. So, supposedly, you might want to consider having just a single goroutine calling into COM and listening for requests on a channel, or a pool of such worker goroutines hidden behind another one which dispatches the clients' requests onto the workers.
Update regarding COINIT_MULTITHREADED.
To cite the docs:
Multi-threading (also called free-threading) allows calls to methods of objects created by this thread to be run on any thread. There is no serialization of calls — many calls may occur to the same method or to the same object or simultaneously. Multi-threaded object concurrency offers the highest performance and takes the best advantage of multiprocessor hardware for cross-thread, cross-process, and cross-machine calling, since calls to objects are not serialized in any way. This means, however, that the code for objects must enforce its own concurrency model, typically through the use of synchronization primitives, such as critical sections, semaphores, or mutexes. In addition, because the object doesn't control the lifetime of the threads that are accessing it, no thread-specific state may be stored in the object (in Thread Local Storage).
So basically the COM threading model has nothing to do with initialization of the threads themselves, but rather with how the COM subsystem is allowed to call the methods of the COM objects you create on those COM-initialized threads.
IIUC, if you COM-initialize a thread as COINIT_MULTITHREADED, create some COM object on it, and then pass a reference to that object to some outside client so that it is able to call the object's methods, those methods can be called on any thread in your process.
I really have no idea how this is supposed to interact with the Go runtime, so I'd start small, use a single thread with the STA model, and then maybe try to make it more complicated if needed.
On the other hand, if you only instantiate external COM objects and do not pass references to them outside (and it appears that's the case), the threading model should not be relevant. That is, unless some code in the WMI API calls some "event-like" method on a COM object you have instantiated.

Which model to use: STA or MTA?

I'm trying to create a COM component which will be called frequently by an Excel application (Excel loads the component at its initialization), and another process (let's say procA) also sends Windows messages to this component at high frequency. Currently I have implemented the component as STA; however, I find that while the component is busy processing messages from procA, the Excel UI gets stuck.
Please help me get around this problem. Can I just create a separate window thread to process the messages from procA while keeping the component STA? Or do I need to make the component MTA, and if so, please explain how to handle it.
Thank You
Moving to an MTA requires you to perform all necessary locking to protect the state of your component. It will also add thread-switch overhead, because Excel's UI runs on a specific thread, which will block[1] while calling cross-thread into your component. The other process is already incurring the cross-process overhead, so no real change there.
You could avoid the Excel cross-thread overhead by marking your component's threading model as 'Neutral': it can still be used from any thread while not being tied to the MTA (i.e. all in-process calls will be direct, with no thread switches). Write it as free-threaded (all the locking is still needed) but just change the registration.
Given all the effort required to ensure your component is thread-safe, you may find there is no advantage unless multiple calls into your component can really run concurrently. If you just take a lock for the duration of each method, you are not saving anything over being in an STA. Finer-grained locking might give an advantage, but you need a more detailed analysis of what concurrency is possible, and then profiling to prove you have actually achieved it. A look at Amdahl's Law covers these issues.
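For reference, Amdahl's Law puts a ceiling on that gain: if a fraction P of each request can actually execute in parallel across N threads, the best overall speedup is 1 / ((1 - P) + P / N). With a coarse per-method lock, P is close to 0, so the speedup stays close to 1 (i.e. none) no matter how many threads call in.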
[1] This is very simplistically put... the real situation is rather more complex.

How can I implement a blocking process in a single slot without freezing the GUI?

Let's say I have an event and the corresponding function is called. This function interacts with the outside world, so it can sometimes have long delays. If the function waits or hangs, my UI will freeze, and this is not desirable. On the other hand, having to break my function up into many parts and re-emit signals is tedious, can fragment the code a lot, makes it harder to debug and less readable, and slows down development. Is there a special feature in event-driven programming which would enable me to just write the process in one function call and be able to let the mainThread do its job when it's waiting? For example, the compiler could recognize a keyword, then implement a return and re-emit signals connected to new slots automatically. Why do I think this would be a great idea ;) I'm working with Qt.
Your two options are threading, or breaking your function up somehow.
With threading, it sounds like your ideal solution would be QtConcurrent. If all of your processing is already in one function, and the function is pretty self-contained (doesn't reference member variables of the class), this would be easy to do. If not, things might get a little more complicated.
For breaking your function up, you can either do it as you suggested and split it into different functions, with the different parts being called one after another, or you can do it in a more figurative way, by scattering calls inside your function that allow other processing to happen. I believe calling processEvents() would do what you want, but I haven't come across its use in a long time. Of course, you can run into other problems with that unless you understand that it might cause other parts of your class to run again (in response to other events), so you have to treat it almost as multi-threaded code, protecting variables that are in an indeterminate state while you are computing.
"Is there a special feature in event driven programming which would enable me to just write the process in one function call and be able to let the mainThread do its job when its waiting?"
That would be a non-blocking process.
But your original query was, "How can I implement a blocking process in a single slot without freezing the GUI?"
Perhaps what you're looking for is a way to stop other processing when some - any - process decides it's time to block? There are typically ways to do this, yes, by calling a method on one of the parent objects, which, of course, will depend on the specific objects you are using (e.g. a frame).
Look at the parent objects and see what methods they have that you'd like to use. You may need to override one of them to get exactly the results you want.
If you want to handle a GUI event by beginning a long-running task, and don't want the GUI to wait for the task to finish, you need to do it concurrently, by creating either a thread or a new process to perform the task.
You may be able to avoid creating a thread or process if the task is I/O-bound and occasional callbacks to handle I/O would suffice. I'm not familiar with Qt's main loop, but I know that GTK's supports adding event sources that can integrate into a select() or poll()-style loop, running handlers after either a timeout or when a file descriptor becomes ready. If that's the sort of task you have, you could make your event handler add such an event source to the application's main loop.

What can I access from a BackgroundWorker without "Cross Threading"?

I realise that I can't access Form controls from the DoWork event handler of a BackgroundWorker. (And if I try to, I get an Exception, as expected).
However, am I allowed to access other (custom) objects that exist on my Form?
For instance, I've created a "Settings" class and instantiated it in my Form, and I seem to be able to read and write its properties.
Is it just luck that this works?
What if I had a static class? Would I be able to access that safely?
#Engram:
You've got the gist of it - the cross-thread check is just a nice feature MS put into the .NET Framework to prevent the "bonehead" type of parallel programming mistakes. It can be overridden, as I'm guessing you've already found out, by setting the static Control.CheckForIllegalCrossThreadCalls property to false (note that it's a static property on the Control class, not something you set on an individual instance like lblMyLabel).
But more importantly, you're right about the need to use some kind of locking mechanism. Whenever you have multiple lines of execution (be they threads, processes or whatever), you need to make sure that while one thread is reading or writing a variable, no other thread barges in and changes that value under its feet.
The .NET Framework actually provides several other mechanisms which, depending on circumstances, might be more useful than locking in code. The first is the Monitor class, which has the effect of locking a particular object; while you hold it, other threads can continue to execute as long as they don't try to lock that same object. Another very useful and common parallel-programming idea is the Mutex (or Semaphore). A Mutex is basically like a game of capture the flag between your threads: if one thread grabs the flag, no other thread can grab it until the first thread drops it. (A Semaphore is just like a Mutex, except that there can be more than one flag in the game.)
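A minimal C# sketch of that flag idea (the class and method names are purely illustrative):
using System.Threading;

class FlagDemo
{
    // One flag: only a single thread can hold it at a time.
    static readonly Mutex s_flag = new Mutex();
    // Three flags: up to three threads can be in the section at once.
    static readonly Semaphore s_flags = new Semaphore(3, 3);

    static void ExclusiveWork()
    {
        s_flag.WaitOne();                // grab the flag (blocks if taken)
        try { /* critical section */ }
        finally { s_flag.ReleaseMutex(); }
    }

    static void LimitedConcurrencyWork()
    {
        s_flags.WaitOne();               // take one of the three flags
        try { /* limited section */ }
        finally { s_flags.Release(); }
    }
}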
Obviously, none of these concepts will work in every particular problem - but having a few more tools to help you out might come in handy some day :)
You should communicate with the user interface through the ProgressChanged and RunWorkerCompleted events (and never from the DoWork() method, as you have noted).
In principle, you could check InvokeRequired, but the designers of the BackgroundWorker class created the ProgressChanged callback event for the purpose of updating UI elements.
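To make that concrete, here is a minimal sketch (assuming a Form that has progressBar1 and lblStatus controls; those names are illustrative). DoWork runs on a pool thread and must not touch controls, while ProgressChanged and RunWorkerCompleted are raised back on the UI thread, so they may:
void StartWork()
{
    var worker = new System.ComponentModel.BackgroundWorker();
    worker.WorkerReportsProgress = true;

    worker.DoWork += (s, e) =>
    {
        for (int i = 0; i <= 100; i += 10)
        {
            System.Threading.Thread.Sleep(100); // stand-in for the real work
            worker.ReportProgress(i);           // no control access here
        }
    };

    worker.ProgressChanged += (s, e) =>
        progressBar1.Value = e.ProgressPercentage;  // safe: raised on the UI thread

    worker.RunWorkerCompleted += (s, e) =>
        lblStatus.Text = "Done";                    // safe: raised on the UI thread

    worker.RunWorkerAsync();
}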
[Note: BackgroundWorker events are not marshaled across AppDomain boundaries. Do not use a BackgroundWorker component to perform multithreaded operations in more than one AppDomain.]
MSDN Ref.
Ok, I've done some more research on this and I think I have an answer. (Let the votes decide if I'm right!)
The answer is: you can access any custom object that's in scope; however, your access will not be thread-safe.
To ensure that it is thread-safe you should probably be using lock. The lock keyword prevents more than one thread from executing a particular piece of code at a time. (Subject to actually using it properly!)
The cross-threading exception that occurs when you try to access a Control is a safety mechanism designed especially for Controls. (It's easier, and probably more efficient, to make the user marshal calls onto the UI thread than it is to design the controls themselves to be thread-safe.)
You can't access controls that were created on one thread from another thread.
You can either use the Settings class that you mentioned, or use the InvokeRequired property and Invoke method of the control (see the sketch after the links below).
I suggest you look at the examples on those pages:
http://msdn.microsoft.com/en-us/library/ms171728.aspx
http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invokerequired.aspx
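A minimal sketch of that InvokeRequired/Invoke pattern (lblStatus and the helper name are illustrative), safe to call from any thread, including a BackgroundWorker's DoWork handler:
void SetStatusText(string text)
{
    if (lblStatus.InvokeRequired)
    {
        // We're on a worker thread: re-run this method on the UI thread.
        lblStatus.Invoke(new Action<string>(SetStatusText), text);
    }
    else
    {
        lblStatus.Text = text; // already on the UI thread
    }
}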

Is it safe to manipulate objects that I created outside my thread if I don't explicitly access them on the thread which created them?

I am working on a Cocoa application, and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread.
Is it safe to access those objects, even though I created them on the main thread, without using locks, as long as I don't explicitly access them from the main thread while the import thread is running?
With Core Data, you should have a separate managed object context for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread into another thread and expect them to work. Furthermore, you cannot do your own locking for this; at a minimum you must lock the managed object context the objects are in, as appropriate. But if those objects are bound to by your views and controls, there are no "hooks" to which you can attach that locking of the context.
There's no free lunch.
Ben Trumbull explains some of the reasons why you need to use a separate context, and why "just reading" isn't as simple or as safe as you might think, in this great post from late 2004 on the webobjects-dev list. (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace "EC" with "NSManagedObjectContext" and "EOF" with "Core Data" in the meat of his message.
The solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is "don't." If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save notification from one context to tell the other context what to re-fetch. -[NSManagedObjectContext refreshObject:mergeChanges:] is specifically designed to support this use.
I believe that this is not safe to do with NSManagedObjects (or subclasses) that are managed by a Core Data NSManagedObjectContext. In general, Core Data may do many tricky things with the state of managed objects, including firing faults related to those objects on separate threads. In particular, [NSManagedObject initWithEntity:insertIntoManagedObjectContext:] (the designated initializer for NSManagedObjects as of OS X 10.5) does not guarantee that the returned object is safe to pass to another thread.
Using CoreData with multiple threads is well documented on Apple's dev site.
The whole point of using locks is to ensure that two threads don't try to access the same resource. If you can guarantee that through some other mechanism, go for it.
Even if it's safe, it's not best practice to share data between threads without synchronizing access to those fields. It doesn't matter which thread created the object; if more than one line of execution (thread/process) accesses the object at the same time, it can lead to data inconsistency.
If you're absolutely sure that only one thread will ever access this object, then it'd be safe not to synchronize the access. Even then, I'd rather put synchronization in my code now than wait until later, when a change in the application puts a second thread on the same data without any concern for synchronizing access.
Yes, it's safe. A pretty common pattern is to create an object and then add it to a queue or some other collection; a second "consumer" thread takes items from the queue and does something with them. Here, you need to synchronize the queue, but not the objects that are added to it.
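A minimal sketch of that pattern, in C# terms since the idea is the same in any language (all names are illustrative): the queue is the only shared, synchronized object, and each item is touched by one thread at a time after the hand-off, so the items themselves need no locking.
using System.Collections.Concurrent;
using System.Threading.Tasks;

class WorkItem { public int Id; public void Process() { /* ... */ } }

class QueueHandOff
{
    static void Main()
    {
        var queue = new BlockingCollection<WorkItem>();

        // Producer: creates items and hands ownership over via the queue.
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 10; i++)
                queue.Add(new WorkItem { Id = i });
            queue.CompleteAdding();
        });

        // Consumer: the only code that touches an item after the hand-off.
        foreach (var item in queue.GetConsumingEnumerable())
            item.Process();

        producer.Wait();
    }
}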
It's NOT a good idea to just synchronize everything and hope for the best. You will need to think very carefully about your design and exactly which threads can act upon your objects.
Two things to consider are:
You must be able to guarantee that the object is fully created and initialised before it is made available to other threads.
There must be some mechanism by which the main (GUI) thread detects that the data has been loaded and all is well. To be thread safe this will inevitably involve locking of some kind.
Yes you can do it, it will be safe
...
until the second programmer comes around and does not understand the same assumptions you have made. That second (or 3rd, 4th, 5th, ...) programmer is likely to start using the object in a non-safe way (in the creator thread). The problems this causes could be very subtle and difficult to track down. For that reason alone, and because it's so tempting to use this object in multiple threads, I would make the object thread-safe.
To clarify (thanks to those who left comments):
By "thread safe" I mean programmatically devising a scheme to avoid threading issues. I don't necessarily mean devising a locking scheme around your object. You could find a way in your language to make it illegal (or very hard) to use the object in the creator thread. For example, limiting its scope, in the creator thread, to the block of code that creates the object. Once created, pass the object over to the using thread, making sure that the creator thread no longer has a reference to it.
For example, in C++
void CreateObject()
{
    Object* sharedObj = new Object();
    PassObjectToUsingThread(sharedObj); // this function would be system-dependent
}
Then, in your creating thread, you no longer have access to the object after its creation; responsibility has been passed to the using thread.
