I have a thread that executes arbitrary code, so I don't want it writing anything that's not in its own memory space. I know there are things like job objects, as well as special functions that set thread privileges, but I don't understand specifically which privileges to set to make my thread "safe".
We have a monitoring agent written in Go that uses a number of goroutines to gather system metrics from WMI. We recently discovered that the program was leaking memory when the Go binary is run on Server 2016 or Windows 10 (and possibly on other OSes using WMF 5.1). After creating a minimal test case to reproduce the issue, it seems the leak only occurs if you make a large number of calls to the ole.CoInitializeEx method (possibly something changed in WMF 5.1, but we could not reproduce the issue using the Python comtypes package on the same system).
We are using COINIT_MULTITHREADED for a multithreaded apartment (MTA) in our application, and my question is this: because we are issuing OLE/WbemScripting calls from various goroutines, do we need to call ole.CoInitializeEx just once on startup or once in each goroutine? Our query code already uses runtime.LockOSThread to prevent the scheduler from running the method on different OS threads, but the MSDN remarks on CoInitializeEx seem to indicate it must be called at least once on each thread. I am not aware of any way to make sure new goroutines run on an already-initialized OS thread, so multiple calls to CoInitializeEx seemed like the correct approach (and it has worked fine for the last few years).
We have already refactored the code to do all the WMI calls on a dedicated background worker, but I am curious to know if our original code should work using only one CoInitializeEx at startup instead of once for every goroutine.
AFAIK, since the Win32 API is defined only in terms of native OS threads, a call to CoInitialize[Ex]() only ever affects the thread it was made on.
Since the Go runtime uses free M×N scheduling of goroutines onto OS threads, and those threads are created and destroyed at runtime in a manner completely transparent to the goroutines, the only way to make sure a CoInitialize[Ex]() call has any lasting effect on the goroutine that made it is to first bind that goroutine to its current OS thread by calling runtime.LockOSThread(), and to do this in every goroutine intended to do COM calls.
Please note that this basically creates a 1:1 mapping between goroutines and OS threads, which defeats much of the purpose of goroutines to begin with. So you might want to consider having just a single goroutine calling into COM and listening for requests on a channel, or having a pool of such worker goroutines hidden behind another one which dispatches the clients' requests onto the workers.
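Purely as an illustration of that single-worker idea (this is not the asker's code), here is a minimal Go sketch assuming the github.com/go-ole/go-ole bindings the question refers to: one goroutine pins itself to an OS thread, initializes COM there once, and serves every request it receives over a channel.

// comworker.go: a minimal sketch of the "single COM goroutine behind a channel"
// approach, assuming the github.com/go-ole/go-ole bindings from the question.
package main

import (
	"runtime"

	ole "github.com/go-ole/go-ole"
)

// comWorker pins itself to one OS thread, initializes COM once on that thread,
// and then executes the functions it receives until the channel is closed.
func comWorker(requests <-chan func()) {
	runtime.LockOSThread() // this goroutine now always runs on this OS thread
	defer runtime.UnlockOSThread()

	if err := ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED); err != nil {
		panic(err) // real code would report the error instead
	}
	defer ole.CoUninitialize()

	for req := range requests {
		req() // every request runs on the one COM-initialized thread
	}
}

func main() {
	requests := make(chan func())
	go comWorker(requests)

	done := make(chan struct{})
	requests <- func() {
		// ... issue the OLE/WbemScripting calls here ...
		close(done)
	}
	<-done
	close(requests)
}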
Update regarding COINIT_MULTITHREADED.
To cite the docs:
Multi-threading (also called free-threading) allows calls to methods of objects created by this thread to be run on any thread. There is no serialization of calls — many calls may occur to the same method or to the same object or simultaneously. Multi-threaded object concurrency offers the highest performance and takes the best advantage of multiprocessor hardware for cross-thread, cross-process, and cross-machine calling, since calls to objects are not serialized in any way. This means, however, that the code for objects must enforce its own concurrency model, typically through the use of synchronization primitives, such as critical sections, semaphores, or mutexes. In addition, because the object doesn't control the lifetime of the threads that are accessing it, no thread-specific state may be stored in the object (in Thread Local Storage).
So the COM threading model has nothing to do with initialization of the threads themselves; rather, it governs how the COM subsystem is allowed to call the methods of the COM objects you create on the COM-initialized threads.
IIUC, if you COM-initialize a thread as COINIT_MULTITHREADED, create some COM object on it, and then pass a reference to that object to some outside client so that it can call the object's methods, those methods may be called on any thread in your process.
I really have no idea how this is supposed to interact with the Go runtime, so I'd start small with a single thread and the STA model, and only make it more complicated if needed.
On the other hand, if you only instantiate external COM objects and do not pass references to them outside (and it appears that's the case), the threading model should not be relevant; that is, unless some code in the WbemScripting API would call some "event-like" method on a COM object you have instantiated.
I was reading an MSDN doc about driver synchronization and I came across a statement that goes like this:
a driver can wait if
• The driver is executing in a nonarbitrary thread context. That is, you can identify the thread that will enter a wait state. In practice, the only driver routines that execute in a nonarbitrary thread context are the DriverEntry, AddDevice, Reinitialize, and Unload routines of any driver, plus the dispatch routines of highest-level drivers. All these routines are called directly by the system.
Now my question is: why are dispatch routines considered to be in an arbitrary thread context? Since read, write, and other routines are invoked when a request is raised from user space, can't we know which thread made that request once we are in kernel space? Maybe I am completely mixed up, or it could be a silly question, but please help me because I am a newbie to Windows.
OK, I found the answer in a document :) and here is what it states:
Although the highest-level drivers receive I/O requests in the context of the requesting thread, they often forward those requests to their lower level drivers on different threads. Consequently, you can make no assumptions about the contents of the user-mode address space at the time such routines are called.
I have written a compiler and interpreter for a scripting language. The interpreter is a DLL ('The Engine') which runs in a single thread and can load many hundreds or thousands of compiled byte-code applications and execute them as a set of internal processes. There is a main loop that executes a few instructions from each of the loaded app processes before moving on to the next process.
The byte-code instructions in the compiled apps can either be low-level instructions (pop, push, add, sub etc.) or calls to an external function library (which is where most of the work is done). These external libraries can call back into the engine to put an internal process into a sleep state waiting for a particular event, upon which the external function (probably after receiving an event) will wake up the internal process again. If all internal processes are in a sleep state (which they are most of the time) then I can put the engine to sleep as well, thus handing off the CPU to other threads.
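Just to make that structure concrete, here is a rough sketch of such a round-robin loop; the real engine is Borland C++, so this is written in Go purely for illustration and all the names are invented.

// Purely illustrative sketch (the real engine is Borland C++): a round-robin
// loop that runs a few instructions from each loaded process and skips the
// ones that are asleep waiting on an external event.
package main

import "fmt"

type process struct {
	name   string
	asleep bool // set when the process is waiting on an external library event
	pc     int  // "program counter" standing in for real byte-code state
}

// step executes one pretend byte-code instruction.
func (p *process) step() {
	p.pc++
	if p.pc >= 10 {
		p.asleep = true // pretend the process called into a library and now waits
	}
}

func runEngine(procs []*process) {
	const slice = 4 // instructions per process per turn
	for {
		anyRunnable := false
		for _, p := range procs {
			if p.asleep {
				continue
			}
			anyRunnable = true
			for i := 0; i < slice && !p.asleep; i++ {
				p.step()
			}
		}
		if !anyRunnable {
			// The real engine would sleep here until a library wakes a process;
			// for this sketch we simply stop.
			return
		}
	}
}

func main() {
	procs := []*process{{name: "a"}, {name: "b"}}
	runEngine(procs)
	for _, p := range procs {
		fmt.Println(p.name, "executed", p.pc, "instructions")
	}
}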
However there is nothing to prevent someone writing a script which just does a tight loop like this:
while(1)
x=1;
endwhile
Which means my main loop will never enter a sleep state and so the CPU goes up to 100% and locks up the system. I want my engine to run as fast as possible, whilst still handling Windows events so that other applications are still responsive when a tight loop similar to the above is encountered.
So my first question is how to add code to my main loop to ensure Windows events are handled without slowing down the main engine, which should run at the fastest speed possible.
Also it would be nice to be able to set the maximum CPU usage my engine can use and throttle down the CPU usage by calling the occasional Sleep(1).
So my second question is how can I throttle down the CPU usage to the required level?
The engine is written in Borland C++ and makes calls to the win32 API.
Thanks in advance
1. Running a message loop at the same time as running your script
I want my engine to run as fast as possible, whilst still handling Windows events so that other applications are still responsive when a tight loop similar to the above is encountered.
The best way to continue running a message loop while performing another operation is to move that other operation to another thread. In other words, move your script interpreter to a second thread and communicate with it from your main UI thread, which runs the message loop.
When you say Borland C++, I assume you're using C++ Builder? In this situation, the main thread is the only one that interacts with the UI, and its message loop is run via Application->Run. If you're periodically calling Application->ProcessMessages in your library callbacks, that's reentrant and can cause problems. Don't do it.
One comment on your question suggested moving each script instance to a separate thread. This would be ideal. However, beware of the DLLs the scripts call: DLLs are loaded per-process, not per-thread, so if they keep state you may encounter threading issues. For the moment, purely to address your current question, I'd suggest moving all your script execution to a single other thread.
You can communicate between threads many ways, such as by posting messages between them using PostMessage or PostThreadMessage. Since you're using Borland C++, you should have access to the VCL. It has a good thread wrapper class called TThread. Derive from this and put your script loop in Execute. You can use Synchronize (blocks waiting) or Queue (doesn't block; method may be run at any time, when the target thread processes its message loop) to run methods in the context of another thread.
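The specifics above are VCL, but the shape of the pattern is the same in any language. As a rough illustration (in Go, with invented names), the interpreter runs on its own thread of execution and posts notifications to the main loop, which picks them up without ever blocking on the script:

// Sketch of "interpreter on a worker thread, main loop stays responsive".
// The answer's actual suggestions are VCL's TThread/Synchronize/Queue; this
// just shows the same shape with a goroutine and a channel.
package main

import (
	"fmt"
	"time"
)

func runScript(notify chan<- string) {
	for i := 1; i <= 3; i++ {
		time.Sleep(200 * time.Millisecond) // stand-in for executing byte code
		notify <- fmt.Sprintf("script progress %d/3", i)
	}
	close(notify)
}

func main() {
	notify := make(chan string, 16)
	go runScript(notify) // the "second thread"

	// The "message loop": keep handling UI work, and pick up any posted
	// notifications without ever blocking on the script.
	for {
		select {
		case msg, ok := <-notify:
			if !ok {
				fmt.Println("script finished")
				return
			}
			fmt.Println("UI got:", msg)
		default:
			// ... process window messages / repaint here ...
			time.Sleep(10 * time.Millisecond)
		}
	}
}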
As a side note:
so that other applications are still responsive when a tight loop similar to the above is encountered.
This is odd. In a modern, preemptively multitasked version of Windows, other applications should still be responsive even when your program is very busy. Are you doing anything odd with your thread priorities, or are you using so much memory that other applications are paged out?
2. Handling an infinite loop in a script
You write:
there is nothing to prevent someone writing a script which just does a tight loop like this:
while(1) x=1; endwhile
Which means my main loop will never enter a sleep state and so the CPU goes up to 100% and locks up the system.
but you phrase how to handle this as:
Also it would be nice to be able to set the maximum CPU usage my engine can use and throttle down the CPU usage by calling the occasional Sleep(1).
So my second question is how can I throttle down the CPU usage to the required level?
I think you're taking the wrong approach. An infinite loop like while(1) x=1; endwhile is a bug in the script, but it should not take down your host application, and just throttling the CPU won't make your application able to handle the situation. (Using lots of CPU isn't necessarily a problem anyway: if the work is there for the CPU to do, do it! There's nothing holy about using only a bit of your computer's CPU; it's there to be used, after all.) What (I think) you really want is for your application to remain responsive while running this script (solved by a second thread) and then:
Detect when a script is 'not responding', or not calling into your callbacks
Be able to take action, such as asking the user if they want to terminate the script
An example of another program that does this is Firefox. If you go to a page with a misbehaving script, eventually you'll get a dialog asking if you want to stop the script running.
Without knowing more about how your script is actually interpreted or run, I can't give a detailed answer to these two. But I can suggest an approach, which is:
Your interpreter probably runs a loop, getting the next instruction and executing it. Your interactivity is currently provided by a callback running from one of those instructions being executed. I'd suggest making use of that by having your callback simply log the time it was last called. Then in your processing thread, every instruction (or every ten or a hundred) check the current time against the last callback time. If a long time has passed, say fifteen or thirty seconds, it may be an indication that the script is stuck. Notify the main thread but keep processing.
For "time", something like GetTickCount is probably sufficient.
Next step: Your main UI thread can react to this by asking the user what to do. If they want to terminate the script, communicate with the script thread to set a flag. In your script processing loop, again every instruction (or hundred) check for this flag, and if it's set, stop.
When you move to having one thread per script interpreter, you can use TThread's Terminated flag for this. Idiomatically, for something that runs indefinitely in a thread, you run a while (!Terminated && [any other conditions]) loop in your Execute function.
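A sketch of the stop-flag part of this (again in Go with invented names; in C++ Builder you would use TThread's Terminated flag as described above):

// The UI side sets a flag; the script loop polls it every so many
// instructions and bails out cleanly.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

func runScript(terminated *atomic.Bool, done chan<- struct{}) {
	const checkEvery = 100
	for count := 1; ; count++ {
		// ... execute one byte-code instruction here ...
		if count%checkEvery == 0 && terminated.Load() {
			break // the user asked us to stop; unwind cleanly
		}
	}
	close(done)
}

func main() {
	var terminated atomic.Bool
	done := make(chan struct{})
	go runScript(&terminated, done)

	time.Sleep(500 * time.Millisecond) // pretend the user clicked "stop script"
	terminated.Store(true)
	<-done
	fmt.Println("script terminated")
}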
To actually answer your question about using less CPU, the best approach is probably to change your thread's priority using SetThreadPriority to a lower priority, such as THREAD_PRIORITY_BELOW_NORMAL. It will still run if nothing else needs to run. This will affect your script's performance. Another approach is to use Sleep as you suggest, but this really is artificial. Perhaps SwitchToThread is slightly better - it yields to another thread the OS chooses. Personally, I think the CPU is there to use, and if you solve the problem of an interactive UI and handling out-of-control scripts then there should be no problem with using all CPU if your script needs it. If you're using "too much" CPU, perhaps the interpreter itself could be optimised. You'll need to run a profiler and find out where the CPU time is being spent.
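If you do decide to lower the interpreter thread's priority, here is a sketch of that idea in Go; SetThreadPriority is a plain Win32 call, reached here through kernel32 directly, and the goroutine has to be locked to its OS thread first or the priority change would land on whichever thread it happens to be running on.

//go:build windows

// Sketch: lower the priority of the OS thread running the script, i.e. the
// answer's SetThreadPriority/THREAD_PRIORITY_BELOW_NORMAL suggestion.
package main

import (
	"fmt"
	"runtime"
	"syscall"
)

const threadPriorityBelowNormal = ^uintptr(0) // -1, i.e. THREAD_PRIORITY_BELOW_NORMAL

func main() {
	runtime.LockOSThread() // keep this goroutine on the thread we are about to adjust
	defer runtime.UnlockOSThread()

	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	getCurrentThread := kernel32.NewProc("GetCurrentThread")
	setThreadPriority := kernel32.NewProc("SetThreadPriority")

	h, _, _ := getCurrentThread.Call() // pseudo-handle for the current thread
	ok, _, err := setThreadPriority.Call(h, threadPriorityBelowNormal)
	if ok == 0 {
		fmt.Println("SetThreadPriority failed:", err)
		return
	}
	fmt.Println("script thread now runs at below-normal priority")

	// ... run the interpreter loop on this (locked, lower-priority) thread ...
}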
Although a badly designed script might put you in a do-nothing loop, don't worry about it. Windows is designed to handle this kind of thing, and won't let your program take more than its fair share of the CPU. If it does manage to get 100%, it's only because nothing else wants to run.
My sense from the Address Book documentation and my understanding of the underlying CoreData implementation suggests that Address Book should be thread safe, and making queries from multiple threads should pose no problems. But I'm having trouble finding any explicit discussion of thread safety in the docs. This raises a few questions:
Is it safe to use +sharedAddressBook on multiple threads for read-only access? I believe the answer is yes.
For write-access on background threads, it appears that you should use +addressBook instead (and save your changes manually). Do I understand this correctly?
Has anyone investigated the performance impact of making multiple simultaneous queries to Address Book on multiple threads? This should be very similar to the performance of making multiple CoreData queries on multiple threads. My sense is that I would gain little by making parallel queries, since I assume they will serialize when they hit SQLite, but I'm not certain here.
I need to make dozens of queries (some complex) against AddressBook and am doing so on a background thread using NSOperation to avoid blocking the UI (which it currently does). My underlying question is whether it makes sense to set the max concurrent operations to a value larger than 1, and whether there is any danger in doing so if the application may also be writing to AddressBook at the same time on another thread.
Unless an API says it is thread safe, it is not. Even if the current implementation happens to be thread safe, it might not be in the future. In other words, do not use AB from multiple threads.
As an aside, what about it being CoreData-based makes you think it would be thread safe? CoreData uses a thread-confinement model where it is only safe to access a context on a single thread; all the objects from a context must be accessed on that same thread.
That means that sharedAddressBook will not be thread safe if it keeps an NSManagedObjectContext around to use. It would only be safe if AB creates a new context every time it needs to do something and immediately disposes of it, or if it creates a context per thread and always uses the appropriate context (probably by storing a ref to it in the threadDictionary). In either event it would not be safe to store anything as NSManagedObjects since the contexts would be constantly destroyed, which means every ABRecord would have to store an NSManagedObjectID so it could reconstitute the object in the appropriate context whenever it needed it.
Clearly all of that is possible, it may be what is done, but it is hardly the obvious implementation.
I normally work on single-threaded applications and have generally never really bothered with threads. My understanding of how things work (which may well be wrong) is that as long as we're always dealing with single-threaded code (i.e. no forks or anything like that) it will always be executed on the same thread.
Is this assumption correct? I have a fuzzy idea that UI libraries/frameworks may spawn off threads of their own to handle GUI stuff (which accounts for the fact that the Windows task manager tells me that my 'single threaded' application is actually running on 10 threads) but I'm guessing that this shouldn't affect me?
How does this apply to COM? For instance, if I were to create an instance of a COM component in my code; and that COM component writes some information to a thread-based location (using System.Threading.Thread.GetData for instance) will my application be able to get hold of that information?
So in summary:
In single threaded code, can I be sure that whatever I store in a thread-based location can be retrievable from anywhere else in the code?
If that single threaded code were to create an instance of a COM component which stores some information in a thread-based location, can that be similarly retrievable from anywhere else?
UI usually has the opposite constraint (sadly): it's single threaded and everything must happen on that thread.
The easiest way to check that you are always on the same thread (for, say, a particular function) is to keep an integer field initialized to -1 and a check function like this (say you are in C#):
private int m_ThreadId = -1;

void AssertSingleThread()
{
    // Remember the first caller's thread id, then assert every later call comes from that same thread.
    if (m_ThreadId < 0) m_ThreadId = Thread.CurrentThread.ManagedThreadId;
    Debug.Assert(m_ThreadId == Thread.CurrentThread.ManagedThreadId);
}
That said:
I don't really understand question #1. Why store something in a thread-based location if your purpose is to have global scope?
About the second question: most COM code runs on a single thread and, most often, on the thread where your UI message processing lives. This is because most COM code is designed to be compatible with VB6, which is single-threaded.
The reason your program has about 10 threads is that both Windows (if you use some of its features, like completion ports or some kinds of timers) and the CLR (for example for the GC or, again, some types of timers) may create threads in your process space (technically, any program with enough privileges can, too).
Think about the model of having a single dataStore class running in your main thread that all other threads read and write their instance variables through. This avoids a lot of the problems that arise from accessing data across threads all over the shop.
A simple idea, until you reach the fun part of threading: concurrency and synchronization. Put simply, if you have two threads that want to read and write the same variable inside dataStore at the same time, you have a problem.
Java handles this by allowing you to declare a variable or method synchronized, allowing only one thread access at a time.
I believe some .NET objects have Lock and Synchronized methods defined on them, but I know no more than this.
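To make the concurrency point concrete, here is the guarded-store idea sketched in Go, where a sync.Mutex plays the role the answer above assigns to Java's synchronized keyword; the names are illustrative.

// A single store guarded by a lock: only one goroutine (thread) touches the
// map at a time, so a reader never observes a half-finished update.
package main

import (
	"fmt"
	"sync"
)

type DataStore struct {
	mu     sync.Mutex
	values map[string]int
}

func NewDataStore() *DataStore {
	return &DataStore{values: make(map[string]int)}
}

// Set and Get both take the lock before touching the map.
func (d *DataStore) Set(key string, v int) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.values[key] = v
}

func (d *DataStore) Get(key string) (int, bool) {
	d.mu.Lock()
	defer d.mu.Unlock()
	v, ok := d.values[key]
	return v, ok
}

func main() {
	store := NewDataStore()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) { // several writers updating the same store concurrently
			defer wg.Done()
			store.Set("counter", n)
		}(i)
	}
	wg.Wait()
	v, _ := store.Get("counter")
	fmt.Println("counter ended up as", v)
}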