Is ICU's ucnv_convertEx thread safe?

I'm wondering if ucnv_convertEx in the ICU library is thread safe. Looking at the source, it seems that it is, but I'm not 100% sure, and I can't find an explicit statement about this in the ICU documentation.
Thanks

The ICU User Guide discusses this, for all objects that have an open/close model. Each Converter object must be used in a single thread at a time. If you need more of them, clone them. They're cheap to clone.
By the way, where would you have expected this information? Maybe you could file a ticket, and we can improve the documentation. Thanks.
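For example, cloning a converter per thread might look like this. This is a minimal sketch assuming ICU 52 or later (where passing NULL for the stack-buffer arguments lets ucnv_safeClone heap-allocate the clone); the clone_for_thread name is mine:

    #include <unicode/ucnv.h>

    // Clone a shared converter so the calling thread gets its own copy.
    // The clone carries its own conversion state and must be closed with
    // ucnv_close() when the thread is done with it.
    UConverter* clone_for_thread(const UConverter* shared) {
        UErrorCode status = U_ZERO_ERROR;
        UConverter* clone = ucnv_safeClone(shared, nullptr, nullptr, &status);
        return U_SUCCESS(status) ? clone : nullptr;
    }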

Basically ICU is thread safe, but:
You can't assume it is safe to call const member functions (or C functions that merely read an object) on the same object from different threads; in fact this is generally unsafe, which makes ICU tricky in all thread-related aspects.
Naturally, you also can't use the same object from different threads through non-const member functions (or C functions that modify it).
So in the case of ucnv_convertEx, as long as you don't share a UConverter between threads, it is safe.
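To make that concrete, here is a minimal sketch (assuming ICU4C is available; the converter names, input, and buffer sizes are arbitrary) where each thread opens its own UConverter pair, so nothing is shared and ucnv_convertEx needs no locking:

    #include <unicode/ucnv.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Convert a UTF-8 buffer to ISO-8859-1 using converters owned by this
    // thread only; because nothing is shared, no locking is required.
    static void convert_in_thread(const char* input, int32_t inputLen) {
        UErrorCode status = U_ZERO_ERROR;
        UConverter* from = ucnv_open("UTF-8", &status);
        UConverter* to   = ucnv_open("ISO-8859-1", &status);
        if (U_FAILURE(status)) return;

        char out[256];
        char* target = out;
        const char* source = input;
        UChar pivot[64];
        UChar* pivotSource = pivot;
        UChar* pivotTarget = pivot;

        ucnv_convertEx(to, from,
                       &target, out + sizeof(out),
                       &source, input + inputLen,
                       pivot, &pivotSource, &pivotTarget, pivot + 64,
                       /*reset=*/true, /*flush=*/true, &status);
        if (U_SUCCESS(status))
            std::printf("converted to %d bytes\n", (int)(target - out));

        ucnv_close(from);
        ucnv_close(to);
    }

    int main() {
        static const char utf8[] = "h\xC3\xA9llo";  // "héllo" encoded as UTF-8
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back(convert_in_thread, utf8, (int32_t)(sizeof(utf8) - 1));
        for (auto& t : pool) t.join();
    }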

Related

Can I use boost named_semaphore in place of ACE_Semaphore as I am trying to move from ACE to boost libraries?

I am moving my code from ACE library support to boost library support. I need to replace ACE_Semaphore. It seems C++11 doesn't provide semaphores. I have seen named_semaphore in boost. Another choice I saw was the POCO semaphore, for which I would have to include the POCO libraries. Kindly give me suggestions as to which is the best way to move forward.
Edit: This is for intra process thread synchronization.
The short answer is: yes.
If it's for intra-process synchronization, you can simply emulate one with a mutex + condition variable (a sketch follows below):
C++0x has no semaphores? How to synchronize threads?
Note though, usually a mutex + condition variable will do, as the concrete condition doesn't usually take the form of a counter. (E.g. Synchronizing three threads with Condition Variable)
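For reference, here is a minimal sketch of such an emulation in C++11 (the Semaphore class name is mine, not from Boost or the standard):

    #include <condition_variable>
    #include <mutex>

    // Minimal counting semaphore built from std::mutex + std::condition_variable.
    class Semaphore {
    public:
        explicit Semaphore(unsigned initial = 0) : count_(initial) {}

        void post() {                          // release one slot
            std::lock_guard<std::mutex> lock(mutex_);
            ++count_;
            cv_.notify_one();
        }

        void wait() {                          // block until a slot is free
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return count_ > 0; });
            --count_;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        unsigned count_;
    };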
For interprocess synchronization you use the named semaphore. An example: How to limit the number of running instances in C++. Note that there are implementation differences¹.
¹ E.g. named_semaphore in boost allocates its own shared resource, while ACE assumes the user allocates it from existing shared space. In boost you obviously also can, as long as your platform supports native synchronization primitives in shared memory.
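A hedged sketch of that instance-limiting idea with boost::interprocess::named_semaphore (the semaphore name "my_app_limit" and the limit of 3 are arbitrary):

    #include <boost/interprocess/sync/named_semaphore.hpp>
    #include <cstdio>

    namespace bip = boost::interprocess;

    int main() {
        // Created on first use; subsequent processes open the same semaphore.
        bip::named_semaphore sem(bip::open_or_create, "my_app_limit", 3);
        if (!sem.try_wait()) {
            std::puts("too many instances already running");
            return 1;
        }
        // ... do the program's work ...
        sem.post();  // give the slot back on exit
        // bip::named_semaphore::remove("my_app_limit") would delete it for good.
        return 0;
    }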

Is MultiByteToWideChar reentrant or threadsafe?

Multiple threads in my application will be calling MultiByteToWideChar for converting UTF-8 to wchar_t strings.
I've been unable to find any documentation which states whether this function is re-entrant or thread safe. I want to avoid synchronizing calls to this method if not needed. Does anyone know the answer or how to find it?
The function is thread safe ... but I don't have a definitive link to prove it!
There is some discussion in this thread, but in general the rule is: if an API call is not tied to some specific context it is called with (e.g. a handle) and has no other explicit threading rules (e.g. the whole GDI layer), then it should be thread safe.
It would certainly be good to see this more explicitly called out in the documentation though.
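For what it's worth, the usual two-call pattern looks like this. Since MultiByteToWideChar only touches the buffers the caller passes in, concurrent calls from different threads, each with their own buffers, need no extra locking (a sketch, not official documentation):

    #include <windows.h>
    #include <string>

    // Convert UTF-8 to a wide string; purely local buffers, so safe to call
    // from many threads at once without synchronization.
    std::wstring utf8_to_wide(const std::string& utf8) {
        if (utf8.empty()) return std::wstring();
        // First call asks for the required length in wchar_t units.
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                      (int)utf8.size(), nullptr, 0);
        if (len <= 0) return std::wstring();
        std::wstring wide(len, L'\0');
        // Second call performs the conversion into our own buffer.
        MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(),
                            &wide[0], len);
        return wide;
    }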

Reentrancy in Boost

When working with multithreading, I need to make sure that the boost classes I use are reentrant (even when each thread uses its own object instance).
I'm having hard time finding in the documentation of Boost's classes a statement about the reentrancy of the class. Am I missing something here? Are all the classes of Boost reentrant unless explicitly stated otherwise in the documentation? Or is Boost's documentation on reentrancy a gray area?
For example, I couldn't find anywhere in the documentation a statement on the reentrancy of the boost::numeric::ublas::matrix class. So can I assume it's reentrant or not?
Thanks!
Ofer
Most of Boost is similar to most of the STL and the C++ standard library in that:
Creating two instances of a type in two threads at the same time is OK.
Using two instances of a type in two threads at the same time is OK.
Using a single object in two threads at the same time is often not OK.
But doing read-only operations on one object in two threads is often OK.
There is usually no particular effort taken to "enhance" thread safety, except where there is a particular need to do so, like shared_ptr, Asio, Signals2 (but not Signals), and so on. Parts of Boost that look like value types (such as your matrix example) probably do not have any special thread safety support at all, leaving it up to the application.
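To illustrate the rule of thumb with the matrix example: distinct instances per thread are fine, while mutating one shared instance from two threads would need external locking (a sketch; fill is a hypothetical helper):

    #include <boost/numeric/ublas/matrix.hpp>
    #include <functional>
    #include <thread>

    namespace ublas = boost::numeric::ublas;

    // Hypothetical helper: write a constant into every element.
    void fill(ublas::matrix<double>& m, double v) {
        for (unsigned i = 0; i < m.size1(); ++i)
            for (unsigned j = 0; j < m.size2(); ++j)
                m(i, j) = v;
    }

    int main() {
        ublas::matrix<double> a(100, 100), b(100, 100);
        // OK: two instances, one per thread.
        std::thread t1(fill, std::ref(a), 1.0);
        std::thread t2(fill, std::ref(b), 2.0);
        t1.join();
        t2.join();
        // Not OK without external locking: two threads writing to `a`
        // concurrently through non-const operations.
    }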

Why do we need boost::thread_specific_ptr?

Why do we need boost::thread_specific_ptr, or in other words what can we not easily do without it?
I can see why pthread provides pthread_getspecific() etc. These functions are useful for cleaning up after dead threads, and handy to call from C-style functions (the obvious alternative being to pass a pointer everywhere that points to some memory allocated before the thread was created).
In contrast, the constructor of boost::thread takes a callable class by value, and everything non-static in that class becomes thread-local once it is copied. I cannot see why I would want to use boost::thread_specific_ptr in preference to a class member any more than I would want to use a global variable in OOP code.
Do I horribly misunderstand anything? A very brief example would help, please. Many thanks.
thread_specific_ptr simply provides portable thread-local data access. You don't have to be managing your threads with Boost.Thread to get value from this. The canonical example is the one cited in the Boost docs for this class:
One example is the C errno variable, used for storing the error code related to functions from the Standard C library. It is common practice (and required by POSIX) for compilers that support multi-threaded applications to provide a separate instance of errno for each thread, in order to avoid different threads competing to read or update the value.
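As an illustration, per-thread state along the lines of that errno example might look like this (a sketch; last_error, set_error, and worker are hypothetical names):

    #include <boost/thread/thread.hpp>
    #include <boost/thread/tss.hpp>
    #include <iostream>

    // Hypothetical per-thread error slot, in the spirit of per-thread errno.
    boost::thread_specific_ptr<int> last_error;

    void set_error(int code) {
        if (!last_error.get())
            last_error.reset(new int(0));  // first touch creates this thread's copy
        *last_error = code;
    }

    void worker(int code) {
        set_error(code);
        // Each thread reads back only the value it stored itself.
        std::cout << "this thread's error: " << *last_error << std::endl;
    }

    int main() {
        boost::thread t1(worker, 1);
        boost::thread t2(worker, 2);
        t1.join();
        t2.join();
    }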

NSThread or pythons' threading module in pyobjc?

I need to do some network-bound calls (e.g., fetch a website) and I don't want them to block the UI. Should I be using NSThreads or Python's threading module if I am working in PyObjC? I can't find any information on how to choose one over the other. Note, I don't really care about Python's GIL since my tasks are not CPU-bound at all.
It will make no difference, you will gain the same behavior with slightly different interfaces. Use whichever fits best into your system.
Learn to love the run loop. Use Cocoa's URL-loading system (or, if you need plain sockets, NSFileHandle) and let it call you when the response (or failure) comes back. Then you don't have to deal with threads at all (the URL-loading system will use a thread for you).
Pretty much the only time to create your own threads in Cocoa is when you have a large task (>0.1 sec) that you can't break up.
(Someone might say NSOperation, but NSOperationQueue is broken and RAOperationQueue doesn't support concurrent operations. Fine if you already have a bunch of NSOperationQueue code or really want to prepare for working NSOperationQueue, but if you need concurrency now, run loop or threads.)
I'm more fond of the native Python threading solution, since I can join threads and keep references to them. AFAIK, NSThread doesn't support joining or cancelling, and you can get a variety of things done with Python threads.
Also, it's a bummer that NSThread entry points can't take multiple arguments, and though there are workarounds for this (like using NSDictionary or NSArray), it's still not as elegant or as simple as invoking a thread with the arguments laid out in order against corresponding parameters.
But yeah, if the situation demands that you use NSThread, there shouldn't be any problem at all. Otherwise, it's fine to stick with native Python threads.
I have a different suggestion, mainly because Python threading is just plain awful because of the GIL (Global Interpreter Lock), especially when you have more than one CPU core. There is a video presentation that goes into this in excruciating detail, but I cannot find the video right now - it was done by a Google employee.
Anyway, you may want to think about using the subprocess module instead of threading (have a helper program that you can execute, or use another binary on the system), or use NSThread; it should give you more performance than what you can get with CPython threads.
