Which is preferred boost::lock_guard or boost::mutex::scoped_lock?
I'm using Boost.Thread with the hope to move to C++11 threading when it becomes available.
Is scoped_lock part of the next c++ standard?
Are there any advantages to preferring one over the other?
NOTE: I'm aware that scoped_lock is just a typedef of lock_guard.
edit: I was wrong; scoped_lock is not a typedef of lock_guard. It's a typedef of unique_lock.
Amit is right: boost::mutex::scoped_lock is a typedef for boost::unique_lock<boost::mutex>, not lock_guard. scoped_lock is not available in C++0x.
Unless you need the flexibility of unique_lock, I would use lock_guard. It is simpler, and more clearly expresses the intent to limit the lock to a defined scope.
Not much difference between the two. As per Boost, scoped_lock is a typedef for unique_lock<mutex>. Both unique_lock and lock_guard implement RAII-style locking. The difference is simply that unique_lock has a more flexible interface -- it lets you defer locking and call unlock() explicitly.
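For illustration, a minimal sketch of the difference, using the std types from C++11 (the Boost equivalents are used the same way):

#include <mutex>

std::mutex m;
int shared_value = 0;

void simple_update()
{
    // lock_guard: locks in the constructor, unlocks in the destructor, nothing else.
    std::lock_guard<std::mutex> guard(m);
    ++shared_value;
}

void flexible_update()
{
    // unique_lock: can defer locking and unlock early.
    std::unique_lock<std::mutex> lock(m, std::defer_lock);
    lock.lock();    // take the lock only when actually needed
    ++shared_value;
    lock.unlock();  // release before the end of the scope
    // ... more work that does not need the mutex ...
}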
I am learning the Win32 API through the docs and I am kind of puzzled by one thing. The docs use CALLBACK and WINAPI in the same example, and when I peeked at their definitions, both were defined as __stdcall. If both are defined as the same thing, what's the point of having two different names for __stdcall?
While peeking at the definitions I also found APIPRIVATE and PASCAL, which were likewise defined as __stdcall. What's the point? Can I just replace every instance of those four definitions with __stdcall, or is that problematic?
WINAPI is the decoration used for APIs that Windows exposes to you.
CALLBACK is the decoration used for callback functions that you pass to Windows.
Replacing them with __stdcall is problematic only insofar as your code might one day be deemed good enough for other developers to use, who might try to build it with gcc, llvm, or another compiler that targets Windows but does not support __stdcall as a keyword (or supports it only as a backwards-compatibility hack, precisely because reasoning like the above so often went unchallenged).
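For context, a minimal sketch of how the two decorations typically appear (the window-procedure name is made up; the GetModuleHandleW declaration is paraphrased from the Windows headers):

#include <windows.h>

// A callback you pass to Windows: declared with CALLBACK.
LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

// An API Windows exposes to you is declared with WINAPI, e.g.
// HMODULE WINAPI GetModuleHandleW(LPCWSTR lpModuleName);

Both decorations expand to __stdcall, so the calling convention is identical; the two names merely document which direction the call crosses the API boundary.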
According to the documentation, a critical section object cannot be copied or moved.
Does that imply it cannot be safely stored by value in a std::vector-style collection?
Correct; the CRITICAL_SECTION object should not be copied/moved, as it may stop working (e.g. it may contain pointers to itself).
One approach would be to store a vector of smart pointers, e.g. (C++11 or later):
#include <windows.h>
#include <memory>
#include <vector>

// Deleter that releases the OS object and then frees the heap allocation.
struct CS_deleter {
    void operator()(CRITICAL_SECTION* p) const
    {
        DeleteCriticalSection(p);
        delete p;
    }
};

using CS_ptr = std::unique_ptr<CRITICAL_SECTION, CS_deleter>;

CS_ptr make_CriticalSection()
{
    CS_ptr p(new CRITICAL_SECTION);
    InitializeCriticalSection(p.get());
    return p;
}

int main()
{
    std::vector<CS_ptr> vec;
    vec.push_back( make_CriticalSection() );
}
Consider using std::recursive_mutex, which is a drop-in replacement for CRITICAL_SECTION (and probably just wraps one in a Windows implementation); it performs the correct initialization in its constructor and the release in its destructor.
Standard mutexes are also non-copyable, so in this case you'd use std::unique_ptr<std::recursive_mutex> if you wanted a vector of them.
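For illustration, a minimal sketch of that variant (std::make_unique needs C++14; with C++11 you would pass new std::recursive_mutex directly):

#include <memory>
#include <mutex>
#include <vector>

int main()
{
    // Each element owns its own mutex; the vector can grow freely because
    // only the unique_ptr (never the mutex itself) is moved around.
    std::vector<std::unique_ptr<std::recursive_mutex>> locks;
    locks.push_back(std::make_unique<std::recursive_mutex>());

    std::lock_guard<std::recursive_mutex> guard(*locks[0]);
    // ... protected work ...
}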
As discussed here also consider whether you actually want std::mutex instead of a recursive mutex.
NOTE: Windows Mutex is an inter-process object; std::mutex and friends correspond to a CRITICAL_SECTION.
I would also suggest reconsidering the need for a vector of mutexes; there may be a better solution to whatever you're trying to do.
I found a few questions on the site that approach that subject, but none of them seem to do this directly. An example is this, but the answer is not satisfying (it is untested, and doesn't explain why this is correct).
Consider this simple example:
class some_class
{
public:
Eigen::Matrix<double,3,4> M;
std::vector<Eigen::Matrix<double,4,2>,
Eigen::aligned_allocator<Eigen::Matrix<double,4,2>>> M2;
//other stuff
};
Now assume that I need to declare an std::vector of some_class objects. Then, is the declaration
std::vector<some_class, Eigen::aligned_allocator<some_class>>
//Note that it compiles and doesn't seem to cause noticeable run-time problems either
the correct way to do so, or do I have to reimplement an aligned_allocator for that class? I find the documentation a bit short and confusing, since it only states
Using STL containers on fixed-size vectorizable Eigen types, or classes having members of such types requires ...
but it doesn't explicitly say whether one should write an aligned_allocator in such situations.
Is the declaration above safe or not, and why?
Suppose someAtomic is an std::atomic with an integral underlying type, such as atomic_uint16_t. I don't want to assume WHICH integral type, however, in particular code, so I want something to accomplish the following, which right now doesn't compile:
if (newVal > numeric_limits<decltype(someAtomic)>::max()) throw "newVal too large";
else someAtomic.store(newVal, memory_order_release);
It looks like at least in VC++2015 there are no numeric_limits specializations for atomic types even if their underlying types do have such specializations. What's the best way to deal with this?
template<class T>
struct my_numeric_limits : std::numeric_limits<T>{};
template<class T>
struct my_numeric_limits<std::atomic<T>> : my_numeric_limits<T>{};
Then you can use my_numeric_limits<SomeAtomic>::max().
This is less likely to violate the (vague) parts of the standard than adding a specialization of std::numeric_limits that does not depend on a user-provided type. C++11 required that such specializations depend on a "user-defined type", and I am uncertain whether it has been settled that std::atomic<int> counts as user-defined. I saw a fix proposal, but am uncertain whether it went anywhere.
Regardless, this follows the principle of least surprise, and is just as efficient. Messing around with things in the std namespace should only be done when the alternatives are impractical.
Get something wrong, and your code suddenly becomes ill-formed, no diagnostic required. People checking your code are rightly scared. People modifying your code have to not screw up. my_numeric_limits is robust, safe, and resists error.
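For illustration, a minimal usage sketch with the check from the question (the shim is repeated so the snippet is self-contained; the uint16_t choice and the store_checked helper are hypothetical):

#include <atomic>
#include <cstdint>
#include <limits>
#include <stdexcept>

template<class T>
struct my_numeric_limits : std::numeric_limits<T> {};
template<class T>
struct my_numeric_limits<std::atomic<T>> : my_numeric_limits<T> {};

std::atomic<std::uint16_t> someAtomic{0};

// Checks the range of the underlying type without naming it explicitly.
void store_checked(unsigned long newVal)
{
    if (newVal > my_numeric_limits<decltype(someAtomic)>::max())
        throw std::out_of_range("newVal too large");
    someAtomic.store(static_cast<std::uint16_t>(newVal), std::memory_order_release);
}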
The C++ Standard allows (and encourages) you to add specializations to std::numeric_limits and you can do just that.
#include <limits>
#include <atomic>
#include <iostream>
template<typename T>
class std::numeric_limits<std::atomic<T>> : public std::numeric_limits<T> {};
int main()
{
std::cout << std::numeric_limits<std::atomic<int>>::max();
}
I have an old codebase from which I want to reuse some implementations in a new environment. The old code used the TBB framework, which I am really unfamiliar with.
Are there any equivalent implementations of these TBB types in C++11:
tbb::enumerable_thread_specific<...>
mutex_t
mutex_t::scoped_lock
If not: any tips on how I can convert them (links to good TBB summaries, tutorials, ...), or do I need to work my way through the whole TBB documentation?
(And no. Inserting TBB to the project is not an option.)
EDIT: I forgot to mention tbb::this_tbb_thread::yield; any suggestions about this?
The TBB features in your code do have near-equivalents in C++11 (or you can create one simply).
enumerable_thread_specific<T> is an implementation of thread-local storage. It can use the platform's thread-local storage, or a tbb::concurrent_vector, to hold instances; the default is to consume no platform thread-local storage keys. C++11 has the thread_local keyword, so depending on how the enumerable_thread_specific is used you can replace it with a thread_local variable of the same type. If you are using the structure to persist the data, or to access it outside a thread-local context, you may have your work cut out for you.
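For illustration, a minimal sketch of the simplest case, where each worker thread only ever touches its own copy (the names here are hypothetical, not from the original code):

#include <thread>
#include <vector>

// Roughly what tbb::enumerable_thread_specific<int> provides per thread,
// when the values never need to be enumerated from another thread.
thread_local int per_thread_counter = 0;

void do_work()
{
    ++per_thread_counter; // no synchronization needed: each thread has its own copy
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(do_work);
    for (auto& t : workers)
        t.join();
}

Note that thread_local gives you no way to enumerate all threads' values the way enumerable_thread_specific::combine() does; if the old code relies on that, you will have to collect the values yourself (e.g. under a mutex).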
mutex_t is a generic mutex type, and can be replaced with std::mutex, though the developer may have chosen a particular implementation (like spin_mutex) that will be affected by the replacement.
scoped_lock is an RAII object that locks the mutex on construction and unlocks it when leaving the scope (making it exception-friendly). You can use std::lock_guard<std::mutex>, which has been available since C++11, or roll your own.
It has been a while since I read the yield documentation. I believe the implementation looks for other runnable tasks before giving up the time slice. You can use std::this_thread::yield() to relinquish the time slice, but the behavior may differ if the code is using other TBB constructs. The fact that you haven't mentioned any other TBB features suggests to me there are none in the program, in which case tbb::this_tbb_thread::yield() does essentially the same thing as std::this_thread::yield().
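Putting the mutex and yield pieces together, a minimal sketch of a straight translation (assuming mutex_t was a typedef for an ordinary mutex rather than a spin mutex; the names are illustrative):

#include <mutex>
#include <thread>

std::mutex m;           // was: mutex_t m;
int shared_counter = 0;

void worker()
{
    {
        // was: mutex_t::scoped_lock lock(m);
        std::lock_guard<std::mutex> lock(m);
        ++shared_counter;
    }
    // was: tbb::this_tbb_thread::yield();
    std::this_thread::yield();
}

int main()
{
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}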
I would suggest making the old codebase work first and only then changing it.
tbb::enumerable_thread_specific<...> has no direct standard equivalent.
mutex_t and mutex_t::scoped_lock can be replaced with std::mutex and std::unique_lock<std::mutex>.