numeric_limits of atomic types - c++11

Suppose someAtomic is a std::atomic with an integral underlying type, such as atomic_uint16_t. However, I don't want the particular code to assume WHICH integral type it is, so I want something that accomplishes the following, which right now doesn't compile:
if (newVal > numeric_limits<decltype(someAtomic)>::max()) throw "newVal too large";
else someAtomic.store(newVal, memory_order_release);
It looks like at least in VC++2015 there are no numeric_limits specializations for atomic types even if their underlying types do have such specializations. What's the best way to deal with this?

#include <atomic>
#include <limits>
// Fallback: behaves exactly like std::numeric_limits<T>.
template<class T>
struct my_numeric_limits : std::numeric_limits<T> {};
// For atomics: report the limits of the underlying type T.
template<class T>
struct my_numeric_limits<std::atomic<T>> : my_numeric_limits<T> {};
Then you can use my_numeric_limits<SomeAtomic>::max().
This is less likely to violate (vague) parts of the standard than adding a specialization of std::numeric_limits that does not depend on a user-provided type. In C++11, specializations of standard templates had to involve a "user-defined type", and I am uncertain whether it has been settled that std::atomic<int> counts as one. I saw a fix proposal, but am uncertain whether it went anywhere.
Regardless, this follows the principle of least surprise, and is just as efficient. Messing around with things in the std namespace should only be done when the alternatives are impractical.
Get something wrong, and your code suddenly becomes ill-formed, no diagnostic required. People checking your code are rightly scared. People modifying your code have to not screw up. my_numeric_limits is robust, safe, and resists error.
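For concreteness, here is a minimal sketch of the question's check rewritten against the my_numeric_limits template defined above; the someAtomic and set_checked names are made up for illustration, and the underlying type (uint16_t here) can be swapped without touching the check:
#include <atomic>
#include <cstdint>
#include <stdexcept>

std::atomic<std::uint16_t> someAtomic{0};

void set_checked(unsigned long newVal)
{
    // The comparison adapts automatically to whatever integral type someAtomic wraps.
    if (newVal > my_numeric_limits<decltype(someAtomic)>::max())
        throw std::out_of_range("newVal too large");
    // Safe to narrow: the range check above guarantees the value fits.
    someAtomic.store(static_cast<std::uint16_t>(newVal), std::memory_order_release);
}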

The C++ Standard allows (and encourages) you to add specializations to std::numeric_limits and you can do just that.
#include <limits>
#include <atomic>
#include <iostream>

// Open namespace std so the partial specialization is well-formed in C++11;
// declaring it with a qualified name outside the namespace only became valid in C++17.
namespace std
{
    template<typename T>
    class numeric_limits<std::atomic<T>> : public numeric_limits<T> {};
}

int main()
{
    std::cout << std::numeric_limits<std::atomic<int>>::max();
}

Related

Can a critical section object be stored in a std::vector?

According to the documentation, a critical section object cannot be copied or moved.
Does that imply it cannot be safely stored in a std::vector style collection as an instance?
Correct; a CRITICAL_SECTION object should not be copied or moved, as it may stop working (e.g. perhaps it contains pointers to itself).
One approach would be to store a vector of smart pointers, e.g. (C++17 code):
#include <windows.h>
#include <memory>
#include <vector>

// Deleter that tears down the critical section and also frees the storage
// (a plain DeleteCriticalSection deleter would leak the new'd object).
struct CS_deleter
{
    void operator()(CRITICAL_SECTION* cs) const
    {
        DeleteCriticalSection(cs);
        delete cs;
    }
};

using CS_ptr = std::unique_ptr<CRITICAL_SECTION, CS_deleter>;

CS_ptr make_CriticalSection()
{
    CS_ptr p(new CRITICAL_SECTION);
    InitializeCriticalSection(p.get());
    return p;
}

int main()
{
    std::vector<CS_ptr> vec;
    vec.push_back(make_CriticalSection());
}
Consider using std::recursive_mutex, which is a drop-in replacement for CRITICAL_SECTION (and probably just wraps one in a Windows implementation), and which performs the correct initialization in its constructor and the release in its destructor.
Standard mutexes are also non-copyable so for this case you'd use std::unique_ptr<std::recursive_mutex> if you wanted a vector of them.
As discussed here also consider whether you actually want std::mutex instead of a recursive mutex.
NOTE: Windows Mutex is an inter-process object; std::mutex and friends correspond to a CRITICAL_SECTION.
I would also suggest reconsidering the need for a vector of mutexes; there may be a better solution to whatever you're trying to do.
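As a minimal sketch of that alternative (standard C++ only; the variable names are illustrative), a vector of non-copyable mutexes can be held through unique_ptr in the same way:
#include <memory>
#include <mutex>
#include <vector>

int main()
{
    // Mutexes can be neither copied nor moved, so the vector owns them indirectly.
    std::vector<std::unique_ptr<std::recursive_mutex>> locks;
    locks.push_back(std::make_unique<std::recursive_mutex>());

    // Lock the first mutex for the duration of this scope.
    std::lock_guard<std::recursive_mutex> guard(*locks[0]);
}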

std::hash no type named "hash_policy" when running ska::flat_hash_map

I'm primarily an R programmer, and I'm using Rcpp to run a hash map implementation by Malte Skarupke called ska::flat_hash_map on Windows 10 through RStudio (Microsoft OpenR). The C++ compiler is g++ run with C++11 flags.
With no changes to his .hpp file, I am unable to get it running, as it produces the error
Line 276 no type named 'hash_policy' in 'struct std::hash<char>'
The offending line in flat_hash_map.hpp is
template<typename T>
struct HashPolicySelector<T, void_t<typename T::hash_policy>>
{
    typedef typename T::hash_policy type;
};
I've found a few benchmark libraries on github that seem to include the library with no problems, and access it like std::unordered_map, so I don't understand why I am having problems getting it to run.
I've also tried providing different types instead of char, sticking to the ones that std::hash should be able to handle automatically, such as int, and std::string.
My source file is really simple, as I'm literally just trying to get a hash map created; for example, my last run used this:
#include <Rcpp.h>
#include "flat_hash_map.hpp"
using namespace Rcpp;
// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
void run_test()
{
    ska::flat_hash_map<char, char> test_map;
}
I'm hoping someone with more C++ experience than myself could shed some light on the problem, or try running the library themselves if the issue is reproducible.
Thanks for the help!
This is also my first post on StackOverflow, please let me know if there is something I can do to improve my question.
GCC < 5.0 would not trigger substitution failure on unused parameters within an alias template. This case was actually underspecified in the standard, eventually solved by CWG Issue 1558.
As a workaround, you should manually replace line 266:
template<typename...> using void_t = void;
with:
template <typename...>
struct voider { using type = void; };
template <typename... Ts>
using void_t = typename voider<Ts...>::type;
This forces the usage of template parameters of the alias template, allowing the compiler to SFINAE-out types that don't declare hash_policy.
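To see why this matters, here is a small self-contained sketch (the fallback_policy, custom_hasher and plain_hasher names are made up for illustration) showing the voider-based void_t driving the same kind of hash_policy detection that flat_hash_map.hpp performs:
#include <type_traits>

// The workaround: void_t defined through an intermediate class template.
template <typename...>
struct voider { using type = void; };
template <typename... Ts>
using void_t = typename voider<Ts...>::type;

struct fallback_policy {};                       // stand-in default policy

// Primary template: used when T has no nested hash_policy.
template <typename T, typename = void>
struct HashPolicySelector { typedef fallback_policy type; };

// Specialization: picked only when T::hash_policy names a type.
template <typename T>
struct HashPolicySelector<T, void_t<typename T::hash_policy>>
{
    typedef typename T::hash_policy type;
};

struct custom_hasher { struct hash_policy {}; }; // declares a hash_policy
struct plain_hasher {};                          // like std::hash<char>: no hash_policy

static_assert(std::is_same<HashPolicySelector<custom_hasher>::type,
                           custom_hasher::hash_policy>::value, "policy detected");
static_assert(std::is_same<HashPolicySelector<plain_hasher>::type,
                           fallback_policy>::value, "fell back to default");

int main() {}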

Standard sizeof macro for primitive types

Are there any standard macros that can be used to identify the size of a primitive type at compile time? Similar to the ones in GCC:
__SIZEOF_INT__
__SIZEOF_LONG__
__SIZEOF_LONG_LONG__
__SIZEOF_SHORT__
__SIZEOF_POINTER__
__SIZEOF_FLOAT__
__SIZEOF_DOUBLE__
__SIZEOF_LONG_DOUBLE__
__SIZEOF_SIZE_T__
I remember seeing something similar somewhere, but for the life of me I can't find or remember their names anymore. The one I'm mostly interested in is the long type.
There are no standard macro definitions for sizes of primitive types.
In Boost.Atomic there are macros giving you the sizes of primitive types; they are derived from boost/cstdint.hpp among other sources. An example would look as follows:
#include <iostream>
#include <boost/atomic.hpp>

int main() {
    // Internal Boost detail macro: the size of long on this platform.
    std::cout << BOOST_ATOMIC_DETAIL_SIZEOF_LONG;
}
reference:
http://www.boost.org/doc/libs/1_60_0/boost/atomic/detail/int_sizes.hpp
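If a preprocessor macro is not strictly required (i.e. you don't need #if-style conditional compilation), note that sizeof itself is already a compile-time constant in standard C++, so checks like the following sketch need no library at all:
#include <climits>

// Compile-time checks on the width of long using only standard facilities.
static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
static_assert(sizeof(long long) >= sizeof(long), "long long is at least as wide as long");

int main() {}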

gcc - gdb - pretty print stl

I'm currently doing some research on the STL, especially on printing the contents of STL containers during debugging. I know there are many different approaches.
Like:
http://sourceware.org/gdb/wiki/STLSupport
or using a shared library to print the content of a container
What I'm currently looking into is why g++ drops functions that are not used. For example, I have the following code and compile it with g++ -g main.cpp -o main.o.
#include <vector>
#include <iostream>
using namespace std;

int main() {
    std::vector<int> vec;
    vec.push_back(10);
    vec.push_back(20);
    vec.push_back(30);
    return 0;
}
So when I debug this code I will see that I can't use print vec.front(). The message I receive is:
Cannot evaluate function -- may be inlined
Therefore I tried the setting -fkeep-inline-functions, but nothing changed.
When I use nm main.o | grep front I see that there is no entry for the method .front(). Doing the same again, but with an extra vec.front() call in my code, I can use print vec.front(), and nm main.o | grep front now shows the entry
0000000000401834 W _ZNSt6vectorIiSaIiEE5frontEv
Can someone explain to me how I can keep all functions within my code without losing them? I thought that dead functions do not get removed as long as I don't turn on optimization settings or do the following:
How to tell compiler to NOT optimize certain code away?
Why I need it: the current Python pretty-printers rely on the internal STL implementation details to print the contents of a container, but it would be much more interesting to use the functions defined by ISO/IEC 14882. I know it's possible to write a shared library that is compiled alongside your actual code before you debug it, so that all STL functions stay available, but who wants to build an extra library into their code just to debug it? It would also be interesting to know the advantages and disadvantages of these two approaches (shared library vs. Python).
What exactly is a dead function? Isn't it a function that is present in my source code but never used?
There are two cases to consider:
int unused_function() { return 42; }
int main() { return 0; }
If you compile the above program, unused_function is dead -- never called. However, it will still be present in the final executable (even with optimization [1]).
Now consider this:
template <typename T> int unused_function(T*) { return 42; }
int main() { return 0; }
In this case, unused_function will not be present, even when you turn off all optimizations.
Why? Because the template is not a "real" function. It's a prototype, from which the compiler can create "real" functions (called "template instantiation") -- one for each type T. Since you've never used unused_function, the compiler didn't create any "real" instances of it.
You can request that the compiler explicitly instantiate all functions in a given class, with an explicit instantiation request, like so:
#include <vector>
template class std::vector<int>;
int main() { return 0; }
Now, even though none of the vector functions are used, they are all instantiated into the final binary.
[1] If you are using the GNU ld (or gold), you could still get rid of unused_function in this case, by compiling with -ffunction-sections and linking with -Wl,--gc-sections.
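For reference, the combination from [1] would look something like this on the command line (flag names as documented for GCC and the GNU linkers):
g++ -g -ffunction-sections main.cpp -Wl,--gc-sections -o main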
Thanks for your answer. Just to repeat: template functions don't get instantiated by gcc, because they are prototypes. Only when the function is used, or when it is explicitly instantiated, will it be available within my executable.
So what we have mentioned so far is:
function definition int unusedFunc() { return 10; }
function prototype int protypeFunc(); (just to break it down)
What happens when you inline functions? I always thought that the function body would be inserted at the call site, but now I read that compilers often decide on their own what to do. (Sounds strange, because there must be a rule.) It doesn't matter whether you use the keyword inline, for example:
inline int inlineFunc() { return 10; }
A friend of mine also told me that he couldn't take the addresses of some functions, even though he hadn't used inline. Are there any function types I forgot? He also told me that there should be differences within the object file format.
#edit - forgot:
nested functions
function pointers
overloaded functions

boost::lock_guard vs boost::mutex::scoped_lock

Which is preferred: boost::lock_guard or boost::mutex::scoped_lock?
I'm using Boost.Thread with the hope to move to C++11 threading when it becomes available.
Is scoped_lock part of the next C++ standard?
Are there any advantages to preferring one over the other?
NOTE: I'm aware that scoped_lock is just a typedef of lock_guard.
edit: I was wrong scoped_lock is not a typedef of lock_guard. It's a typedef of unique_lock.
Amit is right: boost::mutex::scoped_lock is a typedef for boost::unique_lock<boost::mutex>, not lock_guard. scoped_lock is not available in C++0x.
Unless you need the flexibility of unique_lock, I would use lock_guard. It is simpler, and more clearly expresses the intent to limit the lock to a defined scope.
Not much difference between the two. As per Boost, scoped_lock is a typedef for unique_lock<mutex>. Both unique_lock and lock_guard implement RAII-style locking. The difference between them is simply that unique_lock has a richer interface -- it allows you to defer locking and to unlock before the end of the scope.
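As a small sketch of that difference (shown here with the std types, which mirror the Boost ones; shared_value and the function names are illustrative only):
#include <mutex>

std::mutex m;
int shared_value = 0;

void simple_update()
{
    // lock_guard: locks in the constructor, unlocks in the destructor, nothing else.
    std::lock_guard<std::mutex> guard(m);
    ++shared_value;
}

void flexible_update()
{
    // unique_lock: can defer locking and release the mutex early.
    std::unique_lock<std::mutex> lock(m, std::defer_lock);
    lock.lock();
    ++shared_value;
    lock.unlock();   // released before the end of the scope
    // ... work that doesn't need the mutex ...
}

int main()
{
    simple_update();
    flexible_update();
}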
