AFAIK there are pthread functions that act as memory barriers (e.g., see "Clarifications on full memory barriers involved by pthread mutexes"). But what about compile-time barriers, i.e., is the compiler (specifically gcc) aware of this?
In other words: is pthread_create(), for example, a reason for gcc not to perform reordering?
For example, in this code:
a = 1;
pthread_create(...);
Is it certain that reordering will not take place?
What about invocations from different functions:
void fun(void) {
    pthread_create(...);
    ...
}
a = 1;
fun();
Is fun() also a compile-time barrier (assuming pthread_create() is)?
What about functions in different translation units?
Please note that I am interested in the general gcc and pthreads behavior specification, not necessarily x86-specific behavior (various embedded platforms are in focus).
I am also not interested in the behavior of other compilers/thread libraries.
Because functions such as pthread_create() are external functions, the compiler must ensure that any side effects that could be visible to an external function (such as a write to a global variable) are completed before calling the function. The compiler couldn't reorder the write to a until after the function call in the first case (assuming a was global or otherwise potentially accessible externally).
This is behavior that is necessary for any C compiler, and really has little to do with threads.
However, if the variable a was a local variable, the compiler might be able to defer the write until after the function call (a might not even end up in memory at all, for that matter), unless something like the address of a was taken and made available externally somehow (such as by passing it as the thread parameter).
For example:
int a;

void foo(void)
{
    a = 1;
    pthread_create(...); // the compiler can't reorder the write to `a` past
                         // the call to `pthread_create()`
    // ...
}

void bar(void)
{
    int b;
    b = 1;
    pthread_create(...); // `b` can be initialized after calling `pthread_create()`
                         // `b` might not ever even exist except as something
                         // passed on the stack or in a register to `printf()`
    printf("%d\n", b);
}
I'm not sure if there's a document that outlines this in more detail; this is covered largely by C's 'as if' rule. In C99 that's 5.1.2.3/3 "Program execution". C is specified in terms of an abstract machine with sequence points at which side effects must be complete, and programs must follow that abstract machine model except where the compiler can deduce that the side effects aren't needed.
In my foo() example above, the compiler would generally not be able to deduce that setting a = 1; isn't needed by pthread_create(), so the side effect of setting a to the value 1 must be completed before calling pthread_create(). Note that if there are compilers that perform global optimizations that can deduce that a isn't used elsewhere, they could delay or elide the assignment. However, in that case nothing else is using the side effect, so there would be no problem with that.
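As an aside, when you want an explicit compile-time barrier that does not depend on an external function call, GCC supports an empty asm statement with a "memory" clobber. This is a sketch of that well-known GCC idiom, not something the pthreads specification itself provides:

// An empty asm with a "memory" clobber: gcc must assume it may read
// or write any memory, so loads and stores are not moved across it.
// It emits no instructions and is NOT a hardware memory barrier.
#define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

int flag;

void example(void)
{
    flag = 1;
    COMPILER_BARRIER(); // the store to `flag` cannot sink below this point
    // ...
}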
Related
I have some heavily-used code that I would like GCC to optimize aggressively. But I also want to write clean, reusable code with (inlinable) functions that are called from several places. There are cases where, in the inlined function, there is code that I know can be removed because the condition can never occur.
Let's look at a concrete example:
#include <assert.h>

static inline int foo(int c)
{
    if (c < 4)
        return c;
    else
        return 4;
}

int bar(int c)
{
    assert(c < 2);
    return foo(c);
}
With -DNDEBUG -O3, GCC will still generate the (c < 4) comparison even though I know it is not needed, because a precondition of the bar function is that c is 0 or 1. Without -DNDEBUG, GCC does remove the comparison because it is implied by the assert, but then of course you have the overhead of the assert (which costs a lot more).
Is there a way to convey the variable range to GCC so it can be used for optimisation?
If Clang can do better on this, I could also consider switching compilers.
You might use __builtin_unreachable (and read about the other builtins) in a test to tell the compiler, e.g.,
if (x < 2 || x > 100)
    __builtin_unreachable();
// Here the compiler knows that x is between 2 and 100 inclusive
In your case, add this at the start of your bar (probably wrapped in some nice looking macro):
if (c >= 2)
    __builtin_unreachable();
If you optimize strongly (e.g., -O2 at least), the compiler knows that x is between 2 and 100 inclusive (recent versions of GCC contain code to do such value-range analysis, at least for simple constant interval constraints like the one above, and take advantage of it in later optimization passes).
However, I am not so sure that you should use this (at least, don't use it often, and wrap it in some assert-like macro), because it might not be worth the trouble, and because in practice the compiler is only able to handle and propagate simple constraints (whose details are compiler-version specific).
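For instance, here is a minimal sketch of such an assert-like macro (the name ASSUME is illustrative, not a standard facility): in debug builds it checks the condition at runtime, and with -DNDEBUG it turns into a pure optimization hint:

#include <assert.h>

#ifdef NDEBUG
// Release build: the condition becomes an optimization hint; if it
// is ever false, behavior is undefined (like __builtin_unreachable).
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
// Debug build: actually verify the condition at runtime.
#define ASSUME(cond) assert(cond)
#endif

static inline int foo(int c)
{
    if (c < 4)   // with ASSUME(c < 2) in bar, gcc can fold this test away
        return c;
    else
        return 4;
}

int bar(int c)
{
    ASSUME(c < 2);
    return foo(c);
}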
As far as I know, both recent Clang and GCC accept this builtin.
Also look into __builtin_trap (which, in contrast, does emit runtime code).
#define __verify_pcpu_ptr(ptr)                                        \
do {                                                                  \
    const void __percpu *__vpp_verify = (typeof((ptr) + 0))NULL;      \
    (void)__vpp_verify;                                               \
} while (0)

#define VERIFY_PERCPU_PTR(__p)                                        \
({                                                                    \
    __verify_pcpu_ptr(__p);                                           \
    (typeof(*(__p)) __kernel __force *)(__p);                         \
})
What do these two macros do? What are they used for? How do they work?
Thanks.
This is part of the scheme used by per_cpu_ptr to support a pointer that gets a different value for each CPU. There are two motives here:
Ensure that accesses to the per-cpu data structure are only made via the per_cpu_ptr macro.
Ensure that the argument given to the macro is of the correct type.
Restating: this ensures that (a) you don't accidentally access a per-cpu pointer without the macro (which would only reference the first of N members), and (b) you don't inadvertently use the macro to cast a pointer that is not of the correct declared type to one that is.
By using these macros, you get the support of the compiler in type-checking without any runtime overhead. The compiler is smart enough to eventually recognize that all of these complex machinations result in no observable state change, yet the type-checking will have been performed. So you get the benefit of the type-checking, but no actual executable code will have been emitted by the compiler.
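To see the mechanism in isolation, here is a reduced sketch of the same typeof trick, without the kernel's __percpu/__kernel address-space annotations (those require sparse); the macro name and the int type are illustrative:

#include <stddef.h>

// The dummy initialization forces the compiler to check that `ptr`
// is (convertible to) the expected pointer type; `(ptr) + 0` ensures
// the argument really is a pointer. The (void) cast silences the
// unused-variable warning, and since nothing observable happens, the
// compiler emits no code for any of it.
#define VERIFY_INT_PTR(ptr)                                  \
do {                                                         \
    const int *__verify = (__typeof__((ptr) + 0))NULL;       \
    (void)__verify;                                          \
} while (0)

int main(void)
{
    int x = 0;
    int *p = &x;
    VERIFY_INT_PTR(p);  // compiles: int * converts to const int *
    // double *q = NULL;
    // VERIFY_INT_PTR(q);  // would fail: incompatible pointer types
    return 0;
}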
I stumbled upon the following problem when using the checked implementation of glibcxx:
/usr/include/c++/4.8.2/debug/vector:159:error: attempt to self move assign.
Objects involved in the operation:
sequence "this" # 0x0x1b3f088 {
type = NSt7__debug6vectorIiSaIiEEE;
}
Which I have reduced to this minimal example:
#include <vector>
#include <random>
#include <algorithm>
struct Type {
    std::vector<int> ints;
};

int main() {
    std::vector<Type> intVectors = {{{1}}, {{1, 2}}};
    std::shuffle(intVectors.begin(), intVectors.end(), std::mt19937());
}
Tracing the problem, I found that shuffle wants to std::swap an element with itself. As Type is user-defined and no specialization of std::swap has been provided for it, the default one is used, which creates a temporary and uses operator=(&&) to transfer the values:
_Tp __tmp = _GLIBCXX_MOVE(__a);
__a = _GLIBCXX_MOVE(__b);
__b = _GLIBCXX_MOVE(__tmp);
As Type does not explicitly define operator=(&&), it is implicitly defaulted, "recursively" applying the same operation to its members.
The problem occurs on line 2 of the swap code, where __a and __b refer to the same object, which in effect results in the code __a.operator=(std::move(__a)); this then triggers the error in the checked implementation of vector::operator=(&&).
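Spelled out, this is roughly what the generic swap does when both arguments alias the same object (a self-contained sketch of the failing case):

#include <utility>
#include <vector>

struct Type {
    std::vector<int> ints;
};

int main() {
    Type t{{1, 2, 3}};
    Type& a = t;
    Type& b = t;             // aliases: the situation shuffle can produce
    Type tmp = std::move(a); // line 1: move-construct the temporary
    a = std::move(b);        // line 2: self-move-assignment, since a and b alias
    b = std::move(tmp);      // line 3: move the temporary back
}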
My question is: whose fault is this?
Is it mine, because I should provide an implementation for swap that makes "self swap" a NOP?
Is it std::shuffle's, because it should not try to swap an element with itself?
Is it the checked implementation's, because self-move-assignment is perfectly fine?
Everything is correct, the checked implementation is just doing me a favor in doing this extra check (but then how to turn it off)?
I have read about shuffle requiring the iterators to be ValueSwappable. Does this extend to self-swap (which is a mere runtime problem and cannot be enforced by compile-time concept checks)?
Addendum
To trigger the error more directly one could use:
#include <vector>

int main() {
    std::vector<int> vectorOfInts;
    vectorOfInts = std::move(vectorOfInts);
}
Of course this is quite obvious (why would you move a vector to itself?).
If you were swapping std::vectors directly, the error would not occur, because the vector class has a custom implementation of the swap function that does not use operator=(&&).
The libstdc++ Debug Mode assertion is based on this rule in the standard, from [res.on.arguments]
If a function argument binds to an rvalue reference parameter, the implementation may assume that this parameter is a unique reference to this argument.
i.e. the implementation can assume that the object bound to the parameter of T::operator=(T&&) does not alias *this, and if the program violates that assumption the behaviour is undefined. So if the Debug Mode detects that in fact the rvalue reference is bound to *this it has detected undefined behaviour and so can abort.
The paragraph contains this note as well (emphasis mine):
[Note: If a program casts an lvalue to an xvalue while passing that lvalue to a library function (e.g., by calling the function with the argument std::move(x)), the program is effectively asking that function to treat that lvalue as a temporary object. The implementation is free to optimize away aliasing checks which might be needed if the argument was an lvalue. —end note]
i.e. if you say x = std::move(x) then the implementation can optimize away any check for aliasing such as:
X& X::operator=(X&& rval) { if (&rval != this) ...
Since the implementation can optimize that check away, the standard library types don't even bother doing such a check in the first place. They just assume self-move-assignment is undefined.
However, because self-move-assignment can arise in quite innocent code (possibly even outside the user's control, because the std::lib performs a self-swap) the standard was changed by Defect Report 2468. I don't think the resolution of that DR actually helps though. It doesn't change anything in [res.on.arguments], which means it is still undefined behaviour to perform a self-move-assignment, at least until issue 2839 gets resolved. It is clear that the C++ standard committee think self-move-assignment should not result in undefined behaviour (even if they've failed to actually say that in the standard so far) and so it's a libstdc++ bug that our Debug Mode still contains assertions to prevent self-move-assignment.
Until we remove the overeager checks from libstdc++ you can disable that individual assertion (but still keep all the other Debug Mode checks) by doing this before including any other headers:
#include <debug/macros.h>
#undef __glibcxx_check_self_move_assign
#define __glibcxx_check_self_move_assign(x)
Or equivalently, using just command-line flags (so no need to change the source code):
-D_GLIBCXX_DEBUG -include debug/macros.h -U__glibcxx_check_self_move_assign '-D__glibcxx_check_self_move_assign(x)='
This tells the compiler to include <debug/macros.h> at the start of the file, then undefines the macro that performs the self-move-assign assertion, and then redefines it to be empty.
(In general defining, undefining or redefining libstdc++'s internal macros is undefined and unsupported, but this will work, and has my blessing).
It is a bug in GCC's checked implementation. According to the C++11 standard, swappable requirements include (emphasis mine):
17.6.3.2 §4 An rvalue or lvalue t is swappable if and only if t is swappable with any rvalue or lvalue, respectively, of type T
Any rvalue or lvalue includes, by definition, t itself; therefore, to be swappable, swap(t, t) must be legal. At the same time, the default swap implementation requires the following:
20.2.2 §2 Requires: Type T shall be MoveConstructible (Table 20) and MoveAssignable (Table 22).
Therefore, to be swappable under the definition of the default swap operator, self-move-assignment must be valid and have the postcondition that after self-assignment t is equivalent to its old value (not necessarily a no-op, though!) as per Table 22.
Although the object you are swapping is not a standard type, MoveAssignable has no precondition that rv and t refer to different objects, and as long as all members are MoveAssignable (as std::vector should be), the generated move assignment operator must be correct (as it performs memberwise move assignment as per 12.8 §29). Furthermore, although the note states that rv has valid but unspecified state, any state other than being equivalent to its original value would be incorrect for self-assignment, as otherwise the postcondition would be violated.
I read a couple of tutorials about copy constructors and move assignment (for example this one). They all say that the object must check for self-assignment and do nothing in that case. So I would say it is the checked implementation's fault, because self-move-assignment is perfectly fine.
For the following statement inside function func(), I'm trying to figure out the variable name (which is 'dictionary' in the example) that points to the malloc'ed memory region.
void func() {
    uint64_t *dictionary = (uint64_t *) malloc(sizeof(uint64_t) * 128);
}
The instrumented malloc() can record the start address and size of the allocation. However, it has no knowledge of the variable ('dictionary') that the result will be assigned to. Are there any features on the compiler side that can help solve this problem, without modifying the compiler to instrument such assignment statements?
One approach I have been considering is to exploit the fact that the variable 'dictionary' and the call to 'malloc' are on the same source line (or adjacent ones), and DWARF provides line information.
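A minimal sketch of that idea (the wrapper name and output format are illustrative): record the caller's return address at allocation time, then map it back to a file and line via the DWARF line table (e.g., with addr2line), and from there to the assignment statement:

#include <stdio.h>
#include <stdlib.h>

// Hypothetical instrumented allocator: besides the start address and
// size, it records the call site. __builtin_return_address(0) (a
// GCC/Clang builtin) yields the address in the caller just after the
// call, i.e., within the statement whose left-hand side is the
// variable of interest.
void *traced_malloc(size_t size)
{
    void *p = malloc(size);
    fprintf(stderr, "malloc(%zu) = %p, call site %p\n",
            size, p, __builtin_return_address(0));
    return p;
}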
One thing you can do with Clang and LLVM is emit the code with debug information and then look for malloc calls. These will be assigned to LLVM values, which can be traced (when not compiled with optimizations, that is) to the original C/C++ source code via the debug information metadata.
With msvc, is there an equivalent to gcc's "__builtin_return_address"?
I'm looking to find the address of the calling function, 1 level deep.
_ReturnAddress
From MSDN:
The _ReturnAddress intrinsic provides the address of the instruction in the calling function that will be executed after control returns to the caller.
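A minimal usage sketch for MSVC (the intrinsic is declared in <intrin.h>):

#include <intrin.h>
#include <stdio.h>

#pragma intrinsic(_ReturnAddress)

void callee(void)
{
    // Prints the address of the instruction in the caller that will
    // execute after this function returns.
    printf("called from %p\n", _ReturnAddress());
}

int main(void)
{
    callee();
    return 0;
}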
Note that on some platforms the result could be misleading due to tail folding (tail-call optimization): the compiler might have your inner function return two levels deep. This commonly occurs for code like this:
int DoSomething()
{
    return DoSomethingSpecial();
}
The compiler could generate code so DoSomethingSpecial returns directly to the caller of DoSomething.
Also, the return address is not trustworthy enough to base security decisions on; see here.