Looking at the C++11 Spec (n3485) section 5.3.7, note 3 says that the result of noexcept(expr) is false if:
... a potentially-evaluated call to a function... that does not have a
non-throwing exception-specification ... a potentially-evaluated
throw-expression ... a potentially-evaluated dynamic_cast ... a
potentially-evaluated typeid expression...
Does "potentially evaluated" mean that it drills down (not at all? a little?) to determine if one of the conditions can result in false?
I'm finding that (in test code, not an application) a function that claims to be noexcept but does, in fact, throw (even in all cases) is still considered noexcept. Am I misunderstanding the spec, or is the code in the following example all wrong?
double calculate(....) noexcept { throw "haha"; } // using simpsons::nelson
bool does_not_throw = noexcept(calculate());
According to Clang 3.3 this test says that calculate() does not throw.
All it's doing is inspecting the expression to see whether any of its terms is allowed to throw an exception. It doesn't check the actual code that would potentially be called. If one of the expression's terms is a call to a function that is not explicitly noexcept, then that call is assumed to be able to throw exceptions.
Or, to put it another way, it checks to see if all of the functions being called in the expression are noexcept. That's all.
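To illustrate (a minimal sketch with made-up function names): the operator's result depends only on the declared exception specifications of the things named in the expression, never on any function body.
#include <cstddef>

void may_throw();              // no exception specification: assumed able to throw
void never_throws() noexcept;  // declared non-throwing

int main() {
    // The operand of noexcept(...) is unevaluated; only declarations are consulted.
    static_assert(!noexcept(may_throw()),   "call to a function without noexcept");
    static_assert(noexcept(never_throws()), "call to a noexcept function");
    static_assert(noexcept(1 + 2),          "built-in arithmetic does not throw");
}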
According to Clang 3.3 this test says that calculate() does not throw.
And that's true. Because calculate is defined as noexcept, if it tries to emit an exception, std::terminate will be called. Therefore, no exceptions will be emitted by the function.
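A minimal sketch of that behaviour (the thrown string literal is arbitrary):
double calculate() noexcept { throw "haha"; }  // compiles, typically with a warning

int main() {
    // The operator still reports true, because only the declaration matters:
    static_assert(noexcept(calculate()), "declared noexcept");
    calculate();  // at runtime the exception cannot escape: std::terminate is called
}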
Here is the code:
#include <string>

class SomeType {
public:
    SomeType() {}
    ~SomeType() {}
    std::string xxx;
};

bool function_ab() {
    SomeType(); // This is an rvalue (a temporary).
    // The temporary is destroyed here when I test the code. I want to make sure that it always destructs here.
    int a = 0, b = 10;
    ....// other code
    return true;
}
Please tell me if you know the truth. Thank you!
What you have is called a temporary object. From §6.7.7,
Temporary objects are created
when a prvalue is converted to an xvalue
or, more specifically,
[Note 3: Temporary objects are materialized:
...
when a prvalue that has type other than cv void appears as a discarded-value expression ([expr.context]).
— end note]
and, on the lifetime, the same section has this to say
Temporary objects are destroyed as the last step in evaluating the full-expression ([intro.execution]) that (lexically) contains the point where they were created.
You can read more about the expression semantics, but in your case "full-expression" is fairly unambiguous.
SomeType();
The "full-expression" containing your constructor call is... the constructor call itself. So the destructor will be called immediately after evaluating the constructor. There are some exceptions to this rule (such as if the temporary object is thrown as an exception or is bound as a reference), but none of those apply here.
As noted in the comments, compilers are free to inline your constructor and destructor calls and then are free to notice that they do nothing and omit them entirely. Optimizers can do fun stuff with your code, provided it doesn't change the semantics. But a strict reading of the standard states that the destructor is called exactly where you suggested.
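A small sketch that makes the ordering observable (the printing constructor and destructor are additions for illustration):
#include <iostream>

struct SomeType {
    SomeType()  { std::cout << "ctor\n"; }
    ~SomeType() { std::cout << "dtor\n"; }
};

bool function_ab() {
    SomeType();                            // temporary; its full-expression ends at the ';'
    std::cout << "after the temporary\n";
    return true;
}

int main() { function_ab(); }
// Expected output: "ctor", then "dtor", then "after the temporary".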
I am trying to understand the noexcept feature.
I know it could be confusing, but besides that, could noexcept be deduced from the calling function when possible?
This is a non-working example of the situation:
void f(){}
void f() noexcept{} // not allowed in c++
void g(){f();} // should call f
void h() noexcept{f();} // should call f noexcept
int main(){
g();
h();
}
If there is no try/catch block in the calling function (h) then the compiler could deduce that one is interested in calling a particular f.
Is this pattern used in some other workaround form?
All I can imagine is something like this, but it is not very generic:
template<bool NE> void F() noexcept(NE);
template<>
void F<true>() noexcept(true){}
template<>
void F<false>() noexcept(false){}
void g(){F<noexcept(g)>();} // calls F<false>
void h() noexcept{F<noexcept(h)>();} // call F<true>
Some may wonder why that would make sense.
My logic is that C++ allows overloading with respect to const, both on function arguments and on member functions.
const member functions prefer to call const member overloads for example.
I think it would make sense for noexcept functions to call noexcept "overloads", especially if they are not called from a try/catch block.
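For comparison, this is the kind of const-based dispatch I have in mind (a minimal sketch):
#include <iostream>

struct Widget {
    void info()       { std::cout << "non-const overload\n"; }
    void info() const { std::cout << "const overload\n"; }

    void from_non_const()   { info(); }  // picks the non-const overload
    void from_const() const { info(); }  // picks the const overload
};

int main() {
    Widget w;
    w.from_non_const();
    w.from_const();
}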
It makes sense,
Of course it would make sense in principle. One version of the function could run, say, a faster algorithm, but which requires dynamically-allocated extra scratch memory, while the noexcept version could use a slower algorithm with O(1) extra space, on the stack.
but wouldn't be able to resolve the overload ...
As you may know, it's perfectly valid to call noexcept(false) functions from noexcept(true) functions. You're just risking a terminate instead of an exception being thrown; and sometimes - you're not risking anything because you've verified that the inputs you pass cannot trigger an exception. So, how would the compiler know which version of the function you're calling? And the same question for the other direction - maybe you want to call your noexcept(true) function from within a noexcept(false) function? That's also allowed.
... and - it would be mostly syntactic sugar anyway
With C++11, you can write:
#include <stdexcept>
template <bool ne2>
int bar(int x) noexcept(ne2);
template<> int bar<true>(int) noexcept { return 0; }
template<> int bar<false>(int) { throw std::logic_error("error!"); }
and this compiles just fine: GodBolt.
So you can have two functions with the same name and the same arguments, differing only w.r.t. their noexcept value - but with different template arguments.
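A caller then picks the variant explicitly (a small usage sketch, continuing the bar definitions above):
int call_from_noexcept_code(int x) noexcept {
    return bar<true>(x);   // the specialization that promises not to throw
}

int call_from_throwing_code(int x) {
    return bar<false>(x);  // the specialization that may throw
}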
I don't think overloading on noexcept makes a lot of sense on its own. For sure, it matters whether your function f is noexcept, especially when called from h, since an exception escaping h would result in std::terminate being called.
However, just overloading on noexcept isn't a good thing to do. It's like disabling exceptions in the standard library. I'm not arguing you shouldn't do that, though you do lose functionality because of it. For example: std::vector::at throws if the index is invalid. If you disable exceptions, you don't have an alternative for using this functionality.
So if you really want to have 2 versions, you might want to use other alternatives to indicate failure. std::optional, std::expected, std::error_code ...
Even if you could overload on noexcept, the two variants would most likely end up with different return types anyway (as with the alternatives above), and that isn't something I would expect as a user of your framework.
Hence, I think it's better to overload in a different way, so the user can choose explicitly which variant to use: by passing a boolean, by taking std::nothrow as an argument, or by taking an output std::error_code argument (see the sketch below). Or maybe you should settle on a single error-handling strategy for your library and enforce it for your users.
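For illustration, a sketch of what such an explicit choice could look like (parse_digit is a made-up function; std::optional requires C++17):
#include <new>           // std::nothrow_t, std::nothrow
#include <optional>
#include <stdexcept>
#include <string>

// Same operation, failure strategy chosen by the caller.
int parse_digit(const std::string& s) {                        // throwing variant
    if (s.size() != 1 || s[0] < '0' || s[0] > '9')
        throw std::invalid_argument("not a digit");
    return s[0] - '0';
}

std::optional<int> parse_digit(const std::string& s, std::nothrow_t) noexcept {
    if (s.size() != 1 || s[0] < '0' || s[0] > '9')
        return std::nullopt;                                   // report failure without throwing
    return s[0] - '0';
}

int main() {
    int a = parse_digit("7");                                  // throws on bad input
    std::optional<int> b = parse_digit("x", std::nothrow);     // empty optional on bad input
    return a + (b ? *b : 0);
}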
It is my understanding that move semantics can use move-constructors to elide what would otherwise be a copy. For example, a function returning a (perhaps) large data structure can now return by value, and the move constructor will be used to avoid a copy.
My question is this: is the compiler required to not copy when this is possible? It doesn't seem to be the case. In that case, wouldn't the following code have "implementation-defined" semantics?
static const int INVALID_HANDLE = 0xFFFFFFFF;
class HandleHolder {
int m_handle;
public:
explicit HandleHolder(int handle) : m_handle(handle) {}
HandleHolder(HandleHolder& hh) {
m_handle = hh.m_handle;
}
HandleHolder(HandleHolder&& hh) : m_handle(INVALID_HANDLE) {
std::swap(m_handle, hh.m_handle); // std::swap, from <utility>
}
~HandleHolder() noexcept {
if (m_handle != INVALID_HANDLE) {
destroy_the_handle_object(m_handle);
}
}
};
Say then we make a function:
HandleHolder make_hh(int handle) { return HandleHolder(handle); }
Which constructor is called? I would expect the move constructor, but am I guaranteed the move constructor?
I'm aware this is a silly example and that -- for example -- the copy constructor of this object should be deleted because there is no way to use it safely otherwise, but the semantics are simple enough that I wouldn't think something like this would be implementation-defined.
Yes, of course. There's nothing implementation-defined about it.
If there is a move constructor and it can be used, and it is a choice between a move constructor and a copy constructor, the move constructor will be invoked. That is a guarantee.
[C++11: 13.3.3.2/3]: [..] Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if:
[..]
S1 and S2 are reference bindings (8.5.3) and neither refers to an implicit object parameter of a non-static member function declared without a ref-qualifier, and S1 binds an rvalue reference to an rvalue and S2 binds an lvalue reference.
[..]
I think your confusion stems from misuse of the term "elide". The compiler may elide copies/moves and replace them with nothingness — with in-place construction that bypasses the invocation of a constructor altogether. Copy elision never results in a move, and move elision never results in a copy. Either the object "transferral" happens or it does not.
You could sort of argue that your program has "implementation-defined" semantics in the sense that you don't know whether copies/moves will be elided until the program has been compiled, and because such elision is allowed to modify side-effects (such as console output). But we don't tend to think of it that way.
Regardless, this does not affect which of the copy and move constructors will be invoked if either are to be.
Your example is further flawed because only your move constructor can be invoked: your copy constructor takes a ref-to-non-const which can't be bound through an rvalue initialiser.
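A minimal sketch of that point (a tracing type standing in for HandleHolder):
#include <iostream>

struct Tracer {
    Tracer() = default;
    Tracer(Tracer&)  { std::cout << "copy\n"; }  // non-const ref, like the copy ctor above
    Tracer(Tracer&&) { std::cout << "move\n"; }
};

Tracer make() { Tracer t; return t; }

int main() {
    Tracer a(make());  // prints "move" once or twice, or nothing if the moves are elided;
                       // it can never print "copy", because Tracer(Tracer&) cannot bind an rvalue
    Tracer b(a);       // prints "copy": a is a non-const lvalue
}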
I can check the input, and if it's invalid input from the user, I can use a simple "if condition" which prints "input invalid, please re-enter" whenever that happens.
This approach of "if there is a potential for a failure, verify it using an if condition and then specify the right behavior when failure is encountered..." seems enough for me.
If I can basically cover any kind of failure (divide by zero, etc.) with this approach, why do I need this whole exception handling mechanism (exception class and objects, checked and unchecked, etc.)?
Suppose you have func1 calling func2 with some input.
Now, suppose func2 fails for some reason.
Your suggestion is to handle the failure within func2, and then return to func1.
How will func1 "know" what error (if any) has occurred in func2 and how to proceed from that point?
The first solution that comes to mind is an error-code that func2 will return, where typically, a zero value will represent "OK", and each of the other (non-zero) values will represent a specific error that has occurred.
The problem with this mechanism is that it limits your flexibility in adding / handling new error-codes.
With the exception mechanism, you have a generic Exception object, which can be extended to any specific type of exception. In a way, it is similar to an error-code, but it can contain more information (for example, an error-message string).
You can still argue of course, "well, what's the try/catch for then? why not simply return this object?".
Fortunately, this question has already been answered here in great detail:
In C++ what are the benefits of using exceptions and try / catch instead of just returning an error code?
In general, there are two main advantages for exceptions over error-codes, both of which are different aspects of correct coding:
With an exception, the programmer must either handle it or let it propagate "upwards", whereas with an error-code, the programmer can mistakenly ignore it.
With the exception mechanism you can write your code much "cleaner" and have everything "automatically handled", whereas with error-codes you are obliged to implement a "tedious" switch/case, possibly in every function "up the call-stack" (as the sketch after this list illustrates).
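A compressed C++ sketch of that difference (the step functions are made up for illustration):
#include <cstdio>
#include <stdexcept>

// Error-code style: every caller must check and forward the code by hand.
int step1() { return 0; }                  // 0 = OK, non-zero = error
int step2() { return 42; }                 // pretend this step fails

int do_work_with_codes() {
    if (int rc = step1()) return rc;       // easy to forget either of these checks
    if (int rc = step2()) return rc;
    return 0;
}

// Exception style: intermediate callers just let the failure propagate;
// only the layer that can actually react writes a try/catch.
void step1_throwing() {}
void step2_throwing() { throw std::runtime_error("step2 failed"); }

void do_work_with_exceptions() {
    step1_throwing();
    step2_throwing();
}

int main() {
    std::printf("error code: %d\n", do_work_with_codes());
    try {
        do_work_with_exceptions();
    } catch (const std::exception& e) {
        std::printf("caught: %s\n", e.what());
    }
}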
Exceptions are a more object-oriented approach to handling exceptional execution flows than return codes. The drawback of return codes is that you have to come up with 'special' values to indicate different types of exceptional results, for example:
public double calculatePercentage(int a, int b) {
if (b == 0) {
return -1;
}
else {
return 100.0 * a / b; // 100.0 * a promotes to double, avoiding integer division
}
}
The above method uses a return code of -1 to indicate failure (because it cannot divide by zero). This would work, but your calling code needs to know about this convention, for example this could happen:
public double addPercentages(int a, int b, int c, int d) {
double percentage1 = calculatePercentage(a, b);
double percentage2 = calculatePercentage(c, d);
return percentage1 + percentage2;
}
The above code looks fine at first glance. But when b or d is zero the result will be unexpected: calculatePercentage will return -1, which gets added to the other percentage, which is likely not correct. The programmer who wrote addPercentages is unaware that there is a bug in this code until he tests it, and even then only if he really checks the validity of the results.
With exceptions you could do this:
public double calculatePercentage(int a, int b) {
if (b == 0) {
throw new IllegalArgumentException("Second argument cannot be zero");
}
else {
return 100.0 * a / b;
}
}
Code calling this method will compile without exception handling, but it will stop when run with incorrect values. This is often the preferred way since it leaves it up to the programmer if and where to handle exceptions.
If you want to force the programmer to handle this exception you should use a checked exception, for example:
public double calculatePercentage(int a, int b) throws MyCheckedCalculationException {
if (b == 0) {
throw new MyCheckedCalculationException("Second argument cannot be zero");
}
else {
return 100.0 * a / b;
}
}
Notice that calculatePercentage has to declare the exception in its method signature. Checked exceptions have to be declared like that, and the calling code either has to catch them or declare them in their own method signature.
I think many Java developers currently agree that checked exceptions are a bit invasive, so most APIs lately gravitate towards the use of unchecked exceptions.
The checked exception above could be defined like this:
public class MyCheckedCalculationException extends Exception {
public MyCheckedCalculationException(String message) {
super(message);
}
}
Creating a custom exception type like that makes sense if you are developing a component with multiple classes and methods which are used by several other components and you want to make your API (including exception handling) very clear.
(see the Throwable class hierarchy)
Let's assume that you need to write some code for some object, which consists of n different resources (n > 3) to be allocated in the constructor and deallocated inside the destructor.
Let's even say, that some of these resources depend on each other.
E.g., in order to create a memory map of some file, one would first have to successfully open the file and then perform the OS function for memory mapping.
Without exception handling you would not be able to use the constructor(s) to allocate these resources; you would likely use two-step initialization instead. You would then have to take care of the order of construction and destruction yourself, since you're no longer using the constructor.
Without exception handling you would not be able to return rich error information to the caller -- this is why in exception-free software one usually needs a debugger and a debug executable to identify why some complex piece of software is suddenly failing.
This again assumes that not every library is able to simply dump its error information to stderr; stderr is in certain cases not available, which in turn makes all code that relies on stderr for error reporting unusable.
Using C++ exception handling, you would simply chain the classes wrapping the matching system calls together as base-class or member subobjects, and the compiler would take care of the order of construction and destruction, calling destructors only for the subobjects whose constructors did not fail.
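A rough sketch of that pattern (POSIX open/mmap used for illustration; error handling heavily simplified):
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstddef>
#include <stdexcept>
#include <string>

// Each resource is owned by one class; the constructor acquires it, the destructor releases it.
class File {
    int fd_;
public:
    explicit File(const std::string& path) : fd_(::open(path.c_str(), O_RDONLY)) {
        if (fd_ < 0) throw std::runtime_error("open failed: " + path);
    }
    ~File() { ::close(fd_); }
    int fd() const { return fd_; }
};

class FileMapping {
    File file_;            // member subobject: constructed first, destroyed last
    void* addr_;
    std::size_t size_;
public:
    explicit FileMapping(const std::string& path) : file_(path) {
        struct stat st{};
        if (::fstat(file_.fd(), &st) != 0) throw std::runtime_error("fstat failed");
        size_ = static_cast<std::size_t>(st.st_size);
        addr_ = ::mmap(nullptr, size_, PROT_READ, MAP_PRIVATE, file_.fd(), 0);
        if (addr_ == MAP_FAILED) throw std::runtime_error("mmap failed");
        // If either throw above fires, file_ is already fully constructed,
        // so its destructor runs and the descriptor is closed automatically.
    }
    ~FileMapping() { ::munmap(addr_, size_); }
};

int main() {
    try {
        FileMapping map("/etc/hostname");  // hypothetical path, for illustration only
    } catch (const std::exception&) {
        // Whatever failed, nothing leaks: destructors of the successfully
        // constructed subobjects have already run.
    }
}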
To start with, methods are generally blocks of code or statements in a program that give you the ability to reuse the same code rather than duplicating it, which ultimately avoids wasting space on repeated code throughout the program.
struct STest : public boost::noncopyable {
STest(STest && test) : m_n( std::move(test.m_n) ) {}
explicit STest(int n) : m_n(n) {}
int m_n;
};
STest FuncUsingConst(int n) {
STest const a(n);
return a;
}
STest FuncWithoutConst(int n) {
STest a(n);
return a;
}
void Caller() {
// 1. compiles just fine and uses move ctor
STest s1( FuncWithoutConst(17) );
// 2. does not compile (cannot use move ctor, tries to use copy ctor)
STest s2( FuncUsingConst(17) );
}
The above example illustrates how in C++11, as implemented in Microsoft Visual C++ 2012, the internal details of a function can modify its return type. Up until today, it was my understanding that the declaration of the return type is all a programmer needs to know to understand how the return value will be treated, e.g., when passed as a parameter to a subsequent function call. Not so.
I like making local variables const where appropriate. It helps me clean up my train of thought and clearly structure an algorithm. But beware of returning a variable that was declared const! Even though the variable will no longer be accessed (a return statement was executed, after all), and even though the variable that was declared const has long gone out of scope (evaluation of the parameter expression is complete), it cannot be moved and thus will be copied (or fail to compile if copying is not possible).
This question is related to another question, Move semantics & returning const values. The difference is that in the latter, the function is declared to return a const value. In my example, FuncUsingConst is declared to return an ordinary non-const value. Yet the implementation details of the function body affect the type of the return value, and determine whether or not the returned value can be used as a parameter to other functions.
Is this behavior intended by the standard?
How can this be regarded useful?
Bonus question: How can the compiler know the difference at compile time, given that the call and the implementation may be in different translation units?
EDIT: An attempt to rephrase the question.
How is it possible that there is more to the result of a function than the declared return type? How does it even seem acceptable at all that the function declaration is not sufficient to determine the behavior of the function's returned value? To me that seems to be a case of FUBAR and I'm just not sure whether to blame the standard or Microsoft's implementation thereof.
As the implementer of the called function, I cannot be expected to even know all callers, let alone monitor every little change in the calling code. On the other hand, as the implementer of the calling function, I cannot rely on the called function to not return a variable that happens to be declared const within the scope of the function implementation.
A function declaration is a contract. What is it worth now? We are not talking about a semantically equivalent compiler optimization here, like copy elision, which is nice to have but does not change the meaning of code. Whether or not the copy ctor is called does change the meaning of code (and can even break the code to a degree that it cannot be compiled, as illustrated above). To appreciate the awkwardness of what I am discussing here, consider the "bonus question" above.
I like making local variables const where appropriate. It helps me clean up my train of thought and clearly structure an algorithm.
That is indeed a good practice. Use const wherever you can. Here, however, you cannot (if you expect your const object to be moved from).
The fact that you declare a const object inside your function is a promise that your object's state won't ever be altered as long as the object is alive - in other words, never before its destructor is invoked. Not even immediately before its destructor is invoked. As long as it is alive, the state of a const object shall not change.
However, here you are somehow expecting this object to be moved from right before it gets destroyed by falling out of scope, and moving is altering state. You cannot move from a const object - not even if you are not going to use that object anymore.
What you can do, however, is to create a non-const object and access it in your function only through a reference to const bound to that object:
STest FuncUsingConst(int n) {
STest object_not_to_be_touched_if_not_through_reference(n);
STest const& a = object_not_to_be_touched_if_not_through_reference;
// Now work only with a
return object_not_to_be_touched_if_not_through_reference;
}
With a bit of discipline, you can easily enforce the semantics that the function should not modify that object after its creation - except for being allowed to move from it when returning.
UPDATE:
As suggested by balki in the comments, another possibility would be to bind a constant reference to a non-const temporary object (whose lifetime would be prolonged as per § 12.2/5), and perform a const_cast when returning it:
STest FuncUsingConst(int n) {
STest const& a = STest(n); // STest has no default constructor, so construct from n
// Now work only with a
return const_cast<STest&&>(std::move(a));
}
A program is ill-formed if the copy/move constructor [...] for an object is implicitly odr-used and the special member function is not accessible
-- n3485 C++ draft standard [class.copy]/30
I suspect your problem is with MSVC 2012, and not with C++11.
This code, even without calling it, is not legal C++11:
struct STest {
STest(STest const&) = delete;
STest(STest && test) : m_n( std::move(test.m_n) ) {}
explicit STest(int n) : m_n(n) {}
int m_n;
};
STest FuncUsingConst(int n) {
STest const a(n);
return a;
}
because a is const: the move constructor cannot bind to it, and the copy constructor is deleted, so there is no legal way to turn a into a return value. While the copy can be elided, eliding it does not remove the requirement that the copy constructor exist and be accessible.
If MSVC2012 is allowing FuncUsingConst to compile, it is doing so in violation of the C++11 standard.
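For completeness, two sketches of how FuncUsingConst can be made well-formed while keeping the deleted copy constructor (both reuse the STest type from the question):
// (a) Don't declare the local const; the return statement can then move from it.
STest FuncReturningLocal(int n) {
    STest a(n);
    return a;           // treated as an rvalue: uses the move constructor (or is elided)
}

// (b) Return the prvalue directly, so no named const object is involved.
STest FuncReturningPrvalue(int n) {
    return STest(n);    // direct construction of the return value
}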