Display non-const string (error message) in Qt - c++11

Recently, I had to code some kind of input validation in a Qt GUI project.
During a failed validation, I would like to inform the user what exactly went wrong. Hence, I created QMessageBoxes with a short summary and the detailed error message obtained from std::exception::what().
Alas, QMessageBox expects QStrings, while exception::what() returns a C-style string (const char*).
What is the best way to solve or circumvent this issue? I need a temporary conventional widget that can display a runtime error message, non-editable but preferably copyable. Perhaps QMessageBox is not the best candidate for this?
Due to exterior constraints, I am using Qt 5.7 with C++ 11 and without Qt Quick.
EDIT: For the sake of a future reader: This solution seems to work:
try { // ...
}
catch (const out_of_range& e) {
    QMessageBox(QMessageBox::Critical, "Knoten zu groß",
        QString::fromStdString("short info. error message:\n" + string(e.what()))).exec();
}

You can use const_cast in order to cast an object to const and vice versa.
An example from cppreference.com


CBService/CBCharacteristic UUIDString returns nil

I have an iOS app that generates a new UUID when it's going to talk for the first time with my OSX app, which is then stored; this way I can have different characteristic identifiers for different devices.
After calling
peripheral.discoverCharacteristics([], forService: service as CBService)
in the "didDiscoverServices" method in the OSX app, in "didDiscoverCharacteristicsForService" I do this to try to find out what the new device's UUID is, so that I can check if they've talked before; if not, I will store it:
for characteristic in service.characteristics {
println("\(characteristic.UUIDString)")
}
The problem is that it prints nil. If I use characteristic.UUID instead, it gives me "Unknown (<46c35c38 c4994106 b6d351c9 8d900368>)", which is not the format I want to deal with.
Any idea why?
Another thing I've noticed: after repeatedly testing what I've described with LightBlue to see what my app is advertising, certain services/characteristics sometimes seem to get stuck and just won't go away; I've even had to restore my phone once to get rid of them. Is this normal?
Thanks a lot
It seems that the documentation for CBUUID is incorrect. Although it references the UUIDString method, as you pointed out in your comments this gives an unrecognised selector exception.
I tried both Swift and Objective C code.
The CBUUID that is returned from a CBMutableCharacteristic does respond to the UUIDString selector, but obviously you don't get CBMutableCharacteristics from the discovery process.
As a design note, your approach probably isn't the most efficient. It would probably be better to have a known characteristic that contains an identifier (such as identifierForVendor) and then you can use the contents of the characteristic itself rather than the identifier of the characteristic to determine if this is a new device or not.
This way you can send the list of desired characteristics to discoverCharacteristics and write more "defined" code, rather than having to assume that the "unknown" characteristic is the identifier.

How stringent should I be with Code Analysis compliance in Visual Studio?

After playing with Code Analysis for a small project I am working on, I am wondering just how severe I should be when resolving code to be analytically compliant.
I know I can suppress warnings for this, but to me, suppressing a warning is to some extent a cop-out (no pun intended... "FxCop").
Example warning:
Do not raise exceptions in unexpected locations: 'CustomObject.Equals(object)' creates an exception of type 'ArgumentException'. Exceptions should not be raised in this type of method. If this exception instance might be raised, change this method's logic so it no longer raises an exception.
Reason for throwing this...
CustomObject.Equals(object) might try to compare a CustomObject to a FooBarObject... which aren't even of the same type. So in this instance, should I throw an exception, or just return false?
In general, should I be really anal (for want of a better word) in making my code absolutely compliant, or will I come across situations where warning suppression will become necessary?
FxCop warnings are just warnings, they don't flag invalid code. That's the job of the compiler. The rules FxCop uses were collected from years of experience writing .NET code. They represent "best practices" and in general are there to remind you of unintended consequences and the more obscure parts of .NET programming, like CAS.
Always refer back to the documentation to see why the rule exists. For CA1065 you'll see:
An Equals method should return true or false instead of throwing an exception. For example, if Equals is passed two mismatched types it should just return false instead of throwing an ArgumentException.
Which exactly matches your usage, you'll have no trouble adopting the advice. Unfortunately it is a bit short on the exact reason the rule was created. Which really doesn't go beyond the "don't throw in unexpected places" guidance. The unintended consequence here is that another programmer that uses your class won't realize that a try/catch would be needed if he doesn't want the code to fail. Feel free to put a Debug.Assert() in your Equals method. There are plenty of cases where you'll want to ignore the advice, CA2000 is particularly prone to false warnings for example. Apply the [SuppressMessage] attribute if necessary to not have to look at it again.

What's the rule on hijacking Windows error codes for return from my own code?

Are the error codes defined in WinError.h free to be hijacked and returned by my own code?
There are some generic Win32 error codes defined:
ERROR_FILE_NOT_FOUND: "The system cannot find the file specified."
that of course i could use for my own purpose when a file is not found.
Then there are still some generic errors:
ERROR_ACCESS_DENIED: "Access is denied."
but it's generally understood that this error comes when trying to access a file. i might have an HttpGet() function that returns ERROR_ACCESS_DENIED if the server returns a 401.
But then there are codes that are defined as being for a particular purpose. Imagine my HttpGet() function throws an error if the https certificate is not a type we support:
ERROR_IPSEC_IKE_INVALID_CERT_TYPE: "Invalid certificate type"
except that code is defined quite clearly in WinError.h as belonging to IPSec:
///////////////////////////////////////////////////
// //
// Start of IPSec Error codes //
// //
// 13000 to 13999 //
///////////////////////////////////////////////////
...
//
// MessageId: ERROR_IPSEC_IKE_INVALID_CERT_TYPE
//
// MessageText:
//
// Invalid certificate type
//
#define ERROR_IPSEC_IKE_INVALID_CERT_TYPE 13819L
But the error text is exactly what i want!
We have a financial system that requires a 2nd user to approve a transaction; this means that the entered credentials must be a different user than the first person:
MK_E_MUSTBOTHERUSER: "User input required for operation to succeed"
or perhaps:
ERROR_LOGON_TYPE_NOT_GRANTED: "Logon failure: the user has not been granted the requested logon type at this computer."
i get the sense that spelunking through WinError, cherry-picking error codes by the string they show and re-purposing them to indicate errors they were not designed for, is something Raymond would give me a slap for.
Are Windows error codes a freely available "pool" of error codes? Or are all Windows error codes only to be returned from Microsoft's code?
If i'm creating HRESULTs, i need to use some code. And it would be great if the user could call FormatMessage to turn my HRESULT into a string. (Especially since my returned HRESULT could be either my own code, or an HRESULT that was returned to me from Microsoft code.)
When you define your own HRESULT codes, you are recommended to use FACILITY_ITF and define your codes in its range of application-defined codes. Yes, the codes would overlap between applications, but you can also provide an additional descriptive message through the SetErrorInfo API, indicating support by implementing ISupportErrorInfo if you are implementing a COM object/interface.
An excerpt from COM+ Programming book explains this in greater detail: COM+ programming: a practical guide using Visual C++ and ATL (page 67).
Simpler than that, you can use your own custom facility code, and this can differ between modules, so you can easily identify which module is the source of the problem.
If you put a MESSAGETABLE resource into your binary, the FormatMessage API can resolve HRESULTs and extract the description text for individual codes, just as it does for regular Windows error codes (the app would still need to provide a module handle to the API function).
See also: Creating your own HRESULT?
O.P. Edit:
From Microsoft's Open Protocol Specification of HRESULTs
2.1 HRESULT
The HRESULT numbering space is vendor-extensible. Vendors can supply their own values for this field, as long as the C bit (0x20000000) is set, indicating it is a customer code.
C (1 bit): Customer. This bit specifies if the value is customer-defined or Microsoft-defined. The bit is set for customer-defined values and clear for Microsoft-defined values. <1>
<1> Section 2.1: All HRESULT values used by Microsoft have the C bit clear.
This means that i can make up any HRESULT i like as long as i set the C bit. As a practical matter this means that i can use any code i like, e.g.:
E_LOGON_FAILURE = 0xA007052E;
Of course i didn't come up with this number at random.
First i set the high bit to 1, to indicate an error:
0x80000000
Then i "choose" an error code, e.g. 1326 (0x52E hex):
0x8000052E
Then i need to choose a facility. Let's chooooose, oh i dunno, seven:
0x8007052E
Finally i set the Customer bit, to indicate it is not a Microsoft defined HRESULT:
0xA007052E
And coincidentally everything up until the last step is the guts of the Microsoft HRESULT_FROM_WIN32 macro:
Maps a system error code to an HRESULT value.
HRESULT HRESULT_FROM_WIN32(DWORD x);
Don't go down this road; just define your own.
Reuse of the codes themselves is pointless, since having your own codes is just one #define away.
Reuse of the error strings is not such a good idea either:
These will be localized. So if your (English) application runs on a Chinese version of Windows, the user will see a mixture of the two languages.
Microsoft might change the text at their whim, with a new release of Windows or with an update. For example, what if you used ERROR_IPSEC_IKE_INVALID_CERT_TYPE to indicate HTTPS certificate errors, and Microsoft decided to fix their vague message to say "Invalid IPSEC IKE certificate type" instead? (I don't know what IKE is at all, and have only a vague notion about IPSEC, but you get the point.)
You'll have no way of customizing the error messages to be clearer or more specific.
Once users of your API start using FormatMessage, you'll be forever tied to the set of error codes and messages that Microsoft provides.
If you already know that Raymond Chen would slap you, why are you even considering this? ;)
i think i may have found my own answer, by actually reading the documentation:
System Error Codes
The System Error Codes are very broad. ... Because these codes are defined in WinError.h for anyone to use, sometimes the codes are returned by non-system software.
Although only error codes are described as such (i.e. positive numbers from 0-15999).
But the same spirit starts to apply to COM error codes:
E_NOTIMPL: 0x80004001 Not implemented
E_INVALIDARG: 0x80070057 One or more arguments are invalid
E_FAIL: 0x80004005 Unspecified error (the dreaded "80004005 unspecified error" error)
In fact it's Delphi's safecall mapping that converts any generic exception into
E_UNEXPECTED 0x8000FFFF Catastrophic failure
So it's quite expected to use Microsoft's HRESULT codes.
You also have the HRESULTs that are HRESULT formulations of the standard Win32 error codes, e.g. E_HANDLE is a wrapper around ERROR_INVALID_HANDLE (6):
E_HANDLE = 0x80070006; //Invalid handle
Which is done using the HRESULT_FROM_WIN32 macro.
Although it might be morally questionable to use Microsoft HRESULTs outside the generic facilities:
FACILITY_NULL For broadly applicable common status codes such as S_OK.
FACILITY_RPC For status codes returned from remote procedure calls.
FACILITY_DISPATCH For late-binding IDispatch interface errors.
FACILITY_ITF For most status codes returned from interface methods. The actual meaning of the error is defined by the interface. That is, two HRESULTs with exactly the same 32-bit value returned from two different interfaces might have different meanings.
FACILITY_WIN32 Used to provide a means of handling error codes from functions in the Windows API as an HRESULT.

glGenBuffers is NULL giving a 0x0000000 access violation when using glew

I have Visual Studio C++ Express and an NVIDIA GeForce 7900 GS. I'm using glew to get at the OpenGL extensions. Calling glGenBuffers crashes as it's a NULL pointer, though. I have an OpenGL context before I make the call (wglGetCurrentContext() != NULL). I'm calling glewInit() before the call. glewGetString( GLEW_VERSION ) is returning GLEW_VERSION_1_5. What am I doing wrong? Is the card too old? Is it the driver?
Remember to make a glewInit() call in your code so that you get valid pointers to GL functions.
Hope it helps.
Without seeing your code it would be difficult to tell, but what you are attempting to do seems like it could be helped a lot by using GLee. It is designed to load all current extensions, and you have the ability to check what is supported, e.g.:
#include <gl\GLee.h> // (no need to link to gl.h)
...
if (GLEE_ARB_multitexture) //is multitexture support available?
{
    glMultiTexCoord2fARB(...); //safe to use multitexture
}
else
{
    //fallback
}
The above was shamelessly copy/pasted from the GLee site, but it displays the functionality I'm trying to showcase.
You need to call glewInit() after you have a valid context. In the world of GLFW, that would be after you've called glfwMakeContextCurrent(myWindow);
I have actually run into this problem with GLEW. For me, it was nullifying the function pointer for glGenerateMipmap. I fixed it by simply restoring the pointer to the appropriate function. This is my example in Linux:
glGenerateMipmap = (void(*)(GLenum))
glXGetProcAddressARB((GLubyte*)"glGenerateMipmap");
There is a WGL equivalent for glXGetProcAddress; I just don't remember the name off the top of my head. Try manually restoring the functions using this method. If you come across many functions that are null, something is definitely wrong in your setup process. The only other functions I recall having to restore were glGenVertexArrays, glBindVertexArray, and glDeleteVertexArrays. If your glGenBuffers is null, odds are that glBindBuffer and glDeleteBuffers are null as well. :(
Test if the desired extension is actually supported by checking the string returned by glGetString(GL_EXTENSIONS); if it's not there you know what's causing your problems.

Should I make sure arguments aren't null before using them in a function?

The title may not really explain what I'm trying to get at; I couldn't think of a better way to describe it.
I was wondering if it is good practice to check the arguments that a function accepts for null or empty values before using them. I have this function, which just wraps some hash creation, like so:
Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
    If (FilePath Is Nothing) Then
        Throw New ArgumentNullException("FilePath")
    End If

    Dim _sha As New Security.Cryptography.MD5CryptoServiceProvider
    Dim _Hash = Convert.ToBase64String(_sha.ComputeHash(New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)))
    Return _Hash
End Function
As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing.
I'm wondering: is this good practice, or should I just let it get to the actual hasher and let that throw the exception because it is null?
Thanks.
In general, I'd suggest it's good practice to validate all of the arguments to public functions/methods before using them, and fail early rather than after executing half of the function. In this case, you're right to throw the exception.
Depending on what your method is doing, failing early could be important. If your method was altering instance data on your class, you don't want it to alter half of the data, then encounter the null and throw an exception, as your object's data might then be in an intermediate and possibly invalid state.
If you're using an OO language then I'd suggest it's essential to validate the arguments to public methods, but less important with private and protected methods. My rationale here is that you don't know what the inputs to a public method will be - any other code could create an instance of your class, call its public methods, and pass in unexpected/invalid data. Private methods, however, are called from inside the class, and the class should already have validated any data passed around internally.
One of my favourite techniques in C++ was to DEBUG_ASSERT on NULL pointers. This was drilled into me by senior programmers (along with const correctness) and is one of the things I was most strict on during code reviews. We never dereferenced a pointer without first asserting it wasn't null.
A debug assert is only active for debug targets (it gets stripped in release) so you don't have the extra overhead in production to test for thousands of if's. Generally it would either throw an exception or trigger a hardware breakpoint. We even had systems that would throw up a debug console with the file/line info and an option to ignore the assert (once or indefinitely for the session). That was such a great debug and QA tool (we'd get screenshots with the assert on the testers screen and information on whether the program continued if ignored).
I suggest asserting all invariants in your code including unexpected nulls. If performance of the if's becomes a concern find a way to conditionally compile and keep them active in debug targets. Like source control, this is a technique that has saved my ass more often than it has caused me grief (the most important litmus test of any development technique).
Yes, it's good practice to validate all arguments at the beginning of a method and throw appropriate exceptions like ArgumentException, ArgumentNullException, or ArgumentOutOfRangeException.
If the method is private such that only you the programmer could pass invalid arguments, then you may choose to assert each argument is valid (Debug.Assert) instead of throw.
If NULL is an unacceptable input, throw an exception. Do it yourself, like you did in your sample, so that the message is helpful.
Another method of handling NULL inputs is just to respond with a NULL in turn. It depends on the type of function -- in the example above I would keep the exception.
If it's for an externally facing API then I would say you want to check every parameter, as the input cannot be trusted.
However, if it is only going to be used internally then the input should be able to be trusted and you can save yourself a bunch of code that's not adding value to the software.
You should check all arguments against the set of assumptions that you make in that function about their values.
As in your example, if a null argument to your function doesn't make any sense, and you're assuming that anyone using your function will know this, then being passed a null argument indicates some sort of error, and some sort of action should be taken (e.g. throwing an exception). And if you use asserts (as James Fassett got in and said before me ;-) ) they cost you nothing in a release version. (They cost you almost nothing in a debug version either.)
The same thing applies to any other assumption.
And it's going to be easier to trace the error if you generate it than if you leave it to some standard library routine to throw the exception. You will be able to provide much more useful contextual information.
It's outside the bounds of this question, but you do need to expose the assumptions that your function makes - for example, through the comment header to your function.
According to The Pragmatic Programmer by Andrew Hunt and David Thomas, it is the responsibility of the caller to make sure it gives valid input. So, you must now choose whether you consider a null input to be valid. Unless it makes specific sense to consider null to be a valid input (e.g. it is probably a good idea to consider null to be a legal input if you're testing for equality), I would consider it invalid. That way your program, when it hits incorrect input, will fail sooner. If your program is going to encounter an error condition, you want it to happen as soon as possible. In the event your function does inadvertently get passed a null, you should consider it to be a bug, and react accordingly (i.e. instead of throwing an exception, you should consider making use of an assertion that kills the program, until you are releasing the program).
Classic design by contract: If input is right, output will be right. If input is wrong, there is a bug. (if input is right but output is wrong, there is a bug. That's a gimme.)
I'll add a couple of elaborations (in bold) to the excellent design by contract advice offerred by Brian earlier...
The principles of "design by contract" require that you define what is acceptable for the caller to pass in (the valid domain of input values) and then, for any valid input, what the method/provider will do.
For an internal method, you can define NULLs as outside the domain of valid input parameters. In this case, you would immediately assert that the input parameter value is NOT NULL. The key insight in this contract specification is that any call passing in a NULL value IS A CALLER'S BUG and the error thrown by the assert statement is the proper behavior.
Now, while very well defined and parsimonious, if you're exposing the method to external/public callers, you should ask yourself: is that the contract I/we really want?
Probably not. In a public interface, you'd probably accept the NULL (as technically in the domain of inputs that the method accepts), but then gracefully decline to process it, with a return message. (More work, to meet the naturally more complex customer-facing requirement.)
In either case, what you're after is a protocol that handles all of the cases from both the perspective of the caller and the provider, not lots of scattershot tests that can make it difficult to assess the completeness or lack of completeness of the contractual condition coverage.
Most of the time, letting it just throw the exception is pretty reasonable as long as you are sure the exception won't be ignored.
If you can add something to it, however, it doesn't hurt to wrap the exception in one that is more accurate and rethrow it. Decoding "NullPointerException" is going to take a bit longer than "IllegalArgumentException("FilePath MUST be supplied")" (or whatever).
Lately I've been working on a platform where you have to run an obfuscator before you test. Every stack trace looks like monkeys typing random crap, so I got in the habit of checking my arguments all the time.
I'd love to see a "nullable" or "nonull" modifier on variables and arguments so the compiler can check for you.
If you're writing a public API, do your caller the favor of helping them find their bugs quickly, and check for valid inputs.
If you're writing an API where the caller might be untrusted (or the caller of the caller), check for valid inputs, because it's good security.
If your APIs are only reachable by trusted callers, like "internal" in C#, then don't feel like you have to write all that extra code. It won't be useful to anyone.
