Currently my pylint is configured to emit only errors, which I believe is too restrictive. But when I enable warnings, the number of warnings generated is overwhelmingly large, and many of them are purely syntax related. I am not sure which warnings have the potential to cause a serious bug. Is there a recommended set of warnings one should always enable in pylint? For example, in C++ I always treat the warning about the statement if (a = 1) (instead of if (a == 1)) as an error, and it has saved me many times from writing illogical code.
I have recently joined a team that has a very peculiar coding guideline:
Whenever they have an if block not followed by an "else", they put a semicolon after the closing brace. The rationale is that it signals to the reader that if there is an "else" below it, it belongs to the outer-level "if" (in case the indentation is wrong). A small example:
if (condition1)
{
    // do something
    if (condition2)
    {
        // do something else
    };
}
else
{
    // do something as a negative response to the first if
}
I have not seen this before, so my question is: other than being an eyesore, is there any performance penalty for these empty statements at the end of a block, or are they simply ignored by the compiler? This is not a single or rare occurrence; I am seeing these empty statements all over the file I am supposed to modify, which is one among many similarly coded files...
No, there is no runtime performance penalty for empty statements in C/C++ assuming optimizations are enabled or a mainstream compiler is used.
Basic optimizations are enough to remove any runtime overhead from the generated program. Mainstream compilers like Clang, GCC and MSVC discard such useless statements very early, in the front-end part of compilation. For example, Clang generates a null statement while building the Abstract Syntax Tree (AST), but it does not emit any Intermediate Representation (IR) instructions for it; the IR is what is later optimized and turned into assembly. Note that this does introduce some (small) overhead during compilation, though, since the compiler generates useless temporary data that must be stored and processed.
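If you want to convince yourself on your own toolchain, here is a minimal sketch (the function names are invented for illustration). Compiling it with gcc -S, even at -O0, produces the same body for both functions; the stray semicolons leave no trace in the assembly:

#include <stdio.h>

/* Same logic twice; one version carries the stray semicolons. */
static int with_empty_statements(int a, int b)
{
    if (a > 0)
    {
        b += a;
    };          /* null statement: parsed, then discarded */
    ;;          /* more null statements, equally discarded */
    return b;
}

static int without_empty_statements(int a, int b)
{
    if (a > 0)
    {
        b += a;
    }
    return b;
}

int main(void)
{
    printf("%d %d\n", with_empty_statements(3, 4),
                      without_empty_statements(3, 4));
    return 0;
}

Diffing the assembly of the two functions is a quick way to confirm that the semicolons never survive the front end.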
What's the simplest way to syntax-check my VHDL in Vivado without running through a full synthesis?
Sometimes I code many inter-related modules at once, and would like to quickly find naming errors, missing semi-colons, port omissions, etc. The advice I've read is to run synthesis, but that takes longer than I need for just a syntax check. I've observed that syntax errors will usually cause synthesis to abort within the first minute or so, so my workaround is to run synthesis and abort it manually after about a minute.
In the Vivado Tcl Console window, the check_syntax command performs a fast syntax check and catches typos, missing semicolons, etc.
Vivado offers an elaboration step before synthesis. This is a lightweight version of synthesis: it just reads all sources and creates a design model based on the language, without optimizations or transformations.
A pure per-file syntax check is not enough in many cases. You also want to know whether certain identifiers exist and whether types match. For that, elaboration is needed.
(If you have never heard of that step: VHDL compilation has two steps, analysis and elaboration. Think of elaboration as roughly what linking is in ANSI C.)
Is there a way to retrieve more detailed error information when OpenGL has flagged an error? I know there isn't in core OpenGL, but is there perhaps some common extension or platform- or driver-dependent way or anything at all?
My basic problem is that I have a game (written in Java with JOGL), and when people have trouble with it, which they do on certain hardware/software configurations, it can be quite hard to trace down where the root of the problem lies. For performance reasons, I can't call glGetError after each command but only do so at a few points in the program, so it's hard even to find which command flagged the error to begin with. Even if I could, the extremely general error codes that OpenGL has don't tell me much about what happened (the man pages even describe how the various error codes are reused for quite a few different actual error conditions).
It would be tremendously helpful if there were a way to find out which OpenGL command actually flagged the error, and also more details about the error that was flagged (like, if I get GL_INVALID_VALUE, which value of which argument was invalid, and why?).
It seems a bit strange that drivers wouldn't provide this information, even in a completely custom way, but search as I have, I haven't found any way to get at it. If they really don't, is there any good reason why that is so?
Actually, there is a feature in core OpenGL that will give you detailed debug information, but you are going to have to set your minimum version requirement pretty high to have it as a core feature.
Nevertheless, see this article -- even though it only became core in OpenGL 4.3, it existed in extension form for quite some time, and it does not require any special hardware feature. For the most part, all you really need is a recent driver from NV or AMD.
I have an example of how to use this extension in an answer I wrote a while back, complete with a few utility functions to make the output easier to read. It is written in C, so I do not know how helpful it will be, but you might find something useful.
Here is the sort of output you can expect from this extension (AMD Catalyst):
OpenGL Error:
=============
Object ID: 102
Severity: Medium
Type: Performance
Source: API
Message: glDrawElements uses element index type 'GL_UNSIGNED_BYTE' that is not
optimal for the current hardware configuration; consider using
'GL_UNSIGNED_SHORT' instead.
Not only will it give you error information, but it will even give you things like performance warnings for doing something silly like using 8-bit vertex indices (which desktop GPUs do not like).
To answer another of your questions: if you set the debug output to synchronous and install a breakpoint in your debug callback, you can easily make any debugger break on an OpenGL error. If you examine the call stack, you should be able to quickly identify exactly which API call generated most errors.
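For reference, here is a minimal sketch of that setup in C (it assumes an OpenGL 4.3 context has already been created and that GLEW is used to load function pointers; both of those choices, and the callback body, are illustrative):

#include <stdio.h>
#include <GL/glew.h> /* assumption: GLEW as the function loader */

/* Invoked by the driver for every debug message. */
static void GLAPIENTRY debug_cb(GLenum source, GLenum type, GLuint id,
                                GLenum severity, GLsizei length,
                                const GLchar *message, const void *userParam)
{
    (void)source; (void)id; (void)length; (void)userParam;
    fprintf(stderr, "GL debug (type 0x%x, severity 0x%x): %s\n",
            type, severity, message);
    /* Set a breakpoint here: with synchronous output, the offending
       GL call is still on the call stack. */
}

void install_gl_debug_output(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* message fires inside the bad call */
    glDebugMessageCallback(debug_cb, NULL);
}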
Here are some suggestions.
According to the man pages, glGetError returns the value of the error flag and then resets it to GL_NO_ERROR. I would use this property to track down your bug: if nothing else, you can move where you call it and do a binary search to find where the error occurs.
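One way to mechanize that binary search is a small wrapper macro that tags each suspect call with its file and line. Here is a rough sketch in C (the macro name GL_CHECK is invented, and GLEW is assumed as the loader):

#include <stdio.h>
#include <GL/glew.h> /* assumption: GLEW as the function loader */

/* Wrap suspect GL calls; reports file/line whenever the error flag is set. */
#define GL_CHECK(call)                                          \
    do {                                                        \
        call;                                                   \
        GLenum err_ = glGetError();                             \
        if (err_ != GL_NO_ERROR)                                \
            fprintf(stderr, "%s:%d: %s -> GL error 0x%04x\n",   \
                    __FILE__, __LINE__, #call, err_);           \
    } while (0)

/* Usage: GL_CHECK(glBindTexture(GL_TEXTURE_2D, texture)); */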
I doubt calling glGetError will give you a performance hit. All it does is read back an error flag.
If you don't have the ability to test this on the specific hardware/software configurations those people have, it may be tricky. OpenGL drivers are implemented for specific devices, after all.
glGetError is good for basically saying that the previous line screwed up. That should give you a good starting point: you can look up in the man pages why that function throws the error, rather than trying to figure it out from the error's enum name alone.
There are other, more specific status functions to call, such as glGetProgramiv and glCheckFramebufferStatus, that you may want to check, as glGetError doesn't catch every type of error. I.e., just because it reads clean doesn't mean another error didn't happen.
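As a hedged sketch of what those two checks look like in C (the function name check_program_and_fbo and the GLEW include are assumptions):

#include <stdio.h>
#include <GL/glew.h> /* assumption: GLEW as the function loader */

/* Checks that glGetError alone would miss: link status and FBO completeness. */
static void check_program_and_fbo(GLuint program, GLuint fbo)
{
    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (linked != GL_TRUE) {
        char log[1024];
        glGetProgramInfoLog(program, sizeof log, NULL, log);
        fprintf(stderr, "program link failed: %s\n", log);
    }

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        fprintf(stderr, "framebuffer incomplete: 0x%04x\n", status);
}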
I am using Intel's FORTRAN compiler to compile a numerical library. The provided test case errors out within libc.so.6. When I attach Intel's debugger (IDB), the application runs through successfully. How do I debug a bug that the debugger prevents? Note that the same bug arose with gfortran.
I am working within OpenSUSE 11.2 x64.
The error is:
forrtl: severe (408): fort: (3): Subscript #1 of the array B has value -534829264 which is less than the lower bound of 1
The error message is pretty clear to me: you are attempting to access a non-existent element of an array. I suspect that the value -534829264 is either junk from an uninitialised variable being used as the array index, or the result of an integer arithmetic overflow. Either way, you should switch on the compilation flag that forces array bounds checking and run some tests. I think the flag for the Intel compiler is -CB, but check the documentation.
As to why the program apparently runs successfully in the debugger, I cannot help much; perhaps the debugger imposes some default values on variables that the run-time system itself doesn't, or some other factor entirely is responsible.
EDIT:
Doesn't the run-time system tell you what line of code causes the problem? Some more things to try to diagnose it: use the compiler to warn you of
use of variables before they are initialised;
integer arithmetic overflow (though I'm not sure the compiler can spot this);
any forced conversions from one type to another, or from one kind to another within the same type.
Also, check that the default integer size is what you expect it to be and, more important, what the rest of the code expects it to be.
Not an expert in the area, but a couple of things to consider:
1) Is the debugger initialising the variable used as the index to zero, while the non-debug run does not, so the variable starts with a "junk" value? (I had an old version of Pascal that used to do that.)
2) Are you using threading? If so, is the debugger changing the order of execution so that some prep thread completes in time?
The code in this question made me think:
assert(value > 0); // Precondition
if (value > 0)
{
    // Do it
}
I never write the if-statement. Asserting is enough/all you can do.
"Crash early, crash often"
Code Complete states:
The assert-statement makes the application Correct
The if-test makes the application Robust
I don't think you've made an application more robust by correcting invalid input values, or skipping code:
assert(value >= 0);  // Precondition
assert(value <= 90); // Precondition
if (value < 0)       // Just in case
    value = 0;
if (value > 90)      // Just in case
    value = 90;
// Do it
These corrections are based on assumptions you made about the outside world.
Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function.
To paraphrase Code Complete:
"Real-world programs become too messy when we don't rely solely on assertions."
Question: am I wrong, stubborn, stupid, too non-defensive...?
The problem with trusting just asserts is that they may be turned off in a production environment. To quote the Wikipedia article on assertions:
Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are often enabled during development and disabled during final testing and on release to the customer. Not checking assertions avoids the cost of evaluating the assertions while, assuming the assertions are free of side effects, still producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can mean that a program that would have aborted will continue to run. This is sometimes preferable.
So if the correctness of your code relies on the asserts being there, you may run into serious problems. Sure, if the code worked during testing it should work in production... Now enter the second guy who works on the code and is just going to fix a small problem...
Use assertions for validating input you control: private methods and such.
Use if statements for validating input you don't control: public interfaces designed for consumption by the user, user input testing etc.
Test your application with assertions built in. Then deploy without the assertions.
In some cases, asserts are disabled when building for release. You may not have control over this (otherwise, you could build with asserts on), so it might be a good idea to do it like this.
The problem with "correcting" the input values is that the caller will not get what they expect, and this can lead to problems or even crashes in wholly different parts of the program, making debugging a nightmare.
I usually throw an exception in the if-statement to take over the role of the assert in case asserts are disabled:
assert(value > 0);
if (value <= 0) throw new ArgumentOutOfRangeException("value");
// do stuff
I would disagree with this statement:
Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function.
The caller might think he knows that the input value is correct, but only the method's author knows how it is supposed to work. The programmer's best goal is to make the client fall into the "pit of success". You should decide which behavior is more appropriate in a given case: in some cases incorrect input values can be forgivable; in others you should throw an exception or return an error.
As for asserts, I'd repeat what other commenters have said: an assert is a debug-time check for the code's author, not for the code's clients.
Don't forget that most languages allow you to turn off assertions... Personally, if I were prepared to write if tests to protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place.
If, on the other hand, you don't write logic to handle all cases (possibly because it's not sensible to try to continue with invalid input), then I would use the assertion statement and go for the "fail early" approach.
If I remember correctly from CS class:
Preconditions define the conditions under which the output of your function is defined. If you make your function handle error conditions, it is defined for those conditions too, and you don't need the assert statement.
So I agree. Usually you don't need both.
As Rik commented, this can cause problems if you remove asserts in released code. Usually I don't do that, except in performance-critical places.
I should have stated that I was aware that asserts (here) disappear in production code.
If the if-statement actually corrects invalid input data in production code, it means the assert never fired during testing of the debug build, which means you wrote code that you never executed.
For me it's an OR situation:
(quote Andrew) "protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place." -> write an if-test.
(quote aku) "incorrect input values can be forgivable" -> write an assert.
I can't stand both...
For internal functions, ones that only you will use, use asserts only. The asserts will help catch bugs during your testing, but won't hamper performance in production.
Check inputs that originate externally with if-conditions. By externally, I mean anywhere outside the code that you/your team control and test.
Optionally, you can have both. This would be for external-facing functions where integration testing is going to be done before production.
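A minimal C sketch of that split (the function names and the 0..90 range are invented for illustration; note that C's assert is compiled out when NDEBUG is defined, which mirrors the "no asserts in production" behavior discussed above):

#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Internal helper: our own code guarantees 0 <= value <= 90,
   so an assert is enough to catch our bugs during testing. */
static double angle_factor(int value)
{
    assert(value >= 0 && value <= 90); /* compiled out when NDEBUG is defined */
    return value / 90.0;
}

/* Public entry point: input comes from outside our control,
   so validate with a real runtime check and report the error. */
int set_angle(int value)
{
    if (value < 0 || value > 90)
        return EINVAL; /* reject instead of silently "correcting" */
    printf("factor = %f\n", angle_factor(value));
    return 0;
}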
A problem with assertions is that they can (and usually will) be compiled out of the code, so you need to add both walls in case one gets thrown away by the compiler.