How to define infinite numbers in Visual Studio 2015? - c++11

When I used #define INF (1.0/0.0) to define an infinite number, I got error C2124, which means a constant expression has a zero denominator. The advice is simply not to divide by zero, but I really do need to define an infinite number. Could someone suggest a simple way to do this?

Not sure if it works in VS, especially such an old version, but the standard library has a special provision for that:
double never = std::numeric_limits<double>::infinity();

The standard C++ way is to use std::numeric_limits<double>::infinity() from the <limits> header.
This is also only meaningful if std::numeric_limits<double>::has_infinity is true (which it usually will be).

Related

When I use Conditional Compilation Arguments to Exclude Code, why doesn't VB6 EXE file size change?

Basically, when declaring Windows API functions in my VB6 code, they come with many constants that need to be declared or used with the function. In practice, most of these constants go unused — you typically end up using only one or two of them in your API calls — so I use Conditional Compilation Arguments to exclude these (and other things) with something like this:
IncludeUnused = 0 : Testing = 1
(this is how I set two conditional compilation arguments; they are of Boolean type by default).
So, many unused things are excluded like this:
#If IncludeUnused Then
' Some constant declarations and API declarations go here, sometimes functions
' and function calls go here as well, so it's not just declarations and constants
#End If
I also use a similar wrapper using the Testing Boolean declared in the Conditional Compilation Argument input field in the "Make" tab of the VB6 Properties window. The Testing Boolean is used to display message boxes and the like when I am in testing mode, and of course these message boxes are removed (not displayed) if I have Testing set to 0 (it is obviously 1 when I am testing).
The problem is, I tried setting IncludeUnused and Testing to 0 and 1 and vice versa, a total of four (4) combinations, and no matter which combination I set, the output EXE file size for my VB6 EXE does not change! It is always 49,152 bytes when compiled to Native Code, whether using Fast Code or Small Code.
Additionally, if I compile to p-code under the four (4) combinations of Testing and IncludeUnused, I always end up with a file size of 32,768 no matter what.
This is driving me crazy, since it leads me to believe that no change is actually occurring, even though it is. Why is it that when segments of code are excluded from compilation, the file size stays the same? What am I missing or doing wrong, or what have I miscalculated?
I have considered the option that perhaps VB6 automatically does not compile code which is not used into the final output EXE, but I have read from a few sources that this is not true, in that, if it's included, it is compiled (correct me if I am wrong), and if this is right, then there is no need to use the IncludeUnused Boolean to remove unused code...?
If anyone can shed some light on these thoughts, I'd greatly appreciate it.
It could well be that the size difference is very small and that the exe size is padded to the next 512 or 1024 byte alignment. Try compressing the exe's with zip and see if the zip-file sizes differ.
You misunderstand what a compiler does. The output of the VB6 compiler is code. Constants are merely place holders for values, they are not code. The compiler adds them to its symbol table. And when it later encounters a statement in your code that uses the constant then it replaces the constant by its value. That statement produces the exact same code whether you use a constant or hard-code the value in the statement.
So this automatically implies that if you never actually use the constant anywhere, then there is no difference at all in the generated code. All that you accomplished by using the #If is to keep the compiler's symbol table smaller. That makes very little sense to do; the gain in compilation speed is not measurable. Symbol tables are implemented as hash tables, which have O(1) amortized complexity.
You use constants only to make your code more readable. And to make it easy to change a constant value if the need ever arises. By using #If, you actually made your code less readable.
You can't test runtime data in conditional compilation directives.
These directives use expressions made up of literal values, operators, and CC constants. One way to set constant values is:
#Const IncludeUnused = 0
#Const Testing = 1
You can also define them via Project Properties for IDE testing. Go to the Make tab in that dialog and click the Help button for details.
Perhaps this is where you are setting the values? If so, consider this just additional info for later readers rather than an answer.
See #If...Then...#Else Directive
VB6 executable sizes are padded to 4KB blocks, so if the code difference is small it will make no difference to the executable.

GCC hidden/little-known features

This is my attempt to start a collection of special GCC features that one does not usually encounter. This comes after #jlebedev mentioned the "Effective C++" option for g++ in another question:
-Weffc++
This option warns about C++ code which breaks some of the programming guidelines given in the books "Effective C++" and "More Effective C++" by Scott Meyers. For example, a warning will be given if a class which uses dynamically allocated memory does not define a copy constructor and an assignment operator. Note that the standard library header files do not follow these guidelines, so you may wish to use this option as an occasional test for possible problems in your own code rather than compiling with it all the time.
What other cool features are there?
From time to time I go through the current GCC/G++ command line parameter documentation and update my compiler script to be even more paranoid about any kind of coding error. Here it is if you are interested.
Unfortunately I didn't document them so I forgot most, but -pedantic, -Wall, -Wextra, -Weffc++, -Wshadow, -Wnon-virtual-dtor, -Wold-style-cast, -Woverloaded-virtual, and a few others are always useful, warning me of potentially dangerous situations. I like this aspect of customizability, it forces me to write clean, correct code. It served me well.
However they are not without headaches, especially -Weffc++. Just a few examples:
It requires me to provide a custom copy constructor and assignment operator if there are pointer members in my class, which are useless since I use garbage collection. So I need to declare empty private versions of them.
My NonInstantiable class (which prevents instantiation of any subclass) had to implement a dummy private friend class so G++ didn't whine about "only private constructors and no friends"
My Final<T> class (which prevents subclassing of T if T derived from it virtually) had to wrap T in a private wrapper class to declare it as friend, since the standard flat out forbids befriending a template parameter.
G++ recognizes functions that never return a value but throw an exception instead, and whines about them not being declared with the noreturn attribute. Hiding the throw behind always-true conditions didn't work; G++ was too clever and recognized them. It took me a while to come up with declaring a variable volatile and comparing it against its own value so I could throw that exception unmolested.
Floating point comparison warnings. Oh god. I have to work around them by writing x <= y and x >= y instead of x == y where it is acceptable.
Shadowing virtuals. Okay, this is clearly useful to prevent stupid shadowing/overloading problems in subclasses but still annoying.
No previous declaration for functions. Kind of lost its importance as soon as I started copy-pasting the function declaration right above the definition.
It might sound a bit masochist, but as a whole, these are very cool features that increased my understanding of C++ and general programming.
What other cool features does G++ have? Well, it's free, open, it's one of the most widely used and modern compilers, consistently outperforms its competitors, can eat almost anything people throw at it, is available on virtually every platform, customizable to hell, continuously improved, and has a wide community - what's not to like?
A function that is declared to return a value (for example an int) will return an indeterminate (garbage) value if a code path is followed that ends the function without a return statement — formally the behavior is undefined. Not paying attention to this can result in exceptions and out-of-range memory reads or writes.
For example, if a function is used to obtain an index into an array and the faulty code path is taken (the one that doesn't end with a return statement), then a garbage value is returned, which might be too big as an index into the array, resulting in all sorts of headaches as you wrongly mess up the stack or heap.

STL on custom OS - std::list works, but std::vector doesn't

I'm just playing around with a grub-bootable C++ kernel in visual studio 2010.
I've gotten to the point where I have new and delete written and things such as dynamically allocated arrays work. I can use STL lists, for example. I can even sort them, after I wrote a memcpy routine. The problem is when I use the std::vector type. Simply constructing the vector sends the kernel off into la la land.
Obviously I'm missing a function implementation of some kind, but I looked through STL searching for it and came up empty-handed. It fails at the push_back:
vector<int> v;
v.push_back(1);
and disappears into the ether.
Any guesses as to what I'm missing?
Edit: yes, it's a vector of int. Sorry for the confusion. Also, it's not the constructor it fails on; it's the call to push_back.
Stab in the dark: do you have new[] and delete[] implemented? A list will create one item at a time with new while a vector will likely allocate larger blocks of memory with new[].
As per our discussion above, creating a
std::vector<mySimpleStruct> v;
instead of a
std::vector<int> v;
appears to work correctly. This must mean the problem is with something being done in the specialization of some functions for std::vector in your standard template library. I'm assuming you're familiar with template specialization already, but in case you're not:
http://www.parashift.com/c++-faq-lite/templates.html#faq-35.7
Also, once you've figured out where the real problem is, could you come back and post the answer here? You have me curious about where the real problem is now, plus the answer may be helpful to others trying to build their own OS kernels.
Do you use a custom allocator or a default one?
You might try using a custom one just to see what allocations vector performs that might break your implementation of the memory manager (which is probably what actually fails).
And yes, please post back once you solve it - it helps all other OSdevers out there.

What is the priority on NoStepInto entries in VS2008?

I've been looking into the NoStepInto feature of Visual Studio.
Andy Pennell's post How to Not Step Into Functions using the Visual C++ Debugger has been extremely helpful.
But as far as I can tell, in VS2008 the string name of the rule no longer has to be an integer, and no longer has any effect on the priority of the rule.
I have played around with the registry a bit and it seems to use the best match or maximal match (not sure what the correct expression is).
So if I have the following two rules
boost boost\:\:.*=NoStepInto
boost::shared_ptr boost\:\:shared_ptr.*=StepInto
it does step into shared pointers, which I assume is because the second rule is a more exact match.
Has anyone come across any information on anywhere confirming or refuting this? I can't seem to find any.
Thank you!
I just tested this and things seem to work the way I expect them to:
20 boost\:\:.*=NoStepInto
30 boost\:\:shared_ptr.*=StepInto
Does not step me into any boost namespace functions, except for shared_ptr's.
Changing the priorities around to
10 boost\:\:shared_ptr.*=StepInto
20 boost\:\:.*=NoStepInto
Does not step me into any boost namespace functions at all.

Defining constants for 0 and 1

I was wondering whether others find it redundant to do something like this...
const double RESET_TIME = 0.0;
timeSinceWhatever = RESET_TIME;
rather than just doing
timeSinceWhatever = 0.0;
Do you find the first example to aid readability? The argument comes down to magic numbers, and while 0 and 1 are considered "exceptions" to the rule, I've always thought those exceptions apply only to initializing variables or to index access. When a number is meaningful, it should have a name attached to its meaning.
I'm wondering whether this assumption is valid, or if it's just redundant to give 0 a named constant.
Well, in your particular example it doesn't make much sense to use a constant.
But, for example, if there was even a small chance that RESET_TIME will change in the future (and become, let's say, 1) then you should definitely use a constant.
You should also use a constant if your intent is not obvious from the number. But in your particular example I think that timeSinceWhatever = 0; is more clear than timeSinceWhatever = RESET_TIME.
Typically, one benefit of defining a constant rather than just using a literal is if the value ever needs to change in several places at once.
From your own example, what if RESET_TIME needed to become -1.5 due to some obscure new business rule? You could change it in one place, the definition of the constant, or you could change it everywhere you had used 0.0 as a floating-point literal.
In short, defining constants, in general, aids primarily in maintainability.
If you want to be more specific and let others know why you're doing what you're doing, you might instead create a function (if your language permits free-standing functions) such as
timeSinceWhenever = ResetStopWatch();
or better yet, when dealing with units, either find a library with built-in unit types or create your own. I wouldn't suggest rolling your own for time, as there is an abundance of such libraries. I've seen this before in code, if it helps:
Temperature groundTemp = Temperature.AbsoluteZero();
which is a nice way of indicating what is going on.
I would define it only if there was ever a chance that RESET_TIME could be something different than 0.0, that way you can make one change and update all references. Otherwise 0.0 is the better choice to my eye just so you don't have to trace back and see what RESET_TIME was defined to.
Constants are preferable, as they allow a value to be changed in successive versions of the code. It is not always possible to use constants, especially in an OO language, where you may not be able to define a constant of anything but a basic datatype. Generally, though, a programming language always has some way to define non-modifiable objects/datatypes.
Well, suppose that RESET_TIME is used often in your code and you want to change the value; it is better to do it once than in every statement.
Better than a constant: make it a configuration variable and set it to a default value. But yes, RESET_TIME is more readable, provided it's used more than once; otherwise just use a code comment.
That code is OK. A const variable is an unchangeable variable, so whenever you need to reset something, you can always use your constant to do it.