I keep getting errors similar to these:
pitstop.cpp:36:23: error: indirection requires pointer operand ('double' invalid)
cost = UNLEADED * gallons;
                ^ ~~~~~~~
pitstop.cpp:40:14: error: expected expression
cost = SUPER * gallons;
             ^
#include <iostream>
#include <iomanip>
using namespace std;
#define UNLEADED 3.45;
#define SUPER {UNLEADED + 0.10};
#define PREMIUM {SUPER + 0.10};
/*
Author: Zach Stow
Date:
Homework
Objective:
*/
double cost, gallons;
string gasType, finish, stop;
int main()
{
    for(;;)
    {
        cout << "Hi, welcome to Pitstop.\n";
        cout << "Enter the type of gas you need:";
        cin >> gasType;
        cout << endl;

        cout << "Enter the amount of gallons you need:";
        cin >> gallons;
        cout << endl;

        if(gasType == "finish" || gasType == "stop") break;
        else if(gasType == "UNLEADED")
        {
            cost = UNLEADED * gallons;
        }
        else if(gasType == "SUPER")
        {
            cost = SUPER * gallons;
        }
        else if(gasType == "PREMIUM")
        {
            cost = PREMIUM * gallons;
        }
    }

    cout << "You need to pay:$" << cost << endl;
    return(0);
}
I'm not a C++ expert, but I'm sure that to define a constant you just need the #define directive followed by the symbol and the value you want to assign to it (even if the value is an expression, and even if that expression references another constant). The braces and the trailing semicolons are excessive:
// [...]
#define UNLEADED 3.45
#define SUPER (UNLEADED + 0.10)
#define PREMIUM (SUPER + 0.10)
//[...]
With these corrections, it compiled at the first attempt.
The cause of the error is the semicolon at the end of each #define directive. You have also used the incorrect type of brackets; try this instead:
#define UNLEADED 3.45
#define SUPER (UNLEADED + 0.10)
#define PREMIUM (SUPER + 0.10)
Note that when you use the #define directive, whatever follows the macro name is substituted into your code verbatim. In this case, after the preprocessor ran, your code looked like this:
else if(gasType == "UNLEADED")
{
    cost = 3.45; * gallons;
}
else if(gasType == "SUPER")
{
    cost = {3.45; + 0.10}; * gallons;
}
else if(gasType == "PREMIUM")
{
    cost = {{3.45; + 0.10}; + 0.10}; * gallons;
}
The reason you were getting the indirection requires pointer operand error is that the compiler tried to interpret this statement:
* gallons;
Because the * operator has only a single operand here, it is interpreted as a pointer dereference; luckily for you, the gallons variable is not a pointer type. If gallons had been declared as a pointer, e.g. double cost, *gallons;, and the cin wasn't there, the code would compile but not do what you expect, probably throwing a segfault.
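A purely illustrative pair of declarations (hypothetical, not from the question) showing why * gallons; only parses when the operand is a pointer:
double  gallons  = 10.0;
double* pGallons = &gallons;

*pGallons;   // valid (if useless) statement: unary * applied to a pointer
// *gallons; // error: indirection requires pointer operand ('double' invalid)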
Macros defined with #define can be very powerful and very dangerous. There is usually a better way to achieve things in C++. In this case UNLEADED, SUPER and PREMIUM could be declared with const double type, i.e.
const double unleaded = 3.45;
const double super = unleaded + 0.10;
const double premium = super + 0.10;
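For completeness, a minimal sketch (my rewrite, not code from the question) of the pricing logic with the constants declared this way; the loop is omitted and <string> is included, which std::string formally requires:
#include <iostream>
#include <string>
using namespace std;

const double unleaded = 3.45;
const double super    = unleaded + 0.10;
const double premium  = super + 0.10;

int main()
{
    double cost = 0.0, gallons = 0.0;
    string gasType;

    cout << "Enter the type of gas you need:";
    cin >> gasType;
    cout << "Enter the amount of gallons you need:";
    cin >> gallons;

    // Same branching as the original, but the constants are ordinary typed values
    if(gasType == "UNLEADED")     cost = unleaded * gallons;
    else if(gasType == "SUPER")   cost = super * gallons;
    else if(gasType == "PREMIUM") cost = premium * gallons;

    cout << "You need to pay:$" << cost << endl;
    return 0;
}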
Related
Given the following code:
#include <iostream>
#include <bitset>
#include <limits>
#include <limits.h>
using namespace std;
constexpr std::size_t maxBits = CHAR_BIT * sizeof(std::size_t);
int main() {
    std::size_t value = 47;
    unsigned int begin = 0;
    unsigned int end = 32;

    //std::size_t allBitsSet(std::numeric_limits<std::size_t>::max());
    std::bitset<maxBits> allBitsSet(std::numeric_limits<std::size_t>::max());

    //std::size_t mask((allBitsSet >> (maxBits - end)) ^ (allBitsSet >> (maxBits - begin)));
    std::bitset<maxBits> mask = (allBitsSet >> (maxBits - end)) ^ (allBitsSet >> (maxBits - begin));

    //std::size_t bitsetValue(value);
    std::bitset<maxBits> bitsetValue(value);

    auto maskedValue = bitsetValue & mask;
    auto result = maskedValue >> begin;

    //std::cout << static_cast<std::size_t>(result) << std::endl;
    std::cout << static_cast<std::size_t>(result.to_ulong()) << std::endl;
}
This should print the same value as value, but for some reason the version with std::bitset works just fine while the version with std::size_t does not.
This is strange, because AFAIK std::bitset simply throws an exception when something is wrong, and operations on a bitset should behave the same way as operations on an unsigned integer of the same width. But as we can see, even though the bitset has the same number of bits, it does not behave the same: the std::bitset version works fine, while the std::size_t version does not.
My configuration is:
Intel Core i7 - g++-5.4.0-r3
[expr.shift]/1 ... The behavior [of the shift operator - IT] is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
Emphasis mine. allBitsSet >> (maxBits - begin) (in the non-bitset version) exhibits undefined behavior: with begin == 0, the shift count is maxBits, which equals the width in bits of std::size_t.
On the other hand, the behavior of bitset::operator>> is well-defined: allBitsSet >> (maxBits - begin) produces a bitset with all zero bits.
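If you want to stay with plain std::size_t, one option is to special-case the full-width shift so the shift count is always strictly less than the operand width. A sketch of that idea (the helper names shiftRight and extractBits are my invention, not from the question or answer):
#include <climits>
#include <cstddef>
#include <iostream>
#include <limits>

constexpr std::size_t maxBits = CHAR_BIT * sizeof(std::size_t);

// Shifting a built-in integer by >= its width is undefined behavior,
// so treat a full-width shift as "all bits shifted out".
std::size_t shiftRight(std::size_t v, std::size_t n) {
    return n >= maxBits ? 0 : v >> n;
}

std::size_t extractBits(std::size_t value, std::size_t begin, std::size_t end) {
    const std::size_t allBitsSet = std::numeric_limits<std::size_t>::max();
    const std::size_t mask = shiftRight(allBitsSet, maxBits - end)
                           ^ shiftRight(allBitsSet, maxBits - begin);
    return (value & mask) >> begin; // begin < maxBits is assumed
}

int main() {
    std::cout << extractBits(47, 0, 32) << std::endl; // prints 47
}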
I'm trying to run this code from C++ Primer Plus:
#include <iostream>
using namespace std;
int main() {
    int i = 20, j = 2 * i;
    cout << "i = " << i << endl;
    int cats = 17,240; // No, I don't want the number 17240
    return 0;
}
Why am I seeing the error expected unqualified-id before numeric constant on the line int cats = 17,240;? I need a short explanation. Thanks.
In a declaration the comma separates declarators, so the compiler reads int cats = 17,240; as int (cats = 17), (240);. And int 240; makes no sense, so a compiler diagnostic is issued.
Did you want 17240 cats? If so then drop the comma.
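A quick sketch of both readings (variable names hypothetical):
int cats = 17240;     // one literal: 17240 cats
int dogs = (17, 240); // parenthesized comma operator: dogs == 240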
I want to redefine the bit-shift operator on a 64-bit unsigned integer in C++ so that I can write x << d, where x is a 64-bit integer and d is an integer with |d| < 64, and have it behave as x << d for d > 0 and x >> |d| for d < 0.
The only way I know how to do this is to define a whole new class and overload the << operator, but I think that also means I need to overload all the other operators I need (unless there is a trick I don't know), which seems a bit silly considering I want them to behave exactly as they do for the predefined type. It's just the bit shift that I want to change. At present I have just written a function called 'shift' to do this, which doesn't seem very C++-ish, even though it works fine.
What is the stylistically correct way to do what I need?
Thanks
If you were able to do this, it would be very confusing to other C++ programmers who read your code and see:
int64 x = 92134;
int64 y = x >> 3;
And have it behave differently from their expectations, and from what the C++ standard defines.
The stylistic choice that agrees most with the C++ code I've seen is to continue using your own myshift() function.
int64 y = myshift(x, 3);
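The question doesn't show the asker's existing 'shift' function, so here is a sketch of what myshift could look like (uint64_t and the out-of-range guard are my assumptions):
#include <cstdint>

// Signed shift distance: positive d shifts left, negative d shifts right.
// The question guarantees |d| < 64, but guard anyway, since shifting a
// built-in integer by >= its width is undefined behavior.
uint64_t myshift(uint64_t x, int d) {
    if (d >= 64 || d <= -64) return 0;
    return d >= 0 ? x << d : x >> -d;
}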
I think it's horrible (and I propose it just for fun), but... if you're willing to wrap the number of bits to shift in a struct...
#include <iostream>

struct foo
{ int num; };

long long int operator<< (const long long int & lli, const foo & f)
{
    int d { f.num };

    if ( d < 0 )
        d = -d;

    if ( d >= 64 )
        d = 0;

    return lli << d;
}

int main()
{
    long long int lli { 1 };

    std::cout << (lli << foo{+3}) << std::endl;  // shift +3
    std::cout << (lli << foo{-3}) << std::endl;  // shift +3 (-3 -> +3)
    std::cout << (lli << foo{+90}) << std::endl; // no shift (over 64)
    std::cout << (lli << foo{-90}) << std::endl; // no shift (over 64)

    return 0;
}
It seems that the MSVC compiler treats signed and unsigned overflow differently. When casting a double value that exceeds the maximum int value, the result is the smallest possible int value (always the same). When casting to unsigned int, the cast wraps around as expected (maximum unsigned int value + 1 produces 0, maximum unsigned int + 2 produces 1, ...).
Can someone explain the behaviour of the compiler, or is it a bug?
Tested compilers: MSVC 10 and 14.
#define BOOST_TEST_MODULE Tests
#include <boost/test/unit_test.hpp>
#include <climits>
#include <iostream>
BOOST_AUTO_TEST_CASE(test_overflow_signed) {
    double d_int_max_1 = INT_MAX + 1.; // 2147483647 + 1
    double d_int_max_2 = INT_MAX + 2.; // 2147483647 + 2

    BOOST_CHECK((int)(2147483648.) != (int)(2147483649.)); // succeeds (overflows to -2147483648 and -2147483647)
    BOOST_CHECK((int)(d_int_max_1) != (int)(d_int_max_2)); // fails (both values overflow to -2147483648)

    std::cout << "(int)(2147483648.) == " << (int)(2147483648.) << std::endl; // -2147483648
    std::cout << "(int)(2147483649.) == " << (int)(2147483649.) << std::endl; // -2147483647
    std::cout << "(int)(d_int_max_1) == " << (int)(d_int_max_1) << std::endl; // -2147483648
    std::cout << "(int)(d_int_max_2) == " << (int)(d_int_max_2) << std::endl; // -2147483648
}

BOOST_AUTO_TEST_CASE(test_overflow_unsigned) {
    double d_int_max_1 = UINT_MAX + 1.; // 4294967295 + 1
    double d_int_max_2 = UINT_MAX + 2.; // 4294967295 + 2

    //BOOST_CHECK((unsigned int)(4294967296.) != (unsigned int)(4294967297.)); // compiler fails (!= truncation of constant value)
    BOOST_CHECK((unsigned int)(d_int_max_1) != (unsigned int)(d_int_max_2)); // succeeds (overflows to 0 and 1)

    std::cout << "(unsigned int)(d_int_max_1) == " << (unsigned int)(d_int_max_1) << std::endl; // 0
    std::cout << "(unsigned int)(d_int_max_2) == " << (unsigned int)(d_int_max_2) << std::endl; // 1
}
[conv.fpint]/1 A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
Emphasis mine. Since the behavior is undefined, any outcome whatsoever is correct.
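If you need a defined result for out-of-range doubles, check the range before casting. A sketch of a saturating conversion (my addition, not part of the original answer):
#include <climits>
#include <cmath>

// Clamp before casting: double -> int is undefined when the truncated
// value cannot be represented, so handle out-of-range inputs explicitly.
// INT_MAX and INT_MIN are exactly representable in a 64-bit double.
int toIntSaturating(double d) {
    if (std::isnan(d)) return 0; // pick whatever NaN policy suits you
    if (d >= static_cast<double>(INT_MAX)) return INT_MAX;
    if (d <= static_cast<double>(INT_MIN)) return INT_MIN;
    return static_cast<int>(d); // now guaranteed to fit
}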
From the MSDN:
Ty atomic<Ty>::operator++(int) volatile _NOEXCEPT;
Ty atomic<Ty>::operator++(int) _NOEXCEPT;
Ty atomic<Ty>::operator++() volatile _NOEXCEPT;
Ty atomic<Ty>::operator++() _NOEXCEPT;
The first two operators return the incremented value; the last two operators return the value before the increment.
But the C++11 documentation defines the return values of these operators as:
The value of the atomic variable after the modification. Formally, the result of incrementing/decrementing the value immediately preceding the effects of this function in the modification order of *this.
Why does the MSVC++ compiler use non-standard definitions?
It's a documentation error on MSDN. This test program (LIVE):
#include <atomic>
#include <iostream>
template <typename T>
void foo(T&& t) {
    std::cout << ++t << '\n';
    std::cout << t++ << '\n';
    std::cout << static_cast<int>(t) << '\n';
}

int main()
{
    foo(0);
    foo(std::atomic<int>{0});
}
correctly outputs:
1
1
2
1
1
2
when compiled by VS2013.