Integer or int in Processing?

When I'm creating an integer variable in Processing, should I use int or Integer? They both seem to work the same way. Is it just a matter of preference which one you use?
// The same thing?
int a = 5;
Integer b = 4;
// I prefer Integer because it looks like String:
Integer c = 95;
String d = "Hello!";
// Then again, int looks like char:
int e = 3;
char f = 'a';
I'm thinking it's probably just personal preference, though int seems to be used more?

They have different uses. int is a primitive type while Integer is an object.
The primitive int has a default value of 0, while an Integer defaults to null. A primitive also uses far less memory: an int is a single 32-bit value, while an Integer is a full object and carries the extra overhead that entails.
Stick with int unless you need a null value or some other feature that only an object provides.
For reference:
https://processing.org/reference/int.html
https://processing.org/tutorials/objects/

The int type is a primitive data type. That means you can use it in any place you can use a primitive literal, which you can think of as a typed-out number, like 1, 2, 3, 99, -15, etc.
However, you can't use an int in places you have to use an Object. For example, this code will not compile:
void setup(){
ArrayList<int> list = new ArrayList<int>();
}
This code won't compile, because the generic arguments require a class, and int is a primitive, not a class. So how do we get an ArrayList of ints?
That's where primitive wrapper Objects come into play. They are Objects that wrap a primitive, such as int. That way you can correct the above code:
void setup(){
ArrayList<Integer> list = new ArrayList<Integer>();
}
Other primitive wrapper classes include Float, Boolean, Character, etc.
However, it gets more complicated thanks to auto-boxing and auto-unboxing. Basically, Java (and therefore Processing) will automatically convert between primitive values and their primitive wrapper classes. That's why you can do stuff like this:
void setup(){
int primitive = 7;
Integer wrapper = 7;
println(primitive == wrapper);
}
So, for your purposes, it probably doesn't matter which one you use because Java (and therefore Processing) will automatically convert it for you.
However, using Integer instead of int might create Objects that you don't really need, and more importantly, it might prevent you from using Processing.js mode.
Recommended reading:
http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
http://en.wikipedia.org/wiki/Primitive_wrapper_class
http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html

Related

Why should we initialize data members on declaration (not necessarily on constructor)?

Could anyone explain the reason for this coding recommendation?
Since C++11, prefer to initialize data members at their declaration (not necessarily in the constructor):
class Limit
{
public:
Limit() = default;
private:
int32_t quantity = 0;
double price = 0.0;
};
Someone thinks (correctly) that this way the variable is always initialised. That is a good thing if it is initialised with a meaningful value, and bad if the value is not meaningful. For example, a person's year of birth is a number from, say, 1890 to 2021. Initialising it to 0 isn't useful and only prevents the compiler from warning you.
So do this if you have a value that is always a useful initialisation value. I wouldn’t do it for anything that is likely to be overwritten in a constructor or shortly after.
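For example (a hypothetical Person class, not taken from the guideline below, illustrating when an in-class default is and isn't appropriate):
class Person
{
public:
    explicit Person(int year) : year_of_birth{year} {}
private:
    // No meaningful default exists for a year of birth, so leave the
    // initialization to the constructor instead of a misleading "= 0" here.
    int year_of_birth;
    // A count, by contrast, has an obvious default that every constructor wants.
    int children = 0;
};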
I found this answer in the CppCoreGuidelines, C.48:
C.48: Prefer in-class initializers to member initializers in constructors for constant initializers
Reason
Makes it explicit that the same value is expected to be used in all constructors. Avoids repetition. Avoids maintenance problems. It leads to the shortest and most efficient code.
Example, bad
class X { // BAD
int i;
string s;
int j;
public:
X() :i{666}, s{"qqq"} { } // j is uninitialized
X(int ii) :i{ii} {} // s is "" and j is uninitialized
// ...
};
How would a maintainer know whether j was deliberately uninitialized (probably a bad idea anyway) and whether it was intentional to give s the default value "" in one case and qqq in another (almost certainly a bug)? The problem with j (forgetting to initialize a member) often happens when a new member is added to an existing class.
Example
class X2 {
int i {666};
string s {"qqq"};
int j {0};
public:
X2() = default; // all members are initialized to their defaults
X2(int ii) :i{ii} {} // s and j initialized to their defaults
// ...
};
Alternative: We can get part of the benefits from default arguments to constructors, and that is not uncommon in older code. However, that is less explicit, causes more arguments to be passed, and is repetitive when there is more than one constructor:
class X3 { // BAD: inexplicit, argument passing overhead
int i;
string s;
int j;
public:
X3(int ii = 666, const string& ss = "qqq", int jj = 0)
:i{ii}, s{ss}, j{jj} { } // all members are initialized to their defaults
// ...
};
Enforcement
(Simple) Every constructor should initialize every member variable (either explicitly, via a delegating ctor call or via default construction).
(Simple) Default arguments to constructors suggest an in-class initializer might be more appropriate.
There is also guideline C.45, which explains the same point.

How to use element wise integer power with Eigen

I would like to take the element-wise power of an array of double with an array of int using Eigen's pow function.
Here is a sample code that reproduce the issue using Eigen v3.3.4 and v3.3.7:
#include <Eigen/Dense>
int main() {
Eigen::ArrayXd x(10);
Eigen::ArrayXd res(10);
Eigen::ArrayXi exponents(10);
x = Eigen::ArrayXd::Random(10);
exponents = Eigen::ArrayXi::LinSpaced(10, 0, 9);
res = Eigen::pow(x, exponents);
return (0);
}
The error message is quite long but in essence I get
YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY
which does not seem appropriate to me in this context, along with
Eigen3/Eigen/src/Core/functors/BinaryFunctors.h:294:84: error: no type named ‘ReturnType’ in ‘struct Eigen::ScalarBinaryOpTraits<double, int, Eigen::internal::scalar_pow_op<double, int> >’
typedef typename ScalarBinaryOpTraits<Scalar,Exponent,scalar_pow_op>::ReturnType result_type;
As the error message indicated, you can't mix scalar types implicitly. You have to explicitly cast so that the types match:
res = Eigen::pow(x, exponents.cast<double>());
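Applied to the original program, the fix looks like this:
#include <Eigen/Dense>

int main() {
    Eigen::ArrayXd x = Eigen::ArrayXd::Random(10);
    Eigen::ArrayXi exponents = Eigen::ArrayXi::LinSpaced(10, 0, 9);

    // Casting the integer exponents to double gives both operands the same
    // scalar type, so the mixed-type static assertion no longer fires.
    Eigen::ArrayXd res = Eigen::pow(x, exponents.cast<double>());
    return 0;
}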
As for a specialization for integer types, the template of the power function (as a functor) is:
template<typename ScalarX,typename ScalarY, bool IsInteger =
NumTraits<ScalarX>::IsInteger&&NumTraits<ScalarY>::IsInteger>
and calls a simple pow(x,y) unless both types are integers (IsInteger), in which case there is a different specialization.
There is also an overload for an array to the power of a constant, which doesn't seem to be what you are looking for. In that case (unless ggael corrects me), you can definitely implement your own CustomBinaryOp.
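If you would rather keep the exponents as integers, one way to go is a custom binary functor plus a ScalarBinaryOpTraits specialization so Eigen accepts the mixed scalar types. A rough sketch (int_pow_op is a made-up name, written against Eigen 3.3, not tested here):
#include <Eigen/Dense>
#include <cmath>

// Custom binary functor: raise a double base to an int exponent.
struct int_pow_op {
    typedef double result_type; // helps Eigen deduce the result scalar pre-C++11
    double operator()(double base, int exp) const { return std::pow(base, exp); }
};

// Tell Eigen that mixing double and int is fine for this particular functor.
namespace Eigen {
template<>
struct ScalarBinaryOpTraits<double, int, int_pow_op> {
    typedef double ReturnType;
};
}

int main() {
    Eigen::ArrayXd x = Eigen::ArrayXd::Random(10);
    Eigen::ArrayXi exponents = Eigen::ArrayXi::LinSpaced(10, 0, 9);

    // binaryExpr applies the functor coefficient-wise across the two arrays.
    Eigen::ArrayXd res = x.binaryExpr(exponents, int_pow_op());
    return 0;
}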

Why does initialization of int by parenthesis inside class give error? [duplicate]

For example, I cannot write this:
class A
{
vector<int> v(12, 1);
};
I can only write this:
class A
{
vector<int> v1{ 12, 1 };
vector<int> v2 = vector<int>(12, 1);
};
Why is there a difference between these two declaration syntaxes?
The rationale behind this choice is explicitly mentioned in the related proposal for non-static data member initializers:
An issue raised in Kona regarding scope of identifiers:
During discussion in the Core Working Group at the September ’07 meeting in Kona, a question arose about the scope of identifiers in the initializer. Do we want to allow class scope with the possibility of forward lookup; or do we want to require that the initializers be well-defined at the point that they’re parsed?
What’s desired:
The motivation for class-scope lookup is that we’d like to be able to put anything in a non-static data member’s initializer that we could put in a mem-initializer without significantly changing the semantics (modulo direct initialization vs. copy initialization):
int x();
struct S {
int i;
S() : i(x()) {} // currently well-formed, uses S::x()
// ...
static int x();
};
struct T {
int i = x(); // should use T::x(), ::x() would be a surprise
// ...
static int x();
};
Problem 1:
Unfortunately, this makes initializers of the “( expression-list )” form ambiguous at the time that the declaration is being parsed:
struct S {
int i(x); // data member with initializer
// ...
static int x;
};
struct T {
int i(x); // member function declaration
// ...
typedef int x;
};
One possible solution is to rely on the existing rule that, if a declaration could be an object or a function, then it’s a function:
struct S {
int i(j); // ill-formed...parsed as a member function,
// type j looked up but not found
// ...
static int j;
};
A similar solution would be to apply another existing rule, currently used only in templates, that if T could be a type or something else, then it’s something else; and we can use “typename” if we really mean a type:
struct S {
int i(x); // unambiguously a data member
int j(typename y); // unambiguously a member function
};
Both of those solutions introduce subtleties that are likely to be misunderstood by many users (as evidenced by the many questions on comp.lang.c++ about why “int i();” at block scope doesn’t declare a default-initialized int).
The solution proposed in this paper is to allow only initializers of the “= initializer-clause” and “{ initializer-list }” forms. That solves the ambiguity problem in most cases, for example:
HashingFunction hash_algorithm{"MD5"};
Here, we could not use the = form because HashingFunction's constructor is explicit.
In especially tricky cases, a type might have to be mentioned twice. Consider:
vector<int> x = 3; // error: the constructor taking an int is explicit
vector<int> x(3); // three elements default-initialized
vector<int> x{3}; // one element with the value 3
In that case, we have to choose between the two alternatives by using the appropriate notation:
vector<int> x = vector<int>(3); // rather than vector<int> x(3);
vector<int> x{3}; // one element with the value 3
Problem 2:
Another issue is that, because we propose no change to the rules for initializing static data members, adding the static keyword could make a well-formed initializer ill-formed:
struct S {
const int i = f(); // well-formed with forward lookup
static const int j = f(); // always ill-formed for statics
// ...
constexpr static int f() { return 0; }
};
Problem 3:
A third issue is that class-scope lookup could turn a compile-time error into a run-time error:
struct S {
int i = j; // ill-formed without forward lookup, undefined behavior with
int j = 3;
};
(Unless caught by the compiler, i might be initialized with the undefined value of j.)
The proposal:
CWG had a 6-to-3 straw poll in Kona in favor of class-scope lookup; and that is what this paper proposes, with initializers for non-static data members limited to the “= initializer-clause” and “{ initializer-list }” forms.
We believe:
Problem 1: This problem does not occur as we don’t propose the () notation. The = and {} initializer notations do not suffer from this problem.
Problem 2: adding the static keyword makes a number of differences, this being the least of them.
Problem 3: this is not a new problem, but is the same order-of-initialization problem that already exists with constructor initializers.
One possible reason is that allowing parentheses would lead us back to the most vexing parse in no time. Consider the two types below:
struct foo {};
struct bar
{
bar(foo const&) {}
};
Now, you have a data member of type bar that you want to initialize, so you define it as
struct A
{
bar B(foo());
};
But what you've done above is declare a function named B that returns a bar object by value and takes a single argument: a pointer to a function with the signature foo() (returning a foo and taking no arguments).
Judging by the number and frequency of questions asked on StackOverflow that deal with this issue, this is something most C++ programmers find surprising and unintuitive. Adding the new brace-or-equal-initializer syntax was a chance to avoid this ambiguity and start with a clean slate, which is likely the reason the C++ committee chose to do so.
bar B{foo{}};
bar B = foo();
Both lines above declare an object named B of type bar, as expected.
Aside from the guesswork above, I'd like to point out that the two declarations in your example do vastly different things.
vector<int> v1{ 12, 1 };
vector<int> v2 = vector<int>(12, 1);
The first line initializes v1 to a vector that contains two elements, 12 and 1. The second creates a vector v2 that contains 12 elements, each initialized to 1.
Be careful of this rule - if a type defines a constructor that takes an initializer_list<T>, then that constructor is always considered first when the initializer for the type is a braced-init-list. The other constructors will be considered only if the one taking the initializer_list is not viable.
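A short illustration of that precedence rule:
#include <vector>

int main()
{
    std::vector<int> a(12, 1); // (): 12 elements, each equal to 1
    std::vector<int> b{12, 1}; // {}: the initializer_list constructor wins -> 2 elements, 12 and 1
}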

Construct a 'long long'

How do you construct a long long in gcc, similar to constructing an int via int()?
The following fails in gcc (4.6.3 20120306) (but passes on MSVC for example).
myFunctionCall(someValue, long long());
with error expected primary-expression before 'long' (the column position indicates the first long is the location).
A simple change
myFunctionCall(someValue, (long long)int());
works fine - that is, construct an int and cast it to long long - indicating that gcc doesn't like the long long ctor.
Summary Solution
To summarize the brilliant explanation below from @birryree:
many compilers don't support long long() and it may not be standards-compliant
constructing a long long this way is equivalent to the literal 0LL, so use myFunctionCall(someValue, 0LL)
alternatively use a typedef (typedef long long long_long_t;) and then write long_long_t()
lastly, consider using int64_t if you are after a type that is exactly 64 bits on every platform, rather than a type that is at least 64 bits but may vary between platforms.
I wanted a definitive answer on what the expected behavior was, so I posted a question on comp.lang.c++.moderated and got some great answers in return. So a thank you goes out to Johannes Schaub, Alf P. Steinbach (both from SO), and Francis Glassborrow for the information.
This is not a bug in GCC - in fact the code is rejected by multiple compilers - GCC 4.6, GCC 4.7, and Clang all complain with similar errors, like primary expression expected before '(', if you try this syntax:
long long x = long long();
Some primitive type names contain spaces, which is not allowed with constructor-style initialization: long() binds as a single type name, but in long long() the second long is left dangling. Types with spaces in them (like long long) cannot use the type()-construction form.
MSVC is more permissive here, though technically non-standard compliant (and it's not a language extension that you can disable).
Solutions/Workarounds
There are alternatives for what you want to do:
Use 0LL as your value in place of attempting long long() - they would produce the same value.
This is how most code will be written too, so it will be most understandable to anyone else reading your code.
From your comments it seems like you really want long long, so you can typedef yourself to always guarantee you have a long long type, like this:
int main() {
typedef long long MyLongLong;
long long x = MyLongLong(); // or MyLongLong x = MyLongLong();
}
Use a template to get around needing explicit naming:
template<typename TypeT>
struct Type { typedef TypeT T; };
// call it like this:
long long ll = Type<long long>::T();
As I mentioned in my comments, you can use an aliased type, like int64_t (from <cstdint>), which on common platforms is a typedef for long long. This is more platform-dependent than the previous items in this list.
int64_t is a fixed-width type that is exactly 64 bits wide, which is typically how wide long long is on platforms like linux-x86 and windows-x86. long long is at least 64 bits wide, but can be longer. If your code will only run on certain platforms, or if you really need a fixed-width type, this might be a viable choice.
C++11 Solutions
Thanks to the C++ newsgroup, I learned some additional ways of doing what you want to do, but unfortunately they're only in the realm of C++11 (and MSVC10 doesn't support either, and only very new compilers either way would):
The {} way:
long long ll{}; // does the zero initialization
Using what Johannes refers to as the 'on-board tools' in C++11 with std::common_type<T>
#include <type_traits>
int main() {
long long ll = std::common_type<long long>::type();
}
So is there a real difference between () and initializing with 0 for POD types?
You say this in a comment:
I don't think default ctor returns zero always - more typical behaviour is to leave memory untouched.
Well, for primitive types, that is not true at all.
From Section 8.5 of the ISO C++ Standard/2003 (don't have 2011, sorry, but this information didn't change too much):
To default-initialize an object of type T means:
— if T is a non-POD class type (clause 9), the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor);
— if T is an array type, each element is default-initialized;
— otherwise, the object is zero-initialized.
The last clause is most important here because long long, unsigned long, int, float, etc. are all scalar/POD types, and so calling things like this:
int x = int();
Is exactly the same as doing this:
int x = 0;
Generated code example
Here is a more concrete example of what actually happens in code:
#include <iostream>
template<typename T>
void create_and_print() {
T y = T();
std::cout << y << std::endl;
}
int main() {
create_and_print<unsigned long long>();
typedef long long mll;
long long y = mll();
long long z = 0LL;
int mi = int();
}
Compile this with:
g++ -fdump-tree-original construction.cxx
And I get this in the generated tree dump:
;; Function int main() (null)
;; enabled by -tree-original
{
typedef mll mll;
long long int y = 0;
long long int z = 0;
int mi = 0;
<<cleanup_point <<< Unknown tree: expr_stmt
create_and_print<long long unsigned int> () >>>>>;
<<cleanup_point long long int y = 0;>>;
<<cleanup_point long long int z = 0;>>;
<<cleanup_point int mi = 0;>>;
}
return <retval> = 0;
;; Function void create_and_print() [with T = long long unsigned int] (null)
;; enabled by -tree-original
{
long long unsigned int y = 0;
<<cleanup_point long long unsigned int y = 0;>>;
<<cleanup_point <<< Unknown tree: expr_stmt
(void) std::basic_ostream<char>::operator<< ((struct __ostream_type *) std::basic_ostream<char>::operator<< (&cout, y), endl) >>>>>;
}
Generated Code Implications
So from the code tree generated above, notice that all my variables are just being initialized with 0, even if I use constructor-style default initialization, like with int mi = int(). GCC will generate code that just does int mi = 0.
My template function that just attempts to do default construction of some passed in typename T, where T = unsigned long long, also produced just a 0-initialization code.
Conclusion
So in conclusion, if you want to default-construct primitive types/PODs, it is the same as initializing them with 0.

A simple way to convert to/from VARIANT types in C++

Are there any easy-to-use, high-level classes or libraries that let you interact with VARIANTs in Visual C++?
More specifically, I'd like to convert between POD types (e.g. double, long), strings (e.g. CString), and containers (e.g. std::vector) and VARIANTs. For example:
long val = 42;
VARIANT var;
if (ToVariant(val, var)) ... // tries to convert long -> VARIANT
comObjPtr->someFunc(var);
std::vector<double> vec;
VARIANT var = comObjPtr->otherFunc();
if (FromVariant(var, vec)) ... // tries VARIANT -> std::vector<double>
I (naively?) assumed that people working with COM do this all the time, so there would most likely be a single convenient library that handles all sorts of conversions. But all that I've been able to find is a motley assortment of wrapper classes that each convert a few types:
_variant_t or CComVariant for POD types
_bstr_t, CComBSTR, or BSTR for strings
CComSafeArray or SAFEARRAY for arrays
Is there any simple way -- short of switching to Visual Basic -- to avoid this nightmare of awkward memory management and bitwise VT_ARRAY | VT_I4 code?
Related questions:
CComVariant vs. _variant_t, CComBSTR vs. _bstr_t
Convert VARIANT to...?
How to best convert VARIANT_BOOL to C++ bool?
Well, most of the hard work is already done for you with the various wrapper classes. I prefer _variant_t and _bstr_t as they are more suited for conversion to/from POD types and strings. For simple arrays, all you really need is a template conversion function. Something like the following:
// parameter validation and error checking omitted for clarity
template<typename T>
void FromVariant(VARIANT Var, std::vector<T>& Vec)
{
    // Attach takes ownership of the SAFEARRAY inside the VARIANT;
    // Detach at the end hands it back so the caller's VARIANT stays valid.
    CComSafeArray<T> SafeArray;
    SafeArray.Attach(Var.parray);
    ULONG Count = SafeArray.GetCount();
    Vec.resize(Count);
    for(LONG Index = 0; Index < static_cast<LONG>(Count); Index++)
    {
        Vec[Index] = SafeArray[Index];
    }
    SafeArray.Detach();
}
....
std::vector<double> Vec;
VARIANT Var = ...;
FromVariant(Var, Vec);
...
Of course things get hairy (with regard to memory and lifetime management) if the array contains non-POD types, but it is still doable.
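Going the other way is similar; here is a minimal sketch of a ToVariant for a vector of doubles (the name ToVariant just mirrors the question, and the code assumes ATL's <atlsafe.h>):
#include <atlsafe.h>
#include <vector>

// error checking omitted for clarity
void ToVariant(const std::vector<double>& Vec, VARIANT& Var)
{
    // Build a SAFEARRAY of doubles and copy the elements across.
    CComSafeArray<double> SafeArray(static_cast<ULONG>(Vec.size()));
    for (LONG Index = 0; Index < static_cast<LONG>(Vec.size()); Index++)
    {
        SafeArray.SetAt(Index, Vec[Index]);
    }

    VariantInit(&Var);
    Var.vt = VT_ARRAY | VT_R8;       // VT_R8 is the VARIANT tag for double
    Var.parray = SafeArray.Detach(); // the caller now owns the SAFEARRAY
}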
