I wrote a method that uses myarray, defined in the same class. When I use count it always returns 0.
When I use:
printf("%d", [myarray count]);
compiler says:
Format '%d' expects type 'int', but argument 2 has type 'NSUInteger'
why?
You should use %lu instead of %d. The compiler checks your format string against the arguments that you are passing to printf, sees that you are passing an unsigned value but printing it as a signed integer, and issues a warning. The warning indicates that for numbers greater than or equal to 2^31 printf would output a large negative number, whereas the data type implies a different meaning, namely a large positive integer.
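For example, the call could become (a small sketch; the explicit cast keeps the format portable, since NSUInteger is unsigned long on 64-bit builds but unsigned int on 32-bit ones):

    printf("%lu", (unsigned long)[myarray count]);

The same cast works with NSLog and the %lu directive.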
EDITED in response to comments by Josh Caswell and thepepp
I have this line in Fortran and I'm getting the compiler error in the title. dFeV is a 1D array of reals.
dFeV(x)=R1*5**(15) * (a**2) * EXP(-(VmigFe)/kbt)
For the record, the variable names are inherited and not my fault. I think this is an issue with not having the memory space to compute the value on the right before I store it on the left as a real (which would have enough room), but I don't know how to allocate more space for that computation.
The problem arises because one part of your computation is done using integer arithmetic of type integer(4).
That type has an upper limit of 2^31 - 1 = 2147483647, whereas your intermediate result 5^15 = 30517578125 is considerably larger (thanks to #evets' comment).
As pointed out in your question, you save the result in a real variable.
Therefore, you could just compute that exponentiation using real data types: 5.0**15.
Your formula will end up like the following:
dFeV(x)= R1 * (5.0**15) * (a**2) * exp(-(VmigFe)/kbt)
Note that integer(4) need not have the same implementation on every processor (thanks #IanBush).
This just means that on some machines the upper limit might differ from 2^31 - 1 = 2147483647.
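If you want to see the actual limit your compiler uses for the default integer kind, a quick check is the huge intrinsic (a minimal sketch of my own, not from the original answer):

    program kind_limits
        implicit none
        ! huge() returns the largest value representable by its argument's kind
        print *, huge(1)    ! typically 2147483647 for the default integer kind
    end program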
As indicated in the comment, the value of 5**15 exceeds the range of a 4-byte signed integer, which is the typical default integer type. So you need to instruct the compiler to use a larger type for these constants. This program example shows one method. The ISO_FORTRAN_ENV module provides the int64 type. UPDATE: corrected to what I meant, as pointed out in comments.
program test_program
    use ISO_FORTRAN_ENV
    implicit none
    integer(int64) :: i
    i = 5_int64 ** 15_int64
    write (*, *) i
end program
Although there does seem to be an additional point here that may be specific to gfortran:
integer(kind = 8) :: result
result = 5**15
print *, result
gives: Error: Result of exponentiation at (1) exceeds the range of INTEGER(4)
while
integer(kind = 8) :: result
result = 5**7 * 5**8
print *, result
gives: 30517578125
I.e., the exponentiation seems to be evaluated with an integer(4) limit even if the variable to which the answer is being assigned has a larger capacity.
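One way around that limit (a sketch only, assuming the compiler supports integer kind 8, as gfortran does) is to give the base a wider kind so the exponentiation itself is carried out in 8-byte arithmetic:

    integer(kind = 8) :: result
    result = 5_8**15    ! the base literal has kind 8, so no integer(4) intermediate is involved
    print *, result     ! 30517578125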
I am having an issue processing the following type declaration:
type foo is range -(2**30) to 2**30;
For the exponentiation, there are two possible interpretations to consider:
function "**"(universal_integer, integer) return universal_integer;
function "**"(integer, integer) return integer;
(There are interpretations using reals, but we can easily reject those since the first argument is integral.)
Since the second operand of the exponentiation must be integer, use of either alternative requires an implicit conversion. For this reason, we cannot disqualify either choice on the grounds that an implicit conversion is required.
The bounds of a type declaration must be of some integral type, but they need not be the same type. Absent any further constraints, I reject this type declaration as being ambiguous.
This seems to be a quirk with exponentiation. If I had instead:
type foo is range 0 to 2 * 30;
Then again you have two alternatives for "*". However:
function "*"(universal_integer,universal_integer) return universal_integer;
does not require an implicit conversion, so we can use that and reject the other.
Furthermore:
constant K: integer := 2 ** 30;
also works fine due to the additional constraint that the result type must be integer.
But the combination of type definition and exponentiation deprives us of enough information to make a selection.
Since this type definition works in other implementations, what am I missing?
On behalf of my implementor ..
Ken
For
type foo is range -(2**30) to 2**30;
There are two eligible "**" functions implementing the exponentiation operator.
9.2.8 Miscellaneous operators
The exponentiating operator ** is predefined for each integer type and for each floating-point type. In either case the right operand, called the exponent, is of the predefined type INTEGER.
Exponentiation with an integer exponent is equivalent to repeated multiplication of the left operand by itself for a number of times indicated by the absolute value of the exponent and from left to right; if the exponent is negative, then the result is the reciprocal of that obtained with the absolute value of the exponent. Exponentiation with a negative exponent is only allowed for a left operand of a floating-point type. Exponentiation by a zero exponent results in the value one. Exponentiation of a value of a floating-point type is approximate.
Predefined operators for predefined types are shown as comments in package STANDARD. The two applicable here:
16.3 Package STANDARD
-- function "**" (anonymous: universal_integer; anonymous: INTEGER)
-- return universal_integer;
-- function "**" (anonymous: INTEGER; anonymous: INTEGER)
-- return INTEGER;
Note the right operand type is INTEGER. The right operand is subject to implicit type conversion.
The reason there are two is found in 5.2.3 Integer types, 5.2.3.1 General:
Each bound of a range constraint that is used in an integer type definition shall be a locally static expression of some integer type, but the two bounds need not have the same integer type. (Negative bounds are allowed.)
Integer literals are the literals of an anonymous predefined type that is called universal_integer in this standard. Other integer types have no literals. However, for each integer type there exists an implicit conversion that converts a value of type universal_integer into the corresponding value (if any) of the integer type (see 9.3.6).
The bounds are required to be some integer type and that can include universal_integer.
Which of the two is selected by the context of overload resolution is based on semantics found in 9.3.6 Type conversions:
In certain cases, an implicit type conversion will be performed. An implicit conversion of an operand of type universal_integer to another integer type, or of an operand of type universal_real to another floating-point type, can only be applied if the operand is either a numeric literal or an attribute, or if the operand is an expression consisting of the division of a value of a physical type by a value of the same type; such an operand is called a convertible universal operand. An implicit conversion of a convertible universal operand is applied if and only if the innermost complete context determines a unique (numeric) target type for the implicit conversion, and there is no legal interpretation of this context without this conversion.
You only implicitly convert from universal_integer to another integer type if there is no other legal interpretation. This is the mechanism that rules out having two possible choices here in the context of overload resolution.
For 2 ** 30, do you have to implicitly convert the left operand? No. Do you have to implicitly convert the right operand? Yes, both choices require the type to be INTEGER. Further, there's no requirement that the result be an integer type other than universal_integer; the semantics of an integer type definition (5.2.3.1) don't require one.
Looking at 9.5 Universal expressions, there's one additional limitation on the definition of an integer type here:
For the evaluation of an operation of a universal expression, the following rules apply. If the result is of type universal_integer, then the values of the operands and the result shall lie within the range of the integer type with the widest range provided by the implementation, excluding type universal_integer itself. If the result is of type universal_real, then the values of the operands and the result shall lie within the range of the floating-point type with the widest range provided by the implementation, excluding type universal_real itself.
There's no way for you to use literals to describe an integer type with a value range greater than that of type INTEGER, the only predefined integer type (5.2.3.2).
There's also the limitation that integer literals are of the anonymous type universal_integer, whose value range is unknowable.
12.5 The context of overload resolution
When considering possible interpretations of a complete context, the only rules considered are the syntax rules, the scope and visibility rules, and the rules of the form as follows:
a) ...
e) The rules given for the resolution of overloaded subprogram calls; for the implicit conversions of universal expressions; for the interpretation of discrete ranges with bounds having a universal type; for the interpretation of an expanded name whose prefix denotes a subprogram; and for a subprogram named in a subprogram instantiation declaration to denote an uninstantiated subprogram.
f) ...
Here we can see that both the semantics of implicit conversion found in 9.3.6 and the limitations on range found in 9.5 are embraced.
For purposes of knowing how to select between the two candidate functions providing the operator overload, the key is found in 9.3.6.
Both operands of the exponentiation operator in
constant K: integer := 2 ** 30;
can be implicitly type converted to another integer type (INTEGER), while the otherwise universal expression 2 ** 30 can't be implicitly converted, not meeting the requirements of 9.3.6 (the operands aren't values of a physical type and the operator isn't the multiplying operator /).
The overload for that is also alluded to in package STANDARD:
-- function "**" (anonymous: INTEGER; anonymous: INTEGER)
-- return INTEGER;
Chuck Swart (the last ISAC chair) described this behavior in Issue Report 2073 (IR2073.txt) as forcing implicit type conversion to the leaves of an abstract syntax tree. In his words "This clause implies that implicit conversions occur as far down as possible in the expression tree." Note that 9.3.6 was 7.3.5 before Accellera VHDL -2006 was re-written to conform to the IEEE-SA's format for a standard.
The universal_integer to another integer type conversion can be forced away from the leaves by using explicit type conversion:
constant K: integer := integer(2 ** 30);
Note that the right operand of "**", the integer literal 30, would still be implicitly type converted to type INTEGER, while the left operand and result are of type universal_integer.
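If the aim is simply to get a declaration like the original one accepted, one possible workaround (a sketch of mine; BOUND is an illustrative name, and it relies on the bounds fitting in INTEGER, as 2 ** 30 does) is to evaluate the exponentiation in an INTEGER-typed constant first, exactly like the constant declaration above, and then use that constant for the bounds:

    constant BOUND: integer := 2 ** 30;   -- the result type integer makes the overload choice unambiguous
    type foo is range -BOUND to BOUND;    -- bounds are now locally static expressions of type integer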
Two kinds of conversion of the same constant to float64 return the same value, but when I try to convert these new values to int, the results are different.
...
const Big = 92233720368547758074444444

func needFloat(x float64) float64 {
    return x
}

func main() {
    fmt.Println(needFloat(Big))
    fmt.Println(float64(Big))
    fmt.Println(int(needFloat(Big)))
    fmt.Println(int(float64(Big)))
}
I'd expect the first two Println calls to print the same kind of value:
fmt.Println(needFloat(Big)) // 9.223372036854776e+25
fmt.Println(float64(Big)) // 9.223372036854776e+25
so when I convert them to int, I expect the same output, but:
fmt.Println(int(needFloat(Big))) // -2147483648
fmt.Println(int(float64(Big))) // constant 92233720368547758080000000 overflows int
If your real question is why one attempt to convert to int produces a compile-time error message, but the other produces a very negative integer, it's because one is a compile-time conversion, and the other is a runtime conversion. I think it helps in these cases to be explicit about what you are expecting, and what can be run and what can't. Here's a Go Playground version of your code, where the last conversion is commented out. The reason for commenting it out is of course that it doesn't compile.
As Adrian noted in a comment, Big is a constant, specifically an untyped one. As Uvelichitel answered, a constant x (of any type) can be converted to a new and different type T if and only if
x is representable by a value of type T.
(The quote part is from the section Uvelichitel linked, except that mine adds the inner link for the word "representable".)
The expression float64(Big) is an explicit type conversion, with a constant as its x, so the result is a float64-typed constant with the given value. So far, that's fine: now we have 92233720368547758074444444 as a float64. This chops off some of the digits: the actual internal representation is 92233720368547758080000000 (see variant with %f directives). The low digits, ...74444444, have been rounded to ...80000000. See the link for "representable" for why the rounding occurs.
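For illustration, two extra lines in the same main as above would show the rounded value written out in full (these lines are mine, not from the original question):

    fmt.Printf("%f\n", needFloat(Big)) // 92233720368547758080000000.000000
    fmt.Printf("%f\n", float64(Big))   // 92233720368547758080000000.000000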
The expression int(float64(Big)) is an outer explicit type conversion surrounding an inner explicit type conversion. We already know what the inner type conversion does: it produces the float64 constant 92233720368547758080000000.0. The outer conversion tries to represent this new value as int, but it does not fit, producing an error:
./prog.go:18:17: constant 92233720368547758080000000 overflows int
if the commented-out line is uncommented. Note again that the value has been rounded, due to the inner conversion.
On the other hand, needFloat(Big) is a function call. Calling the function assigns the untyped constant to its argument (a float64) and obtains its return value (the same float64, value 92233720368547758080000000.0). Printing that prints what you'd expect, given the default or explicit formatting directive. The returned value is not a constant.
Similarly, int(needFloat(Big)) calls needFloat, which returns the same float64 value—not a constant—as before. The int explicit type conversion tries to convert this value to int at runtime, rather than at compile time. For such conversions between numeric types, there is a list of three explicit rules at https://golang.org/ref/spec#Conversions, plus a final caveat. Here, rule 2 applies: any fractional part is discarded. But the caveat also applies:
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
In other words, there is no runtime error, but the int value you get—which in this case was -2147483648, which is the smallest allowed 32-bit integer—is up to the implementation. This particular implementation chose to use this particular negative number as its result. Another implementation might choose some other number. (Interestingly, in the playground, if I convert directly to uint I get zero. If I convert to int, then to uint, I get the 0x80000000 I expected.)
Hence, the key difference in terms of whether you get an error is whether you do the conversion at compile time, via constants, or at runtime, via runtime conversion.
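If you want the runtime conversion to fail loudly rather than hand you an implementation-dependent number, one option is to range-check the float64 first. This is a sketch of my own (toInt is a hypothetical helper, and math.MaxInt/math.MinInt need Go 1.17 or later):

    package main

    import (
        "fmt"
        "math"
    )

    // toInt reports an error when a float64 value does not fit in int, instead of
    // relying on the implementation-dependent result of an overflowing conversion.
    func toInt(f float64) (int, error) {
        // math.MaxInt rounds up when converted to float64, so the upper check is
        // slightly conservative at the very top of the range; NaN is also rejected.
        if f != f || f < math.MinInt || f >= math.MaxInt {
            return 0, fmt.Errorf("%g does not fit in int", f)
        }
        return int(f), nil
    }

    func main() {
        _, err := toInt(9.223372036854776e+25) // the rounded value of Big
        fmt.Println(err)
    }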
int(float64(Big)) //illegal because
A constant value x can be converted to type T if x is representable by
a value of T
int(needFloat(Big)) //is non-constant expression because of function call
A non-constant value x can be converted to type T in any of these
cases:
- x's type and T are both integer or floating point types.
https://golang.org/ref/spec#Conversions
I'm trying to print the number of microseconds to a file:
high_resolution_clock::time_point t1 = high_resolution_clock::now();
high_resolution_clock::time_point t2 = high_resolution_clock::now();
auto duration1 = duration_cast<microseconds> (t2-t1).count();
fprintf(file, "%lu, %lu\n", duration1, duration1);
In the file I can see the first column having values around 2000, but the second column is always zero. I wonder if I'm using fprintf correctly (the %lu parameter), and why does it print the second variable as zero in the file?
The count function returns a value of a type called rep, which according to this std::duration reference is
an arithmetic type representing the number of ticks
Since you don't know the exact type, you can't really use any printf function to print the values, since if you use the wrong format you will have undefined behavior (which is very likely what you have here).
This will be easily solved if you use C++ streams instead, since the correct "output" operator << will automatically be selected to handle the type.
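A minimal sketch of that approach (the file name and the use of std::ofstream are my additions; only the timing calls come from the question):

    #include <chrono>
    #include <fstream>

    int main() {
        using namespace std::chrono;

        high_resolution_clock::time_point t1 = high_resolution_clock::now();
        // ... the code being timed ...
        high_resolution_clock::time_point t2 = high_resolution_clock::now();

        auto duration1 = duration_cast<microseconds>(t2 - t1).count();

        std::ofstream file("timings.txt");
        file << duration1 << ", " << duration1 << '\n'; // operator<< picks the overload matching rep
    }

If you do want to stay with fprintf, explicitly casting the value, for example printing (long long)duration1 with %lld, also avoids the format mismatch.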
What is the cast expression equivalent of VB.NET's CType in Visual Basic 6?
There are a number of them depending on the type you are casting to:
CInt() - cast to Integer
CStr() - cast to String
CLng() - cast to Long
CDbl() - cast to Double
CDate() - cast to Date
It also has implicit casting, so you can do this: myString = myInt
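For example (an illustrative snippet of my own, not from the original answer):

    Dim myInt As Integer
    Dim myString As String

    myInt = CInt("42")     ' explicit conversion from String to Integer
    myString = myInt       ' implicit conversion back to String ("42")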
Quite a few posters seem to have misread the question, so I will try to set things straight by rephrasing the question and summarizing the correct answers given so far.
Problem
I want to cast data of one type to another type. In my VB.NET code I would use CType to do this. However, when I try to use CType in VB6, I get a "Sub or Function not defined" error. So, how can I perform casts in VB6 if CType won't work?
Solution
As you may have discovered, VB6 does not have a CType function like VB.NET does. However, the other conversion functions (those that have names beginning with C), which you may have encountered in VB.NET code, such as CInt and CStr, do exist in VB6, and you can use them to convert to and from non-object types. There is no built-in function for converting an object of one class to an object of another class. Keep in mind that VB6, unlike VB.NET, does not support inheritance. A class in VB6 can implement one or more interfaces, but it cannot inherit from another class. However, if an object's class implements more than one interface, you can use the Set statement to cast an object to one of the interfaces it supports (as Ant suggested). An extended version of Ant's code example is provided below:
Example: Casting a class to one of its supported interfaces
Dim base As BaseClass
Dim child As ChildClass 'implements BaseClass'
Set child = New ChildClass
Set base = child '"Cast" child to BaseClass'
Built-in type conversion functions in VB6
Below is a complete list of the built-in conversion functions available in VB6, taken directly from the VB6 Help file.
CBool
Returns
Boolean
Description
Convert expression to Boolean.
Range for expression argument:
Any valid string or numeric expression.
CByte
Returns
Byte
Description
Convert expression to Byte.
Range for expression argument:
0 to 255.
CCur
Returns
Currency
Description
Convert expression to Currency.
Range for expression argument:
-922,337,203,685,477.5808 to 922,337,203,685,477.5807.
CDate
Returns
Date
Description
Convert expression to Date.
Range for expression argument:
Any valid date expression.
CDbl
Returns
Double
Description
Convert expression to Double.
Range for expression argument:
-1.79769313486232E308 to
-4.94065645841247E-324 for negative values; 4.94065645841247E-324 to 1.79769313486232E308 for positive values.
CDec
Returns
Decimal
Description
Convert expression to Decimal.
Range for expression argument:
+/-79,228,162,514,264,337,593,543,950,335 for zero-scaled numbers, that is, numbers with no decimal places. For numbers with 28 decimal places, the range is
+/-7.9228162514264337593543950335. The smallest possible non-zero number is 0.0000000000000000000000000001.
CInt
Returns
Integer
Description
Convert expression to Integer.
Range for expression argument:
-32,768 to 32,767; fractions are rounded.
CLng
Returns
Long
Description
Convert expression to Long.
Range for expression argument:
-2,147,483,648 to 2,147,483,647; fractions are rounded.
CSng
Returns
Single
Description
Convert expression to Single.
Range for expression argument:
-3.402823E38 to -1.401298E-45 for negative values; 1.401298E-45 to 3.402823E38 for positive values.
CStr
Returns
String
Description
Convert expression to String.
Range for expression argument:
Returns for CStr depend on the expression argument.
CVar
Returns
Variant
Description
Convert expression to Variant.
Range for expression argument:
Same range as Double for numerics. Same range as String for non-numerics.
Let's say you have an object of ChildClass (child) that you want to cast to BaseClass. You do this:
Dim base As BaseClass
Set base = child
Because of the way VB6 handles compile-time type safety, you can just do that without any extra syntax.
Note: Given that everyone else seems to have mentioned CType, I may just have misunderstood the question completely, and I apologise if that's the case!
The casts already mentioned are correct, but if the type is an Object then you have to use "Set" in VB6, such as:
If IsObject(Value) Then
Set myObject = Value ' VB6 does not have CType(Value, MyObjectType)
Else
myObject = Value ' VB6 does not have CType(Value, MyObjectType)
End If
That, of course, depends on the type you are casting to. Almost all user classes are objects, as are Collection, Dictionary, and many others. The built-in types such as Long, Integer, Boolean, etc. are obviously not objects.
CType(), I believe. The C* functions (CDate(), CStr(), etc.) are holdovers for the most part.
Conversions are not "casts" at all. For example try:
MsgBox CLng(CBool(3&))
The result is -1, not 3. This is because those are conversion functions, not casts. Language is important!