Initialize const variable - Go

How can I initialize the KILO variable as a const?
const KILO = math.Pow10(3)
Because I get an error:
const initializer math.Pow10(3) is not a constant

Constant declarations cannot contain function calls (with some exceptions, see below): constants must be evaluated at compile time, while a function call is carried out at runtime.
Quoting from Spec: Constants:
A constant value is represented by a rune, integer, floating-point, imaginary, or string literal, an identifier denoting a constant, a constant expression, a conversion with a result that is a constant, or the result value of some built-in functions such as unsafe.Sizeof applied to any value, cap or len applied to some expressions, real and imag applied to a complex constant and complex applied to numeric constants.
And quoting from Spec: Constant expressions:
Constant expressions may contain only constant operands and are evaluated at compile time.
Note that there is a small set of builtin functions, such as unsafe.Sizeof(), that may be called in constant declarations, but generally you can't call functions there.
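For instance, here's a minimal sketch (my own example, not from the original answer) of constant declarations that legally call those constant-friendly builtins:

package main

import (
	"fmt"
	"unsafe"
)

const (
	Greeting = "hello"
	// len applied to a string constant is itself a constant:
	GreetingLen = len(Greeting)
	// unsafe.Sizeof of an ordinary value is a compile-time constant (type uintptr):
	WordSize = unsafe.Sizeof(uintptr(0))
)

func main() {
	fmt.Println(GreetingLen, WordSize) // 5 8 (WordSize is 8 on 64-bit platforms)
}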
So just use
const Kilo = 1000 // Integer literal
Or
const Kilo = 1e3 // Floating-point literal
For an extensive introduction to Go constants, read blog post: Constants
If for some reason you do need to call a function, you cannot store the result in a constant; it must be a variable, e.g.:
var Kilo = math.Pow10(3)
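As a complete, runnable sketch (using only the standard library):

package main

import (
	"fmt"
	"math"
)

// Kilo cannot be a constant because math.Pow10 runs at runtime.
var Kilo = math.Pow10(3)

func main() {
	fmt.Println(Kilo) // 1000
}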
Also see related Writing powers of 10 as constants compactly.

Related

gfortran compiler Error: result of exponentiation exceeds the range of INTEGER(4)

I have this line in Fortran and I'm getting the compiler error in the title. dFeV is a 1d array of reals.
dFeV(x)=R1*5**(15) * (a**2) * EXP(-(VmigFe)/kbt)
For the record, the variable names are inherited and not my fault. I think this is an issue with not having the memory space to compute the value on the right before I store it on the left as a real (which would have enough room), but I don't know how to allocate more space for that computation.
The problem arises because one part of your computation is done using integer arithmetic of type integer(4).
That type has an upper limit of 2^31-1 = 2147483647, whereas your intermediate result 5^15 = 30517578125 is about 14 times larger (thanks to @evets' comment).
As pointed out in your question, you save the result in a real variable.
Therefore, you could just compute that exponentiation using real data types: 5.0**15.
Your formula will then end up like the following:
dFeV(x)= R1 * (5.0**15) * (a**2) * exp(-(VmigFe)/kbt)
Note that integer(4) need not mean the same implementation for every processor (thanks @IanBush), which just means that on some specific machines the upper limit might differ from 2^31-1 = 2147483647.
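As a quick sanity check of the arithmetic, here's a sketch in Go (the language used elsewhere in this digest; not part of the original Fortran answer):

package main

import (
	"fmt"
	"math"
)

func main() {
	var v int64 = 1
	for i := 0; i < 15; i++ {
		v *= 5 // compute 5**15 in 64-bit arithmetic
	}
	fmt.Println(v)                    // 30517578125
	fmt.Println(int64(math.MaxInt32)) // 2147483647: the integer(4) ceiling
	fmt.Println(v > math.MaxInt32)    // true: the intermediate result overflows integer(4)
}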
As indicated in the comment, the value of 5**15 exceeds the range of 4-byte signed integers, which are the typical default integer type. So you need to instruct the compiler to use a larger type for these constants. This program example shows one method. The ISO_FORTRAN_ENV module provides the int64 type. UPDATE: corrected to what I meant, as pointed out in comments.
program test_program
   use ISO_FORTRAN_ENV
   implicit none
   integer (int64) :: i
   i = 5_int64 ** 15_int64
   write (*, *) i
end program
Although there does seem to be an additional point here that may be specific to gfortran:
integer(kind = 8) :: result
result = 5**15
print *, result
gives: Error: Result of exponentiation at (1) exceeds the range of INTEGER(4)
while
integer(kind = 8) :: result
result = 5**7 * 5**8
print *, result
gives: 30517578125
i.e. the exponentiation function seems to have an integer(4) limit even if the variable to which the answer is being assigned has a larger capacity.

VHDL: How is this statement resolved: type foo is range -(2**30) to 2**30;?

I am having an issue processing the following type declaration:
type foo is range -(2**30) to 2**30;
For the exponentiation, there are two possible interpretations to consider:
function "**"(universal_integer, integer) return universal_integer;
function "**"(integer, integer) return integer;
(There are interpretations using reals, but we can easily reject those since the first argument is integral.)
Since the second operand of the exponentiation must be integer, use of either alternative requires an implicit conversion. For this reason, we cannot disqualify either choice on the grounds that an implicit conversion is required.
The bounds of a type declaration must be of some integral type, but they need not be the same type. Absent any further constraints, I reject this type declaration as being ambiguous.
This seems to be a quirk with exponentiation. If I had instead:
type foo is range 0 to 2 * 30;
Then again you have two alternatives for "*". However:
function "*"(universal_integer,universal_integer) return universal_integer;
does not require an implicit conversion, so we can use that and reject the other.
Furthermore:
constant K: integer := 2 ** 30;
also works fine due to the additional constraint that the result type must be integer.
But the combination of type definition and exponentiation deprives us of enough information to make a selection.
Since this type definition works in other implementations, what am I missing?
On behalf of my implementor ..
Ken
For
type foo is range -(2**30) to 2**30;
There are two eligible "**" functions implementing the exponentiation operator.
9.2.8 Miscellaneous operators
The exponentiating operator ** is predefined for each integer type and for each floating-point type. In either case the right operand, called the exponent, is of the predefined type INTEGER.
Exponentiation with an integer exponent is equivalent to repeated multiplication of the left operand by itself for a number of times indicated by the absolute value of the exponent and from left to right; if the exponent is negative, then the result is the reciprocal of that obtained with the absolute value of the exponent. Exponentiation with a negative exponent is only allowed for a left operand of a floating-point type. Exponentiation by a zero exponent results in the value one. Exponentiation of a value of a floating-point type is approximate.
Predefined operators for predefined types are shown as comments in package STANDARD. The two applicable here:
16.3 Package STANDARD
-- function "**" (anonymous: universal_integer; anonymous: INTEGER)
-- return universal_integer;
-- function "**" (anonymous: INTEGER; anonymous: INTEGER)
-- return INTEGER;
Note the right operand type is INTEGER. The right operand is subject to implicit type conversion.
The reason there are two is found in 5.2.3 Integer types, 5.2.3.1 General:
Each bound of a range constraint that is used in an integer type definition shall be a locally static expression of some integer type, but the two bounds need not have the same integer type. (Negative bounds are allowed.)
Integer literals are the literals of an anonymous predefined type that is called universal_integer in this standard. Other integer types have no literals. However, for each integer type there exists an implicit conversion that converts a value of type universal_integer into the corresponding value (if any) of the integer type (see 9.3.6).
The bounds are required to be some integer type and that can include universal_integer.
Which of those two is selected in the context of overload resolution is based on semantics found in 9.3.6 Type conversions:
In certain cases, an implicit type conversion will be performed. An implicit conversion of an operand of type universal_integer to another integer type, or of an operand of type universal_real to another floating-point type, can only be applied if the operand is either a numeric literal or an attribute, or if the operand is an expression consisting of the division of a value of a physical type by a value of the same type; such an operand is called a convertible universal operand. An implicit conversion of a convertible universal operand is applied if and only if the innermost complete context determines a unique (numeric) target type for the implicit conversion, and there is no legal interpretation of this context without this conversion.
You only implicitly convert from universal_integer to another integer type if there is no other legal interpretation. This is the mechanism to prevent two possible choices here in the context of overload resolution.
For 2 ** 30, do you have to implicitly convert the left operand? No. Do you have to implicitly convert the right operand? Yes; both choices require that its type be INTEGER. Further, there's no requirement that the result be an integer type other than universal_integer; the semantics of an integer type definition (5.2.3.1) only require some integer type.
Looking at 9.5 Universal expressions, there's one additional limitation on the definition of an integer type here:
For the evaluation of an operation of a universal expression, the following rules apply. If the result is of type universal_integer, then the values of the operands and the result shall lie within the range of the integer type with the widest range provided by the implementation, excluding type universal_integer itself. If the result is of type universal_real, then the values of the operands and the result shall lie within the range of the floating-point type with the widest range provided by the implementation, excluding type universal_real itself.
There's no way for you to use literals to describe an integer type with a value range greater than that of type INTEGER, the only predefined integer type (5.2.3.2).
There's also the limitation that integer literals are of the anonymous type universal_integer, whose value range is unknowable.
12.5 The context of overload resolution
When considering possible interpretations of a complete context, the only rules considered are the syntax rules, the scope and visibility rules, and the rules of the form as follows:
a) ...
e) The rules given for the resolution of overloaded subprogram calls; for the implicit conversions of universal expressions; for the interpretation of discrete ranges with bounds having a universal type; for the interpretation of an expanded name whose prefix denotes a subprogram; and for a subprogram named in a subprogram instantiation declaration to denote an uninstantiated subprogram.
f) ...
Where we can see both the semantics of implicit conversion found in 9.3.6 and the limitations on range found in 9.5 are embraced.
For purposes of knowing how to select between the two candidate functions providing the operator overload, the key is found in 9.3.6.
Both operands of the exponentiation operator in
constant K: integer := 2 ** 30;
can be implicitly type converted to another integer type (INTEGER), while the otherwise universal expression 2 ** 30 can't be implicitly converted, as it does not meet the requirements of 9.3.6 (the operands aren't values of a physical type and the operator isn't the multiplying operator /).
The overload for that is also alluded to in package STANDARD:
-- function "**" (anonymous: INTEGER; anonymous: INTEGER)
-- return INTEGER;
Chuck Swart (the last ISAC chair) described this behavior in Issue Report 2073 (IR2073.txt) as forcing implicit type conversion to the leaves of an abstract syntax tree. In his words, "This clause implies that implicit conversions occur as far down as possible in the expression tree." Note that 9.3.6 was 7.3.5 before Accellera VHDL-2006 was re-written to conform to the IEEE-SA's format for a standard.
The universal_integer to another integer type conversion can be forced away from the leaves by using explicit type conversion:
constant K: integer := integer(2 ** 30);
Note that the right operand of "**", the integer literal 30, would still be implicitly type converted to type INTEGER, while the left operand and result are of type universal_integer.

Why does converting the same value in two ways produce different results?

Two kinds of conversion of the same constant to float64 return the same value, but when I try to convert these new values to int, the results are different.
...
const Big = 92233720368547758074444444

func needFloat(x float64) float64 {
	return x
}

func main() {
	fmt.Println(needFloat(Big))
	fmt.Println(float64(Big))
	fmt.Println(int(needFloat(Big)))
	fmt.Println(int(float64(Big)))
}
I'd expect the first two Println calls to print the same value:
fmt.Println(needFloat(Big)) // 9.223372036854776e+25
fmt.Println(float64(Big)) // 9.223372036854776e+25
so when I convert them to int, I expect the same output, but:
fmt.Println(int(needFloat(Big))) // -2147483648
fmt.Println(int(float64(Big))) // constant 92233720368547758080000000 overflows int
If your real question is why one attempt to convert to int produces a compile-time error message, but the other produces a very negative integer, it's because one is a compile-time conversion, and the other is a runtime conversion. I think it helps in these cases to be explicit about what you are expecting, and what can be run and what can't. Here's a Go Playground version of your code, where the last conversion is commented out. The reason for commenting it out is of course that it doesn't compile.
As Adrian noted in a comment, Big is a constant, specifically an untyped one. As Uvelichitel answered, a constant x (of any type) can be converted to a new and different type T if and only if
x is representable by a value of type T.
(The quote part is from the section Uvelichitel linked, except that mine adds the inner link for the word "representable".)
The expression float64(Big) is an explicit type conversion, with a constant as its x, so the result is a float64-typed constant with the given value. So far, that's fine: now we have 92233720368547758074444444 as a float64. The conversion chops off some of the digits: the actual internal representation is 92233720368547758080000000 (see the variant with %f directives). The low digits, ...74444444, have been rounded to ...80000000. See the link for "representable" for why the rounding occurs.
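To see the rounding explicitly, here's a minimal sketch using %f (my own variant, in the spirit of the playground link mentioned above):

package main

import "fmt"

const Big = 92233720368547758074444444

func main() {
	// %v uses scientific notation; %f spells out every digit, exposing
	// that ...74444444 has been rounded to ...80000000.
	fmt.Printf("%v\n", float64(Big)) // 9.223372036854776e+25
	fmt.Printf("%f\n", float64(Big)) // 92233720368547758080000000.000000
}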
The expression int(float64(Big)) is an outer explicit type conversion surrounding an inner explicit type conversion. We already know what the inner type conversion does: it produces the float64 constant 92233720368547758080000000.0. The outer conversion tries to represent this new value as int, but it does not fit, producing an error:
./prog.go:18:17: constant 92233720368547758080000000 overflows int
if the commented-out line is uncommented. Note again that the value has been rounded, due to the inner conversion.
On the other hand, needFloat(Big) is a function call. Calling the function assigns the untyped constant to its argument (a float64) and obtains its return value (the same float64, value 92233720368547758080000000.0). Printing that prints what you'd expect, given the default or explicit formatting directive. The returned value is not a constant.
Similarly, int(needFloat(Big)) calls needFloat, which returns the same float64 value—not a constant—as before. The int explicit type conversion tries to convert this value to int at runtime, rather than at compile time. For such conversions between numeric types, there is a list of three explicit rules at https://golang.org/ref/spec#Conversions, plus a final caveat. Here, rule 2 applies: any fractional part is discarded. But the caveat also applies:
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
In other words, there is no runtime error, but the int value you get—which in this case was -2147483648, which is the smallest allowed 32-bit integer—is up to the implementation. This particular implementation chose to use this particular negative number as its result. Another implementation might choose some other number. (Interestingly, in the playground, if I convert directly to uint I get zero. If I convert to int, then to uint, I get the 0x80000000 I expected.)
Hence, the key difference in terms of whether you get an error is whether you do the conversion at compile time, via constants, or at runtime, via runtime conversion.
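A sketch of the runtime side (hedged: the printed value is implementation-dependent, so yours may differ):

package main

import "fmt"

const Big = 92233720368547758074444444

func main() {
	f := float64(Big) // constant conversion: fine, value is rounded
	// Non-constant conversion: compiles, but the out-of-range result
	// is implementation-dependent (no panic, just some int value).
	fmt.Println(int(f))
}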
int(float64(Big)) // illegal because
A constant value x can be converted to type T if x is representable by a value of T.
int(needFloat(Big)) // is a non-constant expression because of the function call
A non-constant value x can be converted to type T in any of these cases:
- x's type and T are both integer or floating point types.
https://golang.org/ref/spec#Conversions

Is it safe to write to a std::string's buffer directly?

If I have the following code:
std::string hello = "hello world";
char* internalBuffer = &hello[0];
Is it then safe to write to internalBuffer up to hello.length()? Or is this UB/implementation defined? Obviously I can write tests and see that this works, but it doesn't answer my question.
Yes, it's safe. No, it's not explicitly allowed by the standard.
According to my copy of the standard draft from about half a year ago, it does assure that data() points at a contiguous array, and that that array is the same as what you receive from operator[]:
21.4.7.1 basic_string accessors [string.accessors]
const charT* c_str() const noexcept;
const charT* data() const noexcept;
Returns: A pointer p such that p + i == &operator[](i) for each i in [0,size()].
From this one can conclude that operator[] returns a reference to some place within that contiguous array. The standard also allows the reference returned from (non-const) operator[] to be modified.
Having a non-const reference to one member of an array, I dare say that we can modify the entire array.
The relevant section in the standard is §21.4.5:
const_reference operator[](size_type pos) const noexcept;
reference operator[](size_type pos) noexcept;
[...]
Returns: *(begin() + pos) if pos < size(), otherwise a reference to an
object of type T with value charT(); the referenced value shall not be modified.
If I understand this correctly, it means that as long as the index given to operator[] is smaller than the string's size, one is allowed to modify the value. If however, the index is equal to size and thus we obtain the \0 terminating the string, we must not write to this value.
Cppreference uses a slightly different wording here:
If pos == size(), a reference to the character with value CharT() (the null character) is returned.
For the first (non-const) version, the behavior is undefined if this character is modified.
I read this such that 'this character' here only refers to the default constructed CharT, and not to the reference returned in the other case. But I admit that the wording is a bit confusing here.
In practice it is safe; in theory, no.
The C++ standard doesn't force string to be implemented as a contiguous character array the way it does for vector. I'm not aware of any implementation of string where it is not safe, but theoretically there is no guarantee.
http://herbsutter.com/2008/04/07/cringe-not-vectors-are-guaranteed-to-be-contiguous/

Convert Enum to Binary (via Integer or something similar)

I have an Ada enum with 2 values type Polarity is (Normal, Reversed), and I would like to convert them to 0, 1 (or True, False--as Boolean seems to implicitly play nice as binary) respectively, so I can store their values as specific bits in a byte. How can I accomplish this?
An easy way is a lookup table:
Bool_Polarity : constant array (Polarity) of Boolean :=
  (Normal => False, Reversed => True);
then use it as
B : Boolean := Bool_Polarity(P);
Of course there is nothing wrong with using the 'Pos attribute, but the LUT makes the mapping readable and very obvious.
As it is constant, you'd like to hope it optimises away during the constant folding stage, and it seems to: I have used similar tricks compiling for AVR with very acceptable executable sizes (down to 0.6k to independently drive 2 stepper motors).
3.5.5 Operations of Discrete Types includes the function S'Pos(Arg : S'Base), which "returns the position number of the value of Arg, as a value of type universal integer." Hence,
Polarity'Pos(Normal) = 0
Polarity'Pos(Reversed) = 1
You can change the numbering using 13.4 Enumeration Representation Clauses.
...and, of course:
Boolean'Val(Polarity'Pos(Normal)) = False
Boolean'Val(Polarity'Pos(Reversed)) = True
I think what you are looking for is a record type with a representation clause:
with Ada.Unchecked_Conversion;

procedure Main is

   type Byte_T is mod 2**8;    -- full 8-bit range 0 .. 255
   for Byte_T'Size use 8;

   type Filler7_T is mod 2**7; -- 7-bit range 0 .. 127
   for Filler7_T'Size use 7;

   type Polarity_T is (Normal, Reversed);
   for Polarity_T use (Normal => 0, Reversed => 1);
   for Polarity_T'Size use 1;

   type Byte_As_Record_T is record
      Filler   : Filler7_T;
      Polarity : Polarity_T;
   end record;

   -- Place the polarity in bit 7 and the filler in bits 0 .. 6.
   for Byte_As_Record_T use record
      Filler   at 0 range 0 .. 6;
      Polarity at 0 range 7 .. 7;
   end record;
   for Byte_As_Record_T'Size use 8;

   function Convert is new Ada.Unchecked_Conversion
     (Source => Byte_As_Record_T,
      Target => Byte_T);

   function Convert is new Ada.Unchecked_Conversion
     (Source => Byte_T,
      Target => Byte_As_Record_T);

begin
   -- TBC
   null;
end Main;
As Byte_As_Record_T & Byte_T are the same size, you can use unchecked conversion to convert between the types safely.
The representation clause for Byte_As_Record_T allows you to specify which bits/bytes to place your Polarity_T in (I chose the 8th bit).
My definition of Byte_T might not be what you want, but as long as it is 8 bits long the principle should still be workable. From Byte_T you can also safely upcast to Integer or Natural or Positive. You can also use the same technique to go directly to/from a 32 bit Integer to/from a 32 bit record type.
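For comparison, here is a rough Go analogue of placing the polarity in bit 7 of a byte (hypothetical helper names; Go has no representation clauses, so the placement is done with explicit bit operations):

package main

import "fmt"

type Polarity uint8

const (
	Normal   Polarity = 0
	Reversed Polarity = 1
)

// WithPolarity stores p in bit 7 of b, mirroring the Ada
// record representation clause above.
func WithPolarity(b byte, p Polarity) byte {
	return (b &^ 0x80) | byte(p)<<7
}

func main() {
	b := WithPolarity(0, Reversed)
	fmt.Printf("%08b\n", b) // 10000000
}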
Two points here:
1) Enumerations are already stored as binary. Everything is. In particular, your enumeration, as defined above, will be stored as a 0 for Normal and a 1 for Reversed, unless you go out of your way to tell the compiler to use other values.
If you want to get that value out of the enumeration as an Integer rather than an enumeration value, you have two options. The 'Pos attribute will return a 0-based number for that enumeration's position in the enumeration, and Unchecked_Conversion will return the actual value the computer stores for it. (There is no difference in the value, unless an enumeration representation clause was used.)
2) Enumerations are nice, but don't reinvent Boolean. If your enumeration can only ever have two values, you don't gain anything useful by making a custom enumeration, and you lose a lot of useful properties that Boolean has. Booleans can be directly selected off of in loops and if checks. Booleans have and, or, xor, etc. defined for them. Booleans can be put into packed arrays, and then those same operators are defined bitwise across the whole array.
A particular pet peeve of mine is when people end up defining themselves a custom boolean with the logic reversed (so its true condition is 0). If you do this, the ghost of Ada Lovelace will come back from the grave and force you to listen to an exhaustive explanation of how to calculate Bernoulli sequences with a Difference Engine. Don't let this happen to you!
So if it would never make sense to have a third enumeration value, you just name objects something appropriate describing the True condition (eg: Reversed_Polarity : Boolean;), and go on your merry way.
It seems all I needed to do was pragma Pack([type name]); (in which 'type name' is the type composed of Polarity) to compress the value down to a single bit.
