Why does a literal value have no type? - Xcode

Type inference is a powerful feature of Swift. It means the compiler can infer a variable's type from the value the programmer provides, so an explicit type annotation is not needed.
For example, given var IntNum = 3, the compiler can infer that the variable IntNum is of type Int. In Xcode, if the user holds down the Option key and clicks on the variable name, here IntNum, then Xcode tells you what type it is.
However, if I do that on the literal value 3, Xcode shows nothing. My guess is that the literal value I put on the screen simply has no type at all, and only variables and constants have a type.
That is just my guess; can someone explain this to me?
Cheers
SL

That's right.
From the documentation
Type Safety and Type Inference
Type inference is particularly useful when you declare a constant or
variable with an initial value. This is often done by assigning a
literal value (or literal) to the constant or variable at the point
that you declare it. (A literal value is a value that appears directly
in your source code)
...
If you combine integer and floating-point literals in an expression, a
type of Double will be inferred from the context:
let anotherPi = 3 + 0.14159
// anotherPi is also inferred to be of type Double
The literal value of 3 has no explicit type in and of itself, and so
an appropriate output type of Double is inferred from the presence of
a floating-point literal as part of the addition.
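A minimal Swift sketch (variable names are mine) showing the same literal taking its type from the surrounding context:

let n = 3           // inferred as Int, the default type for integer literals
let x: Double = 3   // the very same literal becomes a Double in this context
let y = 3 + 0.14159 // the whole expression is inferred as Double

So the literal 3 has no fixed type of its own; Xcode can only report a type once context (a declaration or an enclosing expression) has given it one.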

Related

Function with 1 or 2 return values, and short variable declaration assigned twice to the same variable

I have two questions about the following code:
emptyinterface.(int) can return one or two values; how is the function defined to achieve that effect?
ok has been declared twice using a short variable declaration; why is that possible in this context?
package main

import (
    "fmt"
)

func main() {
    var emptyinterface interface{}
    emptyinterface = 4
    i1 := emptyinterface.(int)
    fmt.Println(i1)
    i2, ok := emptyinterface.(int) // <- how is this defined so that it can return either 1 (i1) or 2 values (i2, ok)?
    fmt.Println(i2, ok)
    i3, ok := emptyinterface.(string) // <- why can I reassign ok, which was assigned previously?
    fmt.Println(i3, ok)
}
It's not a function; it's a language feature. You can't write a function that does that, but the compiler writers can create a bit of syntax that does.
A := is invalid if there are no new variables on its left side. If there is at least one new variable being declared, it's allowed.
In each of these cases, there is at least one new variable created alongside ok (namely i2 and i3), so the redeclaration of ok is perfectly fine.
This is well documented in the language spec (emphasis mine) under Short variable declarations
Unlike regular variable declarations, a short variable declaration may redeclare variables provided they were originally declared earlier in the same block (or the parameter lists if the block is the function body) with the same type, and at least one of the non-blank variables is new. As a consequence, redeclaration can only appear in a multi-variable short declaration.
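A minimal runnable sketch of the redeclaration rule (hypothetical variables):

package main

import "fmt"

func main() {
    a := 1
    // a := 2 // would not compile: no new variables on left side of :=
    a, b := 2, 3 // OK: b is new, so a may appear again; it is simply reassigned
    fmt.Println(a, b)
}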
Also, it is unclear what you are referring to as a function here. A type assertion is a feature of the language that asserts that the value stored in the interface is of a particular type. The single-value form returns the underlying value if the assertion succeeds and panics if it does not; the two-value form never panics and instead reports success in its second result. You should always check that second value (ok) before meaningfully using the result elsewhere.
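A minimal runnable sketch of the two assertion forms:

package main

import "fmt"

func main() {
    var v interface{} = 4

    // Two-value form: never panics; ok reports whether the assertion held.
    s, ok := v.(string)
    fmt.Println(s, ok) // prints an empty string and false

    // One-value form: panics when the dynamic type does not match.
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()
    _ = v.(string) // panics: interface {} is int, not string
}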

Using the TYPE keyword in Pascal

I'm trying to understand the definition of the keyword TYPE in Pascal. I understand that typedef in C just gives a new name to a type (aliasing). But as I understand it, TYPE in Pascal does not work that way: it creates a new, unique type.
I tried to create a simple example that shows the mechanism of TYPE: declare a couple of types and a function, then pass each of the types to that function in turn. It should fail, because the function accepts only one of the types, which would prove that those types are not just aliases. Due to my lack of knowledge of Pascal syntax, I failed each time.
Could you share a simple, short program which demonstrates the power of TYPE?
EDIT:
I have created the following example:
program Check;

TYPE
    Meters = Real;
    Seconds = Real;

VAR
    m: Meters;
    s: Seconds;

Procedure PRINT_SEC(s: Seconds);
Begin
    WriteLn(s, ' sec');
end;

Begin
    PRINT_SEC(s);
    PRINT_SEC(m);
end.
Output:
0.0000000000000000E+000 sec
0.0000000000000000E+000 sec
But why does it not fail? I passed m, which has type Meters, no? Also, how can I initialize those variables?
First, a minor point: in Pascal, the keyword TYPE does not create types. The keyword TYPE must occur before type definitions, but it is the type definitions which MAY create types. Not all type definitions create types.
The Pascal Standard says the following:
A type-definition shall introduce an identifier to denote a type.
which means a type definition introduces (i.e. creates or redefines) an identifier which denotes (i.e. is an alias for) a type.
The Pascal Standard defines a type definition as:
type-definition = identifier '=' type-denoter
type-denoter = type-identifier | new-type
new-type = new-ordinal-type | new-structured-type | new-pointer-type
This means that a type definition is an identifier, followed by an equals sign, followed by a type denoter. A type denoter is either a type identifier or a new type.
So a type definition introduces an identifier that denotes (i.e. is an alias for) either another type identifier or a new type. A type is created only in the case where the type denoter is a new type.
So in your example:
TYPE
Meters = Real; Seconds = Real;
The type denoter in both type definitions is the type identifier Real, so Meters and Seconds are both aliases for Real.
Yes: in Pascal, Real is not a type, it is a built-in type identifier for the real type.
The Pascal Standard says
The required type identifier real shall denote the real-type.
So real is actually a type identifier and not a type. It is as if there were an invisible type definition:
TYPE
Real = real-type;
where real-type is the actual real type.
Variables like m and s are defined by a type. In this case both types originate from the real type; that is called a type alias. They are compatible, both as types and by assignment.
If you want a distinct type (in Free Pascal and Delphi), define:
type Seconds = type real;
That would make the print procedure accept only arguments of type Seconds. Note that variables of Seconds and Meters declared as distinct types are still assignment compatible.
To initialize variables, just assign a value:
s := 42.0;
Note: by common Pascal convention, most types are given names starting with a T, like TSeconds, just to distinguish them from variables.
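For completeness, a sketch of the main block of the program above with both variables initialized; since Meters and Seconds are mere aliases of Real here, both calls still compile:

Begin
    s := 42.0;
    m := 100.0;
    PRINT_SEC(s);
    PRINT_SEC(m); { accepted: Meters is just another name for Real }
end.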

How is type inference implemented in a language like C++11 or Go?

I saw this question here, but it doesn't answer what I had in mind in particular detail.
If languages like Go or C++11 don't use an inference algorithm like Damas-Milner, what exactly do they do? I don't think it's as simple as taking the type on the right hand side because what if you had something like:
5 + 3.4
How would the compiler decipher what type that is? Is there any algorithm that isn't as simple as
if left is integer and right is float:
    return float;
if left is float and right is integer:
    return float;
etc... for every possible pattern
And if you could explain things in simple terms that would be great. I'm not studying compiler construction or any of the theoretical topics in great detail, and I don't really speak functional languages or complex mathematical notation.
I don't think it's as simple as taking the type on the right hand side
For basic type inference of the form auto var = some_expression;, it is exactly that simple. Every well-typed expression has exactly one type and that type will be the type of var. There will be no implicit conversion from the type of the expression to another type (as there might be if you gave an explicit type for var).
what if you had something like:
5 + 3.4
The question "What is the type of 5 + 3.4?" isn't specific to type inference, C++ compilers always had to answer this question - even before type inference was introduced.
So let's take a step back and look at how a C++ compiler typechecks the statement some_type var = some_expression;:
First it determines the type of some_expression. So in code you can imagine something like Type exp_type = type_of(exp);. Now it checks whether exp_type is equal to some_type or there exists an implicit conversion from exp_type to some_type. If so, the statement is well-typed and var is introduced into the environment as having the type some_type. Otherwise it is not.
Now when we introduce type inference and write auto var = some_expression;, the equation changes as such: We still do Type exp_type = type_of(exp);, but instead of then comparing it to another type or applying any implicit conversions, we instead simply set exp_type as the type of var.
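A small C++ sketch of that difference (variable names are mine):

int main() {
    double d = 5;   // explicit type: the int expression 5 is implicitly converted to double
    auto   v = 5;   // inferred: v gets exactly the expression's type, int
    auto   w = 3.4; // inferred: w is double; no conversion is involved
    (void)d; (void)v; (void)w; // silence unused-variable warnings
    return 0;
}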
So now let's get back to 5 + 3.4. What is its type and how does the compiler determine it? In C++ its type is double. The exact rules to determine the type of an arithmetic expression are listed in the C++ standard (look for "usual arithmetic conversions"), but basically boil down to this: Of the two operand types, pick the one that can represent the greater range of values. If the type is smaller than int, convert both operands to int. Otherwise convert both operands to the type you picked.
In code you'd implement this by assigning each numeric type a conversion rank and then doing something like this:
Type type_of_binary_arithmetic_expression(Type lhs_type, Type rhs_type) {
    int lhs_rank = conversion_rank(lhs_type);
    int rhs_rank = conversion_rank(rhs_type);
    // Both operands rank below int: integer promotion makes the result int.
    if (lhs_rank < INT_RANK && rhs_rank < INT_RANK) return INT_TYPE;
    // Otherwise the operand with the higher conversion rank determines the type.
    else if (lhs_rank < rhs_rank) return rhs_type;
    else return lhs_type;
}
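You can verify the deduced type directly; a minimal C++11 sketch:

#include <type_traits>

int main() {
    auto x = 5 + 3.4; // usual arithmetic conversions give the expression type double
    static_assert(std::is_same<decltype(x), double>::value,
                  "x is deduced as double");
    (void)x;
    return 0;
}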
Presumably the rules for Go are somewhat different, but the same principles apply.

What are the allowed enumerator initializer types in C++11 scoped enums?

The following code compiles cleanly with GCC 5.2.1:
enum class some_enum
{
    first = static_cast<some_enum>(1),
};
but fails in clang 3.6.2:
$ clang++ -std=c++11 enum.cpp -c
enum.cpp:15:13: error: value of type 'some_enum' is not implicitly convertible to 'int'
first = static_cast<some_enum>(1),
^~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
According to my reading of the standard (§7.2, paragraph 5), the enumerator initializer expression must have the same type as the underlying type of the enumeration in this case, which happens to be int, not some_enum.
So I think clang is correct to reject this, and that this is a bug in GCC. Can anyone with a better understanding of the standard confirm, before I report this as a bug to GCC?
Edit: to explain my reasoning regarding the standard
Here is my understanding of 7.2 paragraph 5:
Each enumeration defines a type that is different from all other
types. Each enumeration also has an underlying type. The underlying
type can be explicitly specified using enum-base; if not explicitly
specified, the underlying type of a scoped enumeration type is int. In
these cases, the underlying type is said to be fixed.
So, since this a scoped enumeration with no explicit enum-base, its underlying type is fixed as int.
Following the closing brace of an enum-specifier, each enumerator
has the type of its enumeration.
In other words, outside the enum body, an individual enumerator (i.e., a named entry within the enum) has the same type as the enum itself.
If the underlying type is fixed, the type of each enumerator prior to
the closing brace is the underlying type...
Since we're dealing with a scoped enum which always has fixed underlying type, the type of each enum entry within the enum body has the same type as the enumeration's underlying type. In this case, within the body, the enumerators have type int.
...and the constant-expression in the enumerator-definition shall
be a converted constant expression of the underlying type (5.19);
if the initializing value of an enumerator cannot be represented
by the underlying type, the program is ill-formed.
In the example case, the initializer has type some_enum; since a C++11 scoped enum does not implicitly convert to its underlying type, the initializer is not a converted constant expression of type int. So the program is ill-formed.
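For contrast, a sketch of initializers that are converted constant expressions of the underlying type and should be accepted by both compilers:

enum class some_enum
{
    first  = 1,                     // int constant-expression: accepted
    second = static_cast<int>(2.0), // explicit conversion to int: accepted
};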

Does the actual value of an enum class enumeration remain constant/invariant?

Given code for an incomplete server like:
enum class Command : uint32_t {
    LOGIN,
    MESSAGE,
    JOIN_CHANNEL,
    PART_CHANNEL,
    INVALID
};
Can I expect that converting Command::LOGIN to an integer will always give the same value?
Across compilers?
Across compiler versions?
If I add another enumerator?
If I remove an enumerator?
Converting Command::LOGIN would look something like this:
uint32_t number = static_cast<uint32_t>(Command::LOGIN);
Some extra information on what I am doing here: this enumeration is put on the wire by converting it to an integer and sending it along to the server/client. I do not particularly care what the number is, as long as it always stays the same. If it will not stay the same, then obviously I will have to provide my own numbers in the usual way.
Now my sneaking suspicion is that it will change depending on what compiler was used to compile the code, but I would like to know for sure.
Bonus question: How does the compiler/language determine what number to use for Command::LOGIN?
Before submitting this question, I noticed the value change from, say, 3137527848 to 0 and back, so it is obviously not valid to rely on it not changing. I am still curious about how this number is determined, and how or why it changes.
From the C++11 Standard (or rather, n3485):
[dcl.enum]/2
If the first enumerator has no initializer, the value of the corresponding constant is zero. An enumerator-definition without an initializer gives the enumerator the value obtained by increasing the value of the previous enumerator by one.
Additionally, [expr.static.cast]/9
A value of a scoped enumeration type can be explicitly converted to an integral type. The value is unchanged if the original value can be represented by the specified type.
I think it's obvious that the values of the enumerators can be represented by uint32_t; if they weren't, [dcl.enum]/5 says "if the initializing value of an enumerator cannot be represented by the underlying type, the program is ill-formed."
So as long as you use the underlying type for conversion (either explicitly or via std::underlying_type<Command>::type), the values of those enumerators are fixed as long as you don't add any enumerators before them (in the same enumeration) or alter their order.
As Nicolas Louis Guillemo pointed out, be aware of possible different endianness when transferring the value.
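A minimal C++11 sketch of that conversion via std::underlying_type:

#include <cstdint>
#include <type_traits>

enum class Command : uint32_t { LOGIN, MESSAGE, JOIN_CHANNEL, PART_CHANNEL, INVALID };

int main() {
    using UT = std::underlying_type<Command>::type; // uint32_t here
    UT number = static_cast<UT>(Command::LOGIN);    // 0: first enumerator, no initializer
    return static_cast<int>(number);
}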
If you assign explicit integer values to your enum constants then you are guaranteed to always have the same value when converting to the integer type.
Just do something like the following:
enum class Command : uint32_t {
    LOGIN        = 12,
    MESSAGE      = 46,
    JOIN_CHANNEL = 5,
    PART_CHANNEL = 0,
    INVALID      = 42
};
If you don't specify any values explicitly, the values are set implicitly, starting from zero and increasing by one with each move down the list.
Quoting from draft n3485:
[dcl.enum] paragraph 2
The enumeration type declared with an enum-key of only enum is an
unscoped enumeration, and its enumerators are unscoped enumerators.
The enum-keys enum class and enum struct are semantically equivalent;
an enumeration type declared with one of these is a scoped
enumeration, and its enumerators are scoped enumerators. [...] The
identifiers in an enumerator-list are declared as constants, and can
appear wherever constants are required. An enumerator-definition with
= gives the associated enumerator the value indicated by the constant-expression. If the first enumerator has no initializer, the
value of the corresponding constant is zero. An
enumerator-definition without an initializer gives the enumerator the
value obtained by increasing the value of the previous enumerator by
one.
The drawback of relying on this is that if the list order somehow changes in the future, your code might silently break, so I would advise you to be explicit.
Command::LOGIN will always be 0 as long as it's the first enumerator in the list. Just be careful with the rest of the enumerators when they go over the wire, because their multi-byte binary representation differs depending on whether the computer uses big-endian or little-endian byte order.
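A minimal sketch of sending the value in a fixed byte order, using the POSIX htonl/ntohl helpers (adjust for your platform):

#include <arpa/inet.h> // htonl, ntohl (POSIX)
#include <cstdint>

enum class Command : uint32_t { LOGIN, MESSAGE, JOIN_CHANNEL, PART_CHANNEL, INVALID };

// Convert to network (big-endian) byte order before writing to the wire.
uint32_t to_wire(Command c) {
    return htonl(static_cast<uint32_t>(c));
}

// Convert back to host byte order after reading from the wire.
Command from_wire(uint32_t raw) {
    return static_cast<Command>(ntohl(raw));
}

int main() {
    return from_wire(to_wire(Command::LOGIN)) == Command::LOGIN ? 0 : 1;
}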
