OCaml explicit type signatures - syntax

In Haskell, it is considered good practice to explicitly declare the type signature of your functions, even though it can (usually) be inferred. It seems like this isn't even possible in OCaml, e.g.
val add : int -> int -> int ;;
gives me an error. (Although I can write module types, which give only signatures.)
Am I correct in that this isn't possible to do in OCaml?
If so, why? The type system of OCaml doesn't seem all that different from Haskell's.

OCaml has two ways of specifying types. They can be given inline:
let intEq (x : int) (y : int) : bool = ...
or they can be placed in an interface file, as you have done:
val intEq : int -> int -> bool
I believe the latter is preferred, since it more cleanly separates the specification (type) from the implementation (code).
References: OCaml for Haskellers

In general, the syntax to let-bind a value with a constrained type is:
let identifier_or_pattern : constraint = e ...
Applied to a function, you can specify the signature as follows:
let add : int -> int -> int = fun x y -> ...
This is analogous to the syntax required to constrain a module to a signature:
module Mod
: sig ... end
= struct ... end

Related

How to understand the gcc compiler IN DEPTH?

Background of this question: I am trying to understand how compilers work. I have learned many new things: scanner, parser, AST, IR, optimisation, frontend, backend, LL(1), ... I have made gradual progress and it is very interesting. Now, I would like to do some practical work.
From a programmer's point of view, I know why typedef struct { int x; mytype* next; } mytype; does not compile, and I know the correct syntax typedef struct mystruct { int x; struct mystruct* next; } mytype;, but I would like to know where the problem happens EXACTLY during compilation. I am using gcc, and I would like to know how it is possible to use the gcc developer options -fdump-... to answer this question.
The first step of the GCC compiler's work is the parser; for C code it lives in
c-parser.c
It parses your C (or C++, or other) source code, which is then lowered to the GIMPLE representation and taken through the rest of the pipeline:
Parse -> Gimplify -> Tree SSA -> Optimize -> Generate RTL -> Optimize RTL -> Generate ASM
Errors are reported, for example, in the terminal, or in an IDE's error output, like this:
gcc yourcode.c
yourcode.c:2:25: error: unknown type name 'mytype'
typedef struct { int x; mytype* next; } mytype;
^~~~~~
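To actually see what GCC builds at each of these stages (a hedged pointer: the exact option names and dump-file names vary between GCC versions), you can pass the tree/RTL dump options for code that does compile, for example:
gcc -c -fdump-tree-original -fdump-tree-gimple -fdump-rtl-all yourcode.c
Each option writes a dump file next to the source, one per pass. For the failing typedef example, however, the "unknown type name" diagnostic comes from the parser itself, so compilation stops before those dumps would be produced.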
You can also look at how it works via a link.
Sorry for my English.

C++11 static cast to rvalue reference

A friend of mine wrote some code similar to this in a project:
struct A {
    int x{0};
};
struct B : public A {
    int y{1};
};
int main() {
    A a;
    B b = static_cast<B &&>(a);
}
IMO, this code is clearly flawed and I confirmed it by trying to access b.y and running the program under Valgrind (which reported a memory error).
What I do not understand is why this is even compiling (I am using g++4.9.3). I was actually expecting an error message along the lines of no matching function for call to B::B(A &&). I apologize if this is a stupid remark but how is it essentially different from writing B b = static_cast<B>(a) - which does give me a compile error? The only difference I see is copy from A to B vs move from A to B, and both of them are undefined here.
A static_cast from an lvalue of type A to B && can be valid:
B b;
A &a = b;
B b2 = static_cast<B &&>(a);
The conversion is available and is not rejected by the compiler because it could, under other circumstances, be valid. Yes, you're right: in your case it's definitely not valid.
I was actually expecting an error message along the lines of no matching function for call to B::B(A &&).
You would get an error along those lines (but not exactly that) if you used std::move(a). Because std::move doesn't make you spell out the type, it reduces the possibility of errors like this one.
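As a minimal sketch of the difference (reusing A and B from the question), the cast compiles but is undefined behaviour, while the std::move version is rejected at compile time:
#include <utility>

struct A { int x{0}; };
struct B : public A { int y{1}; };

int main() {
    A a;
    B b1 = static_cast<B &&>(a); // compiles, but undefined behaviour: a is not actually a B
    // B b2 = std::move(a);      // rejected: no conversion from A to B
}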

. or -> as Member Access operator in C++

I am quite new to C++. My earlier programming experience is in Java.
From my earlier knowledge, I thought that to access members of a class we only use '->', but of course that is not true, as we can also use '.' (dot notation). Can somebody tell me which is suitable when?
Let's try to understand it using a simple example:
Suppose you have the following structure
struct myStructure
{
    int a;
    int b;
};
Now, you can access the fields a and b using two methods:
First, using a myStructure variable:
myStructure x;
int aField = x.a;
int bField = x.b;
Second, using a pointer to myStructure:
myStructure y;
myStructure *x = &y; // the pointer must refer to a valid object
int aField = x->a;
int bField = x->b;
So, the point is: if you have an object or an instance of a class or structure, you access the individual members using the . operator, and when you have a pointer, you access the members using the -> operator.
. is for object, -> is for pointer
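As a small self-contained sketch (the variable names are made up for illustration), the arrow is just shorthand for dereferencing the pointer and then applying the dot:
#include <iostream>

struct myStructure {
    int a;
    int b;
};

int main() {
    myStructure s{1, 2};
    myStructure *p = &s;

    // p->a is exactly equivalent to (*p).a
    std::cout << s.a << " " << p->a << " " << (*p).a << "\n"; // prints "1 1 1"
}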

Interfacing Ada enumerations and C enums

Suppose the following is defined in C code:
typedef enum { A=1, B=2 } option_type;
void f(option_type option);
Suppose we also have this Ada code:
type Option_Type is (A, B);
for Option_Type'Size use Interfaces.C.int'Size;
for Option_Type use (A=>1, B=>2);
X: Option_Type := A;
Which of the following pieces of code is correct (according to the RM)?
-- First code
declare
   procedure F (Option: Option_Type)
     with Import, Convention=>C, External_Name=>"f";
begin
   F(X);
end;
or
-- Second code
declare
   procedure F (Option: Interfaces.C.unsigned)
     with Import, Convention=>C, External_Name=>"f";
   function Conv is new Ada.Unchecked_Conversion(Option_Type, Interfaces.C.unsigned);
begin
   F(Conv(X));
end;
I think both the first and the second Ada fragments are correct, but I am not sure.
Neither is 100% correct.
In C:
typedef enum { A=1, B=2 } option_type;
In Ada:
type Option_Type is (A, B);
for Option_Type'Size use Interfaces.C.int'Size;
for Option_Type use (A=>1, B=>2);
The Ada code assumes that the C type option_type has the same size as a C int. Your second snippet assumes it has the same representation as a C unsigned int.
Neither assumption is supported by the C standard.
Quoting the N1570 draft, section 6.7.2.2, paragraph 4:
Each enumerated type shall be compatible with char, a signed
integer type, or an unsigned integer type. The choice of type is
implementation-defined, but shall be capable of representing the
values of all the members of the enumeration.
So the C type option_type could be as narrow as 1 byte or as wide as the widest supported integer type (typically 8 bytes), and it could be either signed or unsigned. C restricts the values of the enumeration constants to the range of type int, but that doesn't imply that the type itself is compatible with int -- or with unsigned int.
If you have knowledge of the characteristics of the particular C compiler you're using (the phrase "implementation-defined" means that those characteristics must be documented), then you can rely on those characteristics -- but your code is going to be non-portable.
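For example (a minimal, implementation-specific check, not a portable guarantee), you can ask your particular C compiler what representation it chose:
#include <stdio.h>

typedef enum { A=1, B=2 } option_type;

int main(void) {
    /* implementation-defined: often the same as sizeof(int), but the C standard does not require it */
    printf("sizeof(option_type) = %zu, sizeof(int) = %zu\n",
           sizeof(option_type), sizeof(int));
    return 0;
}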
I'm not aware of any completely portable way to define an Ada type that's compatible with a given C enumeration type. (I've been away from Ada for a long time, so I could be missing something.)
The only portable approach I can think of is to write a C wrapper function that takes an argument of a specified integer type and calls f(). The conversion from the integer type to option_type is then handled by the C compiler, and the wrapper exposes a function with an argument of known type to Ada.
void f_wrapper(int option) {
f(option); /* the conversion from int to option_type is implicit */
}

The weird result_of<F(Ts...)> in Andrei Alexandrescu's talk about exploding tuples

Has anyone watched Andrei Alexandrescu's talk about exploding tuple in GoingNative2013 yet?
Here is the piece of code I don't quite follow:
template <class F, class... Ts>
auto explode(F&& f, const tuple<Ts...>& t)
    -> typename result_of<F(Ts...)>::type
{
    return Expander<sizeof...(Ts),
                    typename result_of<F(Ts...)>::type,
                    F,
                    const tuple<Ts...>&>::expand(f, t);
}
The F(Ts...) in result_of troubles me a lot. I mean, doesn't F stand for a function type?
I know R(Ts...) well, where R is a return type; but using F in the place where R should be is the thing driving me crazy...
Can anyone help me understand the weird F(Ts...) here?
Here is the link forward to Andrei Alexandrescu's talk:
http://channel9.msdn.com/Events/GoingNative/2013/The-Way-of-the-Exploding-Tuple
The question you want to ask is probably a duplicate of this one: Why does std::result_of take an (unrelated) function type as a type argument?
Let's dissect:
std::result_of<F(Ts...)>::type
So, somewhere in namespace std, we've got a class template result_of<>. It takes one template type parameter; i.e., it looks basically like this:
template<typename Foo>
struct result_of
{
    typedef FOOBARBAZ type;
};
Okay, so, we're instantiating this template with the parameter F(Ts...). That's unusual syntax! You presumably know that Ts is a parameter pack, and therefore the Ts... inside the parentheses will expand at compile time to a comma-separated list of types, for example int, double, bool. So we've got F(int, double, bool). Okay, that's a function type.
Just as int(char) means "function taking char and returning int", so does F(int, double, bool) mean "function taking int, double, bool and returning F".
"But wait," you say. "I thought F was already my function type!"
Yes. F is your function type. But the type expected by std::result_of is, really!, that function type wrapped up in another function type. To elaborate:
typedef int (*F)(char);
typedef F G(char);
static_assert(std::is_same< std::result_of<G>::type, int >::value);
static_assert(std::is_same< std::result_of<F(char)>::type, int >::value);
static_assert(std::is_same< std::result_of<int (*(char))(char)>::type, int >::value);
All of the above lines are exactly equivalent: F(char) is just a much more aesthetically pleasing way of writing int (*(char))(char). Of course, you can't always get away with it, because sometimes F is a function type that can't be returned from a function:
typedef int F(char);
std::result_of<F(char)>; // fails to compile
As #Simple wrote in the comments, std::result_of<F(Ts...)>::type can always be replaced with the less clever but also less confusing expression
decltype( std::declval<F>() ( std::declval<Ts>()... ) )
i.e., "the decltype of the result of calling a value of type F with arguments of types Ts...". Here, there are no wacky higher-level function types; everything just works the way you'd naturally expect it to. Personally, I would probably use the decltype approach in my own code, just because it's easier to understand; but I imagine that some people would prefer the std::result_of approach because it looks superficially simpler and is blessed by the Standard. To each his own. :)
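As a small self-contained illustration (PlusOne is just a made-up functor for this example), both spellings name the same type:
#include <type_traits>
#include <utility>

struct PlusOne {
    int operator()(int x) const { return x + 1; }
};

// result_of spells "PlusOne called with an int" as the function type PlusOne(int)
static_assert(std::is_same<std::result_of<PlusOne(int)>::type, int>::value, "");

// the decltype form asks the same question directly
static_assert(std::is_same<decltype(std::declval<PlusOne>()(std::declval<int>())), int>::value, "");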

Resources