I'm new to Haskell. The following code defines a pair of integers. Let's say I want to sum or subtract the two numbers in a pair. How do I do that?
module IntPair where

data IntPair = IntPair Int Int
  deriving (Show)

plusIntPair :: IntPair -> Int
plusIntPair = undefined
Let's say I create an IntPair 1 2. I should get the answer 3.
You need to use pattern matching to destructure your IntPair.
Something like this should work (for the function):
plusIntPair :: IntPair -> Int
plusIntPair (IntPair a b) = a + b
In the snippet above, the first line declares that plusIntPair has type IntPair -> Int. The second line pattern-matches on the IntPair data constructor, binds its first and second fields to a and b respectively, and adds them.
and to run it:
plusIntPair (IntPair 1 2)
Here (IntPair 1 2) constructs a value of type IntPair, and plusIntPair is then applied to that value, giving 3.
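Since your question also mentions subtracting, the analogous function follows the exact same pattern (minusIntPair is just an illustrative name, not something from your module):

minusIntPair :: IntPair -> Int
minusIntPair (IntPair a b) = a - b

-- plusIntPair  (IntPair 1 2)  evaluates to 3
-- minusIntPair (IntPair 1 2)  evaluates to -1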
I was reading some Fortran code and came across the following line, which I couldn't understand.
m%AllOuts( BAzimuth(k) ) = m%BEMT_u(indx)%psi(k)*R2D
I know that % here works like a pipe indicator to access values in a way similar to a dictionary in Python. I have a dictionary m let's say and the first key is AllOuts, but what does anything inside parentheses mean? Is it like another dictionary?
The percent sign is not denoting a dictionary. There are no native dictionaries in Fortran.
The percent sign denotes a component of a derived type. For example:
! Declare a type
type :: rectangle
  integer :: x, y
  character(len=8) :: color
end type rectangle

! Declare a variable of this type
type(rectangle) :: my_rect

! Use the type
my_rect % x = 4
my_rect % y = 3
my_rect % color = 'red'

print *, "Area: ", my_rect % x * my_rect % y
The parentheses can indicate either an array index or the arguments of a procedure call.
So, for example:
integer, dimension(10) :: a
a(8) = 16 ! write the number 16 to the 8th element of array a
Or, as a procedure:
print *, my_pow(2, 3)
...
contains
function my_pow(a, b)
  integer :: my_pow            ! declare the result type explicitly
  integer, intent(in) :: a, b
  my_pow = a ** b
end function my_pow
In order to figure out what m is, you'd need to look at the declaration of m, which would be something like
type(sometype) :: m
or
class(sometype) :: m
Then you'd need to find the declaration of that type, which would be something like
type :: sometype
! component declarations in here
end type
Now one of the components, BEMT_u, is almost certainly an array of a different type, which you'd also need to look up.
I'm having trouble understanding function type signatures for higher-order functions in OCaml.
fun f -> f 3
(int -> a) -> a
The way I read this is that the f 3 part takes an int as input and returns whatever type the function f returns, which is denoted a. So really, fun f has the type (int -> a). But then where is the last a coming from in (int -> a) -> a?
Your first example is this:
fun f -> f 3
I think that one possible confusion you might be having is that you're thinking of this as the definition of a function named f. This is not the case. This is an anonymous value representing a function, also known as a lambda. The f represents the parameter of this anonymous function.
To keep things clearer, let's give the function the name g. In other words, assume that we define g like this:
let g = fun f -> f 3
OK, so g is a function that takes one parameter f. This f parameter is obviously a function since we see it being applied to 3. (I.e., we see it being called with 3 as the argument.) What does g return? It returns whatever f returns when you call it, right?
Since g is a function its type must be something of this form:
d -> c
I.e., it accepts something of type d and returns something of type c. From the reasoning above, we know that d is a function type, and we also know that the return type of this function is also the return type of g. So if d is (in more detail) b -> a, then the full type of g is like this:
(b -> a) -> a
However we also know that the function parameter f takes an int argument because we see it being applied to 3. So the type b must be int. This gives us the following for the type of g:
(int -> a) -> a
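If it helps, you can also check this in the OCaml toplevel, which infers exactly this type for the expression:

# fun f -> f 3;;
- : (int -> 'a) -> 'a = <fun>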
I hope this helps make things clearer.
We could annotate the anonymous function thus:
fun (f : int -> 'a) -> (f 3 : 'a)
I.e., it's a function from f : int -> 'a to 'a, so this is where your last 'a is coming from.
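A quick sketch of how that 'a gets instantiated in practice (g, n and s are just illustrative names):

let g = fun f -> f 3

let n = g (fun x -> x + 1)   (* f : int -> int,    so 'a = int    and n = 4   *)
let s = g string_of_int      (* f : int -> string, so 'a = string and s = "3" *)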
The shorthand for declaration and initialization in Go is
var a, b, c = 1, 2, 3
which is equivalent to the following ways of declaring and initializing (as per the spec):
a:=1
b:=2
c:=3
var a int
var b int
var c int
a=1
b=2
c=3
But I am not getting the answer for the problem found in following code:
package main

import "fmt"

func main() {
    var a int = 0
    var b int = 1
    fmt.Println("init a ", a)
    fmt.Println("init b ", b)
    a, b = b, a+b
    fmt.Println("printing a after `a, b = b, a+b`", a)
    fmt.Println("printing b after `a, b = b, a+b`", b)
}
I expected the output to be:
printing a after 'a, b = b, a+b' 1
printing b after 'a, b = b, a+b' 2
This is because I expect b to be evaluated as a + b, i.e. 1 + 1 = 2, using the new value of a. But it's actually printing 1.
Here are the playground links for both versions of the code, where you can observe the difference:
a,b = b, a+b
a=b, b=a+b
I know I'm missing something about how these multiple assignments are evaluated, especially when the same variable appears on both sides of the expression.
But where is the proper documentation to refer to? Could anyone help with this?
See the Assignment statements section of the Go specification:
The assignment proceeds in two phases. First, the operands of index expressions and pointer indirections (including implicit pointer indirections in selectors) on the left and the expressions on the right are all evaluated in the usual order. Second, the assignments are carried out in left-to-right order.
Based on that, all the right-hand side expressions are evaluated first, so a+b is computed as 0+1 = 1 before anything is assigned. The assignments then happen, which is why you get a = 1 and b = 1.
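In other words, the multiple assignment behaves as if both right-hand sides were evaluated into temporaries before any assignment happens. A minimal sketch of that equivalence (tmp1 and tmp2 are just illustrative names):

package main

import "fmt"

func main() {
    a, b := 0, 1

    // a, b = b, a+b is evaluated as if it were:
    tmp1, tmp2 := b, a+b // tmp1 = 1, tmp2 = 0+1 = 1
    a, b = tmp1, tmp2

    fmt.Println(a, b) // prints: 1 1
}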
While working on a project I accidentally noticed that a method with just one additional (unused) argument appears to run as much as ten times faster than an otherwise identical one, with optimizations enabled.
type Stream () =
static member private write (x, o, a : byte[]) = (for i = 0 to 3 do a.[o + i] <- byte((x >>> 24 - i * 8) % 256)); 4
static member private format f x l = Array.zeroCreate l |> fun a -> (f(x, 0, a) |> ignore; a)
static member private format1 f x l o = Array.zeroCreate l |> fun a -> (f(x, 0, a) |> ignore; a)
static member Format (value : int) = Stream.format (fun (x: int, i, a) -> Stream.write(x, i, a)) value 4
static member Format1 (value : int) = Stream.format1 (fun (x: int, i, a) -> Stream.write(x, i, a)) value 4
When tested, Stream.Format1 runs much faster than Stream.Format, although the only difference between the private members Stream.format and Stream.format1 is just the o argument, which moreover is unused by the method itself.
Why does the compiler treat two almost identical methods so differently?
EDIT: thanks for the explanation and sorry for the ignorance.
The problem is that when you call Format1 with just a single argument, it only returns a function. It doesn't do the actual formatting yet. This means that if you compare the performance of:
Stream.Format 42
Stream.Format1 42
... then you're actually comparing the performance of actual formatting (that creates the array and writes something in it) in the first case and the performance of code that simply returns a function value without doing anything.
If you're not using the o parameter of format1 for anything, then you can just pass in some dummy value, to actually evaluate the function and get the result. Then you should get similar performance:
Stream.Format 42
Stream.Format1 42 ()
Format actually invokes Array.zeroCreate l |> fun a -> (f(x, 0, a) |> ignore; a).
Format1 returns a function that when passed an object invokes Array.zeroCreate l |> fun a -> (f(x, 0, a) |> ignore; a).
I.e., one does actual work, the other is merely a partial function application; the latter is obviously quicker.
If you're not familiar with partial function application, there is a section in the F# docs titled 'Partial Application of Arguments' that's worth reading over: Functions (F#)
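For reference, here is a minimal sketch of partial application outside of your Stream type (add3, addTo5 and result are illustrative names):

let add3 a b c = a + b + c

let addTo5 = add3 2 3  // no addition happens here; this just builds a function of type int -> int
let result = addTo5 10 // supplying the last argument does the work: result = 15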
I have a recursive immutable data structure in OCaml which can be simplified to something like this:
type expr =
  {
    eexpr : expr_expr;
    some_other_complex_field : a_complex_type;
  }

and expr_expr =
  | TInt of int
  | TSum of (expr * expr)
  | TMul of (expr * expr)
It's an AST, and sometimes it gets pretty complex (it's very deep).
There is a recursive function that evaluates an expression. For example, let's say:
let rec result expr =
  match expr.eexpr with
  | TInt i -> i
  | TSum (e1, e2) -> result e1 + result e2
  | TMul (e1, e2) -> result e1 * result e2
Now suppose I am mapping an expression to another expression, and I need to check the result of an expr over and over, sometimes more than once for the same expr, and sometimes for expressions that were just produced by the mapping using the pattern
{ someExpr with eexpr = TSum(someExpr, otherExpr) }
Now, the result function is very lightweight, but running it many times on a deep AST won't be very efficient. I know I could cache the value using a Hashtbl, but AFAIK the Hashtbl will only use structural equality, so it will need to traverse my long AST anyway.
I know the best option would be to include a probably immutable "result" field in the expr type. But I can't.
So is there any way in OCaml to cache a value for an immutable type, so I don't have to recalculate it every time I need it?
Thanks!
Hash-cons the values of expr_expr. By doing this, structurally equal values in your program will share exactly the same memory representation, and you can replace structural equality (=) with physical equality (==).
This paper should get you quickly started on hash-consing in OCaml.
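To give a flavour of the technique, here is a minimal sketch of hash-consing on a simplified expression type (the exp type, the table and the smart constructors are illustrative, not taken from the paper):

type exp =
  | Int of int
  | Sum of exp * exp

(* maps every value to its unique shared representative *)
let table : (exp, exp) Hashtbl.t = Hashtbl.create 251

let hashcons e =
  try Hashtbl.find table e
  with Not_found -> Hashtbl.add table e e; e

(* smart constructors: always build values through the table *)
let int n = hashcons (Int n)
let sum a b = hashcons (Sum (a, b))

let () =
  let e1 = sum (int 1) (int 2) in
  let e2 = sum (int 1) (int 2) in
  assert (e1 == e2) (* structurally equal values are now physically equal *)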
You can use the functorial interface to control the kind of equality used by the hash table. I believe the semantics of (==) are legitimate for your purposes; i.e., if A == B then f A = f B for any pure function f. So you can cache the results of f A. Then if you find a B that's physically equal to A, the cached value is correct for B.
The downside of using (==) for hashing is that the hash function will send all structurally equal objects to the same hash bucket, where they will be treated as distinct objects. If you have a lot of structurally equal objects in the table, you get no benefit from the hashing. The behavior degenerates to a linear search.
You can't define the hash function to work with physical addresses, because the physical addresses can be changed at any time by the garbage collector.
However, if you know your table will only contain relatively few large-ish values, using physical equality might work for you.
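A sketch of what that could look like with the functorial interface, assuming the expr type and the result function from your question (PhysTbl and cached_result are illustrative names):

module PhysTbl = Hashtbl.Make (struct
  type t = expr
  let equal = (==)        (* physical equality, as discussed above *)
  let hash = Hashtbl.hash (* structural hash; see the caveat about degeneration *)
end)

let cache : int PhysTbl.t = PhysTbl.create 251

let cached_result e =
  try PhysTbl.find cache e
  with Not_found ->
    let r = result e in
    PhysTbl.add cache e r;
    r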
I think you can merge the two ideas above: use hash-consing-like techniques to get a hash of the "pure expression" part of your data, and use this hash as the key in the memoization table for the eval function.
Of course this only works when your eval function really depends only on the "pure expression" part of the data, as in the example you gave. I believe that is a relatively general case, at least if you restrict yourself to caching successful evaluations (ones that won't, for example, return an error carrying some location information).
Edit: a small proof of concept:
type 'a _expr =
  | Int of int
  | Add of 'a * 'a

(* a constructor to avoid needing -rectypes *)
type pure_expr = Pure of pure_expr _expr

type loc = int

type loc_expr = {
  loc : loc;
  expr : loc_expr _expr;
  pure : pure_expr (* or any hash_consing of it for efficiency *)
}

(* this is where you could hash-cons *)
let pure x = Pure x

let int loc n =
  { loc; expr = Int n; pure = pure (Int n) }

let add loc a b =
  { loc; expr = Add (a, b); pure = pure (Add(a.pure, b.pure)) }

let eval =
  let cache = Hashtbl.create 251 in
  let rec eval term =
    (* for debug and checking memoization *)
    Printf.printf "log: %d\n" term.loc;
    try Hashtbl.find cache term.pure with Not_found ->
      let result =
        match term.expr with
        | Int n -> n
        | Add(a, b) -> eval a + eval b in
      Hashtbl.add cache term.pure result;
      result
  in eval

let test = add 3 (int 1 1) (int 2 2)
# eval test;;
log: 3
log: 2
log: 1
- : int = 3
# eval test;;
log: 3
- : int = 3