require/typed in typed racket... examples? - random

I'm trying to use the science.plt module in a typed racket program, but I'm having a hard time understanding how to use the require/typed form properly. I've read the docs repeatedly, but I guess I don't quite understand what exactly I'm trying to produce with the form.
In the
[struct name ([f : t] ...)]
form, is the name a name I should expect to find in the module I want to require, or am I making it up for use within my own program?
Probably the most helpful thing for me would be an example or three of require/typed applied to untyped racket modules.
Or if I'm misunderstanding this real deeply and one cannot use untyped modules in a typed program, how should I go about structuring things? I really just need the random number and random distribution functionality from the science.plt module, and don't expect to have any other imports, at this point.

Did you have a look at the Typed Racket reference page for require/typed? There are several examples there showing how to import from untyped modules.
The name expression in the [#:struct name ([f : t] ...) struct-option ...] clause is supposed to be the name of a structure type.
That is, if you have a struct like (struct point (x y)), the name is supposed to be point.
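For example, given a hypothetical untyped module "geometry.rkt" that defines (struct point (x y)) and a distance function, a typed client could look like the sketch below (the module name and the distance binding are made up for illustration):

```racket
#lang typed/racket

(require/typed "geometry.rkt"
  ;; `point` is the struct name as defined in the untyped module --
  ;; you do not invent it, you only annotate its fields:
  [#:struct point ([x : Real] [y : Real])]
  ;; ordinary bindings get a function type:
  [distance (-> point point Real)])

;; after this, `point`, `point-x`, `point-y`, and `distance`
;; are available with the declared types:
(distance (point 0.0 0.0) (point 3.0 4.0))
```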

Related

Linux kernel variables ending in 'i' meaning

It's a random and stupid question, but I have no clue what the i stands for in structs with members like:
[starting character]i_[some specifier]
Examples include the bio struct, every time dm_target is referenced, and bvec_iter.
Whenever I read such a variable I read the complete name in my head, and it's very frustrating to me that I can't do it with these.
There is no meaning in i_; it's not special.
A loooong time ago, structure members shared the same namespace as variables and the like, so they had to be unique, and some people are still used to that. It is also subjectively nice to have structure members start with a unique prefix: IDE autocompletion works better, and it's easier to do lookups.
In the case of struct bio members, bi_ is just a shortcut for... bio_.
In the case of the struct dm_target *ti variable name, ti looks like a nice shorter form of this, the name used in C++ (self in Python) for referencing the current object.
In the case of struct bvec_iter, bi_ is just a shorthand of bvec_iter. Above, you have struct bio_vec that uses bv_ for struct members.
These are just conventions that a specific developer used to mark some abstractions in their source code. It has no specific meaning.
It also makes code easier to scan sometimes: when I see iter->bi_stuff, I can "suspect" that iter is a pointer to something bi-related (bio? bvec_iter?). In the context of many, many variables, such small clues are nice for the eye.
This is all subjective. It is far more important to follow one convention consistently than to worry about which convention it is.

Is there a difference between fun(n::Integer) and fun(n::T) where T<:Integer in performance/code generation?

In Julia, I most often see code written like fun(n::T) where T<:Integer, when the function works for all subtypes of Integer. But sometimes, I also see fun(n::Integer), which some guides claim is equivalent to the above, whereas others say it's less efficient because Julia doesn't specialize on the specific subtype unless the subtype T is explicitly referred to.
The latter form is obviously more convenient, and I'd like to be able to use it if possible, but are the two forms equivalent? If not, what are the practical differences between them?
Yes, Bogumił Kamiński is correct in his comment: f(n::T) where T<:Integer and f(n::Integer) will behave exactly the same, with the exception that the former method will have the name T already defined in its body. Of course, in the latter case you can just explicitly assign T = typeof(n), and it will be computed at compile time.
There are a few other cases where using a TypeVar like this is crucially important, though, and it's probably worth calling them out:
f(::Array{T}) where T<:Integer is indeed very different from f(::Array{Integer}). This is the common parametric invariance gotcha (docs and another SO question about it).
f(::Type) will generate just one specialization for all DataTypes. Because types are so important to Julia, the Type type itself is special and allows parameterization like Type{Integer} to allow you to specify just the Integer type. You can use f(::Type{T}) where T<:Integer to require Julia to specialize on the exact type of Type it gets as an argument, allowing Integer or any subtypes thereof.
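The invariance gotcha in the first point can be seen directly. A small sketch (f and g are made-up names):

```julia
f(::Array{Integer}) = "Array{Integer} only"
g(::Array{T}) where T<:Integer = "any integer array"

# Vector{Int} is NOT a subtype of Vector{Integer} (invariance),
# so only g accepts an ordinary Int vector:
g([1, 2, 3])          # works: T == Int
# f([1, 2, 3])        # MethodError: no method matching f(::Vector{Int64})
f(Integer[1, 2, 3])   # works: the element type is exactly Integer
```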
Both definitions are equivalent. Normally you will use fun(n::Integer) form and apply fun(n::T) where T<:Integer only if you need to use specific type T directly in your code. For example consider the following definitions from Base (all following definitions are also from Base) where it has a natural use:
zero(::Type{T}) where {T<:Number} = convert(T,0)
or
(+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
And even if you need type information in many cases it is enough to use typeof function. Again an example definition is:
oftype(x, y) = convert(typeof(x), y)
Even if you are using a parametric type you can often avoid using where clause (which is a bit verbose) like in:
median(r::AbstractRange{<:Real}) = mean(r)
because you do not care about the actual value of the parameter in the body of the function.
Now, if you are a Julia user like me, the question is how to convince yourself that this works as expected. There are the following methods:
you can check that one definition overwrites the other in the methods table (i.e. after evaluating both definitions only one method is present for this function);
you can check the code generated by both functions using @code_typed, @code_warntype, @code_llvm or @code_native etc. and find out that it is the same;
finally, you can benchmark the code for performance using BenchmarkTools.
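The first two checks above can be sketched as follows (fun is a made-up name):

```julia
fun(n::Integer) = n + 1
fun(n::T) where T<:Integer = n + 1   # redefines, rather than adds, a method

length(methods(fun))   # 1 -- the second definition overwrote the first,
                       # confirming the two signatures are the same
@code_typed fun(3)     # inspect the specialized code; it is identical
                       # whichever of the two definitions is active
```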
A nice plot explaining what Julia does with your code is here http://slides.com/valentinchuravy/julia-parallelism#/1/1 (I also recommend the whole presentation to any Julia user - it is excellent). And you can see on it that Julia after lowering AST applies type inference step to specialize function call before LLVM codegen step.
You can hint the Julia compiler to avoid specialization. This is done using the @nospecialize macro in Julia 0.7 (it is only a hint, though).

sicp 2.4.3 data directed programming and additivity, scheme

Could someone provide clarification of the example of complex arithmetic described in the chapter? There is one point I cannot understand. I would appreciate any help.
The problem is the following:
There are two packages with similar naming of procedures.
The first one is "(install-rectangular-package)". The second one is "(install-polar-package)". In addition, a procedure is defined:
(define (make-from-real-imag x y)
  ((get 'make-from-real-imag 'rectangular) x y))
I type into the Scheme interpreter:
(install-rectangular-package)
(install-polar-package)
(make-from-real-imag 3 5)
and it works. What I do not understand is how get inside make-from-real-imag finds the proper function in the proper package. When (get 'make-from-real-imag 'rectangular) is evaluated, it is replaced by (lambda (x y) (tag (make-from-real-imag x y))), but how does it know that it has to call the function inside (install-rectangular-package) and not the one in (install-polar-package)?
The chapter includes some sentences that say you are simply supposed to assume that the procedures put and get exist:
To implement this plan, assume that we have two procedures, put and get, for manipulating the operation-and-type table:
(put <op> <type> <item>)
installs the <item> in the table, indexed by the <op> and the <type>.
(get <op> <type>)
looks up the <op>, <type> entry in the table and returns the item found there. If no item is found, get returns false.
For now, we can assume that put and get are included in our language.
So, now we must ask for a clarification of your question:
Are you asking "how can one implement a procedure like install-rectangular-package so that after (install-rectangular-package) is evaluated, the get procedure can look up the desired operations?"
Or are you asking "how does get itself work?"
Or are you asking: "Even if we assume such a table for supporting put and get exists, how can the presented code work, where it installs multiple distinct functions with names like real-part (and imag-part, etc) even though the one real-part comes from the rectangular package, and another real-part comes from the polar package?"
If you are asking the first question, the answer is: install-rectangular-package simply calls put with the appropriate arguments to extend the lookup table that get will access.
If you are asking the second question, then you will need to see how put and get are implemented, which is discussed in Chapter 3. But the quick answer is: You could use a data structure that stores a record of every {<op>, <type>, <item>} triple inserted by put. The book describes one way to do this, where you just build up a list of entries.
(The main interesting thing that any implementation of put and get needs to do is imperatively modify some hidden state. The book uses the set-cdr! operation to do this. The requirement to use some form of imperative operation is probably the reason why they waited until Chapter 3 to describe the implementation of put and get.)
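As a minimal sketch of the idea (not the book's implementation, which uses set-cdr! on a headed list; the *table* name here is made up), put and get could be built on a mutable association list keyed on (op . type) pairs:

```scheme
;; A minimal put/get sketch using an association list.
;; Not SICP's exact code -- just one possible implementation.
(define *table* '())

(define (put op type item)
  (set! *table* (cons (list (cons op type) item) *table*)))

(define (get op type)
  (let ((entry (assoc (cons op type) *table*)))
    (if entry (cadr entry) #f)))

;; Two packages can now install procedures under the same op name
;; but different type tags without clobbering each other:
(put 'real-part 'rectangular (lambda (z) (car z)))
(put 'real-part 'polar       (lambda (z) (* (car z) (cos (cdr z)))))
((get 'real-part 'rectangular) (cons 3 4))  ; => 3
```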
If you are asking the third question, the answer is "by the magic of lexical scoping"
The definition of install-rectangular-package has a collection of internal definitions, and install-polar-package has another collection of internal definitions. Even though there is overlap between the names chosen in the two definitions, installing the polar package does not overwrite the functions previously defined by the rectangular package.
(It is important to distinguish here between the name used in a function definition versus the function value/object (which you might think of as the (lambda (x y) ...)) itself. Even though install-rectangular-package and install-polar-package reuse the same names, they are creating distinct function values, and those distinct values are then being put into the put/get table, without any significance attached to the name used to originally define them.)
Even though the picture of the put/get table in the book is drawn with names in it, the entries in the table are not names. They are function objects. Other local definitions of real-part or imag-part will not affect the entries that were installed by install-rectangular-package or install-polar-package; the only way to affect those entries is to call put itself with the matching <op> and <type> arguments to overwrite the previous cell in the table.
For more discussion of lexical scope and ways to think about local function definitions, I recommend this part of HtDP ("HtDP" stands for "How to Design Programs", which, like SICP, is an intro to programming, but written in a fashion that spells things out a bit more than SICP does; see also this paper comparing SICP and HtDP.)

Do you use nouns for classnames that represent callable objects?

There is a more generic question asked here. Consider this as an extension to that.
I understand that classes represent types of objects and we should use nouns as their names. But many languages support function objects that act more like functions than objects. Should I name those classes as nouns too, or are verbs OK in that case? doSomething(), semantically, makes more sense than Something().
Update / Conclusion
The two top voted answers I got on this share a mixed opinion:
Attila
In the case of functors, however, they represent "action", so naming them with a verb (or some sort of noun-verb combination) is more appropriate -- just like you would name a function based on what it is doing.
Rook
The instance of a functor on the other hand is effectively a function, and would perhaps be better named accordingly. Following Jim's suggestion above, SomethingDoer doSomething; doSomething();
Both of them, after going through some code, seem to reflect common practice. In the GNU implementation of the STL I found classes like negate, plus, minus (bits/stl_function.h) etc., and variate_generator, mersenne_twister (tr1/random.h). Similarly, in Boost I found classes like base_visitor, predecessor_recorder (graph/visitors.hpp) as well as inverse, inplace_erase (icl/functors.hpp).
Objects (and thus classes) usually represent "things", therefore you want to name them with nouns. In the case of functors, however, they represent "action", so naming them with a verb (or some sort of noun-verb combination) is more appropriate -- just like you would name a function based on what it is doing.
In general, you would want to name things (objects, functions, etc.) after their purpose, that is, what they represent in the program/the world.
You can think of functors as functions (thus "action") with state. Since the clean way of achieving this (having a state associated with your "action") is to create an object that stores the state, you get an object, which is really an "action" (a fancy function).
Note that the above applies to languages where you can make a pure functor, that is, an object where the invocation is the same as for a regular function (e.g. C++). In languages where this is not supported (that is, you have to have a method other than operator() called, like command.do()), you would want to name the command-like class/object a noun and name the method called a verb.
The type of the functor is a class, which you may as well name in your usual style (for instance, a noun) because it doesn't really do anything on its own. The instance of a functor on the other hand is effectively a function, and would perhaps be better named accordingly. Following Jim's suggestion above, SomethingDoer doSomething; doSomething();
A quick peek at my own code shows I've managed to avoid this issue entirely by using words which are both nouns and verbs... Filter and Show for example. Might be tricky to apply that naming scheme in all circumstances, however!
I suggest you give them a suffix like Function, Callback, Action, Command, etc. so that you will have classes called SortFunction, SearchAction, ExecuteCommand instead of Sort, Search, Execute which sound more like names of actual functions.
I think there are two ways to see this:
1) The class which could be named sensibly so that it can be invoked as a functor directly upon construction:
Doer()(); // constructed and invoked in one shot, compiler could optimize
2) The instance in cases where we want the functor to be invoked multiple times on a stateless functor object, or perhaps for stylistic reasons when #1's syntax is not preferable.
Doer _do;
_do(); // operator () invocation after construction
I like @Rook's suggestion above of naming the class with words that have the same verb and noun forms, such as Search or Filter.

Parameter Passing in Scheme

Can anyone help me understand the various parameter-passing modes in Scheme? I know Scheme implements parameter passing by value. But how about other modes?
Is there any good documentation for parameter passing in Scheme?
Scheme has only call-by-value function calls. There are other alternatives that can be implemented within the language, but if you're a beginner, it's best not to even try them at this point. If you're looking for a way to pass values "by reference", then one option that can sort of work is to use macros, but you really shouldn't go there. Instead, some Scheme implementations like PLT Scheme provide a "box value": this is a kind of container that is used like this:
You create a box holding <something> with (box <something>)
You get the value that is stored in a box with (unbox <some-box>)
You change the value that is stored in a box with (set-box! <some-box> <new-value>)
Given these three operations, you can pass such box objects "by value", but their contents are actually references. This is very much like C, where all values (most, actually) are passed by value, yet some of those values can be pointers whose targets you can mutate. BTW, it's best to avoid even these: in Scheme, functional programming is the more common choice, and it is therefore better to start with that.
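The three operations above can be sketched as follows (increment! and counter are made-up names):

```scheme
;; A box passed to a procedure is copied by value, but both
;; copies refer to the same mutable cell:
(define (increment! b)
  (set-box! b (+ (unbox b) 1)))

(define counter (box 0))
(increment! counter)   ; mutates the shared cell, not a local copy
(increment! counter)
(unbox counter)        ; => 2
```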
(Once you are more fluent with Scheme, and if you're using a Scheme with sufficient abstractions, then you can learn how to mimic lots of alternatives too.)
To add a bit more...
The four fundamental parameter-passing conventions are call-by-value, call-by-reference, call-by-name, and call-by-need. Scheme, as a "mostly functional" language, relies on call-by-value; variables, once created, generally aren't changed. The other three conventions are pretty similar and you can still do them in Scheme by passing around your values in boxes (using box and unbox), and the boxes act as pointers to values.
Generally, if you find that you need to use call-by-reference in a function, you should probably rethink how you're implementing the function and make it purely functional. Modifying a variable after it's been created using set! is a "side-effect" and is typically avoided in functional programming.
