Can anyone help me understand the various parameter-passing modes in Scheme? I know that Scheme passes parameters by value, but what about other modes?
Is there any good documentation for parameter passing in Scheme?
Scheme has only call-by-value function calls. Other conventions can be implemented within the language, but if you're a beginner then it's best not to even try them at this point. If you're looking for a way to pass values "by reference", then one option that can sort of approximate it is to use macros, but you really shouldn't go there. Instead, some Scheme implementations like PLT Scheme provide a "box" value: a kind of container that is used like this:
You create a box holding <something> with (box <something>)
You get the value that is stored in a box with (unbox <some-box>)
You change the value that is stored in a box with (set-box! <some-box> <new-value>)
Given these three operations, you can pass box objects themselves "by value", while their contents behave like a shared reference. This is very much like C, where values are passed by value, yet some of those values can be pointers to data that you can mutate. BTW, it's best to avoid even these: in Scheme, functional programming is the more common choice and it is therefore better to start with that.
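To make the mechanics concrete anyway, here is a minimal sketch of passing a box whose contents the callee mutates (boxes are built into PLT Scheme/Racket and standardized by SRFI 111; the names are illustrative):

(define (add-one! b)
  (set-box! b (+ (unbox b) 1)))   ; mutates the box's contents

(define counter (box 0))
(add-one! counter)   ; the box itself is passed by value...
(unbox counter)      ; ...but its contents changed: => 1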
(Once you are more fluent with Scheme, and if you're using a Scheme with sufficient abstractions, then you can learn how to mimic lots of alternatives too.)
To add a bit more...
The four fundamental parameter-passing conventions are call-by-value, call-by-reference, call-by-name, and call-by-need. Scheme, as a "mostly functional" language, relies on call-by-value; variables, once created, generally aren't changed. You can still mimic the other three in Scheme: call-by-reference by passing your values around in boxes (using box, unbox, and set-box!), where the boxes act as pointers to values; call-by-name by wrapping arguments in zero-argument lambdas (thunks); and call-by-need by wrapping them in promises with delay and force.
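For instance, a minimal sketch of the latter two conventions (the procedure names are illustrative):

;; call-by-name: the argument is a thunk, re-evaluated on every use
(define (by-name-twice th) (+ (th) (th)))
(by-name-twice (lambda () (display "evaluated") (newline) 1))   ; prints twice, => 2

;; call-by-need: a promise is forced at most once, then cached
(define (by-need-twice p) (+ (force p) (force p)))
(by-need-twice (delay (begin (display "evaluated") (newline) 1)))   ; prints once, => 2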
Generally, if you find that you need to use call-by-reference in a function, you should probably rethink how you're implementing the function and make it purely functional. Modifying a variable after it's been created using set! is a "side-effect" and is typically avoided in functional programming.
Related
I've long since known that define is scary and should be used with caution unless you know for sure how your implementation handles it. Out of interest, I recently opened up R7RS and read all that I could find about define and nothing gave me the impression that any of it is implementation dependent. Have I missed something or is define no longer implementation-dependent in R7RS?
You seem to be reading something into the answer you linked which isn’t there.
define has always been well defined, just as well-defined as let is. Most people choose to use define only at the top level of modules to create top-level bindings, but that’s a stylistic choice — it’s also capable of creating local bindings, like let is, if you use it inside and at the top of an ‘internal’ body, such as inside a procedure or a let or similar. Multiple defines in such a context are technically equivalent to letrec*, as another answer noted.
The most common interpretation of internal define is to treat the body's definitions as a single letrec* form.
Still, internal definitions admit more than one possible interpretation, and the language does not impose a particular one; any consistent interpretation is valid from the viewpoint of the language.
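To make the letrec* equivalence concrete, here's a minimal sketch (the function names are illustrative):

;; internal defines at the top of a body...
(define (f x)
  (define a (* x 2))
  (define (g y) (+ a y))
  (g 1))

;; ...behave like a letrec* form:
(define (f-expanded x)
  (letrec* ((a (* x 2))
            (g (lambda (y) (+ a y))))
    (g 1)))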
I am learning Ruby and I'm having a major conceptual problem concerning typing. Allow me to detail why I don't understand this paradigm.
Say I am method chaining for concise code, as you do in Ruby. I have to know precisely what the return type of each method call in the chain is; otherwise I can't know what methods are available on the next link. Do I have to check the method documentation every time? I'm running into this constantly while doing tutorial exercises. It seems I'm stuck with a process of reference, infer, run, fail, fix, repeat to get code running, rather than knowing precisely what I'm working with while coding. This flies in the face of Ruby's promise of intuitiveness.
Say I am using a third-party library; once again I need to know what types are allowed for the parameters, otherwise I get a failure. I can look at the code, but there may or may not be any comments or declarations of what type the method is expecting. I understand that you code against the methods available on an object, not its type. But then I have to be sure that whatever I pass as a parameter has all the methods the library expects, so I still have to do type checking. Do I have to hope and pray everything is documented properly in an interface so I know whether I'm expected to give a string, a hash, a class, etc.?
If I look at the source of a method I can get a list of methods being called and infer the type expected, but I have to perform analysis.
Ruby and duck typing: design by contract impossible?
The discussions in the preceding Stack Overflow question don't really answer anything other than "there are processes you have to follow", and those processes don't seem to be standard; everyone has a different opinion on which process to follow, and the language has zero enforcement. Method validation? Test-driven design? Documented API? Strict method naming conventions? What's the standard and who dictates it? What do I follow? Would these guidelines solve this concern: https://stackoverflow.com/questions/616037/ruby-coding-style-guidelines? Are there editors that help?
Conceptually, I don't get the advantage either. You still need to know what methods are required by anything you call, so you are doing the typing in your head regardless; you just aren't informing the language or anyone else explicitly, unless you decide to document it. Then you are stuck doing all type checking at runtime instead of while coding. I've done PHP and Python programming and I don't understand it there either.
What am I missing or not understanding? Please help me understand this paradigm.
This is not a Ruby specific problem, it's the same for all dynamically typed languages.
Usually there are no guidelines for how to document this either (and most of the time it's not really possible). See, for instance, map in the Ruby documentation:
map { |item| block } → new_ary
map → Enumerator
What are item, block, and new_ary here, and how are they related? There's no way to tell unless you know the implementation or can somehow infer it from the name of the function. Specifying the type is also hard, since new_ary depends on what block returns, which in turn depends on the type of item, which could be different for each element in the Array.
A lot of the time you also stumble across documentation that says an argument is of type Object, which again tells you nothing, since everything is an Object.
OCaml has a solution for this: it supports structural typing, so a function that needs an object with a method foo returning a string will be inferred to have a type like < foo : string; .. > instead of a concrete type. But OCaml is still statically typed.
Worth noting is that this can be a problem in statically typed languages too. Scala has very generic methods on collections, which leads to type signatures like ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[Array[T], B, That]): That for appending two collections.
So most of the time, you will just have to learn this by heart in dynamically typed languages, and perhaps help improve the documentation of libraries you are using.
And this is why I prefer static typing ;)
Edit: One thing that might make sense is to do what Scala also does. It doesn't actually show you that type signature for ++ by default; instead it shows ++[B](that: GenTraversableOnce[B]): Array[B], which is not as generic but probably covers most of the use cases. So Ruby's map could have a monomorphic type signature like Array<a> -> (a -> b) -> Array<b>. It's only correct for the cases where the list contains values of one type and the block only returns elements of one other type, but it's much easier to understand and gives a good overview of what the function does.
Yes, you seem to misunderstand the concept. It's not a replacement for static type checking; it's just different. For example, if you convert objects to JSON (for rendering them to the client), you don't care about the actual type of the object, as long as it has a #to_json method. In Java, you'd have to create an IJsonable interface. In Ruby, no such overhead is needed.
As for knowing what to pass where and what returns what: memorize this or consult docs each time. We all do that.
Just the other day, I saw a Rails programmer with 6+ years of experience complain on Twitter that he couldn't remember the parameter order of alias_method: does the new name go first or last?
This flies in the face of Ruby's promise of intuitiveness.
Not really. Maybe it's just a badly written library. In core Ruby everything is quite intuitive, I dare say.
Statically typed languages with their powerful IDEs have a small advantage here, because they can show you documentation right here, very quickly. This is still accessing documentation, though. Only quicker.
Consider that the design choices of strongly typed languages (C++, Java, C#, et al.) enforce strict declarations of the types passed to methods and the types returned by methods. This is because these languages were designed to validate that arguments are correct (and since these languages are compiled, this work can be done at compile time). But some questions can only be answered at run time, and C++ for example has RTTI (Run-Time Type Information) to examine and enforce type guarantees. Either way, as the developer you are guided by syntax, semantics, and the compiler to produce code that follows these type constraints.
Ruby gives you the flexibility to take dynamic argument types and return dynamic types. This freedom enables you to write more generic code (read Stepanov on the STL and generic programming), and gives you a rich set of introspection methods (is_a?, instance_of?, respond_to?, kind_of?, et al.) which you can use dynamically. Ruby enables you to write generic methods, but you can also explicitly enforce design by contract and handle contract failures however you choose.
Yes, you will need to use care when chaining methods together, but learning Ruby is not just a matter of a few new keywords. Ruby supports multiple paradigms; you can write procedural, object-oriented, generic, and functional programs. The cycle you are in right now will improve quickly as you learn more Ruby.
Perhaps your concern stems from a bias towards strongly typed languages (C++, Java, C#, et al.). Duck typing is a different approach; you have to think differently. Duck typing means that if an object looks like a duck and behaves like a duck, then it is a duck. Everything (almost) is an Object in Ruby, so everything is polymorphic.
Consider templates (C++ has them, C# and Java have generics, C has macros). You build an algorithm, and then have the compiler generate instances for your chosen types. You aren't doing design by contract with generics, but when you recognize their power, you write less code and produce more.
Some of your other concerns,
third party libraries (gems) are not as hard to use as you fear
Documented API? See Rdoc and http://www.ruby-doc.org/
Rdoc documentation is (usually) provided for libraries
coding guidelines - look at the source for a couple of simple gems for starters
naming conventions - snake case and camel case are both popular
Suggestion - approach an online tutorial with an open mind, do the tutorial (http://rubymonk.com/learning/books/ is good), and you will have more focused questions.
I'm trying to make an array-like data structure in Scheme, and since I need to refer to it (and alter it!) often, I want to give it a name. But from what I've read on various tutorial sites, it looks like the only way to name the list for later reference is with define. That would be fine, except it also looks like once I initialize a list with define, it becomes more complicated altering or adding to said list. For example, it seems like I wouldn't be able to do just (append wordlist (element)), I'd need some manner of ! bang.
Basically my questions boil down to: Is define my only hope of naming a list? And if so, am I stuck jumping through hoops changing its elements? Thanks.
Yes, define is the way to name things in Scheme. A normal list in Scheme won't allow you to change its elements, because it's immutable; that's one of the things you'll have to learn to live with when working with a functional data structure. Of course you can add elements to it or remove elements from it, but those operations produce new lists; you can't change the elements in place.
The other option is to use mutable lists instead of normal lists, but if you're just learning to use Scheme, it's better to stick to the immutable lists first and learn the Scheme way to do things in terms of immutable data.
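For example, a quick sketch of that behavior (the names are illustrative):

(define wordlist '("foo" "bar"))
(define extended (append wordlist (list "baz")))
extended   ; => ("foo" "bar" "baz"), a brand-new list
wordlist   ; => ("foo" "bar"), the original is untouched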
Yes, define is the way to do "assignment" (really, naming) in Scheme. Though, if you're writing some kind of package, you might consider wrapping the whole thing inside a function and then using let to define something you refer to.
Then, of course, you have to have some sort of abstraction to unwrap the functions inside of your "package."
See SICP 2.5 Building Systems with Generic Operations
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-18.html#%_sec_2.5
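As a rough sketch of that let-wrapping idea (a message-passing "package" in the spirit of SICP; all names here are illustrative):

(define (make-wordlist)
  (let ((words '()))
    (define (add! w) (set! words (cons w words)))
    (define (contents) words)
    ;; dispatching on a message "unwraps" the operations hidden in the closure
    (lambda (msg . args)
      (case msg
        ((add!) (apply add! args))
        ((contents) (contents))
        (else (error "unknown operation" msg))))))

(define wl (make-wordlist))
(wl 'add! "hello")
(wl 'contents)   ; => ("hello")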
(append wordlist (list element)) creates a new list (note that (element) on its own would try to call element as a procedure). What you might want is to use set! to point an existing name at the new list, or to re-define the same name as a reference to the new list.
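A minimal sketch of that idiom:

(define wordlist '("foo"))
(set! wordlist (append wordlist (list "bar")))   ; point the name at the new list
wordlist   ; => ("foo" "bar")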
It seems like people who would never dare cut and paste code have no problem specifying the type of something over and over and over. Why isn't it emphasized as a good practice that type information should be declared once and only once so as to cause as little ripple effect as possible throughout the source code if the type of something is modified? For example, using pseudocode that borrows from C# and D:
MyClass<MyGenericArg> foo = new MyClass<MyGenericArg>(ctorArg);
void fun(MyClass<MyGenericArg> arg) {
    gun(arg);
}
void gun(MyClass<MyGenericArg> arg) {
    // do stuff.
}
Vs.
var foo = new MyClass<MyGenericArg>(ctorArg);
void fun(T)(T arg) {
    gun(arg);
}
void gun(T)(T arg) {
    // do stuff.
}
It seems like the second one is a lot less brittle if you change the name of MyClass, or change the type of MyGenericArg, or otherwise decide to change the type of foo.
I don't think you're going to find a lot of disagreement with your argument that the latter example is "better" for the programmer. A lot of language design features are there because they're better for the compiler implementer!
See Scala for one reification of your idea.
Other languages (such as the ML family) take type inference much further, and create a whole style of programming where the type is enormously important, much more so than in the C-like languages. (See The Little MLer for a gentle introduction.)
It isn't considered a bad thing at all. In fact, C# maintainers are already moving a bit towards reducing the tiring boilerplate with the var keyword, where
MyContainer<MyType> cont = new MyContainer<MyType>();
is exactly equivalent to
var cont = new MyContainer<MyType>();
Although you will see many people argue against var usage, which kind of shows that many people are not familiar with strongly typed languages that have type inference; type inference gets mistaken for dynamic/weak typing.
Repetition may lead to more readable code, and sometimes may be required in the general case. I've always seen the focus of DRY as being more about duplicating logic than repeating literal text. Technically, you can eliminate 'var' and 'void' from your bottom code as well. Not to mention you already indicate scope with indentation; why repeat yourself with braces?
Repetition can also have practical benefits: parsing by a program is easier by keeping the 'void', for example.
(However, I still strongly agree with you on preferring "var name = new Type()" over "Type name = new Type()".)
It's a bad thing. This very topic was mentioned in Google's Go language Techtalk.
Albert Einstein said, "Everything should be made as simple as possible, but not one bit simpler."
Your complaint makes no sense in the case of a dynamically typed language, so you must intend this to refer to statically typed languages. In that case, your replacement example implicitly uses generics (a.k.a. template classes), which means that every time fun or gun is used, a new definition is instantiated based upon the type of the argument. That could result in dozens of extra methods, regardless of the intent of the programmer. In particular, you're throwing away the benefit of compiler-checked type safety in exchange for a runtime error.
If your goal was to simply pass through the argument without checking its type, then the correct type would be Object not T.
Type declarations are intended to make the programmer's life simpler, by catching errors at compile-time, instead of failing at runtime. If you have an overly complex type definition, then you probably don't understand your data. In your example, I would have suggested adding fun and gun to MyClass, instead of defining them separately. If fun and gun don't apply to all possible template types, then they should be defined in an explicit subclass, not as separate functions that take a templated class argument.
Generics exist as a way to wrap behavior around more specific objects. List, Queue, Stack, these are fine reasons for Generics, but at the end of the day, the only thing you should be doing with a bare Generic is creating an instance of it, and calling methods on it. If you really feel the need to do more than that with a Generic, then you probably need to embed your Generic class as an instance object in a wrapper class, one that defines the behaviors you need. You do this for the same reason that you embed primitives into a class: because by themselves, numbers and strings do not convey semantic information about their contents.
Example:
What semantic information does List<Tuple<int, int, int>> convey? Just that you're working with multiple triples of integers. On the other hand, List<Color>, where a Color has 3 integers (red, blue, green) with bounded values (0-255), conveys the intent that you're working with multiple Colors, but provides no hint as to whether the List is ordered, allows duplicates, or any other information about the Colors. Finally, a Palette can add those semantics for you: a Palette has a name, contains multiple Colors, but no duplicates, and order isn't important.
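The same wrapping idea carries over to other languages; for instance, a quick Scheme sketch using an R7RS record to give bare integers a semantic type (all names are illustrative):

;; wrap three bare integers in a record that conveys meaning
(define-record-type color
  (make-color red green blue)   ; constructor
  color?                        ; type predicate
  (red color-red)
  (green color-green)
  (blue color-blue))

(define c (make-color 255 0 0))
(color-red c)   ; => 255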
This has gotten a bit far afield from the original question, but what it means to me is that DRY (Don't Repeat Yourself) means specifying information once, but that specification should be as precise as is necessary.
Peter Norvig in PAIP says:
in modern lisps...eval is used less often (in fact, in Scheme there is
no eval at all). If you find yourself using eval, you are probably
doing the wrong thing
What are some of the ways to circumvent using eval in Scheme? Aren't there cases where eval is absolutely necessary?
There are cases where eval is necessary, but they always involve advanced programs that do things like dynamically loading code (e.g., a servlet in a web server). As for a way to "circumvent" using it -- that depends on the actual problem you're trying to solve; there's no magic solution to avoiding eval except for ... eval.
(BTW, my guess is that PAIP was written a long time ago, before eval was added to the Scheme Report.)
He's wrong. Of course there is eval in Scheme.
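Indeed, eval has been in the standard since R5RS; a minimal example (the R5RS and R7RS forms differ only in how the environment argument is obtained):

(eval '(+ 1 2) (scheme-report-environment 5))   ; R5RS => 3
(eval '(+ 1 2) (environment '(scheme base)))    ; R7RS, eval lives in (scheme eval) => 3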
You'll need eval in some very rare cases. The case that comes to mind first is building a program within a program and then executing it. This happens mainly in genetic programming, for example: you build lots of randomized programs that you then need to execute. Having eval, in conjunction with code being data, makes Lisp one of the easiest languages in which to do genetic programming.
Having these properties comes at a great cost (in terms of the speed and size of your program), because you remove any possibility of compile-time optimization on the code that will be evaluated, and you must keep the full interpreter in your resulting binary.
As a result it is considered poor design to use eval when it can be avoided.
The claim that Scheme has no eval is inaccurate at least for the most recent versions of the Scheme standard (R5RS and later). Usually, what you want is a macro instead, which will generate code at compilation time.
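For instance, a minimal syntax-rules sketch; the code is generated at expansion time, so no eval is involved:

(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))

(define x 1)
(define y 2)
(swap! x y)   ; x => 2, y => 1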
It is true that eval should be avoided. For starters, I've never seen a satisfactory definition of how it should behave. For example:
What environment should expressions be evaluated in when no environment is passed?
When you do pass in an environment, how does that work? For example, the standards specify no way to pre-bind a value in that environment object.
That said, I've worked with a Scheme application that uses eval to generate code dynamically at runtime for cases where the structure of the computation cannot be known at compilation time. The intent has been to get the Scheme system to compile the code at runtime for performance reasons—and the difficulty is that there is no standard way to tell a Scheme system "compile this code."
It should go without saying also that eval can be a huge security risk. You should never eval anything that doesn't have a huge wall of separation from user input. Basically, if you want to use eval safely, you should be doing so in the context of the code-generation phase of a compiler-like system, after you've parsed some input (using a comprehensively defined grammar!).
First, PAIP is written for Common Lisp, not Scheme, so I don't know that he'd say the same thing. CL macros do much the same thing as eval, although at compile time instead of run time, and there are other things you could do. If you'd show me an example of using eval in Common Lisp, I could try to come up with another method of doing the same thing.
I'm not a Scheme programmer. I can only speak from Norvig's perspective, as a Common Lisp programmer. I don't think he was talking about Scheme, and I don't know if he knew or knows Scheme particularly well.
Second, Norvig says "you are probably doing the wrong thing" rather than "you're doing the wrong thing". This implies that, for all he knows, there are times when eval is the correct thing to use. In a language like C, I'd say the same thing about goto: it's quite useful in some restricted circumstances, but most goto use is by people who don't know any better.
One use I've seen for eval in scripting environments is to parameterize some code with runtime values. For instance, in pseudo-C:
param = read_integer();
fn = eval("int lambda(int x) {
int param = " + to_string(param) + ";
return param*x; }");
I hope you find that really ugly. String pasting to create code at runtime? Ick. In Scheme and other lexically scoped Lisps, you can make parameterized functions without using eval.
(define make-my-fn
  (lambda (param)
    (lambda (x) (* param x))))

(let* ([param (read-integer)]
       [fn (make-my-fn param)])
  ;; etc. -- use fn here:
  (fn 10))
As was mentioned, dynamic code loading and the like still needs eval, but parameterized code and code composition can be produced with first-class functions.
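For example, composing parameterized functions without eval:

(define (compose f g)
  (lambda (x) (f (g x))))

(define add1-then-double (compose (lambda (x) (* 2 x)) (lambda (x) (+ x 1))))
(add1-then-double 5)   ; => 12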
You could write a Scheme interpreter in Scheme. It is certainly possible, but it is not practical.
Granted, this is a general answer, as I have not used Scheme, but it may help you nonetheless. :)