Why not always use Inf instead of Lazy in Idris?

I find that Lazy and Inf are very close. The documentation says:
Lazy and Inf are closely related (in fact, the underlying
implementation uses the same type). The only difference in practice is
in totality checking, where Lazy is erased (i.e. terms are checked for
termination normally, ignoring laziness annotations), and Inf uses a
productivity checker, where any use of Delay must be constructor
guarded.
As described above, the underlying implementation of Lazy and Inf is the same; the only difference is in totality checking.
Always using Inf seems much more natural to me, since it is closer to the laziness we use in Haskell. So I am wondering: what situation in practice forces us to use Lazy, which always does full termination checking?
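As a rough analogy in Haskell (which the question compares against, and not Idris code), the "constructor guarded" requirement of a productivity checker can be illustrated with ordinary lazy lists; the names here are made up for illustration:

ones :: [Int]
ones = 1 : ones      -- productive: every unfolding is guarded by (:) and yields another cell

loop :: [Int]
loop = loop          -- unguarded: never produces anything

Haskell's laziness accepts both definitions, whereas a productivity checker in the style of Inf would only accept definitions of the first shape.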

Related

Why is `math.Sin` disallowed in a Go constant?

According to Effective Go, the function math.Sin cannot be used to define a constant because that function call must happen at run-time.
What is the reasoning behind this limitation? Floating-point consistency? Quirk of the Sin implementation? Something else?
There is support for this sort of thing in other languages. In C, for example: as of version 4.3, GCC supports compile-time calculation of the sine function. (See section "General Optimizer Improvements").
However, as noted in this blog post by Bruce Dawson, this can cause unexpected issues. (See section "Compile-time versus run-time sin").
Is this a relevant concern in Go? Or is this usage restricted for a different reason?
Go doesn't support initializing a constant with the result of a function. Functions are called at runtime, not at compile time. But constants are defined at compile time.
It would be possible to make exceptions for certain functions (like math.Sin for example), but that would make the spec more complicated. The Go developers generally prefer to keep the spec simple and consistent.
Go simply lacks the concept. There is no way of marking a function as pure (i.e. its return value depends only on its arguments, and it doesn't alter any mutable state or perform I/O), there is no way for the compiler to infer purity, and there is no attempt to evaluate any expression containing a function call at compile time, because doing so for anything except a pure function of constant arguments would be a source of weird behavior and bugs, and because adding the machinery needed to make it work right would introduce quite a bit of complexity.
Yes, this is a substantial loss, which forces a tradeoff between code with bad runtime behavior, and code which is flat-out ugly. Go partisans will choose the ugly code and tell you that you are a bad human being for not finding it beautiful.
The best thing you have available to you is code generation. The integration of go generate into the toolchain and the provision of a complete Go parser in the standard library makes it relatively easy to munge code at build time, and one of the things that you can do with this ability is create more advanced constant-folding if you so choose. You still get all of the debuggability peril of code generation, but it's something.

Why are the functions making Vector an instance of Functor, Monad, Applicative, Alternative, Foldable and Traversable slow?

The changelog for version 0.8 of vector lists the following change with a warning:
Functor, Monad, Applicative, Alternative, Foldable and Traversable
instances for boxed vectors (WARNING: they tend to be slow and are
only provided for completeness).
Could someone explain why this is the case? Is it just the normal cost of typeclass specialization, or something more interesting?
Update: Looking at some particular instances, one sees for example:
instance Foldable.Foldable Vector where
  {-# INLINE foldr #-}
  foldr = foldr
and similarly for the other folds. Does this mean that folding is slow for Vectors in general? If not, what makes a non-specialized fold slow enough to warrant a warning?
I submitted the original set of these instances to Roman a year and a half ago and have maintained vector-instances since then. (I had to remove these instances from vector-instances once they migrated into vector, and now maintain it solely for the really exotic stuff). His concern was that if folks used these instances polymorphically then the RULES that make Vectors fuse away can't fire unless the polymorphic function gets inlined and monomorphized.
They exist because not every bit of code on the planet is Vector-specific and even then it is nice to sometimes use the common names.
Slow here is relative. The worst case is they perform like anybody else's folds, binds, etc. but Roman takes every single boxed value as a personal insult. :)
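To illustrate the fusion point (a sketch, not code from the library): when you stay monomorphic, vector's rewrite RULES can eliminate the intermediate vector, but behind a polymorphic Functor/Foldable interface they can only fire once GHC has inlined and specialised the call.

import qualified Data.Vector as V
import qualified Data.Foldable as F

-- Monomorphic: V.map / V.foldl' can fuse, so no intermediate vector is built.
sumSquares :: V.Vector Int -> Int
sumSquares = V.foldl' (+) 0 . V.map (^ 2)

-- Polymorphic: the generic Functor/Foldable code can't take part in vector's
-- RULES unless this function is inlined and monomorphised at the call site.
sumSquaresGeneric :: (Functor f, F.Foldable f) => f Int -> Int
sumSquaresGeneric = F.foldl' (+) 0 . fmap (^ 2)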
I've just had a quick look at the source code and the implementations don't look excessively slow. I'd argue the authors added this warning because when you're writing a program in the Vector monad, you're working from such a high-level point of view that it's easy to forget that every >>= is, in fact, a concatMap, which tends to be inherently slow.
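A sketch of what that looks like in practice (assuming the Monad instance for boxed vectors from the changelog above):

import qualified Data.Vector as V

-- Each <- below is really a concatMap over a boxed vector, allocating a new
-- vector of intermediate results at every step:
pairs :: V.Vector (Int, Int)
pairs = do
  x <- V.enumFromTo 1 3
  y <- V.enumFromTo 1 3
  return (x, y)
-- roughly: V.concatMap (\x -> V.concatMap (\y -> V.singleton (x, y))
--                                          (V.enumFromTo 1 3))
--                      (V.enumFromTo 1 3)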
Another thing: Vector is particularly fast for unboxed types. So a user might be attracted to the monad notation (for convenience), when they should actually be using an unboxed vector (for speed).
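For example (a sketch; boxedSum and unboxedSum are made-up names):

import qualified Data.Vector as V
import qualified Data.Vector.Unboxed as U

boxedSum, unboxedSum :: Int
boxedSum   = V.sum (V.enumFromTo 1 1000000)   -- every element is a heap-allocated, pointed value
unboxedSum = U.sum (U.enumFromTo 1 1000000)   -- flat array of machine Ints, usually much faster

-- Data.Vector.Unboxed.Vector can't be a Functor or Monad (its operations need
-- an Unbox constraint on the element type), so the convenient instances only
-- exist for the boxed version.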

Why doesn't Haskell have symbols (a la ruby) / atoms (a la erlang)?

The two languages where I have used symbols are Ruby and Erlang and I've always found them to be extremely useful.
Haskell does have algebraic datatypes, but I still think symbols would be mighty convenient. An immediate use that springs to mind is that since symbols are isomorphic to integers you can use them where you would use an integral or a string "primary key".
The syntactic sugar for atoms can be minor - :something or <something> is an atom. All atoms are instances of a Type called Atom which derives Show and Eq. You can then use it for more descriptive error codes, for example
type ErrorCode = Atom
type Message = String
data Error = Error ErrorCode Message
loginError = Error :redirect "Please login first"
In this case :redirect is more efficient than using a string ("redirect") and easier to understand than an integer (404).
The benefit may seem minor, but I say it is worth adding atoms as a language feature (or at least a GHC extension).
So why have symbols not been added to the language? Or am I thinking about this the wrong way?
I agree with camccann's answer that it's probably missing mainly because it would have to be baked quite deeply into the implementation, and it is of too little use to justify that level of complication. In Erlang (and Prolog and Lisp) symbols (or atoms) usually serve as special markers and serve mostly the same notion as a constructor. In Lisp, the dynamic environment includes the compiler, so it's partly also a (useful) compiler concept leaking into the runtime.
The problem is the following: symbol interning is impure (it modifies the symbol table). Because we never modify an existing object, it is still referentially transparent, but if implemented naïvely it can lead to space leaks in the runtime. In fact, as currently implemented in Erlang, you can actually crash the VM by interning too many symbols/atoms (the current limit is 2^20, I think), because they can never be garbage collected. It's also difficult to implement in a concurrent setting without a huge lock around the symbol table.
Both problems can be (and have been) solved, however. For example, see Erlang EEP 20. I use this technique in the simple-atom package. It uses unsafePerformIO under the hood, but only in (hopefully) rare cases. It could still use some help from the GC to perform an optimisation similar to indirection shortening. It also uses quite a few IORefs internally which isn't too great for performance and memory usage.
In summary, it can be done but implementing it properly is non-trivial. Compiler writers always weigh the power of a feature against its implementation and maintenance efforts, and it seems like first-class symbols lose out on this one.
I think the simplest answer is that, of the things Lisp-style symbols (which is where both Ruby and Erlang got the idea, I believe) are used for, in Haskell most are either:
Already done in some other fashion--e.g. a data type with a bunch of nullary constructors, which also behave as "convenient names for integers".
Awkward to fit in--things that exist at the level of language syntax instead of being regular data usually have more type information associated with them, but symbols would have to either be distinct types from each other (nearly useless without some sort of lightweight ad-hoc sum type) or all the same type (in which case they're barely different from just using strings).
Also, keep in mind that Haskell itself is actually a very, very small language. Very little is "baked in", and of the things that are most are just syntactic sugar for other primitives. This is a bit less true if you include a bunch of GHC extensions, but GHC with -XAndTheKitchenSinkToo is not the same language as Haskell proper.
Also, Haskell is very amenable to pseudo-syntax and metaprogramming, so there's a lot you can do even without having it built in. Particularly if you get into TH and scary type metaprogramming and whatever else.
So what it mostly comes down to is that most of the practical utility of symbols is already available from other features, and the stuff that isn't available would be more difficult to add than it's worth.
Atoms aren't provided by the language, but can be implemented reasonably as a library:
http://hackage.haskell.org/package/simple-atom
There are a few other libs on hackage, but this one looks the most recent and well-maintained.
Haskell uses type constructors* instead of symbols so that the set of symbols a function can take is closed, and can be reasoned about by the type system. You could add symbols to the language, but it would put you in the same place that using strings would - you'd have to check all possible symbols against the few with known meanings at runtime, add error handling all over the place, etc. It'd be a big workaround for all the compile-time checking.
The main difference between strings and symbols is interning: symbols are atomic and can be compared in constant time. Both are types with an essentially infinite number of distinct values, though, which goes against the grain of Haskell's habit of specifying arguments and results with finite types.
* I'm more familiar with OCaml than Haskell, so "type constructor" may not be the right term. I mean things like None or Just 3.
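A sketch of the "closed set" point (ErrorCode and describe are made-up names for illustration): with a dedicated data type the compiler knows every possible value and can check pattern matches for exhaustiveness, which neither strings nor free-floating symbols could give you.

data ErrorCode = Redirect | NotFound | Forbidden
  deriving (Show, Eq)

describe :: ErrorCode -> String
describe Redirect  = "Please login first"
describe NotFound  = "No such page"
describe Forbidden = "Access denied"
-- Leaving out a case triggers an incomplete-pattern warning at compile time;
-- with a String key you would need a runtime fallback instead.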
An immediate use that springs to mind is that since symbols are isomorphic to integers you can use them where you would use an integral or a string "primary key".
Use Enum instead.
data FileType = GZipped | BZipped | Plain
  deriving Enum

descr ft = ["compressed with gzip",
            "compressed with bzip2",
            "uncompressed"] !! fromEnum ft

How to "debug" Haskell with printfs?

Coming from the OCaml community, I'm trying to learn a bit of Haskell. The transition goes quite well, but I'm a bit confused with debugging. I used to put (lots of) "printf" in my OCaml code to inspect intermediate values, or as flags to see where exactly the computation failed.
Since printf is an IO action, do I have to lift all my Haskell code into the IO monad to be able to do this kind of debugging? Or is there a better way to do this (I really don't want to do it by hand if it can be avoided)?
I also found the trace function:
http://www.haskell.org/haskellwiki/Debugging#Printf_and_friends
which seems to be exactly what I want, but I don't understand its type: there is no IO anywhere!
Can someone explain the behaviour of the trace function?
trace is the easiest method to use for debugging. It's not in IO precisely for the reason you pointed out: there's no need to lift your code into the IO monad. It's implemented like this:
trace :: String -> a -> a
trace string expr = unsafePerformIO $ do
    putTraceMsg string
    return expr
So there is IO behind the scenes, but unsafePerformIO is used to escape out of it. That's a function which can potentially break referential transparency, as you can guess from its type, IO a -> a, and from its name.
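A typical use looks like this (a made-up example): log each recursive call, with the message printed as a side effect when the traced expression is actually evaluated.

import Debug.Trace (trace)

factorial :: Integer -> Integer
factorial n = trace ("factorial " ++ show n) $
  if n <= 1 then 1 else n * factorial (n - 1)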
trace is simply made impure. The point of the IO monad is to preserve purity (no IO unnoticed by the type system) and to define the order of execution of statements, which would otherwise be practically undefined because of lazy evaluation.
At your own risk, however, you can hack together some IO a -> a, i.e. perform impure IO. This is a hack and of course "suffers" from lazy evaluation, but it's exactly what trace does for the sake of debugging.
Nevertheless, you should probably look at other ways of debugging:
Reducing the need for debugging intermediate values
Write small, reusable, clear, generic functions whose correctness is obvious.
Combine the correct pieces to greater correct pieces.
Write tests or try out pieces interactively.
Use breakpoints etc. (compiler-based debugging)
Use generic monads. If your code is monadic anyway, write it independently of a concrete monad. Use type M a = ... instead of plain IO a. You can afterwards easily combine monads through transformers and put a debugging monad on top of it. Even when the need for monads is gone, you can just insert Identity a for pure values.
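A sketch of that last idea, using the mtl package's Control.Monad.Writer (step and debugRun are made-up names): the code is written against a MonadWriter constraint rather than a concrete monad, so a logging layer can be stacked on while debugging and removed later.

import Control.Monad.Writer

step :: MonadWriter [String] m => Int -> m Int
step x = do
  tell ["step got " ++ show x]
  return (x * 2)

debugRun :: Int -> (Int, [String])
debugRun x = runWriter (step x >>= step)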
For what it's worth, there are actually two kinds of "debugging" at issue here:
Logging intermediate values, such as the value a particular subexpression has on each call into a recursive function
Inspecting the runtime behavior of the evaluation of an expression
In a strict imperative language these usually coincide. In Haskell, they often do not:
Recording intermediate values can change the runtime behavior, such as by forcing the evaluation of terms that would otherwise be discarded.
The actual process of computation can dramatically differ from the apparent structure of an expression due to laziness and shared subexpressions.
If you just want to keep a log of intermediate values, there are many ways to do so--for instance, rather than lifting everything into IO, a simple Writer monad will suffice, this being equivalent to making functions return a 2-tuple of their actual result and an accumulator value (some sort of list, typically).
It's also not usually necessary to put everything into the monad, only the functions that need to write to the "log" value--for instance, you can factor out just the subexpressions that might need to do logging, leaving the main logic pure, then reassemble the overall computation by combining pure functions and logging computations in the usual manner with fmaps and whatnot. Keep in mind that Writer is kind of a sorry excuse for a monad: with no way to read from the log, only write to it, each computation is logically independent of its context, which makes it easier to juggle things around.
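A minimal hand-rolled version of the same idea (the names are made up for illustration): each logging step returns its result paired with the messages it produced, while the pure logic stays in ordinary functions.

square :: Int -> Int
square x = x * x                        -- pure core, no logging concerns

loggedSquare :: Int -> (Int, [String])
loggedSquare x = (square x, ["squared " ++ show x])

runBoth :: Int -> (Int, [String])
runBoth x =
  let (y, log1) = loggedSquare x
      (z, log2) = loggedSquare y
  in (z, log1 ++ log2)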
But in some cases even that's overkill--for many pure functions, just moving subexpressions to the toplevel and trying things out in the REPL works pretty well.
If you want to actually inspect run-time behavior of pure code, however--for instance, to figure out why a subexpression diverges--there is in general no way to do so from other pure code--in fact, this is essentially the definition of purity. So in that case, you have no choice but to use tools that exist "outside" the pure language: either impure functions such as unsafePerformPrintfDebugging--errr, I mean trace--or a modified runtime environment, such as the GHCi debugger.
trace also tends to over-evaluate its argument for printing, losing a lot of the benefits of laziness in the process.
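For example (a made-up sketch): because trace's message is a String, building it can force far more than the surrounding program ever would. Here head xs needs only the first element, but show xs forces the whole list (and diverges if xs is infinite).

import Debug.Trace (trace)

firstOf :: [Int] -> Int
firstOf xs = trace ("xs = " ++ show xs) (head xs)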
If you can wait until the program is finished before studying the output, then stacking a Writer monad is the classic approach to implementing a logger. I use this here to return a result set from impure HDBC code.
Well, since the whole of Haskell is built around the principle of lazy evaluation (so that the order of calculations is not determined by the order of the source code), the use of printfs makes very little sense in it.
If a REPL plus inspecting the resulting values is really not enough for your debugging, wrapping everything into IO is the only choice (but it's not THE RIGHT WAY of Haskell programming).

Why is there no peek! function for clojure transient vectors?

Clojure has transient analogs for some of its persistent data structures, vectors, maps and sets. For vectors, there are pop! and conj! functions, analogous to pop and conj for persistent vectors, but no peek!.
Is there a technical reason that makes an efficient implementation of peek! impossible? Or is it just not necessary in most use cases for transient vectors? I can always do
(defn peek! [tvec] (get tvec (dec (count tvec))))
But it seems strange that there's no built-in solution.
That's really a design question best directed to the ggroup, but FWIW, I did investigate peek / peek! a while ago and providing peek! seems to be a simple matter of creating a new clojure.lang.ITransientStack interface to parallel clojure.lang.IPersistentStack and having transient vectors implement it.
My guess is that if such an interface is not already available (and used by transients), it's probably a matter of priorities. A single-threaded fast stack implementation is already available in Clojure in the form of java.util.Stack, so we're not missing out on that many features here; syntactic convenience and smooth conversion to persistent vectors will probably come as headway is made on Clojure-in-Clojure.
(Where the return on the effort invested is high, improvements to the Java side of Clojure make sense even if the ultimate goal is eventually to drop the relevant part of the Java codebase and replace it with an implementation in Clojure. Where expected returns are lower, it might make more sense to wait for protocols to be used more pervasively etc. The currently available set of functions for handling transients suffices for Clojure's own needs and I'm not sure if there's ever been call for peek! on the ggroup -- as for #clojure, I remember one relevant conversation -- so the return is probably judged low... You could start a grassroots movements to change this, though. :-))
