What are the design heuristics one has to master to write good Prolog? I've heard it takes an experienced programmer about two years to become proficient in Prolog. Using recursion effectively is part of it, but that seems to be a relatively minor hurdle. What exactly is it that gives programmers so much trouble? What should I be looking for in sample code to judge its quality?
The major difficulty in writing good Prolog code lies not only in understanding but also in adequately conveying the intention or purpose of a program. In contrast to other programming languages, there are several quite different kinds of Prolog code, often within the same program. Confusing these levels leads to bugs and problems:
Pure, monotonic code.
This code lies at the heart of Prolog. In such code many algebraic properties hold, and the actual problems are described in the pure, ideal manner that Prolog is often advertised with. Yet even in such parts certain procedural properties may surface, such as non-termination. Take as an example the commutativity of conjunction: in pure, monotonic code, ( A, B ) and ( B, A ) describe the same relation. The only differences may lie in different termination behavior and the order in which answers appear. Ideally, the names of pure predicates communicate that the predicates are relations. Imperative verb names are definitely not a good choice here.
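To make the termination point concrete, here is a minimal sketch (the predicate name my_member/2 is just an illustrative choice). Both queries below describe the same relation and produce the same single answer, yet only one of them terminates:

% Pure, monotonic list membership.
my_member(X, [X|_]).
my_member(X, [_|Es]) :- my_member(X, Es).

% ?- Es = [a], my_member(X, Es).    % X = a ; false.            (terminates)
% ?- my_member(X, Es), Es = [a].    % X = a, Es = [a] ; ...     (loops after the answer)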
Side-effectful code.
The other extreme is code that can only be understood by effectively executing it, either by machine or in the mind. There are no simple invariants in the program. But even in such parts certain properties may still be observed, such as steadfastness. Effectively, such code is not much different from code in other programming languages.
Often the side-effectful part "eats up" the pure side, since programmers are used to an imperative, command-oriented do-this do-that way of thinking. To lean in the other direction, think about which properties you will lose or gain. Think about how easy it will be to test your program: the purer a program is, the easier it is to test without any extra sandbox around it. A simple toplevel query is good enough.
Some examples of how the pure side can be expanded at the expense of seemingly necessary side effects:
What are the pros and cons of using manual list iteration vs recursion through fail
Prolog Recursion skipping same results
Or simply these answers.
Edit: In your comment, you ask for "advice for learning". So here is some:
Stick to writing pure, monotonic code only. You can only judge between the two sides once you know both. I assume you have some previous experience producing side effects in some command-oriented language, but none with pure code. As a consequence, this means you will refrain from writing inherently non-monotonic code.
Play with the toplevel. Imagine the toplevel is the only way to access your programs. How would you formulate a problem such that it fits into this format? The SWI, Scryer and Trealla toplevels have been specifically designed to permit such light-weight interaction.
Use clpfd for arithmetic. Don't use (is)/2; it makes your code much too moded. (A small sketch follows after this list.)
Enjoy the algebraic properties of pure, monotonic code. Think of it: You add a goal, no matter where, and still you can predict that this goal will specialize your program (and at best leaves it as is). You can - blindly - remove a goal, and still you know (part of) its effect.
Study the notion of a failure-slice to master non-termination.
Do not use a step-by-step tracer/debugger, as it is offered in many Prologs. It only shows you the precise steps Prolog takes. It does not show you anything directly related to the meaning of the program. It reinforces a step-by-step thinking.
Watch your language. The way you talk about a program influences the way you think about it. So if you use a lot of operational language (like: this does this, then that), chances are you reinforce the command-oriented view. There is a cleaner way of talking about things, but you need to find it. This is probably the hardest part.
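As a sketch of the clpfd advice above (list_length/2 is just an illustrative name), the following relation works in all directions, whereas a version using (is)/2 would throw an instantiation error when the list is unknown:

:- use_module(library(clpfd)).

list_length([], 0).
list_length([_|Es], N) :-
    N #> 0,
    N #= N0 + 1,
    list_length(Es, N0).

% ?- list_length([a,b,c], N).    % N = 3.
% ?- list_length(Es, 2).         % Es = [_A,_B].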
As part of my Ph.D. research, I am working on the development of numerical models of atmosphere and ocean circulation. These involve numerically solving systems of PDEs on the order of ~10^6 grid points over ~10^4 time steps. Thus, a typical model simulation takes hours to a few days to complete when run with MPI on dozens of CPUs. Naturally, improving model efficiency as much as possible is important, while making sure the results stay byte-for-byte identical.
While I feel quite comfortable with my Fortran programming and am aware of quite a few tricks to make code more efficient, I feel like there is still room to improve, and tricks that I am not aware of.
Currently, I make sure I use as few divisions as possible and try not to use literal constants (I was taught to do this from very early on, e.g. use half=0.5 instead of 0.5 in actual computations), use as few transcendental functions as possible, etc.
What other performance sensitive factors are there? At the moment, I am wondering about a few:
1) Does the order of mathematical operations matter? For example if I have:
a=1E-7 ; b=2E4 ; c=3E13
d=a*b*c
would d evaluate with different efficiency depending on the order of multiplication? Nowadays this must be compiler specific, but is there a straight answer? I notice d getting a (slightly) different value depending on the order (precision limit), but will this impact efficiency or not?
2) Passing lots (e.g. dozens) of arrays as arguments to a subroutine versus accessing these arrays from a module within the subroutine?
3) Fortran 95 constructs (FORALL and WHERE) versus DO and IF? I know that these mattered back in the 90's when code vectorization was a big thing, but is there any difference now with modern compilers being able to vectorize explicit DO loops? (I am using PGI, Intel, and IBM compilers in my work)
4) Raising a number to an integer power versus multiplication? E.g.:
b=a**4
or
b=a*a*a*a
I have been taught to always use the latter where possible. Does this affect efficiency and/or precision? (probably compiler dependent as well)
Please discuss and/or add any tricks and tips that you know about for improving Fortran code efficiency. What else is out there? If you know anything specific about what each of the compilers above does related to this question, please include that as well.
Added: Note that I do not have any bottlenecks or performance issues per se. I am asking whether there are any general rules for optimizing the code at the level of individual operations.
Thanks!
Sorry, but all the tricks you mentioned are simply... ridiculous. More exactly, they have no meaning in practice. For instance:
What could be the advantage of using half (=0.5) instead of 0.5?
The same goes for computing a**4 versus a*a*a*a; (a*a)**2 would be another possibility too. My personal taste is a**4, because a good compiler will choose the best way automatically.
For **, the only point that could matter is the difference between a ** 4 and a ** 4., the latter being much more CPU-time consuming (an integer exponent can be reduced to repeated multiplication, while a real exponent generally goes through a pow-style evaluation). But even this point is meaningless without a measurement in an actual simulation.
In fact, your approach is wrong. Develop your code as well as possible. After that, measure the cost of the different parts of your code objectively. Optimizing without measuring first is simply nonsense.
If a part accounts for a high percentage of the CPU time, 50% for instance, don't forget that optimizing that part alone cannot reduce the cost of the overall code by a factor greater than two. Anyway, start the optimization work with the most expensive part (the bottleneck).
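A short way to see that bound (this is just Amdahl's law, assuming the rest of the code is left untouched): if a fraction $p$ of the runtime is sped up by a factor $s$, then

$$\text{speedup} \;=\; \frac{1}{(1-p) + p/s} \;\le\; \frac{1}{1-p},$$

so with $p = 0.5$ the whole program can never become more than twice as fast, no matter how large $s$ is.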
Also, don't forget that the main improvements generally come from better algorithms.
I second the advice that these tricks you have been taught are silly in this era. Compilers do this for you now; such micro-optimizations are unlikely to make a significant difference and may not be portable. Write clear and understandable code, and choose your algorithm carefully. One thing that can make a difference is using the indices of multi-dimensional arrays in the correct order: recasting an M x N array to N x M can help, depending on the pattern of data access in your program (see the sketch below). After this, if your program is too slow, measure where the CPU time is consumed and improve only those parts. Experience shows that guessing is frequently wrong and leads to writing more opaque code for no reason. If you make a code section in which your program spends 1% of its time twice as fast, it won't make any noticeable difference.
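To illustrate the index-order point, a minimal sketch (the array size here is arbitrary): Fortran stores arrays column-major, so the first index should vary fastest, i.e. in the innermost loop.

program loop_order
  implicit none
  real    :: a(1000, 1000)
  integer :: i, j

  do j = 1, 1000        ! columns in the outer loop
     do i = 1, 1000     ! rows in the inner loop: contiguous memory access
        a(i, j) = real(i + j)
     end do
  end do
end program loop_order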
Here are previous answers on FORALL and WHERE: How can I ensure that my Fortran FORALL construct is being parallelized? and Do Fortran 95 constructs such as WHERE, FORALL and SPREAD generally result in faster parallel code?
You've got a priori ideas about what to do, and some of them might actually help, but the biggest payoff is in a posteriori analysis.
(Added: In other words, getting a*b*c into a different order might save a couple of cycles (which I doubt), while at the same time you don't know you're not getting blindsided by something spending 1000 cycles for no good reason.)
No matter how carefully you code it, there will be opportunities for speedup that you didn't foresee. Here's how I find them. (Some people consider this method controversial).
It's best to start with optimization flags OFF when you do this, so the code isn't all scrambled.
Later you can turn them on and let the compiler do its thing.
Get it running under a debugger with enough of a workload so it runs for a reasonable length of time.
While it's running, manually interrupt it, and take a good hard look at what it's doing and why.
Do this several times, like 10, so you don't draw erroneous conclusions about what it's spending time at.
Here are examples of things you might find:
It could be spending a large fraction of time calling math library functions unnecessarily due to the way some expressions were coded, or with the same argument values as in prior calls.
It could be spending a large fraction of time doing some file I/O, or opening/closing a file, deep inside some routine that seemed harmless to call.
It could be in a general-purpose library function, calling a subordinate subroutine, for the purpose of checking argument flags to the upper function. In such a case, much of that time might be eliminated by writing a special-purpose function and calling that instead.
If you do this entire operation two or three times, you will have removed the stupid stuff that finds its way into any software when it's first written.
After that, you can turn on the optimization, parallelism, or whatever, and be confident no time is being spent on silly stuff.
So sometimes I need to write a data structure I can't find on Hackage, or what I find isn't tested or of high enough quality for me to trust, or it's just something I don't want as a dependency. I am reading Okasaki's book right now, and it's quite good at explaining how to design asymptotically fast data structures.
However, I am working specifically with GHC. Constant factors are a big deal for my applications. Memory usage is also a big deal for me. So I have questions specifically about GHC.
In particular
How to maximize sharing of nodes
How to reduce memory footprint
How to avoid space leaks due to improper strictness/laziness
How to get GHC to produce tight inner loops for important sections of code
I've looked around various places on the web, and I have a vague idea of how to work with GHC, for example, looking at core output, using UNPACK pragmas, and the like. But I'm not sure I get it.
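For what it's worth, here is a minimal sketch of the kind of thing UNPACK and strictness annotations buy you (the type and function names are hypothetical): strict, unpacked fields avoid a pointer and a thunk per field, and a strict accumulator keeps the loop from building up thunks.

{-# LANGUAGE BangPatterns #-}

-- Both fields are stored unboxed inside the constructor.
data Point = Point {-# UNPACK #-} !Double
                   {-# UNPACK #-} !Double

-- Strict accumulator: the sum is forced at every step.
sumX :: [Point] -> Double
sumX = go 0
  where
    go !acc []               = acc
    go !acc (Point x _ : ps) = go (acc + x) ps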
So I popped open my favorite data structures library, containers, and looked at the Data.Sequence module. I can't say I understand a lot of what they're doing to make Seq fast.
The first thing that catches my eye is the definition of FingerTree a. I assume that's just me being unfamiliar with finger trees though. The second thing that catches my eye is all the SPECIALIZE pragmas. I have no idea what's going on here, and I'm very curious, as these are littered all over the code.
Many functions also have an INLINE pragma associated with them. I can guess what that means, but how do I make a judgement call on when to INLINE functions?
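As a rough illustration of the two pragmas (sumSquares and scale are made-up examples, not from containers): SPECIALIZE asks GHC to compile a dedicated copy of an overloaded function at a concrete type, removing dictionary passing; INLINE is usually reserved for small functions whose call sites benefit from seeing the body.

import Data.List (foldl')

-- Overloaded function, with specialized copies requested for two types.
sumSquares :: Num a => [a] -> a
sumSquares = foldl' (\acc x -> acc + x * x) 0
{-# SPECIALIZE sumSquares :: [Int] -> Int #-}
{-# SPECIALIZE sumSquares :: [Double] -> Double #-}

-- Small wrapper whose body we want exposed at call sites.
scale :: Double -> Double -> Double
scale k x = k * x
{-# INLINE scale #-}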
Things get really interesting around line ~475, a section headed 'Applicative Construction'. They define a newtype wrapper to represent the Identity monad, they write their own copy of the strict state monad, and they have a function called applicativeTree which apparently is specialized to the Identity monad, and this increases sharing of the output of the function. I have no idea what's going on here. What sorcery is being used to increase sharing?
Anyway, I'm not sure there's much to learn from Data.Sequence. Are there other 'model programs' I can read to gain wisdom? I'd really like to know how to soup up my data structures when I really need them to go faster. In particular, I'm interested in writing data structures that make fusion easy, and in how to go about writing good fusion rules.
That's a big topic! Most of it has been explained elsewhere, so I won't try to write a book chapter right here. Instead:
Real World Haskell, ch 25, "Performance" - discusses profiling, simple specialization and unpacking, reading Core, and some optimizations.
Johan Tibell is writing a lot on this topic:
Computing the size of a data structure
Memory footprints of common data types
Faster persistent structures through hashing
Reasoning about laziness
And some things from here:
Reading GHC Core
How GHC does optimization
Profiling for performance
Tweaking GC settings
General improvements
More on unpacking
Unboxing and strictness
And some other things:
Intro to specialization of code and data
Code improvement flags
applicativeTree is quite fancy, but mainly in a way that has to do with FingerTrees in particular, which are quite a fancy data structure themselves. We had some discussion of the intricacies over at cstheory. Note that applicativeTree is written to work over any Applicative. It just so happens that when it is specialized to Id, it can share nodes in a manner that it otherwise couldn't. You can work through the specialization yourself by inlining the Id methods and seeing what happens. Note that this specialization is used in only one place -- the O(log n) replicate function. The fact that the more general function specializes neatly to the constant case is a very clever bit of code reuse, but that's really all.
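A toy sketch of the idea (this is not the real Data.Sequence code, just an illustration): a builder written over an arbitrary Applicative must run its action once per leaf, but once the "action" is known to be pure, equal subtrees can be built once and shared.

-- Both functions assume n >= 1.
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Generic version: no sharing is possible, the effect runs once per leaf.
replicateTreeA :: Applicative f => Int -> f a -> f (Tree a)
replicateTreeA 1 fx = Leaf <$> fx
replicateTreeA n fx = Node <$> replicateTreeA half fx
                           <*> replicateTreeA (n - half) fx
  where half = n `div` 2

-- "Specialized to pure": the same subtree value is reused, so only
-- O(log n) distinct nodes are allocated.
replicateTree :: Int -> a -> Tree a
replicateTree 1 x = Leaf x
replicateTree n x
  | even n    = let t = replicateTree (n `div` 2) x in Node t t
  | otherwise = Node (replicateTree (n `div` 2) x)
                     (replicateTree (n - n `div` 2) x)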
In general, Sequence teaches more about designing persistent data structures than about all the tricks for eking out performance, I think. Dons' suggestions are of course excellent. I'd also just browse through the source of the really canonical and tuned libs -- Map, IntMap, Set, and IntSet in particular. Along with those, it's worth taking a look at Milan's paper on his improvements to containers.
So there are a lot of posts around here about the best ways to teach kids to program. I'm interested in the next step: teaching kids how to debug code that doesn't do what they want, or doesn't always work 100% of the time (I believe these are separate problems, but that could be subjective).
I ask from the point of view of a game developer who already has a working game (ROBLOX) where kids can code up a ton of crazy stuff in our embedded scripting language, which happens to be Lua.
What we are seeing is that as these scripts become more complicated they are suffering from edge cases that the kids didn't consider - ultimately limiting the scope of what they can do. Part of the solution is education and part of the solution is better debugging tools. Thus I ask a two part question:
What high quality, freely available sources of information exist on the internet that we can send aspiring script developers to with any expectation that they would get something valuable out of it? Maybe there aren't any and we need to write some?
What debugging tools do you think would be most useful to kids? I want to hit the payoff vs. complexity sweet spot.
Our target demographic here is motivated kids, mostly 12-15 years old.
IMHO: Never mind tools. Talk them through it. Teach problem-solving skills. And just as importantly, teach testing.
Well, for the debugging part, my guess would be a few things:
Avoid bugs in the first place by teaching them good programming practice
Test each part with, e.g., unit testing (Lunit)
Use print() liberally to see what happens
You might be interested in debugger.lua or Remdebug
Use a decent editor with syntax highlighting, bracket matching, ...
For the general information:
Learning Lua on the Lua-users wiki
The Lua reference manual
Programming in Lua
That's the way I learned Lua, anyway :).
Of course, an early start always helps. In the early years, brains aren't yet wired to one particular language the way they are in adulthood. http://blog.quib.ly/2012/10/30/can-kids-beat-adults-at-coding/
I don't know about the "sources of information" part. It looks a bit too generic to me. I learned about edge cases through painful experience and don't know of any other means. I'm not sure it is a kind of knowledge that can be taught formally. It's more of an intuitive thing to me. Kind of like swimming: in order to learn, you have to get wet.
But regarding the payoff-vs-complexity part, I'd say that nothing beats the good old console + print duet. It might not be as fancy as other debugging means, but its complexity asymptotically approaches 0. And it's something they will be able to use in nearly any environment and any language they encounter in the future (unless something really big happens).
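A minimal sketch of that approach in Lua (the helper and the sample loop are hypothetical, not part of the ROBLOX API): a tiny debug-print wrapper that is easy to grep for and easy to switch off.

-- Flip this to false to silence all debug output at once.
local DEBUG = true

local function dbg(...)
  if DEBUG then
    print("[debug]", ...)
  end
end

-- Example use: trace each step of a loop over a table of parts.
local function countBricks(parts)
  local n = 0
  for i, part in ipairs(parts) do
    dbg("part", i, "is named", part.Name)
    if part.Name == "Brick" then
      n = n + 1
    end
  end
  dbg("total bricks:", n)
  return n
end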
If you have an iPad, there's now a nice app that lets you write programs/games/simulations and run them directly on your iPad. The language is Lua.
http://twolivesleft.com/Codea/
I would use NetBeans after stripping it down a bit. It has some very nice code hinting and comprehensible error checking and hinting.
Kids may have restricted access to tools like debuggers, since an individual may not be registered as a programmer or (game) software developer at the state or national level. Lua can be run in debug or trace mode, and there is something to be gained by reading through the program script or code with pen and paper and some test input values: note the variables and their contents, note the logic jumps separately along with any expected return values, and assess the output data values created at the relevant points. This is sometimes called dry-running, and it is normally done prior to the first full test in the development process. It can help in coping with sometimes complex logic, with stack element contents written from bottom to top or from left to right on the paper.
Every language that is being used is being used for its advantages, generally.
What are the advantages of Prolog?
What are the general situations/ category of problems where one can use Prolog more efficiently than any other language?
Compared to what, exactly? Prolog is really just the pre-eminent implementation of logic programming, so if your question is really about a comparison of programming paradigms, that's very broad indeed and you should look here.
If your question is more specifically about Prolog vs the more commonly seen OO languages, I would argue that you're really comparing apples to oranges - the "advantage" (such as it is) is just a different way of thinking about the world, and sometimes changing the way you ask a question provides a better tool for solving a problem.
Basically, if your program can be stated easily as declarative formal logic statements, Prolog (or another language in that family) will give the fastest development time. If you use a good Prolog compiler, it will also give the best performance and reliability, because the engine will have had a lot of design and development effort.
Trying to implement this kind of thing in another language tends to be a mess. The cleanest and most general solution probably involves implementing your own unification engine. Even naive implementations aren't exactly trivial; the Warren Abstract Machine has a book or two written about it, and doing better will at the very least involve a fair bit of research and reading some headache-inducing papers.
Of course in the real world, key parts of your program may benefit from Prolog, but a lot of other stuff is better handled using another language. That's why a lot of Prolog compilers can interface with, e.g., C.
One of the best times to use Prolog is when you have a problem suited to solving with backtracking. And that is when you have lots of possible solutions to a problem, and perhaps you want to order them to include/exclude depending on some context. This suggests a lot of ambiguity... as in natural language processing.
It sure would be a lot tidier to write all the potential answers as Prolog clauses. With an imperative language, all I think you can really do is write a giant (really giant) CASE statement, which is not too fun.
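A small illustrative sketch of that point (the facts are made up): each potential reading of an ambiguous word is its own clause, and backtracking selects the readings compatible with the context, instead of a hand-written case analysis.

meaning(bank, financial_institution).
meaning(bank, river_side).
meaning(bat,  flying_mammal).
meaning(bat,  sports_equipment).

compatible(finance,   financial_institution).
compatible(geography, river_side).
compatible(biology,   flying_mammal).
compatible(sports,    sports_equipment).

reading(Word, Context, Sense) :-
    meaning(Word, Sense),
    compatible(Context, Sense).

% ?- reading(bank, geography, Sense).
% Sense = river_side.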
The things that are inherent in Prolog:
pattern matching!
anything that involves a depth-first search (in Java, if you want to do a DFS, you may have to implement it with a visitor pattern or a really giant CASE statement)
unification
??
Paul Graham is a Lisp person, yet nonetheless he argues that Prolog is really good for 2% of problems. I myself would like to break this 2% down and figure out how he came up with such a number.
His argument for "better" languages is "less code, more power". Prolog is definitely "less code", and if you go for later flavours of it (typed ones), you get more power too. The only thing that bothered me when using Prolog is the fact that I don't have random access in lists (no arrays).
Prolog is a very high-level programming language. An analogy could be: Prolog is to C as C is to assembler.
Why is it not used that much, then? I think it has to do with the machines we use; they are based on Turing machines. C can be compiled to machine code directly, but Prolog is typically compiled to run on an emulation of the Warren Abstract Machine, and thus it is not that efficient.
Also, Prolog is based on first-order logic, which is not capable of expressing every solvable problem in a declarative manner; thus, at some point, you need to rely on imperative-like code.
I'd say Prolog works well for problems where a knowledge base forms an important part of the solution, especially when the knowledge structure is suited to being encoded as logical rules.
For example, writing a natural language interpreter for a particular problem domain would require a lot of knowledge in that domain. Expert systems also fall within this knowledge driven category.
It's also a nice language to explore solutions to logical puzzles ;-)
I have been programming (for fun) for over a year with SWI-Prolog. I think one of the advantages of Prolog is that it has, in effect, no side effects: Prolog is a language that kind of has no use for (local or class member) variables, and it kind of forces the programmer not to use variables. Prolog objects have no state, kind of. I think. I have been writing command-line Prolog (no GUI, except a few XPCE tests): it is like a train on a track.
Following up on my previous question on the enduring properties of a book on algorithms (see here), I would now like to ask the community: what language would you use to write the examples for such a reference book?
I will probably not use MMIX (!) to write the examples of the book, but at the same time, I think just pseudo-code would be less interesting than examples in a real language.
Still, I'd like the book to be a resource for researchers as well. What would the community's choice be? Why?
Answer: I knew this was a tough question and that there would be several different answers. Notice that the answers cover the whole range from assembly/MMIX (!!) to Python and pseudo-code. The votes and the arguments compel me to choose Uri's sensible answer, with one caveat: my pseudo-code will be as close to C as I can possibly make it (without going into platform-specific issues, of course), and I will possibly discuss better implementations in side notes (as we all know, mathematically proving that an algorithm works is far, far from the problems of implementing it).
The book is on algorithms in a particular domain, not on the mathematics of algorithms in general (much smarter people have done, and will do, much better than me on general algorithms). As such, one thing I think would add value to such a book is a repository of the algorithms, which I will definitely put online on a companion website (maybe in a couple of languages, if I find the time).
Thanks for all the answers. I sometimes feel I should put everybody who answers as co-authors. :)
A good book on algorithms should be written in pseudo-code à la CLR...
In my experience, most books that go into language-specific examples end up looking more like undergraduate textbooks than serious reference or learning books. In addition, most languages are fairly clunky when dealing with collections (esp. C++ and Java, even with generics). Between all the details, too much is lost. You're also immediately eliminating a lot of your potential audience.
The only advantage of language-specific books is that if you were writing a textbook, the publisher could attach a CD and add $50 to the MSRP.
It's easier for me to understand an algorithm from (readable) pseudocode. If I can't figure out how to implement it in my language with my own collections, I'm in trouble anyway.
You could add to every pseudocode listing a note about implementation details for specific languages (e.g., use a TreeSet in Java for best performance, etc.)
You could also maintain a separate website for the book (good idea anyway) where you'll have actual implementations in different languages. No need to kill trees with long printouts.
Use a real programming language -- never a pseudo one. Readers are very suspicious of pseudo code; readers like real programming languages. The trap with pseudo languages is that you can define code concepts that the reader cannot implement in their language of choice.
A real programming language has a number of advantages:
1) You can test your code, hopefully proving your code correct!
2) You can export that code into a published format for insertion in your book, ensuring that anybody following your code would be looking at actual executable code.
3) You would not have to defend your pseudo code.
The choice of language is obviously subjective, but I think almost any modern language could be used. I'd recommend one that has the least overhead in terms of quick understanding, and preferably one for which the reader can easily get a compiler/interpreter.
If you'd like to use C, then perhaps you should check out D, an improved C.
For example, Ruby is of this ilk if you keep your code 'simple'; Java is not (too many support libraries required); in an earlier time, Pascal would have been a candidate.
BTW: I don't use Ruby now, as I currently use Smalltalk & REBOL, but I would not use either of those languages in a book. Your book would go straight to the remainder bin!
I would avoid anything that abstracts away the core 'mechanics' of any particular algorithm.
There is a tremendous benefit in Knuth's rendering of the algorithms in an assembly level language. It forces the reader to carefully consider exactly what is going on in the silicon when we code algorithms in some higher level language. Especially for systems programmers, this kind of understanding can't be gotten any other way.
Knuth's new MMIX is ideal: consider it an assembly level pseudo-code.
My ideal textbook would have algorithms written in pseudo-code and MMIX, so that we can see the algorithm in both its pretty and its gory-complex forms. Pseudo-code should be preferred to "real languages" because it sidesteps pointless "you should have used this language, not that language" battles. At this stage, pseudo-code needs no defending -- the best extant algorithms textbooks use either pseudo-code or, in Knuth's case, a kind of assembly pseudo-code.
The choice is not going to be able to please everyone.
Robert Sedgewick has written his "Algorithms in..." books in multiple languages. I had the C version for a course and bought the C++ version when I started working with C++ at work.
You can't escape language features (even pseudo language features).
To try to please as many people as possible you could choose two languages, say one functional and one not. It could help illustrate motivations in algorithm choice.
C style is often used because many languages use a very similar style so most programmers understand it without explanation. Further, examples can be run on any machine with a C compiler - which is nearly every machine.
However, higher level concepts often require the use of more recent technologies and techniques - OO, functional programming, etc.
These are often expressed in the language that has the required features. Java, C#, Erlang, Ada, etc - most good programmers will grasp what is going on with just a little explanation.
But C is very nearly a universal foundation - you really can't go wrong if you adopt a C style for examples.
-Adam
I would not use any specific language. Use a pseudo-language that will be clear to most anyone who has done a little programming. Usually these books use something close to the C style, but that is not a rule. I know that you mention that you do not want to use pseudo code, but that will allow you to reach a broader audience.
I would use something that lets you express exactly the idea behind the algorithm.
Haskell is quite neat, but I think that with algorithms that work with state, it can get in your way, and you would be more occupied with the language than with the algorithm.
I wouldn't use C or its descendants (C++, C#, Java ...) because they will get in your way when your algorithms are more "functional" in nature. Again, you would be more occupied with the language than with the algorithm. I would feel very uncomfortable if I had to work without higher order functions.
So, basically, I would use a multi-paradigm language that you are comfortable with, and with which you feel confident that you can express the algorithms without diving into language specifics.
My personal choice would be something like Common Lisp, but perhaps Python or Scala is workable, too.
Python's a good choice all around. It's very readable, even if you haven't programmed in it before. Plus, it's a lot less verbose than some other common language choices.