Switch from dyadic to monadic interpretation in a J sentence - refactoring

I am trying to understand composition in J, after struggling to mix and match different phrases. I would like help switching between monadic and dyadic phrases in the same sentence.
I just made a simple dice roller in J, which will serve as an example:
d=.1+[:?[#]
4 d 6
2 3 1 1
8 d 12
10 2 11 11 5 11 1 10
This is a chain: "d is one plus the (capped) roll of x occurrences of y"
But what if I wanted to use >: to increment (and skip the cap [: ), such that it "switched" to monadic interpretation after the first fork?
It would read: "d is the incremented roll of x occurrences of y".
Something like this doesn't work, even though it looks to me to have about the right structure:
d=.>:&?[#]
d
>:&? ([ # ])
(If this approach is against the grain for J and I should stick to capped forks, that is also useful information.)

Let's look at a dyadic fork a (c d f h g) b, where c, d, f, g and h are verbs and a and b are arguments. It is evaluated as (a c b) d (a f b) h (a g b): the arguments are applied dyadically to the verbs in the odd positions (the tines c, f and g), and those results are fed, right to left, into the even tines d and h. A fork can also take either the form (v v v) or (n v v), where v stands for a verb and n for a noun; in the (n v v) case you just get the value of n as the left argument to the middle tine.
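To make the evaluation order concrete, here is a small deterministic sketch (an illustration of mine, not from the question) of a three-tine fork, a five-tine fork, and the (n v v) form:
2 (+ * -) 5 NB. (2 + 5) * (2 - 5)
_21
2 (] + [ * -) 5 NB. (2 ] 5) + (2 [ 5) * (2 - 5)
_1
3 (1 + [ * ]) 4 NB. 1 + (3 * 4)
13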
If you look at your original definition of d=.1+[:?[#] you might notice it simplifies to a dyadic fork with five tines (1 + [: ? #) where the [ # ] can be replaced by # as it is a dyadic fork (see definition above).
The [: (Cap) verb returns no value to the left argument of ? which means that ? acts monadically on the result of a # b and this becomes the right argument to + which has a left argument of 1.
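A deterministic verb in place of ? makes the effect of the Cap easy to see (again my own illustration, not part of the original answer):
2 ([: +/ #) 5 NB. +/ applied monadically to 2 # 5, i.e. +/ 5 5
10
2 (1 + [: +/ #) 5 NB. the same shape as d, with ? replaced by +/
11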
So, on to the question of how to get rid of the [: and use >: instead of 1 + ...
You can also write ([: f g) as f@:g to get rid of the Cap, which means that ([: ? #) becomes ?@:# and now since you want to feed this result into >: you can do that by either:
d1=.>:@:?@:#
d2=. [: >: ?@:#
4 d1 6
6 6 1 5
4 d2 6
2 3 4 5
8 d1 12
7 6 6 4 6 9 8 7
8 d2 12
2 10 10 9 8 12 4 3
Hope this helps, it is a good fundamental question about how forks are evaluated. Whether you use the ([: f g) or the f@:g form of composition is a matter of preference.
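If you want to convince yourself that the two spellings agree, you can substitute a deterministic verb for ? (a quick check of my own):
4 ([: +/ #) 6 NB. capped fork
24
4 (+/@:#) 6 NB. the same thing written with @:
24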

To summarize the main simple patterns of verb mixing in J (a quick numeric check follows the list):
(f @: g) y = f (g y) NB. (1) monadic "at"
x (f @: g) y = f (x g y) NB. (2) dyadic "at"
x (f &: g) y = (g x) f (g y) NB. (3) "appose"
(f g h) y = (f y) g (h y) NB. (4) monadic fork
x (f g h) y = (x f y) g (x h y) NB. (5) dyadic fork
(f g) y = y f (g y) NB. (6) monadic hook
x (f g) y = x f (g y) NB. (7) dyadic hook
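Each of these can be checked with small deterministic verbs; the session below is my own illustration, not part of the original answer:
(+/ @: *:) 1 2 3 NB. (1) +/ 1 4 9
14
2 (+/ @: *) 1 2 3 NB. (2) +/ 2 4 6
12
2 (+ &: *:) 3 NB. (3) 4 + 9
13
(+/ % #) 1 2 3 NB. (4) the mean
2
3 (+ , -) 5 NB. (5) (3+5),(3-5)
8 _2
(+ *:) 3 NB. (6) 3 + 9
12
2 (+ *:) 3 NB. (7) 2 + 9
11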
A nice review of those can be found on the J wiki, under compositions and trains.
Usually there are many possible forms for a verb. To complicate matters more, you can mix many primitives in different ways to achieve the same result.
Experience, style, performance and other such factors influence the way you'll combine the above to form your verb.
In this particular case, I would use @bob's d1 because I find it clearer to read: increase the roll of x copies of y:
>: @ ? @ $
For the same reason, I am replacing # with $. When I see # in this context, I automatically read "number of elements of", but maybe that's just me.
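For a scalar right argument the two spellings agree, since x $ y and x # y both produce x copies of y; a quick check (my own example):
4 $ 6
6 6 6 6
4 # 6
6 6 6 6
So for these arguments the dice verb behaves the same whether it is written with $ or with #.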

Related

Query on lambda calculus addition

How do I add two numbers in lambda calculus using below given arithmetic representation of addition?
m + n = λx.λy.(m x) (n x) y
2 = λa.λb.a (a b)
3 = λa.λb.a (a (a b))
You know what 2 is, what 3 is, and what addition is. Take the values and just stick 'em into the operation!
2 + 3 = (λx.λy.(m x) (n x) y) (λa.λb.a (a b)) (λa.λb.a (a (a b)))
|-------- + --------| |----- 2 -----| |------- 3 -------|
This is an application with a lambda on the left. Such a term is called a redex, and it can be β-reduced. The actual reduction is left as an exercise for the reader.
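For what it's worth, here is one way the reduction goes, reading the addition as the closed term λm.λn.λx.λy.(m x) ((n x) y) and substituting the given numerals for m and n (this spelling-out is mine, not part of the original answer):
(λm.λn.λx.λy.(m x) ((n x) y)) (λa.λb.a (a b)) (λa.λb.a (a (a b)))
→ λx.λy.((λa.λb.a (a b)) x) (((λa.λb.a (a (a b))) x) y)
→ λx.λy.(λb.x (x b)) ((λb.x (x (x b))) y)
→ λx.λy.(λb.x (x b)) (x (x (x y)))
→ λx.λy.x (x (x (x (x y))))
which is the Church numeral for 5.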

Entropy formula in J language

I'm playing a bit with the J programming language, and I've tried to create a verb for computing entropy from a list of probabilities (outcomes of an event; the formula would be like this in python/pseudocode: -sum([p*log(p,2) for p in ps])).
The version I've tried using composition (@:) works, but the one based on hook & fork seems to be doing something else, and I care about why it's doing what it does. I'm trying to grok working with hook and fork, and this case really proves my intuitions are wrong.
Here is the code:
probs =: 0.75 0.25 NB. probabilties
entropy =: +/ @: (- * 2&^.)
entropyWrong =: +/ (- * 2&^.)
entropy probs NB. this is correct
0.811278
entropyWrong probs NB. this is wrong!
1.06128 1.25
0.561278 0.75
NB. shouldn't the following be the same as above (wrong)?
+/ (- * 2&^.) probs
0.811278
The point of my question isn't "how to compute the entropy of probabilities in J", but why entropyWrong above does what it does, and why it's not the same as its "content", which apparently does the right thing.
The entropyWrong definition is a hook that you are using monadically.
entropyWrong =: +/ (- * 2&^.)
If a monadic hook is represented as (u v) y then in your case +/ is u and (- * 2&^.) is v; v is a fork. y of course is probs, the noun argument.
J defines the action of a monadic hook as equivalent to y u (v y), so that u becomes dyadic with y as its left argument and v y as its right argument. This is consistent with J's right-to-left order of execution.
By the way, forks are defined as (f g h) y where f, g and h are verbs, and the result is (f y) g (h y). Each verb can be described as a tine of the fork; the middle tine g is dyadic while f and h are monadic when a fork is applied monadically.
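You can see this directly by expanding the hook by hand: entropyWrong probs is probs +/ (- * 2&^.) probs, and dyadic +/ builds an addition table, which is exactly the 2 by 2 result shown above (the expansion is my own illustration):
(- * 2&^.) probs NB. the fork part of the hook
0.311278 0.5
probs +/ (- * 2&^.) probs NB. y u (v y): dyadic +/ makes an addition table
1.06128 1.25
0.561278 0.75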
entropy =: +/ @: (- * 2&^.) is doing something different. entropy is in the form u @: v and applies u monadically to the result of the fork v.
If you would like to get rid of the use of #: in entropy, you can do that by using the verb [: . When used as the left tine of a fork [: returns no result and this creates a monadic centre tine instead of a dyadic one.
entropy2=: [: +/ (- * 2&^.) NB. with three verbs this is now a fork
probs =: 0.75 0.25
entropy2 probs
0.811278

Systematically extract noun arguments from J expression

What is the systematic approach to extracting nouns as arguments from an expression in J? To be clear, an expression containing two literals should become a dyadic expression with the left and right arguments used instead of the literals.
I'm trying to learn tacit style so I prefer not to use named variables if it is avoidable.
A specific example is a simple die roll simulator I made:
>:?10#6 NB. Roll ten six sided dice.
2 2 6 5 3 6 4 5 4 3
>:?10#6
2 1 2 4 3 1 3 1 5 4
I would like to systematically extract the arguments 10 and 6 to the outside of the expression so it can roll any number of any sized dice:
d =. <new expression here>
10 d 6 NB. Roll ten six sided dice.
1 6 4 6 6 1 5 2 3 4
3 d 100 NB. Roll three one hundred sided dice.
7 27 74
Feel free to illustrate using my example, but I'm looking to be able to follow the procedure for arbitrary expressions.
Edit: I just found out that a quoted version using x and y can be automatically converted to tacit form using e.g. 13 : '>:?x#y'. If someone can show me how to find the definition of 13 : I might be able to answer my own question.
If your goal is to learn tacit style, it's better that you simply learn it from the ground up rather than try to memorize an explicit algorithm—J4C and Learning J are good resources—because the general case of converting an expression from explicit to tacit is intractable.
Even ignoring the fact that there have been no provisions for tacit conjunctions since J4, in the explicit definition of a verb you can (1) use control words, (2) use and modify global variables, (3) put expressions containing x and/or y as the operands of an adverb or conjunction, and (4) reference itself. Solving (1), (3), or (4) is very hard in the general case and (2) is just flat out impossible.*
If your J sentence is one of a small class of expressions, there is an easy way to apply the fork rules to make it tacit, and this is more or less what is implemented in 13 :. Recall that
(F G H) y is (F y) G (H y), and x (F G H) y is (x F y) G (x H y) (Monad/Dyad Fork)
([: G H) y is G (H y), and x ([: G H) y is G (x H y) (Monad/Dyad Capped Fork)
x [ y is x, x ] y is y, and both of [ y and ] y are y (Left/Right)
Notice how forks use their center verbs as the 'outermost' verb: Fork gives a dyadic application of G, while Capped Fork gives a monadic one. This corresponds exactly to the two modes of applying a verb in J, monadic and dyadic. So a quick-and-dirty algorithm for making a "dyadic" expression tacit might look like the following, for verbs F, G, H and nouns N:
Replace x with (x [ y) and y with (x ] y). (Left/Right)
Replace any other noun N with (x N"_ y).
If you see the pattern (x F y) G (x H y), replace it with x (F G H) y. (Fork)
If you see the pattern G (x H y), replace it with x ([: G H) y. (Capped Fork)
Repeat 1 through 4 until you attain the form x F y, at which point you win.
If no more simplifications can be performed and you have not yet won, you lose.
A similar algorithm can be derived for "monadic expressions", expressions only dependent on y. Here's a sample derivation.
<. (y - x | y) % x NB. start
<. ((x ] y) - (x [ y) | (x ] y)) % (x [ y) NB. 1
<. ((x ] y) - (x ([ | ]) y)) % (x [ y) NB. 3
<. (x (] - ([ | ])) y) % (x [ y) NB. 3
<. x ((] - ([ | ])) % [) y NB. 3
x ([: <. ((] - ([ | ])) % [)) y NB. 4 and we win
This neglects some obvious simplifications, but attains the goal. You can mix in various other rules to simplify, like the long train rule (if Train is a train of odd length, then (F G (Train)) is equivalent to (F G Train)), or the observation that x ([ F ]) y and x F y are equivalent. After learning the rules, it shouldn't be hard to modify the algorithm to get the result [: <. [ %~ ] - |, which is what 13 : '<. (y - x | y) % x' gives.
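As a quick check that the hand-derived train and the 13 : result agree (the numbers here are my own arbitrary choice):
<. (30 - 7 | 30) % 7 NB. the explicit expression with x=7, y=30
4
7 ([: <. ((] - ([ | ])) % [)) 30 NB. the derived tacit form
4
7 ([: <. [ %~ ] - |) 30 NB. the 13 : result
4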
The fail condition is attained whenever an expression containing x and/or y is an operand to an adverb or conjunction. It is sometimes possible to recover a tacit form with some deep refactoring, and knowledge of the verb and gerundial forms of ^: and }, but I am doubtful that this can be done programmatically.
This is what makes (1), (3), and (4) hard instead of impossible. Given knowledge of how $: works, a tacit programmer can find a tacit form for, say, the Ackermann function without too much trouble, and a clever one can even refactor that for efficiency. If you could find an algorithm doing that, you'd obviate programmers, period.
ack1 =: (1 + ])`(([ - 1:) $: 1:)`(([ - 1:) $: [ $: ] - 1:)@.(, i. 0:)
ack2 =: $: ^: (<:@[`]`1:) ^: (0 < [) >:
3 (ack1, ack2) 3
61 61
TimeSpace =: 6!:2, 7!:2@] NB. iterations TimeSpace code
10 TimeSpace '3 ack1 8'
2.01708 853504
10 TimeSpace '3 ack2 8'
0.937484 10368
* This is kind of a lie. You can refactor the entire program involving such a verb through some advanced voodoo magic, cf. Pepe Quintana's talk at the 2012 J Conference. It isn't pretty.
13 : is documented in the vocabulary or NuVoc under : (Explicit).
The basic idea is that the value you want to be x becomes [ and the value you want to be y becomes ]. But as soon as the rightmost token changes from a noun (value) to a verb like [ or ], the entire statement becomes a train, and you may need to use the verb [: or the conjunctions @ or @: to restore the composition behavior you had before.
You can also replace the values with the actual names x and y, and then wrap the whole thing in (dyad : ' ... '). That is:
>:?10#6 NB. Roll ten six sided dice.
can become:
10 (dyad : '>: ? x # y') 6 NB. dyad is predefined. It's just 4.
If you only need the y argument, you can use monad, which is predefined as 3. The name verb is also 3. I tend to use verb : when I provide both a monadic and dyadic version, and monad when I only need the monadic meaning.
If your verb is a one-liner like this, you can sometimes convert it automatically to tacit form by replacing the 3 or 4 with 13.
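For the dice example that looks something like this (the exact tacit form that 13 : prints may differ from what you would write by hand):
d =. 13 : '>: ? x # y' NB. convert the explicit one-liner to a tacit verb
4 d 6 NB. roll four six-sided dice (results vary)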
I have some notes on factoring verbs in j that can help you with the step-by-step transformations.
addendum: pseudocode for converting a statement to tacit dyad
This only covers a single statement (one line of code) and may not work if the constant values you're trying to extract are being passed to a conjunction or adverb.
Also, the statement must not make any reference to other variables.
Append [ x=. xVal [ y =. yVal to the statement.
Substitute appropriate values for xVal and yVal.
Rewrite the original expression in terms of the new x and y.
rewrite statement [ x=. xVal [ y=. yVal as:
newVerb =: (4 : 0)
statement ] y NB. we'll fill in x later.
)
(xVal) newVerb yVal
Now you have an explicit definition in terms of x and y. The reason for putting it on multiple lines instead of using x (4 : 'expr') y is that if expr still contains a string literal, you will have to fiddle with escaping the single quotes.
Converting the first noun
Since you only had a pipeline before, the rightmost expression inside statement must be a noun. Convert it to a fork using the following rules:
y → (])
x → ]x ([)
_, __, _9 ... 9 → (_:), (__:), (_9:) ... (9:)
n → n"_ (for any other arbitrary noun)
This keeps the overall meaning the same because the verb you've just created is invoked immediately and applied to the ] y.
Anyway, this new tacit verb in parentheses becomes the core of the train you will build. From here on out, you work by consuming the rightmost expression in the statement, and moving it inside the parentheses.
Fork normal form
From here on out, we will assume the tacit verb we're creating is always a fork.
This new tacit verb isn't actually a fork, but we will pretend it is, because any single-token verb can be rewritten as a fork using the rule:
v → ([: ] v).
There is no reason to actually do this transformation, it's just so I can simplify the rule below and always call it a fork.
We will not use hooks because any hook can be rewritten as a fork with the rule:
(u v) → (] u [: v ])
The rules below should produce trains in this form automatically.
Converting the remaining tokens
Now we can use the following rules to convert the rest of the original pipeline, moving one item at a time into the fork.
For all of these rules, the (]x)? isn't J syntax. It means the ]x may or may not be there. You can't put the ]x in until you have transformed a usage of x, or you would change the meaning of the code. Once you transform an instance of x, the ]x is required.
Following the J convention, u and v represent arbitrary verbs, and n is an arbitrary noun. Note that these include verb phrases built with adverbs or conjunctions, which are treated as single verbs (see below).
tokens y u (]x)? (fork) ] y → tokens (]x)? (] u fork) ] y
tokens x u (]x)? (fork) ] y → tokens ]x ([ u fork) ] y
tokens n u (]x)? (fork) ] y → tokens (]x)? (n u fork) ] y
tokens u v (]x)? (fork) ] y → tokens u (]x)? ([: v fork) ] y
There are no rules for adverbs or conjunctions, because you should just treat those as part of the verbs. For example +:^:3 should be treated as a single verb. Similarly, anything in parentheses should be left alone as a single phrase.
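For example (my own illustration), +:^:3 doubles its argument three times and is handled as a single verb by these rules:
(+:^:3) 2
16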
Anyway, keep applying these rules until you run out of tokens.
Cleanup
You should end up with:
newVerb =: (4 : 0)
] x (fork) ] y
)
(xVal) newVerb yVal
This can be rewritten as:
(xVal) (fork) yVal
And you are done.
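Applied to the dice example from the question, the procedure ends with something along these lines (one possible final form; the intermediate steps are omitted here):
d =. [: >: [: ? [ # ]
10 d 6 NB. roll ten six-sided dice (results vary)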

Finding largest f satisfying a property given f is non-decreasing in its arguments

This has been bugging me for a while.
Let's say you have a function f x y where x and y are integers and you know that f is strictly non-decreasing in its arguments,
i.e. f (x+1) y >= f x y and f x (y+1) >= f x y.
What would be the fastest way to find the largest f x y satisfying a property, given that x and y are bounded?
I was thinking that this might be a variation of saddleback search and I was wondering if there was a name for this type of problem.
Also, more specifically I was wondering if there was a faster way to solve this problem if you knew that f was the multiplication operator.
Thanks!
Edit: Seeing the comments below, the property can be anything
Given a property g (where g takes a value and returns a boolean) I am simply looking for the largest f such that g(f) == True
For example, a naive implementation (in haskell) would be:
maximise :: (Int -> Int -> Int) -> (Int -> Bool) -> Int -> Int -> Int
maximise f g xLim yLim = head . filter g . reverse . sort $ results
where results = [f x y | x <- [1..xLim], y <- [1..yLim]]
Let's draw an example grid for your problem to help think about it. Here's an example plot of f for each x and y. It is monotone in each argument, which is an interesting constraint we might be able to do something clever with.
+------- x --------->
| 0 0 1 1 1 2
| 0 1 1 2 2 4
y 1 1 3 4 6 6
| 1 2 3 6 6 7
| 7 7 7 7 7 7
v
Since we don't know anything about the property, we can't really do better than to list the values in the range of f in decreasing order. The question is how to do that efficiently.
The first thing that comes to mind is to traverse it like a graph starting at the lower-right corner. Here is my attempt:
import Data.Maybe (listToMaybe)
maximise :: (Ord b, Num b) => (Int -> Int -> b) -> (b -> Bool) -> Int -> Int -> Maybe b
maximise f p xLim yLim =
    listToMaybe . filter p . map (negate . snd) $
        enumIncreasing measure successors (xLim,yLim)
  where
    measure (x,y) = negate $ f x y
    successors (x,y) = [ (x-1,y) | x > 0 ] ++ [ (x,y-1) | y > 0 ]
The signature is not as general as it could be (Num should not be necessary, but I needed it to negate the measure function because enumIncreasing returns an increasing rather than a decreasing list -- I could have also done it with a newtype wrapper).
Using this function, we can find the largest odd number which can be written as a product of two numbers <= 100:
ghci> maximise (*) odd 100 100
Just 9801
I wrote enumIncreasing using meldable-heap on hackage to solve this problem, but it is pretty general. You could tweak the above to add additional constraints on the domain, etc.
The answer depends on what's expensive. The case that might be interesting is when f is expensive.
What you might want to do is look at pareto-optimality. Suppose you have two points
(1, 2) and (3, 4)
Then you know that the latter point is going to be a better solution, so long as f is a nondecreasing function. However, of course, if you have points,
(1, 2) and (2, 1)
then you can't know. So, one solution would be to establish a pareto-optimal frontier of points that the predicate g permits, and then evaluate these through f.

Factorial Algorithms in different languages

I want to see all the different ways you can come up with, for a factorial subroutine, or program. The hope is that anyone can come here and see if they might want to learn a new language.
Ideas:
Procedural
Functional
Object Oriented
One liners
Obfuscated
Oddball
Bad Code
Polyglot
Basically I want to see an example, of different ways of writing an algorithm, and what they would look like in different languages.
Please limit it to one example per entry.
I will allow you to have more than one example per answer, if you are trying to highlight a specific style, language, or just a well thought out idea that lends itself to being in one post.
The only real requirement is it must find the factorial of a given argument, in all languages represented.
Be Creative!
Recommended Guideline:
# Language Name: Optional Style type
- Optional bullet points
Code Goes Here
Other informational text goes here
I will occasionally go along and edit any answer that does not have decent formatting.
Polyglot: 5 languages, all using bignums
So, I wrote a polyglot which works in the three languages I often write in, as well as one from my other answer to this question and one I just learned today. It's a standalone program, which reads a single line containing a nonnegative integer and prints a single line containing its factorial. Bignums are used in all languages, so the maximum computable factorial depends only on your computer's resources.
Perl: uses built-in bignum package. Run with perl FILENAME.
Haskell: uses built-in bignums. Run with runhugs FILENAME or your favorite compiler's equivalent.
C++: requires GMP for bignum support. To compile with g++, use g++ -lgmpxx -lgmp -x c++ FILENAME to link against the right libraries. After compiling, run ./a.out. Or use your favorite compiler's equivalent.
brainf*ck: I wrote some bignum support in this post. Using Muller's classic distribution, compile with bf < FILENAME > EXECUTABLE. Make the output executable and run it. Or use your favorite distribution.
Whitespace: uses built-in bignum support. Run with wspace FILENAME.
Edit: added Whitespace as a fifth language. Incidentally, do not wrap the code with <code> tags; it breaks the Whitespace. Also, the code looks much nicer in fixed-width.
char //# b=0+0{- |0*/; #>>>>,----------[>>>>,--------
#define a/*#--]>>>>++<<<<<<<<[>++++++[<------>-]<-<<<
#Perl ><><><> <> <> <<]>>>>[[>>+<<-]>>[<<+>+>-]<->
#C++ --><><> <><><>< > < > < +<[>>>>+<<<-<[-]]>[-]
#Haskell >>]>[-<<<<<[<<<<]>>>>[[>>+<<-]>>[<<+>+>-]>>]
#Whitespace >>>>[-[>+<-]+>>>>]<<<<[<<<<]<<<<[<<<<
#brainf*ck > < ]>>>>>[>>>[>>>>]>>>>[>>>>]<<<<[[>>>>*/
exp; ;//;#+<<<<-]<<<<]>>>>+<<<<<<<[<<<<][.POLYGLOT^5.
#include <gmpxx.h>//]>>>>-[>>>[>>>>]>>>>[>>>>]<<<<[>>
#define eval int main()//>+<<<-]>>>[<<<+>>+>->
#include <iostream>//<]<-[>>+<<[-]]<<[<<<<]>>>>[>[>>>
#define print std::cout << // > <+<-]>[<<+>+>-]<<[>>>
#define z std::cin>>//<< +<<<-]>>>[<<<+>>+>-]<->+++++
#define c/*++++[-<[-[>>>>+<<<<-]]>>>>[<<<<+>>>>-]<<*/
#define abs int $n //>< <]<[>>+<<<<[-]>>[<<+>>-]]>>]<
#define uc mpz_class fact(int $n){/*<<<[<<<<]<<<[<<
use bignum;sub#<<]>>>>-]>>>>]>>>[>[-]>>>]<<<<[>>+<<-]
z{$_[0+0]=readline(*STDIN);}sub fact{my($n)=shift;#>>
#[<<+>+>-]<->+<[>-<[-]]>[-<<-<<<<[>>+<<-]>>[<<+>+>+*/
uc;if($n==0){return 1;}return $n*fact($n-1); }//;#
eval{abs;z($n);print fact($n);print("\n")/*2;};#-]<->
'+<[>-<[-]]>]<<[<<<<]<<<<-[>>+<<-]>>[<<+>+>-]+<[>-+++
-}-- <[-]]>[-<<++++++++++<<<<-[>>+<<-]>>[<<+>+>-++
fact 0 = 1 -- ><><><>< > <><>< ]+<[>-<[-]]>]<<[<<+ +
fact n=n*fact(n-1){-<<]>>>>[[>>+<<-]>>[<<+>+++>+-}
main=do{n<-readLn;print(fact n)}-- +>-]<->+<[>>>>+<<+
{-x<-<[-]]>[-]>>]>]>>>[>>>>]<<<<[>+++++++[<+++++++>-]
<--.<<<<]+written+by+++A+Rex+++2009+.';#+++x-}--x*/;}
lolcode:
sorry I couldn't resist xD
HAI
CAN HAS STDIO?
I HAS A VAR
I HAS A INT
I HAS A CHEEZBURGER
I HAS A FACTORIALNUM
IM IN YR LOOP
UP VAR!!1
TIEMZD INT!![CHEEZBURGER]
UP FACTORIALNUM!!1
IZ VAR BIGGER THAN FACTORIALNUM? GTFO
IM OUTTA YR LOOP
U SEEZ INT
KTHXBYE
This is one of the faster algorithms, up to 170!. It fails inexplicably beyond 170!, and it's relatively slow for small factorials, but for factorials between 80 and 170 it's blazingly fast compared to many algorithms.
curl http://www.google.com/search?q=170!
There's also an online interface, try it out now!
Let me know if you find a bug, or faster implementation for large factorials.
EDIT:
This algorithm is slightly slower, but gives results beyond 170:
curl http://www58.wolframalpha.com/input/?i=171!
It also simplifies them into various other representations.
C++: Template Metaprogramming
Uses the classic enum hack.
template<unsigned int n>
struct factorial {
enum { result = n * factorial<n - 1>::result };
};
template<>
struct factorial<0> {
enum { result = 1 };
};
Usage.
const unsigned int x = factorial<4>::result;
Factorial is calculated completely at compile time based on the template parameter n. Therefore, factorial<4>::result is a constant once the compiler has done its work.
Whitespace
[The Whitespace source consists entirely of spaces, tabs and newlines, so it does not show up here.]
It was hard to get it to show here properly, but now I tried copying it from the preview and it works. You need to input the number and press enter.
I find the following implementations just hilarious:
The Evolution of a Haskell Programmer
Evolution of a Python programmer
Enjoy!
C# Lookup:
Nothing to calculate really, just look it up. To extend it, add another 8 numbers to the table and 64 bit integers are at their limit. Beyond that, a BigNum class is called for.
public static int Factorial(int f)
{
if (f<0 || f>12)
{
throw new ArgumentException("Out of range for integer factorial");
}
int [] fact={1,1,2,6,24,120,720,5040,40320,362880,3628800,
39916800,479001600};
return fact[f];
}
Lazy K
Your pure functional programming nightmares come true!
The only Esoteric Turing-complete Programming Language that has:
A purely functional foundation, core, and libraries---in fact, here's the complete API: S K I
No lambdas even!
No numbers or lists needed or allowed
No explicit recursion but yet, allows recursion
A simple infinite lazy stream-based I/O mechanism
Here's the Factorial code in all its parenthetical glory:
K(SII(S(K(S(S(KS)(S(K(S(KS)))(S(K(S(KK)))(S(K(S(K(S(K(S(K(S(SI(K(S(K(S(S(KS)K)I))
(S(S(KS)K)(SII(S(S(KS)K)I))))))))K))))))(S(K(S(K(S(SI(K(S(K(S(SI(K(S(K(S(S(KS)K)I))
(S(S(KS)K)(SII(S(S(KS)K)I))(S(S(KS)K))(S(SII)I(S(S(KS)K)I))))))))K)))))))
(S(S(KS)K)(K(S(S(KS)K)))))))))(K(S(K(S(S(KS)K)))K))))(SII))II)
Features:
No subtraction or conditionals
Prints all factorials (if you wait long enough)
Uses a second layer of Church numerals to convert the Nth factorial to N! asterisks followed by a newline
Uses the Y combinator for recursion
In case you are interested in trying to understand it, here is the Scheme source code to run through the Lazier compiler:
(lazy-def '(fac input)
'((Y (lambda (f n a) ((lambda (b) ((cons 10) ((b (cons 42)) (f (1+ n) b))))
(* a n)))) 1 1))
(for suitable definitions of Y, cons, 1, 10, 42, 1+, and *).
EDIT:
Lazy K Factorial in Decimal
(10KB of gibberish or else I would paste it). For example, at the Unix prompt:
$ echo "4" | ./lazy facdec.lazy
24
$ echo "5" | ./lazy facdec.lazy
120
Rather slow for numbers above, say, 5.
The code is sort of bloated because we have to include library code for all of our own primitives (code written in Hazy, a lambda calculus interpreter and LC-to-Lazy K compiler written in Haskell).
XSLT 1.0
The input file, factorial.xml:
<?xml version="1.0"?>
<?xml-stylesheet href="factorial.xsl" type="text/xsl" ?>
<n>
20
</n>
The XSLT file, factorial.xsl:
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:msxsl="urn:schemas-microsoft-com:xslt" >
<xsl:output method="text"/>
<!-- 0! = 1 -->
<xsl:template match="text()[. = 0]">
1
</xsl:template>
<!-- n! = (n-1)! * n-->
<xsl:template match="text()[. > 0]">
<xsl:variable name="x">
<xsl:apply-templates select="msxsl:node-set( . - 1 )/text()"/>
</xsl:variable>
<xsl:value-of select="$x * ."/>
</xsl:template>
<!-- Calculate n! -->
<xsl:template match="/n">
<xsl:apply-templates select="text()"/>
</xsl:template>
</xsl:stylesheet>
Save both files in the same directory and open factorial.xml in IE.
Python: Functional, One-liner
factorial = lambda n: reduce(lambda x,y: x*y, range(1, n+1), 1)
NOTE:
It supports big integers. Example:
print factorial(100)
93326215443944152681699238856266700490715968264381621468592963895217599993229915\
608941463976156518286253697920827223758251185210916864000000000000000000000000
It does not work for n < 0.
APL (oddball/one-liner):
×/⍳X
⍳X expands X into an array of the integers 1..X
×/ multiplies every element in the array
Or with the built-in operator:
!X
Source: http://www.webber-labs.com/mpl/lectures/ppt-slides/01.ppt
Perl6
sub factorial ($n) { [*] 1..$n }
I hardly know about Perl6. But I guess this [*] operator is the same as Haskell's product.
This code runs on Pugs, and maybe Parrot (I didn't check it.)
Edit
This code also works.
sub postfix:<!> ($n) { [*] 1..$n }
# This function(?) call like below ... It looks like mathematical notation.
say 10!;
x86-64 Assembly: Procedural
You can call this from C (only tested with GCC on linux amd64).
Assembly was assembled with nasm.
section .text
global factorial
; factorial in x86-64 - n is passed in via RDI register
; takes a 64-bit unsigned integer
; returns a 64-bit unsigned integer in RAX register
; C declaration in GCC:
; extern unsigned long long factorial(unsigned long long n);
factorial:
enter 0,0
; n is placed in rdi by caller
mov rax, 1 ; factorial = 1
mov rcx, 2 ; i = 2
loopstart:
cmp rcx, rdi
ja loopend
mul rcx ; factorial *= i
inc rcx
jmp loopstart
loopend:
leave
ret
Recursively in Inform 7
(it reminds you of COBOL because it's for writing text adventures; proportional font is deliberate):
To decide what number is the factorial of (n - a number):
    if n is zero, decide on one;
    otherwise decide on the factorial of (n minus one) times n.
If you want to actually call this function ("phrase") from a game you need to define an action and grammar rule:
"The factorial game" [this must be the first line of the source]
There is a room. [there has to be at least one!]
Factorialing is an action applying to a number.
Understand "factorial [a number]" as factorialing.
Carry out factorialing:
    Let n be the factorial of the number understood;
    Say "It's [n]".
C#: LINQ
public static int factorial(int n)
{
return (Enumerable.Range(1, n).Aggregate(1, (previous, value) => previous * value));
}
Erlang: tail recursive
fac(0) -> 1;
fac(N) when N > 0 -> fac(N, 1).
fac(1, R) -> R;
fac(N, R) -> fac(N - 1, R * N).
Haskell:
ones = 1 : ones
integers = head ones : zipWith (+) integers (tail ones)
factorials = head integers : zipWith (*) factorials (tail integers)
Brainf*ck
+++++
>+<[[->>>>+<<<<]>>>>[-<<<<+>>+>>]<<<<>[->>+<<]<>>>[-<[->>+<<]>>[-<<+<+>>>]<]<[-]><<<-]
Written by Michael Reitzenstein.
BASIC: old school
10 HOME
20 INPUT N
30 LET ANS = 1
40 FOR I = 1 TO N
50 ANS = ANS * I
60 NEXT I
70 PRINT ANS
Batch (NT):
@echo off
set n=%1
set result=1
for /l %%i in (%n%, -1, 1) do (
set /a result=result * %%i
)
echo %result%
Usage:
C:\>factorial.bat 15
F#: Functional
Straight forward:
let rec fact x =
    if x < 0 then failwith "Invalid value."
    elif x = 0 then 1
    else x * fact (x - 1)
Getting fancy:
let fact x = [1 .. x] |> List.fold_left ( * ) 1
Recursive Prolog
fac(0,1).
fac(N,X) :- N1 is N -1, fac(N1, T), X is N * T.
Tail Recursive Prolog
fac(0,N,N).
fac(X,N,T) :- A is N * X, X1 is X - 1, fac(X1,A,T).
fac(N,T) :- fac(N,1,T).
ruby recursive
(factorial=Hash.new{|h,k|k*h[k-1]})[1]=1
usage:
factorial[5]
=> 120
Scheme
Here is a simple recursive definition:
(define (factorial x)
(if (= x 0) 1
(* x (factorial (- x 1)))))
In Scheme tail-recursive functions use constant stack space. Here is a version of factorial that is tail-recursive:
(define factorial
(letrec ((fact (lambda (x accum)
(if (= x 0) accum
(fact (- x 1) (* accum x))))))
(lambda (x)
(fact x 1))))
Oddball examples? What about using the gamma function! Since Gamma(n) = (n-1)!.
OCaml: Using Gamma
let rec gamma z =
let pi = 4.0 *. atan 1.0 in
if z < 0.5 then
pi /. ((sin (pi*.z)) *. (gamma (1.0 -. z)))
else
let consts = [| 0.99999999999980993; 676.5203681218851; -1259.1392167224028;
771.32342877765313; -176.61502916214059; 12.507343278686905;
-0.13857109526572012; 9.9843695780195716e-6; 1.5056327351493116e-7;
|]
in
let z = z -. 1.0 in
let results = Array.fold_right
(fun x y -> x +. y)
(Array.mapi
(fun i x -> if i = 0 then x else x /. (z+.(float i)))
consts
)
0.0
in
let x = z +. (float (Array.length consts)) -. 1.5 in
let final = (sqrt (2.0*.pi)) *.
(x ** (z+.0.5)) *.
(exp (-.x)) *. results
in
final
let factorial_gamma n = int_of_float (gamma (float (n+1)))
Freshman Haskell programmer
fac n = if n == 0
then 1
else n * fac (n-1)
Sophomore Haskell programmer, at MIT
(studied Scheme as a freshman)
fac = (\(n) ->
(if ((==) n 0)
then 1
else ((*) n (fac ((-) n 1)))))
Junior Haskell programmer
(beginning Peano player)
fac 0 = 1
fac (n+1) = (n+1) * fac n
Another junior Haskell programmer
(read that n+k patterns are “a disgusting part of Haskell” [1]
and joined the “Ban n+k patterns”-movement [2])
fac 0 = 1
fac n = n * fac (n-1)
Senior Haskell programmer
(voted for Nixon Buchanan Bush — “leans right”)
fac n = foldr (*) 1 [1..n]
Another senior Haskell programmer
(voted for McGovern Biafra Nader — “leans left”)
fac n = foldl (*) 1 [1..n]
Yet another senior Haskell programmer
(leaned so far right he came back left again!)
-- using foldr to simulate foldl
fac n = foldr (\x g n -> g (x*n)) id [1..n] 1
Memoizing Haskell programmer
(takes Ginkgo Biloba daily)
facs = scanl (*) 1 [1..]
fac n = facs !! n
Pointless (ahem) “Points-free” Haskell programmer
(studied at Oxford)
fac = foldr (*) 1 . enumFromTo 1
Iterative Haskell programmer
(former Pascal programmer)
fac n = result (for init next done)
where init = (0,1)
next (i,m) = (i+1, m * (i+1))
done (i,_) = i==n
result (_,m) = m
for i n d = until d n i
Iterative one-liner Haskell programmer
(former APL and C programmer)
fac n = snd (until ((>n) . fst) (\(i,m) -> (i+1, i*m)) (1,1))
Accumulating Haskell programmer
(building up to a quick climax)
facAcc a 0 = a
facAcc a n = facAcc (n*a) (n-1)
fac = facAcc 1
Continuation-passing Haskell programmer
(raised RABBITS in early years, then moved to New Jersey)
facCps k 0 = k 1
facCps k n = facCps (k . (n *)) (n-1)
fac = facCps id
Boy Scout Haskell programmer
(likes tying knots; always “reverent,” he
belongs to the Church of the Least Fixed-Point [8])
y f = f (y f)
fac = y (\f n -> if (n==0) then 1 else n * f (n-1))
Combinatory Haskell programmer
(eschews variables, if not obfuscation;
all this currying’s just a phase, though it seldom hinders)
s f g x = f x (g x)
k x y = x
b f g x = f (g x)
c f g x = f x g
y f = f (y f)
cond p f g x = if p x then f x else g x
fac = y (b (cond ((==) 0) (k 1)) (b (s (*)) (c b pred)))
List-encoding Haskell programmer
(prefers to count in unary)
arb = () -- "undefined" is also a good RHS, as is "arb" :)
listenc n = replicate n arb
listprj f = length . f . listenc
listprod xs ys = [ i (x,y) | x<-xs, y<-ys ]
where i _ = arb
facl [] = listenc 1
facl n@(_:pred) = listprod n (facl pred)
fac = listprj facl
Interpretive Haskell programmer
(never “met a language” he didn't like)
-- a dynamically-typed term language
data Term = Occ Var
| Use Prim
| Lit Integer
| App Term Term
| Abs Var Term
| Rec Var Term
type Var = String
type Prim = String
-- a domain of values, including functions
data Value = Num Integer
| Bool Bool
| Fun (Value -> Value)
instance Show Value where
show (Num n) = show n
show (Bool b) = show b
show (Fun _) = ""
prjFun (Fun f) = f
prjFun _ = error "bad function value"
prjNum (Num n) = n
prjNum _ = error "bad numeric value"
prjBool (Bool b) = b
prjBool _ = error "bad boolean value"
binOp inj f = Fun (\i -> (Fun (\j -> inj (f (prjNum i) (prjNum j)))))
-- environments mapping variables to values
type Env = [(Var, Value)]
getval x env = case lookup x env of
Just v -> v
Nothing -> error ("no value for " ++ x)
-- an environment-based evaluation function
eval env (Occ x) = getval x env
eval env (Use c) = getval c prims
eval env (Lit k) = Num k
eval env (App m n) = prjFun (eval env m) (eval env n)
eval env (Abs x m) = Fun (\v -> eval ((x,v) : env) m)
eval env (Rec x m) = f where f = eval ((x,f) : env) m
-- a (fixed) "environment" of language primitives
times = binOp Num (*)
minus = binOp Num (-)
equal = binOp Bool (==)
cond = Fun (\b -> Fun (\x -> Fun (\y -> if (prjBool b) then x else y)))
prims = [ ("*", times), ("-", minus), ("==", equal), ("if", cond) ]
-- a term representing factorial and a "wrapper" for evaluation
facTerm = Rec "f" (Abs "n"
(App (App (App (Use "if")
(App (App (Use "==") (Occ "n")) (Lit 0))) (Lit 1))
(App (App (Use "*") (Occ "n"))
(App (Occ "f")
(App (App (Use "-") (Occ "n")) (Lit 1))))))
fac n = prjNum (eval [] (App facTerm (Lit n)))
Static Haskell programmer
(he does it with class, he’s got that fundep Jones!
After Thomas Hallgren’s “Fun with Functional Dependencies” [7])
-- static Peano constructors and numerals
data Zero
data Succ n
type One = Succ Zero
type Two = Succ One
type Three = Succ Two
type Four = Succ Three
-- dynamic representatives for static Peanos
zero = undefined :: Zero
one = undefined :: One
two = undefined :: Two
three = undefined :: Three
four = undefined :: Four
-- addition, a la Prolog
class Add a b c | a b -> c where
add :: a -> b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)
-- multiplication, a la Prolog
class Mul a b c | a b -> c where
mul :: a -> b -> c
instance Mul Zero b Zero
instance (Mul a b c, Add b c d) => Mul (Succ a) b d
-- factorial, a la Prolog
class Fac a b | a -> b where
fac :: a -> b
instance Fac Zero One
instance (Fac n k, Mul (Succ n) k m) => Fac (Succ n) m
-- try, for "instance" (sorry):
--
-- :t fac four
Beginning graduate Haskell programmer
(graduate education tends to liberate one from petty concerns
about, e.g., the efficiency of hardware-based integers)
-- the natural numbers, a la Peano
data Nat = Zero | Succ Nat
-- iteration and some applications
iter z s Zero = z
iter z s (Succ n) = s (iter z s n)
plus n = iter n Succ
mult n = iter Zero (plus n)
-- primitive recursion
primrec z s Zero = z
primrec z s (Succ n) = s n (primrec z s n)
-- two versions of factorial
fac = snd . iter (one, one) (\(a,b) -> (Succ a, mult a b))
fac' = primrec one (mult . Succ)
-- for convenience and testing (try e.g. "fac five")
int = iter 0 (1+)
instance Show Nat where
show = show . int
(zero : one : two : three : four : five : _) = iterate Succ Zero
Origamist Haskell programmer
(always starts out with the “basic Bird fold”)
-- (curried, list) fold and an application
fold c n [] = n
fold c n (x:xs) = c x (fold c n xs)
prod = fold (*) 1
-- (curried, boolean-based, list) unfold and an application
unfold p f g x =
if p x
then []
else f x : unfold p f g (g x)
downfrom = unfold (==0) id pred
-- hylomorphisms, as-is or "unfolded" (ouch! sorry ...)
refold c n p f g = fold c n . unfold p f g
refold' c n p f g x =
if p x
then n
else c (f x) (refold' c n p f g (g x))
-- several versions of factorial, all (extensionally) equivalent
fac = prod . downfrom
fac' = refold (*) 1 (==0) id pred
fac'' = refold' (*) 1 (==0) id pred
Cartesianally-inclined Haskell programmer
(prefers Greek food, avoids the spicy Indian stuff;
inspired by Lex Augusteijn’s “Sorting Morphisms” [3])
-- (product-based, list) catamorphisms and an application
cata (n,c) [] = n
cata (n,c) (x:xs) = c (x, cata (n,c) xs)
mult = uncurry (*)
prod = cata (1, mult)
-- (co-product-based, list) anamorphisms and an application
ana f = either (const []) (cons . pair (id, ana f)) . f
cons = uncurry (:)
downfrom = ana uncount
uncount 0 = Left ()
uncount n = Right (n, n-1)
-- two variations on list hylomorphisms
hylo f g = cata g . ana f
hylo' f (n,c) = either (const n) (c . pair (id, hylo' f (c,n))) . f
pair (f,g) (x,y) = (f x, g y)
-- several versions of factorial, all (extensionally) equivalent
fac = prod . downfrom
fac' = hylo uncount (1, mult)
fac'' = hylo' uncount (1, mult)
Ph.D. Haskell programmer
(ate so many bananas that his eyes bugged out, now he needs new lenses!)
-- explicit type recursion based on functors
newtype Mu f = Mu (f (Mu f)) deriving Show
in x = Mu x
out (Mu x) = x
-- cata- and ana-morphisms, now for *arbitrary* (regular) base functors
cata phi = phi . fmap (cata phi) . out
ana psi = in . fmap (ana psi) . psi
-- base functor and data type for natural numbers,
-- using a curried elimination operator
data N b = Zero | Succ b deriving Show
instance Functor N where
fmap f = nelim Zero (Succ . f)
nelim z s Zero = z
nelim z s (Succ n) = s n
type Nat = Mu N
-- conversion to internal numbers, conveniences and applications
int = cata (nelim 0 (1+))
instance Show Nat where
show = show . int
zero = in Zero
suck = in . Succ -- pardon my "French" (Prelude conflict)
plus n = cata (nelim n suck )
mult n = cata (nelim zero (plus n))
-- base functor and data type for lists
data L a b = Nil | Cons a b deriving Show
instance Functor (L a) where
fmap f = lelim Nil (\a b -> Cons a (f b))
lelim n c Nil = n
lelim n c (Cons a b) = c a b
type List a = Mu (L a)
-- conversion to internal lists, conveniences and applications
list = cata (lelim [] (:))
instance Show a => Show (List a) where
show = show . list
prod = cata (lelim (suck zero) mult)
upto = ana (nelim Nil (diag (Cons . suck)) . out)
diag f x = f x x
fac = prod . upto
Post-doc Haskell programmer
(from Uustalu, Vene and Pardo’s “Recursion Schemes from Comonads” [4])
-- explicit type recursion with functors and catamorphisms
newtype Mu f = In (f (Mu f))
unIn (In x) = x
cata phi = phi . fmap (cata phi) . unIn
-- base functor and data type for natural numbers,
-- using locally-defined "eliminators"
data N c = Z | S c
instance Functor N where
fmap g Z = Z
fmap g (S x) = S (g x)
type Nat = Mu N
zero = In Z
suck n = In (S n)
add m = cata phi where
phi Z = m
phi (S f) = suck f
mult m = cata phi where
phi Z = zero
phi (S f) = add m f
-- explicit products and their functorial action
data Prod e c = Pair c e
outl (Pair x y) = x
outr (Pair x y) = y
fork f g x = Pair (f x) (g x)
instance Functor (Prod e) where
fmap g = fork (g . outl) outr
-- comonads, the categorical "opposite" of monads
class Functor n => Comonad n where
extr :: n a -> a
dupl :: n a -> n (n a)
instance Comonad (Prod e) where
extr = outl
dupl = fork id outr
-- generalized catamorphisms, zygomorphisms and paramorphisms
gcata :: (Functor f, Comonad n) =>
(forall a. f (n a) -> n (f a))
-> (f (n c) -> c) -> Mu f -> c
gcata dist phi = extr . cata (fmap phi . dist . fmap dupl)
zygo chi = gcata (fork (fmap outl) (chi . fmap outr))
para :: Functor f => (f (Prod (Mu f) c) -> c) -> Mu f -> c
para = zygo In
-- factorial, the *hard* way!
fac = para phi where
phi Z = suck zero
phi (S (Pair f n)) = mult f (suck n)
-- for convenience and testing
int = cata phi where
phi Z = 0
phi (S f) = 1 + f
instance Show (Mu N) where
show = show . int
Tenured professor
(teaching Haskell to freshmen)
fac n = product [1..n]
D Templates: Functional
template factorial(int n : 1)
{
const factorial = 1;
}
template factorial(int n)
{
const factorial =
n * factorial!(n-1);
}
or
template factorial(int n)
{
static if(n == 1)
const factorial = 1;
else
const factorial =
n * factorial!(n-1);
}
Used like this:
factorial!(5)
Java 1.6: recursive, memoized (for subsequent calls)
private static Map<BigInteger, BigInteger> _results = new HashMap<BigInteger, BigInteger>();
public static BigInteger factorial(BigInteger n){
if (0 >= n.compareTo(BigInteger.ONE))
return BigInteger.ONE.max(n);
if (_results.containsKey(n))
return _results.get(n);
BigInteger result = factorial(n.subtract(BigInteger.ONE)).multiply(n);
_results.put(n, result);
return result;
}
PowerShell
function factorial( [int] $n )
{
$result = 1;
if ( $n -gt 1 )
{
$result = $n * ( factorial ( $n - 1 ) )
}
$result
}
Here's a one-liner:
$n..1 | % {$result = 1}{$result *= $_}{$result}
Bash: Recursive
In bash and recursive, but with the added advantage that it deals with each iteration in a new process. The max it can calculate is 20! before overflowing, but you can still run it for big numbers if you don't care about the answer and want your system to fall over ;)
#!/bin/bash
echo $(($1 * `( [[ $1 -gt 1 ]] && ./$0 $(($1 - 1)) ) || echo 1`));
