Cartesian product in J

I'm trying to reproduce APL code for the game of life function in J. A YouTube video explaining this code can be found after searching "Game of Life in APL". Currently I have a matrix R which is:
0 0 0 0 0 0 0
0 0 0 1 1 0 0
0 0 1 1 0 0 0
0 0 0 1 0 0 0
0 0 0 0 0 0 0
I wrote J code which produces the neighbor counts (the number of living cells adjacent to each square), which is the following:
+/ ((#:i.4),-(#:1+i.2),(1 _1),.(_1 1)) |. R
And produces:
0 0 1 2 2 1 0
0 1 3 4 3 1 0
0 1 4 5 3 0 0
0 1 3 2 1 0 0
0 0 1 1 0 0 0
My main issue with this code is that it isn't elegant, as ((#:i.4),-(#:1+i.2),(1 _1),.(_1 1)) is needed just to produce:
0 0
0 1
1 0
1 1
0 _1
_1 0
_1 1
1 _1
Which is really just the outer product, or Cartesian product, of the vector 1 0 _1 with itself. I could not find an easy way to produce this Cartesian product, so my question is: how would I produce the required array more elegantly?

A Complete Catalog
Michael Berry's answer is very clear and concise. A sterling example of the J idiom table (f"0/~). I love it because it demonstrates how the subtle design of J has permitted us to generalize and extend a concept familiar to anyone from 3rd grade: arithmetic tables (addition tables, +/~ i. 10, and multiplication tables, */~ i. 12¹), which even in APL were relatively clunky.
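For instance (a quick illustration of my own), a small addition table:
   +/~ i.5
0 1 2 3 4
1 2 3 4 5
2 3 4 5 6
3 4 5 6 7
4 5 6 7 8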
In addition to that fine answer, it's also worth noting that there is a primitive built into J to calculate the Cartesian product: the monad { (catalog).
For example:
> { 2 # <1 0 _1 NB. Or i:_1 instead of 1 0 _1
1 1
1 0
1 _1
0 1
0 0
0 _1
_1 1
_1 0
_1 _1
Taking Inventory
Note that the input to monad { is a list of boxes, and the number of boxes in that list determines the number of elements in each combination. A list of two boxes produces an array of 2-tuples, a list of 3 boxes produces an array of 3-tuples, and so on.
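For instance, three boxes give all the 3-tuples (raveled and opened here, purely for a flat display):
   > , { 3 # <0 1
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1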
A Tad Excessive
Given that full outer products (Cartesian products) are so expensive (O(n^m) for m lists of n items each), it occurs to one to ask: why does J have a primitive for this?
A similar misgiving arises when we inspect monad {'s output: why is it boxed? Boxes are used in J when, and only when, we want to consolidate arrays of incompatible types and shapes. But all the results of { y will have identical types and shapes, by the very definition of {.
So, what gives? Well, it turns out these two issues are related, and justified, once we understand why the monad { was introduced in the first place.
I'm Feeling Ambivalent About This
We must recall that all verbs in J are ambivalent perforce. J's grammar does not admit a verb which is only a monad, or only a dyad. Certainly, one valence or another might have an empty domain (i.e. no valid inputs, like monad E. or dyad ~. or either valence of [:), but it still exists.
A valence with an empty domain is still "real"; its set of valid inputs is simply empty (an extension of the idea that the valid inputs to e.g. + are numbers, and anything else, like characters, produces a "domain error").
Ok, fine, so all verbs have two valences, so what?
A Selected History
Well, one of the primary design goals Ken Iverson had for J, after long experience with APL, was ditching the bracket notation for array indexing (e.g. A[3 4;5 6;2]), and recognizing that selection from an array is a function.
This was an enormous insight, with a serious impact on both the design and use of the language, which unfortunately I don't have space to get into here.
And since all functions need a name, we had to give one to the selection function. All primitive verbs in J are spelled with either a glyph, an inflected glyph (in my head, the ., :, .: etc suffixes are diacritics), or an inflected alphanumeric.
Now, because selection is so common and fundamental to array-oriented programming, it was given some prime real estate (a mark of distinction in J's orthography), a single-character glyph: {².
So, since { was defined to be selection, and selection is of course dyadic (i.e. having two arguments: the indices and the array), that accounts for the dyad {. And now we see why it's important to note that all verbs are ambivalent.
I Think I'm Picking Up On A Theme
When designing the language, it would be nice to give the monad { some thematic relationship to "selection"; having the two valences of a verb be thematically linked is a common pattern in J, for elegance and mnemonic purposes.
That broad pattern is also a topic worthy of a separate discussion, but for now let's focus on why catalog / Cartesian product was chosen for monad {. What's the connection? And what accounts for the other quirk, that its results are always boxed?
Bracketectomy
Well, remember that { was introduced to replace -- replace completely -- the old bracketing subscript syntax of APL (and of many other programming languages and notations). This at once made selection easier and more useful, and simplified J's syntax: in APL, the grammar, and consequently the parser, had to have special rules for indexing like:
A[3 4;5 6;2]
The syntax was an anomaly. But boy, wasn't it useful and expressive from the programmer's perspective, huh?
But why is that? What accounts for the multi-dimensional bracketing notation's economy? How is it that we can say so much in such little space?
Well, let's look at what we're saying. In the expression above A[3 4;5 6;2], we're asking for the 3rd and 4th rows, the 5th and 6th columns, and the 2nd plane.
That is, we want
plane 2, row 3, column 5, and
plane 2, row 3, column 6, and
plane 2, row 4, column 5 and
plane 2, row 4, column 6
Think about that a second. I'll wait.
The Moment Ken Blew Your Mind
Boom, right?
Indexing is a Cartesian product.
Always has been. But Ken saw it.
So, now, instead of saying A[3 4;5 6;2] in APL (with some hand-waving about whether []IO is 1 or 0), in J we say:
(3 4;5 6;2) { A
which is, of course, just shorthand, or syntactic sugar, for:
idx =. { 3 4;5 6;2 NB. Monad {
idx { A NB. Dyad {
So we retained the familiar, convenient, and suggestive semicolon syntax (what do you want to bet link being spelled ; is also not a coincidence?) while getting all the benefits of turning { into a first-class function, as it always should have been³.
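To make that concrete, here is a small worked example of my own (A is just i. 5 7 3, nothing special):
   A =. i. 5 7 3          NB. a small 5 x 7 x 3 array to play with
   idx =. { 3 4;5 6;2     NB. monad {: the catalog of all index triples
   , idx { A              NB. dyad {: fetch each of those cells (raveled for display)
80 83 101 104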
Opening The Mystery Box
Which brings us back to that other, final, quibble. Why the heck are monad {'s results boxed, if they're all regular in type and shape? Isn't that superfluous and inconvenient?
Well, yes, but remember that an unboxed (i.e. numeric) left-hand argument in x { y only selects items from y.
This is convenient because it's a frequent need to select the same item multiple times (e.g. in replacing 'abc' with 'ABC' and defaulting any non-abc character to '?', we'd typically say ('abc' i. y) { 'ABC','?', which only works because we're allowed to select index 3, the '?', multiple times).
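For example, with an arbitrary sample string:
   ('abc' i. 'abracadabra') { 'ABC','?'
AB?ACA?AB?A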
But that convenience precludes using straight numeric arrays to also do multidimensional indexing. That is, the convenience of unboxed numbers to select items (most common use case) interferes with also using unboxed numeric arrays to express, e.g. A[17;3;8] by 17 3 8 { A. We can't have it both ways.
So we needed some other notation to express multi-dimensional selections, and since dyad { has left-rank 0 (precisely because of the foregoing), and a single, atomic box can encapsulate an arbitrary structure, boxes were the perfect candidate.
So, to express A[17;3;8], instead of 17 3 8 { A, we simply say (< 17;3;8) { A, which again is straightforward, convenient, and familiar, and allows us to do any number of multi-dimensional selections simultaneously, e.g. ((< 17;3;8) , (< 42;7;2)) { A, which is what you'd want and expect in an array-oriented language.
Which means, of course, that in order to produce the kinds of outputs that dyad { expects as inputs, monad { must produce boxes⁴. QED.
Oh, and PS: since, as I said, boxing permits arbitrary structure in a single atom, what happens if we don't box a box, or even a list of boxes, but box a boxed box? Well, have you ever wanted a way to say "I want every index except the last", or the 3rd, or the 42nd and 55th? Well...
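(A small taste, for the impatient: the extra level of boxing means "exclude these indices".)
   (<<<_1) { 'abcde'      NB. everything except the last item
abcd
   (<<<2 3) { i.10        NB. everything except items 2 and 3
0 1 4 5 6 7 8 9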
Footnotes:
¹ Note that in the arithmetic tables +/~ i.10 and */~ i.12, we can elide the explicit "0 (present in ,"0/~ _1 0 1) because arithmetic verbs are already scalar (obviously)
² But why was selection given that specific glyph, {?
Well, Ken intentionally never disclosed the specific mnemonic choices used in J's orthography, because he didn't want to dictate such a personal choice for his users, but to me, Dan, { looks like a little funnel pointing right-to-left. That is, a big stream of data on the right, and a much smaller stream coming out the left, like a faucet dripping.
Similarly, I've always seen dyad |: like a little coffee table or Stonehenge trilithon kicked over on its side, i.e. transposed.
And monad # is clearly mnemonic (count, tally, number of items), but the dyad was always suggestive to me because it looked like a little net, keeping the items of interest and letting everything else "fall through".
But, of course, YMMV. Which is precisely why Ken never spelled this out for us.
³ Did you also notice that in APL the indices, which are control data, are listed to the right of the array, whereas in J they're now on the left, where control data belong?
⁴ Though this Jer would still like to see monad { produce unboxed results, at the cost of some additional complexity within the J engine, i.e. at the expense of the single implementer, and to the benefit of every single user of the language
There is a lot of interesting literature which goes into this material in more detail, but unfortunately I do not have time now to dig it up. If there's enough interest, I may come back and edit the answer with references later. For now, it's worth reading Mastering J, an early paper on J by one of the luminaries of J, Donald McIntyre, which makes mention of the eschewing of the "anomalous bracket notation" of APL, and perhaps a tl;dr version of this answer I personally posted to the J forums in 2014.

,"0/ ~ 1 0 _1
will get you the Cartesian product you ask for (but you may want to reshape it to 9 by 2).
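For example, one way to do that reshape (my addition; ,/ just runs the leading two axes together):
   ,/ ,"0/~ 1 0 _1
1 1
1 0
1 _1
0 1
0 0
0 _1
_1 1
_1 0
_1 _1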

The Cartesian product is the monadic verb catalog: {
{ ;~(1 0 _1)
┌────┬────┬─────┐
│1 1 │1 0 │1 _1 │
├────┼────┼─────┤
│0 1 │0 0 │0 _1 │
├────┼────┼─────┤
│_1 1│_1 0│_1 _1│
└────┴────┴─────┘
Ravel (,) and unbox (>) for a 9,2 list:
>,{ ;~(1 0 _1)
1 1
1 0
1 _1
0 1
0 0
0 _1
_1 1
_1 0
_1 _1

Related

Is the implementation of the "don't care" condition (X) in a Karnaugh map right?

I have a little confusion regarding the don't care condition in the Karnaugh map. As we all know, a Karnaugh map is used to achieve the
complete
accurate/precise
optimal
output equation of a 16-bit (or sometimes 32-bit) binary solution. Up to that point everything is fine, but the problem arises when we insert don't care conditions into it.
My question is this:
The don't care conditions are generated from the 0's and 1's of the truth table, and in the Karnaugh map we sometimes include and sometimes ignore don't care conditions in our groups. So is it an ambiguity in a Karnaugh map that we ignore don't care conditions because we don't know what is behind that don't care condition, a 1 or a 0? And afterwards, how can we confidently say that our solution is complete or accurate while we are ignoring the don't care conditions? Maybe the don't care we are ignoring is a 1 in the sum-of-products form and a 0 in the product-of-sums form, so the result may contain an error.
A "don't care" is just that. Something that we don't care about. It gives us an opportunity for additional optimization, because that value is not restricted. We are able to make it whatever we wish in order to achieve the most optimal solution.
Because we don't care about it, it doesn't matter what the value is. We will use whatever suits us best (lowest cost, fastest, etc... "optimal"). If it's better as a 1 in one implementation and a 0 in another, so be it, it doesn't matter.
Yes there is always another case with the don't care, but we can say it's complete/accurate because we don't care about the other one. We will treat it whichever way makes our implementation better.
Let's take a very simple example to understand what "don't care conditions" exactly mean.
Let F be a two-variable Boolean function defined by a user as follows:
A B F
0 0 1
0 1 0
This function is not defined when the value of A is 1.
This can mean one of two things:
The user doesn't care about the value that F produces when A = 1.
The user guarantees that A = 1 will never be given as an input to F.
Both of these cases are collectively known as "don't care conditions".
Now, the person implementing this function can use this fact to their advantage by extending the definition of F to include the cases when A = 1.
Using a K-Map,
B' B
A' | 1 | 0 |
A | X | X |
Without these don't care conditions, the algebraic expression of F would be written as A'B', i.e. F = A'B'.
But, if we modify this map as follows,
B' B
A' | 1 | 0 |
A | 1 | 0 |
then F can be expressed as F = B'.
By modifying this map, we have basically extended the definition of F as follows:
A B F
0 0 1
0 1 0
1 0 1
1 1 0
This method works because the person implementing the function already knows that either the user will not care what happens when A = 1, or the user will never use A = 1 as an input to F.
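To see the effect numerically, here is a quick check (written in J, the array language used elsewhere on this page, purely as a calculator):
   A =. 0 0 1 1
   B =. 0 1 0 1
   (-. A) *. -. B       NB. F = A'B' (don't cares forced to 0)
1 0 0 0
   -. B                 NB. F = B' (don't cares filled in as 1)
1 0 1 0
The two versions agree wherever A = 0, which is the only region the user actually cares about.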
Another example is a four-variable Boolean function in which each of those 4 variables denotes one distinct bit of a BCD value, and the function gives the output as 1 if the equivalent decimal number is even & 0 if it is odd. Here, since it is guaranteed that the input will never be one of 1010, 1011, 1100, 1101, 1110 & 1111, therefore the person implementing the function can extend the definition of the function to their advantage by including these cases as well.

Finding a hash function that has specific properties

My question relates a lot to this topic:
Hash function on list independant of order of items in it
Basically, I have a set of N numbers. N is fixed and is typically quite large, e.g. 1000. These numbers can be integers or floating-point; some or all of them may be equal. No number can be zero.
Every combination of K numbers where K is anything between 1 and N leads to the calculation of a hash.
Let's take an example with 3 numbers, that I will call A, B and C. I need to calculate a hash for the following combinations:
A
B
C
A+B
B+C
A+B+C
A+C
Things are order-independent: C+A is just equal to A+C. '+' can be a real addition or something different, like an XOR, but it is fixed. Likewise, every value may go through a function first, e.g.
f(A)
f(B)
f(A)+f(B)+f(C)
...
Now, I need to avoid collisions, but in a specific way only.
Each combination is tagged with a number, either 0 or 1.
Collisions may occur such that, if possible, only those tagged with the same number (0 or 1) may collide. In this case, many collisions are in fact welcome, especially if this makes the hash value compact. Ideally, the best hash would be only 1 bit long (0 or 1).
Collisions between combinations tagged with different numbers (0 and 1) should only rarely happen if possible - this is the whole point.
Let's take an example. Combination -> tag -> calculated hash value:
Combination Tag Hash
A -> 0 -> 0
B -> 1 -> 1
C -> 0 -> 2
A+B -> 0 -> 0
B+C -> 1 -> 1
A+B+C -> 1 -> 3
A+C -> 0 -> 2
Here, the hash values are valid because there is no collision between combinations of different tags. A collides with A+B for instance, but they're both tagged '0'.
However, the hash is not very good overall, because I need 4 bits, which seems a lot for only 4 input numbers.
How can I find a good (or good enough) hash function for this purpose?
Thank you for your insight.

Number of ways of counting nothing is one?

This is something that I routinely get wrong while solving problems: how do we decide the value of a recursive function when the argument is at the lowest extreme? An example will help:
Given n, find the number of ways to tile a 3xN grid using 2x1 blocks only. Rotation of blocks is allowed.
The DP solution is easily found as
f(n): the number of ways of tiling a 3xN grid
g(n): the number of ways of tiling a 3xN grid with a 1x1 block cut off at the rightmost column
f(n) = f(n-2) + 2*g(n-1)
g(n) = f(n-1) + g(n-2)
I initially thought that the base cases would be f(0)=0, g(0)=0, f(1)=0, g(1)=1. However, this yields a wrong answer. I then read somewhere that f(0)=1 and reasoned it out as
The number of ways of tiling a 3x0 grid is one, because there is exactly one way to do it: use no tiles (2x1 blocks) at all.
My question is, by that logic, shouldn't g(0) be also one. But, in the correct solution, g(0)=0. In general, when can we say that the number of ways of using nothing is one?
About your specific question of tiling, think this way:
How many ways are there to "tile a 3*0 grid"?
I would say: Just one way, don't do anything! and you can't "do nothing" any other way. (f(0) = 1)
How many ways are there to "tile a 3*0 grid, cutting that specific block off"?
I would say: Hey! That's impossible! You can't cut the specific block off since there is nothing. So, there's no way one can solve the task anyhow. (g(0) = 0)
Now, let's get to the general case:
There's no "general" rule about zero cases.
Depending on your problem, you may be able to somehow "interpret" the situation and find the reasonable value. Most of the time (depending on your definition of "ways"), the number of ways of doing "nothing" is 1, and the number of ways of doing something impossible is 0!
Warning! Being able to somehow "interpret" the zero case is not enough for the relation to be correct! You should recheck your recursive relation (i.e. the way you get the n-th value from the previous ones) to be applicable for the zero-to-one case as well, since most of the time this would be a "tricky" case.
You may find it easier to base your recursive relation on some non-zero case, if you find the zero-case being tricky, or confusing.
The way I see it, g(0) is invalid, since there is no way to cut a 1x1 block out of a 3x0 grid.
Invalid values are typically represented as 0, -∞ or ∞, but this largely depends on the problem. The number of ways to place something would make sense to be 0 to represent invalid values (otherwise your counts will be off). When working with min, you'd generally use ∞. When working with max, you'd generally use -∞ (or possibly 0).
Generally, the number of ways to place 0 objects or objects in a 0-sized space makes sense to be 1 (i.e. placing no objects) (so f(0) = 1). In a lot of other cases valid values would be 0.
But these are far from rules (and avoid blindly following rules, because you'll get hurt with exceptions); the best advice I can give - when in doubt, throw a few values in and see what happens.
In this case you can easily determine what the actual values for g(1), f(1), g(2) and f(2) should be, and use these to calculate g(0) and f(0):
g(1) = 1
f(1) = 0
g(2): (all invalid, since ? is not populated)
|X |X ?X
|? || --
-- ?| --
g(2) = 0, thus g(0) = 0 - f(1) = 0 - 0 = 0
f(2):
|| -- --
|| -- ||
-- -- ||
f(2) = 3, thus f(0) = 3 - 2*g(1) = 3 - 2 = 1
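If you want to sanity-check those base cases, here is a rough sketch (in J, the language used elsewhere on this page) that simply iterates the two recurrences forward from f(0)=1 and g(0)=0:
fg =: 3 : 0                         NB. illustrative verb: y is the largest n wanted
f =. 1 0                            NB. f(0), f(1)
g =. 0 1                            NB. g(0), g(1)
for. i. y - 1 do.
  f =. f , (_2 { f) + 2 * _1 { g    NB. f(n) = f(n-2) + 2*g(n-1)
  g =. g , (_2 { f) + _2 { g        NB. g(n) = f(n-1) + g(n-2)
end.
f ,: g
)
   fg 4
1 0 3 0 11
0 1 0 4  0
This reproduces g(2) = 0 and f(2) = 3 from the hand count above, and gives f(4) = 11, the usual count of domino tilings of a 3x4 grid.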

Check whether a point is inside a rectangle by bit operator

Days ago, my teacher told me it was possible to check if a given point is inside a given rectangle using only bit operators. Is it true? If so, how can I do that?
This might not answer your question but what you are looking for could be this.
These are the tricks compiled by Sean Eron Anderson, and he even put a bounty of $10 on finding a single bug. The closest thing I found there is a macro that determines whether a word x has a byte whose value is between m and n:
Determine if a word has a byte between m and n
When m < n, this technique tests if a word x contains an unsigned byte value, such that m < value < n. It uses 7 arithmetic/logical operations when n and m are constant.
Note: Bytes that equal n can be reported by likelyhasbetween as false positives, so this should be checked by character if a certain result is needed.
Requirements: x>=0; 0<=m<=127; 0<=n<=128
#define likelyhasbetween(x,m,n) \
((((x)-~0UL/255*(n))&~(x)&((x)&~0UL/255*127)+~0UL/255*(127-(m)))&~0UL/255*128)
This technique would be suitable for a fast pretest. A variation that takes one more operation (8 total for constant m and n) but provides the exact answer is:
#define hasbetween(x,m,n) \
((~0UL/255*(127+(n))-((x)&~0UL/255*127)&~(x)&((x)&~0UL/255*127)+~0UL/255*(127-(m)))&~0UL/255*128)
It is possible if the numbers are finite positive integers.
Suppose we have a rectangle represented by the corners (a1,b1) and (a2,b2). Given a point (x,y), we only need to evaluate the expression (a1<x) & (x<a2) & (b1<y) & (y<b2). So the problem now is to find the corresponding bit operations for a comparison c < d.
Let ci be the i-th bit of the number c (which can be obtained by masking and bit shifting). We claim that for numbers with at most n bits, c < d is equivalent to r_(n-1), where
r_i = ((ci^di) & ((!ci)&di)) | (!(ci^di) & r_(i-1))
Proof sketch: when ci and di differ, the left term decides the result (it is true exactly when (!ci)&di); otherwise the right term decides it (it reduces to r_(i-1), the comparison of the next bit).
The expression (!ci)&di is equivalent to the single-bit comparison ci < di. Hence this recursive relation compares the numbers bit by bit, from the most significant bit down, until we can decide whether c is smaller than d.
Hence there is a purely bitwise expression corresponding to the comparison operator, and so it is possible to test whether a point is inside a rectangle with pure bitwise operations.
Edit: There is actually no need for a conditional statement; just expand the recurrence for r_(n-1) fully, and you are done.
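As a concrete illustration of that recurrence, here is a sketch of my own (written in J, the language of the main question on this page, and assuming 8-bit non-negative values):
ltbits =: 4 : 0          NB. illustrative verb: 1 exactly when x < y, using single-bit ops
c =. |. (8#2) #: x       NB. bits of x, least significant first
d =. |. (8#2) #: y
r =. 0                   NB. with no bits examined yet, x is not < y
for_i. i. 8 do.          NB. fold in the bits from LSB up to MSB
  ci =. i { c
  di =. i { d
  ne =. ci ~: di         NB. do the bits differ? (XOR)
  r =. (ne *. (-. ci) *. di) +. (-. ne) *. r
end.
r
)
   3 ltbits 5
1
   5 ltbits 3
0
Only AND (*.), OR (+.), XOR (~:) and NOT (-.) are applied to single bits, mirroring the r_i relation above; four such comparisons ANDed together give the rectangle test.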
x,y is in the rectangle {x0<x<x1 and y0<y<y1} if {x0<x and x<x1 and y0<y and y<y1}
If we can simulate < with bit operators, then we're good to go.
What does it mean to say something is < in binary? Consider
a: 0 0 0 0 1 1 0 1
b: 0 0 0 0 1 0 1 1
In the above, a>b, because a contains the first 1 whose counterpart in b is 0. We are thus seeking the leftmost bit such that myBit!=otherBit. (== or equiv is a bitwise operator which can be represented with and/or/not)
However, we need some way to propagate information from one bit to many bits. So we ask ourselves this: can we "code" a function using only "bit" operators which is equivalent to if(q,k,a,b) = if q[k] then a else b? The answer is yes:
We create a bit-word consisting of replicating q[k] onto every bit. There are two ways I can think of to do this:
1) Left-shift by k, then arithmetic right-shift by wordsize-1 (efficient, but only works if you have a right shift that duplicates the sign bit)
2) Inefficient but theoretically correct way:
We left-shift q by k bits
We take this result and and it with 10000...0
We right-shift this by 1 bit, and or it with the non-right-shifted version. This copies the bit in the first place to the second place. We repeat this process until the entire word is the same as the first bit (e.g. 64 times)
Calling this result mask, our function is (mask and a) or (!mask and b): the result will be a if the kth bit of q is true, otherwise the result will be b
Taking the bit-vector c=a!=b and a==1111..1 and b==0000..0, we use our if function to successively test whether the first bit is 1, then the second bit is 1, etc:
a<b :=
if(c,0,
if(a,0, B_LESSTHAN_A, A_LESSTHAN_B),
if(c,1,
if(a,1, B_LESSTHAN_A, A_LESSTHAN_B),
if(c,2,
if(a,2, B_LESSTHAN_A, A_LESSTHAN_B),
if(c,3,
if(a,3, B_LESSTHAN_A, A_LESSTHAN_B),
if(...
if(c,64,
if(a,64, B_LESSTHAN_A, A_LESSTHAN_B),
A_EQUAL_B)
)
...)
)
)
)
)
This takes wordsize steps. It can however be written in 3 lines by using a recursively-defined function, or a fixed-point combinator if recursion is not allowed.
Then we just turn that into an even larger function: xMin<x and x<xMax and yMin<y and y<yMax

How can I generate the Rowland prime sequence idiomatically in J?

If you're not familiar with the Rowland prime sequence, you can find out about it here. I've created an ugly, procedural monadic verb in J to generate the first n terms in this sequence, as follows:
rowland =: monad define
result =. 0 $ 0
t =. 1 $ 7
while. (# result) < y do.
a =. {: t
n =. 1 + # t
t =. t , a + n +. a
d =. | -/ _2 {. t
if. d > 1 do.
result =. ~. result , d
end.
end.
result
)
This works perfectly, and it indeed generates the first n terms in the sequence. (By n terms, I mean the first n distinct primes.)
Here is the output of rowland 20:
5 3 11 23 47 101 7 13 233 467 941 1889 3779 7559 15131 53 30323 60647 121403 242807
My question is, how can I write this in more idiomatic J? I don't have a clue, although I did write the following function to find the differences between each successive number in a list of numbers, which is one of the required steps. Here it is, although it too could probably be refactored by a more experienced J programmer than I:
diffs =: monad : '}: (|@-/@(2&{.) , $:@}.) ^: (1 < #) y'
While I already marked estanford's answer as the correct one, I've come a long, long way with J since I asked this question. Here's a much more idiomatic way to generate the rowland prime sequence in J:
~. (#~ 1&<) | 2 -/\ (, ({: + ({: +. >:@#)))^:(1000 - #) 7
The expression (, ({: + ({: +. >:@#)))^:(1000 - #) 7 generates the so-called original sequence up to 1000 members. The first differences of this sequence can be generated by | 2 -/\, i.e., the absolute values of the differences of every two elements. (Compare this to my original, long-winded diffs verb from the original question.)
Lastly, we remove the ones and the duplicate primes ~. (#~ 1&<) to get the sequence of primes.
This is vastly superior to the way I was doing it before. It can easily be turned into a verb to generate n number of primes with a little recursion.
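For instance, one possible shape for such a verb (a rough sketch of my own; it uses a whilst. loop rather than recursion, and grows the underlying sequence 1000 terms at a time, an arbitrary choice):
rowlandN =: 3 : 0                              NB. sketch: first y distinct Rowland primes
seq =. 7
whilst. y > # ~. (#~ 1&<) | 2 -/\ seq do.
  seq =. (, ({: + ({: +. >:@#)))^:1000 seq     NB. extend the original sequence
end.
y {. ~. (#~ 1&<) | 2 -/\ seq                   NB. keep the primes in order of appearance
)
rowlandN 20 should then reproduce the output of rowland 20 above.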
I don't have a full answer yet, but this essay by Roger Hui has a tacit construct you can use to replace explicit while loops. Another (related) avenue would be to make the inner logic of the block into a tacit expression like so:
FUNWITHTACIT =: ] , {: + {: +. 1 + #
rowland =: monad define
result =. >a:
t =. 7x
while. (# result) < y do.
t =. FUNWITHTACIT t
d =. | -/ _2 {. t
result =. ~.result,((d>1)#d)
end.
result
)
(You might want to keep the if block for efficiency, though, since I wrote the code in such a way that result is modified regardless of whether or not the condition was met -- if it wasn't, the modification has no effect. The if logic could even be written back into the tacit expression by using the Agenda operator.)
A complete solution would consist of finding out how to represent all the logic inside the while loop of as a single function, and then use Roger's trick to implement the while logic as a tacit expression. I'll see what I can turn up.
As an aside, I got J to build FUNWITHTACIT for me by taking the first few lines of your code, manually substituting the functions you declared for the variable values (which I could do because they were all operating on a single argument in different ways), replacing every instance of t with y, and telling J to build the tacit equivalent of the resulting expression:
]FUNWITHTACIT =: 13 : 'y,({:y)+(1+#y)+.({:y)'
] , {: + {: +. 1 + #
Using 13 : to declare the monad is how J knows to take a monad (otherwise explicitly declared with 3 : 0, or monad define as you wrote in your program) and convert the explicit expression into a tacit expression.
EDIT:
Here are the functions I wrote for avenue (2) that I mentioned in the comment:
candfun =: 3 : '(] , {: + {: +. 1 + #)^:(y) 7'
firstdiff =: [: | 2 -/\ ]
triage =: [: /:~ [: ~. 1&~: # ]
rowland2 =: triage@firstdiff@candfun
This function generates the first n-many candidate numbers using the Rowland recurrence relation, evaluates their first differences, discards all first-differences equal to 1, discards all duplicates, and sorts them in ascending order. I don't think it's completely satisfactory yet, since the argument sets the number of candidates to try instead of the number of results. But, it's still progress.
Example:
rowland2 1000
3 5 7 11 13 23 47 101 233 467 941
Here's a version of the first function I posted, keeping the size of each argument to a minimum:
NB. rowrec takes y=(n,an) where n is the index and a(n) is the
NB. nth member of the rowland candidate sequence. The base case
NB. is rowrec 1 7. The output is (n+1,a(n+1)).
rowrec =: (1 + {.) , }. + [: +./ 1 0 + ]
rowland3 =: 3 : 0
result =. >a:
t =. 1 7
while. y > #result do.
ts =. (<"1)2 2 $ t,rowrec t
fdiff =. |2-/\,>(}.&.>) ts
if. 1~:fdiff do.
result =. ~. result,fdiff
end.
t =. ,>}.ts
end.
/:~ result
)
which finds the first y-many distinct Rowland primes and presents them in ascending order:
rowland3 20
3 5 7 11 13 23 47 53 101 233 467 941 1889 3779 7559 15131 30323 60647 121403 242807
Most of the complexity of this function comes from my handling of boxed arrays. It's not pretty code, but it only keeps 4+#result many data elements (which grows on a log scale) in memory during computation. The original function rowland keeps (#t)+(#result) elements in memory (which grows on a linear scale). rowland2 y builds an array of y-many elements, which makes its memory profile almost the same as rowland even though it never grows beyond a specified bound. I like rowland2 for its compactness, but without a formula to predict the exact size of y needed to generate n-many distinct primes, that task will need to be done on a trial-and-error basis and thus potentially use many more cycles than rowland or rowland3 on redundant calculations. rowland3 is probably more efficient than my version of rowland, since FUNWITHTACIT recomputes #t on every loop iteration -- rowland3 just increments a counter, which is less computationally intensive.
Still, I'm not happy with rowland3's explicit control structures. It seems like there should be a way to accomplish this behavior using recursion or something.
