What does <> (less than followed by greater than) mean in Mathematica? For example:
InterpolatingFunction[{{-6, 6}, {0, 6}}, <>][x, y]
I am quite confused by this kind of expression, which I received as output from NDSolve.
Mathematica expressions come with a head and then several arguments. For example, the output of some operation might give you the output List[1,2,3,4,5]. However, Mathematica knows this is a list and the output is formatted as {1,2,3,4,5} instead.
A function like Interpolation will give you a special type of object (an interpolating function) that has many components. Unlike a list, most of its components are irrelevant for everyday use, so you can ignore them. Mathematica hides them using <> so that you don't have to look at them.
f = Interpolation[RandomInteger[10, 10]]
output: InterpolatingFunction[{{1, 10}}, "<>"]
All it shows you is the Head, which is InterpolatingFunction, and then the first argument, which is the domain(s) of the function. There is only one variable, so there is only one domain, {1,10}, and the list of domains is {{1,10}}.
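To see where multiple domains would show up, here is a sketch with an interpolation in two variables (the data are made up for illustration; the exact display may vary between versions):
g = Interpolation[Flatten[Table[{x, y, x^2 + y^2}, {x, 0, 3}, {y, 0, 3}], 1]]
output: InterpolatingFunction[{{0, 3}, {0, 3}}, "<>"]
Two variables, so two domains.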
All the other arguments are there, so you can find them. You can evaluate f by:
f[2.3]
output: 0.7385
(Your output will vary!) But you can also look at the pieces of f:
f[[2]]
output: {4, 3, 0, {10}, {4}, 0, 0, 0, 0, Automatic}
The second piece, normally hidden, is a list of different properties of the interpolating function that we normally don't care about.
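The later parts hold the actual interpolation data. For instance (the exact part layout differs between Mathematica versions, so treat this as illustrative; compare with the full form shown further down):
f[[3]]  (* the grid of abscissas, {{1, 2, ..., 10}} in this example *)
f[[4]]  (* the data values that the function interpolates through *)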
You can change the head of many expressions using Apply (@@), which replaces the head of one expression with another. For example:
mylist = {2,3,4,5};
Plus @@ mylist
output: 14
You can do this with our function:
List @@ f
output: {{{1, 10}}, {4, 3, 0, {10}, {4}, 0, 0, 0, 0, Automatic},
{{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}}, {{9}, {2}, {0}, {6},
{10}, {6}, {7}, {5}, {0}, {6}}, {Automatic}}
All of that is the "guts" of the interpolating function. That's what's hidden behind the <>: it describes the interpolating function in full, but we don't really need to see it.
If you are looking for an explicit polynomial interpolation, you should be doing:
InterpolatingPolynomial[RandomInteger[10, 10], x]
which gives you a function of x (in a very non-simplified form) that is what you want.
Suppose I have two very large lists {a1, a2, …} and {b1, b2, …} where all ai and bj are large sparse arrays. For the sake of memory efficiency I store each list as one comprehensive sparse array.
Now I would like to compute some function f on all possible pairs of ai and bj where each result f[ai, bj] is a sparse array again. All these sparse arrays have the same dimensions, by the way.
While
Flatten[Outer[f, {a1, a2, ...}, {b1, b2, ...}, 1], 1]
returns the desired result (in principle), it appears to consume excessive amounts of memory, not least because the return value is a list of sparse arrays, whereas one comprehensive sparse array turns out to be much more efficient in my cases of interest.
Is there an efficient alternative to the above use of Outer?
More specific example:
{SparseArray[{{1, 1, 1, 1} -> 1, {2, 2, 2, 2} -> 1}],
SparseArray[{{1, 1, 1, 2} -> 1, {2, 2, 2, 1} -> 1}],
SparseArray[{{1, 1, 2, 1} -> 1, {2, 2, 1, 2} -> 1}],
SparseArray[{{1, 1, 2, 2} -> -1, {2, 2, 1, 1} -> 1}],
SparseArray[{{1, 2, 1, 1} -> 1, {2, 1, 2, 2} -> 1}],
SparseArray[{{1, 2, 1, 2} -> 1, {2, 1, 2, 1} -> 1}],
SparseArray[{{1, 2, 2, 1} -> -1, {2, 1, 1, 2} -> 1}],
SparseArray[{{1, 2, 2, 2} -> 1, {2, 1, 1, 1} -> 1}]};
ByteCount[%]
list = SparseArray[%%]
ByteCount[%]
Flatten[Outer[Dot, list, list, 1], 1];
ByteCount[%]
list1x2 = SparseArray[%%]
ByteCount[%]
Flatten[Outer[Dot, list1x2, list, 1], 1];
ByteCount[%]
list1x3 = SparseArray[%%]
ByteCount[%]
etc. Not only are the raw intermediate results of Outer (lists of sparse arrays) extremely inefficient; Outer also seems to consume far too much memory during the computation itself.
I will propose a solution which is rather complex but allows one to use only about twice as much memory during the computation as is needed to store the final result as a SparseArray. The price to pay for this is much slower execution.
The code
Sparse array construction / deconstruction API
Here is the code. First, a sparse array construction / deconstruction API, slightly modified to handle higher-dimensional sparse arrays, taken from this answer:
ClearAll[spart, getIC, getJR, getSparseData, getDefaultElement,
makeSparseArray];
HoldPattern[spart[SparseArray[s___], p_]] := {s}[[p]];
getIC[s_SparseArray] := spart[s, 4][[2, 1]];
getJR[s_SparseArray] := spart[s, 4][[2, 2]];
getSparseData[s_SparseArray] := spart[s, 4][[3]];
getDefaultElement[s_SparseArray] := spart[s, 3];
makeSparseArray[dims_List, jc_List, ir_List, data_List, defElem_: 0] :=
  SparseArray @@ {Automatic, dims, defElem, {1, {jc, ir}, data}};
Iterators
The following functions produce iterators. Iterators are a good way to encapsulate the iteration process.
ClearAll[makeTwoListIterator];
makeTwoListIterator[fname_Symbol, a_List, b_List] :=
With[{indices = Flatten[Outer[List, a, b, 1], 1]},
With[{len = Length[indices]},
Module[{i = 0},
ClearAll[fname];
fname[] := With[{ind = ++i}, indices[[ind]] /; ind <= len];
fname[] := Null;
fname[n_] :=
With[{ind = i + 1}, i += n;
indices[[ind ;; Min[len, ind + n - 1]]] /; ind <= len];
fname[n_] := Null;
]]];
Note that I could have implemented the above function more memory-efficiently, without using Outer inside it, but for our purposes this won't be the major concern.
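For illustration only, here is a sketch of such an Outer-free variant, computing each index pair on the fly from a running counter; the name makeLazyTwoListIterator is mine, and only the batch form fname[n] from above is reproduced:
ClearAll[makeLazyTwoListIterator];
makeLazyTwoListIterator[fname_Symbol, a_List, b_List] :=
  With[{lb = Length[b], len = Length[a]*Length[b]},
   Module[{i = 0},
    ClearAll[fname];
    (* return the next n pairs, computed on demand *)
    fname[n_] :=
     With[{ind = i + 1}, i += n;
      Table[{a[[Quotient[k - 1, lb] + 1]], b[[Mod[k - 1, lb] + 1]]},
        {k, ind, Min[len, ind + n - 1]}] /; ind <= len];
    fname[n_] := Null;
    ]];
It would be used exactly like the batch form above, e.g. makeLazyTwoListIterator[next, {a, b, c}, {d, e}]; next[2].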
Here is a more specialized version, which produces iterators for pairs of 2-dimensional indices.
ClearAll[make2DIndexInterator];
make2DIndexInterator[fname_Symbol, i : {iStart_, iEnd_}, j : {jStart_, jEnd_}] :=
  makeTwoListIterator[fname, Range @@ i, Range @@ j];
make2DIndexInterator[fname_Symbol, ilen_Integer, jlen_Integer] :=
make2DIndexInterator[fname, {1, ilen}, {1, jlen}];
Here is how this works:
In[14]:=
makeTwoListIterator[next,{a,b,c},{d,e}];
next[]
next[]
next[]
Out[15]= {a,d}
Out[16]= {a,e}
Out[17]= {b,d}
We can also use this to get batch results:
In[18]:=
makeTwoListIterator[next,{a,b,c},{d,e}];
next[2]
next[2]
Out[19]= {{a,d},{a,e}}
Out[20]= {{b,d},{b,e}}
We will be using this second form.
SparseArray - building function
This function will build a SparseArray object iteratively, by getting chunks of data (also in SparseArray form) and gluing them together. It is basically the code used in this answer, packaged into a function. It accepts the code piece used to produce the next chunk of data, wrapped in Hold (I could alternatively have made it HoldAll).
Clear[accumulateSparseArray];
accumulateSparseArray[Hold[getDataChunkCode_]] :=
Module[{start, ic, jr, sparseData, dims, dataChunk},
start = getDataChunkCode;
ic = getIC[start];
jr = getJR[start];
sparseData = getSparseData[start];
dims = Dimensions[start];
While[True, dataChunk = getDataChunkCode;
If[dataChunk === {}, Break[]];
ic = Join[ic, Rest@getIC[dataChunk] + Last@ic];
jr = Join[jr, getJR[dataChunk]];
sparseData = Join[sparseData, getSparseData[dataChunk]];
dims[[1]] += First[Dimensions[dataChunk]];
];
makeSparseArray[dims, ic, jr, sparseData]];
Putting it all together
This function is the main one, putting it all together:
ClearAll[sparseArrayOuter];
sparseArrayOuter[f_, a_SparseArray, b_SparseArray, chunkSize_: 100] :=
Module[{next, wrapperF, getDataChunkCode},
make2DIndexInterator[next, Length@a, Length@b];
wrapperF[x_List, y_List] := SparseArray[f @@@ Transpose[{x, y}]];
getDataChunkCode :=
With[{inds = next[chunkSize]},
If[inds === Null, Return[{}]];
wrapperF[a[[#]] & /@ inds[[All, 1]], b[[#]] & /@ inds[[All, -1]]]
];
accumulateSparseArray[Hold[getDataChunkCode]]
];
Here, we first produce the iterator which will give us, on demand, portions of the index-pair list used to extract the elements (also SparseArrays). Note that we will generally extract more than one pair of elements from the two large input SparseArray-s at a time, to speed up the code. How many pairs we process at once is governed by the optional chunkSize parameter, which defaults to 100. We then construct the code to process these elements and put the result back into a SparseArray, using an auxiliary function wrapperF. The use of iterators wasn't absolutely necessary (I could have used Sow and Reap instead, as in other answers), but it allowed me to decouple the logic of iteration from the logic of generic accumulation of sparse arrays.
Benchmarks
First we prepare large sparse arrays and test our functionality:
In[49]:=
arr = {SparseArray[{{1,1,1,1}->1,{2,2,2,2}->1}],SparseArray[{{1,1,1,2}->1,{2,2,2,1}->1}],
SparseArray[{{1,1,2,1}->1,{2,2,1,2}->1}],SparseArray[{{1,1,2,2}->-1,{2,2,1,1}->1}],
SparseArray[{{1,2,1,1}->1,{2,1,2,2}->1}],SparseArray[{{1,2,1,2}->1,{2,1,2,1}->1}]};
In[50]:= list=SparseArray[arr]
Out[50]= SparseArray[<12>,{6,2,2,2,2}]
In[51]:= larger = sparseArrayOuter[Dot,list,list]
Out[51]= SparseArray[<72>,{36,2,2,2,2,2,2}]
In[52]:= (large= sparseArrayOuter[Dot,larger,larger])//Timing
Out[52]= {0.047,SparseArray[<2592>,{1296,2,2,2,2,2,2,2,2,2,2}]}
In[53]:= SparseArray[Flatten[Outer[Dot,larger,larger,1],1]]==large
Out[53]= True
In[54]:= MaxMemoryUsed[]
Out[54]= 21347336
Now we do the power tests
In[55]:= (huge= sparseArrayOuter[Dot,large,large,2000])//Timing
Out[55]= {114.344,SparseArray[<3359232>,{1679616,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2}]}
In[56]:= MaxMemoryUsed[]
Out[56]= 536941120
In[57]:= ByteCount[huge]
Out[57]= 262021120
In[58]:= (huge1 = Flatten[Outer[Dot,large,large,1],1]);//Timing
Out[58]= {8.687,Null}
In[59]:= MaxMemoryUsed[]
Out[59]= 2527281392
For this particular example, the suggested method is 5 times more memory-efficient than the direct use of Outer, but about 15 times slower. I had to tweak the chunkSize parameter (the default is 100, but for the above I used 2000) to get the optimal speed / memory-use combination. As a peak value, my method only used twice as much memory as is needed to store the final result. The degree of memory savings compared to the Outer-based method will depend on the sparse arrays in question.
If lst1 and lst2 are your lists,
Reap[
Do[Sow[f[#1[[i]], #2[[j]]]],
  {i, 1, Length@#1},
  {j, 1, Length@#2}
] &[lst1, lst2];
] // Last // Last
does the job and may be more memory-efficient. On the other hand, maybe not. Nasser is right, an explicit example would be useful.
EDIT: Using Nasser's randomly-generated arrays, and for len=200, MaxMemoryUsed[] indicates that this form needs 170MB while the Outer form in the question takes 435MB.
Using your example list data, I believe that you will find the ability to Append to a SparseArray quite helpful.
acc = SparseArray[{}, {1, 2, 2, 2, 2, 2, 2}]
Do[AppendTo[acc, i.j], {i, list}, {j, list}]
Rest[acc]
I need Rest to drop the first zero-filled tensor in the result. The second argument of the seed SparseArray must be the dimensions of each of your elements with a prefixed 1. You may need to explicitly specify a background for the seed SparseArray to optimize performance.
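For example, the background is the optional third argument of the seed SparseArray; a minimal sketch (0 is already the default, so this only matters if your background value differs):
acc = SparseArray[{}, {1, 2, 2, 2, 2, 2, 2}, 0]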
I have a list and I want to apply a logical test to each element, and if any one of them does not satisfy the condition, return False. I want to write this in Mathematica or find a built-in function, but it seems ForAll does not actually do that.
My question is: how to do this most efficiently?
Bonus: how about, similarly, an Exists function: i.e., if any element in the list satisfies the condition, return True.
The answer to the first portion of your question might be something along these lines:
forAll[list_, cond_] := Select[list, ! cond@# &, 1] === {};
which is used like this:
forAll[{1, 2, 3, 3.5}, IntegerQ]
The "exists" function is already natively implemented as MemberQ. It could be reimplemented as:
exists[list_,cond_] := Select[list, cond, 1] =!= {};
Use it like
exists[Range@100, (10 == # &)]
which returns True, since 10 is an element; the Select returns {10}, which is not equal to {}.
This answer is not intended to show the most efficient method, but rather an alternative method that serves the pedagogical purpose of showing some important core functionality in Mathematica.
nixeagle's answer avoids explicitly testing every element of the list. If the test doesn't lend itself to inclusion in the third argument of Select, then the below might be useful.
To do this, you need to learn about the standard Or and And functions, as well as the Map (/@) and Apply (@@) commands, which are extremely important for any Mathematica user to learn. (see this tutorial)
Here is a simple example.
In[2]:= data = RandomInteger[{0, 10}, {10}]
Out[2]= {10, 1, 0, 10, 1, 5, 2, 2, 4, 1}
In[4]:= # > 5 & /@ data
Out[4]= {True, False, False, True, False, False, False, False, False, \
False}
In[6]:= And @@ (# > 5 & /@ data)
Out[6]= False
What is going on here is that you are mapping the function ("greater than 5") to each element of the list using Map, to get a list of True/False values. You are then applying the standard logical function And to the whole list to get the single Boolean value.
These are all very much core functionality in Mathematica and I recommend you read the documentation for these functions carefully and practice using them.
This is not the most efficient method, but for small problems you will not notice the difference.
In[11]:= Do[Select[data, ! # > 5 &, 1] === {}, {10000}] // Timing
Out[11]= {0.031, Null}
In[12]:= Do[And @@ (# > 5 & /@ data);, {10000}] // Timing
Out[12]= {0.11, Null}
For Exists, the alternative to Select would be MatchQ for patterns or MemberQ for explicit values. The documentation has some useful examples.
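For instance (illustrative only):
MemberQ[{1, 2, 3, 3.5}, 3]                   (* True: the explicit value 3 is present *)
MatchQ[{1, 2, 3, 3.5}, {___, _Real, ___}]    (* True: some element matches the pattern _Real *)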
Not to be taken too seriously, but this
ClearAll[existsC];
existsC[cond_] := With[
{c = cond},
Compile[
{{l, _Integer, 1}},
Module[
{flag = False, len = Length@l},
Do[
If[cond[l[[i]]],
flag = True; Break[];
];,
{i, 1, len}];
flag
],
CompilationTarget -> "C"
]
]
appears to be around 300 times faster than nixeagle's solutions on my machine. What this does is to emit a compiled function which takes a list and compares its elements to the given condition (fixed at compile-time), returning True if any of them matches.
It is used as follows: compile with the appropriate cond, e.g.
t = existsC[# == 99999 &];
and then
t[Range@100000] // timeIt
returns 2.33376*10^-6 (a worst-case scenario, as I am just searching linearly and the matching element is at the end) while
exists[Range@100000, (99999 == # &)] // timeIt
returns 0.000237162 (here, timeIt is this).
A pattern based approach:
forAll[list_, test_] := MatchQ[ list, _[__?test] ]
MemberQ already implements exists.
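For example (illustrative):
forAll[{2, 4, 6}, EvenQ]          (* True *)
forAll[{1, 2, 3, 3.5}, IntegerQ]  (* False *)
MemberQ[{1, 2, 3}, _?EvenQ]       (* True: "there exists an even element" *)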
Mathematica 10 has a new function for this: AllTrue. When all elements pass the test my function appears to be a bit faster:
a = Range[2, 1*^7, 2];
AllTrue[a, EvenQ] // Timing // First
forAll[a, EvenQ] // Timing // First
1.014007
0.936006
However with an early exit the benefit of the new function becomes apparent:
a[[123456]] = 1;
AllTrue[a, EvenQ] // Timing // First
forAll[a, EvenQ] // Timing // First
0.031200
0.265202
Even though && and || perform short-circuit evaluation, i.e., don't evaluate their arguments unnecessarily, I suspect that solutions based on Select[] or Map[] won't benefit much from this. That's because they apply the logical test to every element, building a list of Boolean truth-values before performing the conjunction/disjunction among them. If the test you've specified is slow, it can be a real bottleneck.
So here is a variant that does short-circuit evaluation of the condition as well:
allSatisfy[list_, cond_] :=
Catch[Fold[If[cond[#2], True, Throw[False]] &, True, list]]
Testing if any element in the list satisfies the condition is nicely symmetric:
anySatisfy[list_, cond_] :=
Catch[Fold[If[cond[#2], Throw[True], False] &, False, list]]
Of course this could equally have been done (and candidly speaking, more easily) using procedural loops such as While[], but I have a soft spot for functional programming!
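A quick illustration of the short-circuiting (the second call stops as soon as it reaches 11, rather than testing all million elements):
allSatisfy[{2, 4, 5, 6}, EvenQ]    (* False: stops at the 5 *)
anySatisfy[Range[10^6], # > 10 &]  (* True: stops at 11 *)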
nixeagle got the bonus part, but the way I would've done the first part is as follows:
AllSatisfy[expr_, cond_] := Length@Select[expr, cond] == Length@expr
There's a simple solution:
In[1401]:= a = {1, 2, 3}
Out[1401]= {1, 2, 3}
In[1398]:= Boole[Thread[a[[2]] == a]]
Out[1398]= {0, 1, 0}
In[1400]:= Boole[Thread[a[[2]] >= a]]
Out[1400]= {1, 1, 0}
In[1402]:= Boole[Thread[a[[2]] != a]]
Out[1402]= {1, 0, 1}
Success!
Related to this question, I am wondering about algorithms (and actual code in Java/C/C++/Python/etc., if you have it!) to generate all combinations of r elements from a list with m elements in total. Some of these m elements may be repeated.
Thanks!
Recurse for each element type:
int recurseMe(list<list<item>> items, int r, list<item> container)
{
if (r == container.length)
{
//print out your collection;
return 1;
}
    else if (container.length > r) //already overshot the target size
{
return 0;
}
list<item> firstType = items[0];
int score = 0;
for(int i = 0; i < firstType.length; i++)
{
score += recurseMe(items without items[0], r, container + i items from firstType);
}
return score;
}
This takes as input a list containing lists of items, assuming each inner list represents a unique type of item. You may have to build a sorting function to feed as input to this.
//start with a list<item> original;
list<list<item>> grouped = new list<list<item>>();
list<item> sorted = original.sort();//use whichever method for this
list<item> temp = null;
item current = null;
for(int x = 0; x < original.length; x++)
{
    if (sorted[x] == current)
    {
        temp.add(current);
    }
    else
    {
        if (temp != null && temp.isNotEmpty)
            grouped.add(temp);
        temp = new list<item>();
        current = sorted[x]; //remember which element type we are now collecting
        temp.add(current);
    }
}
if (temp != null && temp.isNotEmpty)
grouped.add(temp);
//grouped is the result
This sorts the list, then creates sublists containing elements that are the same, inserting them into the list of lists grouped.
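For reference, in Mathematica this grouping step is essentially built in; Gather does the same job (and does not even require sorting first):
Gather[{a, a, b, b, c, d, d, d, d}]
(* {{a, a}, {b, b}, {c}, {d, d, d, d}} *)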
Here is a recursion that I believe is closely related to Jean-Bernard Pellerin's algorithm, in Mathematica.
This takes input as the number of each type of element. The output is in similar form. For example:
{a,a,b,b,c,d,d,d,d} -> {2,2,1,4}
Function:
f[k_, {}, c__] := If[+c == k, {{c}}, {}]
f[k_, {x_, r___}, c___] := Join @@ (f[k, {r}, c, #] & /@ 0~Range~Min[x, k - +c])
Use:
f[4, {2, 2, 1, 4}]
{{0, 0, 0, 4}, {0, 0, 1, 3}, {0, 1, 0, 3}, {0, 1, 1, 2}, {0, 2, 0, 2},
{0, 2, 1, 1}, {1, 0, 0, 3}, {1, 0, 1, 2}, {1, 1, 0, 2}, {1, 1, 1, 1},
{1, 2, 0, 1}, {1, 2, 1, 0}, {2, 0, 0, 2}, {2, 0, 1, 1}, {2, 1, 0, 1},
{2, 1, 1, 0}, {2, 2, 0, 0}}
An explanation of this code was requested. It is a recursive function that takes a variable number of arguments. The first argument is k, length of subset. The second is a list of counts of each type to select from. The third argument and beyond is used internally by the function to hold the subset (combination) as it is constructed.
This definition therefore is used when there are no more items in the selection set:
f[k_, {}, c__] := If[+c == k, {{c}}, {}]
If the total of the values of the combination (its length) is equal to k, then return that combination, otherwise return an empty set. (+c is shorthand for Plus[c])
Otherwise:
f[k_, {x_, r___}, c___] := Join @@ (f[k, {r}, c, #] & /@ 0~Range~Min[x, k - +c])
Reading left to right:
Join is used to flatten out a level of nested lists, so that the result is not an increasingly deep tensor.
f[k, {r}, c, #] & calls the function, dropping the first position of the selection set (x), and adding a new element to the combination (#).
/@ 0~Range~Min[x, k - +c] maps this over each integer between zero and the lesser of the first element of the selection set and k minus the total of the combination so far, which is the maximum that can be selected without exceeding the combination size k.
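If you would rather have the combinations as explicit element lists than as count vectors, a small helper can convert them back, given the distinct elements in order (toElements is a hypothetical name of mine, not part of the answer above):
toElements[counts_List, elems_List] := Join @@ MapThread[ConstantArray, {elems, counts}]
toElements[{1, 2, 0, 1}, {a, b, c, d}]
(* {a, b, b, d} *)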
I'm going to make this an answer rather than a bunch of comments.
My original comment was:
The CombinationGenerator Java class systematically generates all
combinations of n elements, taken r at a time. The algorithm is
described by Kenneth H. Rosen, Discrete Mathematics and Its
Applications, 2nd edition (NY: McGraw-Hill, 1991), pp. 284-286." See
merriampark.com/comb.htm. It has a link to source code.
As you pointed out in your comment, you want unique combinations. So, given the array ["a", "a", "b", "b"], you want it to generate aab, abb. The code I linked generates aab, aab, baa, baa.
With that array, removing duplicates is very easy. Depending on how you implement it, you either let it generate the duplicates and then filter them after the fact (i.e. selecting unique elements from an array), or you modify the code to include a hash table so that when it generates a combination, it checks the hash table before putting the item into the output array.
Looking something up in a hash table is an O(1) operation, so either of those is going to be efficient. Doing it after the fact will be a little bit more expensive, because you'll have to copy items. Still, you're talking O(n), where n is the number of combinations generated.
There is one complication: order is irrelevant. That is, given the array ["a", "b", "a", "b"], the code will generate aba, abb, aab, bab. In this case, aba and aab are duplicate combinations, as are abb and bab, and using a hash table isn't going to remove those duplicates for you. You could, though, create a bit mask for each combination, and use the hash table idea with the bit masks. This would be slightly more complicated, but not terribly so.
If you sort the initial array first, so that duplicate items are adjacent, then the problem goes away and you can use the hash table idea.
There's undoubtedly a way to modify the code to prevent it from generating duplicates. I can see a possible approach, but it would be messy and expensive. It would probably make the algorithm slower than if you just used the hash table idea. The approach I would take:
Sort the input array
Use the linked code to generate the combinations
Use a hash table or some other code to select unique items.
Although ... a thought that occurred to me.
Is it true that if you sort the input array, then any generated duplicates will be adjacent? That is, given the input array ["a", "a", "b", "b"], then the generated output will be aab, aab, abb, abb, in that order. This will be implementation dependent, of course. But if it's true in your implementation, then modifying the algorithm to remove duplicates is a simple matter of checking to see if the current combination is equal to the previous one.
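For comparison, in Mathematica the sort-then-drop-duplicates idea can be sketched like this (Subsets treats equal elements at different positions as distinct, so duplicate combinations appear and are then removed):
DeleteDuplicates[Subsets[Sort[{"a", "b", "a", "b"}], {3}]]
(* {{"a", "a", "b"}, {"a", "b", "b"}} *)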
I have the following nested table:
(myinputmatrix = Table[Nest[function, myinputmatrix[[i]][[j]],
myinputmatrix[[i]][[j]][[2]][[2]] +
myinputmatrix[[i]][[j]][[3]][[2]]], {i,
Dimensions[myinputmatrix][[1]]}, {j,
Dimensions[myinputmatrix][[2]]}]) // TableForm
fq[k_?NumericQ] := Count[RandomReal[{0, 1}, k], x_ /; x < .1]
function[x_List] := ReplacePart[
x, {{2, 1} -> x[[2]][[1]] - #1,
{2, 2} -> x[[2]][[2]] + #1,
{3, 1} -> x[[3]][[1]] - #2, {3, 2} ->
x[[3]][[2]] + #2}] &[fq[x[[2]][[1]]], fq[x[[2]][[1]]]];
My problem is that in the {2, 2} part above I want to add not just the newest #1, but all of the #1 values accumulated over the n iterations of the Nest.
If I try the function
function[x_List] := ReplacePart[
x, {{2, 1} -> x[[2]][[1]] - #1, {2, 2} -> #1,
{3, 1} -> x[[3]][[1]] - #2, {3, 2} -> #2}] &[fq[x[[2]][[1]]],
fq[x[[2]][[1]]]];
With this I get as a result only the last value of fq[k]. I thought of replacing that part in my table with 0, but that is not going to work since I am using it in my nested list. I also thought of subtracting that part from my initial table, but I am not sure which way is best, or whether the way I am thinking is correct. Can anyone help me?
If I may restate the problem and hopefully clarify the question for myself. At each iteration in the Nest, you want to add not the current (random) output from fq, but the cumulation of the current and all past values of it. But because the random output depends at each iteration on the input matrix, you need to calculate both the random number and the current value of the matrix in the same iteration.
If that hadn't been true you could use Fold.
Restating fq as Sasha suggested, EDIT: with some type checking to avoid problems with incorrect input:
fq[k_Integer?Positive]:=RandomVariate[BinomialDistribution[k,.1]]
You might want to add some other error checking code. Something like this, depending on your requirements, might do.
fq[0]:= 0;
fq[k_Real?Positive]:=RandomVariate[BinomialDistribution[Round[k],.1]]
You need function to take the random numbers as parameters. EDIT 1 and 2: I have changed the syntax of this function to use the parameters explicitly instead of the original question's anonymous function within a function. This should avoid some syntax errors. Also note that I have used NumericQ rather than Real as the type for the rv1 and rv2 parameters, because they can be integers at the start of the Nest iteration.
function[x_List, rv1_?NumericQ, rv2_?NumericQ] := ReplacePart[
x, {{2, 1} -> x[[2]][[1]] - rv1, {2, 2} -> rv1,
{3, 1} -> x[[3]][[1]] - rv2, {3, 2} -> rv2}]
Then pass the current random number, as a local constant using With, to a Nest function that works on a list containing your matrix and the cumulation of the random variates. I have used myoutputmatrix because I really don't like the idea of overwriting existing expressions all the time. That's just me. The one other thing is that you need to set n, the number of iterations. I've set it to 5, but you can make it a parameter of a function if you want (see below).
(myoutputmatrix = Table[ First[Nest[With[{rv=fq[#1[[1]][[2]][[1]] ]},
{function[#1[[1]],rv, rv+#1[[2]] ],rv+#1[[2]] }]&,
{ myinputmatrix[[i]][[j]], 0 }, 5]],
{i, Dimensions[myinputmatrix][[1]]}, {j,
Dimensions[myinputmatrix][[2]]}]) // TableForm
The First is there because in the end you only want the matrix, not the cumulation of the random variates.
outputmatrix[input_List, n_Integer?Positive] /;
Length[Dimensions[input]] == 4 :=
Table[First[
Nest[With[{rv = fq[#1[[1]][[2]][[1]]]}, {function[#1[[1]], rv,
rv + #1[[2]]], rv + #1[[2]]}] &, {input[[i]][[j]], 0}, n]],
{i, Dimensions[input][[1]]}, {j, Dimensions[input][[2]]}]
outputmatrix[myinputmatrix, 10] // TableForm
EDIT I have checked this now and it runs, but note that you can get negative numbers in the output, which I don't think is what you want.