I am currently in the final phase of developing my first R package, which fits multinomial processing tree (MPT) models (see its homepage for the current version). The model fitting is done via R's optim function.
Today I played around with it for the first time on a Windows 7 machine and noticed something really strange: optim does not converge successfully when using the 64-bit version of R. This looks like a bug to me (especially as nlminb converges under both R versions). As optim is at the core of my package, any help on this question is greatly appreciated.
Here is a minimal reproducible example (normally the model is specified via expressions rather than hard-coded in the objective function, but for simplicity I put everything into the objective function here):
# The objective function:
llk.tree <- function(Q, data) {
    p <- Q[1]
    q <- Q[2]
    r <- Q[3]
    e <- rep(NA, 4)
    e[1] <- p * q * r
    e[2] <- p * q * (1 - r)
    e[3] <- p * (1 - q) * r
    e[4] <- p * (1 - q) * (1 - r) + (1 - p)
    llk <- sum(data * log(e))
    if (is.na(llk))
        llk <- -1e+19
    if (llk == -Inf)
        llk <- -1e+19
    return(-llk)
}
# The call to optim:
optim(runif(3, 0.1, 0.9), llk.tree, data = c(24, 65, 30, 61), method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))
This example reproduces an example from a seminal paper on MPTs by Riefer & Batchelder, namely row 1 in Table 1, p. 327 (the expected parameter values are p = 1, q = .49, and r = .30).
Running it on 32-bit R always gives the correct result (tried with versions 2.12.2 and 2.13.0):
$par
[1] 1.0000000 0.4944449 0.3000001
$value
[1] 234.7110
$counts
function gradient
11 11
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
(Note that counts may differ due to the random starting values.)
Running it on 64-bit R, on the other hand, may produce a (wrong) result such as:
$par
[1] 0.8668081 0.6326655 0.1433857
$value
[1] 257.7328
$counts
function gradient
3 3
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
The returned value of the objective function and the returned parameter values differ on each run, but counts is always 3!
Note that nlminb produces correct results on both 32-bit and 64-bit R:
nlminb(runif(3, 0.1, 0.9), llk.tree, data = c(24, 65, 30, 61), lower = 0, upper = 1)
$par
[1] 1.0000000 0.4944445 0.3000000
$objective
[1] 234.711
$convergence
[1] 0
$iterations
[1] 14
$evaluations
function gradient
19 55
$message
[1] "relative convergence (4)"
One final note: we have examples (this is our simplest example model) that work with optim on 64-bit R, but more examples (like the one shown here) that do not.
And counts is always 3...
EDIT:
When fixing the starting values (thanks to Joshua Ulrich), optim does not move away from those values under 64-bit R:
optim(c(0.5, 0.5, 0.5), llk.tree, data = c(24, 65, 30, 61), method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))
$par
[1] 0.5 0.5 0.5
$value
[1] 276.1238
$counts
function gradient
3 3
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
We did some more testing today and found the same problem as described in the question on Linux with 64-bit R.
However, thanks to an ingenious idea from Joachim Vandekerckhove, we tried a simple change that solves the problem (although the issue remains suspicious). At the end of the objective function, if llk is -Inf (or NA) we replace it with an extremely negative value (originally -1e19).
Using a value of smaller magnitude (e.g., -1e10) removes the problem on 64-bit machines (so far tested on Linux):
llk.tree <- function(Q, data) {
    p <- Q[1]
    q <- Q[2]
    r <- Q[3]
    e <- rep(NA, 4)
    e[1] <- p * q * r
    e[2] <- p * q * (1 - r)
    e[3] <- p * (1 - q) * r
    e[4] <- p * (1 - q) * (1 - r) + (1 - p)
    llk <- sum(data * log(e))
    if (is.na(llk))
        llk <- -1e+10
    if (llk == -Inf)
        llk <- -1e+10
    return(-llk)
}
# The call to optim:
optim(runif(3, 0.1, 0.9), llk.tree, data = c(24, 65, 30, 61), method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))
This returns the correct result!
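An alternative that avoids the hard-coded penalty altogether (a sketch of my own that we have not verified on the affected machines; llk.tree.safe is just an illustrative name) is to clamp the category probabilities away from zero before taking logs, so log() never returns -Inf:
# Sketch only: clamp probabilities instead of replacing -Inf afterwards.
llk.tree.safe <- function(Q, data) {
    p <- Q[1]; q <- Q[2]; r <- Q[3]
    e <- c(p * q * r,
           p * q * (1 - r),
           p * (1 - q) * r,
           p * (1 - q) * (1 - r) + (1 - p))
    e <- pmax(e, .Machine$double.eps)  # keep category probabilities strictly positive
    -sum(data * log(e))
}
optim(runif(3, 0.1, 0.9), llk.tree.safe, data = c(24, 65, 30, 61),
      method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))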
Related
I saw this in an algorithm textbook and I am confused about the recursive function in the middle. If you can explain it with an example, such as 4/2, that would be great!
function divide(x, y)
Input:  Two n-bit integers x and y, where y ≥ 1
Output: The quotient and remainder of x divided by y

    if x = 0: return (q, r) = (0, 0)
    (q, r) = divide(floor(x/2), y)
    q = 2 · q, r = 2 · r
    if x is odd: r = r + 1
    if r ≥ y: r = r − y, q = q + 1
    return (q, r)
It is essentially operating on the binary digits of x: each recursive call drops the lowest bit (a right shift), and on the way back up the quotient and remainder are doubled, the dropped bit is added back to the remainder, and y is subtracted once if the remainder has grown to y or more. A more interesting case than 4/2 is 13/3 (13 is 1101 in binary):
divide(13, 3) // initial binary value - 1101
divide(6, 3) // shift right - 110
divide(3, 3) // shift right - 11
divide(1, 3) // shift right - 1 (this is the most significant bit)
divide(0, 3) // shift right - 0 (no more significant bits)
return(0, 0) // roll it back up
return(0, 1) // q = 2·0 = 0, r = 2·0 = 0; x (1) is odd, so r = 1
return(1, 0) // q = 2·0 = 0, r = 2·1 = 2; x (3) is odd, so r = 3; r ≥ y, so r = 0 and q = 1
return(2, 0) // q = 2·1 = 2, r = 2·0 = 0; x (6) is even and r < y, so nothing changes
return(4, 1) // q = 2·2 = 4, r = 2·0 = 0; x (13) is odd, so r = 1; hence 13/3 = 4 remainder 1
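To make the trace easy to check, here is a small R transcription of the pseudocode (my own sketch, not from the textbook):
divide <- function(x, y) {
    if (x == 0) return(c(0, 0))            # (quotient, remainder)
    qr <- divide(x %/% 2, y)               # recurse with the lowest bit of x dropped
    q <- 2 * qr[1]
    r <- 2 * qr[2]
    if (x %% 2 == 1) r <- r + 1            # put the dropped bit back into the remainder
    if (r >= y) { r <- r - y; q <- q + 1 } # remainder reached y: carry one into the quotient
    c(q, r)
}
divide(13, 3)  # 4 1, i.e. 13 = 4*3 + 1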
Recently I found this in some code I wrote a few years ago. It was used to rationalize a real value (within a tolerance) by determining a suitable denominator and then checking whether the difference between the original real and the rational was small enough.
Edit to clarify: I don't actually want to convert all real values. For instance, I could choose a maximum denominator of 14, and a real value that equals 7/15 would then stay as-is. This isn't obvious from the algorithms here because the maximum denominator is an outside variable.
The algorithm to get the denominator was this (pseudocode):
denominator(x)
    frac = fractional part of x
    recip = 1/frac
    if (frac < tol)
        return 1
    else
        return recip * denominator(recip)
    end
end
It seems to be based on continued fractions, although it became clear on looking at it again that it was wrong. (It worked for me because it would eventually just spit out infinity, which I handled outside, but it was often really slow.) The value of tol doesn't really do anything except for termination or for numbers that end up close; I don't think it's related to the tolerance for the real-to-rational conversion.
I've replaced it with an iterative version that is not only faster but that I'm pretty sure won't fail in theory (d = 1 to start with, and the fractional part is positive, so recip is always >= 1):
denom_iter(x, d)
    return d if d > maxd
    frac = fractional part of x
    if (frac = 0)
        return d
    else
        recip = 1/frac
        return denom_iter(recip, d*recip)
    end
end
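For concreteness, here is a direct R transcription of that pseudocode (my own sketch; the helper frac_part and the default maxd are purely illustrative):
frac_part <- function(x) x - floor(x)

denom_iter <- function(x, d = 1, maxd = 1000) {
    if (d > maxd) return(d)     # give up once the denominator bound is exceeded
    frac <- frac_part(x)
    if (frac == 0) return(d)    # x is (numerically) an integer, so d is the denominator
    recip <- 1 / frac
    denom_iter(recip, d * recip, maxd)
}

denom_iter(0.125)  # 8
denom_iter(0.5)    # 2
# For values that are not exact binary fractions, floating-point error usually keeps
# the recursion going until d exceeds maxd, which is the denominator-size issue
# discussed below.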
What I'm curious to know is whether there's a way to pick maxd that ensures the algorithm converts all values that are possible for a given tolerance. I'm assuming 1/tol, but I don't want to miss something. I'm also wondering whether there's a way in this approach to actually limit the denominator size, since this one allows some denominators larger than maxd.
This can be considered a 2D minimization problem on error:
ArgMin ( r - q / p ), where r is real, q and p are integers
I suggest the use of the Gradient Descent algorithm. The gradient of this objective function is:
f'(q, p) = (-1/p, q/p^2)
The initial guess can be q set to the closest integer to r, and p set to 1.
The stopping condition can be a threshold on the error.
The pseudo-code of GD can be found in wiki: http://en.wikipedia.org/wiki/Gradient_descent
If the initial guess is close enough, the objective function should be convex.
As Jacob suggested, this problem can be better solved by minimizing the following error function:
ArgMin ( p * r - q ), where r is real, q and p are integers
This is a linear (integer) objective, which can be solved efficiently by any ILP (Integer Linear Programming) solver. GD works on non-linear cases but lacks efficiency on linear problems.
Initial guesses and the stopping condition can be similar to those stated above. Better choices can be made for an individual solver.
I still suggest assuming convexity near the local minimum, which can greatly reduce cost. You can also try the Simplex method, which works well on linear programming problems.
I give credit to Jacob on this.
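To make the objective |p*r - q| concrete, here is a small brute-force sketch of my own (an explicit search over denominators, not the GD or ILP approach described above):
best_fraction <- function(r, maxd = 100) {
    p <- 1:maxd                # candidate denominators
    q <- round(p * r)          # best integer numerator for each denominator
    err <- abs(p * r - q)      # the error |p*r - q| from the objective above
    i <- which.min(err)
    c(num = q[i], den = p[i], error = err[i])
}
best_fraction(0.685, maxd = 20)  # num 13, den 19 (error about 0.015)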
A problem similar to this is solved in the Approximations section beginning around page 28 of Bill Gosper's Continued Fraction Arithmetic document. (Ref: postscript file; also see the text version, from line 1984.) The general idea is to compute continued-fraction approximations of the low and high ends of the limiting range until the two approximations differ, and then choose a value between them. This is guaranteed to give a simplest fraction, in Gosper's terminology.
The Python code below (program "simpleden") implements a similar process. (It is probably not as good as Gosper's suggested implementation, but it is good enough to show what kind of results the method produces.) The amount of work done is similar to that of Euclid's algorithm, i.e., O(n) for numbers with n bits, so the program is reasonably fast. Some example test cases (i.e., the program's output) are shown after the code itself. Note that the function simpleratio(vlo, vhi) as shown here returns -1 if vhi is smaller than vlo.
#!/usr/bin/env python
def simpleratio(vlo, vhi):
    rlo, rhi, eps = vlo, vhi, 0.0000001
    if vhi < vlo: return -1
    num = denp = 1
    nump = den = 0
    while 1:
        klo, khi = int(rlo), int(rhi)
        if klo != khi or rlo-klo < eps or rhi-khi < eps:
            tlo = denp + klo * den
            thi = denp + khi * den
            if tlo < thi:
                return tlo + (rlo-klo > eps)*den
            elif thi < tlo:
                return thi + (rhi-khi > eps)*den
            else:
                return tlo
        nump, num = num, nump + klo * num
        denp, den = den, denp + klo * den
        rlo, rhi = 1/(rlo-klo), 1/(rhi-khi)

def test(vlo, vhi):
    den = simpleratio(vlo, vhi)
    fden = float(den)
    ilo, ihi = int(vlo*den), int(vhi*den)
    rlo, rhi = ilo/fden, ihi/fden
    izok = 'ok' if rlo <= vlo <= rhi <= vhi else 'wrong'
    print '{:4d}/{:4d} = {:0.8f} vlo:{:0.8f} {:4d}/{:4d} = {:0.8f} vhi:{:0.8f} {}'.format(ilo,den,rlo,vlo, ihi,den,rhi,vhi, izok)
test (0.685, 0.695)
test (0.685, 0.7)
test (0.685, 0.71)
test (0.685, 0.75)
test (0.685, 0.76)
test (0.75, 0.76)
test (2.173, 2.177)
test (2.373, 2.377)
test (3.484, 3.487)
test (4.0, 4.87)
test (4.0, 8.0)
test (5.5, 5.6)
test (5.5, 6.5)
test (7.5, 7.3)
test (7.5, 7.5)
test (8.534537, 8.534538)
test (9.343221, 9.343222)
Output from program:
> ./simpleden
8/ 13 = 0.61538462 vlo:0.68500000 9/ 13 = 0.69230769 vhi:0.69500000 ok
6/ 10 = 0.60000000 vlo:0.68500000 7/ 10 = 0.70000000 vhi:0.70000000 ok
6/ 10 = 0.60000000 vlo:0.68500000 7/ 10 = 0.70000000 vhi:0.71000000 ok
2/ 4 = 0.50000000 vlo:0.68500000 3/ 4 = 0.75000000 vhi:0.75000000 ok
2/ 4 = 0.50000000 vlo:0.68500000 3/ 4 = 0.75000000 vhi:0.76000000 ok
3/ 4 = 0.75000000 vlo:0.75000000 3/ 4 = 0.75000000 vhi:0.76000000 ok
36/ 17 = 2.11764706 vlo:2.17300000 37/ 17 = 2.17647059 vhi:2.17700000 ok
18/ 8 = 2.25000000 vlo:2.37300000 19/ 8 = 2.37500000 vhi:2.37700000 ok
114/ 33 = 3.45454545 vlo:3.48400000 115/ 33 = 3.48484848 vhi:3.48700000 ok
4/ 1 = 4.00000000 vlo:4.00000000 4/ 1 = 4.00000000 vhi:4.87000000 ok
4/ 1 = 4.00000000 vlo:4.00000000 8/ 1 = 8.00000000 vhi:8.00000000 ok
11/ 2 = 5.50000000 vlo:5.50000000 11/ 2 = 5.50000000 vhi:5.60000000 ok
5/ 1 = 5.00000000 vlo:5.50000000 6/ 1 = 6.00000000 vhi:6.50000000 ok
-7/ -1 = 7.00000000 vlo:7.50000000 -7/ -1 = 7.00000000 vhi:7.30000000 wrong
15/ 2 = 7.50000000 vlo:7.50000000 15/ 2 = 7.50000000 vhi:7.50000000 ok
8030/ 941 = 8.53347503 vlo:8.53453700 8031/ 941 = 8.53453773 vhi:8.53453800 ok
24880/2663 = 9.34284641 vlo:9.34322100 24881/2663 = 9.34322193 vhi:9.34322200 ok
If, rather than the simplest fraction in a range, you seek the best approximation given some upper limit on denominator size, consider code like the following, which replaces all the code from def test(vlo, vhi) forward.
def smallden(target, maxden):
    global pas
    pas = 0
    tol = 1/float(maxden)**2
    while 1:
        den = simpleratio(target-tol, target+tol)
        if den <= maxden: return den
        tol *= 2
        pas += 1

# Test driver for smallden(target, maxden) routine
import random
totalpass, trials, passes = 0, 20, [0 for i in range(20)]
print 'Maxden Num Den Num/Den Target Error Passes'
for i in range(trials):
    target = random.random()
    maxden = 10 + round(10000*random.random())
    den = smallden(target, maxden)
    num = int(round(target*den))
    got = float(num)/den
    print '{:4d} {:4d}/{:4d} = {:10.8f} = {:10.8f} + {:12.9f} {:2}'.format(
        int(maxden), num, den, got, target, got - target, pas)
    totalpass += pas
    passes[pas-1] += 1
print 'Average pass count: {:0.3}\nPass histo: {}'.format(
    float(totalpass)/trials, passes)
In production code, drop all the references to pas (etc.), i.e., drop the pass-counting code.
The routine smallden is given a target value and a maximum allowed denominator. Given maxden possible choices of denominator, it is reasonable to expect that a tolerance on the order of 1/maxden² can be achieved. The pass counts shown in the following typical output (where target and maxden were set via random numbers) show that such a tolerance was reached immediately more than half the time, but in other cases tolerances 2, 4, or 8 times as large were needed, requiring extra calls to simpleratio. Note that the last two lines of output from a 10000-number test run are shown after the complete output of a 20-number test run.
Maxden Num Den Num/Den Target Error Passes
1198 32/ 509 = 0.06286837 = 0.06286798 + 0.000000392 1
2136 115/ 427 = 0.26932084 = 0.26932103 + -0.000000185 1
4257 839/2670 = 0.31423221 = 0.31423223 + -0.000000025 1
2680 449/ 509 = 0.88212181 = 0.88212132 + 0.000000486 3
2935 440/1853 = 0.23745278 = 0.23745287 + -0.000000095 1
6128 347/1285 = 0.27003891 = 0.27003899 + -0.000000077 3
8041 1780/4243 = 0.41951449 = 0.41951447 + 0.000000020 2
7637 3926/7127 = 0.55086292 = 0.55086293 + -0.000000010 1
3422 27/ 469 = 0.05756930 = 0.05756918 + 0.000000113 2
1616 168/1507 = 0.11147976 = 0.11147982 + -0.000000061 1
260 62/ 123 = 0.50406504 = 0.50406378 + 0.000001264 1
3775 52/3327 = 0.01562970 = 0.01562750 + 0.000002195 6
233 6/ 13 = 0.46153846 = 0.46172772 + -0.000189254 5
3650 3151/3514 = 0.89669892 = 0.89669890 + 0.000000020 1
9307 2943/7528 = 0.39094049 = 0.39094048 + 0.000000013 2
962 206/ 225 = 0.91555556 = 0.91555496 + 0.000000594 1
2080 564/1975 = 0.28556962 = 0.28556943 + 0.000000190 1
6505 1971/2347 = 0.83979548 = 0.83979551 + -0.000000022 1
1944 472/ 833 = 0.56662665 = 0.56662696 + -0.000000305 2
3244 291/1447 = 0.20110574 = 0.20110579 + -0.000000051 1
Average pass count: 1.85
Pass histo: [12, 4, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
The last two lines of output from a 10000-number test run:
Average pass count: 1.77
Pass histo: [56659, 25227, 10020, 4146, 2072, 931, 497, 233, 125, 39, 33, 17, 1, 0, 0, 0, 0, 0, 0, 0]
I am trying to estimate pi with a Monte Carlo simulation in R, but I have some problems with my code.
This is what I have written so far:
ploscinaKvadrata <- 0
ploscinaKroga <- 0
n = 1000
for (i in i:n) {
  x <- runif(1000, min = -1, max = 1)
  y <- runif(1000, min = -1, max = 1)
  if ((x^2 + y^2) <= 1) {
    ploscinaKroga <- ploscinaKroga + 1
  } else {
    ploscinaKvadrata <- ploscinaKvadrata + 1
  }
  izracunPi = 4 * ploscinaKroga / ploscinaKvadrata
}
izracunPi
This is not working, but I don't know how to fix it.
I would also like to write code to plot this (the circle inside the square, with the dots).
Here is a vectorized version (and there was also something wrong with your math):
N <- 1000000
R <- 1
x <- runif(N, min= -R, max= R)
y <- runif(N, min= -R, max= R)
is.inside <- (x^2 + y^2) <= R^2
pi.estimate <- 4 * sum(is.inside) / N
pi.estimate
# [1] 3.141472
As far as plotting the points, you can do something like this:
plot.new()
plot.window(xlim = 1.1 * R * c(-1, 1), ylim = 1.1 * R * c(-1, 1))
points(x[ is.inside], y[ is.inside], pch = '.', col = "blue")
points(x[!is.inside], y[!is.inside], pch = '.', col = "red")
but I'd recommend you use a smaller N value, maybe 10000.
This is a fun game -- and there are a number of versions of it floating around the web. Here's one I adapted from the source named below (though the original code was somewhat naive).
From http://giventhedata.blogspot.com/2012/09/estimating-pi-with-r-via-mcs-dart-very.html
est.pi <- function(n) {
  # drawing in [0,1] x [0,1] covers one quarter of the square and circle
  # draw random numbers for the coordinates of the "dart hits"
  a <- runif(n, 0, 1)
  b <- runif(n, 0, 1)
  # use the Pythagorean theorem
  c <- sqrt(a^2 + b^2)
  inside <- sum(c < 1)
  # outside <- n - inside
  pi.est <- inside/n*4
  return(pi.est)
}
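A quick usage example (my own addition, not part of the quoted source):
est.pi(1e6)
# With a million darts the estimate is typically within about 0.01 of pi;
# the exact value changes from run to run unless you fix the seed with set.seed().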
How can I speed up the following (noob) code:
# "mymatrix" is the matrix of word counts (docs x terms)
# "tfidfmatrix" is the transformed matrix
tfidfmatrix = Matrix(mymatrix, nrow = num_of_docs, ncol = num_of_words, sparse = T)

# Apply a transformation on each row of the matrix
for (i in 1:dim(mymatrix)[[1]]) {
  r = mymatrix[i, ]
  s = sapply(r, function(x) ifelse(x == 0, 0, (1 + log(x)) * log((1 + ndocs)/(1 + x))))
  tfmat[i, ] = s/sqrt(sum(s^2))
}
return(tfidfmatrix)
The problem is that the matrices I am working with are fairly large (~40k x 100k), and this code is very slow.
The reason I am not using apply (instead of a for loop plus sapply) is that apply would give me the transpose of the matrix I want: I want num_of_docs x num_of_words, but apply returns the transpose, so I would then have to spend more time transposing and re-allocating the result.
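For example, here is a quick illustration (my own, not from the original question) of that transpose behaviour:
m <- matrix(1:6, nrow = 2)            # 2 x 3
a <- apply(m, 1, function(r) r * 2)   # each row's result becomes a column of the output
dim(a)     # 3 2, the transpose of what we want
dim(t(a))  # 2 3, matching dim(m) again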
Any thoughts on making this faster?
Thanks much.
Edit: I have found that the suggestions below greatly speed up my code (besides making me feel stupid). Any suggestions on where I can learn to write "optimized" R code?
Edit 2: OK, so something is not right. Once I do s.vec[!is.finite(s.vec)] <- 0, every element of s.vec is set to 0; this seems to be due to some quirk of the Matrix package I am using (to re-iterate, my original matrix is a sparse matrix containing integers). When I do s.vec[which(s.vec==-Inf)] <- 0, things work as expected. Thoughts?
As per my comment,
# Slightly larger example data
mymatrix <- matrix(runif(10000), nrow = 10)
mymatrix[sample(10000, 100)] <- 0
tfmat <- matrix(nrow = 10, ncol = 1000)
ndocs <- 1

justin <- function() {
  s.vec <- ifelse(mymatrix == 0, 0, (1 + log(mymatrix)) * log((1 + ndocs)/(1 + mymatrix)))
  tfmat.vec <- s.vec/sqrt(rowSums(s.vec^2))
}

joran <- function() {
  s.vec <- (1 + log(mymatrix)) * log((1 + ndocs)/(1 + mymatrix))
  s.vec[!is.finite(s.vec)] <- 0
  tfmat.vec <- s.vec/sqrt(rowSums(s.vec^2))
}

require(rbenchmark)
benchmark(justin(), joran(), replications = 1000)
test replications elapsed relative user.self sys.self user.child sys.child
2 joran() 1000 0.940 1.00000 0.842 0.105 0 0
1 justin() 1000 2.786 2.96383 2.617 0.187 0 0
So it's around 3x faster.
Not sure what ndocs is, but ifelse is already vectorized, so you should be able to use the ifelse statement without walking through the matrix row by row and sapply-ing along each row. The same goes for the final calculation.
However, you haven't given a complete example to replicate, so here is one:
mymatrix <- matrix(runif(100), nrow = 10)
tfmat <- matrix(nrow = 10, ncol = 10)
ndocs <- 1

s.vec <- ifelse(mymatrix == 0, 0, 1 + log(mymatrix)) * log((1 + ndocs)/(1 + mymatrix))

for (i in 1:dim(mymatrix)[[1]]) {
  r = mymatrix[i, ]
  s = sapply(r, function(x) ifelse(x == 0, 0, (1 + log(x)) * log((1 + ndocs)/(1 + x))))
  tfmat[i, ] <- s
}

all.equal(s.vec, tfmat)
So the only piece missing is the rowSums in your final calculation:
tfmat.vec <- s.vec/sqrt(rowSums(s.vec^2))

for (i in 1:dim(mymatrix)[[1]]) {
  r = mymatrix[i, ]
  s = sapply(r, function(x) ifelse(x == 0, 0, (1 + log(x)) * log((1 + ndocs)/(1 + x))))
  tfmat[i, ] = s/sqrt(sum(s^2))
}

all.equal(tfmat, tfmat.vec)
I was wondering if anyone could kindly help me with this seemingly easy task. I'm using nlminb to conduct optimization and compute some statistics by index. Here's an example from nlminb help.
> x <- rnbinom(100, mu = 10, size = 10)
> hdev <- function(par) {
+ -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
+ }
> nlminb(c(9, 12), hdev)
$par
[1] 9.730000 5.954936
$objective
[1] 297.2074
$convergence
[1] 0
$message
[1] "relative convergence (4)"
$iterations
[1] 10
$evaluations
function gradient
12 27
Suppose I generate random variables x, y, and z where z acts as an index (from 1 to 3).
> x <- rnbinom(100, mu = 10, size = 10)
> y <- rnbinom(100, mu = 10, size = 10)
> z <- rep(1:3, length=100)
> A <- cbind(x,y,z)
> hdev <- function(par) {
+ -sum(dnbinom(x+y, mu = par[1], size = par[2], log = TRUE))}
How can I apply nlminb(c(9, 12), hdev) to the data set by index z? In other words, I would like to compute nlminb(c(9, 12), hdev) for z=1, z=2, and z=3 separately. I tried by(A, z, function(A) nlminb(c(9,12), hdev)) and sparseby(A, z, function(A) nlminb(c(9,12), hdev)), but they return exactly the same values for each value of z.
I would like to turn each output into a new data frame so that it becomes a 3x2 matrix:
[1] Z1_ANSWER_1 Z1_ANSWER_2
[2] Z2_ANSWER_1 Z2_ANSWER_2
[3] Z3_ANSWER_1 Z3_ANSWER_2
Since nlminb returns a list of results, I have been saving the $par component of each fit (CASEZ1, CASEZ2, CASEZ3) and then using cbind to combine them. However, I would like to automate this process, as the real data I'm working on has a lot more categories than the z presented here.
If I'm not making myself clear, please let me know. I'll see if I can replicate the actual data set and functions I'm working on (I just don't have them on this computer).
Thank you very much in advance.
Let me try an approach
x <- rnbinom(100, mu = 10, size = 10)
y <- rnbinom(100, mu = 10, size = 10)
z <- rep(1:3, length=100)
A <- as.data.frame(cbind(x,y,z))
First, load the plyr library:
library(plyr)
The following code returns the results for each value of z:
dlply(A, .(z), function(x) {
  hdev <- function(par, mydata) {
    -sum(dnbinom(mydata, mu = par[1], size = par[2], log = TRUE))
  }
  nlminb(c(9, 12), hdev, mydata = t(as.vector(x[1] + as.vector(x[2]))))
})
Now, with this one you will get a 3x2 data frame with the $par results:
ddply(A, .(z), function(x) {
  hdev <- function(par, mydata) {
    -sum(dnbinom(mydata, mu = par[1], size = par[2], log = TRUE))
  }
  res <- nlminb(c(9, 12), hdev, mydata = t(as.vector(x[1] + as.vector(x[2]))))
  return(res$par)
})
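For comparison, here is a base-R sketch of the same idea (my own addition, not part of this answer), using split() to fit each level of z and rbind to collect the $par vectors:
# Assumes x, y, z generated as in the question.
A <- as.data.frame(cbind(x, y, z))
hdev <- function(par, mydata) {
  -sum(dnbinom(mydata, mu = par[1], size = par[2], log = TRUE))
}
fits <- lapply(split(A, A$z), function(d) nlminb(c(9, 12), hdev, mydata = d$x + d$y))
do.call(rbind, lapply(fits, `[[`, "par"))  # one row of (mu, size) estimates per value of z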