Confidence interval for exponential smoothing forecast

I'm using exponential smoothing (Brown's method) for forecasting.
The forecast can be calculated for one or more steps (time intervals).
Is there any way to calculate confidence intervals for such a forecast (ex-ante)?
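One generic route, not specific to Brown's method and only a sketch: fit the smoother, collect the one-step-ahead residuals, and simulate future paths by resampling those residuals; quantiles of the simulated paths then give approximate ex-ante intervals. Below is a minimal Python sketch along those lines, with Brown's double exponential smoothing written out by hand; the alpha, horizon, and number of simulations are arbitrary placeholders.

```python
import numpy as np

def brown_smooth(y, alpha):
    """Brown's double (linear) exponential smoothing.
    Returns one-step-ahead fitted values and the final smoothing states."""
    s1 = s2 = y[0]
    fitted = np.empty(len(y))
    for t, obs in enumerate(y):
        a = 2 * s1 - s2                       # current level estimate
        b = alpha / (1 - alpha) * (s1 - s2)   # current trend estimate
        fitted[t] = a + b                     # forecast made before seeing obs
        s1 = alpha * obs + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    return fitted, s1, s2

def bootstrap_intervals(y, alpha=0.3, horizon=10, n_sims=2000, level=0.95):
    """Approximate ex-ante prediction intervals by resampling one-step
    residuals and feeding the simulated values back through the smoother."""
    y = np.asarray(y, dtype=float)
    fitted, s1_end, s2_end = brown_smooth(y, alpha)
    resid = y[1:] - fitted[1:]                # skip the start-up point
    rng = np.random.default_rng(0)
    paths = np.empty((n_sims, horizon))
    for i in range(n_sims):
        s1, s2 = s1_end, s2_end
        for h in range(horizon):
            a = 2 * s1 - s2
            b = alpha / (1 - alpha) * (s1 - s2)
            sim = a + b + rng.choice(resid)   # one-step forecast plus a resampled shock
            paths[i, h] = sim
            s1 = alpha * sim + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
    a = 2 * s1_end - s2_end
    b = alpha / (1 - alpha) * (s1_end - s2_end)
    point = a + np.arange(1, horizon + 1) * b
    lo = np.percentile(paths, 100 * (1 - level) / 2, axis=0)
    hi = np.percentile(paths, 100 * (1 + level) / 2, axis=0)
    return point, lo, hi
```

If the model's assumptions hold, analytical prediction intervals for exponential smoothing (e.g. the state-space treatment in Hyndman et al.) will be tighter than this bootstrap, but the simulation approach is easy to adapt.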

Related

2nd order symplectic exponentially fitted integrator

I have to solve the equations of motion of a charged particle under the effect of an electromagnetic field. Since I have to prioritize speed over precision, I could not use adaptive step-size algorithms (like Runge-Kutta Cash-Karp) because they would take too much time. I was looking for an algorithm which is both symplectic (like Boris integration) and exponentially fitted (in order to solve the equation of motion even if the equation is stiff). I found a method, but it is for second-order differential equations:
https://www.math.purdue.edu/~xiaj/work/SEFRKN.pdf
Later I found a paper which describes a fourth-order symplectic exponentially fitted Runge-Kutta method:
http://users.ugent.be/~gvdbergh/files/publatex/annals1.pdf
Since speed is the priority, I was looking for a lower-order algorithm. Does a 2nd-order symplectic exponentially fitted ODE algorithm exist?
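This does not address the exponential fitting part, but for reference, the plain (non-relativistic) Boris push, the second-order, volume-preserving scheme the question mentions as a baseline, can be sketched as below; the field functions and particle parameters are placeholders.

```python
import numpy as np

def boris_push(x, v, q, m, dt, E_field, B_field):
    """One step of the standard Boris integrator for dv/dt = (q/m)(E + v x B).
    Second order and volume preserving (hence its good long-term behaviour),
    but not exponentially fitted."""
    E = E_field(x)
    B = B_field(x)
    qmdt2 = q * dt / (2.0 * m)

    v_minus = v + qmdt2 * E                   # first half electric kick
    t = qmdt2 * B                             # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation, part 1
    v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation, part 2
    v_new = v_plus + qmdt2 * E                # second half electric kick

    x_new = x + dt * v_new                    # drift with the updated velocity
    return x_new, v_new

# Example: uniform B along z, no E (placeholder fields)
if __name__ == "__main__":
    E_field = lambda x: np.zeros(3)
    B_field = lambda x: np.array([0.0, 0.0, 1.0])
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        x, v = boris_push(x, v, q=1.0, m=1.0, dt=0.05,
                          E_field=E_field, B_field=B_field)
```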

How the average complexity is calculated

How is the average complexity of an algorithm calculated? The worst case is obvious, and so is the best, but how is the average calculated?
Calculate the complexity for every possible input and take a weighted sum based on their probabilities. This is also called the expected runtime (analogous to expectation in probability theory).
E[T] = P(X = I1)*T(I1) + P(X = I2)*T(I2) + P(X = I3)*T(I3) + ...
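As a hypothetical worked example of that weighted sum, the sketch below computes the expected number of comparisons of a linear search when each target position is assigned a probability; with a uniform distribution the sum reduces to the familiar (n + 1)/2.

```python
# Expected runtime as a probability-weighted sum over inputs.
# Here the "inputs" are the positions at which linear search finds the target;
# the probabilities are placeholders you would choose for your own model.

def comparisons_for_position(i):
    """Linear search does i comparisons when the target is at position i (1-based)."""
    return i

def expected_comparisons(probabilities):
    """E[T] = sum_i P(X = I_i) * T(I_i)."""
    return sum(p * comparisons_for_position(i)
               for i, p in enumerate(probabilities, start=1))

n = 10
uniform = [1.0 / n] * n
print(expected_comparisons(uniform))        # (n + 1) / 2 = 5.5
skewed = [0.5] + [0.5 / (n - 1)] * (n - 1)  # target is usually at the front
print(expected_comparisons(skewed))
```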
Average performance (time, space, etc.) complexity is found by considering all possible inputs of a given size and stating the asymptotic bound for the average of the respective measure across all those inputs.
For example, average "number of comparisons" complexity for a sort would be found by considering all N! permutations of input of size N and stating bounds on the average number of comparisons performed across all those inputs.
I.e. this is the sum of numbers of comparisons for all of the possible N! inputs divided by N!
Because the average performance across all possible inputs is equal to the expected value of the same performance measure, average performance is also called expected performance.
Quicksort presents an interesting non-trivial example of calculating the average run-time performance. As you can see the math can get quite complex, and so unfortunately I don't think there's a general equation for calculating average performance.
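To make the definition concrete, a small brute-force check (with made-up helper names) can enumerate every one of the N! permutations of a small input, count the comparisons a simple sort performs on each, and average them:

```python
from itertools import permutations

def insertion_sort_comparisons(a):
    """Sort a copy of `a` with insertion sort and count element comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1           # comparing key with a[j]
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 5
counts = [insertion_sort_comparisons(p) for p in permutations(range(n))]
print(sum(counts) / len(counts))       # average over all n! inputs
print(min(counts), max(counts))        # best and worst case, for comparison
```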

maximum likelihood and support vector complexity

Can anyone give some references showing how to determine the computational complexity of maximum likelihood and support vector machine classifiers?
I have been searching the web but can't seem to find a good document that details how to derive the equations that model the computational complexity of those classifier algorithms.
Thanks
Support vector machines, and a number of maximum likelihood fits are convex minimization problems. Therefore they could in theory be solved in polynomial time using http://en.wikipedia.org/wiki/Ellipsoid_method.
I suspect that you can get much better estimates if you consider specific methods. http://www.cse.ust.hk/~jamesk/papers/jmlr05.pdf says that standard SVM fitting on m instances costs O(m^3) time and O(m^2) space. http://research.microsoft.com/en-us/um/people/minka/papers/logreg/minka-logreg.pdf gives the cost per iteration for logistic regression but does not give a theoretical basis for estimating the number of iterations. In practice I would hope that it reaches quadratic convergence most of the time and is not too bad.
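If a closed-form bound is hard to pin down, one pragmatic option is to time the fit at several training-set sizes and read off the exponent from a log-log slope. A rough sketch, using scikit-learn's SVC purely as an example on synthetic data (neither appears in the question):

```python
import time
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
sizes = [500, 1000, 2000, 4000]
times = []

for m in sizes:
    X = rng.normal(size=(m, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary labels
    start = time.perf_counter()
    SVC(kernel="rbf", C=1.0).fit(X, y)
    times.append(time.perf_counter() - start)

# Slope of log(time) vs log(m) approximates the empirical exponent
slope, _ = np.polyfit(np.log(sizes), np.log(times), 1)
print(f"empirical exponent ~ {slope:.2f}")    # the literature suggests up to ~3 for standard solvers
```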

Time Complexity of Genetic Algorithm

Is it possible to calculate the time complexity of a genetic algorithm?
These are my parameter settings:
Population size (P) = 100
# of Generations (G) = 1000
Crossover probability (Pc) = 0.5 (fixed)
Mutation probability (Pm) = 0.01 (fixed)
Thanks
Updated:
problem: document clustering
Chromosome: 50 genes/chrom, allele value = integer(document index)
crossover: one point crossover (crossover point is randomly selected)
mutation: randomly change one gene
termination criteria: 1000 generation
fitness: Davies–Bouldin index
Isn't it something like O(P * G * O(Fitness) * ((Pc * O(crossover)) + (Pm * O(mutation))))?
I.e., the complexity depends on the population size, the number of generations, and the computation time per generation.
If P, G, Pc, and Pm are constant, that really simplifies to O(O(Fitness) * (O(mutation) + O(crossover))).
If the number of generations and the population size are constant, then as long as your mutation function, crossover function, and fitness function each take a known amount of time, the big O is O(1): it takes a constant amount of time.
Now, if you are asking what the big O would be for a population of N and a number of generations M, that is different, but as stated where you know all the variables ahead of time, the amount of time taken is constant with respect to your input.
Genetic algorithms are not chaotic; they are stochastic.
The complexity depends on the genetic operators, their implementation (which may have a very significant effect on overall complexity), the representation of the individuals and the population, and obviously on the fitness function.
Given the usual choices (point mutation, one-point crossover, roulette wheel selection), a genetic algorithm's complexity is O(g(nm + nm + n)), with g the number of generations, n the population size and m the size of the individuals. Therefore the complexity is on the order of O(gnm).
This is of course ignoring the fitness function, which depends on the application.
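As a rough illustration of where the g, n, and m factors come from, here is a skeletal GA loop with the per-step costs noted in comments; the operators, parameters, and toy fitness function are generic placeholders rather than the questioner's document-clustering setup.

```python
import random

def genetic_algorithm(fitness, n, m, g, pc=0.5, pm=0.01):
    """Skeleton GA: roulette-wheel selection, one-point crossover, point mutation.
    Per generation: fitness is n evaluations, selection/crossover/mutation are
    O(n*m), so over g generations the total is O(g*n*m) plus the fitness cost."""
    population = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]

    for _ in range(g):                                    # g generations
        scores = [fitness(ind) for ind in population]     # n fitness evaluations
        total = sum(scores)

        def roulette():                                   # O(n) per pick in this naive form
            r, acc = random.uniform(0, total), 0.0
            for ind, s in zip(population, scores):
                acc += s
                if acc >= r:
                    return ind
            return population[-1]

        next_pop = []
        while len(next_pop) < n:                          # n offspring per generation
            a, b = roulette(), roulette()
            if random.random() < pc:                      # one-point crossover: O(m)
                cut = random.randrange(1, m)
                a = a[:cut] + b[cut:]
            child = list(a)
            if random.random() < pm:                      # point mutation: O(1)
                child[random.randrange(m)] ^= 1
            next_pop.append(child)
        population = next_pop

    return max(population, key=fitness)

# Toy usage: maximise the number of ones in a 50-bit string
best = genetic_algorithm(fitness=sum, n=100, m=50, g=100)
print(sum(best))
```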
Is it possible to calculate the time and computation complexity of genetic algorithm?
Yes, Luke & Kane's answer can work (with caveats).
However, most genetic algorithms are inherently chaotic, so calculating O() is unlikely to be useful and, worse, is probably misleading.
A better way to gauge the time complexity is to actually measure the run time and average over runs.

How can I efficiently calculate the negative binomial cumulative distribution function?

This post is really helpful:
How can I efficiently calculate the binomial cumulative distribution function?
However, I need the negative binomial cumulative distribution function.
Is there a way to tweak the code to get the negative binomial cumulative distribution function?
You can compute the CDF by summing the terms of the PMF, taking advantage of the recurrence relationship the terms satisfy. The individual terms in the series are a little complicated, but the ratio of consecutive terms is simple.
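A minimal sketch of that idea, assuming the parameterisation in which X counts failures before the r-th success with success probability p: start from P(X = 0) = p^r and multiply by the term ratio P(X = k+1)/P(X = k) = (k + r)/(k + 1) * (1 - p) at each step.

```python
def negative_binomial_cdf(x, r, p):
    """P(X <= x), where X is the number of failures before the r-th success
    and each trial succeeds with probability p.

    Sums the PMF terms using the recurrence
        P(X = k+1) = P(X = k) * (k + r) / (k + 1) * (1 - p),
    starting from P(X = 0) = p**r, so no binomial coefficients are
    evaluated directly.
    """
    if x < 0:
        return 0.0
    term = p ** r          # P(X = 0)
    total = term
    for k in range(int(x)):
        term *= (k + r) / (k + 1) * (1.0 - p)
        total += term
    return min(total, 1.0)

# Quick sanity check against scipy, if available
if __name__ == "__main__":
    print(negative_binomial_cdf(5, r=3, p=0.4))
    try:
        from scipy.stats import nbinom
        print(nbinom.cdf(5, 3, 0.4))   # should agree closely
    except ImportError:
        pass
```

For large x or extreme p it may be worth accumulating the terms in log space to avoid underflow.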
