I ran Mathematica code using NIntegrate that involves integration over spherical and ordinary Bessel functions.
1. Would the answer change between using MaxRecursion with some number of recursions and not using it at all?
2. Will it also matter if I use the global adaptive strategy?
3. If I want to exclude singular points in the x variable, would I use {x, a, b, c, d}, where a and b are the singular points and c and d are the integration limits?
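For concreteness, my reading of the documentation is that the limits come first and last, with the singular points in between, e.g.

NIntegrate[1/Sqrt[Abs[x - 1/3]], {x, 0, 1/3, 1}]

where 0 and 1 are the integration limits and 1/3 is an interior singular point; is that the right way round?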
Following most estimation commands in Stata (e.g. reg, logit, probit, etc.) one may access the estimates using the _b[ParameterName] syntax (or the synonymous _coef[ParameterName]). For example:
regress y x
followed by
di _b[x]
will display the estimate of the coefficient of x. di _b[_cons] will display the coefficient of the estimated intercept (assuming the regress command was successful), etc.
But if I use the nonlinear least squares command nl I (seemingly) have to do something slightly different. Now (leaving aside that for this example model there is absolutely no need to use an NLLS regression):
nl (y = {_cons} + {x}*x)
followed by (notice the forward slash)
di _b[/x]
will display the estimate of the coefficient of x.
Why does accessing parameter estimates following nl require a different syntax? Are there subtleties to be aware of?
"leaving aside that for this example model there is absolutely no need to use a NLLS regression": I think that's what you can't do here....
The question is about why the syntax is as it is. That's a matter of logic and a matter of history. Why a particular syntax was chosen is ultimately a question for the programmers at StataCorp who chose it. Here is one limited take on your question.
The main syntax for regression-type models grows out of a syntax designed for linear regression models in which by default the parameters include an intercept, as you know.
The original syntax for nonlinear regression models (in the sense of being estimated by nonlinear least-squares) matches a need to estimate a bundle of parameters specified by the user, which need not include an intercept at all.
Otherwise put, there is no question of an intercept being a natural default; no parameterisation is a natural default and each model estimated by nl is sui generis.
A helpful feature is that users can choose the names they find natural for the parameters, within the constraints of what counts as a legal name in Stata: say alpha, beta, gamma, a, b, c, and so forth. If you choose _cons for the intercept in nl, that is a legal name but otherwise not special; it is just your choice, and nl won't take it as a signal that it should flip into using regress conventions.
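For example, a minimal sketch (the parameter names a and b here are arbitrary choices of mine):

nl (y = {a} + {b}*x)
di _b[/a]
di _b[/b]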
The syntax you cite is part of what was made possible by a major redesign of nl but it is consistent with the original philosophy.
That the syntax is different because it needs to be may not be the answer you seek, but I guess you'll get a fuller answer only from StataCorp; developers do hang out on Statalist, but they don't make themselves visible here.
My question is regarding integration. I have a complex function that needs to be integrated, and it's a definite integral. The thing is, when I use Wolfram Alpha to integrate this function, it gives me nothing, i.e. it's unable to compute it. However, if I remove the boundaries of integration, i.e. I make my integral an indefinite integral, Wolfram Alpha is able to compute it. Now my questions are:
Can I take the result I obtained for the indefinite integral and just evaluate it at the boundary limits to get my definite integral?
If my analysis is correct, then why wouldn't Wolfram Alpha give the result anyway?
Using Wolfram Alpha, if I try
integrate(exp(-v)/(1+sv^-1))
then I get the following result
-e^(-v)-e^s s Ei(-s-v)
While if I try
integrate(exp(-v)/(1+sv^-1),{v,1,+infinity})
I get nothing!
Since you tagged this Mathematica:
by specifying an appropriate assumption on s we get the expected result:
Integrate[Exp[-v]/(1 + s/v) , {v, 1, Infinity}, Assumptions -> {s > -1}]
--> 1/E + E^s s ExpIntegralEi[-1 - s]
I don't know if Alpha has a similar syntax for adding assumptions.
Additionally, if we try a finite integral:
Integrate[Exp[-v]/(1 + s/v) , {v, 1, 2} ]
Mathematica returns a conditional expression that tells us the result is valid for s > -1 or s < -2. For some reason it doesn't give such a result for the infinite case, however.
Yes, you can take the result obtained for the indefinite integral and use it to calculate the definite integral. When I try to run your request at Wolfram Alpha, here's what I get:
As you can see in the highlighted portion at the bottom left of the picture, Wolfram Alpha didn't complete your request because it exceeded the standard computation time. That is because they need some extra features for Wolfram Alpha Pro users to pay for, and one of these features is extended computation time.
Wolfram Alpha is a business, and this is one of the ways it makes money. See for yourself: it'll offer you the Pro service if you click "Try again with additional computational time" at the bottom right.
If you just break the definite integration into two steps: first the indefinite integral (which it can handle), then evaluating at the boundary values and taking the difference, it seems to work fine:
This is mathematically correct, because that is exactly how definite integrals are calculated, provided the antiderivative is valid over the whole integration interval.
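In Mathematica syntax the same two-step computation looks like this (a sketch; the assumption s > -1 keeps the pole at v = -s outside the integration range):

f[v_] := -Exp[-v] - Exp[s] s ExpIntegralEi[-s - v]  (* the antiderivative found above *)
Limit[f[v], v -> Infinity, Assumptions -> s > -1] - f[1]
(* --> 1/E + E^s s ExpIntegralEi[-1 - s] *)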
However, your input contains sv in the denominator. Wolfram Alpha is taking it to mean s*v, which might not be what you meant; if sv is a variable of its own, I suggest you rename it to s or something else. The point is that if s is indeed a separate variable, then, looking at the plot in the answer, there is a ridge coming from the -∞ term, so for some values of s that ridge might lie inside your integration interval, and then the integral can't be calculated, as Bill pointed out in his comment to your question.
I have implemented a MC-Simulation of the 2D Ising model in C99.
Compiling with gcc 4.8.2 on Scientific Linux 6.5.
When I scale up the grid the simulation time increases, as expected.
The implementation simply uses the Metropolis–Hastings algorithm.
I have tried to find a way to speed up the algorithm, but I don't have any good ideas.
Are there any tricks for doing so?
As jimifiki wrote, try to do a profiling session.
In order to improve on the algorithmic side only, you could try the following:
Lookup Table:
When calculating the energy difference for the Metropolis criterion you need to evaluate the exponential exp(-K * dE / T), where K is your scaling constant (in units of Boltzmann's constant) and dE is the energy difference between the original state and the one after a spin flip.
Calculating exponentials is expensive.
So you simply build a table beforehand in which to look up the value for each possible dE. With a nearest-neighbour interaction the sum over the four neighbours can take only a handful of values; exploit the problem's symmetry and you are left with five possible values for dE: 8, 4, 0, -4, -8 (in units of the coupling constant). Instead of calling the exp function, use the precalculated table.
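A minimal C99 sketch of such a table (the names are mine; dE runs over -8, -4, 0, 4, 8 and is mapped to an index 0..4):

#include <math.h>

static double boltzmann[5];                /* one entry per possible dE */

void build_table(double K, double T)       /* call once per temperature */
{
    for (int i = 0; i < 5; i++) {
        int dE = 4 * i - 8;                /* index 0..4 -> dE = -8..8 */
        boltzmann[i] = exp(-K * dE / T);
    }
}

/* in the Metropolis step, replace exp(-K * dE / T) with:
       double p = boltzmann[(dE + 8) / 4];                  */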
Parallelization:
As mentioned before, it is possible to parallelize the algorithm. To preserve physical correctness, you have to use a so-called checkerboard scheme: consider the two-dimensional grid as a checkerboard and update all the white cells in parallel first, then all the black ones. That should be clear when you consider the nearest-neighbour interaction, which introduces dependencies between neighbouring values.
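A minimal C99/OpenMP sketch of such a sweep (update_site is a placeholder for your single-spin Metropolis update; compile with -fopenmp):

#include <omp.h>

void update_site(int *s, int L, int x, int y);   /* your Metropolis step */

void checkerboard_sweep(int *s, int L)
{
    for (int color = 0; color < 2; color++) {    /* white cells, then black */
        #pragma omp parallel for
        for (int y = 0; y < L; y++)
            for (int x = (y + color) % 2; x < L; x += 2)
                update_site(s, L, x, y);
    }
}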
Use GPGPU:
You can also implement the simulation on a GPGPU, e.g. using CUDA, since you're already working in C99.
Some tips:
- Don't forget to align your C99 structs properly.
- Use linear arrays, not nested ones; contiguously allocated memory is normally faster to access, if done properly (see the sketch after this list).
- Try to let the compiler do loop unrolling and the like (special gcc options; not enabled by default at -O2).
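A tiny sketch of the linear-array tip (alloc_spins is a made-up name):

#include <stdlib.h>

/* one contiguous block instead of an array of row pointers;
   index as s[y * L + x] rather than s[y][x] */
int *alloc_spins(int L)
{
    return malloc((size_t)L * (size_t)L * sizeof(int));
}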
Some more information:
If you are looking for an efficient method to calculate the critical point of the system, the method of choice is finite-size scaling: simulate at different system sizes and temperatures, then compute a quantity that is system-size independent at the critical point, so that the curves for different sizes intersect there (please see the theory for a detailed explanation).
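A standard choice for such a quantity (my example, the usual one in the literature) is the Binder cumulant

U_L = 1 - <m^4> / (3 <m^2>^2),

where m is the magnetisation per site; the U_L curves for different system sizes L cross at the critical temperature.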
I hope I was helpful.
Cheers...
It's normal that your simulation time scales at least with the square of the linear grid size, isn't it?
Here are some suggestions:
If you are concerned with thermalization issues, try to use parallel tempering. It can be of help.
The Metropolis-Hastings algorithm can be made parallel. You could try to do it.
Check that you are not pessimizing the code.
Are your spin arrays arrays of ints? You could pack many spins into the same int. It's a lot of work.
Moreover, remember what Donald Knuth taught us:
premature optimisation is the root of all evil
Before optimising you should first understand where your program is slow. This is called profiling.
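With gcc, a basic profiling session might look like this (ising.c stands in for your source file):

gcc -O2 -pg ising.c -o ising -lm   # build with profiling instrumentation
./ising                            # run as usual; this writes gmon.out
gprof ising gmon.out               # see where the time actually goes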
I am trying to write a backtracking algorithm that keeps state using mutable BitSets. It works fine, but I want it to go faster!
The crux: given two mutable.BitSets alpha and beta, I need to determine whether any of the bits of alpha are set in beta, i.e. a bitwise AND. I do not need the resulting set; I just need to know whether the intersection is non-empty:
(alpha intersect beta).nonEmpty
or
(alpha & beta).nonEmpty
but both of these construct a set which is then tested for emptiness... I really just need a Boolean, and I would like to avoid the cost of constructing the intermediate set.
Is there a better way?
Referring to the API docs, you may use the find and contains methods:
alpha find (beta.contains) isDefined
OR
Even better, use the exists method:
alpha exists (beta.contains)
OR
Even shorter and better, use the apply method of BitSet, which is equivalent to its contains method:
alpha exists beta
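If even that is too slow, one further option (my own sketch, not benchmarked) is to AND the underlying words directly via toBitMask; it copies the masks into Array[Long]s, but no intermediate set is built:

import scala.collection.mutable

def intersects(alpha: mutable.BitSet, beta: mutable.BitSet): Boolean = {
  val a = alpha.toBitMask              // Array[Long], one bit per element
  val b = beta.toBitMask
  var i = math.min(a.length, b.length) - 1
  while (i >= 0) {
    if ((a(i) & b(i)) != 0L) return true
    i -= 1
  }
  false
}

// usage: intersects(alpha, beta) instead of (alpha & beta).nonEmpty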
I'm looking for an algorithm to help me build 2D patterns based on rules. The idea is that I could write a script using a given set of parameters, and it would return a random, two-dimensional pattern up to a given size.
My plan is to use this to generate image patterns based on rules. Things like image fractals or sprites for game levels could possibly use this.
For example, let's say that you can use A, B, C, and D to create the pattern. The rules are that C and A can never be next to each other, and that D always follows C. Next, let's say I want a pattern of size 4x4. The result might be the following, which respects all the rules:
A B C D
B B B B
C D B B
C D C D
Are there any existing libraries that can do calculations like this? Are there any mathematical formulas I can read-up on?
While pretty inefficient in terms of runtime, backtracking is an often-used algorithm for this kind of problem. It follows a simple pattern, and if written correctly, you can easily swap a new rule set into it; a sketch follows below.
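For instance, a minimal sketch in Scala (all names and the exact rule encoding are my own; vertical adjacency is constrained only by the A/C rule, matching the example grid, and a C at the end of a row is left unconstrained):

import scala.util.Random

val symbols = Vector('A', 'B', 'C', 'D')

// horizontal rule: C and A are never adjacent, and C must be followed by D
def okRight(left: Char, right: Char): Boolean =
  !(left == 'A' && right == 'C') && !(left == 'C' && right == 'A') &&
  (left != 'C' || right == 'D')

// vertical rule: only the A/C adjacency constraint
def okBelow(up: Char, down: Char): Boolean =
  !(up == 'A' && down == 'C') && !(up == 'C' && down == 'A')

// fill cells left-to-right, top-to-bottom; failed choices are simply overwritten
def fill(grid: Array[Array[Char]], pos: Int, n: Int, rnd: Random): Boolean =
  pos == n * n || {
    val (r, c) = (pos / n, pos % n)
    rnd.shuffle(symbols).exists { s =>
      val ok = (c == 0 || okRight(grid(r)(c - 1), s)) &&
               (r == 0 || okBelow(grid(r - 1)(c), s))
      ok && { grid(r)(c) = s; fill(grid, pos + 1, n, rnd) }
    }
  }

val n = 4
val grid = Array.fill(n, n)(' ')
if (fill(grid, 0, n, new Random)) grid.foreach(row => println(row.mkString(" ")))

Shuffling the candidate symbols at each cell is what makes the output random rather than deterministic.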
Define your rule data structures; i.e., define the set of operations that the rules can encapsulate, and define the available cross-referencing that can be done. Once you've done this, you should have a clearer view of what type of algorithms to use to apply these rules to a potential result set.
Supposing that your rules are restricted to "type X is allowed to have type Y immediately to its left/right/top/bottom", you potentially have situations where generating possible patterns is computationally difficult. Take a look at Wang tiles (a good source is the book Tilings and Patterns by Grünbaum and Shephard) and you'll see that with rules of the kind you've stated you can define sets of Wang tiles. Appropriate sets of these are Turing complete.
For small rectangles, or for your sets of rules, this may be of only academic interest. As mentioned elsewhere, a backtracking approach might be appropriate for your rule set, in which case you may want to consider suitable heuristics for the order in which new components are added to your grid. Again, depending on your rule sets, other approaches might work; e.g., if your rule set admits many solutions, you might get a long way by randomly placing many items on the grid before attempting to fill in the remaining gaps.