How do I calculate a cosinor analysis in SPSS, HELP! - syntax

How can I put this into SPSS?
http://www.cbi.dongnocchi.it/glossary/Cosinor.html
I am trying to calculate the MESOR for a cyclic pattern of circadian rhythm.

SPSS does not include built-in procedures for much beyond standard social science statistics. However, what you want to do (or something like it; I'm not familiar with the method) is apparently feasible if you transform your variables and then run a linear regression, as documented here (a PowerPoint file).
If it isn't strictly necessary to use SPSS, you're probably better off using something like R where what you want is more likely to be formally implemented. Check out the season package.
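For what it's worth, here is a rough sketch of that transform-then-regress idea as a stand-alone C++ toy (the 24-hour period, the data, and the variable names are all made up; in SPSS the same idea amounts to COMPUTE-ing the cosine and sine terms and feeding them to REGRESSION, which is essentially what the linked slides cover):

```cpp
// Toy cosinor fit by linear regression, assuming a 24-hour period.
// Model: y(t) = M + b1*cos(2*pi*t/24) + b2*sin(2*pi*t/24)
// M is the MESOR; the amplitude is sqrt(b1^2 + b2^2).
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Made-up data: clock time in hours and the measured variable.
    std::vector<double> t = {0, 4, 8, 12, 16, 20};
    std::vector<double> y = {6.1, 5.2, 4.8, 5.9, 7.0, 7.3};
    const double kPi = 3.14159265358979323846;
    const double w = 2.0 * kPi / 24.0;

    // Accumulate the 3x3 normal equations A * beta = rhs for [M, b1, b2].
    double A[3][3] = {{0}}, rhs[3] = {0};
    for (std::size_t i = 0; i < t.size(); ++i) {
        double x[3] = {1.0, std::cos(w * t[i]), std::sin(w * t[i])};
        for (int r = 0; r < 3; ++r) {
            rhs[r] += x[r] * y[i];
            for (int c = 0; c < 3; ++c) A[r][c] += x[r] * x[c];
        }
    }

    // Solve by naive Gaussian elimination (fine for a 3x3 sketch).
    double beta[3];
    for (int k = 0; k < 3; ++k)
        for (int r = k + 1; r < 3; ++r) {
            double f = A[r][k] / A[k][k];
            for (int c = k; c < 3; ++c) A[r][c] -= f * A[k][c];
            rhs[r] -= f * rhs[k];
        }
    for (int r = 2; r >= 0; --r) {
        double s = rhs[r];
        for (int c = r + 1; c < 3; ++c) s -= A[r][c] * beta[c];
        beta[r] = s / A[r][r];
    }

    std::printf("MESOR = %.3f, amplitude = %.3f\n",
                beta[0], std::hypot(beta[1], beta[2]));
}
```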

Related

Mathematical plotting

I want to make an application that plots mathematical functions, and I'd like to know the best language for it. It should have the following features:
An area to draw the function.
Supports anti-aliasing.
A scroll bar to change free parameters (such as a in y=(x-a)*x).
It should be fast enough (calculations will be done hundreds of times).
Parsing mathematical expressions using regex (Is there a better way?).
Any other suggestions would be useful.
Edit: this can be useful in many ways, such as discarding repeated calculations.
For example, plotting y=4+1 over 1000 points performs 999 redundant calculations; performance can be improved with a tree model that recalculates only nodes whose children have changed (a rough sketch follows below).
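For illustration, here is a rough sketch of what I mean by the tree model (C++ chosen only for the example; the names and structure are placeholders):

```cpp
// Illustrative expression-tree node that caches its value and only
// recomputes when it has been marked dirty (e.g. a child changed).
#include <cstdio>
#include <memory>

struct Node {
    virtual ~Node() = default;
    virtual double eval() = 0;        // returns cached value unless dirty
    virtual void markDirty() = 0;
};

struct Constant : Node {
    double v;
    explicit Constant(double v) : v(v) {}
    double eval() override { return v; }
    void markDirty() override {}      // constants never change
};

struct Add : Node {
    std::shared_ptr<Node> lhs, rhs;
    double cached = 0;
    bool dirty = true;
    Add(std::shared_ptr<Node> l, std::shared_ptr<Node> r) : lhs(l), rhs(r) {}
    double eval() override {
        if (dirty) { cached = lhs->eval() + rhs->eval(); dirty = false; }
        return cached;                // repeated calls reuse the cached sum
    }
    void markDirty() override { dirty = true; }
};

int main() {
    auto four = std::make_shared<Constant>(4);
    auto one  = std::make_shared<Constant>(1);
    Add y(four, one);                            // y = 4 + 1
    for (int i = 0; i < 1000; ++i) y.eval();     // computed once, reused 999 times
    std::printf("%g\n", y.eval());               // prints 5
}
```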
Regex will not do for parsing math expressions.
Personally, I write recursive-descent parsers. You might be surprised how easy and flexible it is.
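To give a feel for it, here is a minimal recursive-descent parser and evaluator for +, -, *, / and parentheses; it's a sketch only (no error handling, no variables):

```cpp
// Minimal recursive-descent expression evaluator.
// Grammar: expr   := term (('+'|'-') term)*
//          term   := factor (('*'|'/') factor)*
//          factor := number | '(' expr ')'
#include <cctype>
#include <cstdio>
#include <cstdlib>

struct Parser {
    const char* p;
    explicit Parser(const char* s) : p(s) {}
    void skip() { while (std::isspace(*p)) ++p; }
    double factor() {
        skip();
        if (*p == '(') { ++p; double v = expr(); skip(); if (*p == ')') ++p; return v; }
        char* end; double v = std::strtod(p, &end); p = end; return v;
    }
    double term() {
        double v = factor();
        for (skip(); *p == '*' || *p == '/'; skip()) {
            char op = *p++; double r = factor();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }
    double expr() {
        double v = term();
        for (skip(); *p == '+' || *p == '-'; skip()) {
            char op = *p++; double r = term();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }
};

int main() {
    Parser parser("(2 + 3) * 4 - 10 / 5");
    std::printf("%g\n", parser.expr());   // prints 18
}
```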
If you want the output to look like it's varying continuously when it actually isn't, what I do is not paint directly to the output window.
Rather I paint to a memory bitmap, which I then block-transfer to the visible window.
This eliminates all flashing, and makes it look fast even if it's only actually being repainted a few times per second.
Remember, your time-hog is much more likely to be painting, not calculating, so don't waste time trying to figure out how to optimize the calculation.
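For what it's worth, a bare-bones Win32/GDI version of that memory-bitmap idea (the drawing itself is just a placeholder parabola; this assumes you call it from your paint handling with the window's HDC and client size, and it omits error checks):

```cpp
// Sketch: render the plot into an off-screen bitmap, then blit it to the
// window in one BitBlt call, so the user never sees partial repaints.
#include <windows.h>

void PaintPlot(HDC hdcWindow, int width, int height) {
    // Off-screen surface compatible with the window.
    HDC hdcMem = CreateCompatibleDC(hdcWindow);
    HBITMAP bmp = CreateCompatibleBitmap(hdcWindow, width, height);
    HBITMAP old = (HBITMAP)SelectObject(hdcMem, bmp);

    // Draw everything into the memory DC (background + curve).
    RECT r = {0, 0, width, height};
    FillRect(hdcMem, &r, (HBRUSH)GetStockObject(WHITE_BRUSH));
    MoveToEx(hdcMem, 0, height / 2, NULL);
    for (int x = 1; x < width; ++x) {
        double t = (x - width / 2) / 40.0;           // arbitrary scaling
        int y = height / 2 - (int)(40.0 * t * t);    // placeholder: y = x^2
        LineTo(hdcMem, x, y);
    }

    // One block transfer to the visible window: no flicker.
    BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteObject(bmp);
    DeleteDC(hdcMem);
}
```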
As far as a "best language", it depends what you're trying to do.
I've done all this in C, C++, and C#.
I'm sure Java or other compiled languages would work just as well.
I don't think there is a "best language" for it, but I can give you some hints. One way would be to use C++ with the gnuplot library. Another would be C++ with the Qt and Qwt libraries; Qt will easily handle regex too.
The latter is a solution I've personally used in past work without particular problems, while the former is only a theoretical idea.

How does a system like Wolfram Alpha or Mathematica solve equations?

I'm building a web-based programming language partially inspired by Prolog and Haskell (don't laugh).
It already has quite a bit of functionality, you can check out the prototype at http://www.lastcalc.com/. You can see the source here and read about the architecture here. Remember it's a prototype.
Currently LastCalc cannot simplify expressions or solve equations. Rather than hard-coding this in Java, I would like to enhance the fundamental language such that it can be extended to do these things using nothing but the language itself (as with Prolog). Unlike Prolog, LastCalc has a more powerful search algorithm: Prolog uses depth-first search with backtracking, while LastCalc currently uses a heuristic best-first search.
Before delving into this I want to understand more about how other systems solve this problem, particularly Mathematica / Wolfram Alpha.
I assume the idea, at least in the general case, is that you give the system a bunch of rules for manipulating equations (like a*(b+c) = a*b + a*c), specify the goal (e.g. isolate the variable x), and then let it loose.
So, my questions are:
Is my assumption correct?
What is the search strategy for applying rules? E.g. depth-first, breadth-first, depth-first with iterative deepening, or some kind of best-first?
If it is "best first", what heuristics are used to determine whether it is likely that a particular rule application has got us closer to our goal?
I'd also appreciate any other advice (except for "give up"; I regularly ignore that piece of advice, and doing so has served me well ;) ).
I dealt with such questions myself some time ago. I then found this document about simplification of expressions. It is titled Rule-based Simplification of Expressions and shows some details about simplification in Mupad, which later became a part of Matlab.
According to this document, your assumption is correct. There is a set of rules for the manipulation of expressions, and a heuristic quality metric is used as the target function for simplification.
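To make that concrete, here is a toy C++ sketch of rule-driven rewriting with a best-first search; the textual rules, the "shorter is simpler" heuristic, and the plain-string representation are stand-ins for what Mupad or Mathematica actually do with structured terms:

```cpp
// Toy best-first rewriting: apply textual rewrite rules to an expression,
// always expanding the shortest (heuristically "simplest") form first.
#include <cstdio>
#include <queue>
#include <set>
#include <string>
#include <vector>

struct Rule { std::string from, to; };

int main() {
    // Stand-in rules; a real system matches structured terms, not substrings.
    std::vector<Rule> rules = {
        {"x*1", "x"}, {"x+0", "x"}, {"(x)", "x"}, {"x-x", "0"},
    };
    std::string start = "(x*1+0)-(x*1)";

    // Priority queue ordered by length: shortest expression comes out first.
    auto cmp = [](const std::string& a, const std::string& b) { return a.size() > b.size(); };
    std::priority_queue<std::string, std::vector<std::string>, decltype(cmp)> open(cmp);
    std::set<std::string> seen;
    open.push(start); seen.insert(start);
    std::string best = start;

    while (!open.empty()) {
        std::string cur = open.top(); open.pop();
        if (cur.size() < best.size()) best = cur;
        // Try every rule at every position; each match spawns a new candidate.
        for (const Rule& r : rules) {
            for (std::size_t pos = cur.find(r.from); pos != std::string::npos;
                 pos = cur.find(r.from, pos + 1)) {
                std::string next = cur.substr(0, pos) + r.to + cur.substr(pos + r.from.size());
                if (seen.insert(next).second) open.push(next);
            }
        }
    }
    std::printf("simplified: %s\n", best.c_str());  // prints: simplified: 0
}
```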
Wolfram Alpha is built on Mathematica.
Mathematica is Stephen Wolfram's brainchild; Mathematica 1.0 was released in 1988. Mathematica is much like Maple, and both rely heavily on older software libraries such as LAPACK.
The libraries these programs are based on are often simply legacy software: they've been around, and been modified, for a very long time.
If you would like to know about the background programs running, SageMath is a free, open-source alternative; you could possibly reverse engineer the solutions to your questions:
SageMath.org

Looking for optimization algorithm in C++ to replace Excel Solver

Since Excel Solver is quite slow to run on thousands of optimizations (the reason being that it uses the spreadsheet as its interface), I'm trying to implement a similar (problem-specific) solver in C++ (with Visual Studio 2010, on a Win 7 64-bit platform). I would include the DLL via a Declare statement in VBA, and I already have experience doing this, so that is not the problem.
My problem would be minimizing the sum of squared errors between empirical data and a target function which is non-linear but smooth, and the problem would include non-negativity (X>=0) or even positivity constraints (e.g. X>=0.00000001), with X denoting the decision variable.
I'm looking for a robust, proven implementation. It may be part of an established library.
For example, I've already looked into what ALGLIB has in store (see http://www.alglib.net/optimization/), and it seems only one of their algorithms accepts bound constraints. I don't know what it's worth, though; that's why I'm trying to gather some opinions.
Or, on another note, would it be advisable to augment ALGLIB's Levenberg-Marquardt algorithm with such basic constraints, for example by rejecting every intermediate solution that does not satisfy my constraints? (I guess that won't do, but it's still worth asking.)
There are modifications of the Levenberg-Marquardt method that add support for inequality constraints. I know about one library that implements such an algorithm:
levmar (GPL).
If you would like to modify an existing algorithm, rejecting bad solutions won't do; the optimization will likely get stuck. But you can make a variable substitution, e.g. to ensure that X > 0.1 you can use t^2 + 0.1 in place of X, as sketched below.
I use this method as a workaround for the lack of built-in box constraints in my program. Here is a quote from Data fitting in the chemical sciences by Peter Gans that describes it better:
https://github.com/wojdyr/fityk/wiki/InequalityConstraints
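To illustrate the substitution trick concretely (the toy model, the data, and the crude scan standing in for a real optimizer such as ALGLIB or levmar are all made up):

```cpp
// Sketch: enforce X >= 0.1 by substituting X = t*t + 0.1 and letting an
// unconstrained minimizer work on t. Toy model and data are made up.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Made-up empirical data: y_i observed at points u_i, model y = exp(-X*u).
static const std::vector<double> u = {0.0, 1.0, 2.0, 3.0, 4.0};
static const std::vector<double> y = {1.00, 0.62, 0.37, 0.21, 0.14};

// Sum of squared errors as a function of the *unconstrained* variable t.
double sse(double t) {
    double X = t * t + 0.1;              // the substitution: X is always >= 0.1
    double s = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i) {
        double r = y[i] - std::exp(-X * u[i]);
        s += r * r;
    }
    return s;
}

int main() {
    // Crude grid scan over t, standing in for a real optimizer, just to show
    // that the constraint on X never has to be checked explicitly.
    double bestT = 0.0, bestS = sse(0.0);
    for (double t = -3.0; t <= 3.0; t += 0.001) {
        double s = sse(t);
        if (s < bestS) { bestS = s; bestT = t; }
    }
    std::printf("X = %.4f, SSE = %.6f\n", bestT * bestT + 0.1, bestS);
}
```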
We find OPTIF9 and UNCMIN to be the standard methods of choice. You should be able to link them in a library and call them from C++, if you don't want to bother compiling Fortran.
A way to put limits on the search space is to transform the parameters, such as by a logit function.
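For example, a minimal sketch of such a logit-style transform (not tied to OPTIF9/UNCMIN; the bounds are arbitrary):

```cpp
// Sketch: logistic ("logit") reparameterization to keep X inside (lo, hi)
// while the optimizer sees an unconstrained variable t.
#include <cmath>
#include <cstdio>

double toBounded(double t, double lo, double hi) {
    return lo + (hi - lo) / (1.0 + std::exp(-t));    // X in (lo, hi) for any t
}

double toUnbounded(double X, double lo, double hi) {
    double p = (X - lo) / (hi - lo);                 // inverse: the logit
    return std::log(p / (1.0 - p));
}

int main() {
    double lo = 0.0, hi = 10.0;
    double t = toUnbounded(2.5, lo, hi);             // start from X = 2.5
    std::printf("t = %.4f, back to X = %.4f\n", t, toBounded(t, lo, hi));
}
```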
Have you looked into the Microsoft Solver Foundation? The express edition is free, and comes with a .NET 4.0 dll. I found it fairly easy to use. On the other hand, I don't know how large of a problem you are talking: there are some limitations in the number of variables in the express edition.

How to calculate indefinite integral programmatically

I remember solving a lot of indefinite integration problems. There are certain standard methods of solving them, but nevertheless there are problems which take a combination of approaches to arrive at a solution.
But how can we achieve the solution programmatically?
For instance, look at Mathematica's online integrator app. So how do we approach writing such a program, one that accepts a function as an argument and returns its indefinite integral?
P.S. The input function can be assumed to be continuous (i.e. it is not, for instance, sin(x)/x).
You have Risch's algorithm which is subtly undecidable (since you must decide whether two expressions are equal, akin to the ubiquitous halting problem), and really long to implement.
If you're into complicated stuff, solving an ordinary differential equation is actually not harder (and computing an indefinite integral is equivalent to solving y' = f(x)). There exists a Galois differential theory which mimics Galois theory for polynomial equations (but with Lie groups of symmetries of solutions instead of finite groups of permutations of roots). Risch's algorithm is based on it.
The algorithm you are looking for is the Risch algorithm:
http://en.wikipedia.org/wiki/Risch_algorithm
I believe it is a bit tricky to use. This book:
http://www.amazon.com/Algorithms-Computer-Algebra-Keith-Geddes/dp/0792392590
has a description of it. A 100-page description.
You keep a set of basic forms you know the integrals of (polynomials, elementary trigonometric functions, etc.) and you use them on the form of the input. This is doable if you don't need much generality: it's very easy to write a program that integrates polynomials, for example.
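For instance, the polynomial case really is a few lines; here's a sketch with coefficients stored lowest degree first (the representation is just one convenient choice, not from any particular CAS):

```cpp
// Sketch: integrate a polynomial term by term.
// Coefficients are stored lowest degree first: {c0, c1, c2} means c0 + c1*x + c2*x^2.
#include <cstdio>
#include <vector>

// Antiderivative of p(x); the constant of integration is set to 0.
std::vector<double> integrate(const std::vector<double>& p) {
    std::vector<double> out(p.size() + 1, 0.0);
    for (std::size_t k = 0; k < p.size(); ++k)
        out[k + 1] = p[k] / (k + 1);          // integral of c*x^k is c*x^(k+1)/(k+1)
    return out;
}

int main() {
    std::vector<double> p = {1.0, 0.0, 3.0};  // 1 + 3x^2
    std::vector<double> P = integrate(p);     // expect x + x^3
    for (std::size_t k = 0; k < P.size(); ++k)
        std::printf("%+gx^%zu ", P[k], k);
    std::printf("\n");
}
```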
If you want to do it in the most general case possible, you'll have to do much of the work that computer algebra systems do. It is a lifetime's work for some people: if you look up the Risch algorithm mentioned in other answers, or symbolic integration in general, you'll see that entire multi-volume books have been written on the topic (Manuel Bronstein, Symbolic Integration Volume I, Springer), and very few existing computer algebra systems implement it in full generality.
If you really want to code it yourself, you can look at the source code of Sage or the several projects listed among its components. Of course, it's easier to use one of these programs, or, if you're writing something bigger, use one of these as libraries.
These expert systems usually have a huge collection of techniques and simply try one after another.
I'm not sure about Wolfram Mathematica, but in Maple there's a command that enables displaying all intermediate steps. If you use it, you get all the tried techniques as output.
Edit:
Transforming the input should not be the really tricky part: you need to write a lexer and a parser that transform the textual input into an internal representation.
Good luck. Mathematica is a very complex piece of software, and symbolic manipulation is something it does best. If you are interested in the topic, take a look at these books:
http://www.amazon.com/Computer-Algebra-Symbolic-Computation-Elementary/dp/1568811586/ref=sr_1_3?ie=UTF8&s=books&qid=1279039619&sr=8-3-spell
Also, going to the source wouldn't hurt either. This book actually explains the inner workings of Mathematica:
http://www.amazon.com/Mathematica-Book-Fourth-Stephen-Wolfram/dp/0521643147/ref=sr_1_7?ie=UTF8&s=books&qid=1279039687&sr=1-7

Algebraic logic

Both Wolfram Alpha and Bing now provide the ability to solve complex algebraic logic problems (i.e. "solve for x, given this equation"), and not just evaluate simple arithmetic expressions (e.g. "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudo-code, not to mention it's actually functional, so that'd be ideal; I'm also fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort crunching out "pseudocode" when they could be getting something 100% functional for just slightly more effort. Anyway, thanks so much in advance.
There are two main ways to solve an equation:
Numerical methods. These mean, basically, that the solver keeps changing the value of x until the equation is satisfied (a small sketch follows after this list). More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols, following a number of formal rules. It's not that different from the algebra we learn in school; the solver just knows a lot of different rules. More info on computer algebra.
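A minimal sketch of the numeric route (bisection on a made-up equation and bracket; real solvers are far more sophisticated):

```cpp
// Sketch of the numeric approach: adjust x until f(x) = 0 is (nearly) satisfied.
// Solves x^3 - x - 2 = 0 by bisection on a bracket known to contain a root.
#include <cstdio>

double f(double x) { return x * x * x - x - 2.0; }

int main() {
    double lo = 1.0, hi = 2.0;                 // f(1) < 0, f(2) > 0
    for (int i = 0; i < 60; ++i) {             // halve the interval until tiny
        double mid = 0.5 * (lo + hi);
        if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
    }
    std::printf("x ~= %.10f\n", 0.5 * (lo + hi));  // ~1.5213797068
}
```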
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural-language parser (which is also built primarily with Mathematica). They have a whole heap of curated data and associated formulas that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) API to answer questions via W|A.
