Asymptotic time complexity exponential function - algorithm

Dear friends, I am a bit confused about the time complexity of my algorithm. My algorithm has time complexity 3^(.5n). Is it correct to rewrite 3^(.5n) as (3^.5)^n? I saw it written that way in a thesis.

Yes, that is correct. It follows from the well-known identity for exponentiation
(a^b)^c = a^(b*c)
so 3^(.5n) = (3^.5)^n, which is roughly 1.732^n.
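As a quick sanity check (my own sketch, not part of the original answer), the equivalence can be verified numerically in Python:

    import math

    # Verify 3^(0.5n) == (3^0.5)^n == sqrt(3)^n, up to floating-point rounding.
    for n in range(1, 11):
        direct = 3 ** (0.5 * n)
        rewritten = (3 ** 0.5) ** n
        assert math.isclose(direct, rewritten, rel_tol=1e-12), (n, direct, rewritten)
    print("3^(0.5n) == (3^0.5)^n for n = 1..10")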
But what does this math formula have to do with programming?

Related

How to develop a program with as low a time complexity as possible?

How do I develop programs with as low a time complexity as possible?
There is no proven method for coming up with the optimal algorithm for a given problem.
There are many problems for which it is likely that the world has not come up with the most efficient algorithm yet.
For example: what is the algorithm with the best time complexity for matrix multiplication? At the time of writing, no one knows what that theoretical best time complexity would be, let alone whether anyone has designed an algorithm that achieves it.

Time complexity of a^n

I was learning about time complexity recently and I was wondering about the time complexity of an algorithm to compute a^n. My answer would be O(n).
However, I was thinking about the divide-and-conquer method, since a^(n/2) * a^(n/2) = a^n. Is it possible to turn the O(n) algorithm for a^n into an algorithm with O(log n) time complexity? I got stuck thinking about what such an algorithm would look like and how it would work.
I would greatly appreciate it if anyone could share their thoughts with me.
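For what it's worth, here is a sketch of that divide-and-conquer idea (exponentiation by squaring), assuming n is a non-negative integer:

    def power(a: float, n: int) -> float:
        # a^n = (a^(n//2))^2, times an extra factor of a when n is odd.
        # n is halved on every recursive call, so only O(log n) multiplications are needed.
        if n == 0:
            return 1.0
        half = power(a, n // 2)
        return half * half if n % 2 == 0 else half * half * a

    print(power(2, 10))  # 1024.0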

Do recursive and iterative solutions to a problem have the same worst-case time complexity?

If there is an existing theory that proves this, what is it, i.e. what proves that both would give the same worst-case time complexity? It has been shown that both can provide solutions to a problem, but nothing about both giving the same worst-case time complexity.
It completely depends on the problem. But generally, recursive solutions can be made more time-efficient by using dynamic programming.
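As an illustration of that last point (my own example, not from the original answer), naive recursive Fibonacci takes exponential time, while memoizing the same recursion, a simple form of dynamic programming, makes it linear in the number of distinct subproblems:

    from functools import lru_cache

    def fib_naive(n: int) -> int:
        # Plain recursion: T(n) = T(n-1) + T(n-2) + O(1), which grows exponentially.
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n: int) -> int:
        # Same recursion with memoization: each n is computed once, so O(n) calls in total.
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    print(fib_memo(80))  # instant; fib_naive(80) would take far too long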

What is the complexity of the simplex algorithm for binary integer programming?

What is the complexity of the simplex algorithm for a binary integer programming problem, in the worst case or on average?
I'm solving the assignment problem.
References:
https://en.wikipedia.org/wiki/Integer_programming
https://en.wikipedia.org/wiki/Simplex_algorithm
Since it's for the assignment problem, that changes matters. In that case, as the wiki page notes, the constraint matrix is totally unimodular, which is exactly what you need to make your problem an instance of normal linear programming as well (that is, you can drop the integrality constraint, and the result will still be integral).
So, it can be solved in polynomial time. The simplex algorithm doesn't guarantee a polynomial-time bound, however.
Of course there are also other polynomial-time algorithms to solve the assignment problem.
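To make the total-unimodularity point concrete, here is a small sketch (my own illustration with made-up cost data, using scipy.optimize.linprog; it is not code from this thread) that solves only the LP relaxation of a 3x3 assignment problem, and the vertex solution returned by the simplex-type solver still comes out 0/1:

    import numpy as np
    from scipy.optimize import linprog

    C = np.array([[4, 1, 3],
                  [2, 0, 5],
                  [3, 2, 2]], dtype=float)  # made-up cost matrix
    n = C.shape[0]
    c = C.ravel()                           # minimize sum_ij C[i, j] * x[i, j]

    # Equality constraints: every worker gets exactly one job, every job one worker.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1      # row i of x sums to 1
        A_eq[n + i, i::n] = 1               # column i of x sums to 1
    b_eq = np.ones(2 * n)

    # LP relaxation only: 0 <= x <= 1, with no integrality constraint.
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs-ds")
    print(res.x.reshape(n, n))              # already a 0/1 permutation matrix
    print("optimal cost:", res.fun)         # 5.0 for this cost matrix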
In a general sense, binary integer programming is one of Karp's 21 NP-complete problems, so assuming P != NP it's safe to say that its worst-case running time is super-polynomial; no polynomial bound is possible. Again, in general, similar to SAT solvers, the "average" case is going to be heavily dependent on what you're taking the average across. Until you've got more specific information about the class of problems you're trying to solve with simplex, I don't think there is a good answer.
I'll do some more thinking and update when I have more information.

Running time of computing mathematical functions

Where can I turn for information regarding computing times of mathematical functions? Has any (general) study with any amount of rigor been made?
For instance, computing
constant + constant
generally takes O(1) time.
Suppose I want to start using math like integrals, and I'd like to get an asymptotic approximation to various integrals. Has there been a standard study of this, or must I take the information I have and figure out my own approximation? I'd be very interested in a standard approach to this, and I'd like to know if one already exists.
Here's my motivation:
I'm in the middle of writing a paper that points out the equivalence between NP-hard problems and certain types of mathematical equations. It seems there might be use for a generalized study of math computing times, almost like a new science.
EDIT:
I guess I'm wondering if there is a standard computational complexity to any given math that cannot be avoided. I'm wondering if anyone has studied this question. I'd love to see what others have tried.
EDIT 2:
Wikipedia has an entry on "Computational Complexity Theory", which I think may fit the bill. I'm still wondering if someone who has studied this could confirm it.
"Standard" math has no notion of algorithmic complexity. That's reserved for computer algorithms.
There are ways to analyze the dynamic behavior of solutions of equations. Things like convergence matter a great deal to mathematicians.
You can ask what the algorithmic complexity of Euler integration is versus fifth-order Runge-Kutta for numerical integration. They would be compared on the number of function evaluations required and on time-step stability.
But what's the "running time" of the solution to Fermat's Last Theorem? What about the last of David Hilbert's challenge problems? Is the "running time" for those a century and counting? What's your running time for solving a partial differential equation using separation of variables?
When you think about it that way, do you have a better understanding of why people would be put off by your question?
Yes, for various mathematical functions, the computational complexity (running time) of computing the function has been studied. This can differ depending on the model of computation.
For example adding two n-bit numbers takes Θ(n) time, multiplying them takes Θ(n log n) time (using the FFT), finding their gcd takes Θ(n^2) time with the usual Euclidean algorithm and Θ(n (log n)^2 log log n) with better algorithms, etc. For more complicated stuff like integrals, obviously it depends on what algorithm you use to do it.
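For instance, a sketch of the classical Euclidean algorithm whose Θ(n^2) bound is quoted above (my own illustration, relying on Python's arbitrary-precision integers):

    def gcd(a: int, b: int) -> int:
        # Classical Euclidean algorithm: on n-bit inputs the division steps
        # together cost O(n^2) bit operations with schoolbook arithmetic,
        # matching the Theta(n^2) figure mentioned above.
        while b:
            a, b = b, a % b
        return a

    print(gcd(2**64 - 1, 2**48 - 1))  # 65535, i.e. 2**gcd(64, 48) - 1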
There isn't a collected body of work, but work on approximating functions comes close. For example, you'd like to know that approximating sin(x) to within an error epsilon can be done in time polynomial in log(x) and 1/epsilon. There isn't a general theory here (though you should look up information-based complexity), and focusing on specific functions might help.
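As a concrete (hypothetical) instance of the sin(x) example, a truncated Taylor series after range reduction keeps adding terms only until they drop below the requested tolerance eps:

    import math

    def sin_approx(x: float, eps: float = 1e-12) -> float:
        # Reduce x to [-pi, pi] (sin is 2*pi-periodic), then sum the Taylor
        # series x - x^3/3! + x^5/5! - ... until the last added term falls below eps.
        x = math.remainder(x, 2 * math.pi)
        term, total, k = x, x, 1
        while abs(term) > eps:
            term *= -x * x / ((2 * k) * (2 * k + 1))
            total += term
            k += 1
        return total

    print(sin_approx(100.0), math.sin(100.0))  # the two values agree to about 1e-12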
user389117,
I think that subconsciously you want to deduce the complexity of computing a mathematical expression from the form of that expression.
E.g. for an expression involving the square of a variable (x^2), you think (at least subconsciously) that the complexity of the computation is analogous to x^2, so the complexity should be something like O(n^2), or that there is a standard process to deduce the form of the complexity from the form of the mathematical equation.
These are two different qualities, and one cannot be deduced from the other.
I will give you an example: in papers, all algorithms are written in pseudocode, and then the scientists deduce the complexity of that pseudocode.
The pseudocode must inevitably be written first; only then can you compute the complexity.
There is no magical way to derive the complexity from the form of the thing you want to compute.
Even if you compute the complexity and find that its form is analogous to the form of the equation being computed, I think it would be hard, at least at first, for you to turn that observation from pseudo-science into science.
Good Luck!
