Summations - Closed Form - Where to Start - algorithm

I am struggling to understand the basics of forming a closed-form expression from a summation. I understand the goal, but I don't understand the process to follow in order to get there.
Find a closed form for the sum k + 2k + 3k + ... + k^2. Prove your claim.
My first approach was to turn it into a recurrence relation, which did not work cleanly. After that I attempted to turn the recurrence relation into a closed form, but I was unsuccessful.
Does anyone know of a good approach to solving such problems, or any simple tutorials? The material I find online does not help, and causes further confusion.
Thanks

Since no one gave the mathematical approach, I am adding it here; this is an AP (arithmetic progression) problem.
The given series is 1k + 2k + 3k + ... + k·k (i.e., k^2).
So there are k terms altogether in the series.
Each consecutive term is greater than the previous one by a constant common difference, namely k.
So this is an Arithmetic Progression.
Now, to calculate the general summation, the formula is given by :-
S(n) = (n/2)(a(1) + a(n))
where S(n) is the sum of the series up to n terms,
n is the number of terms in the series,
a(1) is the first term of the series, and
a(n) is the last (n-th) term of the series.
Here, fitting the terms of the given series into the summation formula, we get:
S(k) = (k/2)(1k + k·k) = (k/2)(k + k^2) = (k^3)/2 + (k^2)/2.
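A quick way to gain confidence in (or debug) a closed form like this is to check it against the direct sum for small k. A minimal Python sketch (helper names are my own):

```python
# Check the closed form (k^3 + k^2) / 2 against the direct sum
# k + 2k + 3k + ... + k*k for several values of k.
def direct_sum(k):
    return sum(i * k for i in range(1, k + 1))

def closed_form(k):
    return (k**3 + k**2) // 2

for k in range(1, 20):
    assert direct_sum(k) == closed_form(k)
print("closed form matches for k = 1..19")
```

This kind of spot check doesn't replace the induction proof, but it catches algebra slips early.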

If you are interested in a general algorithm to compute sums like these (and more complicated ones), I can't recommend the book A=B enough.
The authors have been kind enough to make the PDF freely available:
http://www.math.upenn.edu/~wilf/AeqB.html
Enjoy!

Asad has explained a mathematical approach in the comments to solving this.
If you are interested in a programming approach that works for more complicated expressions, then you can use Sympy in Python.
For example:
import sympy
x,k = sympy.symbols('x k')
print(sympy.summation(x*k, (x, 1, k)))
prints:
k*(k**2/2 + k/2)


How to find an efficient algorithm to identify for each x the smallest n and k such that f^(n)(x) = f^(n+k)(x)

The titular question is associated to the following problem: https://i.gyazo.com/07b7dde7efe1df0b7ae9550317851fda.png
A more detailed explanation of the titular problem can be provided, but I can't post more than two links, so if someone replies and asks for it, I can provide it.
To start, I understand that the whole question is based upon the idea of the tortoise and hare algorithm for cycle detection (I would link the Wikipedia page, but I don't have enough reputation). I also understand that the existence of a loop is proven by the tortoise and hare 'meeting up' with each other after leaving the first node. I also know that where they meet up for the second time in the second phase of the algorithm is indicative of exactly where the loop begins.
Unfortunately, I simply can't wrap my head around relating this/these facts to the question given and how to create an algorithm for it.
Any help is greatly appreciated!
No idea how you're supposed to make an algorithm for it, but it seems like simple modulo math. T(x) = x mod 5 and H(x) = 2x mod 5. You're asked to solve for when T(x) = H(x). Since both expressions are modulo 5, they're equal when 2x - x = 5k, i.e. x = 5k, for some integer k.
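For the algorithmic side of the question, the standard tortoise-and-hare (Floyd) cycle-finding procedure returns exactly the smallest n (tail length) and k (cycle length) asked about. A sketch in Python, assuming f is a function on a finite set and x0 a starting value (names are my own):

```python
def floyd(f, x0):
    """Return (mu, lam): the smallest n and k with f^n(x0) = f^(n+k)(x0)."""
    # Phase 1: hare moves twice as fast; they must meet inside the cycle.
    tortoise = f(x0)
    hare = f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: restart the tortoise at x0; moving both one step at a time,
    # they meet exactly at the start of the cycle (distance mu from x0).
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    # Phase 3: walk the hare around the cycle once to measure its length.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# Example: 0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 3 -> ...  (tail 3, cycle 3)
f = lambda x: x + 1 if x < 5 else 3
print(floyd(f, 0))  # (3, 3)
```

Phase 2 is the part the question alludes to: the second meeting point is exactly where the loop begins.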

Display the exact form of quadratic equations' roots

I notice that almost all new calculators are able to display the roots of quadratic equations in exact form. For example:
x^2-16x+14=0
x1=8+5sqrt2
x2=8-5sqrt2
What algorithm could I use to achieve that? I've been searching around but found no results related to this problem.
Assuming your quadratic equation is in the form
y = ax^2+bx+c
you get the two roots by
x_1, x_2 = (-b ± sqrt(b^2 - 4ac)) / (2a)
using + for one root and - for the other.
If you want to take something out of the square root, compute the prime factorization of the argument and take out one copy of each prime factor whose exponent is at least 2 (two copies if it's at least 4, and so on).
By the way, the two roots you posted check out: the discriminant is 256 - 56 = 200, and sqrt(200)/2 = 5*sqrt(2).
The “algorithm” is exactly the same as on paper. Depending on the programming language, it may start with int delta = b*b - 4*a*c;.
You may want to define a datatype of terms and simplifications on them, though, in case the coefficients of the equation are not simply integers but themselves solutions of previous equations. If this is the sort of thing you are after, look up “symbolic computation”. Some languages are better for this purpose than others. I expect that elementary versions of what you are asking are actually used as examples in some ML tutorials, for instance (see chapter 9).

Algorithms and Recursion Help?

I have the following recurrence: T(n) = 2*T(n/4) + T(n/2) + n, and I need to solve it exactly. I know the Master theorem won't help me, and iteration seems to go wrong...
Please tell me how to do it in general for such recursions.
Thanks in advance.
Hey all, thanks for replying. I need the complexity, and to understand how to solve such problems.
T(n) = O(n log n) and Ω(n log n).
To prove that, by the definition of O, we need to find constants n0 and c such that:
for every n >= n0, T(n) <= cn log n.
We will use induction on n to prove that T(n) <= cn log n for all n >= n0.
Let's skip the base case for now... (we'll return later)
Hypothesis: We assume that for every k < n, T(k) <= ck log k
Thesis: We want to prove that T(n) <= cn log n
But, T(n)=2T(n/4)+T(n/2)+n
Using the hypothesis we get:
T(n) <= 2(c(n/4)log(n/4)) + c(n/2)log(n/2) + n = cn log n + n(1 - 3c/2)
So taking c >= 2/3 proves our thesis, because then the extra term n(1 - 3c/2) is non-positive and T(n) <= cn log n.
Now we need to prove the base case:
We will take n0 = 2, because with n0 = 1 we'd have log n = 0, which wouldn't work with our thesis. So our base cases are n = 2, 3, 4. We need the following propositions to be true:
T(2) <= 2clog2
T(3) <= 3clog3
T(4) <= 4clog4
So, by taking c = max{2/3, T(2)/2, T(3)/(3 log 3), T(4)/8} and n0 = 2, we have found constants c and n0 such that for every natural n >= n0, T(n) <= cn log n.
The proof that T(n) = Ω(n log n) is analogous.
So basically, in these cases where you can't use the Master Theorem, you need to 'guess' the result and then prove it by induction.
For more information on these kind of demonstrations, refer to 'Introduction to Algorithms'
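One way to sanity-check the Θ(n log n) guess before proving it is to evaluate the recurrence numerically and watch the ratio T(n)/(n log2 n); it settles toward a constant (around 2/3 for this recurrence). A Python sketch, with base cases chosen arbitrarily since the recurrence itself doesn't fix them:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Hypothetical base cases T(1) = T(2) = T(3) = 1; n/4 and n/2 are
    # taken as integer divisions.
    if n < 4:
        return 1
    return 2 * T(n // 4) + T(n // 2) + n

# The ratio flattening out is consistent with T(n) = Theta(n log n).
for n in (2**10, 2**15, 2**20):
    print(n, T(n) / (n * math.log2(n)))
```

The constants drift a little because of the integer divisions and the arbitrary base cases, but the trend is unmistakable; a ratio growing without bound would falsify the guess.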
First of all, you need to define some limits on this, otherwise it won't ever terminate and you will end up with an overflow.
Something like: n is an integer and the minimum value is 0.
Could you please add more details to your question along these lines?
This won't help you figure out how to do it necessarily, but apparently Wolfram Alpha can get the right answer. Perhaps you can look for documentation or have Mathematica show you the steps it takes in solving this:
Wolfram Alpha: T(n)=2*T(n/4)+T(n/2)+n
To put crude upper and lower bounds on the search space, you could have recognized that your T(n) is bounded above by 3T(n/2) + n and below by 2T(n/4) + n... so O(n^(log2 3)) ≈ O(n^1.585) and Ω(n), by the master theorem.
In general, solving recurrence relations is a hard problem.

Why is the sequence 1,4,13,40,121... more efficient than 1,2,4,8,16... when insertion-sorting?

1, 4, 13, 40, 121, ... (each term being 3 times the previous plus 1) works slightly more efficiently than 1, 2, 4, 8, 16, ... (doubling) as a gap sequence when sorting random numbers.
Why is this? Is it anything to do with threading?
Thanks.
It is well known that the Shell sort increment steps 2^k, 2^(k-1), ..., 1 are among the worst. For instance, elements at odd positions are compared with elements at even positions only at the very last step!
The other steps seem to be (3^k - 1)/2 (each term is 3·previous + 1, rather than 3n + 1 as a formula in n) and don't suffer from problems like the even/odd issue. That is not a proof, but we might expect this to be better than powers of 2.
If you are looking for mathematical analysis, Shell Sort is well known for giving mathematicians a hard time.
I didn't find any analysis of your sequence in Sedgewick's paper here. Perhaps one of Knuth's books has it.
Good luck.
btw, why do you ask about threading?
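For concreteness, here is a small Python sketch of Shell sort using the 1, 4, 13, 40, ... gaps (each term 3·previous + 1); function names are my own:

```python
def knuth_gaps(n):
    """Gap sequence 1, 4, 13, 40, ... (h = 3h + 1), largest gap first."""
    gaps = [1]
    while gaps[-1] < n // 3:
        gaps.append(3 * gaps[-1] + 1)
    return gaps[::-1]

def shell_sort(items):
    a = list(items)
    for gap in knuth_gaps(len(a)):
        # Gapped insertion sort: every gap-th slice ends up sorted, so
        # later (smaller) gaps have progressively less work to do.
        for i in range(gap, len(a)):
            x, j = a[i], i
            while j >= gap and a[j - gap] > x:
                a[j] = a[j - gap]
                j -= gap
            a[j] = x
    return a

print(shell_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Swapping `knuth_gaps` for a powers-of-two generator and counting the inner-loop shifts on random input is an easy way to see the even/odd problem described above for yourself.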

How to solve a system of inequalities?

I've reduced my problem (table layout algorithm) to the following problem:
Imagine I have N variables X1, X2, ..., XN. I also have some (undetermined) number of inequalities, like:
X1 >= 2
X2 + X3 >= 13
etc.
Each inequality is a sum of one or more variables, always compared to a constant with the >= operator. I cannot say in advance how many inequalities I will have each time, but all the variables have to be non-negative, so that's already one inequality per variable.
How to solve this system in such a way, that the values of the variables are as small as possible?
Added: I read the Wikipedia article and realized that I forgot to mention that the variables have to be integers. I guess this makes it NP-hard, huh?
Minimizing x1 + x2 + ... where the xi satisfy linear constraints is called Linear Programming. It's covered in some detail on Wikipedia.
What you have there is a pretty basic Linear Programming problem. You want to minimize X_1 + ... + X_n subject to
X_1 >= 2
X_2 + X_3 >= 13
etc.
There are numerous algorithms to solve this type of problem. The best known is the Simplex algorithm, which will solve your problem (with certain caveats) quite efficiently in the average case, although there exist LP instances for which Simplex requires exponentially many steps (in the problem size).
Various implementations of LP solvers exist. For example LP_Solve should satisfy most of your requirements
You may also post your linear model directly to the NEOS platform (http://neos.mcs.anl.gov/neos/solvers/index.html). What you have to do first is write your model in an algebraic language such as AMPL. NEOS will then solve the model and return the results by e-mail.
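As a small illustration of the LP formulation (not the integer variant asked about in the edit; for that you'd need an ILP solver such as PuLP or OR-Tools), here is a sketch using SciPy's linprog, assuming SciPy is installed. linprog minimizes and expects "<=" constraints, so the ">=" rows are negated:

```python
# minimize  x1 + x2 + x3
# subject to  x1 >= 2,  x2 + x3 >= 13,  all xi >= 0
from scipy.optimize import linprog

c = [1, 1, 1]                 # objective: sum of the variables
A_ub = [[-1,  0,  0],         # -x1       <= -2   (i.e. x1 >= 2)
        [ 0, -1, -1]]         # -x2 - x3  <= -13  (i.e. x2 + x3 >= 13)
b_ub = [-2, -13]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)         # optimum: x1 = 2, x2 + x3 = 13, objective 15
```

For this particular instance the LP optimum is already integral, but in general the integer restriction changes the problem class, as the edit suspects.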