Semidefinite programming

Let G = ({1, ..., n}, E) be a graph with two edge weight functions α_e ≤ β_e, e ∈ E.
I want to know whether there exist points p_1, ..., p_n ∈ R^n such that α_{i,j} ≤ ||p_i − p_j||^2 ≤ β_{i,j} for all {i,j} ∈ E.
How can I show that this decision problem can be formulated as a semidefinite program?
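A minimal sketch of the standard Gram-matrix reformulation (the matrix X and the factor P below are names introduced here for illustration, not part of the question): collect candidate points as the columns of a matrix P = [p_1, ..., p_n] and let X = Pᵀ·P be the corresponding Gram matrix, so X_{i,j} = p_iᵀ·p_j and
||p_i − p_j||^2 = X_{i,i} + X_{j,j} − 2·X_{i,j}.
Conversely, every positive semidefinite n×n matrix X can be factored as Pᵀ·P with P ∈ R^{n×n}, so such points exist if and only if the following semidefinite feasibility problem has a solution:
find a symmetric matrix X ∈ R^{n×n}
subject to α_{i,j} ≤ X_{i,i} + X_{j,j} − 2·X_{i,j} ≤ β_{i,j}, for all {i,j} ∈ E
X ⪰ 0 (X positive semidefinite)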

Related

How many edges in dense graph

I have read that in a dense graph the number of edges is (n²), and I don't understand how.
If I have a graph where every node is connected to all the other nodes, then the number of edges will be (n−1) + (n−2) + (n−3) + ... + 1, so how can the number of edges in a dense graph be (n²)?
It depends on whether your graph is directed. In an undirected dense graph, the number of edges is (n · (n − 1) / 2) (which is equal to your series). In a directed graph, the number is double that, so just (n · (n − 1)).
This is not exactly (n²), but very close to it. You can say that n² is an upper bound, so it is maybe more appropriate to say O(n²) if that makes sense in the context.
It's the Big O notation, maybe what they mean is the complexity when you do a graph traversal.
In Big O notation : O(n²/2) = O(n²)
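To see concretely that the series from the question matches n·(n−1)/2 and grows like n², here is a quick check (a throwaway Python snippet, not from the original answers):
# Compare the hand-counted series with the closed form and with n^2.
for n in (5, 10, 100):
    series = sum(range(1, n))                  # (n-1) + (n-2) + ... + 1
    closed_form = n * (n - 1) // 2
    assert series == closed_form
    print(n, series, n * n, series / (n * n))  # the ratio tends to 1/2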

Question about transitivity with regards to big o notations

If f ∈ O(g) and g ∈ Θ(h), is f ∈ Θ(h)?
I would say yes, because: if g is an upper bound for f, and g lies between the two functions (1/c)·h and c·h, then c·h must also be an upper bound for f; and consequently, if c·h is an upper bound for both f and g, then (1/c)·h must be a lower bound for both. (The reciprocal of a big number is very small.)
Is this right?
It is not:
Imagine f(x) = x, g(x) = 5*x^2 and h(x) = x^2
f ∈ O(g) since 5*x^2 is an upper bound for x.
g ∈ Θ(h) since x^2 is both an upper and a lower bound for 5*x^2 (up to constant factors).
but f ∉ Θ(h) since x^2 is not a lower bound for x.
You are correct that c*h(x) is indeed an upper bound for f(x), but why do you believe that (1/c)*h(x) must be a lower bound?
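A quick numeric illustration of the counterexample (a throwaway Python snippet using the functions from the answer above): the ratio f(x)/h(x) tends to 0, so no constant c > 0 makes c·h(x) a lower bound for f(x), i.e. f ∉ Θ(h) even though f ∈ O(g) and g ∈ Θ(h).
# f(x) = x and h(x) = x^2 from the counterexample above.
f = lambda x: x
h = lambda x: x ** 2
for x in (10, 1000, 100000):
    print(x, f(x) / h(x))  # the ratio shrinks toward 0, so no positive constant lower-bounds f by h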

How to prove that a compatible heuristic is an admissible heuristic in the A* search algorithm

A compatible heuristic h is one that satisfies the condition below:
h(n) <= c(n,a,n') + h(n')
An admissible heuristic h is one that satisfies the condition below:
0 <= h(n) <= h*(n)
where h*(n) is the real distance from node n to the goal.
If a heuristic is compatible, how can I prove that it is admissible?
Thanks a lot.
Assume that h(n) is not admissible, so there exists some vertex n such that h(n) > h*(n).
But because of the compatibility of h(n), we know that for all n' it holds that h(n) <= c(n,a,n') + h(n').
Now combine these two predicates when n' is the goal vertex G to deduce a contradiction, thus proving the required lemma by reductio ad absurdum.
If you add an additional condition on h (namely that h(goal) = 0), you can prove it by induction over the minimum cost path from n to the goal state.
For the base case, the minimum cost path has length 0, i.e. n = goal. Then h(goal) = 0 = h*(goal).
For the general case, let n be a node and let n' be the next node on a minimal path from n to goal. Then h*(n) = c(n, n') + h*(n') >= c(n, n') + h(n') >= h(n) using the induction hypothesis to get the first inequality and the definition of compatibility for the second.
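In display form, the inductive step chains the two facts like this (just restating the inequalities above):
h*(n) = c(n, n') + h*(n')    (n' is the next node on a minimal path from n to the goal)
      ≥ c(n, n') + h(n')     (induction hypothesis: h(n') ≤ h*(n'))
      ≥ h(n)                 (compatibility of h)
so h(n) ≤ h*(n), which is exactly admissibility.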

Calculating c and n sub naught in Big-O Analysis

The great people at MyCodeSchool.com have this introductory video on YouTube, covering the basics of Big-O, Theta, and Omega notation.
The following definition of Big-O notation is provided:
O(g(n)) := { f(n) : f(n) ≤ c·g(n), for all n ≥ n0 }
My casual understanding of the equation is as follows:
Given a function f(), which takes as its input n, there exists another function g(), whose output is always greater than or equal to the output of f()--given two conditions:
g() is multiplied by some constant c
n is greater than some lower bound n0
Is my understanding correct?
Furthermore, the following specific example was provided to illustrate Big-O:
Given:
f(n) = 5n² + 2n + 1
Because all of the following are true:
2n² ≥ 2n, for all n ≥ 1
1·n² ≥ 1, for all n ≥ 1
It follows that:
c = 5 + 2 + 1 = 8
Therefore, the video concludes, f(n) ≤ 8n² for all n ≥ 1, and g(n) = 8n²
I think maybe the video concluded that n0 must be 1, because 1 is the only positive root of the equality 8n² = 5n² + 2n + 1 (negative one-third is also a root, but n is limited to whole numbers, so no dice there).
Is this the standard way of computing n0 for Big-O notation?
Take the largest powered factor in your polynomial
Multiply it by the sum of the coefficients in your time function
Set their product equal to your time function
Solve for zero
Reject all roots that are not in the set of whole numbers
Any help would be greatly appreciated. Thanks in advance.
Your understanding is mostly correct, but from your wording - "I think maybe the video concluded that n0 must be 1" - I have to point out that it is also valid to take n0 to be 2, or 3, etc. In fact, any number greater than 1 will satisfy the required condition; there are actually infinitely many choices for the pair (c, n0)!
The important point to note is that the values of the constants c and n0 do not really matter; all we care about is the existence of a pair of constants (c, n0).
The Basics
Big-O notation describes the asymptotic behavior of a given function f; essentially, it describes an upper bound of f when its input value is sufficiently large.
Formally, we say that f is big-O of another function g, i.e. f(x) = O(g(x)), if there exists a positive constant c and a constant n0 such that the following inequality holds:
f(n) ≤ c g(n), for all n ≥ n0
Note that the inequality captures the idea of an upper bound: f is upper-bounded by a positive multiple of g. Moreover, the "for all" condition ensures that the upper bound holds once the input n is sufficiently large (i.e. at least n0).
How to Pick (c, n0)?
In order to prove f(x) = O(g(x)) for given functions f, g, all we need is to pick any pair of (c, n0) such that the inequality holds, and then we are done!
There is no standard way of finding (c, n0), just use whatever mathematical tools you find helpful. For example, you may fix n0, and then find c by using Calculus to compute the maximum value of f(x) / g(x) in the interval [n0, +∞).
In your case, it appears that you are trying to prove that a polynomial of degree d is big-O of x^d; the proof of the following lemma gives one way to pick (c, n0):
Lemma
If f is a polynomial of degree d, then f(x) = O(x^d).
Proof: We have f(x) = a_d·x^d + a_{d-1}·x^{d-1} + ... + a_1·x + a_0. For each coefficient a_i, we have a_i ≤ |a_i| (the absolute value of a_i).
Take c = |a_d| + |a_{d-1}| + ... + |a_1| + |a_0| and n0 = 1; then we have:
f(x) = a_d·x^d + a_{d-1}·x^{d-1} + ... + a_1·x + a_0
≤ |a_d|·x^d + |a_{d-1}|·x^{d-1} + ... + |a_1|·x + |a_0|
≤ (|a_d| + |a_{d-1}| + ... + |a_1| + |a_0|)·x^d
= c·x^d, for all x ≥ 1
Therefore f(x) = O(x^d).
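As a quick sanity check of the lemma on the polynomial from the question (a throwaway Python snippet; c = 8 and n0 = 1 as above):
# f(n) = 5n^2 + 2n + 1 should satisfy f(n) <= 8 * n^2 for every n >= 1.
f = lambda n: 5 * n ** 2 + 2 * n + 1
c, n0 = 8, 1
assert all(f(n) <= c * n ** 2 for n in range(n0, 10001))
print("f(n) <= 8*n^2 holds for all tested n in [1, 10000]")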

Exponential growth in big-o notation

I have a problem understanding the following question. It says:
Prove that exponential functions have different orders of growth for different
values of base.
It looks obvious to me: for example, consider a^n. If a = 3, its growth rate will be larger than when a = 2. Is that really what the question wants? How can I do a formal proof of that?
Thanks in advance for your help.
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let b > a > 1 without loss of generality, and suppose b^n ∈ O(a^n). This implies that there are positive constants c and k such that 0 ≤ b^n ≤ c·a^n for all n ≥ k, which is impossible:
b^n ≤ c·a^n for all n ≥ k implies (b/a)^n ≤ c for all n ≥ k,
which is in contradiction with lim (b/a)^n = +inf, which holds because b/a > 1.
So if b > a > 1, then b^n ∉ O(a^n), but a^n ∈ O(b^n), so O(a^n) ⊊ O(b^n): exponentials with different bases have different orders of growth.
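A quick numeric illustration (a throwaway Python snippet with the concrete bases a = 2 and b = 3): the ratio (b/a)^n grows without bound, so no constant c can satisfy b^n ≤ c·a^n for all large n, i.e. 3^n ∉ O(2^n).
# With b > a > 1 the ratio (b/a)^n is unbounded, so b^n cannot be O(a^n).
a, b = 2, 3
for n in (10, 50, 100):
    print(n, (b / a) ** n)  # grows without bound as n increases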
