Are the following functions in O(x³)?

I'm trying to decide whether the following functions are O(x³), assuming k = 1. I have what I think are the right answers, but I'm confused about a few, so I figured someone here could look them over. If one is wrong, could you please explain why? From what I understand, if a function is above x³ then it can be referred to as O(x³), and if it's below it can't be? I may be viewing this wrong or have the concept backwards.
Thanks
a. 3^x = TRUE
b. 10x + 42 = FALSE
c. 2 + 4x + 8x² = FALSE
d. (log x + 1)⋅(2x + 3) = TRUE
e. 2^x + x! = TRUE

What is meant by Big-O?
A function f is in O(g) if you can find a constant c and a threshold x₀ such that f(x) ≤ c⋅g(x) for all x > x₀.
The results
3^x is not in O(x³), due to lim_{x→∞} 3^x/x³ = ∞.
10x + 42 is in O(x³); c = 52 holds for all x ≥ 1.
2 + 4x + 8x² is in O(x³); c = 14 holds for all x ≥ 1.
(log x + 1)⋅(2x + 3) is in O(x³); c = 7 holds for all x ≥ 1.
2^x + x! is not in O(x³), due to lim_{x→∞} (2^x + x!)/x³ = ∞.
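These constants are easy to sanity-check numerically. Below is a minimal Python sketch (my addition; the sampled range is an arbitrary choice, and passing it is evidence for, not a proof of, the bounds):

import math

# Each entry: (name, f, claimed c) for the claim f(x) <= c * x**3 for x >= 1.
claims = [
    ("10x + 42",            lambda x: 10 * x + 42,                      52),
    ("2 + 4x + 8x^2",       lambda x: 2 + 4 * x + 8 * x ** 2,           14),
    ("(log x + 1)(2x + 3)", lambda x: (math.log(x) + 1) * (2 * x + 3),   7),
]

for name, f, c in claims:
    assert all(f(x) <= c * x ** 3 for x in range(1, 10000)), name
print("all claimed bounds hold on the sampled range")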

Think of O(f) as an upper bound. So when we write g = O(f) we mean g grows/runs at most as fast as f. It's like <= for running times. The correct answers are:
a. FALSE
b. TRUE
c. TRUE
d. TRUE
e. FALSE

Related

Algorithm to precisely compare two exponentiations for very large integers (order of 1 billion)

We want to compare a^b to c^d, and tell if the first is smaller, greater, or equal (where ^ denotes exponentiation).
Obviously, for very large numbers, we cannot explicitly compute these values.
The most common approach in this situation is to apply log on both sides and compare b * log(a) to d * log(c). The issue here is that logs are floating-point operations, and as such we cannot trust our answer with 100% confidence (there might be some values which are incredibly close, and because of floating-point error we get a wrong answer).
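For illustration (my sketch, not part of the original question), the naive approach looks like this; for exact ties such as 2^6 = 8^2 = 64, rounding in math.log makes the sign of the difference untrustworthy.

import math

# Naive comparison of a**b and c**d via logarithms. For equal or
# nearly equal values, floating-point rounding can flip the result.
def naive_less(a, b, c, d):
    return b * math.log(a) < d * math.log(c)

print(naive_less(2, 6, 8, 2))  # exact tie: either output is possible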
Is there an algorithm for solving this problem? I've been scouring the internet for this, but I can only find solutions that work in particular cases only (e.g., where one exponent is a multiple of the other), or that use floating point in some way (logarithms, division), etc.
This is sort of two questions in one:
Are they equal?
If not, which one is greater?
As Peter O. observes, it's easiest to do this in a language that provides an arbitrary-precision fraction type. I'll use Python 3.
Let's assume without loss of generality that a ≤ c (swap if necessary) and b is relatively prime to d (divide both by the greatest common divisor).
To get at the core of the question, I'm going to assume that a, c > 0 and b, d ≥ 0. Removing this assumption is tedious but not difficult.
Equality test
There are some easy cases where a = 1 or b = 0 or c = 1 or d = 0.
Separately, necessary conditions for a^b = c^d are
i. b ≥ d, since otherwise b < d together with a ≤ c would give a^b ≤ c^b < c^d (the easy cases above already rule out c = 1);
ii. a is a divisor of c, since by (i) a^b = c^d divides c^b = c^(b−d) ⋅ c^d, and a^b dividing c^b forces every prime factor of a to occur in c at least as often.
When these conditions hold, we can divide through by a^d to reduce the problem to testing whether a^(b−d) = (c/a)^d.
In Python 3:
def equal_powers(a, b, c, d):
    """Return whether a**b == c**d (a, c >= 1; b, d >= 0)."""
    while True:
        # Easy cases: a side equals 1 exactly when the base is 1
        # or the exponent is 0.
        lhs_is_one = a == 1 or b == 0
        rhs_is_one = c == 1 or d == 0
        if lhs_is_one or rhs_is_one:
            return lhs_is_one and rhs_is_one
        # Normalize so that a <= c.
        if a > c:
            a, b, c, d = c, d, a, b
        # Necessary condition (i): b >= d.
        if b < d:
            return False
        # Necessary condition (ii): a divides c.
        q, r = divmod(c, a)
        if r != 0:
            return False
        # Reduce: a**b == c**d  iff  a**(b-d) == (c/a)**d.
        b -= d
        c = q

def test_equal_powers():
    for a in range(1, 25):
        for b in range(25):
            for c in range(1, 25):
                for d in range(25):
                    assert equal_powers(a, b, c, d) == (a ** b == c ** d)

test_equal_powers()
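For instance (an illustrative pair of my own, not from the original answer):

print(equal_powers(2, 6, 8, 2))  # True:  2**6 == 8**2 == 64
print(equal_powers(2, 6, 8, 3))  # False: 64 != 512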
Inequality test
Once we've established that the two quantities are not equal, it's time to figure out which one is greater. (Without the equality test, the code here could run forever.)
If you're doing this for real, you should consult an actual reference on computing elementary functions. I'm just going to try to do the simplest thing that works.
Time for a calculus refresher. We have the Taylor series
−log x = (1−x) + (1−x)^2/2 + (1−x)^3/3 + (1−x)^4/4 + ...
To get a lower bound, truncate the series. To get an upper bound, truncate it but replace the final term (1−x)^n/n with (1−x)^n/n ⋅ (1/x); since 1/x = 1 + (1−x) + (1−x)^2 + ... (a geometric series, which converges because 0 < x < 1),
(1−x)^n/n ⋅ (1/x)
= (1−x)^n/n ⋅ (1 + (1−x) + (1−x)^2 + ...)
= (1−x)^n/n + (1−x)^(n+1)/n + (1−x)^(n+2)/n + ...
> (1−x)^n/n + (1−x)^(n+1)/(n+1) + (1−x)^(n+2)/(n+2) + ...,
which dominates the whole tail of the series.
To get a good convergence rate, we're going to want 0.5 ≤ x < 1, which we can achieve by dividing x by a power of two.
In Python, we'll represent a real number as an infinite generator of shrinking intervals that contain the true value. Once the intervals for b log a and d log c are disjoint, we can determine how they compare.
import fractions

def minus(x, y):
    # Interval subtraction: if x encloses X and y encloses Y,
    # then (x_lo - y_hi, x_hi - y_lo) encloses X - Y.
    while True:
        x_lo, x_hi = next(x)
        y_lo, y_hi = next(y)
        yield x_lo - y_hi, x_hi - y_lo

def times(b, x):
    # Scale an interval generator by a nonnegative constant b.
    for lo, hi in x:
        yield b * lo, b * hi

def restricted_log(a):
    # Yields shrinking intervals around log(a) for 1/2 <= a < 1,
    # using -log a = sum_{n >= 1} (1-a)^n / n.
    series = 0
    n = 0
    numerator = 1
    while True:
        n += 1
        numerator *= 1 - a
        series += fractions.Fraction(numerator, n)
        # series underestimates -log a; adding the next term divided
        # by a (the 1/x trick above) overestimates it.
        yield -(series + fractions.Fraction(numerator * (1 - a), (n + 1) * a)), -series

def log(a):
    # Halve a until it lands in [1/2, 1); then
    # log a = log(a / 2**n) - n * log(1/2).
    n = 0
    while a >= 1:
        a = fractions.Fraction(a, 2)
        n += 1
    return minus(restricted_log(a), times(n, restricted_log(fractions.Fraction(1, 2))))
def less_powers(a, b, c, d):
    # Compare b*log(a) against d*log(c), refining both interval
    # enclosures until they are disjoint.
    lhs = times(b, log(a))
    rhs = times(d, log(c))
    while True:
        lhs_lo, lhs_hi = next(lhs)
        rhs_lo, rhs_hi = next(rhs)
        if lhs_hi < rhs_lo:
            return True
        if rhs_hi < lhs_lo:
            return False

def test_less_powers():
    for a in range(1, 10):
        for b in range(10):
            for c in range(1, 10):
                for d in range(10):
                    if a ** b != c ** d:
                        assert less_powers(a, b, c, d) == (a ** b < c ** d)

test_less_powers()
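As a quick illustration (my own example values): 2^100 ≈ 1.27·10^30 exceeds 10^30, and the interval refinement settles this without computing either power exactly.

print(less_powers(2, 100, 10, 30))  # False: 2**100 > 10**30
print(less_powers(10, 30, 2, 100))  # True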

Linear Temporal Logic (LTL) questions

[] = always
O = next
! = negation
<> = eventually
I'm wondering: is []<> equivalent to just []?
I'm also having a hard time understanding how to distribute temporal operators, for example in:
[][] (a OR !b)
!<>(!a AND b)
[]([] a ==> <> b)
I'll use the following notations:
F = eventually
G = always
X = next
U = until
In my model-checking course, we defined LTL the following way:
LTL: p | φ ∧ ψ | ¬φ | Xφ | φ U ψ
With F (future) being syntactic sugar for:
Fφ = True U φ
and G (global) for:
Gφ = ¬F¬φ
With that, your question is:
Is it true that Gφ ≡ GFφ?
GFφ <=> G (True U φ)
Knowing that :
P ⊧ φ U ψ <=> there exists i ≥ 0 such that P_(≥ i) ⊧ ψ, and for all 0 ≤ j < i: P_(≥ j) ⊧ φ
From that, we can see that GFφ means: at every point, it must hold that φ will be verified at some later time i, and before that time (for j < i) only True needs to hold, which is trivial.
But Gφ means that φ must always hold, "from now to forever", not merely "from some i to forever".
G p indicates that at all times p holds. GF p indicates that at all times, eventually p will hold. So while the infinite trace pppppp... satisfies both of the specifications, an infinite trace of the form p(!p)(!p)p(!p)p... satisfies only GF p but not G p.
To be clear, both of these example traces contain infinitely many positions where p holds. But in the case of GF p, and only in this case, it is acceptable that there are positions in between where p does not hold.
So the short answer to the above question by counterexample is: no, those two specifications aren't the same.
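To make the counterexample concrete, here is a small Python sketch (entirely my own illustration, not from the answer above): an ultimately periodic trace is modelled as a finite `prefix` followed by a `loop` repeated forever, each Boolean saying whether p holds at that position.

def holds_G(prefix, loop):
    # G p: p must hold at every position of prefix . loop^omega.
    return all(prefix) and all(loop)

def holds_GF(prefix, loop):
    # GF p: p must hold infinitely often, i.e. somewhere in the loop.
    return any(loop)

print(holds_G([], [True]), holds_GF([], [True]))    # True True:  ppp...
print(holds_G([True], [False, True]),               # False True:
      holds_GF([True], [False, True]))              # p(!p)p(!p)p...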

SICP - Which functions converge to fixed points?

In the chapter 1 section on fixed points, the book says we can find fixed points of certain functions by applying the function repeatedly:
f(x), f(f(x)), f(f(f(x))), ...
What are those functions?
It doesn't work for y = 2y, but when I rewrite it as y = y/2 it works.
Does y need to get smaller every time? Or are there general attributes a function has to have so that fixed points can be found by this method?
What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, the iteration converges to a (unique) fixed point if the mapping (function) is a contraction. That means, for example, that the iteration fails for y = 2x (its fixed point 0 is repelling) but works for y = 0.999·x. In general, if f maps [a,b] to [a,b], it suffices that |f(x) − f(y)| ≤ c·|x − y| for some 0 ≤ c < 1 (for all x, y in [a, b]).
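A minimal sketch of the iteration in Python (my own illustration; SICP's version is in Scheme, and the tolerance and step limit here are arbitrary choices):

import math

def fixed_point(f, guess, tolerance=1e-7, max_steps=10000):
    # Iterate x, f(x), f(f(x)), ... until successive values are close.
    for _ in range(max_steps):
        nxt = f(guess)
        if abs(nxt - guess) < tolerance:
            return nxt
        guess = nxt
    raise ValueError("no convergence; f is probably not a contraction here")

print(fixed_point(math.cos, 1.0))         # ~0.739..., the fixed point of cos
print(fixed_point(lambda y: y / 2, 1.0))  # ~0, a contraction with c = 1/2
# fixed_point(lambda y: 2 * y, 1.0) diverges: y = 2y is not a contraction.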
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin, only 0 is.
Different functions have different fixed points, if they have any at all. You can find more on fixed points of functions at Wikipedia.

Reduction from Atm to A (of my choice), and from A to Atm

Many-one reducibility is not symmetric. I'm trying to prove this, but it isn't working out so well.
Given two languages A and B, where A is defined as
A = {w | |w| is even}, i.e. `w` has an even length,
and B = A_TM, where A_TM is undecidable but Turing-recognizable!
Given the following reduction:
f(w) = (P(x): {accept;}), epsilon    if |w| is even
f(w) = (P(x): {reject;}), epsilon    otherwise
(Please forgive me for not using LaTeX; I have no experience with it.)
As far as I can see, a reduction A <= B (from A to A_TM) is possible and works great.
However, I don't understand why B <= A is not possible.
Can you please clarify and explain?
Thanks
Ron
Assume for a moment that B <= A. Then there is a computable function f: Sigma* -> Sigma* such that:
f(w) is in A if w is in B
f(w) is not in A if w is not in B
Therefore, we can describe the following algorithm [Turing machine] M on input w:
1. w' <- f(w)
2. if |w'| is even return true
3. return false
It is easy to prove that M accepts w if and only if w is in B [left as an exercise to the reader], thus L(M) = B.
Also, M halts on every input w [by its construction]. Thus L(M) is decidable.
But then L(M) = B is decidable, and that is a contradiction, because B = A_TM is undecidable!
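As a sketch in Python (my illustration; f stands for the assumed many-one reduction from B to A, which is exactly the thing that cannot exist):

def M(w, f):
    # Would-be decider for B, given a reduction f from B to A.
    w2 = f(w)                # step 1: w' <- f(w)
    return len(w2) % 2 == 0  # steps 2-3: |w'| even iff w' is in A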

Is Prolog the best language to solve this kind of problem?

I have this problem containing some inequations and a requirement to minimize a value. After doing some research on the Internet, I came to the conclusion that using Prolog might be the easiest way to solve it. However, I have never used Prolog before, and I would hate to waste my time learning it just to discover that it is not the right tool for this job.
Please, if you know Prolog, take a look at this problem and tell me if Prolog is the right one. Or, if you know of some other language that is really suited for this.
a + b + c >= 100
d + e + f >= 50
g + h >= 30
if (8b + 2e + 7h > 620) then y = 0.8 else y = 1.0
if (d > 35) then x = 0.9 else x = 1.0
5xa + 8yb + 5c + 3xd + 2ye + 2f + 6xg + 7yh = w.
I need to find the values for a, b, c, d, e, f, g and h that minimize w.
Please note that the above is only an example. In the real program, I would use up to 10000 variables and up to 20 if..then clauses. This rules out plain linear programming as an alternative technique, because enumerating all combinations of the if..then branches would mean solving on the order of 2^20 LP problems, which would take a prohibitive amount of RAM and time.
I'm not really asking for code, although I'd be grateful for some hint how to tackle this if Prolog is really good for it. Thanks.
You could have a look at Constraint Logic Programming, either CLP(R), CLP(Q) or CLP(FD).
Your problem can be encoded into CLP(R) as follows (I think):
:- use_module(library(clpr)).

test(sol([A,B,C,D,E,F,G,H], [X,Y,W])) :-
    { A >= 0, B >= 0, C >= 0, D >= 0, E >= 0, F >= 0, G >= 0, H >= 0 },
    { A + B + C >= 100 },
    { D + E + F >= 50 },
    { G + H >= 30 },
    { 5*X*A + 8*Y*B + 5*C + 3*X*D + 2*Y*E + 2*F + 6*X*G + 7*Y*H = W },
    ( ({8*B + 2*E + 7*H > 620}, {Y = 0.8}) ; ({8*B + 2*E + 7*H =< 620}, {Y = 1.0}) ),
    ( ({D > 35}, {X = 0.9}) ; ({D =< 35}, {X = 1.0}) ),
    minimize(W).
Using SICStus Prolog, I get the following answer:
| ?- test(A).
A = sol([_A,0.0,_B,0.0,_C,_D,30.0,0.0],[1.0,1.0,780.0]),
{_A=100.0-_B},
{_C=50.0-_D},
{_B>=0.0},
{_D>=0.0} ? ;
no
You can solve this using linear programming (LP), but the problem needs some modification before you can chuck it into an LP solver. An LP problem basically involves maximising or minimising a function given some constraints.
First, split the problem into subproblems, one per combination of outcomes of the two if conditions (LP does not support such conditions directly). With two conditions there are four combinations; here are two of them as examples.
Constraints:
a + b + c >= 100
d + e + f >= 50
g + h >= 30
8b + 2e + 7h > 620 (forces y = 0.8)
d <= 35 (forces x = 1.0)
Linear function:
5 * 1.0a + 8 * 0.8b + 5c + 3 * 1.0d + 2 * 0.8e + 2f + 6 * 1.0g + 7 * 0.8h = w
And
Constraints:
a + b + c >= 100
d + e + f >= 50
g + h >= 30
8b + 2e + 7h <= 620 (forces y = 1.0)
d > 35 (forces x = 0.9)
Linear function:
5 * 0.9a + 8 * 1.0b + 5c + 3 * 0.9d + 2 * 1.0e + 2f + 6 * 0.9g + 7 * 1.0h = w
After you run each subproblem through the LP solver, the solution will come out with values for a, b, c, d, e, f, g, h and w. Pick the smallest w among the feasible subproblems and the corresponding values of a, b, c, d, e, f, g, h.
How does this work?
Each if condition merely selects the values of x and y. Once the outcome of every condition is fixed, x and y are constants, the objective becomes linear, and the conditions themselves become ordinary linear constraints, so each combination of outcomes is a plain LP problem. Solve one LP per combination (four here: the two above plus the both-true and both-false cases; 2^k for k conditions in general), and the smallest feasible w is the minimised value of w.
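For concreteness, here is a minimal Python sketch of the first subproblem using scipy.optimize.linprog (my illustration; scipy is an assumption of mine, and the strict "> 620" is approximated with a small epsilon, which a more careful formulation would avoid):

from scipy.optimize import linprog

# Branch: y = 0.8 (8b + 2e + 7h > 620) and x = 1.0 (d <= 35).
x, y = 1.0, 0.8
cost = [5 * x, 8 * y, 5, 3 * x, 2 * y, 2, 6 * x, 7 * y]  # a b c d e f g h

# linprog wants A_ub @ v <= b_ub, so ">=" rows are negated.
eps = 1e-6
A_ub = [
    [-1, -1, -1,  0,  0,  0,  0,  0],  # a + b + c >= 100
    [ 0,  0,  0, -1, -1, -1,  0,  0],  # d + e + f >= 50
    [ 0,  0,  0,  0,  0,  0, -1, -1],  # g + h >= 30
    [ 0, -8,  0,  0, -2,  0,  0, -7],  # 8b + 2e + 7h >= 620 + eps
    [ 0,  0,  0,  1,  0,  0,  0,  0],  # d <= 35
]
b_ub = [-100, -50, -30, -(620 + eps), 35]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 8)
print(res.fun, res.x)  # minimal w for this branch, and a minimizer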
For an LP solver, go to the linear programming Wikipedia article as linked above. There are tools like Excel and some others which are easier to use, but if you want a program, then there are numerical languages which are good at this, or general purpose languages like C that can do this with a library like glpk.
Hope this helps!
I haven't worked with similar problems before, so I can't give you any direct suggestions. However, I would not use Prolog for it. Prolog is really good for dealing with symbolic problems (a classic example would be the Einstein puzzle), but its math support is very awkward; it feels like math was tacked on as an afterthought.
