If A and B are two events and P(A|B) = P(B|A), I want to know how A and B are related to each other, i.e. are they
1) exclusive events, or
2) independent events, or
3) exhaustive events, or
4) is it that P(A) = P(B)?
Please let me know the answer with the corresponding justification.
Correct me if I am wrong anywhere.
Bayes' Theorem states that:
P(A|B) = P(B|A) x P(A) / P(B)
So if P(A|B) = P(B|A) and this common value is non-zero, dividing both sides by it leaves 1 = P(A) / P(B), i.e.:
P(A) = P(B)
which is option 4.
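A quick sanity check with a concrete example of my own (not part of the original answer): take a fair six-sided die with A = {1, 2, 3} and B = {2, 3, 4}. Then
P(A|B) = P(A ∩ B) / P(B) = (1/3) / (1/2) = 2/3 = P(B|A), and P(A) = P(B) = 1/2,
yet A and B are neither mutually exclusive (A ∩ B = {2, 3}), nor independent (P(A ∩ B) = 1/3 ≠ 1/4 = P(A)·P(B)), nor exhaustive (A ∪ B ≠ {1, ..., 6}). So only option 4 is forced by the assumption.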
Does anybody know where to find a Prolog algorithm for computing the probability of a disjunction of N independent events? For N = 2, I know that P(E1 OR E2) = P(E1) + P(E2) - P(E1) * P(E2), so one could do:
prob_disjunct(E1, E2, P) :- P is E1 + E2 - E1*E2.
But how can this predicate be generalised to N events when the input is a list? Maybe there is a package which does this?
Kind regards, JCR
The recursive formula from Robert Dodier's answer directly translates to
p_or([], 0).
p_or([P|Ps], Or) :-
    p_or(Ps, Or1),
    Or is P + Or1*(1-P).
Although this works fine, e.g.
?- p_or([0.5,0.3,0.7,0.1],P).
P = 0.9055
hardcore Prolog programmers can't help noticing that the definition isn't tail-recursive. This would really only be a problem when you are processing very long lists, but since the order of list elements doesn't matter, it is easy to turn things around. This is a standard technique, using an auxiliary predicate and an "accumulator pair" of arguments:
p_or(Ps, Or) :-
    p_or(Ps, 0, Or).

p_or([], Or, Or).
p_or([P|Ps], Or0, Or) :-
    Or1 is P + Or0*(1-P),
    p_or(Ps, Or1, Or).   % tail-recursive call
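As a quick check (my query, reusing the example list from above; the printed value may differ in the last digits because the accumulator changes the order of the floating-point operations):
?- p_or([0.5,0.3,0.7,0.1], P).
P = 0.9055.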
I don't know anything about Prolog, but anyway it's convenient to write the probability of a disjunction of a number of independent items p_m = Pr(S_1 or S_2 or S_3 or ... or S_m) recursively as
p_m = Pr(S_m) + p_{m-1} (1 - Pr(S_m))
You can prove this by just peeling off the last item -- look at Pr((S_1 or ... or S_{m - 1}) or S_m) and just write that in terms of the usual formula, writing Pr(A or B) = Pr(A) + Pr(B) - Pr(A) Pr(B) = Pr(B) + Pr(A) (1 - Pr(B)), for A and B independent.
The formula above is item C.3.10 in my dissertation: http://riso.sourceforge.net/docs/dodier-dissertation.pdf It is a simple result, and I suppose it must be an exercise in some textbooks, although I don't remember seeing it.
For any event E I'll write E' for the complementary event (ie E' occurs iff E doesn't).
Then we have:
P(E') = 1 - P(E)
(A union B)' = A' inter B'
A and B are independent iff A' and B' are independent
so for independent E1..En
P( E1 union .. union En ) = 1 - P( E1' inter .. inter En')
= 1 - product{ 1 <= i <= n | 1 - P(E[i]) }
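For completeness, here is a sketch of that complement-product formula in Prolog (my code, not from the answer above; it assumes the events are independent and uses foldl/4 from SWI-Prolog's library(apply)):
% p_or_complement(+Probs, -Or): Or = 1 - prod_i (1 - P_i)
p_or_complement(Ps, Or) :-
    foldl(times_complement, Ps, 1, NoneHappens),
    Or is 1 - NoneHappens.

times_complement(P, Acc0, Acc) :-
    Acc is Acc0 * (1 - P).
It agrees with p_or/2 defined earlier, e.g. p_or_complement([0.5,0.3,0.7,0.1], P) also yields P = 0.9055.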
I'm starting to learn Prolog and I want a program that, given an integer P, gives two integers A and B such that P = A² + B². If there are no values of A and B that satisfy this equation, it should fail (return false).
For example: if P = 5, it should give A = 1 and B = 2 (or A = 2 and B = 1) because 1² + 2² = 5.
I was thinking this should work:
giveSum(P, A, B) :- integer(A), integer(B), integer(P), P is A*A + B*B.
with the query:
giveSum(5, A, B).
However, it does not. What should I do? I'm very new to Prolog, so I'm still making a lot of mistakes.
Thanks in advance!
integer/1 is a non-monotonic predicate. It is not a relation that allows the reasoning you expect to apply in this case. To exemplify this:
?- integer(I).
false.
No integer exists, yes? Colour me surprised, to say the least!
Instead of such non-relational constructs, use your Prolog system's CLP(FD) constraints to reason about integers.
For example:
?- 5 #= A*A + B*B.
A in -2..-1\/1..2,
A^2#=_G1025,
_G1025 in 1..4,
_G1025+_G1052#=5,
_G1052 in 1..4,
B^2#=_G406,
B in -2..-1\/1..2
And for concrete solutions:
?- 5 #= A*A + B*B, label([A,B]).
A = -2,
B = -1 ;
A = -2,
B = 1 ;
A = -1,
B = -2 ;
etc.
CLP(FD) constraints are completely pure relations that can be used in the way you expect. See clpfd for more information.
Other things I noticed:
use_underscores_for_readability_as_is_the_convention_in_prolog instead of MixingTheCasesToMakePredicatesHardToRead.
use declarative names, avoid imperatives. For example, why call it give_sum? This predicate also makes perfect sense if the sum is already given. So, what about sum_of_squares/3, for example?
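A minimal sketch of that renaming (my code, assuming a CLP(FD)-capable system such as SWI-Prolog):
:- use_module(library(clpfd)).

% sum_of_squares(?Sum, ?A, ?B): Sum is the sum of the squares of A and B.
sum_of_squares(Sum, A, B) :-
    Sum #= A*A + B*B.
It can be used in any direction; for example, sum_of_squares(5, A, B), label([A,B]) enumerates the same solutions shown above.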
For efficiency's sake, Prolog implementers chose, many many years ago, some compromises. Now, there is a chance your Prolog implements advanced integer arithmetic, like CLP(FD) does. If this is the case, mat's answer is perfect. But some Prologs (maybe a naive ISO-compliant processor) could complain about missing label/1 and (#=)/2. So, here is a traditional Prolog solution; the technique is called generate and test:
giveSum(P, A, B) :-
    ( integer(P) -> between(1,P,A), between(1,P,B) ; integer(A), integer(B) ),
    P is A*A + B*B.
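For example (my query; the exact formatting depends on your toplevel, but these are the solutions):
?- giveSum(5, A, B).
A = 1, B = 2 ;
A = 2, B = 1 ;
false.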
between/3 is not an ISO built-in, but it's rather easier than (#=)/2 and label/1 to write yourself :)
Anyway, please follow mat's advice and avoid 'imperative' naming. A description of the relation is often better, because Prolog is just that: a relational language.
I have an exam coming up and I'm going through past papers to help my understanding.
I came across the following past paper question:
Consider the following queries and answers. Some answers coincide with
what SWI-Prolog would infer whereas others are erroneous. Indicate which
answers are genuine and which ones are fake (no explanation of your answer
is required).
(i) |?- [A, B, C] ins 0 .. 2, A #= B + C.
A = 0..2 B = 0..2 C = 0..2
(ii) |?- A in 0 .. 3, A * A #= A.
A = 0..2
(iii) |?- [A, B] ins -1 .. 1, A #= B.
A = 1 B = 1
(iv) |?- [A, B] ins 0 .. 3, A #= B + 1.
A = 1..3 B = 1..2
I'm struggling to see how each one is either true or false. Would someone be able to explain to me how to figure these out, please?
Thank you, really appreciate the help.
The key principle for deciding which answers are admissible and which are not is to look whether the residual program is declaratively equivalent to the original query. If the residual constraints admit any solution that the original query does not, or the other way around, then the answer is fake (or you have found a mistake in the CLP(FD) solver). If the shown answer is not even syntactically valid, then the answer is definitely fake.
Let's do it:
(i) |?- [A, B, C] ins 0 .. 2, A #= B + C.
suggested answer: A = 0..2 B = 0..2 C = 0..2
WRONG! The original query constrains the variables to integers, but this answer is not even a syntactically valid Prolog program.
(ii) |?- A in 0 .. 3, A * A #= A.
suggested answer: A = 0..2
WRONG! The original query constrains A to integers, but according to this residual program, A = 0..2 is a valid solution. The term ..(0, 2) is not an integer.
(iii) |?- [A, B] ins -1 .. 1, A #= B.
suggested answer: A = 1 B = 1
WRONG! Not syntactically valid.
(iv) |?- [A, B] ins 0 .. 3, A #= B + 1.
suggested answer: A = 1..3 B = 1..2
WRONG! Not syntactically valid.
Note that even if all shown answers were syntactically valid and =/2 were replaced by in/2 in the residual goals of (i), (ii) and (iv), these answers would still all be wrong, because in each case you can find solutions that are admitted by either the original query or the residual goals, but not by both. I leave solving these cases as an exercise for you; for example, suppose the respective answers are:
A in 0..2, B in 0..2, C in 0..2.
A in 0..2.
A = 1, B = 1.
A in 1..3, B in 1..2.
and find a witness for each case to show that the residual goals are semantically different from the respective original query.
For example, in case (i), A = B = C = 2 would be a valid solution according to the residual constraints, but obviously the original constraints exclude this solution, because 2 #= 2 + 2 does not hold!
A variable is always restricted to get a value contained in its domain, and arithmetic constraints only reduce the domains of the involved variables.
So, try to 'label' all variables, that is, assign values from the reported domains. Of course, if the arithmetic relation is not satisfied, you can say the answer is faked. Take (ii): does it hold for A = 0? What about A = 2?
This 'test' of course doesn't suffice to answer all the questions, since some reported domains are narrower than the constraints justify. For instance, take (iii): can you see any reason that excludes -1 or 0? If you cannot, you should mark the answer as faked.
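For instance, to check (ii) concretely in SWI-Prolog (assuming library(clpfd) is loaded), labelling shows which values really remain:
?- A in 0..3, A * A #= A, label([A]).
A = 0 ;
A = 1.
Since 2 lies inside the reported domain 0..2 but 2 * 2 #= 2 does not hold, the suggested answer cannot be genuine.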
I'm trying to decide whether the following functions are or can be O(x³), assuming k = 1. I have what I think are the right answers, but I'm confused on a few, so I figured someone on here could look over them. If one is wrong, could you please explain why? From what I understand, if it is above x³ then it can be referred to as O(x³), and if it's below it can't be? I think I may be viewing it wrong or have the concept of this backwards.
Thanks
a. 3^x = TRUE
b. 10x + 42 = FALSE
c. 2 + 4x + 8x² = FALSE
d. (log x + 1)⋅(2x + 3) = TRUE
e. 2^x + x! = TRUE
What is meant by Big-O?
A function f is in O(g) if you can find a constant c such that f(x) ≤ c·g(x) for all x > x₀.
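In symbols, and specialized to this exercise (my restatement, reading the question's k = 1 as the threshold x₀): f(x) is in O(x³) exactly when there is a constant c with
|f(x)| ≤ c·x³   for all x > 1,
so each TRUE case below comes with such a c, and each FALSE case is one where no constant c can work.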
The results
3^x is not in O(x³), since 3^x/x³ → ∞ as x → ∞.
10x + 42 is in O(x³); the bound holds with c = 52 for all x ≥ 1.
2 + 4x + 8x² is in O(x³); the bound holds with c = 14 for all x ≥ 1.
(log x + 1)(2x + 3) is in O(x³); the bound holds with c = 7 for all x ≥ 1.
2^x + x! is not in O(x³), since (2^x + x!)/x³ → ∞ as x → ∞.
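For instance, the constant c = 14 in case (c) comes from bounding each term by x³ (valid because 1 ≤ x ≤ x² ≤ x³ when x ≥ 1):
2 + 4x + 8x² ≤ 2x³ + 4x³ + 8x³ = 14x³   for all x ≥ 1.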
Think of O(f) as an upper bound. So when we write g = O(f) we mean g grows/runs at most as fast as f. It's like <= for running times. The correct answers are:
a. FALSE
b. TRUE
c. TRUE
d. TRUE
e. FALSE
In chapter 1 on fixed points, the book says we can find fixed points of certain functions using
f(x) = f(f(x)) = f(f(f(x))) ....
What are those functions?
It doesn't work for y = 2y, but when I rewrite it as y = y/2 it works.
Does y need to get smaller every time? Or are there any general attributes that a function has to have for fixed points to be found by that method?
What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, this iteration converges to a (unique) fixed point whenever the mapping (function) is a contraction. That means, for example, that the method fails for y = 2x (its only fixed point, 0, cannot be reached by iteration unless you start there), whereas it works for y = 0.999·x. In general, if f maps [a, b] to [a, b], then |f(x) - f(y)| should be at most c·|x - y| for some 0 ≤ c < 1 (for all x, y in [a, b]).
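To see the two examples concretely (my arithmetic, not part of the original answer):
|2x - 2y| = 2·|x - y|, so c = 2 ≥ 1: not a contraction, and iterating x -> 2x moves away from the fixed point 0.
|0.999·x - 0.999·y| = 0.999·|x - y|, so c = 0.999 < 1: a contraction, and iterating x -> 0.999·x converges to 0.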
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin, only 0 is.
Different functions have different fixed points, if they have any at all. You can find more on fixed points of functions at Wikipedia.
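Since the rest of this page uses Prolog, here is a small sketch of the iterative method in that language (my code; the predicate names and the 1.0e-9 tolerance are arbitrary choices, not from the book):
% fixed_point(+F, +Guess, -Fix): repeatedly apply F until two successive
% values differ by less than a small tolerance, then return the last value.
fixed_point(F, Guess, Fix) :-
    call(F, Guess, Next),
    (   abs(Next - Guess) < 1.0e-9
    ->  Fix = Next
    ;   fixed_point(F, Next, Fix)
    ).

% The two mappings from the question:
double(X, Y) :- Y is 2*X.    % y = 2y: not a contraction, iteration diverges
halve(X, Y)  :- Y is X/2.    % y = y/2: a contraction, iteration converges to 0
So fixed_point(halve, 1.0, X) terminates with X very close to the fixed point 0, whereas fixed_point(double, 1.0, X) never converges (the values just keep doubling), which is exactly what the contraction condition above predicts.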