I do not want a solution, just some guidance.
I think 2^sqrt(lg (n^2)) = O(4^lg(n)).
However, I am lost as to how I can prove it. Is there a formula or property that will get me going in the right direction?
First, I would start by trying to simplify your running time. Is there a property that tells us how roots and powers interact when used together, or how powers inside logs can be simplified?
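If it helps to build intuition before doing the algebra, a quick numeric check like the one below (my own sketch, not a proof; the function names and sample values are mine) shows how the two sides grow relative to each other. The actual argument should still come from the log/exponent identities hinted at above.

```python
import math

# Quick numeric sanity check (not a proof): compare the growth of
# 2^sqrt(lg(n^2)) against 4^(lg n) for increasing n (lg = log base 2).
def f(n):
    return 2 ** math.sqrt(math.log2(n ** 2))

def g(n):
    return 4 ** math.log2(n)

for n in [2 ** k for k in (1, 5, 9, 13, 17)]:
    print(n, f(n), g(n), f(n) / g(n))  # watch whether the ratio shrinks
```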
I just finished the first module of the algorithms specialization course on Coursera.
There was an exam question that I could not quite understand. I passed that exam, so there's no point in retaking it.
Out of curiosity, I want to learn the principles behind this question.
The question was posed as follows:
Suppose that a randomized algorithm succeeds (e.g., correctly computes the minimum cut of a graph) with probability p (with 0 < p < 1). Let ϵ be a small positive number (less than 1).
How many independent times do you need to run the algorithm to ensure that, with probability at least 1−ϵ, at least one trial succeeds?
The options given were:
log(1−p) / log(ϵ)
log(p) / log(ϵ)
log(ϵ) / log(p)
log(ϵ) / log(1−p)
I made two attempts and both were wrong. My attempts were:
log(1−p) / log(ϵ)
log(ϵ) / log(1−p)
It's not so much that I want to know the right answer. I want to learn the principles behind this question and what it's asking for, so that I know how to answer similar questions in the future.
I posted this on the course forum, but nobody answered after a month, so I am trying it out here.
No need to post the answer directly. If you get me to the aha moment, I will mark your answer as correct.
Thanks.
How many independent times do you need to run the algorithm to ensure that, with probability at least 1−ϵ, at least one trial succeeds?
Let's rephrase it a bit:
What is the smallest number of independent trials such that the probability of all of them failing is less than or equal to ϵ?
By the definition of independent events, the probability of all of them occurring is the product of their individual probabilities. Since the probability of one trial failing is (1-p), the probability of n trials failing is (1-p)^n.
This gives us an inequality for n:
(1-p)^n <= ϵ
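To make that inequality concrete, here is a small sketch (my own illustration, not from the course; p and eps below are just example values) that finds the smallest n by multiplying out the failure probabilities directly:

```python
# Sketch: find the smallest number of independent runs n such that the
# probability that all n runs fail, (1 - p)**n, is at most eps.
def min_runs(p, eps):
    n = 0
    fail_all = 1.0  # probability that every run so far has failed
    while fail_all > eps:
        fail_all *= (1 - p)  # one more independent failing run
        n += 1
    return n

print(min_runs(p=0.1, eps=0.01))  # prints 44 for these example values
```

Comparing what this returns with each of the four candidate formulas, evaluated at the same p and ϵ, should get you to the aha moment.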
The question in the title is associated with the following problem: https://i.gyazo.com/07b7dde7efe1df0b7ae9550317851fda.png
A more detailed explanation of the problem can be provided, but I can't post more than two links, so if someone replies and asks for it, I can provide it!
To start, I understand that the whole question is based upon the idea of the tortoise and hare algorithm for cycle detection (I would link the Wikipedia page, but I don't have enough reputation). I also understand that the existence of a loop is proven by the tortoise and hare 'meeting up' with each other after leaving the first node. I also know that where they meet up for the second time in the second phase of the algorithm is indicative of exactly where the loop begins.
Unfortunately, I simply can't wrap my head around relating these facts to the question given, or how to create an algorithm for it.
Any help is greatly appreciated!
No idea how you're supposed to turn this into an algorithm, but it seems like simple modular arithmetic. T(x) = x mod 5 and H(x) = 2x mod 5. You're asked to solve for when T(x) = H(x). Since both expressions are taken modulo 5, you know they're equal when 2x − x = 5k, where k is some integer.
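In case it helps, here is a tiny sketch of that modular view (assuming a cycle of length 5, as in the model above; the code itself is mine, not part of the original problem):

```python
# Sketch of the modular view: tortoise at x mod 5, hare at 2x mod 5.
# They meet whenever x mod 5 == 2*x mod 5, i.e. whenever x is a multiple of 5.
def T(x):
    return x % 5

def H(x):
    return (2 * x) % 5

meetings = [x for x in range(1, 21) if T(x) == H(x)]
print(meetings)  # [5, 10, 15, 20] -> every multiple of 5
```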
I have developed a recursive formula for the knapsack problem on my own, without any knowledge of existing solutions. Please tell me whether it is right or wrong, and correct it. Thanks in advance.
B(S) = max (B (s-w(i)) + b(w(i)) )
for all i belonging to n;
Notation is as usual: S is the capacity and B is the answer to the knapsack problem.
I do not want to give you a straight answer, but rather to point out the flaws in your formula and let you figure out how to solve them.
Well, you must address the value somewhere, otherwise you simply lose information. If you choose to "take" the item (B(s-w(i))), what happens to the current item's value?
In addition, what is i? How do you change i over time?
When writing a recursive formula, you must also specify a stop clause (base case) for it.
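To illustrate what a stop clause means, here is a deliberately unrelated toy recursion (not the knapsack recurrence, just to show the structure):

```python
# Generic illustration of a "stop clause" (base case) in a recursion --
# intentionally NOT the knapsack formula, only the shape of a recursion.
def count_down_sum(s):
    if s <= 0:          # stop clause: without this the recursion never ends
        return 0
    return s + count_down_sum(s - 1)

print(count_down_sum(5))  # 15
```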
OK, so I have this puzzle called CuFrog, which consists of filling a 3x3x3 cube with a number in each position, but leaping over a position when going from one number to the next. For instance, considering a flattened cube, the valid position to the right of (1,1) on side 1 would be (3,1) on side 1.
So I'm using constraint programming in Prolog to do this. I've given each variable the domain 1 to 54, stated that they must all be different, and required that, for each position, one of the positions in the set right/left/down/up must hold that position's value + 1.
Also, I've given an entry point to the puzzle, which means I have already placed the number 1 in the first position.
The thing is, SICStus is not finding an answer when I label the variables. :( It seems I must be missing a restriction somewhere, or maybe I'm doing something wrong. Can anyone help?
Thanks.
So you say that CLP(FD) doesn't find a solution. Do you mean it terminates with "no", or do you mean it doesn't terminate?
The problem looks like a Hamiltonian path problem. It could be that the search needs exponential time and simply doesn't terminate in practical time.
In this particular case, adding restrictions, such as symmetry-breaking heuristics, could in fact reduce the search time! For example, from your starting point you could fix the search in 2 directions; the other directions can be derived later.
So if the answer is "no", this means there are too many restrictions. If the answer is that it doesn't terminate, this means there are not enough restrictions, or the problem is impractical to solve this way.
Despite all the brute force you put into searching for a path, it might later turn out that a solution is systematic. Or you might get the idea by yourself.
Bye
First, yes, it's my homework and I find it hard, so I'd really appreciate some guidance.
I need to prove that for denominations 1, x, x^2, ..., x^n, where x >= 1, the greedy algorithm for the coin change problem always works.
That is, we always make the amount of money we need with the minimal number of coins when we always pick the largest coin that is not larger than the remaining amount.
Thank you.
As this is your homework, I will not provide a complete answer, but will rather try to guide you:
First, as usually happens with problems of this type, try to prove for yourself that the statement is true for the first few natural numbers. Try to summarize what you use to prove it for them. This will usually give you some guidance toward the correct approach.
I would use induction for this one.
Another option that might help you: represent all the numbers in the positional numeral system with base x. This should make it clearer why the statement is true.
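If you want to see that connection concretely before proving it, here is a small experiment (my own sketch; the amount, x, and n below are arbitrary example values) that compares what the greedy algorithm picks with the base-x digits of the amount:

```python
# Sketch: for denominations 1, x, x^2, ..., x^n, compare what the greedy
# algorithm picks with the base-x digits of the amount. Example values only.
def greedy_counts(amount, x, n):
    counts = []
    for k in range(n, -1, -1):           # largest denomination first
        coin = x ** k
        counts.append(amount // coin)    # take as many of this coin as fit
        amount %= coin
    return counts                        # counts[i] is for denomination x^(n-i)

def base_x_digits(amount, x, n):
    digits = []
    for k in range(n, -1, -1):
        digits.append(amount // x ** k)  # digit for place value x^k
        amount %= x ** k
    return digits

amount, x, n = 2019, 5, 4                # denominations 1, 5, 25, 125, 625
print(greedy_counts(amount, x, n))       # [3, 1, 0, 3, 4]
print(base_x_digits(amount, x, n))       # [3, 1, 0, 3, 4] -- the same digits
```

The two outputs coincide, which is essentially the observation the base-x hint is pointing you toward.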
Hope this helps you.