Finding the most frequent element [closed] - algorithm

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
There are several colors of balls. For one specific color, more than half of the balls (> n/2) have that color. How can I find this color in only O(n) running time?

You can use the Boyer-Moore majority vote algorithm.
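A minimal sketch of the majority vote (the function name is mine): keep one candidate and a counter; a matching element increments the counter, a mismatch decrements it, and a zero count adopts a new candidate. The > n/2 guarantee from the question is exactly what makes the surviving candidate correct.

```python
def majority_color(balls):
    """Boyer-Moore majority vote: one pass, O(n) time, O(1) extra space.

    Assumes some color occurs in more than half the positions,
    as the question guarantees."""
    candidate, count = None, 0
    for color in balls:
        if count == 0:
            candidate = color       # adopt a new candidate
        count += 1 if color == candidate else -1
    return candidate
```

If the majority guarantee did not hold, a second verification pass (count the candidate's occurrences) would be needed before trusting the result.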

Find the color of each ball and tally it up. This isn't a sorting problem at all if you only want the most frequent color; just count the number of balls of each color. You could use a hash table, where the key is the color, and increment the count for each ball. Also keep track of the colors you have seen.
Edit:
After reading this again, I realized that it did not answer the question.
A) You could do the tracking at the end by iterating through every available color (assuming you kept that list of colors); since there will be fewer than n comparisons, this is O(n) even in the worst case.
B) While you are tallying the ball counts, keep track of the largest count and its color. Whenever the current color's count beats it, replace it. This does one extra comparison per ball, so it is still O(n), just with more comparisons than A.
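Both variants from the answer above, sketched with a hash table (function names are mine):

```python
from collections import Counter

def most_frequent_color_a(balls):
    """Variant A: tally in one O(n) pass, then scan the tallies at the end."""
    counts = Counter(balls)              # color -> count
    return max(counts, key=counts.get)   # < n comparisons over distinct colors

def most_frequent_color_b(balls):
    """Variant B: keep the running best while tallying."""
    counts, best_color, best_count = {}, None, 0
    for color in balls:
        counts[color] = counts.get(color, 0) + 1
        if counts[color] > best_count:   # one extra comparison per ball
            best_color, best_count = color, counts[color]
    return best_color
```

Unlike the Boyer-Moore approach, both use O(k) extra space for k distinct colors, but neither needs the majority guarantee.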

Related

Given n coins, some of which are heavier, find the number of heavy coins? [closed]

Closed 9 years ago.
Given n coins, some of which are heavier, give an algorithm for finding the number of heavy coins using O(log² n) weighings. Note that all heavy coins have the same weight, and all the light ones share the same weight too.
You are given a balance using which you can compare the weights of two disjoint subsets of coins. Note that the balance only indicates which subset is heavier, or whether they have equal weights, and not the absolute weights.
I won't give away the whole answer, but I'll help you break it down.
Find an O(log n) algorithm to find a single heavy coin.
Find an O(log n) algorithm to split a set into two sets with equal numbers of heavy coins and equal numbers of light coins, plus up to two leftover coins (for when there is not an even amount of each).
Combine algorithms #1 and #2.
Hints:
Algorithm #1 is independent of algorithm #2.
O(log n) hints at binary search.
How might you end up with O(log²(n)) by combining two O(log n) algorithms?
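A sketch of step #1 only (finding one heavy coin in O(log n) weighings); the function names and the simulated balance are mine. The key invariant: with equal-sized pans, the heavier pan must contain at least one heavy coin, and when the pans balance, keeping the leftover coin in the candidate set preserves the guarantee that the set still contains a heavy coin.

```python
def find_heavy(coins, weigh):
    """Locate one heavy coin among `coins` (a list of coin indices).

    `weigh(a, b)` compares two equal-sized disjoint groups and returns
    -1, 0, or 1 as the left pan is lighter, balanced, or heavier.
    Precondition: `coins` contains at least one heavy coin.
    Uses O(log n) weighings: the candidate set roughly halves each round."""
    while len(coins) > 1:
        half = len(coins) // 2
        left, right = coins[:half], coins[half:2 * half]
        leftover = coins[2 * half:]          # one coin when the count is odd
        result = weigh(left, right)
        if result > 0:
            coins = left                     # heavier pan holds more heavies
        elif result < 0:
            coins = right
        else:
            # Balanced: both halves hold k heavies each.  If k = 0, the
            # leftover coin must be heavy, so keep it in the candidate set.
            coins = left + leftover
    return coins[0]

def make_balance(is_heavy, heavy=3, light=1):
    """Simulated balance over a list of per-coin heaviness flags."""
    def weigh(a, b):
        wa = sum(heavy if is_heavy[i] else light for i in a)
        wb = sum(heavy if is_heavy[i] else light for i in b)
        return (wa > wb) - (wa < wb)
    return weigh
```

Combining this with the splitting step (#2) at every level is what yields the O(log² n) total.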

Difference between brute force and Euclidean algorithms for finding GCD [closed]

Closed 9 years ago.
Why is the consecutive integer checking algorithm for finding the GCD considered brute force, but the Euclidean algorithm is not? I am just confused about it. Is it because we check candidates one by one?
Brute-force algorithms try every candidate solution, see which one fits, and return that as the answer. For example, a brute-force GCD algorithm would start with the smaller of the two numbers and work down to 1, examining every single possibility on its way down.
In contrast, the Euclidean algorithm does not go one by one: it makes jumps, sometimes pretty significant ones. Moreover, it does not check each possible number as a solution to the GCD problem at each step: its stopping condition is rather different from a typical brute-force solution, which checks whether the current candidate is a solution and stops when the answer is "yes". The Euclidean algorithm checks a different condition, namely b != 0, to decide whether to continue.
These two distinctions (large steps and a different stopping condition) make the Euclidean algorithm different from brute-force algorithms.
A brute-force search involves trying every (reasonable) possibility. Euclid's algorithm only checks a very small subset of those possibilities.
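The two styles side by side (a minimal sketch; the function names are mine). On gcd(48, 18), the brute-force loop examines 13 candidates (18 down to 6), while Euclid's loop runs only 3 remainder steps.

```python
def gcd_brute_force(a, b):
    """Check every candidate from min(a, b) down to 1: O(min(a, b)) steps."""
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:   # stop when the candidate IS a solution
            return d

def gcd_euclid(a, b):
    """Jump via remainders; the loop condition is b != 0,
    not "is this candidate a divisor"."""
    while b != 0:
        a, b = b, a % b
    return a
```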

Minimum Distance Algorithm [closed]

Closed 9 years ago.
I have been reading on here for a while, but this is the first time I have posted, so I apologize if this isn't tagged correctly. Anyway, I am stuck on a problem, which I explain below.
In the problem my job is to arrange n wifi routers to minimize the longest distance between any house and the nearest wifi router. I can assume that the houses are arranged in a one dimensional space. I am given the positions of the houses as a distance from an initial point and the positions are given in sorted order. Additionally I must solve this problem in O(m log L) where m is the number of houses and L is the maximum position that can be given.
I have tried to figure this out, but none of the algorithms I have come up with solve it within the required complexity. Thanks for any hints on how to go about solving this.
Here is a hint.
It is easy to write an O(m) function that takes an upper bound on distance and tells you the minimum number of routers needed to ensure that no house is farther than that distance from a router.
Now search for the largest distance that uses no more than n routers.
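A sketch of the two hints combined, assuming integer house positions (function names are mine): `routers_needed` is the greedy O(m) feasibility check, and the outer binary search over [0, L] gives O(m log L) overall.

```python
def routers_needed(houses, d):
    """Greedy O(m) check: minimum routers so that no house is farther
    than d from its nearest router.  `houses` must be sorted ascending."""
    count, i, m = 0, 0, len(houses)
    while i < m:
        count += 1
        reach = houses[i] + 2 * d        # a router at houses[i] + d covers up to here
        while i < m and houses[i] <= reach:
            i += 1
    return count

def min_max_distance(houses, n, L):
    """Binary-search the smallest distance d such that n routers suffice."""
    lo, hi = 0, L
    while lo < hi:
        mid = (lo + hi) // 2
        if routers_needed(houses, mid) <= n:
            hi = mid                     # feasible: try a smaller distance
        else:
            lo = mid + 1                 # infeasible: need a larger distance
    return lo
```

With real-valued positions, the same idea works by binary-searching to a desired precision instead of over integers.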

Car parked on an infinite street: find car and compute the complexity [closed]

Closed 10 years ago.
Here is the question asked at an Interview:
You are placed on a street which is very long. This is the street that you parked your car at. You have to find your car on this street.
What is the algorithm to find your car, and what is its complexity? The answer they were looking for was O(n log n), but you have to prove why it is O(n log n).
Hint: a lot of math is involved in getting to this answer.
I guess n here is the distance to the car. The problem is that you're in the middle of an infinite street/line and you don't know in which direction to go.
A solution is to walk x units in one direction, then 2x in the other, then 4x back, then 8x in the reverse direction, and so on, doubling and turning each time. The total distance walked is O(n).
This is a special case of the online cow path problem. The usual metric is the worst-case travel distance compared with the distance you would travel if you knew where the car was (the competitive ratio). This is 9n, which is actually O(n), not O(n log n). You do, however, make O(log n) changes of direction.
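The doubling walk can be simulated to check the 9n bound (`find_car` is my name; the car is assumed to sit at a nonzero integer position on the line):

```python
def find_car(car):
    """Doubling search from the origin on an integer line.

    Walk to +1, back through 0 to -2, back to +4, to -8, ... until a leg
    passes the car's position.  Returns the total distance walked, which
    is at most 9 * |car| (the competitive ratio of this strategy)."""
    walked, pos = 0, 0
    step, direction = 1, 1
    while True:
        target = direction * step
        if min(pos, target) <= car <= max(pos, target):
            return walked + abs(car - pos)   # car lies on this leg
        walked += abs(target - pos)          # walk the full leg and turn
        pos, step, direction = target, step * 2, -direction
```

The worst case is a car just past a turning point on the far side, which forces nearly two more full doubling legs; summing the geometric series gives the factor of 9.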

find max consecutive sum, find segments containing a point [closed]

Closed 11 years ago.
1) Given an array of integers (negative and positive), what is the most efficient algorithm to return the maximum consecutive sum?
a) I thought to solve this with Dynamic Programming, but the complexity is O(n^2). Is there another way?
b) What if we were given an infinite input of integers? Is there a way to output the current maximum consecutive sum? I guess not.
2) Given an array of segments [start, end] (they can overlap), ordered ascending by start point, and a point: what is the most efficient algorithm to return a segment that contains this point, or all segments that contain this point?
I thought to use binary search to hit the last segment that starts before this point and then traverse right and left.
Any other ideas?
For 1), there is an algorithm that works in O(n).
For 2), I think your approach is reasonable (as long as you can't assume any ordering with respect to end points).
1) As long as the sum doesn't drop below zero, it's always better to continue the consecutive summation. So you just pass through the array once from left to right (i.e. a linear-time algorithm), remembering the current consecutive sum and the maximum consecutive sum so far, and updating the maximum whenever the current sum gets larger than it.
So at any point of the array traversal, you can say what the maximum sum so far is. Hence you can use this algorithm for an (infinite) input stream, too.
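The one-pass scan described above is Kadane's algorithm; a minimal sketch (the function name is mine):

```python
def max_consecutive_sum(xs):
    """Kadane's algorithm: one left-to-right pass, O(n) time, O(1) space.

    `cur` is the best sum of a run ending at the current element;
    it restarts whenever extending the run would be worse than
    starting fresh at the current element."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)    # extend the run or start a new one
        best = max(best, cur)    # running answer after every element
    return best
```

Because `best` is correct after every prefix, the same loop answers question 1b on a stream: just report `best` whenever asked.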
2) Yes, binary search sounds good. If I understand the question correctly, you can start with the right-most segment that starts at or before the point and then traverse the segments to the left. Of course, the worst-case runtime is still linear in the number of segments, but the average should be logarithmic.
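A sketch of that binary-search-then-scan-left approach (the function name is mine). Note that the leftward scan cannot stop early without extra information, such as a bound on the maximum segment length, which is why the worst case stays linear.

```python
import bisect

def segments_containing(segments, p):
    """Return all segments [start, end] that contain point p.

    `segments` must be sorted ascending by start.  Binary-search for the
    last segment starting at or before p, then scan left, keeping every
    segment whose end also reaches p."""
    starts = [s for s, _ in segments]
    i = bisect.bisect_right(starts, p) - 1   # right-most start <= p
    hits = []
    while i >= 0:
        start, end = segments[i]
        if start <= p <= end:
            hits.append((start, end))
        i -= 1
    return hits[::-1]                        # restore ascending order
```

For many queries against the same segment set, an interval tree would answer each query in O(log n + k) for k hits, but for a single query this simple scan is hard to beat.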
