I'm working through codefights and am attempting the busyHolidays challenge from the Instacart company challenges.
The challenge provides three arrays. Shoppers contains strings representing the start and end times of their shifts. Orders contains strings representing the start and end times of the orders, and leadTime contains integers representing the number of minutes it takes to complete the job.
The goal is to determine if the orders can be matched to shoppers such that each shopper has only one order and each order has a shopper. An order may only be matched to a shopper if the shopper can both begin and complete it within the order time.
I have a solution that passes 19/20 tests, but since I can't see the last test I have no idea what's going wrong. I originally spent a couple of days trying to learn algorithms like Edmonds' algorithm and the Hungarian algorithm, but my lack of CS background and weakness in math kind of bit me in the ass, and I can't seem to wrap my head around how to actually implement those methods. So I came up with a solution that involves weighting each node on each side of the graph according to its number of possible connections. I would appreciate it if anyone could take a look at my solution and either point out where it might be messing up or suggest a more standard solution to the problem in a way that might be easier for someone without formal training in algorithms to understand. Thanks in advance.
I'll put the code in a gist since it's fairly lengthy.
Code: https://gist.github.com/JakeTompkins/7e1afc4722fb828f26f8f6a964774a25
Well, I don't see any reason to think that the algorithm you're writing is actually going to work so the question about how you might be messing it up doesn't seem to be relevant.
You have correctly identified this as an instance of the assignment problem. More specifically, this is the "maximum bipartite matching" problem, and the Edmonds-Karp algorithm is the simplest way to solve it (https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm).
However, this is an algorithm for finding the maximum flow in a network, which is a larger problem than simple bipartite matching, and the explanations of this algorithm are really a lot more complicated than you need. It's understandable that you had some trouble implementing this from the literature, but actually when the problem is reduced to simple (unweighted) bipartite matching, the algorithm is easy to understand:
Make an initial assignment
Try to find an improvement
Repeat until no more improvements can be found.
For bipartite matching, an "improvement" always has the same form, which is what makes this problem easy to solve. To find an improvement, you have to find a path that connects an unassigned shopper to an unassigned order, following these rules:
The path can go from any shopper to any order he/she could fulfill but does not
The path can go from any order only to the shopper that is fulfilling it in the current assignment.
You use breadth-first search to find the shortest such path, which will correspond to the improvement that changes the smallest number of existing assignments.
The path you find will necessarily have an odd number of edges, and the even-numbered edges will be assignments. To implement the improvement, you remove those assignments and replace them with the odd-numbered edges. There's one more of those, which is what makes it an improvement. It looks like this:
PREVIOUS              PATH FOUND            IMPROVED ASSIGNMENT

    1                     1                     1
                         /                     /
  A                     A                     A
   \                     \
    2                     2                     2
                         /                     /
  B                     B                     B
   \                     \
    3                     3                     3
                         /                     /
  C                     C                     C
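For reference, here is a minimal sketch in Python of the augment-until-stuck idea described above. It assumes the time-window feasibility check has already been done and is passed in as a boolean matrix can[s][o] (parsing the shift/order strings and applying the lead time is left out), and it assumes the challenge only needs every order to receive a shopper.

from collections import deque

def match_all_orders(can):
    """can[s][o] is True when shopper s could fulfil order o.
    Returns True when every order can be given its own shopper."""
    num_shoppers = len(can)
    num_orders = len(can[0]) if can else 0
    order_of = [None] * num_shoppers   # order currently assigned to each shopper
    shopper_of = [None] * num_orders   # shopper currently assigned to each order

    def augment(start):
        # Breadth-first search for the shortest alternating path from the
        # unassigned shopper `start` to any unassigned order.
        parent = {("s", start): None}
        queue = deque([("s", start)])
        while queue:
            kind, idx = queue.popleft()
            if kind == "s":
                # From a shopper, move to any order they could fulfil but don't.
                for o in range(num_orders):
                    if can[idx][o] and order_of[idx] != o and ("o", o) not in parent:
                        parent[("o", o)] = ("s", idx)
                        if shopper_of[o] is None:
                            # Unassigned order reached: flip the path's edges.
                            node = ("o", o)
                            while node is not None:
                                _, order = node
                                s_node = parent[node]
                                _, shopper = s_node
                                order_of[shopper] = order
                                shopper_of[order] = shopper
                                node = parent[s_node]
                            return True
                        queue.append(("o", o))
            else:
                # From an assigned order, move only to its current shopper.
                s = shopper_of[idx]
                if ("s", s) not in parent:
                    parent[("s", s)] = ("o", idx)
                    queue.append(("s", s))
        return False

    matched = sum(1 for s in range(num_shoppers)
                  if order_of[s] is None and augment(s))
    return matched == num_orders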
Related
Given a bunch of sets of people (similar to):
[p1,p2,p3]
[p2,p3]
[p1]
[p1]
Select 1 from each set, trying to minimize the maximum number of times any one person is selected.
For the sets above, the max number of times a given person MUST be selected is 2.
I'm struggling to come up with an algorithm for this. I don't think it can be done with a greedy algorithm; I'm thinking more along the lines of a dynamic programming solution.
Any hints on how to go about this? Or do any of you know any good websites about this stuff that I could have a look at?
This is neither dynamic nor greedy. Let's look at a different problem first -- can it be done by selecting every person at most once?
You have P people and S sets. Create a graph with S+P vertices, representing sets and people. There is an edge between person pi and set si iff pi is an element of si. This is a bipartite graph and the decision version of your problem is then equivalent to testing whether the maximum cardinality matching in that graph has size S.
As detailed on that page, this problem can be solved by using a maximum flow algorithm (note: if you don't know what I'm talking about, then take your time to read it now, as you won't understand the rest otherwise): first create a super-source, add an edge linking it to all people with capacity 1 (representing that each person may only be used once), then create a super-sink and add edges linking every set to that sink with capacity 1 (representing that each set may only be used once) and run a suitable max-flow algorithm between source and sink.
Now, let's consider a slightly different problem: can it be done by selecting every person at most k times?
If you paid attention to the remarks in the last paragraph, you should know the answer: just change the capacity of the edges leaving the super-source to indicate that each person may be used more than once in this case.
Therefore, you now have an algorithm to solve the decision problem in which people are selected at most k times. It's easy to see that if you can do it with k, then you can also do it with any value greater than k, that is, it's a monotonic function. Therefore, you can run a binary search on the decision version of the problem, looking for the smallest k possible that still works.
Note: You could also get rid of the binary search by testing each value of k sequentially, and augmenting the residual network obtained in the last run instead of starting from scratch. However, I decided to explain the binary search version as it's conceptually simpler.
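A short sketch of that construction and the binary search, using the networkx library's max-flow as a stand-in for "a suitable max-flow algorithm" (an assumption on my part; any max-flow implementation will do):

import networkx as nx

def feasible(sets_of_people, k):
    """True if one person can be chosen from each set with nobody chosen more than k times."""
    g = nx.DiGraph()
    people = {p for s in sets_of_people for p in s}
    for p in people:
        g.add_edge("source", ("person", p), capacity=k)     # person used at most k times
    for i, s in enumerate(sets_of_people):
        for p in s:
            g.add_edge(("person", p), ("set", i), capacity=1)
        g.add_edge(("set", i), "sink", capacity=1)           # each set filled exactly once
    flow_value, _ = nx.maximum_flow(g, "source", "sink")
    return flow_value == len(sets_of_people)

def smallest_k(sets_of_people):
    """Binary search for the smallest k that still admits a feasible selection."""
    lo, hi = 1, len(sets_of_people)   # k = number of sets always works (non-empty sets)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(sets_of_people, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(smallest_k([["p1", "p2", "p3"], ["p2", "p3"], ["p1"], ["p1"]]))   # prints 2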
Problem description
There are different categories which contain an arbitrary amount of elements.
There are three different attributes A, B and C. Each element has a different distribution of these attributes, expressed as positive integer values. For example, element 1 has the attributes A: 42, B: 1337, C: 18. The sum of these attributes is not consistent across elements; some elements have more than others.
Now the problem:
We want to choose exactly one element from each category so that
We hit a certain threshold on attributes A and B (going over it is also possible, but not necessary)
while getting a maximum amount of C.
Example: we want to hit at least 80 A and 150 B in sum over all chosen elements and want as many C as possible.
I've thought about this problem and cannot imagine an efficient solution. The sample sizes are about 15 categories, each of which contains up to ~30 elements, so brute-forcing doesn't seem very effective since there are potentially 30^15 possibilities.
My model is a tree whose depth is the number of categories. Each level represents a category and offers the choice of one element from that category. When passing over a node, we add the attributes of the chosen element to the sums we want to optimize.
If we hit the same attribute combination multiple times on the same level, we merge them so we don't recompute values we have already computed. If we reach a level where one path has less value in all three attributes than another, we stop following it.
However, in the worst case this tree still has ~30^15 nodes in it.
Can any of you think of an algorithm that might help me solve this problem? Or could you explain why you think no efficient algorithm exists for this?
This question is very similar to a variation of the knapsack problem. I would start by looking at solutions for this problem and see how well you can apply it to your stated problem.
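To make the knapsack connection concrete, here is one way it could be adapted, purely as a sketch (not necessarily what the answerer had in mind): a dynamic program over the A and B sums, capped at the thresholds, keeping the best C for each state. With 15 categories of ~30 elements and thresholds like 80 and 150, the state space stays tiny. The data below is made up for illustration.

def choose_elements(categories, need_a, need_b):
    """Pick one element per category, reaching at least need_a of A and need_b of B
    while maximizing total C. Returns the best C, or None if infeasible."""
    # dp maps (a, b) -> best C so far, where a and b are the running A/B sums
    # capped at the thresholds (anything beyond a threshold is equivalent).
    dp = {(0, 0): 0}
    for elements in categories:
        nxt = {}
        for (a, b), c in dp.items():
            for ea, eb, ec in elements:
                key = (min(a + ea, need_a), min(b + eb, need_b))
                if nxt.get(key, -1) < c + ec:
                    nxt[key] = c + ec
        dp = nxt                          # exactly one element taken per category
    return dp.get((need_a, need_b))       # None means the thresholds can't be met

# Tiny made-up example: two categories, thresholds A >= 80 and B >= 150.
categories = [
    [(42, 1337, 18), (50, 10, 99)],       # (A, B, C) per element
    [(40, 100, 5), (100, 200, 1)],
]
print(choose_elements(categories, 80, 150))   # prints 100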
My first inclination is to try branch-and-bound. You can do it breadth-first or depth-first, and I prefer depth-first because I think it's cleaner.
To express it simply, you have a tree-walk procedure walk that can enumerate all possibilities (maybe it just has a 5-level nested loop). It is augmented with two things:
At every step of the way, it keeps track of the cost at that point, where the cost can only increase. (If the cost can also decrease, it becomes more like a minimax game tree search.)
The procedure has an argument budget, and it does not search any branches where the cost can exceed the budget.
Then you have an outer loop:
for (budget = 0; budget < ... ; budget++){
    walk(budget);
    // if walk finds a solution within the budget, halt
}
The amount of time it takes is exponential in the budget, so easier cases will take less time. The fact that you are re-doing the search doesn't matter much because each level of the budget takes as much or more time than all the previous levels combined.
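As a rough illustration only, here is how that walk/budget pair might look in Python. The cost and is_goal callables are placeholders for whatever measure you pick; the pruning is only valid if cost never decreases as choices are added.

def walk(categories, budget, chosen, cost, is_goal):
    """Depth-first enumeration that abandons any branch whose running cost
    already exceeds the budget. Returns a complete selection or None."""
    if cost(chosen) > budget:
        return None                              # prune: cost can only grow
    if len(chosen) == len(categories):
        return list(chosen) if is_goal(chosen) else None
    for element in categories[len(chosen)]:
        chosen.append(element)
        result = walk(categories, budget, chosen, cost, is_goal)
        chosen.pop()
        if result is not None:
            return result
    return None

def solve(categories, cost, is_goal, max_budget):
    # Outer loop: re-run the walk with a steadily larger budget and halt as
    # soon as some solution fits within the current budget.
    for budget in range(max_budget + 1):
        result = walk(categories, budget, [], cost, is_goal)
        if result is not None:
            return budget, result
    return None

# Toy example: minimize the sum of the chosen values, one value per category.
print(solve([[3, 1, 4], [1, 5], [2, 6]], cost=sum,
            is_goal=lambda chosen: True, max_budget=20))   # prints (4, [1, 1, 2])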
Combine this with some sort of heuristic about the order in which you consider branches, and it may give you a workable solution for typical problems you give it.
IF that doesn't work, you can fall back on basic heuristic programming. That is, do some cases by hand, and pay attention to how you did it. Then program it the same way.
I hope that helps.
Odd question here, not really code but logic; hope it's OK to post it here. Here it is:
I have a data structure that can be thought of as a graph.
Each node can support many links, but the number of links is limited to a per-node value.
All links are bidirectional, and each link has a cost. The cost depends on the Euclidean distance between the nodes, the minimum of a parameter of the two nodes, and a global modifier.
I wish to find the maximum total cost for the graph.
I'm wondering if there is a clever way to find such a matching, rather than going through it by brute force, which is ugly, and I'm not sure how I'd even do that without spending 7 million years running it.
To clarify:
Global variable = T
There are many nodes N, each of which has values E, X, Y, L.
L is the max number of links each node can have.
cost of link A,B = sqrt( min(A.E, B.E) ) * ( 1 + sqrt( sqrt( (A.X - B.X)^2 + (A.Y - B.Y)^2 ) )/75 + sqrt(T)/10 )
Total cost = the sum over all links, and we wish to maximize this.
Average values for nodes are 40-50 and can range over (20..600).
The average node linking factor is 3, with a range of 0-10.
For the sake of completeness, for anybody else who looks at this article, I would suggest revisiting your graph theory algorithms:
Dijkstra
A*
Greedy
Depth / Breadth First
Even dynamic programming (in some situations)
etc., etc.
In there somewhere is the correct solution for your problem. I would suggest looking at Dijkstra first.
I hope this helps someone.
If I understand the problem correctly, there is likely no polynomial solution. Therefore I would implement the following algorithm:
Find some solution by being greedy. To do that, sort all edges by cost and go through them starting with the highest, adding an edge to your graph when possible and skipping it when a node can't accept more edges.
Look at your edges and try to change them to achieve a higher cost by using a heuristic. The first one that comes to mind: cycle through all 4-tuples of nodes (A,B,C,D), and if your current graph has edges AB and CD but AC and BD would be better, make the change.
Optionally do the same thing with 6-tuples, or use other genetic algorithms (they are called that because they work by mutations).
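A rough sketch of both steps, greedy construction followed by the 4-tuple swap, assuming nodes are dicts carrying the question's E, X, Y and L fields (lower-cased here) and using the cost formula exactly as it was written in the question:

import math

def link_cost(a, b, t):
    # Formula as given in the question: sqrt(min(E)) * (1 + sqrt(distance)/75 + sqrt(T)/10)
    dist = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
    return math.sqrt(min(a["e"], b["e"])) * (1 + math.sqrt(dist) / 75 + math.sqrt(t) / 10)

def greedy_links(nodes, t):
    """Sort all candidate links by cost (highest first) and keep each one whose
    endpoints still have spare capacity (fewer than L links)."""
    candidates = [(link_cost(nodes[i], nodes[j], t), i, j)
                  for i in range(len(nodes)) for j in range(i + 1, len(nodes))]
    candidates.sort(reverse=True)
    degree = [0] * len(nodes)
    chosen = set()
    for _, i, j in candidates:
        if degree[i] < nodes[i]["l"] and degree[j] < nodes[j]["l"]:
            chosen.add((i, j))
            degree[i] += 1
            degree[j] += 1
    return chosen

def improve_by_swaps(nodes, links, t):
    """Repeat the 4-tuple improvement: if edges AB and CD exist but AC and BD
    would cost more in total, swap them (each node's degree is unchanged)."""
    links = {tuple(sorted(l)) for l in links}
    changed = True
    while changed:
        changed = False
        for ab in list(links):
            for cd in list(links):
                a, b = ab
                c, d = cd
                if len({a, b, c, d}) < 4 or ab not in links or cd not in links:
                    continue
                ac, bd = tuple(sorted((a, c))), tuple(sorted((b, d)))
                if ac in links or bd in links:
                    continue
                old = link_cost(nodes[a], nodes[b], t) + link_cost(nodes[c], nodes[d], t)
                new = link_cost(nodes[a], nodes[c], t) + link_cost(nodes[b], nodes[d], t)
                if new > old:
                    links -= {ab, cd}
                    links |= {ac, bd}
                    changed = True
    return links

Each swap strictly increases the total cost, so the loop terminates, but like any local search it only guarantees a local optimum.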
This is equivalent to the traveling salesman problem (and is therefore NP-Complete) since if you could solve this problem efficiently, you could solve TSP simply by replacing each cost with its reciprocal.
This means you can't solve it exactly. On the other hand, it means that you can do exactly as I said (replace each cost with its reciprocal) and then use any of the known TSP approximation methods on this problem.
Seems like a max flow problem to me.
Is it possible to greedily select the next most expensive option from any given start point (omitting jumps to visited nodes), stopping once all nodes are visited? If you get to a dead end, backtrack to the previous spot where you are not at a dead end and greedily select again. It would require some work, and probably something like a stack to keep your paths in. I think this would work quite effectively, provided the costs are well ordered and non-negative.
Use genetic algorithms. They are designed to solve problems like the one you state while rapidly reducing time complexity. Check for an AI library in your language of choice.
It's been a while since my algorithms class in school, so forgive me if my terminology is not exact.
I have a series of actions that, when run, produces some desired state (it's basically a set of steps to reproduce a bug, but that doesn't matter for the sake of this question).
My goal is to find the shortest series of steps that still produces the desired state. Any given step might be unnecessary, so I'm trying to remove those as efficiently as possible.
I want to preserve the order of the steps (so I can remove steps, but not rearrange them).
The naive approach I'm taking is to take the entire series and try removing each action in turn. If I can successfully remove one action (without altering the final state), I start back at the beginning of the series. This should be O(n^2) in the worst case.
I'm starting to play around with ways to make this more efficient, but I'm pretty sure this is a solved problem. Unfortunately, I'm not sure exactly what to Google - the series isn't really a "path," so I can't use path-shortening algorithms. Any help - even just giving me some terms to search - would be helpful.
Update: Several people have pointed out that even my naive algorithm won't find the shortest solution. This is a good point, so let me revise my question slightly: any ideas about approximate algorithms for the same problem? I'd rather have a short solution that's near the shortest solution quickly than take a very long time to guarantee the absolute shortest series. Thanks!
Your naive n^2 approach is not exactly correct; in the worst case you might have to look at all subsets (well actually the more accurate thing to say is that this problem might be NP-hard, which doesn't mean "might have to look at all subsets", but anyway...)
For example, suppose you are currently running steps 12345, and you start trying to remove each of them individually. Then you might find that you can't remove 1, you can remove 2 (so you remove it), then you look at 1345 and find that each of them is essential -- none can be removed. But it might turn out that actually, if you keep 2, then just "125" suffice.
If your family of sets that produce the given outcome is not monotone (i.e. if it doesn't have the property that if a certain set of actions work, then so will any superset), then you can prove that there is no way of finding the shortest sequence without looking at all subsets.
If you are making strictly no assumptions about the effect of each action and you want to find the strictly smallest subset, then you will need to try all possible subsets of actions to find the shortest sequence.
The binary search method stated would only be sufficient if a single step caused your desired state.
For the more general state, even removing a single action at a time would not necessarily give you the shortest sequence. This is the case if you consider pathological examples where actions may together cause no problem, but individually trigger your desired state.
Your problem seems reducible to a more general search problem, and the more assumptions you can make, the smaller your search space will become.
Delta Debugging, a method for minimizing a set of failure-inducing input, might be a good fit.
I've previously used Delta (which minimizes "interesting" files, based on a test for interestingness) to reduce a ~1000-line file to around 10 lines, for a bug report.
The most obvious thing that comes to mind is a binary search-inspired recursive division into halves, where you alternately leave out each half. If leaving out a half at any stage of the recursion still reproduces the end state, then leave it out; otherwise, put it back in and recurse on both halves of that half, etc.
Recursing on both halves means that it tries to eliminate large chunks before giving up and trying smaller chunks of those chunks. The running time will be O(n log(n)) in the worst case, but if you have a large n with a high likelihood of many irrelevant steps, it ought to win out over the O(n) approach of trying to leave out each step one at a time (but not restarting).
This algorithm will only find some minimal paths, though; it can't find smaller paths that may exist due to combinatorial inter-step effects (if the steps are indeed of that nature). Finding all of those would result in combinatorial explosion, unless you have more information about the steps with which to reason (such as dependencies).
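For the record, a minimal sketch of that halving idea in Python; reproduces is a placeholder for whatever harness re-runs a candidate subsequence and reports whether the desired end state still occurs:

def minimize(steps, reproduces):
    """Return a reduced subsequence of `steps` (order preserved) that still
    reproduces the end state. Assumes reproduces(steps) is True to begin with."""
    def shrink(prefix, chunk, suffix):
        # Try to drop `chunk` entirely; if that breaks reproduction, split it
        # in half and shrink each half in turn.
        if reproduces(prefix + suffix):
            return []
        if len(chunk) <= 1:
            return chunk
        mid = len(chunk) // 2
        left, right = chunk[:mid], chunk[mid:]
        kept_left = shrink(prefix, left, right + suffix)
        kept_right = shrink(prefix + kept_left, right, suffix)
        return kept_left + kept_right
    return shrink([], list(steps), [])

# Toy example: pretend the end state needs steps "b" and "d" to be present.
print(minimize(list("abcdef"), lambda s: "b" in s and "d" in s))   # prints ['b', 'd']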
Your problem domain can be mapped to a directed graph where states are nodes and steps are links. You want to find the shortest path in a graph, and for this a number of well-known algorithms exist, for example Dijkstra's or A*.
Updated:
Let's think about a simple case: you have one step that leads from state A to state B; this can be drawn as 2 nodes connected by a link. Now you have another step that leads from A to C, and from C you have a step that leads to B. With this you have a graph with 3 nodes and 3 links, and the cost of reaching B from A is either 2 (A-C-B) or 1 (A-B).
So you can see that the cost function is actually very simple: you add 1 for every step you take to reach the goal.
My best shot so far:
A delivery vehicle needs to make a series of deliveries (d1,d2,...dn), and can do so in any order--in other words, all the possible permutations of the set D = {d1,d2,...dn} are valid solutions--but the particular solution needs to be determined before it leaves the base station at one end of the route (imagine that the packages need to be loaded in the vehicle LIFO, for example).
Further, the cost of the various permutations is not the same. It can be computed as the sum of the squares of the distances traveled between d(i-1) and di, where d0 is taken to be the base station, with the caveat that any segment that involves a change of direction costs 3 times as much (imagine this is happening on a railroad or a pneumatic tube, where backing up disrupts other traffic).
Given the set of deliveries D represented as their distance from the base station (so abs(di-dj) is the distance between two deliveries) and an iterator permutations(D) which will produce each permutation in succession, find a permutation which has a cost less than or equal to that of any other permutation.
Now, a direct implementation from this description might lead to code like this:
function Cost(D) ...

function Best_order(D)
    for D1 in permutations(D)
        Found = true
        for D2 in permutations(D)
            Found = false if cost(D2) < cost(D1)
        return D1 if Found
Which is O(n * (n!)^2), i.e. pretty awful, especially compared to the O(n log n) that someone with insight would find by simply sorting D.
My question: can you come up with a plausible problem description which would naturally lead the unwary into a worse (or differently awful) implementation of a sorting algorithm?
I assume you're using this question for an interview to see if the applicant can notice a simple solution in a seemingly complex question.
[This assumption is incorrect -- MarkusQ]
You give too much information.
The key to solving this is realizing that the points are in one dimension and that a sort is all that is required. To make this question more difficult hide this fact as much as possible.
The biggest clue is the distance formula. It introduces a penalty for changing directions. The first thing that comes to my mind is minimizing this penalty. To remove the penalty I have to order them in a certain direction; this ordering is the natural sort order.
I would remove the penalty for changing directions; it's too much of a giveaway.
Another major clue is the input values to the algorithm: a list of integers. Give them a list of permutations, or even all permutations. That sets them up to think that an O(n!) algorithm might actually be expected.
I would phrase it as:
Given a list of all possible permutations of n delivery locations, where each permutation of deliveries (d1, d2, ..., dn) has a cost defined by:
Return permutation P such that the cost of P is less than or equal to that of any other permutation.
All that really needs to be done is read in the first permutation and sort it.
If they construct a single loop to compare the costs, ask them what the big-O runtime of their algorithm is, where n is the number of delivery locations (another trap).
This isn't a direct answer, but I think more clarification is needed.
Is di allowed to be negative? If so, sorting alone is not enough, as far as I can see.
For example:
d0 = 0
deliveries = (-1,1,1,2)
It seems the optimal path in this case would be 1 > 2 > 1 > -1.
Edit: This might not actually be the optimal path, but it illustrates the point.
You could rephrase it, having first found the optimal solution, as:
"Give me a proof that the following combination is optimal for the following set of rules, where optimal means the smallest number results from the sum of all stage costs, taking into account that all stages (A..Z) need to be present once and only once.
Combination:
A->C->D->Y->P->...->N
Stage costs:
A->B = 5,
B->A = 3,
A->C = 2,
C->A = 4,
...
...
...
Y->Z = 7,
Z->Y = 24."
That ought to keep someone busy for a while.
This reminds me of the Knapsack problem more than the Traveling Salesman. But Knapsack is also an NP-hard problem, so you might be able to fool people into thinking up an over-complex solution using dynamic programming if they correlate your problem with the Knapsack. The basic problem is:
can a value of at least V be achieved without exceeding the weight W?
Now the problem is that a fairly good solution can be found when V is unique (your distances), as such:
The knapsack problem with each type of item j having a distinct value per unit of weight (vj = pj/wj) is considered one of the easiest NP-complete problems. Indeed, empirical complexity is of the order of O((log n)^2) and very large problems can be solved very quickly, e.g. in 2003 the average time required to solve instances with n = 10,000 was below 14 milliseconds using commodity personal computers.
So you might want to state that several stops/packages might share the same vj, inviting people to think about the really hard solution to:
However, in the degenerate case of multiple items sharing the same value vj it becomes much more difficult, with the extreme case where vj = constant being the subset sum problem with a complexity of O(2^(N/2) * N).
So if you replace weight per value with distance per value, and state that several distances might actually share the same values (the degenerate case), some folks might fall into this trap.
Isn't this just the (NP-Hard) Travelling Salesman Problem? It doesn't seem likely that you're going to make it much harder.
Maybe phrasing the problem so that the actual algorithm is unclear - e.g. by describing the paths as single-rail railway lines so the person would have to infer from domain knowledge that backtracking is more costly.
What about describing the question in such a way that someone is tempted to do recursive comparisons - e.g. "can you speed up the algorithm by using the optimum max subset of your best (so far) results"?
BTW, what's the purpose of this - it sounds like the intent is to torture interviewees.
You need to be clearer on whether the delivery truck has to return to base (making it a round trip), or not. If the truck does return, then a simple sort does not produce the shortest route, because the square of the return from the furthest point to base costs so much. Missing some hops on the way 'out' and using them on the way back turns out to be cheaper.
If you trick someone into a bad answer (for example, by not giving them all the information) then is it their foolishness or your deception that has caused it?
How great is the wisdom of the wise, if they heed not their ego's lies?