Google Interview Question: Assign People and Cars optimally on 2D array [closed] - algorithm

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
My friend recently got this question in a Google interview, and I think it's a tricky one.
Given people and cars on a 2D grid, find an optimal assignment of
people to cars, where you define what "optimal" means.
If we define an optimal assignment as the one with the least total Manhattan distance over all pairs (the minimum sum of each pair's distance), what would be a good way to solve this, and which algorithm should I use?
Is there a better way to solve this problem more efficiently with a different definition of "optimal"?

If you use total Manhattan distance (or the total of pretty much anything else), then this is an instance of the "assignment problem", also known as "minimum-cost bipartite matching".
You can solve it using the Hungarian algorithm or other methods: https://en.wikipedia.org/wiki/Assignment_problem
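To make the objective concrete, here is a small brute-force sketch that checks every assignment for a tiny instance (function names are my own; for real inputs you would use the Hungarian algorithm or a library implementation such as SciPy's `linear_sum_assignment` instead of enumerating permutations):

```python
from itertools import permutations

def manhattan(p, c):
    return abs(p[0] - c[0]) + abs(p[1] - c[1])

def min_total_assignment(people, cars):
    # Brute force: try every way of assigning cars to people (n! options).
    # Fine for tiny n; the Hungarian algorithm solves this in polynomial time.
    best_total, best = float("inf"), None
    for perm in permutations(range(len(cars)), len(people)):
        total = sum(manhattan(p, cars[j]) for p, j in zip(people, perm))
        if total < best_total:
            best_total, best = total, perm
    return best_total, best

people = [(0, 0), (5, 5), (2, 1)]
cars = [(1, 0), (4, 5), (0, 3)]
total, assignment = min_total_assignment(people, cars)
```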

I believe another condition is that for any person, the distances to any two cars should not be the same, and similarly, for any car, the distances to any two people should not be the same. Ultimately you want to give everyone a car.
A greedy solution works like this: compute the distance of every (car, person) pair and sort the pairs. Walk through the sorted list and match a (car, person) pair if neither is already matched. The algorithm takes O(mn log(mn)) time for m cars and n people, but can be improved by 1. parallelizing the distance computation, and 2. keeping only an O(n) priority queue per person: once a person is matched, we no longer add their distances to the queue. With that, the algorithm takes about O(mn log n). Optimality (under the distinct-distance condition above) can be proved by induction.
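The greedy pairing described above can be sketched as follows (a minimal illustration, without the priority-queue optimization; the function name is my own):

```python
def greedy_assign(people, cars):
    # All (distance, person index, car index) triples, sorted ascending.
    pairs = sorted(
        (abs(px - cx) + abs(py - cy), i, j)
        for i, (px, py) in enumerate(people)
        for j, (cx, cy) in enumerate(cars)
    )
    assignment, used_people, used_cars = {}, set(), set()
    for _, i, j in pairs:
        # Match the cheapest remaining pair whose person and car are both free.
        if i not in used_people and j not in used_cars:
            assignment[i] = j
            used_people.add(i)
            used_cars.add(j)
    return assignment
```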

Related

Best search algorithm for solving Towers of Hanoi? [closed]

Closed 11 months ago.
Which search algorithm is most appropriate for solving the Towers of Hanoi? More specifically, of the uninformed search algorithms breadth-first search, depth-first search, and iterative deepening, which would be best?
I'd argue that none of these algorithms would be the best way to solve Towers of Hanoi. The Towers of Hanoi puzzle is well-studied and we have some great solutions to it, some that work recursively and others that work by looking at patterns in the bits of numbers.
Any of the uninformed search strategies given above will run into trouble fairly quickly on Towers of Hanoi because the search space is so big. With n disks, there are 3^n possible states of the puzzle (each disk can be on one of three pegs), and the shortest possible solution has length 2^n - 1. You're likely to end up exploring much of the full search space with any of the uninformed strategies, and for any reasonable value of n the cost of holding 3^n states in memory (which you'd need for BFS) or a sequence of up to 2^n - 1 states in memory (for DFS or IDDFS) would be prohibitive. Contrast this with the binary counting solution to Towers of Hanoi or the standard recursive algorithm, each of which uses only O(n) total memory.
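For contrast, the standard recursive solution mentioned above fits in a few lines and uses only O(n) stack depth while producing exactly 2^n - 1 moves (peg names here are arbitrary labels):

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    # Standard recursive solution: move n-1 disks out of the way,
    # move the largest disk, then move the n-1 disks on top of it.
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, target, source, moves)
    return moves
```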

For TSP, how does Held–Karp algorithm reduce the time complexity from Brute-force's O(n!) to O(2^n*n^2)? [closed]

Closed 6 years ago.
I have a hard time grasping the key idea of the Held–Karp algorithm: how does it reduce the time complexity?
Is it because it uses dynamic programming, so that time is saved by reading intermediate results from a cache, or because it removes some paths earlier in the calculation?
Also, is it possible to use a two-dimensional table to show the calculation for
a simple TSP instance (3 or 4 cities)?
The dynamic programming procedure of the Held–Karp algorithm takes advantage of the following property of the TSP problem: Every subpath of a path of minimum distance is itself of minimum distance.
So essentially, instead of checking all solutions in a naive "top-down", brute-force approach (trying every possible permutation), we use a "bottom-up" approach in which all the intermediate information required to solve the problem is computed once and only once. The initial step handles the smallest subpaths. Every time we move up to solve a larger subpath, we can look up the solutions to all the smaller subpath problems that have already been computed. The time savings come because all of the smaller subproblems have already been solved, and these savings compound exponentially (at each larger subpath level). But no "paths are removed" from the calculation: at the end of the procedure, all of the subproblems will have been solved. The obvious drawback is that a very large amount of memory may be required to store all the intermediate results.
In summary, the time savings of the Held–Karp algorithm follow from the fact that it never duplicates solving the solution to any subset (combination) of the cities. But the brute force approach will recompute the solution to any given subset combination many times (albeit not necessarily in consecutive order within a given overall set permutation).
Wikipedia's article on the Held–Karp algorithm contains a 2D distance-matrix example and pseudocode.
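A compact sketch of the DP described above, keyed by (visited subset, last city) rather than a plain 2D table (the function name is my own; the example distance matrix is a small asymmetric instance):

```python
from itertools import combinations

def held_karp(dist):
    """Cost of the shortest tour starting and ending at city 0."""
    n = len(dist)
    # dp[(S, j)] = min cost of a path that starts at 0, visits exactly the
    # cities in frozenset S (0 not in S), and ends at j (j in S).
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                # Extend a smaller, already-solved subpath by one city:
                # each (S - {j}, k) entry was computed at the previous size.
                dp[(S, j)] = min(
                    dp[(S - {j}, k)] + dist[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# 4-city asymmetric example; dist[i][j] is the cost of going from i to j.
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
```

Each dp entry is computed once and reused by every larger subset that contains it, which is exactly where the savings over brute force come from.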

When to Use Sorting Algorithms [closed]

Closed 6 years ago.
I'm a mostly self-taught programmer in my freshman year of college, working toward a BS in CompSci. Last year I would do some of the homework for the AP CompSci kids, and when they got to sorting algorithms, I understood what they did, but my question was: in what cases is one actually used? I know this may seem like a horrible or ridiculous question, but other than a few cases I can think of, I don't understand when one would use a sorting algorithm. I understand that they are essential to know and that they are foundational algorithms. But in the day to day, when are they used?
A sorting algorithm arranges a list of elements in a certain order. You use one whenever you want the elements in some order.
For example:
Sorting strings in lexicographical order. This makes several computations easier (such as searching, insertion, and deletion, provided an appropriate data structure is used).
Sorting integers as preprocessing for other algorithms. Suppose you have many queries against a database to find an integer; you will want to apply binary search, and for it to be applicable, the input must be sorted.
In many computational geometry algorithms (such as convex hull), sorting the coordinates is the first step.
So, basically, if you want some ordering, you resort to sorting algorithms!
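The sort-once-then-binary-search pattern from the integer example can be sketched with the standard library's `bisect` module (function names are my own):

```python
from bisect import bisect_left

def build_index(values):
    # Sort once as preprocessing: O(n log n).
    return sorted(values)

def contains(index, x):
    # Then answer each membership query in O(log n) via binary search.
    i = bisect_left(index, x)
    return i < len(index) and index[i] == x

idx = build_index([42, 7, 19, 3])
```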

How can you compute the smallest number of queens that can be placed to attack each uncovered square? [closed]

Closed 6 years ago.
This is a variant question from the Elements of Programming Interviews and doesn't come with a solution.
How can you compute the smallest number of queens that can be placed to attack each uncovered square?
The problem is that of finding a minimum dominating set in a graph (the queen graph in your case: http://mathworld.wolfram.com/QueenGraph.html), and this more general problem is NP-hard. Even though this restriction (to this specific family of graphs) is not necessarily NP-hard itself, you should not expect to find an efficient (polynomial-time) algorithm, and indeed, to date nobody has found one.
As an interview question, I think an acceptable answer would be a backtracking algorithm. You can add small improvements, like stopping the search as soon as you have already placed (n-2) queens on the board.
For more information and pseudo-code of the algorithm and also more sophisticated algorithms I would suggest to read:
Fernau, H. (2010). Minimum dominating set of queens: A trivial programming exercise? Discrete Applied Mathematics, 158(4), 308-318.
http://www.sciencedirect.com/science/article/pii/S0166218X09003722
The simplest way is probably exhaustive search with 1, 2, 3, ... queens until you find a solution. If you take the symmetries of the board into account, you will only need ~10^6 searches to confirm that 4 queens are not enough (at that point you could continue the same search until you find a solution with 5 queens, or alternatively use a greedy algorithm to find a 5-queen solution faster).
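The exhaustive search above, without the symmetry pruning, can be sketched for small boards (the function name is my own; this becomes impractical well before 8x8 without pruning):

```python
from itertools import combinations

def attacks(q, s):
    # A queen at q covers square s if s is q itself, or shares its
    # row, column, or a diagonal.
    return q == s or q[0] == s[0] or q[1] == s[1] or \
        abs(q[0] - s[0]) == abs(q[1] - s[1])

def min_dominating_queens(n):
    # Try 1, 2, 3, ... queens until some placement covers every square.
    squares = [(r, c) for r in range(n) for c in range(n)]
    for k in range(1, n * n + 1):
        for queens in combinations(squares, k):
            if all(any(attacks(q, s) for q in queens) for s in squares):
                return k
```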

Why are heuristics proposed? [closed]

Closed 9 years ago.
I have a small confusion with the nature of heuristics.
We know that heuristics need not give correct outputs for all input instances.
But then, why are heuristics proposed?
Heuristics are used to trade off performance (usually execution speed, but also memory consumption) with potential accuracy or generality. For example, your anti virus software uses heuristics to characterize what a virus might look like, and can take advantage of that piece of information to determine which files it should spend more time analyzing. A good heuristic has the property that it can save substantial time with minimal cost.
In graph search, a heuristic for the A* algorithm need not be perfect. It just needs to be a predicted cost function h(x) that is less than or equal to the true cost to the goal state (i.e., admissible) in order to guarantee an optimal solution. The closer h(x) is to the true cost, the faster an optimal solution will be found.
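A minimal sketch of A* on a 4-connected grid, using the Manhattan distance as h(x); on such a grid it never overestimates the true cost, so it is admissible (function and variable names are my own):

```python
import heapq

def a_star(grid, start, goal):
    # grid[r][c] == 0 means free, 1 means wall; moves cost 1 each.
    def h(p):
        # Manhattan distance: admissible on a 4-connected unit-cost grid.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (f = g + h, g, position)
    best = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        if g > best.get(pos, float("inf")):
            continue  # stale entry; a cheaper path to pos was found already
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```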
Let me give you an example which might help you understand the importance of heuristics.
In Artificial Intelligence, search problems are broadly classified into blind search and directed search. Blind search uses algorithms such as BFS and DFS, and there is a reason they are called blind: they have no knowledge of which direction you should go, so you just explore and explore until you reach the goal node. Imagine the time and space complexity of those algorithms.
Now look at a directed search algorithm such as A*, where you have some kind of heuristic function, in simple terms, an estimate of which direction to take for the next step.
A heuristic does not guarantee the best result, but it tries to give you a good solution, and sometimes even the best one. There are many classes of problems (e.g., the games you play) where a good solution does the job, rather than spending so much time and space finding the optimal one.
I hope it helps.
