I am trying to solve this question 8.2 from the book Grokking Algorithms, but I don't agree with the solution the author gave. The question from the book is:
You’re going to Europe, and you have seven days to see everything you can. You assign a point value to each item (how much you want to see it) and estimate how long it takes. How can you maximize the point total (seeing all the things you really want to see) during your stay? Come up with a greedy strategy. Will that give you the optimal solution?
The answer is also provided:
Keep picking the activity with the highest point value that you can still do in the time you have left. Stop when you can't do anything else. No, this won't give you the optimal solution.
Is it better to see the places that take the minimum time or the maximum time first? I am not convinced by the author's answer to this question. I can't see how it is best to visit the places which take longer in your schedule...
It seems you missed an aspect in the description of the challenge:
There are two metrics at play, not just one:
The time needed to visit a place
The point value a place has
These are independent metrics. The question is to maximize the sum of the second metric, while keeping the sum of the first metric within a given limit (7 days).
The answer suggests selecting as the next place the one with the largest point value among those places whose time still fits within the remaining available time.
This is not saying you should select the place that takes the most time, but the most points (after filtering on available time).
As the book explains, such greedy algorithms do not guarantee the optimal solution, but in practice they can come close without spending an unacceptable amount of running time.
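For concreteness, here is a minimal sketch of that greedy strategy in Python; the attraction names, point values, and durations are invented purely for illustration.

```python
# A minimal sketch of the greedy strategy described above, assuming each
# attraction is a (name, points, days) tuple; all values here are made up.
def greedy_itinerary(attractions, days_available=7):
    chosen = []
    remaining = days_available
    # Repeatedly pick the highest-point attraction that still fits in the
    # remaining time (equivalent to scanning them in descending point order).
    for name, points, days in sorted(attractions, key=lambda a: a[1], reverse=True):
        if days <= remaining:
            chosen.append(name)
            remaining -= days
    return chosen

print(greedy_itinerary([("Louvre", 9, 1), ("Alps hike", 10, 4),
                        ("Colosseum", 8, 1), ("Rhine cruise", 7, 3)]))
```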
Fair Attraction Problem
What I've Tried
I tried thinking about the switches as bits of a bit string. Basically, no matter the state, I need to get them all to zero. And since this question is in a chapter about decrease-and-conquer, I tried to solve it for n = 1. But I can't even come up with a brute-force solution to ensure that one switch is off.
If you have any ideas or hints, please help, thank you.
Since the only feedback we get is when we're in the goal state, the problem is to explore all possible states as efficiently as possible. The relevant mathematical object is called a Gray code. Since you're looking for a recursive construction, the algorithm is:
If there are no switches, then there's one state, and we're in it.
Otherwise, pick a switch. Recursively explore all configurations of the other switches. Flip the held-out switch and then recursively explore the others again.
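A minimal Python sketch of that recursion, assuming a callback `flip(i)` that toggles switch `i` (both the callback and the printed `state` list are placeholders, not part of the original problem):

```python
# Recursive Gray-code style exploration: visit all 2^n switch states by
# exploring the first n-1 switches, flipping the held-out switch, and
# exploring the first n-1 switches again.
def explore(n, flip):
    if n == 0:
        return              # no switches left: there is one state, and we're in it
    explore(n - 1, flip)    # all configurations of the other switches
    flip(n - 1)             # flip the held-out switch
    explore(n - 1, flip)    # explore the others again

state = [0, 0, 0]
def flip(i):
    state[i] ^= 1
    print(state)

explore(len(state), flip)   # prints the 7 states reached after the start state
```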
I was going through a contest problem on hackerrank (link below)
https://www.hackerrank.com/contests/w13/challenges/a-super-hero
It is, as far as I know, a dynamic programming problem. I tried various approaches, but failed to clear it. It has a lengthy problem statement, but I will try to explain it as briefly as possible.
You have to clear n different levels, each containing m enemies. Each level can be cleared by defeating any one enemy of that level. Each enemy has some bullets and some power. You need as many bullets as his power to defeat him. After you defeat an enemy, you take his bullets, which can be used only at the next level. So, you have to find the minimum number of bullets required at the start to complete the game.
For more details, please see the link.
Complete solution is not necessary. Just some pointers, tips will be sufficient.
Actually this problem is not DP. It is a binary search problem. Do a binary search over the answer. For each number of bullets N, be greedy on each level. That is, out of the enemies that you can kill (i.e. those whose power is no more than your current bullets), kill the one that will leave you with the most bullets after the end of the level. Note that here you need to subtract the number of bullets required to kill the given enemy. If the initial value N is enough to complete all levels, set it as the new end of the range you are searching in. Otherwise set N as the new beginning (regular binary search approach).
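To make the idea concrete, here is a rough Python sketch of the binary search plus per-level greedy step. It is not tied to the contest's input format: `power[l][e]` and `bullets[l][e]` are hypothetical names for the power and bullet count of enemy `e` on level `l`.

```python
def can_finish(start, power, bullets):
    """Greedy check: with `start` bullets, can every level be cleared?"""
    have = start
    for lvl in range(len(power)):
        best = None
        for p, b in zip(power[lvl], bullets[lvl]):
            if p <= have:                 # an enemy we can afford to kill
                gain = b - p              # net bullets carried into the next level
                if best is None or gain > best:
                    best = gain
        if best is None:
            return False                  # no killable enemy on this level
        have += best
    return True

def min_starting_bullets(power, bullets):
    lo, hi = 0, sum(max(lvl) for lvl in power)   # hi is always sufficient
    while lo < hi:
        mid = (lo + hi) // 2
        if can_finish(mid, power, bullets):
            hi = mid                             # mid works: search the lower half
        else:
            lo = mid + 1                         # mid fails: search the upper half
    return lo
```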
I've recently seen how Windows 8 presents the icons on the dashboard in "Metro" style.
In the image above, it seems that some widgets receive 2x the width compared to the others, so that the whole list of widgets is to a certain degree balanced.
My question is whether the algorithm described here (the partition problem solution) is used for achieving this result.
Can anybody give me some hints on how to build a similar display when the widgets can span multiple lines (e.g. the "Popular Session" widget would take 2 columns and 2 lines to be displayed)?
The algorithm called linear partitioning divides a given sequence of positive numbers into k optimal sub-ranges; it can be re-stated as an optimization problem to which dynamic programming is a solution.
Your problem here does not look like an optimization problem, which means there is not much of a target function there. If tiles had weights and you needed to partition them evenly among a few pages, then it would have been an optimization problem. So the answer to the first question is NO to me.
Metro's tile arrangement seems to be much simpler, as it actually lets the user edit the list, and automatically re-arranging tiles would be annoying. So all Metro is doing is not letting you create a new row when the tile can be placed on an upper row.
Maybe the same simple technique would work for you.
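Something along these lines would be a first-fit placement; here is a tiny sketch (the row width of 4 and the tile widths are made-up example values):

```python
# First-fit tile placement: put each tile in the first row with enough free
# width, and only open a new row when it fits nowhere else.
def arrange(tile_widths, row_width=4):
    rows = []                        # each row is [used_width, [tile widths]]
    for w in tile_widths:
        for row in rows:
            if row[0] + w <= row_width:
                row[0] += w
                row[1].append(w)
                break
        else:
            rows.append([w, [w]])    # no existing row has room: start a new one
    return [tiles for _, tiles in rows]

print(arrange([2, 1, 2, 1, 1, 2]))   # -> [[2, 1, 1], [2, 1], [2]]
```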
I'm trying to devise an algorithm for a robot trying to find a flag (positioned at an unknown location) in a world containing obstacles. The robot's mission is to capture the flag and bring it to its home base (which is its starting position). At each step, the robot sees only a limited neighbourhood (it does not know in advance how the world looks), but it has unlimited memory to store already visited cells.
I'm looking for any suggestions about how to do this in an efficient manner. Especially the first part; namely getting to the flag.
A simple Breadth First Search/Depth First Search will work, albeit slowly. Be sure to prevent the bot from checking paths that visit the same square multiple times, as this will cause these algorithms to run much longer in standard cases, and indefinitely if the flag cannot be reached.
A* is the more elegant approach, especially if you know the location of the flag relative to yourself. Wikipedia, as per usual, does a decent job of explaining it. The classic heuristic to use is the Manhattan distance (number of moves assuming no obstacles) to the destination.
These algorithms are useful for the return trip - not so much the "finding the flag" part.
Edit:
These approaches involve creating objects that represent squares on your map, and creating "paths", or series of squares to hit (or steps to take). Once you build a framework for representing your squares, the problem of what kind of search to use becomes a much less daunting task.
This class will need to be able to get a list of adjacent squares and to know whether a square is traversable.
Considering that you don't have all information, try just treating unexplored tiles as traversable, and recomputing if you find they aren't.
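To illustrate, here is a rough A* sketch along those lines; the dictionary-based `known` map (cell -> traversable?) and all names in it are assumptions for the example, not code from the question. Unknown cells are optimistically treated as traversable, as suggested above.

```python
import heapq

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def manhattan(a, b):
    # Number of moves assuming no obstacles.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal, known):
    """known maps (x, y) -> True/False; cells not in the map count as free.
    Assumes the goal is not sealed off by known obstacles."""
    frontier = [(manhattan(start, goal), 0, start)]   # (f, g, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []                                  # walk back to the start
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for nxt in neighbors(cur):
            if not known.get(nxt, True):               # known obstacle: skip
                continue
            if nxt not in cost or g + 1 < cost[nxt]:
                cost[nxt] = g + 1
                came_from[nxt] = cur
                f = g + 1 + manhattan(nxt, goal)
                heapq.heappush(frontier, (f, g + 1, nxt))
    return None
```

If a step of the returned path turns out to be blocked once the robot gets close enough to see it, mark it in `known` and recompute.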
Edit:
As for searching an unknown area for an unknown object...
You can use something like Pledge's algorithm until you've found the boundaries of your space, recording all information as you go. Then go have a look at all unseen squares using your favorite drift/pathfinding algorithm. If, at any point along the way, you see the flag, stop what you're doing and use your favorite pathfinding algorithm to go home.
Part of it will be pathfinding, for example with the A* algorithm.
Part of it will be exploring. Any cell with an unknown neighbour is worth exploring. The best cells to explore are those closest to the robot and with the largest unexplored neighbourhood.
If the robot sees through walls some exploration candidates might be inaccessible and exploration might be required even if the flag is already visible.
It may be worthwhile to reevaluate the current target every time a new cell is revealed. As long as this is only done when new cells are revealed, progress will always be made.
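As a sketch of that exploration rule (simplified to "nearest first", without the tie-break on the size of the unexplored neighbourhood), assuming a `known` map of cell -> traversable, which is an assumption for this sketch rather than anything from the question:

```python
from collections import deque

def next_exploration_target(robot, known):
    """BFS from the robot through known free cells; return the nearest cell
    that still has at least one unknown neighbour, or None if fully explored."""
    def neighbors(c):
        x, y = c
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    seen, queue = {robot}, deque([robot])
    while queue:
        cell = queue.popleft()
        if any(n not in known for n in neighbors(cell)):
            return cell
        for n in neighbors(cell):
            if n not in seen and known.get(n):   # only walk through known free cells
                seen.add(n)
                queue.append(n)
    return None
```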
With a simple DFS search, at least you will find the flag :)
Well, there are two parts to this.
1) Searching for the Flag
2) Returning Home
For the searching part, I would circle the home point, moving outward every time I made a complete loop. This way, you can search every square and identify whether it is a clear spot, an obstacle, a map boundary, or the flag, and so build up a map of your environment.
Once the flag is found, you could either go back the same way or find a more direct route. If you want the more direct route, you would have to use the map you have created to find it.
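A toy sketch of that outward-circling order (obstacle handling and the actual movement between cells are left out; it just enumerates the square "rings" around home, one radius at a time):

```python
def spiral_rings(home, max_radius):
    """Yield home, then every cell on the ring at distance 1, then distance 2, ..."""
    hx, hy = home
    yield home
    for r in range(1, max_radius + 1):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if max(abs(dx), abs(dy)) == r:   # only the outer edge of the square
                    yield (hx + dx, hy + dy)

for cell in spiral_rings((0, 0), 2):
    print(cell)
```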
What you want is to find all minimal spanning trees in the viewport of the robot and then let the robot decide which MST it wants to travel.
If you meet an obstacle, you can go around it to determine its precise dimensions, and after measuring it, return to the previous course.
With no obstacles in the range of sight, you can just try to head in the direction of the nearest unchecked area.
It may not seem like the fastest way, but I think it is a good place to start.
I think the approach would be to construct the graph as the robot travels. There will be a function that returns to the robot the state of a particular grid cell. This is needed since the robot will not know the state of the grid in advance.
You can apply heuristics in the search so the probability of reaching the flag is increased.
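As a tiny sketch of that idea, `sense(cell)` below is a hypothetical callback that returns the true state of a cell (say 'free', 'wall' or 'flag') once the robot is close enough to observe it:

```python
def reveal_neighbourhood(robot, known, sense, sight=1):
    """Build the graph as the robot travels: record every newly visible cell."""
    x, y = robot
    for dx in range(-sight, sight + 1):
        for dy in range(-sight, sight + 1):
            cell = (x + dx, y + dy)
            if cell not in known:        # only query cells we haven't seen before
                known[cell] = sense(cell)
    return known
```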
As many have mentioned, A* is good for global planning if you know where you are and where your goal is. But if you don't have this global knowledge, there is a class of algorithms called "bug" algorithms that you should look into.
As for exploration, if you want to find the flag as fast as possible, then depending on how much of the local neighborhood your bot can see, you should try not to have these neighborhoods overlap. For example, if your bot can see one cell around it in every direction, you should explore every third column (columns 1, 4, 7, etc.). But if the bot can only see the cell it is currently occupying, then the best thing you can do is not go back over cells you have already visited.
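A toy sketch of that "every third column" pattern (the lateral moves between columns and all obstacle handling are omitted; the width, height, and sight radius are made-up parameters):

```python
def sweep_columns(width, height, sight=1):
    """Visit columns 0, 2*sight+1, 4*sight+2, ... alternating up and down,
    so the sight ranges of neighbouring sweeps never overlap."""
    path = []
    going_down = True
    for col in range(0, width, 2 * sight + 1):
        rows = range(height) if going_down else range(height - 1, -1, -1)
        path.extend((col, row) for row in rows)
        going_down = not going_down
    return path

print(sweep_columns(7, 3))   # sweeps columns 0, 3 and 6
```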