Optimised cost to travel and collect units from factories - algorithm

Problem statement
There are 'n' factories; factory i is situated at arr[i]. The entry is at '0' and the exit is at 'exit-pos'. The time to move from one place to another is the distance between them.
Each factory takes 'processing-time' (the same for all) to process an order. You have to start from '0' and exit at 'exit-pos'.
You also have to produce 'n' units; each factory can produce 1 unit only. Along the path you need to visit each factory and give it the order to produce; it then takes 'processing-time' to produce the unit. You can wait that time for the unit to be produced and then move ahead, or you can visit other factories to give them orders and later revisit the factory to pick up its unit.
Question: you need to tell the minimum time taken to produce all 'n' units and also collect them. Your journey starts at '0' and ends at 'exit-pos'.
Input:
number of units required to produce = number of factories = n
the locations of the factories in an array = arr
the exit position = exit-pos
the processing time at each factory (the same for every factory) = processing-time
My approach
I had this question during a test. The test is submitted, but unfortunately no answers were given. During the test I tried a recursive approach over the state ('items-collected', 'time', 'factory-position'). It failed; it got stuck looping over time states and ran out of time.
If you have any suggestions, or questions regarding the problem's clarity, feel free to comment.
Thanks
Update: 0 < arr[i] (factory position) < exit-pos

Solve it by combining a strategy with A* search
I've uploaded the solution to a GitHub repo.
You can see a live demo here.
Here are the contents of the README with a screenshot:
Solution summary
Since 0 < arr[i] < exit-pos, we always start at the first factory and finish at the last; it's trivial to add the remaining travel time from '0' to the first factory and from the last factory to the exit.
The solution includes three different algorithms, which can be selected via a drop-down:
1. Dijkstra's algorithm (worst)
Explores all states and finds the minimum time to a final state:
A state consists of the current position and the state of all factories (How long until product is done? Was it picked up?).
From each state you can either wait until production is finished, move left (towards the start), or move right (towards the exit).
Obviously uses Dijkstra's algorithm.
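For illustration, one possible encoding of such a state in Python (my naming; the repo's actual representation may differ). Being a tuple, it is hashable and can go straight into Dijkstra's priority queue and visited set:
from typing import NamedTuple, Tuple

class State(NamedTuple):
    position: int               # current location on the line
    remaining: Tuple[int, ...]  # per factory: time left until its unit is ready
                                # (full processing time if not yet ordered, 0 if ready)
    picked: Tuple[bool, ...]    # per factory: has the unit been collected?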
2. A* search algorithm
The A* algorithm is basically the same as Dijkstra's algorithm, but uses a heuristic to favor more promising nodes.
The heuristic used calculates the minimum remaining travel distance, ignoring waiting: the time to reach the leftmost unfinished factory and from there to the exit.
The improvement is significant (even for small inputs).
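For example, a minimal sketch of that heuristic in Python (the names are mine, not the repo's):
def heuristic(position, factories, picked, exit_pos):
    # Lower bound on the remaining time: ignore all waiting and just walk
    # to the leftmost factory that still needs a visit, then on to the exit.
    pending = [f for f, done in zip(factories, picked) if not done]
    if not pending:
        return abs(exit_pos - position)
    leftmost = min(pending)
    return abs(position - leftmost) + (exit_pos - leftmost)
Since it never overestimates the true remaining time, A* with this heuristic still returns the optimal answer.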
3. Specialized A* (best)
This version uses the same search as the previous one, but restricts which new states are explored:
It follows the strategy of finishing everything to your left once you turn from going right to going left, and of waiting only at the leftmost unfinished factory.
1. You can move left only when there is an unfinished factory to your left.
2. You can wait only if you are at the leftmost unfinished factory.
3. You can move right only if you're moving into untouched territory, or everything that is not to your right is finished.
4. If you moved left last turn, keep moving left as long as you can.
5. If you moved left last turn and cannot move left anymore, you may only wait.
Why does this work? (The numbers refer to the rules above.)
1. Moving left does not make any sense otherwise. Just don't.
2. You'll have to come back to the leftmost unfinished factory anyway, and on the way back right you'll cross all factories to its right. So we choose to wait only at the very left, because waiting elsewhere can be transformed into waiting here.
3. Moving back right into touched territory won't be faster, since you'd have to come back to the very left again (see 2.). Either go farther right initially, or turn back right only once you're done on your left.
4. You chose to go back when turning around; the other options are explored by moving farther right before turning.
5. Same as 4.
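As a rough illustration, the five rules could be encoded as a move filter like this (a Python reconstruction of my own, not the repo's actual code; `unfinished` is the sorted list of positions of factories whose units haven't been collected, and `max_visited` is the rightmost point visited so far):
def allowed_moves(pos, last_move, unfinished, max_visited):
    if not unfinished:
        return ["right"]              # everything collected: head for the exit
    leftmost = unfinished[0]
    if last_move == "left":
        if leftmost < pos:
            return ["left"]           # rule 4: keep moving left while you can
        return ["wait"]               # rule 5: cannot move left anymore, only wait
    moves = []
    if leftmost < pos:
        moves.append("left")          # rule 1: an unfinished factory lies to the left
    if pos == leftmost:
        moves.append("wait")          # rule 2: at the leftmost unfinished factory
    if pos >= max_visited or leftmost > pos:
        moves.append("right")         # rule 3: untouched territory, or done on the left
    return moves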
Performance
These are rough timings for this input:
n = 14
arr = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]
processingTime = 15
Algorithm   Time
Dijkstra    ~ 3000 ms
A*          ~  100 ms
Strategy    ~   10 ms
How to get to this solution
First I chose to visualize the problem and solve it using Dijkstra's algorithm.
After inspecting the paths for various inputs, I noticed the pattern described in the "Strategy" solution: going right, turning around, and then completing that chunk of factories.
Possible Improvements
One could take the strategy and chunk the factories array.
Each chunk [a1, a2, ..., an] takes travelTime + waitTime = 3 * (an - a1) + max(0, processingTime - 2 * (an - a1)) time to finish.
And travelling between chunks [a1, ..., an] and [b1, ..., bm] takes b1 - an time.
Now one has to determine the chunking that minimizes the total time, as sketched below.
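A sketch of that chunk DP in Python (my own code, using the chunk formula above; it assumes arr is sorted and adds the walks from '0' to the first factory and from the last factory to 'exit-pos'):
from functools import lru_cache

def min_total_time(arr, processing_time, exit_pos):
    # arr: sorted factory positions with 0 < arr[i] < exit_pos
    n = len(arr)

    def chunk_time(i, j):
        # Finish factories arr[i..j] as one chunk: order everything on the way
        # right, walk back left, wait if the first unit isn't ready, sweep right.
        span = arr[j] - arr[i]
        return 3 * span + max(0, processing_time - 2 * span)

    @lru_cache(maxsize=None)
    def best(i):
        # Minimum time to finish factories arr[i:], starting at arr[i] and
        # ending at the last factory of the final chunk.
        if i == n:
            return 0
        return min(
            chunk_time(i, j)
            + (arr[j + 1] - arr[j] if j + 1 < n else 0)  # travel to the next chunk
            + best(j + 1)
            for j in range(i, n)
        )

    return arr[0] + best(0) + (exit_pos - arr[-1])  # add entry and exit walks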
I had too much fun working this out.
And I can only recommend visualizing things.
One note to self: think before you code. Just take a piece of paper and draw what you would do without a computer.

Related

Solving a travelling salesman problem to maximize gain in minimum time

Team,
I need suggestions on how to solve the problem below.
There are n places (for example, say 10 places). The time taken from any one place to another is known. On reaching a particular place, a known reward is given in rupees (e.g., if I travel from place 1 to place 2, I get 100 rupees; travelling from place 2 to place 3 will fetch me 50 rupees, etc.). Also, sometimes a particular place is unavailable to travel to, and this changes with time. At all time instances, it is known which places can be travelled to, the reward fetched from each place, and the time taken to travel from one place to another. This is an ongoing process: after you reach place A and earn 100 rupees, you might travel to place B and fetch another 100 rupees; then place A may again fetch you, say, 50 rupees if you travel from B back to A.
The problem statement is:
A path should be followed over time (A to B, B to C, C to B, B to A, etc.) so that I always have the maximum rupees for a given time. Thus, at the end of 1 month, I should have followed the path that fetches the maximum amount among all possibilities.
We already know that in the travelling salesman problem it takes O(N!) to calculate the best route for the month if there are no changes. Because of the unknown changes that can happen, the best way is to use a greedy algorithm: every time you come to a new place, you calculate where you can get the most rupees in the least amount of time. This takes O(N*k), where k is the number of times you move between places in a month.
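For illustration, the greedy step might look like this sketch (hypothetical helper names; `available`, `reward`, and `travel_time` stand for the data known at the current instant, and "most rupees in the least time" is scored here as reward per unit of travel time):
def next_move(current, time_left, places, available, reward, travel_time):
    # Pick the reachable place that currently offers the best reward per
    # unit of travel time; return None if nothing fits in the time left.
    best_place, best_rate = None, float("-inf")
    for p in places:
        if p == current or not available(p):
            continue
        t = travel_time(current, p)
        if 0 < t <= time_left and reward(p) / t > best_rate:
            best_place, best_rate = p, reward(p) / t
    return best_place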
I'm not sure how this problem is related to travelling salesman -- I understood the latter as having the restriction of at least visiting all the places once.
Assuming we have all of the time instances and their related information ahead of our calculation: if we work backwards from each place we imagine ending at, the choices for the previously visited location dictate the possible times it took to reach the last place and the possible earnings we could have collected. From those choices we would clearly pick the best reward, because it's our last choice. Apply this idea recursively from there, until we reach the start of the month. If we run this recursion from each possible ending place, we can reuse states we've seen before; for example, if we reached place A at time T as one of the options when calculating backwards from B, and then we reach A again at time T when calculating a path that started at C, we can reuse the record for the first state. The search space is O(N*T), but practically it varies with the input.
Something like this? (This assumes we cannot wait in any one place; otherwise, the solution would be better coded bottom-up, where we can try all place + time states.) Return the best of running f with the same memo map on all possible ending states.
def get_travel_time(place_a, place_b):
    """Return the travel time from place_a to place_b."""

def get_neighbours(place):
    """Return the places from which we can travel to `place`."""

def get_reward(place, time):
    """Return the reward awarded at `place` at time `time`."""

def f(place, time, memo):
    # Best total reward of a path that starts at time 0 and is at `place`
    # exactly at `time`.
    if time == 0:
        return 0
    key = (place, time)
    if key in memo:
        return memo[key]
    current_reward = get_reward(place, time)
    best = float("-inf")
    for neighbour in get_neighbours(place):
        previous_time = time - get_travel_time(neighbour, place)
        if previous_time >= 0:
            best = max(best, current_reward + f(neighbour, previous_time, memo))
    memo[key] = best
    return best

Dynamic Programming - Jumping jacks

Can someone help me solve the problem below using a DP technique?
No need for code; the idea should be enough.
Marvel is coming up with a new superhero named Jumping Jack. The co-creator of this superhero is a mathematician, and he adds a mathematical touch to the character's powers.
So, one of Jumping Jack's most prominent powers is jumping distances. But, this superpower comes with certain restrictions.
Jumping Jack can only jump —
To the number that is one kilometre less than the current distance. For example, if he is 12 km away from the destination, he won't be able to jump directly to it, since he can only jump to a location 11 km away.
To a distance that is half the current distance. For example, if Jumping Jack is 12 km away from the destination, again, he won't be able to jump directly to it since he can only jump to a location 6 km away.
To a distance that is ⅓rd the current distance. For example, if Jumping Jack is 12 km away from the destination, once more, he won't be able to jump directly to it since he can only jump to a location 4 km away.
So, you need to help the superhero develop an algorithm to reach the destination in the minimum number of steps. The destination is defined as the place where the distance becomes 1. Jumping Jack should cover the last 1 km running! Also, he can only jump to a destination that is an integer distance away from the main destination. For example, if he is at 10 km, by jumping 1/3rd the distance, he cannot reach 10/3rd the distance. He has to either jump to 5 or 9.
So, you have to find the minimum number of jumps required to reach a destination. For instance, if the destination is 10 km away, there are two ways to reach it:
10 -> 5 -> 4 -> 2 -> 1 (four jumps)
10 -> 9 -> 3 -> 1 (three jumps)
The minimum of these is three, so the superhero takes a minimum of three jumps to reach the point.
You should keep the following 2 points in mind when solving any dynamic programming problem:
Optimal substructure (to find the minimum number of jumps)
Overlapping subproblems (if you encounter the same subproblem again, don't solve it; rather, use the already-computed result - yes, you'll have to store the computed results of subproblems for future reference)
Always try to devise a recursive solution to the problem at hand first (don't go ahead and look at the recursive solution below right away; rather, first try it yourself):
calculateMinJumps(int currentDistance) {
    if (currentDistance == 1) {
        return 0; // destination reached - no more jumps needed
    } else {
        // find all the places you can jump to; take jumps2 and jumps3
        // only if they are valid
        int jumps1 = calculateMinJumps(currentDistance - 1) + 1;
        int jumps2 = Integer.MAX_VALUE;
        int jumps3 = Integer.MAX_VALUE;
        if (currentDistance % 2 == 0)
            jumps2 = calculateMinJumps(currentDistance / 2) + 1;
        if (currentDistance % 3 == 0)
            jumps3 = calculateMinJumps(currentDistance / 3) + 1;
        return minimum(jumps1, jumps2, jumps3);
    }
}
Now that we have devised a recursive solution, the next step is to store the subproblem solutions in an array so that you can reuse them and avoid recomputation. You can simply take a 1-D integer array and keep storing results in it.
Keep in mind that if you go top-down it is called memoization, and if you go bottom-up it is called dynamic programming. Have a look at this to see the exact difference between the two approaches.
Once you have a recursive solution in your mind, you can now think of constructing a bottom-up solution, or a top-down solution.
In the bottom-up strategy, you fill out the base cases first (Jumping Jack covers the last 1 km running!) and then build upon them to reach the final required solution.
Now, I'm not giving you the complete strategy, as that will be a task for you to carry out. And you will indeed find the solution. :) - As requested, the idea should be enough.
Firstly, thinking about the coin changing problem may help you understand yours; they are much the same:
Coin change - DP
Secondly, if you know that your problem has a DP solution, you can usually follow 4 steps to solve it. Of course, you can omit one or all of the first 3 steps.
Find a backtracking solution. (omitted here)
Find the recursion formula for your problem, based on the backtracking solution. (described later)
Write recursive code based on the recursion formula. (omitted)
Write iterative code based on step 3. (omitted)
Finally, for your question, the formula is not hard to figure out:
minStepFor(distance_N) = Math.min(minStepFor(distance_N - 1) + 1,
                                  minStepFor(distance_N / 2) + 1,
                                  minStepFor(distance_N / 3) + 1)
Just imagine Jack standing at the distance-N point; he has at most three options for his first move: go to the N-1 point, the N/2 point, or the N/3 point (if N/2 or N/3 is not an integer, his choices are reduced).
For each of his choices, the minimum number of steps is minStepFor(remaining distance) + 1, since he has made 1 move and will surely make the minimum number of steps over the remaining distance. The remaining distance for each choice is distance_N - 1, distance_N / 2, and distance_N / 3 respectively.
So that's the way to understand the formula. With it, it is not hard to write a recursive solution.
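For example, a minimal memoized version of that recursion in Python (a sketch of my own; for large N, a bottom-up loop like the one in the next answer avoids recursion-depth limits):
from functools import lru_cache

@lru_cache(maxsize=None)
def min_steps(n):
    # Minimum number of jumps to get from distance n down to distance 1.
    if n == 1:
        return 0
    best = min_steps(n - 1) + 1
    if n % 2 == 0:
        best = min(best, min_steps(n // 2) + 1)
    if n % 3 == 0:
        best = min(best, min_steps(n // 3) + 1)
    return best

print(min_steps(10))  # prints 3, matching the 10 -> 9 -> 3 -> 1 example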
Consider f[1] = 0, since no jumps are required when Jumping Jack is 1 km away.
Using this base value, solve f[2] ... f[n] in the following manner:
for (int i = 2; i <= n; i++) {
    if (i % 2 == 0 && i % 3 == 0) {
        f[i] = Math.min(Math.min(f[i-1] + 1, f[i/2] + 1), f[i/3] + 1);
    } else if (i % 2 == 0) {
        f[i] = Math.min(f[i-1] + 1, f[i/2] + 1);
    } else if (i % 3 == 0) {
        f[i] = Math.min(f[i-1] + 1, f[i/3] + 1);
    } else {
        f[i] = f[i-1] + 1;
    }
}
return f[n];
You don't need to solve any subproblem more than once!

Activity selection with two resources

Given n activities with start times (Si) and end times (Fi), and 2 resources:
pick the activities such that the maximum number of activities is finished.
My ideas
I tried to solve it with DP but couldn't figure out anything, so I'm trying greedy.
Approach 1: fill resource 1 first greedily, then resource 2 next greedily (least end time first). But this will not work for this case: T1(1,4), T2(5,10), T3(6,12), T4(11,15).
Approach 2: select tasks greedily and assign them in round-robin fashion.
This will also not work.
Can anyone please help me figure this out?
No need to use DP at all; a greedy solution suffices, though it is slightly more complicated than in the 1-resource problem.
Here, we first sort the intervals by ending time, earliest first. Then, put two "sentinel" intervals in the resources, both with ending time -∞. Then keep grabbing the interval x with the lowest x.end, and follow these rules:
If x.start is before both of the two ending times in our two resources, skip x and don't assign it, since x cannot fit.
Otherwise, have x overwrite the resource whose endpoint is latest and still before x.start.
The greedy choice in rule 2 is the key point here: we want to replace the resource that ends latest, since that maximizes the "space" we keep in the other resource to accommodate a future interval with an early start time, making it strictly more likely that the future interval will fit.
Let's look at the example in the question, with intervals (1,4), (5,10), (6,12), and (11,18), already in sorted order. We begin with both resources holding (-∞,-∞) as "sentinel" intervals. Now take the first interval (1,4) and see that it fits, so resource 1 has (1,4) and resource 2 has (-∞,-∞). Next, take (5,10), which can fit in both resources; we choose resource 1, because it ends the latest, and now resource 1 has (5,10). Next, we take (6,12), which only fits in resource 2, so resource 2 has (6,12). Finally, take (11,18), which fits in resource 1.
Hence, we have been able to fit all four intervals using our Greedy strategy.
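A compact sketch of those rules in Python (my own code, following the description above):
def max_two_resource_activities(intervals):
    # intervals: list of (start, end) pairs
    ends = [float("-inf"), float("-inf")]   # sentinel end times for the two resources
    count = 0
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # earliest end first
        # resources whose current interval ends at or before `start`
        free = [i for i in (0, 1) if ends[i] <= start]
        if not free:
            continue                        # rule 1: x cannot fit anywhere
        # rule 2: overwrite the free resource that ends latest
        i = max(free, key=lambda i: ends[i])
        ends[i] = end
        count += 1
    return count

print(max_two_resource_activities([(1, 4), (5, 10), (6, 12), (11, 18)]))  # 4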
The classic (single-resource) activity selection problem can be solved by the greedy iterative activity-selector algorithm.
The basic idea is to always pick the next activity whose finish time is the least among the remaining activities and whose start time is greater than or equal to the finish time of the previously selected activity. We can sort the activities by finishing time so that we always consider the activity with the minimum finishing time next.
See more on Wikipedia.

Why is the state space the power set of the grid's dimensions? (edX CS 188.1x Artificial Intelligence)

I'm self-learning with the edX course CS 188.1x Artificial Intelligence. Since the course concluded a year ago, it is in "archive mode" and there are no teaching staff to help with the questions. This also means I get no credit for finishing the course, so hopefully asking for help with a "homework" question here is okay.
In the first homework assignment the following question is asked:
Question 9: Hive Minds - Lost at Night. It is night and you control a single insect. You know the maze, but you do not know what square the insect will start in. You must pose a search problem whose solution is an all-purpose sequence of actions such that, after executing those actions, the insect will be on the exit square, regardless of initial position. The insect executes the actions mindlessly and does not know whether its moves succeed: if it uses an action which would move it in a blocked direction, it will stay where it is. For example, in the maze below, moving right twice guarantees that the insect will be at the exit regardless of its starting position.
It then asks for the size of the state space. The answer is given as 2^(MN), where M and N are the horizontal and vertical dimensions of the maze. Why is the answer the size of the power set of the MN squares? In my mind, the bug can only be in one square at the beginning, and we only have one bug, so I know the number of start states is MN. But the number of start states != the state space, and that is where I am confused.
FYI - the cost per move is 1, and the bug can only move 1 square left, right, up, or down at a time. The goal is to get to the X (goal square).
Okay - I think I got it.
The set of all subsets (the power set*) is exactly the right way to think about this: the state space is the set of all states.
1) Definition of state:
"A state contains all of the information necessary to predict the effects of an action and to
determine if it is a goal state." (http://artint.info/html/ArtInt_48.html)
The actions in this scenario are simple: left, right, up, down. They are the possible movements a bug could make.
2) Definition of solution:
Solutions are sequences of actions that lead to the goal test being passed.
If we only permitted MN states, one for each possible starting position of the bug, then we would have a state space whose solutions were valid only for individual starting positions. But the solution must be valid regardless of the initial state of the bug. This means the solution must work for scenarios in which the bug could occupy any of the MN available squares.
In other words, the solutions must be valid for each and every subset (combination) of possible starting squares, which yields the power set of the MN squares, of size 2^(MN).
Why? Because solutions that are valid for a given start state may not be valid for all other start states, and the problem requires us to find solutions that are valid for all start states. This is also why the state space is much larger than MN, even though in reality our bug only occupies 1 of the MN positions upon initialization. Just because a solution (sequence of moves) works when the bug starts at (1, 1) doesn't mean the same sequence of moves will also work when the bug starts at (2, 1).
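To make this concrete, here is a small illustration of my own (not from the course) of how one action maps a belief state, the set of squares the bug might currently occupy, to the next one. The search runs over these sets, which is where the 2^(MN) count comes from:
def apply_action(belief, action, walls, M, N):
    # belief: frozenset of (x, y) squares the bug might be in
    # action: one of "left", "right", "up", "down"
    dx, dy = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}[action]
    result = set()
    for (x, y) in belief:
        nx, ny = x + dx, y + dy
        # a blocked move leaves the bug where it is
        if 0 <= nx < M and 0 <= ny < N and (nx, ny) not in walls:
            result.add((nx, ny))
        else:
            result.add((x, y))
    return frozenset(result)

def is_goal(belief, exit_square):
    # goal test: the belief state has collapsed onto the exit square
    return belief == frozenset([exit_square])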
Bonus question: why isn't the state space just 1, the full set where each of the MN squares 'has' a bug (and bugs are permitted to move on top of each other)?
I was tempted to say that just because a sequence of moves reaches the goal when the bug could start at all MN possible positions, that doesn't mean the same sequence of moves reaches the goal when the bug starts at (3, 2), or at any of MN - 1 or MN - 2, etc., possible positions. But by definition it must (a solution over all starting points must be a solution over every subset of starting points).
So I think the reason you evaluate starting states other than "all boxes have a bug" is because the solution generated by evaluating only that state may not be optimal. And in fact this interpretation is borne out by what the homework gives as admissible heuristics for this problem:
The maximum of Manhattan distances to the goal from each possible location the insect could be in.
OR
The minimum of Manhattan distances to the goal from each possible location the insect could be in.
The case where we just have one starting state with bugs on all the boxes (with the magic ability to be on top of each other) is the relaxed problem we use to define our heuristic. Again, by the definition of admissibility, since the heuristic must not overestimate the true (arc) cost of an action, and since arc cost is given by Manhattan distance, both of the heuristics above are admissible. (The maximum case is admissible because each possible location for the bug is, in fact, possible - thus the max cost is possible.)
*If you don't know what power set means, all you need to know is that the power set is the set of all subsets of a given set. Its size is 2^(size of the set).
In other words, if I have a set of three balls {red, blue, green} and I want to know how many different subsets it has, I can calculate it as follows. A subset either has a given element in it (1) or it doesn't (0). So {0, 0, 1} would be the subset containing only the green ball, {1, 1, 1} would be the subset of all the balls (yes, technically this is a subset), and {0, 0, 0} would be the subset with none of the balls (again, technically a subset). So we see that the number of subsets is 2^3 = 8. Or, in our problem, 2^(MN).

Card Flipper Analysis

I have an exam in my Data Structures class soon. To prepare, I'm looking through some algorithm-based problems I found on the web and ran into one that I can't seem to get.
You walk into a room and see a row of n cards. Each one has a number x[i] written on it, where i ranges from 1 to n. However, initially all the cards are face down. Your goal is to find a local minimum: that is, a card i whose number is less than or equal to those of its neighbors, x[i-1] >= x[i] <= x[i+1]. The first and last cards can also be local minima; they only have one neighbor to compare to. Clearly there can be many local minima, but you are only responsible for finding one of them.
The only solution I can come up with is basically turning them all over and finding any local minimum. However, the challenge is to do this while turning over only O(log n) cards.
Essentially, if you see a card "7", it is a local minimum if the card on its left is a "10" and the card on its right is a "9". How is this done in O(log n) flips?
Any help appreciated, thanks.
Binary search is the way to go. Here is a rough sketch of how you can do it:
Look at the first and last elements. If either is a min, return it.
Look at the middle element. If it is a min, return it. Otherwise, either its left neighbor or its right neighbor must be smaller than it; recurse on that half.
So the idea is that if we know the left neighbor of the center element is smaller than it, then the left half must contain a local min somewhere, so we can safely recurse there. The same goes for the right half.
Said differently, the only way half of the data could fail to contain a local min is if its values were monotonic or went up and back down, neither of which can happen given what we know about its endpoint values.
Also, the runtime is clearly O(log n), because each step takes constant time and we perform O(log n) steps, cutting the data in half each time.
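Here is one common way to realize that sketch in Python (my own code; each index access into `cards` counts as turning over one card). This variant folds the endpoint checks into the loop invariant: a local min always lies within [lo, hi].
def find_local_min(cards):
    lo, hi = 0, len(cards) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cards[mid] > cards[mid + 1]:
            lo = mid + 1   # values descend at mid, so a local min lies to the right
        else:
            hi = mid       # cards[mid] <= cards[mid+1], so a local min lies in [lo, mid]
    return lo

print(find_local_min([10, 7, 9, 3, 5, 1, 2]))  # 1 (the 7, flanked by 10 and 9)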
