Elevator algorithm for minimum distance

I have a building with a single elevator and I need to find an algorithm for it. We get a list of requests of the form {i->j}, where i is the floor a resident wants to take the elevator from and j is the floor they want to get off at.
Any number of people can use the elevator at the same time, and it doesn't matter how long anyone stays inside. The elevator starts from the first floor.
I checked a little on the web and found the "elevator algorithm", but it doesn't really help me. It says that I should go all the way up and then all the way down. But consider when one resident wants to go from 1 to 100 and another resident wants to go from 50 to 49. Using that algorithm, it takes a distance of 151 floors. If I instead follow the path 1->50->49->100, it takes only 102 floors, which is better.
What algorithm should I use?

Here's one way to formulate this problem as a time-indexed integer program. (It might seem like overkill to generate all the constraints, but it is guaranteed to produce the optimal solution.)
Let's say the elevator takes 1 unit of time to go from floor F to F+1 or to F-1.
The insight: at any time t there is only one decision to be made: whether to go up or down. That is the decision variable for our problem: DIR_t = +1 if the elevator moves up at time t, -1 otherwise.
We want to minimize the time when all the passengers reach their destination.
This table makes it clearer
Time  FLOOR_t  DIR_t
1     1        +1
2     2        +1
3     3        +1
4     4        +1
...   ...      ...
49    49       +1
50    50       -1
51    49       +1
52    50       +1
...   ...      ...
101   99       +1
102   100      NA
Now, let's bring in the passengers. There are P passengers, and each one wants to go from their starting floor SF to their ending floor (destination) EF.
So we are given (SF_p, EF_p) for each passenger p.
Constraints
We know that the floor the elevator is on at time t is
F_t = F_{t-1} + DIR_{t-1}
(F_0 = 0, DIR_0 = 1, F_1 = 1, just to start things off.)
Now, let ST_p be the time instant when passenger p Starts their elevator journey. Let ET_p be the time instant when passenger p ends their elevator journey.
Note that SF and EF are input parameters given to us, but ST and ET are variables that the IP will set when solving. That is, the floors are given to us, we have to come up with the times.
ST_p = t if F_t = SF_p # whenever the elevator comes to a passenger's starting floor, their journey starts.
ET_p = t if F_t = EF_p AND ST_p > 0 (a passenger cannot end their journey before it commenced.)
This can be enforced by introducing new 0/1 indicator variables.
ET_p > ST_p # you can only get off after you got on
Finally, let's introduce one number T which is the time when the entire set of trips is done. It is the max of all ET's for each p. This is what needs to be minimized.
T > ET_p for all p # we want to find the time when the last passenger gets off.
Formulation
Putting it all together:
Min T
T > ET_p for all p
F_t = F_{t-1} + DIR_{t-1}
ET_p > ST_p # you can only get off after you got on
ST_p = t if F_t = SF_p # whenever the elevator comes to a passenger's starting floor, their journey starts
ET_p = t if F_t = EF_p AND ST_p > 0
ET_p >= 1 # everyone should end their journey; otherwise the model will return 0 as the objective value
DIR_t in {+1, -1} # can be enforced with binary variables if needed
Now after solving this IP problem, the exact trip can be traced using the values of each DIR_t for each t.
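For concreteness, here is a minimal sketch of this formulation in Python with the PuLP modeling library (the use of PuLP, the fixed horizon H, and the big-M indicator constraints are my assumptions, not part of the answer above). It models DIR_t with a binary variable and lets the solver pick the boarding and alighting times among the instants when the elevator is at the right floor:

# Sketch of the time-indexed IP above using PuLP (assumed installed: pip install pulp).
# The horizon H and the big-M constant are assumptions; pick H large enough (e.g. 3 * highest floor).
import pulp

def solve_elevator(trips, num_floors, H):
    M = num_floors + H                     # big-M constant
    prob = pulp.LpProblem("elevator", pulp.LpMinimize)

    # b[t] = 1 if the elevator moves up between t and t+1, 0 if it moves down (DIR_t = 2*b[t] - 1)
    b = [pulp.LpVariable(f"b_{t}", cat="Binary") for t in range(1, H)]
    F = [pulp.LpVariable(f"F_{t}", lowBound=1, upBound=num_floors, cat="Integer")
         for t in range(1, H + 1)]
    prob += F[0] == 1                      # F_1 = 1: start on the first floor
    for t in range(1, H):
        prob += F[t] == F[t - 1] + 2 * b[t - 1] - 1   # F_{t+1} = F_t + DIR_t

    T = pulp.LpVariable("T", lowBound=1, cat="Integer")
    prob += T                              # objective: minimize the time the last passenger gets off

    for p, (SF, EF) in enumerate(trips):
        # y[t] = 1 iff passenger p boards at time t+1; z[t] = 1 iff they alight at time t+1
        y = [pulp.LpVariable(f"y_{p}_{t}", cat="Binary") for t in range(H)]
        z = [pulp.LpVariable(f"z_{p}_{t}", cat="Binary") for t in range(H)]
        prob += pulp.lpSum(y) == 1
        prob += pulp.lpSum(z) == 1
        for t in range(H):
            # boarding/alighting only when the elevator is at the right floor (big-M indicators)
            prob += F[t] - SF <= M * (1 - y[t])
            prob += SF - F[t] <= M * (1 - y[t])
            prob += F[t] - EF <= M * (1 - z[t])
            prob += EF - F[t] <= M * (1 - z[t])
        ST = pulp.lpSum((t + 1) * y[t] for t in range(H))
        ET = pulp.lpSum((t + 1) * z[t] for t in range(H))
        prob += ET >= ST + 1               # you can only get off after you got on
        prob += T >= ET                    # T bounds every passenger's end time

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    # Trace the trip from the floor values at times 1..T
    return [int(pulp.value(v)) for v in F[:int(pulp.value(T))]]

# Example: the 1->100 and 50->49 trips from the question (small H keeps the solve manageable).
# print(solve_elevator([(1, 100), (50, 49)], num_floors=100, H=110))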

There's a polynomial-time dynamic program whose running time does not depend on the number of floors. If we pick up passengers greedily and make them wait, then the relevant state is the interval of floors that the elevator has visited (hence the passengers picked up), the floor on which the elevator most recently picked up or dropped off, and two optional values: the lowest floor it is obligated to visit for the purpose of dropping off passengers currently inside, and the highest. All of this state can be described by the identities of five passengers plus a constant number of bits.
I'm quite sure that there is room for improvement here.

Your question mirrors disk-head scheduling algorithms.
Check out shortest seek time first (SSTF) vs. SCAN, C-SCAN, etc.
There are cases where SSTF wins, but what if the request were 50 to 10, and you also had 2 to 100, 3 to 100, 4 to 100, 5 to 100, 6 to 100, etc.? You can see that you add the overhead to all of the other people. Also, if incoming requests keep having a smaller seek time, starvation can occur (similar to process scheduling).
In your case, it really depends on whether the requests are static or dynamic. If you want to minimize variance, go with SCAN/C-SCAN, etc.
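To make the comparison concrete, here is a small sketch in Python that reports when each requested stop gets serviced under SSTF versus SCAN; the stop positions and starting point are made-up, and trips are reduced to single stop positions as in disk scheduling:

# Compare when each requested position gets serviced under SSTF vs SCAN.
# The request values and the start position are illustrative assumptions.

def sstf_order(start, requests):
    """Shortest seek time first: always go to the nearest pending request."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan_order(start, requests):
    """SCAN (the classic elevator algorithm): sweep up first, then sweep down."""
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + down

def service_times(start, order):
    """Distance travelled by the time each position in the visiting order is reached."""
    pos, travelled, times = start, 0, []
    for r in order:
        travelled += abs(r - pos)
        pos = r
        times.append((r, travelled))
    return times

requests = [10, 100]   # one rider heading down to 10, the rest heading up to 100
print("SSTF:", service_times(50, sstf_order(50, requests)))
print("SCAN:", service_times(50, scan_order(50, requests)))

Under SSTF the rider bound for 100 waits 130 units of travel instead of 50; that is the overhead the nearby request imposes on everyone else.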

In the comments to C.B.'s answer, the OP comments: "the requests are static. in the beginning i get the full list." I would welcome counterexamples and/or other feedback, since it seems to me that if we are given all trips in advance, the problem can be drastically reduced if we consider the following:
Since the elevator has an unlimited capacity, any trips going up that end lower than the highest floor we will visit are irrelevant to our calculation. Since we are guaranteed to pass all those pickups and dropoffs on the way to the highest point, we can place them in our schedule after considering the descending trips.
Any trips 'contained' in other trips of the same direction are also irrelevant since we will pass those pickups and dropoffs during the 'outer-most' trips, and may be appropriately scheduled after considering those.
Any overlapping descending trips may be combined for a reason soon to be apparent.
Any descending trips occur either before or after the highest point is reached (excluding the highest floor reached being a pickup). The optimal schedule for all descending trips that we've determined to occur before the highest point (considering only 'outer-container' types and two or more overlapping trips as one trip) is one-by-one as we ascend, since we are on the way up anyway.
How do we determine which descending trips should occur after the highest point?
We conduct our calculation in reference to one point, TOP. Let's call the trip that includes the highest floor reached H and the highest floor reached HFR. If HFR is a pickup, H is descending and TOP = H_dropoff. If HFR is a dropoff, H is ascending and TOP = HFR.
The descending trips that should be scheduled after the highest floor to be visited are all members of the largest group of adjacent descending trips (considering only 'outer-container' types and two or more overlapping trips as one trip) that we can gather, starting from the next lower descending trip after TOP and continuing downward, where their combined individual distances, doubled, is greater than the total distance from TOP to their last dropoff. That is, where (D1 + D2 + D3...+ Dn) * 2 > TOP - Dn_dropoff
Here's a crude attempt in Haskell:
import Data.List (sort,sortBy)

trips = [(101,100),(50,49),(25,19),(99,97),(95,93),(30,20),(35,70),(28,25)]

isDescending (a,a') = a > a'

areDescending a b = isDescending a && isDescending b

isContained aa@(a,a') bb@(b,b') = areDescending aa bb && a < b && a' > b'

extends aa@(a,a') bb@(b,b') = areDescending aa bb && a <= b && a > b' && a' < b'

max' aa@(a,a') bb@(b,b') = if (maximum [b,a,a'] == b) || (maximum [b',a,a'] == b')
                              then bb
                              else aa

(outerDescents,innerDescents,ascents,topTrip) = foldr f ([],[],[],(0,0)) trips where
  f trip (outerDescents,innerDescents,ascents,topTrip) = g outerDescents trip ([],innerDescents,ascents,topTrip) where
    g [] trip (outerDescents,innerDescents,ascents,topTrip) = (trip:outerDescents,innerDescents,ascents,max' trip topTrip)
    g (descent:descents) trip (outerDescents,innerDescents,ascents,topTrip)
      | not (isDescending trip) = (outerDescents ++ (descent:descents),innerDescents,trip:ascents,max' trip topTrip)
      | isContained trip descent = (outerDescents ++ (descent:descents),trip:innerDescents,ascents,topTrip)
      | isContained descent trip = (trip:outerDescents ++ descents,descent:innerDescents,ascents,max' trip topTrip)
      | extends trip descent = ((d,t'):outerDescents ++ descents,(t,d'):innerDescents,ascents,max' topTrip (d,t'))
      | extends descent trip = ((t,d'):outerDescents ++ descents,(d,t'):innerDescents,ascents,max' topTrip (t,d'))
      | otherwise = g descents trip (descent:outerDescents,innerDescents,ascents,topTrip)
      where (t,t') = trip
            (d,d') = descent

top = snd topTrip

scheduleFirst descents = (sum $ map (\(from,to) -> 2 * (from - to)) descents)
                       > top - (snd . last) descents

(descentsScheduledFirst,descentsScheduledAfterTop) =
  (descentsScheduledFirst,descentsScheduledAfterTop) where
    descentsScheduledAfterTop = (\x -> if not (null x) then head x else [])
                              . take 1 . filter scheduleFirst
                              $ foldl (\accum num -> take num sorted : accum) [] [1..length sorted]
    sorted = sortBy (\a b -> compare b a) outerDescents
    descentsScheduledFirst = if null descentsScheduledAfterTop
                                then sorted
                                else drop (length descentsScheduledAfterTop) sorted

scheduled = ((>>= \(a,b) -> [a,b]) $ sort descentsScheduledFirst)
          ++ (if isDescending topTrip then [] else [top])
          ++ ((>>= \(a,b) -> [a,b]) $ sortBy (\a b -> compare b a) descentsScheduledAfterTop)

place _ [] _ _ = error "topTrip was not calculated."
place floor' (floor:floors) prev (accum,numStops)
  | floor' == prev || floor' == floor = (accum ++ [prev] ++ (floor:floors),numStops)
  | prev == floor = place floor' floors floor (accum,numStops)
  | prev < floor = f
  | prev > floor = g
  where f | floor' > prev && floor' < floor = (accum ++ [prev] ++ (floor':floor:floors),numStops)
          | otherwise = place floor' floors floor (accum ++ [prev],numStops + 1)
        g | floor' < prev && floor' > floor = (accum ++ [prev] ++ (floor':floor:floors),numStops)
          | otherwise = place floor' floors floor (accum ++ [prev],numStops + 1)

schedule trip@(from,to) floors = take num floors' ++ fst placeTo
  where placeFrom@(floors',num) = place from floors 1 ([],1)
        trimmed = drop num floors'
        placeTo = place to (tail trimmed) (head trimmed) ([],1)

solution = foldl (\trips trip -> schedule trip trips) scheduled (innerDescents ++ ascents)

main = do print trips
          print solution
Output:
*Main> main
[(101,100),(50,49),(25,19),(99,97),(95,93),(30,20),(35,70),(28,25)]
[1,25,28,30,25,20,19,35,50,49,70,101,100,99,97,95,93]

Related

Two-egg problem - just the ideal height for k eggs

The solutions listed on Wikipedia and other websites for the egg dropping puzzle calculate the maximum number of drops, i.e. the worst-case scenario until we reach the critical floor where the egg breaks. But what if I want an algorithm that only returns the ideal floor to start from?
For example: 1 egg, 100 floors = 1:
Obvious, because you need to check every floor until it breaks.
2 eggs, 100 floors = 14:
We start at floor k. If it breaks, we just need to check the k-1 floors below it, as it's now a 1-egg problem.
If it doesn't break, we move up k-1 floors, so that the maximum number of drops still remains k. This leads to k + (k-1) + (k-2) + ... = k(k+1)/2 >= 100, so k = ~14 rounded up.
How do I find the general best floor for e eggs and n floors?
The trick is that the dynamic programming data structure has the answer encoded in it: you figure out how many drops are needed, and then the starting floor is the maximum number of floors coverable with 1 less drop and 1 less egg, plus 1 (the test drop which, if the egg breaks, puts you in the previously solved subproblem).
Here is a Python solution with generators that is slightly inefficient but demonstrates the ideas in a hopefully clear manner.
def floors_by_drops(eggs):
    """Yield (drops, max_floors) pairs: the most floors coverable with that many drops."""
    drops = 0
    if eggs == 1:
        while True:
            drops = drops + 1
            yield (drops, drops)
    else:
        floors = 1
        drops = 1
        yield (drops, floors)
        prev_floors = floors_by_drops(eggs - 1)
        while True:
            drops = drops + 1
            (this_drops, this_floors) = next(prev_floors)
            if drops <= this_drops:
                # We are not able to use the last egg in our best strategy.
                yield (drops, this_floors)
                floors = this_floors
            else:
                # We drop an egg at this_floors + 1.
                # If it breaks, we can do this_floors with 1 less egg and one less drop.
                # If it survives, we can do floors with all eggs and one less drop.
                floors = floors + this_floors + 1
                yield (drops, floors)

def first_floor(eggs, floors):
    if eggs == 1:
        return 1  # always
    else:
        prev_eggs_iterator = floors_by_drops(eggs - 1)
        eggs_iterator = floors_by_drops(eggs)
        prev_floors = 0
        while True:
            # eggs_iterator is always 1 more drop than prev_eggs_iterator
            this_floors = next(eggs_iterator)[1]
            if floors <= this_floors:
                return prev_floors + 1
            prev_floors = next(prev_eggs_iterator)[1]
print(first_floor(2, 100))

Calculating particle equilibrium using Monte Carlo

I am trying to write a function that calculates the number of iterations it takes for two chambers to have equally many particles. The evolution of the system is treated as a series of time-steps, beginning at t = 1. At each time-step exactly one particle passes through the hole, and we assume that the particles do not interact. The probability that a particle moves from the left to the right chamber is pLR = NL/N, and the probability that a particle moves from the right to the left chamber is pRL = 1 − pLR = (N − NL)/N.
The simulation proceeds iteratively as follows:
1. Get a random number r from the interval 0 ≤ r ≤ 1.
2. If r ≤ pLR, move one particle from the left to the right chamber. Otherwise move one particle from the right to the left chamber.
3. Repeat steps 1 and 2 until NL = NR, and report how many time-steps it took to reach this equilibrium.
This is my code thus far.
function t = thermoEquilibrium(N, r) % N = number of particles, r = random numbers from 0-1
    h = [];  % right side of the chamber
    v = [];  % left side of the chamber
    rr = r;
    k = false
    NL = N - length(h)  % This is especially where i suspect i make a mistake.
                        % How can the probability change with every iteration?
    pLR = NL/N;
    pRL = 1 - pLR;
    count = 1
    while k==false
        for i = r
            if i<=pLR
                h(end+1)=i
                rr = rr(rr~=i)
            end
        end
        for l = h
            if pRL>l
                v(end+1) = l
                h = h(h~=l)
            end
        end
        if length(h)==N/2 && length(v)==N/2
            k=true
        end
        count = count + 1
    end
    t = count
Can someone point me in a direction, so I can get a bit closer to something that works?
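For reference, here is a minimal sketch in Python (not a fix of the MATLAB above) of the simulation loop exactly as described in the question: track only NL, recompute pLR every step, and count steps until NL = N/2. Starting with all N particles in the left chamber is an assumption.

import random

def thermo_equilibrium(N, rng=random.random):
    """Count time-steps until the chambers hold equally many particles.

    Assumes all N particles start in the left chamber (N should be even).
    """
    NL = N                      # particles currently in the left chamber
    steps = 0
    while NL != N // 2:         # equilibrium: NL == NR == N/2
        pLR = NL / N            # left-to-right probability, recomputed every step
        if rng() <= pLR:
            NL -= 1             # one particle moves left -> right
        else:
            NL += 1             # one particle moves right -> left
        steps += 1
    return steps

print(thermo_equilibrium(1000))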

Dynamic Programming and Probability

I've been staring at this problem for hours and I'm still as lost as I was at the beginning. It's been a while since I took discrete math or statistics so I tried watching some videos on youtube, but I couldn't find anything that would help me solve the problem in less than what seems to be exponential time. Any tips on how to approach the problem below would be very much appreciated!
A certain species of fern thrives in lush rainy regions, where it typically rains almost every day.
However, a drought is expected over the next n days, and a team of botanists is concerned about
the survival of the species through the drought. Specifically, the team is convinced of the following
hypothesis: the fern population will survive if and only if it rains on at least n/2 days during the
n-day drought. In other words, for the species to survive there must be at least as many rainy days
as non-rainy days.
Local weather experts predict that the probability that it rains on day i ∈ {1, . . . , n} is
p_i ∈ [0, 1], and that these n random events are independent. Assuming both the botanists and
weather experts are correct, show how to compute the probability that the ferns survive the drought.
Your algorithm should run in time O(n^2).
Have an n×(n + 1) matrix such that C[i][j] denotes the probability that after the ith day there have been j rainy days (i runs from 1 to n, j runs from 0 to n). Initialize:
C[1][0] = 1 - p[1]
C[1][1] = p[1]
C[1][j] = 0 for j > 1
Now loop over the days and set the values of the matrix like this:
C[i][0] = (1 - p[i]) * C[i-1][0]
C[i][j] = (1 - p[i]) * C[i-1][j] + p[i] * C[i - 1][j - 1] for j > 0
Finally, sum the values from C[n][ceil(n/2)] to C[n][n] to get the probability of fern survival.
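A minimal bottom-up sketch of this recurrence in Python (0-based indexing, summing from ceil(n/2)):

import math

def prob_survival_bottom_up(p):
    """p[i] is the probability of rain on day i (0-based).
    Returns the probability of at least ceil(n/2) rainy days out of n."""
    n = len(p)
    # C[i][j] = probability that after day i there have been exactly j rainy days
    C = [[0.0] * (n + 1) for _ in range(n)]
    C[0][0] = 1 - p[0]
    C[0][1] = p[0]
    for i in range(1, n):
        C[i][0] = (1 - p[i]) * C[i - 1][0]
        for j in range(1, n + 1):
            C[i][j] = (1 - p[i]) * C[i - 1][j] + p[i] * C[i - 1][j - 1]
    return sum(C[n - 1][math.ceil(n / 2):])

print(prob_survival_bottom_up([0.2, 0.4, 0.6, 0.8]))   # same value the memoized version below returns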
Dynamic programming problems can be solved in a top down or bottom up fashion.
You've already had the bottom up version described. To do the top-down version, write a recursive function, then add a caching layer so you don't recompute any results that you already computed. In pseudo-code:
cache = {}
function whatever(args)
if args not in cache
compute result
cache[args] = result
return cache[args]
This process is called "memoization" and many languages have ways of automatically memoizing things.
Here is a Python implementation of this specific example:
def prob_survival(daily_probabilities):
    days = len(daily_probabilities)
    days_needed = (days + 1) // 2   # ceil(days / 2): at least as many rainy days as dry days
    # An inner function to do the calculation.
    cached_odds = {}
    def prob_survival(day, rained):
        if days_needed <= rained:
            return 1.0
        elif days <= day:
            return 0.0
        elif (day, rained) not in cached_odds:
            p = daily_probabilities[day]
            p_a = p * prob_survival(day + 1, rained + 1)
            p_b = (1 - p) * prob_survival(day + 1, rained)
            cached_odds[(day, rained)] = p_a + p_b
        return cached_odds[(day, rained)]
    return prob_survival(0, 0)
And then you would call it as follows:
print(prob_survival([0.2, 0.4, 0.6, 0.8]))

Approximate matching of two lists of events (with duration)

I have a black box algorithm that analyses a time series and "detects" certain events in the series. It returns a list of events, each containing a start time and end time. The events do not overlap.
I also have a list of the "true" events, again with start time and end time for each event, not overlapping.
I want to compare the two lists and match detected and true events that fall within a certain time tolerance (True Positives). The complication is that the algorithm may detect events that are not really there (False Positives) or might miss events that were there (False Negatives).
What is an algorithm that optimally pairs events from the two lists and leaves the proper events unpaired? I am pretty sure I am not the first one to tackle this problem and that such a method exists, but I haven't been able to find it, perhaps because I do not know the right terminology.
Speed requirement:
The lists will contain no more than a few hundred entries, and speed is not a major factor. Accuracy is more important. Anything taking less than a few seconds on an ordinary computer will be fine.
Here's a quadratic-time algorithm that gives a maximum likelihood estimate with respect to the following model. Let A1 < ... < Am be the true intervals and let B1 < ... < Bn be the reported intervals. The quantity sub(i, j) is the log-likelihood that Ai becomes Bj. The quantity del(i) is the log-likelihood that Ai is deleted. The quantity ins(j) is the log-likelihood that Bj is inserted. Make independence assumptions everywhere! I'm going to choose sub, del, and ins so that, for every i < i' and every j < j', we have
sub(i, j') + sub(i', j) <= max {sub(i, j ) + sub(i', j')
,del(i) + ins(j') + sub(i', j )
,sub(i, j') + del(i') + ins(j)
}.
This ensures that the optimal matching between intervals is noncrossing and thus that we can use the following Levenshtein-like dynamic program.
The dynamic program is presented as a memoized recursive function, score(i, j), that computes the optimal score of matching A1, ..., Ai with B1, ..., Bj. The root of the call tree is score(m, n). It can be modified to return the sequence of sub(i, j) operations in the optimal solution.
score(i, j) | i == 0 && j == 0 = 0
| i > 0 && j == 0 = del(i) + score(i - 1, 0 )
| i == 0 && j > 0 = ins(j) + score(0 , j - 1)
| i > 0 && j > 0 = max {sub(i, j) + score(i - 1, j - 1)
,del(i) + score(i - 1, j )
,ins(j) + score(i , j - 1)
}
Here are some possible definitions for sub, del, and ins. I'm not sure if they will be any good; you may want to multiply their values by constants or use powers other than 2. If Ai = [s, t] and Bj = [u, v], then define
sub(i, j) = -(|u - s|^2 + |v - t|^2)
del(i) = -(t - s)^2
ins(j) = -(v - u)^2.
(Apologies to the undoubtedly extant academic who published something like this in the bioinformatics literature many decades ago.)
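To make the dynamic program concrete, here is a short memoized sketch in Python using the quadratic cost functions suggested above; the two interval lists at the bottom are made-up illustrations, and dele stands in for del (a Python keyword):

from functools import lru_cache

def match_events(A, B):
    """A = true intervals, B = reported intervals, each a list of (start, end) sorted by start.
    Returns the optimal score and the list of matched index pairs (i, j)."""
    def sub(i, j):                      # log-likelihood that A[i] becomes B[j]
        (s, t), (u, v) = A[i], B[j]
        return -(abs(u - s) ** 2 + abs(v - t) ** 2)
    def dele(i):                        # "del" in the text; renamed since del is a keyword
        s, t = A[i]
        return -(t - s) ** 2
    def ins(j):
        u, v = B[j]
        return -(v - u) ** 2

    @lru_cache(maxsize=None)
    def score(i, j):                    # best score matching A[:i] with B[:j]
        if i == 0 and j == 0:
            return 0.0
        best = float("-inf")
        if i > 0:
            best = max(best, dele(i - 1) + score(i - 1, j))
        if j > 0:
            best = max(best, ins(j - 1) + score(i, j - 1))
        if i > 0 and j > 0:
            best = max(best, sub(i - 1, j - 1) + score(i - 1, j - 1))
        return best

    # Trace back which sub(i, j) operations the optimal solution used.
    pairs, i, j = [], len(A), len(B)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score(i, j) == sub(i - 1, j - 1) + score(i - 1, j - 1):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i > 0 and score(i, j) == dele(i - 1) + score(i - 1, j):
            i -= 1
        else:
            j -= 1
    return score(len(A), len(B)), list(reversed(pairs))

true_events = [(0, 10), (20, 30), (50, 55)]           # made-up example data
detected    = [(1, 11), (21, 29), (40, 42), (51, 56)]
print(match_events(true_events, detected))

On this example the optimal matching pairs the first, second, and fourth detected events with the three true events and treats (40, 42) as a false positive.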

How to easily know if a maze has a road from start to goal?

I implemented a maze using a 0/1 array. The entry and goal are fixed: the entry is always the (0,0) point of the maze, and the goal is always the (m-1,n-1) point. I'm using a breadth-first search algorithm for now, but the speed is not good enough, especially for large mazes (100*100 or so). Could someone help me with this algorithm?
Here is my solution:
queue = []
position = start_node
mark_tried(position)
queue << position
while(!queue.empty?)
  p = queue.shift # pop the first element
  return true if maze.goal?(p)
  left = p.left
  visit(queue, left) if can_visit?(maze, left)
  right = p.right
  visit(queue, right) if can_visit?(maze, right)
  up = p.up
  visit(queue, up) if can_visit?(maze, up)
  down = p.down
  visit(queue, down) if can_visit?(maze, down)
end
return false
The can_visit? method checks whether the node is inside the maze, whether it has already been visited, and whether it is blocked.
Worst answer possible:
1) Go forward until you can't move.
2) Turn left.
3) Rinse and repeat.
If you make it out, there is an end.
A better solution:
Traverse your maze, keeping two lists of open and closed nodes. Use the famous A* algorithm
to choose the next node to evaluate, and discard nodes which are dead ends. If you run out of nodes on your open list, there is no exit.
Here is a simple algorithm which should be much faster:
From the start/goal, move to the first junction. You can ignore anything between that junction and the start/goal.
Locate all places in the maze which are dead ends (they have three walls). Move back to the next junction and take this path out of the search tree.
After you have removed all dead ends this way, there should be a single path left (or several if there are several ways to reach the goal).
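Here is a minimal sketch of that dead-end pruning in Python, under the assumption that the maze is an m x n grid of 1 = open and 0 = wall with entry (0,0) and goal (m-1,n-1), as in the code further below; it repeatedly walls off open cells that have at most one open neighbour until only useful corridors remain:

def prune_dead_ends(maze):
    """Fill in dead-end cells (open cells with at most one open neighbour),
    never touching the start (0,0) or the goal (m-1,n-1). Modifies maze in place."""
    m, n = len(maze), len(maze[0])
    protected = {(0, 0), (m - 1, n - 1)}
    changed = True
    while changed:
        changed = False
        for x in range(m):
            for y in range(n):
                if maze[x][y] and (x, y) not in protected:
                    neighbours = sum(
                        1 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < m and 0 <= y + dy < n and maze[x + dx][y + dy]
                    )
                    if neighbours <= 1:      # dead end: wall it off
                        maze[x][y] = 0
                        changed = True
    return maze

# Tiny illustrative maze: after pruning, only the corridor from (0,0) to (2,2) survives.
maze = [[1, 1, 1],
        [0, 0, 1],
        [1, 1, 1]]
print(prune_dead_ends(maze))

After pruning, any simple search only has to walk the surviving corridors.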
I would not use the AStar algorithm there yet, unless I really need to, because this can be done with some simple 'coloring'.
# maze is a m x n array
def canBeTraversed(maze):
    m = len(maze)
    n = len(maze[0])
    colored = [[False for i in range(0, n)] for j in range(0, m)]
    open = [(0, 0)]
    while len(open) != 0:
        (x, y) = open.pop()
        if x == m - 1 and y == n - 1:
            return True
        elif 0 <= x < m and 0 <= y < n and maze[x][y] != 0 and not colored[x][y]:
            colored[x][y] = True
            open.extend([(x - 1, y), (x, y - 1), (x + 1, y), (x, y + 1)])
    return False
Yes it's stupid, yes it's breadth-first and all that.
Here is the A* implementation
def dist(x, y):
    return (abs(x[0] - y[0]) + abs(x[1] - y[1])) ** 2

def heuristic(x, y):
    return (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2

def find(open, f):
    # return the open node with the lowest f value
    result = None
    min = None
    for x in open:
        tmp = f[x[0]][x[1]]
        if min is None or tmp < min:
            min = tmp
            result = x
    return result

def neighbors(x, m, n):
    def add(result, y, m, n):
        if 0 <= y[0] < m and 0 <= y[1] < n:
            result.append(y)
    result = []
    add(result, (x[0] - 1, x[1]), m, n)
    add(result, (x[0], x[1] - 1), m, n)
    add(result, (x[0] + 1, x[1]), m, n)
    add(result, (x[0], x[1] + 1), m, n)
    return result

def canBeTraversedAStar(maze):
    m = len(maze)
    n = len(maze[0])
    goal = (m - 1, n - 1)
    closed = set([])
    open = set([(0, 0)])
    g = [[0 for y in range(0, n)] for x in range(0, m)]
    h = [[heuristic((x, y), goal) for y in range(0, n)] for x in range(0, m)]
    f = [[h[x][y] for y in range(0, n)] for x in range(0, m)]
    while len(open) != 0:
        x = find(open, f)
        if x == (m - 1, n - 1):
            return True
        open.remove(x)
        closed.add(x)
        for y in neighbors(x, m, n):
            if y in closed:
                continue
            if y not in open:
                open.add(y)
                g[y[0]][y[1]] = g[x[0]][x[1]] + dist(x, y)
                h[y[0]][y[1]] = heuristic(y, goal)
                f[y[0]][y[1]] = g[y[0]][y[1]] + h[y[0]][y[1]]
    return False
Here is my (simple) benchmark code:
import datetime

def tryIt(func, size, runs):
    maze = [[1 for i in range(0, size)] for j in range(0, size)]
    begin = datetime.datetime.now()
    for i in range(0, runs): func(maze)
    end = datetime.datetime.now()
    print(size, 'x', size, ':', (end - begin) / runs, 'average on', runs, 'runs')

tryIt(canBeTraversed, 100, 100)
tryIt(canBeTraversed, 1000, 100)
tryIt(canBeTraversedAStar, 100, 100)
tryIt(canBeTraversedAStar, 1000, 100)
Which outputs:
# For canBeTraversed
100 x 100 : 0:00:00.002650 average on 100 runs
1000 x 1000 : 0:00:00.198440 average on 100 runs
# For canBeTraversedAStar
100 x 100 : 0:00:00.016100 average on 100 runs
1000 x 1000 : 0:00:01.679220 average on 100 runs
The obvious conclusion here: getting A* to run smoothly requires a lot of optimizations I did not bother to go after.
I would say:
Don't optimize.
(Experts only) Don't optimize yet.
How much time are you talking about when you say too much? Really, a 100x100 grid is parsed so easily by brute force that it's a joke.
I would have solved this with an A* implementation. If you want even more speed, you can optimize to only generate nodes at the junctions rather than at every tile/square/step.
A method you can use that does not need to visit all nodes in the maze is as follows:
create an integer[][] with one value per maze "room"
create a queue, add [startpoint, count=1, delta=1] and [goal, count=-1, delta=-1]
start coloring the route by:
pop an object from the head of the queue and put its count at that maze point.
check all reachable rooms for a count with a sign opposite to that of the room's delta; if you find one, the maze is solved: run both ways and connect the routes via the biggest steps up and down in the room counts.
otherwise add all reachable rooms that have no count yet to the tail of the queue, with delta added to the room count.
if the queue is empty, no path through the maze is possible.
This not only determines if there is a path, but also shows the shortest path possible through the maze.
You don't need to backtrack, so it's O(number of maze rooms). A minimal sketch of this approach follows below.
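Here is a minimal sketch of this two-ended coloring in Python (again assuming a grid where 1 = open room and 0 = wall); it reports whether the two floods meet, and the stored counts are what you would use, as described above, to trace the route from the meeting point back to both ends:

from collections import deque

def has_path(maze):
    """Two-ended flood fill sketch: color from the entry (0,0) with positive counts and
    from the goal (m-1,n-1) with negative counts; the maze is solvable exactly when the
    two colorings meet. Assumes maze[x][y] == 1 means open, 0 means wall."""
    m, n = len(maze), len(maze[0])
    if (0, 0) == (m - 1, n - 1):
        return True
    counts = [[0] * n for _ in range(m)]
    counts[0][0], counts[m - 1][n - 1] = 1, -1
    queue = deque([((0, 0), 1, 1), ((m - 1, n - 1), -1, -1)])   # (room, count, delta)
    while queue:
        (x, y), count, delta = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < m and 0 <= ny < n) or maze[nx][ny] == 0:
                continue
            if counts[nx][ny] * delta < 0:
                return True        # met a room colored from the other end: the routes connect
            if counts[nx][ny] == 0:
                counts[nx][ny] = count + delta
                queue.append(((nx, ny), count + delta, delta))
    return False                   # both floods exhausted without meeting: no path

maze = [[1, 1, 0],
        [0, 1, 1],
        [0, 0, 1]]
print(has_path(maze))   # True: (0,0)->(0,1)->(1,1)->(1,2)->(2,2)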
