Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
Suppose you have to drive from Islamabad to Lahore. At the start, your gas tank is full. Your gas tank, when full, holds enough gas to travel m miles, and you have a map that gives distances between gas stations along the route. Let d1 < d2 < … < dn be the locations of all the gas stations along the route, where di is the distance from Islamabad to the gas station. The distance between neighboring gas stations is at most m miles. Also, the distance between the last gas station and Lahore is at most m miles.
Your goal is to make as few gas stops as possible along the way. Give a greedy algorithm (in pseudo-code form) to determine at which gas stations you should stop.
Is your solution optimal? What is the time complexity of your solution?
This algorithm begins at Islamabad, and repeatedly tries to drive as far as possible without running out of gas.
current_distance = 0
next_stop = 0                  # index of the last station reached
stops = []
while current_distance + m < total_distance:   # Lahore is still out of range
    # drive to the farthest station reachable on a full tank
    while next_stop < n and distance(next_stop + 1) - current_distance <= m:
        next_stop = next_stop + 1
    current_distance = distance(next_stop)
    add next_stop to stops
return stops
This is an optimal solution. To see why, note that any sequence of stops using fewer stops than the greedy algorithm would have to 'pass' the greedy algorithm at some point along the route.
By induction: the greedy algorithm is the farthest it can be after its first stop, and if it is the farthest it can be after stop n - 1, then it is the farthest it can be after stop n as well. So the greedy algorithm is the farthest it can possibly be after every stop, and no other solution can ever get ahead of it, let alone finish with fewer stops.
Although this algorithm runs in O(n) time (each station needs to be examined only once) and returns an optimal solution, the route it returns may not be a very 'even' or 'smooth' one. To produce routes for actual use by people, you would want to consider routes that space their stops more evenly.
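The pseudocode translates into runnable Python like this (a sketch; it assumes the stations are given as a sorted list `d` of distances from Islamabad and that `total` is the full route length, as in the problem statement):

```python
def gas_stops(d, total, m):
    """Greedy refueling: from each position, drive to the farthest
    station reachable on a full tank.

    d:     sorted distances of the stations from Islamabad
    total: distance from Islamabad to Lahore
    m:     miles a full tank lasts
    Returns the 0-based indices of the stations to stop at.
    """
    stops = []
    current = 0          # current position (miles from Islamabad)
    i = 0                # first station not yet passed
    while current + m < total:           # cannot reach Lahore yet
        # advance past every station still within range
        while i < len(d) and d[i] <= current + m:
            i += 1
        current = d[i - 1]               # refuel at the farthest one
        stops.append(i - 1)
    return stops
```

Because `i` only ever moves forward, each station is examined once, matching the O(n) bound.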
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
Can you help me with this problem? Given N <= 10^5 pairs of points, written in an array A with A[i][0] <= A[i][1], and M <= 10^5 pairs of segments, where the i-th pair is given as L_1[i], R_1[i], L_2[i], R_2[i]. For each pair of segments I need to find the number of pairs (A[z][0], A[z][1]) from A such that L_1[i] <= A[z][0] <= R_1[i] <= L_2[i] <= A[z][1] <= R_2[i].
I think a scan-line algorithm can be used here, but I don't know how to fit into the time and memory limits. My idea works in O(N * M * log N).
If you map each A[i] to the point (A[i][0], A[i][1]) on the 2D plane, then for each pair of segments you are basically just counting the number of points inside the rectangle whose bottom-left corner is (L_1[i], L_2[i]) and top-right corner is (R_1[i], R_2[i]). Counting points in rectangles on the 2D plane is a classic problem that can be solved in O((n + m) log n). Here are some possible implementations:
Notice that the number of points in a rectangle P(l,b,r,t) can be expressed as P(0,0,r,t) - P(0,0,l-1,t) - P(0,0,r,b-1) + P(0,0,l-1,b-1), so the problem reduces to calculating prefix counts P(0,0,?,?). These are easy to compute if you maintain a Fenwick tree while sweeping over the x-coordinate, which is essentially the scan-line algorithm.
Build a persistent segment tree, one version per x-coordinate (in O(n log n) time), and calculate the answers for the segments (in O(m log n) time).
Build a k-d tree and answer each query in O(sqrt(n)) time. This is less efficient, but useful if you want to insert points and count them online.
Sorry for my poor English. Feel free to point out my typos and mistakes.
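For instance, the first approach (inclusion-exclusion into four prefix queries, answered offline with a Fenwick tree while sweeping over x) might look like this in Python; it assumes inclusive integer coordinates for both the points and the rectangle corners:

```python
import bisect

def count_in_rectangles(points, rects):
    """For each rectangle (x1, y1, x2, y2) (inclusive, integer coords),
    count the points (x, y) it contains. Offline, O((n + m) log n)."""
    ys = sorted({y for _, y in points})
    rank = {y: i + 1 for i, y in enumerate(ys)}       # 1-based ranks
    tree = [0] * (len(ys) + 1)                        # Fenwick tree over y

    def add(i):                                       # insert one point
        while i <= len(ys):
            tree[i] += 1
            i += i & -i

    def prefix(y):                 # inserted points with y-coordinate <= y
        i, s = bisect.bisect_right(ys, y), 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    # P(l,b,r,t) = P(r,t) - P(l-1,t) - P(r,b-1) + P(l-1,b-1)
    events = [(x, 0, y) for x, y in points]           # point insertions
    for j, (x1, y1, x2, y2) in enumerate(rects):      # signed prefix queries
        events += [(x2, 1, (j, y2, +1)), (x2, 1, (j, y1 - 1, -1)),
                   (x1 - 1, 1, (j, y2, -1)), (x1 - 1, 1, (j, y1 - 1, +1))]
    events.sort(key=lambda e: (e[0], e[1]))           # points first at ties

    ans = [0] * len(rects)
    for _, kind, data in events:
        if kind == 0:
            add(rank[data])
        else:
            j, y, sign = data
            ans[j] += sign * prefix(y)
    return ans
```

For the original problem you would feed in the points (A[z][0], A[z][1]) and the rectangles (L_1[i], L_2[i], R_1[i], R_2[i]).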
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have a floating-point number x in [1, 500] that generates a binary y of 1 with some probability p. I'm trying to find the x that generates the most 1s, i.e. has the highest p. I'm assuming there is only one maximum.
Is there an algorithm that converges quickly to the x with the highest p while making sure it doesn't jump around too much once it gets close, e.g. within 0.1% of the optimal x? Specifically, it would be great if it stabilized once within 0.1% of the optimal x.
I know we can do this with simulated annealing, but I don't think I should hard-code a temperature schedule, because I need to use the same algorithm when x ranges over [1, 3000] or the distribution of p is different.
This paper describes a smart hill-climbing algorithm. The basic idea is to take n samples as starting points. The algorithm (simplified to one dimension for your problem) is as follows:
1. Take n sample points in the search space. The paper uses Latin Hypercube Sampling, since it assumes the data is high-dimensional; in your one-dimensional case, plain random sampling is fine.
2. For each sample point, gather points from its "local neighborhood" and find a best-fit quadratic curve. Take the maximum of the fitted quadratic as a new candidate; if the objective function at the candidate is actually higher than at the current sample point, move the sample point to the candidate. Repeat this step with a smaller "local neighborhood" on each iteration.
3. Use the best of the sample points.
4. Restart: repeat steps 2 and 3 and compare the maxima. If there is no improvement, stop; otherwise repeat again.
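As an illustration, the quadratic-fit refinement step could be sketched in Python like this. Assumptions not in the original: a deterministic objective for clarity (with your binary feedback you would estimate p(x) by averaging many 0/1 samples at each x), and arbitrary neighborhood size and sample count:

```python
import random

def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a*x^2 + b*x + c via the 3x3 normal equations."""
    n = len(xs)
    S = [sum(x ** k for x in xs) for k in range(5)]          # S[k] = sum x^k
    T = [sum((x ** k) * y for x, y in zip(xs, ys)) for k in range(3)]
    M = [[S[4], S[3], S[2], T[2]],          # augmented normal-equation matrix
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], n,    T[0]]]
    for col in range(3):                    # Gauss-Jordan with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    a, b, c = (M[i][3] / M[i][i] for i in range(3))
    return a, b, c

def refine(sample_x, objective, radius, n_local=30):
    """One smart-hill-climbing refinement: fit a parabola to the local
    neighborhood and jump to its vertex if that actually improves."""
    xs = [sample_x + random.uniform(-radius, radius) for _ in range(n_local)]
    ys = [objective(x) for x in xs]
    a, b, _ = fit_quadratic(xs, ys)
    if a < 0:                               # concave fit -> vertex is a maximum
        candidate = -b / (2 * a)
        if objective(candidate) > objective(sample_x):
            return candidate
    return sample_x
```

In the full algorithm you would call `refine` repeatedly with a shrinking `radius`, which is what gives the stabilization near the optimum that you asked for.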
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I would like to generate a random Barabasi-Albert graph with 10,000 nodes, but my program is very slow. Can anybody tell me whether my subroutine is correct? In my code, ran1() is the standard random number generator. Thanks for the help.
***********************************************************************
      subroutine barabasi_albert(kon,n,m0,m,a,idum)
***********************************************************************
      implicit real*8(a-h,o-z)
      implicit integer*4(i-n)
      logical linked(n)   ! logical array for storing if a node is
                          ! connected to the current one
      dimension kon(n,n)  ! connectivity matrix
      ! m0: number of initial nodes
      ! m: minimal degree
      ! a: scale parameter
      ! idum: argument to the random number generation
c
c     Initialize kon(:,:)
c
      kon(:n,:n)=0
c
c     Create complete graph of m0 nodes
c
      kon(:m0,:m0)=1
      do i=1,m0
        kon(i,i)=0
      end do
c
c     Extend the graph with n-m0 nodes
c
      do i=m0+1,n
        linked(:i)=.false.
c
c       Add edges while the vertex degree of the ith node is less than m
c
        do
          if(sum(kon(i,:i-1)).eq.m) exit
          ii=floor(dble(i-1)*ran1(idum))+1
          if(.not.linked(ii)) then
            p=(dble(sum(kon(ii,:i-1)))/sum(kon(:i-1,:i-1)))**a
            if(p.gt.ran1(idum)) then
              kon(i,ii)=1
              kon(ii,i)=1
              linked(ii)=.true.
            endif
          endif
        end do
      end do
      end
Some related links:
https://math.stackexchange.com/questions/454188/why-does-my-barabasi-albert-model-implementation-doesnt-produce-a-scale-free-ne
https://math.stackexchange.com/questions/1824097/creating-barab%c3%a1si-albertba-graph-with-spacific-node-and-edgs
Implementing Barabasi-Albert Method for Creating Scale-Free Networks
I'm not familiar with Fortran, but a few things stand out. First, consider the time complexity of your subroutine. You have two nested loops. The outer one runs n times, where n is proportional to the size of the input. The inner one runs until the new node has m connections. But inside the innermost loop you calculate sums over parts of the array; the sum over kon(:i-1,:i-1) alone is about i*(i-1) additions, which is probably the most costly part. So the time complexity is bounded by O(n*m*i^2), and with i growing up to n and m small, this becomes roughly O(n^3).
The best improvement, I think, is to switch to an algorithm with a lower time complexity, but if that's not possible you can still tweak what you have. First, cache your sums; don't calculate the same sum twice. For example, if you have calculated sum(1:i), save that result and reuse it when you calculate sum(1:i+1) or sum(1:i-1).
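For comparison, the standard fast construction for the classic case (a = 1, i.e. attachment probability exactly proportional to degree) keeps a list in which every node appears once per incident edge, so that uniform sampling from the list is degree-proportional sampling. That gives O(n*m) total time. Here is a Python sketch of that technique, as an illustration rather than a translation of your Fortran:

```python
import random

def barabasi_albert(n, m, seed=None):
    """Classic Barabasi-Albert graph (the a = 1 case) in O(n*m) expected time.

    Starts from a complete graph on m + 1 nodes (so every target has positive
    degree); each new node attaches to m distinct, degree-weighted targets.
    Requires n >= m + 1. Returns an adjacency list (dict of sets).
    """
    rng = random.Random(seed)
    m0 = m + 1
    adj = {v: set() for v in range(n)}
    endpoints = []                        # node repeated once per incident edge
    for u in range(m0):                   # seed: complete graph on m0 nodes
        for v in range(u + 1, m0):
            adj[u].add(v); adj[v].add(u)
            endpoints += [u, v]
    for u in range(m0, n):                # grow the remaining n - m0 nodes
        targets = set()
        while len(targets) < m:           # m distinct degree-weighted targets
            targets.add(rng.choice(endpoints))
        for v in targets:
            adj[u].add(v); adj[v].add(u)
            endpoints += [u, v]
    return adj
```

The same trick should carry over to Fortran: keep an integer array of edge endpoints and index into it with ran1(), instead of recomputing degree sums.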
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I had an assignment on dynamic programming due last night, but I had to turn it in unfinished because I could not understand how to solve the last problem:
The state wants to monitor traffic on a highway n miles long. It costs c_i to install a monitoring device on mile i of the highway. The maximum distance between monitoring devices should not be more than d miles; that is, if there is a monitoring device on mile i, then there must be a monitoring device on one of the miles i + 1 through i + d (or it is the case that i + d > n). The state wants a plan that minimizes the cost. Assume there is an array C[1..n] of costs.
Let v_k be the cost of the best solution assuming a k-mile highway with a monitoring device on mile k. Given C and d, if the values of v_1 through v_{k-1} are known, show how to determine the value of v_k. You can write this mathematically, or provide pseudocode in the style of the book. Note that you need to take into account all possible values of k from k = 1 to k = n.
I'm sure a problem similar to this will appear on the exam coming up and I'd like to at least know where to begin solving this, so any assistance is appreciated.
Let's define DP[i] as the minimum cost of installing a monitor at mile i plus some monitors at earlier miles, such that consecutive monitors are at most d miles apart.
Now the answer to our problem would be
min(DP[n - d + 1], ...DP[n - 2], DP[n - 1], DP[n])
That is, the minimum cost over plans whose last monitor is on one of the last d miles.
Now, the recurrence relation for the dynamic programming can be easily seen as :
DP[i] = min(DP[i - 1], DP[i - 2], ... DP[i - d]) + C[i]
If we install a monitor on mile i, we pay its cost C[i], and we must also ensure that there is a monitor within the previous d miles, so we take the minimum over placing the second-to-last monitor on one of those d miles. (For i <= d no earlier monitor is needed, so the base case is DP[i] = C[i].)
If you implement the recurrence naively it is O(n * d), but by using the sliding-window minimum algorithm with a double-ended queue you can reduce the time complexity to O(n).
As this is an assignment problem, I won't write in detail. You should be able to follow up from this point.
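For reference after the deadline, the O(n) sliding-window version could be sketched like this in Python (a sketch with 0-based arrays and nonnegative costs assumed, not the book's pseudocode):

```python
from collections import deque

def min_monitor_cost(C, d):
    """dp[i] = cheapest plan whose last monitor is on mile i + 1.

    The deque holds indices of the previous d dp values with dp increasing,
    so dp[window[0]] is min(dp[i-d .. i-1]) in O(1) amortized time.
    """
    n = len(C)
    dp = [0] * n
    window = deque()
    for i in range(n):
        while window and window[0] < i - d:       # expire indices out of range
            window.popleft()
        if i < d:
            dp[i] = C[i]      # a monitor in the first d miles needs no predecessor
        else:
            dp[i] = dp[window[0]] + C[i]
        while window and dp[window[-1]] >= dp[i]: # keep the deque increasing
            window.pop()
        window.append(i)
    # the final monitor must be on one of the last d miles
    return min(dp[max(0, n - d):])
```

Each index enters and leaves the deque at most once, which is where the O(n) bound comes from.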
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
https://liahen.ksp.sk/ provides some training problems for competitions, but they do not come with solutions. So I would like to ask you how to solve this one, since I have been trying for hours with no progress.
Problem:
We have a road of length L kilometers, and a person standing in the middle of it. We have a list of triples of numbers x, y and z: at time y seconds, z crates will fall onto the x-th kilometer of the road. Each second, the person can stay put or move one kilometer to the right or left. To catch the crates, the person must be at the place where they fall at exactly the second they are due to fall.
The goal of the algorithm is to find a route that saves the maximal number of crates.
I'd do this as a DP problem: for each time and each place you can stand, store the maximum number of crates you can have caught so far.
Runtime would be O(L * timesteps)
EDIT:
If L is very large compared to the number of drop points, you can get away with storing information only at the drop points, saving a bit on the performance:
For each drop point, store the distances to its left and right neighbors, and a buffer holding the crates collected at this point at time t-i, for i from 0 up to the maximum distance to a neighbor.
At each time step, for each drop point, fetch the crates possibly collected at each neighbor at time t minus the distance to that neighbor, and select the best value.
Then add the number of crates dropped at this point, if any.
This algorithm runs in O(droppoints*timesteps), but uses O(L) space.
You can solve this problem using dynamic programming.
Here's the basic recursive relationship that you need to solve.
saved[y][x] = max(saved[y-1][x-1],saved[y-1][x+1],saved[y-1][x])+crates[y][x]
crates[y][x] : the number of crates that fall at time y at position x. You build this array from your input data.
saved[y][x] : the maximum number of crates you can have saved by time y if you end up at position x.
The equation is based on the fact that at time y-1 (one time step earlier) you can only have been at position x-1, x+1, or x (if you stayed in the same place). Look at all three options, pick the one that gives you the most crates saved, and then add the crates that fall at position x at time y (i.e. crates[y][x]).
Say you start at x = 0, and time y = 0.
Then saved[0][x] = 0 (assuming that crates start to fall at time y > 0), and you can fill in the table using the equation above.
Your final answer is to find the maximum value of saved[y][x].
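Putting the recurrence together, here is a Python sketch (assumptions not in the original: positions are 0..L, the person starts at L // 2 as the statement's "middle", and drops are given as (x, y, z) triples with y >= 1):

```python
def max_crates(L, drops):
    """DP over (time, position): the maximum crates caught ending at each
    position at each time step, unreachable states marked -inf."""
    T = max(y for _, y, _ in drops)
    crates = {}                            # (time, position) -> crates falling
    for x, y, z in drops:
        crates[(y, x)] = crates.get((y, x), 0) + z
    NEG = float("-inf")
    prev = [NEG] * (L + 1)
    prev[L // 2] = 0                       # start in the middle with 0 crates
    for t in range(1, T + 1):
        cur = [NEG] * (L + 1)
        for x in range(L + 1):
            # best of staying, coming from the left, or coming from the right
            best = max(prev[x],
                       prev[x - 1] if x > 0 else NEG,
                       prev[x + 1] if x < L else NEG)
            if best != NEG:                # position x is reachable at time t
                cur[x] = best + crates.get((t, x), 0)
        prev = cur
    return max(prev)
```

This runs in O(L * timesteps) time, as stated in the first answer; the drop-point-only optimization described there would shrink the inner loop.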