Scheduling: advance deadline for implicit-deadline rate monotonic algorithm

Given a set of tasks:
T1(20,100) T2(30,250) T3(100,400) (execution time, deadline = period)
Now I want to constrain the deadlines as Di = f * Pi, where Di is the new deadline for the i-th task, Pi is the original period of the i-th task, and f is the factor I want to figure out. What is the smallest value of f such that the tasks will still meet their deadlines under a rate monotonic scheduler?

This schedule will repeat (synchronize) every 2000 time units. During this period:
T1 must run 20 times, requiring 400 time units.
T2 must run 8 times, requiring 240 time units.
T3 must run 5 times, requiring 500 time units.
Total is 1140 time units per 2000 time unit interval.
f = 1140 / 2000 = 0.57
This assumes long-running tasks can be preempted and resumed, so that shorter-period tasks can run in between. Otherwise there is no way for T1 to meet its deadline once T3 has started.
The updated deadlines are:
T1(20,57)
T2(30,142.5)
T3(100,228)
Treated as periods, these will repeat every lcm(57, 142.5, 228) = 1140 time units, and require all 1140 of those time units to complete (100% utilization).
A small simplification: when calculating the factor, the hyperperiod cancels out. This means you don't actually need to calculate the period to get the factor:
Period = 2000
Required time = (Period / 100) * 20 + (Period / 250) * 30 + (Period / 400) * 100
f = Required time / Period = 20 / 100 + 30 / 250 + 100 / 400 = 0.57
f = Sum(Duration[i] / Period[i])
To calculate the period, you could do this:
Period(T1,T2) = lcm(100, 250) = 500
Period(T1,T2,T3) = lcm(500, 400) = 2000
where lcm(x,y) is the Least Common Multiple.
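A minimal sketch of both calculations in Python (the task list mirrors the example above; exact fractions avoid floating-point noise):

from fractions import Fraction
from functools import reduce
from math import gcd

# Task set from the question: (execution time, period)
tasks = [(20, 100), (30, 250), (100, 400)]

# f = Sum(Duration[i] / Period[i]); the hyperperiod cancels out
f = sum(Fraction(c, p) for c, p in tasks)
print(float(f))                                   # 0.57

# Hyperperiod, if you want it anyway: lcm over all periods
hyperperiod = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p in tasks))
print(hyperperiod)                                # 2000

# New constrained deadlines Di = f * Pi
print([float(f * p) for _, p in tasks])           # [57.0, 142.5, 228.0]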

Related

How to calculate execution time (speedup)

I was stuck trying to calculate the speedup. The question given was:
Question 1
If 50% of a program is enhanced by 2 times and the remaining 50% is enhanced by 4 times, then what is the overall speedup due to the enhancements? Hints: Consider that the execution time of the program on the machine before enhancement (without enhancement) is T. Then find the total execution time after the enhancements, T'. The speedup is T/T'.
The only thing I know is speedup = execution time before enhancement / execution time after enhancement. So can I assume the answer is:
Speedup = T / ((50/100 * 1/2) + (50/100 * 1/4))
Total execution time after the enhancement = T + speedup
(50/100 * 1/2) because 50% was enhanced by 2 times, and the same goes for the 4 times.
Question 2
Let us hypothetically imagine that the execution of (2/3)rd of a program could be made to run infinitely fast by some kind of improvement/enhancement in the design of a processor. How many times faster will the enhanced processor run compared with the un-enhanced (original) machine?
Can I assume that it is 150 times faster, as 100 / (2/3) = 150?
Any ideas? Thanks in advance.
Let's start with question 1.
The total time is the sum of the times for the two halves:
T = T1 + T2
Then, T1 is enhanced by a factor of two. T2 is improved by a factor of 4:
T' = T1' + T2'
= T1 / 2 + T2 / 4
We know that both T1 and T2 are 50% of T. So:
T' = 0.5 * T / 2 + 0.5 * T / 4
= 1/4 * T + 1/8 * T
= 3/8 * T
The speed-up is
T / T' = T / (3/8 T) = 8/3
Question two can be solved similarly:
T' = T1' + T2'
T1' is reduced to 0 (that part now runs infinitely fast). T2' = T2, the remaining 1/3 of T.
T' = 1/3 T
The speed-up is
T / T' = 3
Hence, the program is three times as fast as before (or two times faster).
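Both answers drop out of the same formula, T / T' = 1 / Sum(fraction_i / factor_i). A small sketch in Python (speedup is a hypothetical helper; exact fractions keep the result as 8/3 rather than 2.666...):

from fractions import Fraction

def speedup(parts):
    # parts: list of (fraction of T, enhancement factor);
    # a factor of None models a part made infinitely fast
    t_new = sum(frac / factor for frac, factor in parts if factor is not None)
    return Fraction(1) / t_new

# Question 1: 50% enhanced 2x, 50% enhanced 4x
print(speedup([(Fraction(1, 2), 2), (Fraction(1, 2), 4)]))     # 8/3

# Question 2: 2/3 infinitely fast, 1/3 unchanged
print(speedup([(Fraction(2, 3), None), (Fraction(1, 3), 1)]))  # 3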

Find maximum points out of N tasks

Exact problem statement:
Given N tasks, find the maximal points that can be achieved by finishing them
Problem Constraints:
There are T minutes for completing N tasks
Solutions can be submitted at any time, including exactly T minutes after the start
The i-th task, submitted t minutes after the start, will get
maxPoints[i] - t * pointsPerMinute[i] points
The i-th task takes requiredTime[i] minutes to solve
Input Format
Line 1: T, total minutes available to finish
Line 2: Comma separated list of maxPoints
Line 3: Comma separated list of pointsPerMinute
Line 4: Comma separated list of requiredTime
Sample Input
75
250 500 1000
2 4 8
25 25 25
Sample Output
1200
Explanation
First, solve the third task 25 minutes after the start of the contest. Get 1000 - 8 * 25 = 800 points
Second, solve the second task 50 minutes after the start of the contest. Get 500 - 4 * 50 = 300 points
Third, solve the first task 75 minutes after the start of the contest. Get 250 - 2 * 75 = 100 points
In total, get 800 + 300 + 100 = 1200 points
I was able to get the solution by permuting the tasks and calculating the points for each arrangement, but I couldn't figure out an optimized solution.
I use a greedy algorithm to solve this.
I use m for maxPoints, p for pointsPerMinute, r for requiredTime.
Consider finishing task A before task B; you get:
Point(A, B) = m[A] - p[A] * r[A] + m[B] - p[B] * (r[A] + r[B])
Finishing task B before task A, you get:
Point(B, A) = m[B] - p[B] * r[B] + m[A] - p[A] * (r[A] + r[B])
If task A should be finished before task B, Point(A, B) must be at least Point(B, A), so Point(A, B) - Point(B, A) >= 0.
Expanding and simplifying, this reduces to:
r[A] / p[A] <= r[B] / p[B]
That means: if task A should be finished before task B, then r[A] / p[A] must be at most r[B] / p[B]. So we can sort all tasks by this ratio and compute the answer, as in the sketch below.
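A short Python sketch of this greedy (max_points is my name for it; like the sample, it assumes all tasks fit within the T-minute limit, so the only question is the order):

def max_points(T, m, p, r):
    # Sort task indices by requiredTime / pointsPerMinute, ascending
    order = sorted(range(len(m)), key=lambda i: r[i] / p[i])
    t, total = 0, 0
    for i in order:
        t += r[i]                 # task i is submitted t minutes after start
        total += m[i] - p[i] * t  # points earned for task i
    return total

# Sample input from the question
print(max_points(75, [250, 500, 1000], [2, 4, 8], [25, 25, 25]))  # 1200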

Optimizing a program and calculating % of total execution time improved

So I was told to ask this here instead of StackExchange:
If I have a program P which runs on a 2 GHz machine M in 30 seconds, and it is optimized by replacing every 'raise to the power 4' operation with 3 multiply instructions, the optimized program will be P'. The CPI of multiplication is 2 and the CPI of power is 12. If there are 10^9 such operations optimized, what is the percentage of total execution time improved?
Here is what I've deduced so far.
For P, we have:
time (30s)
CPI: 12
Frequency (2GHz)
For P', we have:
CPI (6) [2*3]
Frequency (2GHz)
So I need to figure out how to calculate the time of P' in order to compare the times, but I have no idea how to achieve this. Could someone please help me out?
Program P, which runs on a 2 GHz machine M in 30 seconds, is optimized by replacing every 'raise to the power 4' operation with 3 multiply instructions. This optimized program will be P'. The CPI of multiplication is 2 and the CPI of power is 12. If there are 10^9 such operations optimized,
From this information we can compute the time needed to execute all POWER4 ("raise to the power 4") instructions: we have the total count of such instructions (all POWER4 were replaced; the count is 10^9, or 1 G). Every POWER4 instruction needs 12 clock cycles (CPI = cycles per instruction), so all POWER4 instructions were executed in 1G * 12 = 12G cycles.
A 2 GHz machine has 2G cycles per second, and there are 30 seconds of execution, so total P program execution is 2G * 30 = 60G cycles (60 * 10^9). We can conclude that program P contains other instructions too. We don't know which instructions or how many executions they have, and there is no information about their mean CPI, but we do know that the time needed to execute them is 60G - 12G = 48G cycles (total program running time minus POWER4 running time; true for simple processors). So there are X other executed instructions with mean CPI Y, such that X * Y = 48G.
So, total cycles executed for the program P is
Freq * seconds = POWER4_count * POWER4_CPI + OTHER_count * OTHER_mean_CPI
2G * 30 = 1G * 12 + X*Y
Or total running time for P:
30s = (1G * 12 + X*Y) / 2GHz
what is the percent of total execution time improved?
After replacing 1G POWER4 operations with 3 times as many MUL (multiply) instructions, we have 3G MUL operations, and the cycles needed for them are CPI * count, where MUL CPI is 2: 2 * 3G = 6G cycles. The X*Y part of P' is unchanged, so we can solve the problem.
P' time in seconds = ( MUL_count * MUL_CPI + OTHER_count * OTHER_mean_CPI ) / Frequency
P' time = (3G*2 + X*Y) / 2GHz = (6G + 48G) / 2GHz = 27 seconds
So the improvement is (30 - 27) / 30 = 10% of the total execution time.
The improvement is not as big as might be expected, because the POWER4 instructions take only part of the running time: 12G of 60G cycles. The optimization converted 12G cycles to 6G, leaving the remaining 48G cycles unchanged. Halving only part of the time does not halve the total time.
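The whole computation in a few lines of Python (a sketch; the variable names are mine):

G = 10**9
freq = 2 * G                                  # 2 GHz = 2G cycles per second

total_cycles = freq * 30                      # 60G cycles for all of P
power4_cycles = 1 * G * 12                    # 1G POWER4 ops at CPI 12
other_cycles = total_cycles - power4_cycles   # X*Y = 48G cycles

mul_cycles = 3 * G * 2                        # 3G MUL ops at CPI 2
time_p_prime = (mul_cycles + other_cycles) / freq
print(time_p_prime)                           # 27.0 seconds
print((30 - time_p_prime) / 30 * 100)         # 10.0 percent improved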

Minimize the number of trips or Group maximum possible orders

We have one distribution center (warehouse), and we are getting orders in real time whose travel times from the warehouse and from the other order locations are known.
time matrix =
      W   O1   O2   O3
W     0    5   20    2
O1    5    0   21    7
O2   20   21    0   11
O3    2    7   11    0
order time of O1= 10:00 AM
order time of O2= 10:20 AM
order time of O3= 10:25 AM
I want to club together as many orders as possible such that the delivery time of any order does not exceed 2 hours after its order time. So the goal is to reduce the number of trips (a trip is when a delivery agent goes out for delivery).
I am trying to come up with an algorithm for this. There are two competing approaches:
We can combine the orders in the sequence they arrive, as long as each order still satisfies the constraint of delivery within 2 hours of its order time.
We can modify the above approach to find the bottleneck order (the one that prevents us from clubbing more orders in the first approach), pull it out of trip 1, make it part of trip 2 (a new trip), and wait for further orders to club with trip 1 or trip 2 as appropriate.
All the orders arrive in real time. What would be the best approach to handle this situation? Let me know if you need more clarity.
A very safe and easy algorithm that is guaranteed not to exceed the maximum waiting time for any order:
Let TSP() be a function that returns an estimate of the time needed to visit the given places. The estimate is pessimistic, i.e. the actual ride time can be shorter than or equal to the estimate, but never longer. For a good start you can implement TSP() very easily in a greedy way: from each place, go to the nearest unvisited place. You can subtract the length of the longer edge incident to W to get a better estimate (the car then always leaves W by the shorter edge). If TSP() happened to be optimal, the whole algorithm presented here would also be optimal. The overall algorithm is only as good as the TSP() implementation; it depends heavily on a good estimate.
Let earliestOrderTime be the time of the earliest order not yet handled.
Repeat every minute:
If there is a new order: if s is empty, set earliestOrderTime to the current time. Add the order to the set s. Calculate t = TSP(s + W).
If current time + t >= earliestOrderTime + 2 hours: send a car on a TSP(s + W) trip and make s the empty set.
Example
For your example data it will work like this:
10:00. earliestOrderTime = 10:00. s = {O1}. t = TSP({O1, W}) = 10 - 5 = 5.
10:00 + 0:05 < 10:00 + 2:00, so we don't send a car yet, we wait.
...
10:20. s = {O1, O2}. t = 46 - 20 = 26.
10:20 + 0:26 < 10:00 + 2:00, so we wait.
...
10:25. s = {O1, O2, O3}. t = 2 + 7 + 21 + 20 - 20 = 30.
10:25 + 0:30 < 10:00 + 2:00, so we wait.
...
11:30.
11:30 + 0:30 >= 10:00 + 2:00, so we send a car to visit O3, O1, O2 and return to W. It delivers the orders at 11:32, 11:39, and 12:00, and comes back at 12:20. The customers waited 67, 99, and 100 minutes respectively.
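Here is a rough Python sketch of the whole loop on the example data (tsp_estimate and the dict layout are illustrative choices, not part of the original answer):

def tsp_estimate(dist, places):
    # Pessimistic greedy tour: from W always drive to the nearest
    # unvisited place; subtract the longer of the two edges at W, since
    # the car leaves W by the shorter edge and the 2-hour limit applies
    # to the last delivery, not the return leg.
    remaining, current, total, w_edges = set(places), 'W', 0, []
    while remaining:
        nxt = min(remaining, key=lambda p: dist[current][p])
        total += dist[current][nxt]
        if current == 'W':
            w_edges.append(dist['W'][nxt])
        current = nxt
        remaining.discard(nxt)
    total += dist[current]['W']
    w_edges.append(dist[current]['W'])
    return total - max(w_edges)

dist = {
    'W':  {'W': 0,  'O1': 5,  'O2': 20, 'O3': 2},
    'O1': {'W': 5,  'O1': 0,  'O2': 21, 'O3': 7},
    'O2': {'W': 20, 'O1': 21, 'O2': 0,  'O3': 11},
    'O3': {'W': 2,  'O1': 7,  'O2': 11, 'O3': 0},
}
orders = {'O1': 10 * 60, 'O2': 10 * 60 + 20, 'O3': 10 * 60 + 25}
MAX_WAIT = 120  # minutes

s, earliest = set(), None
for now in range(10 * 60, 13 * 60):           # one tick per minute
    for place, t_order in orders.items():
        if t_order == now:
            if not s:
                earliest = now
            s.add(place)
    if s and now + tsp_estimate(dist, s) >= earliest + MAX_WAIT:
        print('%02d:%02d send car for %s' % (now // 60, now % 60, sorted(s)))
        s = set()

On the example orders this prints a single dispatch at 11:30 for O1, O2, O3, matching the trace above.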

Basic Velocity Algorithm?

Given the following datasets for articles on my site:
Article 1
2/1/2010 100
2/2/2010 80
2/3/2010 60
Article 2
2/1/2010 20000
2/2/2010 25000
2/3/2010 23000
where column 1 is the date and column 2 is the number of pageviews for the article. What is a basic velocity calculation that can be done to determine whether an article is trending upwards or downwards over the most recent 3 days?
Caveats: an article knows only its own pageview totals, not the site-wide totals. Ideally the result is a number between 0 and 1. Any pointers to what this class of algorithms is called?
thanks!
Update: your data actually already is a list of velocities (pageviews/day). The following answer simply shows how to find the average velocity over the past three days. See my other answer for how to calculate pageview acceleration, which is probably the statistic you are really looking for.
Velocity is simply the change in a value (delta pageviews) over time:
For article 1 on 2/3/2010:
delta pageviews = 100 + 80 + 60
= 240 pageviews
delta time = 3 days
pageview velocity (over last three days) = [delta pageviews] / [delta time]
= 240 / 3
= 80 pageviews/day
For article 2 on 2/3/2010:
delta pageviews = 20000 + 25000 + 23000
= 68000 pageviews
delta time = 3 days
pageview velocity (over last three days) = [delta pageviews] / [delta time]
= 68,000 / 3
= 22,666 + 2/3 pageviews/day
Now that we know the maximum velocity, we can scale all the velocities to get relative velocities between 0 and 1 (or between 0% and 100%):
relative pageview velocity of article 1 = velocity / MAX_VELOCITY
= 80 / (22,666 + 2/3)
~ 0.0035294
~ 0.35294%
relative pageview velocity of article 2 = velocity / MAX_VELOCITY
= (22,666 + 2/3)/(22,666 + 2/3)
= 1
= 100%
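The same calculation as a small Python sketch (the article names are illustrative):

views = {
    'article 1': [100, 80, 60],
    'article 2': [20000, 25000, 23000],
}

# average velocity over the window, in pageviews/day
velocity = {name: sum(v) / len(v) for name, v in views.items()}
max_velocity = max(velocity.values())

for name, v in velocity.items():
    print(name, v, v / max_velocity)
# article 1: 80.0, relative ~0.00353
# article 2: ~22666.67, relative 1.0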
"Pageview trend" likely refers to pageview acceleration, not velocity. Your dataset actually already is a list of velocities (pageviews/day). Pageviews are non-decreasing values, so pageview velocity can never be negative. The following describes how to calculate pageview acceleration, which may be negative.
PV_acceleration(t1,t2) = (PV_velocity{t2} - PV_velocity{t1}) / (t2 - t1)
("PV" == "Pageview")
Explanation:
Acceleration is simply change in velocity divided by change in time. Since your dataset is a list of page view velocities, you can plug them directly into the formula:
PV_acceleration("2/1/2010", "2/3/2010") = (60 - 100) / ("2/3/2010" - "2/1/2010")
= -40 / 2
= -20 pageviews per day per day
Note that the data for "2/2/2010" was not used. An alternate method is to calculate three PV_accelerations (each over a date range going back only a single day) and average them. There is not enough data in your example to do this over three days, but here is how to do it for the last two:
PV_acceleration("2/3/2010", "2/2/2010") = (60 - 80) / ("2/3/2010" - "2/2/2010")
= -20 / 1
= -20 pageviews per day per day
PV_acceleration("2/2/2010", "2/1/2010") = (80 - 100) / ("2/2/2010" - "2/1/2010")
= -20 / 1
= -20 pageviews per day per day
PV_acceleration_average("2/3/2010", "2/2/2010") = -20 + -20 / 2
= -20 pageviews per day per day
This alternate method did not make a difference for the article 1 data because the acceleration did not change between the two days. Note that for evenly spaced daily data the average of the one-day accelerations telescopes to (last velocity - first velocity) / (days elapsed), exactly the endpoint method above, so the two methods agree; for article 2 both give (23000 - 20000) / 2 = 1500.
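A tiny Python sketch of the endpoint calculation (pv_acceleration is my name; it assumes one velocity sample per consecutive day):

def pv_acceleration(velocities):
    # (last velocity - first velocity) / days elapsed
    return (velocities[-1] - velocities[0]) / (len(velocities) - 1)

print(pv_acceleration([100, 80, 60]))          # -20.0 for article 1
print(pv_acceleration([20000, 25000, 23000]))  # 1500.0 for article 2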
Just a link to an article about the 'trending' algorithms that Reddit, StumbleUpon, and Hacker News (among others) use:
http://www.seomoz.org/blog/reddit-stumbleupon-delicious-and-hacker-news-algorithms-exposed
