Disk scheduling algorithms

Assume the disk head starts at track 1; there are 230 tracks (from 1 to 230); a seek takes 34 + 0.1*T milliseconds, where T is the number of tracks to move; latency is 12 milliseconds; and I/O transfer takes 3 milliseconds. Here are the requests, all in the disk queue already, and the time they arrive (starting at time 0):
arrival time(ms): 0, 21, 23, 28, 32, 45, 58, 83, 89, 109
for track: 43, 132, 34, 23, 202, 175, 219, 87, 75, 182
Compute the average time to service a request for each of the following disk scheduling algorithms: SCAN, FCFS, SSTF. Also show the order of service for each algorithm.
Answer for SCAN:
1>23>34>43>75>87>132>175>182>202>219>230
average time = 10*49 + 0.1*218 = 51.18 ms
I don't understand how they calculated the average time.
The above is the only work they showed.
Where did they get the 10 and 218 from in the average time formula?
Answer for FCFS
1>43>132>34>23>202>175>219>87>75>182
average time = 490 + (42+89+98+11+179+27+44+132+12+107)*0.1 = 56.4ms
I understand where they got (42+89+98+11+179+27+44+132+12+107)*0.1 from, but how did they get 490?

For SCAN, the total track movement is just the difference between track 1, where the head starts, and track 219, the most distant request, so the time spent moving past tracks is 0.1*(219 - 1) = 0.1*218.
Each request also incurs a seek overhead of 34 ms, latency of 12 ms, and a transfer of 3 ms, for a total of 34 + 12 + 3 = 49 ms per request; with 10 requests, that is 10*49 = 490 ms.
Thus the total time is 10*49 + 0.1*218 = 490 + 21.8 = 511.8 ms, and dividing by the 10 requests gives the average of 51.18 ms.
The 490 ms of non-movement time is the same for FCFS; only the track movement time differs.
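The arithmetic can be double-checked with a short script. This is only a sketch of the cost model stated in the question (49 ms of fixed overhead per request plus 0.1 ms per track moved), not code from the original answer:

```python
# Cost model from the question: seek overhead 34 ms + latency 12 ms
# + transfer 3 ms = 49 ms of non-movement time per request.
OVERHEAD = 34 + 12 + 3

def average_time(order, start=1):
    tracks = 0
    pos = start
    for t in order:
        tracks += abs(t - pos)  # tracks moved for this request
        pos = t
    total = len(order) * OVERHEAD + 0.1 * tracks
    return total / len(order)

scan = [23, 34, 43, 75, 87, 132, 175, 182, 202, 219]  # SCAN service order
fcfs = [43, 132, 34, 23, 202, 175, 219, 87, 75, 182]  # arrival order

print(round(average_time(scan), 2))  # 51.18
print(round(average_time(fcfs), 2))  # 56.41
```

For SCAN the head sweeps monotonically, so the track total is just 219 - 1 = 218; for FCFS the per-request moves sum to 741, giving the 564.1 ms total behind the 56.41 ms average.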

How do I count the number of occurrences that a value in a Power BI matrix > X

I am trying to create a measure that counts the number of occurrences where any employee worked in excess of 45 hours in any week. I've looked at other posted questions and can't seem to connect the dots to my specific question. The following matrix shows the total hours worked by employee by week:
Employees and Hours (the rows and values in the matrix, respectively) reside in the same table called "Power BI Upload"
Week Number (Columns in the matrix) reside in a separate table called "Date Table"
My desired total row would show:
Week 30 has 2 total occurrences (50 and 48)
Week 31 has 3 total occurrences (60, 54, 47)
Week 32 has 3 total occurrences (48, 47, 47)
Week 33 has 5 total occurrences (46, 47, 72, 64, 68)
Week 34 has 5 total occurrences (48, 55, 56, 67, 62)
I hope I am being clear. Thank you very much for your help.
Add a DAX measure to Power BI Upload to count the employees with greater than 45 hours in a week, as follows:
Over45 = CALCULATE(COUNT('Power BI Upload'[Employee Name]), FILTER('Power BI Upload', 'Power BI Upload'[Hours] > 45))
That'll allow you to take your matrix of raw hours per employee per week and turn it into a matrix showing, per week, the count of employees over 45 hours.
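For intuition only, the same per-week logic can be sketched outside of DAX in plain Python. The rows below are hypothetical, shaped like the "Power BI Upload" table; the real data lives in Power BI:

```python
from collections import Counter

# Hypothetical (employee, week, hours) rows mimicking "Power BI Upload".
rows = [
    ("Ann", 30, 50), ("Bob", 30, 44), ("Cat", 30, 48),
    ("Ann", 31, 60), ("Bob", 31, 54), ("Cat", 31, 47),
]

# Count, per week, the employees with more than 45 hours -- the same
# filter the Over45 measure applies.
over45 = Counter(week for _, week, hours in rows if hours > 45)
print(over45[30], over45[31])  # 2 3
```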

Heapify Down Method in Min Heap

Currently trying to grasp the Min Heap with Repl here: https://repl.it/#Stylebender/Minheap#index.js
Min Heap has a capacity of 5.
Once I insert the 6th element (50) we swap the positions of the elements at index 0 and index 5 and eject the smallest element from the heap leaving the heap as:
[ 50, 10, 20, 40, 30]
My specific query has to deal with Lines 39-40.
As you can see from the console log, the first time we call the trickleDown() method, min (0), which represents the index position of 50, becomes leftChild and ends up swapping positions with index 1, with the following result:
[ 10, 50, 20, 40, 30]
However, on the second call of the trickleDown() method, 50 which is at index 1 assumes the position of rightChild and swaps positions with index 4 to form the final heap as below:
[ 10, 30, 20, 40, 50]
Maybe I'm just missing something, but I'm not sure why min became leftChild in the first run and rightChild in the second run. Wouldn't 50, as the largest element in the min heap, satisfy both comparisons every time the method is invoked?
In the first call, we compare 50, 10 and 20.
min begins at 0, indicating the 50.
10 is less than 50, so min becomes 1.
20 is not less than 10, so min does not change.
We have found the minimum: 10.
In the second call, we compare 50, 40 and 30.
min begins at 1, indicating the 50.
40 is less than 50, so min becomes 3.
30 is less than 40, so min becomes 4.
We have found the minimum: 30.
It is not sufficient to find an element less than 50; we must find the minimum. To swap 50 and 20 would not produce a valid min-heap.
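To make the "find the minimum, then swap" point concrete, here is a minimal trickle-down sketch (written in Python rather than the JavaScript from the Repl); it reproduces the two swaps described above within a single call:

```python
def trickle_down(heap, i=0):
    """Sift the element at index i down until the min-heap property holds."""
    n = len(heap)
    while True:
        smallest = i
        left, right = 2 * i + 1, 2 * i + 2
        # Pick the smallest of the node and BOTH children, not merely
        # any child smaller than the node.
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            return heap
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

print(trickle_down([50, 10, 20, 40, 30]))  # [10, 30, 20, 40, 50]
```

The first pass swaps 50 with 10 (the smaller child), the second swaps 50 with 30; swapping 50 with 20 at the first step would have left 20 above 10, breaking the heap.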

Finding largest difference in array of compass headings

I'm trying to have the "range" of compass headings over the last X seconds. Example: Over the last minute, my heading has been between 120deg and 140deg on the compass. Easy enough right? I have an array with the compass headings over the time period, say 1 reading every second.
[ 125, 122, 120, 125, 130, 139, 140, 138 ]
I can take the minimum and maximum values and there you go. My range is from 120 to 140.
Except it's not that simple. Take, for example, a heading that has shifted from 10 degrees to 350 degrees (i.e. it "passed" through North, changing by -20 degrees).
Now my array might look something like this:
[ 9, 10, 6, 3, 358, 355, 350, 353 ]
Now the min is 3 and the max is 358, which is not what I need :( I'm looking for the most "right hand" (clockwise) value and the most "left hand" (counter-clockwise) value.
Only way I can think of, is finding the largest arc along the circle that includes none of the values in my array, but I don't even know how I would do that.
Would really appreciate any help!
Problem Analysis
To summarize the problem, it sounds like you want to find both of the following:
The two readings that are closest together (for simplicity: in a clockwise direction) AND
Contain all of the other readings between them.
So in your second example, 9 and 10 are only 1° apart, but they do not contain all the other readings. Conversely, traveling clockwise from 10 to 9 would contain all of the other readings, but they are 359° apart in that direction, so they are not closest.
In this case, I'm not sure if using the minimum and maximum readings will help. Instead, I'd recommend sorting all of the readings. Then you can more easily check the two criteria specified above.
Here's the second example you provided, sorted in ascending order:
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
If we start from the beginning, we know that traveling from reading 3 to reading 358 will encompass all of the other readings, but they are 358 - 3 = 355° apart. We can continue scanning the results progressively. Note that once we circle around, we have to add 360 to properly calculate the degrees of separation.
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
*--------------------------> 358 - 3 = 355° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
-> *----------------------------- (360 + 3) - 6 = 357° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
----> *-------------------------- (360 + 6) - 9 = 357° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
-------> *----------------------- (360 + 9) - 10 = 359° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
----------> *------------------- (360 + 10) - 350 = 20° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
--------------> *-------------- (360 + 350) - 353 = 357° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
-------------------> *--------- (360 + 353) - 355 = 358° separation
[ 3, 6, 9, 10, 350, 353, 355, 358 ]
------------------------> *---- (360 + 355) - 358 = 357° separation
Pseudocode Solution
Here's a pseudocode algorithm for determining the minimum degree range of reading values. There are definitely ways it could be optimized if performance is a concern.
// Somehow, we need to get our reading data into the program, sorted
// in ascending order.
// If readings are always whole numbers, you can use an int[] array
// instead of a double[] array. If we use an int[] array here, change
// the "minimumInclusiveReadingRange" variable below to be an int too.
double[] readings = populateAndSortReadingsArray();
if (readings.length == 0)
{
    // Handle case where no readings are provided. Show a warning,
    // throw an error, or whatever the requirement is.
}
else
{
    // We want to track the endpoints of the smallest inclusive range.
    // These values will be overwritten each time a better range is found.
    int minimumInclusiveEndpointIndex1;
    int minimumInclusiveEndpointIndex2;
    double minimumInclusiveReadingRange; // This is convenient, but not necessary.
                                         // We could determine it using the
                                         // endpoint indices instead.

    // Check the range of the greatest and least readings first. Since
    // the readings are sorted, the greatest reading is the last element.
    // The least reading is the first element.
    minimumInclusiveReadingRange = readings[readings.length - 1] - readings[0];
    minimumInclusiveEndpointIndex1 = 0;
    minimumInclusiveEndpointIndex2 = readings.length - 1;

    // Potential to skip some processing. If the ends are 180 or fewer
    // degrees apart, they already represent the minimum inclusive reading
    // range, and the for loop below could be skipped.
    for (int i = 1; i < readings.length; i++)
    {
        if ((360.0 + readings[i-1]) - readings[i] < minimumInclusiveReadingRange)
        {
            minimumInclusiveReadingRange = (360.0 + readings[i-1]) - readings[i];
            minimumInclusiveEndpointIndex1 = i;
            minimumInclusiveEndpointIndex2 = i - 1;
        }
    }

    // Most likely, there will be some different readings, but there is an
    // edge case of all readings being the same:
    if (minimumInclusiveReadingRange == 0.0)
    {
        print("All readings were the same: " + readings[0]);
    }
    else
    {
        print("The range of compass readings was: " + minimumInclusiveReadingRange +
              " spanning from " + readings[minimumInclusiveEndpointIndex1] +
              " to " + readings[minimumInclusiveEndpointIndex2]);
    }
}
There is one additional edge case that this pseudocode algorithm does not cover, and that is the case where there are multiple minimum inclusive ranges...
Example 1: [0, 90, 180, 270] which has a range of 270 (90 to 0/360, 180 to 90, 270 to 180, and 0 to 270).
Example 2: [85, 95, 265, 275] which has a range of 190 (85 to 275 and 265 to 95)
If it's necessary to report each possible pair of endpoints that create the minimum inclusive range, this edge case would increase the complexity of the logic a bit. If all that matters is determining the value of the minimum inclusive range or it is sufficient to report just one pair that represents the minimum inclusive range, the provided algorithm should suffice.
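Here is a runnable Python sketch of the same scan (the function and variable names are mine, not from the pseudocode); it returns the size of the minimum inclusive range along with its clockwise start and end readings:

```python
def min_inclusive_range(readings):
    # Sort ascending, then compare the plain min..max span against every
    # wrap-around span that starts at r[i] and ends clockwise at r[i-1].
    r = sorted(readings)
    best = (r[-1] - r[0], r[0], r[-1])  # (span, clockwise start, end)
    for i in range(1, len(r)):
        span = (360 + r[i - 1]) - r[i]  # crossing 0/360 once
        if span < best[0]:
            best = (span, r[i], r[i - 1])
    return best

print(min_inclusive_range([9, 10, 6, 3, 358, 355, 350, 353]))
# (20, 350, 10)
print(min_inclusive_range([125, 122, 120, 125, 130, 139, 140, 138]))
# (20, 120, 140)
```

Both examples from the question come out as expected: a 20° arc from 350 clockwise to 10, and a 20° arc from 120 to 140.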

Print the row and column wise sorted 2 D matrix in a sorted order

Given an n x n matrix, where every row and column is sorted in non-decreasing order. Print all elements of matrix in sorted order.
Example:
Input:
mat[][] = { {10, 20, 30, 40},
{15, 25, 35, 45},
{27, 29, 37, 48},
{32, 33, 39, 50},
};
Output:
(Elements of matrix in sorted order)
10 15 20 25 27 29 30 32 33 35 37 39 40 45 48 50
I am unable to figure out how to do this. I know we could copy the 2-D matrix into a single array and apply a sort function, but I need space-optimized code.
Using a Heap would be a good idea here.
Please refer to the following for a very similar question:
http://www.geeksforgeeks.org/kth-smallest-element-in-a-row-wise-and-column-wise-sorted-2d-array-set-1/
Though the problem in the link above is different, the same approach can be used for the problem you specify. Instead of looping k times as the link explains, you need to visit all elements in the matrix, i.e. you should loop until the heap is empty.
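As a sketch of that heap approach in Python (using the standard heapq module): seed the heap with the first element of each row, and each time you pop an element from row r, push that row's next element. This uses O(n) extra space for an n x n matrix:

```python
import heapq

def sorted_elements(mat):
    # Seed the heap with (value, row, column) for each row's first element.
    heap = [(row[0], r, 0) for r, row in enumerate(mat)]
    heapq.heapify(heap)
    out = []
    while heap:
        val, r, c = heapq.heappop(heap)
        out.append(val)
        if c + 1 < len(mat[r]):  # push the next element of the same row
            heapq.heappush(heap, (mat[r][c + 1], r, c + 1))
    return out

mat = [[10, 20, 30, 40],
       [15, 25, 35, 45],
       [27, 29, 37, 48],
       [32, 33, 39, 50]]
print(*sorted_elements(mat))
# 10 15 20 25 27 29 30 32 33 35 37 39 40 45 48 50
```

Note this k-way merge only relies on the rows being sorted; the column-sortedness is what makes the kth-smallest variant in the link faster, but it is not needed to emit the full sorted order.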

Evenly distributing a timeline over a given length, even when the length is less than the number of points (compression)

I need to render a horizontal calendar and render events on it. So I get two dates and the width in pixels. I want to distribute the days between the two provided dates over those pixels and maintain a minimum distance between the visual points.
For instance, I have 365 days (each day should consume at least 10 pixels) and I need to distribute them over 300 pixels. So I need to "pack" them into groups so that each x position represents multiple dates. How can I calculate this, mathematically speaking?
i.e.
(days)
1/1 8/1 16/1 24/1 2/2 10/2 18/2 ......
in the above example for instance, how can I calculate that I need to "pack/skip" the 7 days?
What I need in the end is to produce an array with the dates (days) and the x offset where it should be positioned in the horizontal axis.
i.e.
1/1/2013 = 0
2/1/2013 = 0
3/1/2013 = 0
4/1/2013 = 0
5/1/2013 = 0
6/1/2013 = 0
7/1/2013 = 0
8/1/2013 = 10
9/1/2013 = 10
10/1/2013 = 10
....
You have 300 pixels to use, and each 'package' should be at least 10 pixels, so you have 300/10 = 30 packages. You have 365 days to distribute over 30 packages, so that's 365/30 = 12.17 days per package, or simply 12.
The same logic can be used to calculate the amount of days needed in a package if you have a different amount of pixels to use.
I hope that this was what you were asking for.
Jannes
Edit: I have just read your edit so I will alter my reply a bit here.
If you have converted your date to a number between 1 and 365 you can simply calculate each element of your array days like this.
days[i]=floor(i/12)*10
Where the 12 came from above calculations.
date_width = 10
display_width = 300
date_range = 365
num_of_dates = display_width // date_width
date_offsets = [x * date_range // num_of_dates for x in range(num_of_dates)]
gives dates for every 10 "pixels"
[0, 12, 24, 36, 48, 60, 73, 85, 97, 109, 121, 133, 146, 158, 170, 182, 194, 206, 219, 231, 243, 255, 267, 279, 292, 304, 316, 328, 340, 352]
If, seeing that you have about 12 days between data points, you want to round up to 2 weeks:
date_offset = 14
date_offsets = [x * date_offset for x in range(date_range//date_offset)]
date_positions = [display_width * o // date_range for o in date_offsets]
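Putting the pieces together, here is a small sketch (variable names are mine) that produces what the question asked for: an x offset for every day index, using the 12-day packages computed above:

```python
display_width = 300  # pixels available
date_width = 10      # minimum pixels per visual step
date_range = 365     # days to place

# 365 days over 300/10 = 30 packages -> 12 days per package.
days_per_package = date_range * date_width // display_width

# x offset for each 0-based day index: days 0..11 -> 0, days 12..23 -> 10, ...
offsets = [(i // days_per_package) * date_width for i in range(date_range)]
print(offsets[0], offsets[11], offsets[12], offsets[364])  # 0 0 10 300
```

The first 12 days share offset 0, the next 12 share offset 10, and so on, matching the grouped layout sketched in the question (there with 7-day groups).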
