I have an algorithm and three sets of test data, each set double the size of the previous one.
Here are the run times in seconds for each data set:
Size , time(s):
20 , 0.001
40 , 0.016
80 , 0.047
From this, how do I get the order-of-growth classification?
Do I need to find the cost of the algorithm first, by evaluating each line?
Or do I plot it and make a log-log graph?
It's the A* pathfinding algorithm. I am starting with a grid size of 20 cells and doubling the grid size each time. Does the data double or quadruple? The grid has 20*20 = 400 cells, then 40*40 = 1600 cells, and 400 is not half of 1600, so by doubling the "grid size" am I doubling the data or quadrupling it?
Any help will be greatly appreciated.
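A quick way to get a rough answer directly from timings like these is the doubling-ratio (empirical order of growth) method: for consecutive runs, b = log(t2/t1) / log(N2/N1), and then t ~ N^b. A minimal MATLAB/Octave sketch, assuming the input size is the cell count (400, 1600, 6400) rather than the grid side:
N = [400 1600 6400];               % cells per grid: 20^2, 40^2, 80^2
t = [0.001 0.016 0.047];           % measured run times, in seconds
b = diff(log(t)) ./ diff(log(N))   % empirical exponents between consecutive runs
With a 1 ms measurement in the mix these numbers are dominated by timer noise, so more repetitions and larger grids are needed before reading a slope off a log-log plot.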
I am trying to figure out what my overall average speed is while on a bike.
I have miles traveled in column 1 and then avg speed in miles per hour in column 2.
The problem is that each speed is associated with a different distance, so I can't just average all of the speeds to get an accurate result.
How do I find the overall average speed, properly weighted, so that I get the actual average speed?
Distance (miles) , Avg Speed (mph):
9.82 , 18.4
8.69 , 12.8
8.43 , 16.7
9.07 , 18.1
8.16 , 16.2
12.41 , 19.0
7.22 , 13.5
3.13 , 16.0
10.28 , 10.9
9.79 , 16.8
10.44 , 15.6
The overall average speed should be calculated as
Vavg_total = Distance_total / Time_total
We can find the total distance as the sum of the stage distances, and the total time as the sum of the stage times:
Distance_total = Sum(Distance[i])
Time_total = Sum(Distance[i] / AvgSpeed[i])
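As a quick numeric check with the data above (a MATLAB/Octave sketch):
d = [9.82 8.69 8.43 9.07 8.16 12.41 7.22 3.13 10.28 9.79 10.44];  % miles per stage
v = [18.4 12.8 16.7 18.1 16.2 19.0 13.5 16.0 10.9 16.8 15.6];     % mph per stage
t = d ./ v;                    % hours spent on each stage
v_avg = sum(d) / sum(t)        % overall average speed, roughly 15.5 mph
The result sits below the naive mean of the speeds (about 15.8 mph), because the slower stages take disproportionately more time.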
I have 30 elements with different weights, stored as numbers like 50, 100, 300, etc. When a query with a "limit" argument is received, I need to respond with an array of randomly collected elements whose total weight is not above the queried limit, with some deviation allowed. Since the elements are collected randomly, multiple variations for the same query are also allowed.
In the examples below the deviation is set to 30.
Example 1: query with limit 560. I respond with elements of weights 300, 150, 50, 40, summing to 540.
Example 2: query with limit 160. I respond with elements of weights 150, 40, summing to 190.
Is there some previously invented algorithm for this kind of problem, or do I have to develop one myself?
Thanks much
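I am not aware of a single canonical algorithm for exactly this; it is essentially a randomized variant of subset-sum / knapsack selection. A minimal sketch of one possible approach (MATLAB/Octave; the function name and the greedy strategy are my own choices, not an established method): visit the elements in random order and keep each one that still fits under limit + deviation.
function subset = pick_random_subset(weights, limit, deviation)
    order  = randperm(numel(weights));   % visit the elements in a random order
    subset = [];
    total  = 0;
    for idx = order
        if total + weights(idx) <= limit + deviation
            subset(end+1) = weights(idx);   % keep it if it still fits
            total = total + weights(idx);
        end
    end
end
Called with the 30 weights, limit 560 and deviation 30, one possible output is the 300, 150, 50, 40 combination from Example 1; repeated calls give different combinations.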
I have a three-dimensional cell array that holds images (i.e. images = cell(10,4,5)), and each cell holds an image; the images come in different sizes. The sizes are not too important in terms of what I'm trying to achieve. I would like to know if there is an efficient way to compute the sharpness of each of these cells (total cells = 10*4*5 = 200). I need to compute the sharpness of each one using the following function:
If it matters:
40 cell blocks contain images of size 240 X 320
40 cell blocks contain images of size 120 X 160
40 cell blocks contain images of size 60 X 80
40 cell blocks contain images of size 30 X 40
40 cell blocks contain images of size 15 X 20
which totals to 200 cells.
%% Sharpness Estimation From Image Gradients
% Estimate sharpness using the gradient magnitude:
% the sum of all gradient norms divided by the number of pixels
% gives the sharpness metric.
function [sharpness] = get_sharpness(G)
    [Gx, Gy] = gradient(double(G));
    S = sqrt(Gx.*Gx + Gy.*Gy);
    % Normalise by the actual number of pixels instead of a fixed 480*640,
    % since the images have different sizes.
    sharpness = sum(S(:)) / numel(G);
end
Currently I am doing the following:
% Preallocate the output and store the result for every cell,
% instead of overwriting 'sharpness' on each iteration.
sharpness = zeros(10, 4, 5);
for i = 1:10
    for j = 1:4
        for k = 1:5
            sharpness(i,j,k) = get_sharpness(images{i,j,k});
        end
    end
end
The sharpness function isn’t anything fancy. I just have a lot of data hence it takes a long time to compute everything.
Currently I am using a nested for loop that iterates through each cell block. Hope someone can help me find a better solution.
(P.S. This is my first time asking a question, so if anything is unclear please ask. THANK YOU)
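One way to avoid writing the nested loops by hand (only a sketch; cellfun still calls the function once per cell, so this is mainly a readability change rather than a guaranteed speed-up):
% Apply get_sharpness to every cell; returns a 10x4x5 numeric array.
sharpness = cellfun(@get_sharpness, images);
If it is still too slow, the time is most likely going into the 40 largest (240 x 320) images, so profiling before optimizing further would be worthwhile.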
In the Big Picture section of the page linked here, a table is given comparing different combinations of three different functions. If the function on the left is y = f(x), then what are the functions Average, Difference, Weighted Sum, and 4% Threshold? I need the mathematical equation in terms of y.
Everything is explained on that page:
Here are some simple, boring functions which, when repeatedly combined with smaller and smaller versions of themselves, create very interesting patterns. The table below shows you the basic source pattern (left), and combinations of that pattern with smaller versions of itself using various combination methods.
Average (1/n) - This is simply the average of all of the scales being used, 'n' is the total number of scales. So if there are 6 scales, each scale contributes about 16% (1/6th) of the final value.
Difference - This uses the difference between the color values of each scale as the final texture color.
Weighted Sum (1/2^n) - The weighted sum is very similar to the average, except the larger scales have more weight. As 'n' increases, the contribution of that scale is lessened. The smallest scales (highest value of n) have the least effect. This method is the most common and typically the most visually pleasing.
4% Threshold - This is a version of the Weighted Sum where anything below 48% gray is turned black, and anything above 52% gray is turned white.
Let us take the Average and the checker function. You are averaging a number of different repeated images: 6 in their example, but 3 in the following example.
So each pixel of the output image is the average of the pixel values from the other images. You can have as many of these images as you want, and they are always built the same way: the image at level n is made of 4 tiles, each of which is the image at level n-1 scaled to a fourth of its size. Then from all these pictures you apply one of the above functions to get a single one.
Is it clearer now? It is generally hard to give a function f that defines each image. The "compounding" functions, however, can be defined, even though they take n inputs for one output (y = f(x1, x2, ..., xn)). In pseudocode and math:
Average (1/n) - For n levels, final_pixel[x][y] = sum for i from 1 to n of image_i[x][y]/n
Difference - For n levels, final_pixel[x][y] = sum for i from 2 to n of (image_i[x][y] - image_i-1[x][y]) -- not entirely sure about this one.
Weighted Sum (1/2^n) - For n levels, final_pixel[x][y] = sum for i from 1 to n of image_i[x][y]/(2**i), so the contribution halves at each smaller scale
4% Threshold - For n levels,
    value = sum for i from 1 to n of image_i[x][y]/(2**i)
    if value/max_value > 0.52 then final_pixel[x][y] = white
    else if value/max_value < 0.48 then final_pixel[x][y] = black
    else final_pixel[x][y] = value
Where 2**i means 2 to the power of i.
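A hedged sketch of those four combination rules in MATLAB/Octave, assuming the n scales have already been rendered as same-size grayscale images in a cell array called levels, with levels{1} the largest scale and pixel values in [0, 1] (the name and the value range are my assumptions):
n    = numel(levels);
sz   = size(levels{1});
avg  = zeros(sz);                 % Average (1/n)
wsum = zeros(sz);                 % Weighted Sum (1/2^i)
for i = 1:n
    avg  = avg  + levels{i} / n;
    wsum = wsum + levels{i} / 2^i;
end
dif = zeros(sz);                  % Difference, as in the pseudocode above
for i = 2:n
    dif = dif + (levels{i} - levels{i-1});
end
thresh = wsum;                    % 4% Threshold applied to the weighted sum
thresh(wsum > 0.52 * max(wsum(:))) = 1;   % above 52% gray -> white
thresh(wsum < 0.48 * max(wsum(:))) = 0;   % below 48% gray -> black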
This problem is based on a puzzle by Joel Spolsky from 2001.
A guy "gets a job as a street painter, painting the dotted lines down the middle of the road." On the first day he completes 300 yards, on the second 150, and on the third even less. The boss is furious and demands an explanation.
"I can't help it," says the guy. "Every day I get farther and farther away from the paint can!"
My question is, can you estimate the distance he covered in the 3rd day?
One of the comments in the linked thread does derive a precise solution, but my question is about a good enough estimation -- say, 10% -- that is easy to make from the general principles.
clarification: this is about a certain method in analysis of algorithms, not about developing an algorithm, nor code.
There are a lot of unknowns here: his walking speed, his painting speed, how long the paint in the brush lasts...
But clearly there are two processes going on here. One is quadratic - it's the walking to and fro between the paint can and the painting point. The other is linear - it's the process of painting, itself.
Thinking about the 10th or even the 100th day, it is clear that the linear component becomes negligible, and the process becomes very nearly quadratic - the walking takes almost all the time. During the first few minutes of the first day, on the contrary, it is close to being linear.
We can thus say that the time t as a function of the distance s follows a power law t ~ s^a with a changing coefficient a = 1.0 ... 2.0. This also means that s ~ t^b, b = 1/a.
Applying the empirical orders of growth analysis:
The b coefficient between day 1 and day 2 is approximated as
b(1,2) = log (450/300) / log 2 = 0.585 ;; and so,
a(1,2) = 1/b(1,2) = 1/0.585 = 1.71
Just as expected, the a coefficient is below 2. Going for the time period between day 2 and day 3, we can set it approximately to the middle value between 1.71 and 2.0,
a(2,3) = 1.85 ;; a = 1.0 .... 2.0
b(2,3) = 0.54 ;; b = 1.0 .... 0.5
s(3) = s(2) * (3/2)^b(2,3)
= 450 * (3/2)^0.54
= 560 yards
Thus the distance covered in the third day can be estimated as 560 - 450 = 110 yards.
What if the a coefficient had the maximum possible value, 2.0, already (which is impossible)? Then, 450*(3/2)^0.5 = 551 yards. And for the other extreme, if it were the same 1.71 (which it clearly can't be, either), 450*(3/2)^0.585 = 570.
This means that the estimate of 110 yards is plausible, with an error of less than 10 yards on either side.
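The same arithmetic as a MATLAB/Octave sketch (the 1.85 value is the midpoint guess from the reasoning above):
s    = [300 450];                  % cumulative yards after day 1 and day 2
b12  = log(s(2)/s(1)) / log(2)     % ~0.585, so a(1,2) = 1/b12 ~ 1.71
b23  = 1 / 1.85;                   % guessed exponent for the day 2 -> day 3 interval
s3   = s(2) * (3/2)^b23            % ~560 cumulative yards after day 3
day3 = s3 - s(2)                   % ~110 yards painted on the third day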
Considering four assumptions:
painting speed = infinity
walking speed = x
he can paint only an infinitesimally small stretch with one brush stroke
he leaves his can at the starting point
The distance he walks to paint the stretch dy of road at distance y = 2y dy (a round trip to the can per unit painted)
Total distance he walks = integral of 2y dy = y^2
Total time to paint out to distance y = y^2/x
Time taken to paint 300 yards = 1 day
(300)^2/x = 1
x = 90000 yards/day
Total time to paint out to distance y = y^2/90000
After the second day: (y/300)^2 = 2
y = 300*2^(1/2) = 424
Day 1 = 300
Day 2 = 424-300 = 124
Day 3 = 300*3^(1/2)-424 = 520 - 424 = 96
Answer: 300 / 124 / 96 yards, assuming 300 yards on the first day.
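Under this model the cumulative distance after d days is 300*sqrt(d), which a quick MATLAB/Octave check reproduces:
s     = 300 * sqrt(1:3);    % cumulative yards after each day: ~300, ~424, ~520
daily = diff([0 s])         % yards painted on each successive day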