I am trying to figure out what my overall average speed is while on a bike.
I have miles traveled in column 1 and then avg speed in miles per hour in column 2.
The problem is that I have different distances associated with the differing speeds, so I can't just average all of the speeds to get an accurate average.
How do I find the overall average speed, weighted, so that I have the actual avg speed?
Distance, Avg Speed
9.82, 18.4
8.69, 12.8
8.43, 16.7
9.07, 18.1
8.16, 16.2
12.41, 19
7.22, 13.5
3.13, 16
10.28, 10.9
9.79, 16.8
10.44, 15.6
How should the overall average speed be calculated? As
Vavg_total = Distance_total / Time_total
We can find the total distance as the sum of the stage distances, and the total time as the sum of the stage times:
Distance_total = Sum(Distance[i])
Time_total = Sum(Distance[i] / AvgSpeed[i])
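For reference, a minimal Python sketch of the same calculation, using the distances and speeds from the table above:

distances = [9.82, 8.69, 8.43, 9.07, 8.16, 12.41, 7.22, 3.13, 10.28, 9.79, 10.44]
speeds    = [18.4, 12.8, 16.7, 18.1, 16.2, 19.0, 13.5, 16.0, 10.9, 16.8, 15.6]

# overall average speed = total distance / total time,
# where each leg's time is its distance divided by its average speed
total_distance = sum(distances)
total_time = sum(d / v for d, v in zip(distances, speeds))
print(total_distance / total_time)   # roughly 15.5 mph for this data

In a spreadsheet the equivalent is the sum of the distance column divided by the sum of each row's distance/speed (something like =SUM(A2:A12)/SUMPRODUCT(A2:A12/B2:B12), adjusting the ranges to your sheet).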
I'm having trouble finding the formula for percentages
I have to calculate the employee bonus
E.g. on April 8, 2022 the employee achieved 90.13% of the standard, and on April 4, 2022 it was 108.54%,
where 90% corresponds to a 3.75% bonus and a maximum of 120% corresponds to a 15% bonus; below 90% of the norm, the employee will not receive a bonus.
I need to find a formula that will calculate how much the employee's bonus will be
https://docs.google.com/spreadsheets/d/1xtf9ZUecahZTnB6dNKQAKMO7e1qTmv6qbNHsQ0wWpI0/edit#gid=51882774
89.99% = 0%
90% = 3.75%
90.13% = ???
108.54% = ???
120% = 15%
130% = 15%
140% = 15%
150% = 15%
Can anyone help me with this?
(screenshot of the spreadsheet omitted)
My browser won't view the image for some reason, so in this answer I'm assuming that the 'Percentage of standard developed' is column A, please change cell references to suit.
In cell B2 (or a different column), enter the formula:
=IFS(A2<90%, 0, A2>120%, 15%, TRUE, A2*37.5%-30%)
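For the two example days in the question, the same piecewise-linear rule works out as below (a minimal Python sketch of the same logic, independent of any sheet layout):

def bonus(pct_of_norm):
    # no bonus below 90%, linear from 3.75% at 90% to 15% at 120%, capped at 15%
    if pct_of_norm < 0.90:
        return 0.0
    if pct_of_norm > 1.20:
        return 0.15
    return pct_of_norm * 0.375 - 0.30

print(bonus(0.9013))   # ~0.0380, i.e. about a 3.80% bonus
print(bonus(1.0854))   # ~0.1070, i.e. about a 10.70% bonus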
Here is a dynamic formula for covering the range you specified. It will help you understand how to come up with the numbers. Given the conditional formatting, I think you already know when not to apply it.
=$A2+(B1-$A1)*($E2-$A2)/($E1-$A1)
Basically you start with the lower bound for the bonus, 3.75, then you calculate how many steps you took from 90 and multiply the number of steps by the ratio of the bonus range (15 - 3.75) to the step range (120 - 90).
I have an algorithm with 3 different test data sets that double in size each time.
Here is the run time in seconds for each test data:
Size , time(s):
20 , 0.001
40 , 0.016
80 , 0.047
From this, how do I get the order-of-growth classification?
Do I need to find the cost of the algorithm first? By evaluating each line?
Or do I plot it and make a log-log graph?
It's the A* pathfinding algorithm. I am starting with a grid size of 20 cells and doubling the grid size each time. Does the data double or quadruple? Since the grid has 20*20 cells, 20*20 = 400 and 40*40 = 1600, and 400 is not half of 1600, so by doubling the "grid size" am I doubling the data or quadrupling it?
Any help will be greatly appreciated.
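For what it's worth, the empirical exponent can be read straight off the timings: if t(n) ~ n^a and the size doubles each time, then a ≈ log2(t(2n)/t(n)). A small Python sketch using the measurements above (note these exponents are relative to the grid side; relative to the number of cells, which quadruples each time, they would be half as large):

import math

times = {20: 0.001, 40: 0.016, 80: 0.047}   # size -> seconds, from the table above
print(math.log2(times[40] / times[20]))     # 4.0
print(math.log2(times[80] / times[40]))     # about 1.55

With only three points, and the smallest one likely dominated by measurement noise, these are rough indicators at best.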
In the Big Picture section of the page here, a table is given comparing different combinations of 3 different functions. Let the function on the left be y = f(x); then what are the functions Average, Difference, Weighted Sum, and 4% Threshold? I need the mathematical equations in terms of y.
Everything is explained on that page:
Here are some simple, boring functions which, when repeatedly combined with smaller and smaller versions of themselves, create very interesting patterns. The table below shows you the basic source pattern (left), and combinations of that pattern with smaller versions of itself using various combination methods.
Average (1/n) - This is simply the average of all of the scales being used, 'n' is the total number of scales. So if there are 6 scales, each scale contributes about 16% (1/6th) of the final value.
Difference - This uses the difference between the color values of each scale as the final texture color.
Weighted Sum (1/2^n) - The weighted sum is very similar to the average, except the larger scales have more weight. As 'n' increases, the contribution of that scale is lessened. The smallest scales (highest value of n) have the least effect. This method is the most common and typically the most visually pleasing.
4% Threshold - This is a version of the Weighted Sum where anything below 48% gray is turned black, and anything above 52% gray is turned white.
Let us take the Average and the checker function. You are averaging a number of repeating different images, 6 in their example, but 3 in the following example:
So each pixel of the output image is the average value of the pixel values from the other images. You can have as many of these images as you want, and they are always built the same way: the image at level n is made of 4 tiles which are the image at level n-1 scaled to a fourth of its size. Then from all these pictures you apply one of the above functions to get only one.
Is it clearer now? It is generally hard to give a function f that defines each image. However, the "compounding" functions can be defined even though there are n inputs (xs) for 1 output (y = f(x1, x2, ..., xn)). In pseudocode and math:
Average (1/n) - For n levels, final_pixel[x][y] = sum for i from 1 to n of image_i[x][y]/n
Difference - For n levels, final_pixel[x][y] = sum for i from 2 to n of image_i[x][y] - image_i-1[x][y] -- Not entirely sure about this one.
Weighted Sum (1/2^n) - For n levels, final_pixel[x][y] = sum for i from 1 to n of image_i[x][y]/(2**i)
4% Threshold - For n levels,
value = sum for i from 1 to n of image_i[x][y]/(2**i)
if value/max_value > .52 then final_pixel[x][y]=white
else if value/max_value < .48 then final_pixel[x][y]=black;
else final_pixel[x][y]=value
Where 2**i is 2 to the power of i.
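A minimal sketch in Python, assuming NumPy arrays of gray values in [0, 1], all the same shape and ordered from the largest scale to the smallest, of the Weighted Sum and 4% Threshold combinations described above:

import numpy as np

def weighted_sum(images):
    # scale i contributes with weight 1/2^i, so larger scales dominate
    return sum(img / 2 ** i for i, img in enumerate(images, start=1))

def threshold_4pct(images):
    value = weighted_sum(images)
    max_value = sum(1 / 2 ** i for i in range(1, len(images) + 1))
    out = value.copy()
    out[value / max_value > 0.52] = 1.0   # brighter than 52% gray -> white
    out[value / max_value < 0.48] = 0.0   # darker than 48% gray -> black
    return out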
I'm looking for an algorithm that would be able to find the cheapest and most efficient way to buy resources.
Example data (Let's base this on rocks that contain minerals)
Rock A (Contains 300 units of iron, 200 units of copper, 500 units of silver)
Rock B (Contains 150 units of iron, 400 units of copper, 100 units of silver)
Rock C (Contains 180 units of iron, 300 units of copper, 150 units of silver)
Rock D (Contains 200 units of iron, 350 units of copper, 80 units of silver)
Rock E (Contains 220 units of iron, 150 units of copper, 400 units of silver)
Rock F (Contains 30 000 units of iron, 150 units of copper, 400 units of silver)
Each unit costs 1, so a rock costs the sum of the units inside it.
Cases:
First case needs 2600 units of Copper
Second case needs 5000 units of Iron
Third case needs 4600 units of Silver
What algorithm could I use to estimate which types of rocks are needed to pay the lowest price (i.e. have as little waste as possible)?
For that case I came up with an algorithm that would calculate, for each rock, the ratio of wasted vs. needed materials.
Still, that ratio could lead me to getting rock F in the case of iron, since that would be the cheapest ratio, but the overall value of the stone is big and the requirement could be achieved with lower-value stones, as I don't need 30 000 units of iron.
Secondly, and way more complex: combine all 3 cases and get the best combination of stones to fit all requirements at the lowest price (least waste).
This is the unbounded Knapsack problem but instead of maximization, you need minimization. The amount of resource you need is the "weight" and the cost is the "value".
These are the re-written properties:
m[0] = 0
m[w] = min(v[i] + m[w - w[i]]) over all i with w[i] <= w
Where m[j] is the solution for j amount of resource and v[i] is the cost of the ith rock.
Here is some pseudocode:
m[0] = 0
for i = 1 to W:   # W is your target amount of resource, i.e. 2600, 5000 or 4600
    minv = max_value_possible
    # rocks is the vector with the <cost, resource> pairs of each rock,
    # e.g. <650, 150> for Rock B and iron
    for r in rocks:
        if r.second <= i: minv = min(minv, m[i - r.second] + r.first)
    m[i] = minv
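A runnable Python version of the same DP, using the iron contents from the example rocks (the costs below are just the sums of each rock's units from the table; the remaining amount is clamped at zero so that a rock which overshoots the target is still allowed):

def min_cost(target, rocks):
    # rocks: list of (cost, resource) pairs; m[i] = cheapest way to get at least i units
    INF = float("inf")
    m = [0.0] * (target + 1)
    for i in range(1, target + 1):
        best = INF
        for cost, resource in rocks:
            best = min(best, m[max(0, i - resource)] + cost)
        m[i] = best
    return m[target]

# (cost, iron) pairs for rocks A-F; second case: 5000 units of iron
rocks_iron = [(1000, 300), (650, 150), (630, 180), (630, 200), (770, 220), (30550, 30000)]
print(min_cost(5000, rocks_iron))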
Knapsack problem
The greedy approach you're talking about will give you a suboptimal solution.
In my opinion, the best way is to follow your first idea: the percentage of a mineral in relation to the overall amount gives you the best result.
For example if you search for the mineral iron:
Rock A: 300/1000 = 30% iron
Rock F: 30000 / 30550 = 98.2% iron
This problem is based on a puzzle by Joel Spolsky from 2001.
A guy "gets a job as a street painter, painting the dotted lines down the middle of the road." On the first day he finishes up 300 yards, on the second - 150, and on the 3rd even less so. The boss is furious and demands an explanation.
"I can't help it," says the guy. "Every day I get farther and farther away from the paint can!"
My question is, can you estimate the distance he covered on the 3rd day?
One of the comments in the linked thread does derive a precise solution, but my question is about a good enough estimation -- say, 10% -- that is easy to make from the general principles.
Clarification: this is about a certain method in the analysis of algorithms, not about developing an algorithm or writing code.
There are a lot of unknowns here - his walking speed, his painting speed, how long the paint in the brush lasts...
But clearly there are two processes going on here. One is quadratic - it's the walking to and fro between the paint can and the painting point. The other is linear - it's the process of painting, itself.
Thinking about the 10th or even the 100th day, it is clear that the linear component becomes negligible, and the process becomes very nearly quadratic - the walking takes almost all the time. During the first few minutes of the first day, on the contrary, it is close to being linear.
We can thus say that the time t as a function of the distance s follows a power law t ~ s^a with a changing coefficient a = 1.0 ... 2.0. This also means that s ~ t^b, b = 1/a.
Applying the empirical orders of growth analysis:
The b coefficient between day 1 and day 2 is approximated as
b(1,2) = log (450/300) / log 2 = 0.585 ;; and so,
a(1,2) = 1/b(1,2) = 1/0.585 = 1.71
Just as expected, the a coefficient is below 2. Going for the time period between day 2 and day 3, we can set it approximately to the middle value between 1.71 and 2.0,
a(2,3) = 1.85 ;; a = 1.0 .... 2.0
b(2,3) = 0.54 ;; b = 1.0 .... 0.5
s(3) = s(2) * (3/2)^b(2,3)
= 450 * (3/2)^0.54
= 560 yards
Thus the distance covered in the third day can be estimated as 560 - 450 = 110 yards.
What if the a coefficient had the maximum possible value, 2.0, already (which is impossible)? Then, 450*(3/2)^0.5 = 551 yards. And for the other extreme, if it were the same 1.71 (which it clearly can't be, either), 450*(3/2)^0.585 = 570.
This means that the estimate of 110 yards is plausible, with an error of less than 10 yards on either side.
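The arithmetic above, spelled out as a small Python sketch (same numbers, nothing new):

import math

s1, s2 = 300, 450                       # total painted after day 1 and day 2
b_12 = math.log(s2 / s1) / math.log(2)  # about 0.585
a_23 = (1 / b_12 + 2.0) / 2             # midpoint between 1.71 and 2.0, about 1.85
b_23 = 1 / a_23                         # about 0.54
s3 = s2 * (3 / 2) ** b_23               # about 560 yards total
print(s3 - s2)                          # about 110 yards painted on day 3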
Considering four assumptions:
painting speed = infinity
walking speed = x
he can paint only an infinitely small amount in one brush stroke
he leaves his can at the starting point
The distance he walks to paint dy of road at distance y = 2y
Total distance he walks = integral of 2y dy = y^2
Total time to paint distance y = y^2/x
Time taken to paint 300 yards = 1 day
(300)^2/x = 1
x = 90000 yards/day
Total time to paint distance y = y^2/90000
After the second day: (y/300)^2 = 2
y = 300*2^(1/2) = 424
Day 1 = 300
Day 2 = 424-300 = 124
Day 3 = 300*3^(1/2)-424 = 520 - 424 = 96
Answer: 300 / 124 / 96, assuming the first day is 300.
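Under the same pure-quadratic model, the day-by-day distances can be computed in a couple of lines of Python (the values match the 300/124/96 above up to rounding):

import math

# after d days the painted length is 300 * sqrt(d),
# so on day d he paints 300 * (sqrt(d) - sqrt(d - 1)) yards
for d in range(1, 4):
    print(d, 300 * (math.sqrt(d) - math.sqrt(d - 1)))   # ~300, ~124, ~95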