Algorithm for calculating total cost in groups of N

I have a limited supply of objects and as the objects are purchased, the price goes up accordingly in groups of N (every time N objects are bought, price increases). When trying to purchase a number of objects, what is the easiest way to calculate total cost?
Example:
I have 24 foo. For every N (3 in this example) that are purchased, the price increases by 1.
So if I buy 1 at a price of 1, there are 23 foo left, 2 of them still at the price of 1.
After that 1 has been purchased, someone wishes to buy 6. The total cost would be (2*1) + (3*2) + (1*3) = 11.

Borrowing RBarryYoung's notation, the first N items cost B each, the second N items cost B + I each, the third N items cost B + 2*I each, etc.
To buy X items: Q := X div N (floor division) whole groups are bought, plus R := X mod N extra items. The whole groups cost Q * N * (B + (B + (Q - 1) * I)) / 2 in total, since, with linearly increasing item costs, the average item cost equals the average of the first item cost, B, and the last item cost, B + (Q - 1) * I. The leftover items cost R * (B + Q*I) in total, so the resulting function f(X) is
f(X) := (Q * N * (B + (B + (Q - 1) * I))) div 2 + R * (B + Q*I).
To compute the cost of the items (zero-)indexed from X inclusive to X' exclusive, use f(X') - f(X).
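For example, a direct implementation of f and the range cost (a minimal sketch; the function and parameter names are mine, with the example's N = 3, B = 1, I = 1 as defaults):

def f(x, n=3, b=1, inc=1):
    # total cost of the first x items (0-indexed), using the closed form above
    q, r = divmod(x, n)
    whole = q * n * (b + (b + (q - 1) * inc)) // 2   # Q full groups
    rest = r * (b + q * inc)                         # R leftover items at the next price
    return whole + rest

def range_cost(x0, x1, n=3, b=1, inc=1):
    # cost of items indexed x0 (inclusive) to x1 (exclusive)
    return f(x1, n, b, inc) - f(x0, n, b, inc)

print(range_cost(1, 7))  # the worked example: (2*1)+(3*2)+(1*3) = 11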

OK, I think that this is correct now...
Given:
X = Total Number of Items Bought
N = Number of Items per Price Increment
B = Base Item Price, before any Increments
I = Price Increment per [N]
Set:
J = FLOOR((X-1)/N)+1
Then:
TotalCost = X*(B-I) + I*(X*J - N*J*(J-1)/2)
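As a sanity check, this closed form agrees with a brute-force sum over per-item prices (a quick sketch; the names are mine):

def total_cost(x, n, b, inc):
    j = (x - 1) // n + 1
    return x * (b - inc) + inc * (x * j - n * j * (j - 1) // 2)

def brute(x, n, b, inc):
    # price of the i-th item (0-indexed) is b + (i // n) * inc
    return sum(b + (i // n) * inc for i in range(x))

assert all(total_cost(x, 3, 1, 1) == brute(x, 3, 1, 1) for x in range(1, 100))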

Related

Finding the amount of combination of three numbers in a sequence which fulfills a specific requirement

The question is: given a number D and a sequence of N numbers, find the number of combinations of three numbers whose largest pairwise difference does not exceed D. For example:
D = 3, N = 4
Sequence of numbers: 1 2 3 4
Possible combinations: 1 2 3 (3-1 = 2 <= D), 1 2 4 (4 - 1 = 3 <= D), 1 3 4, 2 3 4.
Output: 4
What I've done: link
Well, my concept is: iterate through the whole sequence, and for the currently compared number find the smallest number whose difference from it exceeds D. Then count the combinations between those two positions with the currently compared number fixed (i.e., a combination of n [the numbers between the two positions] taken 2). If even the biggest number in the sequence differs from the currently compared number by at most D, then use a combination of all the elements taken 3.
N can be as big as 10^5 with the smallest being 1 and D can be as big as 10^9 with the smallest being 1 too.
Problem with my algorithm: overflow occurs when I compute a combination spanning the 1st element and the 10^5th element. How can I fix this? Is there a way to calculate such a large number of combinations without actually computing the factorials?
EDIT:
Overflow occurs in the worst case: the currently compared number is still at index 0 while every other number differs from it by less than D. For example, the value at index 0 is 1, the value at index 10^5 is 10^5 + 1, and D is 10^9. My algorithm then attempts to calculate the factorial of 10^5 - 0, which overflows. The factorial is used to calculate the combination of 10^5 taken 3.
When you search the sorted list for items within value range D and get index difference M, you should calculate C(M,3).
But for such combination number you don't need to use huge factorials:
C(M,3) = M! / (6 * (M-3)!) = M * (M-1) * (M-2) / 6
To diminish intermediate results even more:
A = (M - 1) * (M - 2) / 2
A = (A * M) / 3
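In code (a trivial Python helper, just to make the point explicit; the name comb3 is mine):

def comb3(m):
    # C(m, 3) without any factorials; the product of 3 consecutive
    # integers is always divisible by 6, so // is exact
    return m * (m - 1) * (m - 2) // 6

print(comb3(10**5))  # fine with Python's big integers; in C++ use a 64-bit type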
You didn't add the C++ tag to your question, so let me write the answer in Python 3 (it should be easy to translate it to C++):
N = int(input("N = "))
D = int(input("D = "))
v = [int(input("v[{}] = ".format(i))) for i in range(0, N)]
count = 0
i, j = 0, 1
while j + 1 < N:
    j += 1
    while v[j] - v[i] > D:
        i += 1
    d = j - i
    if d >= 2:
        count += (d - 1) * d // 2  # // is the integer division
print(count)
The idea is to move up the upper index j of the triples while dragging the lower index i at the greatest distance d = j - i where v[j] - v[i] <= D. For each i-j pair there are 1 + 2 + ... + (d - 1) possible triples keeping j fixed, i.e., (d - 1) * d / 2.

Computing all infix products for a monoid / semigroup

Introduction: Infix products for a group
Suppose I have a group
G = (G, *)
and a list of elements
A = {0, 1, ..., n} ⊂ ℕ
x : A -> G
If our goal is to implement a function
f : A × A -> G
such that
f(i, j) = x(i) * x(i+1) * ... * x(j)
(and we don't care about what happens if i > j)
then we can do that by pre-computing a table of prefixes
m(-1) = 1
m(i) = m(i-1) * x(i)
(with 1 on the right-hand side denoting the unit of G) and then implementing f as
f(i, j) = m(i-1)⁻¹ * m(j)
This works because
m(i-1) = x(0) * x(1) * ... * x(i-1)
m(j) = x(0) * x(1) * ... * x(i-1) * x(i) * x(i+1) * ... * x(j)
and so
m(i-1)⁻¹ * m(j) = x(i) * x(i+1) * ... * x(j)
after sufficient reassociation.
My question
Can we rescue this idea, or do something not much worse, if G is only a monoid, not a group?
For my particular problem, can we do something similar if G = ([0, 1] ⊂ ℝ, *), i.e. we have real numbers from the unit line, and we can't divide by 0?
Yes, if G is ([0, 1] ⊂ ℝ, *), then the idea can be rescued, making it possible to compute ranged products in O(log n) time (or more accurately, O(log z) where z is the number of a in A with x(a) = 0).
For each i, compute the product m(i) = x(0)*x(1)*...*x(i), ignoring any zeros (so these products will always be non-zero). Also, build a sorted array Z of indices for all the zero elements.
Then the product of elements from i to j is 0 if there's a zero in the range [i, j], and m(j) / m(i-1) otherwise.
To find if there's a zero in the range [i, j], one can binary search in Z for the smallest value >= i in Z, and compare it to j. This is where the extra O(log n) time cost appears.
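For example (a minimal sketch of this idea; all names are mine):

import bisect

def preprocess(x):
    m, zeros, p = [], [], 1.0
    for i, v in enumerate(x):
        if v == 0:
            zeros.append(i)   # sorted indices of the zero elements
        else:
            p *= v            # prefix product, ignoring zeros
        m.append(p)
    return m, zeros

def range_product(m, zeros, i, j):
    # product of x[i..j] inclusive
    k = bisect.bisect_left(zeros, i)
    if k < len(zeros) and zeros[k] <= j:
        return 0.0            # a zero inside [i, j] forces the product to 0
    return m[j] / (m[i - 1] if i > 0 else 1.0)

x = [0.5, 0.25, 0.0, 0.8, 0.4]
m, zeros = preprocess(x)
print(range_product(m, zeros, 3, 4))  # ~0.32 (= 0.8 * 0.4)
print(range_product(m, zeros, 1, 3))  # 0.0, since x[2] == 0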
General monoid solution
In the case where G is any monoid, it's possible to precompute n products so that an arbitrary range product is computable in O(log(j-i)) time, although it's a bit fiddlier than the more specific case above.
Rather than precomputing prefix products, compute m(i, j) for all i, j where j-i+1 = 2^k for some k>=0 and 2^k divides i (and hence also j+1). In fact, for k=0 we don't need to compute anything, since the value of m(i, i) is simply x(i).
So we need to compute n/2 + n/4 + n/8 + ... total products, which is at most n-1 things.
One can construct an arbitrary interval [i, j] from O(log_2(j-i+1)) of these building blocks (and elements of the original array): pick the largest building block contained in the interval and append decreasing-sized blocks on either side of it until you reach [i, j]. Then multiply the precomputed products m(x, y) for each of the building blocks.
For example, suppose your array is of size 10. For example's sake, I'll assume the monoid is addition of natural numbers.
i:  0  1  2  3  4  5  6  7  8  9
x:  1  3  2  4  2  3  0  8  2  1
2:  ----  ----  ----  ----  ----
     4     6     5     8     3
4:  ----------  ----------
        10          13
8:  ----------------------
             23
Here, the 2, 4, and 8 rows show sums of aligned intervals of length 2, 4, 8 (ignoring bits left over if the array isn't a power of 2 in length).
Now, suppose we want to calculate x(1) + x(2) + x(3) + ... + x(8).
That's x(1) + m(2, 3) + m(4, 7) + x(8) = 3 + 6 + 13 + 2 = 24.
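Here is a small sketch of this scheme in Python (addition as the example monoid; this greedy decomposition walks left to right instead of growing out from a central block, which also uses O(log) blocks; all names are mine):

def build(x):
    # m[k][b] = combined value of the aligned block x[b*2**k : (b+1)*2**k]
    m = [x[:]]
    k = 1
    while (1 << k) <= len(x):
        prev, size = m[-1], 1 << k
        m.append([prev[2*b] + prev[2*b + 1] for b in range(len(x) // size)])
        k += 1
    return m

def range_sum(m, i, j):
    # combine x[i..j] inclusive out of the largest aligned blocks that fit
    total = 0
    while i <= j:
        k = 0
        while i % (1 << (k + 1)) == 0 and i + (1 << (k + 1)) - 1 <= j:
            k += 1
        total += m[k][i >> k]
        i += 1 << k
    return total

x = [1, 3, 2, 4, 2, 3, 0, 8, 2, 1]
m = build(x)
print(range_sum(m, 1, 8))  # 24 = x(1) + m(2,3) + m(4,7) + x(8)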

Given k sorted numbers, what is the minimum cost to turn them into consecutive numbers?

Suppose, we are given a sorted list of k numbers. Now, we want to convert this sorted list into a list having consecutive numbers. The only operation allowed is that we can increase/decrease a number by one. Performing every such operation will result in increasing the total cost by one.
Now, how to minimize the total cost while converting the list as mentioned?
One idea that I have is to get the median of the sorted list and arrange the numbers around the median. After that just add the absolute difference between the corresponding numbers in the newly created list and the original list. But, this is just an intuitive method. I don't have any proof of it.
P.S.:
Here's an example-
Sorted list: -96, -75, -53, -24.
We can convert this list into a consecutive list by various methods.
One optimal choice is: -61, -60, -59, -58 (ties are possible; -60, -59, -58, -57 also costs 90).
Cost: 90
This is a sub-part of a problem from Topcoder.
Let's assume that the solution is in increasing order and m, M are the minimum and maximum value of the sorted list. The other case will be handled the same way.
Each solution is defined by the number assigned to the first element. If this number is very small, then increasing it by one reduces the cost; we can keep increasing it until the cost starts to grow, and from that point on it only grows. So the cost function is unimodal, the optimum is a local minimum, and we can find it with binary search. The range we search is [m - n, M + n], where n is the number of elements:
l = [-96, -75, -53, -24]

# Cost if the initial value is x
def cost(l, x):
    return sum(abs(i - v) for i, v in enumerate(l, x))

def find(l):
    a, b = l[0] - len(l), l[-1] + len(l)
    while a < b:
        m = (a + b) // 2  # integer midpoint
        if cost(l, m + 1) >= cost(l, m) <= cost(l, m - 1):  # local minimum
            return m
        if cost(l, m + 1) < cost(l, m):
            a = m + 1
        else:
            b = m - 1
    return b
Testing:
>>> initial = find(l)
>>> list(range(initial, initial + len(l)))
[-60, -59, -58, -57]
>>> cost(l, initial)
90
Here is a simple solution:
Let's assume that the resulting numbers are x, x + 1, ..., x + n - 1. Then the cost is the sum over i = 0 ... n - 1 of abs(a[i] - (x + i)). Let's call it f(x).
f(x) is piecewise linear and it approaches infinity as x approaches +infinity or -infinity. That means its minimum is reached at one of the breakpoints.
The breakpoints are a[0], a[1] - 1, a[2] - 2, ..., a[n - 1] - (n - 1). So we can just try all of them and pick the best.
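A direct sketch of that (O(n^2) overall; the names are mine):

def min_cost(a):
    n = len(a)
    def f(x):
        return sum(abs(a[i] - (x + i)) for i in range(n))
    # the breakpoints of f are a[i] - i; the minimum is at one of them
    return min(f(a[i] - i) for i in range(n))

print(min_cost([-96, -75, -53, -24]))  # 90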

Probability based on quicksort partition

I have come across this question:
Let 0<α<.5 be some constant (independent of the input array length n). Recall the Partition subroutine employed by the QuickSort algorithm, as explained in lecture. What is the probability that, with a randomly chosen pivot element, the Partition subroutine produces a split in which the size of the smaller of the two subarrays is ≥α times the size of the original array?
Its answer is 1-2*α.
Can anyone explain how this answer is derived? Please help.
The choice of the pivot element is random, with uniform distribution.
There are N elements in the array, and we will assume that N is large (or we won't get the answer we want).
If 0≤α≤1, the probability that the number of elements smaller than the pivot is less than αN is α. The probability that the number of elements greater than the pivot is less than αN is the same. If α≤ 1/2, then these two possibilities are exclusive.
To say that the smaller subarray is of length ≥αN, is to say that neither of these conditions holds, therefore the probability is 1-2α.
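A quick Monte Carlo check of the 1 - 2α answer (a sketch; n, α, and the trial count are arbitrary choices of mine):

import random

def estimate(n=1000, alpha=0.25, trials=100_000):
    hits = 0
    for _ in range(trials):
        p = random.randrange(n)        # pivot's rank, uniform over the array
        smaller = min(p, n - 1 - p)    # size of the smaller subarray
        if smaller >= alpha * n:
            hits += 1
    return hits / trials

print(estimate())  # close to 1 - 2*0.25 = 0.5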
The other answers didn't quite click with me so here's another take:
If the smaller of the 2 subarrays must have size at least αn, you can deduce that the pivot must land at position αn or later. This is obvious by contradiction: if the pivot lands before position αn, there is a subarray smaller than αn. By the same reasoning the pivot must also land at position (1 - α)n or earlier; any larger position yields a subarray smaller than αn on the "right hand side".
This means the pivot must land somewhere in the range [αn, (1 - α)n].
What we want to calculate then is the probability of that event (call it A), i.e. Pr(A).
The way we calculate the probability of an event is to sum the probabilities of the constituent outcomes, i.e. the pivot landing at each position in that range. Each position is chosen with probability 1/n, so that sum is expressed as:
Pr(A) = sum over i in [αn, (1 - α)n] of 1/n
Which easily simplifies to:
Pr(A) = ((1 - α)n - αn) / n
With some cancellation we get:
Pr(A) = 1 - 2α
Just one more approach for solving the problem (for those who have a hard time understanding it, like I did).
First.
Since we are talking about "the smaller of the two subarrays", its length is less than 1/2 * n (where n is the number of elements in the original array).
Second.
If 0 < a < 0.5, then a * n is also less than 1/2 * n.
Thus from now on we are talking about two integers, both between 0 at the lowest and 1/2 * n at the highest.
Third.
Let's imagine a die with numbers from 1 to 6 on its sides. Let's choose a number from 1 to 6, for example 4. Now roll the die. Each number has probability 1/6 of being the outcome of this roll. Thus for the event "outcome is less than or equal to 4" the probability equals the sum of the probabilities of each of these outcomes, the numbers 1, 2, 3 and 4. Altogether p(x <= 4) = 4 * 1/6 = 4/6 = 2/3. So the probability of the event "outcome is bigger than 4" is p(x > 4) = 1 - p(x <= 4) = 1 - 2/3 = 1/3.
Fourth.
Let's go back to our problem. The "chosen number" is now a * n. And we are going to roll a die with the numbers from 0 to (1/2 * n) on it to get k, the number of elements in the smaller of the subarrays. The probability that the outcome is at most (a * n) equals the sum of the probabilities of all outcomes from 0 to (a * n). And the probability of any particular outcome k is p(k) = 1 / (1/2 * n).
Therefore p(k <= a * n) = (a * n) * (1 / (1/2 * n)) = 2 * a.
From this we can easily conclude that p(k > a * n) = 1 - p(k <= a * n) = 1 - 2 * a.
Array length is n.
For the smaller subarray to have length >= αn, the pivot must be greater than at least αn of the elements. At the same time it must be smaller than at least αn of the elements (otherwise the smaller subarray would be shorter than required).
So out of n elements we have to select one among the (1 - 2α)n elements in the middle.
The required probability is (1 - 2α)n / n.
Hence 1 - 2α.
The probability is the number of desired elements over the total number of elements.
In this case, ((1 - α)n - αn)/n.
Since α lies between 0 and 0.5, (1 - α) must be bigger than α. Hence the number of elements contained between them is
(1 - α - α)n = (1 - 2α)n
and so the probability is
(1 - 2α)n / n = 1 - 2α
Another approach:
List the "more balanced" options:
αn + 1 to (1 - α)n - 1
αn + 2 to (1 - α)n - 2
...
αn + k to (1 - α)n - k
So k in total. We know that the most balanced is n / 2 to n / 2, so:
αn + k = n / 2 => k = n(1/2 - α)
Similarly, list the "less balanced" options:
αn - 1 to (1 - α)n + 1
αn - 2 to (1 - α)n + 2
...
αn - m to (1 - α)n + m
So m in total. We know that the least balanced is 0 to n so:
αn - m = 0 => m = αn
Since all these options happen with equal probability we can use the frequency definition of probability so:
Pr{More balanced} = (total # of more balanced) / (total # of options) =>
Pr{More balanced} = k / (k + m) = n(1/2 - α) / (n(1/2 - α) + αn) = 1 - 2α

Algorithm to partition a number

Given a positive integer X, how can one partition it into N parts, each between A and B where A <= B are also positive integers? That is, write
X = X_1 + X_2 + ... + X_N
where A <= X_i <= B and the order of the X_is doesn't matter?
If you want to know the number of ways to do this, then you can use generating functions.
Essentially, you are interested in integer partitions. An integer partition of X is a way to write X as a sum of positive integers. Let p(n) be the number of integer partitions of n. For example, if n=5 then p(n)=7 corresponding to the partitions:
5
4,1
3,2
3,1,1
2,2,1
2,1,1,1
1,1,1,1,1
The generating function for p(n) is
sum_{n >= 0} p(n) z^n = Prod_{i >= 1} ( 1 / (1 - z^i) )
What does this do for you? By expanding the right hand side and taking the coefficient of z^n you can recover p(n). Don't worry that the product is infinite since you'll only ever be taking finitely many terms to compute p(n). In fact, if that's all you want, then just truncate the product and stop at i=n.
Why does this work? Remember that
1 / (1 - z^i) = 1 + z^i + z^{2i} + z^{3i} + ...
So the coefficient of z^n is the number of ways to write
n = 1*a_1 + 2*a_2 + 3*a_3 +...
where now I'm thinking of a_i as the number of times i appears in the partition of n.
How does this generalize? Easily, as it turns out. From the description above, if you only want the parts of the partition to be in a given set A, then instead of taking the product over all i >= 1, take the product over only i in A. Let p_A(n) be the number of integer partitions of n whose parts come from the set A. Then
sum_{n >= 0} p_A(n) z^n = Prod_{i in A} ( 1 / (1 - z^i) )
Again, taking the coefficient of z^n in this expansion solves your problem. But we can go further and track the number of parts of the partition. To do this, add in another place holder q to keep track of how many parts we're using. Let p_A(n,k) be the number of integer partitions of n into k parts where the parts come from the set A. Then
sum_{n >= 0} sum_{k >= 0} p_A(n,k) q^k z^n = Prod_{i in A} ( 1 / (1 - q*z^i) )
so taking the coefficient of q^k z^n gives the number of integer partitions of n into k parts where the parts come from the set A.
How can you code this? The generating function approach actually gives you an algorithm for generating all of the solutions to the problem as well as a way to uniformly sample from the set of solutions. Once n and k are chosen, the product on the right is finite.
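For instance, the coefficient of q^k z^n in Prod_{i in A} ( 1 / (1 - q*z^i) ) can be computed with a simple polynomial DP (a sketch; the function name and dp layout are mine):

def count_partitions(n, k, a, b):
    # dp[j][c] = number of partitions of j into c parts, each part in [a, b];
    # allowing part sizes i = a, a+1, ..., b in turn counts each multiset once
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 1
    for i in range(a, b + 1):
        for j in range(i, n + 1):
            for c in range(1, k + 1):
                dp[j][c] += dp[j - i][c - 1]  # use one more part of size i
    return dp[n][k]

print(count_partitions(5, 2, 1, 5))  # 2: the partitions 4+1 and 3+2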
Here is a Python solution to this problem. It is quite unoptimised, but I have tried to keep it as simple as I can to demonstrate an iterative method of solving it.
The results of this method will commonly be a list of max values and min values, with maybe 1 or 2 other values in between. Because of this, there is a slight optimisation in there (using abs) which prevents the iterator from constantly counting down from max to find min values and vice versa.
There are recursive ways of doing this that look far more elegant, but this will get the job done and hopefully give you an insight into a better solution.
SCRIPT:
# iterative approach in case the number of partitians is particularly large
def splitter(value, partitians, min_range, max_range, part_values):
    # lower bound used to determine if the solution is within reach
    lower_bound = 0
    # upper bound used to determine if the solution is within reach
    upper_bound = 0
    # upper_range used as upper limit for the iterator
    upper_range = 0
    # lower_range used as lower limit for the iterator
    lower_range = 0
    # interval will be + or -
    interval = 0
    while value > 0:
        partitians -= 1
        lower_bound = min_range * partitians
        upper_bound = max_range * partitians
        # if the value is more likely at the upper bound, start from there
        if abs(lower_bound - value) < abs(upper_bound - value):
            upper_range = max_range
            lower_range = min_range - 1
            interval = -1
        # if the value is more likely at the lower bound, start from there
        else:
            upper_range = min_range
            lower_range = max_range + 1
            interval = 1
        for i in range(upper_range, lower_range, interval):
            # make sure what we are doing won't break the solution
            if lower_bound <= value - i and upper_bound >= value - i:
                part_values.append(i)
                value -= i
                break
    return part_values

def partitioner(value, partitians, min_range, max_range):
    if min_range * partitians <= value and max_range * partitians >= value:
        return splitter(value, partitians, min_range, max_range, [])
    else:
        print("this is impossible to solve")

def main():
    print(partitioner(9800, 1000, 2, 100))

main()  # this call was missing; without it the script does nothing
The basic idea behind this script is that the value needs to fall between min*parts and max*parts at each step of the solution. If we always maintain this, we will eventually end up at min < value < max for parts == 1; so if we keep taking away from the value while keeping it within this range, we will always find the result if one is possible.
For this code's example, it will basically always take away either max or min, depending on which bound the value is closer to, until some non-min, non-max value is left over as the remainder.
A simple realization you can make is that the average of the X_i must be between A and B, so we can simply divide X by N and then do some small adjustments to distribute the remainder evenly to get a valid partition.
Here's one way to do it:
X_i = ceil (X / N) if i <= X mod N,
floor (X / N) otherwise.
This gives a valid solution if A <= floor (X / N) and ceil (X / N) <= B. Otherwise, there is no solution. See proofs below.
sum(X_i) == X
Proof:
Use the division algorithm to write X = q*N + r with 0 <= r < N.
If r == 0, then ceil (X / N) == floor (X / N) == q so the algorithm sets all X_i = q. Their sum is q*N == X.
If r > 0, then floor (X / N) == q and ceil (X / N) == q+1. The algorithm sets X_i = q+1 for 1 <= i <= r (i.e. r copies), and X_i = q for the remaining N - r pieces. The sum is therefore (q+1)*r + (N-r)*q == q*r + r + N*q - r*q == q*N + r == X.
If floor (X / N) < A or ceil (X / N) > B, then there is no solution.
Proof:
If floor (X / N) < A, then floor (X / N) + 1 <= A, and since X < (floor (X / N) + 1) * N, this means that X < A*N. So even using only the smallest pieces allowed (A each), the sum A*N would be larger than X.
Similarly, if ceil (X / N) > B, then ceil (X / N) - 1 >= B, and since X > (ceil (X / N) - 1) * N, this means that X > B*N. So even using only the largest pieces allowed (B each), the sum B*N would be smaller than X.
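A direct sketch of the construction (the function name is mine):

def partition(x, n, a, b):
    q, r = divmod(x, n)
    if q < a or q + (1 if r else 0) > b:   # floor < A or ceil > B
        return None                        # no solution, per the proofs above
    return [q + 1] * r + [q] * (n - r)     # r big pieces, then n - r small ones

print(partition(10, 3, 2, 4))  # [4, 3, 3]
print(partition(10, 3, 4, 5))  # None: floor(10/3) = 3 < A = 4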
