Algorithm for fair distribution of revenue

I am after an algorithm (preferably abstract or in very clear Python or PHP code) that allows for a fair distribution of revenue both in the short and in the long term, based on the following constraints:
Each incoming amount will be an integer.
Each distribution will be also an integer amount.
Each person in the distribution is set to receive a cut of the revenue expressed as a fixed fraction. These fractions can of course be put under a common denominator (think integer percentages with a denominator of 100). The sum of all numerators equals the denominator (100 in the case of percentages)
To make it fair in the short term, person i must be receiving either floor(revenue*cut[i]) or ceil(revenue*cut[i]). EDIT: Let me emphasize that ceil(revenue*cut[i]) is not necessarily equal to 1+floor(revenue*cut[i]), which is where some algorithms fail including one presented (see my comment on the answer by Andrey).
To make it fair in the long term, as payments accumulate, the actual cuts received must be as close to the theoretical cuts as possible, preferably always under 1 unit. In particular, if the total revenue is a multiple of the denominator of the common fraction, every person should have received the corresponding multiple of their numerator.
[Edit] Forgot to add that the total amount distributed every time must of course add up to the incoming amount received.
Here's an example for clarity. Say there are three people among which to distribute revenue, and one has to get 31%, another has to get 21%, and the third has to get 100-31-21=48%.
Now imagine that there's an income of 80 coins. The first person should receive either floor(80*31/100) or ceil(80*31/100) coins, the second person should receive either floor(80*21/100) or ceil(80*21/100) and the third person, either floor(80*48/100) or ceil(80*48/100).
And now imagine that there's a new income of 120 coins. Each person should again receive either the floor or the ceiling of their corresponding cuts, but since the total revenue (200) is a multiple of the denominator (100), each person should have now their exact cuts: the first person should have 62 coins, the second person should have 42 coins, and the third person should have 96 coins.
I think I have an algorithm figured out for the case of two people to distribute the revenue among. It goes like this (for 35% and 65%):
set TotalRevenue to 0
set TotalPaid[0] to 0
{ TotalPaid[1] is implied and not necessary for the algorithm }
set Cut[0] to 35
{ Cut[1] is implied and not necessary for the algorithm }
set Denominator to 100
each time a payment is received:
let Revenue be the amount received this time
set TotalRevenue = TotalRevenue + Revenue
set tmp = floor(TotalRevenue*Cut[0]/Denominator)
pay person 0 the amount of tmp - TotalPaid[0]
pay person 1 the amount of Revenue - (tmp - TotalPaid[0])
set TotalPaid[0] = tmp
I think that this simple algorithm meets all of my constraints, but I have not found the way to extend it to more than two people without violating at least one. Can anyone figure out an extension to multiple people (or perhaps prove its impossibility)?
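For reference, here is the two-person algorithm above as a small Python sketch; it is just a transcription of the pseudocode, nothing more:

total_revenue = 0
total_paid_0 = 0        # TotalPaid[0]; person 1's total is implied
cut_0 = 35              # Cut[0]
denominator = 100

def receive(revenue):
    """Distribute one incoming integer amount between the two people."""
    global total_revenue, total_paid_0
    total_revenue += revenue
    tmp = total_revenue * cut_0 // denominator   # floor(TotalRevenue*Cut[0]/Denominator)
    pay_0 = tmp - total_paid_0
    pay_1 = revenue - pay_0                      # the rest of this payment goes to person 1
    total_paid_0 = tmp
    return pay_0, pay_1

print(receive(10))   # (3, 7)
print(receive(10))   # (4, 6) -- cumulative totals are now exactly 7 and 13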

I think this works:
Keep track of a totalReceived, the total amount of money that has entered the system.
When money is added to the system, add it to totalReceived.
Set person[i].newTotal = floor(totalReceived * cut[i]). (Edit: had an error here; thanks, commenters.) This distributes some of the money; keep track of how much, so you know how much "new money" is left over. To be clear, we are keeping track of how much of the total amount of money is left over, and intuitively we redistribute the whole sum at every step, although in fact nobody's fund ever goes down between one step and the next.
You need to know how to distribute the left-over integer amount of money: the amount left over will be less than numPeople. For i ranging from 0 to leftOverMoney - 1, increment person[i].newTotal by 1. Effectively we give a dollar to each of the first leftOverMoney people.
To compute how much a person "received" during this step, compute person[i].newTotal - person[i].total.
To clean/finish up, set person[i].total = person[i].newTotal.
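For reference, a minimal Python sketch of the bookkeeping described in this answer (illustrative names, not from the original post):

def make_distributor(numerators, denominator):
    total_received = 0
    totals = [0] * len(numerators)              # person[i].total
    def distribute(revenue):
        nonlocal total_received
        total_received += revenue
        # floor of each person's share of the running total
        new_totals = [total_received * n // denominator for n in numerators]
        leftover = total_received - sum(new_totals)
        for i in range(leftover):               # a dollar each to the first `leftover` people
            new_totals[i] += 1
        payments = [new - old for new, old in zip(new_totals, totals)]
        totals[:] = new_totals
        return payments
    return distribute

distribute = make_distributor([31, 21, 48], 100)
print(distribute(80))    # [25, 17, 38] -- sums to 80
print(distribute(120))   # [37, 25, 58] -- running totals are now exactly [62, 42, 96]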

To accommodate constraint 4, the current payment must be used as the basis for distribution. After distributing the rounded-down dividends to the investors, the leftover dollars may be awarded, one each, to any eligible investors. An investor is eligible if he is due a fractional share, that is, ceil(revenue*cut[i]) > floor(revenue*cut[i]).
Constraint 5 may be addressed by deciding who, among the eligible investors, should get the leftover dollars. Maximum long-term fairness is most closely approached by always awarding the leftover dollars to the eligible investors who have accumulated the most total (long-term) distribution error.
The following is derived from the algorithm I used for my fund manager clients. (I always used Pascal for financial applications for forensic reasons.) I have transposed the code fragments from a database application to an array-based method in an attempt to improve the clarity.
Included in the accounting data structures is a table of investors. Included in the information about each investor is his share fraction (investor share count divided by fund total shares), amount earned to date, and amount distributed (actually paid to investor) to date.
Investors : array [0..InvestorCount-1] of record
// ... lots of information about this investor
ShareFraction : float; // >0 and <1
EarnedToDate : currency; // rounded to dollars
DistributedToDate : currency; // rounded to dollars
end;
// The sum of all the .ShareFraction in the table is exactly 1.
Some temporary data is created for these calculations and freed afterwards.
DistErrs : array [0..InvestorCount-1] of record
TotalError : float; // distribution rounding error
CurrentError : float; // distribution rounding error
InvestorIndex : integer; // link to Investors record
end;
UNDISTRIBUTED_PAYMENTS : currency;
CurrentFloat : float;
CurrentInteger : currency;
CurrentFraction : float;
TotalFloat : float;
TotalInteger : currency;
TotalFraction : float;
DD, II : integer; // array indices
FUND_PREVIOUS_PAYMENTS : currency; is a parameter, derived from the fund history. If it is not provided as a parameter it may be calculated as the sum over all Investors[].EarnedToDate.
FUND_CURRENT_PAYMENT : currency; is a parameter, derived from the current increase in fund value.
FUND_TOTAL_PAYMENTS : currency; is a parameter, derived from the current fund value. This is the fund total amount previously paid out plus the fund current payment.
These three values form a dependent system with two degrees of freedom: any two may be provided and the third calculated from the others.
// Calculate how much each investor has earned after the fund current payment.
UNDISTRIBUTED_PAYMENTS := FUND_CURRENT_PAYMENT;
for II := 0 to InvestorCount-1 do begin
// Calculate theoretical current dividend whole and fraction parts.
CurrentFloat := FUND_CURRENT_PAYMENT * Investors[II].ShareFraction;
CurrentInteger := floor(CurrentFloat);
CurrentFraction := CurrentFloat - CurrentInteger;
// Calculate theoretical total dividend whole and fraction parts.
TotalFloat := FUND_TOTAL_PAYMENTS * Investors[II].ShareFraction;
TotalInteger := floor(TotalFloat);
TotalFraction := TotalFloat - TotalInteger;
// Distribute the current whole dollars to the investor.
Investors[II].EarnedToDate := Investors[II].EarnedToDate + CurrentInteger;
// Track total whole dollars distributed.
UNDISTRIBUTED_PAYMENTS := UNDISTRIBUTED_PAYMENTS - CurrentInteger;
// Record the fractional dollars in the errors table and link to investor.
DistErrs[II].CurrentError := CurrentFraction;
DistErrs[II].TotalError := TotalFraction;
DistErrs[II].InvestorIndex := II;
end;
At this point UNDISTRIBUTED_PAYMENTS is always less than InvestorCount.
// Sort DistErrs by .TotalError descending.
In the real world the data is kept in a database not in arrays, so sorting is done by creating an index on DistErrs using TotalError as a key.
// Distribute one dollar each to the first UNDISTRIBUTED_PAYMENTS eligible
// investors (i.e. those whose current share ceilings are higher than their
// current share floor), in order from greatest to least .TotalError.
for DD := 0 to InvestorCount-1 do begin
if UNDISTRIBUTED_PAYMENTS <= 0 then break;
if DistErrs[DD].CurrentError <> 0 then begin
II := DistErrs[DD].InvestorIndex;
Investors[II].EarnedToDate := Investors[II].EarnedToDate + 1;
UNDISTRIBUTED_PAYMENTS := UNDISTRIBUTED_PAYMENTS - 1;
end;
end;
In a subsequent process, each investor is paid the difference between his .EarnedToDate and his .DistributedToDate, and his .DistributedToDate is adjusted to reflect that payment.
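For comparison, here is a condensed Python sketch of the same approach (floor shares of the current payment, then leftover dollars to eligible investors ordered by accumulated total error). It is an illustration, not a translation of the author's production code, and a real implementation would use exact integer or rational arithmetic rather than floats:

import math

def distribute_payment(share_fractions, earned_to_date, current_payment):
    """share_fractions sum to 1; earned_to_date is updated in place."""
    fund_total = sum(earned_to_date) + current_payment
    undistributed = current_payment
    errors = []                                  # (total_error, current_fraction, index)
    for i, f in enumerate(share_fractions):
        current_float = current_payment * f
        current_int = math.floor(current_float)
        total_float = fund_total * f
        earned_to_date[i] += current_int
        undistributed -= current_int
        errors.append((total_float - math.floor(total_float), current_float - current_int, i))
    # hand out the leftover dollars, largest accumulated error first
    for total_err, current_frac, i in sorted(errors, reverse=True):
        if undistributed <= 0:
            break
        if current_frac != 0:                    # eligible: ceil > floor for this payment
            earned_to_date[i] += 1
            undistributed -= 1

earned = [0, 0, 0]
distribute_payment([0.31, 0.21, 0.48], earned, 80)
print(earned)    # a floor/ceil-consistent split of 80, e.g. [25, 17, 38]
distribute_payment([0.31, 0.21, 0.48], earned, 120)
print(earned)    # [62, 42, 96] once the total reaches 200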

Related

Algorithm to calculate sum of points for groups with varying member count [closed]

Let's start with an example. In Harry Potter, Hogwarts has 4 houses with students sorted into each house. The same happens on my website and I don't know how many users are in each house. It could be 20 in one house 50 in another and 100 in the third and fourth.
Now, each student can earn points on the website and at the end of the year, the house with the most points will win.
But it's not fair to "only" sum the points, as the house with 100 students will have a much higher chance of winning, since they have more users earning points. So I need to come up with an algorithm which is fair.
You can see an example here: https://worldofpotter.dk/points
What I do now is to sum all the points for a house, and then divide it by the number of users who have earned more than 10 points. This is still not fair, though.
Any ideas on how to make this calculation more fair?
Things we need to take into account:
* The percent of users earning points in each house
* Few users earning LOTS of points
* Many users earning FEW points (It's not bad earning few points. It still counts towards the total points of the house)
Link to MySQL dump(with users, houses and points): https://worldofpotter.dk/wop_points_example.sql
Link to CSV of points only: https://worldofpotter.dk/points.csv
I'd use something like Discounted Cumulative Gain which is used for measuring the effectiveness of search engines.
The concept is as follows:
FUNCTION evalHouseScore (0_INDEXED_SORTED_ARRAY scores):
    score = 0;
    FOR (int i = 0; i < scores.length; i++):
        score += scores[i]/log2(i+2);   // i+2 keeps the denominator defined and non-zero for the first items
    END_FOR
    RETURN score;
END_FUNCTION;
This must be modified somehow, as this way of measuring focuses on the first results. Since this is subjective, you should decide on the way you would modify it. Below I'll post the code with some constants, which you should try with different values:
FUNCTION evalHouseScore (0_INDEXED_SORTED_ARRAY scores):
    score = 0;
    FOR (int i = 0; i < scores.length; i++):
        score += scores[i]/log2(i+K);
    END_FOR
    RETURN L*score;
END_FUNCTION
Consider changing the logarithm.
Tests:
int[] g = new int[] {758,294,266,166,157,132,129,116,111,88,83,74,62,60,60,52,43,40,28,26,25,24,18,18,17,15,15,15,14,14,12,10,9,5,5,4,4,4,4,3,3,3,2,1,1,1,1,1};
int[] s = new int[] {612,324,301,273,201,182,176,139,130,121,119,114,113,113,106,86,77,76,65,62,60,58,57,54,54,42,42,40,36,35,34,29,28,23,22,19,17,16,14,14,13,11,11,9,9,8,8,7,7,7,6,4,4,3,3,3,3,2,2,2,2,2,2,2,1,1,1};
int[] h = new int[] {813,676,430,382,360,323,265,235,192,170,107,103,80,70,60,57,43,41,21,17,15,15,12,10,9,9,9,8,8,6,6,6,4,4,4,3,2,2,2,1,1,1};
int[] r = new int[] {1398,1009,443,339,242,215,210,205,177,168,164,144,144,92,85,82,71,61,58,47,44,33,21,19,18,17,12,11,11,9,8,7,7,6,5,4,3,3,3,3,2,2,2,1,1,1,1};
The output is for different offsets:
1182
1543
1847
2286
904
1231
1421
1735
813
1120
1272
1557
It sounds like some sort of constraint between the houses may need to be introduced. I might suggest finding the person who earned the most points across all the houses and using that value as the denominator when rolling up the scores. This guarantees that the maximum value of a user's contribution is 1; then all the scores for a house can be summed and divided by the number of users to normalize the house's score. That should give you a reasonable comparison.
It does introduce issues with houses that have a low number of users who are high achievers, so you may want to consider a lower limit on the number of house members. Another technique would be to introduce handicap scores for users to balance the scales. The algorithm will most likely have to flex over time based on the data you receive; to keep it fair it will need some responsive adjustment after the initial iteration, since players can come up with creative ways to make scoring systems work for them. Here is some pseudo-code in PHP that you may use:
<?php
// Find the highest point total earned by any single user across all houses.
$mostPointsEarned = 0;
foreach ($houses as $house) {
    foreach ($house->getUsers() as $user) {
        $mostPointsEarned = max($mostPointsEarned, $user->getPoints());
    }
}
$houseScores = [];
foreach ($houses as $house) {
    $numberOfUsers = 0;
    $normalizedScores = [];
    foreach ($house->getUsers() as $user) {
        // Normalize each user's points against the overall maximum (range 0..1).
        $normalizedScores[] = $user->getPoints() / $mostPointsEarned;
        $numberOfUsers++;
    }
    // Average normalized score for the house.
    $houseScores[] = array_sum($normalizedScores) / $numberOfUsers;
}
var_dump($houseScores);
You haven't given any examples of what the preferred outcome should be, or of situations you want to be immune against (3,2,1,1 compared to 5,2, etc.).
It's also a pity you haven't provided the dataset in a convenient form to play with.
scala> val input = Map( // as seen on 2016-09-09 14:10 UTC on https://worldofpotter.dk/points
'G' -> Seq(758,294,266,166,157,132,129,116,111,88,83,74,62,60,60,52,43,40,28,26,25,24,18,18,17,15,15,15,14,14,12,10,9,5,5,4,4,4,4,3,3,3,2,1,1,1,1,1),
'S' -> Seq(612,324,301,273,201,182,176,139,130,121,119,114,113,113,106,86,77,76,65,62,60,58,57,54,54,42,42,40,36,35,34,29,28,23,22,19,17,16,14,14,13,11,11,9,9,8,8,7,7,7,6,4,4,3,3,3,3,2,2,2,2,2,2,2,1,1,1),
'H' -> Seq(813,676,430,382,360,323,265,235,192,170,107,103,80,70,60,57,43,41,21,17,15,15,12,10,9,9,9,8,8,6,6,6,4,4,4,3,2,2,2,1,1,1),
'R' -> Seq(1398,1009,443,339,242,215,210,205,177,168,164,144,144,92,85,82,71,61,58,47,44,33,21,19,18,17,12,11,11,9,8,7,7,6,5,4,3,3,3,3,2,2,2,1,1,1,1)
) // and the results on the website were: 1. R 1951, 2. H 1859, 3. S 990, 4. G 954
Here is what I thought of:
def singleValuedScore(individualScores: Seq[Int]) = individualScores
.sortBy(-_) // sort from most to least
.zipWithIndex // add indices e.g. (best, 0), (2nd best, 1), ...
.map { case (score, index) => score * (1 + index) } // here is the 'logic'
.max
input.mapValues(singleValuedScore)
res: scala.collection.immutable.Map[Char,Int] =
Map(G -> 1044,
S -> 1590,
H -> 1968,
R -> 2018)
The overall positions would be:
Ravenclaw with 2018 aggregated points
Hufflepuff with 1968
Slytherin with 1590
Gryffindor with 1044
Which corresponds to the ordering on that website: 1. R 1951, 2. H 1859, 3. S 990, 4. G 954.
The algorithm's output is the maximal product of a user's score and that user's rank within the house.
This measure is not affected by the "long tail" of users having low scores compared to the active ones.
There are no hand-set cutoffs or thresholds.
You could experiment with the rank attribution (score * index, or score * Math.sqrt(index), or score / Math.log(index + 1), ...).
I take it that the fair measure is the number of points divided by the number of house members. Since you have the number of points, the exercise boils down to estimate the number of members.
We are in short supply of data here, as the only hint we have about member counts is the answers on the website. This makes us vulnerable to manipulation: members can trick us into underestimating their numbers. If the suggested estimation method of "count respondents with points >10" were known, houses would encourage only their best members to do the test, to hide members from our count. This is a real problem, and the only thing I will do about it is to present a "manipulation indicator".
How could we then estimate member counts? Since we do not know anything other than test results, we have to infer the propensity to do the test from the actual results. And we have little other to assume than that we would have a symmetric result distribution (of the logarithm of the points) if all members tested. Now let's say the strong would-be respondents are more likely to actually test than weak would-be respondents. Then we could measure the extra dropout ratio for the weak by comparing the numbers of respondents in corresponding weak and strong test-point quantiles.
To be specific, of the 205 answers, there are 27 in the worst half of the overall weakest quartile, while 32 in the strongest half of the best quartile. So an extra 5 respondents of the very weakest have dropped out from an assumed all-testing symmetric population, and to adjust for this, we are going to estimate member count from this quantile by multiplying the number of responses in it by 32/27=about 1.2. Similarly, we have 29/26 for the next less-extreme half quartiles and 41/50 for the two mid quartiles.
So we would estimate members by simply counting the number of respondents but multiplying the number of respondents in the weak quartiles mentioned above by 1.2, 1.1 and 0.8 respectively. If however any result distribution within a house would be conspicuously skewed, which is not the case now, we would have to suspect manipulation and re-design our member count.
For the sample at hand, however, these adjustments to member counts are minor and yield the same house ranks as just counting the respondents without adjustment.
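A rough numpy sketch of that member-count adjustment (the band weights and the data layout are my reading of the answer, so treat it as illustrative only):

import numpy as np

def estimate_members(points_by_house):
    """points_by_house: dict of house -> array of respondents' points (all >= 1)."""
    pooled = np.concatenate(list(points_by_house.values()))
    # half-quartile (octile) boundaries of the pooled log-points
    edges = np.quantile(np.log(pooled), np.linspace(0, 1, 9))
    def band_counts(values):
        return np.histogram(np.log(values), bins=edges)[0]
    pooled_counts = band_counts(pooled)
    # weight each weak band by the count in its mirror-image strong band,
    # assuming a symmetric distribution if every member had tested
    weights = np.ones(8)
    for b in range(4):
        if pooled_counts[b] > 0:
            weights[b] = pooled_counts[7 - b] / pooled_counts[b]
    return {house: float(band_counts(vals) @ weights)
            for house, vals in points_by_house.items()}

The fair score per house would then be its total points divided by this member estimate.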
I amused myself a little with your question and some Python programming on randomly generated data. As some people mentioned in the comments, you need to define what fairness means. If, as you said, you don't know the number of people in each house, you can use the number of participants from each house, thus rewarding participation (this can be unfair depending on the number of people in each house, but as you said, you don't have that data in the first place).
The important part of the code is the following.
import numpy as np
from numpy.random import randint  # import random int
# initialize random seed
np.random.seed(4)
houses = ["Gryffindor", "Slytherin", "Hufflepuff", "Ravenclaw"]
houses_points = []
# generate random data for each house
for _ in houses:
    # houses_points.append(randint(0, 100, randint(60,100)))
    houses_points.append(randint(0, 50, randint(2, 10)))
# count participation
houses_participations = []
houses_total_points = []
for house_id in range(len(houses)):
    houses_total_points.append(np.sum(houses_points[house_id]))
    houses_participations.append(len(houses_points[house_id]))
# sum the total number of participations
total_participations = np.sum(houses_participations)
# proposed model with weighted total participation points (integer division)
houses_partic_points = []
for house_id in range(len(houses)):
    tmp = houses_total_points[house_id] * houses_participations[house_id] // total_participations
    houses_partic_points.append(tmp)
The results of this method are the following:
House Points per Participant
Gryffindor: [46 5 1 40]
Slytherin: [ 8 9 39 45 30 40 36 44 38]
Hufflepuff: [42 3 0 21 21 9 38 38]
Ravenclaw: [ 2 46]
House Number of Participations per House
Gryffindor: 4
Slytherin: 9
Hufflepuff: 8
Ravenclaw: 2
House Total Points
Gryffindor: 92
Slytherin: 289
Hufflepuff: 172
Ravenclaw: 48
House Points weighted by a participation factor
Gryffindor: 16
Slytherin: 113
Hufflepuff: 59
Ravenclaw: 4
You'll find the complete file with printing results here (https://gist.github.com/silgon/5be78b1ea0b55a20d90d9ec3e7c515e5).
You should introduce some more rules to define fairness.
Idea 1
You could set up the rule that anyone has to earn at least 10 points to enter the competition.
Then you can calculate the average points for each house.
Positive: Everyone needs to show some motivation.
Idea 2
Another approach would be to set the rule that from each house only the 10 best students will count for the competition.
Positive: Easy rule to calculate the points.
Negative: Students might become uninterested if they see they can't reach the top 10 places of their house.
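A tiny Python sketch of both ideas (my code), assuming points_by_house maps a house name to the list of its members' point totals:

def idea_1(points_by_house, threshold=10):
    """Average points over members who earned at least `threshold` points."""
    return {house: sum(p for p in pts if p >= threshold) /
                   max(1, sum(1 for p in pts if p >= threshold))
            for house, pts in points_by_house.items()}

def idea_2(points_by_house, top_n=10):
    """Sum of each house's `top_n` best scores."""
    return {house: sum(sorted(pts, reverse=True)[:top_n])
            for house, pts in points_by_house.items()}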
From my point of view, your problem divides into a few points:
The best thing to do would be to re-assign the players to the different Houses so that each House has the same number of players (as explained by #navid-vafaei).
You may not want to do that because you believe it could affect your game's popularity with players who end up in a House they didn't want, even though you can change the choice of the Sorting Hat, at least in the movies or books.
In that case, you can sum the points of each house's students and divide by the number of students. You may simply exclude students with a very low score, and you may also exclude students with very low activity, because students who skip school might be expelled.
The most important part of your algorithm, for me, is whether or not you give points for all the valuable things:
In the Harry Potter story, the students earn points in the different subjects they chose at school and get points according to their results.
At the end of the year, there is a special award event. At that moment, the Director gives points for valuable things which cannot be evaluated in the school subjects, such as personal qualities (bravery, for example).

Best algorithm for netting orders

I am building a marketplace, and I want to build a matching mechanism for market participants orders.
For instance I receive these orders:
A buys 50
B buys 100
C sells 50
D sells 20
that can be represented as a List<Order>, where Order is a class with Participant, BuySell, and Amount
I want to create a Match function, that outputs 2 things:
A set of unmatched orders (List<Order>)
A set of matched orders (List<MatchedOrder> where MatchOrder has Buyer,Seller,Amount
The constraint is to minimize the number of orders (unmatched and matched), while leaving no possible match undone (i.e., in the end only buy orders or only sell orders may remain unmatched)
So in the example above the result would be:
A buys 50 from C
B buys 20 from D
B buys 80 (outstanding)
This seems like a fairly complex algorithm to write, but one that is very common in practice. Any pointers on where to look?
You can model this as a flow problem in a bipartite graph: every selling node on the left and every buying node on the right, with a source feeding the sellers and a sink fed by the buyers.
Then you must find the maximum amount of flow you can pass from source to sink.
You can use any maximum-flow algorithm you want, e.g. Ford-Fulkerson. To minimize the number of orders, you can use a Maximum-Flow/Min-Cost algorithm. There are a number of techniques to do that, including applying Cycle Canceling after finding a normal Max-Flow solution.
After running the algorithm, you'll have a residual network from which the matched amounts can be read off.
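As a concrete illustration (mine, not from the answer), the bipartite model can be set up with networkx; a plain max-flow already produces one complete matching, and the min-cost refinement mentioned above is what you would add on top to push the number of matched orders down:

import networkx as nx

buys = {"A": 50, "B": 100}
sells = {"C": 50, "D": 20}

G = nx.DiGraph()
for name, qty in sells.items():
    G.add_edge("source", name, capacity=qty)
for name, qty in buys.items():
    G.add_edge(name, "sink", capacity=qty)
for s in sells:                       # any seller may (partially) fill any buyer
    for b in buys:
        G.add_edge(s, b)              # uncapacitated edge

flow_value, flow = nx.maximum_flow(G, "source", "sink")
for s in sells:
    for b, qty in flow[s].items():
        if qty > 0:
            print(f"{b} buys {qty} from {s}")
for b, qty in buys.items():
    if flow[b]["sink"] < qty:
        print(f"{b} buys {qty - flow[b]['sink']} (outstanding)")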
Create a WithRemainingQuantity structure with 2 members: a pointer o to an order and an integer to store the unmatched quantity.
Consider 2 List<WithRemainingQuantity>, one for buys (Bq) and one for sells (Sq), both sorted by descending quantity of the contained order.
The algorithm matches the heads of the two queues until one of them is empty.
Algorithm (a mix of pseudocode and C++):
struct WithRemainingQuantity
{
    Order * o;
    int remainingQty; // initialised with o->getQty
};
struct MatchedOrder
{
    Order * orderBuy;
    Order * orderSell;
    int matchedQty = 0;
};
List<WithRemainingQuantity> Bq;
List<WithRemainingQuantity> Sq;
/*
    populate Bq and Sq and sort by quantities descending;
    this is what guarantees the minimum number of matches.
*/
List<MatchedOrder> l;
while( !Bq.empty() && !Sq.empty() )
{
    int matchedQty = std::min(Bq.front().remainingQty, Sq.front().remainingQty);
    l.push_back( MatchedOrder(orderBuy=Bq.front().o, orderSell=Sq.front().o, matchedQty=matchedQty) );
    Bq.front().remainingQty -= matchedQty;
    Sq.front().remainingQty -= matchedQty;
    if (Bq.front().remainingQty == 0)
        Bq.pop_front();
    if (Sq.front().remainingQty == 0)
        Sq.pop_front();
}
The unmatched orders are the ones remaining in Bq or Sq (one of them is necessarily empty, by the while condition).

How can this algorithm be expressed in programming for a calculation?

Is it possible for someone to make this algorithm easy to understand, so that I can enter it into a calculation for my application?
Thanks
To explain:
Yes, that algorithm most certainly can be expressed in a programming language; I will not provide a full implementation, but here is some pseudocode to get you started.
The first thing we see is that x is dealing with money, so let's represent that as a double:
double monthlyPayment // this is x
Next we see that P₀ is the total loan amount, so let's also represent that as a double:
double loanAmount // this is P₀
Next we see that i is the interest rate, so this will also be a double:
double interestRate // this is i
Next we see that n is the number of months that remain on the loan; this is an integer:
int monthsRemaining // this is n
So, taking the formula you proposed:
monthlyPayment = (loanAmount * interestRate) / (1 - (1 + interestRate) ^ (-monthsRemaining))
Now without actually implementing this I do believe you can take it from here.
This is how you could represent it using Javascript:
function repayment(amount, months, interest) {
return (amount * interest) / (1 - Math.pow(1 + interest, -months));
}
Are you looking for an explanation of the equation? It deals with loans. When you take out loans, you take out a certain amount of money (P) for a certain number of months (n). In addition, to make the loan worth while to the loaner, you are charged an interest rate (i). That's a percentage that you are charged every month on the remaining loan amount.
So, you need to calculate how much you have to pay each month (x) in order to have the loan paid off in the set amount of time (n months). It's not as simple as just dividing the loan amount (P) by the number of months (n), because you also have to pay interest.
So, the equation is giving you the amount of money you have to pay each month in order to pay off the original loan plus any interest.
If you're using Java:
public double calculateMonthlyPayment(double originalLoan, double interestRate, double monthsToRepay) {
return (originalLoan*interestRate)/(1-(Math.pow((1+interestRate), -monthsToRepay)));
}

Algorithm For Ranking Items

I have a list of 6500 items that I would like to trade or invest in. (Not for real money, but for a certain game.) Each item has 5 numbers that will be used to rank it among the others.
Total quantity of item traded per day: The higher this number, the better.
The Donchian Channel of the item over the last 5 days: The higher this number, the better.
The median spread of the price: The lower this number, the better.
The spread of the 20 day moving average for the item: The lower this number, the better.
The spread of the 5 day moving average for the item: The higher this number, the better.
All 5 numbers have the same 'weight'; in other words, they should all affect the final number with the same worth or value.
At the moment, I just multiply all 5 numbers for each item, but it doesn't rank the items the way I would like them to be ranked. I just want to combine all 5 numbers into one weighted number that I can use to rank all 6500 items, but I'm unsure of how to do this correctly or mathematically.
Note: The total quantity of the item traded per day and the Donchian channel are numbers that are much higher than the spreads, which are more percentage-like numbers. This is probably the reason why multiplying them all together didn't work for me; the quantity traded per day and the Donchian channel had a much bigger role in the final number.
The reason people are having trouble answering this question is we have no way of comparing two different "attributes". If there were just two attributes, say quantity traded and median price spread, would (20million,50%) be worse or better than (100,1%)? Only you can decide this.
Converting everything into numbers of the same size could help; this is what is known as "normalisation". A good way of doing this is the z-score which Prasad mentions. This is a statistical concept, looking at how the quantity varies. You need to make some assumptions about the statistical distributions of your numbers to use this.
Things like spreads are probably normally distributed - shaped like a normal distribution. For these, as Prasad says, take z(spread) = (spread-mean(spreads))/standardDeviation(spreads).
Things like the quantity traded might be a Power law distribution. For these you might want to take the log() before calculating the mean and sd. That is the z score is z(qty) = (log(qty)-mean(log(quantities)))/sd(log(quantities)).
Then just add up the z-score for each attribute.
To do this for each attribute you will need to have an idea of its distribution. You could guess but the best way is plot a graph and have a look. You might also want to plot graphs on log scales. See wikipedia for a long list.
You can replace each attribute-vector x (of length N = 6500) by the z-score of the vector Z(x), where
Z(x) = (x - mean(x))/sd(x).
This would transform them into the same "scale", and then you can add up the Z-scores (with equal weights) to get a final score, and rank the N=6500 items by this total score. If you can find in your problem some other attribute-vector that would be an indicator of "goodness" (say the 10-day return of the security?), then you could fit a regression model of this predicted attribute against these z-scored variables, to figure out the best non-uniform weights.
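A small numpy sketch of this z-score combination (the attribute names and the log choice for the volume-like attributes are illustrative, following the answers above):

import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def combined_score(attributes, direction, log_attrs=()):
    """attributes: dict name -> values; direction: +1 higher-is-better, -1 lower-is-better."""
    total = 0.0
    for name, values in attributes.items():
        v = np.log(values) if name in log_attrs else values
        total = total + direction[name] * zscore(v)
    return total                       # rank the 6500 items by this, highest first

# direction = {"qty": +1, "donchian": +1, "median_spread": -1, "spread_20d": -1, "spread_5d": +1}
# score = combined_score(attributes, direction, log_attrs={"qty", "donchian"})
# ranking = np.argsort(-score)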
Start each item with a score of 0. For each of the 5 numbers, sort the list by that number and add each item's ranking in that sorting to its score. Then, just sort the items by the combined score.
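The rank-sum idea above in a short sketch (my code):

import numpy as np

def rank_sum_score(columns, higher_is_better):
    """columns: list of equal-length value sequences; higher_is_better: list of bools."""
    n = len(columns[0])
    total = np.zeros(n)
    for values, hib in zip(columns, higher_is_better):
        values = np.asarray(values, dtype=float)
        order = np.argsort(values if hib else -values)
        ranks = np.empty(n)
        ranks[order] = np.arange(n)    # 0 = worst, n-1 = best under this attribute
        total += ranks
    return total                       # sort items by this combined score, highest first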
You would usually normalize your data entries to their respective range. Since there is no fixed range for them, you'll have to use a sliding range - or, to keep it simpler, normalize them to the daily ranges.
For each day, get all entries for a given type, get the highest and the lowest of them, determine the difference between them. Let Bottom=value of the lowest, Range=difference between highest and lowest. Then you calculate for each entry (value - Bottom)/Range, which will result in something between 0.0 and 1.0. These are the numbers you can continue to work with, then.
Pseudocode (brackets replaced by indentation to make it easier to read):
double maxvalues[5];
double minvalues[5];
// init arrays with any item
for (i = 0; i < 5; i++)
    maxvalues[i] = items[0][i];
    minvalues[i] = items[0][i];
// find minimum and maximum values
foreach (items as item)
    for (i = 0; i < 5; i++)
        if (minvalues[i] > item[i])
            minvalues[i] = item[i];
        if (maxvalues[i] < item[i])
            maxvalues[i] = item[i];
// now scale them - in this case, to the range of 0 to 1.
double scaledItems[sizeof(items)][5];
for (i = 0; i < 5; i++)
    double delta = maxvalues[i] - minvalues[i];
    for (j = sizeof(items)-1; j >= 0; --j)
        scaledItems[j][i] = (items[j][i] - minvalues[i]) / delta; // linear normalization
Something like that. I'd be more elegant with a good library (STL, Boost, whatever you have on the implementation platform), and the normalization should be in a separate function, so you can replace it with other variations like log() as the need arises.
Total quantity of item traded per day: The higher this number, the better. (a)
The Donchian Channel of the item over the last 5 days: The higher this number, the better. (b)
The median spread of the price: The lower this number, the better. (c)
The spread of the 20 day moving average for the item: The lower this number, the better. (d)
The spread of the 5 day moving average for the item: The higher this number, the better. (e)
a + b -c -d + e = "score" (higher score = better score)

Algorithm to share/settle expenses among a group

I am looking for an algorithm for the problem below.
Problem: There will be a set of people who owe each other some money, or none. Now I need an algorithm (the best and neatest) to settle expenses among this group.
Person AmtSpent
------ ---------
A 400
B 1000
C 100
Total 1500
Now, the expense per person is 1500/3 = 500. Meaning A has to give B 100, and C has to give B 400. I know I can start with the least spent amount and work forward.
Can some one point me the best one if you have.
To sum up:
1. Find the total expense, and the expense per head.
2. Find the amount each person owes or is owed (-ve denotes an amount owed to them).
3. Start with the least +ve amount. Allocate it against a -ve amount.
4. Keep repeating step 3 until that -ve amount is used up.
5. Move to the next bigger +ve amount. Keep repeating steps 3 & 4 until there are no +ve amounts left.
Or is there any better way to do it?
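For concreteness, the steps above in a short Python sketch (my code, not from the question):

def settle(spent):
    """spent: dict person -> amount paid. Prints who pays whom."""
    share = sum(spent.values()) / len(spent)
    owes = {p: share - paid for p, paid in spent.items()}    # +ve: owes money, -ve: is owed
    payers = sorted((p for p in owes if owes[p] > 0), key=lambda p: owes[p])   # least +ve first
    receivers = [p for p in owes if owes[p] < 0]
    for payer in payers:
        for receiver in receivers:
            if owes[payer] <= 0:
                break
            amount = min(owes[payer], -owes[receiver])
            if amount > 0:
                print(f"{payer} gives {receiver} {amount:g}")
                owes[payer] -= amount
                owes[receiver] += amount

settle({"A": 400, "B": 1000, "C": 100})
# A gives B 100
# C gives B 400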
The best way to get back to zero state (minimum number of transactions) was covered in this question here.
I have created an Android app which solves this problem. You can input expenses during the trip, and it even recommends "who should pay next". At the end it calculates "who should send how much to whom". My algorithm calculates the minimum required number of transactions, and you can set up a "transaction tolerance" which can reduce the number of transactions even further (you don't care about $1 transactions). Try it out, it's called Settle Up:
https://market.android.com/details?id=cz.destil.settleup
Description of my algorithm:
I have a basic algorithm which solves the problem with n-1 transactions, but it's not optimal. It works like this: from the payments, I compute the balance for each member. The balance is what he paid minus what he should pay. I sort members by balance in increasing order. Then I always take the poorest and the richest and a transaction is made between them. At least one of them ends up with a zero balance and is excluded from further calculations. With this, the number of transactions cannot be worse than n-1. It also minimizes the amount of money in transactions. But it's not optimal, because it doesn't detect subgroups which can settle up internally.
Finding subgroups which can settle up internally is hard. I solve it by generating all combinations of members and checking whether the sum of balances in the subgroup equals zero. I start with 2-pairs, then 3-pairs ... (n-1)-pairs. Implementations of combination generators are available. When I find a subgroup, I calculate the transactions within the subgroup using the basic algorithm described above. For every subgroup found, one transaction is spared.
The solution is optimal, but the complexity increases to O(n!). This looks terrible, but the trick is that in reality there will be only a small number of members. I have tested it on a Nexus One (1 GHz processor) and the results are: up to 10 members: <100 ms, 15 members: 1 s, 18 members: 8 s, 20 members: 55 s. So up to 18 members the execution time is fine. A workaround for >15 members can be to use just the basic algorithm (it's fast and correct, but not optimal).
Source code:
Source code is available inside a report about the algorithm, written in Czech. The source code is at the end and it's in English:
https://web.archive.org/web/20190214205754/http://www.settleup.info/files/master-thesis-david-vavra.pdf
You have described it already. Sum all the expenses (1500 in your case), divide by number of people sharing the expense (500). For each individual, deduct the contributions that person made from the individual share (for person A, deduct 400 from 500). The result is the net that person "owes" to the central pool. If the number is negative for any person, the central pool "owes" the person.
Because you have already described the solution, I don't know what you are asking.
Maybe you are trying to resolve the problem without the central pool, the "bank"?
I also don't know what you mean by "start with the least spent amount and work forward."
Javascript solution to the accepted algorithm:
const payments = {
John: 400,
Jane: 1000,
Bob: 100,
Dave: 900,
};
function splitPayments(payments) {
const people = Object.keys(payments);
const valuesPaid = Object.values(payments);
const sum = valuesPaid.reduce((acc, curr) => curr + acc);
const mean = sum / people.length;
const sortedPeople = people.sort((personA, personB) => payments[personA] - payments[personB]);
const sortedValuesPaid = sortedPeople.map((person) => payments[person] - mean);
let i = 0;
let j = sortedPeople.length - 1;
let debt;
while (i < j) {
debt = Math.min(-(sortedValuesPaid[i]), sortedValuesPaid[j]);
sortedValuesPaid[i] += debt;
sortedValuesPaid[j] -= debt;
console.log(`${sortedPeople[i]} owes ${sortedPeople[j]} $${debt}`);
if (sortedValuesPaid[i] === 0) {
i++;
}
if (sortedValuesPaid[j] === 0) {
j--;
}
}
}
splitPayments(payments);
/*
Bob owes Jane $400
Bob owes Dave $100
John owes Dave $200
*/
I have recently written a blog post describing an approach to solve the settlement of expenses between members of a group where potentially everybody owes everybody else, such that the number of payments needed to settle the debts is the least possible. It uses a linear programming formulation. I also show an example using a tiny R package that implements the solution.
I had to do this after a trip with my friends, here's a python3 version:
import numpy as np
import pandas as pd
# setup inputs
people = ["Athos", "Porthos", "Aramis"] # friends names
totals = [300, 150, 90] # total spent per friend
# compute matrix
total_spent = np.array(totals).reshape(-1,1)
share = total_spent / len(totals)
mat = (share.T - share).clip(min=0)
# create a readable dataframe
column_labels = [f"to_{person}" for person in people]
index_labels = [f"{person}_owes" for person in people]
df = pd.DataFrame(data=mat, columns=column_labels, index=index_labels)
df.round(2)
Returns this dataframe:
              to_Athos  to_Porthos  to_Aramis
Athos_owes           0           0          0
Porthos_owes        50           0          0
Aramis_owes         70          20          0
Read it like: "Porthos owes $50 to Athos" ....
This isn't the optimized version, this is the simple version, but it's simple code and may work in many situations.
I'd like to make a suggestion to change the core parameters, from a UX-standpoint if you don't mind terribly.
Whether its services or products being expensed amongst a group, sometimes these things can be shared. For example, an appetizer, or private/semi-private sessions at a conference.
For things like an appetizer party tray, it's sort of implied that everyone has access, but not necessarily that everyone had some. Charging each person an equal split when, say, only 30% of the people partook can cause contention when it comes to splitting the bill; other groups of people might not care at all. So from an algorithm standpoint, you first need to decide which of these three choices will be used, probably per expense:
Universally split
Split by those who partook, evenly
Split by proportion per-partaker
I personally prefer the second one in general, because it can handle whole-expense ownership for expenses used by only one person, some of the people, or the whole group. It also sidesteps the ethical question of proportional differences with a blanket rule: if you partook, you pay an even split regardless of how much you personally had. As a social element, I would treat someone who had a "small sample" of something just to try it, and then decided not to have any more, as a reason to remove that person from the people splitting that expense.
So, small-sampling != partaking ;)
Then you take each expense and iterate through the group, recording who partook in what, handling each of those items atomically, and at the end provide a total per person.
So in the end, you take your list of expenses and iterate through them. For each expense, you take the people who partook, apply an even split of that expense to each of them, and update each person's current share of the bill.
Pardon the pseudo-code:
list_of_expenses[] = getExpenseList()
list_of_agents_to_charge[] = getParticipantList()
for each expense in list_of_expenses
    list_of_partakers[] = getPartakerList(expense)
    for each partaker in list_of_partakers
        addChargeToAgent(expense.price / list_of_partakers.size, list_of_agents_to_charge[partaker])
Then just iterate through your list_of_agents_to_charge[] and report each total to each agent.
You can add support for a tip by simply treating the tip like an additional expense to your list of expenses.
Straightforward, as you do in your text:
Returns the expenses to be paid by everybody in the original array.
Negative values: this person gets some money back.
Just hand whatever you owe to the next in line and then drop out. If you are owed something, just wait for the second round. When done, reverse the whole thing. After these two rounds everybody has paid the same amount.
procedure SettleDepth(Expenses: array of double);
var
i: Integer;
s: double;
begin
//Sum all amounts and divide by number of people
// O(n)
s := 0.0;
for i := Low(Expenses) to High(Expenses) do
s := s + Expenses[i];
s := s / (High(Expenses) - Low(Expenses) + 1); // number of people
// Inplace Change to owed amount
// and hand on what you owe
// drop out if your even
for i := High(Expenses) downto Low(Expenses)+1 do begin
Expenses[i] := s - Expenses[i];
if (Expenses[i] > 0) then begin
Expenses[i-1] := Expenses[i-1] + Expenses[i];
Expenses.Delete(i);
end else if (Expenses[i] = 0) then begin
Expenses.Delete(i);
end;
end;
Expenses[Low(Expenses)] := s - Expenses[Low(Expenses)];
if (Expenses[Low(Expenses)] = 0) then begin
Expenses.Delete(Low(Expenses));
end;
// hand on what you owe
for i := Low(Expenses) to High(Expenses)-1 do begin
if (Expenses[i] > 0) then begin
Expenses[i+1] := Expenses[i+1] + Expenses[i];
end;
end;
end;
The idea (similar to what is asked, but with a twist, using a bit of a ledger concept) is to use a Pool account where, for each bill, members either pay into the Pool or get paid from the Pool.
e.g.
the Costco expenses are paid by Mr P, who gets $93.76 back from the Pool, while each of the other members pays $46.88 into the pool.
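A tiny Python sketch of the Pool-ledger idea (my code; the member names and the three-member assumption are inferred from the figures quoted above):

from collections import defaultdict

def pool_ledger(bills, members):
    """bills: list of (payer, amount). Returns each member's net position against the Pool."""
    net = defaultdict(float)      # +ve: the Pool owes the member, -ve: the member owes the Pool
    for payer, amount in bills:
        share = amount / len(members)
        net[payer] += amount - share
        for m in members:
            if m != payer:
                net[m] -= share
    return {m: round(v, 2) for m, v in net.items()}

print(pool_ledger([("P", 140.64)], ["P", "Q", "R"]))
# {'P': 93.76, 'Q': -46.88, 'R': -46.88}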
There are obviously better ways to do it, but they would require running an algorithm with NP time complexity, which could really slow down your application. Anyway, this is how I implemented the solution in Java for my Android application, using priority queues:
class calculateTransactions {
public static void calculateBalances(PriorityQueue<Balance> debtors, PriorityQueue<Balance> creditors) {
// add members who are owed money to debtors priority queue
// add members who owe money to others to creditors priority queue
}
public static void calculateTransactions() {
results.clear(); // remove previously calculated transactions before calculating again
PriorityQueue<Balance> debtors = new PriorityQueue<>(members.size(),new BalanceComparator()); // debtors are members of the group who are owed money, balance comparator defined for max priority queue
PriorityQueue<Balance> creditors = new PriorityQueue<>(members.size(),new BalanceComparator()); // creditors are members who have to pay money to the group
calculateBalances(debtors,creditors);
/*Algorithm: Pick the largest element from debtors and the largest from creditors. Ex: If debtors = {4,3} and creditors={2,7}, pick 4 as the largest debtor and 7 as the largest creditor.
* Now, do a transaction between them. The debtor with a balance of 4 receives $4 from the creditor with a balance of 7 and hence, the debtor is eliminated from further
* transactions. Repeat the same thing until and unless there are no creditors and debtors.
*
* The priority queues help us find the largest creditor and debtor in constant time. However, adding/removing a member takes O(log n) time to perform it.
* Optimisation: This algorithm produces correct results but the no of transactions is not minimum. To minimize it, we could use the subset sum algorithm which is a NP problem.
* The use of a NP solution could really slow down the app! */
while(!creditors.isEmpty() && !debtors.isEmpty()) {
Balance rich = creditors.peek(); // get the largest creditor
Balance poor = debtors.peek(); // get the largest debtor
if(rich == null || poor == null) {
return;
}
String richName = rich.name;
BigDecimal richBalance = rich.balance;
creditors.remove(rich); // remove the creditor from the queue
String poorName = poor.name;
BigDecimal poorBalance = poor.balance;
debtors.remove(poor); // remove the debtor from the queue
BigDecimal min = richBalance.min(poorBalance);
// calculate the amount to be send from creditor to debtor
richBalance = richBalance.subtract(min);
poorBalance = poorBalance.subtract(min);
HashMap<String,Object> values = new HashMap<>(); // record the transaction details in a HashMap
values.put("sender",richName);
values.put("recipient",poorName);
values.put("amount",currency.charAt(5) + min.toString());
results.add(values);
// Consider a member as settled if he has an outstanding balance between 0.00 and 0.49 else add him to the queue again
int compare = 1;
if(poorBalance.compareTo(new BigDecimal("0.49")) == compare) {
// if the debtor is not yet settled(has a balance between 0.49 and inf) add him to the priority queue again so that he is available for further transactions to settle up his debts
debtors.add(new Balance(poorBalance,poorName));
}
if(richBalance.compareTo(new BigDecimal("0.49")) == compare) {
// if the creditor is not yet settled(has a balance between 0.49 and inf) add him to the priority queue again so that he is available for further transactions
creditors.add(new Balance(richBalance,richName));
}
}
}
}
I've created a React app that implements a bin-packing approach to split trip expenses among friends with the least number of transactions.
Check out the TypeScript file SplitPaymentCalculator.ts, which implements it.
You can find the working app's link on the homepage of the repo.
