Say I need to place n=30 students into groups of between 2 and 6, and I collect the following preference data from each student:
Student Name: Tom
Likes to sit with: Jimi, Eric
Doesn't like to sit with: John, Paul, Ringo, George
It's implied that they're neutral about any other student in the overall class that they haven't mentioned.
How might I best run a large number of simulations of many different/random grouping arrangements, so that I can determine a score for each arrangement and then pick the best-scoring one?
Alternatively, are there any other methods by which I might be able to calculate a solution that satisfies all of the supplied constraints?
I'd like a generic method that can be reused on different class sizes each year, but within each simulation run, the following constants and variables apply:
Constants: Total number of students, Student preferences
Variables: Group sizes, Student Groupings, Number of different group arrangements/iterations to test
Thanks in advance for any help/advice/pointers provided.
I believe you can state this as an explicit mathematical optimization problem.
Define the binary decision variables:
x(p,g) = 1 if person p is assigned to group g
       = 0 otherwise
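The post doesn't reproduce the objective or constraints at this point, but a formulation consistent with the MIQP description would be the following (the exact objective and the fixed group sizes are my reading of the post, not quoted from it):

\[
\max \;\sum_{g}\sum_{p_1 \ne p_2} \mathrm{pref}(p_1,p_2)\,x(p_1,g)\,x(p_2,g)
\quad\text{s.t.}\quad
\sum_{g} x(p,g) = 1 \;\;\forall p,\qquad
\sum_{p} x(p,g) = \mathrm{size}(g) \;\;\forall g,\qquad
x(p,g)\in\{0,1\}
\]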
I used your data set with 28 persons and your preference matrix (with -1, +1, 0 elements). For the groups, I used 4 groups of 6 and 1 group of 4. A solution can look like:
---- 80 PARAMETER solution using MIQP model

[28 x 5 assignment table: each of the students aimee, amber-la, amber-le, andrina, catelyn-t, charlie, charlotte, cory, daniel, ellie, ellis, eve, grace-c, grace-g, holly, jack, jade, james, kadie, kieran, kristiana, lily, luke, naz, nibah, niko, wiki and zeina has a single 1 in the column of the group (group1..group5) it was assigned to; the column alignment of the original output is not preserved here.]

COUNT    group1 = 6, group2 = 6, group3 = 6, group4 = 6, group5 = 4
Notes:
This model can be linearized, so it can be fed into a standard MIP solver
I solved this directly as a MIQP model (actually the solver reformulated the model into a MIP). The model solved in a few seconds.
Probably we need to add extra logic to make sure one person is not getting a really bad assignment. We optimize here only the total sum. This overall sum may allow an individual to get a bad deal. It is an interesting exercise to take this into account in the model. There are some interesting trade-offs.
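For readers who want to try the linearized version, here is a minimal sketch in Python with PuLP. The original model was built in GAMS; the library choice, the toy data, and the auxiliary variable y(p1,p2,g), which stands in for the product x(p1,g)*x(p2,g), are my own illustration of the idea, not the author's code.

from itertools import combinations
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

people = ["Tom", "Jimi", "Eric", "John"]                   # toy data
pref = {("Tom", "Jimi"): 1, ("Tom", "Eric"): 1, ("Tom", "John"): -1}
groups = ["g1", "g2"]
size = {"g1": 2, "g2": 2}

prob = LpProblem("grouping", LpMaximize)
x = LpVariable.dicts("x", (people, groups), cat=LpBinary)
# y[p1, p2, g] = 1 when p1 and p2 share group g (linearizes x * x)
y = LpVariable.dicts("y", [(p1, p2, g) for p1, p2 in combinations(people, 2)
                           for g in groups], cat=LpBinary)

# Objective: total (two-way) preference over pairs that end up together.
prob += lpSum((pref.get((p1, p2), 0) + pref.get((p2, p1), 0)) * y[p1, p2, g]
              for p1, p2 in combinations(people, 2) for g in groups)

for p in people:                          # every person in exactly one group
    prob += lpSum(x[p][g] for g in groups) == 1
for g in groups:                          # fixed group sizes
    prob += lpSum(x[p][g] for p in people) == size[g]
for p1, p2 in combinations(people, 2):    # tie y to the x variables
    for g in groups:
        prob += y[p1, p2, g] <= x[p1][g]
        prob += y[p1, p2, g] <= x[p2][g]
        prob += y[p1, p2, g] >= x[p1][g] + x[p2][g] - 1

prob.solve()
print({p: next(g for g in groups if value(x[p][g]) > 0.5) for p in people})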
A first approach: create an n x n matrix, where n is the total number of students, the row and column indexes are ordinals for the students, and each row holds that student's preferences about sitting with the others. Fill the cells with 1 = likes to sit with, -1 = the opposite, 0 = neutral. The main diagonal (i, i) is also filled with zeroes.
        Mark  Maria  John  Peter
Mark       0      1    -1      1
Maria      0      0    -1      1
John      -1      1     0      1
Peter      0
Score calculations are based on sums of these values. For example: John likes to sit with Maria (+1), but Maria doesn't like to sit with John (-1), so the result for that pair is 0. The best result for a pair is when both like each other, giving a sum of 2.
Then, based on the group sizes, calculate the score of each possible combination; the bigger the score, the better the arrangement. Combinations ignore the values on the main diagonal, i.e. John grouped with John himself is not a valid combination/group.
In a group size of 2, the best score is 2.
In a group size of 3, the best score is 6.
In a group size of 4, the best score is 12.
In a group size of n, the best score would be (n-1)*n.
Now, from an ordered list of combinations/groups, take the tuples with the highest scores first, while avoiding placing the same student in more than one tuple.
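A short sketch in Python of this scoring, using the matrix above. Peter's row is cut off in the example, so it is padded with zeros here purely for illustration, and the helper names are mine:

from itertools import combinations

names = ["Mark", "Maria", "John", "Peter"]
pref = [[ 0, 1, -1, 1],    # Mark's preferences
        [ 0, 0, -1, 1],    # Maria's preferences
        [-1, 1,  0, 1],    # John's preferences
        [ 0, 0,  0, 0]]    # Peter's row padded with zeros (incomplete above)

def group_score(group):
    # Sum the matrix in both directions over every pair in the group.
    return sum(pref[a][b] + pref[b][a] for a, b in combinations(group, 2))

def arrangement_score(groups):
    return sum(group_score(g) for g in groups)

print(group_score([2, 1]))                  # John & Maria: 1 + (-1) = 0
print(arrangement_score([[0, 3], [1, 2]]))  # {Mark, Peter} and {Maria, John}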
In recent research, PSO was implemented to place students into an unknown number of groups of 4 to 6. PSO showed improved capabilities compared to a GA. I think this specific piece of research is all you need.
The paper is: Forming automatic groups of learners using particle swarm optimization for applications of differentiated instruction
You can find the paper here: https://doi.org/10.1002/cae.22191
Perhaps the researchers could guide you through researchgate: https://www.researchgate.net/publication/338078753
Regarding the optimal seating arrangement, you need to specify an objective function using your specific data.
I need some help making a program that finds the best solution for everyone (more on that later).
6 7
0 0 0 0 0 0 0
1 0 0 1 1 0 0
2 2 2 1 2 2 2
2 1 1 1 2 1 2
0 1 2 2 1 0 0
1 2 1 2 0 1 1
The example given above is a problem that the algorithm is supposed to solve.
The first number of the first row is the number of people (6).
The second number of the first row is the number of appointments (7).
0 = the person doesn't have a problem with the date
1 = the person could choose this date if no other is available
2 = the person can't choose this appointment
Row = person
Column = available appointment
What the program needs to do is find the best possible solution for everyone: decide which column (appointment) best matches each person's wishes, arranging people's appointments based on their choices.
Example:
In the 3rd row, the person can only attend the appointment in the 4th column, since they can't attend any of the others (all marked 2); that also makes column 4 full and unavailable to the other people.
The reason I need help with this is that I have no idea how to approach it: this is a simple example, but the algorithm is supposed to work with dozens of people and appointments.
The exercise is somewhat ambiguous, probably on purpose. My wild guess would be to sort the meetings by:
the highest number of possible participants, i.e., the lowest number of 2s in a matrix column.
the lowest “badness”, i.e., the lowest number of 1s in a matrix column.
Why not the number of 2s (for the badness measure): because at this sorting stage we don't care about those who cannot participate at all; they are already handled by the first criterion.
Why not #0s: Because we want to minimize the number of people inconvenienced by the meeting time, not (necessarily) maximize the number of people pleased with the meeting time.
#!/usr/bin/env python
import sys

# Read the header line: number of people and number of appointments.
n_people, n_appointments = (int(i)
                            for i in sys.stdin.readline().split())
# Read the preference matrix (one row per person).
people_appointments = tuple(tuple(int(i)
                                  for i in line.split())
                            for line in sys.stdin)
assert len(people_appointments) == n_people
for appointments in people_appointments:
    assert len(appointments) == n_appointments

# For every appointment, count absences (2s) and badness (1s).
appointment_metric = {}
for appointment in range(n_appointments):
    n_missing = sum(people_appointments[i][appointment] == 2
                    for i in range(n_people))
    badness = sum(people_appointments[i][appointment] == 1
                  for i in range(n_people))
    appointment_metric.setdefault(
        (n_missing, badness), []).append(str(appointment + 1))

# Print appointments from best to worst according to (absence, badness).
for metric in sorted(appointment_metric):
    print(f'Appointment Nr. {" / ".join(appointment_metric[metric])} '
          f'(absence {metric[0]}, badness {metric[1]})')
Possible output, from the best appointment to the worst by the metric described above:
Appointment Nr. 6 (absence 1, badness 2)
Appointment Nr. 7 (absence 2, badness 1)
Appointment Nr. 1 / 2 / 3 / 5 (absence 2, badness 2)
Appointment Nr. 4 (absence 2, badness 3)
There are (of course) many other ways to evaluate meetings. Picking and defining a metric is quite likely an implicit part of the exercise.
How to calculate percent of count of UNIQUE values?
E.g. I have a dataset with people who can pick multiple symptoms (i.e. each person can have 0 to 10 values).
person 1 - symptom A, B
person 2 - symptom B, C, D
person 3 - no symptoms
person 4 - symptom A
etc.
E.g. if total UNIQUE count of people is 4 and 2 of them have picked symptom A, then I'd like to see:
A = 2/4 = 50%
Currently QuickSight is able to calculate shares based on total count of people (not unique count) as one person can have multiple symptoms, so A is 2/6 = 33% (not what I need).
As much as I've tried, I haven't found a way to make QuickSight do that. Is it possible?
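For reference, outside QuickSight the number being asked for is just the distinct count of people who picked the symptom divided by the distinct count of people overall. A small pandas sketch (the column names and the hard-coded total of 4 people are only for this example):

import pandas as pd

# person 3 reported no symptoms, so they appear in the total but not in the rows
df = pd.DataFrame({"person": [1, 1, 2, 2, 2, 4],
                   "symptom": ["A", "B", "B", "C", "D", "A"]})
total_people = 4

share = df.groupby("symptom")["person"].nunique() / total_people * 100
print(share)   # A -> 50.0, B -> 50.0, C -> 25.0, D -> 25.0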
Considering I have 4 chromosomes (g_i, i = 1 to 4) to represent 4 percentages of different things, so that the 4 percentages sum to 100: how do I represent this efficiently?
I know that it is possible via g1/(g1+g2+g3+g4). However, this is not efficient: with all g_i = 0.2, or with all g_i = 0.1, each gene represents 25% in both cases, and it is possible to generate many cases where different gene values represent the same percentages. Is there a more efficient way, where a unique combination of gene values represents a unique set of percentages?
Thanks in advance.
I think you're confusing genes and chromosomes. A chromosome encodes a candidate solution to your problem. A gene is part of a chromosome.
Under this setting, why would you want that constraint on the chromosomes? It sounds like you want it on the genes of a chromosome.
In order to do this you can do a number of things: have each gene encode an integer in [0, 100]. If the genes do not add to 100 in the end, penalize the fitness of those chromosomes.
Another way, which might make crossover operators more natural to apply, is to have each gene store 100 bits. If x bits are set, that means the gene will encode x%.
Yet another way is to have the entire chromosome encode 100 set bits. Then each gene will hold a value x, which represents an interval. The number of set bits between two split points is the percentage associated to that gene. For example:
1 2 3 4 5 6 7 8 ... 100
1 1 1 1 1 1 1 1 ... 1
|     |     |     |   |
   g1    g2    g3   g4
This can be done by generating random split points <= 100 (with 0 and 100 as the two outer ones), sorting them, and taking the differences between consecutive points.
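A small sketch of that encoding in Python, assuming 4 genes as in the question (the helper name and the use of integer cut points are mine):

import random

def random_percentages(n_genes=4):
    # n_genes - 1 random interior cut points, plus fixed outer points 0 and 100
    cuts = sorted(random.randint(0, 100) for _ in range(n_genes - 1))
    points = [0] + cuts + [100]
    # gap between consecutive split points = percentage for that gene
    return [points[i + 1] - points[i] for i in range(n_genes)]

print(random_percentages())   # e.g. [17, 40, 8, 35]; always sums to 100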
One way to assign X units to N possibilities is to store X * (N-1) bits. Every unit is given (N-1) bits and if k of the (N-1) bits are set then the unit is assigned to k.
This is easy to work with as there are no invalid solutions and no penalties/repairs are necessary. This makes fitness evaluation, crossover and mutation easier to implement.
For example, the problem is to assign 5 units (X) to one of 4 (N) possibilities. Each individual is (4-1)x5=15 bits.
The bit string 010 100 000 011 111 assigns the first 2 units to possibility 1, because both groups have 1 bit set. The third unit, whose group has no bits set, is assigned to possibility 0. The fourth unit is assigned to 2 and the fifth to 3.
partition   units
    0         1
    1         2
    2         1
    3         1
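A decoding sketch in Python for the example above (the function name and the string form of the bit string are my own choices):

def decode(bits, n_possibilities=4):
    # Each unit gets n_possibilities - 1 bits; its value is the number of set bits.
    width = n_possibilities - 1
    groups = [bits[i:i + width] for i in range(0, len(bits), width)]
    return [g.count("1") for g in groups]

assignment = decode("010100000011111")   # the example bit string
print(assignment)                        # [1, 1, 0, 2, 3]
print({k: assignment.count(k) for k in range(4)})   # {0: 1, 1: 2, 2: 1, 3: 1}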
Let's say we have 3 people: Alice, Bob, and Charlie.
Let's say each of them has a resource: apples, bananas, and coconuts, respectively.
Each of them has 3 of their resource.
The goal of the algorithm is to make 1-1 trades such that each of them ends up with 1 of each of the 3 resources. A list of those trades is what I want to obtain.
Ideally I would like to know how to solve this, but I'm willing to settle for the name of this kind of problem, or of a similar one that I can research and get ideas from.
The problem I'm working on will have around 600 objects and ~1000 people, each with a random amount/type of starting resources (with the assumption that there are enough resources to satisfy the end result), so ideally any solution would be feasible at that scale. But I'll take whatever I can get; I just need some kind of starting point.
The answers of ElKamina and Tyler Durden are decent, but they don't seem to take into account that Kuriso would like to perform 1-1 trades, that people may have multiple commodities, and multiple units of commodities. I have a naive solution that does.
I think the original example was a bit oversimplified, so let's take another one:
     c1   c2   c3   c4
A     5    0    1    0
B     0    1    0    1
C     0    6    2    0
Where A,B,C are people and c1,c2,c3,c4 are the commodities.
First, let's calculate the ideal distribution, which is easily done: for each commodity, divide the sum of stuff by the number of people, rounded down, and everybody gets that:
     c1   c2   c3   c4
A     1    2    1    0
B     1    2    1    0
C     1    2    1    0
Now let's define a WANT function denoting how much of commodity c person X needs in order to reach the ideal position: WANT(X, c) = IDEAL(c) - X_c.
     c1   c2   c3   c4   sum
A    -4    2    0    0    -2
B     1    1    1   -1     2
C     1   -4   -1    0    -4
Let's make a list of people ordered by the sum of their wants. Take the richest guy, the one with the lowest want sum (in this case C), and try to satisfy his wants by matching him up with the people who have the most to offer of the commodity he wants most. If they can make a trade, great; if not, continue until we find a match (a match is guaranteed eventually). In this example, C needs c1; the one offering the most c1 is A. Iterating over the commodities, we find that A needs c2 and C has surplus c2, so they exchange one c1 for one c2. Update their positions in the list, or remove them if they no longer have any needs. Iterate until nobody has any wants left. This won't produce a perfectly equal distribution, but it gets as close as 1-for-1 trading allows.
This is indeed a naive solution, with the heuristic that the richest guy has the best chance of offering something in return for the commodity he needs. The complexity is high, but with ordered lists it should be manageable for the numbers you specified.
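A rough sketch of this heuristic in Python, applied to the example above. The function names, the tie-breaking, and the decision to stop when no trade can be found are my own choices; the answer only describes the idea:

def ideal_share(holdings):
    # Per-commodity floor(total / number of people).
    people = list(holdings)
    commodities = {c for h in holdings.values() for c in h}
    return {c: sum(holdings[p].get(c, 0) for p in people) // len(people)
            for c in commodities}

def wants(holdings, ideal):
    # WANT(X, c) = IDEAL(c) - X_c for every person X and commodity c.
    return {p: {c: ideal[c] - holdings[p].get(c, 0) for c in ideal}
            for p in holdings}

def greedy_trades(holdings):
    # Returns a list of trades (giver, commodity_given, receiver, commodity_returned).
    holdings = {p: dict(h) for p, h in holdings.items()}   # work on a copy
    ideal = ideal_share(holdings)
    trades = []
    while True:
        w = wants(holdings, ideal)
        needy = [p for p in w if any(v > 0 for v in w[p].values())]
        if not needy:
            break
        needy.sort(key=lambda p: sum(w[p].values()))         # "richest" first
        made_trade = False
        for p in needy:
            want_c = max(w[p], key=lambda c: w[p][c])        # what p wants most
            # partners with the biggest surplus of want_c first
            partners = sorted((q for q in w if q != p and w[q][want_c] < 0),
                              key=lambda q: w[q][want_c])
            for q in partners:
                # does p hold a surplus of something q wants?
                back = next((c for c in w[q] if w[q][c] > 0 and w[p][c] < 0), None)
                if back is not None:
                    holdings[q][want_c] -= 1
                    holdings[p][want_c] = holdings[p].get(want_c, 0) + 1
                    holdings[p][back] -= 1
                    holdings[q][back] = holdings[q].get(back, 0) + 1
                    trades.append((q, want_c, p, back))
                    made_trade = True
                    break
            if made_trade:
                break
        if not made_trade:
            break          # nobody can improve with a 1-for-1 trade any more
    return trades

print(greedy_trades({"A": {"c1": 5, "c3": 1},
                     "B": {"c2": 1, "c4": 1},
                     "C": {"c2": 6, "c3": 2}}))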
Assume you have a total of x1 resources of kind 1, ..., xn resources of kind n.
Assume you have k people, and that they have (or need to end up with) y1, y2, ..., yk resources respectively.
Now, pick a person i and assign them the resources that are most prevalent. Once the assignment is done, decrement the corresponding xj's (i.e. if resource j is assigned to i, decrement xj).
Keep repeating until all resources are assigned.
This is the way to assign the resources most evenly. It assumes you don't care about the sequence of trades, only about the end result itself.
To restate this, let's say you have set of lists like this:
{ 1, 1, 1 }
{ 2, 2, 2 }
{ 3, 3, 3 }
and you want to swap elements from different sets until you have the sets like this:
{ 1, 2, 3 }
{ 1, 2, 3 }
{ 1, 2, 3 }
Now, you might notice that if we regard these lists as a single matrix, then one matrix is the transpose of the other. You can perform this transposition by swapping across the 1-2-3 diagonal.
So item 2 in list 1 is swapped with item 1 in list 2, item 3 in list 1 is swapped with item 1 in list 3, and finally item 3 in list 2 is swapped with item 2 in list 3.
To sum up: do a matrix transposition by swapping across the diagonal; each swap is one 1-for-1 trade between two people.
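A tiny sketch of that in Python, assuming person i owns list i (reading each swap across the main diagonal as one trade is my framing of the answer):

lists = [[1, 1, 1],
         [2, 2, 2],
         [3, 3, 3]]
trades = []
for i in range(len(lists)):
    for j in range(i + 1, len(lists)):
        # person i gives lists[i][j], person j gives lists[j][i]
        lists[i][j], lists[j][i] = lists[j][i], lists[i][j]
        trades.append((i, j))
print(lists)    # [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
print(trades)   # [(0, 1), (0, 2), (1, 2)]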
This is my code:
data INDAT8;
   set INDAT6;
   array myarray{24,27};
   goodgroups=0;
   do i=2 to 24 by 2;
      do j=2 to 27;
         if myarray[i,j] gt 1 then myarray[i+1,j] = 'bad';
         else if myarray[i,j] eq 1 and myarray[i+1,j] = 1 then myarray[i+1,j] = 'good';
      end;
   end;
run;

proc print data=INDAT8;
run;
Problem:
I have the data in this format (it is just an example), for n=2:
X Y info
2 1 good
2 4 bad
3 2 good
4 1 bad
4 4 good
6 2 good
6 3 good
Now, the above data is sorted (7 rows in total). I need to form groups of 2, 3, or 4 rows and generate a graph. In the above data, I form groups of 2 rows. The third row is left alone, as there is no other row with the same X value to group it with: a group can only be formed from rows that share the same X value, not from rows with different X values.
Now, I check whether both rows of a group have "good" in the info column. If both rows have "good", the group formed is also good; otherwise it is bad. In the above example, the third/last group is a "good" group; the rest are all bad groups. Once I'm done with all the rows, I calculate the total number of good groups formed / total number of groups.
In the above example, the output will be: total no. of good groups / total no. of groups => 1/3.
This is the case for n=2 (the size of a group).
Now, for n=3 we make groups of 3 rows, and for n=4 groups of 4 rows, and find the good/bad groups in the same way. If all the rows in a group have "good" in the info column, the group is good; otherwise it is bad.
Example: n= 3
2 1 good
2 4 bad
2 6 good
3 2 good
4 1 good
4 4 good
4 6 good
6 2 good
6 3 good
In the above case, I left out the 4th row and the last 2 rows, as I can't make a group of 3 rows with them. The first group's result is "bad" and the last group's result is "good".
Output: 1/ 2
For n= 4:
2 1 good
2 4 good
2 6 good
2 7 good
3 2 good
4 1 good
4 4 good
4 6 good
6 2 good
6 3 good
6 4 good
6 5 good
In this case, I make groups of 4 and find the results. The 5th, 6th, 7th, and 8th rows are left behind or ignored. I made 2 groups of 4 rows and both are "good" blocks.
Output: 2/2
So, after getting the 3 output values from n=2, n=3, and n=4, I will plot a graph of these values.
If you can help in any language using arrays, if statements, and do loops, it would be great.
I can change my code accordingly.
Update:
The answer for this doesn't have to be in SAS. Since it is more algorithm-related than anything, I will accept suggestions in any language as long as they show how to accomplish this using arrays and do loops.
I am having trouble understanding your problem statement, but from what I can gather here is what I can suggest:
Place the data into bins and then process the summary data.
Implementation 1
Assumption: You don't know what the range of the first column will be, or the distribution will be sparse
Create a hash table. The key will be the item you are doing your grouping on. The value will be the count seen so far.
Process each record. If the key already exists, increment the count (the value for that key in the hash). Otherwise add the key and set the value to 1.
Continue until you have processed all records
Count the number of keys in the hash table and the number of values that are greater than your threshold.
Implementation 2
Assumption: You know the range of the first column and the distribution is reasonably dense
Create an array of integers with enough elements so the index can match the column value. Initialize all elements to zero. This array will hold your count for each item you are grouping on
Process each record. Examine value of first column. Increment corresponding index in array. (So if you have "2 1 good", do groupCount[2]++)
Continue until you have processed all records
Walk each element in the array. Count how many items are non zero (meaning they appeared at least once) and how many items meet your threshold.
You can use the same approach for gathering the good and bad counts.
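Since the update accepts any language, here is a sketch in Python of the ratio described in the question. Grouping rows by their X value and splitting each run into chunks of n is my reading of the examples; a chunk counts as good only if every row in it is good:

from itertools import groupby

rows = [(2, 1, "good"), (2, 4, "bad"), (3, 2, "good"),
        (4, 1, "bad"), (4, 4, "good"), (6, 2, "good"), (6, 3, "good")]

def good_ratio(rows, n):
    good = total = 0
    for _, run in groupby(rows, key=lambda r: r[0]):   # rows sharing the same X
        run = list(run)
        for i in range(0, len(run) - n + 1, n):        # complete chunks of n only
            chunk = run[i:i + n]
            total += 1
            good += all(r[2] == "good" for r in chunk)
    return good, total

for n in (2, 3, 4):
    g, t = good_ratio(rows, n)
    print(f"n={n}: {g}/{t}" if t else f"n={n}: no complete groups")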