Dominance Matrices when teams play twice - matrix

I'm familiar with finding two-step dominances when the teams involved have only played each other once: you create a matrix of results filled with 1's (for wins) and 0's (for losses/ties), then square it. To find the power of each team you add the squared matrix to the original matrix.
So, how does the process change when the teams involved have played each other more than once and there are 2's introduced into the matrix? I'm working this in MATLAB (Octave, actually), and when I enter the matrix - actually a 31x31 matrix showing the results from the 2001-2002 NFL season - and square it, I get results showing that teams had dominance over themselves, like this:
Original Matrix (abbreviated):
Buf Ind Mia NE NYJ
Buf 0 0 0 0 1
Ind 2 0 0 0 1
Mia 2 2 0 1 0
NE 2 2 1 0 1
NYJ 1 1 2 1 0
Squared Matrix (abbreviated):
Buf Ind Mia NE NYJ
Buf 1 1 2 1 0
Ind 2 1 2 2 2
Mia 8 3 1 1 5
NE 10 4 2 2 4
NYJ 9 8 1 3 3
So how do I address the issue of the results showing a team having dominance over itself and get to my final power numbers like I would in a "played only once" scenario?
Thanks in advance.

I've had this same problem with soccer games scored 2 points for a win, 1 for a draw and 0 for a loss, but I believe it is possible for a team to have dominance over itself, because they have beaten the team that beat them (or, for soccer, drew with them). Therefore, I would say that you can just continue on as is. (P.S. I am a Year 11 Maths C student, so there may be other explanations for this.)
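For what it's worth, here is a minimal numpy sketch of the computation (my own translation of the Octave workflow, assuming the usual convention that the power ratings are the row totals of A + A^2; whether to zero the diagonal first is exactly the judgment call discussed above):

import numpy as np

# Abbreviated 5x5 block from the question (Buf, Ind, Mia, NE, NYJ);
# the real matrix is 31x31, so this block alone will not reproduce the
# squared values shown above.
A = np.array([[0, 0, 0, 0, 1],
              [2, 0, 0, 0, 1],
              [2, 2, 0, 1, 0],
              [2, 2, 1, 0, 1],
              [1, 1, 2, 1, 0]])

D = A + A @ A                 # one-step plus two-step dominances
# np.fill_diagonal(D, 0)      # optional: discard "dominance over itself"
power = D.sum(axis=1)         # row totals = power rating per team
print(power)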

Related

Is there an algorithm that divides a square-cell grid map into 3 or 4 equal spaces?

So, for example, I have divided my map into something like this (original image omitted); the matrix representation would be:
0 1 0 1 0
1 1 1 1 0
0 1 1 1 1
0 1 0 0 0
One of the ways I could divide it even-ish (image omitted): the total is 11 squares, and since 11/3 gives a decimal, I need to have two spaces with 4 squares and one space with 3 squares. But I don't know an algorithm that will be able to divide a small map like that.
There is probably code that would be able to solve that particular map, but what if it is like:
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 1
Each value is a square in the map and 1 is the square that should be considered. 0 is an empty/null space that is not part of the map and should not be taken into consideration when dividing the map.
So far I have tried a for loop adding all the values and dividing by 3 to determine how many squares are needed for each space. Also, if I get a decimal, then one space can have one more square than the others. In this problem there are 36 squares, so if I try to divide it into 3 spaces, then each space would have 12 squares.
So I am looking to see if there is an algorithm that will be able to solve all types of maps.
This is actually NP-hard for k>=2, where you want k=3 or k=4:
Theorem 2.2 in "On the complexity of partitioning graphs into connected subgraphs" - M. E. Dyer, A. M. Frieze
You can get a decent answer by greedily removing nodes from your graph, and backtracking if you can't merge the remaining nodes.
It would help if you gave a more rigorous definition of 'even-ish' - for example, consider a map with 13 nodes - Would you rather have divisions of size (4,4,5), (3,3,3,4), (4,4,4,1), (5,5,3), or (4,4,3,2)?
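For what it's worth, here is a rough Python sketch (my own code, not from the paper) of the grow-and-backtrack idea: build one connected region at a time up to the requested sizes, and backtrack when you get stuck. For simplicity it tries every unassigned cell as the seed of a new region, so it is exhaustive but slow on large maps:

def neighbors(cell, cells):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (r + dr, c + dc) in cells:
            yield (r + dr, c + dc)

def partition(grid, sizes):
    # Label each 1-cell of grid with a region index so that every region is
    # connected and region k has sizes[k] cells; return None if impossible.
    cells = {(r, c) for r, row in enumerate(grid)
                    for c, v in enumerate(row) if v == 1}
    if sum(sizes) != len(cells):
        return None
    labels = {}

    def solve(k, region, unassigned):
        if k == len(sizes):
            return not unassigned
        if len(region) == sizes[k]:              # region k complete, start the next one
            for cell in region:
                labels[cell] = k
            if solve(k + 1, set(), unassigned):
                return True
            for cell in region:                  # backtrack
                del labels[cell]
            return False
        # candidates: cells adjacent to the region, or any cell for a fresh region
        cands = ({n for cell in region for n in neighbors(cell, unassigned)}
                 if region else set(unassigned))
        for cell in sorted(cands):
            region.add(cell)
            unassigned.discard(cell)
            if solve(k, region, unassigned):
                return True
            region.discard(cell)                 # backtrack
            unassigned.add(cell)
        return False

    return labels if solve(0, set(), set(cells)) else None

# The small example map from the question: 11 squares split as 4 + 4 + 3.
grid = [[0, 1, 0, 1, 0],
        [1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1],
        [0, 1, 0, 0, 0]]
print(partition(grid, [4, 4, 3]))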

Algorithm strategy to prevent values from bouncing between 2 values when on edge of two 'buckets'

I'm tracking various colored balls in OpenCV (Python) in real time. The tracking is very stable, i.e. when a ball is stationary the center of the circle does not change by more than 1-2 pixels.
However, I'm running into what must surely be a well-researched issue: I now need to place the positions of the balls into a rougher grid - essentially simply dividing (and rounding) the x,y positions.
e.g.
input range is 0 -> 9
target range is 0 -> 1 (two buckets)
so i do: floor(input / 5)
input: [0 1 2 3 4 5 6 7 8 9]
output: [0 0 0 0 0 1 1 1 1 1]
This is fine, but the problem occurs when a small change in the input value makes the output flip rapidly, i.e. when the value sits at the 'edge' of a division - a 'sensitive' area.
input: [4 5 4 5 4 5 5 4 ...]
output:[0 1 0 1 0 1 1 0 ...]
i.e. values 4 and 5 (which fall within the 1-pixel error/'noisy' margin) cause rapid changes in the output.
What are some strategies / algorithms that deal with this and could help me further?
I searched, but it seems I do not know how to express the issue correctly for Google (or Stack Overflow).
I tried adding 'deadzones', i.e. rather than purely dividing I leave 'gaps' in my output range, which means a value sometimes has no output (i.e. it falls between 'buckets') - see the sketch after the example below. This somewhat works, but means that a lot of the screen (i.e. the range of the fluctuation) is not used...
i.e.
input = [0 1 2 3 4 5 6 7 8 9]
output = [0 0 0 0 x x 1 1 1 1]
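For reference, here is a minimal Python sketch of that deadzone mapping (my own code; the names width, gap and x_max, and the idea of returning the previous bucket, are assumptions rather than part of the question):

def deadzone_bucket(x, width=5, gap=1, x_max=9, last=None):
    # Quantise x into buckets of `width`, but near an interior bucket
    # boundary return `last` (the previous bucket, or None for "no output").
    b = x // width
    upper = (b + 1) * width          # boundary just above x's bucket
    lower = b * width                # boundary just below x's bucket
    if (upper <= x_max and upper - x <= gap) or (lower > 0 and x - lower < gap):
        return last                  # in the gap between buckets
    return b

print([deadzone_bucket(x) for x in range(10)])
# -> [0, 0, 0, 0, None, None, 1, 1, 1, 1]

Passing the previously reported bucket in as last, instead of leaving the gap empty, is one way to keep the whole screen usable: the reported bucket only changes once the value has clearly crossed into the next one.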
Temporal averaging is not ideal (and doesn't work too well either) - and increases the latency.
I just have a 'hunch' that there is a whole body of computer / signal-processing science about this.

How to solve the 5*5 Happy Cube in an efficient, easy way

There is a 5*5 cube puzzle named the Happy Cube problem, where, for the given mats, you need to make a cube.
http://www.mathematische-basteleien.de/cube_its.htm#top
It's like this: 6 blue mats are given (images omitted), and from the following mats you need to derive a cube. This way it has 3 more solutions, like the first cube.
For such a problem, the easiest approach I could imagine was recursion-based: for each cube I have 6 positions, and for each position I will try to check all the other mats to see which fit, then go again recursively to solve the rest in the same way - like finding all permutations of each of the mats and then finding which fits best. So, a dynamic-programming approach.
But I am making loads of mistakes in the recursion, so is there any better, easier approach which I can use to solve this?
I made a matrix out of each mat or diagram provided, then I rotated each one by 90 degrees clockwise 4 times and anticlockwise as well. I flipped the array and did the same. Now, for each of the above orientations, I will have to repeat the step for the other mats, so again recursion.
0 0 1 0 1
1 1 1 1 1
0 1 1 1 0
1 1 1 1 1
0 1 0 1 1
-------------
0 1 0 1 0
1 1 1 1 0
0 1 1 1 1
1 1 1 1 0
1 1 0 1 1
-------------
1 1 0 1 1
0 1 1 1 1
1 1 1 1 0
0 1 1 1 1
0 1 0 1 0
-------------
1 0 1 0 0
1 1 1 1 1
0 1 1 1 0
1 1 1 1 1
1 1 0 1 0
-------------
1st block - the diagram as given
2nd - rotated clockwise
3rd - rotated anticlockwise
4th - flipped
Still struggling to sort out the logic.
I can't believe this, but I actually wrote a set of scripts back in 2009 to brute-force solutions to this exact problem, for the simple cube case. I just put the code on Github: https://github.com/niklasb/3d-puzzle
Unfortunately the documentation is in German because that's the only language my team understood, but source code comments are in English. In particular, check out the file puzzle_lib.rb.
The approach is indeed just a straightforward backtracking algorithm, which I think is the way to go. I can't really say it's easy, though; as far as I remember, the 3-d aspect is a bit challenging. I implemented one optimization: find all symmetries beforehand and only try each unique orientation of a piece. The idea is that the more characteristic the pieces are, the fewer options there are for placing them, so we can prune early. In the case of many symmetries, there might be lots of possibilities, and we want to inspect only the ones that are unique up to symmetry.
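As an illustration of that optimization (a small numpy sketch of my own, not the code from the repository): generate the up-to-8 orientations of a 5x5 piece - 4 rotations, each optionally mirrored - and keep only the distinct ones.

import numpy as np

def unique_orientations(piece):
    piece = np.asarray(piece)
    seen, result = set(), []
    for flipped in (piece, np.fliplr(piece)):     # original and mirrored
        for k in range(4):                        # four 90-degree rotations
            o = np.rot90(flipped, k)
            key = o.tobytes()
            if key not in seen:                   # keep one copy per distinct pattern
                seen.add(key)
                result.append(o)
    return result

A highly symmetric piece (say, a plain plus shape) yields far fewer than 8 entries, so the backtracking loop has fewer orientations to try for it.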
Basically the algorithm works as follows: First, assign a fixed order to the sides of the cube, let's number them 0 to 5 for example. Then execute the following algorithm:
def check_slots():
    for each edge e:
        if slots adjacent to e are filled:
            if the 1-0 patterns of the piece edges (excluding the corners)
               have XOR != 0:
                return false
            if the corners are not "consistent":
                return false
    return true

def backtrack(slot_idx, pieces_left):
    if slot_idx == 6:
        # finished, we found a solution, output it or whatever
        return
    for each piece in pieces_left:
        for each orientation o of piece:
            fill slot slot_idx with piece in orientation o
            if check_slots():
                backtrack(slot_idx + 1, pieces_left \ {piece})
            empty slot slot_idx
The corner consistency is a bit tricky: Either the corner must be filled by exactly one of the adjacent pieces or it must be accessible from a yet unfilled slot, i.e. not cut off by the already assigned pieces.
Of course you can drop some or all of the consistency checks and only check at the end, seeing as there are only 8^6 * 6! possible configurations overall. If you have more than 6 pieces, it becomes more important to prune early.

How to Shuffle an Array with Fixed Row/Column Sum?

I need to assign random papers to students of a class, but I have the constraints that:
Each student should have two papers assigned.
Each paper should be assigned to (approximately) the same number of students.
Is there an elegant way to generate a matrix that has this property? i.e. it is shuffled but the row and column sums are constant? As an illustration:
Student A 1 0 0 1 1 0 | 3
Student B 1 0 1 0 0 1 | 3
Student C 0 1 1 0 1 0 | 3
Student D 0 1 0 1 0 1 | 3
----------------
2 2 2 2 2 2
I thought of first building an "initial matrix" with the right row/column sums, then randomly permuting first the rows, then the columns, but how do I generate this initial matrix? The problem here is that I'd be choosing between (e.g.) the following alternatives, and the fact that there are two students with the same papers assigned (in the left setup) won't change through row/column shuffling:
INITIAL (MA): OR (MB):
A 1 1 1 0 0 0 || 1 1 1 0 0 0
B 1 1 1 0 0 0 || 0 1 1 1 0 0
C 0 0 0 1 1 1 || 0 0 0 1 1 1
D 0 0 0 1 1 1 || 1 0 0 0 1 1
I know I could come up with something quick/dirty and just tweak where necessary but it seemed like a fun exercise.
If you want to make permutations, what about:
Choose a student randomly, say student 1.
For this student, choose a random paper they have, say paper A.
Choose another student randomly, say student 2.
For this student, choose a random paper they have, say paper B (different from A).
Give paper B to student 1 and paper A to student 2.
That way, you preserve both the number of different papers and the number of papers per student. Indeed, both students give one paper and receive one back. Moreover, no paper is created nor deleted.
In terms of the table, it means finding two pairs of indices (i1,i2) and (j1,j2) such that A(i1,j1) = 1, A(i2,j2) = 1, A(i1,j2) = 0 and A(i2,j1) = 0, and changing the 0s to 1s and the 1s to 0s => the sums of the rows and columns do not change.
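A small Python sketch of that swap move (the helper name and the retry cap are mine): pick (i1, j1) and (i2, j2) with A[i1][j1] = A[i2][j2] = 1 and A[i1][j2] = A[i2][j1] = 0, then flip those four entries.

import random

def random_swap(A, tries=100):
    n, m = len(A), len(A[0])
    for _ in range(tries):
        i1, i2 = random.sample(range(n), 2)
        j1, j2 = random.sample(range(m), 2)
        if A[i1][j1] == 1 and A[i2][j2] == 1 and A[i1][j2] == 0 and A[i2][j1] == 0:
            A[i1][j1] = A[i2][j2] = 0            # the two students trade papers
            A[i1][j2] = A[i2][j1] = 1
            return True
    return False                                 # no valid swap found in `tries` attempts

Repeating random_swap many times on a valid starting matrix shuffles the assignment while keeping every row and column sum fixed.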
Remark 1: If you do not want to proceed by permutations, you can simply put all the papers in a vector (paper A twice, paper B twice, ...). Then randomly shuffle the vector and assign the first k to the first student, the next k to student 2, and so on. However, you can end up with a student having the same paper several times. In that case, make some permutations, starting with the duplicated papers.
You can generate the initial matrix as follows (pseudo-Python syntax):
column_sum = [0] * n_students
for i in range(n_students):
    if column_sum[i] < max_allowed:
        for j in range(i + 1, n_students):
            if column_sum[j] < max_allowed:
                generate_row_with_ones_at(i, j)
                column_sum[i] += 1
                column_sum[j] += 1
                if n_rows == n_wanted:
                    return
This is a straightforward iteration over all n choose 2 distinct rows, but with the constraint on column sums enforced as early as possible.
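A runnable version of that sketch (n_wanted and max_allowed come from the pseudo-code above; the rest is my own filling-in):

def initial_matrix(n_cols, n_wanted, max_allowed):
    rows, column_sum = [], [0] * n_cols
    for i in range(n_cols):
        for j in range(i + 1, n_cols):
            if column_sum[i] < max_allowed and column_sum[j] < max_allowed:
                row = [0] * n_cols               # a row with ones at positions i and j
                row[i] = row[j] = 1
                rows.append(row)
                column_sum[i] += 1
                column_sum[j] += 1
                if len(rows) == n_wanted:
                    return rows
    return rows

# e.g. 6 papers, 4 rows with two 1s each, at most 2 assignments per paper
for r in initial_matrix(6, 4, 2):
    print(r)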

Sorting a binary 2D matrix?

I'm looking for some pointers here as I don't quite know where to start researching this one.
I have a 2D matrix with 0 or 1 in each cell, such as:
1 2 3 4
A 0 1 1 0
B 1 1 1 0
C 0 1 0 0
D 1 1 0 0
And I'd like to sort it so it is as "upper triangular" as possible, like so:
4 3 1 2
B 0 1 1 1
A 0 1 0 1
D 0 0 1 1
C 0 0 0 1
The rows and columns must remain intact, i.e. elements can't be moved individually and can only be swapped "whole".
I understand that there'll probably be pathological cases where a matrix has multiple possible sorted results (i.e. same shape, but differ in the identity of the "original" rows/columns.)
So, can anyone suggest where I might find some starting points for this? An existing library/algorithm would be great, but I'll settle for knowing the name of the problem I'm trying to solve!
I doubt it's a linear algebra problem as such, and maybe there's some kind of image processing technique that's applicable.
Any other ideas aside, my initial guess is just to write a simple insertion sort on the rows, then the columns and iterate that until it stabilises (and hope that detecting the pathological cases isn't too hard.)
More details: Some more information on what I'm trying to do may help clarify. Each row represents a competitor, each column represents a challenge. Each 1 or 0 represents "success" for the competitor on a particular challenge.
By sorting the matrix so all 1s are in the top-right, I hope to then provide a ranking of the intrinsic difficulty of each challenge and a ranking of the competitors (which will take into account the difficulty of the challenges they succeeded at, not just the number of successes.)
Note on accepted answer: I've accepted Simulated Annealing as "the answer" with the caveat that this question doesn't have a right answer. It seems like a good approach, though I haven't actually managed to come up with a scoring function that works for my problem.
An algorithm based upon simulated annealing can handle this sort of thing without too much trouble. It's not great if you have small matrices, which most likely have a fixed solution, but it's great if your matrices get to be larger and the problem becomes more difficult.
(However, it also fails your desire that insertions can be done incrementally.)
Preliminaries
Devise a performance function that "scores" a matrix - matrices that are closer to your triangleness should get a better score than those that are less triangle-y.
Devise a set of operations that are allowed on the matrix. Your description was a little ambiguous, but if you can swap rows then one op would be SwapRows(a, b). Another could be SwapCols(a, b).
The Annealing loop
I won't give a full exposition here, but the idea is simple. You perform random transformations on the matrix using your operations. You measure how much "better" the matrix is after the operation (using the performance function before and after the operation). Then you decide whether to commit that transformation. You repeat this process a lot.
Deciding whether to commit the transform is the fun part: you need to decide whether to perform that operation or not. Toward the end of the annealing process, you only accept transformations that improved the score of the matrix. But earlier on, in a more chaotic time, you allow transformations that don't improve the score. In the beginning, the algorithm is "hot" and anything goes. Eventually, the algorithm cools and only good transforms are allowed. If you linearly cool the algorithm, then the choice of whether to accept a transformation is:
public bool ShouldAccept(double cost, double temperature, Random random) {
    return Math.Exp(-cost / temperature) > random.NextDouble();
}
You should read the excellent information contained in Numerical Recipes for more information on this algorithm.
Long story short, you should learn some of these general purpose algorithms. Doing so will allow you to solve large classes of problems that are hard to solve analytically.
Scoring algorithm
This is probably the trickiest part. You will want to devise a scorer that guides the annealing process toward your goal. The scorer should be a continuous function that results in larger numbers as the matrix approaches the ideal solution.
How do you measure the "ideal solution" - triangleness? Here is a naive and easy scorer: For every point, you know whether it should be 1 or 0. Add +1 to the score if the matrix is right, -1 if it's wrong. Here's some code so I can be explicit (not tested! please review!)
int Score(Matrix m) {
    var score = 0;
    for (var r = 0; r < m.NumRows; r++) {
        for (var c = 0; c < m.NumCols; c++) {
            var val = m.At(r, c);
            var shouldBe = (c >= r) ? 1 : 0;
            if (val == shouldBe) {
                score++;
            }
            else {
                score--;
            }
        }
    }
    return score;
}
With this scoring algorithm, a random field of 1s and 0s will give a score of 0. An "opposite" triangle will give the most negative score, and the correct solution will give the most positive score. Diffing two scores will give you the cost.
If this scorer doesn't work for you, then you will need to "tune" it until it produces the matrices you want.
This algorithm is based on the premise that tuning this scorer is much simpler than devising the optimal algorithm for sorting the matrix.
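To make the loop concrete, here is a compact Python sketch of the whole recipe (my own code; the scorer, the two swap operations and the linear cooling schedule are just the suggestions from this answer, and improving moves are accepted directly to avoid overflowing the exponential):

import math
import random

def score(m):
    # +1 for each entry matching the upper-triangular target, -1 otherwise
    return sum(1 if m[r][c] == (1 if c >= r else 0) else -1
               for r in range(len(m)) for c in range(len(m[0])))

def swap_rows(m, a, b):
    m[a], m[b] = m[b], m[a]

def swap_cols(m, a, b):
    for row in m:
        row[a], row[b] = row[b], row[a]

def anneal(m, steps=20000, start_temp=2.0):
    current = score(m)
    for step in range(steps):
        temperature = start_temp * (1.0 - step / steps) + 1e-9   # linear cooling
        op = swap_rows if random.random() < 0.5 else swap_cols
        size = len(m) if op is swap_rows else len(m[0])
        a, b = random.sample(range(size), 2)
        op(m, a, b)                                # try a random transformation
        new = score(m)
        cost = current - new                       # positive cost = the matrix got worse
        if cost <= 0 or math.exp(-cost / temperature) > random.random():
            current = new                          # accept the transformation
        else:
            op(m, a, b)                            # reject: undo the swap
    return m

Note that this sketch only returns the rearranged matrix; in practice you would also carry the row and column labels through the swaps so you can read off the rankings at the end.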
I came up with the below algorithm, and it seems to work correctly.
Phase 1: move rows with most 1s up and columns with most 1s right.
First the rows. Sort the rows by counting their 1s. We don't care if 2 rows have the same number of 1s.
Now the columns. Sort the cols by counting their 1s. We don't care if 2 cols have the same number of 1s.
Phase 2: repeat phase 1 but with extra criteria, so that we move toward the triangular matrix form.
Criterion for rows: if 2 rows have the same number of 1s, we move up the row that begins with fewer 0s.
Criterion for cols: if 2 cols have the same number of 1s, we move right the col that has fewer 0s at the bottom.
Example:
Phase 1
  1 2 3 4                  1 2 3 4                  4 1 3 2
A 0 1 1 0                B 1 1 1 0                B 0 1 1 1
B 1 1 1 0  - sort rows-> A 0 1 1 0  - sort cols-> A 0 0 1 1
C 0 1 0 0                D 1 1 0 0                D 0 1 0 1
D 1 1 0 0                C 0 1 0 0                C 0 0 0 1
Phase 2
  4 1 3 2                  4 1 3 2
B 0 1 1 1                B 0 1 1 1
A 0 0 1 1  - sort rows-> D 0 1 0 1  - sort cols-> "completed"
D 0 1 0 1                A 0 0 1 1
C 0 0 0 1                C 0 0 0 1
Edit: it turns out that my algorithm doesn't always give proper triangular matrices.
For example:
Phase 1
  1 2 3 4                  1 2 3 4
A 1 0 0 0                B 0 1 1 1
B 0 1 1 1  - sort rows-> C 0 0 1 1  - sort cols-> "completed"
C 0 0 1 1                A 1 0 0 0
D 0 0 0 1                D 0 0 0 1
Phase 2
  1 2 3 4                  1 2 3 4                  2 1 3 4
B 0 1 1 1                B 0 1 1 1                B 1 0 1 1
C 0 0 1 1  - sort rows-> C 0 0 1 1  - sort cols-> C 0 0 1 1
A 1 0 0 0                A 1 0 0 0                A 0 1 0 0
D 0 0 0 1                D 0 0 0 1                D 0 0 0 1
                         (no change)
(*) Perhaps a phase 3 would improve the results. In that phase we place the rows that start with fewer 0s at the top.
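Here is a compact Python sketch of phases 1 and 2 as I read them, applied in a single pass rather than two (rows: most 1s first, ties broken by fewer leading 0s; columns: most 1s to the right, ties broken by fewer 0s at the bottom). On the first example above it reproduces the phase 2 result:

def phase_sort(m):
    # rows: most 1s first; ties: fewer leading zeros first
    def row_key(row):
        leading = next((i for i, v in enumerate(row) if v), len(row))
        return (-sum(row), leading)
    m = sorted(m, key=row_key)
    # columns: most 1s to the right; ties: fewer zeros at the bottom go right
    def col_key(c):
        col = [row[c] for row in m]
        trailing = next((i for i, v in enumerate(reversed(col)) if v), len(col))
        return (sum(col), -trailing)
    order = sorted(range(len(m[0])), key=col_key)
    return [[row[c] for c in order] for row in m]

m = [[0, 1, 1, 0],
     [1, 1, 1, 0],
     [0, 1, 0, 0],
     [1, 1, 0, 0]]
for row in phase_sort(m):
    print(row)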
Look for a 1987 paper by Anna Lubiw on "Doubly Lexical Orderings of Matrices".
There is a citation below. The ordering is not identical to what you are looking for, but is pretty close. If nothing else, you should be able to get a pretty good idea from there.
http://dl.acm.org/citation.cfm?id=33385
Here's a starting point:
Convert each row from binary bits into a number
Sort the numbers in descending order.
Then convert each row back to binary.
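A minimal Python sketch of that row step (my own wording of it):

def sort_rows_as_numbers(m):
    # read each row as a binary number, leftmost bit most significant,
    # and order the rows by that number, largest first
    return sorted(m, key=lambda row: int("".join(map(str, row)), 2), reverse=True)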
Basic algorithm:
Determine the row sums and store values. Determine the column sums and store values.
Sort the row sums in ascending order. Sort the column sums in ascending order.
Hopefully, you should have a matrix with as close to an upper-right triangular region as possible.
Treat rows as binary numbers, with the leftmost column as the most significant bit, and sort them in descending order, top to bottom
Treat the columns as binary numbers with the bottommost row as the most significant bit and sort them in ascending order, left to right.
Repeat until you reach a fixed point. Proof that the algorithm terminates is left as an exercise for the reader.
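A small Python sketch of that iteration (my own code; a safety cap stands in for the missing termination proof):

def triangulate(m, max_rounds=100):
    def row_num(row):                              # leftmost column = most significant bit
        return int("".join(map(str, row)), 2)
    for _ in range(max_rounds):                    # cap in case a fixed point is never reached
        prev = [row[:] for row in m]
        # rows as binary numbers, sorted in descending order, top to bottom
        m = sorted(m, key=row_num, reverse=True)
        # columns as binary numbers (bottom row = most significant bit), ascending, left to right
        order = sorted(range(len(m[0])),
                       key=lambda c: int("".join(str(m[r][c])
                                                 for r in reversed(range(len(m)))), 2))
        m = [[row[c] for c in order] for row in m]
        if m == prev:
            break
    return m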

Resources