I am working on something that I believe is an NP-hard problem, so I am not looking for the optimal solution but for a good heuristic. An integer matrix (matrix A in the example below) is given as input, and I have to produce an integer output matrix (matrix B in the example below) whose number of rows is smaller than the input matrix's and which obeys the following two conditions:
1) Each column of the output matrix must contain the integers in the same order as they appear in the input matrix. (In the example below, the first columns of matrix A and matrix B contain the same integers, 1 and 3, in the same order.)
2) The same integer must not appear more than once in a row. (In the example below, the first row of matrix B contains the integers 1, 3 and 2, which are all different from each other.)
Note that the input matrix always obeys the second condition.
What would a greedy algorithm for this problem look like?
Example:
In this example the output matrix 'Matrix B' contains all the integers exactly as they appear in the input matrix 'Matrix A', but the output matrix has 5 rows while the input matrix has 6 rows. So the output 'Matrix B' is a valid solution for the input 'Matrix A'.
I would produce the output one row at a time. When working out what to put in the row, I would consider the next number from each input column, starting with the input column that has the most numbers yet to be placed and considering the columns in decreasing order of numbers yet to be placed. Whenever a column can put its next number into the current output row when its turn comes up, it should do so.
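A minimal sketch of that greedy pass, assuming A is given column by column as lists of integers (the function and variable names here are mine, purely for illustration):

# Greedy sketch: build B one row at a time, always serving the column
# with the most unplaced numbers first. Empty cells of B are None.
def greedy_rows(columns):
    remaining = [list(c) for c in columns]   # numbers yet to be placed, per column
    placed = [[] for _ in columns]           # the output columns of B
    while any(remaining):
        used = set()                         # integers already used in this output row
        # visit columns in decreasing order of how many numbers they still hold
        for i in sorted(range(len(columns)), key=lambda i: -len(remaining[i])):
            if remaining[i] and remaining[i][0] not in used:
                value = remaining[i].pop(0)  # next number of column i, in original order
                used.add(value)
                placed[i].append(value)
            else:
                placed[i].append(None)       # this column skips the current row
    return placed                            # len(placed[0]) is the number of rows of B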
You could extend this to a branch-and-bound solution to find the exact best answer. Recursively try all possible rows at each stage, except when you can see that the current partial solution cannot possibly improve on the best answer so far. You know that if you have a column with k entries yet to be placed, then in the best possible case you will still need at least k more rows.
In practice I would expect this to be too expensive to be practical, so you will need to ignore some possible rows that you cannot rule out, and therefore you cannot guarantee finding the best answer. You could try a heuristic search such as limited discrepancy search.
Another non-exact speedup is to multiply the estimate of the number of rows that the best possible answer derivable from a partial solution will require by some factor F > 1. This lets you rule out some partial solutions earlier than plain branch and bound. The answer you find can be no more than F times more expensive than the best possible answer, because you only discard possibilities that cannot improve on the current answer by more than a factor of F.
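As a sketch of that pruning test (the names rows_so_far, max_remaining, best_rows and F are mine, not from the answer above):

# Prune a partial solution when even an optimistic completion, inflated by F,
# cannot beat the best complete solution found so far. F = 1 gives exact
# branch and bound; F > 1 prunes more but only guarantees a solution within
# a factor F of optimal.
def should_prune(rows_so_far, max_remaining, best_rows, F=1.0):
    # max_remaining = largest number of unplaced entries in any single column,
    # which is a lower bound on the extra rows still needed
    return (rows_so_far + max_remaining) * F >= best_rows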
A greedy solution to this problem would involve placing the numbers column by column, top down, as they appear.
Pseudocode:
for each column c in A:
    r = 0        // row index of the next element to read in A
    nextRow = 0  // row index of the next position to fill in column c of B
    while r < A.NumRows():
        while r < A.NumRows() && A[r, c] is null:
            r++  // skip null entries in A
        if r < A.NumRows():  // we found a non-null entry in A
            while nextRow < A.NumRows() && !CheckConstraints(A[r, c], B, nextRow, c):
                nextRow++  // try the next output row in B
            if nextRow >= A.NumRows():
                return unsolvable  // couldn't find a valid position in B
            B[nextRow, c] = A[r, c]  // successfully found a position in B
            nextRow++  // the next value of this column must go strictly below
            r++        // move on to the next entry of column c in A
If there are no conflicts you end up "packing" B as tightly as possible. Otherwise you greedily search for the next non-conflicting row position in B. If none can be found, the problem is unsolvable.
The helper function CheckConstraints looks backwards through the columns of B that have already been filled and checks whether the candidate value already appears in row nextRow, to ensure the same value is not placed twice in one row.
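A small sketch of that helper, assuming B is a 2-D array whose still-unfilled cells are null/None (the signature here is illustrative, not the answer's):

# True if 'value' may be placed in row 'row' of B, i.e. it does not already
# appear in any previously filled column of that row.
def check_constraints(value, B, row, col):
    for earlier_col in range(col):   # only columns processed so far
        if B[row][earlier_col] == value:
            return False
    return True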
If the problem statement is relaxed so that the output row count of B only has to be less than or equal to the row count of A, then, whenever we are unable to pack B any tighter, we can simply return A itself as a solution.
I have edited my question to make it as short and precise as possible.
I am developing a prototype of a facial recognition system for my graduation project. I am using Eigenfaces, and my main source is the paper by Turk and Pentland. It is available here: http://www.face-rec.org/algorithms/PCA/jcn.pdf.
My doubts concern steps 4 and 5.
I cannot correctly interpret the number of thresholds: are there two kinds of threshold, or only one? (Notice that the text speaks of two kinds but uses the same symbol for both.) And again, my question is whether this threshold (or these thresholds) is unique and global for all people, or whether each person has their own value.
I understand the steps up to the point where a matrix O of class weight vectors is computed. This matrix O has dimension M' x P, with M' equal to the number of eigenfaces chosen and P the number of people.
What follows is what confuses me. The paper speaks of two distances: the distance from one class to another, and also the distance from one face to another. I will call them D1 and D2 respectively. NOTE: in the training set there are M images in total, with F = M / P images per person.
I understand that the threshold(s) should be chosen empirically, but there must be a way to approximate them. I was initially designing a matrix of distances D1 of dimension P x P, where the row vector D1(i) holds the distances from the average class vector O(i) to each O(j), j = 1..P; that is, an "all vs. all" comparison.
Up to here I am fine; what follows depends on whether I should actually choose a single global threshold for everyone, or whether one should be chosen for each individual. I also do not know whether there are two kinds: one for distances between classes, and one for distances between faces.
I have a theory about how I could proceed, although it is not really supported by Turk's concepts:
Pre-test stage:
Generate two distance matrices, D1 and D2:
D1 stores the distances between classes, and D2 the distances between faces, computed on the basis of the matrices W and A respectively.
Then, since there are indeed P people in the training set, take the F column vectors of D1 belonging to each person and estimate a threshold T1 within their range [Min, Max]. In this way I will have one T1(i) per person, i = 1..P.
Separately, obtain a T2 based on the range [Min, Max] over the whole matrix D2. This one defines whether something is a face or not.
Test stage:
Build a test set of images, with one image for each known person:
Itest = {Itest(1) ... Itest(P)}
For every test image Itest(i):
Calculate the mean-subtracted face Atest = Itest - Imean
Calculate the weight vector Otest = U^T * Atest
Calculating distances:
dist1(j) = distance(Otest, O(j)), j = 1..P
Af = project(Otest, U)
dist2 = distance(Atest, Af)
Evaluate recognition:
MinDist = Min(dist1)
For each j = 1..P
If dist2 > T2 then "not a face", else:
If MinDist <= T1(j) then "subject identified as j", else "subject unidentified"
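Purely to illustrate the decision step sketched above (this is my reading of the proposed scheme, with Euclidean distance and placeholder names, not code from Turk and Pentland):

import numpy as np

def classify(Otest, Atest, Af, O, T1, T2):
    # Otest: weight vector of the test image; Atest: mean-subtracted image;
    # Af: its projection back onto face space; O: list of the P class weight
    # vectors; T1: per-person thresholds; T2: global face/non-face threshold.
    dist2 = np.linalg.norm(Atest - Af)            # distance from face space
    if dist2 > T2:
        return "not a face"
    dist1 = [np.linalg.norm(Otest - O[j]) for j in range(len(O))]
    j = int(np.argmin(dist1))                     # closest class
    if dist1[j] <= T1[j]:
        return "subject identified as %d" % j
    return "subject unidentified"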
Then I take the false acceptance and false rejection rates (TFA and TFR) into account and repeat the test process with different threshold values until I find the best fit for each person.
Once the thresholds are defined, I can put the system into operation on unknown images. The algorithm is similar to the test stage.
I know I am straying from the "script" of the official documentation, but at least this reasoning is the most logical one I can come up with. I was wondering if you could give me some guidance.
EDIT:
I have nothing more to say that has not already been said and that might help clarify things.
Could anyone tell me whether my "theory" is a reasonable way to tackle this? I am moving forward with my project, and if this is not the right way I would appreciate some guidance rather than continuing down a wrong path.
I am trying to implement the Parallel Algorithm for Longest Common Subsequence Problem described in http://www.iaeng.org/publication/WCE2010/WCE2010_pp499-504.pdf
But I am having a problem with the variable C in Equation 6 on page 4.
The paper refers to C at the end of page 3 as:
Let C[1 : l] be the finite alphabet.
I am not sure what is meant by this. I guess that for the two strings ABCDEF and ABQXYEF it would be ABCDEFQXY. But what if my two strings are lists of objects (where my match test is, for example, obj1.Name = obj2.Name)? What would my C be here? Just a union of the two arrays?
Having read and studied the paper, I can say that C is supposed to be an array holding the alphabet of your strings, where the alphabet size (and, thus, the size of C) is l.
By the looks of your question, however, I feel the need to go deeper on this, because it looks like you didn't get the whole picture yet. What is P[i,j], and why do you need it? The answer is that you don't really need it, but it's an elegant optimization. On page 3, a little bit before Theorem 1, it is said that:
[...] This process ends when j-k = 0 at the k-th step, or a(i) = b(j-k) at the k-th step. Assume that the process stops at the k-th step, and k must be the minimum number that makes a(i) = b(j-k) or j-k = 0. [...]
The recurrence relation in (3) is equivalent to (2), but the fundamental difference is that (2) expands recursively, whereas with (3) you never have recursive calls, provided that you know k. In other words, the magic behind (3) not expanding recursively is that you somehow know the spot where the recursion on (2) would stop, so you look at that cell immediately, rather than recursively approaching it.
Ok then, but how do you find out the value of k? Since k is the spot where (2) reaches a base case, it can be seen that k is the number of columns that you have to "go back" on B until you are either off the limits (i.e., the first column, which is filled with 0's) OR you find a match between a character in B and a character in A (which corresponds to the base case conditions in (2)). Remember that you will be matching the character a(i-1), where i is the current row.
So, what you really want is to find the last position in B before j where the character a(i-1) appears. If no such character ever appears in B before j, then that would be equivalent to reaching the case i = 0 or j-1 = 0 in (2); otherwise, it's the same as reaching a(i) = b(j-1) in (2).
Let's look at an example:
Consider that the algorithm is working on computing the values for i = 2 and j = 3 (the row and column are highlighted in gray). Imagine that the algorithm is working on the cell highlighted in black and is applying (2) to determine the value of S[2,2] (the position to the left of the black one). By applying (2), it would then start by looking at a(2) and b(2). a(2) is C, b(2) is G, so there's no match (this is the same procedure as the original, well-known algorithm). The algorithm now wants to find the value of S[2,2], because it is needed to compute S[2,3] (where we are). S[2,2] is not known yet, but the paper shows that it is possible to determine that value without referring to the row with i = 2. In (2), the 3rd case is chosen: S[2,2] = max(S[1, 2], S[2, 1]). Notice, if you will, that all this formula is doing is looking at the positions that would have been used to calculate S[2,2]. So, to rephrase that: we're computing S[2,3], we need S[2,2] for that, we don't know it yet, so we're going back on the table to see what's the value of S[2,2] in pretty much the same way we did in the original, non-parallel algorithm.
When will this stop? In this example, it will stop when we find the letter C (this is our a(i)) in TGTTCGACA before the second T (the letter on the current column) OR when we reach column 0. Because there is no C before T, we reach column 0. Another example:
Here, (2) would stop with j-1 = 5, because that is the last position in TGTTCGACA where C shows up. Thus, the recursion reaches the base case a(i) = b(j-1) when j-1 = 5.
With this in mind, we can see a shortcut here: if you could somehow know the amount k such that j-1-k is a base case in (2), then you wouldn't have to go through the score table to find the base case.
That's the whole idea behind P[i,j]. P is a table where you lay down the whole alphabet vertically (on the left side); the string B is, once again, placed horizontally in the upper side. This table is computed as part of a preprocessing step, and it will tell you exactly what you will need to know ahead of time: for each position j in B, it says, for each character C[i] in C (the alphabet), what is the last position in B before j where C[i] is found (note that i is used to index C, the alphabet, and not the string A. Maybe the authors should have used another index variable to avoid confusion).
So, you can think of the semantics for an entry P[i,j] as something along the lines of: the last position in B where I saw C[i] before position j. For example, if your alphabet is sigma = {A, E, I, O, U}, and B = "AOOIUEI", then P is:
Take the time to understand this table. Note the row for O. Remember: this row lists, for every position in B, where the last known "O" is. Only when j = 3 will we have a value that is not zero (it's 2), because that's the position after the first O in AOOIUEI. This entry says that the last position in B where O was seen before is position 2 (and, indeed, B[2] is an O, the one that follows A). Notice, in that same row, that for j = 4 we have the value 3, because now the last position for O is the one that corresponds to the second O in B (and since no more O's exist, the rest of the row will be 3).
Recall that building P is a preprocessing step necessary if you want to easily find the value of k that makes the recursion from equation (2) stop. It should make sense by now that P[i,j] is the k you're looking for in (3). With P, you can determine that value in O(1) time.
Thus, the C[i] in (6) is a letter of the alphabet - the letter that we are currently considering. In the example above, C = [A,E,I,O,U], and C[1] = A, C[2] = E, etc. In equation (7), c is the position in C where a(i) (the current letter of string A being considered) lives. It makes sense: after all, when building the score table position S[i,j], we want to use P to find the value of k - we want to know where was the last time we saw an a(i) in B before j. We do that by reading P[index_of(a(i)), j].
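As a sketch of that preprocessing step (1-based indexing to mirror the paper; the function name is mine):

# Build P for string B over alphabet C. P[i][j] = last position j' < j in B
# where the letter C[i] occurs, or 0 if there is none.
def build_p(C, B):
    l, n = len(C), len(B)
    P = [[0] * (n + 1) for _ in range(l + 1)]   # row 0 and column 0 unused
    for i in range(1, l + 1):
        last = 0
        for j in range(1, n + 1):
            P[i][j] = last                      # last occurrence of C[i] strictly before j
            if B[j - 1] == C[i - 1]:            # Python strings are 0-indexed
                last = j
    return P

With C = "AEIOU" and B = "AOOIUEI" this reproduces the table discussed above; for example, the row for O reads 0, 0, 2, 3, 3, 3, 3.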
Ok, now that you understand the use of P, let's see what's happening with your implementation.
About your specific case
In the paper, P is shown as a table that lists the whole alphabet. It is a good idea to iterate through the alphabet because the typical uses of this algorithm are in bioinformatics, where the alphabet is much, much smaller than the string A, making the iteration through the alphabet cheaper.
Because your strings are sequences of objects, your C would be the set of all possible objects, so you'd have to build a table P with the set of all possible object instances (nonsense, of course). This is definitely a case where the alphabet size is huge when compared to your string size. However, note that you will only be indexing P in those rows that correspond to letters from A: any row in P for a letter C[i] that is not in A is useless and will never be used. This makes your life easier, because it means you can build P with the string A instead of using the alphabet of every possible object.
Again, an example: if your alphabet is AEIOU, A is EEI and B is AOOIUEI, you will only be indexing P in the rows for E and I, so that's all you need in P:
This works and suffices, because in (7), P[c,j] is the entry in P for the character c, and c is the index of a(i). In other words: C[c] always belongs to A, so it makes perfect sense to build P for the characters of A instead of using the whole alphabet for the cases where the size of A is considerably smaller than the size of C.
All you have to do now is to apply the same principle to whatever your objects are.
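For sequences of arbitrary objects, a dictionary keyed only by the elements that actually occur in A does the same job (a sketch assuming, as in your question, that two objects match when their Name properties are equal):

# P as a dictionary: for each distinct key appearing in A, a row giving the last
# position in B strictly before j where that key occurs (0 if none).
def build_p_for_objects(A, B, key=lambda obj: obj.Name):
    rows, n = {}, len(B)
    for k in {key(a) for a in A}:        # only the "letters" that A actually uses
        row, last = [0] * (n + 1), 0
        for j in range(1, n + 1):
            row[j] = last
            if key(B[j - 1]) == k:
                last = j
        rows[k] = row
    return rows                          # look up with rows[key(a_i)][j]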
I really don't know how to explain it any better. This may be a little dense at first. Make sure to re-read it until you really get it - and I mean every little detail. You have to master this before thinking about implementing it.
NOTE: You said you were looking for a credible and / or official source. I'm just another CS student, so I'm not an official source, but I think I can be considered "credible". I've studied this before and I know the subject. Happy coding!
This is going to be a long post and just for fun, so if you don't have much time better go help folks with more important questions instead :)
There is a game called "Tower Bloxx", recently released on Xbox. One part of the game is to place different colored towers on a field in the most optimal way in order to maximize the number of the most valuable towers. I wrote an algorithm that determines the most efficient tower placement, but it is not very efficient and pretty much just brute-forces all possible combinations. For a 4x4 field with 4 tower types it solves the problem in about 1 hour; 5 tower types would take about 40 hours, which is too much.
Here are the rules:
There are 5 types of towers that can be placed on a field. There are several types of fields; the easiest is just a 4x4 matrix, other fields have some "blanks" where you can't build. Your aim is to put as many of the most valuable towers on a field as possible, to maximize the total tower value on the field (let's assume that all towers are built at once; there are no turns).
Tower types (in order from less to most valuable):
Blue - can be placed anywhere, value = 10
Red - can be placed only besides blue, value = 20
Green - placed besides red and blue, value = 30
Yellow - besides green, red and blue, value = 40
White - besides yellow, green, red and blue, value = 100
This means that, for example, a green tower should have at least 1 red and 1 blue tower among its north, south, west or east neighbor cells (diagonals don't count). A white tower should be surrounded by all the other colors.
Here is my algorithm for 4 tower types on a 4x4 field:
Total number of combinations = 4^16
Loop through [0..4^16 - 1] and convert every number to a base-4 string in order to encode the tower placement, so the largest value, 4^16 - 1 = "3333 3333 3333 3333", would represent a field full of the most valuable of the four tower types (0=blue, ..., 3=yellow).
Convert tower placement string into matrix.
For every tower in the matrix, check its neighbors; if any requirement fails, the whole combination fails.
Put all correct combinations into an array and then sort this array as strings in lexicographic order to find the best possible combination (the characters within each string need to be sorted first).
The only optimization I came up with is to skip combinations that don't contain any of the most valuable towers. It skips some processing, but I still loop through all 4^16 combinations.
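For reference, a small sketch of the decoding step described above (combination index to 4x4 grid of tower types, with 0=blue, 1=red, 2=green, 3=yellow):

# Decode a combination index in [0, 4**16) into a 4x4 grid of tower types.
def decode(index, size=4, num_types=4):
    digits = []
    for _ in range(size * size):
        digits.append(index % num_types)   # take the next base-4 digit
        index //= num_types
    return [digits[r * size:(r + 1) * size] for r in range(size)]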
Any thoughts on how this can be improved? Code samples would be helpful, ideally in Java or PHP.
-------Update--------
After adding more illegal states (yellow cannot be built in the corners, white cannot be built in corners or on edges, the field should contain at least one tower of each type), realizing that only 1 white tower can possibly be built on a 4x4 field, and optimizing the Java code, the total time was brought down from 40 to ~16 hours. Maybe threading would bring it down to 10 hours, but that's probably the limit of brute forcing.
I found this question intriguing, and since I'm teaching myself Haskell, I decided to try my hand at implementing a solution in that language.
I thought about branch-and-bound, but couldn't come up with a good way to bound the solutions, so I just did some pruning by discarding boards that violate the rules.
My algorithm works by starting with an "empty" board. It places each possible color of tower in the first empty slot and in each case (each color) then recursively calls itself. The recursed calls try each color in the second slot, recursing again, until the board is full.
As each tower is placed, I check the just-placed tower and all of its neighbors to verify that they're obeying the rules, treating any empty neighbors as wild cards. So if a white tower has four empty neighbors, I consider it valid. If a placement is invalid, I do not recurse on that placement, effectively pruning the entire tree of possibilities under it.
The way the code is written, I generate a list of all possible solutions, then look through the list to find the best one. In actuality, thanks to Haskell's lazy evaluation, the list elements are generated as the search function needs them, and since they're never referred to again they become available for garbage collection right away, so even for a 5x5 board memory usage is fairly small (2 MB).
Performance is pretty good. On my 2.1 GHz laptop, the compiled version of the program solves the 4x4 case in ~50 seconds, using one core. I'm running a 5x5 example right now to see how long it will take. Since functional code is quite easy to parallelize, I'm also going to experiment with parallel processing. There's a parallelized Haskell compiler that will not only spread the work across multiple cores, but across multiple machines as well, and this is a very parallelizable problem.
Here's my code so far. I realize that you specified Java or PHP, and Haskell is quite different. If you want to play with it, you can modify the definition of the variable "bnd" just above the bottom to set the board size. Just set it to ((1,1),(x, y)), where x and y are the number of columns and rows, respectively.
import Array
import Data.List
-- Enumeration of Tower types. "Empty" isn't really a tower color,
-- but it allows boards to have empty cells
data Tower = Empty | Blue | Red | Green | Yellow | White
deriving(Eq, Ord, Enum, Show)
type Location = (Int, Int)
type Board = Array Location Tower
-- towerScore computes the score of a single tower
towerScore :: Tower -> Int
towerScore White = 100
towerScore t = (fromEnum t) * 10
-- towerUpper computes the upper bound for a single tower
towerUpper :: Tower -> Int
towerUpper Empty = 100
towerUpper t = towerScore t
-- boardScore computes the score of a board
boardScore :: Board -> Int
boardScore b = sum [ towerScore (b!loc) | loc <- range (bounds b) ]
-- boardUpper computes the upper bound of the score of a board
boardUpper :: Board -> Int
boardUpper b = sum [ bestScore loc | loc <- range (bounds b) ]
where
bestScore l | tower == Empty =
towerScore (head [ t | t <- colors, canPlace b l t ])
| otherwise = towerScore tower
where
tower = b!l
colors = reverse (enumFromTo Empty White)
-- Compute the neighbor locations of the specified location
neighborLoc :: ((Int,Int),(Int,Int)) -> (Int,Int) -> [(Int,Int)]
neighborLoc bounds (col, row) = filter valid neighborLoc'
where
valid loc = inRange bounds loc
neighborLoc' = [(col-1,row),(col+1,row),(col,row-1),(col,row+1)]
-- Array to store all of the neighbors of each location, so we don't
-- have to recalculate them repeatedly.
neighborArr = array bnd [(loc, neighborLoc bnd loc) | loc <- range bnd]
-- Get the contents of neighboring cells
neighborTowers :: Board -> Location -> [Tower]
neighborTowers board loc = [ board!l | l <- (neighborArr!loc) ]
-- The tower placement rule. Yields a list of tower colors that must
-- be adjacent to a tower of the specified color.
requiredTowers :: Tower -> [Tower]
requiredTowers Empty = []
requiredTowers Blue = []
requiredTowers Red = [Blue]
requiredTowers Green = [Red, Blue]
requiredTowers Yellow = [Green, Red, Blue]
requiredTowers White = [Yellow, Green, Red, Blue]
-- cellValid determines if a cell satisfies the rule.
cellValid :: Board -> Location -> Bool
cellValid board loc = null required ||
null needed ||
(length needed <= length empties)
where
neighbors = neighborTowers board loc
required = requiredTowers (board!loc)
needed = required \\ neighbors
empties = filter (==Empty) neighbors
-- canPlace determines if 'tower' can be placed in 'cell' without
-- violating the rule.
canPlace :: Board -> Location -> Tower -> Bool
canPlace board loc tower =
let b' = board // [(loc,tower)]
in cellValid b' loc && and [ cellValid b' l | l <- neighborArr!loc ]
-- Generate a board full of empty cells
cleanBoard :: Array Location Tower
cleanBoard = listArray bnd (replicate (rangeSize bnd) Empty)  -- one Empty per cell
-- The heart of the algorithm, this function takes a partial board
-- (and a list of empty locations, just to avoid having to search for
-- them) and a score and returns the best board obtainable by filling
-- in the partial board
solutions :: Board -> [Location] -> Int -> Board
solutions b empties best | null empties = b
solutions b empties best =
fst (foldl' f (cleanBoard, best) [ b // [(l,t)] | t <- colors, canPlace b l t ])
where
f :: (Board, Int) -> Board -> (Board, Int)
f (b1, best) b2 | boardUpper b2 <= best = (b1, best)
| otherwise = if newScore > lstScore
then (new, max newScore best)
else (b1, best)
where
lstScore = boardScore b1
new = solutions b2 e' best
newScore = boardScore new
l = head empties
e' = tail empties
colors = reverse (enumFromTo Blue White)
-- showBoard converts a board to a printable string representation
showBoard :: Board -> String
showBoard board = unlines [ printRow row | row <- [minrow..maxrow] ]
where
((mincol, minrow), (maxcol, maxrow)) = bounds board
printRow row = unwords [ printCell col row | col <- [mincol..maxcol] ]
printCell col row = take 1 (show (board!(col,row)))
-- Set 'bnd' to the size of the desired board.
bnd = ((1,1),(4,4))
-- Main function generates the solutions, finds the best and prints
-- it out, along with its score
main = do putStrLn (showBoard best); putStrLn (show (boardScore best))
where
s = solutions cleanBoard (range (bounds cleanBoard)) 0
best = s
Also, please remember this is my first non-trivial Haskell program. I'm sure it can be done much more elegantly and succinctly.
Update: Since it was still very time-consuming to do a 5x5 with 5 colors (I waited 12 hours and it hadn't finished), I took another look at how to use bounding to prune more of the search tree.
My first approach was to estimate the upper bound of a partially-filled board by assuming every empty cell is filled with a white tower. I then modified the 'solutions' function to track the best score seen and to ignore any board whose upper bound is less than that best score.
That helped some, reducing a 4x4x5 board from 23s to 15s. To improve it further, I modified the upper bound function to assume that each Empty is filled with the best tower possible, consistent with the existing non-empty cell contents. That helped a great deal, reducing the 4x4x5 time to 2s.
Running it on 5x5x5 took 2600s, giving the following board:
G B G R B
R B W Y G
Y G R B R
B W Y G Y
G R B R B
with a score of 730.
I may make another modification and have it find all of the maximal-scoring boards, rather than just one.
If you don't want to do A*, use a branch and bound approach. The problem should be relatively easy to code up because your value functions are well defined. I imagine you should be able to prune off huge sections of the search space with relative ease. However because your search space is pretty large it may still take some time. Only one way to find out :)
The wiki article isn't the best in the world. Google can find you a ton of nice examples and trees and stuff to further illustrate the approach.
One easy way to improve the brute force method is to explore only legal states. For example, if you are trying all possible states, you will be testing many states where the top right corner is a white tower. All of these states will be illegal. It doesn't make sense to generate and test all of those states. So you want to generate your states one block at a time, and only go deeper into the tree when you are actually at a potentially valid state. This will cut down your search tree by many orders of magnitude.
There may be further fancy things you can do, but this is an easy to understand (hopefully) improvement to your current solution.
I think you will want to use a branch-and-bound algorithm, because I think coming up with a good heuristic for an A* implementation will be hard (but that's just my intuition).
The pseudo-code for a branch-and-bound implementation is:
board = initial board with nothing on it, probably a 2D array
bestBoard = empty board

function findBest(board)
    if no more pieces can be added to board then
        if score(board) > score(bestBoard) then
            bestBoard = board
        return
    else
        for each piece P we can legally add to board
            newBoard = board with piece P added
            // loose upper bound, could be improved
            if score(newBoard) + 100 * number of blanks in newBoard > score(bestBoard) then
                findBest(newBoard)
The idea is that we search all possible boards, in order, but we keep track of the best one we have found so far (this is the bound). Then, if we find a partial board which we know can never be better than the best one so far, we stop working on that partial board: we trim that branch of the search tree.
In the code above I am doing the check by assuming that all the blanks would be filled by the white pieces, as we can't do better than that. I am sure that with a little bit of thought you can come up with a tighter bound than that.
Another place where you can try to optimize is the order of the for-each loop. You want to try the pieces in the right order; that is, optimally you want the first solution found to be the best one, or at least one with a really high score.
It seems like a good approach would be to start with a white tower and then build a set of towers around it based on the requirements, trying to find the smallest possible colored set of shapes which can act as interlocking tiles.
I wanted to advocate linear programming with integer unknowns, but it turns out that it's NP-hard even in the binary case. However, you can still get great success at optimizing a problem like yours, where there are many valid solutions and you simply want the best one.
Linear programming, for this kind of problem, essentially amounts to having a lot of variables (for example, the number of red towers present in cell (M, N)) and relationships among the variables (for example, the number of white towers in cell (M, N) must be less than or equal to the number of towers of the non-white color that has the smallest such number, among all its neighbors). It's kind of a pain to write up a linear program, but if you want a solution that runs in seconds, it's probably your best bet.
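As a rough illustration of that kind of formulation (the PuLP library, the variable names and the exact constraints below are my own sketch, not something prescribed by this answer):

# Sketch of a 0/1 integer program for the 4x4 field: one binary variable per
# (cell, color), each cell holds exactly one tower, and a tower may only stand
# in a cell if every color it requires appears among the cell's neighbors.
import itertools
import pulp

ROWS, COLS = 4, 4
VALUE = {"B": 10, "R": 20, "G": 30, "Y": 40, "W": 100}
REQUIRES = {"B": [], "R": ["B"], "G": ["R", "B"],
            "Y": ["G", "R", "B"], "W": ["Y", "G", "R", "B"]}
colors = list(VALUE)
cells = list(itertools.product(range(ROWS), range(COLS)))

def neighbors(cell):
    r, c = cell
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [p for p in cand if p in cells]

prob = pulp.LpProblem("towers", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (cells, colors), cat="Binary")

prob += pulp.lpSum(VALUE[t] * x[cell][t] for cell in cells for t in colors)

for cell in cells:
    # every cell holds exactly one tower (blue is always legal, so no empties needed)
    prob += pulp.lpSum(x[cell][t] for t in colors) == 1
    for t in colors:
        for req in REQUIRES[t]:
            # if tower t stands here, at least one neighbor must hold color req
            prob += x[cell][t] <= pulp.lpSum(x[n][req] for n in neighbors(cell))

prob.solve()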
You've received a lot of good advice on the algorithmic side of things, so I don't have a lot to add. But, assuming Java as the language, here are a few fairly obvious suggestions for performance improvement.
Make sure you're not instantiating any objects inside that 4^16 loop. It's much, much cheaper for the JVM to re-initialize an existing object than to create a new one. Even cheaper to use arrays of primitives. :)
If you can help it, step away from the collection classes. They'll add a lot of overhead that you probably don't need.
Make sure you're not concatenating any strings. Use StringBuilder.
And lastly, consider re-writing the whole thing in C.
I have a set of students (referred to as items in the title for generality). Amongst these students, some have a reputation for being rambunctious. We are told about a set of hate relationships of the form 'i hates j'; 'i hates j' does not imply 'j hates i'. We are supposed to arrange the students in rows (the frontmost row numbered 1) in such a way that if 'i hates j', then i is put in a row with a strictly smaller number than j's (in other words, in some row in front of j's row), so that i doesn't throw anything at j (turning back is not allowed). What would be an efficient algorithm to find the minimum number of rows needed (rows need not all have the same number of students)?
We will make the following assumptions:
1) If we model this as a directed graph, there are no cycles in the graph. The most basic cycle would be 'i hates j' and 'j hates i' at the same time, which we rule out, because otherwise I think the ordering would become impossible.
2) Every student in the group is hated by at least one other student OR hates at least one other student. Of course, there will be students who are both hated by some and who in turn hate other students. This means that there are no stray students who are not part of the graph.
Update: I have already thought of constructing a directed graph with an edge i --> j if 'i hates j' and doing a topological sort. However, the general topological sort would suit better if I had to line all the students up in a single line. Since there is the variation of rows here, I am trying to figure out how to factor that change into the topological sort so that it gives me what I want.
When you answer, please state the complexity of your solution. If anybody is giving code and you don't mind the language, then I'd prefer Java but of course any other language is just as fine.
JFYI, this is not for any kind of homework (I am not a student, btw :)).
It sounds to me that you need to investigate topological sorting.
This problem is basically another way of stating the longest-path-in-a-directed-graph problem. The number of rows is actually the number of nodes in that path (the number of edges + 1).
Assuming the graph is acyclic, the solution is topological sort.
Acyclic is a bit stronger than your assumption 1. Not only is A -> B together with B -> A invalid; so is A -> B, B -> C, C -> A, and any cycle of any length.
HINT: the question is how many rows are needed, not which student goes in which row. The answer to the question is the length of the longest path.
This comes from project management theory (or scheduling theory, I don't know the exact term). There the task is about ordering jobs (a vertex is a job, an arc is a job-order relationship).
Obviously we have some connected oriented graph without loops. There is an arc from vertex a to vertex b if and only if a hates b. Let's assume there is a source vertex (with no incoming arcs) and a destination vertex (with no outgoing arcs). If that is not the case, just add imaginary ones. Now we want to find the length of the longest path from source to destination (it will be the number of rows - 1, but mind the imaginary vertices).
We will define the vertex rank r[v] as the number of arcs in a longest path between the source and this vertex v. Obviously we want to know r[destination]. The algorithm for finding ranks:
0) r_0[v] := 0 for all vertices v
repeat
    t) r_t[end(j)] := max( r_{t-1}[end(j)], r_{t-1}[start(j)] + 1 ) for all arcs j
until for all arcs j: r_{t+1}[end(j)] = r_t[end(j)]   // i.e. no changes on this iteration
On each step at least one vertex increases its rank, therefore in this form the complexity is O(n^3).
By the way, this algorithm also gives you the student distribution among rows: just group students by their respective ranks.
Edit: here is another piece of code with the same idea; it is possibly easier to understand.
# Python
# V is a list of vertex indices, e.g. V = range(N)
# source has index 0, destination has index N-1
# E is a list of edges, i.e. tuples of the form (start vertex, end vertex)
R = [0] * len(V)
changes = True
while changes:          # relax every edge until no rank changes any more
    changes = False
    for e in E:
        if R[e[1]] < R[e[0]] + 1:
            changes = True
            R[e[1]] = R[e[0]] + 1
# The answer is derived from the value of R[N-1]
Of course this is the simplest implementation. It can be optimized, and the time estimate can be made better.
Edit 2: an obvious optimization is to update only the vertices adjacent to those that were updated on the previous step, i.e. to introduce a queue of vertices whose rank was updated. Also, for edge storage one should use adjacency lists. With this optimization the complexity would be O(N^2). Indeed, each vertex may appear in the queue at most rank times, but a vertex's rank never exceeds N, the number of vertices. Therefore the total number of algorithm steps will not exceed O(N^2).
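A sketch of that queue-based variant, assuming the graph is acyclic and given as adjacency lists (adj[u] = the students that u hates):

from collections import deque

# Queue-based rank relaxation: only re-examine vertices whose rank just changed.
def ranks(adj, n):
    R = [0] * n
    queue = deque(range(n))      # initially every vertex may still change a neighbor
    in_queue = [True] * n
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v in adj[u]:         # edge u -> v means u hates v, so v sits behind u
            if R[v] < R[u] + 1:
                R[v] = R[u] + 1
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return R                     # the number of rows needed is max(R) + 1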
Essentially, the important thing in assumption #1 is that there must not be any cycles in this graph. If there are any cycles, you can't solve this problem.
I would start by seating all of the students that do not hate any other students in the back row. Then you can seat the students who hate those students in the next row, and so on.
The number of rows is the length of the longest path in the directed graph, plus one. As a limit case, if there is no hate relationship everyone can fit on the same row.
To allocate the rows, put everyone who is not hated by anyone else on row one. These are the "roots" of your graph. Everyone else is put on row N + 1, where N is the length of the longest path from any of the roots to that person (this path has length at least one).
A simple O(N^3) algorithm is the following:
S = set of students
for s in S: s.row = -1        # -1 means not allocated to a row yet
rownum = 0                    # start from the first row below
flag = true                   # becomes false when no student could be placed
while (flag):
    rownum = rownum + 1       # proceed to the next row
    flag = false
    for s in S:
        if (s.row != -1): continue   # already allocated
        ok = true
        for q in S:
            # s may sit on this row only if nobody who hates s is still
            # unallocated or sitting on this same row
            if ((q.row == -1 or q.row == rownum) and s hated by q):
                ok = false; break
        if (ok):                     # we can put s here
            s.row = rownum
            flag = true
Simple answer = 1 row.
Put all students in the same row.
Actually, that might not solve the question as stated - it requires a strictly smaller row number, not an equal one...
Put all students in row 1
For each hate relation 'i hates j', move the hated student j to a row behind the hating student i
Iterate until nothing changes any more, or at most Num(relations) times.
But I'm sure there are better algorithms - look at acyclic graphs.
Construct a relationship graph where 'i hates j' gives a directed edge from i to j, so the end result is a directed graph. It should be a DAG; otherwise there is no solution, since it is not possible to resolve circular hate relationships.
Now simply do a DFS and, in the post-order callback for a node (that is, once the DFS of all its children is done and before returning from the DFS call for this node), check the row numbers of all the children and assign this node's row number as the maximum child row + 1. If there is someone who doesn't hate anyone (a node with an empty adjacency list), simply assign him row 0.
Once all the nodes are processed, reverse the row numbers. This should be easy, as it is just about finding the maximum and reassigning each row number as (max - already assigned row number).
Here is the sample code.
postNodeCb( graph g, int node )
{
    if ( /* node has no outgoing edges, i.e. hates no one */ )
        row[ node ] = 0;
    else
        row[ node ] = max( row numbers of all children ) + 1;
}

main()
{
    ...
    for ( int i = 0; i < NUM_VER; i++ )
        if ( !visited[ i ] )
            graphTraverseDfs( g, i );
    ...
}