stable matching - algorithm

I'm reading an algorithms book in my leisure time. Here is a question for which I have my own answer, but I'm not quite sure about it. What's your opinion? Thanks!
Question:
There are 2 television network companies, call them A and B, and each is planning its TV program schedule across the n time slots of a day. Each of them puts its n programs into those slots. Every program has a rating based on its popularity over the past year, and all of these ratings are distinct. A company wins a slot when its show has a higher rating than its opponent's. Is there always a pair of schedules, S made by A and T made by B, such that (S, T) is stable, i.e. neither network can unilaterally change its own schedule and win more time slots?

There is no stable matching, unless one station has all its programs contiguous in the ratings (i.e. the other station has no program rated better than one of the first station's programs but worse than another of them).
Proof
Suppose a station could improve its score and the result were a stable matching. But then the other station could improve its own score by reversing that rearrangement, so the result wasn't a stable matching after all. Contradiction.
Therefore a stable matching cannot be reached by a station improving its score.
Therefore a stable matching also can't be made worse (for either station), because then that worse state could be improved back into a stable matching (which I just showed is impossible).
Therefore every program rearrangement of a stable matching must give equal scores to both stations.
The only program sets whose scores can't be changed by rearrangement are the ones where one station's programs are contiguous in the ratings. (Proof left to the reader.)

Solution in Haskell:
import Data.List (sort)

hasStable :: Ord a => [a] -> [a] -> Bool
hasStable x y =
    score x y + score y x == 0

-- score is the number of slots we win minus the number of slots they win
-- in our best possible response schedule
score :: Ord a => [a] -> [a] -> Integer
score others mine =
    scoreSorted (revSort others) (revSort mine)
  where
    -- revSort sorts from biggest to smallest
    revSort = reverse . sort

    scoreSorted :: Ord a => [a] -> [a] -> Integer
    scoreSorted (o:os) (m:ms)
      | m > o =
          -- our best show is better than theirs:
          -- we use it to beat theirs and move on
          1 + score os ms
      | otherwise =
          -- their best show is better than ours:
          -- we match it with our worst show,
          -- saving our big guns for later
          -1 + score os (m : ms)
    scoreSorted _ _ = 0  -- there are no shows left
> hasStable [5,3] [4,2]
False
> hasStable [5,2] [3,4]
True
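As a cross-check (my own addition, not part of the original answer; wins and bruteHasStable are made-up names), stability can also be defined directly by brute force over all schedule permutations and compared with hasStable on small inputs:

import Data.List (permutations)

-- number of slots the first schedule wins against the second
wins :: Ord a => [a] -> [a] -> Int
wins s t = length [ () | (a, b) <- zip s t, a > b ]

-- (s, t) is stable when neither side can win more slots by permuting
-- only its own schedule
bruteHasStable :: Ord a => [a] -> [a] -> Bool
bruteHasStable xs ys =
    or [ stable s t | s <- permutations xs, t <- permutations ys ]
  where
    -- A cannot do better with any s', and (since all ratings are distinct,
    -- B's wins are the slot count minus A's wins) B cannot do better with any t'
    stable s t = all (\s' -> wins s' t <= wins s t) (permutations xs)
              && all (\t' -> wins s t' >= wins s t) (permutations ys)

-- bruteHasStable [5,3] [4,2] == False and bruteHasStable [5,2] [3,4] == True,
-- agreeing with the GHCi runs above.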

My own answer is that there is no stable matching. Suppose there are only 2 time slots.
A has programs p1 (5.0) and p2 (3.0);
B has programs p3 (4.0) and p4 (2.0).
The possible schedules for A are:
S1: p1, p2
S2: p2, p1
The possible schedules for B are:
T1: p3, p4
T2: p4, p3
So the possible matchings are:
(S1, T1)(S1, T2)(S2, T1)(S2, T2)
while the results are
(S1, T1) - (p1, p3)(p2, p4) 2:0 - not stable, because B can change its schedule to T2, giving (S1, T2) - (p1, p4)(p2, p3) 1:1.
Vice versa, and the same goes for the other matchings.

Let each TV channel have 2 shows.
TV-1:
Show1 has a rating of 20 pts.
show2 has a rating of 40 pts.
TV-2:
Show1 has a rating of 30 pts.
Show2 has a rating of 50 pts.
Then clearly every matching is unstable.
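For what it's worth, the hasStable check from the answer above agrees on these ratings (a quick GHCi run of my own):
> hasStable [20,40] [30,50]
False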

Related

Efficient, max-flow algorithm to route as many people as possible to one location?

I am trying to design an efficient maximum-flow algorithm on a directed graph that, given a list of n flights (where each entry has the starting city, ending city, departure time, arrival time, and capacity of the flight), will route as many people as possible from city A to city B. I also want to be able to return the set of flights to take so that the maximum possible number of people get from city A to city B. I think it can just be an implementation of the Ford-Fulkerson algorithm, or something similar, but I am having trouble transforming this schedule into a max-flow instance in an efficient way, and working out what the pseudocode of the algorithm would look like after having done so.
The algorithm you're thinking of would do the trick, but the graph it is set to work on must be constructed properly.
Your issue here is timing. Let's say you want to get people from A to C by 14:00 and we have a total of 4 flights:
Flight 1: A -> B, 10:00 -> 11:00, Cap. 100
Flight 2: A -> B, 11:00 -> 12:00, Cap. 100
Flight 3: B -> C, 11:30 -> 12:30, Cap. 100
Flight 4: B -> C, 12:30 -> 13:30, Cap. 100
You can see here that you could fill all flights and get 200 from A to C in time, but constructing the graph needs to take the time into account.
What I suggest is that you don't use a single node to represent B, but rather a few of them:
(B, 11:00) - when flight #1 arrives at B.
(B, 11:30) - when flight #3 departs from B.
(B, 12:00) - when flight #2 arrives at B.
(B, 12:30) - when flight #4 departs from B.
Any flight that leaves from B is added to the graph once, starting from the relevant B node.
Consecutive B nodes are connected by an edge with infinite capacity, in order of increasing time. This allows passengers to "wait" at B between different times.
This example would end up with the following list of edges:
// Flight edges
[(A, 10:00), (B, 11:00)], Cap. 100
[(A, 11:00), (B, 12:00)], Cap. 100
[(B, 11:30), (C, 12:30)], Cap. 100
[(B, 12:30), (C, 13:30)], Cap. 100
// Waiting edges
[(B, 11:00), (B, 11:30)], Cap. infinite
[(B, 11:30), (B, 12:00)], Cap. infinite
[(B, 12:00), (B, 12:30)], Cap. infinite
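As a hedged sketch (mine, not the answerer's; the Flight record and expand function are made-up names), here is one way to build that time-expanded edge list in Haskell, with times as minutes since midnight. A standard max-flow routine such as Ford-Fulkerson can then be run on the resulting graph, with the source at A's earliest node and all C nodes wired into a single sink.

import Data.List (nub, sort)

type City = String
type Time = Int  -- minutes since midnight

data Flight = Flight { from :: City, to :: City, dep :: Time, arr :: Time, cap :: Int }

type Node = (City, Time)

-- Build the time-expanded graph: one capacitated edge per flight, plus
-- "waiting" edges of effectively unlimited capacity between consecutive
-- time points at the same city.
expand :: [Flight] -> [(Node, Node, Int)]
expand fs = flightEdges ++ waitingEdges
  where
    flightEdges  = [ ((from f, dep f), (to f, arr f), cap f) | f <- fs ]
    cities       = nub (map from fs ++ map to fs)
    times c      = sort . nub $ [ dep f | f <- fs, from f == c ]
                             ++ [ arr f | f <- fs, to f == c ]
    waitingEdges = [ ((c, t1), (c, t2), maxBound)
                   | c <- cities
                   , (t1, t2) <- zip (times c) (drop 1 (times c)) ]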

sort a list of numbers by their 'visual similarity'

consider a function, which rates the level of 'visual similarity' between two numbers: 666666 and 666166 would be very similar, unlike 666666 and 111111
type N = Int
type Rate = Int
similar :: N -> N -> Rate
similar a b = length . filter id . zipWith (==) a' $ b'
  where a' = show a
        b' = show b
similar 666666 666166
--> 5
-- high rate : very similar
similar 666666 111111
--> 0
-- low rate : not similar
There are more sophisticated implementations of this, but it serves the purpose here.
The intention is to find a function that sorts a given list of N's so that each item is the one most similar to its preceding item. Since the first item does not have a predecessor, there must be a given first N.
similarSort :: N -> [N] -> [N]
Let's look at some sample data. The numbers don't all need to have the same number of digits, but it makes them easier to reason about:
sample :: [N]
sample = [2234, 8881, 1222, 8888, 8822, 2221, 5428]
one could be tempted to implement the function like so:
similarSortWrong x xs = reverse . sortWith (similar x) $ xs
but this would lead to a wrong result:
similarSortWrong 2222 sample
--> [2221,1222,8822,2234,5428,8888,8881]
In the beginning it looks correct, but it's obvious that 8822 should rather be followed by 8881, since 8881 is more similar to it than 2234 is.
So here's the implementation I came up with:
similarSort _ [] = []
similarSort x xs = x : similarSort a as
  where (a:as) = reverse . sortWith (similar x) $ xs
similarSort 2222 sample
--> [2222,2221,2234,1222,8822,8888,8881]
It seems to work, but it also seems to do a lot more work than necessary: at every step the whole rest of the list is sorted again, just to pick off its first element. Usually laziness should allow for this, but the reverse might break it again. I'd be keen to hear whether someone knows a common abstraction for this problem.
It's relatively straightforward to implement the greedy algorithm you ask for. Let's start with some boilerplate; we'll use the these package for a zip-like function that hands us the "unused" tail ends of the zipped-together lists:
import Data.Align
import Data.These
sampleStart = "2222"
sampleNeighbors = ["2234", "8881", "1222", "8888", "8822", "2221", "5428"]
Instead of using numbers, I'll use lists of digits -- just so we don't have to litter the code with conversions between the form that's convenient for the user and the form that's convenient for the algorithm. You've been a bit fuzzy about how to rate the similarity of two digit strings, so let's make it as concrete as possible: any digits that differ cost 1, and if the digit strings vary in length we have to pay 1 for each extension to the right. Thus:
distance :: Eq a => [a] -> [a] -> Int
distance l r = sum $ alignWith elemDistance l r where
    elemDistance (These l r) | l == r = 0
    elemDistance _ = 1
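A quick GHCi sanity check of this cost model (my own, not part of the original answer):
> distance "666666" "666166"
1
> distance "123" "12"
1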
A handy helper function will pick the smallest element of some list (by a user-specified measure) and return the rest of the list in some implementation-defined order.
minRestOn :: Ord b => (a -> b) -> [a] -> Maybe (a, [a])
minRestOn f [] = Nothing
minRestOn f (x:xs) = Just (go x [] xs) where
    go min rest [] = (min, rest)
    go min rest (x:xs) = if f x < f min
        then go x (min:rest) xs
        else go min (x:rest) xs
Now the greedy algorithm almost writes itself:
greedy :: Eq a => [a] -> [[a]] -> [[a]]
greedy here neighbors = here : case minRestOn (distance here) neighbors of
    Nothing -> []
    Just (min, rest) -> greedy min rest
We can try it out on your sample:
> greedy sampleStart sampleNeighbors
["2222","1222","2221","2234","5428","8888","8881","8822"]
Just eyeballing it, that seems to do okay. However, as with many greedy algorithms, this one only minimizes the local cost of each edge in the path. If you want to minimize the total cost of the path found, you need to use another algorithm. For example, we can pull in the astar package. For simplicity, I'm going to do everything in a very inefficient way, but it's not too hard to do it "right". We'll need a fair chunk more imports:
import Data.Graph.AStar
import Data.Hashable
import Data.List
import Data.Maybe
import qualified Data.HashSet as HS
Unlike before, where we only wanted the nearest neighbor, we'll now want all the neighbors. (Actually, we could probably implement the previous use of minRestOn using the following function and minimumOn or something. Give it a try if you're interested!)
neighbors :: (a, [a]) -> [(a, [a])]
neighbors (_, xs) = go [] xs where
    go ls [] = []
    go ls (r:rs) = (r, ls ++ rs) : go (r:ls) rs
We can now call the aStar search method with appropriate parameters. We'll use ([a], [[a]]) -- representing the current list of digits and the remaining lists that we can choose from -- as our node type. The arguments to aStar are then, in order: the function for finding neighboring nodes, the function for computing distance between neighboring nodes, the heuristic for how far we have left to go (we'll just say 1 for each unique element in the list), whether we've reached a goal node, and the initial node to start the search from. We'll call fromJust, but it should be okay: all nodes have at least one path to a goal node, just by choosing the remaining lists of digits in order.
optimal :: (Eq a, Ord a, Hashable a) => [a] -> [[a]] -> [[a]]
optimal here elsewhere = (here:) . map fst . fromJust $ aStar
    (HS.fromList . neighbors)
    (\(x, _) (y, _) -> distance x y)
    (\(x, xs) -> HS.size (HS.fromList (x:xs)) - 1)
    (\(_, xs) -> null xs)
    (here, elsewhere)
Let's see it run in ghci:
> optimal sampleStart sampleNeighbors
["2222","1222","8822","8881","8888","5428","2221","2234"]
We can see that it's done better this time by adding a pathLength function that computes all the distances between neighbors in a result.
pathLength :: Eq a => [[a]] -> Int
pathLength xs = sum [distance x y | x:y:_ <- tails xs]
In ghci:
> pathLength (greedy sampleStart sampleNeighbors)
15
> pathLength (optimal sampleStart sampleNeighbors)
14
In this particular example, I think the greedy algorithm could have found the optimal path if it had made the "right" choices whenever there were ties for minimal next step; but I expect it is not too hard to cook up an example where the greedy algorithm is forced into bad early choices.

Lazy Folding of Infinite Depth & Infinite Breadth Rose Tree to its Edge Paths

This question, "haskell fold rose tree paths", delved into the code for folding a rose tree to its paths. I was experimenting with infinite rose trees, and I found that the provided solution was not lazy enough to work on rose trees that are infinite in both depth and breadth.
Consider a rose tree like:
data Rose a = Rose a [Rose a] deriving (Show, Functor)
Here's a finite rose tree:
finiteTree = Rose "root" [
    Rose "a" [
      Rose "d" [],
      Rose "e" []
    ],
    Rose "b" [
      Rose "f" []
    ],
    Rose "c" []
  ]
The output of the edge path list should be:
[["root","a","d"],["root","a","e"],["root","b","f"],["root","c"]]
Here is an infinite Rose tree in both dimensions:
infiniteRoseTree :: [[a]] -> Rose a
infiniteRoseTree ((root:_):breadthGens) = Rose root (infiniteRoseForest breadthGens)
infiniteRoseForest :: [[a]] -> [Rose a]
infiniteRoseForest (breadthGen:breadthGens) = [ Rose x (infiniteRoseForest breadthGens) | x <- breadthGen ]
infiniteTree = infiniteRoseTree depthIndexedBreadths where
  depthIndexedBreadths = iterate (map (+1)) [0..]
The tree looks like this (it's just an excerpt, there's infinite depth and infinite breadth):
0
|
|
[1,2..]
/ \
/ \
/ \
[2,3..] [2,3..]
The paths would look like:
[[0,1,2..]..[0,2,2..]..]
Here was my latest attempt (running it in GHCi causes an infinite loop, with no streaming output):
rosePathsLazy (Rose x []) = [[x]]
rosePathsLazy (Rose x children) =
    concat [ map (x:) (rosePathsLazy child) | child <- children ]
rosePathsLazy infiniteTree
The provided solution in the other answer also did not produce any output:
foldRose f z (Rose x []) = [f x z]
foldRose f z (Rose x ns) = [f x y | n <- ns, y <- foldRose f z n]
foldRose (:) [] infiniteTree
Both of the above work for the finite rose tree.
I tried a number of variations, but I can't figure out how to make the edge-folding operation lazy for an infinite 2-dimensional rose tree. I feel like it has something to do with the infinite amounts of concat.
Since the output is a 2-dimensional list, I could run a 2-dimensional take and project with a depth limit, a breadth limit, or both at the same time!
Any help is appreciated!
After reviewing the answers here and thinking about it a bit more, I came to the realisation that this cannot be done, because the resulting list of paths is uncountably infinite. An infinite depth & breadth rose tree is not a 2-dimensional data structure, but an infinite-dimensional data structure: each depth level confers an extra dimension. In other words, it is somewhat equivalent to an infinite-dimensional matrix; imagine a matrix where each field is another matrix, ad infinitum. The cardinality of such an infinite matrix is infinity ^ infinity, which has been proven (I think) to be uncountably infinite. This means any infinite-dimensional data structure is not really computable in a useful sense.
To apply this to the rose tree, if we have infinite depth, then the paths never enumerate past the far left of the rose tree. That is this tree:
0
|
|
[1,2..]
/ \
/ \
/ \
[2,3..] [2,3..]
Would produce paths like [[0,1,2..], [0,1,2..], [0,1,2..]..], and we'd never get past [0,1,2..].
Put another way, if we have a list containing lists ad infinitum, we can never enumerate it either, as there would be an infinite number of dimensions for the code to descend into.
This also has some relationship to the real numbers being uncountably infinite: a lazy list of all the real numbers would just keep producing 0.000.. forever and never enumerate past it.
I'm not sure how to formalise the above explanation, but that's my intuition. (For reference see: https://en.wikipedia.org/wiki/Uncountable_set) It'd be cool to see someone expand on applying https://en.wikipedia.org/wiki/Cantor's_diagonal_argument to this problem.
This book seems to expand on it: https://books.google.com.au/books?id=OPFoJZeI8MEC&pg=PA140&lpg=PA140&dq=haskell+uncountably+infinite&source=bl&ots=Z5hM-mFT6A&sig=ovzWV3AEO16M4scVPCDD-gyFgII&hl=en&sa=X&redir_esc=y#v=onepage&q=haskell%20uncountably%20infinite&f=false
For some reason, dfeuer has deleted his answer, which included a very nice insight and only a minor, easily-fixed problem. Below I discuss his nice insight, and fix the easily-fixed problem.
His insight is that the reason the original code hangs is because it is not obvious to concat that any of the elements of its argument list are non-empty. Since we can prove this (outside of Haskell, with paper and pencil), we can cheat just a little bit to convince the compiler that it's so.
Unfortunately, concat isn't quite good enough: if you give concat a list like [[1..], foo], it will never draw elements from foo. The universe collection of packages can help here with its diagonal function, which does draw elements from all sublists.
Together, these two insights lead to the following code:
import Data.Tree
import Data.Universe.Helpers
paths (Node x []) = [[x]]
paths (Node x children) = map (x:) (p:ps) where
    p:ps = diagonal (map paths children)
If we define a particular infinite tree:
infTree x = Node x [infTree (x+i) | i <- [1..]]
We can look at how it behaves in ghci:
> let v = paths (infTree 0)
> take 5 (head v)
[0,1,2,3,4]
> take 5 (map head v)
[0,0,0,0,0]
Looks pretty good! Of course, as observed by ErikR, we cannot have all paths in here. However, given any finite prefix p of an infinite path through t, there is a finite index in paths t whose element starts with prefix p.
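As a small illustration of that last claim (my own check, not part of the answer), searching the list for a path with the prefix [0,2] terminates, because such a path occurs at a finite index:
> take 2 (head (filter (\p -> take 2 p == [0,2]) v))
[0,2]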
Not a complete answer, but you might be interested in this detailed answer on how Haskell's permutations function is written so that it works on infinite lists:
What does this list permutations implementation in Haskell exactly do?
Update
Here's a simpler way to create an infinite Rose tree:
iRose x = Rose x [ iRose (x+i) | i <- [1..] ]
rindex (Rose a rs) [] = a
rindex (Rose _ rs) (x:xs) = rindex (rs !! x) xs
Examples:
rindex (iRose 0) [0,1,2,3,4,5,6] -- returns: 28
rindex infiniteTree [0,1,2,3,4,5,6] -- returns: 13
Infinite Depth
If a Rose tree has infinite depth and non-trivial width (> 1), then by a simple counting argument there can't be an algorithm that lists all of the paths - the total number of paths is uncountable.
Finite Depth & Infinite Breadth
If the Rose tree has finite depth the number of paths is countable even if the trees have infinite breadth, and there is an algorithm which can produce all possible paths. Watch this space for updates.
ErikR has explained why you can't produce a list that necessarily contains all the paths, but it is possible to list paths lazily from the left. The simplest trick, albeit a dirty one, is to recognize that the result is never empty and force that fact on Haskell.
paths (Rose x []) = [[x]]
paths (Rose x children) = map (x :) (a : as)
  where
    a : as = concatMap paths children
    -- Note that we know here that children is non-empty, and therefore
    -- the result will not be empty.
For making very infinite rose trees, consider
infTree labels = Rose labels (infForest labels)
infForest labels = [Rose labels' (infForest labels')
                   | labels' <- map (: labels) [0..]]
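A quick GHCi check of my own (not part of the original answer) shows that this definition is indeed productive along the leftmost spine:
> take 4 (head (paths (infTree [])))
[[],[0],[0,0],[0,0,0]]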
As chi points out, while this definition of paths is productive, it will in some cases repeat the leftmost path forever, and never reach any more. Oops! So some attempt at fairness or diagonal traversal is necessary to give interesting/useful results.

How to assign rooms most efficiently?

Story:
Our company will go on an outing soon. For our stay at the resort, every two of our colleagues will share one room. Our admin assistant has collected everyone's preferences about whom to share a room with, and now she has to decide how to arrange the rooms so as to minimize the number of rooms required. Everyone will be arranged to share a room with somebody he or she would like to. For example, suppose there are only three colleagues: Allen would like to share a room with Bob or Chris, Bob would like to share with Chris, and Chris would like to share with Allen; then the only result is that Allen and Chris share a room and Bob uses a room alone, so in total 2 rooms are needed.
Question:
To simplify the story into an algorithm question (which may not be the best simplification): we have a few nodes in a graph, and the nodes connect to each other. We only care about nodes that are bidirectionally connected, so now we have an undirected graph. How do we divide the nodes of this undirected graph into groups so that 1) every group contains at most 2 nodes, 2) if a group contains 2 nodes, those nodes are connected, and 3) the number of groups is minimized?
Algorithm:
What comes to mind is to solve the question greedily. In every step of the arrangement, remove either one isolated node or two connected nodes, chosen so that the number of edges remaining in the graph is maximized. By doing this repeatedly, we will eventually arrive at a solution.
Please either solve the question in an optimal way (and I am not looking for a way to try all combinations) or prove the greedy algorithm described above is optimal.
The problem you are solving is finding the maximum matching in a graph. This means finding the maximum number of edges that do not share vertices. In your case, those edges would correspond to shared rooms, and the remaining vertices would be single rooms.
The maximum matching can be found using the Blossom algorithm in polynomial time.
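To make that concrete, here is a small, hedged Haskell sketch of my own (not the answerer's code; mutualEdges, maxMatching and roomsNeeded are made-up names). For a handful of colleagues a brute-force search over edge subsets is enough, and the minimum number of rooms is then the number of people minus the number of matched pairs; the Blossom algorithm is what you would use for larger instances, as the answer says.

import Data.List (maximumBy, nub, subsequences)
import Data.Ord (comparing)

type Person = String

-- keep only mutual (bidirectional) preferences as edges, each pair once
mutualEdges :: [(Person, [Person])] -> [(Person, Person)]
mutualEdges g = [ (a, b) | (a, prefs) <- g, b <- prefs, a < b
                         , maybe False (a `elem`) (lookup b g) ]

-- a set of edges is a matching when no person appears twice
isMatching :: [(Person, Person)] -> Bool
isMatching es = length people == length (nub people)
  where people = concatMap (\(a, b) -> [a, b]) es

-- brute force: the largest subset of edges that forms a matching
maxMatching :: [(Person, [Person])] -> [(Person, Person)]
maxMatching = maximumBy (comparing length) . filter isMatching . subsequences . mutualEdges

-- matched pairs share a room, everyone else gets their own
-- (assumes every person appears exactly once as a key in the list)
roomsNeeded :: [(Person, [Person])] -> Int
roomsNeeded g = length g - length (maxMatching g)

-- e.g. roomsNeeded [("Allen",["Bob","Chris"]),("Bob",["Chris"]),("Chris",["Allen"])] == 2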
Here's something crude in Haskell. The function, "pairs," lists all pairs with a mutual preference, and people without a mutual partner (paired with ""). The function, "choose," returns pairs from the pair list. If both people in a pair are also paired with another (same) third person, "choose" removes those two people from the rest of the pair list, as well as pairs emptied as a consequence. The number of rooms needed is equal to the length of the final list.
Output (it would be nice to have more varied examples to test):
*Main> choose graph
[["Chris","Allen"],["Bob","Isaak"]]
*Main> choose graph1
[["Allen","Chris"],["Bob",""],["Dave",""],["Chris","Max"]] --four rooms
would be needed, although Chris appears in two pairs (..figured they can
decide later who stays where.)
*Main> choose graph2 --example given by Dante is not a Geek
[["Allen","Chris"],["Bob",""]]
Code:
import Data.List (group, sort, delete)

graph = [("Chris",["Isaak","Bob","Allen"])   -- (person, preferences)
        ,("Allen",["Chris","Bob"])
        ,("Bob",["Allen","Chris","Isaak"])
        ,("Isaak",["Bob","Chris"])]

graph1 = [("Allen",["Bob","Chris"]), ("Bob",["Chris"]), ("Dave",[])
         ,("Chris",["Allen", "Max"]), ("Max", ["Chris"])]

graph2 = [("Allen",["Bob","Chris"]), ("Bob",["Chris"]), ("Chris",["Allen"])]

pairs graph = pairs' graph [] where
  pairs' [] result = concat result
  pairs' (x@(person1,_):xs) result
    | null test = if elem [[person1, ""]] result
                     then pairs' xs result
                     else pairs' xs ([[person1,""]]:result)
    | otherwise =
        pairs' xs ((filter (\[x,y] -> notElem [y,x] (concat result)) test):result)
    where isMutual a b = elem (fst a) (snd b) && elem (fst b) (snd a)
          test = foldr comb [] graph
          comb a@(person2,_) b =
            if isMutual a x then [person1,person2]:b else b

choose graph = comb paired [] where
  paired = pairs graph
  comb [] result = filter (/=["",""]) result
  comb (x@[p1,p2]:xs) result
    | x == ["",""] = comb xs result
    | test =
        comb (map delete' xs) (x:map delete' result)
    | otherwise = comb xs (x:result)
    where delete' [x,y] = if elem x [p1,p2] then ["",y]
                          else if elem y [p1,p2] then [x,""]
                          else [x,y]
          test = if not . null . filter ((>=2) . length) . group
                      . sort . map (delete p2 . delete p1)
                      . filter (\y -> y /= x && (elem p1 y || elem p2 y)) $ paired
                    then True
                    else False

How can you develop an algorithm for a developer review queue rotation?

Suppose you have developers A, B, C, D, E, F and they review each other's work.
How can you develop an algorithm to generate a review rotation telling each developer whose work they have to review each week AND satisfy these criteria:
You cannot review the same person two weeks in a row
There cannot be closed loops (A reviews B, B reviews A)
It would be nice if you review every other developer once before you start repeating.
I think I can make it work with an odd number of developers, but I am struggling with an even number.
There is a simple round-robin tournament algorithm to get all possible pairs without repetition.
Arrange the developers in two columns; the left column reviews the right one.
Fix A's position.
Rotate all the others cyclically.
Round 1:
A->F
B->E
C->D

Round 2:
A->B
C->F
D->E

Round 3:
A->C
D->B
E->F

Round 4:
A->D
E->C
F->B

Round 5:
A->E
F->D
B->C
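The same construction is easy to write down directly; here is a small Haskell sketch of my own (roundRobin is a made-up name), assuming an even number of developers: fix the first one, rotate the rest one step each week, and pair the first half of the list with the reversed second half.

roundRobin :: [a] -> [[(a, a)]]
roundRobin []           = []
roundRobin (fixed:rest) =
    [ pairUp (fixed : r) | r <- take (length rest) (iterate rotateLeft rest) ]
  where
    rotateLeft []     = []
    rotateLeft (x:xs) = xs ++ [x]
    -- pair the left column (reviewers) with the reversed right column (reviewees)
    pairUp xs = zip top (reverse bottom)
      where (top, bottom) = splitAt (length xs `div` 2) xs

For six developers, roundRobin ["A","B","C","D","E","F"] reproduces the five rounds listed above.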
I'd go the naive route and rotate through a circular array. So week 1 everyone reviews the person to their right + 0. Week 2 everyone reviews the person to their right + 1. Week 3, right + 2, etc.
Week 1:
A -> B
B -> C
...
F -> A
Week 2:
A -> C
B -> D
...
F -> B
I seem to have found a solution inspired by the Round Robin rotation.
For Developers A, B, C, D, E, F
You fix a developer, say A. Then rotate the rest in a clockwise manner.
Then:
Everyone on the top row reviews the person below them
Everyone on the bottom row reviews the person diagonally above and to the right of them
Week 1:
A B C
D E F
AD
BE
CF
DB
EC
FA
Week 2:
A D B
E F C
AE
DF
BC
ED
FB
CA
Week 3:
A E D
F C B
AF
EC
DB
FE
CD
BA
Week 4:
A F E
C B D
AC
FB
ED
CF
BE
DA
Week 5:
A C F
B D E
AB
CD
FE
BC
DF
EA
Although it still has the unwanted property that some developers never review certain others; for example, B never reviews D.
Here's a brute-force in Haskell (takes about 10 seconds to get going).
Code:
import Control.Monad (guard, replicateM)

developers = ["A", "B", "C", "D", "E", "F"]

combinations = filter (\x -> head x /= last x) . replicateM 2 $ developers

makeWeek week =
  if length week == length developers
     then [week]
     else do
       review <- combinations
       guard (notElem (take 1 review) (map (take 1) week)
              && notElem (drop 1 review) (map (drop 1) week)
              && notElem (reverse review) week
              && notElem review week)
       makeWeek (review:week)

solve = solve' [] where
  solve' weeks =
    if length weeks == length developers - 1
       then [weeks]
       else do
         week' <- makeWeek []
         guard (all (\x -> notElem x (concat . take (length developers - 1) $ weeks)) week')
         solve' (week':weeks)
Sample Output:
*Main> solve
[[[["F","B"],["E","A"],["D","C"],["C","E"],["B","D"],["A","F"]]
,[["F","C"],["E","B"],["D","A"],["C","D"],["B","F"],["A","E"]]
,[["F","A"],["E","C"],["D","B"],["C","F"],["B","E"],["A","D"]]
,[["F","E"],["E","D"],["D","F"],["C","B"],["B","A"],["A","C"]]
,[["F","D"],["E","F"],["D","E"],["C","A"],["B","C"],["A","B"]]],...etc
I will assume that by closed loops, you refer to cycles of length exactly 2. That is, it is allowed that A reviews B, B reviews C and C reviews A.
Let n be the number of people, and let 0, ..., n-1 be their names.
Week 1: Person i reviews the code of person (i + 1) % n.
Week 2: Person i reviews the code of person (i + 2) % n.
...
Week n/2: Person i cannot review the code of person (i + n/2) % n, since this would cause a closed loop. Therefore, person i instead reviews the code of person (i + n/2 + 1) % n.
Week n/2 + 1: Person i reviews the code of person (i + n/2 + 2) % n.
...
Week n - 1: Person i reviews the code of person (i + 1) % n again, everything starts over.
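A hedged Haskell sketch of this scheme (my own; reviewOffsets and scheduleWeek are made-up names), assuming n is even:

-- Offsets used over one full cycle: every offset except 0 and n/2,
-- since offset n/2 would pair two people with each other (a closed loop).
reviewOffsets :: Int -> [Int]
reviewOffsets n = [ k | k <- [1 .. n - 1], k /= n `div` 2 ]

-- The week for a given offset: person i reviews person (i + offset) mod n.
scheduleWeek :: Int -> Int -> [(Int, Int)]
scheduleWeek n offset = [ (i, (i + offset) `mod` n) | i <- [0 .. n - 1] ]

-- e.g. map (scheduleWeek 6) (reviewOffsets 6) gives the n - 2 distinct weeks
-- before everything starts over.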
Note: your last (optional) requirement (each person reviews every other person before the cycle starts again) is violated. For n = 2 and n = 4, no solution exists that satisfies all the requirements anyway. The base case n = 2 is trivial. Consider the case n = 4: if you want to avoid closed loops, at least one person has to review the same person twice in a row. (Just enumerate all possible review relationships to see this.)
If you really need your last requirement, you'll have to go with @groovy's solution. I'll leave mine here since it's very easy to compute.
