The Movie Scheduling Problem - algorithm

Currently I'm reading "The Algorithm Design Manual" by Skiena (well, beginning to read).
He poses a problem he calls the "Movie Scheduling Problem":
Problem: Movie Scheduling Problem
Input: A set I of n intervals on the line.
Output: What is the largest subset of mutually non-overlapping intervals which can
be selected from I?
Example: (Each dashed line is a movie; you want to find a set with as many movies as possible)
----a---
-----b---- -----c--- ---d---
-----e--- -------f---
--g-- --h--
The algorithm I thought of to solve it was like this:
I could repeatedly throw out the "worst offender" (the movie that intersects the most other movies) until there are no worst offenders left (zero intersections). The only problem I see is ties: if, say, two different movies each intersect 3 other movies, could it matter which one I throw out?
Basically I'm wondering how to turn the idea into "math" and how to prove it correct or incorrect.

The algorithm is incorrect. Let's consider the following example:
Counterexample
|----F----| |-----G------|
|-------D-------| |--------E--------|
|-----A------| |------B------| |------C-------|
You can see that there is a solution of size at least 3 because you can pick A, B and C.
First, let's count, for each interval, the number of intersections:
A = 2 [F, D]
B = 4 [D, F, E, G]
C = 2 [E, G]
D = 3 [A, B, F]
E = 3 [B, C, G]
F = 3 [A, B, D]
G = 3 [B, C, E]
Now consider a run of your algorithm. In the first step we delete B because it intersects the most intervals, and we get:
|----F----| |-----G------|
|-------D-------| |--------E--------|
|-----A------| |------C-------|
It's easy to see that now from {A, D, F} you can choose only one, because each pair intersects. The same goes for {C, E, G}, so after deleting B you can choose at most one from {A, D, F} and at most one from {C, E, G}, for a total of 2, which is smaller than the size of {A, B, C}.
The conclusion is that after deleting B, the interval that intersects the most other intervals, you can't get the maximum number of non-intersecting movies.
Correct solution
The problem is very well known: one solution is to pick the interval that ends first, delete all intervals intersecting it, and repeat until there are no intervals left to examine. This is an example of a greedy method, and you can find or develop a proof that it's correct (the usual exchange argument: the interval that ends first can always be swapped into an optimal solution without making it smaller).
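For concreteness, here is a minimal sketch of that greedy method in Python (my own illustration, not from the original answer), assuming each movie is given as a (start, end) pair:
def max_nonoverlapping(intervals):
    # Pick the interval that ends first, drop everything overlapping it, repeat.
    chosen = []
    last_end = float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # does not overlap the last chosen interval
            chosen.append((start, end))
            last_end = end
    return chosen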

This looks like a dynamic programming problem to me:
Define the following functions:
sched(t) = best schedule starting at time t
next(t) = set of movies that start next after time t
len(m) = length of movie m
next returns a set because there may be more than one movie that starts at the same time.
Then sched can be defined as follows:
sched(t) = max { 1 + sched(t + len(m)), sched(t+1) } where m in next(t)
This recursive function selects a movie m from next(t) and compares the largest possible sets that either include or don't include m.
Invoke sched with the time of your first movie and you will get the size of the optimal set. Getting the optimal set itself just requires a little extra logic to remember which movies you select at each invocation.
I think this recursive (as opposed to iterative) algorithm runs in O(n^2) if you use memoization, where n is the number of movies.
It's correct, but I'd have to consult my algorithms textbook to give you an explicit proof; hopefully the algorithm makes intuitive sense.
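To make the recurrence concrete, here is a small memoized sketch (my own illustration, assuming integer times and movies given as (start, end) pairs, so len(m) is just end - start and next(t) reduces to "movies starting exactly at t"):
from functools import lru_cache

def best_schedule_size(movies):
    # movies: list of (start, end) pairs with small integer times
    last = max(end for _, end in movies)

    @lru_cache(maxsize=None)
    def sched(t):
        if t > last:
            return 0
        best = sched(t + 1)  # the sched(t+1) branch: skip time t
        for start, end in movies:
            if start == t:  # m in next(t)
                best = max(best, 1 + sched(end))  # watch m, resume when it ends
        return best

    return sched(min(start for start, _ in movies))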

# Go through the database and create a 2-D matrix indexed a..h by a..h. Set each
# element of the matrix to 1 if the row-index movie overlaps the column-index movie.
mtx = [[0] * 8 for _ in range(8)]

# Overlapping pairs, with movies numbered 0 = a .. 7 = h.
overlaps = [
    (1, 4),  # b <> e
    (4, 6),  # e <> g
    (4, 2),  # e <> c
    (2, 0),  # c <> a
    (2, 5),  # c <> f
    (2, 6),  # c <> g
    (2, 7),  # c <> h
    (3, 5),  # d <> f
    (0, 5),  # a <> f
    (0, 3),  # a <> d
    (0, 7),  # a <> h
    (4, 7),  # e <> h
]
for i, j in overlaps:
    mtx[i][j] = 1
    mtx[j][i] = 1

# Print out the constraints.
for line in mtx:
    print(line)

# Keep track of which movies are still allowed.
allowed = set(range(8))

# Loop through in greedy fashion, picking the movie that throws out the fewest
# other movies at each step.
while allowed:
    best_col = None
    best_lost = set()
    best = 8  # worst possible score; a conflict-free movie scores 0
    # Each step, only try movies still allowed.
    for col in allowed:
        # Keep track of the other movies eliminated by this selection.
        lost = {row for row in range(8) if mtx[row][col] == 1}
        # This was the best of all the allowed choices so far.
        if len(lost) < best:
            best_col = col
            best_lost = lost
            best = len(lost)
    # Process the best selection.
    print('watch movie:', chr(best_col + ord('a')))
    for row in best_lost:
        # Eliminate the other movies you can no longer watch.
        if row in allowed:
            print('throwing out:', chr(row + ord('a')))
            allowed.remove(row)
    # Also throw out this movie from the allowed list (can't watch it twice).
    allowed.remove(best_col)
# This is just a greedy algorithm, not guaranteed optimal!
# You could also iterate through all possible combinations of movies
# and simply eliminate all illegal possibilities (brute-force search).

Related

Find a substitution that sorts the list

Consider the following words:
PINEAPPLE
BANANA
ARTICHOKE
TOMATO
The goal is to sort the list (in lexicographical order) without moving the words themselves, only by using letter substitution. In this example, I can replace the letter P with A and the letter A with P, so:
AINEPAALE
BPNPNP
PRTICHOKE
TOMPTO
This list is in lexicographical order. If you switch two letters, they are switched in all words. It is worth noting that you can use the whole alphabet, not just the letters in the words in the list.
I spent considerable time on this problem, but was not able to think of anything other than brute force (trying all letter-switch combinations), nor was I able to come up with the conditions that determine when the list can be sorted.
Some more examples:
ABC
ABB
ABD
can be turned into
ACB
ACC
ACD
which satisfies the condition.
Let's assume the problem is possible for a particular case, just for now. Also, for simplicity, assume all the words are distinct (if two words are identical, they must be adjacent and one can be ignored).
The problem then turns into a topological sort, though the details are slightly different from suspicious dog's answer, which misses a couple of cases.
Consider a graph of 26 nodes, labeled A through Z. Each pair of adjacent words contributes one directed edge to the partial ordering; this corresponds to the first character in which the words differ. For example, with the two words ABCEF and ABRKS in order, the first difference is in the third character, so sigma(C) < sigma(R).
The result can be obtained by doing a topological sort on this graph, and substituting A for the first node in the ordering, B for the second, etc.
Note that this also gives a useful test of when the problem is impossible to solve. This occurs when two identical words are not adjacent, when one word is a prefix of another but comes after it, or when the graph has a cycle and topological sort is impossible.
Here is a fully functional solution in Python, complete with detection of when a particular instance of the problem is unsolvable.
def topoSort(N, adj):
    stack = []
    visited = [False for _ in range(N)]
    current = [False for _ in range(N)]

    def dfs(v):
        if current[v]: return False  # there's a cycle!
        if visited[v]: return True
        visited[v] = current[v] = True
        for x in adj[v]:
            if not dfs(x):
                return False
        current[v] = False
        stack.append(v)
        return True

    for i in range(N):
        if not visited[i]:
            if not dfs(i):
                return None
    return list(reversed(stack))

def solve(wordlist):
    N = 26
    adj = [set([]) for _ in range(N)]  # adjacency list
    for w1, w2 in zip(wordlist[:-1], wordlist[1:]):
        idx = 0
        while idx < len(w1) and idx < len(w2):
            if w1[idx] != w2[idx]: break
            idx += 1
        else:
            # no differences found between the words
            if len(w1) > len(w2):
                return None
            continue
        c1, c2 = w1[idx], w2[idx]
        # we want c1 < c2 after the substitution
        adj[ord(c1) - ord('A')].add(ord(c2) - ord('A'))
    li = topoSort(N, adj)
    if li is None:
        # a cycle in the constraint graph: no valid substitution exists
        return None
    sub = {}
    for i in range(N):
        sub[chr(ord('A') + li[i])] = chr(ord('A') + i)
    return sub

def main():
    words = ['PINEAPPLE', 'BANANA', 'ARTICHOKE', 'TOMATO']
    print('Before: ' + ' '.join(words))
    sub = solve(words)
    nwords = [''.join(sub[c] for c in w) for w in words]
    print('After : ' + ' '.join(nwords))

if __name__ == '__main__':
    main()
EDIT: The time complexity of this solution is a provably-optimal O(S), where S is the length of the input. Thanks to suspicious dog for this; the original time complexity was O(N^2 L).
Update: the original analysis was wrong and failed on some class of test cases, as pointed out by Eric Zhang.
I believe this can be solved with a form of topological sort. Your initial list of words defines a partial order or a directed graph on some set of letters. You wish to find a substitution that linearizes this graph of letters. Let's use one of your non-trivial examples:
P A R K O V I S T E
P A R A D O N T O Z A
P A D A K
A B B A
A B E C E D A
A B S I N T
Let x <* y indicate that substitution(x) < substitution(y) for some letters (or words) x and y. We want word1 <* word2 <* word3 <* word4 <* word5 <* word6 overall, but in terms of letters, we just need to look at each pair of adjacent words and find the first pair of differing characters in the same column:
K <* A (from PAR[K]OVISTE <* PAR[A]DONTOZA)
R <* D (from PA[R]ADONTOZA <* PA[D]AK)
P <* A (from [P]ADAK <* [A]BBA)
B <* E (from AB[B]A <* AB[E]CEDA)
E <* S (from AB[E]CEDA <* AB[S]INT)
If we find no mismatched letters, then there are 3 cases:
word 1 and word 2 are the same
word 1 is a prefix of word 2
word 2 is a prefix of word 1
In cases 1 and 2, the words are already in lexicographic order, so we don't need to perform any substitutions (although we might) and they add no extra constraints that we need to adhere to. In case 3, there is no substitution at all that will fix this (think of ["DOGGO", "DOG"]), so there's no possible solution and we can quit early.
Otherwise, we build the directed graph corresponding to the partial ordering information we obtained and perform a topological sort. If the sorting process indicates that no linearization is possible, then there is no solution for sorting the list of words. Otherwise, you get back something like:
P <* K <* R <* B <* E <* A <* D <* S
Depending on how you implement your topological sort, you might get a different linear ordering. Now you just need to assign each letter a substitution that respects this ordering and is itself sorted alphabetically. A simple option is to pair the linear ordering with itself sorted alphabetically, and use that as the substitution:
P <* K <* R <* B <* E <* A <* D <* S
| | | | | | | |
A < B < D < E < K < P < R < S
But you could implement a different substitution rule if you wish.
Here's a proof-of-concept in Python:
import collections

# a pair of outgoing and incoming edges
Edges = collections.namedtuple('Edges', 'outgoing incoming')

# a mapping from nodes to edges
Graph = lambda: collections.defaultdict(lambda: Edges(set(), set()))

def substitution_sort(words):
    graph = build_graph(words)
    if graph is None:
        return None
    ordering = toposort(graph)
    if ordering is None:
        return None
    # create a substitution that respects `ordering`
    substitutions = dict(zip(ordering, sorted(ordering)))
    # apply substitutions
    return [
        ''.join(substitutions.get(char, char) for char in word)
        for word in words
    ]

def build_graph(words):
    graph = Graph()
    # loop over every pair of adjacent words and find the first
    # pair of corresponding characters where they differ
    for word1, word2 in zip(words, words[1:]):
        for char1, char2 in zip(word1, word2):
            if char1 != char2:
                break
        else:  # no differing characters found...
            if len(word1) > len(word2):
                # ...but word2 is a prefix of word1 and comes after;
                # therefore, no solution is possible
                return None
            else:
                # ...so no new information to add to the graph
                continue
        # add edge from char1 -> char2 to the graph
        graph[char1].outgoing.add(char2)
        graph[char2].incoming.add(char1)
    return graph

def toposort(graph):
    "Kahn's algorithm; returns None if graph contains a cycle"
    result = []
    working_set = {node for node, edges in graph.items() if not edges.incoming}
    while working_set:
        node = working_set.pop()
        result.append(node)
        outgoing = graph[node].outgoing
        while outgoing:
            neighbour = outgoing.pop()
            neighbour_incoming = graph[neighbour].incoming
            neighbour_incoming.remove(node)
            if not neighbour_incoming:
                working_set.add(neighbour)
    if any(edges.incoming or edges.outgoing for edges in graph.values()):
        return None
    else:
        return result

def print_all(items):
    for item in items:
        print(item)
    print()

def test():
    test_cases = [
        ('PINEAPPLE BANANA ARTICHOKE TOMATO', True),
        ('ABC ABB ABD', True),
        ('AB AA AB', False),
        ('PARKOVISTE PARADONTOZA PADAK ABBA ABECEDA ABSINT', True),
        ('AA AB CA', True),
        ('DOG DOGGO DOG DIG BAT BAD', False),
        ('DOG DOG DOGGO DIG BIG BAD', True),
    ]
    for words, is_sortable in test_cases:
        words = words.split()
        print_all(words)
        subbed = substitution_sort(words)
        if subbed is not None:
            assert subbed == sorted(subbed), subbed
            print_all(subbed)
        else:
            print('<no solution>')
            print()
        print('expected solution?', 'yes' if is_sortable else 'no')
        print()

if __name__ == '__main__':
    test()
Now, it's not ideal--for example, it still performs a substitution even if the original list of words is already sorted--but it appears to work. I can't formally prove it works though, so if you find a counter-example, please let me know!
Extract the first letter of each word in the list: (P, B, A, T).
Sort those letters: (A, B, P, T).
Replace each original first letter, in all words, with the letter at the same position in the sorted list:
Replace P (PINEAPPLE) with A in all words.
Replace B with B in all words.
Replace A with P in all words.
Replace T with T in all words.
This will give you the intended result.
Edit:
Compare two adjacent strings. If the first is greater than the second, find the first position where their characters differ, swap those two characters, and apply the swap to all words.
Repeat this over the entire list, as in bubble sort.
Example -
We want ABC to sort before ABB. The first character mismatch is at the 3rd position, so we swap all C's with B's.
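For what it's worth, a literal sketch of this swap-and-repeat scheme in Python (the helper name and the pass cap are mine; I haven't proven that the scheme always converges, hence the crude cap):
def try_sort_by_swaps(words, max_passes=1000):
    # Repeatedly find an out-of-order adjacent pair and swap the two letters
    # at their first mismatched position, in every word.
    words = list(words)
    for _ in range(max_passes):
        for w1, w2 in zip(words, words[1:]):
            if w1 > w2:
                mismatch = next(((a, b) for a, b in zip(w1, w2) if a != b), None)
                if mismatch is None:
                    return None  # w2 is a proper prefix of w1; no swap can fix this
                c1, c2 = mismatch
                table = str.maketrans(c1 + c2, c2 + c1)
                words = [w.translate(table) for w in words]
                break  # restart the scan after every swap
        else:
            return words  # no adjacent pair out of order: sorted
    return None  # gave up; possibly unsolvable
On the question's example, try_sort_by_swaps(['ABC', 'ABB', 'ABD']) returns ['ACB', 'ACC', 'ACD'], matching the example above.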

Algorithm to group items in groups of 3

I am trying to solve a problem where I have pairs like:
A C
B F
A D
D C
F E
E B
A B
B C
E D
F D
and I need to group them in groups of 3, where each group must form a triangle of matches from that list. Basically I need a result saying whether or not it is possible to group a collection.
So the possible groups are (ACD and BFE), or (ABC and DEF), and this collection is groupable since all letters can be grouped in groups of 3 and none is left out.
I made a script where I can achieve this for small amounts of input, but for big amounts it gets too slow.
My logic is:
make a nested loop to find the first match (looping until I find a match)
> remove those 3 elements from the collection
> run again
and I do this until I am out of letters. Since there can be different combinations, I run this multiple times starting with different letters until I find a match.
I understand this gives me loops on the order of at least N^N and it can get too slow. Is there better logic for such problems? Can a binary tree be used here?
This problem can be modeled as a graph clique cover problem. Every letter is a node and every pair is an edge, and you want to partition the graph into vertex-disjoint cliques of size 3 (triangles). If you want the partitioning to be of minimum cardinality, then you want a minimum clique cover.
Strictly speaking, this is a k-clique cover problem, because in the general clique cover problem the cliques can have arbitrary/different sizes.
As Alberto Rivelli already stated, this problem is reducible to the Clique Cover problem, which is NP-hard.
It is also reducible to the problem of finding a clique of a particular/maximum size. Maybe there are other problems, not NP-hard, to which your particular case could be reduced, but I didn't think of any.
However, there exist algorithms that, while exponential in the worst case, often find the solution quickly in practice. One of them is the Bron–Kerbosch algorithm, widely regarded as one of the most efficient algorithms for finding a maximum clique; it runs in O(3^(n/3)) in the worst case. I don't know the size of your inputs, but I hope it will be sufficient for your problem.
Here is the code in Python, ready to go:
#!/usr/bin/python3
# by DeFazer
# Solution to:
# stackoverflow.com/questions/40193648/algorithm-to-group-items-in-groups-of-3
# Input:
#     N P - number of vertices and number of pairs
#     P pairs, 1 pair per line
# Output:
#     "YES" and the groups themselves if grouping is possible, "NO" otherwise
# Input example:
#     6 10
#     1 3
#     2 6
#     1 4
#     4 3
#     6 5
#     5 2
#     1 2
#     2 3
#     5 4
#     6 4
# Output example:
#     YES
#     1-2-3
#     4-5-6
# Output commentary:
#     There are 2 possible coverages: 1-2-3*4-5-6 and 2-5-6*1-3-4.
#     If required, it can easily be modified to return all possible groupings rather than just one.
# Algorithm:
#     1) List *all* existing triangles (1-2-3, 1-3-4, 2-5-6...)
#     2) Build a graph where vertices represent triangles and edges connect triangles with no common vertices.
#     3) Use [this](en.wikipedia.org/wiki/Bron–Kerbosch_algorithm) algorithm (slightly modified) to find a clique of size N/3.
#        The grouping is possible if such a clique exists.

N, P = map(int, input().split())
assert (N % 3 == 0) and (N > 0)
cliquelength = N // 3
pairs = {}  # {a:{b, d, c}, b:{a, c, f}, c:{a, b}...}

# Get input
# [(0, 1), (1, 3), (3, 2)...]
##pairlist = list(map(lambda ab: tuple(map(lambda a: int(a)-1, ab)), (input().split() for pair in range(P))))
pairlist = []
for pair in range(P):
    a, b = map(int, input().split())
    if a > b:
        b, a = a, b
    a, b = a-1, b-1
    pairlist.append((a, b))
pairlist.sort()

for a, b in pairlist:
    if a not in pairs:
        pairs[a] = set()
    pairs[a].add(b)

# Make a list of triangles
triangles = []
for a in range(N-2):
    for b in pairs.get(a, []):
        for c in pairs.get(b, []):
            if c in pairs[a]:
                triangles.append((a, b, c))
                break

def no_mutual_elements(sortedtupleA, sortedtupleB):
    # Utility function
    # TODO: if too slow, can be improved to O(n) since the tuples are sorted.
    # However, there are only 9 comparisons in the case of triangles.
    return all((a not in sortedtupleB) for a in sortedtupleA)

# Make a graph out of that list
tgraph = []  # if a<b and (b in tgraph[a]), then triangles[a] has no common elements with triangles[b]
T = len(triangles)
for t1 in range(T):
    s = set()
    for t2 in range(t1+1, T):
        if no_mutual_elements(triangles[t1], triangles[t2]):
            s.add(t2)
    tgraph.append(s)

def connected(a, b):
    if a > b:
        b, a = a, b
    return (b in tgraph[a])

# Finally, the magic algorithm!
CSUB = set()
def extend(CAND: set, NOT: set) -> bool:
    # while CAND is not empty and there is no vertex in NOT connected to *all* vertices in CAND
    while CAND and all((any(not connected(n, c) for c in CAND)) for n in NOT):
        v = CAND.pop()
        CSUB.add(v)
        newCAND = {c for c in CAND if connected(c, v)}
        newNOT = {n for n in NOT if connected(n, v)}
        if (not newCAND) and (not newNOT) and (len(CSUB) == cliquelength):  # the last condition is the algorithm modification
            return True
        elif extend(newCAND, newNOT):
            return True
        else:
            CSUB.remove(v)
            NOT.add(v)
    return False

if extend(set(range(T)), set()):
    print("YES")
    # If the clique itself is not needed, it's enough to remove the following 2 lines
    for a, b, c in (triangles[c] for c in CSUB):
        print("{}-{}-{}".format(a+1, b+1, c+1))
else:
    print("NO")
If this solution is still too slow, perhaps it may be more efficient to solve the Clique Cover problem instead. If that's the case, I can try to find a proper algorithm for it.
Hope that helps!
Well, I have implemented the job in JS, where I feel most confident. I also tried it with 100000 edges randomly selected from 26 letters. Provided that they are all unique and there is no degenerate edge such as ["A","A"], it resolves in around 90~500 msec. The most convoluted part was obtaining the non-identical groups, i.e. filtering out groups that differ only in the order of their triangles. For the given edge data it resolves within 1 msec.
As a summary, the first reduce stage finds the triangles and the second reduce stage groups the disconnected ones.
function getDisconnectedTriangles(edges){
  return edges.reduce(function(p,e,i,a){
                var ce = a.slice(i+1)
                          .filter(f => f.some(n => e.includes(n))), // connected edges
                    re = [];                                        // resulting edges
                if (ce.length > 1){
                  re = ce.reduce(function(r,v,j,b){
                         var xv = v.find(n => e.indexOf(n) === -1), // find the external vertex
                             xe = b.slice(j+1)                      // find the external edges
                                   .filter(f => f.indexOf(xv) !== -1);
                         return xe.length ? (r.push([...new Set(e.concat(v,xe[0]))]),r) : r;
                       },[]);
                }
                return re.length ? p.concat(re) : p;
              },[])
              .reduce((s,t,i,a) => t.used ? s
                                          : (s.push(a.map((_,j) => a[(i+j)%a.length])
                                                     .reduce((p,c,k) => k-1 ? p.every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, p.push(c),p) : p
                                                                            : [p].every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, [p,c]) : [p])),s)
                     ,[]);
}

var edges = [["A","C"],["B","F"],["A","D"],["D","C"],["F","E"],["E","B"],["A","B"],["B","C"],["E","D"],["F","D"]],
    ps = 0,
    pe = 0,
    result = [];

ps = performance.now();
result = getDisconnectedTriangles(edges);
pe = performance.now();
console.log("Disconnected triangles are calculated in", pe-ps, "msecs and the result is:");
console.log(result);
You may generate random edges in different lengths and play with the code here

find all indices of multiple value pairs in a matrix

Suppose I have a matrix A, containing possible value pairs and a matrix B, containing all value pairs:
A = [1,1;2,2;3,3];
B = [1,1;3,4;2,2;1,1];
I would like to create a matrix C that contains all pairs that are allowed by A (i.e. C = [1,1;2,2;1,1]).
Using C = ismember(A,B,'rows') will only show the first occurrence of 1,1, but I need both.
Currently I use a for-loop to create C, which looks like:
TFtot = false(size(B,1),1);
for i = 1:size(A,1)
    TF1 = A(i,1) == B(:,1) & A(i,2) == B(:,2);
    TFtot = TF1 | TFtot;
end
C = B(TFtot,:);
I would like to create a faster approach, because this loop currently greatly slows down the algorithm.
You're pretty close. You just need to swap B and A, then use this output to index into B:
L = ismember(B, A, 'rows');
C = B(L,:);
How ismember works in this particular case is that it outputs a logical vector with the same number of rows as B, where the ith value tells you whether the ith row of B appears somewhere in A (logical 1) or not (logical 0).
You want to select out those entries in B that are seen in A, and so you simply use the output of ismember to slice into B to extract out the affected rows, and grab all of the columns.
We get for C:
>> C

C =

     1     1
     2     2
     1     1
Here's an alternative using bsxfun:
C = B(all(any(bsxfun(@eq, B, permute(A, [3 2 1])),3),2),:);
Or you could use pdist2 (Statistics Toolbox):
B(any(~pdist2(A,B),1),:);
Using matrix-multiplication-based Euclidean distance calculations:
Bt = B.';
[m,n] = size(A);
dists = [A.^2 ones(size(A)) -2*A] * [ones(size(Bt)); Bt.^2; Bt];
C = B(any(dists == 0, 1), :);

numpy: evaluating function in matrix, using previous array as argument in calculating the next

I have an m x n array a, where m > 1E6 and n <= 5.
I have functions F and G, which are composed like this: F(u, G(u, t)). u is a 1 x n array, t is a scalar, and F and G return 1 x n arrays.
I need to evaluate each row of a with F, using the previously evaluated row as the u-array for the next evaluation. I need to make m such evaluations.
This has to be really fast. I was previously impressed by scitools.std StringFunction evaluation over a whole array, but this problem requires using the previously calculated row as an argument in calculating the next, and I don't know if StringFunction can do this.
For example:
from numpy import zeros, asarray, cos

a = zeros((1000000, 4))
a[0] = asarray([1., 69., 3., 4.1])

# A is a float defined elsewhere; h is a function which accepts a float as its
# argument and returns an arbitrary float. h is defined elsewhere.
def G(u, t):
    return asarray([u[0], u[1]*A, cos(u[2]), t*h(u[3])])

def F(u, t):
    return u + G(u, t)

dt = 1E-6
for i in range(1, 1000000):
    a[i] = F(a[i-1], i*dt)
The problem with the above code is that it is slow as hell. I need numpy to get these calculations done in milliseconds.
How can I do what I want?
Thank you for your time.
Kind regards,
Marius
This sort of thing is very difficult to do in numpy. If we look at it column by column, we see a few simpler solutions.
a[:,0] is very easy:

col0 = np.ones(1000) * 2
col0[0] = 1  # or whatever start value
np.cumprod(col0, out=col0)
np.allclose(col0, a[:1000, 0])
True

As mentioned in the other answer, this will overflow very quickly. a[:,1] can be done much along the same lines.
I do not believe there is a way to do the next two columns quickly in numpy alone. We can turn to numba for this:

from numba import autojit

def python_loop(start, count):
    out = np.zeros((count), dtype=np.double)
    out[0] = start
    for x in xrange(count-1):
        out[x+1] = out[x] + np.cos(out[x])
    return out

numba_loop = autojit(python_loop)
np.allclose(numba_loop(3, 1000), a[:1000, 2])
True

%timeit python_loop(3, 1000000)
1 loops, best of 3: 4.14 s per loop
%timeit numba_loop(3, 1000000)
1 loops, best of 3: 42.5 ms per loop
Although it's worth pointing out that this converges to pi/2 very quickly, and there is little point in calculating this recursion past ~20 values for any start value. The following returns the exact same answer to double precision; I didn't bother finding the exact cutoff, but it is much less than 50:

%timeit tmp = np.empty((1000000)); tmp[:50] = numba_loop(3, 50); tmp[50:] = np.pi/2
100 loops, best of 3: 2.25 ms per loop
You can do something similar with the fourth column. Of course you can autojit all of the functions, but this gives you several different options to try out depending on numba usage:
Use cumprod for the first two columns
Use an approximation for column 3 (and possibly 4) where only the first few iterations are calculated
Implement columns 3 and 4 in numba using autojit
Wrap everything inside an autojit loop (the best option; see the sketch below)
The way you have presented this, all rows past ~200 will be either np.inf or np.pi/2. Exploit this.
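As a rough illustration of that last option, something along these lines should work; the loop body mirrors F and G from the question, and A, dt and the stand-in for h are placeholders for the values defined elsewhere:

import numpy as np
from numba import autojit

def fill_rows(a, A, dt):
    # Fill every row from the previous one in a single pass.
    for i in range(1, a.shape[0]):
        t = i * dt
        a[i, 0] = a[i-1, 0] + a[i-1, 0]             # u[0] + u[0]
        a[i, 1] = a[i-1, 1] + a[i-1, 1] * A         # u[1] + u[1]*A
        a[i, 2] = a[i-1, 2] + np.cos(a[i-1, 2])     # u[2] + cos(u[2])
        a[i, 3] = a[i-1, 3] + t * (a[i-1, 3] ** 2)  # u[3] + t*h(u[3]); u**2 is a stand-in for h

fast_fill_rows = autojit(fill_rows)

Calling fast_fill_rows(a, A, dt) would then replace the Python-level loop from the question.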
Slightly faster. Your first column is basically 2^n. Calculating 2^n for n up to 1000000 is going to overflow; the second column is even worse.
def calc(arr, t0=1E-6):
    u = arr[0]
    dt = 1E-6
    h = lambda x: np.random.random(1)*50.0

    def firstColGen(uStart):
        u = uStart
        while True:
            u += u
            yield u

    def secondColGen(uStart, A):
        u = uStart
        while True:
            u += u*A
            yield u

    def thirdColGen(uStart):
        u = uStart
        while True:
            u += np.cos(u)
            yield u

    def fourthColGen(uStart, h, t0, dt):
        u = uStart
        t = t0
        while True:
            u += h(u) * dt
            t += dt
            yield u

    first = firstColGen(u[0])
    second = secondColGen(u[1], A)
    third = thirdColGen(u[2])
    fourth = fourthColGen(u[3], h, t0, dt)
    for i in xrange(1, len(arr)):
        arr[i] = [first.next(), second.next(), third.next(), fourth.next()]

Algorithm for: All possible ways of splitting a set of elements into two sets?

I have n elements in a set U (let's assume they are represented by an array of size n). I want to find all possible ways of dividing the set U into two sets A and B, where |A| + |B| = n.
So for example, if U = {a,b,c,d}, the combinations would be:
A = {a} -- B = {b,c,d}
A = {b} -- B = {a,c,d}
A = {c} -- B = {a,b,d}
A = {d} -- B = {a,b,c}
A = {a,b} -- B = {c,d}
A = {a,c} -- B = {b,d}
A = {a,d} -- B = {b,c}
Note that the following two cases are considered equal and only one should be computed:
Case 1: A = {a,b} -- B = {c,d}
Case 2: A = {c,d} -- B = {a,b}
Also note that none of the sets A or B can be empty.
The way I'm thinking of implementing it is by just keeping track of indices in the array and moving them step by step. The number of indices will equal the number of elements in set A, and set B will contain all the remaining, un-indexed elements.
I was wondering if anyone knew of a better implementation. I'm looking for better efficiency because this code will be executed on a fairly large data set.
Thanks!
Take all the integers from 1 to 2^(n-1), non-inclusive. So if n = 4, the integers from 1 to 7.
Each of these numbers, written in binary, represents the elements present in set A. Set B consists of the remaining elements. Note that since we only go up to 2^(n-1), not 2^n, the highest bit is never set, so the element it stands for always lands in set B; pinning that one element to B is what ensures each {A, B} split is counted only once, since you want order not to matter.
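A minimal sketch of this enumeration in Python (my mapping puts bit i on element i, so here it is the last element that always stays in B; the deduplication argument is the same):
def splits(U):
    # Enumerate every way to split U into two non-empty sets A and B,
    # counting {A, B} and {B, A} as the same split.
    n = len(U)
    for mask in range(1, 1 << (n - 1)):  # the integers 1 .. 2^(n-1) - 1
        A = [U[i] for i in range(n) if mask >> i & 1]
        B = [U[i] for i in range(n) if not mask >> i & 1]
        yield A, B
For U = ['a', 'b', 'c', 'd'] this yields the same seven splits listed in the question (with the roles of A and B possibly swapped).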
