I am getting the following errors when evaluating and fitting my model:
2020-11-11 16:08:44.264595: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-11-11 16:08:44.267135: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2020-11-11 16:08:44.285733: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1070 with Max-Q Design major: 6 minor: 1 memoryClockRate(GHz): 1.2655
pciBusID: 0000:01:00.0
2020-11-11 16:08:44.285768: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-11-11 16:08:44.287137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-11-11 16:08:44.763834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-11 16:08:44.763859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2020-11-11 16:08:44.763865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2020-11-11 16:08:44.763991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6798 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-11-11 16:08:51.749906: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:08:51.764509: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:08:51.779508: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] layout failed: Invalid argument: The graph couldn't be sorted in topological order.
2020-11-11 16:08:51.893354: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] remapper failed: Invalid argument: The graph couldn't be sorted in topological order.
2020-11-11 16:08:51.900916: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] arithmetic_optimizer failed: Invalid argument: The graph couldn't be sorted in topological order.
2020-11-11 16:08:51.910006: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:08:51.926004: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.427153: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.429043: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.431491: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] layout failed: Invalid argument: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.444270: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] remapper failed: Invalid argument: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.445833: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] arithmetic_optimizer failed: Invalid argument: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.447143: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-11-11 16:09:42.448878: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
The model itself is an autoencoder (getting 512x1 arrays as inputs):
# ENCODER
# (imports, assuming tf.keras; activation, initializer and bottleneck_dim
# are defined earlier in my script)
from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D, Flatten,
                                     Dense, Reshape, UpSampling1D)
from tensorflow.keras.models import Model

img_width = 512
input_shape = (img_width, 1)
maxpoolSize = 4
i = Input(shape=input_shape, name='encoder_input')
x = Conv1D(64, kernel_size=64, activation=activation, padding='same')(i)
x = MaxPooling1D(maxpoolSize, padding='same')(x)
x = Conv1D(64, kernel_size=32, activation=activation, padding='same')(x)
x = MaxPooling1D(maxpoolSize, padding='same')(x)
x = Flatten()(x)
x = Dense(128, activation=activation, kernel_initializer=initializer)(x)
bottleneck = Dense(bottleneck_dim, activation=activation, kernel_initializer=initializer, name="bottleneck")(x)
encoder = Model(i, bottleneck, name='encoder')
# DECODER
d_i = Input(shape=(bottleneck_dim,), name='decoder_input')
x = Dense(128, activation=activation, kernel_initializer=initializer)(d_i)
x = Reshape((128,1))(x)
x = Conv1D(64, kernel_size=32, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)
x = UpSampling1D(4)(x)
x = Conv1D(64, kernel_size=32, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)
x = UpSampling1D(4)(x)
x = Conv1D(64, kernel_size=32, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)
x = UpSampling1D(2)(x)
decoded = Conv1D(1, kernel_size=32, activation='sigmoid', padding='same')(x)
decoder = Model(d_i, decoded, name='decoder')
# instantiate autoEncoder
autoEncoder_outputs = decoder(encoder(i))
autoEncoder = Model(i, autoEncoder_outputs, name='autoEncoder')
Whenever I evaluate or fit the autoencoder, these errors appear. This seems strange to me, as I do not see any loops in the models. Would anyone have an idea of what's going on? I am using TF 1.4.1.
Cheers!
edit: When I evaluate the encoder alone, the errors do not appear, so I'm guessing the problem arises from the decoder. I have tried working on it a bit by removing some layers, but still cannot find the source of the errors.
edit: Here's what the model looks like: (model diagram image from the original post omitted)
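For anyone trying to reproduce this, here is a minimal smoke test (random data; the compile settings are arbitrary placeholders) that shows which sub-model triggers the messages:

import numpy as np

# Smoke test with random dummy data to see which sub-model emits the errors.
x = np.random.rand(8, 512, 1).astype('float32')   # batch of 8 dummy inputs

z = encoder.predict(x)        # encoder alone: runs clean, as noted above
x_rec = decoder.predict(z)    # decoder alone

autoEncoder.compile(optimizer='adam', loss='mse') # assumed compile settings
autoEncoder.evaluate(x, x)    # full model: this is where the errors appear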
Related
ac_scores = []
for k in neighbors:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    scores = accuracy_score(y_test, y_pred)
    ac_scores.append(scores)
# changing to misclassification error
MSE = [1 - x for x in ac_scores]
# determining best k
optimal_k = neighbors[MSE.index(min(MSE))]
print("The optimal number of neighbors is %d" % optimal_k)
I was trying to find the optimal number of neighbors (k) for building the model. The error NameError: name 'neighbors' is not defined is raised at the statement "for k in neighbors".
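The loop needs neighbors (and the scikit-learn imports) to be defined beforehand; a minimal sketch of the missing pieces, where the candidate range of k values is an assumption to adjust for your data:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Candidate values of k to try (assumed range; odd values avoid ties).
neighbors = list(range(1, 31, 2))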
Here is a question I am stuck on:
Given a network N, find the number of min cuts.
Required time complexity: Poly(|N|) * #(min cuts).
I didn't succeed in finding anything useful, only how to find the first min cut by using BFS starting from S in the residual graph.
Thanks.
In the worst case, the number of minimum s−t cuts can be exponential in the size of the network: it is easy to construct flow networks with exponentially many minimum cuts. For example, if the source and sink are joined by k vertex-disjoint two-edge paths of unit capacity, each path can be cut at either of its two edges, giving 2^k minimum cuts.
Here is an example in Python that finds one min cut (Ford-Fulkerson with BFS, i.e. Edmonds-Karp):
# Python program for finding a min-cut in the given graph
# Complexity: O(E * V^3)

# This class represents a directed graph using an adjacency matrix
class Graph:

    def __init__(self, graph):
        self.graph = graph  # residual graph
        self.org_graph = [i[:] for i in graph]  # copy of the original capacities
        self.ROW = len(graph)
        self.COL = len(graph[0])

    '''Returns True if there is a path from source 's' to sink 't' in
    the residual graph. Also fills parent[] to store the path.'''
    def BFS(self, s, t, parent):
        # Mark all the vertices as not visited
        visited = [False] * self.ROW
        # Create a queue for BFS
        queue = []
        # Mark the source node as visited and enqueue it
        queue.append(s)
        visited[s] = True
        # Standard BFS loop
        while queue:
            # Dequeue a vertex from the queue
            u = queue.pop(0)
            # If an adjacent vertex has not been visited, mark it
            # visited and enqueue it
            for ind, val in enumerate(self.graph[u]):
                if visited[ind] == False and val > 0:
                    queue.append(ind)
                    visited[ind] = True
                    parent[ind] = u
        # If we reached the sink in BFS starting from the source, return
        # True, else False
        return visited[t]

    # Finds the min-cut of the given graph and prints its edges
    def minCut(self, source, sink):
        # This array is filled by BFS to store the augmenting path
        parent = [-1] * self.ROW
        max_flow = 0  # There is no flow initially
        # Augment the flow while there is a path from source to sink
        while self.BFS(source, sink, parent):
            # Find the minimum residual capacity of the edges along the
            # path filled by BFS, i.e. the maximum flow through that path
            path_flow = float("Inf")
            s = sink
            while s != source:
                path_flow = min(path_flow, self.graph[parent[s]][s])
                s = parent[s]
            # Add path flow to overall flow
            max_flow += path_flow
            # Update residual capacities of the edges and reverse edges
            # along the path
            v = sink
            while v != source:
                u = parent[v]
                self.graph[u][v] -= path_flow
                self.graph[v][u] += path_flow
                v = parent[v]
        # Print the edges which initially had capacity
        # but are now saturated (residual capacity 0)
        for i in range(self.ROW):
            for j in range(self.COL):
                if self.graph[i][j] == 0 and self.org_graph[i][j] > 0:
                    print(str(i) + " - " + str(j))

# Create the example graph
graph = [[0, 16, 13, 0, 0, 0],
         [0, 0, 10, 12, 0, 0],
         [0, 4, 0, 0, 14, 0],
         [0, 0, 9, 0, 0, 20],
         [0, 0, 0, 7, 0, 4],
         [0, 0, 0, 0, 0, 0]]

g = Graph(graph)
source = 0; sink = 5
g.minCut(source, sink)
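One caveat about the printout above: a saturated edge is not always a min-cut edge, so in general it can over-report. A sketch of the standard extraction, assuming the run above has left g.graph holding the final residual capacities: the vertices reachable from the source in the residual graph form the source side S, and the min-cut edges are exactly the original edges leaving S.

def min_cut_edges(org_graph, residual, source):
    # Collect the vertices reachable from the source in the residual graph.
    n = len(residual)
    seen = [False] * n
    stack = [source]
    seen[source] = True
    while stack:
        u = stack.pop()
        for v, cap in enumerate(residual[u]):
            if cap > 0 and not seen[v]:
                seen[v] = True
                stack.append(v)
    # Original edges crossing from the reachable side to the rest.
    return [(i, j) for i in range(n) for j in range(n)
            if seen[i] and not seen[j] and org_graph[i][j] > 0]

print(min_cut_edges(g.org_graph, g.graph, source))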
I am trying to solve a problem where I have pairs like:
A C
B F
A D
D C
F E
E B
A B
B C
E D
F D
and I need to group them in groups of 3, where each group must form a triangle of matches from that list. Basically, I need to determine whether or not it is possible to group the whole collection.
So the possible groups are (ACD and BFE), or (ABC and DEF), and this collection is groupable since all letters can be put in groups of 3 with none left out.
I made a script where I can achieve this for small amounts of input, but for big amounts it gets too slow.
My logic is:
make a nested loop to find the first match (looping until I find a match)
> remove the 3 elements from the collection
> run again
and I do this until I am out of letters. Since there can be different combinations, I run this multiple times starting with different letters until I find a match.
I understand that this gives me loops on the order of at least N^N, which can get too slow. Is there a better logic for such problems? Can a binary tree be used here?
This problem can be modeled as a graph clique cover problem. Every letter is a node and every pair is an edge, and you want to partition the graph into vertex-disjoint cliques of size 3 (triangles). If you want the partitioning to be of minimum cardinality, then you want a minimum clique cover.
Actually, this would be a k-clique cover problem, because in the general clique cover problem the cliques can be of arbitrary/different sizes.
As Alberto Rivelli already stated, this problem is reducible to the Clique Cover problem, which is NP-hard.
It is also reducible to the problem of finding a clique of a particular/maximum size. Maybe there are other, non-NP-hard problems your particular case could be reduced to, but I didn't think of any.
However, there do exist algorithms which can find the solution quickly in many practical cases, although not in polynomial time in the worst case. One of them is the Bron–Kerbosch algorithm, widely regarded as the most efficient algorithm in practice for finding a maximum clique; its worst-case running time is O(3^(n/3)). I don't know the size of your inputs, but I hope it will be sufficient for your problem.
Here is the code in Python, ready to go:
#!/usr/bin/python3
# #by DeFazer
# Solution to:
# stackoverflow.com/questions/40193648/algorithm-to-group-items-in-groups-of-3
# Input:
# N P - number of vertices and number of pairs
# P pairs, 1 pair per line
# Output:
# "YES" and groups themselves if grouping is possible, and "NO" otherwise
# Input example:
# 6 10
# 1 3
# 2 6
# 1 4
# 4 3
# 6 5
# 5 2
# 1 2
# 2 3
# 5 4
# 6 4
# Output example:
# YES
# 1-2-3
# 4-5-6
# Output commentary:
# There are 2 possible coverages: 1-2-3*4-5-6 and 2-5-6*1-3-4.
# If required, it can be easily modified to return all possible groupings rather than just one.
# Algorithm:
# 1) List *all* existing triangles (1-2-3, 1-3-4, 2-5-6...)
# 2) Build a graph where vertices represent triangles and edges connect these triangles with no common... vertices. Sorry for ambiguity. :)
# 3) Use [this](en.wikipedia.org/wiki/Bron–Kerbosch_algorithm) algorithm (slightly modified) to find a clique of size N/3.
# The grouping is possible if such clique exists.
N, P = map(int, input().split())
assert (N%3 == 0) and (N>0)
cliquelength = N//3
pairs = {} # {a:{b, d, c}, b:{a, c, f}, c:{a, b}...}

# Get input
# [(0, 1), (1, 3), (3, 2)...]
##pairlist = list(map(lambda ab: tuple(map(lambda a: int(a)-1, ab)), (input().split() for pair in range(P))))
pairlist = []
for pair in range(P):
    a, b = map(int, input().split())
    if a > b:
        b, a = a, b
    a, b = a-1, b-1
    pairlist.append((a, b))
pairlist.sort()

for pair in pairlist:
    a, b = pair
    if a not in pairs:
        pairs[a] = set()
    pairs[a].add(b)

# Make list of triangles
triangles = []
for a in range(N-2):
    for b in pairs.get(a, []):
        for c in pairs.get(b, []):
            if c in pairs[a]:
                # record every c that closes a triangle over the edge (a, b)
                triangles.append((a, b, c))

def no_mutual_elements(sortedtupleA, sortedtupleB):
    # Utility function
    # TODO: if too slow, can be improved to O(n) since tuples are sorted. However, there are only 9 comparisons in case of triangles.
    return all((a not in sortedtupleB) for a in sortedtupleA)

# Make a graph out of that list
tgraph = [] # if a<b and (b in tgraph[a]), then triangles[a] has no common elements with triangles[b]
T = len(triangles)
for t1 in range(T):
    s = set()
    for t2 in range(t1+1, T):
        if no_mutual_elements(triangles[t1], triangles[t2]):
            s.add(t2)
    tgraph.append(s)

def connected(a, b):
    if a > b:
        b, a = a, b
    return (b in tgraph[a])

# Finally, the magic algorithm!
CSUB = set()
def extend(CAND:set, NOT:set) -> bool:
    # while CAND is not empty and there is no vertex in NOT connected to *all* vertices in CAND
    while CAND and all((any(not connected(n, c) for c in CAND)) for n in NOT):
        v = CAND.pop()
        CSUB.add(v)
        newCAND = {c for c in CAND if connected(c, v)}
        newNOT = {n for n in NOT if connected(n, v)}
        if (not newCAND) and (not newNOT) and (len(CSUB)==cliquelength): # the last condition is the algorithm modification
            return True
        elif extend(newCAND, newNOT):
            return True
        else:
            CSUB.remove(v)
            NOT.add(v)
    return False

if extend(set(range(T)), set()):
    print("YES")
    # If the clique itself is not needed, it's enough to remove the following 2 lines
    for a, b, c in [triangles[c] for c in CSUB]:
        print("{}-{}-{}".format(a+1, b+1, c+1))
else:
    print("NO")
If this solution is still too slow, perhaps it may be more efficient to solve the Clique Cover problem instead. If that's the case, I can try to find a proper algorithm for it.
Hope that helps!
Well, I have implemented the job in JS, where I feel most confident. I also tried it with 100,000 edges randomly selected from 26 letters. Provided that they are all unique and there is no degenerate pair such as ["A","A"], it resolves in around 90~500 msecs. The most convoluted part was obtaining the non-identical groups, i.e. avoiding results that differ only in the order of the triangles. For the given edges data it resolves within 1 msec.
As a summary, the first reduce stage finds the triangles and the second reduce stage groups the disconnected ones.
function getDisconnectedTriangles(edges){
  return edges.reduce(function(p,e,i,a){
      var ce = a.slice(i+1)
                .filter(f => f.some(n => e.includes(n))), // connected edges
          re = []; // resulting edges
      if (ce.length > 1){
        re = ce.reduce(function(r,v,j,b){
          var xv = v.find(n => e.indexOf(n) === -1), // find the external vertex
              xe = b.slice(j+1)                      // find the external edges
                    .filter(f => f.indexOf(xv) !== -1);
          return xe.length ? (r.push([...new Set(e.concat(v,xe[0]))]), r) : r;
        },[]);
      }
      return re.length ? p.concat(re) : p;
    },[])
    .reduce((s,t,i,a) => t.used ? s
                                : (s.push(a.map((_,j) => a[(i+j)%a.length])
                                           .reduce((p,c,k) => k-1 ? p.every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, p.push(c), p) : p
                                                                  : [p].every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, [p,c]) : [p])), s)
           ,[]);
}

var edges = [["A","C"],["B","F"],["A","D"],["D","C"],["F","E"],["E","B"],["A","B"],["B","C"],["E","D"],["F","D"]],
    ps = 0,
    pe = 0,
    result = [];

ps = performance.now();
result = getDisconnectedTriangles(edges);
pe = performance.now();
console.log("Disconnected triangles are calculated in", pe-ps, "msecs and the result is:");
console.log(result);
You may generate random edge sets of different lengths and play with the code here.
I have a 2-dimensional array which describes the distance between objects:
A B C
A 0 1 2
B 1 0 3
C 2 3 0
for example distance(A,B) = 1, distance(B,C) = 3, distance(A,C) = 2, and
distance(x,y) = distance(y,x). I do not know anything more about this distance; it is not the Euclidean distance or any commonly known distance function.
How do I find the number of groups and how the points should be partitioned?
I have found a solution:
D = [x][y]  # two-dimensional array with distances between x and y
sorted_distance = sorted_distance(D)  # all values appearing in D; remove duplicates and sort from max to min value
for distance in sorted_distance:
    V = D.keys()
    E = []
    for x in V:
        for y in V:
            if x == y: continue
            if D[x][y] <= distance:
                E.append((x, y))
    G = Graph(V, E)
    connected_components = get_connected_components(G)
    if len(connected_components) > 1:  # this value could be increased if the result is not rewarding
        return connected_components
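A runnable sketch of the same idea (the helpers above are pseudocode; here a tiny union-find stands in for the graph/components machinery): lower the threshold from the largest distance downwards and return the first partition that splits into more than one connected component.

def cluster_by_threshold(D):
    # D is a symmetric n x n distance matrix (lists of lists).
    n = len(D)

    def find(parent, x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # All distinct pairwise distances, from max to min.
    thresholds = sorted({D[i][j] for i in range(n) for j in range(i + 1, n)},
                        reverse=True)
    for t in thresholds:
        parent = list(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                if D[i][j] <= t:
                    parent[find(parent, i)] = find(parent, j)
        components = {}
        for v in range(n):
            components.setdefault(find(parent, v), []).append(v)
        if len(components) > 1:  # first threshold that splits the points
            return list(components.values())
    return [list(range(n))]     # everything ends up in one group

# Example with the matrix above: A=0, B=1, C=2 -> [[0, 1], [2]]
D = [[0, 1, 2],
     [1, 0, 3],
     [2, 3, 0]]
print(cluster_by_threshold(D))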
I'm considering all permutations of 0, ..., n-1 in lexicographic order. I'm given two ranks, i and j, and asked to find the rank of the permutation that results from applying the i'th permutation to the j'th permutation.
A couple of examples for n=3:
Given i = 3, j = 4: p(3) = [1, 2, 0] applied to p(4) = [2, 0, 1] gives [0, 1, 2], rank = 0.
Given i = j = 4: [2, 0, 1] applied to itself is [1, 2, 0], rank = 3.
What I've come up with so far: I convert the ranks to their respective permutations via Lehmer codes, calculate the desired permutation, and convert back to rank via Lehmer codes.
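For concreteness, here is a minimal sketch of that baseline (Lehmer-code rank/unrank plus composition; the helper names are mine, and the convention used is out[k] = p_j[p_i[k]], which matches both examples above):

from math import factorial

def unrank_lex(n, r):
    # Lexicographic unrank via the factorial number system (Lehmer code).
    pool = list(range(n))
    perm = []
    for k in range(n - 1, -1, -1):
        f = factorial(k)
        perm.append(pool.pop(r // f))
        r %= f
    return perm

def rank_lex(perm):
    # Inverse of unrank_lex.
    n = len(perm)
    pool = list(range(n))
    r = 0
    for k, v in enumerate(perm):
        i = pool.index(v)
        r += i * factorial(n - 1 - k)
        pool.pop(i)
    return r

def compose_ranks(n, i, j):
    # Rank of (permutation i applied to permutation j): out[k] = pj[pi[k]].
    pi, pj = unrank_lex(n, i), unrank_lex(n, j)
    return rank_lex([pj[x] for x in pi])

print(compose_ranks(3, 3, 4))  # 0, matching the first example
print(compose_ranks(3, 4, 4))  # 3, matching the second example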
Can anyone suggest a way to get the rank of the desired permutation from the other two ranks, without having to actually calculate the permutations? Storing the n! x n! array is not an option.
-edit- Note that I'm not wedded to lexicographic order if some other ordering would enable this.
-edit- Here are the n! by n! grids for n=3 & 4, for lexicographic ranks. Row i is indexed into column j to get the output. Note that the n=3 grid is identical to the top-left corner of the n=4 grid.
00|01|02|03|04|05|
01|00|03|02|05|04|
02|04|00|05|01|03|
03|05|01|04|00|02|
04|02|05|00|03|01|
05|03|04|01|02|00|
00|01|02|03|04|05|06|07|08|09|10|11|12|13|14|15|16|17|18|19|20|21|22|23|
01|00|03|02|05|04|07|06|09|08|11|10|13|12|15|14|17|16|19|18|21|20|23|22|
02|04|00|05|01|03|08|10|06|11|07|09|14|16|12|17|13|15|20|22|18|23|19|21|
03|05|01|04|00|02|09|11|07|10|06|08|15|17|13|16|12|14|21|23|19|22|18|20|
04|02|05|00|03|01|10|08|11|06|09|07|16|14|17|12|15|13|22|20|23|18|21|19|
05|03|04|01|02|00|11|09|10|07|08|06|17|15|16|13|14|12|23|21|22|19|20|18|
06|07|12|13|18|19|00|01|14|15|20|21|02|03|08|09|22|23|04|05|10|11|16|17|
07|06|13|12|19|18|01|00|15|14|21|20|03|02|09|08|23|22|05|04|11|10|17|16|
08|10|14|16|20|22|02|04|12|17|18|23|00|05|06|11|19|21|01|03|07|09|13|15|
09|11|15|17|21|23|03|05|13|16|19|22|01|04|07|10|18|20|00|02|06|08|12|14|
10|08|16|14|22|20|04|02|17|12|23|18|05|00|11|06|21|19|03|01|09|07|15|13|
11|09|17|15|23|21|05|03|16|13|22|19|04|01|10|07|20|18|02|00|08|06|14|12|
12|18|06|19|07|13|14|20|00|21|01|15|08|22|02|23|03|09|10|16|04|17|05|11|
13|19|07|18|06|12|15|21|01|20|00|14|09|23|03|22|02|08|11|17|05|16|04|10|
14|20|08|22|10|16|12|18|02|23|04|17|06|19|00|21|05|11|07|13|01|15|03|09|
15|21|09|23|11|17|13|19|03|22|05|16|07|18|01|20|04|10|06|12|00|14|02|08|
16|22|10|20|08|14|17|23|04|18|02|12|11|21|05|19|00|06|09|15|03|13|01|07|
17|23|11|21|09|15|16|22|05|19|03|13|10|20|04|18|01|07|08|14|02|12|00|06|
18|12|19|06|13|07|20|14|21|00|15|01|22|08|23|02|09|03|16|10|17|04|11|05|
19|13|18|07|12|06|21|15|20|01|14|00|23|09|22|03|08|02|17|11|16|05|10|04|
20|14|22|08|16|10|18|12|23|02|17|04|19|06|21|00|11|05|13|07|15|01|09|03|
21|15|23|09|17|11|19|13|22|03|16|05|18|07|20|01|10|04|12|06|14|00|08|02|
22|16|20|10|14|08|23|17|18|04|12|02|21|11|19|05|06|00|15|09|13|03|07|01|
23|17|21|11|15|09|22|16|19|05|13|03|20|10|18|04|07|01|14|08|12|02|06|00|
Here are the factoradics for n=4. I left off the last digit, which is always zero, for compactness.
000|001|010|011|020|021|100|101|110|111|120|121|200|201|210|211|220|221|300|301|310|311|320|321|
001|000|011|010|021|020|101|100|111|110|121|120|201|200|211|210|221|220|301|300|311|310|321|320|
010|020|000|021|001|011|110|120|100|121|101|111|210|220|200|221|201|211|310|320|300|321|301|311|
011|021|001|020|000|010|111|121|101|120|100|110|211|221|201|220|200|210|311|321|301|320|300|310|
020|010|021|000|011|001|120|110|121|100|111|101|220|210|221|200|211|201|320|310|321|300|311|301|
021|011|020|001|010|000|121|111|120|101|110|100|221|211|220|201|210|200|321|311|320|301|310|300|
100|101|200|201|300|301|000|001|210|211|310|311|010|011|110|111|320|321|020|021|120|121|220|221|
101|100|201|200|301|300|001|000|211|210|311|310|011|010|111|110|321|320|021|020|121|120|221|220|
110|120|210|220|310|320|010|020|200|221|300|321|000|021|100|121|301|311|001|011|101|111|201|211|
111|121|211|221|311|321|011|021|201|220|301|320|001|020|101|120|300|310|000|010|100|110|200|210|
120|110|220|210|320|310|020|010|221|200|321|300|021|000|121|100|311|301|011|001|111|101|211|201|
121|111|221|211|321|311|021|011|220|201|320|301|020|001|120|101|310|300|010|000|110|100|210|200|
200|300|100|301|101|201|210|310|000|311|001|211|110|320|010|321|011|111|120|220|020|221|021|121|
201|301|101|300|100|200|211|311|001|310|000|210|111|321|011|320|010|110|121|221|021|220|020|120|
210|310|110|320|120|220|200|300|010|321|020|221|100|301|000|311|021|121|101|201|001|211|011|111|
211|311|111|321|121|221|201|301|011|320|021|220|101|300|001|310|020|120|100|200|000|210|010|110|
220|320|120|310|110|210|221|321|020|300|010|200|121|311|021|301|000|100|111|211|011|201|001|101|
221|321|121|311|111|211|220|320|021|301|011|201|120|310|020|300|001|101|110|210|010|200|000|100|
300|200|301|100|201|101|310|210|311|000|211|001|320|110|321|010|111|011|220|120|221|020|121|021|
301|201|300|101|200|100|311|211|310|001|210|000|321|111|320|011|110|010|221|121|220|021|120|020|
310|210|320|110|220|120|300|200|321|010|221|020|301|100|311|000|121|021|201|101|211|001|111|011|
311|211|321|111|221|121|301|201|320|011|220|021|300|101|310|001|120|020|200|100|210|000|110|010|
320|220|310|120|210|110|321|221|300|020|200|010|311|121|301|021|100|000|211|111|201|011|101|001|
321|221|311|121|211|111|320|220|301|021|201|011|310|120|300|020|101|001|210|110|200|010|100|000|
I found an algorithm to convert between permutations and ranks in linear time. That's not quite what I want, but is probably good enough. It turns out that the fact that I don't care about lexicographic order is important. The ranking this uses is weird. I'm going to give two functions, one that converts from a rank to a permutation, and one that does the inverse.
First, to unrank (go from rank to permutation):
Initialize:
  n = length(permutation)
  r = desired rank
  p = identity permutation of n elements [0, 1, ..., n-1]

unrank(n, r, p)
  if n > 0 then
    swap(p[n-1], p[r mod n])
    unrank(n-1, floor(r/n), p)
  fi
end
Next, to rank:
Initialize:
  p = input permutation
  q = inverse of the input permutation (computed in linear time: q[p[i]] = i for 0 <= i < n)
  n = length(p)

rank(n, p, q)
  if n = 1 then return 0 fi
  s = p[n-1]
  swap(p[n-1], p[q[n-1]])
  swap(q[s], q[n-1])
  return s + n * rank(n-1, p, q)
end
That's the pseudocode. For my project I'll be careful to work with a copy of p so I don't mutate it when calculating its rank.
The running time of both of these is O(n).
There's a nice, readable paper explaining why this works: Ranking & Unranking Permutations in Linear Time, by Myrvold & Ruskey, Information Processing Letters Volume 79, Issue 6, 30 September 2001, Pages 281–284.
http://webhome.cs.uvic.ca/~ruskey/Publications/RankPerm/MyrvoldRuskey.pdf
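For reference, a direct Python transcription of the two routines above (a sketch; the round-trip check at the end is mine):

def unrank(n, r, p):
    # Myrvold-Ruskey unrank: permute p (initially the identity) in place
    # so that it becomes the permutation of rank r.
    while n > 0:
        p[n - 1], p[r % n] = p[r % n], p[n - 1]
        r //= n
        n -= 1

def rank(n, p, q):
    # Myrvold-Ruskey rank: p is the permutation, q its inverse.
    # Both arrays are mutated, so pass copies if you need them afterwards.
    if n == 1:
        return 0
    s = p[n - 1]
    p[n - 1], p[q[n - 1]] = p[q[n - 1]], p[n - 1]
    q[s], q[n - 1] = q[n - 1], q[s]
    return s + n * rank(n - 1, p, q)

# Round-trip check for n = 5, rank 37
n, r = 5, 37
p = list(range(n))
unrank(n, r, p)
q = [0] * n
for idx, v in enumerate(p):
    q[v] = idx
print(p, rank(n, p[:], q[:]))  # -> [4, 0, 1, 3, 2] 37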
If, in addition to R, you are not wedded to a particular P either, we could redefine the permutation function to facilitate a possible answer. The function, newPerm, below would permute a list in relation to R with the same consistency as the permuting function that "indexes into."
The example below is not optimized for efficiency (e.g., ranking/unranking can be done in O(n)). The last two lines of output compare the redefined permuting function to the "indexing" permuting function - as you can see, they both generate the same number of unique permutations when mapped to the permutation set. The function, f, would be the answer to the question.
Haskell code:
import Data.List (sort,permutations)
import Data.Maybe (fromJust)
sortedPermutations = sort $ permutations [0,1,2,3,4,5,6]
rank p = fromJust (lookup p rs) where rs = zip sortedPermutations [0..]
unrank r = fromJust (lookup r ps) where ps = zip [0..] sortedPermutations
tradPerm p s = foldr (\a b -> s!!a : b) [] p
newPerm p s = unrank (f (rank p) (rank s))
f r1 r2 = let l = r1 - r2 in if l < 0 then length sortedPermutations + l else l
Output:
*Main Data.List> unrank 3
[0,1,2,3,5,6,4]
*Main Data.List> unrank 8
[0,1,2,4,5,3,6]
*Main Data.List> f 3 8
5035
*Main Data.List> newPerm [0,1,2,3,5,6,4] [0,1,2,4,5,3,6]
[6,5,4,3,0,2,1]
*Main Data.List> rank [6,5,4,3,0,2,1]
5035
*Main Data.List> length $ group $ sort $ map (tradPerm [1,2,5,0,4,3,6]) sortedPermutations
5040
*Main Data.List> length $ group $ sort $ map (newPerm [1,2,5,0,4,3,6]) sortedPermutations
5040