Increase efficiency for the following algorithm

Problem:
input = n
Output (the rows of the printed square, from top to bottom):
1 2 3 ... n            [first row]
2n+1 2n+2 2n+3 ... 3n  [second row]
3n+1 3n+2 3n+3 ... 4n  [second-last row]
n+1 n+2 n+3 ... 2n     [last row]
In this problem we have to print a square with n rows, each containing n numbers. We fill the rows with the numbers from 1 to n^2 in such a way that we fill the first row, then the last row, then the second row, then the second-last row, and so on...
For example, if n = 4:
We start from 1 and print up to 4, then print a newline, so our first row is:
1 2 3 4
Then the last row comes next in the sequence:
5 6 7 8
Then our second row will be:
9 10 11 12
and the second-last row completes the square:
13 14 15 16
A few more examples:
input = 1
output = 1
input = 2
output = 1 2
3 4
input = 3
output = 1 2 3
7 8 9
4 5 6
My Code:
n = int(input().strip())
lines = [i for i in range(1, n + 1)]
line_order1 = []
line_order2 = []
# Reordering lines so we know the starting element of our method
for i in lines:
    if i % 2 == 1:
        line_order1.append(i)
    else:
        line_order2.append(i)
print(line_order1)   # debug output
print(line_order2)   # debug output
# Getting the desired order of lines
line_order2.reverse()
line_order1.extend(line_order2)
print(line_order1)   # debug output
# Now printing the desired square
for l in line_order1:
    for i in range(1, n + 1):
        k = n * (l - 1)
        print(k + i, end=" ")
    print("\n")
Is there a better way to do this in terms of execution time?

While I see a few minor places you can improve your code, the performance is unlikely to be much better (my suggestions below might not make any performance difference at all). Your code takes O(n**2) time, which is the best you can do, since you need to print out that many numbers to form your square. Even if you combine some of your longer, more verbose steps into more compact versions, they can only be better by a constant factor.
My first suggestion is to number the lines from 0 to n-1 instead of from 1 to n. This will save you some effort when you have to calculate what multiple of n to include in the values for the row. Currently you've got an awkward l - 1 in your calculation that you could skip if you just used zero-indexed numbers for the rows. (Also, l is a terrible variable name, since it looks like the digit 1 (one) in some fonts.)
My next suggestion is to simplify the code that builds the order. You don't need three lists; you can do the whole thing with one list that you extend with two range objects, one counting up by twos and one counting down by twos.
line_order = list(range(0, n, 2)) # count up by twos
line_order.extend(range(n - 1 - n%2, 0, -2)) # count down starting at either n-1 or n-2
Or, if you're willing to use a standard library module, you could import itertools and then use:
line_order = itertools.chain(range(0, n, 2), range(n - 1 - n%2, 0, -2))
The itertools.chain function returns an iterator that yields values from each of its iterable arguments as if they were concatenated together, without making any copies of the data or using significant extra memory. The difference is not likely to matter much here (since the maximum n you can usefully print out is fairly small), but if you were doing something different with the result of this algorithm and n were in the billions, it would be very nice to avoid filling a list with that many values.
My last suggestion is to use a range again to generate all the numbers in each row directly, rather than explicitly looping from 1 to n and adding k each time.
for row_num in line_order:
    print(*range(n * row_num + 1, n * (row_num + 1) + 1))
You can compute the start and end points with the multiples of n already included, rather than needing to do that in a separate step for each one. You certainly didn't need to be recomputing k as often as you were before. You can pass all the values from the range to print in one go using iterable unpacking syntax (*args).
Note though that unpacking the range that way is sort of the reverse of the previous suggestion regarding itertools.chain. If n is large, using a loop over the range would be more memory efficient, since you won't need all n values to exist in memory at the same time. Here's what that would look like:
for row_num in line_order:
    for value in range(n * row_num + 1, n * (row_num + 1) + 1):
        print(value, end=" ")
    print()
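Putting those suggestions together, the whole program collapses to a few lines. This is a sketch, keeping the original input convention:

n = int(input().strip())

# even-indexed rows counting up by twos, then odd-indexed rows counting down
line_order = list(range(0, n, 2))
line_order.extend(range(n - 1 - n % 2, 0, -2))

for row_num in line_order:
    # row row_num holds the numbers n*row_num + 1 through n*(row_num + 1)
    print(*range(n * row_num + 1, n * (row_num + 1) + 1))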


Arranging the number 1 in a 2d matrix

Given the number of rows and columns of a 2d matrix
Initially all elements of matrix are 0
Given the number of 1's that should be present in each row
Given the number of 1's that should be present in each column
Determine if it is possible to form such matrix.
Example:
Input: r=3 c=2 (no. of rows and columns)
2 1 0 (number of 1's that should be present in each row respectively)
1 2 (number of 1's that should be present in each column respectively)
Output: Possible
Explanation:
1 1
0 1
0 0
I tried solving this problem for something like 12 hours by checking if the summation of the Ri equals the summation of the Ci.
But I realized that alone wouldn't be enough, because of cases like
3 3
1 3 0
0 2 2
r and c can be up to 10^5.
Any ideas how should I move further?
Edit: Constraints added and output should only be "possible" or "impossible". The possible matrix need not be displayed.
Can anyone help me now?
Hint: one possible solution utilizes Maximum Flow Problem by creating a special graph and running the standard maximum flow algorithm on it.
If you're not familiar with the above problem, you may start reading about it e.g. here https://en.wikipedia.org/wiki/Maximum_flow_problem
If you're interested in the full solution, please comment and I'll update the answer. But it requires understanding the above algorithm.
Solution as requested:
Create a graph of r+c+2 nodes.
Node 0 is the source, node r+c+1 is the sink. Nodes 1..r represent the rows, while r+1..r+c the columns.
Create the following edges:
from the source to each row node i = 1..r, of capacity r_i
from each column node i = r+1..r+c to the sink, of capacity c_i
between every pair of nodes i = 1..r and j = r+1..r+c, of capacity 1
Run a maximum flow algorithm; the saturated edges between row nodes and column nodes define where you should put the 1s.
If it's not possible, the maximum flow value will be less than the number of expected ones in the matrix.
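For concreteness, here is a sketch of this construction in Python. networkx is just one convenient choice of max-flow implementation, not part of the original answer:

import networkx as nx

def possible_by_max_flow(row_sums, col_sums):
    r, c = len(row_sums), len(col_sums)
    G = nx.DiGraph()
    source, sink = 0, r + c + 1
    for i, ri in enumerate(row_sums, start=1):       # source -> row node i, capacity r_i
        G.add_edge(source, i, capacity=ri)
    for j, cj in enumerate(col_sums, start=r + 1):   # column node j -> sink, capacity c_j
        G.add_edge(j, sink, capacity=cj)
    for i in range(1, r + 1):                        # row -> column edges of capacity 1
        for j in range(r + 1, r + c + 1):
            G.add_edge(i, j, capacity=1)
    flow, _ = nx.maximum_flow(G, source, sink)
    return flow == sum(row_sums) == sum(col_sums)

print(possible_by_max_flow([2, 1, 0], [1, 2]))       # True: the example from the question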
I will illustrate the algorithm with an example.
Assume we have m rows and n columns. Let rows[i] be the number of 1s in row i, for 0 <= i < m,
and cols[j] be the number of 1s in column j, for 0 <= j < n.
For example, for m = 3, and n = 4, we could have: rows = {4 2 3}, cols = {1 3 2 3}, and
the solution array would be:
    1 3 2 3
  +--------
4 | 1 1 1 1
2 | 0 1 0 1
3 | 0 1 1 1
Because we only want to know whether a solution exists, the values in rows and cols may be permuted in any order. The solution of each permutation is just a permutation of the rows and columns of the above solution.
So, given rows and cols, sort cols in decreasing order, and rows in increasing order. For our example, we have cols = {3 3 2 1} and rows = {2 3 4}, and the equivalent problem.
    3 3 2 1
  +--------
2 | 1 1 0 0
3 | 1 1 1 0
4 | 1 1 1 1
We transform cols into a form that is better suited for the algorithm. What cols tells us is that we have two series of 1s of length 3, one series of 1s of length 2, and one series of 1s of length 1, that are to be distributed among the rows of the array. We rewrite cols to capture just that, that is COLS = {2/3 1/2 1/1}, 2 series of length 3, 1 series of length 2, and 1 series of length 1.
Because we have 2 series of length 3, a solution exists only if we can put two 1s in the first row. This is possible because rows[0] = 2. We do not actually put any 1 in the first row, but record the fact that 1s have been placed there by decrementing the length of the series of length 3. So COLS becomes:
COLS = {2/2 1/2 1/1}
and we combine our two counts for series of length 2, yielding:
COLS = {3/2 1/1}
We now have the reduced problem:
3 | 1 1 1 0
4 | 1 1 1 1
Again we need to place 1s from our series of length 2 to have a solution. Fortunately, rows[1] = 3 and we can do this. We decrement the length of 3/2 and get:
COLS = {3/1 1/1} = {4/1}
We have the reduced problem:
4 | 1 1 1 1
Which is solved by 4 series of length 1, just what we have left. If at any step, the series in COLS cannot be used to satisfy a row count, then no solution is possible.
The general processing for each row may be stated as follows. For each row r, starting from the first element in COLS, decrement the lengths of as many elements count[k]/length[k] of COLS as needed, so that the sum of the count[k]'s equals rows[r]. Eliminate series of length 0 in COLS and combine series of same length.
Note that because elements of COLS are in decreasing order of lengths, the length of the last element decremented is always less than or equal to the next element in COLS (if there is a next element).
EXAMPLE 2 : Solution exists.
rows = {1 3 3}, cols = {2 2 2 1} => COLS = {3/2 1/1}
1 series of length 2 is decremented to satisfy rows[0] = 1, and the 2 other series of length 2 remain at length 2.
rows[0] = 1
COLS = {2/2 1/1 1/1} = {2/2 2/1}
The 2 series of length 2 are decremented, and 1 of the series of length 1.
The series whose length has become 0 is deleted, and the series of length 1 are combined.
rows[1] = 3
COLS = {2/1 1/0 1/1} = {2/1 1/1} = {3/1}
A solution exists, since rows[2] can be satisfied.
rows[2] = 3
COLS = {3/0} = {}
EXAMPLE 3: Solution does not exist.
rows = {0 2 3}, cols = {3 2 0 0} => COLS = {1/3 1/2}
rows[0] = 0
COLS = {1/3 1/2}
rows[1] = 2
COLS = {1/2 1/1}
rows[2] = 3 => impossible to satisfy; no solution.
SPACE COMPLEXITY
It is easy to see that it is O(m + n).
TIME COMPLEXITY
We iterate over each row only once. For each row i, we need to iterate over at most
rows[i] <= n elements of COLS. Time complexity is O(m x n).
After finding this algorithm, I found the following theorem:
The Havel-Hakimi theorem (Havel 1955, Hakimi 1962) states that there exists a matrix X_{n,m} of 0's and 1's with row totals a^0 = (a_1, a_2, ..., a_n) and column totals b^0 = (b_1, b_2, ..., b_m), where b_i >= b_{i+1} for every 0 < i < m, if and only if another matrix X_{n-1,m} of 0's and 1's with row totals a^1 = (a_2, a_3, ..., a_n) and column totals b^1 = (b_1 - 1, b_2 - 1, ..., b_{a_1} - 1, b_{a_1 + 1}, ..., b_m) also exists.
from the post Finding if binary matrix exists given the row and column sums.
This is basically what my algorithm does, while trying to optimize the decrementing part, i.e., all the -1's in the above theorem. Now that I see the above theorem, I know my algorithm is correct. Nevertheless, I checked the correctness of my algorithm by comparing it with a brute-force algorithm for arrays of up to 50 cells.
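The theorem also translates almost directly into a few lines of Python. This is a rough, unoptimized sketch of mine (it simply re-sorts and decrements each round, instead of the run-length bookkeeping my algorithm and the C# code below use):

def exists_binary_matrix(row_sums, col_sums):
    # Peel off one row at a time: a row needing a ones takes them from
    # the a currently-largest column totals (re-sorting each round).
    if sum(row_sums) != sum(col_sums):
        return False
    cols = sorted(col_sums, reverse=True)
    for a in sorted(row_sums, reverse=True):
        if a > len(cols):
            return False
        cols.sort(reverse=True)
        for j in range(a):
            cols[j] -= 1
            if cols[j] < 0:       # not enough nonzero columns left
                return False
    return all(c == 0 for c in cols)

print(exists_binary_matrix([4, 2, 3], [1, 3, 2, 3]))  # True  (the example above)
print(exists_binary_matrix([1, 3, 0], [0, 2, 2]))     # False (the asker's counterexample)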
Here is the C# implementation.
using System.Collections.Generic;

public class Pair
{
    public int Count;
    public int Length;
}

public class PairsList
{
    public LinkedList<Pair> Pairs;
    public int TotalCount;
}

class Program
{
    static void Main(string[] args)
    {
        int[] rows = new int[] { 0, 0, 1, 1, 2, 2 };
        int[] cols = new int[] { 2, 2, 0 };
        bool success = Solve(cols, rows);
    }

    static bool Solve(int[] cols, int[] rows)
    {
        PairsList pairs = new PairsList() { Pairs = new LinkedList<Pair>(), TotalCount = 0 };

        FillAllPairs(pairs, cols);

        for (int r = 0; r < rows.Length; r++)
        {
            if (rows[r] > 0)
            {
                if (pairs.TotalCount < rows[r])
                    return false;

                if (pairs.Pairs.First != null && pairs.Pairs.First.Value.Length > rows.Length - r)
                    return false;

                DecrementPairs(pairs, rows[r]);
            }
        }

        return pairs.Pairs.Count == 0 || pairs.Pairs.Count == 1 && pairs.Pairs.First.Value.Length == 0;
    }

    static void DecrementPairs(PairsList pairs, int count)
    {
        LinkedListNode<Pair> pair = pairs.Pairs.First;

        while (count > 0 && pair != null)
        {
            LinkedListNode<Pair> next = pair.Next;

            if (pair.Value.Count == count)
            {
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
                count = 0;
            }
            else if (pair.Value.Count < count)
            {
                count -= pair.Value.Count;
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= pair.Value.Count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
            }
            else // pair.Value.Count > count
            {
                Pair p = new Pair() { Count = count, Length = pair.Value.Length - 1 };
                pair.Value.Count -= count;
                if (p.Length > 0)
                {
                    if (pair.Next != null && pair.Next.Value.Length == p.Length)
                        pair.Next.Value.Count += p.Count;
                    else
                        pairs.Pairs.AddAfter(pair, p);
                }
                else
                    pairs.TotalCount -= count;
                count = 0;
            }

            pair = next;
        }
    }

    static int FillAllPairs(PairsList pairs, int[] cols)
    {
        List<Pair> newPairs = new List<Pair>();
        int c = 0;
        while (c < cols.Length && cols[c] > 0)
        {
            int k = c++;
            if (cols[k] > 0)
                pairs.TotalCount++;
            while (c < cols.Length && cols[c] == cols[k])
            {
                if (cols[k] > 0) pairs.TotalCount++;
                c++;
            }
            newPairs.Add(new Pair() { Count = c - k, Length = cols[k] });
        }

        LinkedListNode<Pair> pair = pairs.Pairs.First;
        foreach (Pair p in newPairs)
        {
            while (pair != null && p.Length < pair.Value.Length)
                pair = pair.Next;

            if (pair == null)
            {
                pairs.Pairs.AddLast(p);
            }
            else if (p.Length == pair.Value.Length)
            {
                pair.Value.Count += p.Count;
                pair = pair.Next;
            }
            else // p.Length > pair.Value.Length
            {
                pairs.Pairs.AddBefore(pair, p);
            }
        }

        return c;
    }
}
(Note: to avoid confusion between when I'm talking about the actual numbers in the problem vs. when I'm talking about the zeros and ones in the matrix, I'm going to instead fill the matrix with spaces and X's. This obviously doesn't change the problem.)
Some observations:
If you're filling in a row, and there's (for example) one column needing 10 more X's and another column needing 5 more X's, then you're sometimes better off putting the X in the "10" column and saving the "5" column for later (because you might later run into 5 rows that each need 2 X's), but you're never better off putting the X in the "5" column and saving the "10" column for later (because even if you later run into 10 rows that all need an X, they won't mind if they don't all go in the same column). So we can use a somewhat "greedy" algorithm: always put an X in the column still needing the most X's. (Of course, we'll need to make sure that we don't greedily put an X in the same column multiple times for the same row!)
Since you don't need to actually output a possible matrix, the rows are all interchangeable and the columns are all interchangeable; all that matter is how many rows still need 1 X, how many still need 2 X's, etc., and likewise for columns.
With that in mind, here's one fairly simple approach:
(Optimization.) Add up the counts for all the rows, add up the counts for all the columns, and return "impossible" if the sums don't match.
Create an array of length r+1 and populate it with how many columns need 1 X, how many need 2 X's, etc. (You can ignore any columns needing 0 X's.)
(Optimization.) To help access the array efficiently, build a stack/linked-list/etc. of the indices of nonzero array elements, in decreasing order (e.g., starting at index r if it's nonzero, then index r−1 if it's nonzero, etc.), so that you can easily find the elements representing columns to put X's in.
(Optimization.) To help determine when there's a row that can't be satisfied, also make note of the total number of columns needing any X's, and make note of the largest number of X's needed by any row. If the former is less than the latter, return "impossible".
(Optimization.) Sort the rows by the number of X's they need.
Iterate over the rows, starting with the one needing the fewest X's and ending with the one needing the most X's, and for each one:
Update the array accordingly. For example, if a row needs 12 X's, and the array looks like [..., 3, 8, 5], then you'll update the array to look like [..., 3+7 = 10, 8+5−7 = 6, 5−5 = 0]. If it's not possible to update the array because you run out of columns to put X's in, return "impossible". (Note: this part should never actually return "impossible", because we're keeping count of the number of columns left and the max number of columns we'll need, so we should have already returned "impossible" if this was going to happen. I mention this check only for clarity.)
Update the stack/linked-list of indices of nonzero array elements.
Update the total number of columns needing any X's. If it's now less than the greatest number of X's needed by any row, return "impossible".
(Optimization.) If the first nonzero array element has an index greater than the number of rows left, return "impossible".
If we complete our iteration without having returned "impossible", return "possible".
(Note: the reason I say to start with the row needing the fewest X's, and work your way to the row with the most X's, is that a row needing more X's may involve examining updating more elements of the array and of the stack, so the rows needing fewer X's are cheaper. This isn't just a matter of postponing the work: the rows needing fewer X's can help "consolidate" the array, so that there will be fewer distinct column-counts, making the later rows cheaper than they would otherwise be. In a very-bad-case scenario, such as the case of a square matrix where every single row needs a distinct positive number of X's and every single column needs a distinct positive number of X's, the fewest-to-most order means you can handle each row in O(1) time, for linear time overall, whereas the most-to-fewest order would mean that each row would take time proportional to the number of X's it needs, for quadratic time overall.)
Overall, this takes no worse than O(r+c+n) time (where n is the number of X's); I think that the optimizations I've listed are enough to ensure that it's closer to O(r+c) time, but it's hard to be 100% sure. I recommend trying it to see if it's fast enough for your purposes.
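Here is a compact Python sketch of the greedy idea above. It substitutes a max-heap for the count-array and index-stack described in the steps, so it runs in roughly O(n log c) time (n X's, c columns) rather than near-linear; treat it as a correctness sketch, not the optimized version:

import heapq

def possible(row_counts, col_counts):
    # Each row places its X's in distinct columns that still need X's,
    # always choosing the columns with the most X's still required.
    if sum(row_counts) != sum(col_counts):
        return False
    heap = [-c for c in col_counts if c > 0]  # max-heap via negation
    heapq.heapify(heap)
    for need in sorted(row_counts):           # fewest-to-most, as suggested
        if need == 0:
            continue
        if need > len(heap):
            return False                      # fewer columns needing X's than this row requires
        taken = [heapq.heappop(heap) for _ in range(need)]
        for t in taken:
            if t + 1 < 0:                     # column still needs more X's afterwards
                heapq.heappush(heap, t + 1)
    return not heap

print(possible([2, 1, 0], [1, 2]))     # True  (the example from the question)
print(possible([1, 3, 0], [0, 2, 2]))  # False (the asker's counterexample)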
You can use brute force (iterating through all 2^(r * c) possibilities) to solve it, but that will take a long time. If r * c is under 64, you can accelerate it to a certain extent using bit-wise operations on 64-bit integers; however, even then, iterating through all 64-bit possibilities would take, at 1 try per ms, over 500M years.
A wiser choice is to add bits one by one, and only continue placing bits if no constraints are broken. This will eliminate the vast majority of possibilities, greatly speeding up the process. Look up backtracking for the general idea. It is not unlike solving sudokus through guesswork: once it becomes obvious that your guess was wrong, you erase it and try guessing a different digit.
As with sudokus, there are certain strategies that can be written into code and will result in speedups when they apply. For example, if the sum of 1s in rows is different from the sum of 1s in columns, then there are no solutions.
If over 50% of the bits will be on, you can instead work on the complementary problem (transform all ones to zeroes and vice-versa, while updating row and column counts). Both problems are equivalent, because any answer for one is also valid for the other.
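To make the backtracking idea concrete, here is a minimal Python sketch (an illustration only: it places a whole row at a time and undoes a choice as soon as the column counts break, with none of the extra pruning strategies mentioned above):

from itertools import combinations

def exists_matrix(row_counts, col_counts):
    if sum(row_counts) != sum(col_counts):  # quick consistency check
        return False
    cols = list(col_counts)

    def place(r):
        if r == len(row_counts):
            return all(c == 0 for c in cols)
        candidates = [j for j, c in enumerate(cols) if c > 0]
        if len(candidates) < row_counts[r]:
            return False                    # prune: not enough columns left for this row
        for chosen in combinations(candidates, row_counts[r]):
            for j in chosen:
                cols[j] -= 1
            if place(r + 1):
                return True
            for j in chosen:                # undo the guess and try the next one
                cols[j] += 1
        return False

    return place(0)

print(exists_matrix([2, 1, 0], [1, 2]))     # True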
This problem can be solved in O(n log n) using the Gale-Ryser theorem (where n is the maximum of the lengths of the two degree sequences).
First, make both sequences of equal length by adding 0's to the smaller sequence, and let this length be n.
Let the sequences be A and B. Sort A in non-decreasing order, and sort B in non-increasing order. Create another prefix-sum array P for B, such that the ith element of P is equal to the sum of the first i elements of B.
Now, iterate over k from 1 to n, and check for each k that the sum of the k largest elements of A is at most sum_j min(B_j, k), that is,
A_(n-k+1) + ... + A_n  <=  k * #{j : B_j >= k} + (sum of the B_j < k)
with equal totals, sum(A) = sum(B), also required.
The second sum can be calculated in O(log n) by binary searching for the position where the entries of B drop below k, and then using the precalculated P.
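Here is a runnable sketch of that check in Python (my reconstruction: B is kept in increasing order here so that bisect can do the binary search, which only flips the direction described above):

from bisect import bisect_left

def gale_ryser_possible(rows, cols):
    n = max(len(rows), len(cols))
    A = sorted(rows + [0] * (n - len(rows)))   # non-decreasing
    B = sorted(cols + [0] * (n - len(cols)))   # non-decreasing, so bisect works
    if sum(A) != sum(B):
        return False
    P = [0]
    for b in B:
        P.append(P[-1] + b)                    # P[i] = sum of the i smallest B's
    left = 0
    for k in range(1, n + 1):
        left += A[n - k]                       # sum of the k largest elements of A
        i = bisect_left(B, k)                  # B[:i] are the entries smaller than k
        right = P[i] + (n - i) * k             # sum_j min(B_j, k)
        if left > right:
            return False
    return True

print(gale_ryser_possible([2, 1, 0], [1, 2]))     # True
print(gale_ryser_possible([1, 3, 0], [0, 2, 2]))  # False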
Inspired by the solution given by RobertBaron, I have tried to build a new algorithm.
rows = [int(x) for x in input().split()]
cols = [int(ss) for ss in input().split()]
rows.sort()
cols.sort(reverse=True)
for i in range(len(rows)):
    for j in range(len(cols)):
        if rows[i] != 0 and cols[j] != 0:
            rows[i] = rows[i] - 1
            cols[j] = cols[j] - 1
print("rows: ", rows)
print("cols: ", cols)
# if there is any non-zero value, print NO, else print YES
flag = True
for i in range(len(rows)):
    if rows[i] != 0:
        flag = False
        break
for j in range(len(cols)):
    if cols[j] != 0:
        flag = False
if flag:
    print("YES")
else:
    print("NO")
Here, I have sorted the rows in ascending order and the columns in descending order, then decremented a particular row and column whenever a 1 needs to be placed.
It works for all the test cases posted here! The rest, God knows.

Number of Paths in a Triangle

I recently encountered a much more difficult variation of this problem, but realized I couldn't generate a solution for this very simple case. I searched Stack Overflow but couldn't find a resource that previously answered this.
You are given a triangle ABC, and you must compute the number of paths of a certain length that start and end at 'A'. Say our function f(3) is called; it must return the number of paths of length 3 that start and end at A: 2 (ABA, ACA).
I'm having trouble formulating an elegant solution. Right now, I've written a solution that generates all possible paths, but for larger lengths, the program is just too slow. I know there must be a nice dynamic programming solution that reuses sequences that we've previously computed but I can't quite figure it out. All help greatly appreciated.
My dumb code:
def paths(n, sequence):
    t = ['A', 'B', 'C']
    if len(sequence) < n:
        for node in set(t) - set(sequence[-1]):
            paths(n, sequence + node)
    else:
        if sequence[0] == 'A' and sequence[-1] == 'A':
            print sequence
Let PA(n) be the number of paths from A back to A in exactly n steps.
Let P!A(n) be the number of paths from B (or C) to A in exactly n steps.
Then:
PA(1) = 1
PA(n) = 2 * P!A(n - 1)
P!A(1) = 0
P!A(2) = 1
P!A(n) = P!A(n - 1) + PA(n - 1)
       = P!A(n - 1) + 2 * P!A(n - 2)   (for n > 2, substituting for PA(n - 1))
We can solve the difference equations for P!A analytically, as we do for Fibonacci, by noting that (-1)^n and 2^n are both solutions of the difference equation, and then finding coefficients a, b such that P!A(n) = a*2^n + b*(-1)^n.
We end up with the equation P!A(n) = 2^n/6 + (-1)^n/3, and PA(n) being 2^(n-1)/3 - 2(-1)^n/3.
This gives us code:
def PA(n):
    return (pow(2, n-1) + 2*pow(-1, n-1)) / 3

for n in xrange(1, 30):
    print n, PA(n)
Which gives output:
1 1
2 0
3 2
4 2
5 6
6 10
7 22
8 42
9 86
10 170
11 342
12 682
13 1366
14 2730
15 5462
16 10922
17 21846
18 43690
19 87382
20 174762
21 349526
22 699050
23 1398102
24 2796202
25 5592406
26 11184810
27 22369622
28 44739242
29 89478486
The trick is not to try to generate all possible sequences. The number of them increases exponentially so the memory required would be too great.
Instead, let f(n) be the number of sequences of length n beginning and ending with A, and let g(n) be the number of sequences of length n beginning with A but ending with B. To get things started, clearly f(1) = 1 and g(1) = 0. For n > 1 we have f(n) = 2g(n - 1), because the penultimate letter will be B or C and there are equal numbers of each. We also have g(n) = f(n - 1) + g(n - 1), because if a sequence begins with A and ends with B, the penultimate letter is either A or C.
These rules allow you to compute the numbers really quickly using memoization.
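For instance, in Python (a minimal sketch; with only two running values, a simple loop stands in for the memoization):

def count_paths(n):
    # f = number of sequences of length i beginning and ending with A
    # g = number of sequences of length i beginning with A, ending with B
    #     (by symmetry, also the number ending with C)
    f, g = 1, 0                # base case: f(1) = 1, g(1) = 0
    for _ in range(n - 1):
        f, g = 2 * g, f + g    # f(i) = 2*g(i-1); g(i) = f(i-1) + g(i-1)
    return f

print(count_paths(3))  # 2
print(count_paths(5))  # 6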
My method is like this:
Define DP(l, end) = the number of paths that end at node end and have length l.
Then DP(l, 'A') = DP(l-1, 'B') + DP(l-1, 'C'), and similarly for DP(l, 'B') and DP(l, 'C').
For the base case, i.e. l = 1, I check if the end is not 'A': then I return 0, and otherwise 1, so that all bigger states only count paths that start at 'A'.
The answer is simply DP(n, 'A'), where n is the length.
Below is sample code in C++; you can call it with 3, which gives you 2 as the answer, or with 5, which gives you 6:
ABCBA, ACBCA, ABABA, ACACA, ABACA, ACABA
#include <bits/stdc++.h>
using namespace std;

int dp[500][500], n;

int DP(int l, int end){
    if(l <= 0) return 0;
    if(l == 1){
        if(end != 'A') return 0;
        return 1;
    }
    if(dp[l][end] != -1) return dp[l][end];
    if(end == 'A') return dp[l][end] = DP(l-1, 'B') + DP(l-1, 'C');
    else if(end == 'B') return dp[l][end] = DP(l-1, 'A') + DP(l-1, 'C');
    else return dp[l][end] = DP(l-1, 'A') + DP(l-1, 'B');
}

int main() {
    memset(dp, -1, sizeof(dp));
    scanf("%d", &n);
    printf("%d\n", DP(n, 'A'));
    return 0;
}
EDITED
To answer OP's comment below:
Firstly, DP (dynamic programming) is always about state.
Remember that here our state is DP(l, end), representing the number of paths having length l and ending at end. So to implement states in a program, we usually use an array; DP[500][500] is nothing special but the space to store the states DP(l, end) for all possible l and end (that's why I said that if you need a bigger length, you change the size of the array).
But then you may ask: I understand the first dimension, which is for l; 500 means l can be as large as 500. But what about the second dimension? I only need 'A', 'B', 'C', so why 500?
Here is another trick (of C/C++): the char type can indeed be used as an int type by default, with a value equal to its ASCII code. I do not remember the whole ASCII table, of course, but I know that around 300 is enough to cover all the ASCII characters, including A(65), B(66), C(67).
So I just declare a size large enough to represent 'A', 'B', 'C' in the second dimension (meaning 100 would actually be more than enough, but I just declared 500, as it is almost the same in terms of order).
So you asked what DP[3][1] means: it means nothing, as I never calculate the second dimension at value 1. (Or one can think that the state dp(3, 1) has no physical meaning in our problem.)
In fact, I am always using 65, 66, 67.
So DP[3][65] means the number of paths of length 3 that end at char(65) = 'A'.
You can do better than the dynamic programming/recursion solution others have posted, for the given triangle and more general graphs. Whenever you are trying to compute the number of walks in a (possibly directed) graph, you can express this in terms of the entries of powers of a transfer matrix. Let M be a matrix whose entry m[i][j] is the number of paths of length 1 from vertex i to vertex j. For a triangle, the transfer matrix is
0 1 1
1 0 1
1 1 0
Then M^n is a matrix whose i,j entry is the number of paths of length n from vertex i to vertex j. If A corresponds to vertex 1, you want the 1,1 entry of M^n.
Dynamic programming and recursion for the counts of paths of length n in terms of the paths of length n-1 are equivalent to computing M^n with n multiplications, M * M * M * ... * M, which can be fast enough. However, if you want to compute M^100, instead of doing 100 multiplies, you can use repeated squaring: Compute M, M^2, M^4, M^8, M^16, M^32, M^64, and then M^64 * M^32 * M^4. For larger exponents, the number of multiplies is about c log_2(exponent).
Instead of using that a path of length n is made up of a path of length n-1 and then a step of length 1, this uses that a path of length n is made up of a path of length k and then a path of length n-k.
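Here is a rough Python sketch of the transfer-matrix approach (hand-rolled 3x3 multiplication and repeated squaring; note that with the "length counts letters" convention used in this question, a path of n letters is a walk with n-1 edges):

def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def mat_pow(M, e):
    # exponentiation by repeated squaring
    R = [[int(i == j) for j in range(3)] for i in range(3)]  # identity
    while e:
        if e & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        e >>= 1
    return R

M = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

def f(n):
    return mat_pow(M, n - 1)[0][0]  # walks of n-1 edges from A back to A

print(f(3))  # 2
print(f(5))  # 6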
We can solve this with a for loop, although Anonymous described a closed form for it.
function f(n){
    var as = 0, abcs = 1;
    for (n = n - 3; n > 0; n--){
        as = abcs - as;
        abcs *= 2;
    }
    return 2 * (abcs - as);
}
Here's why:
Look at one strand of the decision tree (the other one is symmetrical):
A
B                               C...
A               C
B       C       A       B
A   C   A   B   B   C   A   C
B C A B B C A C A C A B B C A B
Num A's      Num ABC's (starting with first B on the left)
0            1
1 (1-0)      2
1 (2-1)      4
3 (4-1)      8
5 (8-3)      16
11 (16-5)    32
Clearly, we can't use the strands that end with the A's...
You can write a recursive brute force solution and then memoize it (aka top down dynamic programming). Recursive solutions are more intuitive and easy to come up with. Here is my version:
from functools import cache

# search space (we have a triangle with these nodes)
nodes = ["A", "B", "C"]

n = 3  # target path length (number of letters)

@cache  # memoize!
def recurse(length, steps):
    # if the length of the path is n and the last node is "A" (we are
    # back at A exactly when the net steps are a multiple of 3), then
    # it's a valid path and we can count it.
    if length == n:
        return 1 if steps % 3 == 0 else 0
    # we don't want paths having len > n.
    if length > n:
        return 0
    # from each position, we have two possibilities: either go to the
    # next node or the previous node. Total paths will be the sum of
    # both possibilities. We do this recursively.
    return recurse(length + 1, steps + 1) + recurse(length + 1, steps - 1)

print(recurse(1, 0))  # start at A with one letter written: 2 for n = 3

Best way to wrap a variable back around to 1?

I have an array with 12 entries.
When doing 12+1, I want to get entry 1 of the array.
When doing 12+4, I want to get entry 4 of the array.
etc...
I currently do it with:
cases_to_increment.each do |k|
  if k > 12
    k = k - 12
  end
  self.inc(:"case#{k}", 1)
end
end
I found a solution with modulo:
k = 13 % 12 = 1
k = 16 % 12 = 4
I like the modulo way, but 12 % 12 returns 0 and I need only numbers between 1..12.
Is there a way to do that without a condition?
You almost had the solution there yourself. Instead of a simple modulo, try:
index = ((number - 1) % 12) + 1
That maps 13 to 1, 16 to 4, and 12 to 12, with no conditional.
Edit: njzk2 is correct, modulo is a relatively expensive operation if you are using it with a value that is not a power of two. If, however, your total number of elements (the number you are modulo-ing with) is a power of 2, the calculation is essentially free (it reduces to a bitwise AND).

count and iterate arrangements of a sequence

I have a string and a number of spaces e.g. the string is "accttgagattcagt" and I have 10 spaces to insert.
How can you iterate over all combinations of that string and spaces? The letters in the string cannot be reordered, and all spaces must be inserted.
And how can you calculate the number of rearrangements (without iterating them)?
And what is the proper word for this? Permutations, combinations or something else?
(I visualise this as strings of 1s and 0s, where the 1s are positions used by the string and the 0s are spaces.
So a short string of 3 letters and 2 spaces would be asking for all 5-bit numbers with 3 1s and 2 0s, e.g. 11100, 11010, 11001, 10110, 10101, 10011, 01110, 01101, 01011, 00111?
But easy as short sequences are to make on paper, I am struggling to write a for-loop to do it :(. So nice pseudocode to create this sequence, and to count how long it would be, please?
Recursion will be easier to understand, but could it be faster if recursion were avoided somehow?)
That's combinations.
So a short string of 3 letters and 2 spaces would be asking for all 5-bit numbers with 3 1s and 2 0s, e.g. 11100, 11010, 11001, 10110, 10101, 10011, 01110, 01101, 01011, 00111?
You put the three '1's on three of the 5 indices, and order does not matter. So it's 5 choose 3:
5! / ((5-3)! * 3!) = (5*4) / (2*1) = 10
The article on wikipedia.org has an image illustrating a random sequence of 3 red and 2 white squares.
This might be useful:
Statistics: combinations in Python
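If it helps, here is a short runnable sketch of both parts in Python (the function name is mine; math.comb gives the count directly, and itertools.combinations chooses which positions receive the letters):

from itertools import combinations
from math import comb

def arrangements(s, num_spaces):
    n = len(s) + num_spaces
    # choose which positions hold the letters; since combinations are
    # emitted in sorted order, the letters keep their original order
    for positions in combinations(range(n), len(s)):
        slots = [" "] * n
        for pos, ch in zip(positions, s):
            slots[pos] = ch
        yield "".join(slots)

print(comb(5, 3))  # 10 -- the count, without iterating
for a in arrangements("abc", 2):
    print(repr(a))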
Hmm, a bit pseudo-code, but you should get the idea:
list doThat(string, spaces){
    returnList
    spacesTemp = spaces;
    for(c = 0; c < string.length; c++){
        subString = string.getSubString(c, string.length);
        tmpString = string.insertStringAtPosition(c, createSpaceString(spacesTemp));
        retSubStringList = doThat(subString, spaces - spacesTemp);
        retCombinedList = addStringInFrontOfAllStringsInList(tmpString, retSubStringList);
        returnList.addList(retCombinedList);
        spacesTemp--;
    }
    return returnList;
}
n - number of letters
m - number of spaces
Counting
Denote the number of spaces before the first letter as a_0, the number between the first and second letter as a_1, and so on, up to a_n, the number of spaces after the last letter. Your question can now be stated as: in how many different ways can a_0, a_1, ..., a_n be chosen so that each number is not less than 0 and their sum satisfies a_0 + a_1 + ... + a_n = m? The answer to this question is n + m over n (by "over" I mean the Newton symbol, i.e. the binomial coefficient C(n+m, n)).
Why is it so? Visualize this problem as n + m empty bins in a row. If we fill exactly n of them with sand (the letters), the numbers of empty bins before, between, and after the filled ones give exactly one valid choice of a_0, ..., a_n summing to m, and vice versa.
Generation
def generate(s, num_spaces):
    ans = generate_aux("_" + s, num_spaces)
    return [x[1:] for x in ans]

def generate_aux(s, num_spaces):  # returns list of arrangements
    if num_spaces == 0:
        return [s]
    if s == "":
        return []
    val = []
    for i in range(0, num_spaces + 1):
        tmp = generate_aux(s[1:], num_spaces - i)
        pref = s[0] + (" " * i)
        val.extend([pref + x for x in tmp])
    return val

print generate("abc", 2)

How can you compare to what extent two lists are in the same order?

I have two arrays containing the same elements, but in different orders, and I want to know the extent to which their orders differ.
The method I tried didn't work. It was as follows:
For each list I built a matrix which recorded, for each pair of elements, whether they were above or below each other in the list. I then calculated a Pearson correlation coefficient of these two matrices. This worked extremely badly. Here's a trivial example:
list 1:
1
2
3
4
list 2:
1
3
2
4
The method I described above produced matrices like this (where a 1 means the row element appears above the column element in the list, and 0 vice-versa):
list 1:
    1  2  3  4
 1     1  1  1
 2        1  1
 3           1
 4
list 2:
    1  2  3  4
 1     1  1  1
 2        0  1
 3           1
 4
Since the only difference is the order of elements 2 and 3, these should be deemed to be very similar. The Pearson Correlation Coefficient for those two matrices is 0, suggesting they are not correlated at all. I guess the problem is that what I'm looking for is not really a correlation coefficient, but some other kind of similarity measure. Edit distance, perhaps?
Can anyone suggest anything better?
Mean square of differences of indices of each element.
List 1: A B C D E
List 2: A D C B E
Indices of each element of List 1 in List 2 (zero based):
A  B  C  D  E
0  3  2  1  4
Indices of each element of List 1 in List 1 (zero based):
A  B  C  D  E
0  1  2  3  4
Differences:
A   B   C   D   E
0  -2   0   2   0
Squares of the differences:
A  B  C  D  E
0  4  0  4  0
Average differentness = 8 / 5.
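A quick Python sketch of this measure (assuming both lists contain the same distinct elements):

def avg_squared_index_diff(list1, list2):
    # where each element of list1 sits in list2
    pos = {x: i for i, x in enumerate(list2)}
    diffs = [(pos[x] - i) ** 2 for i, x in enumerate(list1)]
    return sum(diffs) / len(diffs)

print(avg_squared_index_diff("ABCDE", "ADCBE"))  # 1.6, i.e. 8 / 5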
Just an idea, but is there any mileage in adapting a standard sort algorithm to count the number of swap operations needed to transform list1 into list2?
I think that defining the compare function may be difficult though (perhaps even just as difficult as the original problem!), and this may be inefficient.
edit: thinking about this a bit more, the compare function would essentially be defined by the target list itself. So for example if list 2 is:
1 4 6 5 3
...then the compare function should result in 1 < 4 < 6 < 5 < 3 (and return equality where entries are equal).
Then the swap function just needs to be extended to count the swap operations.
A bit late for the party here, but just for the record, I think Ben almost had it... if you'd looked further into correlation coefficients, I think you'd have found that Spearman's rank correlation coefficient might have been the way to go.
Interestingly, jamesh seems to have derived a similar measure, but not normalized.
See this recent SO answer.
You might consider how many changes it takes to transform one string into another (which I guess it was you were getting at when you mentioned edit distance).
See: http://en.wikipedia.org/wiki/Levenshtein_distance
Although I don't think l-distance takes into account rotation. If you allow rotation as an operation then:
1, 2, 3, 4
and
2, 3, 4, 1
Are pretty similar.
There is a branch-and-bound algorithm that should work for any set of operators you like. It may not be real fast. The pseudocode goes something like this:
bool bounded_recursive_compare_routine(int* a, int* b, int level, int bound){
    if (level > bound) return false;
    // if at end of a and b, return true
    // apply rule 0, like no-change
    if (*a == *b){
        bounded_recursive_compare_routine(a+1, b+1, level+0, bound);
        // if it returns true, return true;
    }
    // if can apply rule 1, like rotation, to b, try that and recur
    bounded_recursive_compare_routine(a+1, b+1, level+cost_of_rotation, bound);
    // if it returns true, return true;
    ...
    return false;
}

int get_minimum_cost(int* a, int* b){
    int bound;
    for (bound=0; ; bound++){
        if (bounded_recursive_compare_routine(a, b, 0, bound)) break;
    }
    return bound;
}
The time it takes is roughly exponential in the answer, because it is dominated by the last bound that works.
Added: This can be extended to find the nearest-matching string stored in a trie. I did that years ago in a spelling-correction algorithm.
I'm not sure exactly what formula it uses under the hood, but difflib.SequenceMatcher.ratio() does exactly this:
ratio(self) method of difflib.SequenceMatcher instance:
Return a measure of the sequences' similarity (float in [0,1]).
Code example:
from difflib import SequenceMatcher
sm = SequenceMatcher(None, '1234', '1324')
print sm.ratio()
>>> 0.75
Another approach that is based on a little bit of mathematics is to count the number of inversions to convert one of the arrays into the other one. An inversion is the exchange of two neighboring array elements. In ruby it is done like this:
# extend class Array by a new method
class Array
  def dist(other)
    raise 'can calculate distance only to array with same length' if length != other.length
    # initialize count of inversions to 0
    count = 0
    # loop over all pairs of indices i, j with i < j
    length.times do |i|
      (i + 1).upto(length - 1) do |j|
        # increase count if i-th and j-th element have different order
        count += 1 if (self[i] <=> self[j]) != (other[i] <=> other[j])
      end
    end
    return count
  end
end

l1 = [1, 2, 3, 4]
l2 = [1, 3, 2, 4]
# try an example (prints 1)
puts l1.dist(l2)
The distance between two arrays of length n can be between 0 (they are the same) and n*(n-1)/2 (reversing the first array gives the second). If you prefer to have distances always between 0 and 1, to be able to compare distances of pairs of arrays of different length, just divide by n*(n-1)/2.
A disadvantage of this algorithm is its O(n^2) running time. It also assumes that the arrays don't have duplicate entries, but it could be adapted.
A remark about the code line "count += 1 if ...": the count is increased only if either the i-th element of the first list is smaller than its j-th element while the i-th element of the second list is bigger than its j-th element, or vice versa. In short: (l1[i] < l1[j] and l2[i] > l2[j]) or (l1[i] > l1[j] and l2[i] < l2[j]).
If one has two orderings, one should look at two important rank correlation coefficients:
Spearman's rank correlation coefficient: https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
This is almost the same as jamesh's answer, but scaled to the range -1 to 1.
It is defined as:
1 - (6 * sum_of_squared_distances) / (n_samples * (n_samples**2 - 1))
Kendall's tau: https://nl.wikipedia.org/wiki/Kendalls_tau
When using python one could use:
from scipy import stats
order1 = [ 1, 2, 3, 4]
order2 = [ 1, 3, 2, 4]
print stats.spearmanr(order1, order2)[0]
>> 0.8000
print stats.kendalltau(order1, order2)[0]
>> 0.6667
If anyone is using the R language, I've implemented a function that computes the Spearman rank correlation coefficient using the method described above by bubake:
get_spearman_coef <- function(objectA, objectB) {
  # getting the spearman rho rank test
  spearman_data <- data.frame(listA = objectA, listB = objectB)
  spearman_data$rankA <- 1:nrow(spearman_data)

  rankB <- c()
  for (index_valueA in 1:nrow(spearman_data)) {
    for (index_valueB in 1:nrow(spearman_data)) {
      if (spearman_data$listA[index_valueA] == spearman_data$listB[index_valueB]) {
        rankB <- append(rankB, index_valueB)
      }
    }
  }
  spearman_data$rankB <- rankB
  spearman_data$distance <- (spearman_data$rankA - spearman_data$rankB)**2

  spearman <- 1 - ((6 * sum(spearman_data$distance)) / (nrow(spearman_data) * (nrow(spearman_data)**2 - 1)))

  print(paste("spearman's rank correlation coefficient"))
  return(spearman)
}
results :
get_spearman_coef(c("a","b","c","d","e"), c("a","b","c","d","e"))
spearman's rank correlation coefficient: 1
get_spearman_coef(c("a","b","c","d","e"), c("b","a","d","c","e"))
spearman's rank correlation coefficient: 0.8
