How to speed up pandas dataframe.apply() for large data - performance

def func(row):
    if row.GT_x == row.GT_y or row.GT_x == row.GT_y[::-1]:
        return 2
    elif len(set(row.GT_x) & set(row.GT_y)) != 0:
        return 1
    else:
        return 0

%%timeit
merged_df['Decision'] = merged_df.apply(func, axis=1)
1 loop, best of 3: 30.2 s per loop
I'm applying func to all dataframe rows, and the number of rows is approximately 650,000.
I suspect pandas.apply() takes even more time than iterating with a plain for loop.
I also tried a lambda function rather than func, but the result was the same.
My dataframe has two columns, GT_x and GT_y, each holding a two-character string such as "AA" or "BB".
func returns 2 if GT_x and GT_y are the same (in either character order), 1 if they share one character, and 0 otherwise.
I want to build another column, Decision, by applying func.
Could you recommend a faster method?
Here's the sample data I have:
      GT_x GT_y
0       AG   GA
1       AA   GA
2       AA   GG
3       GG   GG
...
65000   GG   GG
The result for index 0 should be 2,
the result for index 1 should be 1,
the result for index 2 should be 0,
and the results for indices 3 and 65,000 should also be 2.

You can use df.apply(func, axis=1, raw=True) for faster computation (in that case the input to your function will be a raw NumPy array instead of a Series).
From the apply function's documentation:
raw : boolean, default False
    If False, convert each row or column into a Series. If raw=True the passed
    function will receive ndarray objects instead. If you are just applying a
    NumPy reduction function this will achieve much better performance.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html
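Note that raw=True passes a plain ndarray, so func would then need positional indexing (row[0], row[1]) instead of row.GT_x. For this particular function you can also avoid apply altogether and vectorize the comparison. A sketch, assuming (as in the sample data) that GT_x and GT_y always hold two-character strings:

import numpy as np

x, y = merged_df['GT_x'], merged_df['GT_y']
# full match: equal as-is, or equal after reversing the two characters
full = (x == y) | (x == y.str[::-1])
# partial match: the two strings share at least one character
partial = ((x.str[0] == y.str[0]) | (x.str[0] == y.str[1]) |
           (x.str[1] == y.str[0]) | (x.str[1] == y.str[1]))
merged_df['Decision'] = np.where(full, 2, np.where(partial, 1, 0))

Dropping the per-row Python call is usually a much bigger win than raw=True on 650,000 rows.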

Related

Python: break up dataframe (one row per entry in column, instead of multiple entries in column)

I have a solution to a problem that, to my despair, is somewhat slow, and I am seeking advice on how to speed it up (by adding vectorization or other clever methods). I have a dataframe that looks like this:
toy = pd.DataFrame([[1,'cv','c,d,e'],[2,'search','a,b,c,d,e'],[3,'cv','d']],
                   columns=['id','ch','kw'])
Output is:
   id      ch         kw
0   1      cv      c,d,e
1   2  search  a,b,c,d,e
2   3      cv          d
The task is to break up the kw column into one (replicated) row per comma-separated entry in each string. Thus, what I wish to achieve is:
   id      ch kw
0   1      cv  c
0   1      cv  d
0   1      cv  e
1   2  search  a
1   2  search  b
1   2  search  c
1   2  search  d
1   2  search  e
2   3      cv  d
My initial solution is the following:
data = pd.DataFrame()
for x in toy.itertuples():
    id = x.id; ch = x.ch; keys = x.kw.split(",")
    data = data.append([[id, ch, k] for k in keys], ignore_index=True)
data.columns = ['id','ch','kw']
Problem is: it is slow for larger dataframes. My hope is that someone has encountered a similar problem before, and knows how to optimize my solution. I'm using python 3.4.x and pandas 0.19+ if that is of importance.
Thank you!
You can use str.split to turn each string into a list, then str.len to get each list's length.
Finally, create a new DataFrame with the constructor, numpy.repeat, and numpy.concatenate:
import numpy as np

cols = toy.columns
splitted = toy['kw'].str.split(',')
l = splitted.str.len()
toy = pd.DataFrame({'id': np.repeat(toy['id'], l),
                    'ch': np.repeat(toy['ch'], l),
                    'kw': np.concatenate(splitted)})
toy = toy.reindex_axis(cols, axis=1)  # pandas 0.19-era API; newer pandas: toy.reindex(columns=cols)
print (toy)
   id      ch kw
0   1      cv  c
0   1      cv  d
0   1      cv  e
1   2  search  a
1   2  search  b
1   2  search  c
1   2  search  d
1   2  search  e
2   3      cv  d
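For reference, on pandas 0.25 and later (so not the 0.19 mentioned in the question) the whole transformation is a one-liner with DataFrame.explode; a sketch:

toy = toy.assign(kw=toy['kw'].str.split(',')).explode('kw').reset_index(drop=True)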

Use of IF statement with matrices in fortran

I want to go through a matrix and check if any block of it is the same as a predefined unit. Here is my code. 'sd5' is the 2 by 2 predefined unit.
ALLOCATE (fList((n-1)**2,3))
fList = 0
p = 1
DO i = 1, n-1, 1
  DO j = 1, n-1, 1
    IF (TEST(i:i+1, j:j+1) == sd5) THEN
      fList(p,1:3) = (i, j+1, 101) ! 101 should be replaced by submatrix number
    END IF
    p = p+1
  END DO
END DO
The problem seems to be in the IF statement, as four logical values are returned by TEST(i:i+1, j:j+1) == sd5. I get this error:
Error: IF clause at (1) requires a scalar LOGICAL expression
I get another error:
fList(p,1:3) = (i, j+1, 101) ! 101 should be replaced by sub matrix number
1
Error: Expected PARAMETER symbol in complex constant at (1)
I do not understand this error, as all the variables involved are integers, which I have declared.
First, an IF clause requires a scalar logical expression, whereas
(TEST(i:i+1, j:j+1) == sd5)
results in a 2x2 matrix of .true./.false. values. Since you want to check all entries, the statement should read
IF ( all( TEST(i:i+1, j:j+1) == sd5) ) THEN
[ You could also use any if only one matching entry is sufficient. ]
The second statement is a little tricky, since you do not state what you want to achieve. As it is, it is not what you would expect. My guess is that you are trying to store a vector of length three, and the assignment should read
fList(p,1:3) = (/ i, j+1, 101 /)
or
fList(p,1:3) = [ i, j+1, 101 ]
The syntax you provided is in fact used to specify complex constants:
( Real, Imag )
In this form, Real and Imag need to be constants or literals themselves, cf. the Fortran 2008 Standard, R417.

scala version of swap algorithm for null models

The problem I am having is with trying to find an efficient way to find swappable elements in a matrix in order to implement a swap algorithm for null model creation.
The matrix consists of 0's and 1's and the idea is that elements can be switched between columns so that the row and column totals of the matrix remain the same.
For example, given the following matrix:
c1 c2 c3 c4
r1 0 1 0 0 = 1
r2 1 0 0 1 = 2
r3 0 0 0 0 = 0
r4 1 1 1 1 = 4
------------
2 2 1 2
columns c2 and c4 in r1 and r2 can each be swapped in such a way that totals are not altered i.e.:
c1 c2 c3 c4
r1 0 0 0 1 = 1
r2 1 1 0 0 = 2
r3 0 0 0 0 = 0
r4 1 1 1 1 = 4
------------
2 2 1 2
This all needs to be done randomly so as not to introduce any bias.
I have one solution that works. I randomly select a row and two columns. If they yield a 1,0 or 0,1 pattern then I randomly select another row and check the same columns to see if they yield the opposite pattern. If either of them fails I start over and select a new element.
This method works but I only "hit" the correct patterns about 10% of the time. In a large matrix or in one with few 1's in the rows I waste a lot of time "missing". I figured that there had to be a more intelligent way of choosing elements in the matrix but still doing it randomly.
The code for the working method is:
def isSwappable(matrix: Matrix): Tuple2[Tuple2[Int, Int], Tuple2[Int, Int]] = {
  val indices = getRowAndColIndices(matrix)
  (matrix(indices._1._1)(indices._2._1), matrix(indices._1._1)(indices._2._2)) match {
    case (1, 0) => {
      if (matrix(indices._1._2)(indices._2._1) == 0 & matrix(indices._1._2)(indices._2._2) == 1) {
        indices
      }
      else {
        isSwappable(matrix)
      }
    }
    case (0, 1) => {
      if (matrix(indices._1._2)(indices._2._1) == 1 & matrix(indices._1._2)(indices._2._2) == 0) {
        indices
      }
      else {
        isSwappable(matrix)
      }
    }
    case _ => {
      isSwappable(matrix)
    }
  }
}

def getRowAndColIndices(matrix: Matrix): Tuple2[Tuple2[Int, Int], Tuple2[Int, Int]] = {
  (getNextIndex(rnd.nextInt(matrix.size), matrix.size), getNextIndex(rnd.nextInt(matrix(0).size), matrix(0).size))
}

def getNextIndex(i: Int, constraint: Int): Tuple2[Int, Int] = {
  val newIndex = rnd.nextInt(constraint)
  newIndex match {
    case `i` => getNextIndex(i, constraint)
    case _ => (i, newIndex)
  }
}
I figured a more efficient way to handle this was to remove any rows that could not be used (those that are all 1's or all 0's) and then choose an element randomly. From there I could filter out any columns in the row that had the same value and then choose from the remaining columns.
Once the first row and column are chosen I then filter out the rows that cannot provide the required pattern and choose from the remaining rows.
This works for the most part, but the problem I can't figure out is what to do when there are no rows or columns left to choose from. I don't want to loop infinitely trying to find the pattern I need, and I need a way of starting over if I do get an empty list of rows or columns to choose from.
The code that I have so far that sort of works (until I get an empty list) is:
def getInformativeRowIndices(matrix: Matrix) = (
  matrix
    .zipWithIndex
    .filter(_._1.distinct.size > 1)
    .map(_._2)
    .toList
)

def getRowsWithOppositeValueInColumn(col: Int, value: Int, matrix: Matrix) = (
  matrix
    .zipWithIndex
    .filter(_._1(col) != value)
    .map(_._2)
    .toList
)

def getColsWithOppositeValueInSameRow(row: Int, value: Int, matrix: Matrix) = (
  matrix(row)
    .zipWithIndex
    .filter(_._1 != value)
    .map(_._2)
    .toList
)

def process(matrix: Matrix): Tuple2[Tuple2[Int, Int], Tuple2[Int, Int]] = {
  val row1Indices = getInformativeRowIndices(matrix)
  if (row1Indices.isEmpty) sys.error("No informative rows")
  val row1 = row1Indices(rnd.nextInt(row1Indices.size))
  val col1 = rnd.nextInt(matrix(0).size)
  val colIndices = getColsWithOppositeValueInSameRow(row1, matrix(row1)(col1), matrix)
  if (colIndices.isEmpty) process(matrix)
  val col2 = colIndices(rnd.nextInt(colIndices.size))
  val row2Indices = getRowsWithOppositeValueInColumn(col1, matrix(row1)(col1), matrix)
    .intersect(getRowsWithOppositeValueInColumn(col2, matrix(row1)(col2), matrix))
  println(row2Indices)
  if (row2Indices.isEmpty) process(matrix)
  val row2 = row2Indices(rnd.nextInt(row2Indices.size))
  ((row1, row2), (col1, col2))
}
I think the recursive methods are wrong and don't really work here. Also, I am really just trying to improve the speed of cell selection so any ideas or suggestions would be greatly appreciated.
EDIT:
I have had a chance to play with this a little more and have come up with another solution, but it does not seem to be much faster than just randomly choosing cells in the matrix. Also, I should add that the matrix needs to be swapped about 30,000 times in succession in order for it to be considered randomised, and I need to generate 5,000 random matrices for each test, of which I have at least another 5,000 to do, so performance is kind of important.
The current solution (besides random cell selection) is:
Randomly select 2 rows from the matrix
Subtract one row from the other and put the result in an array
If the resulting array contains both a 1 and a -1 then we can swap
The logic of the subtraction looks like this:
    0  1  0  0
-   1  0  0  1
---------------
   -1  1  0 -1
The method that does this looks like this:
def findSwaps(matrix: Matrix, iterations: Int): Boolean = {
  var result = false
  val mtxLength = matrix.length
  val row1 = rnd.nextInt(mtxLength)
  val row2 = getNextIndex(row1, mtxLength)
  val difference = subRows(matrix(row1), matrix(row2))
  if (difference.min == -1 & difference.max == 1) {
    val zeroOne = difference.zipWithIndex.filter(_._1 == -1).map(_._2)
    val oneZero = difference.zipWithIndex.filter(_._1 == 1).map(_._2)
    val col1 = zeroOne(rnd.nextInt(zeroOne.length))
    val col2 = oneZero(rnd.nextInt(oneZero.length))
    swap(matrix, row1, row2, col1, col2)
    result = true
  }
  result
}
The matrix row subtraction looks like this:
def subRows(a: Array[Int], b: Array[Int]): Array[Int] = (a, b).zipped.map(_ - _)
And the actual swap looks like this:
def swap(matrix: Matrix, row1: Int, row2: Int, col1: Int, col2: Int) = {
  val temp = (matrix(row1)(col1), matrix(row1)(col2))
  matrix(row1)(col1) = matrix(row2)(col1)
  matrix(row1)(col2) = matrix(row2)(col2)
  matrix(row2)(col1) = temp._1
  matrix(row2)(col2) = temp._2
  matrix
}
This works much better than before, in that I now have between 80% and 90% success for an attempted swap (it was only about 10% with random cell selection); however... it still takes about 2.5 minutes to generate 1000 randomised matrices.
Any ideas on how to improve the speed?
I'm going to assume the matrices are big so that storage of the order of (matrix size squared) is not viable (for reasons of either speed or memory).
If you have a sparse matrix, you can enter the index of each 1 in each column in a set (here I show the compact way to do things, but you may wish to iterate with while loops for speed):
val mtx = Array(Array(0,1,0,0),Array(1,0,0,1),Array(0,0,0,0),Array(1,1,1,1))
val cols = mtx.transpose.map(x => x.zipWithIndex.filter(_._1==1).map(_._2).toSet)
Now, for each column, a later column contains compatible pairs (at least one) if and only if the following two sets are both nonempty:
def xorish(a: Set[Int], b: Set[Int]) = (a--b, b--a)
So the answer will involve computing these sets and testing whether they're both nonempty.
Now the question is what you mean by "sample randomly". Randomly sampling single 1,0 pairs is not the same as randomly sampling possible swaps. To see this, consider the following:
1 0 1 0
1 0 1 0
1 0 1 0
0 1 1 0
0 1 1 0
0 1 0 1
The two columns on the left have nine possible swaps. The two on the right have only five possible swaps. But if you are looking for (1,0) patterns, you will sample only three times on the left vs. five on the right; if you are looking for either (1,0) or (0,1), you will sample six and six, which again distorts the probabilities. The only way to fix this is either to not be clever, and randomly sample a second time (which in the first case will work out with a usable swap 3/5 of the time, while only 1/5 in the second), or to basically compute every possible pair for swapping (or at least how many pairs there are) and select from that predefined set.
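A throwaway Python sketch (Python purely for brevity) to verify those counts, with the column pairs hard-coded from the example above:

def swap_count(col_a, col_b):
    # every (1,0) row pairs with every (0,1) row to give one possible swap
    one_zero = sum(1 for a, b in zip(col_a, col_b) if (a, b) == (1, 0))
    zero_one = sum(1 for a, b in zip(col_a, col_b) if (a, b) == (0, 1))
    return one_zero * zero_one

print(swap_count([1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]))  # 9 (left pair)
print(swap_count([1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 1]))  # 5 (right pair)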
If we want to do the latter, we note that for each pair of nonidentical columns, we can compute the two sets to swap among, and we know the sizes and the product is the total number of possibilities. In order to avoid instantiating all the possibilities, we can create
val poss = {
  for (i <- cols.indices; j <- (i+1) until cols.length) yield
    (i, j, (cols(i) -- cols(j)).toArray, (cols(j) -- cols(i)).toArray)
}.filter{ case (_, _, a, b) => a.length > 0 && b.length > 0 }
and then count how many there are:
val cuml = poss.map{ case (_,_,a,b) => a.size*b.size }.scanLeft(0)(_ + _).toArray
Now to pick a number at random, we pick a number between 0 and cuml.last and pick out which bucket this is and which item within the bucket:
def pickItem(cuml: Array[Int], poss: Seq[(Int,Int,Array[Int],Array[Int])]) = {
  val n = util.Random.nextInt(cuml.last)
  val k = {
    val i = java.util.Arrays.binarySearch(cuml, n)
    if (i < 0) -i - 2 else i
  }
  val j = n - cuml(k)
  val bucket = poss(k)
  (
    bucket._1, bucket._2,
    bucket._3(j % bucket._3.size), bucket._4(j / bucket._3.size)
  )
}
This ends up returning (c1,c2,r1,r2) selected randomly.
Now that you have the coordinates, you can create the new matrix however you wish. (Most efficient is probably to do an in-place swap of the entries, and then swap back when you want to try again.)
Note that this is only sensible for a large number of independent swaps from the same starting matrix. If you instead want to do this iteratively and maintain independence, you are probably best off doing this randomly after all unless the matrices are extremely sparse, at which point it's worth simply storing the matrices in some standard sparse matrix format (i.e. by index of nonzero entries) and doing your manipulation on those (probably with mutable sets and an update strategy, since the consequences of a single swap are confined to about n of the entries in an n*n matrix).

How to find same-value rectangular areas of a given size in a matrix most efficiently?

My problem is very simple but I haven't found an efficient implementation yet.
Suppose there is a matrix A like this:
0 0 0 0 0 0 0
4 4 2 2 2 0 0
4 4 2 2 2 0 0
0 0 2 2 2 1 1
0 0 0 0 0 1 1
Now I want to find all starting positions of rectangular areas in this matrix which have a given size. An area is a subset of A where all numbers are the same.
Let's say width=2 and height=3. There are 3 areas which have this size:
2 2 2 2 0 0
2 2 2 2 0 0
2 2 2 2 0 0
The result of the function call would be a list of starting positions (x,y starting with 0) of those areas.
List((2,1),(3,1),(5,0))
The following is my current implementation. "Areas" are called "surfaces" here.
case class Dimension2D(width: Int, height: Int)
case class Position2D(x: Int, y: Int)
def findFlatSurfaces(matrix: Array[Array[Int]], surfaceSize: Dimension2D): List[Position2D] = {
  val matrixWidth = matrix.length
  val matrixHeight = matrix(0).length
  var resultPositions: List[Position2D] = Nil
  for (y <- 0 to matrixHeight - surfaceSize.height) {
    var x = 0
    while (x <= matrixWidth - surfaceSize.width) {
      val topLeft = matrix(x)(y)
      val topRight = matrix(x + surfaceSize.width - 1)(y)
      val bottomLeft = matrix(x)(y + surfaceSize.height - 1)
      val bottomRight = matrix(x + surfaceSize.width - 1)(y + surfaceSize.height - 1)
      // investigate further if corners are equal
      if (topLeft == bottomLeft && topLeft == topRight && topLeft == bottomRight) {
        breakable {
          for (sx <- x until x + surfaceSize.width;
               sy <- y until y + surfaceSize.height) {
            if (matrix(sx)(sy) != topLeft) {
              x = if (x == sx) sx + 1 else sx
              break
            }
          }
          // found one!
          resultPositions ::= Position2D(x, y)
          x += 1
        }
      } else if (topRight != bottomRight) {
        // can skip x a bit as there won't be a valid match in current row in this area
        x += surfaceSize.width
      } else {
        x += 1
      }
    }
  }
  return resultPositions
}
I already tried to include some optimizations, but I am sure there are far better solutions. Is there an existing Matlab function for this that I could port? I'm also wondering whether this problem has its own name, as I didn't know exactly what to google for.
Thanks for thinking about it! I'm excited to see your proposals or solutions :)
EDIT: Matrix dimensions in my application range from 300x300 to 3000x3000 approximately. Also, the algorithm will only be called once for the same matrix. The reason is that the matrix will always be changed afterwards (approx. 1-20% of it).
RESULTS
I implemented the algorithms of Kevin, Nikita and Daniel and benchmarked them in my application environment, i.e. no isolated synthetic benchmark here, but special care was taken to integrate all algorithms in their most performant way which was especially important for Kevin's approach as it uses generics (see below).
First, the raw results, using Scala 2.8 and jdk 1.6.0_23. The algorithms were executed several hundred times as part of solving an application-specific problem. "Duration" denotes the total time needed until the application algorithm finished (of course without jvm startup etc.). My machine is a 2.8GHz Core 2 Duo with 2 cores and 2gig of memory, -Xmx800M were given to the JVM.
IMPORTANT NOTE: I think my benchmark setup is not really fair for parallelized algorithms like the one from Daniel. This is because the application is already calculating multi-threaded. So the results here probably only show an equivalent to single-threaded speed.
Matrix size 233x587:
                 | duration | JVM memory | avg CPU utilization
original O(n^4)  | 3000s    | 30M        | 100%
original/-server | 840s     | 270M       | 100%
Nikita O(n^2)    | 5-6s     | 34M        | 70-80%
Nikita/-server   | 1-2s     | 300M       | 100%
Kevin/-server    | 7400s    | 800M       | 96-98%
Kevin/-server**  | 4900s    | 800M       | 96-99%
Daniel/-server   | 240s     | 360M       | 96-99%
** with @specialized, to make generics faster by avoiding type erasure
Matrix size 2000x3000:
                 | duration  | JVM memory | avg CPU utilization
original O(n^4)  | too long  | 100M       | 100%
Nikita O(n^2)    | 150s      | 760M       | 70%
Nikita/-server   | 295s (!)  | 780M       | 100%
Kevin/-server    | too long, didn't try
First, a small note on memory. The -server JVM option uses considerably more memory in exchange for more optimizations and generally faster execution. As you can see from the second table, Nikita's algorithm is slower with the -server option, which is obviously due to hitting the memory limit. I assume this also slows down Kevin's algorithm even for the small matrix, as the functional approach uses much more memory anyway. To eliminate the memory factor I also tried it once with a 50x50 matrix: Kevin's took 5 secs and Nikita's 0 secs (well, nearly 0). So in any case it's still slower, and not just because of memory.
As you can see from the numbers, I will obviously use Nikita's algorithm because it's damn fast and this is absolutely necessary in my case. It can also be parallelized easily as Daniel pointed out. The only downside is that it's not really the scala-way.
At the moment Kevin's algorithm is probably in general a bit too complex and therefore slow, but I'm sure there are more optimizations possible (see last comments in his answer).
With the goal of directly transforming Nikita's algorithm to functional style Daniel came up with a solution which is already quite fast and as he says would even be faster if he could use scanRight (see last comments in his answer).
What's next?
At the technological side: waiting for Scala 2.9, ScalaCL, and doing synthetic benchmarks to get raw speeds.
My goal in all this is to have functional code, BUT only if it's not sacrificing too much speed.
Choice of answer:
As for choosing an answer, I would want to mark Nikita's and Daniel's algorithms as answers but I have to choose one. The title of my question included "most efficiently", and one is the fastest in imperative and the other in functional style. Although this question is tagged Scala I chose Nikita's imperative algorithm as 2s vs. 240s is still too much difference for me to accept. I'm sure the difference can still be pushed down a bit, any ideas?
So, thank you all very, very much! Although I won't use the functional algorithms yet, I got many new insights into Scala and I think I'm slowly getting an understanding of all the functional craziness and its potential. (Of course, even without doing much functional programming, Scala is much more pleasing than Java... that's another reason to learn it.)
First, a couple of helper functions:
// count the number of elements matching the head
def runLength[T](xs: List[T]) = xs.takeWhile(_ == xs.head).size

// pair each element with the number of subsequent occurrences
def runLengths[T](row: List[T]): List[(T,Int)] = row match {
  case Nil => Nil
  case h :: t => (h, runLength(row)) :: runLengths(t)
}
// should be optimised for tail-call, but easier to understand this way
// sample input: 1,1,2,2,2,3,4,4,4,4,5,5,6
// output: (1,2), (1,1), (2,3), (2,2), (2,1), (3,1), (4,4), (4,3), (4,2), (4,1), (5,2), (5,1), (6,1)
This can then be used against each row in the grid:
val grid = List(
  List(0,0,0,0),
  List(0,1,1,0),
  List(0,1,1,0),
  List(0,0,0,0))

val stage1 = grid map runLengths
// returns stage1: List[List[(Int, Int)]] =
//   0,4  0,3  0,2  0,1
//   0,1  1,2  1,1  0,1
//   0,1  1,2  1,1  0,1
//   0,4  0,3  0,2  0,1
Then having done the horizontal, the rows, we now perform exactly the same operation on the columns. This uses the transpose method available in the Scala standard collection library to exchange rows<->columns, as per the mathematical matrix operation of the same name. We also transpose back once this is done.
val stage2 = (stage1.transpose map runLengths).transpose
// returns stage2: List[List[((Int, Int), Int)]] =
//   (0,4),1  (0,3),1  (0,2),1  (0,1),4
//   (0,1),2  (1,2),2  (1,1),2  (0,1),3
//   (0,1),1  (1,2),1  (1,1),1  (0,1),2
//   (0,4),1  (0,3),1  (0,2),1  (0,1),1
What does this mean? Taking one element: (1,2),2, it means that that cell contains the value 1, and scanning to the right that there are 2 adjacent cells in the row containing 1. Scanning down, there are two adjacent cells with the same property of containing the value 1 and having the same number of equal values to their right.
It's a little clearer after tidying up, converting nested tuples of the form ((a,b),c) to (a,(b,c)):
val stage3 = stage2 map { _.map { case ((a,b),c) => a -> (b,c) } }
// returns stage3: List[List[(Int, (Int, Int))]] =
//   0,(4,1)  0,(3,1)  0,(2,1)  0,(1,4)
//   0,(1,2)  1,(2,2)  1,(1,2)  0,(1,3)
//   0,(1,1)  1,(2,1)  1,(1,1)  0,(1,2)
//   0,(4,1)  0,(3,1)  0,(2,1)  0,(1,1)
Where 1,(2,2) refers to a cell containing the value 1, and being at the top-left corner of a 2x2 rectangle of similarly-valued cells.
From here, it's trivial to spot rectangles of the same size. Though a little more work is necessary if you also want to exclude areas that are the subset of a larger rectangle.
UPDATE: As written, the algorithm doesn't work well for cases like the cell at (0,0), which belongs to two distinct rectangles (1x4 and 4x1). If needed, this is also solvable using the same techniques. (do one pass using map/transpose/map/transpose and a second with transpose/map/transpose/map, then zip and flatten the results).
It would also need modifying if the input might contain adjoining rectangles containing cells of the same value, such as:
0 0 0 0 0 0 0 0
0 0 1 1 1 0 0 0
0 0 1 1 1 0 0 0
0 0 1 1 1 1 1 0
0 0 1 1 1 1 1 0
0 0 1 1 1 1 1 0
0 0 0 0 0 0 0 0
Putting it all together, and cleaning up a little:
type Grid[T] = List[List[T]]

def runLengths[T](row: List[T]): List[(T,Int)] = row match {
  case Nil => Nil
  case h :: t => (h -> row.takeWhile(_ == h).size) :: runLengths(t)
}

def findRectangles[T](grid: Grid[T]) = {
  val step1 = (grid map runLengths)
  val step2 = (step1.transpose map runLengths).transpose
  step2 map { _ map { case ((a,b),c) => (a,(b,c)) } }
}
UPDATE2
Hold onto your hats, this is a big one...
Before writing a single line of new functionality, we'll first refactor a bit, pulling some of the methods into Ops classes with implicit conversions, so they can be used as though they were methods defined on the underlying collection types:
import annotation.tailrec

class RowOps[T](row: List[T]) {
  def withRunLengths[U](func: (T,Int) => U): List[U] = {
    @tailrec def recurse(row: List[T], acc: List[U]): List[U] = row match {
      case Nil => acc
      case head :: tail =>
        recurse(
          tail,
          func(head, row.takeWhile(head==).size) :: acc)
    }
    recurse(row, Nil).reverse
  }

  def mapRange(start: Int, len: Int)(func: T => T) =
    row.splitAt(start) match {
      case (l, r) => l ::: r.take(len).map(func) ::: r.drop(len)
    }
}
implicit def rowToOps[T](row: List[T]) = new RowOps(row)
This adds a withRunLengths method to lists. One notable difference here is that instead of returning a List of (value, length) pairs the method accepts, as a parameter, a function to create an output value for each such pair. This will come in handy later...
type Grid[T] = List[List[T]]

class GridOps[T](grid: Grid[T]) {
  def deepZip[U](other: Grid[U]) = (grid zip other) map { case (g, o) => g zip o }
  def deepMap[U](f: (T) => U) = grid map { _ map f }
  def mapCols[U](f: List[T] => List[U]) = (grid.transpose map f).transpose
  def height = grid.size
  def width = grid.head.size
  def coords = List.tabulate(height, width){ case (y, x) => (x, y) }
  def zipWithCoords = deepZip(coords)
  def deepMapRange(x: Int, y: Int, w: Int, h: Int)(func: T => T) =
    grid mapRange (y, h){ _.mapRange(x, w)(func) }
}
implicit def gridToOps[T](grid: Grid[T]) = new GridOps(grid)
There shouldn't be any surprises here. The deepXXX methods avoid having to write constructs of the form list map { _ map { ... } }. tabulate may also be new to you, but its purpose is hopefully obvious from the use.
Using these, it becomes trivial to define functions for finding the horizontal and vertical run lengths over a whole grid:
def withRowRunLengths[T,U](grid: Grid[T])(func: (T,Int) => U) =
  grid map { _.withRunLengths(func) }

def withColRunLengths[T,U](grid: Grid[T])(func: (T,Int) => U) =
  grid mapCols { _.withRunLengths(func) }
Why 2 parameter blocks and not one? I'll explain shortly.
I could have defined these as methods in GridOps, but they don't seem appropriate for general purpose use. Feel free to disagree with me here :)
Next, define some input...
def parseIntGrid(rows: String*): Grid[Int] =
  rows.toList map { _ map { _.toString.toInt } toList }

val input: Grid[Int] = parseIntGrid("0000","0110","0110","0000")
...another useful helper type...
case class Rect(w: Int, h: Int)
object Rect { def empty = Rect(0,0) }
As an alternative to a tuple, this really helps with debugging. Deeply nested parentheses are not easy on the eyes. (Sorry, Lisp fans!)
...and use the functions defined above:
val stage1w = withRowRunLengths(input) {
  case (cell, width) => (cell, width)
}
val stage2w = withColRunLengths(stage1w) {
  case ((cell, width), height) => Rect(width, height)
}
val stage1t = withColRunLengths(input) {
  case (cell, height) => (cell, height)
}
val stage2t = withRowRunLengths(stage1t) {
  case ((cell, height), width) => Rect(width, height)
}
All of the above blocks should be one-liners, but I reformatted for StackOverflow.
The outputs at this stage are just grids of Rectangles, I've intentionally dropped any mention of the actual value that the rectangle is comprised of. That's absolutely fine, it's easy enough to find from its co-ordinates in the grid, and we'll be recombining the data in just a brief moment.
Remember me explaining that RowOps#withRunLengths takes a function as a parameter? Well, this is where it's being used. case (cell,width) => (cell,width) is actually a function that takes the cell value and run length (calling them cell and width) then returns the (cell,width) Tuple.
This is also why I used two parameter blocks in defining the functions, so the second param can be passed in { braces }, and makes the whole thing all nice and DSL-like.
Another very important principle illustrated here is that the type inferencer works on parameter blocks in succession, so for this (remember it?):
def withRowRunLengths[T,U](grid: Grid[T])(func: (T,Int)=>U)
The type T will be determined by the supplied grid. The compiler then knows the input type for the function supplied as the second argument -- Int in this case, as the first param was a Grid[Int] -- which is why I was able to write case (cell,width) => (cell,width) and not have to explicitly state anywhere that cell and width are integers. In the second usage, a Grid[(Int,Int)] is supplied; this fits the closure case ((cell,width),height) => Rect(width,height) and again, it just works.
If that closure had contained anything that wouldn't work for the underlying type of the grid then the compiler would have complained, this is what type safety is all about...
Having calculated all the possible rectangles, all that remains is to gather up the data and present it in a format that's more practical for analysing. Because the nesting at this stage could get very messy, I defined another datatype:
case class Cell[T](
  value: T,
  coords: (Int,Int) = (0,0),
  widest: Rect = Rect.empty,
  tallest: Rect = Rect.empty
)
Nothing special here, just a case class with named/default parameters. I'm also glad I had the foresight to define Rect.empty above :)
Now mix the 4 datasets (input vals, coords, widest rects, tallest rects), gradually fold into the cells, stir gently, and serve:
val cellsWithCoords = input.zipWithCoords deepMap {
  case (v, (x,y)) => Cell(value=v, coords=(x,y))
}

val cellsWithWidest = cellsWithCoords deepZip stage2w deepMap {
  case (cell, rect) => cell.copy(widest=rect)
}

val cellsWithWidestAndTallest = cellsWithWidest deepZip stage2t deepMap {
  case (cell, rect) => cell.copy(tallest=rect)
}

val results = (cellsWithWidestAndTallest deepMap {
  case Cell(value, coords, widest, tallest) =>
    List((widest, value, coords), (tallest, value, coords))
}).flatten.flatten
The last stage is interesting there, it converts each cell to a size-2 list of (rectangle, value, coords) tuples (size 2 because we have both widest and tallest rects for each cell). Calling flatten twice then takes the resulting List[List[List[_]]] back down to a single List. There's no need to retain the 2D structure any more, as the necessary coordinates are already embedded in the results.
Voila!
It's now up to you what you now do with this List. The next stage is probably some form of sorting and duplicate removal...
You can do it in O(n^2) relatively easily.
First, some precalculation. For each cell in the matrix, calculate how many consecutive cells below it have the same number.
For your example, the resulting matrix a (couldn't think of a better name :/) will look like this
0 0 0 0 0 2 2
1 1 2 2 2 1 1
0 0 1 1 1 0 0
1 1 0 0 0 1 1
0 0 0 0 0 0 0
It can be produced in O(n^2) easily.
Now, for each row i, let's find all rectangles with top side in row i (and bottom side in row i + height - 1).
Here's an illustration for i = 1
0 0 0 0 0 0 0
-------------
4 4 2 2 2 0 0
4 4 2 2 2 0 0
0 0 2 2 2 1 1
-------------
0 0 0 0 0 1 1
Now, the main idea
int current_width = 0;
for (int j = 0; j < matrix.width; ++j) {
    if (a[i][j] < height - 1) {
        // this column has different numbers in it, no game
        current_width = 0;
        continue;
    }
    if (current_width > 0) {
        // this column should consist of the same numbers as the one before
        if (matrix[i][j] != matrix[i][j - 1]) {
            current_width = 1; // start streak anew, from the current column
            continue;
        }
    }
    ++current_width;
    if (current_width >= width) {
        // we've found a rectangle!
    }
}
In the example above (i = 1) current_width after each iteration will be 0, 0, 1, 2, 3, 0, 0.
Now, we need to iterate over all possible i and we have a solution.
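For concreteness, here is the same algorithm sketched in Python (Python rather than Scala purely for compactness; matrix is assumed to be a rectangular list of lists):

def find_rectangles(matrix, width, height):
    """Return (x, y) top-left corners of all width x height same-value areas."""
    rows, cols = len(matrix), len(matrix[0])
    # a[i][j] = number of consecutive cells below (i, j) with the same value
    a = [[0] * cols for _ in range(rows)]
    for i in range(rows - 2, -1, -1):
        for j in range(cols):
            if matrix[i][j] == matrix[i + 1][j]:
                a[i][j] = a[i + 1][j] + 1
    result = []
    for i in range(rows - height + 1):        # top side in row i
        current_width = 0
        for j in range(cols):
            if a[i][j] < height - 1:
                current_width = 0             # column too short here, no game
            elif current_width > 0 and matrix[i][j] != matrix[i][j - 1]:
                current_width = 1             # start the streak anew
            else:
                current_width += 1
            if current_width >= width:
                result.append((j - width + 1, i))   # we've found a rectangle!
    return result

With the 5x7 example matrix from the question and width=2, height=3 this returns [(5, 0), (2, 1), (3, 1)], matching the expected positions.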
I'll play the devil's advocate here. I'll show Nikita's exact algorithm written in a functional style. I'll also parallelize it, just to show it can be done.
First, computing consecutive cells with the same number below a cell. I made a slight change by returning all values plus one compared to Nikita's proposed output, to avoid a - 1 in some other part of the code.
def computeHeights(column: Array[Int]) = (
  column
    .reverse
    .sliding(2)
    .map(pair => pair(0) == pair(1))
    .foldLeft(List(1)) (
      (list, flag) => (if (flag) list.head + 1 else 1) :: list
    )
)
I'd rather use .view before reversing, but that doesn't work with the present Scala version. If it did, it would save repeated array creations, which ought to speed up the code a lot, for memory locality and bandwidth reasons if no other.
Now, all columns at the same time:
import scala.actors.Futures.future

def getGridHeights(grid: Array[Array[Int]]) = (
  grid
    .transpose
    .map(column => future(computeHeights(column)))
    .map(_())
    .toList
    .transpose
)
I'm not sure whether the parallelization overhead will pay off here or not, but this is the first algorithm on Stack Overflow where it actually has a chance, given the non-trivial effort in computing a column. Here's another way of writing it, using the upcoming 2.9 features (it might work on Scala 2.8.1 -- I'm not sure):
def getGridHeights(grid: Array[Array[Int]]) = (
  grid
    .transpose
    .toParSeq
    .map(computeHeights)
    .toList
    .transpose
)
Now, the meat of Nikita's algorithm:
def computeWidths(height: Int, row: Array[Int], heightRow: List[Int]) = (
  row
    .sliding(2)
    .zip(heightRow.iterator)
    .toSeq
    .reverse
    .foldLeft(List(1)) { case (widths @ (currWidth :: _), (Array(prev, curr), currHeight)) =>
      (
        if (currHeight >= height && currWidth > 0 && prev == curr) currWidth + 1
        else 1
      ) :: widths
    }
    .toArray
)
I'm using pattern matching in this bit of code, though I'm concerned about its speed, because with all the sliding, zipping and folding there are too many things being juggled here. And, speaking of performance, I'm using Array instead of IndexedSeq because Array is the only type on the JVM that is not erased, resulting in much better performance with Int. And then there's the .toSeq I'm not particularly happy about either, because of memory locality and bandwidth.
Also, I'm doing this from right to left instead of Nikita's left to right, just so I can find the upper left corner.
However, it is the same thing as the code in Nikita's answer, aside from the fact that I'm still adding one to the current width compared to his code, and not printing the result right here. There's a bunch of things without clear origins here, though -- row, heightRow, height... Let's see this code in context -- and parallelized! -- to get the overall picture.
def getGridWidths(height: Int, grid: Array[Array[Int]]) = (
  grid
    .zip(getGridHeights(grid))
    .map { case (row, heightsRow) => future(computeWidths(height, row, heightsRow)) }
    .map(_())
)
And the 2.9 version:
def getGridWidths(height: Int, grid: Array[Array[Int]]) = (
  grid
    .toParSeq
    .zip(getGridHeights(grid))
    .map { case (row, heightsRow) => computeWidths(height, row, heightsRow) }
)
And, for the grand finale,
def findRectangles(height: Int, width: Int, grid: Array[Array[Int]]) = {
  val gridWidths = getGridWidths(height, grid)
  for {
    y <- gridWidths.indices
    x <- gridWidths(y).indices
    if gridWidths(y)(x) >= width
  } yield (x, y)
}
So... I have no doubt that the imperative version of Nikita's algorithm is faster -- it only uses Array, which is much faster with primitives than any other type, and it avoids massive creation of temporary collections -- though Scala could have done better here. Also, no closures -- as much as they help, they are slower than code without closures, at least until the JVM grows something to help with them.
Also, Nikita's code can be easily parallelized with threads -- of all things! -- with little trouble.
But my point here is that Nikita's code is not particularly bad just because it uses arrays and a mutable variable here and there. The algorithm translates cleanly to a more functional style.
EDIT
So, I decided to take a shot at making an efficient functional version. It's not really fully functional because I use Iterator, which is mutable, but it's close enough. Unfortunately, it won't work on Scala 2.8.1, because it lacks scanLeft on Iterator.
There are two other unfortunate things here. First, I gave up on parallelization of grid heights, since I couldn't get around having at least one transpose, with all the collection copying that entails. There's still at least one copying of the data, though (see toArray call to understand where).
Also, since I'm working with Iterable I lose the ability to use the parallel collections. I do wonder if the code couldn't be made better by having grid be a parallel collection of parallel collections from the beginning.
I have no clue if this is more efficient than the previous version or not. It's an interesting question...
def getGridHeights(grid: Array[Array[Int]]) = (
  grid
    .sliding(2)
    .scanLeft(Array.fill(grid.head.size)(1)) { case (currHeightArray, Array(prevRow, nextRow)) =>
      (prevRow, nextRow, currHeightArray)
        .zipped
        .map { case (x, y, currHeight) => if (x == y) currHeight + 1 else 1 }
    }
)
def computeWidths(height: Int, row: Array[Int], heightRow: Array[Int]) = (
  row
    .sliding(2)
    .map { case Array(x, y) => x == y }
    .zip(heightRow.iterator)
    .scanLeft(1) { case (currWidth, (isConsecutive, currHeight)) =>
      if (currHeight >= height && currWidth > 0 && isConsecutive) currWidth + 1
      else 1
    }
    .toArray
)
import scala.actors.Futures.future

def getGridWidths(height: Int, grid: Array[Array[Int]]) = (
  grid
    .iterator
    .zip(getGridHeights(grid))
    .map { case (row, heightsRow) => future(computeWidths(height, row, heightsRow)) }
    .map(_())
    .toArray
)
def findRectangles(height: Int, width: Int, grid: Array[Array[Int]]) = {
  val gridWidths = getGridWidths(height, grid)
  for {
    y <- gridWidths.indices
    x <- gridWidths(y).indices
    if gridWidths(y)(x) >= width
  } yield (x - width + 1, y - height + 1)
}

How can you compare to what extent two lists are in the same order?

I have two arrays containing the same elements, but in different orders, and I want to know the extent to which their orders differ.
The method I tried didn't work. It was as follows:
For each list I built a matrix which recorded, for each pair of elements, whether they were above or below each other in the list. I then calculated a Pearson correlation coefficient of these two matrices. This worked extremely badly. Here's a trivial example:
list 1:
1
2
3
4
list 2:
1
3
2
4
The method I described above produced matrices like this (where 1 means the row number is higher than the column, and 0 vice-versa):
list 1:
1 2 3 4
1 1 1 1
2 1 1
3 1
4
list 2:
1 2 3 4
1 1 1 1
2 0 1
3 1
4
Since the only difference is the order of elements 2 and 3, these should be deemed to be very similar. The Pearson Correlation Coefficient for those two matrices is 0, suggesting they are not correlated at all. I guess the problem is that what I'm looking for is not really a correlation coefficient, but some other kind of similarity measure. Edit distance, perhaps?
Can anyone suggest anything better?
Mean square of differences of indices of each element.
List 1: A B C D E
List 2: A D C B E
Indices of each element of List 1 in List 2 (zero based)
A B C D E
0 3 2 1 4
Indices of each element of List 1 in List 1 (zero based)
A B C D E
0 1 2 3 4
Differences:
A B C D E
0 -2 0 2 0
Squares of differences:
A B C D E
0 4 0 4 0
Average differentness = 8 / 5.
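The same computation as a small Python sketch (assuming both lists hold the same unique elements):

def order_distance(list1, list2):
    pos = {item: i for i, item in enumerate(list2)}  # index of each element in list2
    # mean of squared index differences
    return sum((pos[item] - i) ** 2 for i, item in enumerate(list1)) / len(list1)

print(order_distance(['A', 'B', 'C', 'D', 'E'], ['A', 'D', 'C', 'B', 'E']))  # 1.6 == 8 / 5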
Just an idea, but is there any mileage in adapting a standard sort algorithm to count the number of swap operations needed to transform list1 into list2?
I think that defining the compare function may be difficult though (perhaps even just as difficult as the original problem!), and this may be inefficient.
edit: thinking about this a bit more, the compare function would essentially be defined by the target list itself. So for example if list 2 is:
1 4 6 5 3
...then the compare function should result in 1 < 4 < 6 < 5 < 3 (and return equality where entries are equal).
Then the swap function just needs to be extended to count the swap operations.
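A sketch of that idea in Python, with the target list defining the comparison order and a bubble-style pass counting the swaps (assumes the lists are permutations of each other with unique elements):

def count_swaps(list1, list2):
    rank = {v: i for i, v in enumerate(list2)}  # order defined by the target list
    seq = [rank[v] for v in list1]
    swaps = 0
    for i in range(len(seq)):                   # bubble sort, O(n^2)
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                swaps += 1
    return swaps

print(count_swaps([1, 2, 3, 4], [1, 3, 2, 4]))  # 1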
A bit late for the party here, but just for the record, I think Ben almost had it... if you'd looked further into correlation coefficients, I think you'd have found that Spearman's rank correlation coefficient might have been the way to go.
Interestingly, jamesh seems to have derived a similar measure, but not normalized.
See this recent SO answer.
You might consider how many changes it takes to transform one string into another (which I guess it was you were getting at when you mentioned edit distance).
See: http://en.wikipedia.org/wiki/Levenshtein_distance
Although I don't think l-distance takes into account rotation. If you allow rotation as an operation then:
1, 2, 3, 4
and
2, 3, 4, 1
Are pretty similar.
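For reference, a minimal Levenshtein sketch in Python (standard dynamic programming over insert/delete/substitute; as noted, rotation is not among the operations):

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein('1234', '2341'))  # 2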
There is a branch-and-bound algorithm that should work for any set of operators you like. It may not be real fast. The pseudocode goes something like this:
bool bounded_recursive_compare_routine(int* a, int* b, int level, int bound){
    if (level > bound) return false;
    // if at end of a and b, return true
    // apply rule 0, like no-change
    if (*a == *b){
        bounded_recursive_compare_routine(a+1, b+1, level+0, bound);
        // if it returns true, return true;
    }
    // if can apply rule 1, like rotation, to b, try that and recur
    bounded_recursive_compare_routine(a+1, b+1, level+cost_of_rotation, bound);
    // if it returns true, return true;
    ...
    return false;
}

int get_minimum_cost(int* a, int* b){
    int bound;
    for (bound=0; ; bound++){
        if (bounded_recursive_compare_routine(a, b, 0, bound)) break;
    }
    return bound;
}
The time it takes is roughly exponential in the answer, because it is dominated by the last bound that works.
Added: This can be extended to find the nearest-matching string stored in a trie. I did that years ago in a spelling-correction algorithm.
I'm not sure exactly what formula it uses under the hood, but difflib.SequenceMatcher.ratio() does exactly this:
ratio(self) method of difflib.SequenceMatcher instance:
Return a measure of the sequences' similarity (float in [0,1]).
Code example:
from difflib import SequenceMatcher
sm = SequenceMatcher(None, '1234', '1324')
print(sm.ratio())
# 0.75
Another approach that is based on a little bit of mathematics is to count the number of inversions to convert one of the arrays into the other one. An inversion is the exchange of two neighboring array elements. In ruby it is done like this:
# extend class array by new method
class Array
def dist(other)
raise 'can calculate distance only to array with same length' if length != other.length
# initialize count of inversions to 0
count = 0
# loop over all pairs of indices i, j with i<j
length.times do |i|
(i+1).upto(length) do |j|
# increase count if i-th and j-th element have different order
count += 1 if (self[i] <=> self[j]) != (other[i] <=> other[j])
end
end
return count
end
end
l1 = [1, 2, 3, 4]
l2 = [1, 3, 2, 4]
# try an example (prints 1)
puts l1.dist(l2)
The distance between two arrays of length n can be between 0 (they are the same) and n*(n-1)/2 (reversing the first array gives the second). If you prefer distances always between 0 and 1, so you can compare distances of pairs of arrays of different lengths, just divide by n*(n-1)/2.
A disadvantage of this algorithm is its O(n^2) running time. It also assumes that the arrays don't have duplicate entries, but it could be adapted.
A remark about the code line count += 1 if ...: the count is increased only if the i-th element of the first list is smaller than its j-th element while the i-th element of the second list is bigger than its j-th element, or vice versa. In short: (l1[i] < l1[j] and l2[i] > l2[j]) or (l1[i] > l1[j] and l2[i] < l2[j]).
If one has two orderings, one should look at two important rank correlation coefficients:
Spearman's rank correlation coefficient: https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
This is almost the same as jamesh's answer but scaled to the range -1 to 1.
It is defined as:
1 - (6 * sum_of_squared_distances) / (n_samples * (n_samples**2 - 1))
Kendall's tau: https://nl.wikipedia.org/wiki/Kendalls_tau
When using Python one could use:
from scipy import stats
order1 = [1, 2, 3, 4]
order2 = [1, 3, 2, 4]
print(stats.spearmanr(order1, order2)[0])
# 0.8000
print(stats.kendalltau(order1, order2)[0])
# 0.6667
If anyone is using the R language: I've implemented a function that computes the Spearman rank correlation coefficient using the method described above by @bubake:
get_spearman_coef <- function(objectA, objectB) {
  # getting the spearman rho rank test
  spearman_data <- data.frame(listA = objectA, listB = objectB)
  spearman_data$rankA <- 1:nrow(spearman_data)
  rankB <- c()
  for (index_valueA in 1:nrow(spearman_data)) {
    for (index_valueB in 1:nrow(spearman_data)) {
      if (spearman_data$listA[index_valueA] == spearman_data$listB[index_valueB]) {
        rankB <- append(rankB, index_valueB)
      }
    }
  }
  spearman_data$rankB <- rankB
  spearman_data$distance <- (spearman_data$rankA - spearman_data$rankB)**2
  spearman <- 1 - ((6 * sum(spearman_data$distance)) / (nrow(spearman_data) * (nrow(spearman_data)**2 - 1)))
  print(paste("spearman's rank correlation coefficient"))
  return(spearman)
}
Results:
get_spearman_coef(c("a","b","c","d","e"), c("a","b","c","d","e"))
spearman's rank correlation coefficient: 1
get_spearman_coef(c("a","b","c","d","e"), c("b","a","d","c","e"))
spearman's rank correlation coefficient: 0.8
