Fast summing of subarrays in Python - parallel-processing

I have a data cube a of radius w and for every element of that cube, I would like to add the element and all surrounding values within a cube of radius r, where r < w. The result should be returned in an array of the same shape, b.
As a simple example, suppose:
import numpy
from scipy.ndimage import convolve

a = numpy.ones(shape=(2*w,2*w,2*w), dtype='float32')
kernel = numpy.ones(shape=(2*r,2*r,2*r), dtype='float32')
b = convolve(a, kernel, mode='constant', cval=0)
then b would have the value (2r)(2r)(2r) for all the indices not on the edge.
Currently I am using a loop to do this and it is very slow, especially for larger w and r. I tried scipy convolution but got little speedup over the loop. I am now looking at numba's parallel computation feature but cannot figure out how to rewrite the code to work with numba. I have an Nvidia RTX card, so CUDA GPU calculations are also possible.
Suggestions are welcome.
Here is my current code:
for x in range(0,w*2):
    print(x)
    for y in range(0,w*2):
        for z in range(0,w*2):
            if x >= r:
                x1 = x - r
            else:
                x1 = 0
            if x < w*2 - r:
                x2 = x + r
            else:
                x2 = w*2 - 1
            if y >= r:
                y1 = y - r
            else:
                y1 = 0
            if y < w*2 - r:
                y2 = y + r
            else:
                y2 = w*2 - 1
            if z >= r:
                z1 = z - r
            else:
                z1 = 0
            if z < w*2 - r:
                z2 = z + r
            else:
                z2 = w*2 - 1
            b[x][y][z] = numpy.sum(a[x1:x2, y1:y2, z1:z2])
return b

Here is a very simple version of your code so that it works with numba. I was finding speed-ups of a factor of 10 relative to the pure numpy code. However, you should be able to get even greater speed-ups using an FFT convolution algorithm (e.g. scipy's fftconvolve). Can you share your attempt at getting convolution to work?
import numpy as np
from numba import njit

@njit
def sum_cubes(a, b, w, r):
    for x in range(0, w*2):
        # print(x)
        for y in range(0, w*2):
            for z in range(0, w*2):
                if x >= r:
                    x1 = x - r
                else:
                    x1 = 0
                if x < w*2 - r:
                    x2 = x + r
                else:
                    x2 = w*2 - 1
                if y >= r:
                    y1 = y - r
                else:
                    y1 = 0
                if y < w*2 - r:
                    y2 = y + r
                else:
                    y2 = w*2 - 1
                if z >= r:
                    z1 = z - r
                else:
                    z1 = 0
                if z < w*2 - r:
                    z2 = z + r
                else:
                    z2 = w*2 - 1
                b[x, y, z] = np.sum(a[x1:x2, y1:y2, z1:z2])
    return b
EDIT: Your original code has a small bug in it. The way numpy indexing works, the final line should be
b[x,y,z] = np.sum(a[x1:x2+1,y1:y2+1,z1:z2+1])
unless you want the cube to be off-centre.
Assuming you do want the cube to be centred, then a much faster way to do this calculation is using scipy's uniform filter:
from scipy.ndimage import uniform_filter

def sum_cubes_quickly(a, b, w, r):
    b = uniform_filter(a, mode='constant', cval=0, size=2*r+1) * (2*r+1)**3
    return b
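As a quick sanity check (my own addition, not from the original answer), the scaled uniform_filter agrees with an explicit convolution by a cube of ones of side 2*r+1:

import numpy as np
from scipy.ndimage import convolve, uniform_filter

w, r = 8, 2  # small sizes, just for the check
a = np.random.rand(2*w, 2*w, 2*w).astype('float32')
kernel = np.ones((2*r+1, 2*r+1, 2*r+1), dtype='float32')
b1 = convolve(a, kernel, mode='constant', cval=0.0)
b2 = uniform_filter(a, size=2*r+1, mode='constant', cval=0.0) * (2*r+1)**3
print(np.allclose(b1, b2, atol=1e-3))  # expect True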
A few quick runtime comparisons for randomly generated data with w = 50, r = 10:
Original raw numpy code - 15.1 sec
Numba'd numpy code - 8.1 sec
uniform_filter - 13.1 ms
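Since the question specifically asks about numba's parallel feature, here is a minimal parallel sketch (my own addition, using the corrected inclusive bounds): parallel=True plus prange on the outer loop lets numba spread the rows across cores.

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def sum_cubes_parallel(a, b, w, r):
    for x in prange(0, w*2):  # outer loop runs across threads
        for y in range(0, w*2):
            for z in range(0, w*2):
                x1 = max(x - r, 0)
                x2 = min(x + r, w*2 - 1)
                y1 = max(y - r, 0)
                y2 = min(y + r, w*2 - 1)
                z1 = max(z - r, 0)
                z2 = min(z + r, w*2 - 1)
                # inclusive upper bounds, per the bug fix above
                b[x, y, z] = np.sum(a[x1:x2+1, y1:y2+1, z1:z2+1])
    return b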

Related

Solving linear equations

I have to find the integral solution of an equation ax + by = c such that x >= 0 and y >= 0 and the value of (x + y) is minimal.
I know that if c % gcd(a,b) == 0 then it's always possible. How do I find the values of x and y?
My approach
for(i = 0 to 2*c):
    x = i
    y = (c - a*i)/b
    if(y is integer)
        ans = min(ans, x+y)
Is there a better way to do this, with better time complexity?
Using the Extended Euclidean Algorithm and the theory of linear Diophantine equations there is no need to search. Here is a Python 3 implementation:
def egcd(a,b):
    s,t = 1,0  #coefficients to express current a in terms of original a,b
    x,y = 0,1  #coefficients to express current b in terms of original a,b
    q,r = divmod(a,b)
    while r > 0:
        a,b = b,r
        old_x, old_y = x,y
        x,y = s - q*x, t - q*y
        s,t = old_x, old_y
        q,r = divmod(a,b)
    return b, x, y

def smallestSolution(a,b,c):
    d,x,y = egcd(a,b)
    if c%d != 0:
        return "No integer solutions"
    else:
        u = a//d  #integer division
        v = b//d
        w = c//d
        x = w*x
        y = w*y
        k1 = -x//v if -x % v == 0 else 1 + -x//v  #k1 = ceiling(-x/v)
        x1 = x + k1*v  # x + k1*v is solution with smallest x >= 0
        y1 = y - k1*u
        if y1 < 0:
            return "No nonnegative integer solutions"
        else:
            k2 = y//u  #floor division
            x2 = x + k2*v  #y - k2*u is solution with smallest y >= 0
            y2 = y - k2*u
            if x2 < 0 or x1+y1 < x2+y2:
                return (x1,y1)
            else:
                return (x2,y2)
Typical run:
>>> smallestSolution(1001,2743,160485)
(111, 18)
The way it works: first use the extended Euclidean algorithm to find d = gcd(a,b) and one solution, (x,y). All other solutions are of the form (x+k*v, y-k*u) where u = a/d and v = b/d and k is an arbitrary integer parameter. Since x+y is linear, it has no critical points, hence is minimized in the first quadrant when either x is as small as possible or y is as small as possible. By appropriate use of floor and ceiling you can locate the integer point with x as small as possible and the one with y as small as possible; just take the one with the smaller sum. (As a check on the run above: 1001*111 + 2743*18 = 111111 + 49374 = 160485.)
On Edit: My original code used the Python function math.ceil applied to -x/v. This is problematic for very large integers because of floating-point rounding. I tweaked it so that the ceiling is computed with just int operations. It can now handle arbitrarily large numbers:
>>> a = 236317407839490590865554550063
>>> b = 127372335361192567404918884983
>>> c = 475864993503739844164597027155993229496457605245403456517677648564321
>>> smallestSolution(a,b,c)
(2013668810262278187384582192404963131387, 120334243940259443613787580180)
>>> x,y = _
>>> a*x+b*y
475864993503739844164597027155993229496457605245403456517677648564321
Most of the computation takes place in running the extended Euclidean algorithm, which is known to take O(log(min(a,b))) division steps.
First let's assume a, b, c > 0, so:
a.x + b.y = c
x + y = min(xi + yi)
x, y >= 0
a, b, c > 0
------------------------
x = ( c - b.y )/a
y = ( c - a.x )/b
c - a.x >= 0
c - b.y >= 0
c >= b.y
c >= a.x
x <= c/a
y <= c/b
So naive O(n) solution is in C++ like this:
void compute0(int &x, int &y, int a, int b, int c) // naive
{
    int xx, yy;
    xx = -1; yy = -1;
    for (y = 0;; y++)
    {
        x = c - b*y;
        if (x < 0) break;     // y out of range, stop
        if (x % a) continue;  // non-integer solution
        x /= a;
        if ((xx < 0) || (x + y <= xx + yy)) { xx = x; yy = y; } // remember minimal solution
    }
    x = xx; y = yy;
}
If no solution is found, it returns -1,-1. If you think about the equation a bit, you should realize that the minimal solution occurs when x or y is minimal (which one depends on whether a < b), so with that heuristic we can iterate only the minimal coordinate until the first solution is found. This speeds up the whole thing considerably:
void compute1(int &x, int &y, int a, int b, int c)
{
    if (a <= b) { for (x = 0, y = c; y >= 0; x++, y -= a) if (y % b == 0) { y /= b; return; } }
    else        { for (y = 0, x = c; x >= 0; y++, x -= b) if (x % a == 0) { x /= a; return; } }
    x = -1; y = -1;
}
I measured this on my setup:
x y ax+by x+y a=50 b=105 c=500000000
[ 55.910 ms] 10 4761900 500000000 4761910 naive
[ 0.000 ms] 10 4761900 500000000 4761910 opt
x y ax+by x+y a=105 b=50 c=500000000
[ 99.214 ms] 4761900 10 500000000 4761910 naive
[ 0.000 ms] 4761900 10 500000000 4761910 opt
The ~2.0x difference between the naive method's times is due to a/b ≈ 2.0 and to selecting the worse coordinate to iterate over in the second run.
Now just handle the special cases where a, b, or c is zero (to avoid division by zero)...
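For readers following along in Python, here is a minimal sketch of the same heuristic (my own port of compute1 above, assuming a, b, c > 0): iterate only the coordinate paired with the smaller coefficient until the first solution is found.

def compute1(a, b, c):
    # Iterate the coordinate with the smaller coefficient; the first
    # solution found has the minimal sum x + y.
    if a <= b:
        x, y = 0, c
        while y >= 0:
            if y % b == 0:
                return x, y // b
            x += 1
            y -= a
    else:
        x, y = c, 0
        while x >= 0:
            if x % a == 0:
                return x // a, y
            y += 1
            x -= b
    return -1, -1  # no nonnegative solution

print(compute1(50, 105, 500000000))  # (10, 4761900), matching the table above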

finding max width of blob

I'm trying to find the maximum width of a blob by counting the number of white pixels in each line of the blob. I wrote the code below, but it never stops. How can it be fixed?
For y = 0 To bmp.ScaleHeight - 1
    sum = 0
    For x = 0 To bmp.ScaleWidth - 1
        pixel1 = bmp.Point(x, y)
        If pixel1 = vbWhite Then
            sum = sum + 1
            If bmp.Point(x + 1, y) = vbBlack Then
                If sum > max Then
                    Lmax = sum
                    y1 = y
                    x2 = x
                    x1 = x2 - sum
                End If
            End If
        End If
    Next x
Next y

Finding the intersection of two lines

I have two lines:
y = -1/3x + 4
y = 3x + 85
The intersection is at [-24.3, 12.1].
I have a set of coordinates prepared:
points = [[1, 3], [4, 8], [25, 10], ... ]

#y = -1/3x + b
m_regr = -1/3
b_regr = 4
m_perp = 3 #(1 / m_regr * -1)
distances = []

points.each do |pair|
  x1 = pair.first
  y1 = pair.last
  x2 = ((b_perp - b_regr) / (m_regr - m_perp))
  y2 = ((m_regr * b_perp) / (m_perp * b_regr)) / (m_regr - m_perp)
  distance = Math.hypot((y2 - y1), (x2 - x1))
  distances << distance
end
Is there a gem or some better method for this?
NOTE: THE ABOVE METHOD DOES NOT WORK. See my answer for a solution that works.
What's wrong with using a little math?
If you have:
y = m1 x + b1
y = m2 x + b2
It's a simple system of linear equations.
If you solve them, your intersection is:
x = (b2 - b1)/(m1 - m2)
y = (m1 b2 - m2 b1)/(m1 - m2)
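As a quick illustration (my own snippet, assuming the lines are not parallel, i.e. m1 != m2), plugging the question's two lines into those formulas:

def intersection(m1, b1, m2, b2):
    # Solve m1*x + b1 = m2*x + b2 for the crossing point
    x = (b2 - b1) / (m1 - m2)
    y = (m1 * b2 - m2 * b1) / (m1 - m2)
    return x, y

print(intersection(-1.0/3, 4, 3, 85))  # (-24.3, 12.1)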
After much suffering and many different tries, I found a simple algebraic method here that not only works but is dramatically simplified.
distance = ((y - mx - b).abs / Math.sqrt(m**2 + 1))
where x and y are the coordinates for the known point.
For Future Googlers:
def solution k, l, m, n, p, q, r, s
  intrsc_x1 = m - k
  intrsc_y1 = n - l
  intrsc_x2 = r - p
  intrsc_y2 = s - q
  v1 = (-intrsc_y1 * (k - p) + intrsc_x1 * (l - q)) / (-intrsc_x2 * intrsc_y1 + intrsc_x1 * intrsc_y2)
  v2 = ( intrsc_x2 * (l - q) - intrsc_y2 * (k - p)) / (-intrsc_x2 * intrsc_y1 + intrsc_x1 * intrsc_y2)
  (v1 >= 0 && v1 <= 1 && v2 >= 0 && v2 <= 1) ? true : false
end
The simplest and cleanest way I've found on the internet.
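For reference, here is a direct Python transcription of that parametric test (my own port; I added a guard for a zero denominator, i.e. parallel or degenerate segments, which the original does not handle):

def segments_intersect(k, l, m, n, p, q, r, s):
    # Segment 1 runs (k, l) -> (m, n); segment 2 runs (p, q) -> (r, s)
    d1x, d1y = m - k, n - l
    d2x, d2y = r - p, s - q
    denom = -d2x * d1y + d1x * d2y
    if denom == 0:
        return False  # parallel or degenerate segments
    v1 = (-d1y * (k - p) + d1x * (l - q)) / denom
    v2 = ( d2x * (l - q) - d2y * (k - p)) / denom
    return 0 <= v1 <= 1 and 0 <= v2 <= 1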

what is the algorithm for an RBF kernel matrix in Matlab?

Given a set of training data and an unlabeled data set, what is the algorithm for computing the RBF kernel matrix in Matlab?
This should be what you are looking for. It is taken from here.
% With Fast Computation of the RBF kernel matrix
% To speed up the computation, we exploit a decomposition of the Euclidean distance (norm)
%
% Inputs:
% ker: 'lin','poly','rbf','sam'
% X: data matrix with training samples in rows and features in columns
% X2: data matrix with test samples in rows and features in columns
% sigma: width of the RBF kernel
% b: bias in the linear and polinomial kernel
% d: degree in the polynomial kernel
%
% Output:
% K: kernel matrix
%
% Gustavo Camps-Valls
% 2006(c)
% Jordi (jordi#uv.es), 2007
% 2007-11: if/then -> switch, and fixed RBF kernel
function K = kernelmatrix(ker,X,X2,sigma)

switch ker
    case 'lin'
        if exist('X2','var')
            K = X' * X2;
        else
            K = X' * X;
        end

    case 'poly'
        if exist('X2','var')
            K = (X' * X2 + b).^d;
        else
            K = (X' * X + b).^d;
        end

    case 'rbf'
        n1sq = sum(X.^2,1);
        n1 = size(X,2);
        if isempty(X2)
            D = (ones(n1,1)*n1sq)' + ones(n1,1)*n1sq - 2*X'*X;
        else
            n2sq = sum(X2.^2,1);
            n2 = size(X2,2);
            D = (ones(n2,1)*n1sq)' + ones(n1,1)*n2sq - 2*X'*X2;
        end
        K = exp(-D/(2*sigma^2));

    case 'sam'
        if exist('X2','var')
            D = X'*X2;
        else
            D = X'*X;
        end
        K = exp(-acos(D).^2/(2*sigma^2));

    otherwise
        error(['Unsupported kernel ' ker])
end
To produce the Gram/kernel matrix (matrix of inner products) do:
function [ Kern ] = produce_kernel_matrix( X, t, beta )
    X = X';
    t = t';
    X_T_2 = sum(X.^2,2) + sum(t.^2,2).' - (2*X)*t.'; % ||x||^2 + ||t||^2 - 2<x,t>
    Kern = exp(-beta*X_T_2);
end
then to do the interpolation do:
function [ mdl ] = learn_RBF_linear_algebra( X_training_data, Y_training_data, mdl )
    Kern_matrix = produce_kernel_matrix(X_training_data, mdl.t, mdl.beta); % (N x K)
    C = Kern_matrix \ Y_training_data'; % (K x D) = (N x K)' x (N x D)
    mdl.c = C; % (K x D)
end
Note that beta is 1/(2*sigma^2).
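For anyone who prefers Python, here is the same vectorization trick in NumPy (my own sketch, not from the answer): expand ||x - t||^2 = ||x||^2 + ||t||^2 - 2<x,t> so the whole pairwise squared-distance matrix falls out of a single matrix product.

import numpy as np

def rbf_kernel(X, T, sigma):
    # X: (n, d) samples in rows; T: (m, d) centres; returns (n, m) kernel
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(T**2, axis=1)[None, :]
                - 2.0 * X @ T.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))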

Python performance: iteration and operations on nested lists

Problem: Hey folks, I'm looking for some advice on Python performance. Some background on my problem:
Given:
A (x,y) mesh of nodes each with a value (0...255) starting at 0
A list of N input coordinates each at a specified location within the range (0...x, 0...y)
A value Z that defines the "neighborhood" in count of nodes
Increment the value of the node at the input coordinate and the node's neighbors. Neighbors beyond the mesh edge are ignored. (No wrapping)
BASE CASE: A mesh of size 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes.
Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly around the values in the base case, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time.
Current results: Between my start and the comments below, we've got several implementations.
Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1:
f1: 2.819s
f2: 1.567s
f3: 1.593s
f: 1.579s
f3b: 1.526s
f4: 0.978s
f1 is the initial naive implementation: three nested for loops.
f2 replaces the inner for loop with a list comprehension.
f3 is based on Andrei's suggestion in the comments and replaces the outer for with map()
f is Chris's suggestion in the answers below
f3b is kriss's take on f3
f4 is Alex's contribution.
Code is included below for your perusal.
Question: How can I further reduce the processing time? I'd prefer sub-1.0s for the test parameters.
Please, keep the recommendations to native Python. I know I can move to a third-party package such as numpy, but I'm trying to avoid any third party packages. Also, I've generated random input coordinates, and simplified the definition of the node value updates to keep our discussion simple. The specifics have to change slightly and are outside the scope of my question.
thanks much!
f1 is the initial naive implementation: three nested for loops.
def f1(x,y,n,z):
    rows = [[0]*x for i in xrange(y)]
    for i in range(n):
        inputX, inputY = (int(x*random.random()), int(y*random.random()))
        topleft = (inputX - z, inputY - z)
        for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
            for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)):
                if rows[i][j] <= 255: rows[i][j] += 1
f2 replaces the inner for loop with a list comprehension.
def f2(x,y,n,z):
    rows = [[0]*x for i in xrange(y)]
    for i in range(n):
        inputX, inputY = (int(x*random.random()), int(y*random.random()))
        topleft = (inputX - z, inputY - z)
        for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
            l = max(0, topleft[1])
            r = min(topleft[1]+(z*2), y)
            rows[i][l:r] = [j+(j<255) for j in rows[i][l:r]]
UPDATE: f3 is based on Andrei's suggestion in the comments and replaces the outer for with map(). My first hack at this required several out-of-local-scope lookups, something Guido specifically recommends against: local variable lookups are much faster than global or built-in variable lookups. I hardcoded all but the reference to the main data structure itself to minimize that overhead.
rows = [[0]*x for i in xrange(y)]

def f3(x,y,n,z):
    inputs = [(int(x*random.random()), int(y*random.random())) for i in range(n)]
    rows = map(g, inputs)

def g(input):
    inputX, inputY = input
    topleft = (inputX - 75, inputY - 75)
    for i in xrange(max(0, topleft[0]), min(topleft[0]+(75*2), 1024)):
        l = max(0, topleft[1])
        r = min(topleft[1]+(75*2), 1024)
        rows[i][l:r] = [j+(j<255) for j in rows[i][l:r]]
UPDATE3: ChristopheD also pointed out a couple of improvements.
def f(x,y,n,z):
    rows = [[0] * y for i in xrange(x)]
    rn = random.random
    for i in xrange(n):
        topleft = (int(x*rn()) - z, int(y*rn()) - z)
        l = max(0, topleft[1])
        r = min(topleft[1]+(z*2), y)
        for u in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
            rows[u][l:r] = [j+(j<255) for j in rows[u][l:r]]
UPDATE4: kriss added a few improvements to f3, replacing min/max with the new ternary operator syntax.
def f3b(x,y,n,z):
    rn = random.random
    rows = [g1(x, y, z) for x, y in [(int(x*rn()), int(y*rn())) for i in xrange(n)]]

def g1(x, y, z):
    l = y - z if y - z > 0 else 0
    r = y + z if y + z < 1024 else 1024
    for i in xrange(x - z if x - z > 0 else 0, x + z if x + z < 1024 else 1024):
        rows[i][l:r] = [j+(j<255) for j in rows[i][l:r]]
UPDATE5: Alex weighed in with his substantive revision, adding a separate map() operation to cap the values at 255 and removing all non-local-scope lookups. The perf differences are non-trivial.
def f4(x,y,n,z):
    rows = [[0]*y for i in range(x)]
    rr = random.randrange
    inc = (1).__add__
    sat = (0xff).__and__
    for i in range(n):
        inputX, inputY = rr(x), rr(y)
        b = max(0, inputX - z)
        t = min(inputX + z, x)
        l = max(0, inputY - z)
        r = min(inputY + z, y)
        for i in range(b, t):
            rows[i][l:r] = map(inc, rows[i][l:r])
    for i in range(x):
        rows[i] = map(sat, rows[i])
Also, since we all seem to be hacking around with variations, here's my test harness to compare speeds: (improved by ChristopheD)
import timeit

def timing(f,x,y,z,n):
    fn = "%s(%d,%d,%d,%d)" % (f.__name__, x, y, z, n)
    ctx = "from __main__ import %s" % f.__name__
    results = timeit.Timer(fn, ctx).timeit(10)
    return "%4.4s: %.3f" % (f.__name__, results / 10.0)

if __name__ == "__main__":
    print timing(f, 1024, 1024, 400, 75)
    # add more here.
On my (slow-ish;-) first-day Macbook Air, 1.6GHz Core 2 Duo, system Python 2.5 on MacOSX 10.5, after saving your code in op.py I see the following timings:
$ python -mtimeit -s'import op' 'op.f1()'
10 loops, best of 3: 5.58 sec per loop
$ python -mtimeit -s'import op' 'op.f2()'
10 loops, best of 3: 3.15 sec per loop
So, my machine is slower than yours by a factor of a bit more than 1.9.
The fastest code I have for this task is:
def f3(x=x,y=y,n=n,z=z):
    rows = [[0]*y for i in range(x)]
    rr = random.randrange
    inc = (1).__add__
    sat = (0xff).__and__
    for i in range(n):
        inputX, inputY = rr(x), rr(y)
        b = max(0, inputX - z)
        t = min(inputX + z, x)
        l = max(0, inputY - z)
        r = min(inputY + z, y)
        for i in range(b, t):
            rows[i][l:r] = map(inc, rows[i][l:r])
    for i in range(x):
        rows[i] = map(sat, rows[i])
which times as:
$ python -mtimeit -s'import op' 'op.f3()'
10 loops, best of 3: 3 sec per loop
so, a very modest speedup, projecting to more than 1.5 seconds on your machine - well above the 1.0 you're aiming for:-(.
With a simple C-coded extension, exte.c...:
#include "Python.h"
static PyObject*
dopoint(PyObject* self, PyObject* args)
{
int x, y, z, px, py;
int b, t, l, r;
int i, j;
PyObject* rows;
if(!PyArg_ParseTuple(args, "iiiiiO",
&x, &y, &z, &px, &py, &rows
))
return 0;
b = px - z;
if (b < 0) b = 0;
t = px + z;
if (t > x) t = x;
l = py - z;
if (l < 0) l = 0;
r = py + z;
if (r > y) r = y;
for(i = b; i < t; ++i) {
PyObject* row = PyList_GetItem(rows, i);
for(j = l; j < r; ++j) {
PyObject* pyitem = PyList_GetItem(row, j);
long item = PyInt_AsLong(pyitem);
if (item < 255) {
PyObject* newitem = PyInt_FromLong(item + 1);
PyList_SetItem(row, j, newitem);
}
}
}
Py_RETURN_NONE;
}
static PyMethodDef exteMethods[] = {
{"dopoint", dopoint, METH_VARARGS, "process a point"},
{0}
};
void
initexte()
{
Py_InitModule("exte", exteMethods);
}
(note: I haven't checked it carefully -- I think it doesn't leak memory due to the correct interplay of reference stealing and borrowing, but it should be code inspected very carefully before being put in production;-), we could do
import exte

def f4(x=x,y=y,n=n,z=z):
    rows = [[0]*y for i in range(x)]
    rr = random.randrange
    for i in range(n):
        inputX, inputY = rr(x), rr(y)
        exte.dopoint(x, y, z, inputX, inputY, rows)
and the timing
$ python -mtimeit -s'import op' 'op.f4()'
10 loops, best of 3: 345 msec per loop
shows an acceleration of 8-9 times, which should put you in the ballpark you desire. I've seen a comment saying you don't want any third-party extension, but, well, this tiny extension you could make entirely your own;-). ((Not sure what licensing conditions apply to code on Stack Overflow, but I'll be glad to re-release this under the Apache 2 license or the like, if you need that;-)).
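For completeness, a minimal build recipe for the extension (my own sketch; the Python 2-era distutils route, matching the PyInt_* API used in exte.c):

# setup.py
from distutils.core import setup, Extension

setup(name="exte",
      ext_modules=[Extension("exte", ["exte.c"])])

# then build in place with:  python setup.py build_ext --inplace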
1. A (smaller) speedup could definitely be the initialization of your rows...
Replace
rows = []
for i in range(x):
    rows.append([0 for i in xrange(y)])
with
rows = [[0] * y for i in xrange(x)]
2. You can also avoid some lookups by moving random.random out of the loops (saves a little).
3. EDIT: after corrections -- you could arrive at something like this:
def f(x,y,n,z):
    rows = [[0] * y for i in xrange(x)]
    rn = random.random
    for i in xrange(n):
        topleft = (int(x*rn()) - z, int(y*rn()) - z)
        l = max(0, topleft[1])
        r = min(topleft[1]+(z*2), y)
        for u in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
            rows[u][l:r] = [j+(j<255) for j in rows[u][l:r]]
EDIT: some new timings with timeit (10 runs) -- seems this provides only minor speedups:
import timeit
print timeit.Timer("f1(1024,1024,400,75)", "from __main__ import f1").timeit(10)
print timeit.Timer("f2(1024,1024,400,75)", "from __main__ import f2").timeit(10)
print timeit.Timer("f(1024,1024,400,75)", "from __main__ import f").timeit(10)
f1 21.1669280529
f2 12.9376120567
f 11.1249599457
In your f3 rewrite, g can be simplified. (This can also be applied to f4.)
You have the following code inside a for loop.
l = max(0, topleft[1])
r = min(topleft[1]+(75*2), 1024)
However, it appears that those values never change inside the for loop. So calculate them once, outside the loop instead.
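In other words, a sketch of the suggested change applied to the hard-coded g above:

def g(input):
    inputX, inputY = input
    topleft = (inputX - 75, inputY - 75)
    l = max(0, topleft[1])              # hoisted out of the loop
    r = min(topleft[1] + (75*2), 1024)  # hoisted out of the loop
    for i in xrange(max(0, topleft[0]), min(topleft[0] + (75*2), 1024)):
        rows[i][l:r] = [j + (j < 255) for j in rows[i][l:r]]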
Based on your f3 version I played with the code. As l and r are constants, you can avoid computing them in the g1 loop. Using the new ternary if instead of min and max also seems to be consistently faster, and I simplified the expression with topleft. On my system the code below appears to be about 20% faster.
def f3b(x,y,n,z):
    rows = [g1(x, y, z) for x, y in [(int(x*random.random()), int(y*random.random())) for i in range(n)]]

def g1(x, y, z):
    l = y - z if y - z > 0 else 0
    r = y + z if y + z < 1024 else 1024
    for i in xrange(x - z if x - z > 0 else 0, x + z if x + z < 1024 else 1024):
        rows[i][l:r] = [j+(j<255) for j in rows[i][l:r]]
You can create your own Python module in C, and control the performance as you want:
http://docs.python.org/extending/
