Say my images are simple shapes: sets of lines, dots, curves, and simple objects.
How do I calculate the distance between images, where length is important but overall scale is not, the location of a line/curve is important, angles are important, etc.?
For example, in the attached image:
My comparison object is the cube on the top left; the scores are fictitious, just for this example.
The distance to the cylinder is 80 (it has two matching lines, but the top geometry is different).
The bottom-left cube scores 100 since its lines match exactly, only at a different scale.
The bottom-right rectangle scores 90 since the top lines match exactly but the side lines differ in scale.
I am looking for an algorithm name or a general approach that will help me start thinking towards a solution.
Thank you for your help.
Here is something to get you started. When jumping into new problems, I don't see much value in trying a lot of complex steps just because they are available somewhere to use. So my focus is on using relatively simple things that will fail in more varied situations, but hopefully you will see their value and get some sense of the problem.
The approach is fully based on corner detection; two typical methods for it are the Harris detector and the one by Shi and Tomasi described in the paper "Good Features to Track", 1994. I will use the second one, simply because there is a ready implementation in OpenCV, newer Matlab, and likely many other places. The implementations in these packages also allow for easy adjustment of parameters such as corner quality and the minimum distance between corners.
So, supposing you can detect all corner points correctly, how do you measure how close one shape is to another based on these points? The images have arbitrary size, so my idea is to normalize the point coordinates to the range [0, 1]. This solves the scaling issue, which is desired according to the original description. Now we have to compare point sets in the range [0, 1], and here we go for the simplest thing: for a point p in shape a, take its closest point in shape b to be the one with the minimum absolute difference from p. Summing these values over all points gives a score between the shapes: the lower the score, the more similar the shapes (according to this approach).
Here are some shapes I drew:
Here are the detected corners:
As you can clearly see in this last set of images, the method will easily confuse a rectangle/square with a cylinder. To handle that, you will need to combine the approach with other descriptors. A simple one to start with is the ratio between the shape's area and its bounding-box area (which gives 1 for a rectangle and less for a cylinder).
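If it helps, here is a minimal sketch of that extent descriptor (the helper name is mine; it assumes OpenCV 4's two-value findContours signature and a binary image with the shape in white):

import cv2 as cv

def extent(binary_img):
    # Ratio of the largest shape's area to its bounding-box area.
    contours, _ = cv.findContours(binary_img, cv.RETR_EXTERNAL,
                                  cv.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv.contourArea)     # keep the largest contour
    x, y, w, h = cv.boundingRect(c)
    return cv.contourArea(c) / float(w * h)   # ~1 for a rectangle, lower for a cylinder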
With the method described above, here are the measurements between the first and second shapes, first and third shapes, ..., respectively: 0.02358485, 0.41350339, 0.30128458, 0.4980852, 0.18031262. The second cube is a resized version of the first one, and as you can see, they are very similar by this metric. The last shape is a resized version of the first cube but without keeping the aspect ratio, and the metric gives a much higher difference.
If you want to play with the code that performs this, here it is (in Python, depends on OpenCV, numpy):
import sys

import cv2 as cv
import numpy

# Load every image given on the command line, in color and grayscale.
inp = []
for fname in sys.argv[1:]:
    img_color = cv.imread(fname)
    img = cv.cvtColor(img_color, cv.COLOR_BGR2GRAY)  # imread returns BGR
    inp.append((img_color, img))

ptsets = []
# Corner detection parameters.
params = (
    200,   # max number of corners
    0.01,  # minimum quality level of corners
    10,    # minimum distance between corners
)
# Params for visual circle markers.
circle_radii = 3
circle_color = (255, 0, 0)
for i, (img_color, img) in enumerate(inp):
    height, width = img.shape
    cornerMap = cv.goodFeaturesToTrack(img, *params)
    corner = numpy.array([c[0] for c in cornerMap])
    for c in corner:
        cv.circle(img_color, tuple(int(x) for x in c),
                  circle_radii, circle_color, -1)
    # Just to visually check for correct corners.
    cv.imwrite('temp_%d.png' % i, img_color)
    # Convert corner coordinates to [0, 1].
    cornerUnity = (corner - corner.min()) / (corner.max() - corner.min())
    # You might want to use other descriptors here. XXX
    ptsets.append(cornerUnity)

def compare_ptsets(p):
    # Score every point set against the first one: for each point in the
    # base set, add the minimum absolute difference to the other set.
    res = numpy.zeros(len(p))
    base = p[0]
    for i in range(1, len(p)):
        res[i] = sum(numpy.abs(p[i] - value).min() for value in base)
    return res

res = compare_ptsets(ptsets)
print(res)
The process to follow depends on the depth of features you are going to consider and the accuracy required.
If you want something more accurate, search for technical papers like this, which can give you a concrete and well-proven approach or algorithm.
EDIT:
The idea from the Waltz algorithm (a constraint-propagation method from AI) can be tweaked. This is just my thought: interpret the original image and generate some constraints out of it. For each candidate, count the number of constraints it satisfies. The one that satisfies the most constraints will be the most similar to the original image.
Try calculating the mass center for each figure, treating each point of the figure as a particle with mass 1.
Then calculate the distance between figures as sqrt((x1-x2)^2 + (y1-y2)^2), where (xi, yi) is the mass-center coordinate of figure i.
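A minimal sketch of this idea in Python/numpy (the threshold of 128 and the dark-shapes-on-white assumption are mine):

import numpy as np

def mass_center(img):
    # Treat every foreground pixel (here: darker than 128) as a unit mass.
    ys, xs = np.nonzero(img < 128)
    return xs.mean(), ys.mean()

def center_distance(img_a, img_b):
    (x1, y1), (x2, y2) = mass_center(img_a), mass_center(img_b)
    return np.hypot(x1 - x2, y1 - y2)   # sqrt((x1-x2)^2 + (y1-y2)^2)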
Suppose I have an image and I want to find a subarray with shape 3x3 that contains the maximum sum compared to other subarrays.
How do I do that in python efficiently (run as fast as possible)? If you can provide a sample code that would be great.
My specific problem:
I want to extract the location of the center of the blob in this heatmap
I don't want to just get the maximum point because that would cause the coordinate to not be very precise. The true center of the blob could actually be between 2 pixels. Thus, it's better to do weighted average between many points to obtain subpixel precision. For example, if there are 2 points (x1,y1) and (x2,y2) with values 200 and 100. Then the average coordinate will be x=(200*x1+100*x2)/300 y=(200*y1+100*y2)/300
One of my solution is to do a convolution operation. But I think it's not efficient enough because it requires multiplication to the kernel (which contains only ones). I'm looking for a fast implementation so I cannot do looping myself because I'm not sure if it will be fast.
I want to apply this algorithm to 50 images every few milliseconds. (Images come in as a batch.) Concretely, think of these images as outputs of a machine learning model that produces heatmaps. In order to obtain the coordinates from these heatmaps, I need to do some kind of weighted average between the coordinates with high intensity. My idea is to do a weighted average over a 3x3 area of the image. I am also open to other approaches that can be faster or more elegant.
Looking for the "subarray of shape 3x3 with the maximum sum" is the same as looking for the maximum of an image after it has been filtered with an un-normalized 3x3 box filter. So it boils down to finding efficiently the maximum of an image, which you assume is a (perhaps "noisy") discrete sample of an underlying continuous and smooth signal - hence your desire to find a sub-pixel location.
You really need to split the problem in 2 parts:
Find the pixel location m=(xm, ym) of the maximum value of the image. This requires no more than a visit of every pixel in the image, and one comparison per pixel, so it's O(N) and hence optimal as long as you are operating at the native image resolution. In OpenCv it is done using
the minMaxLoc function.
Apply whatever model of the image you are using to find its (subpixel-interpolated) maximum in a neighborhood of m.
To clarify point (2): you write
I don't want to just get the maximum point because that would cause the coordinate to not be very precise. The true center of the blob could actually be between 2 pixels
While intuitively plausible, this assertion needs to be made more precise in order to be computable. That is, you need to express mathematically what assumptions you make about the image, that bring you to search for a "true" maximum between pixel-sampled location.
A simple example of such an assumption is quadratic smoothness. In this scenario you assume that, in a small (say, 3x3 or 5x5) neighborhood of the "true" maximum location, the image signal z is well approximated by a quadratic:
z = A00 dx^2 + A01 dx dy + A11 dy^2 + A02 dx + A12 dy + A22
where:
dx = x - xm; dy = y - ym
This assumption makes sense if the underlying signal is expected to be at least 3rd order continuous and differentiable, because of the Taylor series theorem. Geometrically, it means that you assume (hope?) that the signal looks like a quadric (a paraboloid, or an ellipsoid) near its maximum.
You can then evaluate the above equation for each of the pixels in a neighborhood of m, replacing the actual image values for z, and thus obtain a linear system in the unknown Aij, with as many equations as there are neighbor pixels (so even a 3x3 neighborhood will yield an over-constrained system). Solving the system in the least-squares sense gives you the "optimal" coefficients Aij. The theoretical maximum as predicted by this model is where the first partial derivatives vanish:
del z / del dx = 2 A00 dx + A01 dy + A02 = 0
del z / del dy = A01 dx + 2 A11 dy + A12 = 0
This is a linear system in the two unknown (dx, dy), and solving it yields the estimated location of the maximum and, through the above equation for z, the predicted image value at the maximum.
In terms of computational cost, all such model estimations are extremely fast, compared with traversing an image of even moderate size.
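For concreteness, here is a sketch of both steps under the quadratic model above (all names are mine; it assumes a single-channel float image whose maximum is not on the border):

import numpy as np
import cv2

def subpixel_max(img, k=1):
    # Step 1: integer-pixel location of the maximum, O(N).
    _, _, _, (xm, ym) = cv2.minMaxLoc(img)
    # Step 2: least-squares fit of
    #   z = A00 dx^2 + A01 dx dy + A11 dy^2 + A02 dx + A12 dy + A22
    # over a (2k+1)x(2k+1) neighborhood of (xm, ym).
    offs = [(dx, dy) for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
    dx = np.array([a for a, b in offs], float)
    dy = np.array([b for a, b in offs], float)
    z = np.array([img[ym + b, xm + a] for a, b in offs], float)
    M = np.column_stack([dx*dx, dx*dy, dy*dy, dx, dy, np.ones_like(dx)])
    A00, A01, A11, A02, A12, A22 = np.linalg.lstsq(M, z, rcond=None)[0]
    # Vanishing gradient: 2*A00 dx + A01 dy + A02 = 0, A01 dx + 2*A11 dy + A12 = 0.
    sol = np.linalg.solve([[2*A00, A01], [A01, 2*A11]], [-A02, -A12])
    return xm + sol[0], ym + sol[1]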
I am sorry, I did not exactly understand the meaning of your last paragraph, so I just stopped at the point where I get all the coordinates holding the maximum value. I used cv2.filter2D for the convolution on a thresholded image, and then np.amax and np.where to find the coordinates holding the maximum value.
import cv2
import numpy as np
from timeit import default_timer as timer

img = cv2.imread('blob.png', 0)
start = timer()
# Binarize: pixels brighter than 240 become 1, the rest 0.
_, thresh = cv2.threshold(img, 240, 1, cv2.THRESH_BINARY)
# Un-normalized 3x3 box filter: each output pixel holds the sum of its 3x3 neighborhood.
mask = np.ones((3, 3), np.uint8)
res = cv2.filter2D(thresh, -1, mask)
# Coordinates of all positions that achieve the maximum sum.
result = np.where(res == np.amax(res))
end = timer()
print(end - start)
I don't know whether it is as efficient as you want, but the output was 0.0013461999999435648 s.
P.S. The image you have provided had a white border which I had to crop out for this method.
One way is to sub-sample the image to find the neighborhood of the desired point. You can do this with a loop that visits not every pixel but, e.g., every 5th pixel (row = row + 5 and col = col + 5 in the loop). After finding the approximate location, take a specific neighborhood around that location and loop over all the pixels of that crop to find the exact location.
Based on my knowledge of image processing, to get a reliable result that works for any one blob, follow these steps:
Make the image greyscale if it isn’t already (pixel values 0-255)
Normalise the image so that pixel intensities cover the full range of 0-255
Convert image to binary (a pixel is either 0 or 1) - this can be achieved by thresholding, such as applying the rule that any pixel less than or equal to 127 in intensity is given an intensity of 0 and anything else is given an intensity of 1
Find the weighted average of all the pixels that hold the value of “1”
or
Apply an erosion to the image until you are left with either 2 pixels or 1 pixel.
Case 1
If you have two pixels then you need to find the u and v co-ordinates of both pixels. The centre of the blob will be the halfway point between them.
Case 2
If you have one pixel left then that pixel’s co-ordinates is the centre point.
—————
You mentioned about achieving this quickly in Python:
Python by design is an interpreted language, so it executes line by line, making it less suitable for highly iterative tasks like image processing. However, you can make use of libraries like OpenCV (https://docs.opencv.org/2.4/index.html), which is written in C++, to mitigate this, apart from making the task at hand a lot easier for you.
OpenCV also provides solutions for all the steps I listed above, so you should be able to achieve a reliable solution fairly quickly, though I can't say for sure whether it will hit your target of 50 images every few milliseconds. Another factor to take into account is the size of the images you are processing, which directly increases the processing load.
UPDATE
I just found a good article that practically echoes my step-process:
https://www.learnopencv.com/find-center-of-blob-centroid-using-opencv-cpp-python/
More importantly it also denotes the formula for finding the centroid mathematically as:
c_x = (1/n) * Σ_{i=1..n} x_i   (and likewise for c_y)
but this is better written in the article than I can do so here.
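For reference, the core of the moment-based centroid from that article looks roughly like this (the file name and threshold value are assumptions):

import cv2

img = cv2.imread('blob.png', 0)                        # grayscale
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
M = cv2.moments(thresh)                                # image moments of the blob
cx = M['m10'] / M['m00']                               # centroid x = sum(x*I) / sum(I)
cy = M['m01'] / M['m00']                               # centroid y = sum(y*I) / sum(I)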
I am using Matlab's built-in function procrustes to find the rotation, translation, and scale between two images. But I am just using the coordinates of the brightest points in the image and rotating these coordinates about the center of the image. Procrustes compares two matrices and gives you the rotation, translation, and scale. However, procrustes only works correctly if the rows of the matrices are in the same order for comparison.
I am given an image and a separate comparison coordinate matrix. The end goal is to find how much the image has been rotated, translated, and scaled compared to the coordinate matrix. I can just use procrustes for this, but I need to correctly order the coordinates found in the image to match the order of the comparison coordinate matrix. My thought was to compare the distances between every possible combination of points in the coordinate matrix against the coordinates I find in the picture. I just do not know how to write this code, because with n coordinates there are n! possible orderings.
Just searching for the shortest distance is not so hard.
A = rand(1E4,2);
B = rand(1E4,2);
tic
idx = nan(1,1E4);
for ct = 1:size(A,1)
    d = sum((A(ct,:)-B).^2,2);      % squared distance from A(ct,:) to every point in B
    idx(ct) = find(d==min(d),1);    % index of the nearest point (first one on ties)
end
toc
plot(A(1:10,1),A(1:10,2),'.r',B(idx(1:10),1),B(idx(1:10),2),'.b')
takes half a second on my PC.
The problems can start when two points in set A are matched to the same location in set B.
length(unique(idx))==length(idx)
This can be solved in several ways. The best (imho) is to determine a probability that a point in B matches a point in A based on the distance (usually something that decreases exponentially), and solve for the most probable overall assignment.
A simpler (but more error-prone) method is to remove each matched point from set B.
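One concrete way to enforce one-to-one matches (a stand-in for the probabilistic idea above, sketched in Python rather than Matlab) is to treat it as a linear assignment problem and minimize the total distance; scipy's Hungarian-algorithm solver does this directly:

import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

A = np.random.rand(100, 2)
B = np.random.rand(100, 2)
cost = cdist(A, B)                       # pairwise Euclidean distances
row, col = linear_sum_assignment(cost)   # one-to-one matching, minimal total cost
# A[row[i]] is matched to B[col[i]]; no point in B is used twice.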
I am writing a function to draw an approximate circle on a square array (in Matlab, but the problem is mainly algorithmic).
The goal is to produce a mask for integrating the light that falls on a portion of a CCD sensor from a diffraction-limited point source (whose diameter corresponds to a few pixels on the CCD array). In summary, the CCD sensor sees a pattern with rotational symmetry, which of course need not be centered on one particular pixel of the CCD (see the example image below).
Here is the algorithm that I currently use to produce my discretized circular mask, and which works partially (Matlab/Octave code):
xt = linspace(-xmax, xmax, npixels_cam); % in physical coordinates (meters)
[X Y] = meshgrid(xt-center(1), xt-center(2)); % shifted coordinate matrices
[Theta R] = cart2pol(X,Y);
R = R'; % cart2pol uses a different convention for lines/columns
mask = (R<=radius);
As you can see, my algorithm selects (sets to 1) all the pixels whose physical distance (in meters) is smaller than or equal to a radius, which doesn't need to be an integer.
I feel like my algorithm may not be the best solution to this problem. In particular, I would like it to include the pixel in which the center is present, even when the radius is very small.
Any ideas ?
(See http://i.stack.imgur.com/3mZ5X.png for an example image of a diffraction-limited spot on a CCD camera).
If you want to select pixels if and only if they contain any part of the circle C:
In each pixel, place a small circle A with radius equal to half the pixel size, and another circle B around it with radius sqrt(2) times half the pixel size (the circumscribed circle of the pixel).
To test whether two circles touch, you just calculate the center-to-center distance and subtract the sum of the two radii.
If the test circle C touches A, select the pixel. If it touches B but not A, you need to test all four pixel sides for overlap, as in Circle line-segment collision detection algorithm?
A brute force approximate method is to make a much finer grid within each pixel and test each center point in that grid.
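My reading of that test as a Python sketch (the function and return values are mine); only the 'maybe' pixels need the exact edge tests linked above:

import math

def pixel_vs_circle(px, py, cx, cy, R, pixel=1.0):
    # Conservative test: does circle C (center (cx, cy), radius R)
    # touch the pixel of side `pixel` centered at (px, py)?
    r_a = pixel / 2.0                    # inscribed circle A of the pixel
    r_b = math.sqrt(2.0) * r_a           # circumscribed circle B of the pixel
    d = math.hypot(px - cx, py - cy)     # center-to-center distance
    if d <= R + r_a:
        return 'yes'                     # C touches A, which lies inside the pixel
    if d > R + r_b:
        return 'no'                      # C misses even B, which contains the pixel
    return 'maybe'                       # C touches only B: test the four pixel edges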
This is a well-studied problem. Several levels of optimization are possible:
You can brute-force check if every pixel is inside the circle. (r^2 >= (x-x0)^2 + (y-y0)^2)
You can brute-force check if every pixel in a square bounding the circle is inside the circle. (r^2 >= (x-x0)^2 + (y-y0)^2 where |x-x0| < r and |y-y0| < r)
You can go line by line (where |y-y0| < r), calculate the starting and ending x, and fill all the pixels in between. (Although square roots aren't cheap.)
There are infinitely many more sophisticated algorithms possible. Here's a common one: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm (filling in the circle is left as an exercise).
It really depends on how sophisticated you want to be, based on how imperative good performance is. A sketch of the line-by-line option is below.
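A minimal sketch of the line-by-line option in Python/numpy (assuming pixel centers at integer coordinates; one square root per row):

import math
import numpy as np

def fill_circle(mask, x0, y0, r):
    # Set to 1 every pixel whose center lies inside the circle,
    # filling whole rows at once.
    h, w = mask.shape
    for y in range(max(0, math.ceil(y0 - r)), min(h, math.floor(y0 + r) + 1)):
        half = math.sqrt(r*r - (y - y0)**2)   # half-width of the chord at this row
        x_start = max(0, math.ceil(x0 - half))
        x_end = min(w, math.floor(x0 + half) + 1)
        mask[y, x_start:x_end] = 1
    return mask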
I am trying to generate colors on the fly for a chart control. I want the colors to be visually distinctive. I don't just want the colors to be distinctive from the adjacent colors, but all colors generated so far.
I also don't want to require a known collection size. Some algorithms I have seen for this require the number of things to color to be known up front. I want to implement a GetNextColor() for my color generator, so I will not know at the time of choosing how many colors I will ultimately have, and picking a number up front feels wrong.
I am not just trying to graph a bunch of stuff in different colors, I am interested in this problem and want some feedback.
Here's where I'm at:
Using the HSV color space. The hue is a value in [0, 360] where 0 and 360 are the same (reddish).
Hue starts at 0; each step I add 27 (so that when it cycles around it doesn't land on the same color it started on) and take the result MOD 360.
For S and V (both between 0 and 1) I start out at a low number like 0.25 and run through about 20 hues.
Then I take a high number like 0.85 and run through 20 hues again.
Then I start bisecting to get the most distant values that haven't been used yet.
This isn't a very effective method. It works OK, but it could be much more scientific. It started out with a lot of thought and then morphed into this mess.
Any ideas on how to do this elegantly?
(It shouldn't matter, but I am using C# and I will post code when I get back to my computer I have all this stuff on.)
I believe that your question should be split into two questions:
How to map colors into an n-dimensional Cartesian space, and define a Euclidean distance function between colors, such that the distance reflects the difference perceived by a human observer.
Given an n-dimensional cuboid, generate a sequence of points such that the minimal Euclidean distance between any two points generated so far is maximized.
And now the answers:
Color difference is calculated using the CIEDE2000 Color-Difference Formula. The CIEDE2000 formula is based on the LCH color space (Luminosity, Chroma, and Hue), which is represented as a cylinder (see image here).
However, the difference formula is highly nonlinear. Therefore it would be impossible to map the colors into a square grid such that Euclidean distance would give the CIEDE2000 color-difference.
Settling on a less accurate model, we can use the CIE76 Color-Difference formula, which is based on the L*a*b* color space; on this space we can use Euclidean distance directly to measure difference. There are no simple formulas for conversion between RGB or CMYK values and L*a*b*, because the RGB and CMYK color models are device dependent. The RGB or CMYK values first need to be transformed to a specific absolute color space, such as sRGB or Adobe RGB. This adjustment is device dependent, but the resulting data from the transform is device independent, allowing the data to be transformed into the CIE 1931 color space and then into L*a*b*. This article explains the procedure and the formulas.
For the L*a*b* color space and the CIE76 Color-Difference formula - we'll need to solve the problem for a 3D cube.
I believe that your best strategy would be to divide the cube into 8 sub-cubes, which generates the 27 points of a 3x3x3 grid. Use these points. Now divide each of the 8 sub-cubes into 8 again; discarding the points shared with the coarser grid, this second step adds 5^3 - 3^3 = 98 new points, and step n adds (2^n + 1)^3 - (2^(n-1) + 1)^3 new points in general.
The point set in each step should be sorted such that the minimal distance between two consecutive points is maximized. I don't know how to do that - I've just posted a question about it.
Edit:
I've crossposted on https://cstheory.stackexchange.com/ and got a good answer. See https://cstheory.stackexchange.com/questions/8609/sorting-points-such-that-the-minimal-euclidean-distance-between-consecutive-poin.
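A sketch of the subdivision itself in Python (the function name and step numbering are mine; mapping these unit-cube points into the L*a*b* gamut is left out):

import itertools

def grid_points(step):
    # New points added at refinement step `step` of the unit cube:
    # step 1 gives all 27 points of the 3x3x3 grid; step n >= 2 gives the
    # (2**n + 1)**3 - (2**(n-1) + 1)**3 points not on the coarser grid.
    n = 2 ** step
    pts = []
    for ijk in itertools.product(range(n + 1), repeat=3):
        if step == 1 or any(v % 2 for v in ijk):   # skip coarser-grid points
            pts.append(tuple(v / n for v in ijk))
    return pts

# len(grid_points(1)) == 27, len(grid_points(2)) == 98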
If you map the whole color space linearly onto an interval, then you can pick each next color by bisection (using powers of 2): your first choice is the center, your second choice is halfway between the start and the center, your third halfway between the center and the end, and so on.
Some JavaScript to illustrate.
// initialize start and end of our linear transform
var START = 0;
var END = 100;

// next function
var _level = 1;
var _index = 1;
function next() {
    var pow2 = 2 << (_level - 1);
    var result = (END - START) / pow2;
    result = result * _index;
    _index = (_index + 2) % pow2;
    if (_index == 1) {
        _level++;
    }
    return result;
}

// testing
for (var i = 0; i < 32; i++)
    console.log(next());
Given n circles with radii r1 ... rn, position them in such a way that no circles are overlapping and the bounding circle is of "small" radius.
The program takes a list [r1, r2, ... rn] as input and outputs the centers of the circles.
I ask for "small" because the "minimum" radius turns this into a much more difficult problem (the minimum version has already been proved NP-hard/complete - see the footnote near the end of the question). We don't need the minimum. If the shape made by the circles looks fairly circular, that is good enough.
You can assume that Rmax/Rmin < 20 if it helps.
A low priority concern - the program should be able to handle 2000+ circles. As a start, even 100-200 circles should be fine.
You might have guessed that the circles need not be packed together tightly or even touching each other.
The aim is to come up with a visually pleasing arrangement of the given circles which can fit inside a larger circle and not leave too much empty space. (like the circles in a color blindness test picture).
You can use the Python code below as a starting point (you would need numpy and matplotlib for this code - "sudo apt-get install numpy matplotlib" on linux)...
import pylab
from matplotlib.patches import Circle
from random import gauss, randint
from colorsys import hsv_to_rgb

def plotCircles(circles):
    # input is a list of circles
    # each circle is a tuple of the form (x, y, r)
    ax = pylab.figure()
    bx = pylab.gca()
    rs = [x[2] for x in circles]
    maxr = max(rs)
    minr = min(rs)
    # color each circle by its radius
    hue = lambda inc: pow(float(inc - minr)/(1.02*(maxr - minr)), 3)
    for circle in circles:
        circ = Circle((circle[0], circle[1]), circle[2])
        color = hsv_to_rgb(hue(circle[2]), 1, 1)
        circ.set_color(color)
        circ.set_edgecolor(color)
        bx.add_patch(circ)
    pylab.axis('scaled')
    pylab.show()

def positionCircles(rn):
    # You need to rewrite this function.
    # As of now, this is a dummy function
    # which positions the circles randomly.
    maxr = int(max(rn)/2)
    numc = len(rn)
    scale = int(pow(numc, 0.5))
    maxr = scale*maxr
    circles = [(randint(-maxr, maxr), randint(-maxr, maxr), r)
               for r in rn]
    return circles

if __name__ == '__main__':
    minrad, maxrad = (3, 5)
    numCircles = 400
    rn = [((maxrad-minrad)*gauss(0,1) + minrad) for x in range(numCircles)]
    circles = positionCircles(rn)
    plotCircles(circles)
Added info: the circle packing algorithm commonly referred to in Google search results is not applicable to this problem.
The problem statement of that other "circle packing algorithm" is this: given a complex K (graphs in this context are called simplicial complexes, or complexes for short) and appropriate boundary conditions, compute the radii of the corresponding circle packing for K.
It basically starts from a graph stating which circles touch each other (the vertices of the graph denote circles, and the edges denote tangency relations between circles). One has to find the circle radii and positions that satisfy the touching relationships given by the graph.
The other problem does have an interesting observation (independent of this problem) :
Circle Packing Theorem - Every circle packing has a corresponding planar graph (this is the easy/obvious part), and every planar graph has a corresponding circle packing (the not so obvious part). The graphs and packings are duals of each other and are unique.
We do not have a planar graph or tangential relationship to start from in our problem.
This paper - Robert J. Fowler, Mike Paterson, Steven L. Tanimoto: Optimal Packing and Covering in the Plane are NP-Complete - proves that the minimum version of this problem is NP-complete. However, the paper is not available online (at least not easily).
Not a solution, just a brainstorming idea: IIRC one common way to get approximate solutions to the TSP is to start with a random configuration, and then applying local operations (e.g. "swapping" two edges in the path) to try and get shorter and shorter paths. (Wikipedia link)
I think something similar would be possible here:
Start with random center positions
"Optimize" these positions, so there are no overlapping circles and so the circles are as close as possible, by increasing the distance between overlapping circles and decreasing the distance between other circles, until they're tightly packed. This could be done by some kind of energy minimization, or there might be a more efficient greedy solution.
Apply an iterative improvement operator to the center positons
Goto 2, break after a maximum number of iterations or if the last iteration didn't find any improvement
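As a toy illustration of step 2 (my own naive O(n^2)-per-iteration relaxation, not a tuned implementation):

import numpy as np

def relax(centers, radii, pull=0.02, iters=5000):
    # centers: (n, 2) array, radii: (n,) array; returns adjusted centers.
    c = np.asarray(centers, float).copy()
    r = np.asarray(radii, float)
    for _ in range(iters):
        moved = False
        for i in range(len(r)):
            for j in range(i + 1, len(r)):
                d = c[j] - c[i]
                dist = np.hypot(d[0], d[1]) + 1e-12
                overlap = r[i] + r[j] - dist
                if overlap > 0:                  # push the pair apart, half each
                    step = d / dist * (overlap / 2 + 1e-9)
                    c[i] -= step
                    c[j] += step
                    moved = True
        if not moved:
            break                                # tightly packed, no overlaps left
        c *= 1.0 - pull                          # gentle attraction toward the origin
    return c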
The interesting question is: what kind of "iterative improvement operator" could you use in step 3? We can assume that the positions at that stage are locally optimal, but they might be improved by rearranging a large fraction of the circles. My suggestion would be to arbitrarily choose a line through the circles. Then take all the circles "left" of the line and mirror them at some axis perpendicular to that line:
You would probably try multiple lines and pick the one that leads to the most compact solution.
The idea is, if some of the circles are already at or close to their optimal configuration, chances are good this operation won't disturb them.
Other possible operations I could think of:
Take one of the circles with the highest distance from the center (one touching the boundary circle), and randomly move it somewhere else:
Choose a set of circles that are close to each other (e.g. circles whose centers lie in a randomly chosen circle) and rotate them by a random angle.
Another option (although a bit more complex) would be to measure the area between the circles, when they're tightly packed:
Then you could pick one of the circles adjacent to the largest between-circle-area (the red area, in the image) and swap it with another circle, or move it somewhere to the boundary.
(Response to comment:) Note that each of these "improvements" is almost guaranteed to create overlaps and/or unnecessary space between circles. But in the next iteration, step 2 will move the circles so they are tightly packed and non-overlapping again. This way, I can have one step for local optimization (without caring about the global one), and one for global optimization (which might create locally suboptimal solutions). This is far easier than having one complex step that does both.
I have a pretty naive one pass (over the radii) solution that produces alright results, although there is definitely room for improvement. I do have some ideas in that direction but figure I might as well share what I have in case anybody else wants to hack on it too.
It looks like they intersect at the center, but they don't. I decorated the placement function with a nested loop that checks every circle against every other circle (twice) and raises an AssertionError if there is an intersection.
Also, I can get the edge close to perfect by simply reverse sorting the list but I don't think the center looks good that way. It's (pretty much the only thing ;) discussed in the comments to the code.
The idea is to only look at discrete points that a circle might live at and iterate over them using the following generator:
import math

def base_points(radial_res, angular_res):
    # Yield candidate points on concentric rings around the origin.
    circle_angle = 2 * math.pi
    r = 0
    while 1:
        theta = 0
        while theta <= circle_angle:
            yield (r * math.cos(theta), r * math.sin(theta))
            # sample larger rings more densely (angular step ~ 1/sqrt(r))
            r_ = math.sqrt(r) if r > 1 else 1
            theta += angular_res / r_
        r += radial_res
This just starts at the origin and traces out points along concentric circles around it. We process the radii by sorting them according to some parameters, so as to keep the large circles near the center (the beginning of the list) but with enough small ones near the beginning to fill in spaces. We then iterate over the radii. Within the main loop, we first loop over the points that we have already looked at and saved. If none of those is suitable, we start pulling new points out of the generator and saving them (in order) until we find a suitable spot. We then place the circle, go through our list of saved points, and pull out all of the ones that fall within the new circle. Then we repeat with the next radius. A sketch of this placement loop is given below.
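Here is a sketch of that placement loop as I read it (my paraphrase, not the original code; I use a plain reverse sort of the radii instead of the author's tuned ordering). It relies on the base_points generator above:

def place_circles(radii, radial_res=1.0, angular_res=0.5):
    gen = base_points(radial_res, angular_res)
    placed = []   # (x, y, r) of placed circles
    saved = []    # candidate points already pulled from the generator

    def fits(x, y, r):
        return all((x - cx)**2 + (y - cy)**2 >= (r + cr)**2
                   for cx, cy, cr in placed)

    for r in sorted(radii, reverse=True):
        # First try the points we have already pulled and saved.
        spot = next(((x, y) for x, y in saved if fits(x, y, r)), None)
        if spot is not None:
            saved.remove(spot)
        while spot is None:                       # pull fresh points as needed
            x, y = next(gen)
            if fits(x, y, r):
                spot = (x, y)
            else:
                saved.append((x, y))
        placed.append((spot[0], spot[1], r))
        # drop saved points that now fall inside the new circle
        saved = [(x, y) for x, y in saved
                 if (x - spot[0])**2 + (y - spot[1])**2 >= r*r]
    return placed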
I'll put some ideas I have into play and make it mo`bettah. This might serve as a good first step for a physics based idea because you get to start with no overlaps. Of course it might already be tight enough so that you wouldn't have much room.
Also, I've never played with numpy or matplotlib, so I write just vanilla Python. There might be something in there that will make it run much faster; I'll have to look.
Can you treat the circles as charged particles in a charged cavity and look for a stable solution? That is, circles repel one another according to proximity, but are attracted towards the origin. A few steps of simulation might get you a decent answer.
Sounds like a circle packing problem; here is some information:
Circle Packing Wolfram MathWorld
Circle Packing Algorithms Google Scholar
CirclePack software
http://en.wikipedia.org/wiki/Apollonian_gasket
This seems somewhat relevant to what you are trying to do, and may provide some potential constraints for you.