Remove barrel distortion from an image in MATLAB [duplicate]

BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE (original photo link)
Input: original image with fisheye distortion to fix.
(2) DESTINATION (original photo link)
Output: corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x+R*cos(theta), centre.y+R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n", p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?

The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
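For instance, a minimal sketch of that radius correction in Python (assuming f is already expressed in pixels and R_d is the distorted radius computed as in the function stub above):

import math

def correct_radius(R_d, f):
    # R_u = f * tan(2 * asin(R_d / (2 * f)))
    # only meaningful while R_d < 2*f and the corrected angle stays below 90 degrees
    return f * math.tan(2.0 * math.asin(R_d / (2.0 * f)))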
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python
# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056
import sys
import cv

def main(argv):
    if len(argv) < 11:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)
    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
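For reference, the modern cv2 bindings make the same undistortion much shorter. This is only a sketch using the example calibration numbers from the comment above (in practice cx, cy would normally be near the image centre, and the file names are placeholders):

import cv2
import numpy as np

fx, fy, cx, cy = 1367.451167, 1367.451167, 0.0, 0.0
k1, k2, p1, p2 = -0.246065, 0.193617, -0.002004, -0.002056

camera_matrix = np.array([[fx, 0, cx],
                          [0, fy, cy],
                          [0,  0,  1]], dtype=np.float64)
dist_coeffs = np.array([k1, k2, p1, p2], dtype=np.float64)

src = cv2.imread("input.jpg")
dst = cv2.undistort(src, camera_matrix, dist_coeffs)
cv2.imwrite("output.jpg", dst)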

(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial-and-error: I don't fundamentally grasp why this code is working, explanations and improved accuracy appreciated!
from math import sqrt, atan

def dist(x, y):
    return sqrt(x*x + y*y)

def correct_fisheye(src_size, dest_size, dx, dy, factor):
    """ returns a tuple of source coordinates (sx, sy)
        (note: values can be out of range) """
    # convert dx,dy to relative coordinates
    rx, ry = dx - (dest_size[0]/2), dy - (dest_size[1]/2)
    # calc theta
    r = dist(rx, ry) / (dist(src_size[0], src_size[1]) / factor)
    if 0 == r:
        theta = 1.0
    else:
        theta = atan(r) / r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2) + theta*rx, (src_size[1]/2) + theta*ry
    # done
    return (int(round(sx)), int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
(The result image link is dead; a comparison image from the blog post followed here.)
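For completeness, a minimal way to apply correct_fisheye to a whole image, assuming Pillow for pixel access and nearest-neighbour sampling as noted above (the file names are placeholders):

from PIL import Image

src = Image.open("fisheye.jpg")
dst = Image.new(src.mode, src.size)
src_px, dst_px = src.load(), dst.load()
w, h = src.size

for y in range(h):
    for x in range(w):
        # destination pixel (x, y) is filled from source pixel (sx, sy)
        sx, sy = correct_fisheye((w, h), (w, h), x, y, 3.0)
        if 0 <= sx < w and 0 <= sy < h:
            dst_px[x, y] = src_px[sx, sy]

dst.save("rectilinear.jpg")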

If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin  = 2 f sin(w/2)  ->  sin(w/2) = Rin/(2f)
Rout = f tan(w)      ->  tan(w) = Rout/f
(Rin/(2f))^2 = [sin(w/2)]^2 = (1 - cos(w))/2  ->  cos(w) = 1 - 2(Rin/(2f))^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
->  (Rout/f)^2 = 1/(1 - 2[Rin/(2f)]^2)^2 - 1
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
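A hedged sketch of that polynomial model (the coefficients A and B below are placeholders to be tuned or fitted, not values from this answer):

def undistort_radius_poly(r_in, A=0.0, B=0.0):
    # odd-symmetric radial model: Rout = Rin * (1 + A*Rin^2 + B*Rin^4)
    r2 = r_in * r_in
    return r_in * (1.0 + A * r2 + B * r2 * r2)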
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1 - 2[Rin/(2f)]^2)^2 - 1
->  Rout/f = [Rin/f] * sqrt(1 - [Rin/f]^2/4) / (1 - [Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2 + 1) * (sqrt([Rout/f]^2 + 1) + 1) / 2 )
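As a quick numerical sanity check, the forward scale factor and the proposed inverse can be verified to round-trip (radii are expressed in units of f; the forward map is only valid for Rin/f < sqrt(2)):

import math

def forward(x):    # x = Rin/f  ->  Rout/f
    return x * math.sqrt(1.0 - x * x / 4.0) / (1.0 - x * x / 2.0)

def inverse(y):    # y = Rout/f ->  Rin/f
    s = math.sqrt(y * y + 1.0)
    return y / math.sqrt(s * (s + 1.0) / 2.0)

for x in (0.1, 0.5, 1.0):
    print(x, inverse(forward(x)))   # each pair should match up to rounding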

I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use fisheye_auto_zoom to get the value for the zoom parameter.
from math import sqrt, tan, asin

def dist(x, y):
    return sqrt(x*x + y*y)

def fisheye_to_rectilinear(src_size, dest_size, sx, sy, crop_factor, zoom):
    """ returns a tuple of dest coordinates (dx, dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image """
    # convert sx,sy to relative coordinates
    rx, ry = sx - (src_size[0]/2), sy - (src_size[1]/2)
    r = dist(rx, ry)
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0], src_size[1]) * crop_factor / pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin ( r / ( 2 * f ) ) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2) + rx/r*nr, (dest_size[1]/2) + ry/r*nr
    # done
    return (int(round(dx)), int(round(dy)))

def fisheye_auto_zoom(src_size, dest_size, crop_factor):
    """ calculate zoom such that left edge of source image matches left edge of dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
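A hedged usage sketch (the sizes and crop factor below are made-up values, not ones from this answer). Note that fisheye_to_rectilinear maps source coordinates to destination coordinates, so to fill a destination image without holes you would still want the inverse mapping, as discussed elsewhere on this page:

# hypothetical sizes; crop_factor = 1.0 assumes the fisheye circle spans the source diagonal
src_size, dest_size, crop = (1280, 960), (1280, 960), 1.0
zoom = fisheye_auto_zoom(src_size, dest_size, crop)
# map one source pixel to its destination position
dx, dy = fisheye_to_rectilinear(src_size, dest_size, 100, 100, crop, zoom)
print(zoom, dx, dy)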

I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}

I found this pdf file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure that it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
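For quick experimentation, here is a hedged Python transliteration of the same forward model (radial k1, k2 plus tangential p1, p2; k3 is omitted exactly as in the snippet above). It maps an undistorted pixel to its distorted position, which is the direction you need when building a remap table for an undistorted output image:

def distort_point(u, v, fu, fv, u0, v0, k1, k2, p1, p2):
    x = (u - u0) / fu
    y = (v - v0) / fv
    r2 = x * x + y * y
    c = 1 + k1 * r2 + k2 * r2 * r2                  # radial factor
    xd = x * c + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * c + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd * fu + u0, yd * fv + v0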

This can be solved as an optimization problem. Simply mark curves in the image that are supposed to be straight lines and store the contour points for each of those curves. Then the fisheye matrix can be solved for as a minimization problem: minimize the curvature of those point sets, and that gives you the fisheye matrix. It works.
It can also be done manually by adjusting the fisheye matrix using trackbars! Here is a fisheye GUI using OpenCV for manual calibration.

Related

Fast Radial Symmetry Transform (FRST) implementation (python) results in unusual cross-hair looking artifacts

I am trying to implement FRST in Python to detect centroids of elliptical objects (e.g. cells in microscopy images), but my implementation does not find seed points (more or less center points) of elliptical objects. This effort comes from duplicating FRST from "Segmentation of Overlapping Elliptical Objects in Silhouette Images" (https://ieeexplore.ieee.org/document/7300433). I don't know why I have these artifacts. An interesting thing is that I see these patterns (crosses) all in the same direction per object. Any pointer in the right direction to generate the same result as in the paper (just to find the seed points) will be most welcome.
Original Paper: A Fast Radial Symmetry Transform for Detecting Points of Interest by Loy and Zelinsky (ECCV 2002)
I have also tried the pre-existing python package for FRST: https://pypi.org/project/frst/. This somehow results in the same artifacts. Weird.
First image: Original Image
Second image: Sobel-operated Image
Third image: Magnitude Projection Image
Fourth image: Magnitude Projection Image with positively affected pixels only
Fifth image: FRST'd image: end-product with original image overlaid (shadowed)
Sixth image: FRST'd image by the pre-existing python package with original image overlaid (shadowed).
from scipy.ndimage import gaussian_filter
import numpy as np
from scipy.signal import convolve

# Get orientation projection image
def get_proj_img(image, radius):
    workingDims = tuple((e + 2*radius) for e in image.shape)
    h, w = image.shape
    ori_img = np.zeros(workingDims)  # Orientation Projection Image
    mag_img = np.zeros(workingDims)  # Magnitude Projection Image

    # Kernels for the Sobel operator
    a1 = np.matrix([1, 2, 1])
    a2 = np.matrix([-1, 0, 1])
    Kx = a1.T * a2
    Ky = a2.T * a1

    # Apply the Sobel operator
    sobel_x = convolve(image, Kx)
    sobel_y = convolve(image, Ky)
    sobel_norms = np.hypot(sobel_x, sobel_y)

    # Distances to afpx, afpy (affected pixels)
    dist_afpx = np.multiply(np.divide(sobel_x, sobel_norms, out=np.zeros(sobel_x.shape), where=sobel_norms != 0), radius)
    dist_afpx = np.round(dist_afpx).astype(int)
    dist_afpy = np.multiply(np.divide(sobel_y, sobel_norms, out=np.zeros(sobel_y.shape), where=sobel_norms != 0), radius)
    dist_afpy = np.round(dist_afpy).astype(int)

    for cords, sobel_norm in np.ndenumerate(sobel_norms):
        i, j = cords
        pos_aff_pix = (i + dist_afpx[i, j], j + dist_afpy[i, j])
        neg_aff_pix = (i - dist_afpx[i, j], j - dist_afpy[i, j])
        ori_img[pos_aff_pix] += 1
        ori_img[neg_aff_pix] -= 1
        mag_img[pos_aff_pix] += sobel_norm
        mag_img[neg_aff_pix] -= sobel_norm

    ori_img = ori_img[:h, :w]
    mag_img = mag_img[:h, :w]
    print("Did it go back to the original image size? ")
    print(ori_img.shape == image.shape)
    # try normalizing ori and mag img
    return ori_img, mag_img

def get_sn(ori_img, mag_img, radius, kn, alpha):
    ori_img_limited = np.minimum(ori_img, kn)
    fn = np.multiply(np.divide(mag_img, kn), np.power((np.absolute(ori_img_limited)/kn), alpha))
    # convolve fn with a Gaussian filter
    sn = gaussian_filter(fn, 0.25*radius)
    return sn

def do_frst(image, radius, kn, alpha, ksize=3):
    ori_img, mag_img = get_proj_img(image, radius)
    sn = get_sn(ori_img, mag_img, radius, kn, alpha)
    return sn
Parameters:
radius = 50
kn = 10
alpha = 2
beta = 0
stdfactor = 0.25

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated: I want to make it seem as if the pixels on the display are moving from their current location and forming a turning circle or something like this.
This is what i have for now:
int c = 0;
int radius = 30;
allPixels = removeBlackP();

void draw() {
    loadPixels();
    for (int alpha = 0; alpha < 360; alpha++)
    {
        float xf = 350 + radius*cos(alpha);
        float yf = 350 + radius*sin(alpha);
        int x = (int) xf;
        int y = (int) yf;
        if (radius > 200) { radius = 30; break; }
        if (c >= allPixels.length) { c = 0; }
        pixels[y*700 + x] = allPixels[c];
        updatePixels();
    }
    radius++;
    c++;
}
The function removeBlackP returns an array with all the pixels except for the black ones.
This code works for me. There is an issue that the circle coordinates are truncated to ints, so it seems like some pixels inside the circle won't fill; I can live with that. I'm looking for something a bit more complex, like I explained.
Thanks!
Fill all pixels of scanlines belonging to the circle. Using this approach, you will paint all places inside the circle. For every line calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dy = y - center_y
    dx = Sqrt(radius * radius - dy * dy)
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
When you find places for all pixels, you can make correlation between initial pixels places and calculated ones and move them step-by-step.
For example, if initial coordinates relative to center point for k-th pixel are (x0, y0) and final coordinates are (x1,y1), and you want to make M steps, moving pixel by spiral, calculate intermediate coordinates:
calc values once:
r0 = Sqrt(x0*x0 + y0*y0) //Math.Hypot if available
r1 = Sqrt(x1*x1 + y1*y1)
fi0 = Math.Atan2(y0, x0)
fi1 = Math.Atan2(y1, x1)
if fi1 < fi0 then
fi1 = fi1 + 2 * Pi;
for i = 1; i <=M ; i++
x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
shift by center coordinates
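A small Python sketch of that spiral interpolation, in case the pseudocode is easier to follow as runnable code (coordinates are relative to the circle centre, as above):

import math

def spiral_steps(x0, y0, x1, y1, steps):
    # yield intermediate (x, y) positions moving a point from (x0, y0)
    # to (x1, y1) along a spiral, in the given number of steps
    r0, r1 = math.hypot(x0, y0), math.hypot(x1, y1)
    fi0, fi1 = math.atan2(y0, x0), math.atan2(y1, x1)
    if fi1 < fi0:
        fi1 += 2 * math.pi
    for i in range(1, steps + 1):
        t = i / steps
        r, fi = r0 + t * (r1 - r0), fi0 + t * (fi1 - fi0)
        yield r * math.cos(fi), r * math.sin(fi)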
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, no pixels involved though:
If you do need to draw an ellipse and use pixels, you can make use of PGraphics which is similar to using a separate buffer/"layer" to draw into using Processing drawing commands but it also has pixels[] you can access.
Let's say you want to draw a low-res pixel circle; you can create a small PGraphics, disable smoothing, draw the circle, then render the circle at a higher resolution. The only catch is these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;

void setup(){
    //disable sketch's aliasing
    noSmooth();
    buffer = createGraphics(25,25);
    buffer.beginDraw();
    //disable buffer's aliasing
    buffer.noSmooth();
    buffer.noFill();
    buffer.stroke(255);
    buffer.endDraw();
}

void draw(){
    background(255);
    //draw small circle
    float circleSize = map(sin(frameCount * .01),-1.0,1.0,0.0,20.0);
    buffer.beginDraw();
    buffer.background(0);
    buffer.ellipse(buffer.width / 2, buffer.height / 2, circleSize, circleSize);
    buffer.endDraw();
    //render small circle at higher resolution (blocky - no aliasing)
    image(buffer,0,0,width,height);
}
If you want to manually draw a circle using pixels[], you are on the right track using the polar to cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it focuses on drawing a radial gradient, you can find an example of drawing a circle (many, actually) using pixels in this answer.

Image Based Visual Servoing algorithm in MATLAB

I was trying to implement the IBVS algorithm (the one explained in the Introduction here) in MATLAB myself, but I am facing the following problem: the algorithm seems to work only in cases where the camera does not have to change its orientation with respect to the world frame. For example, if I just try to make one vertex of the initial (almost) square move closer to its opposite vertex, the algorithm does not work, as can be seen in the following image.
The red x are the desired projections, the blue circles are the initial ones and the green ones are the ones I get from my algorithm.
Also, the errors are not decreasing exponentially as they should.
What am I doing wrong? I am attaching my MATLAB code, which is fully runnable. If anyone could take a look, I would be really grateful. I took out the code that was performing the plotting; I hope it is more readable now. Visual servoing has to be performed with at least 4 target points, because otherwise the problem has no unique solution. If you are willing to help, I would suggest you take a look at the calc_Rotation_matrix() function to check that the rotation matrix is properly calculated, then verify that the line ds = vc; in euler_ode is correct. The camera orientation is expressed in Euler angles according to this convention. Finally, one could check whether the interaction matrix L is properly calculated.
function VisualServo()
global A3D B3D C3D D3D A B C D Ad Bd Cd Dd
%coordinates of the 4 points wrt camera frame
A3D = [-0.2633;0.27547;0.8956];
B3D = [0.2863;-0.2749;0.8937];
C3D = [-0.2637;-0.2746;0.8977];
D3D = [0.2866;0.2751;0.8916];
%initial projections (computed here only to show their relation with the desired ones)
A=A3D(1:2)/A3D(3);
B=B3D(1:2)/B3D(3);
C=C3D(1:2)/C3D(3);
D=D3D(1:2)/D3D(3);
%initial camera position and orientation
%orientation is expressed in Euler angles (X-Y-Z around the inertial frame
%of reference)
cam=[0;0;0;0;0;0];
%desired projections
Ad=A+[0.1;0];
Bd=B;
Cd=C+[0.1;0];
Dd=D;
t0 = 0;
tf = 50;
s0 = cam;
%time step
dt=0.01;
t = euler_ode(t0, tf, dt, s0);
end
function ts = euler_ode(t0,tf,dt,s0)
global A3D B3D C3D D3D Ad Bd Cd Dd
s = s0;
ts=[];
for t=t0:dt:tf
ts(end+1)=t;
cam = s;
% rotation matrix R_WCS_CCS
R = calc_Rotation_matrix(cam(4),cam(5),cam(6));
r = cam(1:3);
% 3D coordinates of the 4 points wrt the NEW camera frame
A3D_cam = R'*(A3D-r);
B3D_cam = R'*(B3D-r);
C3D_cam = R'*(C3D-r);
D3D_cam = R'*(D3D-r);
% NEW projections
A=A3D_cam(1:2)/A3D_cam(3);
B=B3D_cam(1:2)/B3D_cam(3);
C=C3D_cam(1:2)/C3D_cam(3);
D=D3D_cam(1:2)/D3D_cam(3);
% computing the L matrices
L1 = L_matrix(A(1),A(2),A3D_cam(3));
L2 = L_matrix(B(1),B(2),B3D_cam(3));
L3 = L_matrix(C(1),C(2),C3D_cam(3));
L4 = L_matrix(D(1),D(2),D3D_cam(3));
L = [L1;L2;L3;L4];
%updating the projection errors
e = [A-Ad;B-Bd;C-Cd;D-Dd];
%compute camera velocity
vc = -0.5*pinv(L)*e;
%change of the camera position and orientation
ds = vc;
%update camera position and orientation
s = s + ds*dt;
end
ts(end+1)=tf+dt;
end
function R = calc_Rotation_matrix(theta_x, theta_y, theta_z)
Rx = [1 0 0; 0 cos(theta_x) -sin(theta_x); 0 sin(theta_x) cos(theta_x)];
Ry = [cos(theta_y) 0 sin(theta_y); 0 1 0; -sin(theta_y) 0 cos(theta_y)];
Rz = [cos(theta_z) -sin(theta_z) 0; sin(theta_z) cos(theta_z) 0; 0 0 1];
R = Rx*Ry*Rz;
end
function L = L_matrix(x,y,z)
L = [-1/z,0,x/z,x*y,-(1+x^2),y;
0,-1/z,y/z,1+y^2,-x*y,-x];
end
Cases that work:
Ad=2*A;
Bd=2*B;
Cd=2*C;
Dd=2*D;
Ad=A+1;
Bd=B+1;
Cd=C+1;
Dd=D+1;
Ad=2*A+1;
Bd=2*B+1;
Cd=2*C+1;
Dd=2*D+1;
Cases that do NOT work:
Rotation by 90 degrees and zoom out (zoom out alone works, but I am doing it here for better visualization)
Ad=2*D;
Bd=2*C;
Cd=2*A;
Dd=2*B;
Your problem comes from the way you move the camera from the resulting visual servoing velocity. Rather than
cam = cam + vc*dt;
you should compute the new camera position using the exponential map
cam = cam*expm(vc*dt)

matlab: efficient computation of local histograms within circular neighboorhoods

I've an image over which I would like to compute a local histogram within a circular neighborhood. The size of the neighborhood is given by a radius. Although the code below does the job, it's computationally expensive. I ran the profiler, and the way I'm accessing the pixels within the circular neighborhoods is already expensive.
Is there any sort of improvement/optimization based maybe on vectorization? Or for instance, storing the neighborhoods as columns?
I found a similar question in this post and the proposed solution is quite in the spirit of the code below; however, the solution is still not appropriate for my case. Any ideas are really welcome :-) Imagine for the moment that the image is binary, but the method should ideally also work with gray-level images :-)
[rows, cols] = size(img);
hist_img = zeros(rows, cols, 2);
[XX, YY] = meshgrid(1:cols, 1:rows);
for rr = 1:rows
    for cc = 1:cols
        distance = sqrt( (YY-rr).^2 + (XX-cc).^2 );
        mask_radii = (distance <= radius);
        bwresponses = img(mask_radii);
        [nelems, ~] = histc(double(bwresponses), 0:255);
        % do some processing over the histogram
        ...
    end
end
EDIT 1: Given the received feedback, I tried to update the solution. However, it's not yet correct:
radius = sqrt(2.0);
disk = diskfilter(radius);
fun = @(x) histc( x(disk>0), min(x(:)):max(x(:)) );
output = im2col(im, size(disk), fun);

function disk = diskfilter(radius)
    height = 2*ceil(radius)+1;
    width = 2*ceil(radius)+1;
    [XX,YY] = meshgrid(1:width, 1:height);
    dist = sqrt((XX-ceil(width/2)).^2 + (YY-ceil(height/2)).^2);
    disk = (dist <= radius);
end
Following on the technique I described in my answer to a similar question you could try to do the following:
compute the index offsets from a particular voxel that get you to all the neighbors within a radius
Determine which voxels have all neighbors at least radius away from the edge
Compute the neighbors for all these voxels
Generate your histograms for each neighborhood
It is not hard to vectorize this, but note that
It will be slow when the neighborhood is large
It involves generating an intermediate matrix that is NxM (N = voxels in image, M = voxels in neighborhood) which could get very large
Here is the code:
% generate histograms for neighborhood within radius r
A = rand(200,200,200);
radius = 2.5;
tic
sz=size(A);
[xx yy zz] = meshgrid(1:sz(2), 1:sz(1), 1:sz(3));
center = round(sz/2);
centerPoints = find((xx - center(1)).^2 + (yy - center(2)).^2 + (zz - center(3)).^2 < radius.^2);
centerIndex = sub2ind(sz, center(1), center(2), center(3));
% limit to just the points that are "far enough on the inside":
inside = find(xx > radius+1 & xx < sz(2) - radius & ...
yy > radius + 1 & yy < sz(1) - radius & ...
zz > radius + 1 & zz < sz(3) - radius);
offsets = centerPoints - centerIndex;
allPoints = 1:prod(sz);
insidePoints = allPoints(inside);
indices = bsxfun(@plus, offsets, insidePoints);
hh = histc(A(indices), 0:0.1:1); % <<<< modify to give you the histogram you want
toc
A 2D version of the same code (which might be all you need, and is considerably faster):
% generate histograms for neighborhood within radius r
A = rand(200,200);
radius = 2.5;
tic
sz=size(A);
[xx yy] = meshgrid(1:sz(2), 1:sz(1));
center = round(sz/2);
centerPoints = find((xx - center(1)).^2 + (yy - center(2)).^2 < radius.^2);
centerIndex = sub2ind(sz, center(1), center(2));
% limit to just the points that are "far enough on the inside":
inside = find(xx > radius+1 & xx < sz(2) - radius & ...
yy > radius + 1 & yy < sz(1) - radius);
offsets = centerPoints - centerIndex;
allPoints = 1:prod(sz);
insidePoints = allPoints(inside);
indices = bsxfun(@plus, offsets, insidePoints);
hh = histc(A(indices), 0:0.1:1); % <<<< modify to give you the histogram you want
toc
You're right, I don't think that colfilt can be used as you're not applying a filter. You'll have to check the correctness, but here's my attempt using im2col and your diskfilter function (I did remove the conversion to double so it now outputs logicals):
function circhist
% Example data
im = randi(256,20)-1;
% Ranges - I do this globally for the whole image rather than for each neighborhood
mini = min(im(:));
maxi = max(im(:));
edges = linspace(mini,maxi,20);
% Disk filter
radius = sqrt(2.0);
disk = diskfilter(radius); % Returns logical matrix
% Pad array with -1
im_pad = padarray(im, (size(disk)-1)/2, -1);
% Convert sliding neighborhoods to columns
B = im2col(im_pad, size(disk), 'sliding');
% Get elements from each column that correspond to disk (logical indexing)
C = B(disk(:), :);
% Apply histogram across columns to count number of elements
out = histc(C, edges)
% Display output
figure
imagesc(out)
h = colorbar;
ylabel(h,'Counts');
xlabel('Neighborhood #')
ylabel('Bins')
axis xy
function disk = diskfilter(radius)
height = 2*ceil(radius)+1;
width = 2*ceil(radius)+1;
[XX,YY] = meshgrid(1:width,1:height);
dist = sqrt((XX-ceil(width/2)).^2+(YY-ceil(height/2)).^2);
disk = (dist <= radius);
If you want to set your ranges (edges) based on each neighborhood then you'll need to make sure that the vector is always the same length if you want to build a big matrix (and then the rows of that matrix won't correspond to each other).
You should note that the shape of the disk returned by fspecial is not as circular as what you were using. It's meant to be used as a smoothing/averaging filter, so the edges are fuzzy (anti-aliased). Thus when you use ~=0 it will grab more pixels. I'd stick with your own function, which is faster anyway.
You could try processing with an opposite logic (as briefly explained in the comment)
hist = zeros(W+2*R, H+2*R, Q);
for i = 1:R+1;
    for j = 1:R+1;
        if ((i-R-1)^2+(j-R-1)^2 < R*R)
            for q = 0:1:Q-1;
                hist(i:i+W-1, j:j+H-1, q+1) += (image == q);
            end
        end
    end
end
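For comparison, the same per-bin idea translates directly to Python, where the circular-neighbourhood count for each gray level is one 2-D convolution with a disk mask (a sketch; it assumes the image already holds integer levels in [0, n_bins), and the result gets large for many bins):

import numpy as np
from scipy.ndimage import convolve

def local_circular_histograms(img, radius, n_bins=256):
    # hist[r, c, q] = number of pixels with level q inside the disk centred at (r, c)
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (xx**2 + yy**2 <= radius**2).astype(float)
    hist = np.empty(img.shape + (n_bins,))
    for q in range(n_bins):
        # counting level-q pixels inside the disk == convolving the level-q mask with the disk
        hist[..., q] = convolve((img == q).astype(float), disk, mode='constant')
    return hist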

HCL color to RGB and backward

I need an algorithm to convert the HCL color to RGB and backward RGB to HCL keeping in mind that these color spaces have different gamuts (I need to constrain the HCL colors to those that can be reproduced in RGB color space). What is the algorithm for this (the algorithm is intended to be implemented in Wolfram Mathematica that supports natively only RGB color)? I have no experience in working with color spaces.
P.S. Some articles about HCL color:
M. Sarifuddin (2005). A new perceptually uniform color space with associated color similarity measure for content–based image and video retrieval.
Zeileis, Hornik and Murrell (2009): Escaping RGBland: Selecting Colors for Statistical Graphics // Computational Statistics & Data Analysis Volume 53, Issue 9, 1 July 2009, Pages 3259-3270
UPDATE:
As pointed out by Jonathan Jansson, the above two articles describe different color spaces under the name "HCL": "The second article uses LCh(uv), which is the same as L*u*v* but described in polar coordinates, where h(uv) is the angle of the u* and v* coordinates and C* is the magnitude of that vector". So in fact I need an algorithm for converting RGB to L*u*v* and back.
I was just learning about the HCL colorspace too. The two articles in your question seem to use different color spaces, though.
The second article uses L*C*h(uv), which is the same as L*u*v* but described in polar coordinates, where h(uv) is the angle of the u* and v* coordinates and C* is the magnitude of that vector.
The LCH color space in the first article seems to describe a different color space, one that uses a more algorithmic conversion. There is also another version of the first paper here: http://isjd.pdii.lipi.go.id/admin/jurnal/14209102121.pdf
If you meant to use the CIE L*u*v* you need to first convert sRGB to CIE XYZ and then convert to CIE L*u*v*. RGB actually refers to sRGB in most cases so there is no need to convert from RGB to sRGB.
All source code needed
Good article about how conversion to XYZ works
Nice online converter
But I can't answer your question about how to constrain the colors to the sRGB space. You could just throw away RGB colors which are outside the range 0 to 1 after conversion, but just clamping colors can give quite weird results. Try the converter: enter the color RGB 0 0 255, convert it to L*a*b* (similar to L*u*v*), then increase L* to, say, 70 and convert it back; the result is certainly not blue anymore.
Edit: Corrected the URL
Edit: Merged another answer into this answer
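If you do go the CIE route, the conversion chain is mechanical. Here is a hedged NumPy sketch of sRGB -> XYZ -> L*u*v* -> LCh(uv) with the D65 white point; the constants are the commonly published ones, but double-check them before relying on this:

import numpy as np

M = np.array([[0.4124564, 0.3575761, 0.1804375],   # linear sRGB (D65) -> XYZ
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
Xn, Yn, Zn = 0.95047, 1.0, 1.08883                  # D65 reference white

def srgb_to_lch_uv(rgb):
    # rgb: three values in [0, 1]; returns (L*, C*, h_uv in degrees)
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    X, Y, Z = M @ lin
    d = X + 15 * Y + 3 * Z
    if d == 0:                                      # pure black
        return 0.0, 0.0, 0.0
    up, vp = 4 * X / d, 9 * Y / d
    upn = 4 * Xn / (Xn + 15 * Yn + 3 * Zn)
    vpn = 9 * Yn / (Xn + 15 * Yn + 3 * Zn)
    yr = Y / Yn
    L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    u, v = 13 * L * (up - upn), 13 * L * (vp - vpn)
    return L, float(np.hypot(u, v)), float(np.degrees(np.arctan2(v, u)) % 360)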
HCL is a very generic name; there are a lot of ways to have a hue, a chroma, and a lightness. Chroma.js, for example, has something it calls HCL which is Lab converted to polar coordinates (when you look at the actual code). Other implementations, even ones linked from that same site, use Polar Luv. Since you can simply borrow the L factor and derive the hue by converting to polar coordinates, these are both valid ways to get those three elements. It is far better to call them Polar Lab and Polar Luv, because of the confusion factor.
M. Sarifuddin (2005)'s algorithm is not Polar Luv or Polar Lab and is computationally simpler (you don't need to derive Lab or Luv space first), and may actually be better. There are some things that seem wrong in the paper. For example, it applies a Euclidean distance to a CIE L*C*H* colorspace. The use of a hue means it's necessarily circular, and just jamming that number into A²+B²+C² is going to give you issues. The same is true of applying D94 or D00 to a hue-based colorspace, as these are distance algorithms with built-in corrections specific to Lab colorspace. Unless I'm missing something there, I'd disregard figures 6-8. And I question the rejection tolerances in the graphics. You could set a lower threshold and do better, and the numbers between color spaces are not normalized. In any event, despite a few seeming flaws in the paper, the algorithm described is worth a shot. You might want to do Euclidean on RGB if it doesn't really matter much. But, if you're shopping around for color distance algorithms, here you go.
Here is HCL as given by M. Sarifuddin, implemented in Java. Having read the paper repeatedly, I cannot avoid the conclusion that it scales the distance by a factor of between 0.16 and 180.16 with regard to the change in hue in the distance_hcl routine. This is such a profound factor that it almost cannot be right at all, and it makes the color matching suck. I have the paper's line commented out and use a line with only the Al factor. Scaling luminance by a constant ~1.4 factor isn't going to make it unusable. With neither scale factor it ends up being identical to cycldistance.
http://w3.uqo.ca/missaoui/Publications/TRColorSpace.zip is corrected and improved version of the paper.
static final public double Y0 = 100;
static final public double gamma = 3;
static final public double Al = 1.4456;
static final public double Ach_inc = 0.16;

public void rgb2hcl(double[] returnarray, int r, int g, int b) {
    double min = Math.min(Math.min(r, g), b);
    double max = Math.max(Math.max(r, g), b);
    if (max == 0) {
        returnarray[0] = 0;
        returnarray[1] = 0;
        returnarray[2] = 0;
        return;
    }
    double alpha = (min / max) / Y0;
    double Q = Math.exp(alpha * gamma);
    double rg = r - g;
    double gb = g - b;
    double br = b - r;
    double L = ((Q * max) + ((1 - Q) * min)) / 2;
    double C = Q * (Math.abs(rg) + Math.abs(gb) + Math.abs(br)) / 3;
    double H = Math.toDegrees(Math.atan2(gb, rg));

    /*
    //the formulae given in the paper don't work.
    if (rg >= 0 && gb >= 0) {
        H = 2 * H / 3;
    } else if (rg >= 0 && gb < 0) {
        H = 4 * H / 3;
    } else if (rg < 0 && gb >= 0) {
        H = 180 + 4 * H / 3;
    } else if (rg < 0 && gb < 0) {
        H = 2 * H / 3 - 180;
    } // 180 causes the parts to overlap (green == red) and it oddly crumples up bits of the hue for no good reason. 2/3H and 4/3H expanding and contracting quadrants.
    */

    if (rg < 0) {
        if (gb >= 0) H = 90 + H;
        else { H = H - 90; }
    } //works

    returnarray[0] = H;
    returnarray[1] = C;
    returnarray[2] = L;
}

public double cycldistance(double[] hcl1, double[] hcl2) {
    double dL = hcl1[2] - hcl2[2];
    double dH = Math.abs(hcl1[0] - hcl2[0]);
    double C1 = hcl1[1];
    double C2 = hcl2[1];
    return Math.sqrt(dL*dL + C1*C1 + C2*C2 - 2*C1*C2*Math.cos(Math.toRadians(dH)));
}

public double distance_hcl(double[] hcl1, double[] hcl2) {
    double c1 = hcl1[1];
    double c2 = hcl2[1];
    double Dh = Math.abs(hcl1[0] - hcl2[0]);
    if (Dh > 180) Dh = 360 - Dh;
    double Ach = Dh + Ach_inc;
    double AlDl = Al * Math.abs(hcl1[2] - hcl2[2]);
    return Math.sqrt(AlDl * AlDl + (c1 * c1 + c2 * c2 - 2 * c1 * c2 * Math.cos(Math.toRadians(Dh))));
    //return Math.sqrt(AlDl * AlDl + Ach * (c1 * c1 + c2 * c2 - 2 * c1 * c2 * Math.cos(Math.toRadians(Dh))));
}
I'm familiar with quite a few color spaces, but this one is new to me. Alas, Mathematica's ColorConvert doesn't know it either.
I found an rgb2hcl routine here, but no routine going the other way.
A more comprehensive colorspace conversion package can be found here. It seems to be able to do conversions to and from all kinds of color spaces. Look for the file colorspace.c in colorspace_1.1-0.tar.gz\colorspace_1.1-0.tar\colorspace\src. Note that HCL is known as PolarLUV in this package.
As mentioned in other answers, there are a lot of ways to implement an HCL colorspace and map that into RGB.
HSLuv ended up being what I used, and has MIT-licensed implementations in C, C#, Go, Java, PHP, and several other languages. It's similar to CIELUV LCh but fully maps to RGB. The implementations are available on GitHub.
Here's a short graphic from the website describing the HSLuv color space, with the implementation output in the right two panels:
I was looking to interpolate colors on the web and found HCL to be the most fitting color space. I couldn't find any library that makes the conversion straightforward and performant, so I wrote my own.
There are many constants at play, and some of them vary significantly depending on where you source them from.
My target being the web, I figured I'd be better off matching the Chromium source code. Here's a minimized snippet written in TypeScript; the sRGB XYZ matrix is precomputed and all constants are inlined.
type RGB = [number, number, number]; // return type used below

const rgb255 = (v: number) => (v < 255 ? (v > 0 ? v : 0) : 255);
const b1 = (v: number) => (v > 0.0031308 ? v ** (1 / 2.4) * 269.025 - 14.025 : v * 3294.6);
const b2 = (v: number) => (v > 0.2068965 ? v ** 3 : (v - 4 / 29) * (108 / 841));
const a1 = (v: number) => (v > 10.314724 ? ((v + 14.025) / 269.025) ** 2.4 : v / 3294.6);
const a2 = (v: number) => (v > 0.0088564 ? v ** (1 / 3) : v / (108 / 841) + 4 / 29);

function fromHCL(h: number, c: number, l: number): RGB {
    const y = b2((l = (l + 16) / 116));
    const x = b2(l + (c / 500) * Math.cos((h *= Math.PI / 180)));
    const z = b2(l - (c / 200) * Math.sin(h));
    return [
        rgb255(b1(x * 3.021973625 - y * 1.617392459 - z * 0.404875592)),
        rgb255(b1(x * -0.943766287 + y * 1.916279586 + z * 0.027607165)),
        rgb255(b1(x * 0.069407491 - y * 0.22898585 + z * 1.159737864)),
    ];
}

function toHCL(r: number, g: number, b: number) {
    const y = a2((r = a1(r)) * 0.222488403 + (g = a1(g)) * 0.716873169 + (b = a1(b)) * 0.06060791);
    const l = 500 * (a2(r * 0.452247074 + g * 0.399439023 + b * 0.148375274) - y);
    const q = 200 * (y - a2(r * 0.016863605 + g * 0.117638439 + b * 0.865350722));
    const h = Math.atan2(q, l) * (180 / Math.PI);
    return [h < 0 ? h + 360 : h, Math.sqrt(l * l + q * q), 116 * y - 16];
}
Here's a playground for the above snippet.
It includes d3's interpolateHCL and the browser-native CSS transition for comparison.
https://svelte.dev/repl/0a40a8348f8841d0b7007c58e4d9b54c
Here's a gist to do the conversion to and from any web color format and interpolate it in the HCL color space.
https://gist.github.com/pushkine/c8ba98294233d32ab71b7e19a0ebdbb9
I think
if (rg < 0) {
    if (gb >= 0) H = 90 + H;
    else { H = H - 90; }
} //works
is not really necessary, because atan2(,) is used instead of the paper's atan(/) (though I don't know the specifics of Java's atan2).
