How can I get device tilt in Xamarin Forms?

I'd like to get the tilt of the device so I can use it to measure the tilt of some surface by laying the device down on that surface.
Right now I am using the Device Motion Plugin for Xamarin Forms from here: https://github.com/rdelrosario/xamarin-plugins
and the code below:
CrossDeviceMotion.Current.Start(MotionSensorType.Accelerometer);

CrossDeviceMotion.Current.SensorValueChanged += (s, a) =>
{
    switch (a.SensorType)
    {
        case MotionSensorType.Accelerometer:
        {
            Debug.WriteLine("A: {0},{1},{2}", ((MotionVector)a.Value).X, ((MotionVector)a.Value).Y,
                ((MotionVector)a.Value).Z);
            Exposicao.Inclinacao = ((MotionVector)a.Value).Z;
            break;
        }
        case MotionSensorType.Compass:
        {
            // Debug.WriteLine("H: {0}", a.Value);
            Exposicao.Bussola = (double)a.Value.Value;
            break;
        }
    }
};
The compass part is OK. The accelerometer part is working too, but there are a few issues.
If I'm not wrong, I get the tilt from the Z axis, i.e. the Z component of the value.
This value is different on Android and iOS; let's focus on Android.
Z values go from about 10 when the device is lying flat on a surface down to 0 when it is standing upright; let's consider just one quadrant.
Am I doing something wrong in trying to achieve what I explained?
How can I convert those values to an angle between 0 and 90 degrees? The relationship does not seem linear, so a value of 5 does not appear to be 45 degrees.
Thanks

I'd probably roll my own platform implementation for the feature you're looking for. The DeviceMotion library looks a bit too simple for your purposes, as you can see below. I'm pretty sure you can use it as a good starting point, but it needs to be extended a little.
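To answer the conversion question directly first: for a device at rest, the accelerometer's Z reading is roughly g·cos(tilt), where tilt is the angle of the surface from horizontal. That is why the mapping is not linear and a reading of 5 corresponds to about 60° rather than 45°. A minimal sketch of just the math (assuming the raw value is in m/s², about 9.81 when lying flat):

using System;

static class TiltDemo
{
    const double Gravity = 9.81;

    // Tilt of the surface relative to horizontal, in degrees (0..90).
    static double TiltFromZ(double z)
    {
        double ratio = Math.Max(-1.0, Math.Min(1.0, z / Gravity));
        return Math.Acos(ratio) * 180.0 / Math.PI;
    }

    static void Main()
    {
        Console.WriteLine(TiltFromZ(9.81)); // ~0 degrees, lying flat
        Console.WriteLine(TiltFromZ(5.0));  // ~59.4 degrees, not 45 - the mapping is a cosine
        Console.WriteLine(TiltFromZ(0.0));  // 90 degrees, standing upright
    }
}

A raw accelerometer also picks up any hand movement, so readings like this need smoothing (or the fused sensors described below) before they are stable enough to display.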
Android
On Android, you should use the Rotation Vector Sensor which uses a Kalman filter (with accelerometer, magnetometer and gyroscope) to get accurate measurements of the device's rotation:
The rotation vector represents the orientation of the device as a combination of an angle and an axis, in which the device has rotated through an angle θ around an axis (x, y, or z).
Image from the official Android documentation
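If you go down this route, the conversion step on the Xamarin.Android side could look roughly like the sketch below. This assumes the Android.Hardware bindings (SensorManager.GetRotationMatrixFromVector and SensorManager.GetOrientation); registering an ISensorEventListener for SensorType.RotationVector and exposing the result to Xamarin.Forms via a dependency service are left out.

using System;
using Android.Hardware;

static class RotationVectorHelper
{
    // rotationVector: the SensorEvent.Values array from a rotation vector event.
    // Returns { pitch, roll } in degrees.
    public static double[] ToPitchRollDegrees(float[] rotationVector)
    {
        var rotationMatrix = new float[9];
        SensorManager.GetRotationMatrixFromVector(rotationMatrix, rotationVector);

        var orientation = new float[3];   // azimuth, pitch, roll in radians
        SensorManager.GetOrientation(rotationMatrix, orientation);

        return new[] { orientation[1] * 180.0 / Math.PI, orientation[2] * 180.0 / Math.PI };
    }
}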
iOS:
For iOS, you have to do a bit more work yourself. The key is to make use of CMAttitude, which describes the attitude of the device relative to an initial attitude. I found a snippet I've saved to my collection from an unknown source (can't credit the original author) here:
public void CalculateLeanAngle ()
{
    motionManager = new CMMotionManager ();
    motionManager.DeviceMotionUpdateInterval = 0.02;

    if (motionManager.DeviceMotionAvailable) {
        motionManager.StartDeviceMotionUpdates(CMAttitudeReferenceFrame.XArbitraryZVertical, NSOperationQueue.CurrentQueue, (data, error) => {
            CMQuaternion quat = motionManager.DeviceMotion.Attitude.Quaternion;
            double x = quat.x;
            double y = quat.y;
            double w = quat.w;
            double z = quat.z;
            double degrees = 0.0;

            //Roll
            double roll = Math.Atan2 (2 * y * w - 2 * x * z, 1 - 2 * y * y - 2 * z * z);
            degrees = Math.Round (-applyKalmanFiltering (roll) * 180.0 / Constants.M_PI);
        });
    }
}

public double applyKalmanFiltering (double yaw)
{
    if (motionLastYaw == 0)
        motionLastYaw = yaw;

    float q = 0.1f; // process noise
    float r = 0.1f; // sensor noise
    float p = 0.1f; // estimated error
    float k = 0.5f; // kalman filter gain

    double x = motionLastYaw;
    p = p + q;
    k = p / (p + r);
    x = x + k * (yaw - x);
    p = (1 - k) * p;
    motionLastYaw = x;
    return motionLastYaw;
}
Image from the official Xamarin documentation
I'll try to look for the original source when I have more time but I'm pretty sure this will work out of the box for your purposes.

Related

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated: I want to make it seem as if the pixels on the display are moving from their current location and forming a turning circle or something like that.
This is what I have for now:
int c = 0;
int radius = 30;
int[] allPixels = removeBlackP();

void draw() {
    loadPixels();
    for (int alpha = 0; alpha < 360; alpha++)
    {
        float xf = 350 + radius * cos(alpha);
        float yf = 350 + radius * sin(alpha);
        int x = (int) xf;
        int y = (int) yf;
        if (radius > 200) { radius = 30; break; }
        if (c > allPixels.length) { c = 0; }
        pixels[y * 700 + x] = allPixels[c];
        updatePixels();
    }
    radius++;
    c++;
}
The function removeBlackP returns an array with all the pixels except the black ones.
This code works for me, with one issue: because the coordinates are truncated to int, some pixels inside the circle won't fill; I can live with that. I'm looking for something a bit more complex, as I explained.
Thanks!
Fill all pixels of the scanlines belonging to the circle. Using this approach, you will paint every place inside the circle. For every scanline, calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dy = y - center_y
    dx = Sqrt(radius * radius - dy * dy)
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
When you have found places for all pixels, you can build a correspondence between each pixel's initial position and its calculated one and move them step by step.
For example, if the initial coordinates relative to the center point for the k-th pixel are (x0, y0) and the final coordinates are (x1, y1), and you want to make M steps, moving the pixel along a spiral, calculate the intermediate coordinates:
calc values once:
    r0 = Sqrt(x0*x0 + y0*y0)   // Math.Hypot if available
    r1 = Sqrt(x1*x1 + y1*y1)
    fi0 = Math.Atan2(y0, x0)
    fi1 = Math.Atan2(y1, x1)
    if fi1 < fi0 then
        fi1 = fi1 + 2 * Pi

for i = 1; i <= M; i++
    x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
    y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
    shift by center coordinates
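A hedged C# rendering of the spiral idea above, for one pixel: given its start offset (x0, y0) and target offset (x1, y1) relative to the circle centre, it prints the M intermediate positions (the sample values are only for illustration):

using System;

static class SpiralDemo
{
    static void PrintSpiralSteps(double x0, double y0, double x1, double y1, int m)
    {
        double r0 = Math.Sqrt(x0 * x0 + y0 * y0);
        double r1 = Math.Sqrt(x1 * x1 + y1 * y1);
        double fi0 = Math.Atan2(y0, x0);
        double fi1 = Math.Atan2(y1, x1);
        if (fi1 < fi0) fi1 += 2 * Math.PI;   // always rotate the same way round

        for (int i = 1; i <= m; i++)
        {
            double t = (double)i / m;
            double r = r0 + t * (r1 - r0);       // interpolate radius
            double fi = fi0 + t * (fi1 - fi0);   // interpolate angle
            // add the centre coordinates back before writing into pixels[]
            Console.WriteLine("step {0}: ({1:F1}, {2:F1})", i, r * Math.Cos(fi), r * Math.Sin(fi));
        }
    }

    static void Main()
    {
        PrintSpiralSteps(100, 0, 0, 50, 5);
    }
}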
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, though no pixels are involved there.
If you do need to draw an ellipse and use pixels, you can make use of PGraphics which is similar to using a separate buffer/"layer" to draw into using Processing drawing commands but it also has pixels[] you can access.
Let's say you want to draw a low-res pixel circle: you can create a small PGraphics, disable smoothing, draw the circle into it, then render it at a higher resolution. The only catch is that these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;

void setup(){
    //disable sketch's aliasing
    noSmooth();
    buffer = createGraphics(25,25);
    buffer.beginDraw();
    //disable buffer's aliasing
    buffer.noSmooth();
    buffer.noFill();
    buffer.stroke(255);
    buffer.endDraw();
}

void draw(){
    background(255);
    //draw small circle
    float circleSize = map(sin(frameCount * .01),-1.0,1.0,0.0,20.0);
    buffer.beginDraw();
    buffer.background(0);
    buffer.ellipse(buffer.width / 2,buffer.height / 2, circleSize,circleSize);
    buffer.endDraw();
    //render small circle at higher resolution (blocky - no aliasing)
    image(buffer,0,0,width,height);
}
If you want to manually draw a circle using pixels[], you are on the right track with the polar-to-cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it focuses on drawing a radial gradient, you can find an example of drawing a circle (a lot of them, actually) using pixels in this answer.

Remove barrel distortion from an image in MATLAB [duplicate]

BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE (original photo link): the input image, with fisheye distortion to fix.
(2) DESTINATION (original photo link): the output, corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x + R*cos(theta), centre.y + R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n", p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
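For concreteness, a small C# sketch of that radial remap (the asker's stub is C++, so this is only to show the math; f = 417 pixels is just an example value, to be found by calibration or trial and error):

using System;

static class FisheyeRemapDemo
{
    // R_u = f * tan(2 * asin(R_d / (2 f))); only meaningful while R_d < 2 f.
    static double Undistort(double rd, double f)
    {
        return f * Math.Tan(2.0 * Math.Asin(rd / (2.0 * f)));
    }

    static void Main()
    {
        double f = 417.0;
        foreach (double rd in new[] { 50.0, 200.0, 400.0 })
            Console.WriteLine("R_d = {0} -> R_u = {1:F1}", rd, Undistort(rd, f));
    }
}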
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python

# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    if len(argv) < 10:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial-and-error: I don't fundamentally grasp why this code is working, explanations and improved accuracy appreciated!
from math import sqrt, atan

def dist(x,y):
    return sqrt(x*x+y*y)

def correct_fisheye(src_size,dest_size,dx,dy,factor):
    """ returns a tuple of source coordinates (sx,sy)
        (note: values can be out of range)"""
    # convert dx,dy to relative coordinates
    rx, ry = dx-(dest_size[0]/2), dy-(dest_size[1]/2)
    # calc theta
    r = dist(rx,ry)/(dist(src_size[0],src_size[1])/factor)
    if 0==r:
        theta = 1.0
    else:
        theta = atan(r)/r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2)+theta*rx, (src_size[1]/2)+theta*ry
    # done
    return (int(round(sx)),int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
(The example before/after images were linked here, but the links are now dead; the blog post above shows a comparable result.)
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
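As a hedged illustration of that polynomial model (the coefficients here are made up; in practice you fit A and B to your lens):

using System;

static class RadialPolyDemo
{
    // Rout = Rin * (1 + A*Rin^2 + B*Rin^4)
    static double Remap(double rIn, double a, double b)
    {
        double r2 = rIn * rIn;
        return rIn * (1 + a * r2 + b * r2 * r2);
    }

    static void Main()
    {
        const double A = 1e-6, B = 1e-12;   // made-up coefficients
        foreach (double r in new[] { 100.0, 300.0, 500.0 })
            Console.WriteLine("Rin = {0} -> Rout = {1:F1}", r, Remap(r, A, B));
    }
}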
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2 + 1) * (sqrt([Rout/f]^2 + 1) + 1) / 2 )
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import sqrt, asin, tan

def dist(x,y):
    return sqrt(x*x+y*y)

def fisheye_to_rectilinear(src_size,dest_size,sx,sy,crop_factor,zoom):
    """ returns a tuple of dest coordinates (dx,dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image"""
    # convert sx,sy to relative coordinates
    rx, ry = sx-(src_size[0]/2), sy-(src_size[1]/2)
    r = dist(rx,ry)
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0],src_size[1])*crop_factor/pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin ( r / ( 2 * f ) ) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2)+rx/r*nr, (dest_size[1]/2)+ry/r*nr
    # done
    return (int(round(dx)),int(round(dy)))

def fisheye_auto_zoom(src_size,dest_size,crop_factor):
    """ calculate zoom such that left edge of source image matches left edge of dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}
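Hypothetical usage of the class above (System.Drawing; the 417 focal length in pixels is the trial-and-error estimate from the explanation, and the file names are placeholders):

using System.Drawing;

class Program
{
    static void Main()
    {
        Bitmap distorted = new Bitmap("fisheye.jpg");
        Bitmap corrected = new Analyzer().RemoveFisheye(ref distorted, 417.0);
        corrected.Save("corrected.jpg");
    }
}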
I found this PDF file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should read vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Draw curves over features in the image that are supposed to be straight lines and store the contour points for each of those curves. Then solve for the fisheye matrix as a minimization problem: minimizing the curvature of those point sets yields the fisheye matrix. It works.
It can also be done manually by adjusting the fisheye matrix using trackbars; here is a fisheye GUI using OpenCV for manual calibration.

Path Tracing Shadowing Error

I really don't know what else to do to fix this problem. I have written a path tracer using explicit light sampling in C++ and I keep getting these weird, really black shadows which I know are wrong. I have done everything I can think of to fix it, but I still get them, even at higher sample counts. What am I doing wrong? Below is an image of the scene.
And the main Radiance code:
RGB Radiance(Ray PixRay, std::vector<Primitive*> sceneObjects, int depth, std::vector<AreaLight> AreaLights, unsigned short *XI, int E)
{
    int MaxDepth = 10;
    if(depth > MaxDepth) return RGB();

    double nearest_t = INFINITY;
    Primitive* nearestObject = NULL;
    for(int i = 0; i < sceneObjects.size(); i++)
    {
        double root = sceneObjects[i]->intersect(PixRay);
        if(root > 0)
        {
            if(root < nearest_t)
            {
                nearest_t = root;
                nearestObject = sceneObjects[i];
            }
        }
    }

    RGB EstimatedRadiance;
    if(nearestObject)
    {
        EstimatedRadiance = nearestObject->getEmission() * E;
        Point intersectPoint = nearestObject->intersectPoint(PixRay, nearest_t);
        Vector intersectNormal = nearestObject->surfacePointNormal(intersectPoint).Normalize();

        if(nearestObject->getBRDF().Type == 1)
        {
            for(int x = 0; x < AreaLights.size(); x++)
            {
                Point pointOnTriangle = RandomPointOnTriangle(AreaLights[x].shape, XI);
                Vector pointOnTriangleNormal = AreaLights[x].shape.surfacePointNormal(pointOnTriangle).Normalize();
                Vector LightDistance = (pointOnTriangle - intersectPoint).Normalize();
                //Geometric Term
                RGB Geometric_Term = GeometricTerm(intersectPoint, pointOnTriangle, sceneObjects);
                //Lambertian BRDF
                RGB LambertianBRDF = nearestObject->getColor() * (1. / M_PI);
                //Emitted Light Power
                RGB Emission = AreaLights[x].emission;
                double MagnitudeOfXandY = (pointOnTriangle - intersectPoint).Magnitude() * (pointOnTriangle - intersectPoint).Magnitude();
                RGB DirectLight = Emission * LambertianBRDF * Dot(intersectNormal, -LightDistance) *
                                  Dot(pointOnTriangleNormal, LightDistance) * (1. / MagnitudeOfXandY) * AreaLights[x].shape.Area() * Geometric_Term;
                EstimatedRadiance = EstimatedRadiance + DirectLight;
            }
            //
            Vector diffDir = CosWeightedRandHemiDirection(intersectNormal, XI);
            Ray diffRay = Ray(intersectPoint, diffDir);
            EstimatedRadiance = EstimatedRadiance + ( Radiance(diffRay, sceneObjects, depth+1, AreaLights, XI, 0) * nearestObject->getColor() * (1. / M_PI) * M_PI );
        }
        //Mirror
        else if(nearestObject->getBRDF().Type == 2)
        {
            Vector reflDir = PixRay.d - intersectNormal * 2 * Dot(intersectNormal, PixRay.d);
            Ray reflRay = Ray(intersectPoint, reflDir);
            return nearestObject->getColor() * Radiance(reflRay, sceneObjects, depth+1, AreaLights, XI, 0);
        }
    }
    return EstimatedRadiance;
}
I haven't debugged your code, so there may be any number of bugs of course, but I can give you some tips: First, go look at SmallPT, and see what it does that you don't. It's tiny but still quite easy to read.
From the look of it, it seems there are issues with either the sampling and/or gamma correction. The easiest one is gamma: when converting RGB intensity in the range 0..1 to RGB in the range 0..255, remember to always gamma correct. Use a gamma of 2.2
R = r^(1.0/gamma)
G = g^(1.0/gamma)
B = b^(1.0/gamma)
Having the wrong gamma will make any path traced image look bad.
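A minimal sketch of that gamma step (the clamp just guards against radiance values outside 0..1):

using System;

static class GammaDemo
{
    const double Gamma = 2.2;

    static byte ToDisplay(double linear)
    {
        double clamped = Math.Max(0.0, Math.Min(1.0, linear));
        return (byte)(Math.Pow(clamped, 1.0 / Gamma) * 255.0 + 0.5);
    }

    static void Main()
    {
        Console.WriteLine(ToDisplay(0.5));   // ~186 rather than the linear 128
        Console.WriteLine(ToDisplay(0.18));  // mid grey comes out around 117
    }
}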
Second: sampling. It's not obvious from the code how the sampling is weighted. I'm only familiar with path tracing using Russian roulette sampling. With RR the radiance basically works like so:
if (depth > MaxDepth)
    return RGB();

RGB color = mat.Emission;

// Russian roulette:
float survival = 1.0f;
float pContinue = material.Albedo();
survival = 1.0f / pContinue;
if (Rand.Next() > pContinue)
    return color;

color += DirectIllumination(sceneIntersection);
color += Radiance(sceneIntersection, depth + 1) * survival;
RR is basically a way of terminating rays at random, but still maintaining an unbiased estimate of the true radiance. Since it adds a weight to the indirect term, and the shadow and the bottoms of the spheres are only indirectly lit, I'd suspect that has something to do with it (if it isn't just the gamma).

Fast algorithm for image distortion

I am working on a tool which distorts images; the purpose of the distortion is to project images onto a spherical screen. The desired output is like the following image.
The code I use is as follows: for every Point(x, y) in the destination area, I calculate the corresponding pixel (sourceX, sourceY) in the original image to retrieve from.
But this approach is painfully slow. In my test, processing sunset.jpg (800×600) takes more than 1500 ms, and even if I remove the mathematical/trigonometric calculations, calling cvGet2D and cvSet2D alone takes more than 1200 ms.
Is there a better way to do this? I am using Emgu CV (a .NET wrapper library for OpenCV), but examples in other languages are also OK.
private static void DistortSingleImage()
{
    System.Diagnostics.Stopwatch stopWatch = System.Diagnostics.Stopwatch.StartNew();
    using (Image<Bgr, Byte> origImage = new Image<Bgr, Byte>("sunset.jpg"))
    {
        int frameH = origImage.Height;
        using (Image<Bgr, Byte> distortImage = new Image<Bgr, Byte>(2 * frameH, 2 * frameH))
        {
            MCvScalar pixel;
            for (int x = 0; x < 2 * frameH; x++)
            {
                for (int y = 0; y < 2 * frameH; y++)
                {
                    if (x == frameH && y == frameH) continue;

                    int x1 = x - frameH;
                    int y1 = y - frameH;
                    if (x1 * x1 + y1 * y1 < frameH * frameH)
                    {
                        double radius = Math.Sqrt(x1 * x1 + y1 * y1);
                        double theta = Math.Acos(x1 / radius);
                        int sourceX = (int)(theta * (origImage.Width - 1) / Math.PI);
                        int sourceY = (int)radius;
                        pixel = CvInvoke.cvGet2D(origImage.Ptr, sourceY, sourceX);
                        CvInvoke.cvSet2D(distortImage, y, x, pixel);
                    }
                }
            }
            distortImage.Save("Distort.jpg");
        }
        Console.WriteLine(stopWatch.ElapsedMilliseconds);
    }
}
From my personal experience (I was doing some stereoscopic vision work), the best way to talk to OpenCV is through your own wrapper: put your method in C++ and call it from C#. That gives you a single call into native, faster code, and because under the hood Emgu keeps OpenCV data, it's also possible to create an image with Emgu, process it natively, and enjoy the processed image in C# again.
The get/set methods look like GDI's GetPixel/SetPixel, and according to the documentation they are the "slow but safe way".
If you want to stay with Emgu only, the documentation says that if you want to iterate over pixels, you should access the .Data property:
The safe (slow) way
Suppose you are working on an Image. You can obtain the pixel on the y-th row and x-th column by calling
Bgr color = img[y, x];
Setting the pixel on the y-th row and x-th column is also simple
img[y,x] = color;
The fast way
The Image pixel values are stored in the Data property, a 3D array. Use this property if you need to iterate through the pixel values of the image.
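A hedged sketch of that fast path, assuming Emgu.CV: for an Image<Bgr, byte>, Data is a byte[row, column, channel] array with channels in B, G, R order; the file names are placeholders.

using Emgu.CV;
using Emgu.CV.Structure;

static class DataDemo
{
    // Invert an image in place by walking the Data array directly.
    static void Invert(Image<Bgr, byte> img)
    {
        byte[,,] data = img.Data;
        for (int y = 0; y < img.Height; y++)
            for (int x = 0; x < img.Width; x++)
                for (int c = 0; c < 3; c++)
                    data[y, x, c] = (byte)(255 - data[y, x, c]);
    }

    static void Main()
    {
        using (var img = new Image<Bgr, byte>("sunset.jpg"))
        {
            Invert(img);
            img.Save("inverted.jpg");
        }
    }
}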

What are some algorithms that will allow me to simulate planetary physics?

I'm interested in doing a "Solar System" simulator that will allow me to simulate the rotational and gravitational forces of planets and stars.
I'd like to be able to, say, simulate our solar system and run it at varying speeds (i.e., watch Earth and the other planets orbit the Sun across days, years, etc.). I'd like to be able to add planets and change a planet's mass, etc., to see how it would affect the system.
Does anyone have any resources that would point me in the right direction for writing this sort of simulator?
Are there any existing physics engines which are designed for this purpose?
It's everything here and in general, everything that Jean Meeus has written.
You need to know and understand Newton's Law of Universal Gravitation and Kepler's Laws of Planetary Motion. These two are simple and I'm sure you've heard about them, if not studied them in high school. Finally, if you want your simulator to be as accurate as possible, you should familiarize yourself with the n-Body problem.
You should start out simple. Try making a Sun object and an Earth object that revolves around it. That should give you a very solid start and it's fairly easy to expand from there. A planet object would look something like:
class Planet {
    float x;
    float y;
    float z; // If you want to work in 3D
    double velocity;
    int mass;
}
Just remember that F = ma and the rest is just boring math :P
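For a feel of the magnitudes involved (not part of the original answer, just Newton's formula with real SI values):

using System;

static class GravityDemo
{
    const double G = 6.674e-11;        // gravitational constant, m^3 kg^-1 s^-2

    static void Main()
    {
        double sunMass = 1.989e30;     // kg
        double earthMass = 5.972e24;   // kg
        double distance = 1.496e11;    // m, roughly 1 AU

        double force = G * sunMass * earthMass / (distance * distance);
        double accel = force / earthMass;          // F = ma, so a = F / m

        Console.WriteLine("F = {0:E3} N", force);       // about 3.5e22 N
        Console.WriteLine("a = {0:E3} m/s^2", accel);   // about 5.9e-3 m/s^2
    }
}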
This is a great tutorial on N-body problems in general.
http://www.artcompsci.org/#msa
It's written using Ruby but is pretty easy to map into other languages. It covers some of the common integration approaches: Forward Euler, Leapfrog, and Hermite.
You might want to take a look at Celestia, a free space simulator. I believe that you can use it to create fictitious solar systems and it is open source.
All you need to implement is the proper differential equation (Kepler's law) and a Runge-Kutta integrator. (At least this worked for me, but there are probably better methods.)
There are loads of such simulators online.
Here is a simple one implemented in 500 lines of C code (the motion algorithm itself is much less):
http://astro.berkeley.edu/~dperley/programs/ssms.html
Also check this:
http://en.wikipedia.org/wiki/Kepler_problem
http://en.wikipedia.org/wiki/Two-body_problem
http://en.wikipedia.org/wiki/N-body_problem
In physics this is known as the N-Body Problem. It is famous because in general you cannot solve it by hand for a system with more than two bodies. Luckily, you can get approximate solutions with a computer very easily.
A nice paper on writing this code from the ground up can be found here.
However, I feel a word of warning is important here. You may not get the results you expect. If you want to see how:
the mass of a planet affects its orbital speed around the Sun, cool. You will see that.
the different planets interact with each other, you will be bummed.
The problem is this.
Yeah, modern astronomers are concerned with how Saturn's mass changes the Earth's orbit around the Sun. But this is a VERY minor effect. If you are going to plot the path of a planet around the Sun, it will hardly matter that there are other planets in the Solar System. The Sun is so big it will drown out all other gravity. The only exceptions to this are:
If your planets have very elliptical orbits. This will cause the planets to potentially get closer together, so they interact more.
If your planets are almost the exact same distance from the Sun. They will interact more.
If you make your planets so comically large they compete with the Sun for gravity in the outer Solar System.
To be clear, yes, you will be able to calculate some interactions between planets. But no, these interactions will not be significant to the naked eye if you create a realistic Solar System.
Try it though, and find out!
Check out nMod, a n-body modeling toolkit written in C++ and using OpenGL. It has a pretty well populated solar system model that comes with it and it should be easy to modify. Also, he has a pretty good wiki about n-body simulation in general. The same guy who created this is also making a new program called Moody, but it doesn't appear to be as far along.
In addition, if you are going to do n-body simulations with more than just a few objects, you should really look at the fast multipole method (also called the fast multipole algorithm). It can reduce the number of computations from O(N^2) to O(N) to really speed up your simulation. It is also one of the top ten most successful algorithms of the 20th century, according to the author of this article.
Algorithms to simulate planetary physics: here is an implementation of the Kepler parts from my Android app. The main parts are below, and you can download the whole source from my web site: http://www.barrythomas.co.uk/keppler.html
This is my method for drawing the planet at the 'next' position in the orbit. Think of the steps like stepping round a circle, one degree at a time, on a circle which has the same period as the planet you are trying to track. Outside of this method I use a global double as the step counter, called dTime, which contains a number of degrees of rotation.
The key parameters passed to the method are dEccentricity, dScalar (a scaling factor so the orbit fits on the display), dYear (the duration of the orbit in Earth years) and, to orient the orbit so that perihelion sits at the right place on the dial, so to speak, dLongPeri - the longitude of perihelion.
drawPlanet:
public void drawPlanet (double dEccentricity, double dScalar, double dYear, Canvas canvas, Paint paint,
                        String sName, Bitmap bmp, double dLongPeri)
{
    double dE, dr, dv, dSatX, dSatY, dSatXCorrected, dSatYCorrected;
    float fX, fY;
    int iSunXOffset = getWidth() / 2;
    int iSunYOffset = getHeight() / 2;

    // get the value of E from the angle travelled in this 'tick'
    dE = getE (dTime * (1 / dYear), dEccentricity);

    // get r: the length of 'radius' vector
    dr = getRfromE (dE, dEccentricity, dScalar);

    // calculate v - the true anomaly
    dv = 2 * Math.atan (
        Math.sqrt((1 + dEccentricity) / (1 - dEccentricity))
        *
        Math.tan(dE / 2)
    );

    // get X and Y coords based on the origin
    dSatX = dr / Math.sin(Math.PI / 2) * Math.sin(dv);
    dSatY = Math.sin((Math.PI / 2) - dv) * (dSatX / Math.sin(dv));

    // now correct for Longitude of Perihelion for this planet
    dSatXCorrected = dSatX * (float)Math.cos (Math.toRadians(dLongPeri)) -
                     dSatY * (float)Math.sin(Math.toRadians(dLongPeri));
    dSatYCorrected = dSatX * (float)Math.sin (Math.toRadians(dLongPeri)) +
                     dSatY * (float)Math.cos(Math.toRadians(dLongPeri));

    // offset the origin to nearer the centre of the display
    fX = (float)dSatXCorrected + (float)iSunXOffset;
    fY = (float)dSatYCorrected + (float)iSunYOffset;

    if (bDrawOrbits)
    {
        // draw the path of the orbit travelled
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);
        paint.setAntiAlias(true);

        // get the size of the rect which encloses the elliptical orbit
        dE = getE (0.0, dEccentricity);
        dr = getRfromE (dE, dEccentricity, dScalar);
        rectOval.bottom = (float)dr;
        dE = getE (180.0, dEccentricity);
        dr = getRfromE (dE, dEccentricity, dScalar);
        rectOval.top = (float)(0 - dr);

        // calculate minor axis from major axis and eccentricity
        // http://www.1728.org/ellipse.htm
        double dMajor = rectOval.bottom - rectOval.top;
        double dMinor = Math.sqrt(1 - (dEccentricity * dEccentricity)) * dMajor;
        rectOval.left = 0 - (float)(dMinor / 2);
        rectOval.right = (float)(dMinor / 2);
        rectOval.left += (float)iSunXOffset;
        rectOval.right += (float)iSunXOffset;
        rectOval.top += (float)iSunYOffset;
        rectOval.bottom += (float)iSunYOffset;

        // now correct for Longitude of Perihelion for this orbit's path
        canvas.save();
        canvas.rotate((float)dLongPeri, (float)iSunXOffset, (float)iSunYOffset);
        canvas.drawOval(rectOval, paint);
        canvas.restore();
    }

    int iBitmapHeight = bmp.getHeight();
    canvas.drawBitmap(bmp, fX - (iBitmapHeight / 2), fY - (iBitmapHeight / 2), null);

    // draw planet label
    paint.setColor(Color.WHITE);
    paint.setTextSize(30);
    canvas.drawText(sName, fX + 20, fY - 20, paint);
}
The method above calls two further methods which provide values of E (the eccentric anomaly) and r, the length of the vector at the end of which the planet is found.
getE:
public double getE (double dTime, double dEccentricity)
{
    // we are passed the degree count in degrees (duh)
    // and the eccentricity value
    // the method returns E
    double dM1, dD, dE0, dE = 0; // return value E = the eccentric anomaly
    double dM;                   // local value of M in radians

    dM = Math.toRadians (dTime);
    int iSign = 1;
    if (dM > 0) iSign = 1; else iSign = -1;
    dM = Math.abs(dM) / (2 * Math.PI);            // Meeus, p 206, line 110
    dM = (dM - (long)dM) * (2 * Math.PI) * iSign; // line 120
    if (dM < 0)
        dM = dM + (2 * Math.PI);                  // line 130
    iSign = 1;
    if (dM > Math.PI) iSign = -1;                 // line 150
    if (dM > Math.PI) dM = 2 * Math.PI - dM;      // line 160

    dE0 = Math.PI / 2;                            // line 170
    dD = Math.PI / 4;                             // line 170
    for (int i = 0; i < 33; i++)                  // line 180
    {
        dM1 = dE0 - dEccentricity * Math.sin(dE0); // line 190
        dE0 = dE0 + dD * Math.signum((float)(dM - dM1));
        dD = dD / 2;
    }
    dE = dE0 * iSign;
    return dE;
}
getRfromE:
public double getRfromE (double dE, double dEccentricty, double dScalar)
{
    return Math.min(getWidth(), getHeight()) / 2 * dScalar * (1 - (dEccentricty * Math.cos(dE)));
}
It looks very hard and seems to require strong knowledge of physics, but in fact it is very easy; you only need to know two formulas and have a basic understanding of vectors:
The attractive (gravitational) force between planet1 and planet2 with masses m1 and m2 and distance d between them: Fg = G*m1*m2/d^2, and Fg = m*a. G is a constant; for a toy simulation, pick it by trial so that the acceleration "a" comes out neither too small nor too big, roughly 0.01 to 0.1.
If you have the total force vector acting on the current planet at that instant, you can find the instantaneous acceleration a = (total force)/(mass of current planet). And if you have the current acceleration, velocity and position, you can find the new velocity and new position.
If you want it to look real, you can use the following super-easy algorithm (pseudocode):
int n; // # of planets
Vector2D planetPosition[n];
Vector2D planetVelocity[n]; // initially set by (0, 0)
double planetMass[n];

while (true){
    for (int i = 0; i < n; i++){
        Vector2D totalForce = (0, 0); // acting on planet i
        for (int j = 0; j < n; j++){
            if (j == i)
                continue; // force between some planet and itself is 0
            Fg = G * planetMass[i] * planetMass[j] / distance(i, j) ^ 2;
            // Fg is a scalar value representing magnitude of force acting
            // between planet[i] and planet[j]
            // vectorFg is a vector form of force Fg
            // (planetPosition[j] - planetPosition[i]) is a vector value
            // (planetPosition[j]-planetPosition[i])/(planetPosition[j]-planetPosition[i]).magnitude() is a
            // unit vector with direction from planet[i] to planet[j]
            vectorFg = Fg * (planetPosition[j] - planetPosition[i]) /
                       (planetPosition[j] - planetPosition[i]).magnitude();
            totalForce += vectorFg;
        }
        Vector2D acceleration = totalForce / planetMass[i];
        planetVelocity[i] += acceleration;
    }
    // it is important to separate the two for's; if you want to know why, ask in the comments
    for (int i = 0; i < n; i++)
        planetPosition[i] += planetVelocity[i];
    sleep 17 ms;
    draw planets;
}
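A hedged, runnable C# rendering of the pseudocode above with an explicit time step added (the pseudocode effectively uses dt = 1); the bodies, masses and G value are toy numbers chosen so the second body ends up on a roughly circular orbit:

using System;

struct Vec2
{
    public double X, Y;
    public Vec2(double x, double y) { X = x; Y = y; }
    public static Vec2 operator +(Vec2 a, Vec2 b) => new Vec2(a.X + b.X, a.Y + b.Y);
    public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Y - b.Y);
    public static Vec2 operator *(Vec2 a, double s) => new Vec2(a.X * s, a.Y * s);
    public double Magnitude() => Math.Sqrt(X * X + Y * Y);
}

static class NBodyDemo
{
    const double G = 1.0;   // toy units

    static void Main()
    {
        var pos  = new[] { new Vec2(0, 0), new Vec2(100, 0) };
        var vel  = new[] { new Vec2(0, 0), new Vec2(0, 1) };   // v = sqrt(G*M/r) gives a circular orbit
        var mass = new[] { 100.0, 1.0 };
        double dt = 0.1;

        for (int step = 0; step < 10000; step++)
        {
            var acc = new Vec2[pos.Length];

            // First pass: accumulate forces using the *old* positions of every body.
            for (int i = 0; i < pos.Length; i++)
            {
                var total = new Vec2(0, 0);
                for (int j = 0; j < pos.Length; j++)
                {
                    if (j == i) continue;
                    Vec2 d = pos[j] - pos[i];
                    double dist = d.Magnitude();
                    double fg = G * mass[i] * mass[j] / (dist * dist);
                    total += d * (fg / dist);       // unit direction times force magnitude
                }
                acc[i] = total * (1.0 / mass[i]);   // a = F / m
            }

            // Second pass: only now update velocities and positions.
            for (int i = 0; i < pos.Length; i++)
            {
                vel[i] += acc[i] * dt;
                pos[i] += vel[i] * dt;
            }
        }
        Console.WriteLine("body 1 ends at ({0:F1}, {1:F1})", pos[1].X, pos[1].Y);
    }
}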
If you're simulating physics, I highly recommend Box2D.
It's a great physics simulator, and will really cut down on the amount of boilerplate you'll need for physics simulation.
Fundamentals of Astrodynamics by Bate, Mueller, and White is still required reading at my alma mater for undergrad aerospace engineers. This tends to cover the orbital mechanics of bodies in Earth orbit... but that is likely the level of physics and math you will need to start your understanding.
+1 for @Stefano Borini's suggestion of "everything that Jean Meeus has written."
Dear friend, here is graphics code that simulates the solar system.
Kindly look through it.
/*Arpana*/
#include<stdio.h>
#include<graphics.h>
#include<conio.h>
#include<math.h>
#include<dos.h>
void main()
{
int i=0,j=260,k=30,l=150,m=90;
int n=230,o=10,p=280,q=220;
float pi=3.1416,a,b,c,d,e,f,g,h,z;
int gd=DETECT,gm;
initgraph(&gd,&gm,"c:\\tc\\bgi");
outtextxy(0,10,"SOLAR SYSTEM-Appu");
outtextxy(500,10,"press any key...");
circle(320,240,20); /* sun */
setfillstyle(1,4);
floodfill(320,240,15);
outtextxy(310,237,"sun");
circle(260,240,8);
setfillstyle(1,2);
floodfill(258,240,15);
floodfill(262,240,15);
outtextxy(240,220,"mercury");
circle(320,300,12);
setfillstyle(1,1);
floodfill(320,298,15);
floodfill(320,302,15);
outtextxy(335,300,"venus");
circle(320,160,10);
setfillstyle(1,5);
floodfill(320,161,15);
floodfill(320,159,15);
outtextxy(332,150, "earth");
circle(453,300,11);
setfillstyle(1,6);
floodfill(445,300,15);
floodfill(448,309,15);
outtextxy(458,280,"mars");
circle(520,240,14);
setfillstyle(1,7);
floodfill(519,240,15);
floodfill(521,240,15);
outtextxy(500,257,"jupiter");
circle(169,122,12);
setfillstyle(1,12);
floodfill(159,125,15);
floodfill(175,125,15);
outtextxy(130,137,"saturn");
circle(320,420,9);
setfillstyle(1,13);
floodfill(320,417,15);
floodfill(320,423,15);
outtextxy(310,400,"urenus");
circle(40,240,9);
setfillstyle(1,10);
floodfill(38,240,15);
floodfill(42,240,15);
outtextxy(25,220,"neptune");
circle(150,420,7);
setfillstyle(1,14);
floodfill(150,419,15);
floodfill(149,422,15);
outtextxy(120,430,"pluto");
getch();
while(!kbhit()) /*animation*/
{
a=(pi/180)*i;
b=(pi/180)*j;
c=(pi/180)*k;
d=(pi/180)*l;
e=(pi/180)*m;
f=(pi/180)*n;
g=(pi/180)*o;
h=(pi/180)*p;
z=(pi/180)*q;
cleardevice();
circle(320,240,20);
setfillstyle(1,4);
floodfill(320,240,15);
outtextxy(310,237,"sun");
circle(320+60*sin(a),240-35*cos(a),8);
setfillstyle(1,2);
pieslice(320+60*sin(a),240-35*cos(a),0,360,8);
circle(320+100*sin(b),240-60*cos(b),12);
setfillstyle(1,1);
pieslice(320+100*sin(b),240-60*cos(b),0,360,12);
circle(320+130*sin(c),240-80*cos(c),10);
setfillstyle(1,5);
pieslice(320+130*sin(c),240-80*cos(c),0,360,10);
circle(320+170*sin(d),240-100*cos(d),11);
setfillstyle(1,6);
pieslice(320+170*sin(d),240-100*cos(d),0,360,11);
circle(320+200*sin(e),240-130*cos(e),14);
setfillstyle(1,7);
pieslice(320+200*sin(e),240-130*cos(e),0,360,14);
circle(320+230*sin(f),240-155*cos(f),12);
setfillstyle(1,12);
pieslice(320+230*sin(f),240-155*cos(f),0,360,12);
circle(320+260*sin(g),240-180*cos(g),9);
setfillstyle(1,13);
pieslice(320+260*sin(g),240-180*cos(g),0,360,9);
circle(320+280*sin(h),240-200*cos(h),9);
setfillstyle(1,10);
pieslice(320+280*sin(h),240-200*cos(h),0,360,9);
circle(320+300*sin(z),240-220*cos(z),7);
setfillstyle(1,14);
pieslice(320+300*sin(z),240-220*cos(z),0,360,7);
delay(20);
i++;
j++;
k++;
l++;
m++;
n++;
o++;
p++;
q+=2;
}
getch();
}
