How does depth work in a frustum environment? - opengl-es

I need some help understanding the basics of a frustum transformation. Mainly, how depth works.
The following uses a viewport of 768x1024, set with glViewport(0, 0, 768, 1024). Using an orthogonal projection and a 768x768 square (z defaults to 0) with no translation or scaling, the square easily fills the width of the frame:
Now when I change the projection to a frustum and mess with the z translation, the square scales appropriately due to the perspective changes.
Here is the same square in such an environment:
I can play with this z translation, as well as the near and far parameters of the frustum matrix, and make the square change its apparent onscreen size accordingly. Fine.
But what I cannot figure out is the exact relationship between its onscreen size and these depth parameters.
For example, suppose I want to use a frustum but have the square fill the frame width, as in my first example image above. How to achieve this?
I would think that if the z translation matched the near plane, then you'd essentially have a square "right in front of the camera", filling the frame. But I cannot figure out a way to achieve this. If my near is 1 and my z translation is -1, then the square should be sitting right on the near plane itself (right?!), filling the width of the frame (where the frustum's left and right planes are the same as in the orthogonal projection).
I could paste a bunch of code here to show what I'm doing, but I think the concept here is clear. I just want to figure out where the near plane actually is and how to situate something on it, as this will help me understand how the frustum is working.
Okay here is the relevant code I'm using, where width=768 and height=1024.
My vertex shader is simply gl_Position = Projection * Modelview * Position;
My projection matrix (frustum) is thus:
Frustum(-width/2, width/2, -height/2, height/2, 1,10);
This function is:
static Matrix4<T> Frustum(T left, T right, T bottom, T top, T near, T far)
{
    T a = 2 * near / (right - left);
    T b = 2 * near / (top - bottom);
    T c = (right + left) / (right - left);
    T d = (top + bottom) / (top - bottom);
    T e = - (far + near) / (far - near);
    T f = -2 * far * near / (far - near);
    Matrix4 m;
    m.x.x = a; m.x.y = 0; m.x.z = 0; m.x.w = 0;
    m.y.x = 0; m.y.y = b; m.y.z = 0; m.y.w = 0;
    m.z.x = c; m.z.y = d; m.z.z = e; m.z.w = -1;
    m.w.x = 0; m.w.y = 0; m.w.z = f; m.w.w = 1;
    return m;
}
My square is just two 2D triangles with a default z=0, spanning x from -768/2 on the left to 768/2 on the right. The square is clearly working properly, as my first image above shows, using the orthogonal projection. (Though I switched to the frustum projection for this question.)
To draw the square, I translate the Modelview with:
Translate(0, 0, -1);
Using:
static Matrix4<T> Translate(T x, T y, T z)
{
    Matrix4 m;
    m.x.x = 1; m.x.y = 0; m.x.z = 0; m.x.w = 0;
    m.y.x = 0; m.y.y = 1; m.y.z = 0; m.y.w = 0;
    m.z.x = 0; m.z.y = 0; m.z.z = 1; m.z.w = 0;
    m.w.x = x; m.w.y = y; m.w.z = z; m.w.w = 1;
    return m;
}
As you can see, the translation should put the square on the near plane, yet it looks like this:
If I instead translate by -1.01 just to be sure I avoid near clipping, the result is the same. If I do not translate, thus z=0, the square does not appear, as you'd expect, since it would be behind the camera.

In your frustum matrix, m.w.w should be 0, not 1. This will fix your problem.
But, the mistake isn't your fault. It's my fault! I'm actually the one who wrote that code in the first place, and unfortunately it has proliferated. It's an errata in my book (iPhone 3D Programming), which is where it first appeared.
Feeling very guilty about this!
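For anyone who copied that code, here is the corrected last row, a minimal sketch assuming the same column-major Matrix4 layout as above:

m.z.x = c; m.z.y = d; m.z.z = e; m.z.w = -1;
m.w.x = 0; m.w.y = 0; m.w.z = f; m.w.w = 0; // was 1; the w column of a perspective matrix ends in 0

With m.w.w = 1, clip-space w comes out as -z + 1 instead of -z, so the perspective divide is too large and a square sitting on the near plane at z = -1 is rendered at half size, which matches the screenshot above.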

If my near is 1 and my z translation is -1, then the square should be sitting right on the near plane itself (right?!)
Yes
, filling the width of the frame (where the frustum's left and right planes are the same as the orthogonal projection).
Not necessarily. The near plane has the extents given by the left, right, bottom and top parameters of glFrustum. A rectangle extending exactly to those bounds will snugly fit the viewport when placed at the near plane distance.
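To make the relationship concrete, here is a small sketch using the Frustum(-width/2, width/2, -height/2, height/2, 1, 10) parameters from the question (the function name is just for illustration). For a symmetric frustum the c term is 0, so clip.x = a * x and clip.w = -z, giving ndc.x = a * x / -z:

// Normalized device x of a view-space point at depth z (z < 0, in front of the camera).
// Returns +/-1 exactly at the viewport edges.
float ndcX(float x, float z, float nearZ, float left, float right)
{
    float a = 2 * nearZ / (right - left);
    return a * x / -z;
}

With near = 1 and x = 384, ndcX(384, -1, 1, -384, 384) is exactly 1, i.e. a 768-wide square at z = -1 fills the frame width, just as in the orthogonal case (once the matrix bug above is fixed).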

Related

Why is the floor in my raycaster seemingly "misaligned"?

I have been working on a doom/wolfenstein style raycaster for a while now. I implemented the "floor raycasting" to the best of my ability, roughly following a well known raycaster tutorial. It almost works, but the floor tiles seem slightly bigger than they should be, and they don't "stick", as in they don't seem to align properly and they slide slightly as the player moves/rotates. Additionally, the effect seems worsened as the FOV is increased. I cannot figure out where my floor casting is going wrong, so any help is appreciated.
Here is a (crappy) gif of the glitch happening
Here is the most relevant part of my code:
void render(PVector pos, float dir) {
  ArrayList<FloatList> dists = new ArrayList<FloatList>();
  for (int i = 0; i < numColumns; i++) {
    float curDir = atan((i - (numColumns/2.0)) / projectionDistance) + dir;
    // FloatList because it returns a few pieces of data
    FloatList curHit = cast(pos, curDir);
    // normalize distances with cos
    curHit.set(0, curHit.get(0) * cos(curDir - dir));
    dists.add(curHit);
  }
  screen.beginDraw();
  screen.background(50);
  screen.fill(0, 30, 100);
  screen.noStroke();
  screen.rect(0, 0, screen.width, screen.height/2);
  screen.loadPixels();
  PImage floor = textures.get(4);
  // DRAW FLOOR
  for (int y = screen.height/2 + 1; y < screen.height; y++) {
    float rowDistance = 0.5 * projectionDistance / ((float)y - (float)rY/2);
    // leftmost and rightmost (on screen) floor positions
    PVector left = PVector.fromAngle(dir - fov/2).mult(rowDistance).add(p.pos);
    PVector right = PVector.fromAngle(dir + fov/2).mult(rowDistance).add(p.pos);
    // current position on the floor
    PVector curPos = left.copy();
    PVector stepVec = right.sub(left).div(screen.width);
    float b = constrain(map(rowDistance, 0, maxDist, 1, 0), 0, 1);
    for (int x = 0; x < screen.width; x++) {
      color sample = floor.get(floor((curPos.x - floor(curPos.x)) * floor.width), floor((curPos.y - floor(curPos.y)) * floor.height));
      screen.pixels[x + y*screen.width] = color(red(sample) * b, green(sample) * b, blue(sample) * b);
      curPos.add(stepVec);
    }
  }
  screen.updatePixels(); // update the buffer's pixels[], not the sketch's
}
If anyone wants to look at the full code or has any questions, ask away.
Ok, I seem to have found a "solution". I will be the first to admit that I do not understand why it works, but it does work. As per my comment above, I noticed that my rowDistance variable was off, which caused all of the problems. In desperation, I changed the FOV and then hardcoded the rowDistance until things looked right. I plotted the ratio between the projectionDistance and the numerator of the rowDistance. I noticed that it neatly conformed to a scaled cos function. So after some simplification, here is the formula I came up with:
float rowDistance = (rX / (4*sin(fov/2))) / ((float)y - (float)rY/2);
where rX is the width of the screen in pixels.
If anyone has an intuitive explanation as to why this formula makes sense, PLEASE enlighten me. I hope this helps anyone else who may have this problem.
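One plausible explanation, for what it's worth: left and right are unit vectors scaled by rowDistance, so the row endpoints sit at Euclidean distance rowDistance from the camera, which puts them at perpendicular depth rowDistance * cos(fov/2) rather than rowDistance. The planar formula wants the perpendicular depth to be 0.5 * projectionDistance / (y - rY/2), with projectionDistance = (rX/2) / tan(fov/2). Dividing by cos(fov/2) to compensate gives 0.5 * (rX/2) / tan(fov/2) / cos(fov/2) = rX / (4*sin(fov/2)), which is exactly the formula found empirically. An equivalent fix would be to keep the original rowDistance and scale the left/right direction vectors by rowDistance / cos(fov/2) instead.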

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self education. In my camera class I am using glm::lookAt() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of the plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for a plane perpendicular to my z basis vector.
So when testing a point such as the (0, 0, -5) world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;

Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
    frustumLeftPlane {setPlane(0, 1)},
    frustumRighPlane {setPlane(0, 0)},
    frustumBottomPlane {setPlane(1, 1)},
    frustumTopPlane {setPlane(1, 0)},
    frstumNearPlane {setPlane(2, 0)},
    frustumFarPlane {setPlane(2, 1)},
The frustum objects are based off the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
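As a sketch of that convention in glm (hypothetical helper, matching the dist = n * x - d convention above):

#include <glm/glm.hpp>

// Signed point-to-plane distance with the plane packed into one vec4:
// xyz = unit normal n, w = offset d.
float signedDistance(const glm::vec4& plane, const glm::vec3& p)
{
    return glm::dot(plane, glm::vec4(p, -1.0f)); // n . p - d
}

A point is then inside or outside a given plane depending on the sign, so the whole clip test reduces to six dot products per point.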
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.

Processing: Efficiently create uniform grid

I'm trying to create a grid of an image (in the way one would tile a background). Here's what I've been using:
PImage bgtile;
PGraphics bg;
int tilesize = 50;

void setup() {
  int t = millis();
  fullScreen(P2D);
  background(0);
  bgtile = loadImage("bgtile.png");
  int bgw = ceil( ((float) width) / tilesize) + 1;
  int bgh = ceil( ((float) height) / tilesize) + 1;
  bg = createGraphics(bgw*tilesize, bgh*tilesize);
  bg.beginDraw();
  for(int i = 0; i < bgw; i++){
    for(int j = 0; j < bgh; j++){
      bg.image(bgtile, i*tilesize, j*tilesize, tilesize, tilesize);
    }
  }
  bg.endDraw();
  print(millis() - t);
}
The timing code says that this takes about a quarter of a second, but by my count there's a full second once the window opens before anything shows up on screen (which should happen as soon as draw is first run). Is there a faster way to get this same effect? (I want to avoid rendering bgtile hundreds of times in the draw loop for obvious reasons)
One way could be to make use of the GPU and let OpenGL repeat a texture for you.
Processing makes it fairly easy to repeat a texture via textureWrap(REPEAT)
Instead of drawing an image you'd make your own quad shape, and instead of calling vertex(x, y) you'd call vertex(x, y, u, v), passing texture coordinates. The simple idea is that x,y control the geometry on screen while u,v control how the texture is applied to the geometry.
Another thing you can control is textureMode(), which allows you to control how you specify the texture coordinates (U, V):
IMAGE mode is the default: you use pixel coordinates (based on the dimensions of the texture)
NORMAL mode uses values between 0.0 and 1.0 (also known as normalised values) where 1.0 means the maximum the texture can go (e.g. image width for U or image height for V) and you don't need to worry about knowing the texture image dimensions
Here's a basic example based on the textureMode() example above:
PImage img;

void setup() {
  fullScreen(P2D);
  noStroke();
  img = loadImage("https://processing.org/examples/moonwalk.jpg");
  // texture mode can be IMAGE (pixel dimensions) or NORMAL (0.0 to 1.0)
  // normal means 1.0 is full width (for U) or height (for V) without having to know the image resolution
  textureMode(NORMAL);
  // this is what will handle tiling for you
  textureWrap(REPEAT);
}

void draw() {
  // drag mouse on X axis to change tiling
  int tileRepeats = (int)map(constrain(mouseX, 0, width), 0, width, 1, 100);
  // draw a textured quad
  beginShape(QUAD);
  // set the texture
  texture(img);
  //     x    , y     , U          , V
  vertex(0    , 0     , 0          , 0);
  vertex(width, 0     , tileRepeats, 0);
  vertex(width, height, tileRepeats, tileRepeats);
  vertex(0    , height, 0          , tileRepeats);
  endShape();
  text((int)frameRate + "fps", 15, 15);
}
Drag the mouse on the X axis to control the number of repetitions.
In this simple example both vertex coordinates and texture coordinates are going clockwise (top left, top right, bottom right, bottom left order).
There are probably other ways to achieve the same result: using a PShader comes to mind.
Your approach of caching the tiles in setup is OK.
Even flattening your nested loop into a single loop at best may only shave a few milliseconds off, but nothing substantial.
If you tried to cache my snippet above it would make a minimal difference.
In this particular case, because of the back and forth between Java/OpenGL (via JOGL), as far as I can tell using VisualVM, it looks like there's not a lot of room for improvement, since simply swapping buffers (e.g. bg.image()) takes so long.
An easy way to do this would be to use Processing's built-in get(), which returns a PImage of the region you pass; for example, PImage pic = get(0, 0, width, height); will capture a "screenshot" of your entire window. So you can create the image like you already are, then take a screenshot and display that screenshot.
PImage bgtile;
PGraphics bg;
PImage screenGrab;
int tilesize = 50;

void setup() {
  fullScreen(P2D);
  background(0);
  bgtile = loadImage("bgtile.png");
  int bgw = ceil(((float) width) / tilesize) + 1;
  int bgh = ceil(((float) height) / tilesize) + 1;
  bg = createGraphics(bgw * tilesize, bgh * tilesize);
  bg.beginDraw();
  for (int i = 0; i < bgw; i++) {
    for (int j = 0; j < bgh; j++) {
      bg.image(bgtile, i * tilesize, j * tilesize, tilesize, tilesize);
    }
  }
  bg.endDraw();
  image(bg, 0, 0); // draw the tiled buffer once so the grab below captures it
  screenGrab = get(0, 0, width, height);
}

void draw() {
  image(screenGrab, 0, 0);
}
This will still take a little bit to generate the image, but once it does, there is no need to use the for loops again unless you change the tilesize.
@George Profenza's answer looks more efficient than my solution, but mine may take a little less modification to the code you already have.

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated: I want to make it seem as if the pixels on the display are moving from their current locations and forming a turning circle or something like that.
This is what i have for now:
int c = 0;
int radius = 30;
color[] allPixels = removeBlackP();

void draw() {
  loadPixels();
  for (int alpha = 0; alpha < 360; alpha++)
  {
    float xf = 350 + radius*cos(alpha);
    float yf = 350 + radius*sin(alpha);
    int x = (int) xf;
    int y = (int) yf;
    if (radius > 200) { radius = 30; break; }
    if (c >= allPixels.length) { c = 0; } // >= keeps allPixels[c] in bounds
    pixels[y*700 + x] = allPixels[c];
    updatePixels();
  }
  radius++;
  c++;
}
The function removeBlackP returns an array with all the pixels except for the black ones.
This code works for me. There is an issue that the circle coordinates are truncated to ints, so it seems like some pixels inside the circle won't fill; I can live with that. I'm looking for something a bit more complex, like I explained.
Thanks!
Fill all pixels of the scanlines belonging to the circle. Using this approach, you will paint every place inside the circle. For every line, calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dy = y - center_y
    dx = Sqrt(radius * radius - dy * dy)
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
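A compilable sketch of that scanline fill, just to make the indexing concrete (the flat pixel buffer is a stand-in for whatever framebuffer you use):

#include <algorithm>
#include <cmath>
#include <vector>

// Fill a solid circle into a W x H framebuffer, one horizontal scanline at a time.
void fillCircle(std::vector<int>& pixels, int W, int H,
                int cx, int cy, int radius, int color)
{
    for (int y = std::max(0, cy - radius); y <= std::min(H - 1, cy + radius); ++y) {
        int dy = y - cy;
        int dx = (int)std::sqrt((double)(radius * radius - dy * dy));
        for (int x = std::max(0, cx - dx); x <= std::min(W - 1, cx + dx); ++x)
            pixels[y * W + x] = color; // every pixel inside the circle gets painted
    }
}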
When you have found destinations for all pixels, you can map each pixel's initial position to its calculated one and move it step by step.
For example, if the initial coordinates relative to the center point for the k-th pixel are (x0, y0) and the final coordinates are (x1, y1), and you want to take M steps, moving the pixel along a spiral, calculate intermediate coordinates:
calc values once:
    r0 = Sqrt(x0*x0 + y0*y0)   // Math.Hypot if available
    r1 = Sqrt(x1*x1 + y1*y1)
    fi0 = Math.Atan2(y0, x0)
    fi1 = Math.Atan2(y1, x1)
    if fi1 < fi0 then
        fi1 = fi1 + 2 * Pi

for i = 1; i <= M; i++
    x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
    y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
    shift by center coordinates
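Here is the same interpolation as a small self-contained sketch, assuming (x0, y0) and (x1, y1) are already relative to the circle center:

#include <cmath>

struct Point { float x, y; };

// Position of a pixel at step i of M while it spirals from (x0, y0) to (x1, y1),
// interpolating radius and angle linearly in polar coordinates.
Point spiralStep(float x0, float y0, float x1, float y1, int i, int M)
{
    const float TWO_PI = 6.28318530718f;
    float r0  = std::hypot(x0, y0);
    float r1  = std::hypot(x1, y1);
    float fi0 = std::atan2(y0, x0);
    float fi1 = std::atan2(y1, x1);
    if (fi1 < fi0) fi1 += TWO_PI; // always turn in the same direction
    float t  = (float)i / (float)M;
    float r  = r0 + t * (r1 - r0);
    float fi = fi0 + t * (fi1 - fi0);
    // caller adds the center coordinates back before drawing
    return { r * std::cos(fi), r * std::sin(fi) };
}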
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, though no pixels are involved there.
If you do need to draw an ellipse and use pixels, you can make use of PGraphics which is similar to using a separate buffer/"layer" to draw into using Processing drawing commands but it also has pixels[] you can access.
Let's say you want to draw a low-res pixel circle: you can create a small PGraphics, disable smoothing, draw the circle, then render it scaled up. The only catch is these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;

void setup(){
  //disable sketch's aliasing
  noSmooth();
  buffer = createGraphics(25, 25);
  buffer.beginDraw();
  //disable buffer's aliasing
  buffer.noSmooth();
  buffer.noFill();
  buffer.stroke(255);
  buffer.endDraw();
}

void draw(){
  background(255);
  //draw small circle
  float circleSize = map(sin(frameCount * .01), -1.0, 1.0, 0.0, 20.0);
  buffer.beginDraw();
  buffer.background(0);
  buffer.ellipse(buffer.width / 2, buffer.height / 2, circleSize, circleSize);
  buffer.endDraw();
  //render small circle at higher resolution (blocky - no aliasing)
  image(buffer, 0, 0, width, height);
}
If you want to manually draw a circle using pixels[], you are on the right track using the polar-to-cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it's focused on drawing a radial gradient, you can find an example of drawing a circle (a lot of them, actually) using pixels in this answer.

Ray-box Intersection Theory

I wish to determine the intersection point between a ray and a box. The box is defined by its min 3D coordinate and max 3D coordinate and the ray is defined by its origin and the direction to which it points.
Currently, I am forming a plane for each face of the box and I'm intersecting the ray with the plane. If the ray intersects the plane, then I check whether or not the intersection point is actually on the surface of the box. If so, I check whether it is the closest intersection for this ray and I return the closest intersection.
The way I check whether the plane-intersection point is on the box surface itself is with the function
bool PointOnBoxFace(R3Point point, R3Point corner1, R3Point corner2)
{
    double min_x = min(corner1.X(), corner2.X());
    double max_x = max(corner1.X(), corner2.X());
    double min_y = min(corner1.Y(), corner2.Y());
    double max_y = max(corner1.Y(), corner2.Y());
    double min_z = min(corner1.Z(), corner2.Z());
    double max_z = max(corner1.Z(), corner2.Z());
    if(point.X() >= min_x && point.X() <= max_x &&
       point.Y() >= min_y && point.Y() <= max_y &&
       point.Z() >= min_z && point.Z() <= max_z)
        return true;
    return false;
}
where corner1 is one corner of the rectangle for that box face and corner2 is the opposite corner. My implementation works most of the time but sometimes it gives me the wrong intersection. Please see image:
The image shows rays coming from the camera's eye and hitting the box surface. The other rays are the normals to the box surface. It can be seen that one ray in particular (it's actually the normal that is seen) comes out from the "back" of the box, whereas the normal should be coming up from the top of the box. This seems strange, since there are multiple other rays that hit the top of the box correctly.
I was wondering if the way I'm checking whether the intersection point is on the box is correct or if I should use some other algorithm.
Thanks.
Increasing things by epsilon is not actually a great way to do this, as you now have a border of size epsilon at the edge of your box through which rays can pass. So you'll get rid of this (relatively common) weird set of errors, and end up with another (rarer) set of weird errors.
I assume that you're already envisioning that your ray is traveling at some speed along its vector, and finding the time of intersection with each plane. So, for example, if you are intersecting the plane at x=x0, and your ray is going in direction (rx,ry,rz) from (0,0,0), then the time of intersection is t = x0/rx. If t is negative, ignore it--you're going the other way. If t is zero, you have to decide how to handle that special case--if you're in a plane already, do you bounce off it, or pass through it? You may also want to handle rx==0 as a special case (so that you can hit the edge of the box).
Anyway, now you have exactly the coordinates where you struck that plane: they are (t*rx, t*ry, t*rz). Now you can just read off whether t*ry and t*rz are within the rectangle they need to be in (i.e. between the min and max for the cube along those axes). You don't test the x coordinate because you already know that you hit it. Again, you have to decide whether/how to handle hitting corners as a special case. Furthermore, now you can order your collisions with the various surfaces by time and pick the first one as your collision point.
This allows you to compute, without resorting to arbitrary epsilon-factors, whether and where your ray intersects your cube, to the accuracy possible with floating point arithmetic.
So you just need three functions like the one you've already got: one for testing whether you hit within yz assuming you hit x, and the corresponding ones for xz and xy assuming that you hit y and z respectively.
Edit: code added to (verbosely) show how to do the tests differently for each axis:
#define X_FACE 0
#define Y_FACE 1
#define Z_FACE 2
#define MAX_FACE 4

// true if we hit a box face, false otherwise
bool hit_face(double uhit, double vhit,
              double umin, double umax, double vmin, double vmax)
{
    return (umin <= uhit && uhit <= umax && vmin <= vhit && vhit <= vmax);
}

// 0.0 if we missed, the time of impact otherwise
double hit_box(double rx, double ry, double rz,
               double min_x, double min_y, double min_z,
               double max_x, double max_y, double max_z)
{
    double times[6];
    bool hits[6];
    int faces[6];
    double t;
    if (rx==0) { times[0] = times[1] = 0.0; hits[0] = hits[1] = false; }
    else {
        t = min_x/rx;
        times[0] = t; faces[0] = X_FACE;
        hits[0] = hit_face(t*ry, t*rz, min_y, max_y, min_z, max_z);
        t = max_x/rx;
        times[1] = t; faces[1] = X_FACE + MAX_FACE;
        hits[1] = hit_face(t*ry, t*rz, min_y, max_y, min_z, max_z);
    }
    if (ry==0) { times[2] = times[3] = 0.0; hits[2] = hits[3] = false; }
    else {
        t = min_y/ry;
        times[2] = t; faces[2] = Y_FACE;
        hits[2] = hit_face(t*rx, t*rz, min_x, max_x, min_z, max_z);
        t = max_y/ry;
        times[3] = t; faces[3] = Y_FACE + MAX_FACE;
        hits[3] = hit_face(t*rx, t*rz, min_x, max_x, min_z, max_z);
    }
    if (rz==0) { times[4] = times[5] = 0.0; hits[4] = hits[5] = false; }
    else {
        t = min_z/rz;
        times[4] = t; faces[4] = Z_FACE;
        hits[4] = hit_face(t*rx, t*ry, min_x, max_x, min_y, max_y);
        t = max_z/rz;
        times[5] = t; faces[5] = Z_FACE + MAX_FACE;
        hits[5] = hit_face(t*rx, t*ry, min_x, max_x, min_y, max_y);
    }
    int first = 6;
    t = 0.0;
    for (int i = 0; i < 6; i++) {
        // only consider faces we actually hit, at a positive time
        if (hits[i] && times[i] > 0.0 && (times[i] < t || t == 0.0)) {
            first = i;
            t = times[i];
        }
    }
    if (first > 5) return 0.0;  // Found nothing
    else return times[first];   // Probably want hits[first] and faces[first] also....
}
(I just typed this, didn't compile it, so beware of bugs.)
(Edit: just corrected an i -> first.)
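A quick usage sketch under the same assumption as above (ray origin at (0,0,0); the numbers are hypothetical):

// Ray from the origin toward (1, 1, 1); box spanning (2, 2, 2) to (3, 3, 3).
double t = hit_box(1.0, 1.0, 1.0,   // ray direction (rx, ry, rz)
                   2.0, 2.0, 2.0,   // box min corner
                   3.0, 3.0, 3.0);  // box max corner
if (t > 0.0) {
    // impact point is (t*rx, t*ry, t*rz); here t == 2.0, hitting the near corner
}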
Anyway, the point is that you treat the three directions separately, and test to see whether the impact has occurred within the right box in (u,v) coordinates, where (u,v) are either (x,y), (x,z), or (y,z) depending on which plane you hit.
PointOnBoxFace should be a two-dimensional check instead of three-dimensional. For example, if you're testing against the z = z_min plane, then you should only need to compare x and y to their respective boundaries. You've already figured out that z coordinate is correct. Floating point precision is likely tripping you up as you "re-check" the third coordinate.
For example, if z_min is 1.0, you first test against that plane. You find an intersection point of (x, y, 0.999999999). Now, even though x and y are within bounds, the z isn't quite right.
Code looks fine. Try to find this particular ray and debug it.
Could it be that that ray ends up passing exactly through the edge of the box? Floating point roundoff errors might cause it to be missed by both the right and the back face.
EDIT: Ignore this answer (see comments below where I am quite convincingly shown the error of my ways).
You are testing whether the point is inside the volume, but the point is on the periphery of that volume, so you may find that it is an "infinitesimal" distance outside the volume. Try growing the box by some small epsilon, e.g.:
double epsilon = 1e-10; // Depends the scale of things in your code.
double min_x = min(corner1.X(), corner2.X()) - epsilon;
double max_x = max(corner1.X(), corner2.X()) + epsilon;
double min_y = min(corner1.Y(), corner2.Y()) - epsilon;
...
Technically, the correct way to compare almost-equal numbers is to cast their bit representations to ints and compare the integers for some small offset, e.g., in C:
#define EPSILON 10 /* some small int; tune to suit */
int almostequal(double a, double b) {
    return llabs(*(long long*)&a - *(long long*)&b) < EPSILON;
}
Of course, this isn't so easy in C#, but perhaps unsafe semantics can achieve the same effect. (Thanks to @Novox for his comments, which led me to a nice page explaining this technique in detail.)
