Depth peeling invariance in WebGL (and three.js)

I'm looking at what I think is the first paper on depth peeling (the simplest algorithm?) and I want to implement it with WebGL, using three.js.
I think I understand the concept and was able to get several peels, with some logic that looks like this:
render(scene, camera) {
  const oldAutoClear = this._renderer.autoClear
  this._renderer.autoClear = false
  setDepthPeelActive(true) //sets a global injected uniform in a singleton elsewhere, every material in the scene has onBeforeRender injected with additional logic and uniforms
  let ping
  let pong
  for (let i = 0; i < this._numPasses; i++) {
    const pingPong = i % 2 === 0
    ping = pingPong ? 1 : 0
    pong = pingPong ? 0 : 1
    const writeRGBA = this._screenRGBA[i]
    const writeDepth = this._screenDepth[ping]
    setDepthPeelPassNumber(i) //was going to try increasing the polygonOffsetUnits here globally
    if (i > 0) {
      //all but first pass write to depth
      const readDepth = this._screenDepth[pong]
      setDepthPeelFirstPass(false)
      setDepthPeelPrevDepthTexture(readDepth)
      this._depthMaterial.uniforms.uFirstPass.value = 0
      this._depthMaterial.uniforms.uPrevDepthTex.value = readDepth
    } else {
      //first pass just renders to depth
      setDepthPeelFirstPass(true)
      setDepthPeelPrevDepthTexture(null)
      this._depthMaterial.uniforms.uFirstPass.value = 1
      this._depthMaterial.uniforms.uPrevDepthTex.value = null
    }
    scene.overrideMaterial = this._depthMaterial
    this._renderer.render(scene, camera, writeDepth, true)
    scene.overrideMaterial = null
    this._renderer.render(scene, camera, writeRGBA, true)
  }
  this._quad.material = this._blitMaterial
  // this._blitMaterial.uniforms.uTexture.value = this._screenDepth[ping]
  this._blitMaterial.uniforms.uTexture.value = this._screenRGBA[this._currentBlitTex]
  console.log(this._currentBlitTex)
  this._renderer.render(this._scene, this._camera)
  this._renderer.autoClear = oldAutoClear
}
I'm using gl_FragCoord.z to do the test, and packing the depth into an 8-bit RGBA texture, with a shader that looks like this:
float depth = gl_FragCoord.z;
vec4 pp = packDepthToRGBA( depth );
if( uFirstPass == 0 ){
    float prevDepth = unpackRGBAToDepth( texture2D( uPrevDepthTex, vSS ) );
    if( depth <= prevDepth + 0.0001 ) {
        discard;
    }
}
gl_FragColor = pp;
Varying vSS is computed in the vertex shader, after the projection:
vSS.xy = gl_Position.xy * .5 + .5;
The basic idea seems to work and I get peels, but only if I use the fudge factor. It also looks like it fails as the angle gets more oblique (which is why polygonOffset needs both the factor and the units, to account for the slope?).
I didn't understand at all how the invariance problem is solved. I don't understand how the mentioned extension is being used, other than that it seems to be overriding the fragment depth, but with what?
I must admit that I'm not even sure which interpolation is being referred to here, since every pixel is aligned and I'm just using nearest filtering.
I did see some hints about depth buffer precision, but, not really understanding the issue, I wanted to try packing the depth into only three channels and see what happens.
The fact that such a small fudge factor makes it sort of work tells me that all these sampled and computed depths likely do exist in the same space. But isn't this the same issue as using gl.EQUAL for depth testing? For kicks I tried to override the depth with the unpacked depth immediately after packing it, but it didn't seem to do anything.
Edit:
Increasing the polygon offset with each peel seems to have done the trick. I got some z-fighting with the lines, though, but I think that's because I was already using an offset to draw them and I need to include that in the peel offset. I'd still love to understand more about the problem.

The depth buffer stores depths :) Depending on the 'far' and 'near' planes, a perspective projection tends to "stack" the depths of the points into just a short part of the buffer's range; it is not linear in z. You can see this yourself by setting a different color depending on the depth and rendering a triangle that spans most of the near-far distance.
A shadow map stores depths (distances to the light), calculated after projection. Later, in the second or following pass, you compare those depths, which are "stacked", which makes some comparisons fail because the values are very similar: hazardous variances.
You can use a more fine-grained depth buffer, 24 bits instead of 16 or 8. This may solve part of the problem.
There's another issue: the perspective division, z/w, needed to get normalized device coordinates (NDC). It happens after the vertex shader, so gl_FragDepth = gl_FragCoord.z is affected by it.
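To put a number on that "stacking": with a standard perspective projection, near plane n, far plane f and the default [0, 1] depth range, a point at eye-space distance d in front of the camera ends up with the window-space depth

z_window = f * (d - n) / (d * (f - n))

which is hyperbolic in d; when f is much larger than n, roughly half of the representable depth values are spent on the range between n and 2*n, and everything farther away gets crammed into the top of the range.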
The other approach is to store depths calculated in some space that doesn't suffer from "stacking" or perspective division. Camera space is one. In other words, you can calculate the depth by undoing the projection in the vertex shader.
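As a rough sketch of that idea applied to the shader from the question (assuming the vertex shader additionally passes the view-space position as a vViewPosition varying, and that uNear/uFar uniforms holding the camera's near and far planes are added; uFirstPass, uPrevDepthTex, vSS and the pack/unpack helpers are the ones already used above, coming from the three.js packing include):

varying vec3 vViewPosition; // view-space position, passed from the vertex shader (assumed addition)
varying vec2 vSS;           // same screen-space varying as in the question
uniform sampler2D uPrevDepthTex;
uniform int uFirstPass;
uniform float uNear;        // assumed uniforms holding the camera's near and far planes
uniform float uFar;

void main() {
    // -vViewPosition.z is the eye-space distance in front of the camera;
    // remap it linearly to [0, 1] so it packs into RGBA without the hyperbolic "stacking"
    float linearDepth = ( -vViewPosition.z - uNear ) / ( uFar - uNear );
    if ( uFirstPass == 0 ) {
        float prevDepth = unpackRGBAToDepth( texture2D( uPrevDepthTex, vSS ) );
        // both passes derive linearDepth from the same interpolated varying,
        // so the comparison should tolerate a much smaller epsilon (if any)
        if ( linearDepth <= prevDepth + 0.0001 ) {
            discard;
        }
    }
    gl_FragColor = packDepthToRGBA( linearDepth );
}

The color pass would have to compute and compare the same linearDepth so that both passes agree on what a "layer" is.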
The article you link to is for the old fixed-function pipeline, without shaders. It shows an NVIDIA extension to deal with these variances.

Related

How to convert a screen coordinate into a translation for a projection matrix?

(More info at the end.)
I am trying to render a small picture-in-picture display over my scene. The PiP is just a smaller texture, but it is intended to reveal secret objects in the scene when it is placed over them.
To do this, I want to render my scene, then render the SAME scene on the smaller texture, but with the exact same positioning as the main scene. The intended result would be something like this:
My problem is... I cannot get the scene on the smaller texture to match up 1:1. I keep trying various kludges, but ultimately I suspect that I need to do something to the projection matrix to pan it over to the location of the frame. I can get it to zoom correctly...just can't get it to pan.
Can anyone suggest what I need to do to my projection matrix to render my scene 1:1 (but panned by x,y) onto a smaller texture?
The data I have:
Resolution of the full-screen framebuffer
Resolution of the smaller texture
XY coordinate where I want to draw the smaller texture as an overlay sprite
The world/view/projection matrices from the original full-screen scene
The viewport from the original full-screen scene
(Edit)
Here is the function I use to produce the 3D camera:
void Make3DCamera(Vector theCameraPos, Vector theLookAt, Vector theUpVector, float theFOV, Point theRez, Matrix& theViewMatrix, Matrix& theProjectionMatrix)
{
    Matrix aCombinedViewMatrix;
    Matrix aViewMatrix;
    aCombinedViewMatrix.Scale(1,1,-1);
    theCameraPos.mZ*=-1;
    theLookAt.mZ*=-1;
    theUpVector.mZ*=-1;
    aCombinedViewMatrix.Translate(-theCameraPos);
    Vector aLookAtVector=theLookAt-theCameraPos;
    Vector aSideVector=theUpVector.Cross(aLookAtVector);
    theUpVector=aLookAtVector.Cross(aSideVector);
    aLookAtVector.Normalize();
    aSideVector.Normalize();
    theUpVector.Normalize();
    aViewMatrix.mData.m[0][0] = -aSideVector.mX;
    aViewMatrix.mData.m[1][0] = -aSideVector.mY;
    aViewMatrix.mData.m[2][0] = -aSideVector.mZ;
    aViewMatrix.mData.m[3][0] = 0;
    aViewMatrix.mData.m[0][1] = -theUpVector.mX;
    aViewMatrix.mData.m[1][1] = -theUpVector.mY;
    aViewMatrix.mData.m[2][1] = -theUpVector.mZ;
    aViewMatrix.mData.m[3][1] = 0;
    aViewMatrix.mData.m[0][2] = aLookAtVector.mX;
    aViewMatrix.mData.m[1][2] = aLookAtVector.mY;
    aViewMatrix.mData.m[2][2] = aLookAtVector.mZ;
    aViewMatrix.mData.m[3][2] = 0;
    aViewMatrix.mData.m[0][3] = 0;
    aViewMatrix.mData.m[1][3] = 0;
    aViewMatrix.mData.m[2][3] = 0;
    aViewMatrix.mData.m[3][3] = 1;
    if (gG.mRenderToSprite) aViewMatrix.Scale(1,-1,1);
    aCombinedViewMatrix*=aViewMatrix;
    // Projection Matrix
    float aAspect = (float) theRez.mX / (float) theRez.mY;
    float aNear = gG.mZRange.mData1;
    float aFar = gG.mZRange.mData2;
    float aWidth = gMath.Cos(theFOV / 2.0f);
    float aHeight = gMath.Cos(theFOV / 2.0f);
    if (aAspect > 1.0) aWidth /= aAspect;
    else aHeight *= aAspect;
    float s = gMath.Sin(theFOV / 2.0f);
    float d = 1.0f - aNear / aFar;
    Matrix aPerspectiveMatrix;
    aPerspectiveMatrix.mData.m[0][0] = aWidth;
    aPerspectiveMatrix.mData.m[1][0] = 0;
    aPerspectiveMatrix.mData.m[2][0] = gG.m3DOffset.mX/theRez.mX/2;
    aPerspectiveMatrix.mData.m[3][0] = 0;
    aPerspectiveMatrix.mData.m[0][1] = 0;
    aPerspectiveMatrix.mData.m[1][1] = aHeight;
    aPerspectiveMatrix.mData.m[2][1] = gG.m3DOffset.mY/theRez.mY/2;
    aPerspectiveMatrix.mData.m[3][1] = 0;
    aPerspectiveMatrix.mData.m[0][2] = 0;
    aPerspectiveMatrix.mData.m[1][2] = 0;
    aPerspectiveMatrix.mData.m[2][2] = s / d;
    aPerspectiveMatrix.mData.m[3][2] = -(s * aNear / d);
    aPerspectiveMatrix.mData.m[0][3] = 0;
    aPerspectiveMatrix.mData.m[1][3] = 0;
    aPerspectiveMatrix.mData.m[2][3] = s;
    aPerspectiveMatrix.mData.m[3][3] = 0;
    theViewMatrix = aCombinedViewMatrix;
    theProjectionMatrix = aPerspectiveMatrix;
}
Edit to add more information:
Just playing and tweaking numbers, I have come to a "close" result. However, the "close" result requires multiplication by some kludge numbers that I don't understand.
Here's what I'm doing to the perspective matrix to produce my close result:
//Before calling Make3DCamera, adjusting FOV:
aFOV*=smallerTexture.HeightF()/normalRenderSize.HeightF(); // Zoom it
aFOV*=1.02f; // <- WTH is this?
//Then, to pan the camera over to the x/y position I want, I do:
Matrix aPM=GetCurrentProjectionMatrix();
float aX=(screenX-normalRenderSize.WidthF()/2.0f)/2.0f;
float aY=(screenY-normalRenderSize.HeightF()/2.0f)/2.0f;
aX*=1.07f; // <- WTH is this?
aY*=1.07f; // <- WTH is this?
aPM.mData.m[2][0]=-aX/normalRenderSize.HeightF();
aPM.mData.m[2][1]=-aY/normalRenderSize.HeightF();
SetCurrentProjectionMatrix(aPM);
When I do this, my new picture is VERY close... but not exactly perfect: the small render tends to drift away from "center" the further the "magic window" is from the center. Without the kludge numbers, the drift away from center with the magic window is very pronounced.
The kludge numbers 1.02f for zoom and 1.07 for pan reduce the inaccuracies and drift to a fraction of a pixel, but those numbers must be a ratio from somewhere, right? They work at ANY RESOLUTION, though; I can have a 1280x800 screen and a 256x256 magic window texture, and if I change the screen to 1024x768, it all still works.
Where the heck are these numbers coming from?
If you don't care about sub-optimal performance (i.e., drawing the whole scene twice) and if you don't need the smaller scene in a texture, an easy way to obtain the overlay with pixel perfect precision is:
Set up main scene (model/view/projection matrices, etc.) and draw it as you are now.
Use glScissor to set the rectangle for the overlay. glScissor takes the screen-space x, y, width, and height and discards anything outside that rectangle. It looks like you have those four data items already, so you should be good to go.
Call glEnable(GL_SCISSOR_TEST) to actually turn on the test.
Set the shader variables (if you're using shaders) for drawing the greyscale scene/hidden objects/etc. You still use the same view and projection matrices that you used for the main scene.
Draw the greyscale scene/hidden objects/etc.
Call glDisable(GL_SCISSOR_TEST) so you won't be scissoring at the start of the next frame.
Draw the red overlay border, if desired.
Now, if you actually need the overlay in its own texture for some reason, this probably won't be adequate... it could be made to work with framebuffer objects and/or pixel readback, but that would be less efficient.
Most people completely overcomplicate such issues. There is absolutely no magic to applying transformations after applying the projection matrix.
If you have a projection matrix P (and I'm assuming default OpenGL conventions here where P is constructed in a way that the vector is post-multiplied to the matrix, so for an eye space vector v_eye, we get v_clip = P * v_eye), you can simply pre-multiply some other translate and scale transforms to cut out any region of interest.
Assume you have a viewport of size w_view * h_view pixels, and you want to find a projection matrix which renders only a tile of w_tile * h_tile pixels, beginning at pixel location (x_tile, y_tile) (again, assuming default GL conventions here, window space origin is bottom left, so y_tile is measured from the bottom). Also note that the _tile coordinates are to be interpreted relative to the viewport; in the typical case, that would start at (0,0) and have the size of your full framebuffer, but this is by no means required nor assumed here.
Since after applying the projection matrix we are in clip space, we need to transform our coordinates from window space pixels to clip space. Note that clip space is a 4D homogeneous space, but we can use any w value we like (except 0) to represent any point (as a point in the 3D space we care about forms a line in the 4D space we work in), so let's just use w=1 for simplicity's sake.
The view volume in clip space is denoted by the [-w,w] range, so in the w=1 hyperplane, it is [-1,1]. Converting our tile into this space yields:
x_clip = 2 * (x_tile / w_view) - 1
y_clip = 2 * (y_tile / h_view) - 1
w_clip = 2 * (w_tile / w_view)
h_clip = 2 * (h_tile / h_view)
(the positions get the -1 offset, while the sizes are only scaled, since they are extents rather than positions).
We now just need to translate the objects such that the center of the tile is moved to the center of the view volume, which by definition is the origin, and scale the w_clip * h_clip sized region to the full [-1,1] extent in each dimension.
That means:
T = translate(-(x_clip + 0.5 * w_clip), -(y_clip + 0.5 * h_clip), 0)
S = scale(2.0 / w_clip, 2.0 / h_clip, 1.0)
We can now create the modified projection matrix P' as P' = S * T * P, and that's all there is. Rendering with P' instead of P will render exactly the region of your tile to whatever viewport you are using, so for it to be pixel-exact with respect to your original viewport, you must now render with a viewport which is also w_tile * h_tile pixels big.
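Written out explicitly (column-vector convention, matching the P' = S * T * P order above), those two matrices are simply:

T = [ 1  0  0  -(x_clip + w_clip / 2) ]
    [ 0  1  0  -(y_clip + h_clip / 2) ]
    [ 0  0  1   0                     ]
    [ 0  0  0   1                     ]

S = [ 2 / w_clip   0            0   0 ]
    [ 0            2 / h_clip   0   0 ]
    [ 0            0            1   0 ]
    [ 0            0            0   1 ]

A quick sanity check: for the full-viewport tile (x_tile = y_tile = 0, w_tile = w_view, h_tile = h_view) you get x_clip = y_clip = -1 and w_clip = h_clip = 2, both matrices collapse to the identity, and P' = P.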
Note that there is also another approach: the viewport is not clamped against the framebuffer you're rendering to. It is actually valid to provide negative values for x and y. If the framebuffer you render your tile into is exactly w_tile * h_tile pixels, you could simply set glViewport(-x_tile, -y_tile, w_view, h_view) and render with the unmodified projection matrix P instead.

How to improve texture access performance in OpenGL shaders?

Conditions
I use OpenGL 3 and PyOpenGL.
I have ~50 thousand (53'490) vertices and each of them has 199 vec3 attributes which determine their displacement. It's impossible to store this data as regular vertex attributes, so I use a texture.
The problem is: a non-parallelized C function calculates the displacement of the vertices as fast as the GLSL version, and even faster in some cases. I've checked: the issue is the texture reads, and I don't understand how to optimize them.
I've written two different shaders. One calculates the new model in ~0.09 s and the other one in ~0.12 s (including attribute assignment, which is equal in both cases).
Code
Both shaders start with
#version 300 es
in vec3 vin_position;
out vec4 vin_pos;
uniform mat4 rotation_matrix;
uniform float coefficients[199];
uniform sampler2D principal_components;
The faster one is
void main(void) {
    int c_pos = gl_VertexID;
    int texture_size = 8192;
    ivec2 texPos = ivec2(c_pos % texture_size, c_pos / texture_size);
    vec4 tmp = vec4(0.0);
    for (int i = 0; i < 199; i++) {
        tmp += texelFetch(principal_components, texPos, 0) * coefficients[i];
        c_pos += 53490;
        texPos = ivec2(c_pos % texture_size, c_pos / texture_size);
    }
    gl_Position = rotation_matrix
        * vec4(vin_position + tmp.xyz, 246006.0);
    vin_pos = gl_Position;
}
The slower one
void main(void) {
    int texture_size = 8192;
    int columns = texture_size - texture_size % 199;
    int c_pos = gl_VertexID * 199;
    ivec2 texPos = ivec2(c_pos % columns, c_pos / columns);
    vec4 tmp = vec4(0.0);
    for (int i = 0; i < 199; i++) {
        tmp += texelFetch(principal_components, texPos, 0) * coefficients[i];
        texPos.x++;
    }
    gl_Position = rotation_matrix
        * vec4(vin_position + tmp.xyz, 246006.0);
    vin_pos = gl_Position;
}
The main difference between them:
in the first case, the attributes of the vertices are stored in the following way:
first attributes of all vertices
second attributes of all vertices
...
last attributes of all vertices
in the second case, the attributes of the vertices are stored in another way:
all attributes of the first vertex
all attributes of the second vertex
...
all attributes of the last vertex
Also, in the second example the data is aligned so that all attributes of each vertex are stored in a single row. This means that if I know the row and column of the first attribute of some vertex, I only need to increment the x component of the texture coordinate.
I thought that the aligned data would be accessed faster.
Questions
Why is data not accessed faster?
How can I increase performance of it?
Is there ability to link texture chunk with vertex?
Are there recommendations for data alignment, or good articles about caching in GPUs (Intel HD, NVIDIA GeForce)?
Notes
the coefficients array changes from frame to frame; otherwise there would be no problem: I could precalculate the model and be happy
Why is data not accessed faster?
Because GPUs are not magical. GPUs gain performance by performing calculations in parallel. Performing millions of texel fetches, no matter how it happens, is not going to be fast.
If you were using the results of those fetches to do lighting computations, it would appear fast, because the cost of the lighting computation would be hidden by the latency of the memory fetches. But you are taking the result of a fetch, doing a multiply/add, then doing another fetch. That's slow.
Is there ability to link texture chunk with vertex?
Even if there were (and there isn't), how would that help? GPUs execute operations in parallel. That means multiple vertices are being processed simultaneously, each doing ~200 texture accesses.
So what would aid performance there is making each texture access coherent. That is, neighboring vertices would access neighboring texels, thus making the texture fetches more cache-efficient. But there's no way to know which vertices will be considered "neighbors", and texture swizzle layouts are implementation-dependent, so even if you did know the order of vertex processing, you couldn't adjust your texture to take local advantage of it.
The best way to do that would be to ditch vertex shaders and texture accesses in favor of compute shaders and SSBOs. That way, you have direct knowledge of the locality of your accesses, by setting the work group size. With SSBOs, you can arrange your array in whatever fashion gives you the best locality of access for each wavefront.
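For illustration only (this needs desktop GL 4.3+ rather than the GL 3 / ES 3.0 context used above, and the buffer names and bindings are made up), a compute-shader version of the same reduction could look roughly like this, with the pose deltas and coefficients in SSBOs laid out so that neighbouring invocations read neighbouring elements:

#version 430
layout(local_size_x = 64) in;

layout(std430, binding = 0) readonly buffer PrincipalComponents {
    vec4 displacement[];        // 199 * vertex_count entries, "component-major" order
};
layout(std430, binding = 1) readonly buffer Coefficients {
    float coefficient[199];
};
layout(std430, binding = 2) writeonly buffer Displaced {
    vec4 result[];              // one summed displacement per vertex
};

uniform int vertex_count;       // e.g. 53490

void main() {
    uint v = gl_GlobalInvocationID.x;
    if (v >= uint(vertex_count)) return;
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 199; i++) {
        // component-major layout: invocation v and v + 1 touch adjacent array elements,
        // so the loads within a work group stay coalesced
        sum += displacement[uint(i) * uint(vertex_count) + v] * coefficient[i];
    }
    result[v] = sum;
}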
But things like this are the equivalent of putting band-aids on a gaping wound.
How can I increase performance of it?
Stop doing so many texture fetches.
I'm being completely serious. While there are ways to mitigate the costs of what you're doing, the most effective solution is to change your algorithm so that it doesn't need to do that much work.
Your algorithm looks suspiciously like vertex morphing via a palette of "poses", with the coefficients specifying the weight applied to each pose. If that's the case, then odds are good that most of your coefficients are either 0 or negligibly small. If so, then you're wasting vast amounts of time accessing textures only to transform their contributions into nothing.
If most of your coefficients are 0, then the best thing to do would be to pick some arbitrary and small number for the maximum number of coefficients that can affect the result. For example, 8. You send an array of 8 indices and coefficients to the shader as uniforms. Then you walk that array, fetching only 8 times. And you might be able to get away with just 4.
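A sketch of that idea against the "faster" layout above, keeping the same in/out/uniform declarations and texture addressing (the active_index / active_coeff uniforms are hypothetical; the host would fill them each frame with the indices and values of the largest-magnitude coefficients):

const int ACTIVE = 8;
uniform int active_index[ACTIVE];    // which of the 199 components to apply this frame
uniform float active_coeff[ACTIVE];  // their weights

void main(void) {
    int texture_size = 8192;
    vec4 tmp = vec4(0.0);
    for (int i = 0; i < ACTIVE; i++) {
        // same component-major addressing as the faster shader, but only ACTIVE fetches
        int c_pos = gl_VertexID + active_index[i] * 53490;
        ivec2 texPos = ivec2(c_pos % texture_size, c_pos / texture_size);
        tmp += texelFetch(principal_components, texPos, 0) * active_coeff[i];
    }
    gl_Position = rotation_matrix * vec4(vin_position + tmp.xyz, 246006.0);
    vin_pos = gl_Position;
}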

What algorithms or approaches apart from Haar cascades could be used for custom object detection?

I need to do computer vision tasks in order to detect water bottles or soda cans. I will obtain 'frontal' images of bottles, soda cans or any other random objects (one by one), and my algorithm should determine whether it's a bottle, a can, or neither of them.
Some details about object detecting scenario:
As mentioned, I will test one single object per image/video frame.
Not all water bottles are the same. There could be variation in the plastic color, lid or label. Some might not have a label or lid.
The same variation applies to soda cans. No wrinkled soda cans are going to be tested, though.
There could be small size variation between objects.
I could have a green (or any custom color) background.
I will apply any needed filters to the image.
This will be run on a Raspberry Pi.
Just in case, an example of each:
I've tested OpenCV's face detection algorithms a couple of times and I know they work pretty well, but with this approach I'd need to obtain a special Haar cascade features XML file for each custom object I want to detect.
So, the distinct alternatives I have in mind are:
Creating a custom Haar Classifier.
Considering shapes.
Considering outlines.
I'd like a simple algorithm, and I think creating a custom Haar classifier might not even be needed. What would you suggest?
Update
I strongly considered the shape/aspect ratio approach.
However, I guess I'm facing some issues because bottles come in distinct sizes and even shapes. But this made me settle on the following considerations:
I'm applying a threshold with THRESH_BINARY method. (Thanks to the answers).
I will use a white background on detection.
Soda cans are all same size.
So, a bounding box for soda cans with high accuracy might distinguish a can.
What I've achieved:
Thresholding really helped me. I could notice that in white-background tests this is what I would obtain for cans:
And this is what is obtained for bottles:
So the dominance of darker areas is noticeable. There are some cases with cans where this might turn into false negatives, and for bottles, lighting and angle may lead to inconsistent results, but I really, really think this could be a shorter approach.
So I'm quite confused now about how I should evaluate that darkness dominance. I've read that findContours leads to it, but I'm quite lost on how to use that function. For example, in the case of soda cans, it may find several contours, so I get lost on what to evaluate.
Note: I'm open to testing any other algorithms or libraries apart from OpenCV.
I see a few basic ideas here:
Check the object's (to be precise, the object's bounding rect's) width/height ratio. For a can it's approximately 2-2.5; for a bottle I think it will be >3. It's a very simple idea, so it should be easy to test quickly, and I think it should have quite good accuracy. For some values, like 2.75 (assuming that the values I gave are correct, which most likely isn't true), you can fall back to some different algorithm.
Check whether your object contains glass/transparent regions - if yes, then it's definitely a bottle. Here you can read more about it.
Use the GrabCut algorithm to get the object mask/a more precise shape and check whether the shape's width at the top is similar to the width at the bottom - if yes, it's a can; if not, a bottle (bottles have a screw cap at the top).
Since you want to recognize can vs. bottle rather than Pepsi vs. Coke, shape matching is probably the way to go compared to Haar and the features2d matchers like SIFT/SURF/ORB.
A unique background color will make things easier.
First create a histogram from an image of just the background
int channels[] = {0,1,2}; // use all the channels
int rgb_bins = 32; // quantize to 32 colors per channel
int histSize[] = {rgb_bins, rgb_bins, rgb_bins};
float _range[] = {0,255};
float* ranges[] = {_range, _range, _range};
cv::SparseMat bghist;
cv::calcHist(&bg_image, 1, channels, cv::noArray(),bghist, 3, histSize, ranges );
Then use calcBackProject to create a mask of bg and not bg
cv::MatND temp_ND;
cv::calcBackProject( &bottle_image, 1, channels, bghist, temp_ND, ranges );
cv::Mat bottle_mask, bottle_backproj;
if( feeling_lazy ){
    cv::normalize(temp_ND, bottle_backproj, 0, 255, cv::NORM_MINMAX, CV_8U);
    //a small blur here could work nicely
    threshold( bottle_backproj, bottle_mask, 0, 255, THRESH_OTSU );
    bottle_mask = cv::Scalar(255) - bottle_mask; //invert the mask
} else {
    //finding just the right value here might be better than the above method
    int magic_threshold = 64;
    temp_ND.convertTo( bottle_backproj, CV_8U, 255.);
    //I expect temp_ND to be CV_32F ranging from 0-1, but I might be wrong.
    threshold( bottle_backproj, bottle_mask, magic_threshold, 255, THRESH_BINARY_INV );
}
Then either:
Compare bottle_mask or bottle_backproj to a few sample bottle masks/backprojections using matchTemplate with a threshold on confidence to decide if it's a match.
matchTemplate(bottle_mask, bottle_template, result, CV_TM_CCORR_NORMED);
double confidence; minMaxLoc( result, NULL, &confidence);
Or use matchShapes, though I've never gotten this to work properly.
double confidence = matchShapes(bottle_mask, bottle_template, CV_CONTOURS_MATCH_I3);
Or use linemod, which is difficult to set up but works great for images like this where the shape isn't very complex. Aside from the linked file, I haven't found any working samples of this method, so here's what I did.
First create/train the detector with some sample images
//some magic numbers
std::vector<int> T_at_level;
T_at_level.push_back(4);
T_at_level.push_back(8);
//add some padding so linemod doesn't scream at you
const int T = 32;
int width = bottle_mask.cols;
if( width % T != 0)
    width += T - width % T;
int height = bottle_mask.rows;
if( height % T != 0)
    height += T - height % T;
//in this case template_backproj is created specifically from a sample bottle_backproj
cv::Rect padded_roi( (width - template_backproj.cols)/2, (height - template_backproj.rows)/2, template_backproj.cols, template_backproj.rows);
cv::Mat padded_backproj = cv::Mat::zeros( height, width, template_backproj.type()); //rows first, then cols
template_backproj.copyTo( padded_backproj( padded_roi ) ); //copyTo so the pixels actually land in the padded image
cv::Mat padded_mask = cv::Mat::zeros( height, width, template_mask.type());
template_mask.copyTo( padded_mask( padded_roi ) );
//you might need to erode padded_mask by a few pixels.
//initialize detector
std::vector< cv::Ptr<cv::linemod::Modality> > modalities;
modalities.push_back( cv::makePtr<cv::linemod::ColorGradient>() ); //for those that don't have a kinect
cv::Ptr<cv::linemod::Detector> new_detector = cv::makePtr<cv::linemod::Detector>(modalities, T_at_level);
//add sample images to the detector
std::vector<cv::Mat> template_images;
template_images.push_back( padded_backproj );
cv::Rect ignore_me;
const std::string class_id = "bottle";
int template_id = new_detector->addTemplate(template_images, class_id, padded_mask, &ignore_me);
Then do some matching
std::vector<cv::Mat> sources_vec;
sources_vec.push_back( padded_backproj );
//padded_backproj doesn't need to be the same size as the trained template images, but it does need to be padded the same way.
float matching_threshold = 0.8; //a higher number makes the algorithm faster
std::vector<cv::linemod::Match> matches;
std::vector<cv::String> class_ids;
new_detector->match(sources_vec, matching_threshold, matches,class_ids);
float confidence = matches.size() > 0? matches[0].similarity : 0;
As cyriel suggests, the aspect ratio (width/height) might be one useful measure. Here is some OpenCV Python code that finds contours (hopefully including the outline of the bottle or can) and gives you aspect ratio and some other measurements:
# src image should have already had some contrast enhancement (such as
# cv2.threshold) and edge finding (such as cv2.Canny)
contours, hierarchy = cv2.findContours(src, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    num_points = len(contour)
    if num_points < 5:
        # The contour has too few points to fit an ellipse. Skip it.
        continue
    # We could use area to help determine the type of object.
    # Small contours are probably false detections (not really a whole object).
    area = cv2.contourArea(contour)
    bounding_ellipse = cv2.fitEllipse(contour)
    center, radii, angle_degrees = bounding_ellipse
    # Let's define an ellipse's normal orientation to be landscape (width > height).
    # We must ensure that the ellipse's measurements match this orientation.
    if radii[0] < radii[1]:
        radii = (radii[1], radii[0])
        angle_degrees -= 90.0
    # We could use the angle to help determine the type of object.
    # A bottle or can's angle is probably approximately a multiple of 90 degrees,
    # assuming that it is at rest and not falling.
    # Calculate the aspect ratio (width / height).
    # For example, 0.5 means the object's height is 2 times its width.
    # A bottle is probably taller than a can.
    aspect_ratio = radii[0] / radii[1]
For checking transparency, you can compare the picture to a known background using histogram analysis or background subtraction.
The contour's moments can be used to determine its centroid (center of gravity):
moments = cv2.moments(contour)
m00 = moments['m00']
m01 = moments['m01']
m10 = moments['m10']
centroid = (m10 / m00, m01 / m00)
You could compare this to the center. If the object is bigger ("heavier") on one end, the centroid will be closer to that end than the center is.
So, my main approach for detection was:
Bottles are transparent and cans are opaque
In general, the algorithm consisted of:
Take a grayscale picture.
Apply a binary threshold.
Select a convenient ROI from it.
Obtain its color mean and possibly also the standard deviation.
Distinguish.
Implementation was basically reduced to this function (where CAN and BOTTLE were previously defined):
int detector(int x, int y, int width, int height, int thresholdValue, CvCapture* capture) {
    Mat img;
    Rect r;
    vector<Mat> channels;
    r = Rect(x,y,width,height);
    if ( !capture ) {
        fprintf( stderr, "ERROR: capture is NULL \n" );
        getchar();
        return -1;
    }
    img = Mat(cvQueryFrame( capture ));
    cvtColor(img,img,CV_RGB2GRAY);
    threshold(img, img, 127, 255, THRESH_BINARY);
    // ROI
    Mat roiImage = img(r);
    split(roiImage, channels);
    Scalar m = mean(channels[0]);
    float media = m[0];
    printf("Media: %f\n", media);
    if (media < thresholdValue) {
        return CAN;
    }
    else {
        return BOTTLE;
    }
}
As can be seen, a THRESH_BINARY threshold was applied, and it was a plain white background that was used. However, the main and critical issue I faced with this whole approach and algorithm was luminosity changes in the environment, even minor ones.
Sometimes I could notice that a THRESH_BINARY_INV might help more, but I wonder if I could use certain threshold parameters, or whether applying other filters might get rid of environment lighting as an issue.
I really appreciate the aspect ratio calculation approach, from a bounding box or from finding contours, but I found this one straightforward and simple once conditions were adjusted.
I'd use deep learning, based on transfer learning.
The idea is this: given a highly complex, well-trained neural network that was trained on a similar classification task (typically over a large public dataset, like ImageNet), you can freeze the majority of its weights and only train the last layers. There are lots of tutorials out there. You don't need to have a background in deep learning.
There is a tutorial which is almost out of the box with TensorFlow here, and here there is another based on Keras.

Unprojecting Screen coords to world in OpenGL es 2.0

Long time listener, first time caller.
So I have been playing around with the Android NDK and I'm at a point where I want to unproject a tap to world coordinates, but I can't make it work.
The problem is that the x and y values of both the near and far points are the same, which doesn't seem right for a perspective projection. Everything in the scene draws OK, so I'm a bit confused why it wouldn't unproject properly. Anyway, here is my code; please help, thanks:
//x and y are the normalized screen coords
ndk_helper::Vec4 nearPoint = ndk_helper::Vec4(x, y, 1.f, 1.f);
ndk_helper::Vec4 farPoint = ndk_helper::Vec4(x, y, 1000.f, 1.f);
ndk_helper::Mat4 inverseProjView = this->matProjection * this->matView;
inverseProjView = inverseProjView.Inverse();
nearPoint = inverseProjView * nearPoint;
farPoint = inverseProjView * farPoint;
nearPoint = nearPoint *(1 / nearPoint.w_);
farPoint = farPoint *(1 / farPoint.w_);
Well, after looking at the vector/matrix math code in ndk_helper, this isn't a surprise. In short: don't use it. After scanning through it for a couple of minutes, it has some obvious mistakes that look like simple typos. In particular, the Vec4 class is mostly useless for the kind of vector operations you need for graphics. Most of the operations assume that a Vec4 is a vector in 4D space, not a vector containing homogeneous coordinates in 3D space.
If you want, you can check it out here, but be prepared for a few face palms:
https://android.googlesource.com/platform/development/+/master/ndk/sources/android/ndk_helper/vecmath.h
For example, this is the implementation of the multiplication used in the last two lines of your code:
Vec4 operator*( const float& rhs ) const
{
    Vec4 ret;
    ret.x_ = x_ * rhs;
    ret.y_ = y_ * rhs;
    ret.z_ = z_ * rhs;
    ret.w_ = w_ * rhs;
    return ret;
}
This multiplies a vector in 4D space by a scalar, but is completely wrong if you're operating with homogeneous coordinates. Which explains the results you are seeing.
I would suggest that you either write your own vector/matrix library that is suitable for graphics type operations, or use one of the freely available libraries that are tested, and used by others.
BTW, the specific values you are using for your test look somewhat odd. You definitely should not be getting the same results for the two vectors, but it's probably not what you had in mind anyway. For the z coordinate of your input vectors, you are using the distances of the near and far planes in eye coordinates. But then you apply the inverse view-projection matrix to those vectors, which transforms them from clip/NDC space back into world space. So your input vectors for this calculation should be in clip/NDC space, which means the z-coordinate values corresponding to the near/far planes should be -1 and 1.
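In other words, with x and y being the tap position remapped to the [-1, 1] range, the two points to transform are

nearPoint = inverse(matProjection * matView) * (x, y, -1, 1)
farPoint  = inverse(matProjection * matView) * (x, y,  1, 1)

each followed by dividing the result by its own w component, exactly as the last two lines of the question already do.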

Setting the projectionMatrix of a Perspective Camera in Three.js

I'm trying to set the projectionMatrix of a three.js PerspectiveCamera to match a projection matrix I calculated with a different program.
So I set the camera's position and rotation like this:
self.camera.position.x = 0;
self.camera.position.y = 0;
self.camera.position.z = 142 ;
self.camera.rotation.x = 0.0;// -0.032
self.camera.rotation.y = 0.0;
self.camera.rotation.z = 0;
Next I created a 4x4 Matrix (called Matrix4 in Three.js) like this:
var projectionMatrix = new THREE.Matrix4(-1426.149, -145.7176, -523.0170, 225.07519, -42.40711, -1463.2367, -23.6839, 524.3322, -0.0174, -0.11928, -0.99270, 0.43826, 0, 0, 0, 1);
and changed the camera's projection Matrix entries like this:
for ( var i = 0; i < 16; i++) {
    self.camera.projectionMatrix.elements[i] = projectionMatrix.elements[i];
}
When I now render the scene I just get a black screen and can't see any of the objects I inserted. Turning the angle of the camera doesn't help either; I still can't see any objects.
If I insert a
self.camera.updateProjectionMatrix();
after setting the camera's projection matrix to the values of my projectionMatrix, the camera is set back to the original position (x=0, y=0, z=142, looking at the origin where I created some objects) and the values I set in the camera's matrix seem to have been overwritten. I checked that by printing the camera's projection matrix to the console. If I do not call the updateProjectionMatrix() function, the values stay as I set them.
Does somebody have an idea how to solve this problem?
If I do not call the updateProjectionMatrix() function the values stay as I set them.
Correct: updateProjectionMatrix() recalculates those 16 numbers you pasted into your projection matrix from the parameters you passed (or defaulted) when creating the camera. The position and rotation you set above are not part of it; they make up the matrixWorld and its inverse (the view matrix).
In the case of a perspective camera, you don't have much: near, far, fov and aspect. Left, right, top and bottom are derived from these; with an orthographic camera you set them directly. These are then used to compose the projection matrix.
Scratchapixel has a REALLY good tutorial on this subject. The next lesson, on the OpenGL projection matrix, is actually more relevant to WebGL: left, right, top and bottom are made from your FOV and your aspect ratio. Add near and far and you've got yourself a projection matrix.
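For reference, with top = near * tan(fov / 2), bottom = -top, right = top * aspect and left = -right, the classic OpenGL-style perspective matrix built from those parameters is

P = [ 2*near/(right-left)   0                     (right+left)/(right-left)    0                      ]
    [ 0                     2*near/(top-bottom)   (top+bottom)/(top-bottom)    0                      ]
    [ 0                     0                     -(far+near)/(far-near)       -2*far*near/(far-near) ]
    [ 0                     0                     -1                           0                      ]

(written row by row, acting on column vectors). updateProjectionMatrix() rebuilds essentially this from fov, aspect, near and far, which is why it stomps over any elements you set by hand.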
Now, in order for this to work, you either have to know what you're doing, or get really lucky. Pasting these numbers from somewhere else and getting it to work is little short of winning the lottery. Best case scenario, your scale is all wrong and you're clipping your scene. Worst case, you've mixed in a completely different matrix with a different XYZ convention, and there's no way you'll get it to work, or at least make sense of it.
Out of curiosity, what are you trying to do? Are you trying to match your camera to a camera from somewhere else?

Resources