How to speed up Get func in p5.js?

Is there any way to speed up this piece of code (simplified)? On my Dell m4700 laptop it takes 1 minute and 10 seconds to run (the canvas size is 1000x1400 pixels).
pg = createGraphics(1000,1400);
pg.pixelDensity(1);
***
for(j=0;j<pg.height;j++){
for(i=0;i<pg.width;i++){
pg.stroke(cc=pg.get(i,j));
pg.point(i,j+4);
}
}
Without this line,
pg.stroke(cc=pg.get(i,j));
the code executes in milliseconds.
I made another version that runs in 20 seconds, but for some reason the result is slightly different visually:
for(j=0;j<pg.height;j++){
pg.loadPixels();
for(i=0;i<pg.width;i++){
let pi = i + (j * pg.width);
let ri = pi * 4;
let cr = pg.pixels[ri];
let cg = pg.pixels[ri + 1];
let cb = pg.pixels[ri + 2];
let ca = pg.pixels[ri + 3];
pg.stroke(color(cr,cg,cb,ca));
pg.point(i,floor(j+4));
}
}

Big edit:
OK, I misread the question and was thinking of Java's Processing, not p5.js as the OP had properly indicated, so my answer was very wrong. Sorry.
But the same approach exists in p5.js and is faster.
p5.js stores pixels in a 1D array, with 4 slots for each pixel:
[pix1R, pix1G, pix1B, pix1A, pix2R, pix2G, pix2B, pix2A...]
The pixel density also matters.
So the code is different; I believe you are looking for something like this (no pg here, but the thinking is the same):
loadPixels();
let d = pixelDensity();
let imagesize = 4 * (width * d) * ((height) * d);
for (let i = 0; i + 16 < imagesize; i += 4) { // stop 4 pixels early so i + 16 stays inside the array
let j = i + 16; // 4 channels * 4 pixels ahead
pixels[i] = pixels[j];
pixels[i + 1] = pixels[j + 1];
pixels[i + 2] = pixels[j + 2];
pixels[i + 3] = pixels[j + 3];
}
updatePixels();
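For the specific operation in the question (copying each pixel from (i, j) to (i, j+4), i.e. shifting the content down by 4 rows), a minimal sketch of the same pixels-array idea applied to pg could look like this. It assumes pixelDensity(1) as in the question and copies from a snapshot of the original buffer, so the result may differ slightly from the get()/point() loop, which reads the canvas while it is being modified:
pg.loadPixels();                        // read the buffer once
const src = pg.pixels.slice();          // snapshot of the original pixels
const w = pg.width, h = pg.height;      // density is 1, so no scaling needed
for (let j = 4; j < h; j++) {
  for (let i = 0; i < w; i++) {
    const from = 4 * ((j - 4) * w + i); // pixel (i, j-4) in the snapshot
    const to = 4 * (j * w + i);         // pixel (i, j) in the live buffer
    pg.pixels[to] = src[from];
    pg.pixels[to + 1] = src[from + 1];
    pg.pixels[to + 2] = src[from + 2];
    pg.pixels[to + 3] = src[from + 3];
  }
}
pg.updatePixels();                      // write everything back once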
Now, accessing a given area of the array is a little convoluted; here is an example:
//the area data
const area_x = 35;
const area_y = 48;
const width_of_area = 180;
const height_of_area = 200;
//the pixel density
const d = pixelDensity();
loadPixels();
// these 2 outer loops go through every pixel in the area
for (let x = area_x; x < area_x + width_of_area; x++) {
for (let y = area_y; y < area_y + height_of_area; y++) {
//here we go through the density sub-pixels that make up each pixel
for (let i = 0; i < d; i++) {
for (let j = 0; j < d; j++) {
// calculate the index into the 1d array for every pixel:
// 4 values in the array per pixel,
// (y times density) rows of (width times density) values, plus
// (x times density) values into the current row
let index = 4 * ((y * d + j) * width * d + (x * d + i));
// You can assign raw values for rgb color
pixels[index] = 255;
pixels[index + 1] = 30;
pixels[index + 2] = 200;
pixels[index + 3] = 255;
}
}
}
}
updatePixels();
Both of these examples are in the p5.js online editor:
1: https://editor.p5js.org/v-k-/sketches/GGVeZvCk7
2: https://editor.p5js.org/v-k-/sketches/kW9lXyK2n
Hope that helps, and sorry for the previous Processing answer/code.
Cheers

Related

I am writing an app in three.js for a set of particles, and I want to include collisions between the particles. It seems it only checks for collisions once

It seems it only checks for collisions once and the particles lose speed. If anyone has any advice...
function animate() {
var elapsedTime = clock.getElapsedTime();
particlesGeometry.attributes.position.needsUpdate = true;
const paso = 1.5;
for (let i = 0; i < count; i++) {
const i3 = i * 3;
var k = 0;
const x = particlesGeometry.attributes.position.array[i3 + 0];
const y = particlesGeometry.attributes.position.array[i3 + 1];
const z = particlesGeometry.attributes.position.array[i3 + 2];
for (let j=k; j < (count); j++){
const l = 3 * j;
if (l==i3){continue;}
var posx = x - particlesGeometry.attributes.position.array[l + 0];
var posy = y - particlesGeometry.attributes.position.array[l + 1];
var posz = z - particlesGeometry.attributes.position.array[l + 2];
var dist2 = Math.pow(posx,2)+Math.pow(posy,2)+Math.pow(posz,2);
var velMod1 = Math.sqrt(Math.pow(vel[l+0],2)+Math.pow(vel[l+1],2)+Math.pow(vel[l+2],2));
var velMod2 = Math.sqrt(Math.pow(vel[i3+0],2)+Math.pow(vel[i3+1],2)+Math.pow(vel[i3+2],2));
var erre = [posx, posy, posz];
var doterre = erre[0]*erre[0] + erre[1]*erre[1] + erre[2]*erre[2];
var doterreN = Math.sqrt(doterre);
var erreMod = [erre[0]/doterreN, erre[1]/doterreN, erre[2]/doterreN];
if (dist2 <=(velMod1+velMod2)*.05){
var proy1 = erreMod[0]*vel[l+0]+erreMod[1]*vel[l+1]+erreMod[2]*vel[l+2];
var proy2 = erreMod[0]*vel[i3+0]+erreMod[1]*vel[i3+1]+erreMod[2]*vel[i3+2];
vel[l + 0] = vel[l + 0] - proy1*erreMod[0];
vel[l + 1] = vel[l + 1] - proy1*erreMod[1];
vel[l + 2] = vel[l + 2] - proy1*erreMod[2];
vel[i3 + 0] = vel[i3 + 0] - proy2*erreMod[0];
vel[i3 + 1] = vel[i3 + 1] - proy2*erreMod[1];
vel[i3 + 2] = vel[i3 + 2] - proy2*erreMod[2];
particlesGeometry.attributes.color.array[i3+0] = 1;
particlesGeometry.attributes.color.array[i3+1] = 1;
particlesGeometry.attributes.color.array[i3+2] = 1;
particlesGeometry.attributes.color.array[l+0] = 1;
particlesGeometry.attributes.color.array[l+1] = 1;
particlesGeometry.attributes.color.array[l+2] = 1;
}
}
k++;
if (x>=0 || x<=-5){vel[i3+0]=-vel[i3+0]}
if (y>=5 || y<=-5){vel[i3+1]=-vel[i3+1]}
if (z>=5 || z<=-5){vel[i3+2]=-vel[i3+2]}
particlesGeometry.attributes.position.array[i3 + 0] = x + vel[i3+0]*paso;
particlesGeometry.attributes.position.array[i3 + 1] = y + vel[i3+1]*paso;
particlesGeometry.attributes.position.array[i3 + 2] = z + vel[i3+2]*paso;
}
Paso is the step. What I did was invert the velocity component of the colliding particles along the vector that joins them. First I tried the dot function from mathjs for the dot product, but it was too slow, so I coded the dot products myself, mainly for the normalization. The idea is that the collision condition is checked for every pair of particles exactly once.
dist2 <=(velMod1+velMod2)*.05
That means the square of the distance between the particles must be less than or equal to the sum of their speeds times the step. The latter is meant to keep particles from oscillating in the collision zone.
In order to check for collision events I've painted the particles that collided white. The number of white particles should rise as time elapses, but that is not the case when I run the program.
particlesGeometry.attributes.color.array[i3+0] = 1;
particlesGeometry.attributes.color.array[i3+1] = 1;
particlesGeometry.attributes.color.array[i3+2] = 1;
particlesGeometry.attributes.color.array[l+0] = 1;
particlesGeometry.attributes.color.array[l+1] = 1;
particlesGeometry.attributes.color.array[l+2] = 1;
Thank you.
I've tried writing the double iteration in different forms, and the same for the vector operations, but the results were odd.
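For reference, one common way to make sure each pair is checked exactly once is to start the inner index at i + 1 instead of keeping a separate k counter, and to compare the squared distance against a squared contact distance. This is only a sketch of that iteration pattern; the particle radius below is an assumption, not a value taken from the code above:
const positions = particlesGeometry.attributes.position.array;
const radius = 0.1;                     // assumed particle radius
const minDist2 = (2 * radius) ** 2;     // squared distance at contact
for (let i = 0; i < count; i++) {
  const i3 = 3 * i;
  for (let j = i + 1; j < count; j++) { // each (i, j) pair is visited once
    const l = 3 * j;
    const dx = positions[i3] - positions[l];
    const dy = positions[i3 + 1] - positions[l + 1];
    const dz = positions[i3 + 2] - positions[l + 2];
    const dist2 = dx * dx + dy * dy + dz * dz;
    if (dist2 <= minDist2) {
      // resolve the collision between particles i and j here
    }
  }
}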

RGB32 to YUV420p

I'm trying to convert an RGB32 image to YUV420P to record video.
I have an image:
QImage image = QGuiApplication::primaryScreen()->grabWindow(0, rect_x, rect_y, rect_width, rect_height).toImage().convertToFormat(QImage::Format_RGB32);
AVFrame *frame;
and I convert it like this:
for (y = 0; y < c->height; y++) {
QRgb *rowData = (QRgb*)image.scanLine(y);
for (x = 0; x < c->width; x++) {
QRgb pixelData = rowData[x];
int r = qRed(pixelData);
int g = qGreen(pixelData);
int b = qBlue(pixelData);
int y0 = (int)(0.2126 * (float)(r) + 0.7152 * (float)(g) + 0.0722 * (float)(b));
int u = 128 + (int)(-0.09991 * (float)(r) - 0.33609 * (float)(g) + 0.436 * (float)(b));
int v = 128 + (int)(0.615 * (float)(r) - 0.55861 * (float)(g) - 0.05639 * (float)(b));
frame->data[0][y * frame->linesize[0] + x] = y0;
frame->data[1][y / 2 * frame->linesize[1] + x / 2] = u;
frame->data[2][y / 2 * frame->linesize[2] + x / 2] = v;
}
}
but in the resulting image I see artifacts. The text looks blended: http://joxi.ru/eAORRX0u4d46a2
Is this a bug in the conversion algorithm, or something else?
UPD:
for (y = 0; y < c->height; y++) {
QRgb *rowData = (QRgb*)image.scanLine(y);
for (x = 0; x < c->width; x++) {
QRgb pixelData = rowData[x];
int r = qRed(pixelData);
int g = qGreen(pixelData);
int b = qBlue(pixelData);
int y0 = (int)(0.2126 * (float)(r) + 0.7152 * (float)(g) + 0.0722 * (float)(b));
if (y0 < 0)
y0 = 0;
if (y0 > 255)
y0 = 255;
frame->data[0][y * frame->linesize[0] + x] = y0;
}
}
int x_pos = 0;
int y_pos = 0;
for (y = 1; y < c->height; y+=2) {
QRgb *pRow = (QRgb*)image.scanLine(y - 1);
QRgb *sRow = (QRgb*)image.scanLine(y);
for (x = 1; x < c->width; x+=2) {
QRgb pd1 = pRow[x - 1];
QRgb pd2 = pRow[x];
QRgb pd3 = sRow[x - 1];
QRgb pd4 = sRow[x];
int r = (qRed(pd1) + qRed(pd2) + qRed(pd3) + qRed(pd4)) / 4;
int g = (qGreen(pd1) + qGreen(pd2) + qGreen(pd3) + qGreen(pd4)) / 4;
int b = (qBlue(pd1) + qBlue(pd2) + qBlue(pd3) + qBlue(pd4)) / 4;
int u = 128 + (int)(-0.147 * (float)(r) - 0.289 * (float)(g) + 0.436 * (float)(b));
int v = 128 + (int)(0.615 * (float)(r) - 0.515 * (float)(g) - 0.1 * (float)(b));
if (u < 0)
u = 0;
if (u > 255)
u = 255;
if (v < 0)
v = 0;
if (v > 255)
v = 255;
frame->data[1][y_pos * frame->linesize[1] + x_pos] = u;
frame->data[2][y_pos * frame->linesize[2] + x_pos] = v;
x_pos++;
}
x_pos = 0;
y_pos++;
}
This works for me, but it's very slow: 60-70 ms per frame.
The first problem is that you are letting your YUV values go beyond the allowed range (which is even stricter than 0x00..0xFF, though you don't do any capping anyway). See:
Y' values are conventionally shifted and scaled to the range [16, 235] (referred to as studio swing or "TV levels") rather than using the full range of [0, 255] (referred to as full swing or "PC levels"). This confusing practice derives from the MPEG standards and explains why 16 is added to Y' and why the Y' coefficients in the basic transform sum to 220 instead of 255.[8] U and V values, which may be positive or negative, are summed with 128 to make them always positive, giving a studio range of 16–240 for U and V. (These ranges are important in video editing and production, since using the wrong range will result either in an image with "clipped" blacks and whites, or a low-contrast image.)
The second problem is that 4:2:0 means you end up with one Y value for every pixel, and one U and one V value for every four pixels. That is, U and V should be averages over the corresponding pixels, but your loop simply overwrites the values with the U and V of the fourth input pixel, ignoring the previous three.
You tagged the question with ffmpeg, and your previous question is FFmpeg-related too. Note that FFmpeg offers the swscale library, whose sws_scale does the conversion far more efficiently than your loop and any optimizations you could add to it. See these related questions on SO:
avcodec YUV to RGB
Video from pipe->YUV with libAV->RGB with sws_scale->Draw with Qt
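As a rough sketch of that swscale route (assuming QImage::Format_RGB32 corresponds to AV_PIX_FMT_BGRA on a little-endian machine, and that frame is already allocated as a YUV420P frame with the codec context's dimensions):
#include <QImage>
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// Convert the grabbed QImage straight into an already-allocated YUV420P AVFrame.
static void convertWithSwscale(const QImage &image, AVFrame *frame, int width, int height)
{
    SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_BGRA,
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    const uint8_t *srcData[4] = { image.constBits(), nullptr, nullptr, nullptr };
    const int srcStride[4] = { static_cast<int>(image.bytesPerLine()), 0, 0, 0 };
    sws_scale(sws, srcData, srcStride, 0, height, frame->data, frame->linesize);
    sws_freeContext(sws);
}
Called once per captured frame (ideally with the SwsContext created once and reused), this replaces the whole per-pixel loop, which is where most of the speedup comes from.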

How to apply SoftLight blender between two images in OpenCV?

Referring to this post on StackOverflow (and this one too), I take a normal image of a flower, then a white image, and then I apply SoftLight.
These are the images (the flower and the white image):
The result should be something similar to what I get in GIMP:
but what I actually end up with is a white image.
I modified the code in order to put it inside a function, and this is my code:
// function
uint convSoftLight(int A, int B) {
return ((uint)((B < 128)?(2*((A>>1)+64))*((float)B/255):(255-(2*(255-((A>>1)+64))*(float)(255-B)/255))));
}
void function() {
Mat flower = imread("/Users/rafaelruizmunoz/Desktop/flower.jpg");
Mat white_flower = Mat::zeros(Size(flower.cols, flower.rows), flower.type());
Mat mix = Mat::zeros(Size(flower.cols, flower.rows), flower.type());
for (int i = 0; i < white_flower.rows; i++) {
for (int j = 0; j < white_flower.cols; j++) {
white_flower.at<Vec3b>(i,j) = Vec3b(255,255,255);
}
}
imshow("flower", flower);
imshow("mask_white", white_flower);
for (int i = 0; i < mix.rows; i++) {
for (int j = 0; j < mix.cols; j++) {
Vec3b vec = flower.at<Vec3b>(i,j);
vec[0] = convSoftLight(vec[0], 255); // 255 or just the white_flower pixel at (i,j)
vec[1] = convSoftLight(vec[1], 255); // 255 or just the white_flower pixel at (i,j)
vec[2] = convSoftLight(vec[2], 255); // 255 or just the white_flower pixel at (i,j)
mix.at<Vec3b>(i,j) = vec;
}
}
imshow("mix", mix);
}
What am I doing wrong?
Thank you.
EDIT: I've tried flipping the order (convSoftLight(B,A) instead of convSoftLight(A,B)), but nothing happened (black image).
Based on the blend mode definitions, I rewrote my function:
uint convSoftLight(int A, int B) {
float a = (float)A / 255;
float b = (float)B / 255;
float result = 0;
if (b < 0.5)
result = 2 * a * b + pow(a,2) * (1 - 2*b);
else
result = 2 * a * (1-b) + sqrt(a) * (2*b - 1);
return (uint)255* result;
}
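With this formula it also becomes clear why blending with a pure white top layer should not give a white result: for B = 255 we have b = 1 >= 0.5, so result = 2*a*(1-1) + sqrt(a)*(2*1-1) = sqrt(a). In other words, soft light with white simply brightens the bottom image to its square root (in normalized values), which matches the GIMP output rather than a plain white image.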
Here's how soft light might be implemented in Python (with OpenCV and NumPy):
import numpy as np
def applySoftLight(bottom, top, mask):
""" Apply soft light blending
"""
assert all(image.dtype == np.float32 for image in [bottom, top, mask])
blend = np.zeros(bottom.shape, dtype=np.float32)
low = np.where((top < 0.5) & (mask > 0))
blend[low] = 2 * bottom[low] * top[low] + bottom[low] * bottom[low] * (1 - 2 * top[low])
high = np.where((top >= 0.5) & (mask > 0))
blend[high] = 2 * bottom[high] * (1 - top[high]) + np.sqrt(bottom[high]) * (2 * top[high] - 1)
# alpha blending according to mask
result = bottom * (1 - mask) + blend * mask
return result
All matrices must be single-channel 2D matrices converted to type np.float32. Mask is a "layer mask" in the GIMP/Photoshop sense.
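A minimal usage sketch under those assumptions (the file names are placeholders; a plain white top layer and an all-ones mask reproduce the flower-over-white case from the question):
import cv2
import numpy as np

bottom = cv2.imread("flower.jpg").astype(np.float32) / 255.0   # bottom layer in [0, 1]
top = np.ones_like(bottom)                                     # pure white top layer
mask = np.ones(bottom.shape[:2], dtype=np.float32)             # blend everywhere

# blend each BGR channel separately, since the function expects 2D matrices
channels = [applySoftLight(bottom[:, :, c], top[:, :, c], mask) for c in range(3)]
result = (np.clip(cv2.merge(channels), 0.0, 1.0) * 255.0).astype(np.uint8)
cv2.imwrite("soft_light.png", result)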

Obtaining orientation map of fingerprint image using OpenCV

I'm trying to implement the fingerprint image enhancement method by Anil Jain. As a starting point, I encountered some difficulties while extracting the orientation image; I am strictly following the steps described in Section 2.4 of that paper.
So, this is the input image:
And this is after normalization using exactly the same method as in that paper:
I'm expecting to see something like this (an example from the internet):
However, this is what I got when displaying the obtained orientation matrix:
Obviously this is wrong; it also gives non-zero values at points that are zero in the original input image.
This is the code I wrote:
cv::Mat orientation(cv::Mat inputImage)
{
cv::Mat orientationMat = cv::Mat::zeros(inputImage.size(), CV_8UC1);
// compute gradients at each pixel
cv::Mat grad_x, grad_y;
cv::Sobel(inputImage, grad_x, CV_16SC1, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
cv::Sobel(inputImage, grad_y, CV_16SC1, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT);
cv::Mat Vx, Vy, theta, lowPassX, lowPassY;
cv::Mat lowPassX2, lowPassY2;
Vx = cv::Mat::zeros(inputImage.size(), inputImage.type());
Vx.copyTo(Vy);
Vx.copyTo(theta);
Vx.copyTo(lowPassX);
Vx.copyTo(lowPassY);
Vx.copyTo(lowPassX2);
Vx.copyTo(lowPassY2);
// estimate the local orientation of each block
int blockSize = 16;
for(int i = blockSize/2; i < inputImage.rows - blockSize/2; i+=blockSize)
{
for(int j = blockSize / 2; j < inputImage.cols - blockSize/2; j+= blockSize)
{
float sum1 = 0.0;
float sum2 = 0.0;
for ( int u = i - blockSize/2; u < i + blockSize/2; u++)
{
for( int v = j - blockSize/2; v < j+blockSize/2; v++)
{
sum1 += grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
sum2 += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) * (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
}
}
Vx.at<float>(i,j) = sum1;
Vy.at<float>(i,j) = sum2;
double calc = 0.0;
if(sum1 != 0 && sum2 != 0)
{
calc = 0.5 * atan(Vy.at<float>(i,j) / Vx.at<float>(i,j));
}
theta.at<float>(i,j) = calc;
// Perform low-pass filtering
float angle = 2 * calc;
lowPassX.at<float>(i,j) = cos(angle * pi / 180);
lowPassY.at<float>(i,j) = sin(angle * pi / 180);
float sum3 = 0.0;
float sum4 = 0.0;
for(int u = -lowPassSize / 2; u < lowPassSize / 2; u++)
{
for(int v = -lowPassSize / 2; v < lowPassSize / 2; v++)
{
sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
}
}
lowPassX2.at<float>(i,j) = sum3;
lowPassY2.at<float>(i,j) = sum4;
float calc2 = 0.0;
if(sum3 != 0 && sum4 != 0)
{
calc2 = 0.5 * atan(lowPassY2.at<float>(i, j) / lowPassX2.at<float>(i, j)) * 180 / pi;
}
orientationMat.at<float>(i,j) = calc2;
}
}
return orientationMat;
}
I've already searched a lot on the web, but almost all of the results are in Matlab. There are very few using OpenCV, and they didn't help me either. I sincerely hope someone can go through my code and point out any errors. Thank you in advance.
Update
Here are the steps that I followed according to the paper:
Obtain normalized image G.
Divide G into blocks of size w x w (16x16).
Compute the x and y gradients at each pixel (i,j).
Estimate the local orientation of each block centered at pixel (i,j) using equations:
Perform low-pass filtering to remove noise. For that, convert the orientation image into a continuous vector field defined as:
where W is a two-dimensional low-pass filter, and w(phi) x w(phi) is its size, which equals 5.
Finally, compute the local ridge orientation at (i,j) using:
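In plain text, the orientation-estimation and smoothing steps above are commonly written as follows (compare also the corrected formulas in the first answer below):
theta(i,j) = 0.5 * arctan( (sum of 2*dx(u,v)*dy(u,v)) / (sum of (dx(u,v)^2 - dy(u,v)^2)) ), with both sums taken over the w x w block centered at (i,j)
Phi_x(i,j) = cos(2*theta(i,j)), Phi_y(i,j) = sin(2*theta(i,j))
Phi_x' = W * Phi_x, Phi_y' = W * Phi_y (convolution with the 2D low-pass filter W)
O(i,j) = 0.5 * arctan( Phi_y'(i,j) / Phi_x'(i,j) )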
Update2
This is the output of orientationMat after changing the Mat type to CV_16SC1 in the Sobel operation, as Micka suggested:
Maybe it's too late for me to answer, but somebody might still read this later and have the same problem.
I've been working for a while on the same algorithm, the same method you posted... But there are some writing errors in the way the paper was written (I guess). After fighting a lot with the equations, I found these errors by looking at other similar works.
Here is what worked for me...
Vy(i, j) = 2*dx(u,v)*dy(u,v)
Vx(i,j) = dx(u,v)^2 - dy(u,v)^2
O(i,j) = 0.5*arctan(Vy(i,j)/Vx(i,j))
(Excuse me, I wasn't able to post images, so I wrote out the modified equations. Remember that "u" and "v" are the positions of the summation across the blockSize-by-blockSize window.)
The first and most important thing (obviously) is the equations. I saw that in different works these expressions were really different, even though every one of them talked about the same algorithm of Hong et al.
The key is finding the least mean square (the first 3 equations) of the gradients (Vx and Vy); I provided the corrected formulas above for this step. Then you can compute the angle theta for the non-overlapping window (16x16 size is recommended in the paper). After that, the algorithm says you must calculate the magnitude of the doubled angle in the "x" and "y" directions (Phi_x and Phi_y).
Phi_x(i,j) = V(i,j) * cos(2*O(i,j))
Phi_y(i,j) = V(i,j) * sin(2*O(i,j))
The magnitude is just:
V = sqrt(Vx(i,j)^2 + Vy(i,j)^2)
Note that the related work doesn't mention that you have to use the gradient magnitude, but it makes sense (to me) to do it. After all these corrections you can apply the low-pass filter to Phi_x and Phi_y; I used a simple 5x5 mask to average these magnitudes (something like medianBlur() in OpenCV).
The last thing is to calculate the new angle, that is, the average of the 25 neighbors in the O(i,j) image; for this you just have to compute:
O'(i,j) = 0.5*arctan(Phi_y/Phi_x)
We're almost there... All of this just to calculate the angle of the NORMAL VECTOR TO THE RIDGE DIRECTIONS (O'(i,j)) in the blockSize-by-blockSize non-overlapping window. What does that mean? It means that the angle we just calculated is perpendicular to the ridges; in simple words, we just calculated the angle of the ridges plus 90 degrees. To get the angle we need, we just have to subtract 90° from the obtained angle.
To draw the lines we need an initial point (X0, Y0) and a final point (X1, Y1). For that, imagine a circle centered on (X0, Y0) with a radius of "r":
x0 = i + blockSize/2
y0 = j + blockSize/2
r = blockSize/2
Note that we add i and j to the first coordinates because the window is moving, so we draw the line starting from the center of the current non-overlapping window; we can't just use the center of one fixed window.
Then, to calculate the end coordinates for drawing a line, we just have to use a right triangle, so...
X1 = r*cos(O'(i,j)-90°)+X0
Y1 = r*sin(O'(i,j)-90°)+Y0
X2 = X0-r*cos(O'(i,j)-90°)
Y2 = Y0-r*sin(O'(i,j)-90°)
Then just use the OpenCV line function, where the initial point is (X0, Y0) and the final point is (X1, Y1). In addition, I drew the 16x16 windows and computed the points opposite X1 and Y1 (X2 and Y2) to draw a line across the entire window.
Hope this helps somebody.
My results...
Main function:
Mat mat = imread("nwmPa.png",0);
mat.convertTo(mat, CV_32F, 1.0/255, 0);
Normalize(mat);
int blockSize = 6;
int height = mat.rows;
int width = mat.cols;
Mat orientationMap;
orientation(mat, orientationMap, blockSize);
Normalize:
void Normalize(Mat & image)
{
Scalar mean, dev;
meanStdDev(image, mean, dev);
double M = mean.val[0];
double D = dev.val[0];
for(int i(0) ; i<image.rows ; i++)
{
for(int j(0) ; j<image.cols ; j++)
{
if(image.at<float>(i,j) > M)
image.at<float>(i,j) = 100.0/255 + sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
else
image.at<float>(i,j) = 100.0/255 - sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
}
}
}
Orientation map:
void orientation(const Mat &inputImage, Mat &orientationMap, int blockSize)
{
Mat fprintWithDirectionsSmoo = inputImage.clone();
Mat tmp(inputImage.size(), inputImage.type());
Mat coherence(inputImage.size(), inputImage.type());
orientationMap = tmp.clone();
//Gradiants x and y
Mat grad_x, grad_y;
// Sobel(inputImage, grad_x, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
// Sobel(inputImage, grad_y, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);
Scharr(inputImage, grad_x, CV_32F, 1, 0, 1, 0);
Scharr(inputImage, grad_y, CV_32F, 0, 1, 1, 0);
//Vector vield
Mat Fx(inputImage.size(), inputImage.type()),
Fy(inputImage.size(), inputImage.type()),
Fx_gauss,
Fy_gauss;
Mat smoothed(inputImage.size(), inputImage.type());
// Local orientation for each block
int width = inputImage.cols;
int height = inputImage.rows;
int blockH;
int blockW;
//select block
for(int i = 0; i < height; i+=blockSize)
{
for(int j = 0; j < width; j+=blockSize)
{
float Gsx = 0.0;
float Gsy = 0.0;
float Gxx = 0.0;
float Gyy = 0.0;
//for check bounds of img
blockH = ((height-i)<blockSize)?(height-i):blockSize;
blockW = ((width-j)<blockSize)?(width-j):blockSize;
//average over the W x W block
for ( int u = i ; u < i + blockH; u++)
{
for( int v = j ; v < j + blockW ; v++)
{
Gsx += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) - (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
Gsy += 2*grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
Gxx += grad_x.at<float>(u,v)*grad_x.at<float>(u,v);
Gyy += grad_y.at<float>(u,v)*grad_y.at<float>(u,v);
}
}
float coh = sqrt(pow(Gsx,2) + pow(Gsy,2)) / (Gxx + Gyy);
//smoothed
float fi = 0.5*fastAtan2(Gsy, Gsx)*CV_PI/180;
Fx.at<float>(i,j) = cos(2*fi);
Fy.at<float>(i,j) = sin(2*fi);
//fill blocks
for ( int u = i ; u < i + blockH; u++)
{
for( int v = j ; v < j + blockW ; v++)
{
orientationMap.at<float>(u,v) = fi;
Fx.at<float>(u,v) = Fx.at<float>(i,j);
Fy.at<float>(u,v) = Fy.at<float>(i,j);
coherence.at<float>(u,v) = (coh<0.85)?1:0;
}
}
}
} ///for
GaussConvolveWithStep(Fx, Fx_gauss, 5, blockSize);
GaussConvolveWithStep(Fy, Fy_gauss, 5, blockSize);
for(int m = 0; m < height; m++)
{
for(int n = 0; n < width; n++)
{
smoothed.at<float>(m,n) = 0.5*fastAtan2(Fy_gauss.at<float>(m,n), Fx_gauss.at<float>(m,n))*CV_PI/180;
if((m%blockSize)==0 && (n%blockSize)==0){
int x = n;
int y = m;
int ln = sqrt(2*pow(blockSize,2))/2;
float dx = ln*cos( smoothed.at<float>(m,n) - CV_PI/2);
float dy = ln*sin( smoothed.at<float>(m,n) - CV_PI/2);
arrowedLine(fprintWithDirectionsSmoo, Point(x, y+blockH), Point(x + dx, y + blockW + dy), Scalar::all(255), 1, CV_AA, 0, 0.06*blockSize);
// qDebug () << Fx_gauss.at<float>(m,n) << Fy_gauss.at<float>(m,n) << smoothed.at<float>(m,n);
// imshow("Orientation", fprintWithDirectionsSmoo);
// waitKey(0);
}
}
}///for2
normalize(orientationMap, orientationMap,0,1,NORM_MINMAX);
imshow("Orientation field", orientationMap);
orientationMap = smoothed.clone();
normalize(smoothed, smoothed, 0, 1, NORM_MINMAX);
imshow("Smoothed orientation field", smoothed);
imshow("Coherence", coherence);
imshow("Orientation", fprintWithDirectionsSmoo);
}
It seems I haven't forgotten anything :)
I have read your code thoroughly and found that you have made a mistake while calculating sum3 and sum4:
sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
Instead of inputImage, you should use the low-pass filter.
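As a sketch of that correction, assuming W is a simple 5x5 averaging kernel: smooth the two Phi fields themselves with W (for example via filter2D) instead of multiplying by inputImage, and then take the angle from the smoothed fields:
#include <opencv2/opencv.hpp>

// Smooth the doubled-angle vector field (phiX, phiY) with a low-pass kernel W.
// A 5x5 box filter stands in for W here; the paper only requires a 2D low-pass filter.
static void smoothOrientationField(const cv::Mat &phiX, const cv::Mat &phiY,
                                   cv::Mat &phiXSmooth, cv::Mat &phiYSmooth,
                                   int lowPassSize = 5)
{
    cv::Mat W = cv::Mat::ones(lowPassSize, lowPassSize, CV_32F)
                / static_cast<float>(lowPassSize * lowPassSize);
    cv::filter2D(phiX, phiXSmooth, CV_32F, W);   // replaces the sum3 loop
    cv::filter2D(phiY, phiYSmooth, CV_32F, W);   // replaces the sum4 loop
}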

GL_TRIANGLE_STRIP - Generating a grid for a single draw call (degenerate triangles)

I need to create a grid ready for GL_TRIANGLE_STRIP rendering.
My grid is just a:
Position ( self explanatory )
Size ( the x,y size )
Resolution ( the spacing between each vertex in x,y )
Here is the method used to create verts/indices and return them:
int iCols = vSize.x / vResolution.x;
int iRows = vSize.y / vResolution.y;
// Create Vertices
for(int y = 0; y < iRows; y ++)
{
for(int x = 0; x < iCols; x ++)
{
float startu = (float)x / (float)vSize.x;
float startv = (float)y / (float)vSize.y;
tControlVertex.Color = vColor;
tControlVertex.Position = CVector3(x * vResolution.x,y * vResolution.y,0);
tControlVertex.TexCoord = CVector2(startu, startv - 1.0 );
vMeshVertices.push_back(tControlVertex);
}
}
// Create Indices
rIndices.clear();
for (int r = 0; r < iRows - 1; r++)
{
rIndices.push_back(r*iCols);
for (int c = 0; c < iCols; c++)
{
rIndices.push_back(r*iCols+c);
rIndices.push_back((r+1)*iCols+c);
}
rIndices.push_back((r + 1) * iCols + (iCols - 1));
}
And to visualise that, a few examples first.
1) Size 512x512, Resolution 64x64: it should be made of 8 x 8 quads, but I get only 7x7.
2) Size 512x512, Resolution 128x128: it should be made of 4 x 4 quads, but I get only 3x3.
3) Size 128x128, Resolution 8x8: it should be made of 16 x 16 quads, but I get only 15x15.
So as you can see, I am missing the last row and last column somewhere. Where am I going wrong?
Short answer: make your for-loop tests <=, as compared to just <, when you're generating your vertices and indices.
The issue is that the computation of your iRows and iCols variables counts the number of primitives of a particular resolution for a range of pixels; call that n. For a triangle strip of n primitives, you need n+2 vertices, so you're just missing the last "row" and "column" of vertices.
The index generation loop needs to be:
for ( int r = 0; r < iRows; ++r ) { // the inner loop already indexes row r+1, so stop one row early
for ( int c = 0; c <= iCols; ++c ) {
rIndices.push_back( c + r + r*iCols );
rIndices.push_back( c + r + (r+1)*iCols + 1 );
}
}
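For completeness, here is a sketch of the index layout the question title asks about: a single GL_TRIANGLE_STRIP for the whole grid, with duplicated indices acting as degenerate triangles between rows. It assumes (iCols + 1) x (iRows + 1) vertices stored row by row, i.e. the <= vertex loops described above:
#include <vector>

// Build indices for a single GL_TRIANGLE_STRIP covering iCols x iRows quads.
// The vertex at (row r, column c) is assumed to live at index r * (iCols + 1) + c.
std::vector<unsigned int> buildGridStripIndices(int iCols, int iRows)
{
    std::vector<unsigned int> indices;
    const unsigned int stride = iCols + 1;               // vertices per row
    for (int r = 0; r < iRows; ++r)
    {
        if (r > 0)
            indices.push_back(r * stride);               // degenerate: repeat first index of the row
        for (int c = 0; c <= iCols; ++c)
        {
            indices.push_back(r * stride + c);           // vertex on the current row
            indices.push_back((r + 1) * stride + c);     // vertex on the next row
        }
        if (r < iRows - 1)
            indices.push_back((r + 1) * stride + iCols); // degenerate: repeat last index of the row
    }
    return indices;
}
The whole grid can then be drawn with one glDrawElements(GL_TRIANGLE_STRIP, indices.size(), GL_UNSIGNED_INT, ...) call.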
