How to draw a circle in OpenGL ES

Here is the part of my code that should show a circle on screen, but unfortunately the circle does not appear.
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
glColor3f(0.0f,1.0f,0.0f);
glBegin(GL_LINE_LOOP);
const float DEG2RAD = 3.14159/180;
for (int i=0; i < 360; i++)
{
float degInRad = i*DEG2RAD;
glVertex2f(cos(degInRad)*8,sin(degInRad)*8);
}
glEnd();
glFlush();
I don't understand it; the code seems to look OK, but the circle is not coming on screen.

Your circle is too big. The default viewport maps to coordinates in the range [(-1, -1), (1, 1)].
BTW, you don't need 360 segments. About 30 is usually adequate, depending on how smooth you want it.
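For illustration, a minimal sketch with the vertices kept inside that range (radius 0.8 instead of 8, and 30 segments), written in the same desktop-GL immediate-mode style as the question; note that actual OpenGL ES has no glBegin/glEnd and would need a vertex array instead:
const float DEG2RAD = 3.14159f / 180.0f;
const float radius = 0.8f;   // stays inside the default [-1, 1] range
const int segments = 30;     // ~30 segments is usually smooth enough
glColor3f(0.0f, 1.0f, 0.0f);
glBegin(GL_LINE_LOOP);
for (int i = 0; i < segments; i++)
{
    float angle = i * (360.0f / segments) * DEG2RAD;
    glVertex2f(cosf(angle) * radius, sinf(angle) * radius);
}
glEnd();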

Related

Let PShapes in an array rotate on their own axes in Processing

I have this code that basically reads each pixel of an image and redraws it with different shapes. All shapes will get faded in using a sin() wave.
Now I want to rotate every "Pixelshape" around its own axis (shapeMode(CENTER)) while it is faded in, but the translate function gives me a headache in this nested setup.
Here is the code so far:
void setup() {
size(1080, 1350);
shapeMode(CENTER);
img = loadImage("loremipsum.png");
…
}
void draw() {
background(123);
for (int gridX = 0; gridX < img.width; gridX++) {
for (int gridY = 0; gridY < img.height; gridY++) {
// grid position + tile size
float tileWidth = width / (float)img.width;
float tileHeight = height / (float)img.height;
float posX = tileWidth*gridX;
float posY = tileHeight*gridY;
// get current color
color c = img.pixels[gridY*img.width+gridX];
// greyscale conversion
int greyscale = round(red(c)*0.222+green(c)*0.707+blue(c)*0.071);
int gradientToIndex = round(map(greyscale, 0, 255, 0, shapeCount-1));
//FADEIN
float wave = map(sin(radians(frameCount*4)), -1, 1, 0, 2);
//translate(HEADACHE);
rotate(radians(wave));
shape(shapes[gradientToIndex], posX, posY, tileWidth * wave, tileHeight * wave);
}
}
}
I have tried many calculations, but they just make my sketch explode.
One that worked in another sketch, where I tried basically the same thing but just in a loop, was (written equivalently):
translate(posX + tileWidth/2, posY + tileHeight/2);
I think I just don't get the matrix handling right. How can I translate each shape to its intended place?
Thank you very much @Rabbid76 – at first I just pasted in your idea and it went crazy – then I added pushMatrix() and popMatrix() – it turned out your translate() code was in fact right!
Then I had to change the x and y location where every shape is drawn to 0, 0.
And this is it! Now it works!
See the code here:
float wave = map(sin(radians(frameCount*4)), -1, 1, 0, 2);
pushMatrix();
translate(posX + tileWidth/2, posY + tileHeight/2);
rotate(radians(wave*180));
shape(shapes[gradientToIndex], 0, 0, tileWidth*wave , tileHeight*wave );
popMatrix();
PERFECT! Thank you so much!
rotate() defines a rotation matrix and multiplies the current matrix by it; rotate() therefore always rotates around the origin (0, 0).
You have to center the rectangle around (0, 0), rotate it, and move the rotated rectangle to the desired position with translate().
Since translate() and rotate() multiply the current matrix by a new matrix, you must store and restore the matrix with pushMatrix() and popMatrix(), respectively.
The center of a tile is (posX + tileWidth/2, posY + tileHeight/2):
pushMatrix();
translate(posX + tileWidth/2, posY + tileHeight/2);
rotate(radians(wave));
shape(shapes[gradientToIndex],
-tileWidth*wave/2, -tileHeight*wave/2,
tileWidth * wave, tileHeight * wave);
popMatrix();

P3D camera orientation

I've got a big sphere. There is a red dot that moves around the sphere. I want to follow that red dot as it moves around the sphere. So my camera has to move with the red dot. But there is a problem. Right now, what I'm experiencing is what is shown in exhibit B. I want my animation to achieve the point of view shown in exhibit A.
Here is the code I have so far. I think it's pretty simple. I have 3 variables that control where my eye is, and I have 3 variables that control where the target is. The red dot is also located at the target position. I added 2 planes in x-y which helped me not get too confused as the thing was spinning.
Here is a fiddle:
https://jsfiddle.net/da8nza6y/
float radius = 1000;
float view_elevation = 1500;
float target_elevation = 300;
float x_eye;
float y_eye;
float z_eye;
float x_aim;
float y_aim;
float z_aim;
float h;
float theta;
void setup() {
size(600, 600, P3D);
theta = 0;
h = 30;
}
void draw() {
theta += 0.5;
theta = theta%360;
x_eye = (radius+view_elevation)*cos(theta*PI/180);
y_eye = 0;
z_eye = (radius+view_elevation)*sin(theta*PI/180);
x_aim = (radius+target_elevation)*cos((theta+h)*PI/180);
y_aim = 0;
z_aim = (radius+target_elevation)*sin((theta+h)*PI/180);
camera(x_eye, y_eye, z_eye, x_aim, y_aim, z_aim, 0, 0, -1);
background(255);
// the red dot
pushMatrix();
translate(x_aim, y_aim, z_aim);
fill(255, 0, 0, 120);
noStroke();
sphere(10);
popMatrix();
// the big sphere
noStroke();
fill(205, 230, 255);
lights();
sphere(radius);
// the orange plane
pushMatrix();
translate(0, 0, 10);
fill(255, 180, 0, 120);
rect(-2000, -2000, 4000, 4000);
popMatrix();
// the green plane
pushMatrix();
translate(0, 0, -10);
fill(0, 180, 0, 120);
rect(-2000, -2000, 4000, 4000);
popMatrix();
}
So the pickle is that the moment the red dot (whose location in the x-z plane is given by the angle (theta+h) and the distance (radius+target_elevation) from the origin) crosses the x-y plane, everything seems to get flipped upside-down and backwards.
Now, I have tried to control the last 3 variables in the camera() function but I'm getting confused. The documentation for the function is here:
https://processing.org/reference/camera_.html
Can anyone see a solution to this problem?
Also, I'm sure I could just rotate the sphere (which I can do) and not have these problems, but I'm sure where I'm going with this animation and I feel like there will be things to come that will be easier with this method. Though I could be mistaken.
I believe I've solved my own problem.
I've added the following lines in draw, before calling the camera() function:
if ((x_eye- x_aim) < 0) {
z_orientation = 1;
} else {
z_orientation = -1;
}
I noticed that it wasn't (theta+h) that was triggering the flip, but the relative positions of the view and target.
Here is an updated fiddle:
https://jsfiddle.net/da8nza6y/1/
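Presumably z_orientation (declared as a float with the other globals) then replaces the constant -1 as the up vector's z component in the camera() call; the updated fiddle shows the full code:
camera(x_eye, y_eye, z_eye, x_aim, y_aim, z_aim, 0, 0, z_orientation);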

Pupil detection using OpenCV with an infrared image

I am trying to detect the pupil in an infrared image and calculate the center of the pupil.
In my setup, I used a camera sensitive to infrared light, added a visible-light filter to the lens, and placed two infrared LEDs around the camera.
However, the image I got is blurry and not very clear; this may be caused by the low resolution of the camera, whose maximum is about 700x500.
In the processing, the first thing I did was convert this RGB image to a gray image; however, the result is terrible, and nothing shows up in the results.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
// load image
cv::Mat src = cv::imread("11_13_2013_15_36_09.jpg");
// check for a failed load before trying to display the image
if (src.empty())
{
std::cout << "failed to find the image";
return -1;
}
cv::namedWindow("original");
cv::imshow("original", src);
cv::waitKey(10);
// Invert the source image and convert to grayscale
cv::Mat gray;
cv::cvtColor(~src, gray, CV_BGR2GRAY);
cv::imshow("image1", gray);
cv::waitKey(10);
// Convert to binary image by thresholding it
cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);
cv::imshow("image2", gray);
cv::waitKey(10);
// Find all contours
std::vector<std::vector<cv::Point>>contours;
cv::findContours(gray.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Fill holes in each contour
cv::drawContours(gray, contours, -1, CV_RGB(255, 255, 255), -1);
cv::imshow("image3", gray);
cv::waitKey(10);
for (int i = 0; i < contours.size(); i++)
{
double area = cv::contourArea(contours[i]);
cv::Rect rect = cv::boundingRect(contours[i]);
int radius = rect.width / 2;
// If contour is big enough and has a round shape
// Then it is the pupil
if (area >= 800 &&
std::abs(1 - ((double)rect.width / (double)rect.height)) <= 0.3 &&
std::abs(1 - (area / (CV_PI * std::pow(radius, 2)))) <= 0.3)
{
cv::circle(src, cv::Point(rect.x + radius, rect.y + radius), radius, CV_RGB(255, 0, 0), 2);
}
}
cv::imshow("image", src);
cvWaitKey(0);
}
When the original image was converted, the resulting gray image was terrible; does anyone know a better solution for this? I am completely new to this. As for the rest of the code for finding the circle, if you have any comments, just tell me. I also need to extract the positions of the two glints (the light points) in the original image; does anyone have an idea?
Thanks.
Try equalizing and filtering your source image before thresholding it ;)
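A minimal sketch of that suggestion, slotted in between the grayscale conversion and the threshold; equalizeHist and GaussianBlur are standard OpenCV calls, and the kernel size here is just a starting value to tune:
// spread out the gray levels so the (inverted, bright) pupil separates better
cv::equalizeHist(gray, gray);
// smooth sensor noise from the low-resolution camera before thresholding
cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);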

OpenGL image quality (blurred)

I use OpenGL to create a slideshow app. Unfortunately the images rendered with OpenGL look blurred compared to the GNOME image viewer.
Here are the two screenshots:
(opengl) http://tinyurl.com/dxmnzpc
(image viewer) http://tinyurl.com/8hshv2a
and this is the base image:
http://tinyurl.com/97ho4rp
The image has the native size of my screen (2560x1440).
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/freeglut.h>
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>
#include <unistd.h>
GLuint text = 0;
GLuint load_texture(const char* file) {
SDL_Surface* surface = IMG_Load(file);
GLuint texture;
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
SDL_PixelFormat *format = surface->format;
printf("%d %d \n",surface->w,surface->h);
if (format->Amask) {
gluBuild2DMipmaps(GL_TEXTURE_2D, 4,surface->w, surface->h, GL_RGBA,GL_UNSIGNED_BYTE, surface->pixels);
} else {
gluBuild2DMipmaps(GL_TEXTURE_2D, 3,surface->w, surface->h, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
}
SDL_FreeSurface(surface);
return texture;
}
void display(void) {
GLdouble offset_x = -1;
GLdouble offset_y = -1;
int p_viewport[4];
glGetIntegerv(GL_VIEWPORT, p_viewport);
GLfloat gl_width = p_viewport[2];//width(); // GL context size
GLfloat gl_height = p_viewport[3];//height();
glClearColor (0.0,2.0,0.0,1.0);
glClear (GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glEnable( GL_TEXTURE_2D );
glTranslatef(0,0,0);
glBindTexture( GL_TEXTURE_2D, text);
gl_width=2; gl_height=2;
glBegin(GL_QUADS);
glTexCoord2f(0, 1); //4
glVertex2f(offset_x, offset_y);
glTexCoord2f(1, 1); //3
glVertex2f(offset_x + gl_width, offset_y);
glTexCoord2f(1, 0); // 2
glVertex2f(offset_x + gl_width, offset_y + gl_height);
glTexCoord2f(0, 0); // 1
glVertex2f(offset_x, offset_y + gl_height);
glEnd();
glutSwapBuffers();
}
int main(int argc, char **argv) {
glutInit(&argc,argv);
glutInitDisplayMode (GLUT_DOUBLE);
glutGameModeString("2560x1440:24");
glutEnterGameMode();
text = load_texture("/tmp/raspberry/out.jpg");
glutDisplayFunc(display);
glutMainLoop();
}
UPDATED TRY
void display(void)
{
GLdouble texture_x = 0;
GLdouble texture_y = 0;
GLdouble texture_width = 0;
GLdouble texture_height = 0;
glViewport(0,0,width,height);
glClearColor (0.0,2.0,0.0,1.0);
glClear (GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 1.0, 1.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
//Do pixel calculations
texture_x = ((2.0*1-1) / (2*width));
texture_y = ((2.0*1-1) / (2*height));
texture_width=((2.0*width-1)/(2*width));
texture_height=((2.0*height-1)/(2*height));
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0,0,0);
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, text);
glBegin(GL_QUADS);
glTexCoord2f(texture_x, texture_height); //4
glVertex2f(0, 0);
glTexCoord2f(texture_width, texture_height); //3
glVertex2f(width, 0);
glTexCoord2f(texture_width, texture_y); // 2
glVertex2f(width,height);
glTexCoord2f(texture_y, texture_y); // 1
glVertex2f(0,height);
glEnd();
glutSwapBuffers();
}
What you are running into is a variation of the fencepost problem, which arises from how OpenGL deals with texture coordinates. OpenGL does not address a texture's pixels (texels) directly, but uses the image data as the support for an interpolation that in fact covers a wider range than the image's pixels. So the texture coordinates 0 and 1 don't hit the centers of the left-/bottom-most and right-/top-most pixels, but in fact go a little further, out to the texture's edges.
Let's say the texture is 8 pixels wide:
|   0   |   1   |   2   |   3   |   4   |   5   |   6   |   7   |
^       ^       ^       ^       ^       ^       ^       ^       ^
0.0     |       |       |       |       |       |       |     1.0
|       |       |       |       |       |       |       |       |
0/8     1/8     2/8     3/8     4/8     5/8     6/8     7/8     8/8
The digits denote the texture's pixels, and the bars mark the edges of the texture and, in the case of nearest filtering, the borders between pixels. You, however, want to hit the pixels' centers. So you're interested in the texture coordinates
(0/8 + 1/8)/2 = 1 / (2 * 8)
(1/8 + 2/8)/2 = 3 / (2 * 8)
...
(7/8 + 8/8)/2 = 15 / (2 * 8)
Or, more generally, for pixel i in an N-pixel-wide texture the proper texture coordinate is
(2i + 1)/(2N)
However, if you want to perfectly align your texture with the screen pixels, remember that what you specify as vertex coordinates are not a quad's pixels but its edges, which, depending on the projection, may align with screen pixel edges rather than centers, and may thus require other texture coordinates.
Note that if you follow this, regardless of your filtering mode and mipmaps, your image will always look clear and crisp, because the interpolation exactly hits your sampling support, which is your input image. Switching to another filtering mode like GL_NEAREST may look right at first glance, but it's actually not correct, because it will alias your samples. So don't do it.
There are a few other issues with your code as well, but they're not as big a problem. First and foremost, you're choosing a rather arcane way to obtain the viewport dimensions. You're (probably without further thought) exploiting the fact that the default OpenGL viewport is the size of the window the context has been created with. You're using SDL, which has the side effect that this approach won't bite you as long as you stick with SDL-1. But switch to any other framework that may create the context via a proxy drawable, and you'll run into a problem.
The canonical way is usually to retrieve the window size from the windowing system and then set the viewport as one of the first actions in the display function.
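A minimal sketch of that, noting that the posted main() actually creates its window with GLUT rather than SDL, so the GLUT query functions are used here (an adaptation, not the answer's exact code):
void display(void) {
    // ask the windowing system (GLUT here) for the current window size
    // and set the viewport explicitly instead of relying on the default
    int win_width  = glutGet(GLUT_WINDOW_WIDTH);
    int win_height = glutGet(GLUT_WINDOW_HEIGHT);
    glViewport(0, 0, win_width, win_height);
    /* ... rest of the drawing code ... */
}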
Another issue is your use of gluBuild2DMipmaps, because a) you don't want mipmaps here and b) since OpenGL-2 you can upload texture images of arbitrary size (i.e. you're not limited to power-of-two dimensions), which completely eliminates the need for gluBuild2DMipmaps. So don't use it. Just use glTexImage2D directly and switch to a non-mipmapping filtering mode.
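A sketch of what the upload in load_texture could look like without mipmaps, assuming the same SDL_Surface fields as in the question and linear (non-mipmapped) filtering:
glBindTexture(GL_TEXTURE_2D, texture);
// no mipmaps: plain linear filtering for both minification and magnification
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
if (format->Amask) {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
} else {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
}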
Update due to question update
The way you calculate the texture coordinates still doesn't look right. It seems like you're starting to count at 1. Texture pixel indices are 0-based, so…
This is how I'd do it:
Assuming the projection maps the viewport
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, win_width, 0, win_height, -1, 1);
glViewport(0, 0, win_width, win_height);
we calculate the texture coordinates as
//Do pixel calculations
glBindTexture( GL_TEXTURE_2D, text);
GLint tex_width, tex_height;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &tex_width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &tex_height);
GLdouble s1, t1, s2, t2;
s1 = 1. / (2*tex_width);                  // (2*0 + 1) / (2*tex_width)
t1 = 1. / (2*tex_height);
s2 = (2.*(tex_width -1) + 1) / (2*tex_width);
t2 = (2.*(tex_height-1) + 1) / (2*tex_height);
Note that tex_width and tex_height give the number of pixels in each direction, but pixel indices are 0-based, so you have to subtract 1 from them for the texture coordinate mapping. Hence we also use a constant 1 in the numerator for the s1 and t1 coordinates.
The rest looks okay, given the projection you choose
glEnable( GL_TEXTURE_2D );
glBegin(GL_QUADS);
glTexCoord2f(s1, t1); //4
glVertex2f(0, 0);
glTexCoord2f(s2, t1); //3
glVertex2f(tex_width, 0);
glTexCoord2f(s2, t2); // 2
glVertex2f(tex_width,tex_height);
glTexCoord2f(s1, t2); // 1
glVertex2f(0,tex_height);
glEnd();
I'm not sure if this is really the problem, but I think you don't need/want mipmaps here. Have you tried using glTexImage2D instead of gluBuild2DMipmaps in combination with nearest neighbor filtering (glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN/MAG_FILTER, GL_NEAREST);)?

OpenGL ES glRotatef performing shear instead of rotate?

I am able to draw a sprite on the screen of an iPhone, but when I try to rotate it I am getting some weird results. It seems to be stretching the sprite in the y direction more the closer the sprite gets to pointing down the y-axis (90 and 270 degrees). It displays correctly when pointing down the x and -x axes (0 and 180 degrees). It is basically like it is shearing instead of rotating. Here are the essentials of the code (projection matrix is ortho):
glPushMatrix();
glLoadIdentity();
glTranslatef( position.x, position.y, -1.0f );
glRotatef( rotation, 0.0f, 0.0f, 1.0f );
glScalef( halfSize.x, halfSize.y, 1.0f );
vertices[0] = 1.0f;
vertices[1] = 1.0f;
vertices[2] = 0.0f;
vertices[3] = 1.0f;
vertices[4] = -1.0f;
vertices[5] = 0.0f;
vertices[6] = -1.0f;
vertices[7] = 1.0f;
vertices[8] = 0.0f;
vertices[9] = -1.0f;
vertices[10] = -1.0f;
vertices[11] = 0.0f;
glVertexPointer( 3, GL_FLOAT, 0, vertices );
glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );
glPopMatrix();
Can anybody explain to me how to fix this please?
halfSize is just half the x and y extent of the sprite; removing the glScalef call does not make any difference.
Here is my matrix setup:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 320, 480, 0, 0.01, 5);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
OK, hopefully this screenshot will demonstrate what's happening:
If you are scaling by the same amount in the x and y directions, then your projection is causing the distortion.
Just a hunch, but maybe try swapping the 320 and 480 in your Ortho projection (in case the X and Y on the iPhone are swapped).
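That hunch would amount to changing the projection setup to something like this (just a guess to try, not a confirmed fix):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 480, 320, 0, 0.01, 5);   // 320 and 480 swapped relative to the original
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();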
