OpenGL Orthographic Projection and Translate - opengl-es

The code below draws a rectangle in 2D screen space using OpenGL ES 2. How do I move the drawing of the rectangle by 1 pixel to the right without modifying its vertices?
Specifically, I am trying to move the coordinates 0.5 pixels to the right. I had to do this previously with GLES 1.x because lines would not draw in the correct place unless I applied a glTranslate() of 0.5f.
I'm confused about the use of glm::translate() in the code below.
If I attempt a translate of 0.5f, the whole rectangle moves from the left of the screen to the middle - a jump of about 200 pixels.
I get the same result whether I do a glm::translate on the Model or the View matrix.
Is the order of the matrix multiplication wrong and what should it be?
short g_RectFromTriIndices[] =
{
0, 1, 2,
0, 2, 3
}; // The order of vertex rendering.
GLfloat g_AspectRatio = 1.0f;
//--------------------------------------------------------------------------------------------
// LoadTwoTriangleVerticesForRect()
//--------------------------------------------------------------------------------------------
void LoadTwoTriangleVerticesForRect( GLfloat *pfRectVerts, float fLeft, float fTop, float fWidth, float fHeight )
{
pfRectVerts[ 0 ] = fLeft;
pfRectVerts[ 1 ] = fTop;
pfRectVerts[ 2 ] = 0.0;
pfRectVerts[ 3 ] = fLeft + fWidth;
pfRectVerts[ 4 ] = fTop;
pfRectVerts[ 5 ] = 0.0;
pfRectVerts[ 6 ] = fLeft + fWidth;
pfRectVerts[ 7 ] = fTop + fHeight;
pfRectVerts[ 8 ] = 0.0;
pfRectVerts[ 9 ] = fLeft;
pfRectVerts[ 10 ] = fTop + fHeight;
pfRectVerts[ 11 ] = 0.0;
}
//--------------------------------------------------------------------------------------------
// Draw()
//--------------------------------------------------------------------------------------------
void Draw( void )
{
GLfloat afRectVerts[ 12 ];
//LoadTwoTriangleVerticesForRect( afRectVerts, 0, 0, g_ScreenWidth, g_ScreenHeight );
LoadTwoTriangleVerticesForRect( afRectVerts, 50, 50, 100, 100 );
// Correct for aspect ratio so squares ARE squares and not rectangular stretchings..
g_AspectRatio = (GLfloat) g_ScreenWidth / (GLfloat) g_ScreenHeight;
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLuint hPosition = glGetAttribLocation( g_SolidProgram, "vPosition" );
// PROJECTION
glm::mat4 Projection = glm::mat4(1.0);
// Projection = glm::perspective( 45.0f, g_AspectRatio, 0.1f, 100.0f );
// VIEW
glm::mat4 View = glm::mat4(1.0);
static GLfloat transValY = 0.5f;
static GLfloat transValX = 0.5f;
//View = glm::translate( View, glm::vec3( transValX, transValY, 0.0f ) );
// MODEL
glm::mat4 Model = glm::mat4(1.0);
// static GLfloat rot = 0.0f;
// rot += 0.001f;
// Model = glm::rotate( Model, rot, glm::vec3( 0.0f, 0.0f, 1.0f ) ); // where x, y, z is axis of rotation (e.g. 0 1 0)
glm::mat4 Ortho = glm::ortho( 0.0f, (GLfloat) g_ScreenWidth, (GLfloat) g_ScreenHeight, 0.0f, 0.0f, 1000.0f );
glm::mat4 MVP;
MVP = Projection * View * Model * Ortho;
GLuint hMVP;
hMVP = glGetUniformLocation( g_SolidProgram, "MVP" );
glUniformMatrix4fv( hMVP, 1, GL_FALSE, glm::value_ptr( MVP ) );
glEnableVertexAttribArray( hPosition );
// Prepare the triangle coordinate data
glVertexAttribPointer( hPosition, 3, GL_FLOAT, GL_FALSE, 0, afRectVerts );
// Draw the rectangle using triangles
glDrawElements( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, g_RectFromTriIndices );
glDisableVertexAttribArray( hPosition );
}
Here is the vertex shader source:
attribute vec4 vPosition;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vPosition;
}
UPDATE: I'm finding the below matrix multiplication is giving me better results. I don't know if this is "correct" or not though:
MVP = Ortho * Model * View * Projection;

That MVP seems really weird to me; you shouldn't need four matrices in there to get your MVP. Your Projection matrix should just be the orthographic one, so in this case:
MVP = Projection * View * Ortho;
But I can also see that your Projection matrix has had its perspective setup commented out, so I don't think it's doing much right now.
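To make that concrete, here is a rough sketch of how the matrices from the question's Draw() could be set up with the ortho matrix used as the projection (this is just my reading of the suggestion above, not a verified fix):
// Sketch: use the orthographic matrix as the projection itself.
glm::mat4 Projection = glm::ortho( 0.0f, (GLfloat) g_ScreenWidth,
                                   (GLfloat) g_ScreenHeight, 0.0f,
                                   0.0f, 1000.0f );
glm::mat4 View  = glm::mat4( 1.0f );
glm::mat4 Model = glm::mat4( 1.0f );
// With pixel-space vertices, a 0.5f translate now really is half a pixel,
// not half the projection space.
Model = glm::translate( Model, glm::vec3( 0.5f, 0.5f, 0.0f ) );
glm::mat4 MVP = Projection * View * Model;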
By the sounds of it, since you want the model coordinates to stay the same while moving, you want to move your camera, right? Your vertices look like they use a one-unit-per-pixel coordinate range, so a translate of 0.5f on your View shifts things by half of whatever your projection space is (hence the large jump). Instead, you want something like a Camera class that you get your View from, using the camera's X and Y positions.
Then you can build your View matrix from the camera's position, which can share the world-unit system you're using: one unit per pixel.
glm::mat4 view;
view = glm::lookAt(glm::vec3(camX, camY, 0.0), glm::vec3(0.0, 0.0, 0.0),glm::vec3(0.0, 1.0, 0.0));
I took that line straight (apart from swapping camZ for camY) from a really good 3D camera tutorial here, but the exact same concept can be applied to an orthographic camera instead.
I know it's a bit more overhead, but having a camera class that you can control this way is nicer practice than manually using glm::translate, rotate and scale to control your viewport (and it lets you ensure that you're working with a more obvious coordinate system shared between your camera and your models' coordinates). A rough sketch of that idea is below.
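As a rough illustration (the class name and members here are placeholders I made up, and one world unit is assumed to equal one pixel):
// Hypothetical minimal 2D camera; one world unit == one pixel.
struct Camera2D
{
    float x = 0.0f;   // camera position in pixels
    float y = 0.0f;

    glm::mat4 GetView() const
    {
        // Moving the camera right moves the world left, hence the negation.
        return glm::translate( glm::mat4( 1.0f ), glm::vec3( -x, -y, 0.0f ) );
    }
};

// Usage sketch: nudge the camera instead of touching the vertices.
// Camera2D cam;
// cam.x = -0.5f;                                 // shifts the drawing 0.5 px right
// glm::mat4 MVP = Ortho * cam.GetView() * Model;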

Related

Use 2 meshes + shader materials with each a different fragment shader in 1 scene (three.js)

I have two meshes, each with a ShaderMaterial and each with a different fragment shader. When I add both meshes to my scene, only one shows up. Below you can find my two fragment shaders (see both images to see what they look like); they're basically the same.
What I want to achieve: Use mesh1 as a mask and put the other one, mesh2 (purple blob) on top of the mask.
Purple blob:
// three.js code
const geometry1 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material1 = new THREE.ShaderMaterial({
uniforms: this.uniforms,
vertexShader,
fragmentShader,
defines: {
PR: window.devicePixelRatio.toFixed(1)
}
});
const mesh1 = new THREE.Mesh(geometry1, material1);
this.scene.add(mesh1);
// fragment shader
void main() {
vec2 res = u_res * PR;
vec2 st = gl_FragCoord.xy / res.xy - 0.5;
st.y *= u_res.y / u_res.x * 0.8;
vec2 circlePos = st;
float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
float offx = v_uv.x + sin(v_uv.y + u_time * .1);
float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
vec4 bg2 = vec4(0., 0., 0., 0.);
gl_FragColor = mix(bg, bg2, finalMask);
}
Blue mask:
// three.js code
const geometry2 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material2 = new THREE.ShaderMaterial({
uniforms,
vertexShader,
fragmentShader,
defines: {
PR: window.devicePixelRatio.toFixed(1)
}
});
const mesh2 = new THREE.Mesh(geometry2, material2);
this.scene.add(mesh2);
// fragment shader
void main() {
vec2 res = u_res * PR;
vec2 st = gl_FragCoord.xy / res.xy - 0.5;
st.y *= u_res.y / u_res.x * 0.8;
vec2 circlePos = st;
float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
float offx = v_uv.x + sin(v_uv.y + u_time * .1);
float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
vec4 bg2 = vec4(0., 0., 0., 0.);
gl_FragColor = mix(bg, bg2, finalMask);
}
Render Target code
this.rtWidth = window.innerWidth;
this.rtHeight = window.innerHeight;
this.renderTarget = new THREE.WebGLRenderTarget(this.rtWidth, this.rtHeight);
this.rtCamera = new THREE.PerspectiveCamera(
this.camera.settings.fov,
this.camera.settings.aspect,
this.camera.settings.near,
this.camera.settings.far
);
this.rtCamera.position.set(0, 0, this.camera.settings.perspective);
this.rtScene = new THREE.Scene();
this.rtScene.add(this.purpleBlob);
const geometry = new THREE.PlaneGeometry(window.innerWidth, window.innerHeight, 1);
const material = new THREE.MeshPhongMaterial({
map: this.renderTarget.texture,
});
this.mesh = new THREE.Mesh(geometry, material);
this.scene.add(this.mesh);
I'm still new to shaders so please be patient. :-)
There are probably infinite ways to mask in three.js. Here are a few.
Use the stencil buffer
The stencil buffer is similar to the depth buffer in that, for every pixel in the canvas or render target, there is a corresponding stencil pixel. You need to tell three.js you want a stencil buffer, and then you can tell it what to do with the stencil buffer when you're drawing things.
You set the stencil settings on Material.
You tell three.js
what to do if the pixel you're drawing fails the stencil test
what to do if the pixel you're drawing fails the depth test
what to do if the pixel you're drawing passes the depth test.
The things you can tell it to do for each of those conditions are keep (do nothing), increment, decrement, increment wraparound, decrement wraparound, set to a specific value.
You can also specify what the stencil test is by setting Material.stencilFunc
So, for example, you can clear the stencil buffer to 0 (the default?), set the stencil test so it always passes, and set the conditions so that if the depth test passes you set the stencil to 1. You then draw a bunch of things. Everywhere they are drawn there will now be a 1 in the stencil buffer.
Now you change the stencil test so it only passes if the stencil equals 1 (or 0) and then draw more stuff; now things will only be drawn where the stencil equals the value you set.
This example uses the stencil buffer.
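As a rough sketch of what that looks like in three.js (assuming a version recent enough to expose the stencil settings on Material; maskMaterial, blobMaterial, maskMesh and blobMesh are placeholder names for your two materials and meshes):
// 1) The mask mesh writes 1 into the stencil buffer wherever it is drawn.
maskMaterial.stencilWrite = true;
maskMaterial.stencilFunc = THREE.AlwaysStencilFunc;   // stencil test always passes
maskMaterial.stencilRef = 1;
maskMaterial.stencilZPass = THREE.ReplaceStencilOp;   // write ref where depth passes
maskMaterial.colorWrite = false;                      // keep the mask itself invisible

// 2) The blob mesh only draws where the stencil buffer equals 1.
blobMaterial.stencilWrite = true;
blobMaterial.stencilFunc = THREE.EqualStencilFunc;
blobMaterial.stencilRef = 1;

// Make sure the mask is rendered before the blob.
maskMesh.renderOrder = 1;
blobMesh.renderOrder = 2;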
Mask with an alpha mask
In this case you need 2 color textures and an alpha texture. How you get those is up to you. For example you could load all 3 from images. Or you could generate all 3 using 3 render targets. Finally you pass all 3 to a shader that mixes them as in
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, valueFromAlphaTexture);
This example uses this alpha mixing method
Note that if one of your two color textures has an alpha channel you could use just two textures. You'd just pass one of the color textures as your mask.
Or of course you could calculate a mask based on the colors in one image or the other or both. For example
// assume you have function that converts from rgb to hue,saturation,value
vec3 hsv = rgb2hsv(colorFromTexture1.rgb);
float hue = hsv.x;
// pick one or the other if color1 is close to green
float mixAmount = step(abs(hue - 0.33), 0.05);
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, mixAmount);
The point here is not that exact code, it's that you can make any formula you want for the mask, based on whatever you want, color, position, random math, sine waves based on time, some formula that generates a blob, whatever. The most common is some code that just looks up a mixAmount from a texture which is what the linked example above does.
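For concreteness, a minimal fragment shader for that common case might look like this (all of the uniform and varying names here are placeholders of my own):
// Sketch: mix two color textures using a third texture as the mask.
precision mediump float;

uniform sampler2D u_color1;
uniform sampler2D u_color2;
uniform sampler2D u_mask;
varying vec2 v_uv;

void main() {
  vec4 color1 = texture2D(u_color1, v_uv);
  vec4 color2 = texture2D(u_color2, v_uv);
  float mixAmount = texture2D(u_mask, v_uv).a;  // or .r, or any formula you like
  gl_FragColor = mix(color1, color2, mixAmount);
}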
ShaderToy style
Your code above appears to be a ShaderToy-style shader which is drawing a fullscreen quad. Instead of drawing two separate things, you can just draw them both in the same shader:
vec4 computeBlueBlob() {
...
return blueBlobColor;
}
vec4 computeWhiteBlob() {
...
return whiteBlobColor;
}
void main() {
vec4 color1 = computeBlueBlob();
vec4 color2 = computeWhiteBlob();
float mixAmount = color2.a; // note: color2.a could be any
// formula to decide which colors
// to draw
gl_FragColor = mix(color1, color2, mixAmount);
}
Note, just like above, how you compute mixAmount is up to you. Base it off anything: color1.r, color2.r, some formula, some hue, some other blob generation function, whatever.

Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It's constructed around the origin, out 5 from the origin in each of the X, Y, and Z directions.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
auto pCamera = g_GameServices.GetService<CWorldCamera>();
XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
matrixCubeWorld = rotation * translation;
if (GetKeyState('X') < 0)
rotateX = RotateAround(rotateX, fElapsedTime);
if (GetKeyState('Y') < 0)
rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
VertexShaderOutput output;
output.position = mul(float4(position, 1), _gWorldViewProj);
output.color = color;
output.tex = tex;
return output;
}
So do I have a bug or a misunderstanding? I've tried with the inverse of the translation, thinking that would 'bring it back to the origin before rotating' but didn't improve it.
The transformations look good, IMHO.
Maybe it's due to the fact that XMMatrixTranslationFromVector takes only a 3D vector, as the documentation (MSDN) says.
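If you want to rule that out, you could build the translation from explicit components instead (just a sketch of the idea, not a claimed fix):
// XMMatrixTranslation takes the three floats directly, so no vector loading is involved.
XMMATRIX translation = XMMatrixTranslation( positionBox.x, positionBox.y, positionBox.z );
XMMATRIX rotation = XMMatrixRotationRollPitchYaw( rotateX, rotateY, rotateZ );
matrixCubeWorld = rotation * translation;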
Also make sure that the RotateAround function and the camera view/projection matrices give correct results.
Best regards.

glClipPlane - Is there an equivalent in webGL?

I have a 3D mesh. Is there any way to render a sectional view (clipping), like glClipPlane in OpenGL?
I am using Three.js r65.
The latest shader that I have added is:
Fragment Shader:
uniform float time;
uniform vec2 resolution;
varying vec2 vUv;
void main( void )
{
vec2 position = -1.0 + 2.0 * vUv;
float red = abs( sin( position.x * position.y + time / 2.0 ) );
float green = abs( cos( position.x * position.y + time / 3.0 ) );
float blue = abs( cos( position.x * position.y + time / 4.0 ) );
if(position.x > 0.2 && position.y > 0.2 )
{
discard;
}
gl_FragColor = vec4( red, green, blue, 1.0 );
}
Vertex Shader:
varying vec2 vUv;
void main()
{
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
Unfortunately, the OpenGL ES specification against which WebGL has been specified has no clip planes, and the vertex shader stage lacks the gl_ClipDistance output by which plane clipping is implemented in modern OpenGL.
However, you can use the fragment shader to implement per-fragment clipping: test the position of the incoming fragment against your set of clip planes and, if the fragment does not pass the test, discard it.
Update
Let's have a look at how clip planes are defined in fixed function pipeline OpenGL:
void ClipPlane( enum p, double eqn[4] );
The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding

p' = (p'1, p'2, p'3, p'4) = (p1, p2, p3, p4) inv(M)

(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates transpose( (x_e, y_e, z_e, w_e) ) that satisfy

p'1 * x_e + p'2 * y_e + p'3 * z_e + p'4 * w_e ≥ 0

lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.
So what you do is: you add uniforms by which you pass the clip plane parameters p', and add another out/in pair of variables between the vertex and fragment shader to pass the vertex eye-space position. Then in the fragment shader the first thing you do is perform the clip plane equation test, and if it doesn't pass you discard the fragment.
In the vertex shader
in vec3 vertex_position;
out vec4 eyespace_pos;
uniform mat4 modelview;
void main()
{
/* ... */
eyespace_pos = modelview * vec4(vertex_position, 1);
/* ... */
}
In the fragment shader
in vec4 eyespace_pos;
uniform vec4 clipplane;
void main()
{
if( dot( eyespace_pos, clipplane) < 0 ) {
discard;
}
/* ... */
}
In the newer versions (> r.76) of three.js clipping is supported in the THREE.WebGLRenderer. There is an array property called clippingPlanes where you can add your custom clipping planes (THREE.Plane instances).
For three.js you can check these two examples:
1) WebGL clipping (code base here on GitHub)
2) WebGL clipping advanced (code base here on GitHub)
A simple example
To add a clipping plane to the renderer you can do:
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var plane = new THREE.Plane( normal, constant );
renderer.clippingPlanes = [plane];
Here a fiddle to demonstrate this.
You can also clip on object level by adding a clipping plane to the object material. For this to work you have to set the renderer localClippingEnabled property to true.
// set renderer
renderer.localClippingEnabled = true;
// add clipping plane to material
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var color = 0xff0000;
var plane = new THREE.Plane( normal, constant );
var material = new THREE.MeshBasicMaterial({ color: color });
material.clippingPlanes = [plane];
var mesh = new THREE.Mesh( geometry, material );
Note: In r.77 some of the clipping functionality in the THREE.WebGLRenderer was moved to a separate THREE.WebGLClipping class; check here for reference in the three.js master branch.

How to position a textured quad in screen coordinates?

I am experimenting with different matrices, studying their effect on a textured quad. So far I have implemented Scaling, Rotation, and Translation matrices fairly easily - by using the following method against my position vectors:
for(int a=0;a<noOfVertices;a++)
{
myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a],myPerspectiveMatrix);
}
However, what I want to do is be able to position my vectors using world-space coordinates, not object-space coordinates.
At the moment my position vectors are declared thusly:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices) I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't seem to pin down the one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I'm aiming for a simple 1024 x 768 window as my application area, and want to position my textured quad at 10,10. How do I go about this? Most confused right now.
I am not familiar with SlimDX, but in native DirectX, if you want to draw a quad in screen coordinates, you should define the vertex format as pre-transformed (D3DFVF_XYZRHW), that is, you specify the screen coordinates directly instead of using the D3D transform engine to transform your vertices. The vertex format is defined as below:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this:
ScreenVertex Vertices[] =
{
// Triangle 1
{ 150.0f, 150.0f, 0, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
{ 350.0f, 150.0f, 0, 1.0f, 0xff00ff00, },
{ 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
// Triangle 2
{ 150.0f, 150.0f, 0, 1.0f, 0xffff0000, },
{ 350.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
{ 150.0f, 350.0f, 0, 1.0f, 0xff00ffff, },
};
By default, screen space in 3D systems runs from -1 to 1 (where -1,-1 is the bottom-left corner and 1,1 the top-right).
To position by pixel values, you need to convert the pixel values into this space. So, for example, pixel 10,30 on a 1024x768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to 0/1
position.x *= 2.0f; //maps to 0/2
position.x -= 1.0f; // Maps to -1/1
Now for y you do
position.y = 30.0f * (1.0f / 768.0f); // maps to 0/1
position.y = 1.0f - position.y; //Inverts y
position.y *= 2.0f; //maps to 0/2
position.y -= 1.0f; // Maps to -1/1
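Wrapping that conversion in a small helper (a sketch in C#, since the question uses SlimDX; PixelToClipSpace is a name I made up) and using it to place a vertex at pixel 10,10 could look like:
// Convert a pixel coordinate to the -1..1 range described above.
static Vector3 PixelToClipSpace(float pixelX, float pixelY, float screenWidth, float screenHeight)
{
    float x = pixelX / screenWidth * 2.0f - 1.0f;            // 0..width  maps to -1..1
    float y = (1.0f - pixelY / screenHeight) * 2.0f - 1.0f;  // 0..height maps to 1..-1 (y inverted)
    return new Vector3(x, y, 0.5f);                          // pick whatever z suits your setup
}

// e.g. the top-left corner of a quad at pixel 10,10 in a 1024 x 768 window:
// myVectorPositions[0] = PixelToClipSpace(10.0f, 10.0f, 1024.0f, 768.0f);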
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices, since then you will not need to update your vertex buffer every time.
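A minimal HLSL sketch of that idea (gWorldViewProj and QuadVS are names I invented; the matrix would be set from the application each frame):
// The vertex buffer stays fixed; only the matrix changes per frame.
float4x4 gWorldViewProj;

float4 QuadVS( float3 position : POSITION ) : POSITION
{
    return mul( float4( position, 1.0f ), gWorldViewProj );
}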

OpenGL ES glRotatef performing shear instead of rotate?

I am able to draw a sprite on the screen of an iPhone, but when I try to rotate it I am getting some weird results. It seems to be stretching the sprite in the y direction more the closer the sprite gets to pointing down the y-axis (90 and 270 degrees). It displays correctly when pointing down the x and -x axes (0 and 180 degrees). It is basically like it is shearing instead of rotating. Here are the essentials of the code (projection matrix is ortho):
glPushMatrix();
glLoadIdentity();
glTranslatef( position.x, position.y, -1.0f );
glRotatef( rotation, 0.0f, 0.0f, 1.0f );
glScalef( halfSize.x, halfSize.y, 1.0f );
vertices[0] = 1.0f;
vertices[1] = 1.0f;
vertices[2] = 0.0f;
vertices[3] = 1.0f;
vertices[4] = -1.0f;
vertices[5] = 0.0f;
vertices[6] = -1.0f;
vertices[7] = 1.0f;
vertices[8] = 0.0f;
vertices[9] = -1.0f;
vertices[10] = -1.0f;
vertices[11] = 0.0f;
glVertexPointer( 3, GL_FLOAT, 0, vertices );
glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );
glPopMatrix();
Can anybody explain to me how to fix this please?
halfSize is just half the x and y extent of the sprite; removing the glScalef call does not make any difference.
Here is my matrix setup:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 320, 480, 0, 0.01, 5);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
OK, hopefully this screenshot will demonstrate what's happening:
If you are scaling by the same amount in the x and y directions, then your projection is causing the distortion.
Just a hunch, but maybe try swapping the 320 and 480 in your ortho projection (in case the X and Y on the iPhone are swapped). A sketch of that change is below.
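For reference, the swapped setup would simply be (again, only a hunch, and it assumes the view really is 480 wide by 320 tall):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, 480, 320, 0, 0.01, 5);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();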
