Question:
How can I render to multiple render targets from HLSL by returning a struct with semantic mappings?
Hi there. I'm new to DirectX 11 and am trying to set up render-to-texture so I can implement deferred shading. I've spent over a day scouring the net for a solution to my problem, but have had no luck so far.
Currently, I can render my scene to a single texture through HLSL and then render that texture to a full-screen quad without problems. However, if I bind multiple render targets, all I ever see is the clear color for each target; the pixel shader never seems to write to any of the bound targets. I have D3D11_CREATE_DEVICE_DEBUG set with no warnings or errors at runtime, and I've tried the Visual Studio 2013 graphics debugger, but I'm having a difficult time understanding what the issue is. Here is what I believe to be the relevant source code:
Shader Code: (compiled with vs_5_0, ps_5_0)
// vertex shader
cbuffer mTransformationBuffer
{
matrix worldMatrix;
matrix viewMatrix;
matrix projectionMatrix;
};
struct VertexInputType
{
float4 position : POSITION;
float4 color : COLOR;
float3 normal : NORMAL;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float4 color : COLOR;
float3 normal : NORMAL;
};
// this does not seem to work!
struct PixelOutputType
{
float4 position : SV_TARGET0;
float4 normal : SV_TARGET1;
};
PixelInputType VertexEntry(VertexInputType input)
{
input.position.w = 1.0f;
PixelInputType output;
output.position = mul(input.position, worldMatrix);
output.normal = mul(input.normal, (float3x3)worldMatrix);
output.normal = normalize(output.normal);
output.color = input.color;
return output;
}
// fragment shader
PixelOutputType FragmentEntry(PixelInputType input)
{
PixelOutputType output;
// render as white so I can see MRT working
output.position = float4(1.0f, 1.0f, 1.0f, 1.0f);
output.normal = float4(1.0f, 1.0f, 1.0f, 1.0f);
return output;
}
Render Target Creation:
void RenderTargetManager::createTarget(const std::string& name, unsigned int width, unsigned int height, DXGI_FORMAT format)
{
unsigned int hash = hashString(name);
RenderTarget target;
// create the texture to render to
target.mTexture = TextureUtils::createTexture2d(mDevice, width, height, format, D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE);
// create the render target view
D3D11_RENDER_TARGET_VIEW_DESC rtViewDesc;
memset(&rtViewDesc, 0, sizeof(rtViewDesc));
rtViewDesc.Format = format;
rtViewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
HRESULT ret = S_OK;
ret = mDevice.CreateRenderTargetView(target.mTexture, &rtViewDesc, &target.mRenderTargetView);
ASSERT(ret == S_OK, "Failed to create a render target view.");
// create the shader resource view
D3D11_SHADER_RESOURCE_VIEW_DESC srViewDesc;
memset(&srViewDesc, 0, sizeof(srViewDesc));
srViewDesc.Format = format;
srViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srViewDesc.Texture2D.MipLevels = 1;
ret = mDevice.CreateShaderResourceView(target.mTexture, &srViewDesc, &target.mShaderResourceView);
ASSERT(ret == S_OK, "Failed to create a shader resource view.");
mRenderTargetMap[hash] = target;
}
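For reference, a minimal sketch of how these targets might be created for this test; the sizes and DXGI formats here are illustrative, not fixed requirements:
// one target per SV_TARGETn output in the pixel shader
createTarget("position", 1280, 720, DXGI_FORMAT_R16G16B16A16_FLOAT);
createTarget("normal", 1280, 720, DXGI_FORMAT_R16G16B16A16_FLOAT);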
Setting Render Targets:
void RenderTargetManager::setTargets(ID3D11DepthStencilView* depthStencilView, unsigned int numTargets, ...)
{
va_list args;
va_start(args, numTargets);
std::vector<ID3D11RenderTargetView*> renderTargetViews;
for (unsigned int i = 0; i < numTargets; ++i)
{
std::string renderTargetName = va_arg(args, char*);
unsigned int hash = hashString(renderTargetName);
auto it = mRenderTargetMap.find(hash);
if (it != mRenderTargetMap.end())
{
renderTargetViews.push_back(it->second.mRenderTargetView);
}
else
{
LOGW("Render target view '%s' not found.", renderTargetName.c_str());
}
}
va_end(args);
if (!renderTargetViews.empty())
{
ASSERT(renderTargetViews.size() == numTargets, "Failed to set render targets. Not all targets found.");
mDeviceContext.OMSetRenderTargets(renderTargetViews.size(), &renderTargetViews[0], depthStencilView);
}
}
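And a sketch of the per-frame usage, with positionRTV and normalRTV standing in for the views stored in mRenderTargetMap (the Clear* calls are the standard D3D11 ones):
// bind both targets; the order matches SV_TARGET0 / SV_TARGET1 in the pixel shader
setTargets(mDepthStencilView, 2, "position", "normal");
// clear each target and the depth buffer before drawing
const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
mDeviceContext.ClearRenderTargetView(positionRTV, clearColor);
mDeviceContext.ClearRenderTargetView(normalRTV, clearColor);
mDeviceContext.ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);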
My pixel shader will happily write to any one of these targets if it returns a single float4, but returning a struct never works; I get only the clear color in every render target. I have also tried this as the return struct, without success:
struct PixelOutputType
{
float4 position : SV_TARGET;
};
Thank you for taking the time to read this. I really appreciate the help.
-T
EDIT:
I have also tried using a static array of ID3D11RenderTargetView*, with the same issues.
EDIT2: I also enabled DirectX Control Panel output and see nothing out of the ordinary.
After a lot of tests and frustration, I finally discovered my issue ;__;
There were no problems with my textures, views, or states; the issue was in the shader, with SV_POSITION.
My vertex shader was only transforming the position into world space, so the 'float4 position : SV_POSITION' arriving at the pixel shader was never a fully transformed point. After thinking about it for a while, I realized that SV_POSITION MUST be the fully transformed (clip-space) position by the time it reaches the pixel shader, and sure enough, applying the full transform fixed the issue.
THIS:
output.position = mul(input.position, worldMatrix);
output.position = mul(output.position, viewMatrix);
output.position = mul(output.position, projectionMatrix);
instead of this:
output.position = mul(input.position, worldMatrix);
//output.position = mul(output.position, viewMatrix);
//output.position = mul(output.position, projectionMatrix);
Related
I am learning OpenGL ES and am using the book "Learn OpenGL ES for Mobile Game and Graphics Development" by Mehta. He first shows how to build a simple project, then adds an orthographic matrix, then 3D, and in the end he explains how to use all of this together with a texture. I am wondering if it is possible to use a texture without a matrix. I must say I tried it; the program did not crash, but the texture was completely scattered and distorted, as if the vertex data were wrong. So my first question is: is it theoretically possible?
UPDATE 1
The code below represents an application that shows the camera preview on a texture. With a matrix it works fine, but without a matrix it produces the screen shown in the picture.
These are the texture vertex shader and the texture fragment shader.
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
varying vec2 v_TextureCoordinates;
void main()
{
v_TextureCoordinates = a_TextureCoordinates;
gl_Position = a_Position;
}
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES u_TextureUnit;
varying vec2 v_TextureCoordinates;
void main()
{
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
}
The texture loader
public class TextureHelper {
public static final int GL_TEXTURE_EXTERNAL_OES = 0x8D65;
public static int loadTexture() {
final int[] textureObjectsId = new int[1];
GLES20.glGenTextures(1, textureObjectsId, 0);
if (textureObjectsId[0] == 0 ) {
return 0;
}
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureObjectsId[0]);
// note: the filter parameters must target GL_TEXTURE_EXTERNAL_OES;
// GLES20.GL_TEXTURE is not a valid texture target
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, 0);
return textureObjectsId[0];
}
}
The locations and uniforms
public class TextureShaderProgram extends ShaderProgram {
private final int uTextureUnitLocation;
// Attribute locations
private final int aPositionLocation;
private final int aTextureCoordinatesLocation;
public TextureShaderProgram(Context context) {
super(context, R.raw.texture_vertex_shader, R.raw.texture_fragment_shader);
uTextureUnitLocation = GLES20.glGetUniformLocation(program, U_TEXTURE_UNIT);
aPositionLocation = GLES20.glGetAttribLocation(program, A_POSITION);
aTextureCoordinatesLocation = GLES20.glGetAttribLocation(program, A_TEXTURE_COORDINATES);
}
public void setUniforms(int textureId) {
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureId);
GLES20.glUniform1i(uTextureUnitLocation, 0);
}
public int getPositionAttributeLocation() {
return aPositionLocation;
}
public int getTextureCoordinatesAttributeLocation() {
return aTextureCoordinatesLocation;
}
}
The vertexData (triangle fan)
public class Table {
private static final int POSITION_COMPONENT_COUNT = 2;
private static final int TEXTURE_COORDINATES_COMPONENT_COUNT = 2;
private static final int STRIDE = (POSITION_COMPONENT_COUNT + TEXTURE_COORDINATES_COMPONENT_COUNT) * BYTES_PER_FLOAT;
private static final float[] VERTEXDATA = {
0f, 0f, 0.5f, 0.5f, //middle
-0.5f, -0.8f, 1f, 0.1f,//bottom left
0.5f, -0.8f, 1f, 0.9f, //bottom right
0.5f, 0.8f, 0f, 0.9f,//top right
-0.5f, 0.8f, 0f, 0.1f, // top left
-0.5f, -0.8f, 1f, 0.1f, // bottom left
};
Do you need more? Leave a comment and I can post the rest if needed.
You will normally need some form of transform to animate the vertices in your scene. This is nearly always a matrix, because a matrix can efficiently represent any desirable transform, but it doesn't have to be.
It is rare for the texture coordinate itself to be modified by a matrix; normally the texture is "fixed" on the model, so the coordinate is just passed through the vertex shader without modification.
I took the code from the OpenGL ES 2.0 tutorial at github.com/learnopengles/Learn-OpenGLES-Tutorials and customized the class "LessonOneRenderer" to test a barrel distortion shader like the one described on this site:
www.geeks3d.com/20140213/glsl-shader-library-fish-eye-and-dome-and-barrel-distortion-post-processing-filters/2/
This is the "untouched" Result without calling the distort() function:
http://fs1.directupload.net/images/150125/hw746rcn.png
and the result when calling the function:
http://fs2.directupload.net/images/150125/f84arxoj.png
/**
* This class implements our custom renderer. Note that the GL10 parameter passed in is unused for OpenGL ES 2.0
* renderers -- the static class GLES20 is used instead.
*/
public class LessonOneRenderer implements GLSurfaceView.Renderer
{
/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
private float[] mModelMatrix = new float[16];
/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
private float[] mViewMatrix = new float[16];
/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];
/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];
/** Store our model data in a float buffer. */
private final FloatBuffer mTriangle1Vertices;
/** This will be used to pass in the transformation matrix. */
private int mMVPMatrixHandle;
/** This will be used to pass in model position information. */
private int mPositionHandle;
/** This will be used to pass in model color information. */
private int mColorHandle;
/** How many bytes per float. */
private final int mBytesPerFloat = 4;
/** Stride, in bytes, of one vertex (7 floats: 3 position + 4 color). */
private final int mStrideBytes = 7 * mBytesPerFloat;
/** Offset of the position data. */
private final int mPositionOffset = 0;
/** Size of the position data in elements. */
private final int mPositionDataSize = 3;
/** Offset of the color data. */
private final int mColorOffset = 3;
/** Size of the color data in elements. */
private final int mColorDataSize = 4;
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
// X Y Z
static float squareCoords[] = { -0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
private short drawOrder[] = { 0, 1, 2, 0, 2, 3 }; // order to draw vertices
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
static final int vertexStride = COORDS_PER_VERTEX * 4;
static final int vertexCount = 4;
private ByteBuffer dlb;
/**
* Initialize the model data.
*/
public LessonOneRenderer()
{
ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4); // (# of coordinate values * 4 bytes per float)
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2); // (# of coordinate values * 2 bytes per short)
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// Define points for equilateral triangles.
// This triangle is red, green, and blue.
final float[] triangle1VerticesData = {
// X, Y, Z,
// R, G, B, A
-0.5f, 0.5f, 0.0f, // top left
1.0f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.0f,// bottom left
0.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, 0.0f, // bottom right
0.0f, 1.0f, 0.0f, 1.0f,
0.5f, 0.5f, 0.0f, // top right
0.0f, 1.0f, 0.0f, 1.0f
};
// Initialize the buffers.
mTriangle1Vertices = ByteBuffer.allocateDirect(triangle1VerticesData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mTriangle1Vertices.put(triangle1VerticesData).position(0);
// initialize byte buffer for the draw list
dlb = ByteBuffer.allocateDirect(drawOrder.length * 2); // (# of coordinate values * 2 bytes per short)
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
}
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
// Set the background clear color to gray.
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
// Position the eye behind the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = 1.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.
+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.
+ "varying vec4 a_Pos; \n"
+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.
+" "
+"vec4 Distort(vec4 p){ \n"
+" float BarrelPower = 0.4; \n"
+" vec2 v = p.xy / p.w; \n"
+" float radius = length(v); \n"
+" if (radius > 0.0){ \n"
+" float theta = atan(v.y,v.x); \n"
+" radius = pow(radius, BarrelPower);\n"
+" v.x = radius * cos(theta); \n"
+" v.y = radius * sin(theta); \n"
+" p.xy = v.xy * p.w; \n"
+" }"
+" \n"
+" return p; \n"
+" } \n"
+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
+ " vec4 P = u_MVPMatrix * a_Position;" // It will be interpolated across the triangle.
+ " gl_Position = Distort(P); \n" // gl_Position is a special variable used to store the final position.
+ " \n" // Multiply the vertex by the matrix to get the final point in
+ "} \n"; // normalized screen coordinates.
final String fragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
+ "varying vec4 a_Pos; \n" // precision in the fragment shader.
+ "varying vec4 v_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ vec4 c = vec4(1.0); \n"
+ " gl_FragColor = v_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
// Load in the vertex shader.
int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
if (vertexShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(vertexShaderHandle, vertexShader);
// Compile the shader.
GLES20.glCompileShader(vertexShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(vertexShaderHandle);
vertexShaderHandle = 0;
}
}
if (vertexShaderHandle == 0)
{
throw new RuntimeException("Error creating vertex shader.");
}
// Load in the fragment shader shader.
int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
if (fragmentShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);
// Compile the shader.
GLES20.glCompileShader(fragmentShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(fragmentShaderHandle);
fragmentShaderHandle = 0;
}
}
if (fragmentShaderHandle == 0)
{
throw new RuntimeException("Error creating fragment shader.");
}
// Create a program object and store the handle to it.
int programHandle = GLES20.glCreateProgram();
if (programHandle != 0)
{
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);
// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);
// Bind attributes
GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
GLES20.glBindAttribLocation(programHandle, 1, "a_Color");
// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);
// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);
// If the link failed, delete the program.
if (linkStatus[0] == 0)
{
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}
if (programHandle == 0)
{
throw new RuntimeException("Error creating program.");
}
// Set program handles. These will later be used to pass in values to the program.
mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
mColorHandle = GLES20.glGetAttribLocation(programHandle, "a_Color");
// Tell OpenGL to use this program when rendering.
GLES20.glUseProgram(programHandle);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height)
{
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
@Override
public void onDrawFrame(GL10 glUnused)
{
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// Do a complete rotation every 10 seconds.
long time = SystemClock.uptimeMillis() % 10000L;
float angleInDegrees = (360.0f / 10000.0f) * ((int) time);
// Draw the triangle facing straight on.
//Matrix.setIdentityM(mModelMatrix, 0);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
//drawTriangle(mTriangle1Vertices);
//Draw Square
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, -0.7f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
this.drawSquare(mTriangle1Vertices);
//Draw Square
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 0.7f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
this.drawSquare(mTriangle1Vertices);
// Draw one translated a bit down and rotated to be flat on the ground.
//Matrix.setIdentityM(mModelMatrix, 0);
//Matrix.translateM(mModelMatrix, 0, 0.0f, -1.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, 90.0f, 1.0f, 0.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
//drawTriangle(mTriangle2Vertices);
// Draw one translated a bit to the right and rotated to be facing to the left.
//Matrix.setIdentityM(mModelMatrix, 0);
//Matrix.translateM(mModelMatrix, 0, 1.0f, 0.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, 90.0f, 0.0f, 1.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
//drawTriangle(mTriangle3Vertices);
// test.draw();
}
/**
* Draws a triangle from the given vertex data.
*
* @param aTriangleBuffer The buffer containing the vertex data.
*/
private void drawTriangle(final FloatBuffer aTriangleBuffer)
{
// Pass in the position information
aTriangleBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Pass in the color information
aTriangleBuffer.position(mColorOffset);
GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mColorHandle);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
}
private void drawSquare(final FloatBuffer aTriangleBuffer)
{
// Pass in the position information
aTriangleBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Pass in the color information
aTriangleBuffer.position(mColorOffset);
// color is 4 floats, interleaved at the 7-float stride (same as in drawTriangle)
GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mColorHandle);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4);
}
}
Does anybody have an idea what I'm missing to distort the squares like in the example?
You apply the distortion effect in the vertex shader, so all you are doing is moving the 4 corners of those squares. You can't achieve the desired effect that way.
There are different options. You could, in theory, use a higher tessellation for your squares: build a 2D grid of vertices that can be moved individually according to the distortion. If the grid is fine enough, the piecewise-linear nature of that approximation is no longer visible.
However, such effects are typically done differently; one usually does not tune the geometric models for specific effects. Distortions like this are normally applied in screen space, as a post-processing effect (and the link you posted does exactly that). The idea is to render the whole scene into a texture and then draw a single textured rectangle filling the whole screen as the final pass. In that pass you apply the distortion to the texture coordinates in the fragment shader, just as in the original example.
All of that can be done in OpenGL ES, too. The keywords to look for are RTT (render to texture) and FBOs (framebuffer objects). Framebuffer objects are core in OpenGL ES 2.0 (on ES 1.x they are available through the widely supported OES_framebuffer_object extension).
However, this is definitely a bit more advanced than the typical "lesson 1" of a tutorial; you might want to read some other lessons first before trying it... ;)
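To make the idea concrete, here is a minimal sketch of the render-to-texture setup in C-style OpenGL ES 2.0 calls (the Android GLES20 bindings map onto these one-to-one; drawScene and drawFullscreenQuad are placeholders for your own code):
// one-time setup: a texture to render the scene into, attached to an FBO
GLuint sceneTex, sceneFbo;
glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// clamp to edge: required for non-power-of-two (screen-sized) textures in ES 2.0
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glGenFramebuffers(1, &sceneFbo);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);
// (attach a depth renderbuffer here as well if the scene needs depth testing)
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { /* handle error */ }

// every frame: render the scene into the texture...
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(); // ordinary rendering, no distortion anywhere

// ...then draw one textured, screen-filling quad whose fragment shader
// applies the distortion to the texture coordinates
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, sceneTex);
drawFullscreenQuad();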
I have the following code:
public void BeginDraw()
{
if(bufferMaterial != null)
{
bufferMaterial.Textures[0].Surface.BindFramebuffer();
beganDraw = true;
}
}
public void EndDraw()
{
if (beganDraw)
{
beganDraw = false;
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
if (bufferMaterial != null)
{
GL.BindBuffer(BufferTarget.ArrayBuffer, screenMesh.VBO);
GL.BindVertexArray(VAO);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, screenMesh.VEO);
bufferMaterial.Use();
screenMesh.ApplyDrawHints(bufferMaterial.Shader);
GL.DrawElements(PrimitiveType.Triangles, 2, DrawElementsType.UnsignedInt, 0);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
GL.BindVertexArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.UseProgram(0);
}
}
}
Between those two I draw my scene, so the image should be drawn to bufferMaterial.Textures[0].Surface.
Now the "screenMesh" is created like this:
screenMesh = new Mesh();
screenMesh.SetVertices(new float[] {
0,0,
1,0,
1,1,
0,1
});
screenMesh.SetIndices(new uint[] {
0,3,2,
0,1,2
});
screenMesh.SetDrawHints(new VertexObjectDrawHint("pos", 2, 2, 0));
-> 2 components, stride of 2, 0 offset
The shader looks like this:
++++++++
Screenbuffer
++++++++
[Shader vertexScreen]
#version 150 core
in vec2 pos;
uniform float _time;
uniform sampler2D tex;
void main() {
gl_Position = vec4(pos, -1, 1);
}
[Shader fragmentScreen]
#version 150 core
#define PI 3.1415926535897932384626433832795
out vec4 outColor;
uniform float _time;
uniform sampler2D tex;
//texture2D(tex,
void main() {
outColor = vec4(1,1,1,1);
}
Now, I would expect this to draw a white rectangle to the screen, but the screen stays black. I played around with the indices of the screenMesh a little, so they might be off, but the quad was never visible...
I don't need any projection matrix with a shader like this, right?
Mesh class: http://pastebin.com/PcwEYqGH
Edit: Okay, the buffer is rendered now, but I still need to get the depth buffer working! The surface creation code looks like this:
public void Create(int width, int height, SurfaceFormat format)
{
Width = width;
Height = height;
textureHandle = GL.GenTexture();
//bind texture
GL.BindTexture(TextureTarget.Texture2D, textureHandle);
Log.Error("Bound Texture: " + GL.GetError());
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)format.WrapMode);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)format.WrapMode);
Log.Error("Created Texture Parameters: " + GL.GetError());
GL.TexImage2D(TextureTarget.Texture2D, 0, format.InternalFormat, Width, Height, 0, format.PixelFormat, format.SourceType, format.Pixels);
Log.Error("Created Image: " + GL.GetError());
//unbind texture
GL.BindTexture(TextureTarget.Texture2D, 0);
//create depthbuffer
if (format.DepthBuffer)
{
GL.GenRenderbuffers(1, out dbHandle);
GL.BindRenderbuffer(RenderbufferTarget.Renderbuffer, dbHandle);
GL.RenderbufferStorage(RenderbufferTarget.Renderbuffer, RenderbufferStorage.DepthComponent24, Width, Height);
}
//create fbo
fboHandle = GL.GenFramebuffer();
GL.BindFramebuffer(FramebufferTarget.Framebuffer, fboHandle);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0, TextureTarget.Texture2D, textureHandle, 0);
if(format.DepthBuffer)
GL.FramebufferRenderbuffer(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment, RenderbufferTarget.Renderbuffer, dbHandle);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
Log.Error("Created Framebuffer: " + GL.GetError());
}
And this is the code for binding it:
public void BindTexture(TextureUnit slot = TextureUnit.Texture0)
{
GL.ActiveTexture(slot);
GL.BindTexture(TextureTarget.Texture2D, textureHandle);
}
public void BindFramebuffer()
{
GL.BindFramebuffer(FramebufferTarget.Framebuffer, fboHandle);
}
Yet if the depth buffer is enabled, the screen goes black again.
EDIT: Forgot to clear the depth buffer!
There is a problem in your draw call:
GL.DrawElements(PrimitiveType.Triangles, 2, DrawElementsType.UnsignedInt, 0);
The second argument is the number of indices used for rendering. With only 2 indices, that's not enough for a complete triangle. You have 6 indices in your index array, so it should be:
GL.DrawElements(PrimitiveType.Triangles, 6, DrawElementsType.UnsignedInt, 0);
Beyond that, I haven't been able to find code or documentation for the Mesh class you are using, so I can't tell if the vertex data stored in the mesh is set up correctly.
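A small safeguard against this class of bug is to derive the count from the index array rather than hard-coding it. A sketch in C-style GL (OpenTK's GL class mirrors these calls; in C# you would use indices.Length instead of the sizeof trick):
// 'indices' is the array that was uploaded to the element buffer
GLsizei indexCount = sizeof(indices) / sizeof(indices[0]); // 6 for this two-triangle quad
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);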
I have read a lot of tutorials about using color coding to achieve 3D object picking on iOS, but I'm still not sure how to do it. Can anyone point me to a demo written in Objective-C?
A related question is this one:
OpenGL ES 2.0 Object Picking on iOS (Using Color Coding)
Many thanks.
luo
I have achieved picking an object in an OpenGL ES scene using a snapshot. Here is the key code:
-(GLKVector4)pickAtX:(GLuint)x Y:(GLuint)y {
GLKView *glkView = (GLKView*)[self view];
UIImage *snapshot = [glkView snapshot];
GLKVector4 objColor = [snapshot pickPixelAtX:x Y:y];
return objColor;
}
And then, in your tapGesture method, you just need to add this:
const CGPoint loc = [recognizer locationInView:[self view]];
GLKVector4 objColor = [self pickAtX:loc.x Y:loc.y];
if (GLKVector4AllEqualToVector4(objColor, GLKVector4Make(1.0f, 0.0f, 0.0f, 1.0f)))
{
//do something.......
}
Of course, you should also add this category:
@implementation UIImage (NDBExtensions)
- (GLKVector4)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
CGImageRef cgImage = [self CGImage];
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
// out-of-range check: valid indices run from 0 to width-1 / height-1
if ((x >= width) || (y >= height))
{
GLKVector4 baseColor = GLKVector4Make(0.0f, 0.0f, 0.0f, 1.0f);
return baseColor;
}
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
const UInt8* data = CFDataGetBytePtr(bitmapData);
size_t offset = ((width * y) + x) * 4; // assumes a 32-bit BGRA bitmap
float b = data[offset+0];
float g = data[offset+1];
float r = data[offset+2];
float a = data[offset+3];
CFRelease(bitmapData);
NSLog(#"R:%f G:%f B:%f A:%f", r, g, b, a);
GLKVector4 objColor = GLKVector4Make(r/255.0f, g/255.0f, b/255.0f, a/255.0f);
return objColor;
}
@end
This is a useful way to achieve object picking in an OpenGL ES scene.
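The other half of color coding is giving every pickable object its own flat color during the picking render. A minimal sketch of the ID-to-color mapping (plain C++; the helper names are made up for illustration):
// encode a small object ID (1 .. 2^24 - 1) into an RGB color for the picking pass
void idToColor(unsigned id, float rgba[4]) {
    rgba[0] = ((id >> 16) & 0xFF) / 255.0f; // R
    rgba[1] = ((id >> 8) & 0xFF) / 255.0f;  // G
    rgba[2] = (id & 0xFF) / 255.0f;         // B
    rgba[3] = 1.0f;                         // alpha is not used for picking
}

// decode the pixel read back at the tap location into the object ID
unsigned colorToId(unsigned char r, unsigned char g, unsigned char b) {
    return (unsigned(r) << 16) | (unsigned(g) << 8) | unsigned(b);
}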
I have achieved 3D object picking using color coding; if anybody wants the demo, please contact me and tell me your e-mail.
I am running this code on a PC (compiled in Code::Blocks 10.05).
When I used to write basic OpenGL code with GLUT (the GL Utility Toolkit), everything worked fine.
Now that I'm running OpenGL code through the SDL framework, when I change the z-axis (third parameter) of the translation applied to a geometric primitive (a quad), the 3D space appears to have no depth: the quad either covers the complete screen or completely disappears once the depth passes a certain point.
Am I missing anything? :/
#include "SDL/SDL.h"
#include <string>
#include "SDL/SDL_image.h"
#include "SDL/SDL_opengl.h"
#include <GL/gl.h>
// Declare Constants
const int FRAMES_PER_SECOND = 60;
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
const int SCREEN_BPP = 32;
// Create Event Listener
SDL_Event event;
// Declare Variable Used To Control How Deep Into The Screen The Quad Is
GLfloat zpos(0);
// Loop Switches
bool gamestarted(false);
bool exited(false);
// Prototype For My GL Draw Function
void drawstuff();
// Code Begins
void init_GL() {
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_LEQUAL); // The Type Of Depth Testing To Do
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
glViewport(0, 0, 640, 513); // Viewport Change
glOrtho(0, 640, 0, 513, -1.0, 1.0); // Puts Stuff Into View
}
bool init() {
SDL_Init(SDL_INIT_EVERYTHING);
SDL_SetVideoMode(640, 513, 32, SDL_OPENGL);
return true;
}
void drawstuff() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, zpos);
glColor3f(0.5f,0.5f,0.5f);
glBegin(GL_QUADS);
glVertex3f(-1.0f, 1.0f, 0.0f);
glVertex3f( 1.0f, 1.0f, 0.0f);
glVertex3f( 1.0f,-1.0f, 0.0f);
glVertex3f(-1.0f,-1.0f, 0.0f);
glEnd();
}
int main (int argc, char* args[]) {
init();
init_GL();
while(exited == false) {
while( SDL_PollEvent( &event ) ) {
if( event.type == SDL_QUIT ) {
exited = true;
}
if( event.type == SDL_MOUSEBUTTONDOWN ) {
zpos-=.1;
}
}
glClear( GL_COLOR_BUFFER_BIT);
drawstuff();
SDL_GL_SwapBuffers();
}
SDL_Quit();
return 0;
}
When you say depth, do you refer to a perspective effect? You need to use a perspective projection matrix (see gluPerspective) if you want things farther away to appear smaller.
You're currently using orthographic projection (glOrtho), which does not have any perspective effect.
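A minimal sketch of that change in your init_GL (gluPerspective comes from GLU, so this assumes you also include <GL/glu.h> and link against it):
// replace the glOrtho setup with a perspective projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (double)SCREEN_WIDTH / SCREEN_HEIGHT, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// now translating the quad along -z makes it shrink with distance instead of being clipped at z = -1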
I don't know the root cause of your problem, but I did find an issue in your code.
After rendering one frame, you clear the color buffer but forget to clear the depth buffer. So use glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); in your main function and see the effect.
I think I know the answer:
In orthographic projection (after the projection transform; there is no perspective divide, since w stays 1), the depth value is linear in Z:
Zb = -2*Z/(f-n) - (f+n)/(f-n)
Zb: normalized depth value written into the depth buffer
Z: eye-space z value of the vertex you give
In your situation, glOrtho(0, 640, 0, 513, -1.0, 1.0) means
f = 1.0, n = -1.0
so Zb = -Z. Depth is recorded correctly, but an orthographic projection produces no perspective effect (primitives never shrink with distance), and as soon as Z leaves the [-1.0, 1.0] range the quad is clipped away entirely, which matches what you are seeing.
You can refer to the Red Book's Appendix C.2.5; there is a matrix for orthographic projection there.
One more tip, this time for perspective projection: zNear cannot be set to zero, because the perspective depth formula then degenerates and all primitives end up with the same depth value in the depth buffer.