Why does this only draw black?
glPushMatrix();
// Check the current color
glColor3i(255, 0, 255);
GLint currentColor[4];
glGetIntegerv(GL_CURRENT_COLOR, currentColor);
//currentColor[0] = 254, 1 = 0, 2 = 254, 3 = doesn't matter
glBegin(GL_QUADS);
glLineWidth(1);
glVertex2f(0, 0);
glVertex2f(WINDOW_WIDTH * .1, 0);
glVertex2f(WINDOW_WIDTH * .1, WINDOW_HEIGHT);
glVertex2f(0, WINDOW_HEIGHT);
glEnd();
glPopMatrix();
From memory, I think that glColor3i uses colour values that are scaled to cover the full signed integer range, so your 255 values are approximately equal to zero.
Try 2147483647 instead.
All typed integral gl entry points have this range behavior; the floating-point variants use normalized values in 0.0 - 1.0. glColor sets the current vertex color, which even affects the output during vertex array processing if glColorPointer is not enabled (assuming your version of GL uses named vertex attributes rather than generic ones, as in OpenGL ES 2.x and greater).
Common variants of glColor are glColor{3,4}ub and glColor{3,4}f.
In your case you can stick to your 0xFF values and use glColor3ub(255, 0, 255), or, perhaps easier, glColor3f(1.0f, 0.0f, 1.0f).
Using an integral value of INT_MAX or ~2 billion in conjunction with glColor3i() doesn't read very well.
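Following that advice, here is a minimal corrected version of the snippet from the question (glLineWidth is dropped, since it is not legal between glBegin/glEnd and has no effect on filled quads anyway):
glPushMatrix();
glColor3ub(255, 0, 255); // magenta at full intensity
// or equivalently: glColor3f(1.0f, 0.0f, 1.0f);
glBegin(GL_QUADS);
glVertex2f(0, 0);
glVertex2f(WINDOW_WIDTH * .1, 0);
glVertex2f(WINDOW_WIDTH * .1, WINDOW_HEIGHT);
glVertex2f(0, WINDOW_HEIGHT);
glEnd();
glPopMatrix();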
I have a total of two textures: the first is used as a framebuffer to work with inside a compute shader and is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
gl.TEXTURE_2D_ARRAY,
0, // mipmap level
gl.RGBA8, // internal format
16, // width
16, // height
22*48, // depth (number of array layers)
0, // border (must be 0)
gl.RGBA, gl.UNSIGNED_BYTE,
gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as a *image.NRGBA
The compute-shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with a single uniform color.
So my question is: What did I do wrong during the shader creation and what needs to be corrected?
For any open code questions, here's the corresponding repo
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, so the valid range is [0,1] (in the s and t dimensions; the third dimension is the layer index and is correct here). Coordinates outside that range are handled via the GL_TEXTURE_WRAP_... modes you specified (repeat, clamp to edge, clamp to border). Since int % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need full texture sampling (texture filtering, sRGB conversion, etc.), you have to use normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer coordinates.
Note that since you set the texture filter to GL_LINEAR you seem to want filtering; however, your coordinates look as if you want to access the texel centers. So if you're going the texture route, vec3(vec2(iCoords.xy)/vec2(16.0) + vec2(1.0/32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT), but then the GL_LINEAR filtering would yield results identical to GL_NEAREST.
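As a sketch, here is the compute shader from the question rewritten to use texelFetch (keeping the hard-coded 16x16 tile size and layer 7; everything else is unchanged):
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
// texelFetch takes integer texel coordinates plus the layer and mip level,
// so no normalization to [0,1] is needed
vec4 c = texelFetch(texAtlas, ivec3(iCoords.x % 16, iCoords.y % 16, 7), 0);
imageStore(img, iCoords, c);
}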
I know this has been asked a million times, but I just don't get some details.
As an example, let's say that I have a swap chain created and one staging ID3D11Texture2D.
What I can do with it is load a bitmap into this 2D texture and then copy it to the render target (assuming the size and format of both resources are the same).
Now I would like to display a sprite over that. One solution is to use vertex and pixel shaders. My understanding problem starts here.
Vertex shader:
I guess I should draw 2 triangles (2 triangles make a rectangle, or a quad).
DirectX uses a left-handed coordinate system, but I guess that's irrelevant here because I'm dealing with 2D sprites, right?
For the same reason, I assume I can ignore the world->view->projection transformations, right? Actually, I only need translation here in order to place the sprite at the right place on the screen, right?
Should the coordinates of these two triangles match the sprite dimensions, plus translation?
In what order should I provide these vertices? Should the origin of the coordinate system be in the center of the sprite, or should the origin be at the top left corner of the sprite?
As an example, if I have 80x70 sprite, what would be the vertex values?
The real vertices: (X, Y, Z - with no translation applied)
1. -40, -35, 0
2. -40, 35, 0
3. 40, -35, 0
4. 40, 35, 0
5. -40, 35, 0
6. 40, -35, 0
Is that correct?
The rasterization step should call the pixel shader once for each pixel in the sprite. The output of the vertex shader will be the input for the pixel shader. Does that mean the pixel shader should have access to the sprite in order to return the correct pixel value when called?
For 2D drawing, you generally use a world->view->projection that converts classic 'pixel space' to the clip space (-1..1).
float xScale = 2.0f / ViewPortWidth;
float yScale = 2.0f / ViewPortHeight;
[ xScale,       0, 0, 0 ]
[      0, -yScale, 0, 0 ]
[      0,       0, 1, 0 ]
[     -1,       1, 0, 1 ]
The yScale is negated and the translation row is (-1, +1) so that pixel 0,0 ends up at the upper-left of the screen rather than the lower-left.
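For illustration, here is a small C helper that fills that matrix as a row-major float[16] (a sketch; the function and parameter names are my own):
// Builds the row-major pixel-space-to-clip-space matrix shown above.
// With it, pixel (0,0) maps to clip (-1, +1), i.e. the upper-left corner.
void make_pixel_to_clip(float m[16], float viewportWidth, float viewportHeight)
{
    float xScale = 2.0f / viewportWidth;
    float yScale = 2.0f / viewportHeight;

    m[0]  = xScale; m[1]  = 0.0f;    m[2]  = 0.0f; m[3]  = 0.0f;
    m[4]  = 0.0f;   m[5]  = -yScale; m[6]  = 0.0f; m[7]  = 0.0f;
    m[8]  = 0.0f;   m[9]  = 0.0f;    m[10] = 1.0f; m[11] = 0.0f;
    m[12] = -1.0f;  m[13] = 1.0f;    m[14] = 0.0f; m[15] = 1.0f;
}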
The Vertex Shader would do the transform to clip space:
cbuffer Parameters : register(b0)
{
    row_major float4x4 MatrixTransform; // the pixel-space-to-clip-space matrix above
};

void SpriteVertexShader(inout float4 color : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);
}
The points of the quad are rasterized, and then the Pixel Shader is invoked for each covered pixel:
Texture2D<float4> Texture : register(t0);
sampler TextureSampler : register(s0);
float4 SpritePixelShader(float4 color : COLOR0,
float2 texCoord : TEXCOORD0) : SV_Target0
{
return Texture.Sample(TextureSampler, texCoord) * color;
}
See SpriteBatch in the DirectX Tool Kit.
I am using this simple function to draw a quad in 3D space that is facing the camera. Now I want to use a fragment shader to draw the illusion of a sphere inside it. The problem is that I'm new to OpenGL ES, so I don't know how.
void draw_sphere(view_t view) {
set_gl_options(COURSE);
glPushMatrix();
{
glTranslatef(view.plyr_pos.x, view.plyr_pos.y, view.plyr_pos.z - 1.9);
#ifdef __APPLE__
#undef glEnableClientState
#undef glDisableClientState
#undef glVertexPointer
#undef glTexCoordPointer
#undef glDrawArrays
static const GLfloat vertices []=
{
0, 0, 0,
1, 0, 0,
1, 1, 0,
0, 1, 0,
0, 0, 0,
1, 1, 0
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);
#else
#endif
}
glPopMatrix();
}
More exactly, I want to achieve this:
There might be quite a few things you need to do to achieve this... The sphere drawn in the image you posted is the result of using lighting, shine, and color. In general you need a shader that can process all of that and would normally work for any shape.
This specific case (like some others that can be described mathematically) can be drawn with a single quad, without even needing to push normals to the program. What you need to do is construct the normal in the fragment shader: if you receive the vectors sphereCenter and fragmentPosition and a float sphereRadius, then sphereNormal is a vector such that
sphereNormal = (fragmentPosition-sphereCenter)/sphereRadius; //taking into account that both positions have .z = .0
sphereNormal.z = -sqrt(1.0 - dot(sphereNormal.xy, sphereNormal.xy)); //only where length(sphereNormal.xy) < 1.0, i.e. inside the sphere's silhouette
and real sphere position:
spherePosition = sphereCenter + sphereNormal*sphereRadius;
Now all you need to do is add your lighting. Static or not, it is most common to use some ambient factor, linear and quadratic distance attenuation factors, and a shine factor:
color = ambient*materialColor; //apply ambient
vector fragmentToLight = lightPosition-spherePosition;
float lightDistance = length(fragmentToLight);
fragmentToLight = normalize(fragmentToLight); //can also just divide with light distance
float dotFactor = dot(sphereNormal, fragmentToLight); //the dot factor takes into account the angle between the light and the surface normal
if(dotFactor > .0) {
color += (materialColor*dotFactor)/(1.0 + lightDistance*linearFactor + lightDistance*lightDistance*squareFactor); //apply dot factor and distance factors (in many cases the distance factors are 0)
}
vector shineVector = (sphereNormal*(2.0*dotFactor)) - fragmentToLight; //the light vector mirrored about the normal, i.e. a reflection vector
float shineFactor = dot(shineVector, normalize(cameraPosition-spherePosition)); //represents how strong the light reflection is towards the viewer
if(shineFactor > .0) {
color += materialColor*(shineFactor*shineFactor * shine); //or some other power than 2 (shineFactor*shineFactor)
}
This pattern for computing lighting in a fragment shader is one of very many. If you don't like it or you can't make it work, I suggest you find another one on the web; otherwise I hope you will understand it and be able to play around with it.
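Tying the pieces together, here is a rough GLSL ES fragment shader sketch of the idea (all uniform names and the varying quadPos are illustrative choices of mine; the camera-facing quad is assumed to cover the sphere's silhouette, and distance attenuation is left out for brevity):
precision mediump float;

uniform vec3 sphereCenter;   // sphere center in the same space as quadPos
uniform float sphereRadius;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform vec3 materialColor;
uniform float ambient;       // e.g. 0.2
uniform float shine;         // specular strength

varying vec3 quadPos;        // interpolated position on the camera-facing quad

void main() {
    // Build a fake sphere normal from the offset on the quad
    vec3 n = (quadPos - sphereCenter) / sphereRadius;
    float r2 = dot(n.xy, n.xy);
    if (r2 > 1.0) discard;          // outside the sphere's silhouette
    n.z = -sqrt(1.0 - r2);          // sign depends on which way the quad faces

    vec3 spherePos = sphereCenter + n * sphereRadius;

    vec3 color = ambient * materialColor;               // ambient term
    vec3 toLight = normalize(lightPosition - spherePos);
    float diff = dot(n, toLight);
    if (diff > 0.0) {
        color += materialColor * diff;                  // diffuse term
        vec3 reflected = n * (2.0 * diff) - toLight;    // mirrored light direction
        float spec = dot(reflected, normalize(cameraPosition - spherePos));
        if (spec > 0.0) {
            color += materialColor * (spec * spec * shine); // crude specular term
        }
    }
    gl_FragColor = vec4(color, 1.0);
}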
I just want to mimic a perspective projection without using the z coordinate (i.e. only using a 2D environment).
The x axis must shrink as the y axis increases towards the top.
I have the following code ready:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, size.width, 0, size.height, -1024 * CC_CONTENT_SCALE_FACTOR(), 1024 * CC_CONTENT_SCALE_FACTOR());
GLfloat proj[16] = { ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,? };
glMultMatrixf(proj);
Running on iPad.
I'm mapping a texture that is 256x256 onto a quad, and I'm trying to render it at exactly the same size as the actual image. The quad looks correct (the shape is right and the texture is mapped correctly), but it is only ~75% of the size of the actual .png.
Not sure why.
The code is characterized as follows (excerpts below):
Screen is 768x1024. Window is 768x1024 as well.
glViewport(0, 0, 768, 1024); // aspect ratio 1:1.333
glOrthof(-0.5f, 0.5f, -0.666f, 0.666f, -1.0f, 1.0f); // matching aspect ratio with 0,0 centered
// Sets up an array of values to use as the sprite vertices.
//.25 of 1024 is 256 pixels so the quad (centered on 0,0) spans -0.125,
//-0.125 to 0.125, 0.125 (bottom left corner and upper right corner)
GLfloat spriteVertices[] = {
-0.125f, -0.125f,
0.125f, -0.125f,
-0.125f, 0.125f,
0.125f, 0.125f,
};
// Sets up an array of values for the texture coordinates.
const GLshort spriteTexcoords[] = {
0, 0,
1, 0,
0, 1,
1, 1,
};
followed by the appropriate calls to:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
then
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Why is my sprite smaller than 256x256 when rendered?
Your output is 192x192 (approximately) because your quad is the wrong size. It's 0.25x0.25, and the "unit length" direction is X, which is 768 pixels wide, so 0.25 * 768 = 192. If you switched your glOrthof so that top/bottom were -0.5 and +0.5 (with the appropriate correction to X to keep the aspect ratio) it would work.
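A sketch of that fix, keeping the same vertex data and only changing the projection:
glViewport(0, 0, 768, 1024);
// 0.75 units across 768 px and 1.0 unit across 1024 px keep the units square,
// so the 0.25 x 0.25 quad comes out as 256 x 256 pixels
glOrthof(-0.375f, 0.375f, -0.5f, 0.5f, -1.0f, 1.0f);
// spriteVertices can stay at +/- 0.125f in both axes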