How to set XNA's TextureFilter to Point - xna-4.0

I have a little Texture2D:
myTexture = new Texture2D(GraphicsDevice, 512, 512, false, SurfaceFormat.Vector4);
when I try to draw it:
spriteBatch.Begin();
spriteBatch.Draw(myTexture, new Rectangle(0, 0, 512, 512), Color.White);
spriteBatch.End();
I get an exception:
"XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4."
How can I set TextureFilter to Point?

Pass SamplerState.PointClamp or SamplerState.PointWrap to SpriteBatch.Begin.
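For example, a minimal sketch using the XNA 4.0 Begin overload that takes a SamplerState (the remaining arguments are just the defaults spelled out):
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null);
spriteBatch.Draw(myTexture, new Rectangle(0, 0, 512, 512), Color.White);
spriteBatch.End();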

Related

libgdx - sprite does not rotate

This is the create() method:
batch = new SpriteBatch();
texture = new Texture(Gdx.files.internal("spaceships/tfighter0.png"));
sprite = new Sprite(texture);
sprite.setOrigin(sprite.getWidth()/2, sprite.getHeight()/2);
sprite.rotate(180f);
And the render() method:
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(sprite, 200, 200);
batch.end();
Shouldn't the sprite be rotated now? It just looks the same as the PNG no matter what angle I pass to the rotate method.
Use
sprite.draw(batch);
instead of
batch.draw(sprite, x, y);
The batch.draw(...) overload treats the Sprite as a plain TextureRegion and ignores its rotation, scale, and origin; sprite.draw(batch) applies them.
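A minimal corrected render() under that change (a sketch; since sprite.draw uses the Sprite's own coordinates, the position is set on the sprite rather than passed to the batch):
@Override
public void render() {
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
sprite.setPosition(200, 200); // sprite.draw reads the sprite's own position
sprite.draw(batch); // applies the rotation and origin set in create()
batch.end();
}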

How can I change an Image in libgdx?

I load an Image either via a Texture:
texture = new Texture("badlogic.jpg");
texture2 = new Texture("badlogic2.png");
image = new Image(texture);
stage.addActor(image);
or via a TextureRegion:
texture = new Texture("badlogic.jpg");
regions = new TextureRegion[2];
regions[0] = new TextureRegion(texture, 0, 0, 64, 64);
regions[1] = new TextureRegion(texture, 0, 63, 64, 64);
image = new Image(regions[0]);
stage.addActor(image);
Now, I want to change the image to texture2 or regions[1]. How can I do that?
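One common scene2d approach is to swap the Image's drawable in place, so the actor stays on the stage. A sketch using the standard libgdx classes Image.setDrawable and TextureRegionDrawable:
// switch the existing Image to the second texture
image.setDrawable(new TextureRegionDrawable(new TextureRegion(texture2)));
// or to the second region
image.setDrawable(new TextureRegionDrawable(regions[1]));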

Render to 3D Texture with OpenGL on OSX (multi-layer framebuffer attachment)

I have an OpenGL 3.2 CORE context on OSX 10.7.5 set up and am trying to render to a 3D texture using a layered rendering approach. The geometry shader built-in gl_Layer is supported, but I cannot attach a GL_TEXTURE_3D to my framebuffer: glCheckFramebufferStatus returns GL_FRAMEBUFFER_UNSUPPORTED.
This is the card and driver version in my MBP:
AMD Radeon HD 6770M 1024 MB - OpenGL 3.2 CORE (ATI-7.32.12)
This feature does not directly relate to a specific extension AFAIK.
Does anybody know how to figure out whether this is unsupported by the driver or hardware?
Thanks so much.
Below is the code to reproduce the problem. I use GLFW (the 2.x API) to set up the context:
// Initialize GLFW
if (!glfwInit())
throw "Failed to initialize GLFW";
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
// Open a window and create its OpenGL context
if (!glfwOpenWindow(720, 480, 8, 8, 8, 8, 24, 8, GLFW_WINDOW))
throw "Failed to open GLFW window";
//
// ...
//
GLuint framebuffer, texture;
GLenum status;
glGenFramebuffers(1, &framebuffer);
// Set up the FBO with one texture attachment
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, framebuffer);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_3D, texture);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 256, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture, 0);
status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
throw status;
//
// status is GL_FRAMEBUFFER_UNSUPPORTED here !!!
//
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDeleteTextures(1, &texture);
glDeleteFramebuffers(1, &framebuffer);
exit(1);
Does anybody know how to figure out whether this is unsupported by the driver or hardware?
It just told you. That's what GL_FRAMEBUFFER_UNSUPPORTED means: it's the driver exercising veto-power over any framebuffer attachments it doesn't like for any reason whatsoever.
There's not much you can do when this happens except try other things, such as rendering to a 2D array texture (GL_TEXTURE_2D_ARRAY) instead.
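A sketch of that fallback, reusing the setup above (glTexImage3D also allocates array textures; only the texture target changes, and gl_Layer then addresses the array layer just as it would a 3D texture slice):
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 256, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture, 0);
status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER); // ideally GL_FRAMEBUFFER_COMPLETE now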

Stenciling using OpenGl ES 2.0

I am trying to figure out a way to cut a certain region out of a background texture so that a custom pattern is not rendered on the screen for that background. For example, a square cut out of the background (example image omitted); the square can be any pattern.
I am using a frame buffer object (FBO) and the stencil buffer to achieve this kind of effect. Here is the code:
fbo.begin();
// Disable the color and depth masks so that all rendering goes to the stencil buffer only
Gdx.gl20.glColorMask(false, false, false, false);
Gdx.gl20.glDepthMask(false);
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 1, 0xFFFFFFFF);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
stage.getSpriteBatch().begin();
rHeart.draw(stage.getSpriteBatch(), 1); //Draws the required pattern on the stencil buffer
// Re-enable the color and depth masks to resume normal rendering
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glDepthMask(true);
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 1, 0xFFFFFFFF);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
background.draw(stage.getSpriteBatch(), 1); //Draws the background such that the background is not rendered on the required pattern, leaving that area black.
stage.getSpriteBatch().end();
Gdx.gl20.glDisable(GL20.GL_STENCIL_TEST);
fbo.end();
However, this is not working at all. How am I supposed to do this using stencil buffers? I am also having difficulty understanding glStencilFunc and glStencilOp; it would be very helpful if anyone could shed some light on these two.
UPDATE: I have also tried producing something similar using glColorMask. Here is the code:
Gdx.gl20.glClearColor(0, 0, 0, 0);
stage.draw();
FrameBuffer.clearAllFrameBuffers(Gdx.app);
fbo1.begin();
Gdx.gl20.glClearColor(0, 0, 0, 0);
batch.begin();
rubber.draw(batch, 1);
Gdx.gl20.glColorMask(false, false, false, true);
coverHeart.draw(batch, 1);
Gdx.gl20.glColorMask(true, true, true, false);
batch.end();
fbo1.end();
toDrawHeart = new Image(new TextureRegion(fbo1.getColorBufferTexture()));
batch.begin();
toDrawHeart.draw(batch, 1);
batch.end();
This code is not producing the intended result: the pattern area is not cut out of the background as it should be (screenshots of the actual and desired output omitted; ignore the window sizes and colour tones).
Note: I am using the libgdx library.
SpriteBatch defers its actual GL draw calls until end() (or flush()) is called, so state changes made mid-batch do not affect sprites that are already queued. If you want to use stenciling with SpriteBatch, you'll need to break up the batched drawing. Note that I've left out the FBOs below, but that shouldn't make a difference.
#Override
public void create() {
camera = new OrthographicCamera(1, 1);
batch = new SpriteBatch();
texture = new Texture(Gdx.files.internal("data/badlogic.jpg"));
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
TextureRegion region = new TextureRegion(texture, 0, 0, 256, 256);
sprite = new Sprite(region);
sprite.setSize(1f, 1f);
sprite.setPosition(-0.5f, -0.5f);
spriteUpsideDown = new Sprite(new TextureRegion(texture, 1f, 1f, 0f, 0f));
spriteUpsideDown.setSize(1f, 1f);
spriteUpsideDown.setPosition(-0.5f, -0.5f);
pattern = new Sprite(region);
pattern.setSize(0.5f, 0.5f);
pattern.setPosition(-0.25f, -0.25f);
<< Set Input Processor >>
}
The input processor lets you toggle two boolean flags, breakBatch1 and breakBatch2, from the keyboard (libgdx on desktop); they control whether the SpriteBatch drawing is broken up at those two points.
#Override
public void render() {
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_STENCIL_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
// setup drawing to stencil buffer
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 0x1, 0xffffffff);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
Gdx.gl20.glColorMask(false, false, false, false);
// draw base pattern
batch.begin();
pattern.draw(batch);
if(breakBatch1) { batch.end(); batch.begin(); }
// fix stencil buffer, enable color buffer
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
// draw where pattern has NOT been drawn
Gdx.gl20.glStencilFunc(GL20.GL_NOTEQUAL, 0x1, 0xff);
sprite.draw(batch);
if(breakBatch2) { batch.end(); batch.begin(); }
// draw where pattern HAS been drawn.
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0x1, 0xff);
spriteUpsideDown.draw(batch);
batch.end();
}
Gdx.gl20.glStencilFunc(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
These are not the right arguments to glStencilFunc. I think you mean glStencilOp here.
You need to use glGetError in your code; it will alert you to these kinds of errors.
I believe your problem is that the initial GL_REPLACE stencil operation is applied to every pixel drawn by rHeart.draw, regardless of the shape of the texture applied to the quad. Thus the stencil value is written for the whole quad, not just the heart shape, which gives your problem.
If the texture on your quad has an alpha channel, then, since GL_ALPHA_TEST is not supported in OpenGL ES 2.0, you could set up your fragment shader to discard fully transparent pixels, preventing them from being written to the stencil buffer.
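A minimal fragment-shader sketch of that idea (assuming libgdx's default u_texture and v_texCoords names; the 0.01 alpha threshold is an arbitrary choice):
precision mediump float;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main() {
vec4 c = texture2D(u_texture, v_texCoords);
if (c.a < 0.01) discard; // transparent pixels never reach the stencil buffer
gl_FragColor = c;
}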

How to force 24 bit color depth in OpenGL ES

I am trying to load and display a texture in OpenGL ES. The problem is that even though my image is in ARGB_8888 format, the texture seems to be drawn in RGB_565 format, and without dithering my image looks pretty terrible.
I am running my program on a phone that supports 16M colors, so the texture should be viewable in all its original glory.
EDIT code:
loading bitmap:
background = BitmapFactory.decodeResource(getResources(), R.drawable.background, null);
generating texture:
public void loadBackground(GL10 gl) {
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, background, 0);
background.recycle();
}
drawing:
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, backgroundVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0,4);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
onSurfaceCreated:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glEnable(GL10.GL_TEXTURE_2D);
}
onSurfaceChanged:
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
gl.glLoadIdentity();
gl.glOrthof(0, width, height, 0, -1, 1);
}
By default, GLSurfaceView uses RGB_565 as its pixel format, so you need to request a 32-bit surface before you bind the renderer. More info at http://developer.android.com/reference/android/opengl/GLSurfaceView.html ; look at the setEGLConfigChooser methods.
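A minimal sketch, to be called before setRenderer (the six-argument setEGLConfigChooser overload takes red, green, blue, alpha, depth and stencil bit sizes):
GLSurfaceView view = new GLSurfaceView(context);
// request an RGBA8888 surface with a 16-bit depth buffer and no stencil
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
view.setRenderer(renderer);
On some devices you may also need view.getHolder().setFormat(PixelFormat.RGBA_8888) so that the window surface itself is 32-bit.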
