I wish to upload textures with non-zero mipmap levels using glCopyTexImage2D().
I am using the following code:
// Render some geometry first so there is something to copy
GLuint textureId;
GLint mipmap_level = 1;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureId);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, mipmap_level, GL_RGBA16_SNORM, 0, 0, 128, 128, 0);
// Five other glCopyTexImage2D() calls to load the remaining cube faces
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
// Render geometry using the texture
Here, if I use mipmap_level = 1, the geometry is not drawn at all. How exactly do mipmap levels work in conjunction with the glCopyTexImage2D() API?
I suppose that using level = 1 loads a 64x64 texture, i.e. the first mipmap below the base image.
Calling glGenerateMipmap() before glCopyTexImage2D() would not make any sense, so how exactly does the driver load a non-zero mipmap level using glCopyTexImage2D()?
You first set the image for mipmap level 1, and then you call glGenerateMipmap, which automatically generates all mipmaps from the image of the first mipmap level, which is level 0. So every mipmap image after level 0, including the one you just copied in, simply gets overwritten by the automatically generated images. glGenerateMipmap doesn't care how you set the image for mipmap level 0; it just assumes you did. On the other hand, I don't see you specifying an image for level 0 anywhere, so it will probably just contain rubbish or be regarded as incomplete, which will make any use of the corresponding texture fail.
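If the goal is simply a complete mipmapped cube map of what was rendered, the usual order is the other way around: copy into level 0 and let glGenerateMipmap derive the rest. A minimal sketch, assuming the framebuffer already contains the 128x128 image for the face being copied:
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA16_SNORM, 0, 0, 128, 128, 0);
// ...repeat for the five other faces...
glGenerateMipmap(GL_TEXTURE_CUBE_MAP); // derives levels 1..7 from level 0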
In the end I hope this question doesn't just amount to knowing that in most programming languages one usually starts to count at 0 instead of 1, and so does OpenGL.
EDIT: As datenwolf points out in his comment, you can change the base mipmap level to be used for mipmap-filtering and as input for glGenerateMipmap by doing a
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 1);
But don't do this just because you want to start counting at 1 instead of 0. If, on the other hand, you have a valid image for mipmap level 0, want to set one for level 1, and want to use that for computing the other levels, then changing the base level to 1 before calling glGenerateMipmap (and probably back to 0 afterwards) can be a good idea, though I cannot come up with a use case for this approach right away.
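For completeness, that flow would look roughly like this (a sketch only; it assumes level 1 of every face already holds a valid 64x64 image):
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 1);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP); // derives levels 2..7 from level 1
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 0); // restore the default base level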
I recently found out that one can render alpha-blended primitives correctly not just back-to-front but also front-to-back (http://hacksoflife.blogspot.com/2010/02/alpha-blending-back-to-front-front-to.html) by blending with GL_ONE_MINUS_DST_ALPHA, GL_ONE, premultiplying the fragment's alpha in the fragment shader, and clearing the destination alpha to zero (black) before rendering.
It occurred to me that it would then be great if one could combine this with EITHER early-z rejection OR some kind of early "destination-alpha testing" in order to discard fragments that won't contribute to the final pixel color.
When rendering with front-to-back alpha-blending, a fragment can be skipped if the destination-alpha at this location already contains the value 1.0.
I prototyped this using GL_EXT_shader_framebuffer_fetch to test the destination alpha at the start of the fragment shader and manually discard the fragment if the value is above a certain threshold. That works, but it actually made things slower on my test hardware (Snapdragon XR2), so I wonder:
whether it's somehow possible to not even execute the fragment shader when the destination alpha is already above a certain threshold?
alternatively, whether it's possible to write to the depth buffer only for fragments that are completely opaque, leaving the current depth value unchanged for all fragments with alpha below 1 (while still depth-testing every fragment); that should allow the hardware to use early-z rejection for occluded fragments. So:
is this possible somehow (i.e. depth-test everything, but update the depth-buffer value only for opaque fragments and leave it unchanged for others)?
Bottom line: this would reduce the overdraw of alpha-blended sprites to only those fragments that actually contribute to the final pixel color, and I wonder whether there is a performant way of doing this.
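For reference, the framebuffer-fetch prototype described above boils down to a fragment-shader test along these lines (a sketch, assuming GLSL ES 1.00 with the extension; the 0.99 threshold and the u_premultipliedColor uniform are hypothetical stand-ins):
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
uniform vec4 u_premultipliedColor; // hypothetical: the premultiplied fragment color
void main() {
    // gl_LastFragData[0] is the current framebuffer value, provided by the extension
    if (gl_LastFragData[0].a >= 0.99) // destination is effectively opaque already
        discard;                      // this fragment cannot contribute
    gl_FragColor = u_premultipliedColor; // normal premultiplied-alpha output
}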
For number 2, I think you could modify gl_FragDepth in the fragment shader to achieve something close, but doing so disables early-z rejection, so it wouldn't really help.
I think one viable way to reduce overdraw would be to create a tool to generate a mesh for each sprite which aims to cover a decent proportion of the opaque part of the sprite without using too many verts. I imagine for a typical sprite, even just a well placed quad could cover 80%+.
You'd render the generated opaque geometry of your sprites with depth write enabled, and do a second pass the ordinary way with depth testing enabled to cover the transparent parts.
You would massively reduce overdraw, but significantly increase the complexity of your code and the number of verts rendered. You would also double your draw calls; but if you're atlasing and using texture arrays, you might just be going from 1 to 2 draw calls, which is fine. I've never tried it, so I can't say whether it's worth all the effort involved.
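In GL terms, the two passes might look something like this (a sketch; drawOpaqueSpriteMeshes and drawFullSprites are hypothetical stand-ins for rendering the generated interior meshes and the regular sprites):
// Pass 1: the generated opaque interior meshes, front-to-back, depth writes on
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueSpriteMeshes();
// Pass 2: the full sprites, blended, depth test on but no depth writes
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE); // premultiplied front-to-back blending
glDepthMask(GL_FALSE);
drawFullSprites();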
When setting up a single-texture WebGL program, which commands actually cause "work" to be done?
It seems to me that the texImage2D command uploads the HTML image data to the GPU:
gl.texImage2D(target, level, internalformat, format, type, HTMLImageElement);
However, once that data is uploaded and bound to a texture, that texture still needs to be "bound" to a sampler:
setActiveTexture(gl, 0, this['textureRef0']);
var samplerRef = gl.getUniformLocation(program, 'sampler0');
gl.uniform1i(samplerRef, 0);
Does any memory need to be allocated to bind the texture to the sampler? Or is it just a pointer change that points the sampler at the texture data?
Also what about binding textures to frame buffers?
gl.bindFramebuffer(gl.FRAMEBUFFER, this.globalFB);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textureRef0, 0);
Does that act alone cause any significant performance issues? Or is "real" work only done when the program is called and data is rendered into that texture?
texImage2D allocates memory because the driver needs to make a copy of the data you pass it; the moment texImage2D returns, you are free to change your copy of the data.
Framebuffers don't allocate much memory; the memory is in the attachments. But framebuffers need to be validated, so it's better to make multiple framebuffers, one for each combination of attachments you need, rather than changing the attachments on a single framebuffer.
In other words, if you're, say, ping-ponging textures for post-processing:
// init time
const fb = gl.createFramebuffer();
// render time
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let pass = 0; pass < numPasses; ++pass) { // one iteration per post-processing pass
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
      gl.TEXTURE_2D, dstTex, 0); // bad! changing attachments forces re-validation
  ...
  gl.drawXXX
  const t = srcTex; srcTex = dstTex; dstTex = t; // swap textures
}
vs
// init time
let srcFB = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, srcFB);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
    gl.TEXTURE_2D, srcTex, 0);
let dstFB = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, dstFB);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
    gl.TEXTURE_2D, dstTex, 0);
// render time
for (let pass = 0; pass < numPasses; ++pass) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, dstFB); // good: attachments never change
  ...
  gl.drawXXX
  const t = srcFB; srcFB = dstFB; dstFB = t; // swap framebuffers
}
Textures also have the issue that, because of the API design, GL has a bunch of work to do the first time you draw with a texture (and any time you change its contents).
Consider that this is a normal sequence in WebGL to supply mips:
texImage2D level 0, 16x16
texImage2D level 1, 8x8
texImage2D level 2, 4x4
texImage2D level 3, 2x2
texImage2D level 4, 1x1
But these are also completely valid API calls:
texImage2D level 0, 16x16
texImage2D level 1, 8x8
texImage2D level 2, 137x324 // nothing in the spec prevents this. It's fully valid
texImage2D level 3, 2x2
texImage2D level 4, 1x1
texImage2D level 2, 4x4 // fix level 2 before drawing
That call for level 2 with some strange size is valid; it's not allowed to generate an error. Of course, if you don't replace level 2 before drawing, the texture will fail to draw, but uploading the data is not wrong according to the API. That means it isn't until the texture is actually used that the driver can look at the data, formats, and sizes of each mip, check that they are all consistent, and finally arrange the data on the GPU.
texStorage was added to fix that issue (available in WebGL2/OpenGL ES 3.0)
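A minimal sketch of the WebGL2 version, assuming a context gl and a Uint8Array pixels holding 16x16 RGBA data; texStorage2D fixes the format and every mip size up front, so the driver can lay the texture out immediately:
// Allocate an immutable 16x16 RGBA texture with all 5 mip levels (16, 8, 4, 2, 1)
gl.texStorage2D(gl.TEXTURE_2D, 5, gl.RGBA8, 16, 16);
// Levels can now only be filled with texSubImage2D, so sizes can never mismatch
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, 16, 16, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.generateMipmap(gl.TEXTURE_2D); // or upload each remaining level explicitly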
Calling activeTexture, binding textures with bindTexture, and setting uniforms take no extra memory and have no significant performance cost.
I created a 1024*1024 texture with
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, 1024, 1024, 0, nDataLen*4, pData1);
then updated its first 512*512 part like this:
glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, nDataLen, pData2);
This update generated GL error 1282 (GL_INVALID_OPERATION). If I update the whole 1024*1024 region, everything is fine; it seems that a PVRTC texture cannot be partially updated.
Is it possible to partially update a PVRTC texture, and if so, how?
Sounds to me like you can't on GLES2 (link to spec, see section 3.7.3):
Calling CompressedTexSubImage2D will result in an INVALID_OPERATION error if xoffset or yoffset is not equal to zero, or if width and height do not match the width and height of the texture, respectively. The contents of any texel outside the region modified by the call are undefined. These restrictions may be relaxed for specific compressed internal formats whose images are easily modified
Makes glCompressedTexSubImage2D sound a bit useless to me, tbh, but I guess it's there for updating individual mips or texture array layers.
Surprisingly, I copied a small PVRTC texture's data into a large one, and it works just like glCompressedTexSubImage2D. But I'm not sure whether it's safe to use this solution in my engine.
Rightly or wrongly, the reason PVRTC1 does not have CompressedTexSubImage2D support is that unlike, say, ETC* or S3TC, the texture data is not compressed as independent 4x4 squares of texels which, in turn, get represented as either 64 or 128 bits of data depending on the format. With ETC*/S3TC any aligned 4x4 block of texels can be replaced without affecting any other region of the texture simply by just replacing its corresponding 64- or 128-bit data block.
With PVRTC1, two aims were to avoid block artifacts and to take advantage of the fact that neighbouring areas are usually very similar and thus can share information. Although the compressed data is grouped into 64-bit units, these affect overlapping areas of texels. In the case of 4bpp they are ~7x7 and for 2bpp, 15x7.
As you later point out, you could copy the data yourself, but there may be a fuzzy boundary. For example, I took these 64x64 and 32x32 textures (which have been compressed and decompressed with PVRTC1 @ 4bpp)...
[image: the 64x64 and 32x32 source textures]
...and then did the equivalent of "TexSubImage" to get:
[image: the combined result]
As you should be able to see, the border of the smaller texture has smudged, as colour information is shared across the boundaries.
In practice it might not matter but since it doesn't strictly match the requirements of TexSubImage, it's not supported.
PVRTC2 has facilities to do better subimage replacement, but it is not exposed on at least one well-known platform.
<Unsubtle plug> BTW, if you want some more info on texture compression, there is a thread on the Stack Exchange Computer Graphics site. </Unsubtle plug>
I have some code for rendering a video, so the OpenGL side of it (once the rendered frame is available in the target texture) is very simple: Just render it to the target rectangle.
What complicates things a bit is that I am using a third-party SDK to render the UI, so I cannot know what state changes it makes, and therefore every time I am rendering a frame I have to make sure all the states I need are set correctly.
I am using a vertex and a texture coordinate buffer to draw my rectangle like this:
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texHandle);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glPushClientAttrib(GL_CLIENT_VERTEX_ARRAY_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glVertexPointer(4, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, 4);
glPopClientAttrib();
(Is there anything I can skip, even without knowing what happens inside the UI library?)
Now I wonder (and this is more theoretical, as I suppose there won't be much difference when drawing just one quad): is it theoretically faster to render like the above, or to instead write a trivial vertex and fragment shader that does nothing more than return ftransform() for the position and computes the fragment color the default way?
I wonder if using a shader would let me skip certain state changes, or generally speed things up. Or does OpenGL internally do just that with the above code anyway, so the outcome would be exactly the same?
If you are worried about clobbering the UI SDK state, you should wrap the code with glPushAttrib(GL_ENABLE_BIT | GL_TEXTURE_BIT) ... glPopAttrib() as well.
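A minimal sketch of that wrapping, with the draw code from the question going in between:
/* save the server-side state the draw code touches (enables, texture binding/env) */
glPushAttrib(GL_ENABLE_BIT | GL_TEXTURE_BIT);
/* ... the glActiveTexture/glBindTexture/glDrawArrays sequence from above ... */
/* restore whatever state the UI SDK had set */
glPopAttrib();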
You could simplify the state management code a bit by using a vertex array object.
As to using a shader, for this simple program I wouldn't bother. It would be one more bit of state you'd have to save & restore, and you're right that internally OpenGL is probably doing just that for the same outcome.
On speeding things up: performance is going to be dominated by the cost of sending tens? hundreds? of kilobytes of video frame data to the GPU, and adding or removing OpenGL calls is very unlikely to make a difference. I'd look first at possible differences in frame rate between the UI and video stream: for example, if the frame rate is faster, arrange for the video data to be copied once and re-used instead of copying it each time the UI is redrawn.
Hope this helps.
I'm making a 2D side-scrolling space-shooter-type game, where I need a background that can be scrolled infinitely (it is tiled or wrapped repeatedly). I'd also like to implement parallax scrolling: perhaps one lowest background nebula texture that barely moves, a higher one containing far-away stars that moves slightly more, and the highest background containing close stars that moves a lot.
I see from Google that I'd have each layer move 50% less than the layer above it, but how do I implement this in libgdx? I have a Camera that can be zoomed in and out, and the physical 800x480 screen could show anything from a 128x128-pixel ship to a huge area of space featuring the textures wrapped multiple times at their edges.
How do I continuously wrap a smaller texture (say 512x512) as if it were infinitely tiled (for when the camera is zoomed right out)? And how do I then layer multiple textures like these, keep them together in a suitable structure (is there one in the libgdx API?), and move them as the player's coords change? I've looked at the javadocs and the examples but can't find anything like this problem; apologies if it's obvious!
Hey, I am also making a parallax background and trying to get it to scroll.
There is a ParallaxTest.java in the repository, it can be found here.
This file is a standalone class, so you will need to incorporate it into your game however you want, and you will need to change the control input, since it's hooked up to use the touch screen/mouse.
This worked for me. As for the repeating background, I haven't gotten that far yet, but I think you just need basic logic along the lines of: when one screen away from the end, change the first few screens' positions to line up at the end.
I don't have much more to say about parallax scrolling than PFG already did. There is indeed an example in the repository under the test folder, and there are several explanations around the web. I liked this one.
The matter of the background is really easy to solve. This and other related problems can be approached using modular arithmetic. I won't go into the details, because once shown it is very easy to understand.
Imagine that you want to show a compass on your screen. You have a 1024x16 texture representing the cardinal points; basically, all you have is a strip. Leaving aside considerations about the real orientation and such, you have to render it.
Your viewport is 300x400, for example, and you want 200px of the texture on screen (to make it more interesting). You can render it perfectly with a single region until you reach position (1024-200) = 824. Past that position you clearly run out of texture, but since it is a compass, it obviously has to wrap around to its start. So this is the answer: a second texture region will do the trick. For every pos > 824, the first region covers the remaining (1024-pos) pixels of the strip, and the second region covers the missing (200-(1024-pos)) pixels from the start of the strip.
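Stripped of the compass-specific unit conversion, the wrapping logic might look like this in pixels (a hypothetical sketch; region1 and region2 stand for two TextureRegions over the same strip texture):
int stripWidth = 1024;   // width of the strip texture
int windowWidth = 200;   // how many pixels of it are shown
int pos = 900;           // current offset into the strip, 0..1023
int firstWidth = Math.min(windowWidth, stripWidth - pos);
region1.setRegion(pos, 0, firstWidth, 16);                 // tail end of the strip
if (firstWidth < windowWidth) {
    region2.setRegion(0, 0, windowWidth - firstWidth, 16); // wrapped-around head
}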
The following code is intended to work as a real example of a compass. It's very dirty, since it works with relative positions all the time due to the conversion between the ranges (0-3.6) and (0-1024):
spriteBatch.begin();
if (compassorientation < 0)
    compassorientation = (float) (3.6 - compassorientation % 3.6);
else
    compassorientation = (float) (compassorientation % 3.6);
if (compassorientation < ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 200, 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth(), 32 * (float) 1.2);
} else if (compassorientation > ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 1024 - (int) (compassorientation / 3.6 * 1024), 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), 32 * (float) 1.2);
    compass2.setRegion(0, 0, 200 - compass1.getRegionWidth(), 16);
    spriteBatch.draw(compass2, compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth() - (compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth()), 32 * (float) 1.2);
}
spriteBatch.end();
You can use the setWrap function like below:
Texture texture = new Texture(Gdx.files.internal("images/background.png"));
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
It will draw the background repeatedly! Hope this helps!
Beneath where you initialize your Texture for the object, type in this:
YourTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
where YourTexture is the texture that you want to parallax-scroll.
Then, in your render method, add this code:
batch.draw(YourTexture, 0, 0, 0, srcy, Gdx.graphics.getWidth(),
        Gdx.graphics.getHeight());
srcy += 10;
This will give you an error until you declare a variable called srcy; it's nothing too fancy:
int srcy = 0;