WebGL and rectangular (power of two) textures - opengl-es

WebGL is known to have poor support for NPOT (non-power-of-two) textures. But what about rectangular textures where both width and height are powers of two? Specifically, I'm trying to draw to a rectangular framebuffer as part of a render-to-texture scheme to generate some UI elements. The framebuffer would need to be 512x64 or thereabouts.
How much less efficient would this be in terms of drawing? If framerate is a concern, would I do better to allocate a 512x512 power-of-two-sized buffer and only render to the top 64 pixels, sacrificing memory for speed?

There has never been a constraint that width must equal height.

More specifically: 2D textures are not required to be square; a 512x64 texture is not only allowed, it should also be implemented efficiently by the driver. Cube maps, on the other hand, do need to be square.
For 2D textures you can even use NPOT textures, provided both wrap modes are CLAMP_TO_EDGE and your minification filter does not require a mipmap. The efficiency of NPOT textures may vary depending on your driver.
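For concreteness, here is a minimal sketch of such a render-to-texture setup in WebGL1; the canvas creation and variable names are placeholders rather than anything from the question:

```ts
// Minimal sketch: a 512x64 render target for UI elements (WebGL1).
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl') as WebGLRenderingContext;

const uiTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, uiTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 64, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// 512x64 is power-of-two in both dimensions, so mipmaps and REPEAT are allowed.
// For a true NPOT size (say 500x60) the settings below would be mandatory:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); // no mipmap needed

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, uiTexture, 0);
// ...draw the UI into the framebuffer, then bind uiTexture when compositing.
```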

Related

Is a large MTLTexture more performant than using multiple smaller MTLTextures?

I'm using Metal to display an image into a 2D plane. The image is being rendered as tiles to an IOSurface using Core Image. After each tile is rendered, the IOSurface is sent via XPC to the app, wrapped in an MTLTexture and its contents are copied to a master texture using a blit encoder. The IOSurface is then re-used if need-be.
I have full control over the tile sizes and texture sizes and I'm wondering if Metal prefers having a small number of large textures, a large number of small textures or if it just doesn't really matter.
There are some tradeoffs that I've already come across and the most notable one is that if I use smaller textures then the cost to (re)generate the mipmaps can be smaller. There's also the issue of textures having a maximum size of 16384, which implies that any image larger than that needs to be tiled anyways.
Consider the following:
Image dimensions used below are just for easy math, in real life I'm working with DSLR images, panoramas, stitched images, etc..
#1 - A single texture that covers the entire image:
Texture Type: MTLTextureType2D
Texture Size: 400 x 300 px
When Core Image renders a tile and I copy it into the texture, I have to regenerate the mipmaps for the entire texture, even though most of the texture's content did not change.
#2 - Using two textures:
Texture Type: MTLTextureType2DArray
Texture Array Count: 2
Texture Size: 200 x 300 px (x2)
With two tiles covering the image's content, I only have to regenerate the mipmaps for the "dirty" tile.
#3 - Using many textures:
Texture Type: MTLTextureType2DArray
Texture Array Length: 12
Texture Size: 100 x 100 px (x12)
In this scenario, I can minimize mipmap generation by matching the texture tile size to my rendering tile size, but it will require a large number of MTLTextures.
MTLTextureDescriptor.arrayLength is documented as being able to hold values between 1..2048, which suggests to me that using a large number of textures isn't such a bad thing.
A single texture is passed to my fragment shader and all it does is sample the color at the appropriate coordinates.
Using smaller-sized textures gives me a lot more fidelity in marking the "dirty" regions of the image that need invalidating, but I'm curious if the large number of textures is to be avoided or not.
My current attempts at measuring the performance have been somewhat inconclusive, and I wonder if that's because this doesn't matter at all from Metal's perspective. Ultimately, the same amount of memory is needed (from my perspective), but I'd be interested to know if there are performance trade-offs I'm not aware of.

GL_REPEAT vs a High Resolution Image?

If I've got a low-resolution texture with a bunch of dots in it, and tiling it with GL_REPEAT would be unnoticeable, is it advisable to use GL_REPEAT by specifying larger texture coordinates to repeat that texture, or should I just use a high-resolution texture containing all the dots I need? (In terms of GPU performance.)
You will always get better performance with a smaller texture. If the texture repeats itself and there are no architectural reasons to make it bigger, use the smaller version. Sampling a smaller texture accesses less memory so it is more likely that most (or all) accesses will fall in the GPU's texture cache.
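As an illustration, a small WebGL sketch of the repeat approach; `gl` and `dotTexture` are placeholders for an existing context and an already-uploaded, power-of-two dot texture:

```ts
declare const gl: WebGLRenderingContext;   // assumed context
declare const dotTexture: WebGLTexture;    // assumed small POT texture

// Tile the small texture instead of authoring a large one.
gl.bindTexture(gl.TEXTURE_2D, dotTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);

// UVs for a quad (two triangles) that repeat the texture 8x in each direction.
const uvs = new Float32Array([
  0, 0,  8, 0,  8, 8,
  0, 0,  8, 8,  0, 8,
]);
```

Note that in WebGL1, REPEAT itself requires the texture to be power-of-two in both dimensions.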

THREE.js How to increase texture quality

What are the possible and good ways/best practices/etc to improve texture quality in THREE.js?
I have a scene where I have planes (cards) with 512x512px textures. You can see how it looks in the images below. My problem is that the textures look blurred. I have tried changing the filters and the anisotropy value, and that helps, but only a little; the textures are still blurred. The only way I have found to make the textures look the way I want is to double the render size while keeping the canvas size the same. That is a bad option because of the performance cost, but I can't find another way to get good texture quality.
The best quality - render size x2
Normal quality - magFilter = minFilter = THREE.LinearMipMapLinearFilter /anisotropy = 16
Bad quality - no filters
I hope for any help, thanks in advance
You can hardly do better than trilinear filtering with 16x anisotropic filtering (and not all hardware can achieve 16x anisotropy).
However, you say your textures are 512x512, while (if your screenshots are real-size) it appears that:
they are rendered much smaller than 512x512, which means a lower mipmap level, generated by WebGL, is currently being used to render your cards;
your cards are rectangular while your textures are square. Depending on how you mapped the texture onto your shape, this can change the aspect ratio, so the sampler has to do more interpolation (more filtering, meaning more blur).
So what you can try to do is:
use a smaller base texture, 256x256 for example, which you author yourself with the best sharpness you can, so that no minification filtering is needed when WebGL samples the texture;
adapt the mesh texture coordinates to your texture, or vice versa, to avoid aspect-ratio changes during texture sampling.
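As an illustration, a THREE.js sketch of the filtering settings discussed above plus the aspect-ratio suggestion; the file name, renderer variable, and plane dimensions are assumptions for the example:

```ts
import * as THREE from 'three';

declare const renderer: THREE.WebGLRenderer; // assumed existing renderer

// Trilinear filtering plus the maximum anisotropy the hardware supports.
const texture = new THREE.TextureLoader().load('card.png'); // placeholder file
texture.minFilter = THREE.LinearMipMapLinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.anisotropy = renderer.capabilities.getMaxAnisotropy();

// Keep the plane's aspect ratio equal to the texture's aspect ratio
// so the sampler does not have to stretch the image (here: a square texture).
const card = new THREE.Mesh(
  new THREE.PlaneGeometry(1, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);
```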

Which is more efficient and on which mobile phones - Texture size

I'm in the final stages of optimizing my game. I use three different texture sizes depending on how big the phone's screen is (big for H > 1280, medium for 640 < H <= 1280, and small for H <= 640).
I want to know what's more efficient (FPSwise) for the medium screen size (640 < H <= 1280) a big 1024x1024 PNG texture or 2x512x512 PNG textures.
One texture means one texture change (setup) per render, so that's good, but 1024x1024 is big. The two 512 textures mean two texture changes per render, but on the other hand they are smaller. Which way is best?
POT textures do not need to be square; they just need to have power-of-two dimensions. So you can use one 1024x512 texture and avoid switching textures during the render.
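A sketch of that idea, written with WebGL calls (which mirror the ES 2.0 API); `imageA` and `imageB` stand in for the two 512x512 source images:

```ts
declare const gl: WebGLRenderingContext;  // assumed context
declare const imageA: HTMLImageElement;   // assumed 512x512 source
declare const imageB: HTMLImageElement;   // assumed 512x512 source

// Pack both images side by side into one 1024x512 texture.
const atlas = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, atlas);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texSubImage2D(gl.TEXTURE_2D, 0,   0, 0, gl.RGBA, gl.UNSIGNED_BYTE, imageA); // left half
gl.texSubImage2D(gl.TEXTURE_2D, 0, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, imageB); // right half

// A U coordinate that referenced imageB now maps into the right half of the atlas.
const remapU = (u: number) => 0.5 + u * 0.5;
```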

How do I properly group polygons into arrays?

I want to render a scene in OpenGL ES, but I have a problem.
Because there is no immediate mode in ES, and simulating immediate mode with single-polygon buffers is slow, I can't just switch textures and skip invisible polygons, so I have to group my polygons.
Here are characteristics of different polygons:
Diffuse texture (mipmapped, lots of them).
Lightmap texture (packed, up to 64 textures).
Visibility.
At first I thought to group the polygons only by visibility area, but I couldn't find a way to use texture index arrays.
So, how do I properly make buffers of polygons to render?
I will make visibility groups with texture subgroups. Quake uses 64 lightmaps of 128x128 each; I'm going to replace them with a single 1024x1024 lightmap atlas, since modern hardware supports textures of that size.
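For illustration, a small sketch of how per-polygon lightmap coordinates could be remapped into an 8x8 grid inside that single 1024x1024 atlas; `lightmapIndex`, `u`, and `v` are hypothetical per-polygon inputs:

```ts
// Remap (u, v) in [0,1] from original 128x128 lightmap `lightmapIndex` (0..63)
// into the corresponding 128x128 cell of a 1024x1024 atlas.
function atlasUV(lightmapIndex: number, u: number, v: number): [number, number] {
  const tilesPerRow = 8;                 // 1024 / 128
  const col = lightmapIndex % tilesPerRow;
  const row = Math.floor(lightmapIndex / tilesPerRow);
  const scale = 1 / tilesPerRow;         // each cell covers 1/8 of the atlas
  return [(col + u) * scale, (row + v) * scale];
}

// Example: the centre of lightmap 9 lands in the cell at column 1, row 1.
const [au, av] = atlasUV(9, 0.5, 0.5);   // [0.1875, 0.1875]
```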
