WebGL – Stretching non power of two textures or adding padding - three.js

I'm using images of varying sizes and aspect ratios, uploaded through a CMS, in Three.js / A-Frame. Naturally, these aren't power-of-two textures. It seems I have two options for handling them.
The first is to stretch the image to a power of two, as Three.js does, with the stretch undone when the texture is applied to the plane.
The second is to pad the image with extra pixels (which aren't displayed) and hide them with custom UVs.
Would one approach be better than the other? In terms of image quality, I'd imagine avoiding any stretching would be preferable.
EDIT:
For those interested, I couldn't spot a difference between the two approaches. Here's the code for altering the UVs to cut off the unused texture padding:
var uvX = 1;
var uvY = 0;
if (this.orientation === 'portrait') {
    // Portrait: the image fills the full height, so crop the unused width.
    uvX = 1.0 / (this.data.textureWidth / this.data.imageWidth);
} else {
    // Landscape: the image fills the full width, so crop the unused height.
    uvY = 1.0 - (this.data.imageHeight / this.data.textureHeight);
}
var uvs = new Float32Array([
    0,   uvY,
    uvX, uvY,
    uvX, 1,
    0,   1
]);
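For reference, here's a rough sketch of how the padded power-of-two texture itself could be built before the UVs above crop the border away (padToPowerOfTwo and sourceImage are illustrative names; with three.js's default flipY, drawing at the top-left should line up with the uvY offset used above):
function padToPowerOfTwo(image) {
    var pot = function (v) { return Math.pow(2, Math.ceil(Math.log2(v))); };
    var canvas = document.createElement('canvas');
    canvas.width = pot(image.width);
    canvas.height = pot(image.height);
    // Draw at the top-left; the remaining pixels are the padding the UVs skip.
    canvas.getContext('2d').drawImage(image, 0, 0);
    return canvas;
}

var texture = new THREE.Texture(padToPowerOfTwo(sourceImage));
texture.needsUpdate = true;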
EDIT 2:
I hadn't set the texture up properly.
Side by side, the non-stretched (padded) image does look better up close – but not a huge difference:
Left: Stretched to fit the power of two texture. Right: Non-stretched with padding

Custom UVs can be a bit of a pain (especially when users can modify the texturing), and padding can break tiling when the texture repeats (unless you take very special care of it).
Just stretch the images (or let Three.js do it for you). That's what most engines (like Unity) do anyway. There might be a tiny bit of visual degradation if the stretch algorithm and texel sampling don't match 100%, but it will be fine.
The general idea is that if your users really cared about sampling quality at that level, they'd carefully handcraft POT textures anyway. Usually, they just want to throw texture images at their models and have them look about right... and they will.
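If you do want to handle the stretch yourself instead of letting three.js resize the upload, a minimal sketch under the same assumptions (stretchToPowerOfTwo is an illustrative name; the plane's original aspect ratio undoes the distortion at display time):
function stretchToPowerOfTwo(image) {
    var pot = function (v) { return Math.pow(2, Math.round(Math.log2(v))); };
    var canvas = document.createElement('canvas');
    canvas.width = pot(image.width);
    canvas.height = pot(image.height);
    // Scale the whole image to fill the POT canvas.
    canvas.getContext('2d').drawImage(image, 0, 0, canvas.width, canvas.height);
    return canvas;
}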

Related

THREE.Sprite size in px with sizeAttenuation false

I'm trying to scale sprites to a size defined in px, regardless of camera FOV and so on. I have sizeAttenuation set to false, as I don't want them scaled based on distance from the camera, but I'm struggling with setting the scale. I don't really know the conversion formula, and when I hardcoded a scale that looks right on one device, it's wrong on another. Any advice on how to get sprites sized correctly across multiple devices? Thanks
Corrected answer:
Sprite size is measured in world units. Converting world units to pixel units may take a lot of calculations because it varies based on your camera's FOV, distance from camera, window height, pixel density, and so on...
To use pixel-based units, I recommend switching from THREE.Sprite to THREE.Points. Its material, THREE.PointsMaterial, has a size property that's measured in pixels when sizeAttenuation is set to false. Just keep in mind that it has a maximum size limit based on the device's hardware, defined by gl.ALIASED_POINT_SIZE_RANGE.
My original answer continues below:
However, "1 px" is a subjective measurement nowadays because if you use renderer.setPixelRatio(window.devicePixelRatio); then you'll get different sprite sizes on different devices. For instance, MacBooks have a pixel ratio of 2 and above, some cell phones have pixel ratio of 3, and desktop monitors are usually at a ratio of 1. This can be avoided by not using setPixelRatio, or if you use it, you'll have to use a multiplication:
const s = 5;
points.material.size = s * window.devicePixelRatio; // size lives on the PointsMaterial
Another thing to keep in mind is that THREE.Points are sized in pixels, whereas meshes are sized in world units. So when you shrink your browser window vertically, the point size will remain the same, but the meshes will scale down to fit in the viewport. This means that a 5px point will take up more real estate in a small window than it would on a large monitor. If this is a problem, make sure you factor window.innerHeight into your point-size calculation.
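For completeness, a minimal sketch of the THREE.Points route described above (pointTexture and scene are assumed to already exist; older three.js versions spell setAttribute as addAttribute):
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0], 3));

const material = new THREE.PointsMaterial({
    map: pointTexture,                 // your existing sprite texture
    size: 5 * window.devicePixelRatio, // in pixels, compensating for setPixelRatio
    sizeAttenuation: false,
    transparent: true
});

scene.add(new THREE.Points(geometry, material));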

External elements slowing down canvas

I am developing a game using several canvases (3) on top of one another. I am close to finishing the game and I haven't yet optimized the performance.
Regardless, my main concern is this: the game has performed pretty well so far, but now that it's close to finished I am building a simple web page around the canvas to frame the game. I'm talking about just the game title and a few links here and there, yet suddenly the game is choppy and slow! If I remove those elements, everything is smooth again.
The culprits are:
The game title above the canvas (styled with text-shadow).
Four buttons below the canvas that redirect to other sites and credits.
Is it possible that these few static elements interfere with the rendering of the game?
Thank you.
Anything with shadows, rounded corners or expensive effects such as blur costs a lot to render.
Modern browsers try to optimize this in various ways, but there are special cases they can't get around just like that (updated render engines using 3D hardware may help in the future).
Shadows are closely related to blurring and need to be composited per frame, since the background, shadow color, blur range etc. could change. Rounded corners force the browser to create an alpha mask instead of doing just a rectangular clip. The browser may cache some of these operations, but they add up in the end.
Text Shadow
A workaround is to "cache" the shadowed text as an image. It can be a pre-made image from Photoshop or it could be made dynamically using a canvas element. Then display this instead of the text+shadow.
Example
var ctx = c.getContext("2d"),
txt = "SHADOW HEADER";
// we need to do this twice as when we set width of canvas, state is cleared
ctx.font = "bold 28px sans-serif";
c.width = ctx.measureText(txt).width + 20; // add space for shadow
c.height = 50; // estimated
// and again...
ctx.font = "bold 28px sans-serif";
ctx.textBaseline = "top";
ctx.textAlign = "left";
ctx.shadowBlur = 9;
ctx.shadowOffsetX = 9;
ctx.shadowOffsetY = 9;
ctx.shadowColor = "rgba(0,0,0,0.8)";
ctx.fillStyle = "#aaa";
ctx.fillText(txt, 0, 0);
body {background:#7C3939}
<canvas id=c></canvas>
The canvas element can now be placed as needed. In addition, you could convert the canvas to an image and use that instead, avoiding the extra canvas overhead.
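A sketch of that last idea, assuming a placeholder header element to attach the snapshot to:
var img = new Image();
img.src = c.toDataURL("image/png");                  // snapshot the shadowed text
document.getElementById("header").appendChild(img); // "header" is a made-up id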
Rounded Corners
Rounded corners on an element are also expensive, and there is no easy way around this - the corners need to be cut one way or another, and the question is which method is fastest.
Let the browser do it using CSS.
Overlay the element with the outer corners covered in the same color as the background - clunky, but it can be fast since no clipping is needed. However, more data needs to be composited.
Use a mask in canvas directly via globalCompositeOperation (see the sketch below). Chances are this would be the slowest method; performance tests are needed for this scenario to find out which option works best overall.
Make a compromise and remove rounded corners altogether.
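A sketch of the composite-operation option mentioned above (gameCanvas is a placeholder for your top-most canvas; note the mask has to be re-applied after every redraw, which is exactly why it tends to be slow):
var gctx = gameCanvas.getContext("2d");
gctx.save();
gctx.globalCompositeOperation = "destination-in"; // keep only the masked pixels
gctx.beginPath();
// roundRect is available in current browsers; older ones need an arcTo path
gctx.roundRect(0, 0, gameCanvas.width, gameCanvas.height, 16);
gctx.fill();
gctx.restore();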
Links
These too could be replaced by clickable images. It's a bit more tedious, but they could also be generated dynamically using a canvas, allowing the text to change ad hoc.
CSS
I would also recommend experimenting with position: fixed; for some of the elements. When fixed is used, some browsers render that element separately (giving it its own bitmap). This may be more efficient in some cases.
But do make some performance tests to see what combination is the best for your scenario.

How to convert to a HDR renderer?

I am in the process of converting my webgl deferred renderer to one that uses high dynamic range. I've read a lot about the subject from various sources online and I have a few questions that I hope could be clarified. Most of the reading I have done covers HDR image rendering, but my questions pertain to how a renderer might have to change to support HDR.
As I understand it, HDR is essentially about capturing higher light ranges so that we can see detail in both extremely bright and extremely dark scenes. Typically in games we use an intensity of 1 to represent white light and 0 for black. But in HDR / the real world, the ranges are far more varied: a sun in the engine might have an intensity of 10000, compared to 10 for a light bulb.
To cope with these larger ranges you have to convert your renderer to use floating point render targets (or ideally half floats as they use less memory) for its light passes.
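In WebGL terms, I assume that conversion looks roughly like this (WebGL1 sketch with placeholder width/height; WebGL2 would instead use an RGBA16F texture together with EXT_color_buffer_float):
var gl = canvas.getContext("webgl");
var halfFloat  = gl.getExtension("OES_texture_half_float");
var renderable = gl.getExtension("EXT_color_buffer_half_float");
if (!halfFloat || !renderable) { /* fall back to LDR or RGBA8 encoding */ }

var lightTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, lightTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, halfFloat.HALF_FLOAT_OES, null);

var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, lightTex, 0);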
My first question is about the lighting. Besides the floating-point render targets, does this simply mean that a light which previously represented the sun with an intensity of 1 could/should now be represented as 10000? I.e.
float spec = calcSpec();
vec4 diff = texture2D( sampler, uv );
vec4 color = diff * max(0.0, dot( N, L )) * lightIntensity + spec; //Where lightIntensity is now 10000?
return color;
Are there any other fundamental changes to the lighting system (other than float textures and higher ranges)?
Following on from this, we now have a float render target that has additively accumulated all the light values (in the higher ranges described above). At this point I might do some post-processing on the render target, with things like bloom. Once complete, it needs to be tone-mapped before it can be sent to the screen, because the light ranges must be converted back to the range of our monitors.
So for the tone-mapping phase, I would presumably use a post-process pass and a tone-mapping formula to convert the HDR lighting to a low dynamic range. The technique I chose was John Hable's from Uncharted 2:
const float A = 0.15;
const float B = 0.50;
const float C = 0.10;
const float D = 0.20;
const float E = 0.02;
const float F = 0.30;
const float W = 11.2;
vec3 Uncharted2Tonemap(vec3 x)
{
return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}
... // in main pixel shader
vec4 texColor = texture2D(lightSample, texCoord );
texColor *= 16.0; // Hardcoded Exposure Adjustment
float ExposureBias = 2.0;
vec3 curr = Uncharted2Tonemap( ExposureBias * texColor.xyz );
vec3 whiteScale = 1.0 / Uncharted2Tonemap(vec3(W));
vec3 color = curr * whiteScale;
// Gamma correction
color.x = pow( color.x, 1.0 /2.2 );
color.y = pow( color.y, 1.0 /2.2 );
color.z = pow( color.z, 1.0 /2.2 );
return vec4( color, 1.0 );
Tone mapping article
My second question is related to this tone-mapping phase. Is there much more to it than this technique? Is using higher light intensities and tweaking the exposure all that's required to be considered HDR, or is there more to it? I understand that some games have auto-exposure functionality to figure out the average luminance, but at the most basic level is this needed? Presumably you can just tweak the exposure manually?
Something else that's discussed in a lot of the documents is gamma correction. Gamma correction seems to be done in two places: first when textures are read, and then again when they are sent to the screen. When textures are read, they simply need to be changed with something like this:
vec4 diff = pow( texture2D( sampler, uv), 2.2 );
Then in the above tone mapping technique the output correction is done by:
pow(color,1/2.2);
In his presentation John Hable says that not all textures must be corrected like this. Diffuse textures must be, but things like normal maps don't necessarily have to be.
My third question is about this gamma correction. Is it necessary for the setup to work? Does it mean I have to change my engine everywhere diffuse maps are read?
That is my current understanding of what's involved in this conversion. Is it correct, and is there anything I have misunderstood or got wrong?
Light Calculation / Accumulation
Yes, you are generally able to keep your lighting calculation the same, and increasing, say, the intensity of directional lights above 1.0 is certainly fine. Another way the value can exceed one is simply by adding the contributions of several lights together.
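To illustrate with made-up uniform names and two hard-coded lights: the accumulated value is simply allowed to exceed 1.0 in the float target and is only brought back into range later by the tone mapper.
const lightAccumFS = `
  precision highp float;
  uniform vec3 sunDir;    // assumed normalized, in view space
  uniform vec3 bulbDir;
  varying vec3 vNormal;

  void main() {
    vec3 n = normalize(vNormal);
    // Intensities well above 1.0 are fine once the target is half-float.
    vec3 sun  = vec3(1.0, 0.96, 0.9) * 10000.0 * max(dot(n, sunDir),  0.0);
    vec3 bulb = vec3(1.0, 0.8,  0.6) * 10.0    * max(dot(n, bulbDir), 0.0);
    gl_FragColor = vec4(sun + bulb, 1.0);   // no clamp; tone-map later
  }
`;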
Tone Mapping
You certainly understood the concept. There are quite a few different ways to do the actual mapping, from the simple/naive one, color = clamp(hdrColor * exposure, 0.0, 1.0), to the more sophisticated (and better) one you posted.
Adaptive tone mapping can quickly become more complicated. Again, the naive way is to simply normalize colors by dividing by the brightest pixel, which will certainly make it hard or impossible to perceive details in the darker parts of the image. You can also average the brightness and clamp, or keep whole histograms of the last several frames and use those in your mapping.
Another method is to normalize each pixel using only the values of the neighbouring pixels, i.e. "local tone mapping". This is not usually done in real-time rendering.
While it may sound complicated, the formula you posted will generate very good results, so it is fine to go with it. Once you have a working implementation, feel free to experiment. There are also great papers available :)
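For reference, a middle ground between the naive clamp and the filmic curve you posted is Reinhard's operator with a manual exposure; a sketch reusing the sampler/uniform names from your shader:
const simpleTonemapFS = `
  precision mediump float;
  uniform sampler2D lightSample;
  uniform float exposure;        // hand-tweaked, e.g. somewhere around 1.0 - 16.0
  varying vec2 texCoord;

  void main() {
    vec3 hdr = texture2D(lightSample, texCoord).rgb * exposure;
    vec3 ldr = hdr / (hdr + vec3(1.0));                   // Reinhard curve
    gl_FragColor = vec4(pow(ldr, vec3(1.0 / 2.2)), 1.0);  // gamma out
  }
`;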
Gamma
Gamma correction is important even if you do not use HDR rendering. But don't worry, it is not hard.
The most important thing is to always be aware of which color space you are working in. Just like a number without a unit, a color without a color space seldom makes sense. We like to work in linear (RGB) color space in our shaders, meaning a color with twice the RGB values should be twice as bright. However, this is not how monitors work.
Cameras and photo-editing software often hide all this from us and simply save pictures in the format the monitor likes (sRGB).
sRGB has an additional advantage: compression. We usually save images with 8/16/32 bits per channel. If you save pictures in linear space and the image has small but very bright spots, those 8/16/32 bits may not be precise enough to preserve brightness differences in the darker parts, and when you display them again (gamma-corrected, of course) details may be lost in the dark.
Many cameras and programs let you change the color space images are saved in, even if the setting is sometimes a bit hidden. So if you tell your artists to save all images in linear (RGB) color space, you do not need to gamma-correct images at all. But since most programs prefer sRGB and sRGB offers better compression, it is generally a good idea to save images that describe color in sRGB; those therefore need to be gamma-corrected. Images that describe values/data, like normal maps or bump maps, are usually saved in linear color space (if your normal [1.0, 0.5, 0.0] does not actually point at a 45-degree angle, everybody will be confused; the compression advantage is also nil for non-colors).
If you want to use an sRGB texture, just tell OpenGL and it will convert it to linear color space for you, without a performance hit.
void glTexImage2D( GLenum target,
GLint level,
GLint internalFormat, // Use **GL_SRGB** here
GLsizei width,
GLsizei height,
GLint border,
GLenum format,
GLenum type,
const GLvoid * data);
Oh and of course you have to gamma-correct everything you send to your display (so change from linear to sRGB or gamma 2.2). You can do this in your tone mapping or another post-process step. Or let OpenGL do it for you; see glEnable(GL_FRAMEBUFFER_SRGB)
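Since your renderer targets WebGL rather than desktop GL: WebGL1 exposes the same behaviour through the EXT_sRGB extension, and WebGL2 supports the sized SRGB8_ALPHA8 internal format directly. There is no GL_FRAMEBUFFER_SRGB in WebGL, so the output gamma step stays in your tone-mapping shader. A rough WebGL2 sketch (canvas and image are placeholders):
const gl = canvas.getContext("webgl2");
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// Stored as sRGB; sampling in the shader returns linear values.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.SRGB8_ALPHA8, gl.RGBA, gl.UNSIGNED_BYTE, image);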

Drawing an outline around the non-transparent part of a texture

Tinkering with the feature set for a new game, I'm considering including a PVP gameplay mode. Nothing like NI after kicking the AI to smithereens :). iSomething only; willing to restrict to modern devices.
One option I would consider for differentiating each player's characters on the map would be to add, on the fly, a 2-point outline in a different colour to each player's characters (other options exist, but they carry weight considerations for the resources).
I have not found on here (nor elsewhere, for that matter) any very useful answers to this kind of requirement, nor am I a GL expert by any stretch. If any of you could point me in the direction of some tutorials, I would greatly appreciate it. TIA
I wasn't recommending that you necessarily put the outlines into separate textures. What I was imagining was that you have a sprite with a region that is all alpha = 1.0, surrounded by a transparent region of alpha = 0.0.
One idea could be to draw a ring a couple of pixels wide around the opaque region with something like alpha = 0.5.
If you then want to draw your sprites without a border, you can just alpha test for alpha > 0.75, and the border will not appear. If you want to draw a border, you can alpha test for alpha > 0.25, and use a fragment shader to replace all pixels with 0.4 < alpha < 0.6 with a colored border of your choice.
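A sketch of the fragment logic just described, with the same thresholds (texture, uniform and varying names are made up):
const outlineFS = `
  precision mediump float;
  uniform sampler2D spriteTex;
  uniform vec3 outlineColor;   // per-player colour
  uniform bool showOutline;
  varying vec2 vUv;

  void main() {
    vec4 texel = texture2D(spriteTex, vUv);
    if (showOutline) {
      if (texel.a < 0.25) discard;          // fully transparent padding
      if (texel.a > 0.4 && texel.a < 0.6)   // the alpha = 0.5 ring
        texel = vec4(outlineColor, 1.0);
    } else {
      if (texel.a < 0.75) discard;          // hides the ring as well
    }
    gl_FragColor = texel;
  }
`;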
This becomes more difficult if your images use partial transparency, though in that case you could maybe block off the range from 0.0 to 0.1 for alpha metadata like the border.
This would not require any additional textures to be used or increase the size of any of the existing resources.

How to draw "glowing" line in OpenGL ES

Could you please share some code (any language) on how to draw a textured line (one that is smooth or has a glow-like effect; a blue line through four points) consisting of many points, like in the attached image, using OpenGL ES 1.0.
What I was trying was texturing a GL_LINE_STRIP with a 16x16 or 1x16 pixel texture, but without any success.
In ES 1.0 you can use render-to-texture creatively to achieve the effect that you want, but it's likely to be costly in terms of fill rate. Gamasutra has an (old) article on how glow was achieved in the Tron 2.0 game — you'll want to pay particular attention to the DirectX 7.0 comments since that was, like ES 1.0, a fixed pipeline. In your case you probably want just to display the Gaussian image rather than mixing it with an original since the glow is all you're interested in.
My summary of the article is:
render all lines to a texture as normal, solid hairline lines. Call this texture the source texture.
apply a linear horizontal blur to that by taking the source texture you just rendered and drawing it, say, five times to another texture, which I'll call the horizontal blur texture. Draw one copy at an offset of x = 0 with opacity 1.0, draw two further copies — one at x = +1 and one at x = -1 — with opacity 0.63 and a final two copies — one at x = +2 and one at x = -2 with an opacity of 0.17. Use additive blending.
apply a linear vertical blur to that by taking the horizontal blur texture and doing essentially the same steps but with y offsets instead of x offsets.
Those opacity numbers were derived from the 2d Gaussian kernel on this page. Play around with them to affect the fall off towards the outside of your lines.
Note the extra costs involved here: you're ostensibly adding ten full-screen textured draws plus some framebuffer swapping. You can probably get away with fewer draws by using multitexturing. A shader approach would likely do the horizontal and vertical steps in a single pass.
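If shaders are available (ES 2.0 and later), the five additive draws of the horizontal pass collapse into a handful of texture fetches; a rough sketch with the weights from above and made-up uniform names (the vertical pass is the same with a y offset):
const horizontalBlurFS = `
  precision mediump float;
  uniform sampler2D sourceTex;   // the solid hairline render described above
  uniform float texelWidth;      // 1.0 / texture width
  varying vec2 vUv;

  void main() {
    vec3 sum =
        texture2D(sourceTex, vUv).rgb                               * 1.00
      + texture2D(sourceTex, vUv + vec2(texelWidth, 0.0)).rgb       * 0.63
      + texture2D(sourceTex, vUv - vec2(texelWidth, 0.0)).rgb       * 0.63
      + texture2D(sourceTex, vUv + vec2(2.0 * texelWidth, 0.0)).rgb * 0.17
      + texture2D(sourceTex, vUv - vec2(2.0 * texelWidth, 0.0)).rgb * 0.17;
    gl_FragColor = vec4(sum, 1.0);
  }
`;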
