THREE.Sprite size in px with sizeAttenuation false - three.js

I'm trying to scale sprites so their size is defined in pixels, regardless of camera FOV and so on. I have sizeAttenuation set to false, as I don't want them scaled based on distance from the camera, but I'm struggling with setting the scale. I don't know the conversion formula, and when I hardcoded the scale with a number that looked right on one device, it was wrong on another. Any advice on how to get the sprites sized correctly across multiple devices? Thanks

Corrected answer:
Sprite size is measured in world units. Converting world units to pixel units can take a lot of calculation, because it varies with your camera's FOV, the distance from the camera, the window height, the pixel density, and so on...
To use pixel-based units, I recommend switching from THREE.Sprite to THREE.Points. Its material, THREE.PointsMaterial, has a size property that is measured in pixels when sizeAttenuation is set to false. Just keep in mind that there is a maximum size limitation based on the device's hardware, defined by gl.ALIASED_POINT_SIZE_RANGE.
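As a quick illustration (not part of the original answer), here is a minimal sketch assuming you already have a scene and renderer set up; the 20 px size, the single position, and the color are placeholder values:
// Minimal sketch: points sized in pixels because sizeAttenuation is false.
const pointsGeometry = new THREE.BufferGeometry();
pointsGeometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0], 3));
const pointsMaterial = new THREE.PointsMaterial({
  size: 20,                // intended size in pixels
  sizeAttenuation: false,  // keep the same size regardless of distance to the camera
  color: 0xff0000
});
const points = new THREE.Points(pointsGeometry, pointsMaterial);
scene.add(points);
const gl = renderer.getContext();
console.log(gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE)); // hardware point-size limit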
My original answer continues below:
However, "1 px" is a subjective measurement nowadays because if you use renderer.setPixelRatio(window.devicePixelRatio); then you'll get different sprite sizes on different devices. For instance, MacBooks have a pixel ratio of 2 and above, some cell phones have pixel ratio of 3, and desktop monitors are usually at a ratio of 1. This can be avoided by not using setPixelRatio, or if you use it, you'll have to use a multiplication:
const s = 5;
points.material.size = s * window.devicePixelRatio; // size lives on the THREE.PointsMaterial
Another thing to keep in mind is that THREE.Points are sized in pixels, whereas meshes are sized in world units. So sometimes when you shrink your browser window vertically, the Point size will remain the same, but the meshes will scale down to fit in the viewport. This means that a 5px Point will take up more real estate in a small window than it would on a large monitor. If this is the problem, make sure you use the window.innerHeight value when calculating the Point size, as sketched below.
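For example, a hedged sketch of that last point (the 1080 reference height is an assumption, and points is the object from the snippet above):
// Scale the point size with the window height so the points shrink along with the meshes.
const REFERENCE_HEIGHT = 1080; // assumed design height in px
const baseSize = 5;            // desired size in px at the reference height
function updatePointSize() {
  points.material.size = baseSize * (window.innerHeight / REFERENCE_HEIGHT) * window.devicePixelRatio;
}
window.addEventListener('resize', updatePointSize);
updatePointSize();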

Related

Processing 3 Box2D Velocity Not Scaling With Resolution

I'm currently trying to make my simple game scale with the resolution. I've noticed, though, that when I change the resolution not everything works out. For instance, in the shift from 1280x720 to 1920x1080 the jumping distance changes slightly. The main problem I've noticed is when I fire a projectile with a velocity: on lower resolutions it seems to travel across the screen significantly faster, and I can't understand why, as it should scale down with the size of the window. Here is a snippet of the code that fires a projectile:
m = new Box(l.pos.x+Width/32*direction2, l.pos.y-Height/288, Width/64, Height/72, true, 4);
m.body.setGravityScale(0f);
boxes.add(m);
m.body.setLinearVelocity(new Vec2(Width*direction2, 0));
In this scenario m is a box I'm creating, with the constructor new Box(spawn x coordinate, spawn y coordinate, width of box, height of box, is the box moveable, type of box). l.pos.x and l.pos.y are the positions I'm firing the box from. The Height and Width variables are the size of the current window in pixels, updated in void draw(), and direction2 is either 1 or -1 depending on the direction the character is facing.
It's hard to tell how the rest of the code affects the simulation without seeing more of it.
Ideally you want to keep the physics-related properties independent of the Processing sketch dimensions, but maintain the proportions, so you can simply scale up the rendering of the same-sized world. If you have mouse interaction, the coordinates would scale as well, but other than positions, the rest of the physical properties should stay the same.
From what I can gather from your code, if Width is the sketch's width, you should decouple it from the linear velocity:
m.body.setLinearVelocity(new Vec2(Width*direction2, 0));
Instead, use a constant value that doesn't change with the sketch dimensions, as sketched below.
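To illustrate the principle outside of Processing, here is a tiny plain-JavaScript sketch (the world size, launch speed, and helper name are made up for the example):
// Physics stays in fixed world units; only the rendering scale depends on the canvas.
const WORLD_WIDTH = 40;   // metres, never changes with resolution
const LAUNCH_SPEED = 25;  // metres per second, identical on every screen size
function worldToScreenX(xWorld, canvasWidthPx) {
  // the canvas size only ever appears on the rendering side
  return xWorld * (canvasWidthPx / WORLD_WIDTH);
}
// e.g. the projectile always gets the same world-space velocity:
// m.body.setLinearVelocity(new Vec2(LAUNCH_SPEED * direction2, 0));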

How to set the scale of a THREE.Sprite to a width in pixel units?

I have a viewer with a perspective camera. I know the size of the viewer and the pixel ratio. I have several sprites in my scene that use the .sizeAttenuation property to never change size.
With all of this, I want to be able to set the scale of the sprite instances to, for example, be 20px x 20px. Is that possible? Is there a known conversion from pixels to sprite scale?
What I am experiencing now is that the sprites will change size depending on the viewer size. I wish to know how to resize them when the viewer changes so they are consistently the same size.
thanks!
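No answer is included here, but as a heavily hedged sketch of one possible conversion (based on the fact that, with sizeAttenuation set to false, a sprite's on-screen size depends only on its scale, the camera's vertical FOV, and the viewer height; the helper name is mine, so verify the factor against your three.js version):
// Sketch: scale a sizeAttenuation:false sprite to roughly targetPx pixels on screen,
// assuming a PerspectiveCamera and viewerHeightPx = the canvas height in CSS pixels.
function setSpritePixelScale(sprite, camera, viewerHeightPx, targetPx) {
  const fovFactor = 2 * Math.tan(THREE.MathUtils.degToRad(camera.fov) / 2);
  const scale = (targetPx / viewerHeightPx) * fovFactor;
  sprite.scale.set(scale, scale, 1);
}
// Re-run this from your resize handler so the sprites stay 20x20 px when the viewer changes.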

Why does fps drop when increasing the scale of objects in three js

I have a scene with a single camera and one PlaneBufferGeometry
If I make this plane size 1x1 I get 60fps
If I make this plane size 1000x1000 I get <20fps
Why does this happen? I am drawing the same number of vertices to the screen.
Here is a fiddle showing the problem
Just change the definition of size between 1 and 1000 to observe the problem.
var size = 10000;
//size = 1;
var geometry = new THREE.PlaneBufferGeometry(size, size);
I am adding 50 identical planes in this example. There isn't a significant fps hit with only one plane.
It's definitely normal. A larger plane covers more surface on the screen, and thus more pixels.
More fragments are emitted by the rasterisation process. For each one, the GPU checks whether it passes the depth test and/or the stencil test. If so, it invokes the fragment shader for that pixel.
Try zooming into your 1x1 plane until it covers the whole screen. Your FPS will drop as well.
#pleup has a good point there; to extend on that a little bit: even a low-end GPU will have absolutely no problem overdrawing (painting the same pixel multiple times) several times (I'd say something like 4 to 8 times) at fullscreen and still keep it up at 60 FPS. This number is likely a bit lower for WebGL due to the compositing with the DOM and browser UI, but it's still multiple times for sure.
Now what is happening is this: you are in fact creating 50 planes, not just one, all of them the same size and in the same place. No idea why, but that's irrelevant here. Since they are all in the same place, every single pixel needs to be drawn 50 times, and in the worst case that is 50 times the full screen area.
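As a rough back-of-the-envelope illustration (the 1920x1080 screen size is an assumption, not from the answers):
1920 x 1080 ≈ 2.07 million pixels for one full-screen layer
50 overlapping full-screen planes ≈ 50 x 2.07M ≈ 104 million fragment-shader invocations per frame
at 60 FPS that is over 6 billion invocations per second, far beyond the 4-8x overdraw budget mentioned above.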

Three.js zoom to fit width of objects (ignoring height)

I have a set of block objects, and I'd like to set the perspective camera so that their entire width is fully visible (the height will be too big - that's OK, we're going to pan up and down).
I've seen there are a number of questions close to this, such as:
Adjusting camera for visible Three.js shape
Three.js - Width of view
THREE.JS: Get object size with respect to camera and object position on screen
How to Fit Camera to Object
ThreeJS. How to implement ZoomALL and make sure a given box fills the canvas area?
However, none of them seem to quite cover everything I'm looking for:
I'm not interested in the height, only the width (they won't be the same - the size will be dynamic but I can presume the height will be larger than the width)
The camera.position.z (or the FOV I guess) is the unknown, so I'm trying to get the equations round the right way to solve that
(I'm not great with 3D maths. Thanks in advance!)
I was able to simplify this problem a lot, in my case...
Since I knew the overall size of the objects, I was able to simply come up with a suitable distance through changing the camera's z position a few times and seeing what looked best.
My real problem was that the same z position gave different widths, relative to the screen width, on different sized screens - due to the different aspect ratios.
So all I did was divide my distance value by camera.aspect. Now the blocks take up the same proportion of the screen's width on all screen sizes :-)
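For anyone who wants the full equation instead of a hand-tuned distance, a hedged sketch (the function name is mine; it assumes the camera looks straight at the blocks):
// Distance at which a given world-space width exactly fills the view.
// camera.fov is the vertical FOV in degrees, so camera.aspect converts it to a width.
function distanceToFitWidth(camera, worldWidth) {
  const vFov = THREE.MathUtils.degToRad(camera.fov);
  const visibleWidthAtUnitDistance = 2 * Math.tan(vFov / 2) * camera.aspect;
  return worldWidth / visibleWidthAtUnitDistance;
}
// camera.position.z = blocksCenterZ + distanceToFitWidth(camera, blocksWidth);
This is also why dividing a hand-tuned distance by camera.aspect works: the distance needed to show a fixed width is inversely proportional to the aspect ratio.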

Non predefined multiple light sources in OpenGL ES 2.0

There is a great article about multiple light sources in GLSL
http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Multiple_Lights
But the light0 and light1 parameters are hard-coded in the shader. What if I have to draw flare gun shots, where every flare has its own position and color and must illuminate its surroundings? How do I write the other objects' shaders to deal with the unknown (well, there is a limit on the maximum number of flares on screen) positions and colors of the flares? For example, if there can be at most 8 flares on screen, must I pass 8*2 uniforms even when some of them don't exist at the time?
Or imagine you are making a level editor: the user can place lamps, so how will the other objects "know" about the new light source and render it once a new lamp has been added?
I think there must be a clever solution, but I can't find one.
Lighting equations usually rely on additive colour. So the output is the colour of light one plus the colour of light two plus the colour of light three, etc.
One of the in-framebuffer blending modes offered by OpenGL is additive blending. So the colour output of anything new that you draw will be added to whatever is already in the buffer.
The most naive solution is therefore to write your shader to do exactly one light. If you have multiple lights, draw the scene that many times, each time with a different nominated light. It's an example of multipass rendering.
Better solutions involve writing shaders to do two, four, eight or whatever lights at once, doing, say, 15 lights as an 8-light draw then a 4-light draw then a 2-light draw then a 1-light draw, and including only geometry within reach of each light when you do that pass. Which tends to mean finding intelligent ways to group lights by locality.
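A minimal WebGL-flavoured sketch of the multipass idea (WebGL 1 mirrors the ES 2.0 API; drawScene, the lights array, and the uniform locations are placeholders for your own code):
// One shader handles exactly one light; the scene is drawn once per light and the
// passes are summed with additive blending.
gl.enable(gl.DEPTH_TEST);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
lights.forEach(function (light, i) {
  if (i === 1) {
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.ONE, gl.ONE);  // add this pass onto what is already in the framebuffer
    gl.depthFunc(gl.LEQUAL);       // let the repeated, identical geometry pass the depth test
  }
  gl.uniform3fv(lightPositionLocation, light.position);
  gl.uniform3fv(lightColorLocation, light.color);
  drawScene();                     // issues the actual drawArrays/drawElements calls
});
gl.disable(gl.BLEND);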
EDIT: with a little more thought, I should add that there's another option in deferred shading, though it's not completely useful on most GL ES devices at the moment due to the limited options for output buffers.
Suppose theoretically you could render your geometry exactly once and store whatever you wanted per pixel. So you wouldn't just output a colour, you'd output, say, a position in 3d space, a normal, a diffuse colour, a specular colour and a specular exponent. Those would then all be in a per-pixel buffer.
You could then render each light by (i) working out the maximum possible space it can occupy when projected onto the screen (so, a 2d rectangle that relates directly to pixels); and (ii) rendering the light as a single quad of that size, for each pixel reading the relevant values from the buffer you just set up and outputting an appropriately lit colour.
Then you'd do all the actual geometry in your scene only exactly once, and each additional light would cost at most a single, full-screen quad.
In practice you can't really do that, because the output buffers you tend to be able to use in ES provide too little storage. But what you can usually do is render to a 32-bit colour buffer with an attached depth buffer. So you can store depth in the depth buffer and work out the world (x, y, z) from that plus the [uniform] position of the camera in the light shader. You could store 8-bit versions of normal x and y in the colour buffer, spending 16 bits, and work out normal z in the shader because you know the normal is always of unit length. Then, to pick a concrete example at random, you could store a 16-bit version of the diffuse colour in the remaining space, possibly in YCrCb with extra storage for Y.
The main disadvantage is that hardware antialiasing then doesn't work, for much the same reasons as with transparency and depth buffers. But if you get to the point where you save dramatically on lighting, it might still make sense to do manual antialiasing by rendering a large version of the scene and then scaling it down in a final pass.
