I am trying to understand how the "size" attribute in THREE.PointCloudMaterial translates to the size of its points on the screen.
With an orthographic camera set to (-1, 1, 1, -1) and size = 1, the points do not fill half the screen, so apparently this parameter does not refer to camera space. Nor does it refer to pixels; at size = 1, the points are much larger than 1 pixel.
Furthermore, if I resize the browser window's height, the points scale in size, while if I resize the window's width, the points do not scale at all (!?!)
Any clarification on how "size" gets translated to screen or camera space would be greatly appreciated.
In case it is of interest why I need to know this: I am trying to overlay a PointCloud with a THREE.PointCloudMaterial (with which I can use a texture map) over a second PointCloud that uses a ShaderMaterial (where I can send the size parameter straight to gl_PointSize and know exactly how big each point will be). I am having trouble matching up the point sizes in the two clouds.
Thanks!
-mike
Here, at line 368, is where the relevant code starts.
It uses gl_PointSize to rasterize a vertex. Two options are present: one with attenuation, the other without. Without attenuation, the point is rasterized at a fixed size in pixels. With it, the size is divided by the depth, which creates a perspective effect. This happens in the vertex shader.
Looking at the code, it seems that the size would be expressed in world units in the case of attenuation, and as a fixed pixel size if not.
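For reference, here is a paraphrased sketch of the two branches (not the exact three.js source; size and scale stand in for the material and renderer uniforms). As far as I can tell, the renderer sets scale from the canvas height, which would also explain why resizing the window's height rescales the points while resizing its width does not.

const pointVertexShaderSketch = `
  uniform float size;
  uniform float scale;   // set by the renderer, roughly half the canvas height in pixels
  void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    #ifdef USE_SIZEATTENUATION
      gl_PointSize = size * ( scale / -mvPosition.z );  // divided by depth: perspective shrink
    #else
      gl_PointSize = size;                              // fixed size in pixels
    #endif
    gl_Position = projectionMatrix * mvPosition;
  }
`;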
Related
I'm trying to scale sprites to have a size defined in pixels, regardless of camera FOV and so on. I have sizeAttenuation set to false, as I don't want them scaled based on distance from the camera, but I'm struggling with setting the scale. I don't really know the conversion formula, and when I hardcoded the scale with some number that looks right on one device, it's wrong on another. Any advice or help on how to get correctly sized sprites across multiple devices? Thanks
Corrected answer:
Sprite size is measured in world units. Converting world units to pixel units may take a lot of calculations because it varies based on your camera's FOV, distance from camera, window height, pixel density, and so on...
To use pixel-based units, I recommend switching from THREE.Sprite to THREE.Points. Its material, THREE.PointsMaterial, has a size property that's measured in pixels if sizeAttenuation is set to false. Just keep in mind that it has a maximum size limitation based on the device's hardware, defined by gl.ALIASED_POINT_SIZE_RANGE.
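A minimal sketch of that suggestion, assuming a recent three.js build (spriteTexture and scene are placeholders from your own app):

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0], 3));

const material = new THREE.PointsMaterial({
  map: spriteTexture,        // placeholder: your sprite texture
  size: 16,                  // interpreted as pixels because attenuation is off
  sizeAttenuation: false,    // do not scale with distance from the camera
  transparent: true,
});

scene.add(new THREE.Points(geometry, material));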
My original answer continues below:
However, "1 px" is a subjective measurement nowadays because if you use renderer.setPixelRatio(window.devicePixelRatio); then you'll get different sprite sizes on different devices. For instance, MacBooks have a pixel ratio of 2 and above, some cell phones have pixel ratio of 3, and desktop monitors are usually at a ratio of 1. This can be avoided by not using setPixelRatio, or if you use it, you'll have to use a multiplication:
const s = 5;
points.material.size = s * window.devicePixelRatio;
Another thing to keep in mind is that THREE.Points are sized in pixels, whereas meshes are sized in world units. So sometimes when you shrink your browser window vertically, the Point size will remain the same, but the meshes will scale down to fit in the viewport. This means that a 5px Point will take up more real estate in a small window than it would on a large monitor. If this is the problem, make sure you use the window.innerHeight value when calculating Point size.
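One way to handle that (a sketch, with made-up names like pointsMaterial): express the size as a fraction of the window height and update it on resize.

const fractionOfHeight = 0.02;   // point should span roughly 2% of the window height

function updatePointSize(material) {
  // multiply by devicePixelRatio only if you also call renderer.setPixelRatio()
  material.size = fractionOfHeight * window.innerHeight * window.devicePixelRatio;
}

updatePointSize(pointsMaterial);
window.addEventListener('resize', () => updatePointSize(pointsMaterial));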
I am painting a rope. It is a Sprite built using a 16x16 texture that is repeated (using TextureOptions.REPEATING_BILINEAR) to 16 x ropeLength.
The problem is that I need to change the rope length on the fly (I am already doing this in onManagedUpdate), but I would also like to change the texture length, to avoid the "elastic" effect that happens when changing the sprite length without changing the texture length (the repeating textures are stretched or contracted to match the new sprite size).
I have confirmed that using "this.getTextureRegion().setTextureSize()" has no effect after the Sprite has been created.
Can anybody help me or give me some ideas?
You'll need to modify the u/v coordinates of the vertices instead. Unfortunately I don't know how to do that in AndEngine. Perhaps it's somewhere "near" the function you use to extend the rope (i.e. the one that modifies the x/y/z coordinates of the vertices). Hope this helps.
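I can't give AndEngine code, but as a language-neutral sketch of the idea (shown in JavaScript, with made-up names): keep the tile size fixed and scale only the U coordinate with the rope length, so each repeat always covers 16 texture units instead of being stretched.

// Returns the U range for the rope quad so one 16x16 tile always maps to 16
// units of rope length (requires the texture's REPEAT wrap mode).
function ropeUVs(ropeLength, tileSize = 16) {
  const repeats = ropeLength / tileSize;  // number of tile repetitions along the rope
  return { uMin: 0, uMax: repeats };      // apply these to the quad's end vertices
}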
I have a set of block objects, and I'd like to set the perspective camera so that their entire width is fully visible (the height will be too big - that's OK, we're going to pan up and down).
I've seen there are a number of questions close to this, such as:
Adjusting camera for visible Three.js shape
Three.js - Width of view
THREE.JS: Get object size with respect to camera and object position on screen
How to Fit Camera to Object
ThreeJS. How to implement ZoomALL and make sure a given box fills the canvas area?
However, none of them seem to quite cover everything I'm looking for:
I'm not interested in the height, only the width (they won't be the same - the size will be dynamic but I can presume the height will be larger than the width)
The camera.position.z (or the FOV, I guess) is the unknown, so I'm trying to get the equations the right way round to solve for that
(I'm not great with 3D maths. Thanks in advance!)
I was able to simplify this problem a lot, in my case...
Since I knew the overall size of the objects, I was able to simply come up with a suitable distance through changing the camera's z position a few times and seeing what looked best.
My real problem was that the same z position gave different widths, relative to the screen width, on different sized screens - due to the different aspect ratios.
So all I did was divide my distance value by camera.aspect. Now the blocks take up the same proportion of the screen's width on all screen sizes :-)
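In code, that workaround looks roughly like this (a sketch, assuming a THREE.PerspectiveCamera named camera; baseDistance is whatever value looked right on the reference screen):

const baseDistance = 50;   // placeholder: distance found by eye while tuning

function fitWidth(camera) {
  // keep the blocks covering the same share of the screen width on any aspect ratio
  camera.position.z = baseDistance / camera.aspect;
}

window.addEventListener('resize', () => {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  fitWidth(camera);
});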
I am trying to draw large numbers of 2d circles for my 2d games in opengl. They are all the same size and have the same texture. Many of the sprites overlap. What would be the fastest way to do this?
an example of the kind of effect I'm making http://img805.imageshack.us/img805/6379/circles.png
(It should be noted that the black edges are just due to the expanding explosion of circles; the area was filled in a moment after this screenshot was taken.)
At the moment I am using a pair of textured triangles to make each circle. I have transparency around the edges of the texture so as to make it look like a circle. Using blending for this proved to be very slow (and z culling was not possible as they were rendered as squares to the depth buffer). Instead I am not using blending but having my fragment shader discard any fragments with an alpha of 0. This works, however it means that early z is not possible (as fragments are discarded).
The speed is limited by the large amounts of overdraw and the gpu's fillrate. The order that the circles are drawn in doesn't really matter (provided it doesn't change between frames creating flicker) so I have been trying to ensure each pixel on the screen can only be written to once.
I attempted this by using the depth buffer. At the start of each frame it is cleared to 1.0f. Then when a circle is drawn, it sets that part of the depth buffer to 0.0f. When another circle would normally be drawn there, it is not, as the new circle also has a z of 0.0f, which is not less than the 0.0f already in the depth buffer. This works and should reduce the number of pixels that have to be drawn. However, strangely, it isn't any faster. I have already asked a question about this behavior (opengl depth buffer slow when points have same depth), and the suggestion was that z culling is not accelerated when using equal z values.
Instead I have to give all of my circles separate false z-values from 0 upwards. Then when I render using glDrawArrays and the default GL_LESS depth test, we correctly get a speed boost due to z culling (although early z is still not possible, as fragments are discarded to make the circles possible). However, this is not ideal, as I've had to add a large amount of z-related code to a 2D game that simply shouldn't require it (and not passing z values at all would be faster, if possible). This is, however, the fastest way I have found so far.
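To illustrate the false-z trick (a sketch, not my actual code): each circle gets its own tiny depth value when the vertex buffer is built, so the GL_LESS test can reject later circles on already-covered pixels.

// circleQuads: array of quads, each an array of [x, y] corner positions.
function addFalseDepth(circleQuads) {
  const out = [];
  circleQuads.forEach((quad, i) => {
    const z = i / circleQuads.length;        // unique depth per circle in [0, 1)
    quad.forEach(([x, y]) => out.push(x, y, z));
  });
  return new Float32Array(out);              // upload as a 3-component position attribute
}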
Finally I have tried using the stencil buffer, here I used
glStencilFunc(GL_EQUAL, 0, 1);
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
Where the stencil buffer is reset to 0 each frame. The idea is that after a pixel has been drawn to for the first time, it is changed to non-zero in the stencil buffer, so that pixel will not be drawn to again, reducing the amount of overdraw. However, this has proved to be no faster than just drawing everything without the stencil buffer or the depth buffer.
What is the fastest way people have found to do what I am trying to do?
The fundamental problem is that you're fill-limited: the GPU can't shade all the fragments you're asking it to draw in the time you're expecting. The reason your depth-buffering trick isn't effective is that the most time-consuming part of processing is shading the fragments (either through your own fragment shader, or through the fixed-function shading engine), which occurs before the depth test. The same issue occurs with stencil; shading the pixel occurs before stenciling.
There are a few things that may help, but they depend on your hardware:
render your sprites from front to back with depth buffering (see the sketch after this list). Modern GPUs often try to determine if a collection of fragments will be visible before sending them off to be shaded. Roughly speaking, the depth buffer (or a representation of it) is checked to see if the fragment that's about to be shaded will be visible, and if not, its processing is terminated at that point. This should help reduce the number of pixels that need to be written to the framebuffer.
Use a fragment shader that immediately checks your texel's alpha value, and discards the fragment before any additional processing, as in:
varying vec2 texCoord;
uniform sampler2D tex;
void main()
{
vec4 texel = texture2D( tex, texCoord );
if ( texel.a < 0.01 ) discard;
// rest of your color computations
}
(you can also use alpha test in fixed-function fragment processing, but it's impossible to say if the test will be applied before the completion of fragment shading).
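A sketch of the front-to-back suggestion above, assuming a WebGL-style context gl, a sprites array with a per-sprite depth, and your own drawSprite routine (all placeholder names):

gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS);                 // keep the nearest fragment, reject the rest

// draw the nearest sprites first so later (hidden) fragments fail the early depth test
sprites.sort((a, b) => a.z - b.z);     // assumes smaller z means closer to the camera
for (const sprite of sprites) {
  drawSprite(sprite);
}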
I am currently porting an app over from iOS to Windows Phone 8. It is an image processing app, and all calculations are done on the GPU using pixel shaders.
There is one detail that I just haven't been able to figure out: the Texel Width/Height offsets. I have absolutely no idea what these values are, and I can't seem to find any information on them.
Are they common terms? Does anybody know what they represent? Does anyone know what sort of values should be in them?
A texel is a pixel of a texture, addressed by a coordinate; the offset in a texture is where the texture begins to be mapped onto a model or render target.
The simplest example of this:
http://lifeasa.files.wordpress.com/2011/02/super_mario_world_by_xinzax.png
The stage map is a few textures; as Mario advances through the level, the X coordinate offset increases, the right part of the texture becomes visible, and at the same time the left side becomes hidden.
Check the textures: if a single image contains more than one "part", this is what's happening.
Another case is a single texture that is mapped onto multiple objects, where each object has an offset so that it shows the "segment" following the previous object's.
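A tiny sketch of that scrolling-offset idea (JavaScript here just for illustration; progress and visiblePortion are made-up names):

// Returns the U range of the wide level texture that should be visible for a
// given progress through the level (0 = start, 1 = end).
function visibleUVRange(progress, visiblePortion) {
  const offsetX = progress * (1 - visiblePortion);  // left edge of the visible window in UV space
  return { uMin: offsetX, uMax: offsetX + visiblePortion };
}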