Processing 3 Box2D Velocity Not Scaling With Resolution

I'm currently trying to make my simple game scale with the resolution. I've noticed, though, that when I change the resolution not everything works out. For instance, going from 1280x720 to 1920x1080, the jumping distance changes slightly. The main problem is what happens when I fire a projectile with a velocity: on lower resolutions it travels across the screen significantly faster, and I can't understand why, as it should scale down with the size of the window. Here is a snippet of the code that fires a projectile:
m = new Box(l.pos.x+Width/32*direction2, l.pos.y-Height/288, Width/64, Height/72, true, 4);
m.body.setGravityScale(0f);
boxes.add(m);
m.body.setLinearVelocity(new Vec2(Width*direction2, 0));
In this scenario m is a box I'm creating, where the constructor is new Box(spawn x coordinate, spawn y coordinate, width of box, height of box, is the box moveable, type of box). l.pos.x and l.pos.y are the positions I'm firing the box from. The Height and Width variables are the size of the current window in pixels, updated in void draw(), and direction2 is either 1 or -1 depending on the direction the character is facing.

It's hard to tell how the rest of the code affects the simulation without seeing more of it.
Ideally you want to keep physics-related properties independent of the Processing sketch dimensions: simulate a fixed-size world and simply scale up the rendering of that same world. If you have mouse interaction, those coordinates would scale as well, but other than positions, the rest of the physical properties should stay the same.
From what I can gather, if Width is the sketch's width, you should de-couple it from the linear velocity:
m.body.setLinearVelocity(new Vec2(Width*direction2, 0));
Instead, use a value that stays constant regardless of the sketch dimensions.
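For illustration, here's a minimal sketch of that separation. SCALE, worldWidth, PROJECTILE_SPEED, and the Box fields w and h are all placeholder names, not from your code, and rendering details (rectMode, y-axis flip) are glossed over:
float worldWidth = 40;            // fixed world width in metres (assumed)
float SCALE = width / worldWidth; // pixels per metre; recompute when the window resizes

float PROJECTILE_SPEED = 25;      // metres per second, identical at every resolution (assumed)
m.body.setLinearVelocity(new Vec2(PROJECTILE_SPEED * direction2, 0));

// Only the rendering uses SCALE; the Box2D world itself never changes size:
void drawBox(Box b) {
  Vec2 p = b.body.getPosition();
  rect(p.x * SCALE, p.y * SCALE, b.w * SCALE, b.h * SCALE);
}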

Related

THREE.Sprite size in px with sizeAttenuation false

I'm trying to scale sprites to have a size defined in px, regardless of camera FOV and so on. I have sizeAttenuation set to false, as I don't want them scaled based on distance from the camera, but I'm struggling with setting the scale. I don't really know the conversion formula, and when I hardcoded the scale with a number that looks OK on one device, it's wrong on another. Any advice on how to get the sprites correctly sized across multiple devices? Thanks
Corrected answer:
Sprite size is measured in world units. Converting world units to pixel units can take a lot of calculation because it varies with your camera's FOV, distance from the camera, window height, pixel density, and so on.
To use pixel-based units, I recommend switching from THREE.Sprite to THREE.Points. Its material, THREE.PointsMaterial, has a size property that is measured in pixels when sizeAttenuation is set to false. Just keep in mind that it has a maximum size limitation based on the device's hardware, defined by gl.ALIASED_POINT_SIZE_RANGE.
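As a minimal sketch of that switch (assuming a recent three.js version with BufferGeometry.setAttribute, an existing scene, and a placeholder texture path):
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0], 3));
const material = new THREE.PointsMaterial({
  size: 16,                 // in pixels, because sizeAttenuation is false
  sizeAttenuation: false,
  map: new THREE.TextureLoader().load('sprite.png'), // placeholder texture
  transparent: true
});
scene.add(new THREE.Points(geometry, material));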
My original answer continues below:
However, "1 px" is a subjective measurement nowadays, because if you use renderer.setPixelRatio(window.devicePixelRatio); then you'll get different sprite sizes on different devices. For instance, MacBooks have a pixel ratio of 2 and above, some cell phones have a pixel ratio of 3, and desktop monitors are usually at a ratio of 1. This can be avoided by not using setPixelRatio, or, if you do use it, by multiplying by the ratio:
const s = 5;
points.material.size = s * window.devicePixelRatio; // size lives on the PointsMaterial, not on the Points object
Another thing to keep in mind is that THREE.Points are sized in pixels, whereas meshes are sized in world units. So sometimes when you shrink your browser window vertically, the Point size will remain the same, but the meshes will scale down to fit in the viewport. This means that a 5px Point will take up more real estate in a small window than it would on a large monitor. If this is the problem, make sure you use the window.innerHeight value when calculating Point size.

Possible to prioritise drawing of objects in Three.js?

I am working on a CAD type system using threejs. I have thin objects next to other objects (think thin 2mm metal sheeting fixed to posts on a building measured in metres). When I am zoomed in it all looks fine. The objects do not intersect at all. As I zoom out the objects get smaller and I end up with cases where the post object 'glimmers' (sort of shows through) the metal sheet object as I rotate it around.
I understand it's the small numbers I am working with that is causing this effect. However, is there a way to set a priority such that one object (the metal sheeting) is more important than another object (post) so it doesn't get that sort of effect?
To answer the question from the title: yes, it is possible to prioritise draw order with renderOrder:
myMesh.renderOrder = 5
myOtherMesh.renderOrder = 7
It is then possible to apply different depth effects, turn off the depth test, etc.
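For example (a sketch using the mesh names above, assuming a single material per mesh):
myOtherMesh.renderOrder = 7;            // drawn after myMesh
myOtherMesh.material.depthTest = false; // always drawn on top, regardless of depth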
Another way is to group objects with layers: set the appropriate layer mask on the camera, then render (multiple times).
myMesh.layers.set(5)           // put the prioritised mesh on layer 5
camera.layers.set(1)           // first pass: render only layer-1 objects
renderer.render(scene, camera)
camera.layers.set(5)           // second pass: render the layer-5 objects on top
renderer.render(scene, camera)
This is called z-fighting: two fragments are so close together in the available depth range that their z-values fall within the margin of error, so their true depth order can come out inverted.
The easiest way to resolve this is to reduce the range your depth buffer covers. This is controlled by the near and far properties on your camera. You'll need to play with the values to determine what works best for your scenario. The smaller the distance between the two planes, the better your luck avoiding z-fighting.
For example, if (as a loose estimate) the bounding sphere of your entire model has a diameter of 100, then the distance between near and far need only be 100. However, their values are set as the distance into camera space. So as you zoom out, and your camera moves further away, you should adjust the values to maintain the minimum distance between them. If your camera is at z = 100, then set near = 50 and far = 150. When you pull your camera back to z = 250, then update near = 200 and far = 300.
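A minimal sketch of that adjustment, assuming the model is centred at the origin and its bounding-sphere radius is known (both are assumptions here):
const radius = 50; // assumed bounding-sphere radius of the whole model
function updateDepthRange(camera) {
  const dist = camera.position.length(); // distance to the assumed model centre
  camera.near = Math.max(0.1, dist - radius);
  camera.far = dist + radius;
  camera.updateProjectionMatrix(); // required after changing near/far
}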
Another option is to use the WebGLRenderer logarithmicDepthBuffer option.
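It is enabled when constructing the renderer, for example:
const renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });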
Edit: There is one other cause: the faces of the shapes are actually co-planar. If two triangles are occupying the same space, then you're all but guaranteeing z-fighting.
The simple solution is to move one of the components so that the faces are no longer co-planar. You could also apply a polygonOffset to the sheet-metal material, though your use case doesn't sound like one where that is appropriate.
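For reference, a polygonOffset sketch (the material name is a placeholder):
sheetMaterial.polygonOffset = true;
sheetMaterial.polygonOffsetFactor = -1; // negative values pull the sheet toward the camera in depth
sheetMaterial.polygonOffsetUnits = -1;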

Why does FPS drop when increasing the scale of objects in Three.js?

I have a scene with a single camera and one PlaneBufferGeometry
If I make this plane size 1x1 I get 60fps
If I make this plane size 1000x1000 I get <20fps
Why does this happen? I am drawing the same number of vertices to the screen.
Here is a fiddle showing the problem
Just change the definition of size between 1 and 1000 to observe the problem.
var size = 10000;
//size = 1;
var geometry = new THREE.PlaneBufferGeometry(size, size);
I am adding 50 identical planes in this example. There isn't a significant fps hit with only one plane.
It's definitely normal. A larger plane covers more surface on the screen, and thus more pixels.
More fragments are emitted by the rasterisation process. For each one, the GPU checks whether it passes the depth test and/or the stencil test, and if so, it invokes the fragment shader for that pixel.
Try zooming in on your 1x1 plane until it covers the whole screen; your FPS will drop as well.
#pleup has a good point there; to extend on that a little bit: even a low-end GPU will have absolutely no problem overdrawing (painting the same pixel multiple times) several times at fullscreen, I'd say something like 4 to 8 times, and still keep up 60 FPS. This number is likely a bit lower for WebGL due to the compositing with the DOM and browser UI, but it's still multiple times for sure.
Now what is happening is this: you are in fact creating 50 planes, not only one, all of them the same size and in the same place. No idea why, but that's irrelevant here. As all of them are in the same place, every single pixel needs to be drawn 50 times, and in the worst case that is 50 times the full screen area.
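To put rough numbers on that (a back-of-the-envelope estimate, not a measurement): one fullscreen-covering plane at 1920x1080 is about 2 million fragments per frame, so 50 coincident fullscreen planes can force up to 1920 * 1080 * 50, roughly 104 million, fragment-shader invocations per frame, far past the 4 to 8 times overdraw budget estimated above. The 1x1 plane covers only a handful of pixels 50 times over, so it never comes close.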

Three.js zoom to fit width of objects (ignoring height)

I have a set of block objects, and I'd like to set the perspective camera so that their entire width is fully visible (the height will be too big - that's OK, we're going to pan up and down).
I've seen there are a number of questions close to this, such as:
Adjusting camera for visible Three.js shape
Three.js - Width of view
THREE.JS: Get object size with respect to camera and object position on screen
How to Fit Camera to Object
ThreeJS. How to implement ZoomALL and make sure a given box fills the canvas area?
However, none of them seem to quite cover everything I'm looking for:
I'm not interested in the height, only the width (they won't be the same - the size will be dynamic but I can presume the height will be larger than the width)
The camera.position.z (or the FOV I guess) is the unknown, so I'm trying to get the equations round the right way to solve that
(I'm not great with 3D maths. Thanks in advance!)
I was able to simplify this problem a lot, in my case...
Since I knew the overall size of the objects, I was able to simply come up with a suitable distance through changing the camera's z position a few times and seeing what looked best.
My real problem was that the same z position gave different widths, relative to the screen width, on different sized screens - due to the different aspect ratios.
So all I did was divide my distance value by camera.aspect. Now the blocks take up the same proportion of the screen's width on all screen sizes :-)
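For anyone who needs the exact version rather than a tuned constant, here's a sketch under the same setup: a PerspectiveCamera looking down the z-axis at blocks centred at z = 0, with objectWidth as a placeholder for the combined width of the blocks in world units:
function distanceToFitWidth(camera, objectWidth) {
  const fovRad = camera.fov * Math.PI / 180;             // camera.fov is the vertical FOV, in degrees
  const visibleWidthAtUnitDistance = 2 * Math.tan(fovRad / 2) * camera.aspect;
  return objectWidth / visibleWidthAtUnitDistance;
}
camera.position.z = distanceToFitWidth(camera, 10);      // e.g. blocks spanning 10 world units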

Correct Translation for artificial horizon

I would like to draw an artificial horizon. The center of the view would represent perfectly horizontal view with roll rotating the horizontal line and pitch moving it up or down.
The question is: what is the correct calculation to translate the horizon line up or down (pitch) given the pitch angle.
My guess is that this would probably depend on the FOV angle that one would assume for an assumed camera, so this angle would need to be a factor in the algorithm sought. Ideally I would figure out this angle for the iPhone/iPad camera so that the artificial horizon would line up with the actual horizon if you hold the device in front of you and look towards the horizon.
Until now I've been guesstimating the offset, but I would like to have the exact formula.
Try horizon_offset / (screen_height / 2) = tan(pitch) / tan(vertical_FOV / 2).
The formula derives itself from a diagram of the viewing geometry (original figure source: zwibbler.com).
Update: I had two angles mixed up. One is the FOV angle of the camera; the other is the viewing angle of the screen. These are two different things. The latter depends on the viewing distance. You probably have to estimate this distance, and adjust magnification and/or focal distance so that objects visible on the screen have the same angular size as the same objects seen with the naked eye. (With my particular phone, you would need to magnify the image by an additional factor of about 3 after the 5x zoom, if the user stretches their hand with the phone all the way forward.) Then the two angles are the same, and the formula works.
If you want to introduce magnification (i.e. objects on the screen have different sizes from their real-life counterparts), multiply the horizon offset by the magnification factor.
Update 2: When taking the viewing distance into account, the screen size cancels out, and the offset simply becomes viewing_distance * tan(pitch_angle) (with unit magnification).
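A small worked example of the original formula (all values are placeholders chosen for illustration):
const screenHeight = 1136;                // screen height in px (assumed)
const verticalFOV = 48 * Math.PI / 180;   // assumed camera vertical FOV, in radians
const pitch = 10 * Math.PI / 180;         // current pitch angle

// horizon_offset / (screen_height / 2) = tan(pitch) / tan(vertical_FOV / 2)
const horizonOffset = (screenHeight / 2) * Math.tan(pitch) / Math.tan(verticalFOV / 2);
console.log(horizonOffset.toFixed(1) + ' px from the screen centre');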
