ThreeJS: Minimum screen width for a single object

I am using a very similar project to this example: https://developers.arcgis.com/javascript/latest/sample-code/scene-external-renderer-threejs/index.html
This application displays a 3D object (the ISS). If you zoom out, the object gets smaller (as expected); however, I want to enforce a minimum size for this object: it should always take up at least 10% of the screen width.
How can I achieve this goal?
I don't want to restrict the camera movement by any method; instead, I want to enlarge the object itself.

To get this working you need to:
Compute the bounding sphere of your object's geometry using boundingSphere (Geometry).
Project the bounding sphere to screen space (example of projection: Converting World coordinates to Screen coordinates in Three.js using Projection).
Check that the projected object width is not less than 10% of the window width, and scale the object up when it is. A sketch of these steps follows below.
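A minimal sketch of those steps for a perspective camera (a hypothetical helper; it assumes object, camera, and renderer are in scope, that the object is uniformly scaled, and uses THREE.MathUtils, which is THREE.Math in older releases):

function enforceMinimumScreenWidth(object, camera, renderer) {
    var geometry = object.geometry;
    if (geometry.boundingSphere === null) geometry.computeBoundingSphere();

    // World-space center and radius of the object's bounding sphere.
    var center = geometry.boundingSphere.center.clone().applyMatrix4(object.matrixWorld);
    var radius = geometry.boundingSphere.radius * object.scale.x;

    // Width of the view frustum at the sphere's distance, in world units.
    var dist = camera.position.distanceTo(center);
    var frustumHeight = 2 * dist * Math.tan(THREE.MathUtils.degToRad(camera.fov) / 2);
    var frustumWidth = frustumHeight * camera.aspect;

    // Fraction of the screen width covered by the sphere's diameter.
    var fraction = (2 * radius) / frustumWidth;
    if (fraction < 0.1) {
        object.scale.multiplyScalar(0.1 / fraction); // enlarge to exactly 10%
    }
}

Call this every frame (or whenever the camera moves) before rendering. Note that as written it only ever enlarges; to let the object shrink back to its natural size when the camera approaches, store the original scale and reset it before the check each frame.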

Related

MapLibre GL JS with terrain layer: How to pin a horizontal plane to a specific altitude?

I based some code on the "Add a 3D model" example at maplibre.org in order to draw only a horizontal plane on a map which uses setTerrain to add a terrain layer.
My intention is to draw a couple of semi-transparent layers at given heights above sea level and have them intersect with mountains, somewhat similar to contour lines.
In my first test I just created a 1km-wide square at altitude 0.
I am somewhat confused by the behavior, since altitude 0 turns out to be the height of the terrain at the center of the visible map area. When I then drag the map and release the mouse, altitude 0 is somehow reset to the new terrain height at the center, making the plane change its relative altitude.
An animated GIF illustrating the problem (not reproduced here) shows a mountain range to the right, with the elevation greatly exaggerated in order to better illustrate the issue.
What do I need to do in order to be able to specify the height of the plane in meters above sea level and have it appear to be fixed at that height when dragging the map?
I think I need to get the height of the terrain at the center and then add/subtract it from the plane's z-position in order for it to stay put at the height relative to given landmarks, but I have no idea how to do this inside of a custom layer's render function.
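A rough sketch of exactly that idea, for inside the custom layer's render function (untested and hedged: it assumes your MapLibre GL JS version exposes map.queryTerrainElevation, and planeOrigin / PLANE_ALTITUDE_M are hypothetical names for the plane's location and its desired height in meters above sea level):

render(gl, matrix) {
    // Terrain height at the view center; with a terrain layer active this is
    // what the renderer currently treats as altitude 0.
    var centerElevation = map.queryTerrainElevation(map.getCenter()) || 0;

    // Compensate so the plane stays at a fixed height above sea level.
    var coord = maplibregl.MercatorCoordinate.fromLngLat(
        planeOrigin, PLANE_ALTITUDE_M - centerElevation);

    // ...rebuild the model matrix from coord each frame and draw the plane
    // as in the "Add a 3D model" example...
}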

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated using my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
// Keep only the grid points whose ray from the camera is not blocked by the mesh.
var startPoint = camera.position.clone();
var raycaster = new THREE.Raycaster();
var direction = new THREE.Vector3();
gridPointsVisible = gridPoints.geometry.vertices.filter(function (vertex) {
    // Ray from the camera toward this grid point.
    direction.subVectors(vertex, startPoint).normalize();
    raycaster.set(startPoint, direction);
    // Visible only if the mesh does not intersect the ray.
    return raycaster.intersectObject(defaultMesh).length === 0;
});
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points; you should know what color each sphere should be by applying the same color calculation the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
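A rough sketch of that pipeline (hypothetical names throughout: pickingScene is assumed to hold your points drawn with flat colors encoding each point's index as R + G*256 + B*65536, with 0 reserved for the background; older three.js releases take the render target as a third argument to renderer.render() instead of using setRenderTarget()):

var SIZE = 256; // low-resolution target, per the optimization note above
var pickingTarget = new THREE.WebGLRenderTarget(SIZE, SIZE);
var pixels = new Uint8Array(SIZE * SIZE * 4);

function findVisiblePoints(renderer, pickingScene, camera, pointCount) {
    // Render the flat-colored scene off-screen instead of to the canvas.
    renderer.setRenderTarget(pickingTarget);
    renderer.render(pickingScene, camera);
    renderer.setRenderTarget(null);

    // Read the RGBA buffer back and collect every ID color that survived
    // the depth test; those are the visible points.
    renderer.readRenderTargetPixels(pickingTarget, 0, 0, SIZE, SIZE, pixels);
    var visible = new Set();
    for (var i = 0; i < pixels.length; i += 4) {
        var id = pixels[i] + pixels[i + 1] * 256 + pixels[i + 2] * 65536;
        if (id > 0 && id <= pointCount) visible.add(id - 1);
    }
    return visible; // indices of grid points the camera can see
}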

ThreeJS - Scale texture's size down (no repeat - using UV-Coords)

Hello, I am new to ThreeJS and texture mapping.
Let's say I have a 3D plane with the size of 1000x1000x1. When I apply a texture to it, it will be repeated or scaled so that it at least fills the full plane.
What I try to achieve is to change the scaling of the picture on the plane at runtime. I want the image to get smaller and stop filling the full plane.
I know there is a way to map each face to a part of a picture, but is it also possible to map it to a negative coordinate in the picture, so the overflow will be transparent?
My question is:
I UV-mapped a model in Blender and imported it with the UV coords into my ThreeJS code. Now I need to scale the texture down, as described before. Do I have to remap the UV coords, or do I have to manipulate the image and add a transparent edge?
Furthermore, will I be able to move the image on the plane in the same way?
I already achieved this kind of usage in Java3D by manipulating BufferedImages and drawing them onto transparent ones. I am not sure this will be possible using JavaScript, so I want to know whether it is possible with texture mapping.
Thank you for your time and your suggestions!
This can be done by mapping the 3D plane to a canvas on which the image is drawn (fabric.js can be used for the canvas drawing). In short, set the canvas as the texture for the 3D model:
var canvas = document.getElementById("yourCanvas");
yourmodel.material.map = new THREE.CanvasTexture(canvas);
Hope it helps :)
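One caveat with the canvas approach (assuming THREE.CanvasTexture as above): whenever you redraw the canvas, the texture must be flagged for re-upload to the GPU:

yourmodel.material.map.needsUpdate = true;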
Yes. In THREE, there are some controls on the texture object..
texture.repeat and texture.offset; they are both Vector2()s.
To repeat the texture twice you can do texture.repeat.set(2,2);
Now if you just want to scale but NOT repeat, there is also the "wrapping mode" for the texture.
texture.wrapS (U axis) and texture.wrapT (V axis) and these can be set to:
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
This will make the edge pixels of the texture extend off to infinity when sampling, so you can position a single small texture, anywhere on the surface of your uv mapped object.
https://threejs.org/docs/#api/textures/Texture
Between those two options (including texture.rotation) you can position/repeat a texture pretty flexibly.
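For instance, a small sketch (assuming texture is the THREE.Texture already applied to your material) that shows the image at half size, centered on the plane:

texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
texture.repeat.set(2, 2);       // a repeat of 2 shows the image at half size
texture.offset.set(-0.5, -0.5); // shift the UVs so the image sits in the middle
texture.needsUpdate = true;     // needed if the wrap mode changes after first use

With clamping, the texture's edge pixels are stretched across the rest of the plane, so give the image a one-pixel transparent border if the uncovered area should be transparent.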
If you need something even more complex, like warping the texture or changing its colors, you may want to change the UVs in your modeller, or draw your texture image into a canvas, modify the canvas, and use the canvas as your texture image, as described in ArUn's answer. Then you can modify it at runtime as well.

Fixed size for certain objects

I have a 3D scene which includes interface objects rendered with the same perspective camera. I want those objects to keep a fixed size on screen, regardless of the distance from their center to the camera.
My current solution involves adding/removing those objects to/from an array, calculating the bounding sphere for each of them every frame, and rescaling based on the camera's distance to the center of each sphere.
But that doesn't feel right, and it will cost too many resources once the number of those objects gets big enough. Is there an efficient way to solve this, e.g. by setting a fixed size for the objects? I don't really want to use a second camera, because that would display the interface objects in a weird way, as if they don't really belong there.
If I understand you correctly, you want an object to stay the same size as you move the camera closer/further away. One method is to first "attach" the item to the camera, then offset it. This would be executed every frame:
fixedSizeObject.position.copy( this.camera.position );
fixedSizeObject.rotation.copy( this.camera.rotation );
fixedSizeObject.updateMatrix();
fixedSizeObject.translateZ( -30 ); //where -30 is the distance you'd like from the camera

How to get consistent gradient fill in GDI+ when using a rotated LinearGradientBrush?

I'm using GDI+ in my application, and I need to use a rotated LinearGradientBrush to paint several rects in the exact same way. However, although I'm calling the same code to fill each rect, the results aren't what I expect. Here's the code to create the gradient fill, where rcDraw is the rect containing the area to paint for each rect. These coordinates are in the parent window's coordinates, so they are not identical for the 2 rects.
g_hbrLinear = new LinearGradientBrush(
    Rect( 0, rcDraw.top, 0, rcDraw.bottom - rcDraw.top ),
    clrStart, clrEnd, (REAL) 80, FALSE );
What I see on screen looks like this (http://www.nnanime.com/bugs/LinGradBrush-rotate10.png). You can see that it's as if the fill from the first rect continues into the second one. What I really want is to have the 2 rects look identical. I think I can do that if I paint each rect separately using its own client coordinates, but for the purposes of my app, I need to use the parent window's coordinates.
I guess what I'm asking is, how does GDI+ calculate the "origin" of a fill? Is it always based on 0,0 in the coordinate system you use? Is there a way to shift it? I tried TranslateTransform, but it doesn't seem to shift the fill in a way that I find predictable or understandable.
The rect passed to the linear gradient brush determines where the left and right colors will sit, and the gradient will be painted within this rectangle.
So, I think you need to create a brush for each rectangle you are painting, passing that same rectangle to the constructor of the linear gradient brush, as sketched below.
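A hedged sketch of that idea (untested; graphics is assumed to be the Gdiplus::Graphics being painted on, rc the rectangle to fill, and clrStart, clrEnd and the 80-degree angle are taken from the question):

// Build the brush around the same rectangle it will fill, so the gradient
// origin is identical relative to every rectangle.
Gdiplus::LinearGradientBrush brush(
    Gdiplus::Rect( rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top ),
    clrStart, clrEnd, (Gdiplus::REAL) 80, FALSE );
graphics.FillRectangle( &brush, rc.left, rc.top,
                        rc.right - rc.left, rc.bottom - rc.top );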
My experience with the "transform" of linear gradient brushes matches yours; I haven't been able to understand what it's supposed to do.
You can think of a brush in GDI+ as a function mapping world co-ordinates to a color. What the brush looks like at a given point does not change based on the shape being filled.
It does change with the transform of the Graphics object you're drawing on. So, if you don't want to change the brush, you could temporarily change the transform of the Graphics object so that the rectangle you're drawing has a specific, known size and position in world coordinates. The BeginContainer and EndContainer methods should make this easy.
(There is also the RenderingOrigin property but it only affects hatch brushes, which oddly are unaffected by world transforms.)
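To illustrate the container approach, a minimal sketch (again untested; it assumes a brush built once for a fixed 100x100 rectangle at the origin, with graphics and rc as above):

// Save the state, remap world coordinates so the target rectangle lands on
// the fixed rectangle the brush was built for, fill, then restore.
Gdiplus::GraphicsContainer state = graphics.BeginContainer();
graphics.TranslateTransform( (Gdiplus::REAL) rc.left, (Gdiplus::REAL) rc.top );
graphics.ScaleTransform( (rc.right - rc.left) / 100.0f, (rc.bottom - rc.top) / 100.0f );
graphics.FillRectangle( &brush, 0, 0, 100, 100 );
graphics.EndContainer( state );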
