Deform a plane with data - three.js

For a dataviz project, I have some data in an array. I need to use this data to render a wave in three.js, like this:
The height of each peak depends on the given size, and each peak must sit in its circle depending on the year.
I thought about creating a plane and deforming it with a vertex shader based on the data. Unfortunately, it seems that this is not possible. I'm a bit lost and I clearly need advice on how to do this.
The array looks like this:
[
  {
    "year": 2016,
    "dimension": 28.400 // hectares
  },
  {
    "year": 1995,
    "dimension": 12.200
  }
]

There is a "displacementMap" texture property on several THREE materials (e.g. MeshStandardMaterial). It displaces the vertices along their normals (vertically, for a flat plane) based on the brightness of the corresponding pixel in the texture.
You can make a plane with a bunch of subdivisions, like new THREE.Mesh(new THREE.PlaneGeometry(10, 10, 30, 30), new THREE.MeshStandardMaterial({ displacementMap: yourDisplacementTexture }))
For the displacementMap texture, you can either use an external image, or create a canvas, draw your height data into that, and then create a THREE.Texture( thecanvas ).
Yet another option is to create the subdivided plane with THREE.PlaneGeometry(), then get geometry.vertices and modify the .z value of each vertex, followed by geometry.verticesNeedUpdate = true (to tell the renderer to re-send the modified vertices to the GPU).
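Either way, the data array first has to become a set of heights. As a rough sketch (plain JS, made-up helper name, not a three.js API), the records could be normalized like this before being painted into a displacement canvas or written into vertex z values:

```javascript
// Hypothetical helper: reduce { year, dimension } records to a row of
// normalized heights, sorted by year so each peak lands in chronological
// order along the plane.
function buildHeights(records) {
  const sorted = [...records].sort((a, b) => a.year - b.year);
  const max = Math.max(...sorted.map((r) => r.dimension));
  // Normalize to 0..1 so the tallest peak uses the full displacement range.
  return sorted.map((r) => r.dimension / max);
}

const heights = buildHeights([
  { year: 2016, dimension: 28.4 },
  { year: 1995, dimension: 12.2 },
]);
console.log(heights); // 1995 first, each value scaled against the largest dimension
```

Each height would then drive either a grayscale pixel value (for the displacementMap route) or a vertex .z offset directly.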

Related

Three.js: Merge BufferGeometries while keeping attribute position separate for each geometry

In the application I'm developing, I'm trying to connect the closest dots of the same color. The dots are created from a vector of 3D coordinates and rendered with Points, using a BufferGeometry with custom positions, set via the geometry's setAttribute command with a Float32BufferAttribute(positionArray, 3) object. The problem is that I have a lot of such geometries (usually tens of thousands), and since I add each one separately to the group, I have big performance issues.
So I tried to merge the buffer geometries in a single one to draw all of them at once using BufferGeometryUtils.mergeBufferGeometries, but that didn't work.
How it looks without merging the geometries
How it looks with merged geometries
This is how I create the geometries:
const newGeometry = baseGeometry.clone();
newGeometry.setAttribute(
  'position',
  new Float32BufferAttribute(geometryPositions, 3)
);
newGeometry.setAttribute(
  'color',
  new Float32BufferAttribute(geometryColors, 3)
);
newGeometry.computeBoundingSphere();
geometryArray.push(newGeometry);
I add them to the group like this:
geometryArray.forEach((e) => {
  this.group.add(new Mesh(e, baseMaterial));
});
This is how I merge them and add them to the group.
const merged = BufferGeometryUtils.mergeBufferGeometries(geometryArray);
this.group.add(new Mesh(merged, baseMaterial));
As you can see the geometries use the same material in all cases, the color being defined in the colors attribute of each geometry and vertexColors is set to true on the MeshBasicMaterial.
For a single geometry in the array, the position/color data looks like this. The sizes can vary, and the arrays may be empty depending on whether the points have neighbors. The format is flat 3D coordinates: [x1, y1, z1, x2, y2, z2, ...].
const positions = {
  itemSize: 3,
  type: 'Float32Array',
  array: [
    4118.44775390625, -839.14404296875, 845.7374877929688, 4125.9306640625,
    -808.6709594726562, 856.7002563476562, 4118.44775390625,
    -839.14404296875, 845.7374877929688, 4129.93017578125, -870.6640625,
    828.08154296875,
  ],
  normalized: false,
};
const colors = {
  itemSize: 3,
  type: 'Float32Array',
  array: [
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
    0.9725490212440491, 0.5960784554481506, 0.03529411926865578,
  ],
  normalized: false,
};
How could I improve the performance of the code above while keeping the custom positions and colors of each geometry intact?
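For reference, the kind of combination mergeBufferGeometries performs can be sketched by hand in plain JS (helper name and input shape are made up for illustration): concatenate the per-geometry attribute arrays into one pair of arrays, which would then back a single BufferGeometry and a single draw call.

```javascript
// Illustrative sketch (not a three.js API): merge the position and color
// arrays of many geometries into one pair of Float32Arrays. Each input is
// assumed to hold flat [x, y, z, ...] positions and matching [r, g, b, ...]
// colors of equal length.
function mergeAttributeArrays(geometries) {
  const total = geometries.reduce((n, g) => n + g.positions.length, 0);
  const positions = new Float32Array(total);
  const colors = new Float32Array(total);
  let offset = 0;
  for (const g of geometries) {
    positions.set(g.positions, offset);
    colors.set(g.colors, offset);
    offset += g.positions.length;
  }
  return { positions, colors };
}

const merged = mergeAttributeArrays([
  { positions: [0, 0, 0, 1, 1, 1], colors: [1, 0, 0, 1, 0, 0] },
  { positions: [2, 2, 2], colors: [0, 1, 0] },
]);
console.log(merged.positions.length); // 9
```

The merged arrays keep every vertex's original position and color; only the per-object grouping is lost, which is why this trades flexibility for far fewer draw calls.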

Is there a simple way of handling (transforming) a group of objects in SkiaSharp?

In a nutshell: I need to draw a complex object (an arrow) consisting of a number of primitives, for instance five (or more) lines. More importantly, that object must be transformed with particular (dynamic) coordinates, possibly including scaling.
My question is whether SkiaSharp has anything I can use to manipulate this complex object's transformation (some sort of grouping, etc.), or do I still need to calculate every single point manually (with a matrix, for instance)?
This question relates particularly to SkiaSharp, as I use it on Xamarin, but maybe some general Skia answers can also help.
I think the question might be too general (and possibly not a perfect fit for Stack Overflow), but I just can't find any specific information on Google.
Yes, I know how to use SkiaSharp for drawing primitives.
Create an SKPath and add lines and other shapes to it:
SKPath path = new SKPath();
path.LineTo(...);
...
Then draw the SKPath on your canvas:
canvas.DrawPath(path, paint);
You can apply a transform to the entire path before drawing:
var rot = new SKMatrix();
SKMatrix.RotateDegrees(ref rot, 45.0f);
path.Transform(rot);
If you are drawing something more complex than a path SKPicture is perfect for this. You can set it up so that you construct it once and then reuse it easily and efficiently. In the example below, the SKPicture's origin is in the center of a 100 x 100 rectangle but that is arbitrary.
SKPicture myPicture;
SKPicture MyPicture {
  get {
    if (myPicture != null) {
      return myPicture;
    }
    using (SKPictureRecorder recorder = new SKPictureRecorder())
    using (SKCanvas canvas = recorder.BeginRecording(new SKRect(-50, -50, 50, 50))) {
      // draw using primitives
      // ...
      myPicture = recorder.EndRecording();
    }
    return myPicture;
  }
}
Then you apply your transforms to the canvas, draw the picture and restore the canvas state. offsetX and offsetY correspond to where the origin of the SKPicture will be rendered.
canvas.Save();
canvas.Translate(offsetX, offsetY);
canvas.Scale(scaleAmount);
canvas.RotateDegrees(degrees);
canvas.DrawPicture(MyPicture);
canvas.Restore();

Aframe create shape from vectors

I'm doing something like this: How to create a custom square in a-frame, but with custom shapes (i.e. drawing around an image to make a hotspot so that part of the image is interactive).
I've got the line working and I'm now trying to convert this into a fill.
this._mesh = make('a-plane', {
  geometry: "buffer: false"
}, this.el)
this._mesh.addEventListener('loaded', e => {
  this._mesh.getObject3D('mesh').geometry.vertices = this._points
  this._mesh.getObject3D('mesh').geometry.computeFaceNormals()
  this._mesh.getObject3D('mesh').geometry.computeVertexNormals()
})
I'm getting close, but it's only showing one triangle, i.e. something like this.
How do I get the shape to fill the whole area? I have done this before with a ConvexGeometry and quickhull, but it seems cumbersome.
I got the idea of updating the vertices of a plane from the post above.
If you create an array of Vector2 objects representing the contour of your shape in CCW order, you can use an instance of Shape and ShapeBufferGeometry to achieve the intended result. Just pass the array of points to the constructor of Shape. The following official three.js example demonstrates this approach:
https://threejs.org/examples/webgl_geometry_shapes
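Since Shape expects the contour in CCW order, the winding can be checked (or fixed) with the shoelace formula before the points are handed over. A small sketch, independent of three.js:

```javascript
// Signed area of a 2D polygon (shoelace formula): positive for CCW contours.
// Points are plain { x, y } objects, like the data fed to THREE.Shape.
function signedArea(points) {
  let area = 0;
  for (let i = 0; i < points.length; i++) {
    const p = points[i];
    const q = points[(i + 1) % points.length];
    area += p.x * q.y - q.x * p.y;
  }
  return area / 2;
}

const ccwTriangle = [{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 0, y: 1 }];
console.log(signedArea(ccwTriangle) > 0); // true, i.e. counter-clockwise
// If the area is negative, reverse the array before building the Shape.
```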
BTW: Instead of defining the contour by an array of points, you can also use the API of Shape to define shapes. A simple triangle would look like this:
var triangleShape = new THREE.Shape()
  .moveTo( 80, 20 )
  .lineTo( 40, 80 )
  .lineTo( 120, 80 )
  .lineTo( 80, 20 );
var geometry = new THREE.ShapeBufferGeometry( triangleShape );
var mesh = new THREE.Mesh( geometry, material );
three.js R113

play separate animations in threejs

I exported a model (with several animations) from 3ds Max and ultimately translated it into JSON. However, all of my animations are smashed into one. Is there a way in three.js to play only a certain range of frames, or do I need a separate export from 3ds Max for each individual animation?
Thanks,
David
There are two types of animations: morph-target animations and bone/skin animations.
I use the morph-target kind when exporting from Blender. My resulting JSON has this line:
"morphTargets" : [{ "name": "animation_000000", "vertices": [...
then I load it using MorphAnimMesh:
mesh = new THREE.MorphAnimMesh( geometry, new THREE.MeshFaceMaterial( materials ) );
After loading, I set the idle animation by default:
mesh.duration = 4000; // the whole animation lasts 4 seconds
mesh.setFrameRange(1, 50);
Then, on some event, I just change the animation range like this:
mesh.setFrameRange(51, 80);
If you use bone and skin animations, your JSON model ends up with lines like
"animations" : [...
"bones": [...
I haven't used that one, so try this tutorial:
http://code.tutsplus.com/tutorials/webgl-with-threejs-models-and-animation--net-35993
Also, this is a similar question:
How do I handle animated models in Three.js?

Drawing UI elements directly to the WebGL area with Three.js

In Three.js, is it possible to draw directly to the WebGL area (for a heads-up display or UI elements, for example) the way you could with a regular HTML5 canvas element?
If so, how can you get the context and what drawing commands are available?
If not, is there another way to accomplish this, through other Three.js or WebGL-specific drawing commands that would cooperate with Three.js?
My backup plan is to use HTML divs as overlays, but I think there should be a better solution.
Thanks!
You can't draw directly to the WebGL canvas the way you do with a regular canvas. However, there are other methods, e.g.
Draw to a hidden 2D canvas as usual and transfer that to WebGL by using it as a texture to a quad
Draw images using texture mapped quads (e.g. frames of your health box)
Draw paths (and shapes) by putting their vertices to a VBO and draw that with the appropriate polygon type
Draw text by using a bitmap font (basically textured quads) or real geometry (three.js has examples and helpers for this)
Using these usually means setting up an orthographic camera.
However, all of this is quite a bit of work, and e.g. drawing text with real geometry can be expensive. If you can make do with HTML divs and CSS styling, you should use them, as they're very quick to set up. Also, drawing over the WebGL canvas, perhaps with transparency, should be a strong hint for the browser to GPU-accelerate its div drawing if it doesn't already accelerate everything.
Also remember that you can achieve quite a lot with CSS3, e.g. rounded corners, alpha transparency, and even 3D perspective transformations, as demonstrated by Anton's link in the question's comments.
I had exactly the same issue. I was trying to create a HUD (head-up display) without the DOM, and I ended up with this solution:
I created a separate scene with an orthographic camera.
I created a canvas element and used 2D drawing primitives to render my graphics.
Then I created a plane fitting the whole screen and used the 2D canvas element as a texture.
I rendered that secondary scene on top of the original scene.
This is what the HUD code looks like:
// We will use 2D canvas element to render our HUD.
var hudCanvas = document.createElement('canvas');
// Again, set dimensions to fit the screen.
hudCanvas.width = width;
hudCanvas.height = height;
// Get 2D context and draw something supercool.
var hudBitmap = hudCanvas.getContext('2d');
hudBitmap.font = "Normal 40px Arial";
hudBitmap.textAlign = 'center';
hudBitmap.fillStyle = "rgba(245,245,245,0.75)";
hudBitmap.fillText('Initializing...', width / 2, height / 2);
// Create the camera and set the viewport to match the screen dimensions.
var cameraHUD = new THREE.OrthographicCamera(-width/2, width/2, height/2, -height/2, 0, 30 );
// Create also a custom scene for HUD.
sceneHUD = new THREE.Scene();
// Create texture from rendered graphics.
var hudTexture = new THREE.Texture(hudCanvas);
hudTexture.needsUpdate = true;
// Create HUD material.
var material = new THREE.MeshBasicMaterial( {map: hudTexture} );
material.transparent = true;
// Create plane to render the HUD. This plane fills the whole screen.
var planeGeometry = new THREE.PlaneGeometry( width, height );
var plane = new THREE.Mesh( planeGeometry, material );
sceneHUD.add( plane );
And that's what I added to my render loop:
// Render HUD on top of the scene.
renderer.render(sceneHUD, cameraHUD);
You can play with the full source code here:
http://codepen.io/jaamo/pen/MaOGZV
And read more about the implementation on my blog:
http://www.evermade.fi/pure-three-js-hud/