I'm currently getting started on a small project about music visualization, using the Web Audio API and D3. I use the getByteFrequencyData() method as follows (the implementation has been simplified for clarity; this is just a "setup model"):
const ctx = new (window.AudioContext || window.webkitAudioContext)()
const analyzer = ctx.createAnalyser()
const src = ctx.createMediaElementSource(audioPlayer)
analyzer.fftSize = 512
src.connect(analyzer)
src.connect(ctx.destination)
//...
const getFrequencyData = () => {
audioAnimationFrameId.current = requestAnimationFrame(getFrequencyData)
const bufferLength = analyzer.frequencyBinCount
const data = new Uint8Array(bufferLength) // 256
analyzer.getByteFrequencyData(data)
return data
}
if (analyzer && isAudioPlaying) getFrequencyData()
My problem is that my Uint8Array contains a lot of zeros (mostly at the end of the array), which breaks my radial chart (since the D3 scale domain depends on the data array length). I understand that these values represent decibel levels at certain frequencies, but is it normal to get all these zeros at the end?
I'm guessing I may have missed some configuration on my analyser, but after a lot of tests I wasn't able to find any solution. Did I miss something, or are these values correct, and is it the D3 scales that I should change instead?
Here are my D3 scales:
const angle = scaleLinear({
domain: [0, 256], // NB: the array length
range: [0, Math.PI * 2]
})
const radius = scaleLinear({
domain: [0, 255], // NB: frequency data goes from 0 to 255
range: [0, (DIAMETER / 2)]
})
If someone could help me to understand these strange results, it would be really great!
Thx a lot!
See the spec for getByteFrequencyData. The formula there explains why the values are zero: at the higher frequencies the actual dB values fall below the analyser's minDecibels (dB_min in the spec), so they are clamped to zero. If you don't want this and floating-point values are ok, use getFloatFrequencyData instead.
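For illustration only, here is a minimal sketch of both options, reusing the analyzer from the question (the -120 threshold is just an example):
// Option 1: read the raw dB values as floats; they are not clamped
// (bins with zero magnitude come back as -Infinity).
const floatData = new Float32Array(analyzer.frequencyBinCount)
analyzer.getFloatFrequencyData(floatData)

// Option 2: keep bytes, but lower minDecibels (default -100 dB) so fewer bins clamp to 0.
analyzer.minDecibels = -120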
Alright, I think I've mostly figured out how MagicTile works, the source code at least (not so much the math yet). It all begins with the build and render calls in MainForm.cs. It generates a tessellation like this:
First, it "generates" the tessellation. Since MagicTile is a Rubic's cube-like game, I guess it just statically computes all of the tiles up front. It does this by starting with a central tile, and reflecting its polygon (and the polygon's segments and points) using some sort of math which I've read about several times but I couldn't explain. Then it appears they allow rotations of the tessellation, where they call code like this in the "renderer":
Polygon p = sticker.Poly.Clone();
p.Transform( m_mouseMotion.Isometry );
Color color = GetStickerColor( sticker );
GLUtils.DrawConcavePolygon( p, color, GrabModelTransform() );
They track the mouse position, as when you are dragging, and somehow that is used to create an "isometry" to augment/transform the overall tessellation. So then we transform the polygon using that isometry. It appears they only do the central tile and 1 or 2 levels after that, but I can't quite tell; I haven't gotten the app to run and debug yet (it's also in C#, which is a new language for me, coming from TypeScript). The Transform function digs down like this (here it is in TypeScript, as I've been converting it):
TransformIsometry(isometry: Isometry) {
for (let s of this.Segments) {
s.TransformIsometry(isometry)
}
this.Center = isometry.Apply(this.Center)
}
That goes into the transform for the segments here:
/**
 * Apply a transform to us.
 */
TransformInternal<T extends ITransform>(transform: T) {
// NOTES:
// Arcs can go to lines, and lines to arcs.
// Rotations may reverse arc directions as well.
// Arc centers can't be transformed directly.
// NOTE: We must calc this before altering the endpoints.
let mid: Vector3D = this.Midpoint
if (UtilsInfinity.IsInfiniteVector3D(mid)) {
mid = UtilsInfinity.IsInfiniteVector3D(this.P1)
? this.P2.MultiplyWithNumber(UtilsInfinity.FiniteScale)
: this.P1.MultiplyWithNumber(UtilsInfinity.FiniteScale)
}
this.P1 = transform.ApplyVector3D(this.P1)
this.P2 = transform.ApplyVector3D(this.P2)
mid = transform.ApplyVector3D(mid)
// Can we make a circle out of the transformed points?
let temp: Circle = new Circle()
if (
!UtilsInfinity.IsInfiniteVector3D(this.P1) &&
!UtilsInfinity.IsInfiniteVector3D(this.P2) &&
!UtilsInfinity.IsInfiniteVector3D(mid) &&
temp.From3Points(this.P1, mid, this.P2)
) {
this.Type = SegmentType.Arc
this.Center = temp.Center
// Work out the orientation of the arc.
let t1: Vector3D = this.P1.Subtract(this.Center)
let t2: Vector3D = mid.Subtract(this.Center)
let t3: Vector3D = this.P2.Subtract(this.Center)
let a1: number = Euclidean2D.AngleToCounterClock(t2, t1)
let a2: number = Euclidean2D.AngleToCounterClock(t3, t1)
this.Clockwise = a2 > a1
} else {
// The circle construction fails if the points
// are colinear (if the arc has been transformed into a line).
this.Type = SegmentType.Line
// XXX - need to do something about this.
// Turn into 2 segments?
// if( UtilsInfinity.IsInfiniteVector3D( mid ) )
// Actually the check should just be whether mid is between p1 and p2.
}
}
So as far as I can tell, this will adjust the segments based on the mouse position, somehow. Mouse position isometry updating code is here.
So it appears they don't have the functionality to "move" the tiling, as if you were walking on it, like in HyperRogue.
After studying this code for a few days, I am still not sure how to move or walk along the tiles, moving the outer tiles toward the center, as if you were a giant walking on Earth.
First, a small question: can you do this with MagicTile? Can you somehow update the tessellation to move a different tile to the center (and have a function I could plug a tween/animation into so it animates there)? Or do I need to write some custom new code? If so, what do I need to do, roughly speaking, maybe in pseudocode?
What I imagine is: the user clicks on the outer part of the tessellation, we convert that click to the corresponding tile index in the tessellation, and then basically call tiling.moveToCenter(tile), but as a frame-by-frame animation, and I'm not quite sure how that would work (I sketch what I imagine at the end of this question, after the three functions below). But that moveToCenter, what would it do in terms of the MagicTile rendering/tile-generating code?
As I described at the beginning, it first generates the full tessellation and then only updates 1-3 layers of tiles for the puzzles. So it seems I need to first shift the frame of reference and recompute all the potentially visible tiles, somehow without recreating the ones that were already created. I don't quite see how that would work, do you? Once the tiles are recomputed, I would just re-render and it should show the updated center.
Is it a simple matter of calling some code like this again, for each tile, where the isometry is somehow updated with a border-ish position on the tessellation?
Polygon p = sticker.Poly.Clone();
p.Transform( m_mouseMotion.Isometry );
Or must I do something else? I can't quite see the full picture yet.
Or is that what these 3 functions are doing? Here is my TypeScript port of the C# MagicTile:
// Move from a point p1 -> p2 along a geodesic.
// Also somewhat from Don.
Geodesic(g: Geometry, p1: Complex, p2: Complex) {
let t: Mobius = Mobius.construct()
t.Isometry(g, 0, p1.Negate())
let p2t: Complex = t.ApplyComplex(p2)
let m2: Mobius = Mobius.construct()
let m1: Mobius = Mobius.construct()
m1.Isometry(g, 0, p1.Negate())
m2.Isometry(g, 0, p2t)
let m3: Mobius = m1.Inverse()
this.Merge(m3.Multiply(m2.Multiply(m1)))
}
Hyperbolic(g: Geometry, fixedPlus: Complex, scale: number) {
// To the origin.
let m1: Mobius = Mobius.construct()
m1.Isometry(g, 0, fixedPlus.Negate())
// Scale.
let m2: Mobius = Mobius.construct()
m2.A = new Complex(scale, 0)
m2.C = new Complex(0, 0)
m2.B = new Complex(0, 0)
m2.D = new Complex(1, 0)
// Back.
// Mobius m3 = m1.Inverse(); // Doesn't work well if fixedPlus is on disk boundary.
let m3: Mobius = Mobius.construct()
m3.Isometry(g, 0, fixedPlus)
// Compose them (multiply in reverse order).
this.Merge(m3.Multiply(m2.Multiply(m1)))
}
// Allow a hyperbolic transformation using an absolute offset.
// offset is specified in the respective geometry.
Hyperbolic2(
g: Geometry,
fixedPlus: Complex,
point: Complex,
offset: number,
) {
// To the origin.
let m: Mobius = Mobius.construct()
m.Isometry(g, 0, fixedPlus.Negate())
let eRadius: number = m.ApplyComplex(point).Magnitude
let scale: number = 1
switch (g) {
case Geometry.Spherical:
let sRadius: number = Spherical2D.e2sNorm(eRadius)
sRadius = sRadius + offset
scale = Spherical2D.s2eNorm(sRadius) / eRadius
break
case Geometry.Euclidean:
scale = (eRadius + offset) / eRadius
break
case Geometry.Hyperbolic:
let hRadius: number = DonHatch.e2hNorm(eRadius)
hRadius = hRadius + offset
scale = DonHatch.h2eNorm(hRadius) / eRadius
break
default:
break
}
this.Hyperbolic(g, fixedPlus, scale)
}
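To make the question more concrete, here is roughly what I imagine moveToCenter looking like, built on the Mobius helpers above. Everything outside those helpers (redrawWithIsometry, the Complex property names, the timing) is made up by me for illustration, and I may well be misusing Geodesic:
// Rough sketch only. Geodesics through the disk center are straight diameters,
// so scaling the tile center's complex coordinate toward 0 stays on that geodesic.
function moveToCenter(tileCenter: Complex, durationMs: number = 500) {
  const start = performance.now()
  function frame(now: number) {
    const t = Math.min(1, (now - start) / durationMs)
    // This frame's target: a point t of the way from tileCenter toward the origin.
    const target = new Complex(tileCenter.Real * (1 - t), tileCenter.Imaginary * (1 - t))
    // Per the comment in the port, Geodesic(g, p1, p2) builds the isometry moving p1 -> p2.
    const iso = Mobius.construct()
    iso.Geodesic(Geometry.Hyperbolic, tileCenter, target)
    // Then every tile would be re-rendered with this isometry, the same way the renderer
    // already does with m_mouseMotion.Isometry:
    //   const p = sticker.Poly.Clone(); p.Transform(iso); ...
    redrawWithIsometry(iso) // made-up name for "re-render all tiles with iso applied"
    if (t < 1) requestAnimationFrame(frame)
  }
  requestAnimationFrame(frame)
}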
I have a project that loads various models (.obj, but it could be anything) and generates particles from the geometry positions using Float32Arrays.
Since the geometries of each model are completely different, the number of particles changes depending on which model is used.
The code I'm using to populate the buffer attribute is below:
import { Mesh, Float32BufferAttribute } from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
const dataSize = 1024;
const modelLoader = new OBJLoader();
const modelObject = await modelLoader.loadAsync('/path/to/model.obj');
const positionData = new Float32Array(dataSize * dataSize * 3);
const modelChildren = modelObject.children as Mesh[];
const bufferPositions = modelChildren
.filter(({ isMesh }) => isMesh)
.map(({ geometry: { attributes } }) => attributes.position.array as Float32Array);
const combinedBuffer = concatFloat32Arrays(bufferPositions); // merge Float32's
for (let index = 0, length = positionData.length; index < length; index += 3) {
positionData[index] = combinedBuffer[index];
positionData[index + 1] = combinedBuffer[index + 1];
positionData[index + 2] = combinedBuffer[index + 2];
}
return new Float32BufferAttribute(positionData, 3);
A portion of the positionData array is left empty (i.e. 0), obviously because combinedBuffer[index] is undefined past the end of the model data.
Can anyone point me in the right direction?
I basically want an equal number of particles for each geometry, regardless of the model's geometry complexity.
You normally handle this use case by allocating a large enough buffer and then using BufferGeometry.setDrawRange() to decide which part of the data you want to draw. The values of vertices outside the draw range don't matter with this approach.
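For example, here is a minimal sketch of that approach, reusing dataSize and combinedBuffer from the question (the buffer capacity is only illustrative):
import { BufferGeometry, Float32BufferAttribute } from 'three';

const geometry = new BufferGeometry();
const capacity = dataSize * dataSize;                 // worst-case particle count
const positionData = new Float32Array(capacity * 3);  // unused entries can stay at 0
positionData.set(combinedBuffer.subarray(0, Math.min(combinedBuffer.length, positionData.length)));

geometry.setAttribute('position', new Float32BufferAttribute(positionData, 3));
// Only draw the vertices that actually came from the model:
geometry.setDrawRange(0, Math.min(combinedBuffer.length, positionData.length) / 3);
When a different model is loaded into the same buffer, you would update the draw range (and set the position attribute's needsUpdate flag) instead of reallocating.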
Let’s say I want to make 100 objects - for example cars, like the one you see here:
This car is currently composed of 5 meshes: one yellow Cube and four blue Spheres.
What I’d like to know is what would be the most efficient/correct way to make 100 of these cars - or maybe 500 - in terms of memory management/CPU performance, etc.
The way I’m currently going about doing this is as follows:
Make an empty THREE.Group called “newCarGroup”
Create the yellow rectangular Mesh for the body of the car - called “carBodyMesh”
Create four blue Sphere Meshes for the Tires called “tire1Mesh”, “tire2Mesh”, “tire3Mesh”, and “tire4Mesh”
Add the Body and the four Tires to the “newCarGroup”
And finally, in a FOR loop, create/instantiate 100 “newCarGroup” objects, adding each one to the SCENE at a random position
The code is below.
It's working perfectly well right now, but I’d like to know if this is the “proper”/best way to do this?
Consider it’s possible I might end up needing 1,000 cars - or 5,000 cars. So will this scale properly?
Also, I need to add more objects to the car: like 4 windows - actually make that 6 windows, to also include the front and back windshields, then four headlights, etc.
So the final Car Object alone may end up being made up of 20 meshes - or more.
Being that I’m kinda new to THREE.JS I wanna make sure I develop good habits and go about this sort of thing the right way.
Here’s my code:
function makeOneCar() {
var newCarGroup = new THREE.Group();
// 1. CAR-Body:
const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" } );
const carBodyMesh = new THREE.Mesh(bodyGeometry, bodyMaterial);
// 2. TIRES:
const tireGeometry = new THREE.SphereGeometry(2, 16, 16);
const tireMaterial = new THREE.MeshPhongMaterial( { color: "blue" } );
const tire1Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
const tire2Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
const tire3Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
const tire4Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
// TIRE 1 Position:
tire1Mesh.position.x = carBodyMesh.position.x - 11;
tire1Mesh.position.y = carBodyMesh.position.y - 4.15;
tire1Mesh.position.z = carBodyMesh.position.z + 4.5;
// TIRE 2 Position:
tire2Mesh.position.x = carBodyMesh.position.x + 11;
tire2Mesh.position.y = carBodyMesh.position.y - 4.15;
tire2Mesh.position.z = carBodyMesh.position.z + 4.5;
// TIRE 3 Position:
tire3Mesh.position.x = carBodyMesh.position.x - 11;
tire3Mesh.position.y = carBodyMesh.position.y - 4.15;
tire3Mesh.position.z = carBodyMesh.position.z - 4.5;
// TIRE 4 Position:
tire4Mesh.position.x = carBodyMesh.position.x + 11;
tire4Mesh.position.y = carBodyMesh.position.y - 4.15;
tire4Mesh.position.z = carBodyMesh.position.z - 4.5;
// Putting it all together:
newCarGroup.add(carBodyMesh);
newCarGroup.add(tire1Mesh);
newCarGroup.add(tire2Mesh);
newCarGroup.add(tire3Mesh);
newCarGroup.add(tire4Mesh);
// Setting (x, y, z) Coordinates - RANDOMLY
let randy = Math.floor(Math.random() * 10);
let newCarGroupX = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
let newCarGroupY = 0.0;
let newCarGroupZ = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
newCarGroup.position.set(newCarGroupX, newCarGroupY, newCarGroupZ)
scene.add(newCarGroup);
}
function makeCars() {
for(var carCount = 0; carCount < 100; carCount ++) {
makeOneCar();
}
}
I’d like to know if this is the “proper”/best way to do this?
This is subjective. You say the method works great for your current use-case, so for that use-case, it is fine.
So will this scale properly?
The simple answer is: No. The more complex answer is: ...not really.
You're re-using the geometry and materials, which is good. But every Mesh you create has meta information surrounding it, which adds to your overall memory footprint.
Also, every standard Mesh you add incurs what is known as a "draw call", which is the GPU drawing that particular shape. Instead, take a look at InstancedMesh. It lets you give the GPU the instructions for drawing the shape throughout the scene once: rather than drawing each cube individually, the GPU can draw all the cubes at the same time, and they can even have different colors and transformations. There are limitations to this class, but it's a good starting point for understanding how instancing works.
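As a rough sketch (not your exact car; the count and positions are only illustrative), the body could become a single InstancedMesh, and the tires, windows, etc. would each get their own InstancedMesh the same way:
const CAR_COUNT = 1000;
const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" });
const bodies = new THREE.InstancedMesh(bodyGeometry, bodyMaterial, CAR_COUNT); // one draw call

const dummy = new THREE.Object3D(); // scratch object used to compose each instance matrix
for (let i = 0; i < CAR_COUNT; i++) {
  dummy.position.set((Math.random() - 0.5) * 500, 0, (Math.random() - 0.5) * 500);
  dummy.updateMatrix();
  bodies.setMatrixAt(i, dummy.matrix);
}
bodies.instanceMatrix.needsUpdate = true;
scene.add(bodies);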
I have a working jsfiddle that I made using JSXGraph, a graphing toolkit for mathematical functions. I'd like to port it to D3.js for personal edification, but I'm having a hard time getting started.
The jsfiddle graphs the value of -k * e^(-x/t) + k, where x is an independent variable and the values of k and t come from sliders.
board.create('functiongraph',
[
// y = -k * e^(-x/t) + k
function(x) { return -k.Value()*Math.exp(-x/t.Value()) + k.Value(); },
0
]
);
The three things I'm most stumped on:
Actually drawing the graph and its axes - it's not clear to me which of the many parts of the D3 API I should be using, or what level of abstraction I should be operating at.
Re-rendering the graph when a slider is changed, and making the graph aware of the value of the sliders.
Zooming out the graph so that the asymptote defined by y = k is always visible and not within the top 15% of the graph. I do this now with:
function getAestheticBoundingBox() {
var kMag = k.Value();
var tMag = t.Value();
var safeMinimum = 10;
var limit = Math.max(safeMinimum, 1.15 * Math.max(k.Value(), t.Value()));
return [0, Math.ceil(limit), Math.ceil(limit), 0];
}
What's the right way for me to tackle this problem?
I threw this example together really quickly, so don't ding me on the code quality. But it should give you a good starting point for how you'd do something like this in d3. I implemented everything in straight d3, even the sliders.
As @LarKotthoff says, the key is that you have to loop over your function and build your data:
// define your function
var func = function(x) {
return -sliders.k() * Math.exp(-x / sliders.t()) + sliders.k();
},
// your step for looping function
step = 0.01;
drawPlot();
function drawPlot() {
// avoid first callback before both sliders are created
if (!sliders.k ||
!sliders.t) return;
// set your limits
var kMag = sliders.k();
var tMag = sliders.t();
var safeMinimum = 10;
var limit = Math.max(safeMinimum, 1.15 * Math.max(kMag, tMag));
// generate your data
var data = [];
for (var i = 0; i < limit; i += step) {
data.push({
x: i,
y: func(i)
})
}
// set our axis limits
y.domain(
[0, Math.ceil(limit)]
);
x.domain(
[0, Math.ceil(limit)]
);
// redraw axis
svg.selectAll("g.y.axis").call(yAxis);
svg.selectAll("g.x.axis").call(xAxis);
// redraw line
svg.select('.myLine')
.attr('d', lineFunc(data))
}
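The snippet above relies on setup defined elsewhere in the fiddle: svg, the x/y scales and axes, lineFunc, and a sliders object whose k() and t() return the current slider values. A rough sketch of that boilerplate (sliders omitted), written against the current d3 API rather than whatever version the fiddle used:
var width = 500, height = 300;

var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);

var x = d3.scaleLinear().range([0, width]),
    y = d3.scaleLinear().range([height, 0]);

var xAxis = d3.axisBottom(x),
    yAxis = d3.axisLeft(y);

svg.append("g").attr("class", "x axis")
    .attr("transform", "translate(0," + height + ")");
svg.append("g").attr("class", "y axis");

var lineFunc = d3.line()
    .x(function(d) { return x(d.x); })
    .y(function(d) { return y(d.y); });

svg.append("path")
    .attr("class", "myLine")
    .attr("fill", "none")
    .attr("stroke", "steelblue");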
I'm writing an export script (Ruby) in SketchUp, and I'm having trouble applying the same transformation on the Three.js side, so that objects have the same rotation in Three.js as they have in SketchUp.
I can read the rotation using the SketchUp Transformation class: http://www.sketchup.com/intl/en/developer/docs/ourdoc/transformation.php
I can get the following kinds of values from a rotated component and pass them to my Three.js code. All are vectors in the form X, Y, Z:
xaxis: 0.0157771536190692,-0.0,-0.0199058138160762
yaxis: -0.0199058138160762,0.0,-0.0157771536190692
zaxis: 0.0,0.0254,-0.0
origin: 1.4975125146729,0.0,-1.25735397455338
Objects are positioned correctly if I just copy the values from origin to Object3D.position. But I have no idea how to apply the xaxis, yaxis and zaxis values to Object3D.rotation.
Three.js has various ways to rotate a model: matrix manipulation, quaternions, Euler angles, radians and whatnot. But how do I set an object's rotation using those axis values?
EDIT:
SketchUp's Transformation class also provides a .to_a (to array) method, which I think is supposed to return a 16-element matrix. I tried to use that in Three.js:
// tm is from SketchUp:Transformation to_a
var tm = "0.621147780278315,0.783693457325836,-0.0,0.0,-0.783693457325836,0.621147780278315,0.0,0.0,0.0,0.0,1.0,0.0,58.9571856170433,49.5021249824165,0.0,1.0";
tm = tm.split(",");
for (var i = 0; i < tm.length; i++) {
tm[i] = tm[i] * 1.0;
}
var matrix = new THREE.Matrix4(tm[0], tm[1], tm[2], tm[3], tm[4], tm[5], tm[6], tm[7], tm[8], tm[9], tm[10], tm[11], tm[12], tm[13], tm[14], tm[15]);
obj.applyMatrix(matrix);
This results in a total mess however, so there's still something wrong.
Based on information here: http://sketchucation.com/forums/viewtopic.php?f=180&t=46944&p=419606&hilit=matrix#p419606
I was able to construct a working Matrix4. I think the problem was both the unit scale (see the .to_m conversion on some of the elements) and the order of the matrix array elements. In SketchUp:
tr = transformation.to_a
trc = [tr[0],tr[8],-(tr[4]),tr[12].to_m, tr[2],tr[10],-(tr[6]),tr[14].to_m, -(tr[1]),-(tr[9]),tr[5],-(tr[13].to_m), 0.0, 0.0, 0.0, 1.0] # the last 4 values are unused in Sketchup
el.attributes["tm"] = trc.join(",") # rotation and scale matrix
el.attributes["to"] = convertscale(transformation.origin) # position
In Three.js:
var origin = this.parsevector3(node.getAttribute("to"));
obj.position = origin;
var tm = node.getAttribute("tm");
tm = tm.split(",");
for (var i = 0; i < tm.length; i++) {
tm[i] = tm[i] * 1.0;
}
var matrix = new THREE.Matrix4(tm[0], tm[1], tm[2], tm[3], tm[4], tm[5], tm[6], tm[7], tm[8], tm[9], tm[10], tm[11], tm[12], tm[13], tm[14], tm[15]);
obj.applyMatrix(matrix);
Sorry, there is some application-specific logic in the code, but I think the idea comes across regardless, if someone runs into similar problems.
SketchUp's Transformation class also provides a .to_a (to array) method, which I think is supposed to return a 16-element matrix.
It has been a while since you posted this, but here's a useful link for people who bump into this in the future: http://www.martinrinehart.com/models/tutorial/tutorial_t.html
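For completeness, here is a sketch of that idea with the current Three.js API: the xaxis/yaxis/zaxis/origin values from the question are just the columns of the transformation matrix, so they can be fed to Matrix4.makeBasis() and setPosition(). This only builds the raw matrix; you would still need to handle SketchUp's Z-up vs. Three.js's Y-up convention (which is what the axis re-ordering in the answer above does) and the inch-to-meter scaling:
var xAxis = new THREE.Vector3(0.0157771536190692, -0.0, -0.0199058138160762);
var yAxis = new THREE.Vector3(-0.0199058138160762, 0.0, -0.0157771536190692);
var zAxis = new THREE.Vector3(0.0, 0.0254, -0.0);
var origin = new THREE.Vector3(1.4975125146729, 0.0, -1.25735397455338);

var matrix = new THREE.Matrix4()
    .makeBasis(xAxis, yAxis, zAxis) // columns = the component's local x/y/z axes (scale included)
    .setPosition(origin);

obj.applyMatrix4(matrix); // applyMatrix() is applyMatrix4() in current releases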