I am working on a web game and using three.js for the game's 3D rendering. I am building a city and I want the buildings to look exactly like the picture below.
I want them to use a plain base material, without textures or colors, because I will add colors later depending on the building's status (like the game below).
This is what I have right now:
As you can see, I just have a bunch of different-sized boxes, with no details at all. How would I achieve the details seen in the game?
This is my current code:
export default function Map() {
  const RenderBuildings = () => {
    const buildings = []
    for (let i = 0; i < 10; i++) {
      // key added so React can track the list items
      buildings.push(<Box key={i} color="#dad3cb" width={1} height={1} depth={1} />)
    }
    return buildings
  }
  return (
    <Canvas>
      <CameraController />
      <ambientLight intensity={1} />
      <Ground color="#b0aa9d" width={40} height={1} depth={40} />
      {RenderBuildings()}
    </Canvas>
  )
}
export const Box = (props: Props) => {
  const { color, width, height, depth, ...rest } = props
  const mesh = useRef<THREE.Mesh>()
  // typed as BoxGeometry since this ref is attached to <boxGeometry>, not a mesh
  const boxRef = useRef<THREE.BoxGeometry>()
  useEffect(() => {
    if (!mesh.current) return
    const _mesh = mesh.current
    _mesh.position.x = Math.floor(Math.random() * 20)
    _mesh.position.z = Math.floor(Math.random() * 20)
    _mesh.scale.x =
      Math.random() * Math.random() * Math.random() * Math.random() * 5 + 3
    _mesh.scale.z = _mesh.scale.x
    _mesh.scale.y =
      Math.random() * Math.random() * Math.random() * _mesh.scale.x * 5 + 3
  }, [mesh])
  useEffect(() => {
    if (!boxRef.current) return
    const _boxRef = boxRef.current
    _boxRef.applyMatrix4(new Matrix4().makeTranslation(0, 0.5, 0))
  }, [boxRef])
  return (
    <mesh {...rest} ref={mesh}>
      <boxGeometry args={[width, height, depth]} ref={boxRef} />
      <meshToonMaterial color={color} />
    </mesh>
  )
}
Your first step should be to look at lighting your scene; better lighting will give your buildings much more visible depth. A hemisphere light may be your best bet: https://threejs.org/examples/#webgl_lights_hemisphere. You can learn more about lights here: https://threejs.org/manual/#en/lights.
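For instance, a rough react-three-fiber sketch (the colors and intensities are placeholder values to tune by eye, not part of the question's code):
// a hemisphere light for soft sky/ground ambient, plus a directional light for contrast
<Canvas>
  <hemisphereLight args={['#ffffff', '#b0aa9d', 0.8]} /> {/* sky color, ground color, intensity */}
  <directionalLight position={[10, 20, 10]} intensity={0.6} />
  {/* ...rest of the scene... */}
</Canvas>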
Next, take a look at geometry. You want those buildings to have some level of detail at the geometry level. You can use primitives like these: https://threejs.org/manual/#en/primitives or build a model in any modelling software and import it in. At that point, maybe make a few models and then instance them randomly.
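As a rough illustration of composing detail from primitives (all dimensions and offsets below are arbitrary, not from the question's code), a building can be a small group of boxes instead of a single one:
const DetailedBuilding = ({ width = 3, height = 6, depth = 3 }) => (
  <group>
    {/* main body */}
    <mesh position={[0, height / 2, 0]}>
      <boxGeometry args={[width, height, depth]} />
      <meshToonMaterial color="#dad3cb" />
    </mesh>
    {/* smaller roof block to break up the silhouette */}
    <mesh position={[0, height + 0.25, 0]}>
      <boxGeometry args={[width * 0.7, 0.5, depth * 0.7]} />
      <meshToonMaterial color="#dad3cb" />
    </mesh>
    {/* thin ledge near the base */}
    <mesh position={[0, 0.3, 0]}>
      <boxGeometry args={[width * 1.1, 0.2, depth * 1.1]} />
      <meshToonMaterial color="#dad3cb" />
    </mesh>
  </group>
)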
Depending on how you do geometry, you will need to pick an appropriate material: https://threejs.org/manual/#en/materials. This should give you plenty of options.
I would also add in a bit of animation to keep things lively: https://threejs.org/examples/#webgl_animation_keyframes
Finally, a bit of fog always helps for ambiance: https://threejs.org/manual/#en/fog
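In react-three-fiber, fog can be attached straight to the scene; the color and near/far distances here are placeholders:
<Canvas>
  <fog attach="fog" args={['#b0aa9d', 10, 60]} /> {/* color, near, far */}
  {/* ... */}
</Canvas>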
Some more inspiration: https://demos.littleworkshop.fr/infinitown
Ultimately, building a scene like this is going to be a labour of love. There are no easy shortcuts. Keep at it!
Use a texture tile (or tiles) to encode the layout (like a QR code, but unencrypted). That is, a ~64x64 image whose RGB values correspond to scene logic. This method makes it easier to block in features. You can store height in R, type in G, and owner in B. Maybe there is a sidecar file with more raw data (i.e. bills or level), referenced by owner and x/y.
Map generation could be defined by a matrix, a random forest, evolutionary steps, and so on, with rules for adjacent tiles and minimum areas. Then, once you have a satisfactory system, it can be parsed into a Three.js scene. Geometry can be as easy or as hard as you make it: extrude contiguous pixel groups and refine.
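A hedged sketch of how such a layout texture could be decoded in the browser (the R/G/B meanings follow the description above; the tile size and height scaling are assumptions):
// Read the layout image and derive one building per pixel.
// R = height, G = type, B = owner.
function decodeLayout(image, tileSize = 2) {
  const canvas = document.createElement('canvas')
  canvas.width = image.width
  canvas.height = image.height
  const ctx = canvas.getContext('2d')
  ctx.drawImage(image, 0, 0)
  const { data } = ctx.getImageData(0, 0, image.width, image.height)
  const tiles = []
  for (let y = 0; y < image.height; y++) {
    for (let x = 0; x < image.width; x++) {
      const i = (y * image.width + x) * 4
      tiles.push({
        x: x * tileSize,
        z: y * tileSize,
        height: (data[i] / 255) * 10, // R: building height, arbitrary scale
        type: data[i + 1],            // G: building type
        owner: data[i + 2],           // B: owner id, key into the sidecar file
      })
    }
  }
  return tiles
}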
Don't forget to add dust clouds when you synchronize the data to obscure visual artifacts. You can probably do a lot with texture offsets in terms of variety, from a forced perspective.
I have a basic three.js game working and I'd like to add particles. I've been searching online, including multiple questions here, and the closest I've come to getting a 'particle system' working is using a THREE.BufferGeometry, a THREE.BufferAttribute and a THREE.Points mesh. I set it up like this:
const particleMaterial = new THREE.PointsMaterial( { size: 10, map: particleTexture, blending: THREE.AdditiveBlending, transparent: true } );
const particlesGeometry = new THREE.BufferGeometry();
const particlesCount = 300;
const posArray = new Float32Array(particlesCount * 3);
// fill every component (x, y, z) of every particle
for (let i = 0; i < particlesCount * 3; i++) {
  posArray[i] = Math.random() * 10;
}
const particleBufferAttribute = new THREE.BufferAttribute(posArray, 3);
particlesGeometry.setAttribute( 'position', particleBufferAttribute );
const particlesMesh = new THREE.Points(particlesGeometry, particleMaterial);
particlesMesh.counter = 0;
scene.add(particlesMesh);
This part works and displays the particles fine, at their initial positions, but of course I'd like to move them.
I have tried all manner of things in my 'animate' function, but I haven't hit on the right combination. I'd like to move the particles, ideally one vertex per frame.
The current thing I'm doing in the animate function - which does not work! - is this:
particleBufferAttribute.setXYZ( particlesMesh.counter, objects[0].position.x, objects[0].position.y, objects[0].position.z );
particlesGeometry.setAttribute( 'position', particleBufferAttribute );
//posArray[particlesMesh.counter] = objects[0].position;
particlesMesh.counter++;
if (particlesMesh.counter >= particlesCount) {
  particlesMesh.counter = 0;
}
If anyone has any pointers about how to move Points mesh vertices, that would be great.
Alternatively, if this is not at all the right approach, please let me know.
I did find Stemkoski's ShaderParticleEngine, but I could not find any information about how to make it work (the docs are very minimal and do not seem to include examples).
You don't need to re-set the attribute, but you do need to tell the renderer that the attribute has changed.
particleBufferAttribute.setXYZ( particlesMesh.counter, objects[0].position.x, objects[0].position.y, objects[0].position.z );
particleBufferAttribute.needsUpdate = true; // This is the kicker!
By setting needsUpdate to true, the renderer knows to re-upload that attribute to the GPU.
This might not be a concern for you, but just know that moving particles this way is expensive, because you re-upload the position attribute every single frame, and that upload includes the position data for every particle you aren't moving.
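If that does become a bottleneck, one mitigation (an addition on my part, not required for the fix above) is to flag the attribute as dynamic so the driver expects frequent updates:
// BufferAttribute.setUsage and THREE.DynamicDrawUsage exist in recent three.js releases
particleBufferAttribute.setUsage(THREE.DynamicDrawUsage);
// then, in the animate loop, move one vertex and flag the upload as before
particleBufferAttribute.setXYZ(particlesMesh.counter, objects[0].position.x, objects[0].position.y, objects[0].position.z);
particleBufferAttribute.needsUpdate = true;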
I have a third person game using react-three-fiber and I want to add a sort of trailing light effect wherever the player moves. The light trail will disappear after a while so I was thinking of having a fixed size array for the points. This was my initial attempt at a solution:
const point = new THREE.Vector3();
const points = new Array(50).fill(null).map(p => new THREE.Vector3());
let index = 0;

const Trail = () => {
  const ref = useRef();
  const playerBody = useStore(state => state.player); // contains player position
  const [path, setPath] = useState(new THREE.CatmullRomCurve3(points));
  useFrame(() => { // equivalent to raf
    const { x, y, z } = playerBody.position;
    point.set(x, y, z);
    points[index].copy(point);
    index = (index + 1) % 50;
    setPath(new THREE.CatmullRomCurve3(points));
    if (ref && ref.current) {
      ref.current.attributes.position.needsUpdate = true;
      ref.current.computeBoundingSphere();
    }
  });
  return (
    <mesh>
      <tubeBufferGeometry ref={ref} attach="geometry" args={[path, 20, .5, 8, false]} />
      <meshBasicMaterial attach="material" color={0xffffff} />
    </mesh>
  )
}
Basically my thought process was to update the curve on every frame (or every x frames to be more performant) and to use an index to keep track of which position in the array of points to update.
However I get two problems with this:
TubeBufferGeometry doesn't update. Not sure if it's even possible to update the geometry after instantiation.
The pitfall I foresee in using this fixed array / index method is that once I hit the end of the array, I will have to wrap around to index 0. So then the curve interpolation would mess up because I'm assuming it takes the points sequentially. The last point in the array should connect to the first point now but it won't be like that.
To solve #2, I tried something like
points.shift();
points.push(point.clone());
instead of points[index].copy(point); but I still couldn't get the Tube to update in the first place.
I wanted to see if there's a better solution for this or if this is the right approach for this sort of problem.
If you want to update the path of a TubeBufferGeometry, you also need to update all the vertices and normals; it is essentially like rebuilding the geometry.
Take a look here to understand how it works : https://github.com/mrdoob/three.js/blob/r118/src/geometries/TubeGeometry.js#L135
The important part is the generateSegment() function, and don't forget this call beforehand:
const frames = path.computeFrenetFrames( tubularSegments, closed );
I made an example last year, feel free to use my code : https://codepen.io/soju22/pen/JzzvbR
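A simpler (if less efficient) alternative to recomputing the vertices in place is to rebuild the geometry from the updated curve and swap it onto the mesh; this is a hedged sketch of that idea, not the code from the pen above:
// Rebuild the tube from the updated curve and replace the mesh's geometry.
function updateTrail(mesh, curve) {
  const newGeometry = new THREE.TubeGeometry(curve, 20, 0.5, 8, false)
  mesh.geometry.dispose() // free the old GPU buffers
  mesh.geometry = newGeometry
}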
I have the following d3/d3fc chart
https://codepen.io/parliament718/pen/BaNQPXx
The chart has a zoom behavior for the main area and a separate zoom behavior for the y-axis.
The y-axis can be dragged to rescale.
The problem I'm having trouble solving is that after dragging the y-axis to rescale and then subsequently panning the chart, there is a "jump" in the chart.
Obviously the two zoom behaviors have a disconnect and need to be synchronized, but I'm racking my brain trying to fix this.
const mainZoom = zoom()
  .on('zoom', () => {
    const t = event.transform;
    xScale.domain(t.rescaleX(x2).domain());
    yScale.domain(t.rescaleY(y2).domain());
  });

const yAxisZoom = zoom()
  .on('zoom', () => {
    const t = event.transform;
    yScale.domain(t.rescaleY(y2).domain());
    render();
  });

const yAxisDrag = drag()
  .on('drag', (args) => {
    const factor = Math.pow(2, -event.dy * 0.01);
    plotArea.call(yAxisZoom.scaleBy, factor);
  });
The desired behavior is for zooming, panning, and/or rescaling the axis to always apply the transformation from wherever the previous action finished, without any "jumps".
OK, so I've had another go at this - as mentioned in my previous answer, the biggest issue you need to overcome is that the d3-zoom only permits symmetrical scaling. This is something that has been widely discussed, and I believe Mike Bostock is addressing this in the next release.
So, in order to overcome the issue, you need to use multiple zoom behaviours. I have created a chart that has three: one for each axis and one for the plot area. The X & Y zoom behaviours are used to scale the axes. Whenever a zoom event is raised by the X & Y zoom behaviours, their translation values are copied across to the plot area. Likewise, when a translation occurs on the plot area, the x & y components are copied to the respective axis behaviours.
Scaling on the plot area is a little more complicated as we need to maintain the aspect ratio. In order to achieve this I store the previous zoom transform and use the scale delta to work out a suitable scale to apply to the X & Y zoom behaviours.
For convenience, I've wrapped all of this up into a chart component:
const interactiveChart = (xScale, yScale) => {
  const zoom = d3.zoom();
  const xZoom = d3.zoom();
  const yZoom = d3.zoom();
  const chart = fc.chartCartesian(xScale, yScale).decorate(sel => {
    const plotAreaNode = sel.select(".plot-area").node();
    const xAxisNode = sel.select(".x-axis").node();
    const yAxisNode = sel.select(".y-axis").node();

    const applyTransform = () => {
      // apply the zoom transform from the x-scale
      xScale.domain(
        d3
          .zoomTransform(xAxisNode)
          .rescaleX(xScaleOriginal)
          .domain()
      );
      // apply the zoom transform from the y-scale
      yScale.domain(
        d3
          .zoomTransform(yAxisNode)
          .rescaleY(yScaleOriginal)
          .domain()
      );
      sel.node().requestRedraw();
    };

    zoom.on("zoom", () => {
      // compute how much the user has zoomed since the last event
      const factor = (plotAreaNode.__zoom.k - plotAreaNode.__zoomOld.k) / plotAreaNode.__zoomOld.k;
      plotAreaNode.__zoomOld = plotAreaNode.__zoom;
      // apply scale to the x & y axis, maintaining their aspect ratio
      xAxisNode.__zoom.k = xAxisNode.__zoom.k * (1 + factor);
      yAxisNode.__zoom.k = yAxisNode.__zoom.k * (1 + factor);
      // apply transform
      xAxisNode.__zoom.x = d3.zoomTransform(plotAreaNode).x;
      yAxisNode.__zoom.y = d3.zoomTransform(plotAreaNode).y;
      applyTransform();
    });

    xZoom.on("zoom", () => {
      plotAreaNode.__zoom.x = d3.zoomTransform(xAxisNode).x;
      applyTransform();
    });

    yZoom.on("zoom", () => {
      plotAreaNode.__zoom.y = d3.zoomTransform(yAxisNode).y;
      applyTransform();
    });

    sel
      .enter()
      .select(".plot-area")
      .on("measure.range", () => {
        xScaleOriginal.range([0, d3.event.detail.width]);
        yScaleOriginal.range([d3.event.detail.height, 0]);
      })
      .call(zoom);
    plotAreaNode.__zoomOld = plotAreaNode.__zoom;

    // cannot use enter selection as this pulls data through
    sel.selectAll(".y-axis").call(yZoom);
    sel.selectAll(".x-axis").call(xZoom);

    decorate(sel);
  });

  let xScaleOriginal = xScale.copy(),
    yScaleOriginal = yScale.copy();
  let decorate = () => {};

  const instance = selection => chart(selection);

  // property setters not shown
  return instance;
};
Here's a pen with the working example:
https://codepen.io/colineberhardt-the-bashful/pen/qBOEEGJ
There are a couple of issues with your code, one which is easy to solve, and one which is not ...
Firstly, the d3-zoom works by storing a transform on the selected DOM element(s) - you can see this via the __zoom property. When the user interacts with the DOM element, this transform is updated and events are emitted. Therefore, if you have two different zoom behaviours, both of which control the pan / zoom of a single element, you need to keep these transforms synchronised.
You can copy the transform as follows:
selection.call(zoom.transform, d3.event.transform);
However, this will also cause zoom events to be fired from the target behaviour.
An alternative is to copy directly to the 'stashed' transform property:
selection.node().__zoom = d3.event.transform;
However, there is a bigger problem with what you are trying to achieve. The d3-zoom transform is stored as 3 components of a transformation matrix:
https://github.com/d3/d3-zoom#zoomTransform
As a result, the zoom can only represent a symmetrical scaling together with a translation. Your asymmetrical zoom, as applied to the x-axis, cannot be faithfully represented by this transform and re-applied to the plot area.
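To make that concrete, here is roughly what the stored transform gives you (a sketch; plotAreaNode and the original scales stand in for your own variables):
const t = d3.zoomTransform(plotAreaNode);     // a single { k, x, y }
const rescaledX = t.rescaleX(xScaleOriginal); // x rescaled by k
const rescaledY = t.rescaleY(yScaleOriginal); // y rescaled by the *same* k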
This is an upcoming feature, as already noted by @ColinE. The original code is always doing a "temporal zoom" that is un-synced from the transform matrix.
The best workaround is to tweak the xExtent range so that the graph believes that there are additional candles on the sides. This can be achieved by adding pads to the sides. The accessors, instead of being,
[d => d.date]
becomes,
[
  () => new Date(taken[0].date.addDays(-xZoom)), // Left pad
  d => d.date,
  () => new Date(taken[taken.length - 1].date.addDays(xZoom)) // Right pad
]
Sidenote: there is a pad function that should do this, but for some reason it only works once and never updates again, which is why it is added as accessors.
Sidenote 2: the addDays function is added to the Date prototype (not the best thing to do) just for simplicity.
Now the zoom event modifies our X zoom factor, xZoom,
zoomFactor = Math.sign(d3.event.sourceEvent.wheelDelta) * -5;
if (zoomFactor) xZoom += zoomFactor;
It is important to read the differential directly from wheelDelta. This is where the unsupported feature is: We can't read from t.x as it will change even if you drag the Y axis.
Finally, recalculate chart.xDomain(xExtent(data.series)); so that the new extent is available.
See the working demo without the jump here: https://codepen.io/adelriosantiago/pen/QWjwRXa?editors=0011
Fixed: Zoom reversing, improved behaviour on trackpad.
Technically you could also tweak yExtent by adding extra d.high and d.low's. Or even both xExtent and yExtent to avoid using the transform matrix at all.
A solution is given here https://observablehq.com/#d3/x-y-zoom
It uses a main zoom behavior that gets the gestures, and two ancillary zooms that store the transforms.
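Roughly, the pattern looks like this (a sketch only, assuming d3 v6+ where the event is passed to the listener, and assuming axis selections gX / gY and scales x2 / y2 of your own; the notebook itself has more detail):
const xZoom = d3.zoom(); // only stores the x component
const yZoom = d3.zoom(); // only stores the y component

const mainZoom = d3.zoom().on('zoom', (event) => {
  const t = event.transform;
  // route the gesture into the two stored transforms
  gX.call(xZoom.transform, d3.zoomIdentity.translate(t.x, 0).scale(t.k));
  gY.call(yZoom.transform, d3.zoomIdentity.translate(0, t.y).scale(t.k));
  // rescale from the stored transforms rather than from the gesture itself
  xScale.domain(d3.zoomTransform(gX.node()).rescaleX(x2).domain());
  yScale.domain(d3.zoomTransform(gY.node()).rescaleY(y2).domain());
  render();
});

plotArea.call(mainZoom);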
I am working on a project that displays buildings. The requirement is to let the buildings gradually fade out (become transparent) based on the distance between the camera and the buildings. Also, this effect has to follow the camera's movement.
I considered using THREE.Fog(), but the fog seems to only change the material's color.
Above is a picture of the building with white fog.
The buildings are in tiles, each tile is one single geometry (I merged all the buildings into one) using
var bigGeometry = new THREE.Geometry();
bigGeometry.merge(smallGeometry);
The purple/blue colored thing is the ground, and ground.material.fog = false; so the ground won't interact with the fog.
My question is:
Is it possible to let the fog interact with the building material's opacity instead of its color? (more white translates to more transparent)
Or should I use a shader to control the material's opacity based on the distance to the camera? I have no idea how to do this.
I also considered adding an alphaMap. If so, each building tile has to map an alphaMap, and all these alphaMaps have to interact with the camera's movement. It's going to be a ton of work.
So any suggestions?
Best Regards,
Arthur
NOTE: I suspect there are probably easier/prettier ways to solve this than opacity. In particular, note that partially-opaque buildings will show other buildings behind them. To address that, consider using a gradient or some other scene background, and choosing a fog color to match that, rather than using opacity. But for the sake of trying it...
Here's how to alter an object's opacity based on its distance. This doesn't actually require THREE.Fog; I'm not sure how you would use the fog data directly. Instead I'll use THREE.NodeMaterial, which (as of three.js r96) is fairly experimental. The alternative would be to write a custom shader with THREE.ShaderMaterial, which is also fine.
const material = new THREE.StandardNodeMaterial();
material.transparent = true;
material.color = new THREE.ColorNode( 0xeeeeee );

// Calculate alpha of each fragment roughly as:
//   alpha = 1.0 - saturate( distance / cutoff )
//
// Technically this is distance from the origin, for the demo, but
// distance from a custom THREE.Vector3Node would work just as well.
const distance = new THREE.Math2Node(
  new THREE.PositionNode( THREE.PositionNode.WORLD ),
  new THREE.PositionNode( THREE.PositionNode.WORLD ),
  THREE.Math2Node.DOT
);
const normalizedDistance = new THREE.Math1Node(
  new THREE.OperatorNode(
    distance,
    new THREE.FloatNode( 50 * 50 ),
    THREE.OperatorNode.DIV
  ),
  THREE.Math1Node.SAT
);
material.alpha = new THREE.OperatorNode(
  new THREE.FloatNode( 1.0 ),
  normalizedDistance,
  THREE.OperatorNode.SUB
);
Demo: https://jsfiddle.net/donmccurdy/1L4s9e0c/
Screenshot:
I am the OP. After spending some time reading about how to use three.js's ShaderMaterial, I got some code working as desired.
Here's the code: https://jsfiddle.net/yingcai/4dxnysvq/
The basic idea is:
Create a uniform that contains controls.target (a Vector3 position).
Pass the vertex position attribute to a varying in the vertex shader, so that the fragment shader can access it.
Get the distance between each vertex position and controls.target, and calculate an alpha value based on that distance.
Assign the alpha value to the vertex color.
Another important thing: because the fade-out mask should follow the camera's movement, don't forget to update the control value in the uniforms every frame.
// Create uniforms that contain the control position value.
uniforms = {
  texture: {
    value: new THREE.TextureLoader().load("https://threejs.org/examples/textures/water.jpg")
  },
  control: {
    value: controls.target
  }
};
// In the render() method.
// Update the uniforms value every frame.
uniforms.control.value = controls.target;
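For completeness, a minimal sketch of what the matching shader pair could look like (this is not the fiddle code: the texture uniform is omitted, a plain color is used instead, and the 20/60 fade distances are arbitrary):
const vertexShader = `
  varying vec3 vWorldPosition;
  void main() {
    vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = `
  uniform vec3 control;          // controls.target, updated every frame
  varying vec3 vWorldPosition;
  void main() {
    float dist = distance(vWorldPosition, control);
    float alpha = 1.0 - smoothstep(20.0, 60.0, dist); // opaque near the target, transparent far away
    gl_FragColor = vec4(vec3(0.9), alpha);
  }
`;

const material = new THREE.ShaderMaterial({ uniforms, vertexShader, fragmentShader, transparent: true });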
I had the same issue - a few years later - and solved it with the .onBeforeCompile function which is maybe more convenient to use.
There is a great tutorial here
The code itself is simple and could easily be changed for other materials. It just uses the fogFactor as the alpha value in the material.
Here the material function:
alphaFog() {
  const material = new THREE.MeshPhysicalMaterial();
  material.onBeforeCompile = function (shader) {
    const alphaFog = `
      #ifdef USE_FOG
        #ifdef FOG_EXP2
          float fogFactor = 1.0 - exp( - fogDensity * fogDensity * vFogDepth * vFogDepth );
        #else
          float fogFactor = smoothstep( fogNear, fogFar, vFogDepth );
        #endif
        gl_FragColor.a = saturate(1.0 - fogFactor);
      #endif
    `;
    shader.fragmentShader = shader.fragmentShader.replace(
      '#include <fog_fragment>', alphaFog
    );
    material.userData.shader = shader;
  };
  material.transparent = true;
  return material;
}
and afterwards you can use it like this:
const cube = new THREE.Mesh(geometry, this.alphaFog());
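One thing to keep in mind (my own note, based on how the USE_FOG define works, not something stated in the answer): the chunk above only runs when the scene actually has fog, so fog still needs to be set up, e.g.:
scene.fog = new THREE.Fog(0xb0aa9d, 10, 60);          // linear fog: color, near, far
// or: scene.fog = new THREE.FogExp2(0xb0aa9d, 0.02); // exponential fog, hits the FOG_EXP2 branch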
I want to implement a per-object motion-blur effect based on calculating the previous pixel position inside shaders.
The first step of this technique is to build a velocity map of the moving objects. This step requires having, as uniform variables, the projection and model-view matrices of the current frame and the same matrices of the previous frame.
How could I include those matrices as uniforms for some special shader? I imagined a solution along these lines:
uniforms = {
some_uniform_var : {type: "m4", value: initialMatrix, getter: function(){
// `this` points to object
return this.worldMatrix
}}
}
But this is not currently available in THREE.js. We could do some sort of monkey patching, but I cannot find the best way to do it.
Any suggestions?
My current solution to this problem consists of several parts. I'm using EffectComposer to make several passes over the rendered scene, one of them a VelocityPass. It takes the current and previous model-view matrices and the projection matrix and produces two positions. Both of them are then used to calculate the speed of a point.
The fragment shader looks like this:
"void main() {",
"vec2 a = (pos.xy / pos.w) * 0.5 + 0.5;",
"vec2 b = (prevPos.xy / prevPos.w) * 0.5 + 0.5;",
"vec2 oVelocity = a - b;",
"gl_FragColor = vec4(oVelocity, 0.0, 1.);",
"}"
There are several issues with this approach.
Three.js has a certain point where it injects matrices into object-related shaders: the very end of the setProgram closure, which lives in WebGLRenderer. That's why I took the whole renderer file, renamed the renderer to THREE.MySuperDuperWebGLRenderer, and added a couple of lines of code to it:
A closure to access closures, defined in userspace:
function materialPerObjectSetup(material, object) {
  if (material.customUniformUpdate) {
    material.customUniformUpdate(object, material, _gl); // Yes, I had to pass it...
  }
}
And a call to it in renderBuffer and renderBufferDirect:
var program = setProgram( camera, lights, fog, material, object );
materialPerObjectSetup(material, object);
Now - the userspace part:
velocityMat = new THREE.ShaderMaterial( THREE.VelocityShader );
velocityMat.customUniformUpdate = function(obj, mat, _gl) {
  // console.log("gotcha");
  var new_m = obj.matrixWorld;
  var p_uniforms = mat.program.uniforms;
  var mvMatrix = camera.matrixWorldInverse.clone().multiplyMatrices(camera.matrixWorldInverse, obj._oldMatrix);
  _gl.uniformMatrix4fv( p_uniforms.prevModelViewMatrix, false, mvMatrix.elements );
  _gl.uniformMatrix4fv( p_uniforms.prevProjectionMatrix, false, camera.projectionMatrix.elements );
  obj._pass_complete = true; // We need to keep the old matrix state until this pass has been drawn,
                             // because the matrices are updated on every render of the scene.
}
_pass_complete is needed when we re-render the scene several times, since the matrices are recalculated on each render. This trick helps us keep the previous matrix until we have used it.
_gl.uniformMatrix4fv is needed because three.js sets the uniforms once before rendering. No matter how many objects we have, any other method would pass the modelViewMatrix of only the last one to the shader. This happens because I want to draw the whole scene with the VelocityShader; there's no other way to tell the renderer to use an alternative material for objects.
And as the final point of this explanation, here is a trick to manage an object's previous matrix:
THREE.Mesh.prototype._updateMatrixWorld = rotatedObject.updateMatrixWorld;
THREE.Mesh.prototype._pass_complete = true;

Object.defineProperty(THREE.Mesh.prototype, "updateMatrixWorld", {
  get: function() {
    if (this._pass_complete) {
      this._oldMatrix = this.matrixWorld.clone();
      this._pass_complete = false;
    }
    this._updateMatrixWorld();
    return (function() {});
  }
});
I believe there could be a nicer solution, but sometimes I need to act in a rush, and this kind of monkey patching is what happens.