How to get shadows with entities created programmatically in RealityKit?

I've created a series of boxes in code. There is subtle ambient occlusion where the boxes meet the table, but I'm not getting any shadows - neither ground shadows nor shadows cast on the models themselves. I do get ground shadows when I use a model from Reality Composer.
// Run world tracking with horizontal plane detection
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal]
session.run(config, options: [])

// Anchor everything to the first detected horizontal plane
let anchorEntity = AnchorEntity(plane: .horizontal)
scene.addAnchor(anchorEntity)

// One white physically based material shared by all three boxes
var material = PhysicallyBasedMaterial()
material.baseColor = PhysicallyBasedMaterial.BaseColor(tint: .white)

let box1 = ModelEntity(mesh: .generateBox(size: [0.05, 0.03, 0.05]), materials: [material])
box1.position = [0, 0.015, 0]
anchorEntity.addChild(box1)

let box2 = ModelEntity(mesh: .generateBox(size: [0.15, 0.03, 0.05]), materials: [material])
box2.position = [0.05, 0.045, 0]
anchorEntity.addChild(box2)

let box3 = ModelEntity(mesh: .generateBox(size: [0.01, 0.05, 0.01]), materials: [material])
box3.position = [0, 0.085, 0]
anchorEntity.addChild(box3)
iOS: 16.1.1
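One hedged thing to try (this is not an answer from the thread, and every numeric value below is a placeholder): a RealityKit scene assembled purely in code may not include a shadow-casting light of its own, so one common first step is to attach a DirectionalLight with a DirectionalLightComponent.Shadow to the same anchor, giving the programmatically created boxes a light source that can cast shadows.
// Hedged sketch, not from the original post: add a shadow-casting
// directional light so code-created entities can cast shadows.
// Intensity, shadow distance, and orientation are placeholder values.
let directionalLight = DirectionalLight()
directionalLight.light = DirectionalLightComponent(color: .white,
                                                   intensity: 5000,
                                                   isRealWorldProxy: false)
directionalLight.shadow = DirectionalLightComponent.Shadow(maximumDistance: 2,
                                                            depthBias: 1)
directionalLight.orientation = simd_quatf(angle: -Float.pi / 3, axis: [1, 0, 0])
anchorEntity.addChild(directionalLight)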

Related

Best way to paint rectangles in three.js

EDIT: I solved my problem, and this is what it was for. It now uses raw webgl and two triangles for each rectangle.
I'm a seasoned developer, but know next to nothing about 3d development.
I need to animate a million small rectangles where I set the coordinates in Javascript (rather than through a shader). (EDIT: It's a 2D job and I'm looking at webgl for performance reasons only.) I tweaked an existing threejs sample that uses "Points" to modify the coordinates in a BufferGeometry via Javascript and that performs really well, even with a million points.
The three.js concept of "Points", however, is a bit weird in that they appear to have to be squares - my rectangles can't be quite squares, though, and each has slightly different dimensions.
I can think of a couple of workarounds, such as having foreground-colored squares partially overlap with squares of a background-color, thereby molding them into the correct rectangle. That's quite hacky though.
Another possibility would be to not do it with points but rather with proper triangles; but then I need to set 12 values from Javascript (2 triangles, 3 vertices each, 2 dimensions) rather than just the 4 I actually need (x, y, width, height). I suppose that could be improved with a vertex shader somehow, but that will be tricky for a noob like me.
I'm looking for some suggestions or, alternatively, a sample on how to set a large number of vertex coordinates from Javascript in threejs (the existing samples all appear to assume that manipulation is done in shaders, but that doesn't work so well for my use case).
EDIT - Here's a picture of how the rectangles could be laid out:
The rectangles' top and bottom edges are arbitrary, but they are organized into columns of arbitrary widths.
The rectangles of each column all have the same, uniform color.
Just an option with canvas and .map:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 10);
camera.lookAt(scene.position);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var gh = new THREE.GridHelper(10, 10, "black", "black");
gh.rotation.x = Math.PI * 0.5;
gh.position.z = 0.01;
scene.add(gh);
var canvas = document.createElement("canvas");
var map = new THREE.CanvasTexture(canvas);
canvas.width = 512;
canvas.height = 512;
var ctx = canvas.getContext("2d");
ctx.fillStyle = "gray";
ctx.fillRect(0, 0, canvas.width, canvas.height);
function drawRectangle(x, y, width, height, color) {
let xUnit = canvas.width / 10;
let yUnit = canvas.height / 10;
let x_ = x * xUnit;
let y_ = y * yUnit;
let w_ = width * xUnit;
let h_ = height * yUnit;
ctx.fillStyle = color;
ctx.fillRect(x_, y_, w_, h_);
map.needsUpdate = true;
}
drawRectangle(1, 1, 4, 3, "aqua");
drawRectangle(0, 6, 6, 3, "magenta");
drawRectangle(3, 2, 6, 6, "yellow");
var plane = new THREE.Mesh(new THREE.PlaneBufferGeometry(10, 10), new THREE.MeshBasicMaterial({
color: "white",
map: map
}));
scene.add(plane);
renderer.setAnimationLoop(() => {
renderer.render(scene, camera);
});
body {
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
Read the source for these samples:
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_custom_attributes_particles
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing_billboards
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_points
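For completeness, here is only a sketch of the approach the asker's edit describes (two triangles per rectangle, vertex positions written from JavaScript), not code from either answer; the rects data is invented for illustration, and on three.js releases older than r110 setAttribute is spelled addAttribute:
// Sketch only: two triangles per rectangle, positions filled from JS.
var rects = [{x: 0, y: 0, w: 2, h: 1}, {x: 3, y: 1, w: 1, h: 2}]; // example data
var positions = new Float32Array(rects.length * 6 * 3);           // 6 vertices * xyz
rects.forEach(function (r, i) {
  var x0 = r.x, y0 = r.y, x1 = r.x + r.w, y1 = r.y + r.h;
  positions.set([
    x0, y0, 0,  x1, y0, 0,  x1, y1, 0,   // first triangle
    x0, y0, 0,  x1, y1, 0,  x0, y1, 0    // second triangle
  ], i * 18);
});
var rectGeometry = new THREE.BufferGeometry();
rectGeometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
scene.add(new THREE.Mesh(rectGeometry, new THREE.MeshBasicMaterial({ color: "aqua" })));
// To animate, mutate `positions` and set rectGeometry.attributes.position.needsUpdate = true.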

Xcode SceneKit Making a glowing light box

I have been trying to figure out if there is a way to make a "glowing" SCNBox in SceneKit. Unfortunately, I didn't figure it out by myself.
I don't know if the solution is so simple that it just hasn't occurred to me.
Ideas are welcome
Thanks
Make an omni light at the same position as your SCNNode, and set the emission of the SCNNode's material to the same colour as your light.
let box = SCNBox.init(width: 1, height: 1, length: 1, chamferRadius: 0.3)
box.materials.first?.diffuse.contents = UIColor.blue
box.materials.first?.emission.contents = UIColor.white
box.materials.first?.emission.intensity = 1.0
let boxNode = SCNNode.init(geometry: box)
boxNode.position = SCNVector3(x: 0, y: 0, z: -10)
self.sceneView.scene?.rootNode.addChildNode(boxNode)
let omniLight = SCNLight()
omniLight.type = .omni
omniLight.color = UIColor.yellow
boxNode.light = omniLight

Orbiting a cube in WebGL with glMatrix

https://jsfiddle.net/sepoto/Ln7qvv7w/2/
I have a base set up to display a cube with different colored faces. What I am trying to do is set up a camera and apply combined X-axis and Y-axis rotations so that the cube spins around both axes concurrently. There seem to be some problems with the matrices I set up, as I can see the blue face doesn't look quite right. There are some examples of how this is done using older versions of glMatrix, but the code in those examples no longer works because of changes to vec4 in the glMatrix library. Does anyone know how this can be done using the latest version of glMatrix? I have attached a CDN to the fiddle.
Thank you!
function drawScene() {
gl.viewport(0,0,gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
mat4.ortho( mOrtho, -5, 5, 5, -5, 2, -200);
mat4.identity(mMove);
var rotMatrix = mat4.create();
mat4.identity(rotMatrix);
rotMatrix = mat4.fromYRotation(rotMatrix, yRot,rotMatrix);
rotMatrix = mat4.fromXRotation(rotMatrix, xRot,rotMatrix);
mat4.multiply(mMove, rotMatrix, mMove);
setMatrixUniforms();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBuffer);
gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, triangleColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);
yRot += 0.01;
xRot += 0.01;
}
As the name says, fromYRotation() initializes a matrix to a given rotation. Hence, you need two temporary matrices for the partial rotations, which you can then combine:
var rotMatrix = mat4.create();
var rotMatrixX = mat4.create();
var rotMatrixY = mat4.create();
mat4.fromYRotation(rotMatrixY, yRot);
mat4.fromXRotation(rotMatrixX, xRot);
mat4.multiply(rotMatrix, rotMatrixY, rotMatrixX);
And the reason your blue face was behaving strangely is the missing depth test. Enable it in your initialization method:
gl.enable(gl.DEPTH_TEST);
You don't need to use three matrices:
// you should do allocations outside of the renderloop
var rotMat = mat4.create();
// no need to set the matrix to identity, as
// fromYRotation resets rotMat's contents anyway
mat4.fromYRotation(rotMat, yRot);
// rotateX takes (out, a, rad), so pass rotMat twice to rotate it in place
mat4.rotateX(rotMat, rotMat, xRot);

How to add a texture to one side of a cube without THREE.MultiMaterial to reduce draw calls

So let's say there's a cube with 2 materials. I'm using MultiMaterial, but maybe that's not the correct approach because it's showing 6 draw calls instead of 2. I'm worried about performance when it scales up.
http://codepen.io/glued/pen/JXmvzm
This is just an example. I know about FaceColors, but I would like to mix a MeshBasicMaterial with another material, say one with a texture.
var greenMaterial = new THREE.MeshBasicMaterial({ color: 0xc4f288 })
var orangeMaterial = new THREE.MeshBasicMaterial({ color: 0xf4511e })
var mats = [
orangeMaterial,
greenMaterial,
orangeMaterial,
orangeMaterial,
greenMaterial,
orangeMaterial
]
let box = new THREE.Mesh(geometry, new THREE.MultiMaterial( mats ))
If I used vertexColors: FaceColors and a texture:
new MeshBasicMaterial({ vertexColors: FaceColors, map:someTexture }))
how would I designate the texture for a specific face only?
I figured it out by creating a material with a texture and removing the UVs on the geometry faces that I'm not using.
The texture is 128x256; see the codepen, as I'm using a 2D canvas to generate it.
texture.repeat.y = 0.5
texture.offset.y = 0.5
let geometry = new THREE.BoxGeometry(50, 50, 50)
function assignUvAndColor(geo, i, color = 0x00cbff){
geo.faceVertexUvs[0][i] = new Array(3).fill(new THREE.Vector2(0, -1))
geo.faces[i].color.setHex(color)
}
const greenColor = 0xacffd3
assignUvAndColor(geometry, 3, greenColor)
assignUvAndColor(geometry, 2, greenColor)
assignUvAndColor(geometry, 0, greenColor)
assignUvAndColor(geometry, 1, greenColor)
assignUvAndColor(geometry, 4)
assignUvAndColor(geometry, 5)
assignUvAndColor(geometry, 6)
assignUvAndColor(geometry, 7)
let material = new THREE.MeshBasicMaterial({ map: texture, vertexColors: THREE.FaceColors })
let box = new THREE.Mesh(geometry, material)
http://codepen.io/glued/pen/grBEmo?editors=0010
MultiMaterial will always do N draw calls, where N = the length of its material array
(see the renderer implementation).
It does not even try to check whether some of its materials are duplicate references - so in your example you have a MultiMaterial with 6 materials = 6 draw calls.
You will have to change the geometry's face material index, or abandon MultiMaterial and divide your geometry manually.
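A rough sketch of that first suggestion (not from the answer itself): keep a material array but shrink it to two entries, and point each face at the right entry via materialIndex, so only two draw calls are needed. This assumes the classic THREE.Geometry face API used elsewhere in this thread, and someTexture is the texture name borrowed from the question:
// Sketch only: two materials, each face pointed at one of them via materialIndex.
// Which face indices make up which side depends on the three.js revision.
var boxGeometry = new THREE.BoxGeometry(50, 50, 50);
boxGeometry.faces.forEach(function (face, i) {
  face.materialIndex = (i === 4 || i === 5) ? 1 : 0; // the two triangles of one side
});
var faceMaterials = [
  new THREE.MeshBasicMaterial({ color: 0xf4511e }), // plain colour for the other sides
  new THREE.MeshBasicMaterial({ map: someTexture }) // textured side
];
var texturedBox = new THREE.Mesh(boxGeometry, new THREE.MultiMaterial(faceMaterials));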

Create ArrowHelper with correct rotation

How do I create an ArrowHelper in Three.js (r58) with correct rotation?
var point1 = new THREE.Vector3(0, 0, 0);
var point2 = new THREE.Vector3(10, 10, 10);
var direction = new THREE.Vector3().subVectors(point1, point2);
var arrow = new THREE.ArrowHelper(direction.normalize(), point1);
console.log(arrow.rotation);
I always end up with Object {x: 0, y: 0, z: 0} for the arrow rotation. What am I doing wrong?
ArrowHelper uses quaternions to specify its orientation.
If you do this:
var rotation = new THREE.Euler().setFromQuaternion( arrow.quaternion );
you will see an equivalent orientation expressed in Euler angles, although in r.59, arrow.rotation is now automatically updated, so you will no longer see zeros.
EDIT: answer updated to three.js r.59
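A small aside that is not part of the original answer: if the goal is an arrow pointing from point1 toward point2, the direction should be computed as point2 minus point1 (the question subtracts in the opposite order). A minimal sketch:
// Aside, not from the answer: direction from point1 toward point2,
// with the arrow's length matching the distance between the points.
var direction = new THREE.Vector3().subVectors(point2, point1);
var length = direction.length();
var arrow = new THREE.ArrowHelper(direction.normalize(), point1, length, 0xffff00);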
