Changing the anchor point in Three.js

I've been playing with Three.js for a few weeks now and I'm planning to build something, but all my measurements are taken from the top-left-front corner, whereas Three.js positions everything from the center point.
Some context: I want to build Object3Ds with cubes inside them. If the Object3D is 10/10/10 and I add a cube of 1/1/1 at position 0/0/0, the 1/1/1 cube sits in the middle of the parent. Instead, I want position 0/0/0 to put the 1/1/1 cube in the top-left-front corner.
I hope this makes sense.
EDIT: The answer suggesting translate is not what I'm asking for. That solution positions to the center and then translates. I want 0/0/0 to be the top/left/front by default.
Thanks!
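Absent a built-in option, one workaround is to keep measuring from the corner and convert on the way in. This is a minimal sketch under my own conventions (x grows rightward, y downward from the top, z backward from the front face; `cornerToCenter` and the `{w, h, d}` shape are my naming, not a three.js API):

```javascript
// Convert a corner-based position (measured from the parent's
// top-left-front corner, y down, z back) into three.js's
// center-origin convention, so the child's corner lands where asked.
function cornerToCenter(parent, child, pos) {
  return {
    x: pos.x - parent.w / 2 + child.w / 2, // left edge -> center
    y: parent.h / 2 - pos.y - child.h / 2, // top edge  -> center (y up in three.js)
    z: parent.d / 2 - pos.z - child.d / 2, // front face -> center (+z toward viewer)
  };
}
```

For a 10/10/10 parent and a 1/1/1 child at corner position 0/0/0 this yields (-4.5, 4.5, 4.5), which you would assign to `mesh.position` so the child sits flush in the top-left-front corner.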

Related

Fluid is not touching container

I'm trying to create a fluid simulation in Blender 2.92, but after baking the fluid, it is not resting on the floor. It floats about 10 cm above it, and it also stays about 10 cm away from the walls.
Space between fluid and floor
Space between fluid and walls
Low res render
Does anybody know what I'm setting wrong? The obstacle is inside a cube which has a Solidify modifier applied and is set up as Effector / Collision.
Any help will be appreciated.
Without much to go on other than what you've provided, it's most likely the resolution of your domain. When you look at your domain box, there should be a small "cube" in the corner. If you move that cube down under your floor so that its top is touching the floor, it will fix the gap between the floor and the water. Just keep in mind that when you increase the resolution, that cube gets smaller, meaning you will need to adjust its position again. For the walls, you will need to widen your domain until the side of the cube is on the outside of the wall.
Think of the cube as a "safety boundary/perimeter" or something like that. Hopefully that all makes sense?

Three.js hide radial segments

Is there any way to remove or hide the radial segments of ConeBufferGeometry, CylinderBufferGeometry, etc.? I'm new to 3D and three.js, so I don't really know whether they're a crucial part of these geometries or whether I can hide them.
What do you mean? What are the "radial segments"? If you remove the outer surface, all you're left with is a disc from the bottom of a cone, or two discs from the ends of the cylinder.
If you want just a disc, use a CircleBufferGeometry. If you want the ends gone, then read the docs; they make it pretty clear there is an option to remove the ends:
https://threejs.org/docs/#api/en/geometries/ConeBufferGeometry
https://threejs.org/docs/#api/en/geometries/CylinderBufferGeometry
Otherwise you can make your own custom geometry
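For reference, the option the docs describe is the `openEnded` constructor parameter. A minimal sketch (assumes three.js is loaded as `THREE`; these are just the documented constructor signatures, shown with example values):

```javascript
// The trailing `true` is `openEnded`: it drops the end cap(s),
// leaving only the lateral (radial) surface.
const cone     = new THREE.ConeBufferGeometry(1, 2, 32, 1, true);
const cylinder = new THREE.CylinderBufferGeometry(1, 1, 2, 32, 1, true);

// For a flat disc on its own, use CircleBufferGeometry instead:
const disc = new THREE.CircleBufferGeometry(1, 32);
```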

Tiledmap stays dark after world rotation in Phaser

I want to create a top-down game in which the "camera" rotates with the character (like in Tap Tap Dash). But Phaser does not implement camera rotation, so I followed this thread and created a world group, which is then rotated.
As you can see in the following screenshot, after rotating the tilemaps (the road and the arrows) as well as the sprites (the coins), black areas appear. What is really strange is that the sprites are rendered correctly, as you can see at the bottom of the screenshot, but the tilemaps are not fully rendered.
I have tried resizing the world again and trying all kinds of methods on the camera, world, and layer objects, but I am out of ideas. Hopefully someone can give me a hint on how to approach this problem.
Thank you!

How to render side by side videos in OculusRiftEffect or VREffect

I'm experimenting with videojs-vr which is using THREE.OculusRiftEffect for rendering the video in an Oculus friendly way.
I downloaded a side by side video from YouTube and played it within the videojs-vr example.html.
Now I'm searching for a way to show only the left part of the video in the left camera of OculusRiftEffect / VREffect and the right part for the right eye.
I think I have to find/use an event that draws the movie onto the mesh, and identify which camera is currently being rendered, so I can copy only the left or the right part of the video.
If you're using three.js, I would make two spheres, one for the left eye and the other for the right eye. Then split the video using a shader on those spheres so that each one maps only half of the texture. Then attach each sphere to one of the two cameras.
I'm not sure how to do it in three.js, as I come from Unity and I'm still new to three.js, but I did exactly that in Unity. Maybe the idea helps you.
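A sketch of that idea in three.js, without a custom shader: texture `repeat`/`offset` crop each sphere's material to one half of the side-by-side frame, and object layers assign each sphere to one eye (in three.js's VR rendering, the left- and right-eye cameras render layers 1 and 2 respectively). This assumes a `video` element and a `scene` already exist, and a current three.js with VR support rather than the deprecated OculusRiftEffect:

```javascript
// One VideoTexture per eye, cropped to half the frame width.
function halfTexture(xOffset) {
  const t = new THREE.VideoTexture(video);
  t.repeat.set(0.5, 1);     // use half the frame width
  t.offset.set(xOffset, 0); // 0 = left half, 0.5 = right half
  return t;
}

const geo = new THREE.SphereGeometry(500, 60, 40);
geo.scale(-1, 1, 1); // flip normals so we see the inside of the sphere

const leftEye = new THREE.Mesh(geo, new THREE.MeshBasicMaterial({ map: halfTexture(0) }));
leftEye.layers.set(1);  // rendered only by the left-eye camera

const rightEye = new THREE.Mesh(geo, new THREE.MeshBasicMaterial({ map: halfTexture(0.5) }));
rightEye.layers.set(2); // rendered only by the right-eye camera

scene.add(leftEye, rightEye);
```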

Programmatic correction of camera tilt in a positioning system

A quick introduction:
We're developing a positioning system that works the following way. Our camera is mounted on a robot and points upwards (looking at the ceiling). On the ceiling we have landmarks, thanks to which we can compute the position of the robot. It looks like this:
Our problem:
The camera is tilted a bit (0-4 degrees, I think), because the surface of the robot is not perfectly even. That means that when the robot turns around but stays at the same coordinates, the camera looks at a different position on the ceiling, and therefore our positioning program yields a different position for the robot, even though it only turned around and wasn't moved at all.
Our current (hardcoded) solution:
We've taken some test photos with the camera, turning it around the lens axis. From the pictures we've deduced that it's tilted about 4 degrees in the "up direction" of the picture. Using some simple geometrical transformations we've managed to reduce the tilt effect and find the real camera position. In the following pictures, the grey dot marks the center of the picture, and the black dot is the real place on the ceiling under which the camera is situated. The black dot's position was computed by correcting the grey dot's position. As you can easily notice, the grey dots form a circle on the ceiling, and the black dot is the center of this circle.
The problem with our solution:
Our approach is completely unportable. If we moved the camera to a new robot, the angle and direction of the tilt would have to be recalibrated from scratch. We therefore wanted to leave the calibration phase to the user, which would mean taking some pictures, assessing the tilt parameters manually, and then entering them into the program. My question to you is: can you think of a better (more automatic) way to compute the tilt parameters or to correct the tilt in the pictures?
Nice work. To have an automatic calibration is a nice challenge.
An idea would be to use the parallel lines from the roof tiles:
If the camera is perfectly level, then all lines will be parallel in the picture too.
If the camera is tilted, then all lines will be secant (they intersect in the vanishing point).
Now, this is probably very hard to implement. With the camera you're using, distortion needs to be corrected first so that lines are indeed straight.
Your practical approach is probably simpler and more robust. As you describe it, it seems it can be automated to become user-friendly: make the robot turn on itself and identify programmatically which point remains at the same place in the picture.
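Finding the point that stays fixed while the robot turns amounts to fitting a circle to the tracked image positions and taking its center (as the question notes, the grey dots trace a circle whose center is the true camera position). A minimal sketch using the algebraic (Kåsa) least-squares circle fit; the function names are mine:

```javascript
// Kåsa circle fit: model x^2 + y^2 + a*x + b*y + c = 0 and solve the
// linear least-squares normal equations for (a, b, c).
// The circle center is then (-a/2, -b/2).
function fitCircleCenter(points) {
  let Sxx = 0, Sxy = 0, Syy = 0, Sx = 0, Sy = 0, Sxz = 0, Syz = 0, Sz = 0;
  for (const [x, y] of points) {
    const z = x * x + y * y;
    Sxx += x * x; Sxy += x * y; Syy += y * y;
    Sx += x; Sy += y;
    Sxz += x * z; Syz += y * z; Sz += z;
  }
  const A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, points.length]];
  const [a, b] = solve3(A, [-Sxz, -Syz, -Sz]);
  return { x: -a / 2, y: -b / 2 };
}

// Solve a 3x3 linear system A*x = b by Gaussian elimination
// with partial pivoting.
function solve3(A, b) {
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < 3; col++) {
    let piv = col;
    for (let r = col + 1; r < 3; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
    [M[col], M[piv]] = [M[piv], M[col]];
    for (let r = col + 1; r < 3; r++) {
      const f = M[r][col] / M[col][col];
      for (let c = col; c < 4; c++) M[r][c] -= f * M[col][c];
    }
  }
  const x = [0, 0, 0];
  for (let r = 2; r >= 0; r--) {
    let s = M[r][3];
    for (let c = r + 1; c < 3; c++) s -= M[r][c] * x[c];
    x[r] = s / M[r][r];
  }
  return x;
}
```

Feed it the measured landmark position from a handful of frames while the robot spins in place; the fitted center is the tilt-corrected camera position, and the offset from the image center gives the tilt direction and magnitude.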
