hitTest in macOS RealityKit app produces really inaccurate results, and raycast is unavailable

I'm doing a non-AR 3D app on macOS using RealityKit and SwiftUI, and I need to be able to modify materials on objects based on the results of onContinuousHover. In particular, I want to highlight the face of a box that the mouse is hovering over, which requires being able to find the model position fairly accurately. This has been impossible so far.
To find out what I might be doing wrong, I prepared a plane:
var mesh: MeshResource = MeshResource.generatePlane(width: 10.0, depth: 10.0)
// material is a grid, to help visualize where hits are taking place
model = ModelComponent(mesh: mesh, materials: [material])
generateCollisionShapes(recursive: true)
collision = CollisionComponent(
    shapes: [ShapeResource.generateConvex(from: model!.mesh)],
    mode: .default
)
and placed it on an AnchorEntity positioned at world origin.
The view coordinates from the onContinuousHover closure are good, but the results from a call to hitTest (the overload that returns [CollisionCastHit], not the deprecated one that returns [ARHitTestResult]) are nonsensical:
The y coordinate of a hit on an X-Z plane centered at the world origin is consistently reported as 0.1. Any hit on this plane should be of the form (x, 0.0, z); the returned result is off by ten centimeters. Offsetting the plane changes the result by the offset, but the y value is still always off by a constant 0.1.
The sign of the z coordinate is always reversed: hovering over a small sphere placed at world (0.0, 0.0, 0.1) always returns (0.0, 0.0, -0.1), and placing it at (0.0, 0.0, -0.1) and hovering returns (0.0, 0.0, 0.1). This holds for any point in the plane.
The Entity in the CollisionCastHit is always correct, but the reported model position is wildly inaccurate. The transform for both the plane and its anchor is the identity transform: no scale, translation, or rotation. I am always careful to convert to/from world space correctly, but in this test case the two are identical.
I was going to try using arView.raycast(from:allowing:alignment:), which the documentation says is available on macOS 10.15+, but I get "Value of type 'ARView' has no member 'raycast'". Opening the RealityKit header and searching for "raycast" turned up zero hits. Where is it?
Ideas?

It looks like the documentation is mistaken in saying it's available on macOS. If you go from that method to the type of one of its arguments, ARRaycastQuery.Target, you can see that ARRaycastQuery is not available on macOS; so a method that uses it in its signature cannot be available on macOS either.
I'd recommend instead using hitTest(_:query:mask:), which should give you everything you need and works well whenever I use it.
It's used extensively in RealityUI, so I can vouch for its accuracy and reliability:
https://github.com/maxxfrazer/RealityUI/blob/951d641164758de1ce9b1e81b17c7fc81eeb4b8a/Sources/RealityUI/RUILongTouchGestureRecognizer.swift#L86
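For reference, a minimal sketch of what that might look like here, assuming a SwiftUI onContinuousHover that hands you a view-space point and an ARView called arView (illustrative names, not necessarily your exact setup):
.onContinuousHover { phase in
    guard case .active(let location) = phase else { return }
    // hitTest takes a point in view coordinates and returns [CollisionCastHit]
    if let hit = arView.hitTest(location, query: .nearest, mask: .all).first {
        // hit.position should be a world-space point; convert it into the entity's
        // local space if you need model coordinates
        let local = hit.entity.convert(position: hit.position, from: nil)
        print("hit \(hit.entity.name) at \(local)")
    }
}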
Try turning on the physics debug visualization with debugOptions and showPhysics. The Y coordinate being off by 10 cm sounds a lot like the collision box that has been created has a height of 0.2, instead of being a flat plane.
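If it helps, turning that visualization on is a one-liner (again assuming an ARView called arView):
// draws the generated collision shapes, so you can see what hitTest is actually casting against
arView.debugOptions.insert(.showPhysics)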

Related

What is the formula that is used by three.js to create the pattern that is seen when two overlapping 2-D surfaces are shown in SageMath?

I'm trying to understand the pattern (it looks similar to an animal print) that shows up when two different-colored planes are plotted on (almost) the same plane. What formula does SageMath, via three.js, use to create the pattern shown in the graph? The SageMath question/support area sent me to this support section for answers.
Example: here one plane is slightly larger, which makes SageMath show them both, but with a pattern. Also, as you move/manipulate the graph with the mouse, the pattern changes. What formula or information does SageMath (three.js) use to show the pattern?
I used the Sage Cell Server online to plot this (below) at https://sagecell.sagemath.org/:
M = implicit_plot3d(lambda x,y,z: x, (-15,15), (-15,15), (-15,15), rgbcolor= (0.0, 1.0, 0.0), frame=true)
N = implicit_plot3d(lambda x,y,z: x, (-15,15), (-15,15), (-15,15.5), rgbcolor= (0.0, 0.0, 1.0), frame=true)
M+N
Thanks for any information you can provide!
I'm not very familiar with Sage, but this seems to be a case of z-fighting.
The idea is basically that the planes are so close to each other (and in this case occupy the same space!) that the renderer drawing the scene has trouble picking a particular plane for each pixel.
So the "pattern" is just random glitches, and it changes when you move because the computation that selects which plane is "in front" changes with each viewing angle.
You can read about it in a lot more detail here.
Now, the pattern does remind me of "noise patterns", which you might be interested in. There are a lot of resources for that; a good place to start could be The Book of Shaders.

Is it possible to know the position of a texture programmatically in Ruby?

I have written some plugins for SketchUp. I want to know whether it is possible to determine programmatically, in Ruby, whether a texture's position is vertical or horizontal.
Example: I'm using SketchUp for woodworking, and whenever I apply a material to a model I have to take care of the grain direction, so I want to know whether the wood grain runs horizontally or vertically. By selecting a face and then clicking Texture -> Position, we can change horizontal grain to vertical and vice versa. After applying materials programmatically, how can I tell whether the grain is horizontal or vertical?
Is there any solution?
When you say horizontal and vertical, that implies a general direction; do you have that available via Ruby code somehow? Do you determine it from the bounds of the group/component?
You can use face.get_UVHelper to extract the position of the texture on the face. (You can also do so via PolygonMesh, but that's only really useful if you are interested in all the other data you get from it.)
Once you have the UV position you can compare it against your desired direction. (This also assumes that all your textures have the grain in the same direction.) Based on that you can then reposition the texture using face.position_material.
Edit: a very naive version can be seen in my UV Toolkit plugin. It's a very old plugin and not the greatest example; it requires the face to be a quad and only rotates in 90-degree steps. But it shows how to get UV data and change the orientation of the texture.
A better, generic version would be to get four points on the plane of the face (not all collinear) and get the UV data for each point. Then transform the UV data by the rotation needed to get the direction you want and set the new UV data.
Disclaimer
I am horrible with 3 dimensions (so please don't ask for help with the appropriate vectors), and I have never used SketchUp. Not even once until right now, but...
Information
You must select the Face object.
model = Sketchup.active_model
entities = model.active_entities
face_entities = entities.find_all{|e| e.class == Sketchup::Face}
#=>[#<Sketchup::Face:0xea645d8>]
face_entities.each{|e| puts e.get_texture_projection(false)}
#=> output of each face's texture projection for the back side
# nil means that the projection is unchanged from its initial position
So here is what you can do
# get_texture_projection(front): true returns the front texture's projection,
# false returns the back's, as a vector (nil means it is unchanged from its initial position)
face.get_texture_projection(true)
face.get_texture_projection(false)
# set_texture_projection(vector, front): pass a vector such as [0,1,0], and true to set
# the front side or false to set the back; passing nil as the vector resets the projection
face.set_texture_projection([0,1,0], true)
# texture is now rotated 90 degrees (projected along the Y axis)
face.set_texture_projection([1,0,0], true)
# texture is now rotated 90 degrees (projected along the X axis)
You can also access the material through material and back_material:
face.material.name
#=> [Wood_ Floor]
face.material.texture.filename
#=>Wood_Floor.jpg
Hope this helps you out. It took a bit of digging, but based on using the built-in console in SketchUp, it works.
P.S. I love that this program has a built-in Ruby API; that's awesome.

3D sprites, writing correct depth buffer information

I am writing a particle engine for iOS using MonoTouch and OpenTK. My approach is to project the coordinate of each particle, and then draw a correctly scaled, textured rectangle at that screen location.
It works fine, but I have trouble calculating the correct depth value so that the sprite will correctly overdraw, and be overdrawn by, 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)(1.0 / Math.Tan(MathHelper.DegreesToRadians(fovy / 2f)));
Vector3 screenPos;
Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth = 1 - d / -screenPos.Z;
Then I draw a triangle strip at the screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading directly from that, you'd transform as:
z_clip = -( (far + near) / (far - near) ) * z_eye - ( (2 * far * near) / (far - near) )
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
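As a rough worked example with your near and far values (near = 1, far = 10000), and remembering that eye-space z is negative in front of the camera:
z_device = (far + near) / (far - near) + (2 * far * near) / ((far - near) * z_eye)
         ≈ 1.0002 + 2.0002 / z_eye
so z_eye = -near = -1 comes out at -1 and z_eye = -far = -10000 comes out at +1, i.e. the full [-1, 1] device range. Those near/far terms are exactly what your 1 - d / -screenPos.Z approximation leaves out.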
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader but set gl_PointSize to be the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.

Are SpriteBatch drawcalls culled by XNA?

I have a very subtle problem with XNA, specifically the SpriteBatch.
In my game I have a Camera class. It can Translate the view (obviously) and also zoom in and out.
I apply the camera to the scene via the last parameter when I call the Begin method of my SpriteBatch instance.
The problem: when the camera's zoom factor is bigger than 1.0f, the SpriteBatch stops drawing.
I tried to debug my scene but I couldn't find the point where it goes wrong.
I tried to render with just "Matrix.CreateScale(2.0f);" as the last parameter for "Begin".
All other parameters were null and the first was "SpriteSortMode.Immediate", so no custom shader or anything.
But SpriteBatch still didn't want to draw.
Then I tried to only call "DrawString" and DrawString worked flawlessly with the provided scale (2.0f).
However, through a lot of trial and error, I found out that additionally multiplying the scale matrix by "Matrix.CreateTranslation(0, 0, -1)" somehow changed the "safe" value to 1.1f.
So all scale values up to 1.1f worked. For everything above that, SpriteBatch does not render a single pixel for normal "Draw" calls. (DrawString is still unaffected and working.)
Why is this happening?
I did not setup any viewport or other matrices.
It appears to me that this could be some kind of strange near/far clipping.
But I usually only know those parameters from 3d stuff.
If anything is unclear please ask!
It is near/far clipping.
Everything you draw is transformed into and then rasterised in projection space. That space runs from (-1,-1) at the bottom left of the screen, to (1,1) at the top right. But that's just the (X,Y) coordinates. In Z coordinates it goes from 0 to 1 (front to back). Anything outside this volume is clipped. (References: 1, 2, 3.)
When you're working in 3D, the projection matrix you use will compress the Z coordinates down so that the near plane lands at 0 in projection space, and the far plane lands at 1.
When working in 2D you'd normally use Matrix.CreateOrthographic, which has near and far plane parameters that do exactly the same thing. It's just that SpriteBatch specifies its own matrix and leaves the near and far planes at 0 and 1.
The vertices of sprites in a SpriteBatch do, in fact, have a Z-coordinate, even though it's not normally used. It is specified by the layerDepth parameter. So if you set a layer depth greater than 0.5, and then scale up by 2, the Z-coordinate will be outside the valid range of 0 to 1 and won't get rendered.
(The documentation says that 0 to 1 is the valid range, but does not specify what happens when you apply a transformation matrix.)
The solution is pretty simple: Don't scale your Z-coordinate. Use a scaling matrix like:
Matrix.CreateScale(2f, 2f, 1f)

How to create a shader to mask using a degree offset from a central point?

I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and I have one billboarded sprite that I draw using a shader, which I show in red. I have a point that I know and define myself, A, represented by the blue dot, at some x,y coordinate in the 2d coordinate space. (Lower-left of screen is origin). I need to mask the red billboard in a programmatic fashion where I specify 0% to 100%, with 0% being fully intact and 100% being fully masked. I can either pass 0-100% (0 to 1.0) in to the shader, or I could precompute an angle, either solution would be fine.
( Here you can see the scene drawn with '0%' masking )
So when I set "15%" I want the following to show up:
( Here you can see the scene drawn with '15%' masking )
And when I set "45%" I want the following to show up:
( Here you can see the scene drawn with '45%' masking )
And here's an example of "80%":
The general idea, I think, is to pass in a uniform vec2 'A', and within the fragment shader determine whether the fragment lies in the area swept from the line running from 'A' to the bottom of the screen, around to a line offset clockwise from there by the correct angle. If it is within that area, discard the fragment. (Discarding makes more sense than setting alpha to 0.0, or to 1.0 if keeping, right?)
But how can I actually achieve this? I don't understand how to implement that algorithm in terms of a shader. (I'm using OpenGL ES 2.0.)
One solution would be to calculate the difference between gl_FragCoord (I hope that exists under ES 2.0!) and the point (make sure the point is in screen coordinates), and use the atan function with two parameters, which gives you an angle. If the angle is not in the range you want (greater than a minimum and less than a maximum), kill the fragment.
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
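To sketch the test itself (assuming the mask sweeps clockwise starting from the ray pointing straight down from A, and that A and the fragment position p, i.e. gl_FragCoord.xy, are in the same screen-space coordinates with the origin at the lower left; maskAmount is just a placeholder name for your 0-to-1 uniform):
theta = atan(p.y - A.y, p.x - A.x)         (angle of the fragment around A)
sweep = mod(-PI/2 - theta, 2.0*PI)         (how far clockwise the fragment lies from straight down)
discard the fragment when sweep < maskAmount * 2.0*PI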
EDIT:
To better explain "not precisely the most performant thing", consider that killing fragments still causes the fragment shader to run (it only discards the result afterwards) and interferes with early depth/stencil fragment rejection.
Constructing a triangle fan like whoplisp suggested is more work, but will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations, too (MSAA for example).
Why don't you just draw some black triangles on top of the red rectangle?
