I am trying to archive my SceneKit scene with NSKeyedArchiver so that I can save it and restore it at a later date. I am finding that the restored scene seems to ignore/lose the SCNTransformConstraints I have added to various SCNNodes, which results in the nodes being placed in the wrong positions.
Is this by design, or a bug? Or am I doing something wrong?
Any pointers would be appreciated.
SCNTransformConstraint works with a block provided by the client of the API, and blocks can't be archived with NSKeyedArchiver.
Instead, you need to archive another object that is able to reconstruct the block and re-assign the transform constraint after unarchiving.
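As an illustration, here is a minimal sketch of that idea in Swift. The SwingInfo class and its maxAngle property are hypothetical stand-ins for whatever data your constraint block actually needs; the point is that this small, archivable object travels through NSKeyedArchiver alongside the scene and rebuilds the block-based constraint on load.

    import SceneKit

    // Hypothetical wrapper: holds the data the constraint block needs, and can be
    // archived with NSKeyedArchiver because it contains plain values, not blocks.
    final class SwingInfo: NSObject, NSSecureCoding {
        static var supportsSecureCoding: Bool { true }

        let maxAngle: Float

        init(maxAngle: Float) {
            self.maxAngle = maxAngle
            super.init()
        }

        required init?(coder: NSCoder) {
            maxAngle = coder.decodeFloat(forKey: "maxAngle")
            super.init()
        }

        func encode(with coder: NSCoder) {
            coder.encode(maxAngle, forKey: "maxAngle")
        }

        // Re-creates the (unarchivable) block-based constraint from the archived data.
        func makeConstraint() -> SCNTransformConstraint {
            return SCNTransformConstraint(inWorldSpace: false) { _, transform in
                // ... apply whatever transform logic maxAngle drives ...
                return transform
            }
        }
    }

After unarchiving the scene, look up the affected node and re-assign the constraint, e.g. node.constraints = [swingInfo.makeConstraint()].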
Open3D's easy draw_geometries utility makes it possible to copy and paste camera parameters to restore a certain viewpoint after it has been changed. It seems like this functionality should also be available when using the SceneWidget and its Open3DScene high-level scene; however, I have not figured out a way to mimic this behavior.
Copying a viewpoint from draw_geometries and pasting it into Notepad reveals this information:
boundingbox_max, boundingbox_min, field_of_view, front, lookat, up, zoom
In order to have the same effect using the SceneWidget, I would have to somehow obtain this information from the scene's camera, keep a copy, and then load it later when it is needed. However, I cannot access the above properties explicitly through the camera object, nor have I found a way to set them (assuming I already have them).
The next "obvious" solution would be the camera class's copy_from method, which sounds great, except I am unable to instantiate the Camera class in order to use it.
How can I achieve this save & restore viewpoint effect?
Thanks in advance
I am trying to understand how the Web Audio API works. I have two objects: one representing the listener and one the source. I have used the link below as an example, and I am able to move the source and the sound position changes.
https://mdn.github.io/webaudio-examples/panner-node/
The methods to change the orientation are provided, viz. this.panner.setOrientation and this.listener.setOrientation. My question is: if I have a source or listener object (in canvas mode using Three.js, i.e. we know its position and rotation), how do I change the orientation of either the panner or the listener (as the case may be) via JS?
An example would greatly help. Thanks
Any reason not to use THREE's PositionalAudio object? https://threejs.org/docs/index.html#api/audio/PositionalAudio. That lets you add a sound to a mesh object, and THREE will take care of moving it. If you want to source a different sound than an AudioBuffer, just connect the audio source to the .gain of the PositionalAudio object.
I started a new Xcode project with the ARKit template and simply replaced the "ship.scn" asset with my "test.scn" file. The object is about 16.5mm wide and 4.8mm tall. The ship worked fine of course, but my test object, which reads "test", does not rotate as I move around it or scale when I move towards or away from it, yet it does track in one location.
I compared the ship and test attribute panels, and I can't find anything that is different about them, except that the ship is textured and my test text is not. What is inherently special about scn objects that would make them behave correctly in ARKit besides their size? I've read through the documentation about anchoring, but it seems like I wouldn't have to do this in code if it's already a scn object.
In case anyone wants to test the file I'm using in the ARKit template to see how it's behaving, the file is here: https://ufile.io/ey49t
I’m answering my own question because I think it could help others who are making a similar mistake, but it can be closed if it doesn’t make sense.
The problem is that the model's size was much too large, so I could not see any scaling or rotation no matter how much I moved around the room. Compare the scaled size shown in the last attributes panel of the ship model, not its actual size, and then scale your own model down enough that its size is actually reasonable.
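As a rough sketch (assuming the standard ARKit template's viewDidLoad, an asset at "art.scnassets/test.scn", and a scale factor that is only a guess), shrinking the imported node in code would look something like this:

    // Load the replacement asset (path assumed to mirror the template's art.scnassets/ship.scn).
    let scene = SCNScene(named: "art.scnassets/test.scn")!

    if let modelNode = scene.rootNode.childNodes.first {
        // Assumed factor; adjust until the model has a sensible real-world size in metres.
        modelNode.scale = SCNVector3(0.01, 0.01, 0.01)
        // Place it half a metre in front of the session origin so it is easy to find.
        modelNode.position = SCNVector3(0, 0, -0.5)
    }

    sceneView.scene = scene

Alternatively, adjust the node's scale directly in the Scene Editor's attributes panel instead of in code.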
I was thinking that you need a deformer, reading the clusters etc., to be able to get the original (usually T-pose) position of the skeleton.
FBX also supports poses etc., but I have never had a file that implemented them.
To my surprise, however, when I import an FBX file with no mesh inside into 3ds Max and uncheck "animation", I get the T-pose.
Any ideas about this?
Thank you in advance.
FbxCluster has GetTransformMatrix and GetTransformLinkMatrix. The former returns the original transform of the bone (that should be used to initialize the skinning), and the latter the corresponding orientation of the skinned node. Additionally, the skin node can also have "geometric rotation". I don't think there's anything more than that.
I am trying to work out how to create an ADF, drop an object into it, and then have that object always be there when I run the app again (after localization occurs, of course). Do I have to save the locations of virtual objects to a separate file when the user is done "dropping" objects into the scene and then reload them on subsequent runs? Or is there a way to save them into the ADF?
You cannot save objects into the ADF. Instead, while loading the ADF, objects can be added (after the ADF has been recognised) at the recognised coordinates.
I did something like this and got it working, but found that the placed objects oscillate and do not end up in exactly the same place when the ADF is loaded again. This is because, whenever a Tango connection is established, that location is considered the origin (0, 0, 0) and objects get placed relative to it, so it is hard to see those objects in exactly the same places.
There's no good way to save them into the ADF unless you hack some of the ADF's metadata, and hacking the metadata is not recommended.
I did what you describe.
You have to write the coordinates of the objects into a separate file; then, when you reload the scene and your room has been recognised (thanks to the ADF), just put the objects back at the same coordinates.
Of course, every coordinate (x, y, z) must be relative to the ADF Tango pose: base = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION.
In Unity it's very simple: you just have to set "Use Area description poses" to true on your Tango ARCamera script, and do the same on your PointCloud script if you use that as well.