Copy camera viewpoint using open3d gui - python-3.9

Open3D's draw_geometries utility makes it easy to copy & paste camera parameters so that a certain viewpoint can be restored after it has been changed. It seems like this functionality should also be available when using the SceneWidget and its Open3DScene high-level scene, but I have not figured out a way to mimic this behavior.
Copying a viewpoint from draw_geometries and pasting it into a text editor reveals this information:
boundingbox_max, boundingbox_min, field_of_view, front, lookat, up, zoom
In order to get the same effect with the SceneWidget, I would have to somehow obtain this information from the scene's camera, keep a copy, and then load it again when it is needed. However, I cannot access the above properties explicitly through the camera object, nor have I found a way to set them (assuming I already have them).
The next "obvious" solution would be the camera class's copy_from method, which sounds great, except I am unable to instantiate the Camera class in order to use it.
How can I achieve this save & restore viewpoint effect?
Thanks in advance
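For illustration, here is a minimal sketch of the save-and-restore approach described above, assuming a recent Open3D (0.13 or later) where rendering.Camera exposes get_model_matrix() and get_field_of_view() and gui.SceneWidget exposes look_at(). The helper names save_viewpoint/restore_viewpoint are made up for this sketch, and scene_widget stands for an existing gui.SceneWidget:

```python
import numpy as np

def save_viewpoint(scene_widget):
    # Snapshot the current camera pose (camera-to-world matrix) plus the FoV.
    cam = scene_widget.scene.camera
    return {
        "model": np.asarray(cam.get_model_matrix()),   # 4x4 camera-to-world
        "fov": cam.get_field_of_view(),                # vertical FoV in degrees
    }

def restore_viewpoint(scene_widget, viewpoint):
    # Decompose the saved camera-to-world matrix into eye/center/up and
    # re-apply it through the widget's look_at().
    m = viewpoint["model"]
    eye = m[:3, 3]             # camera position
    forward = -m[:3, 2]        # the camera looks down its local -Z axis
    up = m[:3, 1]              # local +Y axis
    scene_widget.look_at(eye + forward, eye, up)
```

The field of view is saved only in case you also want to re-apply the projection (for example via SceneWidget.setup_camera, which also needs the geometry bounds); the look_at() call alone restores the pose but not the FoV.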

Related

WebAudio changing of orientation of Listener and/or Panner

I am trying to understand how the WebAudio API works. I have two objects, one representing the listener and one the source, and I have used the example linked below. I am able to move the source, and the sound position changes.
https://mdn.github.io/webaudio-examples/panner-node/
The methods to change the orientation are provided, namely this.panner.setOrientation and this.listener.setOrientation. My question is: if I have a source or listener object (drawn on a canvas with Three.js, so its position and rotation are known), how do I change the orientation of the panner or listener (as the case may be) from JavaScript?
An example would greatly help. Thanks
Any reason not to use THREE's PositionalAudio object? https://threejs.org/docs/index.html#api/audio/PositionalAudio. That lets you add a sound to a mesh object, and THREE will take care of moving it. If you want a source other than an AudioBuffer, just connect the audio source to the .gain of the PositionalAudio object.

Original skeleton position (FBX import)

I was under the impression that you need a deformer, reading the clusters etc., to be able to get the original (usually T-pose) position of the skeleton.
FBX also supports poses, but I have never had a file that implemented them.
To my surprise, however, when I import an FBX file with no mesh inside into 3ds Max and uncheck "animation", I get the T-pose.
Any idea how that works?
Thank you in advance
FbxCluster has GetTransformMatrix and GetTransformLinkMatrix. The former returns the bind-time transform of the node containing the skinned geometry (which should be used to initialize the skinning), and the latter the corresponding bind-time transform of the linked bone node. The skinned node can also have a "geometric rotation". I don't think there's anything more than that.
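For reference, here is a minimal sketch of reading those two matrices, written against the FBX Python bindings (the same calls exist in the C++ SDK). It assumes mesh is an FbxMesh that already has a skin deformer attached; print_bind_pose is just an illustrative helper name:

```python
from fbx import FbxAMatrix, FbxDeformer

def print_bind_pose(mesh):
    # Walk every skin deformer and every cluster (one cluster per bone).
    for i in range(mesh.GetDeformerCount(FbxDeformer.eSkin)):
        skin = mesh.GetDeformer(i, FbxDeformer.eSkin)
        for j in range(skin.GetClusterCount()):
            cluster = skin.GetCluster(j)
            mesh_bind = cluster.GetTransformMatrix(FbxAMatrix())      # skinned geometry at bind time
            bone_bind = cluster.GetTransformLinkMatrix(FbxAMatrix())  # bone (link) at bind time
            bone = cluster.GetLink()
            t = bone_bind.GetT()
            print(bone.GetName(), (t[0], t[1], t[2]))
```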

Persistent object in ADF

I am trying to formulate how to create an ADF, drop an object in it, then have that object always be there when I run the app again, after localization occurs, of course. Do I have to save off the locations of virtual objects into a separate file when the user is done "dropping" objects into the scene and then reload them on subsequent runs? Or is there a way to save them into the ADF?
Objects cannot be saved in the ADF itself; instead, when the ADF is loaded (and recognized), objects can be added back at the recognized coordinates.
I did something like this and got it working, but I found that placed objects oscillate and do not end up in exactly the same place when the ADF is loaded again. Whenever the Tango connection is established, that location is treated as the origin (0, 0, 0) and objects are placed relative to this origin, so it is hard to get them back in exactly the same places.
There's no good way to save it into the ADF unless you hack some of the ADF's metadata, and hacking the metadata is not recommended.
I did what you describe.
You have to write the objects' coordinates to a separate file; then, when you reload the scene and your room has been recognized (thanks to the ADF), just put the objects back at the same coordinates.
Of course, every coordinate (x, y, z) must be expressed relative to the ADF Tango pose: base = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION
In Unity it's very simple: just set "Use Area Description Poses" to true on your ARCamera Tango script, and do the same on your PointCloud script if you use one.

SceneKit and NSKeyedArchiver

I am trying to archive my SceneKit scene using NSKeyedArchiver so that I can save it and restore it at a later date. I am finding that the restored scene seems to ignore/lose the SCNTransformConstraints I have added to various SCNNodes. This results in the nodes being placed in the wrong place.
I am wondering if this is by design or a bug? Or am I doing something wrong?
Any pointers would be appreciated.
SCNTransformConstraint works with a block provided by the client of the API, and blocks can't be archived with NSKeyedArchiver.
You need to archive another object instead, one that is able to reconstruct the block and re-assign the transform constraint.

Qt Animation: Appearing & Disappearing Objects

I'm writing a video annotation application with Qt4 in which users need to be able to seek to various points in a video, put markers on various objects, and then set keypoints for those markers so that they stay on the objects as they move around in the video. QGraphicsItemAnimation seems like a great place to start for these markers; however, they need to be able to appear and disappear at specific times, which I can't figure out how to do with QGraphicsItemAnimation. I could set the scale to 0 to make the objects disappear, but that seems like a pretty hacky solution, and I'm guessing the paint engine would still waste CPU cycles trying to draw those invisible objects. Does anyone have a better solution? I'm using Qt 4.5.3 right now, but I'm willing to upgrade to 4.6 if it makes things easier. Thanks!!
It seems like the functionality you want, showing/hiding QGraphicsItem objects, is beyond the scope of the simple "tweening" that the animation class performs. It handles only one object at a time, and any appearance or disappearance you have to write yourself.
You still might get some mileage out of QGraphicsItemAnimation (although the fact that it uses its own timer instead of being locked to the frame clock of your video is a little dodgy).
Neglecting "seeking" for a moment, there is a QTimeLine::finished() signal. If you let the end of an annotation's active animation timeline represent the point where you want it to disappear, you can trigger QGraphicsItem::hide() at that point. When it comes time to turn it back on, you would construct a new QGraphicsItemAnimation (based on the next run of keyframe data for that object) and call QGraphicsItem::show().
Note that one of the headlining features of Qt 4.6 is the new Animation Framework, which is more sophisticated but also rather complex. I've not used it yet, but looking over the examples it seems like you might be able to "animate" a visibility or opacity property.
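To make the finished()/hide()/show() idea above concrete, here is a rough sketch in PyQt4 syntax (the question doesn't say whether the project is C++ or Python, so read it as pseudocode for the C++ case if need be). marker is assumed to be a QGraphicsItem already added to the scene, and run_segment is just an illustrative helper:

```python
from PyQt4.QtCore import QTimeLine
from PyQt4.QtGui import QGraphicsItemAnimation

def run_segment(marker, keyframes, duration_ms):
    """Animate one visible segment; keyframes is a list of (t in 0..1, QPointF) pairs."""
    timeline = QTimeLine(duration_ms)
    animation = QGraphicsItemAnimation()
    animation.setItem(marker)
    animation.setTimeLine(timeline)
    for t, pos in keyframes:
        animation.setPosAt(t, pos)

    marker.show()                           # the marker appears for this segment
    timeline.finished.connect(marker.hide)  # ...and disappears when the segment ends
    timeline.start()
    return timeline, animation              # keep references alive while running
```

When it is time for the marker to reappear, you would call run_segment again with the next run of keyframe data, which shows the item and builds a fresh QGraphicsItemAnimation for that stretch.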
