We have followed the step-by-step tutorials for the Model Derivative APIs. I can use the /manifest endpoint to get the object tree, and I can also create jobs to get a .obj file with the geometry of some objects. However, there is no way to know which geometry belongs to which object.
I could potentially create a job for each object, but that would require thousands of jobs per model, which seems excessively inefficient. Are we missing something? Any pointers on how to get the geometry for objects while retaining the mapping between them?
The Viewer will extract the information, but as of today we don't have documentation for SVF, so it may be trickier to analyze the geometry from that file. You may use JavaScript, along the lines of what is described in this sample or this sample.
You can extract an OBJ for each element, but that may require a big number of jobs. In that case, your own code would then process the resulting .obj files.
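For what it's worth, the per-element OBJ extraction goes through the regular Model Derivative POST /job request with the OBJ "advanced" options. Here is a minimal Python sketch of that call; the token, URN, model view GUID (from the metadata endpoint) and dbIds are placeholder values you would substitute from your own application:

import requests

# Placeholders -- substitute your own access token, base64 design URN,
# model view GUID (GET .../designdata/{urn}/metadata) and dbIds from the object tree.
ACCESS_TOKEN = "..."
URN = "dXJuOmFkc2sub2JqZWN0cz..."
MODEL_GUID = "model-view-guid"
OBJECT_IDS = [1234, 1235]

job = {
    "input": {"urn": URN},
    "output": {
        "formats": [{
            "type": "obj",
            "advanced": {
                "modelGuid": MODEL_GUID,
                # Only the listed dbIds are translated, so the OBJ output of each
                # job can be mapped back to the objects that were requested.
                "objectIds": OBJECT_IDS,
            },
        }],
    },
}

resp = requests.post(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
    json=job,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
print(resp.json())  # then poll the manifest until the OBJ derivatives are ready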
Using Sphinx's domain-aware ObjectDescription directives, I can create fancy rendered documentation for my objects. For example:
.. py:function:: pyfunc()

   Describes a Python function.
This renders the content in a nice way, and it works really well with module indices, references and so on. Cool so far!
Now, let's say I have that directive in a source document src/mymodule/functions.rst and a bunch of text in src/guide/getting-started.rst. I can reference the object like
:py:func:`pyfunc`
Also cool!
Now, my actual question: could I also tell the Sphinx writer to re-render the same documentation snippet for that object? The goal is to save the reader from having to navigate away from the Getting Started page when I just want to include a single piece of content again.
What I've tried to do:
Simply copy the contents. This results in a warning that the object is defined multiple times, hurts the index and, if you're unlucky, means references don't point to the "authoritative" place in your project. Not okay.
Document each object in its own file and then use .. include:: rel/path/to/pyfunc.rst in each document where I want to render it. As those includes are literal at the ReST level, this has the same downsides as the option above. :-(
Thus, I'm looking for a solution where I tell the renderer/writer of Sphinx to simply re-render the contents of a reference instead of producing a link. A simple re-render should not add another entry to the index.
I'm okay with a custom extension or a domain-specific custom solution - I'm already using my own custom domain, but I just used the general Python domain above as a well-known example.
Context for the use case: I'm building a Protobuf domain. Protobuf messages and enums are reused a lot, and I would like to show the context of commonly reused objects inline on pages where this is useful to the reader. This means content is repeated across the whole project on purpose, where it is deemed useful, rather than making the reader navigate away all the time. Yet only the reference page should be "authoritative".
I've been successful with a dirty hack: abusing the XRef role logic. Cross-references in Sphinx render dynamically (e.g. Table 23) by producing arbitrary 'nodes'. By:
keeping a copy of the parent node during parsing in a custom Domain
registering a custom Sphinx/ReST XRef role to render a whole set of nodes (the saved parent node)
re-running the ReferencesResolver
... this basically does what I need. But yuck, it's rather ugly.
Working example: I implemented this in a Protobuf Domain extension.
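If it helps anyone, here is a heavily simplified Python sketch of the underlying idea, stripped of the XRef/ReferencesResolver machinery: a directive that re-inserts a deep copy of nodes stashed away when the authoritative page was parsed. The rerender directive name, the RENDERED_OBJECTS store and the way it gets filled are all hypothetical placeholders, not real Sphinx API:

from docutils import nodes
from docutils.parsers.rst import Directive


# Hypothetical stash: the real ObjectDescription/Domain would put the parsed
# content node in here (keyed by object name) while rendering the authoritative page.
RENDERED_OBJECTS = {}


class ReRender(Directive):
    """Insert a deep copy of an already-documented object's nodes.

    Only copies are inserted, so nothing new is registered in the domain or
    the index, and references keep resolving to the authoritative page.
    """

    required_arguments = 1

    def run(self):
        name = self.arguments[0]
        stored = RENDERED_OBJECTS.get(name)
        if stored is None:
            return [self.state_machine.reporter.warning(
                "no stored rendering for %r" % name, line=self.lineno)]
        copy = stored.deepcopy()
        # Drop ids on the copy so the targets on the authoritative page stay unique.
        for node in copy.traverse(nodes.Element):
            node['ids'] = []
        return [copy]


def setup(app):
    app.add_directive("rerender", ReRender)
    return {"version": "0.1"}

In a real extension the substitution is better done in a post-transform (which is roughly what re-running the ReferencesResolver achieves), so the definition is guaranteed to have been parsed before the copy is made.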
We have an ImageViewer based on FHIR and we want to annotate certain images within a study.
We haven't found an adequate resource type for storing annotations like linear measurements, ellipse areas, angles, text, etc.
On top of that, the annotations shall point to a specific series/image pair, rather than the whole study.
What is the recommendation here? To create an extension on ImagingStudy (and have to update the whole resource for each annotation CRUD operation), or is there a separate resource type you would suggest?
FHIR ImagingStudy is designed to work with DICOM imaging (and especially RESTful DICOMweb) and we tried not to replicate capabilities between the two. Among other reasons, this prevents some systems from only being able to see part of the imaging record.
Therefore, annotations and image flagging should be handled using DICOM presentation states (e.g., GSPS) and Key Image Notes. The ImagingStudy can reference those instances and request images be rendered appropriately.
We are always looking for feedback on missing use cases, so please let us know on chat.fhir.org if we’ve left a gap.
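To make that concrete, here is a rough sketch (FHIR R4, with made-up UIDs) of how a GSPS presentation state sits alongside the images as just another DICOM instance that ImagingStudy can list; the GSPS object itself carries the reference to the specific image(s) it annotates:

# Rough sketch, FHIR R4 -- every UID below is made up.
imaging_study = {
    "resourceType": "ImagingStudy",
    "status": "available",
    "subject": {"reference": "Patient/example"},
    "series": [
        {   # the image series being annotated
            "uid": "1.2.3.4.5.1",
            "modality": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                         "code": "CT"},
            "instance": [{
                "uid": "1.2.3.4.5.1.1",
                "sopClass": {"system": "urn:ietf:rfc:3986",
                             "code": "urn:oid:1.2.840.10008.5.1.4.1.1.2"},  # CT Image Storage
            }],
        },
        {   # the presentation-state series carrying the annotation
            "uid": "1.2.3.4.5.2",
            "modality": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                         "code": "PR"},
            "instance": [{
                "uid": "1.2.3.4.5.2.1",
                "sopClass": {"system": "urn:ietf:rfc:3986",
                             # Grayscale Softcopy Presentation State Storage
                             "code": "urn:oid:1.2.840.10008.5.1.4.1.1.11.1"},
                "title": "Linear measurement on image 1.2.3.4.5.1.1",
            }],
        },
    ],
}

The graphic annotations themselves (lines, ellipses, text) live inside the presentation state object; the FHIR resource only points at that instance.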
I have a huge amount of data already stored in STL containers. It feels very wasteful to make a full copy of all that data into a Gtk::ListStore. Also, my data structure contains big gaps in some rows, which could simply be filled with a default value if the corresponding line of the view becomes visible. Storing tons of default values in a model is also needless overhead.
For this I thought it would be easy to set up my own model, which would then provide some entry points/callbacks where I can simply supply my own data from my containers.
But I cannot find a single line of documentation on how to do that.
Here is a standard TreeView, but no idea how to adapt it to my own model:
https://developer.gnome.org/gtkmm-tutorial/stable/sec-treeview-examples.html.en
In https://developer.gnome.org/gtkmm-tutorial/stable/sec-treeview-model.html.en I found the beautiful sentence
Although you can theoretically implement your own Model,
you will normally use either the ListStore or TreeStore model classes.
After that I took a look into the sources... well, because gtkmm is only a wrapper over the C code, it is not simply a matter of deriving from a class and overriding some methods. It looks more like you have to drop down to the underlying C code, if my interpretation of the code I saw is correct.
Anyway, is there any chance of getting a few lines of code where the ListStore is replaced by a custom model?
I am using the Three.js Object3D userData property to store information from a MySQL database, serialized into JSON pairs, which gives me the data to perform various actions when selecting objects that represent saved data. It seems to work nicely for a few pairs.
I note from the reference a warning not to store references to functions, as they will not be cloned. Can anyone tell me if there are any other limitations to this property (number of pairs, hierarchical data, etc.)? I want to store 2-3000 words of text, images, blobs, etc., but prefer to ask rather than go by trial and error. The docs are a little sparse on such matters.
Many thanks... James
No, there are no special limitations. It is simply a JavaScript object:
https://github.com/mrdoob/three.js/blob/0fbc8afb348198e4924d9805d1d4be5869264418/src/core/Object3D.js#L85
this.userData = {};
So while your object is in memory, you can put any JavaScript values there. The only limitations are the ones you always have, basically the available memory. JavaScript objects can contain any types and any hierarchy, so you're fine there.
I used this search to check this in the codebase: https://github.com/mrdoob/three.js/search?utf8=%E2%9C%93&q=userdata
I'm trying to figure out how to decide when to use NSDictionary versus NSCoder/NSCoding.
It seems that for general property lists and the like, NSDictionary is the easy way to go: it generates XML files that are easily editable outside of the application.
When dealing with custom classes that hold data, or possibly other custom classes nested inside, it seems like NSCoder/NSCoding would be the better route, since it will step through all the contained objects and encode them as well when an archive command is used.
With NSDictionary it seems it would take more work to flatten all the properties or data characteristics to a single level to be able to save them, whereas NSCoder/NSCoding automatically encodes nested custom classes that implement the NSCoding protocol.
Other than the archive being binary data that isn't editable outside of your application, is there a real reason to use one over the other? And along those lines, is there an indicator of which way you should lean between the two? Am I missing something obvious?
Apple's documentation on object graphs has this to say:
Mac OS X serializations store a simple hierarchy of value objects, such as dictionaries, arrays, strings, and binary data. The serialization only preserves the values of the objects and their position in the hierarchy. Multiple references to the same value object might result in multiple objects when deserialized. The mutability of the objects is not maintained.
…
Mac OS X archives store an arbitrarily complex object graph. The archive preserves the identity of every object in the graph and all the relationships it has with all the other objects in the graph. When unarchived, the rebuilt object graph should, with few exceptions, be an exact copy of the original object graph.
The way I interpret this is that, if you want to store simple values, serialization (using an NSDictionary, for example) is a fine way to go. If you want to store an object graph of arbitrary types, with uniqueness and mutability preserved, using archives (with NSCoder, for example) is your best bet.
You may also want to read Apple's Archives and Serializations Programming Guide for Cocoa, of which the aforelinked page on object graphs is a part, as it covers this topic well.
I am NOT a big fan of using NSCoding/NSCoder/NSArchiver (we need to pick a name!) to serialise an object graph to a file.
Archives created in this way are incredibly fragile. If you save an object of class Foo, then by golly you need to make sure that when you load the data back in, you have a class Foo in your application.
This makes NSCoder-based serialisation difficult from the perspective of sharing files with other applications, or even forward compatibility with future versions of your own application.
I forgot to list what I would recommend.
NSCoding can be ok in certain situations: if you're just doing something quick and simple (although you do have to write a lot of code - two methods per class to be serialised). It can also be ok if you're not worried about compatibility with other applications.
Export/import via property lists (perhaps using the NSPropertyListSerialization class) is a fine solution. XML-based plists are easy to create and edit. The main advantage of plists is that you're not tying the file format to just your application.
You can also create your own XML-based file format and read/write it using the NSXMLDocument API and friends. This really isn't much more work than using property lists.
I think you're a bit confused: NSDictionary is a data structure, and it also happens to implement the NSCoding protocol. So in essence, you could either put all your data into an NSDictionary and have that encode itself later on, or you can implement the NSCoding protocol yourself and encode your object tree using the NSCoder API. The output of your encoding depends on the type of NSCoder object passed into the encodeWithCoder: method.