From here I have this:
If differential constraints are specified in this structure, they are applied to the base in a "differential" fashion. If there is no base, then the differential constraints cannot be provided (snapshot only). Differential structures are useful for the editing perspective, and snapshot structures are suitable for operational use. The FHIR Project provides a number of tools/services to populate snapshots from differential constraints. Logical Models have a base of "Element" or another logical model.
and there is this question: "In FHIR StructureDefinitions (profiles), how do elements aggregate into a snapshot?"
which covers a very technical description of the transformation, but I'm still lost.
What is the practical implication as an implementer? Can I just take the snapshot and ignore the differential?
And are there practical examples of where there is a difference?
Implementers typically care about the snapshot - "what is actually allowed". Designers care about the differential - "how are the constraints here different from the parent". Given the base, you can generate one from the other, but it's computationally expensive and systems won't necessarily have the base. So we transmit both perspectives to ensure that the instance can be consumed by both design/rendering tools and by software.
From a "read" perspective, feel free to determine which of the two your system needs to care about and ignore the other. If you're creating instances though, you'll need to populate both. (On the positive side, most of the reference implementations have the logic to generate one from the others, so you can still focus on the one you care about and largely ignore the other.)
I have created a big ontology (.owl) and I'm now at the reasoning step. The problem is how to ensure scalable reasoning over my ontology. I searched the literature and found that Big Data techniques could be an adequate solution. Unfortunately, I also found that MapReduce cannot accept an OWL file as input, and that semantic languages such as SWRL and SPARQL cannot be used.
My questions are:
Should I convert the OWL file to another format?
How do I transform rules (SWRL, for example) into a format acceptable to MapReduce?
Thanks
"Big data can be an adequate solution to that" is too simple a statement for this problem.
Ensuring scalability of OWL ontologies is a very complex issue. The main variables involved are the number of axioms and the expressivity of the ontology; however, these are not always the most important characteristics. A lot depends also on the API used and, for APIs where the reasoning step is separate from parsing, on which reasoner is being used.
SWRL rules add another level of complexity, as they can be of (almost) arbitrary complexity - so it is not possible to guarantee scalability in general. For specific ontologies and specific sets of rules, it is possible to provide better estimates.
A translation to a MapReduce format might help, but there is no standard transformation as far as I'm aware, and it would be quite complex to guarantee that the transformation preserves the semantics of the ontology and of the rule entailments. So the task would amount to rewriting the data in a way that allows you to answer the queries you need to run, but this might prove impossible, depending on the specific ontology.
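To give a feel for what such a rewrite looks like: systems like WebPIE materialize entailments by encoding individual inference rules as MapReduce jobs over plain triples. Here is a toy sketch (Python standing in for a real Hadoop job; the function names and the single-rule scope are mine) of the RDFS subclass-transitivity rule:

    # Toy sketch: one round of the RDFS subclass-transitivity rule
    # (A subClassOf B, B subClassOf C  =>  A subClassOf C)
    # expressed map/reduce style over plain (s, p, o) triples.
    # A real system would iterate this job to a fixpoint.

    from collections import defaultdict

    SUBCLASS = "rdfs:subClassOf"

    def map_phase(triples):
        # Key each subClassOf triple by both ends so the reducer can join them.
        for s, p, o in triples:
            if p == SUBCLASS:
                yield (o, ("left", s))   # ... subClassOf o
                yield (s, ("right", o))  # s subClassOf ...

    def reduce_phase(grouped):
        # Join on the shared class: left=A (A sub key), right=C (key sub C).
        for key, values in grouped.items():
            lefts = [v for tag, v in values if tag == "left"]
            rights = [v for tag, v in values if tag == "right"]
            for a in lefts:
                for c in rights:
                    if a != c:
                        yield (a, SUBCLASS, c)

    triples = [("Dog", SUBCLASS, "Mammal"), ("Mammal", SUBCLASS, "Animal")]
    grouped = defaultdict(list)
    for k, v in map_phase(triples):
        grouped[k].append(v)
    print(list(reduce_phase(grouped)))  # [('Dog', 'rdfs:subClassOf', 'Animal')]

Note how the OWL/SWRL semantics has to be re-expressed rule by rule; that per-rule translation effort is exactly the complexity mentioned above.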
On the other hand, what is the size of this ontology, and how much memory have you allocated to the task?
Can someone explain to me, simply, the main differences between Operational Transform and CRDT?
As far as I understand, both are algorithms that permit data to converge without conflict on different nodes of a distributed system.
In which usecase would you use which algorithm?
As far as I understand, OT is mostly used for text, while CRDTs are more general and can handle more advanced structures, right?
Is CRDT more powerful than OT?
I ask this question because I am trying to see how to implement a collaborative editor for HTML documents, and I am not sure in which direction to look first. I saw the ShareJS project and its attempts to support rich-text collaboration in the browser on contenteditable elements. Nowhere in ShareJS do I see any attempt to use CRDTs for that.
We also know that Google Docs uses OT, and it works pretty well for real-time editing of rich documents.
Is Google's choice of OT because CRDTs were not well known at the time? Or would OT still be a good choice today?
I'm also interested in other use cases, like using these algorithms in databases. Riak seems to use CRDTs. Can OT be used to sync the nodes of a database too, as an alternative to Paxos/Zab/Raft?
Both approaches are similar in that they provide eventual consistency. The difference is in how they do it. One way of looking at it is:
OT does it by changing operations. Operations are sent over the wire and concurrent operations are transformed once they are received.
CRDTs do it by changing state. Operations are made on the local CRDT. Its state is sent over the wire and is merged with the state of a copy. It doesn't matter how many times or in what order merges are made - all copies converge.
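For intuition, here is a minimal state-based CRDT, a grow-only counter, sketched in Python (one of the simplest CRDT designs; text-editing CRDTs like Treedoc are far more involved). Its merge is commutative, associative and idempotent, which is exactly why replicas converge regardless of message order or duplication:

    # Minimal state-based CRDT: a grow-only counter (G-Counter).
    # Each replica increments only its own slot; merge takes the
    # element-wise max, so it is commutative, associative, idempotent.

    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}                  # replica_id -> count

        def increment(self):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

        def merge(self, other):
            for rid, n in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), n)

        def value(self):
            return sum(self.counts.values())

    a, b = GCounter("a"), GCounter("b")
    a.increment(); a.increment()
    b.increment()
    a.merge(b); b.merge(a)                    # merge order doesn't matter
    assert a.value() == b.value() == 3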
You're right: OT is mostly used for text and does predate CRDTs, but research shows that:
many OT algorithms in the literature do not satisfy convergence properties
unlike what was stated by their authors
In other words CRDT merging is commutative while OT transformation functions sometimes are not.
From the Wikipedia article on CRDT:
OTs are generally complex and non-scalable
There are different kinds of CRDTs (sets, counters, ...) suited for different kinds of problems. There are some that are designed for text editing. For example, Treedoc - A commutative replicated data type for cooperative editing.
Another notable difference is that:
OT requires a central server for coordination.
CRDTs can adopt any network topology, like P2P over WebRTC, and are resilient to network partitions, which makes them decentralized.
Reference: https://youtu.be/B5NULPSiOGw?t=643 by Martin Kleppmann, author of "Designing Data-Intensive Applications".
I understand almost all of the interaction types specified by the SCORM data model element cmi.interactions.n.type (true_false, multiple_choice, fill_in, long_fill_in, matching, performance, sequencing, likert, numeric, other); it remains for me to understand the performance type. I found an explanation by Ostyn, but it is still ambiguous.
The Performance interaction is the most flexible and rich of the
standard interaction types in SCORM. It allows the capture of a number
of arbitrary steps performed by a learner, along with information
about every step. (Claude Ostyn)
AFAIK it does exactly that, i.e. it stores arbitrary data related to an ambiguous, non-standard interaction (e.g. a 3D simulation). LMSs are not supposed to do anything with interaction data anyway, at least not regarding completion and grading, so it is mostly used by instructional designers who need deeper insight into what the learners are doing so they can adjust the training, e.g. exercise difficulty.
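To make that concrete, a performance interaction is recorded as a list of step/answer records. Here is a rough sketch, with a hypothetical set_value() standing in for the SCORM 2004 JavaScript API's SetValue() call (the [,] and [.] delimiters follow, as far as I recall, the SCORM 2004 record-list format; the interaction id and step values are invented):

    # Hypothetical set_value() stands in for the SCORM 2004 JS API's SetValue().
    def set_value(element, value):
        print(f"SetValue({element!r}, {value!r})")

    # Record a simulation attempt as a performance interaction:
    # each record is "step_name[.]step_answer", records joined with "[,]".
    steps = [("check_pressure", "ok"), ("open_valve", "too_early"),
             ("shutdown", "completed")]
    response = "[,]".join(f"{name}[.]{answer}" for name, answer in steps)

    set_value("cmi.interactions.0.id", "pump_sim_attempt_1")
    set_value("cmi.interactions.0.type", "performance")
    set_value("cmi.interactions.0.learner_response", response)
    set_value("cmi.interactions.0.result", "incorrect")

The LMS just stores these strings; it is the designer who later reads them back to see which step went wrong.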
I am working through a particular type of code testing that is rather nettlesome and could be automated, yet I'm not sure of the best practices. Before describing the problem, I want to make clear that I'm looking for the appropriate terminology and concepts, so that I can read more about how to implement it. Suggestions on best practices are welcome, certainly, but my goal is specific: what is this kind of approach called?
In the simplest case, I have two programs that take in a bunch of data, produce a variety of intermediate objects, and then return a final result. When tested end-to-end, the final results differ, hence the need to find out where the differences occur. Unfortunately, even intermediate results may differ, but not always in a significant way (i.e. some discrepancies are tolerable). The final wrinkle is that intermediate objects may not necessarily have the same names between the two programs, and the two sets of intermediate objects may not fully overlap (e.g. one program may have more intermediate objects than the other). Thus, I can't assume there is a one-to-one relationship between the objects created in the two programs.
The approach that I'm thinking of taking to automate this comparison of objects is as follows (it's roughly inspired by frequency counts in text corpora; a code sketch follows the list):
For each program, A and B: create a list of the objects created throughout execution, which may be indexed in a very simple manner, such as a001, a002, a003, a004, ... and similarly for B (b001, ...).
Let Na be the number of unique object names encountered in A; similarly Nb for the number of objects in B.
Create two tables, TableA and TableB, with Na and Nb columns, respectively. Entries will record a value for each object at each trigger (i.e. for each row, defined next).
For each assignment in A, the simplest approach is to capture the hash value of all of the Na items; of course, one can use LOCF (last observation carried forward) for those items that don't change, and any as-yet unobserved objects are simply given a NULL entry. Repeat this for B.
Match entries in TableA and TableB via their hash values. Ideally, objects will arrive in the "vocabulary" in approximately the same order, so that order and hash value together will allow one to identify the sequences of values.
Find discrepancies between the objects in A and B based on when their sequences of hash values diverge.
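Here is a bare-bones sketch of steps 4-6 in Python (all names hypothetical; the repr()-based hashing is a stand-in for a canonical serialization, which is exactly where the numerical-precision issue discussed next bites):

    # Bare-bones sketch of the trace-and-compare idea (steps 4-6).
    # Each "trigger" appends a row of per-object hashes; unchanged
    # objects carry their last value forward (LOCF), unseen ones are None.

    import hashlib

    def stable_hash(obj):
        # repr() is a stand-in; real code needs a canonical serialization.
        return hashlib.sha256(repr(obj).encode()).hexdigest()[:12]

    def record(table, names, env):
        row = dict(table[-1]) if table else {n: None for n in names}
        for n in names:
            if n in env:
                row[n] = stable_hash(env[n])
        table.append(row)

    def first_divergence(table_a, table_b, pairs):
        # pairs maps object names in A to their counterparts in B.
        for i, (ra, rb) in enumerate(zip(table_a, table_b)):
            for na, nb in pairs.items():
                if ra[na] != rb[nb]:
                    return i, na, nb
        return None

    table_a, table_b = [], []
    record(table_a, ["a001"], {"a001": [1, 2, 3]})
    record(table_b, ["b001"], {"b001": [1, 2, 4]})   # diverges here
    print(first_divergence(table_a, table_b, {"a001": "b001"}))
    # (0, 'a001', 'b001')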
Now, this is a simple approach and could work wonderfully if the data were simple, atomic, and not susceptible to numerical precision issues. However, I believe that numerical precision may cause hash values to diverge, though the impact is insignificant if the discrepancies are approximately at the machine tolerance level.
First: What is a name for such types of testing methods and concepts? An answer need not necessarily be the method above, but reflects the class of methods for comparing objects from two (or more) different programs.
Second: What standard methods exist for what I describe in steps 3 and 4? For instance, the "value" need not only be a hash: one might also store the sizes of the objects - after all, two objects cannot be the same if they are massively different in size.
In practice, I tend to compare a small number of items, but I suspect that when automated this need not involve a lot of input from the user.
Edit 1: This paper is related in terms of comparing execution traces; it mentions "code comparison", which is related to my interest, though I'm more concerned with the data (i.e. the objects) than with the actual code that produces the objects. I've just skimmed it, but will review it more carefully for methodology. More importantly, it suggests that comparing code traces may be extended to comparing data traces. This paper analyzes some comparisons of code traces, albeit in a wholly unrelated area, security testing.
Perhaps data-tracing and stack-trace methods are related. Checkpointing is slightly related, but its typical use (i.e. saving all of the state) is overkill.
Edit 2: Other related concepts include differential program analysis and monitoring of remote systems (e.g. space probes) where one attempts to reproduce the calculations using a local implementation, usually a clone (think of a HAL-9000 compared to its earth-bound clones). I've looked down the routes of unit testing, reverse engineering, various kinds of forensics, and whatnot. In the development phase, one could ensure agreement with unit tests, but this doesn't seem to be useful for instrumented analyses. For reverse engineering, the goal can be code & data agreement, but methods for assessing fidelity of re-engineered code don't seem particularly easy to find. Forensics on a per-program basis are very easily found, but comparisons between programs don't seem to be that common.
(Making this answer community wiki, because dataflow programming and reactive programming are not my areas of expertise.)
The area of data flow programming appears to be related, and thus debugging of data flow programs may be helpful. This paper from 1981 gives several useful high level ideas. Although it's hard to translate these to immediately applicable code, it does suggest a method I'd overlooked: when approaching a program as a dataflow, one can either statically or dynamically identify where changes in input values cause changes in other values in the intermediate processing or in the output (not just changes in execution, if one were to examine control flow).
Although dataflow programming is often related to parallel or distributed computing, it seems to dovetail with Reactive Programming, which is how the monitoring of objects (e.g. the hashing) can be implemented.
This answer is far from adequate, hence the CW tag, as it doesn't really name the debugging method that I described. Perhaps this is a form of debugging for the reactive programming paradigm.
[Also note: although this answer is CW, if anyone has a far better answer in relation to dataflow or reactive programming, please feel free to post a separate answer and I will remove this one.]
Note 1: Henrik Nilsson and Peter Fritzson have a number of papers on debugging for lazy functional languages, which are somewhat related: the debugging goal is to assess values, not the execution of code. This paper seems to have several good ideas, and their work partially inspired this paper on a debugger for a reactive programming language called Lustre. These references don't answer the original question, but may be of interest to anyone facing this same challenge, albeit in a different programming context.
Generally speaking what do you get out of extending an artificial neural net by adding more nodes to a hidden layer or more hidden layers?
Does it allow for more precision in the mapping, or does it allow for more subtlety in the relationships it can identify, or something else?
There's a very well known result in machine learning that states that a single hidden layer is enough to approximate any smooth, bounded function (the paper was called "Multilayer feedforward networks are universal approximators" and it's now almost 20 years old). There are several things to note, however.
The single hidden layer may need to be arbitrarily wide.
This says nothing about the ease with which an approximation may be found; in general, large networks are hard to train properly and fall victim to overfitting quite frequently (the exception is so-called "convolutional neural networks", which really are only meant for vision problems).
This also says nothing about the efficiency of the representation. Some functions require an exponential number of hidden units if done with one layer, but scale much more nicely with more layers (for more discussion, read Scaling Learning Algorithms Towards AI).
The problem with deep neural networks is that they're even harder to train. You end up with very, very small gradients being backpropagated to the earlier hidden layers, so the learning doesn't really go anywhere, especially if the weights are initialized to be small (if you initialize them with larger magnitudes you frequently get stuck in bad local minima). There are some techniques for "pre-training", like the ones discussed in this Google tech talk by Geoff Hinton, which attempt to get around this.
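As a quick numerical illustration of that vanishing-gradient effect (a toy sketch with numpy: random weights, no training, just watching the backpropagated gradient norm shrink layer by layer):

    # Numerical sketch of the vanishing-gradient problem: backprop through
    # many sigmoid layers multiplies small derivatives, so the gradient
    # reaching early layers shrinks roughly geometrically.

    import numpy as np

    rng = np.random.default_rng(0)
    layers, width = 20, 50
    x = rng.standard_normal(width)
    weights = [rng.standard_normal((width, width)) * 0.1 for _ in range(layers)]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Forward pass, remembering activations.
    activations = [x]
    for W in weights:
        activations.append(sigmoid(W @ activations[-1]))

    # Backward pass with a dummy unit gradient at the output.
    grad = np.ones(width)
    norms = []
    for W, a in zip(reversed(weights), reversed(activations[1:])):
        grad = W.T @ (grad * a * (1 - a))   # sigmoid'(z) = a * (1 - a)
        norms.append(np.linalg.norm(grad))

    print([f"{n:.2e}" for n in norms[::5]])  # norms shrink layer by layer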
This is a very interesting question, but it's not so easy to answer. It depends on the problem you are trying to solve and on the type of neural network you are trying to use. There are several neural network types.
In general, it's not so clear that more nodes equal more precision. Research shows that in most cases you need only one hidden layer. The number of nodes should be the minimal number required to solve the problem. If you don't have enough of them, you will not reach a solution.
On the other hand, once you have reached the number of nodes sufficient to solve the problem, you can add more and more of them and you will not see any further progress in result estimation.
That's why there are so many types of neural networks: they try to solve different types of problems. So you have NNs for static problems, for time-related problems, and so on. The number of nodes is not as important as their design.
The point of a hidden layer is that you are creating combined features of the input. So, is the problem better tackled by more features of the existing input, or through higher-order features that come from combining existing features? This is the trade-off for a standard feed-forward network.
You have a theoretical reassurance that any continuous function can be approximated by a neural network with two hidden layers and non-linear activations.
Also, consider using additional resources for boosting, instead of adding more nodes, if you're not certain of the appropriate topology.
Very rough rules of thumb
generally more elements per layer for bigger input vectors.
more layers may let you model more non-linear systems.
If the kind of network you are using has delays in propagation, more layers may allow modelling of time series. Take care to have time jitter in the delays or it won't work very well. If this is just gobbledegook to you, ignore it.
More layers let you insert recurrent features. These can be very useful for discrimination tasks. Your ANN implementation may not permit this.
HTH
The number of units per hidden layer accounts for the ANN's potential to describe an arbitrarily complex function. Some (complicated) functions may require many hidden nodes, or possibly more than one hidden layer.
When a function can be roughly approximated by a certain number of hidden units, any extra nodes will provide more accuracy... but this is only true if the training samples used are enough to justify the addition - otherwise what happens is "overconvergence". Overconvergence means that your ANN has lost its generalization ability because it has overemphasized the particular samples.
In general, it is best to use the fewest hidden units possible, provided the resulting network gives good results. The additional training patterns required to justify more hidden nodes cannot easily be found in most cases, and accuracy is not the NNs' strong point.