Performance interaction in SCORM

I understand almost all of the interaction types specified by the SCORM data model element cmi.interactions.n.type (true_false, multiple_choice, fill_in, long_fill_in, matching, performance, sequencing, likert, numeric, other); it remains for me to understand the performance type. I found an explanation by Ostyn, but it remains ambiguous.
The Performance interaction is the most flexible and rich of the
standard interaction types in SCORM. It allows the capture of a number
of arbitrary steps performed by a learner, along with information
about every step. (Claude Ostyn)

AFAIK it does exactly that, i.e. it stores arbitrary data related to a non-standard interaction (e.g. a 3D simulation). LMSs are not supposed to do anything with interaction data anyway, at least not regarding completion and grading, so it is mostly used by instructional designers who need deeper insight into what learners are doing so they can adjust the training, e.g. exercise difficulty.
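For illustration, here is a rough sketch of what a SCO might report for a single performance interaction, written as a plain mapping of SCORM 2004 data-model elements to values (the element names are standard; the [.] and [,] step-record delimiters follow the RTE conventions, but the exact value formatting should be checked against the spec and your LMS):

    # Hypothetical values a SCO could send to the LMS (e.g. via SetValue calls)
    # for one "performance" interaction consisting of several recorded steps.
    interaction_data = {
        "cmi.interactions.0.id": "urn:sim:landing-gear-check",
        "cmi.interactions.0.type": "performance",
        # Each step is "step_name[.]step_answer"; steps are separated by "[,]".
        "cmi.interactions.0.learner_response":
            "lower_gear[.]done[,]check_pressure[.]620[,]confirm_lock[.]done",
        "cmi.interactions.0.correct_responses.0.pattern":
            "lower_gear[.]done[,]check_pressure[.]620[,]confirm_lock[.]done",
        "cmi.interactions.0.result": "correct",
        "cmi.interactions.0.latency": "PT2M30S",  # ISO 8601 duration
    }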


Which distributions can be used to produce starting times of jobs if there are no real observations?

I need to produce data containing the starting times of each job (# of jobs: 30). I have no chance of getting real data, so how can I generate data that resembles a realistic distribution? In this case, which distribution would be a good choice?
A common technique used in simulation models where you don't have any data yet (e.g., data is very expensive, or it's a prospective system that does not even exist yet so where would you get the data from?) is to use a triangular distribution parameterized by subject matter experts (or your own best guesses) about the smallest, largest, and most common value you might see.
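As a minimal sketch with the Python standard library (the min/mode/max values below are made-up placeholders you would replace with the experts' estimates):

    import random

    # Expert guesses (placeholders): smallest, most common, and largest plausible
    # gap between consecutive job start times, in minutes.
    low, mode, high = 2.0, 5.0, 15.0

    n_jobs = 30
    gaps = [random.triangular(low, high, mode) for _ in range(n_jobs)]

    # Accumulate the sampled gaps into job start times.
    start_times = []
    t = 0.0
    for gap in gaps:
        t += gap
        start_times.append(round(t, 2))

    print(start_times)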
A relatively new, but quite powerful extension to this would be to vary the parameter choices in a designed set of experiments to see how much it matters if your guesstimates are off. A well-designed experiment would allow you to assess and characterize how much your results change as a function of the parameter values.
A more comprehensive variant would be to incorporate the distribution choice itself (triangle vs. exponential vs. anything else you think is plausible) into the design, to see whether that makes much of a difference. In the happy event that it doesn't, you can freely use a simple and convenient distribution such as the triangle; if it makes a big difference, you now know for certain that you should get your hands on real data ASAP, because without that data-based knowledge you're operating in garbage-in-garbage-out mode. This also assumes that you control for, say, the first two moments as you switch between distribution choices, so that your experiments test the shape of the distribution rather than the effect of its mean and variance.
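One way to keep such a comparison fair is to moment-match the alternatives. A sketch, assuming you compare the triangle against a gamma distribution fitted to the same mean and variance (the triangular moment formulas are standard; the shape/scale conversion is an ordinary method-of-moments fit):

    import random

    low, mode, high = 2.0, 5.0, 15.0

    # First two moments of the triangular(low, mode, high) distribution.
    mean = (low + mode + high) / 3.0
    var = (low**2 + mode**2 + high**2 - low*mode - low*high - mode*high) / 18.0

    # Method-of-moments gamma with the same mean and variance.
    shape = mean**2 / var
    scale = var / mean

    tri_gaps = [random.triangular(low, high, mode) for _ in range(30)]
    gamma_gaps = [random.gammavariate(shape, scale) for _ in range(30)]

    # Run the simulation with each sample and compare the outputs; a large
    # difference means the shape of the input distribution actually matters.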
If designed experiments tell you it doesn't much matter, that's wonderful news. If it does matter, you now know more about the system than you did before and know where to focus your efforts going forward.

Differences between OT and CRDT

Can someone explain to me, simply, the main differences between Operational Transformation and CRDTs?
As far as I understand, both are algorithms that permit data to converge without conflict on different nodes of a distributed system.
In which usecase would you use which algorithm?
As far as I understand, OT is mostly used for text and CRDT is more general and can handle more advanced structures, right?
Is CRDT more powerful than OT?
I ask this question because I am trying to see how to implement a collaborative editor for HTML documents, and am not sure in which direction to look first. I saw the ShareJS project and its attempts to support rich-text collaboration in the browser on contenteditable elements. Nowhere in ShareJS do I see any attempt to use CRDT for that.
We also know that Google Docs is using OT and it's working pretty well for real-time editing of rich documents.
Is Google's choice of using OT because CRDT was not very known at that time? Or would it be a good choice today too?
I'm also interested in hearing about other use cases, like using these algorithms in databases. Riak seems to use CRDTs. Can OT be used to sync the nodes of a database too and be an alternative to Paxos/Zab/Raft?
Both approaches are similar in that they provide eventual consistency. The difference is in how they do it. One way of looking at it is:
OT does it by changing operations. Operations are sent over the wire and concurrent operations are transformed once they are received.
CRDTs do it by changing state. Operations are made on the local CRDT. Its state is sent over the wire and is merged with the state of a copy. It doesn't matter how many times or in what order merges are made - all copies converge.
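A tiny illustration of the state-based idea, using a grow-only counter, one of the simplest CRDTs (a minimal sketch, not production code): the merge is commutative, associative, and idempotent, so replicas converge no matter how merges are interleaved.

    # Grow-only counter (G-Counter): each replica increments only its own slot;
    # merge takes the element-wise maximum, so merge order cannot matter.
    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}  # replica_id -> count

        def increment(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            for rid, c in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), c)

        def value(self):
            return sum(self.counts.values())

    a, b = GCounter("a"), GCounter("b")
    a.increment(3)
    b.increment(5)

    # Merge in either order (or repeatedly): both replicas end up at 8.
    a.merge(b)
    b.merge(a)
    assert a.value() == b.value() == 8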
You're right, OT is mostly used for text and does predate CRDTs, but research shows that:
many OT algorithms in the literature do not satisfy convergence properties
unlike what was stated by their authors
In other words CRDT merging is commutative while OT transformation functions sometimes are not.
From the Wikipedia article on CRDT:
OTs are generally complex and non-scalable
There are different kinds of CRDTs (sets, counters, ...) suited for different kinds of problems. There are some that are designed for text editing. For example, Treedoc - A commutative replicated data type for cooperative editing.
Another notable difference is that:
OT requires a central server for co-ordination.
CRDTs can adopt any network topology, like P2P over WebRTC, and are resilient to network partitions, which makes them suitable for decentralized systems.
Reference: https://youtu.be/B5NULPSiOGw?t=643 by Martin Kleppmann, author of "Designing Data-Intensive Applications".

Method for runtime comparison of two programs' objects

I am working through a particular type of code testing that is rather nettlesome and could be automated, yet I'm not sure of the best practices. Before describing the problem, I want to make clear that I'm looking for the appropriate terminology and concepts, so that I can read more about how to implement it. Suggestions on best practices are welcome, certainly, but my goal is specific: what is this kind of approach called?
In the simplest case, I have two programs that take in a bunch of data, produce a variety of intermediate objects, and then return a final result. When tested end-to-end, the final results differ, hence the need to find out where the differences occur. Unfortunately, even intermediate results may differ, but not always in a significant way (i.e. some discrepancies are tolerable). The final wrinkle is that intermediate objects may not necessarily have the same names between the two programs, and the two sets of intermediate objects may not fully overlap (e.g. one program may have more intermediate objects than the other). Thus, I can't assume there is a one-to-one relationship between the objects created in the two programs.
The approach that I'm thinking of taking to automate this comparison of objects is as follows (it's roughly inspired by frequency counts in text corpora):
For each program, A and B: create a list of the objects created throughout execution, which may be indexed in a very simple manner, such as a001, a002, a003, a004, ... and similarly for B (b001, ...).
Let Na = # of unique object names encountered in A, similarly for Nb and # of objects in B.
Create two tables, TableA and TableB, with Na and Nb columns, respectively. Entries will record a value for each object at each trigger (i.e. for each row, defined next).
For each assignment in A, the simplest approach is to capture the hash value of all of the Na items; of course, one can use LOCF (last observation carried forward) for those items that don't change, and any as-yet unobserved objects are simply given a NULL entry. Repeat this for B.
Match entries in TableA and TableB via their hash values. Ideally, objects will arrive into the "vocabulary" in approximately the same order, so that order and hash value will allow one to identify the sequences of values.
Find discrepancies in the objects between A and B based on where their sequences of hash values diverge.
Now, this is a simple approach and could work wonderfully if the data were simple, atomic, and not susceptible to numerical precision issues. However, I believe that numerical precision may cause hash values to diverge, though the impact is insignificant if the discrepancies are approximately at the machine tolerance level.
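A rough sketch of the bookkeeping from steps 3-6, with floating-point values rounded before hashing so that machine-tolerance noise does not register as a divergence (the function names, the rounding precision, and the pairing between object names are all placeholders; in practice the record calls would be driven by each program's instrumentation):

    import hashlib
    import json

    def fingerprint(obj, digits=12):
        """Hash an object's value, rounding floats to damp precision noise."""
        def canon(x):
            if isinstance(x, float):
                return round(x, digits)
            if isinstance(x, dict):
                return {k: canon(v) for k, v in sorted(x.items())}
            if isinstance(x, (list, tuple)):
                return [canon(v) for v in x]
            return x
        blob = json.dumps(canon(obj), sort_keys=True, default=str)
        return hashlib.sha256(blob.encode()).hexdigest()[:16]

    def record(table, live_objects):
        """Append one row: the hash of every tracked object at this trigger."""
        table.append({name: fingerprint(val) for name, val in live_objects.items()})

    def first_divergence(table_a, table_b, pairing):
        """pairing maps object names in A to the corresponding names in B."""
        for step, (row_a, row_b) in enumerate(zip(table_a, table_b)):
            for name_a, name_b in pairing.items():
                if row_a.get(name_a) != row_b.get(name_b):
                    return step, name_a, name_b
        return None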
First: What is a name for such types of testing methods and concepts? An answer need not necessarily be the method above, but reflects the class of methods for comparing objects from two (or more) different programs.
Second: What standard methods exist for what I describe in steps 3 and 4? For instance, the "value" need not only be a hash: one might also store the sizes of the objects - after all, two objects cannot be the same if they are massively different in size.
In practice, I tend to compare a small number of items, but I suspect that when automated this need not involve a lot of input from the user.
Edit 1: This paper is related in terms of comparing execution traces; it mentions "code comparison", which is related to my interest, though I'm more concerned with the data (i.e. objects) than with the actual code that produces the objects. I've just skimmed it, but will review it more carefully for methodology. More importantly, it suggests that comparing code traces may be extended to comparing data traces. This paper analyzes some comparisons of code traces, albeit in a wholly unrelated area of security testing.
Perhaps data-tracing and stack-trace methods are related. Checkpointing is slightly related, but its typical use (i.e. saving all of the state) is overkill.
Edit 2: Other related concepts include differential program analysis and monitoring of remote systems (e.g. space probes) where one attempts to reproduce the calculations using a local implementation, usually a clone (think of a HAL-9000 compared to its earth-bound clones). I've looked down the routes of unit testing, reverse engineering, various kinds of forensics, and whatnot. In the development phase, one could ensure agreement with unit tests, but this doesn't seem to be useful for instrumented analyses. For reverse engineering, the goal can be code & data agreement, but methods for assessing fidelity of re-engineered code don't seem particularly easy to find. Forensics on a per-program basis are very easily found, but comparisons between programs don't seem to be that common.
(Making this answer community wiki, because dataflow programming and reactive programming are not my areas of expertise.)
The area of data flow programming appears to be related, and thus debugging of data flow programs may be helpful. This paper from 1981 gives several useful high level ideas. Although it's hard to translate these to immediately applicable code, it does suggest a method I'd overlooked: when approaching a program as a dataflow, one can either statically or dynamically identify where changes in input values cause changes in other values in the intermediate processing or in the output (not just changes in execution, if one were to examine control flow).
Although dataflow programming is often related to parallel or distributed computing, it seems to dovetail with Reactive Programming, which is how the monitoring of objects (e.g. the hashing) can be implemented.
This answer is far from adequate, hence the CW tag, as it doesn't really name the debugging method that I described. Perhaps this is a form of debugging for the reactive programming paradigm.
[Also note: although this answer is CW, if anyone has a far better answer in relation to dataflow or reactive programming, please feel free to post a separate answer and I will remove this one.]
Note 1: Henrik Nilsson and Peter Fritzson have a number of papers on debugging for lazy functional languages, which are somewhat related: the debugging goal is to assess values, not the execution of code. This paper seems to have several good ideas, and their work partially inspired this paper on a debugger for a reactive programming language called Lustre. These references don't answer the original question, but may be of interest to anyone facing this same challenge, albeit in a different programming context.

Optimizing Data Translation

Our business deals with houses, and over the years we have created several business objects to represent them. We also receive lots of data from outside sources and send data to external consumers. Every one of these represents the house in a different way, and we spend a lot of time and energy translating one format into another. I'm looking for some general patterns or best practices on how to deal with this situation. How can I write a universal data translator that is flexible, extensible, and fast?
Background: A house generally has 30-40 attributes such as size, number of bedrooms, roof type, construction material, siding material, etc. These are typically represented as key/value pairs. A typical translation problem is that one vendor will represent the number of bedrooms as a single key/value pair: NumBedrooms=3, while a different vendor will have a key/value pair per bedroom: Bedroom=master, Bedroom=small, Bedroom=small.
There's nothing particularly hard about the translation, but we spend a lot of time and energy writing and testing translations. How can I optimize this?
Thanks
(My environment is .Net)
The best place to start is by creating an "internal representation", which is the representation your processing will always use. Then create translators from and to the "external representations" as needed. I'd imagine that this is what you are already doing, but it should be mentioned for completeness. The optimization comes from being able to selectively write importers and exporters only when you need them.
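A minimal sketch of that idea using the bedroom example from the question (the House class and the vendor input formats are made up for illustration; your real internal model would be richer):

    from dataclasses import dataclass, field

    # Internal ("canonical") representation: the one shape every translator targets.
    @dataclass
    class House:
        num_bedrooms: int = 0
        bedroom_types: list = field(default_factory=list)
        attributes: dict = field(default_factory=dict)  # everything else

    def from_vendor_a(pairs):
        """Vendor A style: a single pair, e.g. [("NumBedrooms", "3")]."""
        house = House()
        for key, value in pairs:
            if key == "NumBedrooms":
                house.num_bedrooms = int(value)
            else:
                house.attributes[key] = value
        return house

    def from_vendor_b(pairs):
        """Vendor B style: one pair per bedroom, e.g. [("Bedroom", "master"), ...]."""
        house = House()
        for key, value in pairs:
            if key == "Bedroom":
                house.bedroom_types.append(value)
            else:
                house.attributes[key] = value
        house.num_bedrooms = len(house.bedroom_types)
        return house

    # Both inputs converge on the same internal object; exporters work the other way.
    assert from_vendor_a([("NumBedrooms", "3")]).num_bedrooms == 3
    assert from_vendor_b([("Bedroom", "master"), ("Bedroom", "small"),
                          ("Bedroom", "small")]).num_bedrooms == 3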
A good implementation strategy is to externalize the transformation if you can. If you can get your inputs and outputs into XML documents, then you can write XSLT transforms between your internal and external representations. The goal is to be able to set up a pipeline of transformations from an input XML document to your internal representation. If everything is represented in XML and using a common protocol (say... hmm... HTTP), then the process can be controlled using configuration. BTW - this is essentially the Pipes and Filters design pattern.
Take a look at Yahoo pipes, Apache Cocoon, XML pipeline, and NetKernel for inspiration.
My employer back in the 90s faced this problem. We had a standard format we converted the customers' data to and from, as D.Shawley suggests.
I went further and designed a simple format-description language; we described our standard format in that language and then, for a new dataset, we'd write up its format too. Then a program would take both descriptions and convert the data from one format to the other, with automatic type conversions, safety checks, etc. (This came in handy for some other operations as well, not just these initial/final conversions.)
The particulars probably won't help you -- chances are you deal with completely different kinds of data. You can likely profit from the general principle, though. The "data definition language" needn't necessarily be a fancy thing with a parser and scanner; you might define it directly with a data structure in IronPython, say.
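For a sense of what a data-structure-based format description might look like (everything here, including the field names and the translate helper, is hypothetical):

    # A declarative mapping: each entry says which source key a canonical field
    # comes from and how to convert it. One generic engine interprets such tables,
    # so supporting a new vendor means writing a table, not another translator.
    VENDOR_A_SPEC = {
        "num_bedrooms": ("NumBedrooms", int),
        "square_feet": ("SqFt", float),
        "roof_type": ("Roof", str),
    }

    def translate(record, spec):
        out = {}
        for canonical_name, (source_key, convert) in spec.items():
            if source_key in record:
                out[canonical_name] = convert(record[source_key])
        return out

    print(translate({"NumBedrooms": "3", "SqFt": "1850", "Roof": "shingle"},
                    VENDOR_A_SPEC))
    # {'num_bedrooms': 3, 'square_feet': 1850.0, 'roof_type': 'shingle'}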

How do you represent music in a data structure?

How would you model a simple musical score for a single instrument written in regular standard notation? Certainly there are plenty of libraries out there that do exactly this. I'm mostly curious about different ways to represent music in a data structure. What works well and what doesn't?
Ignoring some of the trickier aspects like dynamics, the obvious way would be a literal translation of everything into Objects - a Score is made of Measures, which are made of Notes. Synthesis, I suppose, would mean figuring out the start/end time of each note and blending sine waves.
Is the obvious way a good way? What are other ways to do this?
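For concreteness, a rough sketch of that "obvious way" in Python (chords, rests, dynamics, and the blending of simultaneous notes are all omitted; the pitch-to-frequency conversion is the standard equal-temperament formula):

    import math

    class Note:
        def __init__(self, midi_pitch, beats):
            self.midi_pitch = midi_pitch  # e.g. 60 = middle C
            self.beats = beats

        def frequency(self):
            return 440.0 * 2 ** ((self.midi_pitch - 69) / 12)

    class Measure:
        def __init__(self, notes):
            self.notes = notes

    class Score:
        def __init__(self, measures, bpm=120):
            self.measures = measures
            self.bpm = bpm

        def render(self, sample_rate=44100):
            """Naive synthesis: lay each note's sine wave end to end."""
            samples = []
            for measure in self.measures:
                for note in measure.notes:
                    seconds = note.beats * 60.0 / self.bpm
                    freq = note.frequency()
                    for i in range(int(seconds * sample_rate)):
                        samples.append(math.sin(2 * math.pi * freq * i / sample_rate))
            return samples

    score = Score([Measure([Note(60, 1), Note(64, 1), Note(67, 2)])])
    audio = score.render()  # list of float samples in [-1, 1]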
Many people doing new common Western music notation projects use MusicXML as a starting point. It provides a complete representation of music notation that you can subset to meet your needs. There is now an XSD schema definition that projects like ProxyMusic use to create MusicXML object models. ProxyMusic creates these in Java, but you should be able to do something similar with other XML data binding tools in other languages.
As one MusicXML customer put it:
"A very important benefit of all of your hard work on MusicXML as far as I am concerned is that I use it as a clear, structured and very ‘real-world practical’ specification of what music ‘is’ in order to design and implement my application’s internal data structures."
There's much more information available - XSDs and DTDs, sample files, a tutorial, a list of supported applications, a list of publications, and more - at
http://www.makemusic.com/musicxml
MIDI is not a very good model for a simple musical score in standard notation. MIDI lacks many of the basic concepts of music notation. It was designed to be a performance format, not a notation format.
It is true that music notation is not hierarchical. Since XML is hierarchical, MusicXML uses paired start-stop elements for representing non-hierarchical information. A native data structure can represent things more directly, which is one reason that MusicXML is just a starting point for the data structure.
For a more direct way of representing music notation that captures its simultaneous horizontal and vertical structure, look at the Humdrum format, which uses more of a spreadsheet/lattice model. Humdrum is especially used in musicology and music analysis applications where its data structure works particularly well.
MIDI files would be the usual way to do this. MIDI is a standard format for storing data about musical notes, including start and end times, note volume, which instrument it's played on, and various special characteristics; you can find plenty of prewritten libraries (including some open source) for reading and writing the files and representing the data in them in terms of arrays or objects, though they don't usually do it by having an object for each note, which would add up to a lot of memory overhead.
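For example, with a third-party library such as Python's mido package (assuming it's installed; event times are in ticks relative to the previous event, with 480 ticks per beat by default):

    from mido import Message, MidiFile, MidiTrack

    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)

    track.append(Message("program_change", program=40, time=0))  # 40 = violin (GM, zero-based)
    track.append(Message("note_on", note=60, velocity=64, time=0))
    track.append(Message("note_off", note=60, velocity=64, time=480))  # one beat later
    track.append(Message("note_on", note=64, velocity=64, time=0))
    track.append(Message("note_off", note=64, velocity=64, time=480))

    mid.save("melody.mid")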
The instruments defined in MIDI are just numbers from 1 to 128 which have symbolic names, like violin or trumpet, but MIDI itself doesn't say anything about what the instruments should actually sound like. That is the job of a synthesizer, which takes the high-level MIDI data and converts it into sound. In principle, yes, you can create any sound by superposing sine waves, but that doesn't work that well in practice because it becomes computationally intensive once you get to playing a few tracks in parallel; also, a simple Fourier spectrum (the relative intensities of the sine waves) is just not adequate when you're trying to reproduce the real sound of an instrument and the expressiveness of a human playing it. (I've written a simple synthesizer to do just that, so I know how hard it can be to produce a decent sound.) There's a lot of research being done in the science of synthesis, and more generally DSP (digital signal processing), so you should certainly be able to find plenty of books and web pages to read about it if you'd like.
Also, this may only be tangentially related to the question, but you might be interested in an audio programming language called ChucK. It was designed by people at the crossroads of programming and music, and you can probably get a good idea of the current state of sound synthesis by playing around with it.
Music in a data structure, standard notation, ...
Sounds like you would be interested in LilyPond.
Most things about musical notation are almost purely mechanical (there are rules and guidelines even for the complex, non-trivial parts of notation), and LilyPond does a beautiful job of taking care of all those mechanical aspects. What's left is input files that are simple to write in any text editor. In addition to PDFs, LilyPond can also produce Midi files.
If you felt so inclined, you could generate the text files algorithmically with a program and call LilyPond to convert them to notation and a MIDI file for you.
I doubt you could find a more complete and concise way to express music than an input file for LilyPond.
Please understand that music and musical notation are not hierarchical and can't be modelled (well) by strict adherence to hierarchical thinking. Read this for more information on that subject.
Have fun!
Hmmm, fun problem.
Actually, I'd be tempted to turn it into Command pattern along with Composite. This is kind of turning the normal OO approach on its head, as you are in a sense making the modeled objects verbs instead of nouns. It would go like this:
a Note is a class with one method, play(), and a ctor taking length and tone.
you need an Instrument which defines the behavior of the synth: timbre, attack, and so on.
You would then have a Score, which has a TimeSignature, and is a Composite pattern containing Measures; the Measures contain the Notes.
Actually playing it means interpreting some other things, like Repeats and Codas, which are other Containers. To play it, you interpret the hierarchical structure of the Composite, inserting a note into a queue; as the notes move through the queue based on the tempi, each Note has its play() method called.
Hmmm, might invert that; each Note is given as input to the Instrument, which interprets it by synthesizing the wave form as required. That comes back around to something like your original scheme.
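A rough sketch of that Composite-plus-Command shape (the Instrument here just prints instead of synthesizing, and repeats, codas, and the timing queue are left out):

    class Instrument:
        def __init__(self, name):
            self.name = name

        def sound(self, tone, length):
            print(f"{self.name}: {tone} for {length} beats")

    class Note:
        def __init__(self, tone, length):
            self.tone, self.length = tone, length

        def play(self, instrument):  # the Command
            instrument.sound(self.tone, self.length)

    class Measure:
        def __init__(self, notes):
            self.notes = notes

        def play(self, instrument):  # Composite: delegate to children
            for note in self.notes:
                note.play(instrument)

    class Score:
        def __init__(self, time_signature, measures):
            self.time_signature = time_signature
            self.measures = measures

        def play(self, instrument):
            for measure in self.measures:
                measure.play(instrument)

    score = Score("4/4", [Measure([Note("C4", 1), Note("E4", 1), Note("G4", 2)])])
    score.play(Instrument("marimba"))  # the Note doesn't care which Instrument plays it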
Another approach to the decomposition is to apply Parnas's Law: you decompose in order to keep secret the places where requirements could change. But I think that ends up with a similar decomposition; you can change the time signature and the tuning, and you can change the instrument: a Note doesn't care if you play it on a violin, a piano, or a marimba.
Interesting problem.
My music composition software (see my profile for the link) uses Notes as the primary unit (with properties like starting position, length, volume, balance, release duration etc.). Notes are grouped into Patterns (which have their own starting positions and repetition properties) which are grouped into Tracks (which have their own instrument or instruments).
Blending sine waves is one method of synthesizing sounds, but it's pretty rare (it's expensive and doesn't sound very good). Wavetable synthesis (which my software uses) is computationally inexpensive and relatively easy to code, and is essentially unlimited in the variety of sounds it can produce.
The usefulness of a model can only be evaluated within a given context. What is it you are trying to do with this model?
Many respondents have said that music is non-hierarchical. I sort of agree with this, but instead suggest that music can be viewed hierarchically from many different points of view, each giving rise to a different hierarchy. We may want to view it as a list of voices, each of which has notes with on/off/velocity/etc attributes. Or we may want to view it as vertical sonorities for the purpose of harmonic analysis. Or we may want to view it in a way suitable for contrapuntal analysis. Or many other possibilities. Worse still, we may want to see it from these different points of view for a single purpose.
Having made several attempts to model music for the purposes of generating species counterpoint, analysing harmony and tonal centers, and many other things, I have been continuously frustrated by music's reluctance to yield to my modelling skills. I'm beginning to think that the best model may be relational, simply because to a large extent, models based on the relational model of data strive not to take a point of view about the context of use. However, that may simply be pushing the problem somewhere else.
