I'm developing a fairly large (for me) Ruby script for engineering calculations. The script creates a few objects that are interconnected in a hierarchical fashion.
For example one object (Inp) contains the input parameters for a set of simulations. Other objects (SimA, SimB, SimC) are used to actually perform the simulations and each of them may generate one or more output objects (OutA, OutB, OutC) that contain the results and produce the actual files used for the visualization or analysis by other objects and so on.
The first time I perform and complete all the simulations all the objects will be fully defined and I will have a series or files that represent the outputs for the user.
Now suppose the user needs to change one of the attributes of Inp. Depending on which attribute has been modified, some simulations will have to be re-run and some OutX objects will be rendered invalid; otherwise consistency would be lost, as the outputs would no longer correspond to the inputs.
I would like to know whether there is a design pattern that would facilitate this process. Also, I was wondering whether some sort of graph could be used to visually represent the various dependencies between objects in a clear way.
From what I have been reading (this question is a year old), I think that Ruby's Observable module could be used for this purpose. Every time a parent object changes, it should send a message to its children so that they can update their state.
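For concreteness, something like the following minimal sketch is what I have in mind (written in Java just to show the shape; Ruby's Observable module, with add_observer, changed, and notify_observers, gives the same structure, and all class and method names here are invented):

import java.util.ArrayList;
import java.util.List;

// Invented illustration: a parent notifies its dependants when an attribute changes,
// and each dependant marks its cached output as stale.
interface InputListener {
    void inputChanged(String attribute);
}

class Inp {
    private final List<InputListener> listeners = new ArrayList<>();
    private double pressure;

    void addListener(InputListener listener) { listeners.add(listener); }

    void setPressure(double value) {
        this.pressure = value;
        for (InputListener listener : listeners) {
            listener.inputChanged("pressure");   // push the change to every dependant
        }
    }
}

class SimA implements InputListener {
    private boolean outputsValid = false;

    @Override
    public void inputChanged(String attribute) {
        // Only invalidate if this simulation actually depends on the changed attribute.
        if (attribute.equals("pressure")) {
            outputsValid = false;                // OutA must be regenerated before reuse
        }
    }
}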
Is this the recommended approach?
I hope this makes the question a bit clearer.
I'm not sure that I fully understand your question, but the problem of stages which depend on results of previous stages which in turn depend on results from previous stages which themselves depend on results from previous stages, where every one of those stages can fail or take an arbitrary amount of time, is as old as programming itself and has been solved a number of times.
Tools which do this are typically called "build tools", because this is a problem that often occurs when building complex software systems, but they are in no way limited to building software. A more fitting term would be "dependency-oriented programming". Examples include make, ant, or Ruby's own rake.
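As a toy illustration of the core idea behind such tools (invented names; a real Rakefile or Makefile expresses this declaratively rather than in code), each output is a task with prerequisites, and a task is re-run only when it is stale relative to them:

import java.util.List;

// Toy sketch of make/rake-style dependency checking: a task is re-run only if some
// prerequisite finished more recently than the task's own last run.
class Task {
    final String name;
    final List<Task> prerequisites;
    long lastBuilt = 0;              // timestamp of the last successful run, 0 = never

    Task(String name, List<Task> prerequisites) {
        this.name = name;
        this.prerequisites = prerequisites;
    }

    void buildIfStale() {
        for (Task prerequisite : prerequisites) {
            prerequisite.buildIfStale();         // depth-first: bring prerequisites up to date
        }
        boolean stale = prerequisites.stream().anyMatch(p -> p.lastBuilt > lastBuilt);
        if (stale || lastBuilt == 0) {
            run();                               // e.g. re-run SimB and regenerate OutB
            lastBuilt = System.currentTimeMillis();
        }
    }

    void run() {
        System.out.println("running " + name);
    }
}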
I am working through a particular type of code testing that is rather nettlesome and could be automated, yet I'm not sure of the best practices. Before describing the problem, I want to make clear that I'm looking for the appropriate terminology and concepts, so that I can read more about how to implement it. Suggestions on best practices are welcome, certainly, but my goal is specific: what is this kind of approach called?
In the simplest case, I have two programs that take in a bunch of data, produce a variety of intermediate objects, and then return a final result. When tested end-to-end, the final results differ, hence the need to find out where the differences occur. Unfortunately, even intermediate results may differ, but not always in a significant way (i.e. some discrepancies are tolerable). The final wrinkle is that intermediate objects may not necessarily have the same names between the two programs, and the two sets of intermediate objects may not fully overlap (e.g. one program may have more intermediate objects than the other). Thus, I can't assume there is a one-to-one relationship between the objects created in the two programs.
The approach that I'm thinking of taking to automate this comparison of objects is as follows (it's roughly inspired by frequency counts in text corpora):
1. For each program, A and B: create a list of the objects created throughout execution, which may be indexed in a very simple manner, such as a001, a002, a003, a004, ... and similarly for B (b001, ...).
2. Let Na = # of unique object names encountered in A; similarly for Nb and the # of objects in B.
3. Create two tables, TableA and TableB, with Na and Nb columns, respectively. Entries will record a value for each object at each trigger (i.e. for each row, defined next).
4. For each assignment in A, the simplest approach is to capture the hash value of all of the Na items; of course, one can use LOCF (last observation carried forward) for those items that don't change, and any as-yet unobserved objects are simply given a NULL entry. Repeat this for B.
5. Match entries in TableA and TableB via their hash values. Ideally, objects will arrive into the "vocabulary" in approximately the same order, so that order and hash value will allow one to identify the sequences of values.
6. Find discrepancies in the objects between A and B based on when the sequences of hash values diverge for any objects with divergent sequences.
Now, this is a simple approach and could work wonderfully if the data were simple, atomic, and not susceptible to numerical precision issues. However, I believe that numerical precision may cause hash values to diverge, though the impact is insignificant if the discrepancies are approximately at the machine tolerance level.
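One mitigation I'm considering (just a sketch of the quantize-before-hashing idea; the tolerance and names are placeholders) is to round numeric values to a chosen tolerance before they enter the hash, so that machine-precision noise doesn't change the fingerprint:

import java.util.Arrays;

// Sketch: hash a numeric vector after rounding to a chosen tolerance, so that
// discrepancies at machine-precision level do not change the fingerprint.
// Values that straddle a grid boundary can still hash differently, so this
// reduces, rather than eliminates, spurious mismatches.
class TolerantHash {
    static int hash(double[] values, double tolerance) {
        long[] quantized = new long[values.length];
        for (int i = 0; i < values.length; i++) {
            quantized[i] = Math.round(values[i] / tolerance);  // snap each value to a grid
        }
        return Arrays.hashCode(quantized);
    }
}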
First: What is a name for such types of testing methods and concepts? An answer need not necessarily be the method above, but reflects the class of methods for comparing objects from two (or more) different programs.
Second: What standard methods exist for what I describe in steps 3 and 4? For instance, the "value" need not only be a hash: one might also store the sizes of the objects - after all, two objects cannot be the same if they are massively different in size.
In practice, I tend to compare a small number of items, but I suspect that when automated this need not involve a lot of input from the user.
Edit 1: This paper is related in terms of comparing the execution traces; it mentions "code comparison", which is related to my interest, though I'm more concerned with the data (i.e. objects) than with the actual code that produces the objects. I've just skimmed it, but will review it more carefully for methodology. More importantly, this suggests that comparing code traces may be extended to comparing data traces. This paper analyzes some comparisons of code traces, albeit in a wholly unrelated area of security testing.
Perhaps data-tracing and stack-trace methods are related. Checkpointing is slightly related, but its typical use (i.e. saving all of the state) is overkill.
Edit 2: Other related concepts include differential program analysis and monitoring of remote systems (e.g. space probes) where one attempts to reproduce the calculations using a local implementation, usually a clone (think of a HAL-9000 compared to its earth-bound clones). I've looked down the routes of unit testing, reverse engineering, various kinds of forensics, and whatnot. In the development phase, one could ensure agreement with unit tests, but this doesn't seem to be useful for instrumented analyses. For reverse engineering, the goal can be code & data agreement, but methods for assessing fidelity of re-engineered code don't seem particularly easy to find. Forensics on a per-program basis are very easily found, but comparisons between programs don't seem to be that common.
(Making this answer community wiki, because dataflow programming and reactive programming are not my areas of expertise.)
The area of data flow programming appears to be related, and thus debugging of data flow programs may be helpful. This paper from 1981 gives several useful high level ideas. Although it's hard to translate these to immediately applicable code, it does suggest a method I'd overlooked: when approaching a program as a dataflow, one can either statically or dynamically identify where changes in input values cause changes in other values in the intermediate processing or in the output (not just changes in execution, if one were to examine control flow).
Although dataflow programming is often related to parallel or distributed computing, it seems to dovetail with Reactive Programming, which is how the monitoring of objects (e.g. the hashing) can be implemented.
This answer is far from adequate, hence the CW tag, as it doesn't really name the debugging method that I described. Perhaps this is a form of debugging for the reactive programming paradigm.
[Also note: although this answer is CW, if anyone has a far better answer in relation to dataflow or reactive programming, please feel free to post a separate answer and I will remove this one.]
Note 1: Henrik Nilsson and Peter Fritzson have a number of papers on debugging for lazy functional languages, which are somewhat related: the debugging goal is to assess values, not the execution of code. This paper seems to have several good ideas, and their work partially inspired this paper on a debugger for a reactive programming language called Lustre. These references don't answer the original question, but may be of interest to anyone facing this same challenge, albeit in a different programming context.
Should entities know how to draw themselves? I've used this approach: it's simple and it works, but after learning MVC-patterns I feel uneasy about this. It's hard to change the art style when all the display logic is buried in the model.
One could introduce a view class, which takes the level as an argument and draws it, but this would mean it has to identify entity types and introduce a "switch"-statement, which I learned is also bad.
Where should one place the drawing code, in a way that is extensible, easy to change, clean, and DRY?
This question still comes up frequently in game development as various studios and groups start on new engines.
The short answer is that it depends on how complex your game entities will be. For simple entities, it doesn't really matter.
When you get into much more complicated entities, you have to rethink your approach. In general, you'll want to resist the urge to have the uber loop that iterates over every entity and calls some update/render/whatever function. That doesn't scale at all, unless every update, render, or whatever hierarchy is exactly the same. Which is fine for a game like Geometry Wars, but not for anything more complicated than that.
What you want to do is, given the most general entity collection, extract a usage-specific traversal. For example, if you wanted to render the scene you should have a way of extracting all the renderable entities from the entity collection and then rendering all of them in some arbitrary batchable order. Same goes for physics, collision, AI, etc.
Some helpful links:
http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/
http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf
I highly recommend both; the first goes into design rationale for building game entities out of components, i.e., render component, physics component, AI component. The second goes into performance characteristics of various game entity approaches.
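To make the component idea concrete, here is a rough sketch (in Java, with invented names; real engines differ in the details): the entity is just a bag of components, and the render pass pulls out only the renderable ones:

import java.util.ArrayList;
import java.util.List;

// Invented sketch of a component-based entity: drawing lives in a RenderComponent,
// not in the entity, and the renderer only ever sees the renderable components it
// extracted from the scene.
interface Component { }

interface RenderComponent extends Component {
    void render();
}

class Entity {
    private final List<Component> components = new ArrayList<>();

    void add(Component component) { components.add(component); }

    List<RenderComponent> renderables() {
        List<RenderComponent> result = new ArrayList<>();
        for (Component component : components) {
            if (component instanceof RenderComponent) {
                result.add((RenderComponent) component);
            }
        }
        return result;
    }
}

class Renderer {
    void renderScene(List<Entity> scene) {
        for (Entity entity : scene) {
            for (RenderComponent renderable : entity.renderables()) {
                renderable.render();   // batchable in whatever order the renderer prefers
            }
        }
    }
}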
There's nothing inherently wrong with having an abstract Draw() style method in your entities that lets them decide how they will be drawn, especially for smaller games that probably won't be extended significantly. I've used this method in a ton of small projects and it works great.
An improvement you can make to that strategy is to use your game resources as a proxy for the actual drawing operations. For example an enemy entity can defer all the rendering through a resource object it owns that represents the mesh; likewise for the texture/skin and effects.
I've recently switched to using my entities as 'dumb' containers for interfaces that define their behaviours. A player entity might contain IMoveable, IControllable, IRenderable, and many more interfaces that simply apply a specialized operation to that entity depending on the data it contains. The entity doesn't know much about this and all the execution happens when the scene graph is traversed for culling/rendering.
MVC isn't necessarily a great fit for games. MVC was designed for traditional GUIs which typically update infrequently based on discrete events, whereas games are more like simulations where there is constant updating going on and the presentation needs to reflect that instantly.
Still, there's no reason why you shouldn't still strive to separate state from presentation. Entities don't need to know how to draw themselves as such - that would mean they know about rendering operations, which is unnecessary - but it should be possible to ask them how they look at this point in time and then to use that information to render the scene. E.g. a 2D entity should be able to return its current frame of animation. Or a 3D entity should be able to return its 3D mesh, position, and orientation. No need for the switch statement here, as long as you have generic representations that you can return to the renderer. Entities with wildly differing rendering algorithms might have to return rather different objects at this point.
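As a rough sketch of that separation (invented names, Java for illustration): the entity reports its current appearance as plain data, and a renderer elsewhere turns that into draw calls:

// Invented sketch: the entity exposes its current visual state as plain data, and a
// separate renderer turns that data into draw calls, so no switch on entity type is
// needed as long as the returned representation is generic.
class SpriteState {
    final String sheet;   // which sprite sheet / texture to use
    final int frame;      // current frame of animation
    final float x, y;     // where to draw it

    SpriteState(String sheet, int frame, float x, float y) {
        this.sheet = sheet;
        this.frame = frame;
        this.x = x;
        this.y = y;
    }
}

class Enemy {
    private int frame;
    private float x, y;

    void update(float dt) {
        frame = (frame + 1) % 8;   // game logic only; no drawing in here
    }

    SpriteState currentAppearance() {
        return new SpriteState("enemy.png", frame, x, y);
    }
}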
Typically I solve this issue with inheritance.
For example, on a project I'm working with at the moment I'm using Test-Driven Development to write the game logic, with manual testing for the rendering. This example below in C# shows the rough idea.
class GameObject {
    // Logic, nicely unit tested.
}

class DrawableGameObject : GameObject {
    // Drawing logic - manual testing
}
This is generally what I've found to be the best solution. It allows the code to be tested, while not cluttering the game logic with presentational code such as drawing, model loading, etc.
I am currently developing a charting application (for the iPhone, although that is largely irrelevant) using the MVC pattern.
One aspect of the application is that you can overlay a number of statistics on the charts. I am a little unsure how I am going to structure these classes.
For each statistic there will be two aspects.
1. The calculation. The function which will take the data and calculate the relevant statistical figures.
2. The display. The statistics then need to be drawn over the top of the graph.
Obviously I want the code to comply with the MVC pattern as closely as possible, but I am planning to develop possibly hundreds of these statistics.
I could create three classes. One for the graphics, one for the logic, and a factory class to tie the two together. This would then fit with the pattern, but it seems to be a huge extra overhead in terms of the number of classes in the system, and additional complexity which I don't feel is necessary.
So, I am very tempted to create a single class for each statistic. But that would mean each class would have logic and graphics mixed in together, which is heavily frowned upon.
Are there any other suggestions as to how I can lay these out in a structured, reusable way without adding unnecessary complexity?
EDIT
Thanks for the answers. Most useful, but has raised more questions!
MVC does fit the rest of the application perfectly. Also, as it's for the iPhone, I seem to be pushed along this path anyway. This is the only reason I am considering MVC for these statistics.
However, for these statistics, the user will not interact with them; they are purely for display. The statistics paint various lines and symbols directly onto the view canvas. Each statistic paints its information in its own way. There is very little that can be shared between them, and each piece of data can only be usefully represented in one way. I can think of no other useful way that I would want to represent the information.
So it seems MVC is out for these, but I am unsure now what pattern would fit other than my newly invented "Mix logic and graphics" pattern which just feels wrong due to the Single Responsibility Principle (thanks for that link).
First of all, will the user interact with the statistics? If not, then you don't need MVC. (The Controller in MVC deals with user interaction).
You want to keep the number of classes to a minimum, which is good. Let's consider both the calculation and display separately.
How will you be displaying the statistics? Will they typically be text labels, or will there be other graph elements (error bars or things like that)? Try to figure out the different ways you will want to display your statistics.
How many different calculations will you have? Does each calculation map directly to a single graph element, or might it be drawn in a number of different ways? Try to figure out how the calculations relate to the graph elements.
As a concrete example, suppose you have a set of data points that you have plotted. You want to display the mean, the median, and the mode. You could display each of these as separate horizontal lines that cut through the chart at the appropriate Y value. The calculations are all independent, but the display logic can be shared. Or, perhaps you want to display the mean as both a line and as a text label. Here, there is only one calculation, but two different display methods.
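To make that concrete, here is one possible shape (a sketch only, with invented names): one small interface for the calculations and a handful of shared renderers, so hundreds of statistics don't each need their own display class:

import java.util.List;

// Invented sketch: hundreds of statistics share a handful of display classes.
interface Statistic {
    double compute(List<Double> data);
}

interface StatisticRenderer {
    void draw(double value, Object canvas);   // the canvas type depends on the platform
}

class Mean implements Statistic {
    public double compute(List<Double> data) {
        double sum = 0;
        for (double d : data) {
            sum += d;
        }
        return data.isEmpty() ? 0 : sum / data.size();
    }
}

class HorizontalLineRenderer implements StatisticRenderer {
    public void draw(double value, Object canvas) {
        // draw a horizontal line across the chart at the given Y value
    }
}

class TextLabelRenderer implements StatisticRenderer {
    public void draw(double value, Object canvas) {
        // draw the value as a text label near the chart edge
    }
}

The mean can then be shown as a line, a label, or both, without a new class for each combination; whatever ties the chart together simply pairs statistics with renderers.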
MVC design is about separating the underlying data from its presentation. By doing this, you can re-use bits of presentation logic for many different pieces of data. Also, you can display a single piece of data in multiple ways, and they will all stay in sync.
It depends on what you call complex. Most people consider methods or classes that are responsible for multiple things complex, and classes or methods that are responsible for one thing simple. This is also known as the SRP.
When you use MVC, the view contains the display logic, the model contains the business logic (the calculation in your case), and the controller ties them together. You can use different ways to implement MVC. Complex applications need to separate the domain model and map it to a view model, but most simple applications can use one model.
If you define complexity in a different way, MVC might not be what you like, and you should try a different approach.
I'm trying to think of a naming convention that accurately conveys what's going on within a class I'm designing. On a secondary note, I'm trying to decide between two almost-equivalent user APIs.
Here's the situation:
I'm building a scientific application, where one of the central data structures has three phases: 1) accumulation, 2) analysis, and 3) query execution.
In my case, it's a spatial modeling structure, internally using a KDTree to partition a collection of points in 3-dimensional space. Each point describes one or more attributes of the surrounding environment, with a certain level of confidence about the measurement itself.
After adding (a potentially large number of) measurements to the collection, the owner of the object will query it to obtain an interpolated measurement at a new data point somewhere within the applicable field.
The API will look something like this (the code is in Java, but that's not really important; the code is divided into three sections, for clarity):
// SECTION 1:
// Create the aggregation object, and get the zillion objects to insert...
ContinuousScalarField field = new ContinuousScalarField();
Collection<Measurement> measurements = getMeasurementsFromSomewhere();
// SECTION 2:
// Add all of the zillion objects to the aggregation object...
// Each measurement contains its xyz location, the quantity being measured,
// and a numeric value for the measurement. For example, something like
// "68 degrees F, plus or minus 0.5, at point 1.23, 2.34, 3.45"
for (Measurement m : measurements) {
    field.add(m);
}
// SECTION 3:
// Now the user wants to ask the model questions about the interpolated
// state of the model. For example, "what's the interpolated temperature
// at point (3, 4, 5)?"
Point3d p = new Point3d(3, 4, 5);
Measurement result = field.interpolateAt(p);
For my particular problem domain, it will be possible to perform a small amount of incremental work (partitioning the points into a balanced KDTree) during SECTION 2.
And there will be a small amount of work (performing some linear interpolations) that can occur during SECTION 3.
But there's a huge amount of work (constructing a kernel density estimator and performing a Fast Gauss Transform, using Taylor series and Hermite functions, but that's totally beside the point) that must be performed between sections 2 and 3.
Sometimes in the past, I've just used lazy-evaluation to construct the data structures (in this case, it'd be on the first invocation of the "interpolateAt" method), but then if the user calls the "field.add()" method again, I have to completely discard those data structures and start over from scratch.
In other projects, I've required the user to explicitly call an "object.flip()" method, to switch from "append mode" into "query mode". The nice thing about a design like this is that the user has better control over the exact moment when the hard-core computation starts. But it can be a nuisance for the API consumer to keep track of the object's current mode. And besides, in the standard use case, the caller never adds another value to the collection after starting to issue queries; data-aggregation almost always fully precedes query preparation.
How have you guys handled designing a data structure like this?
Do you prefer to let an object lazily perform its heavy-duty analysis, throwing away the intermediate data structures when new data comes into the collection? Or do you require the programmer to explicitly flip the data structure from append-mode into query-mode?
And do you know of any naming convention for objects like this? Is there a pattern I'm not thinking of?
ON EDIT:
There seems to be some confusion and curiosity about the class I used in my example, named "ContinuousScalarField".
You can get a pretty good idea for what I'm talking about by reading these wikipedia pages:
http://en.wikipedia.org/wiki/Scalar_field
http://en.wikipedia.org/wiki/Vector_field
Let's say you wanted to create a topographical map (this is not my exact problem, but it's conceptually very similar). So you take a thousand altitude measurements over an area of one square mile, but your survey equipment has a margin of error of plus-or-minus 10 meters in elevation.
Once you've gathered all the data points, you feed them into a model which not only interpolates the values, but also takes into account the error of each measurement.
To draw your topo map, you query the model for the elevation of each point where you want to draw a pixel.
As for the question of whether a single class should be responsible for both appending and handling queries, I'm not 100% sure, but I think so.
Here's a similar example: HashMap and TreeMap classes allow objects to be both added and queried. There aren't separate interfaces for adding and querying.
Both classes are also similar to my example, because the internal data structures have to be maintained on an ongoing basis in order to support the query mechanism. The HashMap class has to periodically allocate new memory, re-hash all objects, and move objects from the old memory to the new memory. A TreeMap has to continually maintain tree balance, using the red-black-tree data structure.
The only difference is that my class will perform optimally if it can perform all of its calculations once it knows the data set is closed.
If an object has two modes like this, I would suggest exposing two interfaces to the client. If the object is in append mode, then you make sure that the client can only ever use the IAppendable implementation. To flip to query mode, you add a method to IAppendable such as AsQueryable. To flip back, call IQueryable.AsAppendable.
You can implement IAppendable and IQueryable on the same object, and keep track of the state in the same way internally, but having two interfaces makes it clear to the client what state the object is in, and forces the client to deliberately make the (expensive) switch.
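In Java that could look roughly like this (a sketch using the question's Measurement and Point3d types; the interface names are adjusted slightly so they don't collide with java.lang.Appendable, but the shape is the IAppendable/IQueryable idea described above):

// Sketch of the two-interface approach, reusing the question's Measurement and Point3d
// types, with the heavy math elided.
interface Appendable3d {
    void add(Measurement m);
    Queryable3d asQueryable();      // performs the expensive precomputation, then flips mode
}

interface Queryable3d {
    Measurement interpolateAt(Point3d p);
    Appendable3d asAppendable();    // discards the derived structures and flips back
}

class ContinuousScalarField implements Appendable3d, Queryable3d {
    public void add(Measurement m) {
        // cheap incremental work: insert the point into the KD-tree
    }

    public Queryable3d asQueryable() {
        // heavy one-off work: kernel density estimator, Fast Gauss Transform, ...
        return this;
    }

    public Measurement interpolateAt(Point3d p) {
        // cheap per-query work: local interpolation
        return null;
    }

    public Appendable3d asAppendable() {
        return this;
    }
}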
I generally prefer to have an explicit change, rather than lazily recomputing the result. This approach makes the performance of the utility more predictable, and it reduces the amount of work I have to do to provide a good user experience. For example, if this occurs in a UI, where do I have to worry about popping up an hourglass, etc.? Which operations are going to block for a variable amount of time, and need to be performed in a background thread?
That said, rather than explicitly changing the state of one instance, I would recommend the Builder Pattern to produce a new object. For example, you might have an aggregator object that does a small amount of work as you add each sample. Then instead of your proposed void flip() method, I'd have an Interpolator interpolator() method that gets a copy of the current aggregation and performs all your heavy-duty math. Your interpolateAt method would be on this new Interpolator object.
If your usage patterns warrant, you could do simple caching by keeping a reference to the interpolator you create, and return it to multiple callers, only clearing it when the aggregator is modified.
This separation of responsibilities can help yield more maintainable and reusable object-oriented programs. An object that can return a Measurement at a requested Point is very abstract, and perhaps a lot of clients could use your Interpolator as one strategy implementing a more general interface.
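A rough sketch of that aggregator/Interpolator split, again using the question's Measurement and Point3d types and with the bodies elided:

// Rough sketch of the aggregator/Interpolator split, with the heavy math elided and a
// cached Interpolator that is cleared whenever the aggregation changes.
class FieldAggregator {
    private Interpolator cached;            // simple caching, as described above

    public void add(Measurement m) {
        cached = null;                      // the aggregation changed, so the cache is stale
        // cheap incremental work: insert the point into the KD-tree
    }

    public Interpolator interpolator() {
        if (cached == null) {
            // heavy one-off work over a snapshot of the current points:
            // kernel density estimator, Fast Gauss Transform, ...
            cached = new Interpolator();
        }
        return cached;
    }
}

class Interpolator {
    public Measurement interpolateAt(Point3d p) {
        // cheap per-query work: local interpolation against the precomputed structures
        return null;
    }
}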
I think that the analogy you added is misleading. Consider an alternative analogy:
Key[] data = new Key[...];
data[idx++] = new Key(...); /* Fast! */
...
Arrays.sort(data); /* Slow! */
...
boolean contains = Arrays.binarySearch(data, datum) >= 0; /* Fast! */
This can work like a set, and actually, it gives better performance than Set implementations (which are implemented with hash tables or balanced trees).
A balanced tree can be seen as an efficient implementation of insertion sort. After every insertion, the tree is in a sorted state. The predictable time requirements of a balanced tree are due to the fact that the cost of sorting is spread over each insertion, rather than happening on some queries and not others.
The rehashing of hash tables does result in less consistent performance, and because of that, hash tables aren't appropriate for certain applications (a real-time microcontroller, perhaps). But even the rehashing operation depends only on the load factor of the table, not the pattern of insertion and query operations.
For your analogy to hold strictly, you would have to "sort" (do the hairy math) your aggregator with each point you add. But it sounds like that would be cost prohibitive, and that leads to the builder or factory method patterns. This makes it clear to your clients when they need to be prepared for the lengthy "sort" operation.
Your objects should have one role and responsibility. In your case should the ContinuousScalarField be responsible for interpolating?
Perhaps you might be better off doing something like:
IInterpolator interpolator = field.GetInterpolator();
Measurement measurement = interpolator.InterpolateAt(...);
I hope this makes sense, but without fully understanding your problem domain it's hard to give you a more coherent answer.
"I've just used lazy-evaluation to construct the data structures" -- Good
"if the user calls the "field.add()" method again, I have to completely discard those data structures and start over from scratch." -- Interesting
"in the standard use case, the caller never adds another value to the collection after starting to issue queries" -- Whoops, false alarm, actually not interesting.
Since lazy eval fits your use case, stick with it. That's a very heavily used model because it is so delightfully reliable and fits most use cases very well.
The only reason for rethinking this is (a) the use case change (mixed adding and interpolation), or (b) performance optimization.
Since use case changes are unlikely, you might consider the performance implications of breaking up interpolation. For example, during idle time, can you precompute some values? Or with each add is there a summary you can update?
Also, a highly stateful (and not very meaningful) flip method isn't so useful to clients of your class. However, breaking interpolation into two parts might still be helpful to them -- and help you with optimization and state management.
You could, for example, break interpolation into two methods.
public void interpolateAt(Point3d p);
public Measurement interpolatedMeasurement();
This borrows the relational database Open and Fetch paradigm. Opening a cursor can do a lot of preliminary work, and may start executing the query; you don't know. Fetching the first row may do all the work, or execute the prepared query, or simply fetch the first buffered row. You don't really know. You only know that it's a two-part operation. The RDBMS developers are free to optimize as they see fit.
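Client code would then look something like this sketch (assuming the two-method split above has been added to the field object; the implementation is free to do the heavy work in either call):

// Sketch of client code for the two-part operation, assuming ContinuousScalarField now
// exposes the two methods proposed above.
class Client {
    Measurement temperatureAt(ContinuousScalarField field, Point3d p) {
        field.interpolateAt(p);                   // "open": prepare, possibly compute
        return field.interpolatedMeasurement();   // "fetch": return the value
    }
}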
Do you prefer to let an object lazily perform its heavy-duty analysis, throwing away the intermediate data structures when new data comes into the collection? Or do you require the programmer to explicitly flip the data structure from append-mode into query-mode?
I prefer using data structures that allow me to incrementally add to them with "a little more work" per addition, and to incrementally pull out the data I need with "a little more work" per extraction.
Perhaps if you do some "interpolate_at()" call in the upper-right corner of your region, you only need to do calculations involving the points in that upper-right corner, and it doesn't hurt anything to leave the other 3 quadrants "open" to new additions (and so on down the recursive KDTree).
Alas, that's not always possible -- sometimes the only way to add more data is to throw away all the previous intermediate and final results, and re-calculate everything again from scratch.
The people who use the interfaces I design -- in particular, me -- are human and fallible.
So I don't like using objects where those people must remember to do things in a certain way, or else things go wrong -- because I'm always forgetting those things.
If an object must be in the "post-calculation state" before getting data out of it, i.e. some "do_calculations()" function must be run before the interpolateAt() function returns valid data, I much prefer letting the interpolateAt() function check whether it's already in that state, run "do_calculations()" and update the state of the object if necessary, and then return the results I expected.
Sometimes I hear people describe such a data structure in terms of "freezing" the data, "crystallizing" the data, "compiling" it, or "putting the data into an immutable data structure".
One example is converting a (mutable) StringBuilder or StringBuffer into an (immutable) String.
I can imagine that for some kinds of analysis, you expect to have all the data ahead of time, and pulling out some interpolated value before all the data has been put in would give wrong results. In that case, I'd prefer to set things up such that the "add_data()" function fails or throws an exception if it (incorrectly) gets called after any interpolateAt() call.
I would consider defining a lazily-evaluated "interpolated_point" object that doesn't really evaluate the data right away, but only tells the program that the data at that point will be required sometime in the future.
The collection isn't actually frozen, so it's OK to continue adding more data to it, up until the point where something actually extracts the first real value from some "interpolated_point" object, which internally triggers the "do_calculations()" function and freezes the object.
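A rough sketch of such an "interpolated_point" proxy (invented names; freezeAndCalculate() is an assumed method standing in for the "do_calculations()" step):

// Rough sketch of the lazy proxy: asking for an interpolated point is cheap, and the
// expensive calculation is only triggered, and the collection only frozen, when a value
// is actually read out of the proxy.
class InterpolatedPoint {
    private final ContinuousScalarField field;
    private final Point3d location;
    private Measurement value;               // computed on first access

    InterpolatedPoint(ContinuousScalarField field, Point3d location) {
        this.field = field;
        this.location = location;
    }

    Measurement get() {
        if (value == null) {
            field.freezeAndCalculate();      // heavy work happens here, exactly once
            value = field.interpolateAt(location);
        }
        return value;
    }
}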
It might speed things up if you know not only all the data, but also all the points that need to be interpolated, ahead of time. Then you can throw away data that is "far away" from the interpolated points, and only do the heavy-duty calculations in regions "near" the interpolated points.
For other kinds of analysis, you do the best you can with the data you have, but when more data comes in later, you want to use that new data in your later analysis.
If the only way to do that is to throw away all the intermediate results and recalculate everything from scratch, then that's what you have to do.
(And it's best if the object handles this automatically, rather than requiring people to remember to call some "clear_cache()" and "do_calculations()" function every time.)
You could have a state variable. Have a method for starting the high level processing, which will only work if the STATE is in SECTION-1. It will set the state to SECTION-2, and then to SECTION-3 when it is done computing. If there's a request to the program to interpolate a given point, it will check if the state is SECTION-3. If not, it will request the computations to begin, and then interpolate the given data.
This way, you accomplish both - the program will perform its computations at the first request to interpolate a point, but can also be requested to do so earlier. This would be convenient if you wanted to run the computations overnight, for example, without needing to request an interpolation.
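A sketch of that state-variable approach (invented names; the computation and storage details are elided):

// Invented sketch: the field tracks which section it is in, and interpolateAt() requests
// the heavy computation itself if nobody has triggered it yet.
class StatefulScalarField {
    private enum State { ACCUMULATING, COMPUTING, READY }

    private State state = State.ACCUMULATING;

    public void add(Measurement m) {
        state = State.ACCUMULATING;          // new data invalidates any previous results
        // store the measurement
    }

    public void compute() {                  // may also be called explicitly, e.g. overnight
        state = State.COMPUTING;
        // kernel density estimator, Fast Gauss Transform, ...
        state = State.READY;
    }

    public Measurement interpolateAt(Point3d p) {
        if (state != State.READY) {
            compute();                       // the first query triggers the computation
        }
        return null;                         // placeholder for the real interpolation
    }
}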