When to discard events in discrete event simulation - algorithm

In most examples of DES I've seen, an Event triggers a State change and possibly schedules some new Events in the future. However, if I simulate a Billiard game this is not the whole story.
In this case the Events of interest are the shots and the collisions of the balls with each other and with the cushion. The State consists of the position and velocity of each ball.
After a collision or a shot I will first recalculate a new State and from there I will calculate all possible future (first) collisions. The strange thing is that I will have to discard all Events which were scheduled previously as these describe collisions which were possible only before the state change.
So there seem to be two ways of doing DES.
One, where the future Events are computed from the State and all Events scheduled in the past are discarded with each State change (as in the Billiard example), and
another one, where each Event causes a state change and possibly schedules new Events, but where old Events are never discarded (as in most examples I've seen).
This is hard to believe.
The Billiard example also has the irritating property that future events are calculated from the global state of the system. All Balls need to be considered, not just the ones which participated in a collision or a shot.
I wonder if my Billiard example is different from classic DES. In any case, I am looking for the correct way to reason about such issues, i.e.
How do I know which Events are to be discarded?
How do I know what States to consider when scheduling future events?
Is there a possible "safe" or "foolproof" way to compute future events (at the cost of performance)?
An obvious answer is "it all depends on your problem domain". A more precise answer or a pointer to literature would be much appreciated.

Your example is not unique or different from other DES models.
There's a third option which you omitted, which is that when certain events occur, specific other events will be cancelled. For example, in an epidemic model you might schedule infection events. Each infection event subsequently schedules 1) the critical time for the patient beyond which death becomes inevitable, with some probability and some delay corresponding to the patient's demographics, mortality rate for that demographic, and rate of progression for the disease; or 2) the patient's recovery. If medical interventions get queued up according to some triage strategy, treatment may or may not occur prior to the critical time. If not, a death gets scheduled, otherwise cancel the critical time event and schedule a recovery event.
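As a rough illustration of scheduling plus cancellation, here is a minimal future-event list in Python with lazy cancellation; the structure and names are illustrative, not taken from any particular simulation library.

```python
import heapq
import itertools

class EventList:
    """Minimal future-event list supporting cancellation (a sketch only)."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker for equal times

    def schedule(self, time, handler, *args):
        event = {"time": time, "handler": handler, "args": args,
                 "cancelled": False}
        heapq.heappush(self._heap, (time, next(self._seq), event))
        return event                    # keep this handle if you may cancel it

    def cancel(self, event):
        event["cancelled"] = True       # lazy deletion: skipped when popped

    def run(self):
        while self._heap:
            _, _, event = heapq.heappop(self._heap)
            if not event["cancelled"]:
                event["handler"](event["time"], *event["args"])
```

In the epidemic example, the infection handler would schedule the critical-time event and store the returned handle on the patient; a later treatment handler calls cancel() on that handle and schedules a recovery event instead.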
This sort of event scheduling and cancellation, along with the parameterization needed to identify which entities the scheduling or cancelling applies to, can all be described by a notation called "event graphs," created by Lee Schruben. See 'Schruben, Lee 1983. Simulation modeling with event graphs. Communications of the ACM. 26: 957-963' for the original paper, or check out this tutorial from the 1996 Winter Simulation Conference, which is freely available online.
You might also want to look at this paper titled "Simple Movement and Detection in Discrete Event Simulation", which appeared in the 2005 Winter Simulation Conference.

The State consists of the position and velocity of each ball.
Once you get that working, you'll need to add the spin and axis of rotation for each ball, since the proper use of spin is what differentiates the pros from the amateurs.
I will have to discard all Events which were scheduled previously
Yup, that's true, so don't bother scheduling them at all. See below.
So there seem to be two ways of doing DES (both involving the scheduling of events)
Actually, there's a third way. Simply search the problem space to determine the time of the first future event, and then jump to that time. There is no need to schedule Events. You only care about the one Event that will occur first.
All Balls need to be considered
Yes, this is true. Start by considering one of the balls and determining the time of its next collision. That time then puts an upper limit on how far the other balls can move. For example, imagine the first ball will collide after 0.1 seconds. Then the question for the second ball is, "Is it possible for the second ball to hit anything within 0.1 seconds?" If not, then move along to the third ball. If so, then reduce the time limit to the time it takes for the second ball to collide, and then move on to the third ball.
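A brute-force sketch of that search in Python (2-D, equal-radius balls, no spin; the table constants are illustrative); keeping a running best time as described above would additionally let you skip balls and pairs that provably cannot beat it:

```python
import math

BALL_RADIUS = 0.028
TABLE_W, TABLE_H = 2.84, 1.42

def time_to_cushion(ball):
    """Earliest time this ball reaches any cushion (inf if it never will)."""
    (x, y), (vx, vy) = ball["pos"], ball["vel"]
    times = []
    if vx > 0: times.append((TABLE_W - BALL_RADIUS - x) / vx)
    if vx < 0: times.append((BALL_RADIUS - x) / vx)
    if vy > 0: times.append((TABLE_H - BALL_RADIUS - y) / vy)
    if vy < 0: times.append((BALL_RADIUS - y) / vy)
    return min((t for t in times if t > 0), default=math.inf)

def time_to_ball(a, b):
    """Earliest time balls a and b touch, or inf if they are separating."""
    dx, dy = b["pos"][0] - a["pos"][0], b["pos"][1] - a["pos"][1]
    dvx, dvy = b["vel"][0] - a["vel"][0], b["vel"][1] - a["vel"][1]
    A = dvx * dvx + dvy * dvy
    B = 2 * (dx * dvx + dy * dvy)
    C = dx * dx + dy * dy - (2 * BALL_RADIUS) ** 2
    if A == 0 or B >= 0:                 # not approaching each other
        return math.inf
    disc = B * B - 4 * A * C
    if disc < 0:                         # they miss each other
        return math.inf
    t = (-B - math.sqrt(disc)) / (2 * A)
    return t if t > 0 else math.inf

def next_event(balls):
    """Scan the whole state and return (time, description) of the first event."""
    best_t, best = math.inf, None
    for i, a in enumerate(balls):
        t = time_to_cushion(a)
        if t < best_t:
            best_t, best = t, ("cushion", i)
        for j in range(i + 1, len(balls)):
            t = time_to_ball(a, balls[j])
            if t < best_t:
                best_t, best = t, ("collision", i, j)
    return best_t, best

balls = [{"pos": (1.0, 0.7), "vel": (0.8, 0.1)},
         {"pos": (2.0, 0.8), "vel": (0.0, 0.0)}]
print(next_event(balls))
```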
An obvious answer is "it all depends on your problem domain"
That's true. My comments apply only to your example of a billiards simulation. For other problem domains, different rules apply.

Related

Collision Management in a Simulation with Discrete Motion

I am building a simulation in which items (like chess pieces) move on a discrete set of positions that do not follow a sequence (like positions on a chessboard) according to a schedule.
Each position can hold only one item at any given time. The schedule could ask multiple items to move at the same time. If the destination position is occupied, the scheduled movement is cancelled.
Here is the question: if item A and item B, originally situated at position 1 and position 2 respectively, are scheduled to move simultaneously to their next positions position 2 and position 3, how do I make sure that item A gets to position 2, hopefully in an efficient design?
The reason to ask this question is that naively I would check whether position 2 is occupied before letting item A move into it. If the check happens before item B has moved out of the way, item A would not move while in fact it should. Because the positions do not follow a sequence, it is not obvious which one to check first. You could imagine things get messy if many items want to move at the same time. In the extreme case, a full chessboard of items should be allowed to move/rearrange themselves, but the naive check may not be able to facilitate that.
Is there a common practice to handle such "nonexistent collision"? Ideas and references are all welcomed.
Two researchers, Ahmed Al Rowaei and Arnold Buss, published a paper in 2010 investigating the impact that using discrete time steps has on model accuracy/fidelity when the real-world system is event-based. There was also some follow-on work in 2011 with their colleague Stephen Lieberman. A major finding was that if you use time stepped models, order of execution matters and can cause the models to deviate from real-world behaviors in significant ways. Time-stepped models generally require you to introduce tie-breaking logic which doesn't exist in the real system. Logic that is needed for the model but doesn't exist in reality is called a "modeling artifact," and can lead to increased model complexity and inaccuracies. Systematic collision resolution schemes can lead to systematic biases.
Their recommendation was to build models based on continuous time. Events are scheduled using the actual (continuous) event times, which determine the order of event execution as in the real-world system. This occasionally (but rarely) requires priority tie breaking based on event type, so that (for example) departure events occur before arrival events if both were to occur at the exact same time.
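In code, that tie-breaking usually amounts to keying the pending-event heap on (time, priority) rather than time alone; a rough Python sketch, with the priority numbers chosen arbitrarily here:

```python
import heapq
import itertools

DEPARTURE, ARRIVAL = 0, 1        # lower number runs first at equal times

_seq = itertools.count()         # keeps ordering stable for exact ties
future_events = []

def schedule(time, priority, handler):
    heapq.heappush(future_events, (time, priority, next(_seq), handler))

def run(until):
    while future_events and future_events[0][0] <= until:
        time, _, _, handler = heapq.heappop(future_events)
        handler(time)
```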
If you insist on sticking with time-stepped models, a different strategy is to use two or more passes at each time step. The first pass lays out the desired state transitions and identifies potential conflicts, the last pass applies the actual transitions after conflicts have been resolved. The resolution process might be do-able in the initial setup pass, or may require additional passes if it's sufficiently complex.
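For the chessboard-style question above, a two-pass scheme along those lines could be sketched like this (plain Python, dictionary-based state; cancelling all contenders for a contested square is just one possible tie-break rule):

```python
def resolve_moves(occupied, proposals):
    """Two-pass resolution of simultaneous moves.

    occupied:  dict position -> item currently there
    proposals: dict item -> (origin, destination)
    """
    # Pass 1: drop proposals that target the same destination.
    targets = {}
    for item, (_, dest) in proposals.items():
        targets.setdefault(dest, []).append(item)
    moves = {item: od for item, od in proposals.items()
             if len(targets[od[1]]) == 1}

    # Pass 2: repeatedly drop moves whose destination is occupied by an
    # item that is not itself moving away, until nothing changes.
    changed = True
    while changed:
        changed = False
        vacating = {origin for origin, _ in moves.values()}
        for item, (origin, dest) in list(moves.items()):
            if occupied.get(dest) is not None and dest not in vacating:
                del moves[item]
                changed = True

    # Apply all surviving moves atomically.
    for item, (origin, dest) in moves.items():
        del occupied[origin]
    for item, (origin, dest) in moves.items():
        occupied[dest] = item
    return moves

# Item A at 1 moving to 2, item B at 2 moving to 3: both succeed.
board = {1: "A", 2: "B"}
resolve_moves(board, {"A": (1, 2), "B": (2, 3)})
print(board)   # {2: 'A', 3: 'B'}
```

Applying the survivors only after the fixed point is reached is what lets chains and full cycles rearrange in a single step, which the naive one-at-a-time check cannot do.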

Expressing both concurrency and time in activity diagrams

I am not sure how to express my scenario using activity diagrams:
What I am trying to visualise is the fact that:
A message is received
Two independent and concurrent actions take place: logging of the message and processing the message
Logging always takes less time than processing
The first activity in the diagram is correct in the sense that the actions are independent but it does not relay the fact that logging is guaranteed to take less time than processing.
The second activity in the diagram is not correct because, even if logging completes before processing, it looks as though processing depended on the logging's finishing first and that does not represent the reality.
Here is a non-computer related example:
You are a novice in birdwatching, trying to make your first notes in your notebook about birds passing by
A flock of birds approaches, you try to recognise as many details as possible
You want to write down the details in your notebook, but wait: you begin to realise that your theoretical background does not work in practice; what should be a quick scribble actually amounts to nothing in the end because you did not recognise anything
In the meantime, the birds majestically flew away without waiting for you, the activity is gone
Or maybe you did actually write it down, it took you only a moment and the birds are still nearby, slowly flying away, ending the activity again after some time
Or maybe you were in such awe that you just kept watching them, without taking any notes - they fly away, disappearing over the horizon, ending the activity
After a few hours, you have enough notes and you come home very happy - maybe you did not capture everything but this was enough to make you smile anyway
I can always add a comment to a diagram to express it all somehow but I wonder, is there a more structured way to express what I described in an activity diagram? If not an activity diagram then what kind of a diagram would be better suited in your opinion? Thank you.
Your first diagram assumes that the duration of logging is always shorter than processing:
If this assumption is correct, the upper flow reaches the flow-final node, and the remaining flows continue until the first reaches the activity-final node. Here, the processing continues and the activity ends when the processing ends. This is exactly what you want.
But if the execution ever deviates from this assumption and logging gets delayed for any reason, then the end of the processing would reach the activity-final node, resulting in the immediate interruption of all other ongoing activities. So logging would not complete. Maybe that's not a problem for you, but in most cases auditing expects logs to be complete.
A safer approach would be to add a join node:
The advantage is that the activity does not depend on any assumptions. It will always work:
whenever the logging is faster, the token on that flow will wait at the join node; as soon as processing is finished, the join can (safely) happen and the outgoing token reaches the end. This is exactly what you currently expect.
if the logging is exceptionally slow, no problem: the processing will be over, but the activity will wait for the logging to be completed.
This robust notation makes logging like Schrödinger's cat in its box: we don't have to know which activity is longer or shorter. At the end of the activity, both actions are completed.
Time in activity diagrams?
Activity diagrams are not really meant to express timing and duration. It's about the flow of control and the synchronization.
However, if time is important to you, you could:
visually make one activity shorter than the other. This is super-ambiguous and absolutely meaningless from a formal UML point of view. But it's intuitive when readers see the parallel flows (a kind of subliminal communication ;-) ).
add a comment note to express your assumption in plain English. This has the advantage of being very clear and unambiguous.
use UML duration constraints. This is often used in timing diagrams, sometimes in sequence diagrams, but in general not in activity diagrams (personally I have never seen it, but the UML specs don't exclude it either).
Time is something very general in the UML specs, and defined independently of the diagram. For example:
8.4.4.2: A Duration is a value of relative time given in an implementation specific textual format. Often a Duration is a non-negative integer expression representing the number of “time ticks” which may elapse during this duration.
8.5.1: An Interval is a range between two values, primarily for use in Constraints that assert that some other Element has a value in the given range. Intervals can be defined for any type of value, but they are especially useful for time and duration values as part of corresponding TimeConstraints and DurationConstraints.
In your case you have a duration observation for the processing (e.g. d), and a duration constraint for the logging (e.g. 0..d).
8.5.4.2: An IntervalConstraint is shown as an annotation of its constrainedElement. The general notation for Constraints may be used for an IntervalConstraint, with the specification Interval denoted textually (...).
Unfortunately little more is said. The only graphical examples are for messages in sequence diagrams (Fig 8.5 and 17.5) and for timing diagrams (Fig 17.28 to 17.30). Nevertheless, the notation could be extrapolated for activity diagrams, but it would be so unusual that I'd rather recommend the comment note.

Is GPS inaccuracy consistent over short time spans?

I'm interested in developing a semi-autonomous RC lawnmower.
That is, the operator would decide when to stop, turn, etc., but could request "slightly overlap previous cut" and the mower would automatically do so. (Having operated high-end RC mowers at trade shows, this is the tedious part. Overcoming that, plus the high cost -- which I believe is possible -- would make a commercial success.)
This feature would require accurate horizontal positioning. I have investigated ultrasonic, laser, optical, and GPS. Each has its problems in this application. (I'll resist the temptation to go off on these tangents here.)
So... my question...
I know GPS horizontal accuracy is only 3-4m. Not good enough, but:
I don't need to know where I am on the planet. I only need to know where I am relative to where I was a minute ago.
So, my question is, is the inaccuracy consistent in the short term? If so, I think it would work for me. If it varies wildly by ±1.5m from one second to the next, then it will not work.
I have tried to find this information but have had no success (possibly because of the ubiquity of other GPS-accuracy discussion), so I appreciate any guidance.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Edit ~~~~~~~~~~~~~~~~~~~~~~
It's looking to me like GPS is not just skewed but granular. I'd be interested in hearing from anyone who can give better insight into this, but for now I'm going to explore other options.
I realized that even though my intended application is "outdoor", this question is technically in the field of "indoor positioning systems" so I am adding that tag.
My latest thinking is to have 3 "intelligent" high-dB ultrasonic (US) speaker units. The mower emits RF requests for a tone from each speaker in rapid sequence, measuring the time it takes to "hear" each unit's response, thereby calculating the distance to each of these fixed points and using trilateration to get position. If the fixed-point speakers are 300' away from the mower, the mower may have moved several feet between the 1st and 3rd response, so this would have to be allowed for in the software. If it is possible to differentiate 3 different US frequencies, they could be requested/received "simultaneously". Though you still run into issues when you're close to one fixed unit and far from another, so some software correction may still be necessary. If we can assume the mower is moving in a straight line, this isn't too complicated.
Another variation is the mower does not request the tones. The fixed units send RF "here comes tone from unit A" etc., and the mower unit just monitors both RF info and US tones. This may simplify things somewhat, but it seems it really requires the ability to determine which speaker a tone is coming from.
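For what it's worth, the trilateration step itself is straightforward once you have the three distances. Here is a rough least-squares sketch (NumPy, 2-D, speed of sound treated as a constant); it ignores the mower's movement between tones, which, as noted above, would need correcting on top of this:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, varies a little with temperature

def trilaterate(beacons, flight_times):
    """Least-squares 2-D position from distances to three (or more) fixed speakers.

    beacons:      list of (x, y) speaker positions in metres
    flight_times: seconds from the RF trigger to hearing each speaker's tone
    """
    beacons = np.asarray(beacons, dtype=float)
    d = SPEED_OF_SOUND * np.asarray(flight_times, dtype=float)
    x0, y0 = beacons[0]
    # Subtract the first beacon's circle equation from the others to get
    # a linear system A @ [x, y] = b.
    A = 2 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

beacons = [(0.0, 0.0), (90.0, 0.0), (0.0, 90.0)]   # illustrative layout, metres
print(trilaterate(beacons, flight_times=[0.20, 0.15, 0.25]))
```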
This seems like the kind of thing you could (and should) measure empirically. Just set a GPS of your liking down in the middle of a field on a clear day and wait an hour. Then come back and see what you find.
Because I'm in a city, I can't run out and do this for you. However, I found a paper entitled iGeoTrans – A novel iOS application for GPS positioning in geosciences.
That includes this figure which duplicates the test I propose. You'll note that both the iPhone4 and Garmin eTrex10 perform pretty poorly versus the accuracy you say you need.
But the authors do some Math Magic™ to reduce the uncertainty in the position, presumably by using some kind of averaging. That gets them to a 3.53m RMSE measure.
If you have real-time differential GPS, you can do better. But this requires relatively expensive hardware and software.
Even aside from the above, you have the potential issue of GPS reflection and multipath error. What if your mower has to go under a deck, or thick trees, or near the wall of a house? These common yard features will likely break the assumptions needed to make a good averaging algorithm work and even frustrate attempts at DGPS by blocking critical signals.
To my mind, this seems like a computer vision problem. And not just because that'll give you more accurate row overlaps... you definitely don't want to run over a dog!
In my opinion a standard GPS is in no way accurate enough for this application. A typical consumer-grade receiver that I have used has a position accuracy defined as a CEP of 2.5 metres. This means that for a stationary receiver in a "perfect" sky-view environment, 50% of the position fixes over time will lie within a circle with a radius of 2.5 metres. If you look at the position that the receiver reports, it appears to wander at random around the true position, sometimes moving a number of metres away from its true location. When I have monitored the position data from a number of stationary units, they could appear to be moving at speeds of up to 0.5 metres per second. In your application this would mean that the lawnmower could be out of position by some not insignificant distance (with disastrous consequences for your prized flowerbeds).
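If you log fixes from a stationary receiver (after projecting lat/lon to metres on a local grid), the short-term consistency the question asks about falls straight out of the consecutive-fix deltas. A rough sketch, with the moving average standing in for the kind of smoothing mentioned above:

```python
import math

def short_term_drift(fixes):
    """fixes: list of (t_seconds, x_metres, y_metres) from a stationary receiver.
    Returns the apparent speed between consecutive fixes."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(fixes, fixes[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return speeds

def moving_average(fixes, window=30):
    """Crude position smoothing over the last `window` fixes."""
    out = []
    for i in range(len(fixes)):
        chunk = fixes[max(0, i - window + 1): i + 1]
        out.append((sum(x for _, x, _ in chunk) / len(chunk),
                    sum(y for _, _, y in chunk) / len(chunk)))
    return out
```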
There is a way that this can be done, as has been proved by the tractor manufacturers who can position the seed drills and agricultural sprayers to millimetre accuracy. These systems use Differential GPS where there is a fixed reference station positioned in the neighbourhood of the tractor being controlled. This reference station transmits error corrections to the mobile unit allowing it to correct its reported position to a high degree of accuracy. Unfortunately this sort of positioning system is very expensive.

What is the shortest perceivable application response delay?

A delay will always occur between a user action and an application response.
It is well known that the lower the response delay, the greater the feeling of the application responding instantaneously. It is also commonly known that a delay of up to 100ms is generally not perceivable. But what about a delay of 110ms?
What is the shortest application response delay that can be perceived?
I'm interested in any solid evidence, general thoughts and opinions.
The 100 ms threshold was established over 30 years ago. See:
Card, S. K., Robertson, G. G., and Mackinlay, J. D. (1991). The information visualizer: An information workspace. Proc. ACM CHI'91 Conf. (New Orleans, LA, 28 April-2 May), 181-188.
Miller, R. B. (1968). Response time in man-computer conversational transactions. Proc. AFIPS Fall Joint Computer Conference Vol. 33, 267-277.
Myers, B. A. (1985). The importance of percent-done progress indicators for computer-human interfaces. Proc. ACM CHI'85 Conf. (San Francisco, CA, 14-18 April), 11-17.
What I remember learning was that any latency of more than 1/10th of a second (100ms) for the appearance of letters after typing them begins to negatively impact productivity (you instinctively slow down, less sure you have typed correctly, for example), but that below that level of latency productivity is essentially flat.
Given that description, it's possible that a latency of less than 100ms might be perceivable as not being instantaneous (for example, trained baseball umpires can probably resolve the order of two events even closer together than 100ms), but it is fast enough to be considered an immediate response for feedback, as far as effects on productivity. A latency of 100ms and greater is definitely perceivable, even if it's still reasonably fast.
That's for visual feedback that a specific input has been received. Then there'd be a standard of responsiveness in a requested operation. If you click on a form button, getting visual feedback of that click (eg. the button displays a "depressed" look) within 100ms is still ideal, but after that you expect something else to happen. If nothing happens within a second or two, as others have said, you really wonder if it took the click or ignored it, thus the standard of displaying some sort of "working..." indicator when an operation might take more than a second before showing a clear effect (eg. waiting for a new window to pop up).
New research as of January, 2014:
http://newsoffice.mit.edu/2014/in-the-blink-of-an-eye-0116
...a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds...That speed is far faster than the 100 milliseconds suggested by previous studies...
At the San Francisco Opera house, we routinely set up precise delay settings for each of our speakers. We can detect 5 millisecond changes in the delay times to our speakers. When you make such subtle changes, you change where the sound appears to come from. Often we want sound to appear to come from someplace other than where the speakers are; precise delay adjustments make this possible. Sound delays of 15 milliseconds are very obvious even to untrained ears because they radically shift where the sound appears to come from. A simple test to prove this is to play sound through multiple speakers and have the subject close their eyes and point to where the sound is coming from. Now make a slight change in the delay time to one of the speakers of just a few milliseconds, and have the person point again to where the sound is coming from. Making changes in delay times is acoustically very similar to moving the actual speakers.
I don't think anecdotes or opinions are really valid for answers here. This question touches on the psychology of user experience and the sub-conscious mind. The human brain is powerful and fast and mere milliseconds do count and are registered. I am no expert but I know there is much science behind e.g. what Matt Jacobsen mentioned. Check out Google's study here http://services.google.com/fh/files/blogs/google_delayexp.pdf for an idea of how much it can affect site traffic.
Here's another study by Akamai - 2 second response time
http://www.akamai.com/html/about/press/releases/2009/press_091409.html (From https://ux.stackexchange.com/questions/5529/once-apon-a-time-there-was-a-10-seconds-to-load-a-page-rule-what-is-it-nowa )
Does anyone have any other studies to share?
Persistence of vision is around 100ms so it should be a reasonable visual feedback delay. 110ms should make no difference, as it is an approximate value. In practice you won't notice a delay below 200ms.
From memory, studies have shown that users lose patience and retry an operation after around 2s of inactivity (in the absence of feedback), e.g. after clicking on a confirm or action button. So plan on using some kind of animation if the action takes longer than 1s.
I worked on an application that had an explicit business goal of being blindingly fast, and we had a max allowed server time of 150ms for processing a full web page.
No solid evidence but for our own application, we allow a maximum of one second between a user action and feedback. If it does take longer, a "waiting box" should be shown.
A user should see "something" happening within a second of causing an action.
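A minimal sketch of that rule in plain Python, where the printed "working..." line stands in for a real waiting box and only appears if the operation exceeds the one-second threshold:

```python
import threading
import time

def with_busy_indicator(operation, threshold=1.0):
    """Run `operation`; show a placeholder only if it exceeds `threshold` seconds."""
    timer = threading.Timer(threshold, lambda: print("working..."))
    timer.start()
    try:
        return operation()
    finally:
        timer.cancel()          # no indicator if we finished in time

# Example: the indicator appears because the work takes longer than 1 s.
with_busy_indicator(lambda: time.sleep(2))
```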
Use the dual of the test for visual spatial resolution (two parallel black bars of equal width, with an equal gap between them; reduce the angular subtense until they appear to be one line, i.e. scale down or simply move away; the point at which they seem to merge into one line shows the threshold).
Use a function generator to blink an LED on for an interval, then off, then on, then off --- the same time delay for each interval --- and repeat the pattern while gradually decreasing that delay. This is the same as above, but with time in place of space.
Imagine an oscilloscope image like so:
_________/^d^\_d_/^d^\_________
I note that at a 41 ms interval I perceive only one longer blink, but at 42 ms I perceive it as an extremely rapid double blink. Thus, the threshold is ~42 ms. It probably varies depending on person, age, condition, etc.
This is close to 24 fps, which is probably why cinema works at that presentation rate.
Reaction time to see something, then decide to react, say by clicking a mouse etc., is much longer again. Thus, it's not surprising that experiments requiring a reaction response to measure yield a longer time, but that longer delay wasn't what you were asking for, and the above experiment is easy and illuminating!
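If you want to try the double-blink version on a screen rather than an LED, here is a rough pygame sketch; note that display refresh (commonly 60 Hz, so ~16.7 ms steps) and OS scheduling quantize the timings, so treat any threshold found this way as approximate:

```python
import pygame

def double_blink(gap_ms, on_ms=30, repeats=5):
    """Flash on/off/on with a dark gap of gap_ms between the two flashes.
    Lower gap_ms between runs until the two flashes fuse into one."""
    pygame.init()
    screen = pygame.display.set_mode((200, 200))
    for _ in range(repeats):
        for colour, duration in [((255, 255, 255), on_ms),   # first flash
                                 ((0, 0, 0), gap_ms),         # the gap under test
                                 ((255, 255, 255), on_ms),    # second flash
                                 ((0, 0, 0), 800)]:           # pause between trials
            screen.fill(colour)
            pygame.display.flip()
            pygame.time.wait(duration)
            pygame.event.pump()
    pygame.quit()

double_blink(gap_ms=42)
```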
But note also -- smoothly moving animations require the visual cortex to work harder, delaying visual comprehension. This delay is 'hidden' from perception, so longer delays (several hundred ms) can be 'hidden' by just providing something that's difficult to see because it is moving.
The effect that hides it is called Chronostasis. Basically, glancing somewhere 'new' requires the visual cortex to work harder to 'de-render' / 'recognise' the scene. This takes a remarkably long time, during which your consciousness is essentially 'paused'.
Once looking at a mostly-constant scene, only changes need this processing, so smaller/faster changes are possible and your perceptual experience resumes, and faster/smaller movements are detectable.
The detection of changes visually is processed basically on your retina. Your eyes also have a natural 'bandpass' response -- stare unblinkingly at anything for sufficient time, and at sufficient distance for saccades to be unable to change the image much, and you will find your visual feed fading out to 'grey'. This is what gives us our 'white balance', and is somewhat similar to the automatic gain control on analogue radio/tv.
The point is that your eyes themselves have a time constant to respond, but this is actually dependent on the strength of the stimulus (the brightness of the LED, in our case).
Too bright, and the ability of your retinal cells to 'relax' back from the brightness, ie, respond to the 'sudden dark', is compromised.
The effect which keeps you seeing bright things after the light has stopped is called 'persistence of vision', and old cathode-ray picture tubes more or less depend heavily on it for them to work at all.
This is the one that's usually 100 ms or so, but it's not a 'sharp' interval -- more like an exponential roll-off, and again -- it changes duration depending on how bright the stimulus is relative to how dark-adjusted (i.e., sensitive) the eye is at that moment.
For duller, faster changes, especially changes outside your fovea, you will perceive even higher rates easily. Eg, flickering lights. Those outer parts of your retina (most of the area, actually) are adapted to detecting movement, and bringing it to your attention. So it makes sense that although lacking spatial resolution, they have greater time resolution / shorter response rate.
But this also means animating things usually requires even finer time steps, otherwise 'jumpiness' is perceptible, mostly due to that faster response.
Note all the scaling/sliding full screen animations iOS uses -- these essentially exploit chronostasis to hide technically unavoidable loading delays, giving the perception that those products respond instantly and smoothly at all times.
So, show something different within 42 ms -> instant response.
Keep animating otherwise useless hard-to-see-properly visuals continuously at high frame rates, then stop suddenly when done -> hides the delay so long as enough is visually busy, and the delay isn't too long. (probably 250ms is pushing the friendship).
This also seems to tee up with other's perceptions of input lag, for example : http://danluu.com/input-lag/
100ms is totally wrong. You can prove this yourself using your fingers, a desk, and a watch with visible seconds. Synchronising to the watch's seconds, drum out beats on the desk continuously such that 16 beats are drummed out every second. I chose 16 because it is natural to drum out multiples of two, so it's like four strong beats with three weak beats in between. Adjacent beats are clearly discernible by their sound. The beats are separated by about 60ms, so even 60 ms is actually still too high. Therefore the threshold is way below 100ms, especially if sound is involved.
For instance, a drum app or a keyboard app needs a delay of more like 30ms, or else it gets really annoying, because you hear the sound coming from the physical button / pad / key well before the sound comes out of the speakers. Software like ASIO and jack were made specifically to deal with this issue, so no excuses. If your drum app has a 100ms delay, I will hate you.
The situation for VoIP and high powered gaming is actually worse, because you need to react to events in real time, and in music, at least you get to plan ahead at least a little. For an average human reaction time of 200ms, a further 100ms delay is an enormous penalty. It noticeably changes the conversational flow of VoIP. In gaming, 200ms reaction time is generous, especially if the players have a lot of practice.
For a reasonably current scholarly article, try How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch (PDF). While the main focus was on the JND (Just Noticeable Difference) of delay, there is some good background on absolute delay perception, and they also acknowledge and account for 60Hz monitors (16.7 ms repaint times) in their second experiment.
I am a cognitive neuroscientist who studies visual perception and cognition.
The paper by Mary Potter mentioned above regards the minimum time required to categorize a visual stimulus. However, understand that this is under laboratory conditions in the absence of any other visual stimuli, which certainly would not be the case in the real world user experience.
The typical benchmark for a stimulus-response / input-stimulus interaction, that is, the average amount of time for an individual's minimum reaction speed or input-response detection, is ~200ms. To be certain there is no detectable difference, this threshold could be lowered to around 100ms. Below this threshold, the temporal dynamics of your cognitive processes take longer to compute the event than the event itself, so there is nearly no chance of any ability to detect or differentiate it. You could go lower, to say 50 ms, but it really wouldn't be necessary. 10 ms and you've gone into the territory of overkill.
For web applications, 200ms is considered an unnoticeable delay, while 500ms is acceptable.

Repeating "Events" (Calendar)

I'm currently working on an application that allows people to schedule "Shows" for an online radio station.
I want the ability for the user to set up a repeated event, for example:
"Manic Monday" show - Every Monday From 9-11
"Mid Month Madness" - Every Second Thursday of the Month
"This months new music" - 1st of every month.
What, in your opinion, is the best way to model this (based around an MVC/MTV structure)?
Note: I'm actually coding this in Django. But I'm more interested in the theory behind it, rather than specific implementation details.
Ah, repeated events - one of the banes of my life, along with time zones. Calendaring is hard.
You might want to model this in terms of RFC2445. However, that may well give you far more flexibility - and complexity - than you really want.
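To make the RFC2445 option concrete, your three example shows map onto recurrence rules quite directly. A sketch using python-dateutil (which implements those rules); the start date is purely illustrative:

```python
from datetime import datetime
from dateutil.rrule import rrulestr

start = datetime(2024, 1, 1, 9, 0)    # illustrative first air date/time

shows = {
    "Manic Monday":           "FREQ=WEEKLY;BYDAY=MO",       # every Monday
    "Mid Month Madness":      "FREQ=MONTHLY;BYDAY=+2TH",    # second Thursday
    "This month's new music": "FREQ=MONTHLY;BYMONTHDAY=1",  # 1st of each month
}

for name, rule in shows.items():
    occurrences = rrulestr(rule, dtstart=start).between(
        datetime(2024, 1, 1), datetime(2024, 3, 1))
    print(name, [d.isoformat() for d in occurrences])
```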
A few things to consider:
Do you need any finer granularity than a certain time on given dates? If you need to repeat based on time as well, it becomes trickier.
Consider date corner cases such as "the 30th of every month" and what that means for leap years
Consider time corner cases such as "1.30am every day" - sometimes 1.30am may happen twice, and sometimes it may not happen at all, due to daylight saving time
Do you need to share the schedule with people in other time zones? That makes life trickier again
Do you need to represent the number of times an event occurs, or a final date on which it occurs? ("Count" or "until" basically.) You may not need either, or you may need one or both.
I realise this is a list of things to think about more than a definitive answer, but I think it's important to define the parameters of your problem before you try to work out a solution.
From reading other posts, Martin Fowler describes recurring events the best.
http://martinfowler.com/apsupp/recurring.pdf
Someone implemented these classes for Java.
http://www.google.com/codesearch#vHK4YG0XgAs/src/java/org/chronicj/DateRange.java
I've had a thought that repeated events should be generated when the original event is saved, with a new model. This means I'm not doing random processing every time the calendar is loaded (and means I can also, for example, cancel one "Show" in a series), but it also means that I have to limit this to a certain time frame, so if someone went, say, a year into the future, they wouldn't see these repeated shows. But at some point, they'd have to (potentially) be re-generated.
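That generate-ahead idea can stay quite small if you store the rule on the parent Show and materialise concrete occurrences only up to a rolling horizon. A sketch (python-dateutil again; the field names and horizon are hypothetical):

```python
from datetime import datetime, timedelta
from dateutil.rrule import rrulestr

HORIZON = timedelta(days=90)     # how far ahead occurrences are materialised

def occurrences_to_create(show_start, rrule_text, generated_until=None):
    """Return the concrete datetimes to store as individual occurrence rows
    (so a single show in the series can be cancelled or edited), plus the
    new high-water mark to save back on the parent Show."""
    window_start = generated_until or show_start
    window_end = datetime.now() + HORIZON
    rule = rrulestr(rrule_text, dtstart=show_start)
    return rule.between(window_start, window_end, inc=True), window_end
```

A periodic task (or a check when a far-future calendar page is requested) re-runs this to push the high-water mark forward, which covers the "someone browses a year ahead" case.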

Resources