I'm looking for ways to de-spaghettify my front-end widget code. It's been suggested that a Finite State Machine is the right way to think about what I'm doing. I know a State Machine paradigm can be applied to almost any problem. I'm wondering if there are some experienced UI programmers who actually make a habit of this.
So, the question is -- do any of you UI programmers think in terms of State Machines in your work? If so, how?
thanks,
-Morgan
I'm currently working with a (proprietary) framework that lends itself well to the UI-as-state-machine paradigm, and it can definitely reduce (but not eliminate) the problems with complex and unforeseen interactions between UI elements.
The main benefit is that it allows you to think at a higher level of abstraction, at a higher granularity. Instead of thinking "If button A is pressed then combobox B is locked, textfield C is cleared and button D is unlocked", you think "Pressing button A puts the app into the CHECKED state" - and entering that state means that certain things happen.
I don't think it's useful (or even possible) to model the entire UI as a single state machine, though. Instead, there are usually a number of smaller state machines, each handling one part of the UI (consisting of several controls that interact and belong together conceptually), and one (maybe more than one) "global" state machine that handles more fundamental issues.
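To make the idea concrete, here is a minimal sketch in Python (the class, state, and control names are all made up for illustration): a small state machine for one group of related controls, where entering a state applies all of its UI side effects in one place instead of scattering them across event handlers.

```python
# One small state machine per group of related controls. Entering a state
# runs a single callback that applies every widget change for that state.

class WidgetGroupFSM:
    def __init__(self, transitions, on_enter):
        self.transitions = transitions  # {(state, event): next_state}
        self.on_enter = on_enter        # {state: callback applying UI effects}
        self.state = "IDLE"

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
            # apply all side effects of entering the new state in one place
            self.on_enter.get(self.state, lambda: None)()
        return self.state

effects = []  # stand-in for real widget calls
fsm = WidgetGroupFSM(
    transitions={("IDLE", "button_a_pressed"): "CHECKED",
                 ("CHECKED", "button_d_pressed"): "IDLE"},
    on_enter={"CHECKED": lambda: effects.extend(
        ["lock combobox B", "clear textfield C", "unlock button D"])},
)
fsm.handle("button_a_pressed")  # app is now in the CHECKED state
```

The point is that "pressing button A" no longer knows anything about combobox B or textfield C; it only raises an event, and the state's entry action does the rest.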
State machines are generally too low-level to help you think about a user interface. They make a nice implementation choice for a UI toolkit, but there are just too many states and transitions to describe in a normal application for you to describe them by hand.
I like to think about UIs with continuations. (Google it -- the term is specific enough that you will get a lot of high quality hits.)
Instead of my apps being in various states represented by status flags and modes, I use continuations to control what the app does next. It's easiest to explain with an example. Say you want to popup a confirmation dialog before sending an email. Step 1 builds an email. Step 2 gets the confirmation. Step 3 sends the email. Most UI toolkits require you to pass control back to an event loop after each step which makes this really ugly if you try to represent it with a state machine. With continuations, you don't think in terms of the steps the toolkit forces upon you -- it's all one process of building and sending an email. However, when the process needs the confirmation, you capture the state of your app in a continuation and hand that continuation to the OK button on the confirmation dialog. When OK is pressed, your app continues from where it was.
Continuations are relatively rare in programming languages, but luckily you can get sort of a poor man's version using closures. Going back to the email sending example, at the point you need to get the confirmation you write the rest of the process as a closure and then hand that closure to the OK button. Closures are sort of like anonymous nested subroutines that remember the values of all your local variables the next time they are called.
Hopefully this gives you some new directions to think about. I'll try to come back later with real code to show you how it works.
Update: Here's a complete example with Qt in Ruby. The interesting parts are in ConfirmationButton and MailButton. I'm not a Qt or Ruby expert so I'd appreciate any improvements you all can offer.
require 'Qt4'

class ConfirmationWindow < Qt::Widget
  def initialize(question, to_do_next)
    super()
    label = Qt::Label.new(question)
    ok = ConfirmationButton.new("OK")
    ok.to_do_next = to_do_next
    cancel = Qt::PushButton.new("Cancel")
    Qt::Object::connect(ok, SIGNAL('clicked()'), ok, SLOT('confirmAction()'))
    Qt::Object::connect(ok, SIGNAL('clicked()'), self, SLOT('close()'))
    Qt::Object::connect(cancel, SIGNAL('clicked()'), self, SLOT('close()'))
    box = Qt::HBoxLayout.new()
    box.addWidget(label)
    box.addWidget(ok)
    box.addWidget(cancel)
    setLayout(box)
  end
end

class ConfirmationButton < Qt::PushButton
  slots 'confirmAction()'
  attr_accessor :to_do_next
  def confirmAction()
    @to_do_next.call()
  end
end

class MailButton < Qt::PushButton
  slots 'sendMail()'
  def sendMail()
    lucky = rand().to_s()
    message = "hello world. here's your lucky number: " + lucky
    do_next = lambda {
      # Everything in this block will be delayed until the
      # confirmation button is clicked. All the local
      # variables calculated earlier in this method will retain
      # their values.
      print "sending mail: " + message + "\n"
    }
    popup = ConfirmationWindow.new("Really send " + lucky + "?", do_next)
    popup.show()
  end
end

app = Qt::Application.new(ARGV)
window = Qt::Widget.new()
send_mail = MailButton.new("Send Mail")
quit = Qt::PushButton.new("Quit")
Qt::Object::connect(send_mail, SIGNAL('clicked()'), send_mail, SLOT('sendMail()'))
Qt::Object::connect(quit, SIGNAL('clicked()'), app, SLOT('quit()'))
box = Qt::VBoxLayout.new(window)
box.addWidget(send_mail)
box.addWidget(quit)
window.setLayout(box)
window.show()
app.exec()
It's not the UI that needs to be modeled as a state machine; it's the objects being displayed that it can be helpful to model as state machines. Your UI then becomes (oversimplification) a bunch of event handlers for change-of-state in the various objects.
It's a change from:
DoSomethingToTheFooObject();
UpdateDisplay1(); // which is the main display for the Foo object
UpdateDisplay2(); // which has a label showing the Foo's width,
// which may have changed
...
to:
Foo.DoSomething();
void OnFooWidthChanged() { UpdateDisplay2(); }
void OnFooPaletteChanged() { UpdateDisplay1(); }
Thinking about what changes in the data you are displaying should cause what repainting can be clarifying, both from the client UI side and the server Foo side.
If you find that, of the 100 UI thingies that may need to be repainted when Foo's state changes, all of them have to be redrawn when the palette changes, but only 10 when the width changes, it might suggest something about what events/state changes Foo should be signaling. If you find that you have a large event handler OnFooStateChanged() that checks through a number of Foo's properties to see what has changed, in an attempt to minimize UI updates, it suggests something about the granularity of Foo's event model. If you find you want to write a little standalone UI widget you can use in multiple places in your UI, but that it needs to know when Foo changes and you don't want to include all the code that Foo's implementation brings with it, it suggests something about the organization of your data relative to your UI, where you are using classes vs interfaces, etc. Fundamentally, it makes you think more seriously about what is your presentation layer, more seriously than "all the code in my form classes".
-PC
There is a book out there about this topic.
Sadly it's out of print, and the rare used copies available are very expensive.
Constructing the User Interface with Statecharts
by Ian Horrocks, Addison-Wesley, 1998
We were just talking about Horrocks' Constructing the User Interface with Statecharts; second-hand prices range from $250 up to nearly $700. Our software development manager rates it as one of the most important books he's got (sadly, he lives on the other side of the world).
Samek's books on statecharts draw significantly from this work, although in a slightly different domain, and are reportedly not as clear. "Practical UML Statecharts in C/C++: Event-Driven Programming for Embedded Systems" is also available on Safari.
Horrocks is cited quite heavily - there are twenty papers on the ACM Portal so if you have access there you might find something useful.
There's a book and accompanying software, FlashMX for Interactive Simulation. They have a PDF sample chapter on statecharts.
Objects, Components, and Frameworks with UML: The Catalysis(SM) Approach has a chapter on Behaviour Models which includes about ten pages of useful examples of using statecharts (I note that it is available very cheaply second hand). It is rather formal and heavy going but that section is easy reading.
It's not really a UI problem, to be honest.
I'd do the following:
Define your states
Define your transitions - which states are accessible from which others?
How are these transitions triggered? What are the events?
Write your state machine - store the current state, receive events, and if that event can cause a valid transition from the current state then change the state accordingly.
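As a sketch of those four steps (in Python, with made-up states and events): the states and transitions become a plain table, and the machine only changes state when the table says the transition is valid.

```python
# Step 1: states are "locked"/"unlocked"; step 2-3: the table below lists
# which events cause which transitions; step 4: the machine itself.

TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

class Turnstile:
    def __init__(self):
        self.state = "locked"  # store the current state

    def on_event(self, event):
        # only change state if this event is a valid transition from here
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is not None:
            self.state = next_state
        return self.state

t = Turnstile()
t.on_event("coin")   # valid: locked -> unlocked
t.on_event("coin")   # no ("unlocked", "coin") entry: ignored
```

Invalid events simply fall through, which is exactly the property that keeps spaghetti out: there is one place that decides what is allowed.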
I have a Prezi presentation about a pattern that I call "State First".
It is a combination of MVP/IoC/FSM and I've used it successfully in .Net/WinForms, .Net/Silverlight and Flex (at the moment).
You start by coding your FSM:
class FSM
  IViewFactory ViewFactory;
  IModelFactory ModelFactory;
  Container Container; // e.g. a StackPanel in SL

  ctor(viewFactory, modelFactory, container) {
    ...assignments...
    start();
  }

  start() {
    var view = ViewFactory.Start();
    var model = ModelFactory.Start();
    view.Context = model;
    view.Login += (s,e) => {
      var loginResult = model.TryLogin(); // vm contains username/password now
      if (loginResult.Error) {
        // show error?
      } else {
        loggedIn(loginResult.UserModel); // jump to the loggedIn state
      }
    };
    show(view);
  }

  loggedIn(UserModel model) {
    var view = ViewFactory.LoggedIn();
    view.Context = model;
    view.Logout += (s,e) => {
      start(); // jump back to start
    };
    show(view);
  }
Next up you create your IViewFactory and IModelFactory (your FSM makes it easy to see what you need)
public interface IViewFactory {
IStartView Start();
ILoggedInView LoggedIn();
}
public interface IModelFactory {
IStartModel Start();
}
Now all you need to do is implement IViewFactory, IModelFactory, IStartView, ILoggedInView and the models. The advantage here is that you can see all transitions in the FSM, you get über-low coupling between the views/models, high testability and (if your language permits) a great deal of type safety.
One important point in using the FSM is that you shouldn't just jump between the states - you should also carry all stateful data with you in the jump (as arguments, see loggedIn above). This will help you avoid the global state that usually litters GUI code.
You can watch the presentation at http://prezi.com/bqcr5nhcdhqu/ but it contains no code examples at the moment.
Each interface item that's presented to the user can go to another state from the current one. You basically need to create a map of what button can lead to what other state.
This mapping will allow you to see unused states or ones where multiple buttons or paths can lead to the same state and no others (ones that can be combined).
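A rough Python sketch of such a mapping (the buttons and states are invented for illustration): once the map is plain data, finding states that nothing leads to becomes a simple set operation.

```python
# Map of (current state, button) -> next state. "orphan" is deliberately a
# state that no button ever leads to, to show how the analysis finds it.

transitions = {
    ("main", "settings_btn"): "settings",
    ("main", "help_btn"): "help",
    ("settings", "back_btn"): "main",
    ("help", "back_btn"): "main",
    ("orphan", "back_btn"): "main",
}

start_state = "main"
all_states = {s for s, _ in transitions} | set(transitions.values())

# states that are never the target of any transition (besides the start state)
unreachable = all_states - set(transitions.values()) - {start_state}
```

The same table can be inverted to find states that several buttons funnel into, i.e. candidates for combining.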
Hey Morgan, we're building a custom framework in AS3 here at Radical and use the state machine paradigm to power any front end UI activity.
We have a state machine setup for all button events, all display events and more.
AS3, being an event driven language, makes this a very attractive option.
When certain events are caught, states of buttons / display objects are automatically changed.
Having a generalized set of states could definitely help de-spaghettify your code!
A state machine, at its simplest, is logic that has memory of past events.
Humans are state machines in that sense, and they often expect their software to remember what they've done in the past so that they can proceed.
For instance, you can put the entire survey on one page, but people are more comfortable with multiple smaller pages of questions. Same with user registrations.
So state machines have a lot of applicability to user interfaces.
They should be understood before being deployed, though, and the entire design should be complete before code is written. State machines can, are, and will be abused; if you don't have a very clear idea of why you're using one and what the goal is, you may end up worse off than with other techniques.
-Adam
Related
I am hunting for a library to write a GUI on top of GLFW and OpenGL. I'm doing this because I am dissatisfied with the common UI library bindings, which I feel are too imperative, and I would also like tight control of the look and feel of my UIs. I'd like a declarative approach to defining UIs. I am experimenting with reactive-banana (and temporarily reactive-banana-wx) to see if it meets my needs. I have a problem with defining recursive widgets. Here's my simplest test case:
A text widget that displays a counter.
A button widget that increments the counter.
A button widget that is inactive (so it is greyed out and does not respond to input at all) when the counter is 0 and otherwise active and resets the counter to 0.
The first and third widget have a recursive relationship. The first widget is intuitively a stepper of a union of events fed from the two buttons. However, the reset button is an fmap of the counter, and then the event stream relies on the reset button! What is to be done?
Beyond this question I have a concern about event handling: since I want to handle device input and input focus within my code instead of relying on a framework, I see difficulties ahead in correctly dispatching events in a scalable way. Ideally I would define a data structure that encapsulates the hierarchical structure of a widget, a way to install event callbacks between the elements, and then write a function that traverses that data structure to define device input processing and graphical output. I am not sure how to take an event stream and split it as easily as event streams can be merged.
Recursion is allowed, as long as it is mutual recursion between a Behavior and an Event. The nice thing about Behaviors is that sampling them at the time of an update will return the old value.
For instance, your example can be expressed as follows
eClick1, eClick2 :: Event t ()
bCounter :: Behavior t Int
bCounter = accumB 0 $ mconcat [eIncrement, eReset]
eIncrement = (+1) <$ eClick1
eReset = (const 0) <$ whenE ((> 0) <$> bCounter) eClick2
See also the question "Can reactive-banana handle cycles in the network?"
As for your second question, you seem to be looking for the function filterE and its cousins filterApply and whenE?
As for your overall goal, I think it is quite ambitious. From what little experience I have gained so far, it seems to me that binding to an existing framework feels quite different from making a "clean-slate" framework in FRP. Most likely, there are still some undiscovered (but exciting!) abstractions lurking there. I once started to write an application called BlackBoard that contains a nice abstraction about time-varying drawings.
However, if you care more about the result rather than the adventure, I would recommend a conservative approach: create the GUI toolkit in an imperative style and hook reactive-banana on top of that to get the benefits of FRP.
In case you just wish for any GUI, I am currently focussing on the web browser as a GUI. Here are some preliminary experiments with Ji. The main benefit over wxHaskell is that it's a lot easier to get up and running, and any API design efforts will benefit a very wide audience.
I'm not generally a GUI programmer, but as luck would have it, I'm stuck building a GUI for a project. The language is Java, although my question is general.
My question is this:
I have a GUI with many enabled/disabled options and checkboxes.
The relationship between what options are currently selected and what options are allowed to be selected is rather complex. It can't be modeled as a simple decision tree. That is, options selected farther down the decision tree can impose restrictions on options farther up the tree, and the user should not be required to "work his way down" from top-level options.
I've implemented this in a very poor way, it works but there are tons of places that roughly look like:
if (checkboxA.isEnabled() && checkboxB.isSelected())
{
    // enable/disable a bunch of checkboxes
    // select/unselect a bunch of checkboxes
}
This is far from ideal. The initial set of options specified was very simple, but as most projects seem to work out, additional options were added, and the definition of which configurations of options are allowed continually grew, to the point that the code, while functional, is a mess; time didn't allow for fixing it properly till now.
I fully expect more options/changes in the next phase of the project and fully expect the change requests to be a fluid process. I want to rebuild this code to be more maintainable and most importantly, easy to change.
I could model the options in a many-dimensional array, but I cringe at the difficulty of making changes and the nondescript nature of the array indexes.
Is there a data structure that the GUI programmers out there would recommend for dealing with a situation like this? I assume this is a problem that's been solved elegantly before.
Thank you for your responses.
The important savings of code and sanity you're looking for here are declarative approach and DRY (Don't Repeat Yourself).
[Example for the following: let's say it's illegal to enable all 3 of checkboxes A, B and C together.]
Bryan Batchelder gave the first step of tidying it up: write a single rule for validity of each checkbox:
if (B.isSelected() && C.isSelected()) {
    A.forceOff(); // making methods up - disable & unselect
} else {
    A.enable();
}
// similar rules for B and C...
// similar code for other relationships...
and re-evaluate it anytime anything changes. This is better than scattering changes to A's state among many places (when B changes, when C changes).
But we still have duplication: the single conceptual rule for which combinations of A,B,C are legal was broken down into 3 rules for when you can allow free changes of A, B, and C. Ideally you'd write only this:
bool validate() {
    if (A.isSelected() && B.isSelected() && C.isSelected()) {
        return false;
    }
    // other relationships...
    return true;
}
and have all checkbox enabling / forcing deduced from that automatically!
Can you do that from a single validate() rule? I think you can! You simulate possible changes - would validate() return true if A is on? off? If both are possible, leave A enabled; if only one state of A is possible, disable it and force its value; if none are possible - the current situation itself is illegal. Repeat the above simulation for A = other checkboxes...
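Here is a minimal Python sketch of that simulation (the rule and checkbox names are invented): each checkbox stays enabled only if validate() accepts both of its possible values in the current context.

```python
# Single conceptual rule: A, B, and C must not all be on together.
def validate(state):
    return not (state["A"] and state["B"] and state["C"])

def enabled_controls(state):
    """For each checkbox, simulate both of its values against validate();
    leave it enabled only if both values are legal in the current state."""
    enabled = {}
    for name in state:
        legal = {v for v in (True, False)
                 if validate({**state, name: v})}
        enabled[name] = len(legal) == 2
    return enabled

state = {"A": True, "B": True, "C": False}
flags = enabled_controls(state)
# With A and B on, C is pinned (turning it on would violate the rule),
# while A and B can still be freely toggled.
```

If `legal` ever came back empty for some checkbox, the current state itself is illegal, which is worth asserting during development.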
Something inside me is itching to require here a simulation over all possible combinations of changes. Think of situations like "A should not disable B yet, because although B is currently illegal with C on, enabling B would force C off, and with that B is legal"... The problem is that down that road lies complete madness and unpredictable UI behaviour. I believe simulating only changes of one widget at a time relative to the current state is the Right Thing to do, but I'm too lazy to prove it now. So take this approach with a grain of scepticism.
I should also say that all this sounds at best confusing for the user! Several random ideas that might(?) lead you to more usable GUI designs (or at least mitigate the pain):
Use GUI structure where possible!
Group widgets that depend on a common condition.
Use radio buttons over checkboxes and dropdown selections where possible.
Radio buttons can be disabled individually, which makes for better feedback.
Use radio buttons to flatten combinations: instead of checkboxes "A" and "B" that can't be on at once, offer "A"/"B"/"none" radio buttons.
List compatibility constraints in GUI / tooltips!
Auto-generate tooltips for disabled widgets, explaining which rule forced them?
This one is actually easy to do.
Consider allowing contradictions but listing the violated rules in a status area, requiring the user to resolve them before he can press OK.
Implement undo (& redo?), so that causing widgets to be disabled is non-destructive?
Remember the user-assigned state of checkboxes when you disable them, restore when they become enabled? [But beware of changing things without the user noticing!]
I've had to work on similar GUIs and ran into the same problem. I never did find a good data structure, so I'll be watching other answers to this question with interest. It gets especially tricky when you are dealing with several different types of controls (combo boxes, list views, checkboxes, panels, etc.). What I did find helpful is to use descriptive names for your controls so that it's very clear, when you're looking at the code, what each control does or is for. Also, organize the code so that controls that affect each other are grouped together. When making updates, don't just tack on some code at the bottom of the function that affects something else dealt with earlier in the function. Take the time to put the new code in the right spot.
Typically for really complex logic issues like this I would go with an inference based rules engine and simply state my rules and let it sort it out.
This is typically much better than trying to (1) code the maze of if-then logic you have, and (2) modify that maze later when business rules change.
One to check out for java is: JBoss Drools
I would think of it similarly to how validation rules typically work.
Create a rule (method/function/code block/lambda/whatever) that describes what criteria must be satisfied for a particular control to be enabled/disabled. Then when any change is made, you execute each method so that each control can respond to the changed state.
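A small Python sketch of that approach (the control names and rules are made up): one predicate per control describing when it is enabled, all re-evaluated whenever anything changes, instead of scattering updates across individual event handlers.

```python
# One enabling rule per control, expressed as a predicate over the whole
# UI state. Any change triggers a full refresh of every control's flag.

rules = {
    "checkboxA": lambda s: not (s["B_selected"] and s["C_selected"]),
    "checkboxB": lambda s: True,  # always enabled in this toy example
}

def refresh(state):
    # re-run every rule against the changed state
    return {control: rule(state) for control, rule in rules.items()}

state = {"B_selected": True, "C_selected": True}
enabled = refresh(state)  # checkboxA's rule now fails, so it gets disabled
```

Because each rule reads the full state, adding a new constraint later means editing one lambda rather than hunting through every handler that might touch that control.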
I agree, in part, with Bryan Batchelder's suggestion. My initial response was going to be something along the lines of a rule system which is triggered every time a checkbox is altered.
Firstly, when a checkbox is checked, it validates (and allows or disallows the check) based on its own set of conditions. If allowed, it propagates a change event.
Secondly, as a result of the event, every other checkbox now has to re-validate itself based on its own rules, considering that the global state has now changed.
On the assumption that each checkbox is going to execute an action based on the change in state (stay the same, toggle my checked status, toggle my enabled status), I think it'd be plausible to write an operation for each checkbox (how you associate them is up to you) which either has these values hardcoded or, and what I'd probably do, have them read in from an XML file.
To clear this up, what I ended up doing was a combination of provided options.
For my application, the available open-source rule engines were simply massive overkill and not worth the bloat, given that this application does real-time signal processing. I did like the general idea, though.
I also didn't like the idea of having validation code in various places, I wanted a single location to make changes.
So what I did was build a simple rule/validation system.
The input is an array of strings, for me this is hard coded but you could read this from file if you wish.
Each string defines a valid configuration for all the check boxes.
On program launch I build a list of checkbox configurations, one per allowed configuration, that stores the selected state and enabled state for each checkbox. This way each change made to the UI just requires a lookup of the proper configuration.
The configuration is calculated by comparing the current UI configuration to other allowed UI configurations to figure out which options should be allowed. The code is very much akin to calculating if a move is allowed in a board game.
I have a question not necessarily specific to any platform or API but more specific to interactions in code between animations.
A game is a good example. Let's say the player dies, and there's a death animation that must finish before the object is removed. This is typical for many cases where some animation has to finish before continuing with whatever action would normally follow. How would you go about doing this?
My question is about the control and logic of animation. How would you design a system which is capable of driving the animation but at the same time implement custom behavior?
The problem that typically arises is that the game logic and animation data become codependent. That is, the animation has to call back into code or somehow contain metadata for the duration of animation sequences. What's typically even more of a problem is when an animation has to trigger some other code, say spawning a custom sprite after 1.13s; this tends to result in deep nesting of code and animation. A bomb with a timer would be an example of both logic and animation where both things interact, but I want to keep them as separate as possible.
But what would you do to keep animation and code two separate things?
Recently I've been trying out mgrammar, and I'm thinking, a DSL might be the way to go. That would allow the animation or animator, to express certain things in a presumably safe manner which would then go into the content pipeline...
The solution depends on the gameplay you're going for. If the gameplay is 100% code-driven, the anim is driven by the entity's state (state-driven animation). If it's graphics/animation-driven, the anim length determines how long the entity's in that state (anim-driven state).
The latter is typically more flexible in a commercial production environment as the designer can just say "we need that death anim to be shorter" and it gets negotiated. But when you have very precise rules in mind or a simulated system like physics, state-driven animation may be preferable. There is no 100% orthogonal and clean solution for the general case.
One thing that helps to keep it from getting too messy is to consider game AI patterns:
Game AI is typically implemented as some form of finite state machine, possibly multiple state machines or layered in some way (the most common division being a high-level scripting format with low-level actions/transitions).
At the low level you can say things like "in the hitreact state, playback my hitreact anim until it finishes, then find out from the high-level logic what state to continue from." At the high level there are a multitude of ways to go about defining the logic, but simple repeating loops like "approach/attack/retreat" are a good way to start.
This helps to keep the two kinds of behaviors - the planned activities, and the reactions to new events - from being too intermingled. Again, it doesn't always work out that way in practice, for the same reasons that sometimes you want code to drive data or the other way around. But that's AI for you. No general solutions here!
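A toy Python sketch of that layering (the states, plan, and animation lengths are all invented): the low-level hitreact state plays its animation to completion, then asks the high-level plan which state to continue from.

```python
# High level: a simple repeating plan of planned activities.
HIGH_LEVEL_PLAN = ["approach", "attack", "retreat"]

class Enemy:
    def __init__(self):
        self.plan_index = 0
        self.state = "approach"
        self.anim_frames_left = 0

    def hit(self):
        # reaction to a new event: interrupt whatever was planned
        self.state = "hitreact"
        self.anim_frames_left = 3  # length of the hitreact anim, in frames

    def update(self):
        if self.state == "hitreact":
            # low level: play back the hitreact anim until it finishes...
            self.anim_frames_left -= 1
            if self.anim_frames_left == 0:
                # ...then ask the high-level logic what state to continue from
                self.state = HIGH_LEVEL_PLAN[self.plan_index]
        # (planned states would advance self.plan_index here)

e = Enemy()
e.hit()
for _ in range(3):
    e.update()  # anim plays out over three frames, then the plan resumes
```

The planned loop and the reactions never need to know about each other's internals; they only meet at the hand-back point.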
I think you should separate the rendering from the game logic.
You have at least two different kind of objects:
An entity that holds the unit's data (hit points, position, speed, strength, etc.) and logic (how it should move, what happens if it runs out of hit points, ...).
Its representation, that is the sprite, colors, particles attached to it, sounds, whatever. The representation can access the entity data, so it knows the position, hit points, etc.
Maybe a controller if the entity can be directly controlled by a human or an AI (like a car in a car simulation).
Yes that sounds like the Model-View-Controller architecture. There are lots of resources about this, see this article from deWiTTERS or The Guerrilla Guide to Game Code by Jorrit Rouwé, for game-specific examples.
About your animation problems: for a dying unit, when the Model updates its entity and figures out it has no more hit points, it could set a flag to say it's dead and remove the entity from the game (and from memory). Then later, when the View updates, it reads the flag and starts the dying animation. But it can be difficult to decide where to store this flag since the entity object should disappear.
There's a better way, IMHO. When your entity dies, you could send an event to all listeners registered for the UnitDiedEvent that belongs to this specific entity, then remove the entity from the game. The entity representation object is listening to that event, and its handler starts the dying animation. When the animation is over, the entity representation can finally be removed.
The observer design pattern can be useful, here.
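A minimal Python sketch of that observer arrangement (the class and event names are made up): the model fires a "died" event, and the representation listens and starts its dying animation, without the model ever touching rendering.

```python
# Model side: knows nothing about sprites or animations.
class Unit:
    def __init__(self):
        self.hp = 1
        self.died_listeners = []  # observers of this unit's UnitDiedEvent

    def take_damage(self, amount):
        self.hp -= amount
        if self.hp <= 0:
            for listener in self.died_listeners:
                listener(self)  # fire the died event, then the game
                                # can drop the entity

# View side: subscribes to the event and owns the animation state.
class Representation:
    def __init__(self, unit):
        self.animation = "idle"
        unit.died_listeners.append(self.on_unit_died)

    def on_unit_died(self, unit):
        self.animation = "dying"  # play it out; remove self when it finishes

unit = Unit()
view = Representation(unit)
unit.take_damage(1)  # model dies; view reacts by starting the dying anim
```

The representation can outlive the entity for as long as the dying animation needs, which sidesteps the "where do I store the dead flag" problem.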
Your enemy needs to have multiple states. Alive and dead are not enough. Alive, dying and dead might be. Your enemy processing loop should check its state and perform different operations:
if (state == alive and hp == 0)
{
    state = dying
    currentanimation = animation(enemy_death)
    currentanimation.play()
}
elseif (state == dying and currentanimation.isPlaying == true)
{
    // do nothing until the animation is finished.
    // more efficiently implemented as a callback.
}
elseif (state == dying and currentanimation.isPlaying == false)
{
    state = dead
    // at this point either the game engine cleans up the object,
    // or the object deletes itself.
}
I guess I don't really see the problem.
If you had some nest of callbacks I could see why things might be hard to follow, but if you only have one callback for all animation events and one update function which starts animations, it's pretty easy to follow the code.
So what do you gain by separating them?
The animation can call a callback function that you supply, or send a generic event back to the code. It doesn't need anything more than that, which keeps all logic in the code. Just inject the callback or connect the event when the animation is created.
I try as much as possible to keep callbacks out of child animations. Animations should indicate that they are complete; the actions taken on an animation's completion should be called from the controller level of the application.
In ActionScript this is the beauty of event dispatching/listening - the controller object can create the animation and then assign a handler for an event which the animation dispatches when it is complete.
I've used the pattern for several things in Flash projects and it helps keep code independent far better than callbacks.
Especially if you write custom event objects which extend Event to carry the kind of information you need, such as the MouseEvent that carries localX, localY, stageX and stageY. I use a custom event I've named NumberEvent to broadcast any kind of numerical information around my applications.
In the ActionScript controller object:
var animObj:AwsomeAnim = new AwsomeAnim();
animObj.addEventListener(AwsomeAnim.COMPLETE, _onAnimFinish);
animObj.start();

function _onAnimFinish(e:Event):void
{
    // actions to take when animation is complete here
}
In JavaScript, where custom events do not exist, I just have a boolean variable in the animation object and check it on a timer from the controller.
In the JavaScript controller object:
var animObj = new AnimObj(); // the constructor must set this.isComplete = false
animObj.start();

function checkAnimComplete()
{
    if (animObj.isComplete == true)
    {
        animCompleteActions();
    } else {
        setTimeout(checkAnimComplete, 300);
    }
}
checkAnimComplete();

function animCompleteActions()
{
    // anim complete actions here
}
For a couple games I made to solve this problem I created two animation classes
asyncAnimation - For fire and forget type animations
syncAnimation - If I wanted to wait for the animation to resolve before returning control
As games usually have a main loop, it looked something like this C#-style pseudocode:
while (isRunning)
{
    renderStaticItems();
    renderAsyncAnimations();

    if (list_SyncAnimations.Count() > 0)
    {
        syncAnimation = list_SyncAnimations.First();
        syncAnimation.render();
        if (syncAnimation.hasFinished())
        {
            list_SyncAnimations.removeAt(0);
            // May want some logic here saying: if the sync animation
            // was 'player dying', update some variable so that we know
            // the animation has ended and the game can prompt for the
            // 'try again' screen
        }
    }
    else
    {
        renderInput();
        handleOtherLogic(); // e.g. is the player dead? add a sync animation if required.
    }
}
So what the code does is maintain a list of sync animations that need to be resolved before continuing with the game - If you need to wait for several animations just stack them up.
Also, it might be a good idea to look into the command pattern, or to provide a callback for when the sync animation has finished to handle your logic - it's really up to you how you want to do it.
As for your "Spawn at sec 1.13" perhaps the SyncAnimation class should have an overridable .OnUpdate() method, which can do some custom logic (or call a script)
It depends what your requirements may be.
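The loop above can be sketched as a small queue in JavaScript; all names here (AnimationQueue, render, hasFinished, onFinished) are invented for illustration, not taken from any real library.

```javascript
// Sketch of the sync-animation queue idea. Sync animations block game
// logic until they resolve, in order; async ones would render freely.
function AnimationQueue() {
  this.syncAnimations = [];   // pending animations that must finish first
}

AnimationQueue.prototype.push = function (anim) {
  this.syncAnimations.push(anim);
};

// Call once per frame. Returns true when no sync animation is pending,
// i.e. the game may read input and run its normal logic this frame.
AnimationQueue.prototype.update = function () {
  if (this.syncAnimations.length === 0) return true;
  var current = this.syncAnimations[0];
  current.render();
  if (current.hasFinished()) {
    this.syncAnimations.shift();
    if (current.onFinished) current.onFinished(); // command/callback hook
  }
  return false;
};
```

In the main loop this becomes `if (queue.update()) { renderInput(); handleOtherLogic(); }`, with the onFinished callback playing the role of the "player dying" logic in the pseudocode.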
I'm not sure if this has been asked or not yet, but how much logic should you put in your UI classes?
When I started programming I used to put all my code behind events on the form, which as everyone knows makes it an absolute pain to test and maintain. Over time I have come to realise how bad this practice is and have started breaking everything into classes.
Sometimes when refactoring I still have that feeling of "where should I put this stuff?", but because most of the time the code I'm working on is in the UI layer, has no unit tests and will break in unimaginable places, I usually end up leaving it in the UI layer.
Are there any good rules about how much logic you put in your UI classes? What patterns should I be looking for so that I don't do this kind of thing in the future?
Just logic dealing with the UI.
Sometimes people try to put even that into the Business layer. For example, one might have in their BL:
if (totalAmount < 0)
    color = "RED";
else
    color = "BLACK";
And in the UI display totalAmount using color -- which is completely wrong. It should be:
if (totalAmount < 0)
    isNegative = true;
else
    isNegative = false;
And it should be completely up to the UI layer how totalAmount should be displayed when isNegative is true.
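A minimal sketch of that separation, assuming invented names (accountSummary, colorFor) purely for illustration:

```javascript
// The business layer reports a fact about the data, nothing visual.
function accountSummary(totalAmount) {
  return {
    totalAmount: totalAmount,
    isNegative: totalAmount < 0   // a fact, not a color
  };
}

// The presentation decision lives entirely in the UI layer; a black and
// white printer view could map the same fact to parentheses instead.
function colorFor(summary) {
  return summary.isNegative ? "RED" : "BLACK";
}
```

Swapping the display medium now only means replacing colorFor; the business layer never changes.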
As little as possible...
The UI should only have logic related to presentation. My personal preference now is to have the UI/View:
- just raise events (with supporting data) to a Presenter class stating that something has happened, and let the Presenter respond to the event
- have methods to render/display the data to be presented
- do a minimal amount of client-side validation to help the user get it right the first time (preferably in a declarative manner), screening off invalid inputs before they even reach the Presenter, e.g. ensuring that a text field value is within an a-b range by setting its min and max properties
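As a rough sketch of the raise-events idea; View, Presenter, and the "saveClicked" event are all invented names, not from any particular framework:

```javascript
// A humble view: it only raises events and renders what it is told.
function View() {
  this.displayed = null;
  this.handlers = {};
}
View.prototype.on = function (event, fn) { this.handlers[event] = fn; };
View.prototype.click = function (data) {   // simulate a user action
  if (this.handlers["saveClicked"]) this.handlers["saveClicked"](data);
};
View.prototype.render = function (text) { this.displayed = text; };

// The presenter, not the view, decides what happens next.
function Presenter(view) {
  view.on("saveClicked", function (data) {
    view.render(data.trim() === "" ? "Name is required" : "Saved: " + data);
  });
}
```

Because the view has no decisions in it, the presenter can be unit tested against a fake view with no GUI toolkit involved - which is the point of the Humble Object approach quoted below.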
http://martinfowler.com/eaaDev/uiArchs.html describes the evolution of UI design. An excerpt
When people talk about self-testing code, user-interfaces quickly raise their head as a problem. Many people find that testing GUIs to be somewhere between tough and impossible. This is largely because UIs are tightly coupled into the overall UI environment and difficult to tease apart and test in pieces.
But there are occasions where this is impossible, you miss important interactions, there are threading issues, and the tests are too slow to run.
As a result there's been a steady movement to design UIs in such a way that minimizes the behavior in objects that are awkward to test. Michael Feathers crisply summed up this approach in The Humble Dialog Box. Gerard Meszaros generalized this notion to the idea of a Humble Object - any object that is difficult to test should have minimal behavior. That way, if we are unable to include it in our test suites, we minimize the chances of an undetected failure.
The pattern you are looking for may be Model-view-controller, which basically separates the DB(model) from the GUI(view) and the logic(controller). Here's Jeff Atwood's take on this. I believe one should not be fanatical about any framework, language or pattern - While heavy numerical calculations probably should not sit in the GUI, it is fine to do some basic input validation and output formatting there.
I suggest the UI shouldn't include any sort of business logic - not even the validations. They all should be at the business logic level. In this way you make your BLL independent of the UI. You can easily convert your Windows app to a web app or web services and vice versa. You may use object frameworks like Csla to achieve this.
Input validations attached to controls - like email, age and date validators on text boxes.
James is correct. As a rule of thumb, your business logic should not make any assumption regarding presentation.
What if you plan on displaying your results on various media? One of them could be a black and white printer. "RED" would not cut it.
When I create a model or even a controller, I try to convince myself that the user interface will be a bubble bath. Believe me, that dramatically reduces the amount of HTML in my code ;)
Always put the minimum amount of logic possible in whatever layer you are working.
By that I mean: if you are adding code to the UI layer, add the least amount of logic necessary for that layer to perform its UI (only) operations.
Not only does doing that result in a good separation of layers...it also saves you from code bloat.
I have already written a 'compatible' answer to this question here. The rule is (according to me) that there should not be any logic in the UI except the UI logic and calls for standard procedures that will manage generic/specific cases.
In our situation, we came to a point where form's code is automatically generated out of the list of controls available on a form. Depending on the kind of control (bound text, bound boolean, bound number, bound combobox, unbound label, ...), we automatically generate a set of event procedures (such as beforeUpdate and afterUpdate for text controls, onClick for labels, etc) that launch generic code located out of the form.
This code can then either do generic things (test if the field value can be updated in the beforeUpdate event, order the recordset ascending/descending in the onClick event, etc) or specific treatments based on the form's and/or the control's name (making for example some work in a afterUpdate event, such as calculating the value of a totalAmount control out of the unitPrice and quantity values).
Our system is now fully automated, and form production relies on two tables: Tbl_Form, for the list of forms available in the app, and Tbl_Control, for the list of controls available in our forms.
Following the referenced answer and other posts in SO, some users have asked me to develop on my ideas. As the subject is quite complex, I finally decided to open a blog to talk about this UI logic. I have already started talking about UI interface, but it might take a few days (.. weeks!) until I can specifically reach the subject you're interested in.
When a long-running process is being executed, it is a good practice to provide feedback to the user, for example, updating a progress bar.
Some FAQs for GUI libraries suggest something like this:
function long_running_progress()
    do_some_work()
    update_progress_bar()
    while not finished
        do_some_work()
        update_progress_bar()
    end while
end function
Anyway, we know it is a best practice to separate business logic code from user interface code. The example above is mixing user interface code inside a business logic function.
What is a good technique to implement functions in the business logic layer whose progress could be easily tracked by a user interface without mixing layers?
Answers for any language or platform are welcome.
Provide a callback interface. The business logic will call its method every once in a while. The user layer will update the progress or whatever. If you want to allow cancellation – no problem, let the callback method have a return value which will indicate a need for cancellation. This will work regardless of number of threads.
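A minimal sketch of such a callback interface; processItems and its signature are invented for illustration, and the cancellation convention (returning false) is just the one described above.

```javascript
// Business logic reports progress through a callback it knows nothing
// else about; returning false from the callback requests cancellation.
function processItems(items, onProgress) {
  for (var i = 0; i < items.length; i++) {
    // ... do the real work for items[i] here ...
    var keepGoing = onProgress(i + 1, items.length);
    if (keepGoing === false) {
      return { done: i + 1, cancelled: true };
    }
  }
  return { done: items.length, cancelled: false };
}
```

The UI layer supplies something like `function (done, total) { progressBar.value = done / total; return !cancelButtonPressed; }` - the business logic never touches a widget, and the same function works unchanged with a console spinner or no UI at all.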
If you used a MVC paradigm you could have the Model publish its current progress state as a property, the Controller could extract this every x seconds and then put it into the view. This assumes multi-threading though, which I'm not sure if you allow.
Publishing is a great way to go. It all depends on the platform how this is done. However, when it comes to the user experience there are a couple of things to consider as well:
Don't give the user a progress bar if you don't know how long the task is executing. What time is left? What does half-way mean? It's better to use hour-glass functionality (spinning wheels, bouncing progress bars, etc).
The only interesting dimension of progress for the user is time: you want to know whether you have time for that cup of coffee. If you show other things, you are probably displaying the inner workings of the system; most users are not interested or just get confused.
A long-running process should always give the user an escape - a way to cancel the request. You don't want to lock the user up for a long time. Better still is to handle a long-running request completely in the background and let the user come back when the result is ready.