I have a question that isn't specific to any particular platform or API, but rather about the interaction in code between game logic and animations.
A game is a good example. Say the player dies, and there's a death animation that must finish before the object is removed. This is typical of many cases where some animation has to finish before continuing with whatever action would normally follow. How would you go about doing this?
My question is about the control and logic of animation. How would you design a system which is capable of driving the animation but at the same time implement custom behavior?
The problem that typically arises is that the game logic and the animation data become codependent. That is, the animation has to call back into code or somehow carry metadata about the duration of its sequences. What tends to be even more of a problem is an animation that has to trigger other code, say spawn a custom sprite after 1.13s; this tends to result in deep nesting of code and animation. A bomb with a timer would be an example where both logic and animation interact, but I want to keep them as separate as possible.
But what would you do to keep animation and code two separate things?
Recently I've been trying out mgrammar, and I'm thinking, a DSL might be the way to go. That would allow the animation or animator, to express certain things in a presumably safe manner which would then go into the content pipeline...
The solution depends on the gameplay you're going for. If the gameplay is 100% code-driven, the anim is driven by the entity's state (state-driven animation). If it's graphics/animation-driven, the anim length determines how long the entity's in that state (anim-driven state).
The latter is typically more flexible in a commercial production environment as the designer can just say "we need that death anim to be shorter" and it gets negotiated. But when you have very precise rules in mind or a simulated system like physics, state-driven animation may be preferable. There is no 100% orthogonal and clean solution for the general case.
One thing that helps to keep it from getting too messy is to consider game AI patterns:
Game AI is typically implemented as some form of finite state machine, possibly multiple state machines or layered in some way (the most common division being a high-level scripting format with low-level actions/transitions).
At the low level you can say things like "in the hitreact state, playback my hitreact anim until it finishes, then find out from the high-level logic what state to continue from." At the high level there are a multitude of ways to go about defining the logic, but simple repeating loops like "approach/attack/retreat" are a good way to start.
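As a rough illustration of that low-level rule, here is a minimal C++ sketch; the entity, animation and state names are hypothetical and not tied to any particular engine:

#include <functional>

enum class State { Idle, HitReact };

struct Animation {
    float timeLeft = 0.0f;
    bool isPlaying() const { return timeLeft > 0.0f; }
    void update(float dt) { if (timeLeft > 0.0f) timeLeft -= dt; }
};

struct Entity {
    State state = State::Idle;
    Animation hitReactAnim;

    // Supplied by the high-level logic (script, planner, ...): which state to continue from.
    std::function<State()> nextStateFromHighLevel = [] { return State::Idle; };

    void update(float dt) {
        if (state == State::HitReact) {
            hitReactAnim.update(dt);
            if (!hitReactAnim.isPlaying())          // anim-driven: the clip length decides
                state = nextStateFromHighLevel();   // when the low-level state ends
        }
    }
};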
This helps to keep the two kinds of behaviors - the planned activities, and the reactions to new events - from being too intermingled. Again, it doesn't always work out that way in practice, for the same reasons that sometimes you want code to drive data or the other way around. But that's AI for you. No general solutions here!
I think you should separate the rendering from the game logic.
You have at least two different kinds of objects:
An entity that holds the unit's data (hit points, position, speed, strength, etc.) and logic (how it should move, what happens if it runs out of hit points, ...).
Its representation, that is the sprite, colors, particles attached to it, sounds, whatever. The representation can access the entity data, so it knows the position, hit points, etc.
Maybe a controller if the entity can be directly controlled by a human or an AI (like a car in a car simulation).
Yes, that sounds like the Model-View-Controller architecture. There are lots of resources about this; see this article from deWiTTERS or The Guerrilla Guide to Game Code by Jorrit Rouwé for game-specific examples.
About your animation problems: for a dying unit, when the Model updates its entity and figures out it has no more hit points, it could set a flag to say it's dead and remove the entity from the game (and from memory). Then later, when the View updates, it reads the flag and starts the dying animation. But it can be difficult to decide where to store this flag since the entity object should disappear.
There's a better way, IMHO. When your entity dies, you could send an event to all listeners that are registered to the UnitDiedEvent belonging to this specific entity, then remove the entity from the game. The entity representation object is listening to that event, and its handler starts the dying animation. When the animation is over, the entity representation can finally be removed.
The observer design pattern can be useful, here.
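A minimal C++ sketch of that event/listener idea; UnitDiedEvent and the class names here are only illustrative:

#include <functional>
#include <utility>
#include <vector>

class UnitDiedEvent {
public:
    using Listener = std::function<void()>;
    void subscribe(Listener l) { listeners.push_back(std::move(l)); }
    void raise() const { for (const auto& l : listeners) l(); }
private:
    std::vector<Listener> listeners;
};

class Entity {                        // Model: game data and rules only
public:
    UnitDiedEvent onDied;
    void takeDamage(int dmg) {
        hp -= dmg;
        if (hp <= 0) onDied.raise();  // the Model never touches the View
    }
private:
    int hp = 10;
};

class EntityView {                    // View: listens and plays the dying animation
public:
    explicit EntityView(Entity& e) {
        e.onDied.subscribe([this] { startDyingAnimation(); });
    }
    void startDyingAnimation() { /* play the clip; remove the view once it ends */ }
};

The Model raises the event and forgets about it; only the View knows that a dying animation has to finish before the representation can be removed.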
Your enemy needs to have multiple states. Alive and dead are not enough. Alive, dying and dead might be. Your enemy processing loop should check its state and perform different operations:
if (state == alive and hp == 0)
{
    state = dying
    currentanimation = animation(enemy_death)
    currentanimation.play()
}
else if (state == dying and currentanimation.isPlaying == true)
{
    // do nothing until the animation is finished
    // (more efficiently implemented as a callback)
}
else if (state == dying and currentanimation.isPlaying == false)
{
    state = dead
    // at this point either the game engine cleans up the object,
    // or the object deletes itself
}
I guess I don't really see the problem.
If you had some nest of callbacks I could see why things might be hard to follow, but if you only have one callback for all animation events and one update function which starts animations, it's pretty easy to follow the code.
So what do you gain by separating them?
The animation can call a callback function that you supply, or send a generic event back to the code. It doesn't need anything more than that, which keeps all logic in the code. Just inject the callback or connect the event when the animation is created.
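For example, here is a sketch (made-up names, C++) of an animation that accepts an onComplete callback at construction:

#include <functional>
#include <utility>

class Animation {
public:
    Animation(float duration, std::function<void()> onComplete)
        : duration(duration), onComplete(std::move(onComplete)) {}

    void update(float dt) {
        elapsed += dt;
        if (!done && elapsed >= duration) {
            done = true;
            if (onComplete) onComplete();   // the single, generic hook back into game code
        }
    }
private:
    float duration;
    float elapsed = 0.0f;
    bool done = false;
    std::function<void()> onComplete;
};

// Usage: all game logic stays on the caller's side.
// Animation deathAnim(1.5f, [&world, id] { world.remove(id); });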
I try as much as possible to keep callbacks out of child animations. Animations should indicate that they are complete; the actions taken on an animation's completion should be invoked from the controller level of the application.
In ActionScript this is the beauty of event dispatching/listening: the controller object can create the animation and then assign a handler for an event which the animation dispatches when it is complete.
I've used the pattern for several things in Flash projects and it helps keep code independent far better than callbacks.
Especially if you write custom event objects which extend Event to carry the kind of information you need, such as the MouseEvent that carries localX, localY, stageX and stageY. I use a custom event I've named NumberEvent to broadcast any kind of numerical information around my applications.
In the ActionScript controller object:
var animObj:AwsomeAnim = new AwsomeAnim();
animObj.addEventListener(AwsomeAnim.COMPLETE, _onAnimFinish);
animObj.start();

function _onAnimFinish(e:Event):void
{
    // actions to take when the animation is complete go here
}
In JavaScript, where custom events do not exist, I just have a boolean variable in the animation object and check it on a timer from the controller.
In the JavaScript controller object:
var animObj = new AwsomeAnim(); // among other things, the constructor must set this.isComplete = false
animObj.start();

function checkAnimComplete()
{
    if (animObj.isComplete == true)
    {
        animCompleteActions();
    } else {
        setTimeout(checkAnimComplete, 300);
    }
}
checkAnimComplete();

function animCompleteActions()
{
    // anim complete actions here
}
For a couple of games I made, I solved this problem by creating two animation classes:
asyncAnimation - for fire-and-forget animations
syncAnimation - used when I wanted to wait for the animation to resolve before returning control
As games usually have a main loop, it looked something like this C#-style pseudocode:
while (isRunning)
{
    renderStaticItems();
    renderAsyncAnimations();

    if (list_SyncAnimations.Count() > 0)
    {
        syncAnimation = list_SyncAnimations.First();
        syncAnimation.render();
        if (syncAnimation.hasFinished())
        {
            list_SyncAnimations.removeAt(0);
            // May want some logic here saying: if the sync animation
            // was 'player dying', update some variable so that we know
            // the animation has ended and the game can prompt for the
            // 'try again' screen.
        }
    }
    else
    {
        renderInput();
        handleOtherLogic(); // e.g. is the player dead? add a sync animation if required.
    }
}
So what the code does is maintain a list of sync animations that need to be resolved before continuing with the game. If you need to wait for several animations, just stack them up.
Also, it might be a good idea to look into the command pattern, or to provide a callback for when the sync animation has finished to handle your logic - it's really up to you how you want to do it.
As for your "spawn at sec 1.13", perhaps the SyncAnimation class should have an overridable .OnUpdate() method which can do some custom logic (or call a script).
It depends what your requirements may be.
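A sketch of that overridable .OnUpdate() idea; the class names, the duration and the 1.13 s spawn below are purely illustrative (C++ here, though the answer above used C#-style pseudocode):

class SyncAnimation {
public:
    virtual ~SyncAnimation() = default;

    void render(float dt) {
        elapsed += dt;
        OnUpdate(elapsed);            // custom hook; the default does nothing
        // ...draw the current frame...
    }
    bool hasFinished() const { return elapsed >= duration; }

protected:
    virtual void OnUpdate(float /*t*/) {}
    float elapsed = 0.0f;
    float duration = 2.0f;
};

class DeathAnimation : public SyncAnimation {
protected:
    void OnUpdate(float t) override {
        if (!spawned && t >= 1.13f) { // spawn the extra sprite at 1.13 s
            spawned = true;
            // spawnSprite(...);      // or dispatch a command / call a script here
        }
    }
private:
    bool spawned = false;
};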
I'm making a light with an ESP32 and the HomeKit library I chose uses FreeRTOS and esp-idf, which I'm not familiar with.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Here's the pseudocode I'm thinking of:
update_light {
    kill_flag = true
    wait_for_fade_to_die
    xTaskCreate fade
}

fade {
    kill_flag = false
    loop_1000_times {
        (fading code involving local and global variables)
        .
        .
        if kill_flag, vTaskDelete(NULL)
        vTaskDelay(2 / portTICK_RATE_MS)
    }
}
My main questions are:
Is this the best way to do this or is there a better option?
If this is ok, what is the equivalent of my wait_for_fade_to_die? I haven't been able to find anything from a brief look around, but I'm new to FreeRTOS.
I'm sorry to say that I have the impression that you are pretty much on the wrong track trying to solve your concrete problem.
You write that you aren't familiar with FreeRTOS and esp-idf, so I would suggest you first familiarize yourself with FreeRTOS (or with the idea of an RTOS in general, or with any other RTOS, transferring that knowledge to FreeRTOS later).
In doing so, you will notice that (apart from some specific examples) a task is something completely different from a function written for sequential "batch" processing of a single job.
Model and Theory
Usually, the most helpful model to have in mind when designing a good RTOS task inside an embedded system is that of a state machine that receives events and reacts to them, possibly changing its state and/or executing some actions whose starting point and payload depend on the event the state machine received as well as the state it was in when the event was detected.
While there is no event, the task shall not busy-wait but block at some barrier created by the RTOS function that is supposed to deliver the next relevant event.
Implementing such a task means programming a task function that consists of a short initialisation block followed by an infinite loop that first calls the RTOS library to get the next logical event (see right below...) and then the code to process that logical event.
Now, the logical event doesn't have to be represented by an RTOS event (though this can work in simple cases), but can also be implemented by an RTOS queue, mailbox or similar mechanism.
In such a design pattern, the tasks of your RTOS-based software exist "forever", waiting for the next job to perform.
How to apply the theory to your problem
You have to check how to decompose your programming problem into different tasks.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
I hope that I understood the goal of your application correctly:
The system is driving multiple light sources of different colours, and some "request source" is selecting the next colour to be displayed.
When a different colour is requested, the change shall not be performed instantaneously but there shall be some "fading" over a certain period of time.
The system (and its request source) shall remain responsive even while a fade takes place, possibly changing the direction of the fade in the middle.
You didn't say where the colour requests are coming from.
Therefore, I am guessing that this request source could be some button(s), a serial interface or a complex algorithm (or random number generator?) running in the background. It doesn't really matter for now.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
What you are essentially looking for is how to change the state (here: the target colour of the light fade) at any time, so that an old, ongoing fade procedure becomes obsolete but the output (= light) behaviour does not change in a discontinuous way.
I suggest you set up the following tasks:
One (or more) task(s) to generate the colour changing requests from ...whatever you need here.
One task to evaluate which colour blend shall currently be output. That task shall be ready to receive both of the following:
a new-colour request (changing the "target colour" state without changing the current colour blend value), and
a periodic tick event (e.g., from a hardware or software timer) that causes the colour blend value to be updated towards the current target colour.
Zero, one or multiple tasks to implement the colour blend value by driving the output features of the system (e.g., configuring GPIOs or PWMs, or transmitting information through a serial connection... we don't know). If adjusting the output is just a matter of assigning some registers, "zero" is the right choice here; otherwise, try "one or multiple". A rough sketch of the blend task is given below.
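Here is a minimal sketch of that blend task on FreeRTOS/esp-idf. It is only a guess at one possible shape, not a drop-in solution: the Colour struct, apply_colour(), the tick period and the fade step are assumptions, and the request source simply overwrites a length-1 queue with the newest target colour.

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"

struct Colour { float r, g, b; };

static QueueHandle_t targetQueue;            // length-1 queue: holds only the latest target

extern void apply_colour(const Colour& c);   // drives the PWM/GPIO output (not shown here)

static void blendTask(void*) {
    Colour current{0, 0, 0}, target{0, 0, 0};
    for (;;) {
        // Block for at most one tick period: a received item is a new-colour request,
        // a timeout is simply the periodic tick that advances the fade.
        Colour requested;
        if (xQueueReceive(targetQueue, &requested, pdMS_TO_TICKS(20)) == pdTRUE)
            target = requested;              // change direction mid-fade; no task is killed

        const float step = 0.02f;            // fade speed per tick (assumption)
        current.r += (target.r - current.r) * step;
        current.g += (target.g - current.g) * step;
        current.b += (target.b - current.b) * step;
        apply_colour(current);
    }
}

void start_light_tasks() {
    targetQueue = xQueueCreate(1, sizeof(Colour));
    xTaskCreate(blendTask, "blend", 2048, nullptr, 5, nullptr);
}

// The request source just overwrites the single slot with the newest target:
//   Colour c{1.0f, 0.2f, 0.0f};
//   xQueueOverwrite(targetQueue, &c);

Because the task never dies, a request that arrives in the middle of a fade merely changes the target, and the blend continues smoothly from the current value.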
What to do now
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Just don't do that.
Killing a task, even one that has not prepared to be killed from the inside, creates so many follow-up requirements for managing and cleaning up output state in your software that you will end up wondering why you even started using an RTOS.
I do know that starting to design and program this way when you have never done so before is a huge endeavour, like a jump into cold water.
Please trust me: this way you will learn the basics of how to design and implement great embedded systems.
Professional training companies offer courses about RTOS integration, responsive programming and state machine design for several thousands of $/€/£, which is a good indicator of the value of this kind of working knowledge.
Good luck!
Along that way, you'll come across a lot of detail questions which you are welcome to post to this board (or find earlier answers on).
What do you hate most about the modern game loop? Can the game loop be improved or is there just a better alternative, such as an event-driven architecture?
I'm taking a grad-level game engine programming course right now and they're sticking with the game loop approach. Granted, that doesn't mean it's the only/best solution but it's certainly logical. Using a loop allows you to ensure that all game systems get their turn to run without requesting their own timed interrupts or something else. Control can be centralized: in my current project, I have a GameManager class that, each frame, loops through the Update(float deltaTime) function for every registered object in turn. I don't have to debug an event system or set up timed signals, I just use a loop to call a series of functions. No muss, no fuss.
To answer your question of what do I hate most, the loop approach does logically lend itself to liberal use of inheritance and polymorphism which can bloat the size/complexity of your objects. If you're not careful, this can be a mild-to-horrible pitfall. If you are careful, it may not be a problem at all.
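A stripped-down C++ version of that kind of centralized loop might look like the following; the GameManager and Update(deltaTime) names follow the description above, the rest is assumed:

#include <memory>
#include <utility>
#include <vector>

class GameObject {
public:
    virtual ~GameObject() = default;
    virtual void Update(float deltaTime) = 0;
};

class GameManager {
public:
    void Register(std::unique_ptr<GameObject> obj) { objects.push_back(std::move(obj)); }

    void RunFrame(float deltaTime) {
        for (auto& obj : objects)    // every registered object/system gets its turn, in order
            obj->Update(deltaTime);
    }
private:
    std::vector<std::unique_ptr<GameObject>> objects;
};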
Whether or not there are any events, the game is supposed to draw and update itself at a fixed rate, so I don't think dropping the game loop is possible. Still, I would be interested to see if anyone can think of any other alternative to the game loop.
Usually, event-driven architectures work best for games (only do something if the user wants something done). But you'll still always need a thread constantly redrawing the world.
To be fully event based you could spawn an extra thread that does nothing but put a CPUTick event into your event queue every x milliseconds.
In JavaScript this is usually the more natural route, because you can so easily create an extra 'thread' that sends you the events with setInterval().
Or, if you already have a loop in the framework containing your game - like JS has in the browser, or Python has with Twisted - you can tell that looper to call you back at fixed intervals, e.g.:
var fps = 60; // target frame rate

function gameLoop() {
    // update, draw...
}

window.setInterval(gameLoop, 1000 / fps);
Could you give me an example of the way I should code into the pfc_Validation event? This is an event that I have never used. For example here is something I have coded in the ue_itemchanged event.
if dwo.name = 'theme' then
    This.SetItem(row, "theme", wf_clean_up_text(data))
end if

if dwo.name = 'Comments' then
    This.SetItem(row, "Comments", wf_clean_up_text(data))
end if
Which is the proper way of coding those validations in the pfc_Validation event, so that they are performed only at save time?
You're asking something outside of native PowerBuilder, so there's no guarantee my assumptions are correct. (e.g. anyone could create a pfc_Validation event and have it fire when the user draws circles with his mouse) There is a pfc_Validation event coded as part of the Logical Unit of Work (LUW) service in PowerBuilder Foundation Classes (PFC). If you want to find out more about it, I've written an article on the LUW.
Firstly, your question: Everything in the LUW service is only fired at save time, so you're in good shape there.
Having said that, from the looks of the code, this isn't validation, but data preparation for the update. On that basis, I'd suggest the appropriate place for this logic is pfc_UpdatePrep.
As for converting the code, it's pretty simple. (Now, watch me mess it up.)
long ll

FOR ll = 1 TO RowCount()
    SetItem(ll, "theme", wf_clean_up_text(GetItemString(ll, "theme")))
    SetItem(ll, "comments", wf_clean_up_text(GetItemString(ll, "comments")))
NEXT
Good luck,
Terry.
I'm looking for ways to de-spaghettify my front-end widget code. It's been suggested that a Finite State Machine is the right way to think about what I'm doing. I know a State Machine paradigm can be applied to almost any problem. I'm wondering if there are some experienced UI programmers who actually make a habit of this.
So, the question is -- do any of you UI programmers think in terms of State Machines in your work? If so, how?
thanks,
-Morgan
I'm currently working with a (proprietary) framework that lends itself well to the UI-as-state-machine paradigm, and it can definitely reduce (but not eliminate) the problems with complex and unforeseen interactions between UI elements.
The main benefit is that it allows you to think at a higher level of abstraction, at a higher granularity. Instead of thinking "If button A is pressed then combobox B is locked, textfield C is cleared and Button D is unlocked", you think "Pressing button A puts the app into the CHECKED state" - and entering that state means that certain things happen.
I don't think it's useful (or even possible) to model the entire UI as a single state machine, though. Instead, there's usually a number of smaller state machines that each handles one part of the UI (consisting of several controls that interact and belong together conceptually), and one (maybe more than one) "global" state machine that handles more fundamental issues.
State machines are generally too low-level to help you think about a user interface. They make a nice implementation choice for a UI toolkit, but a normal application simply has too many states and transitions for you to enumerate them by hand.
I like to think about UIs with continuations. (Google it -- the term is specific enough that you will get a lot of high quality hits.)
Instead of my apps being in various states represented by status flags and modes, I use continuations to control what the app does next. It's easiest to explain with an example. Say you want to popup a confirmation dialog before sending an email. Step 1 builds an email. Step 2 gets the confirmation. Step 3 sends the email. Most UI toolkits require you to pass control back to an event loop after each step which makes this really ugly if you try to represent it with a state machine. With continuations, you don't think in terms of the steps the toolkit forces upon you -- it's all one process of building and sending an email. However, when the process needs the confirmation, you capture the state of your app in a continuation and hand that continuation to the OK button on the confirmation dialog. When OK is pressed, your app continues from where it was.
Continuations are relatively rare in programming languages, but luckily you can get sort of a poor man's version using closures. Going back to the email sending example, at the point you need to get the confirmation you write the rest of the process as a closure and then hand that closure to the OK button. Closures are sort of like anonymous nested subroutines that remember the values of all your local variables the next time they are called.
Hopefully this gives you some new directions to think about. I'll try to come back later with real code to show you how it works.
Update: Here's a complete example with Qt in Ruby. The interesting parts are in ConfirmationButton and MailButton. I'm not a Qt or Ruby expert so I'd appreciate any improvements you all can offer.
require 'Qt4'

class ConfirmationWindow < Qt::Widget
  def initialize(question, to_do_next)
    super()
    label = Qt::Label.new(question)
    ok = ConfirmationButton.new("OK")
    ok.to_do_next = to_do_next
    cancel = Qt::PushButton.new("Cancel")
    Qt::Object::connect(ok, SIGNAL('clicked()'), ok, SLOT('confirmAction()'))
    Qt::Object::connect(ok, SIGNAL('clicked()'), self, SLOT('close()'))
    Qt::Object::connect(cancel, SIGNAL('clicked()'), self, SLOT('close()'))
    box = Qt::HBoxLayout.new()
    box.addWidget(label)
    box.addWidget(ok)
    box.addWidget(cancel)
    setLayout(box)
  end
end

class ConfirmationButton < Qt::PushButton
  slots 'confirmAction()'
  attr_accessor :to_do_next

  def confirmAction()
    @to_do_next.call()
  end
end

class MailButton < Qt::PushButton
  slots 'sendMail()'

  def sendMail()
    lucky = rand().to_s()
    message = "hello world. here's your lucky number: " + lucky
    do_next = lambda {
      # Everything in this block will be delayed until the
      # confirmation button is clicked. All the local
      # variables calculated earlier in this method will retain
      # their values.
      print "sending mail: " + message + "\n"
    }
    popup = ConfirmationWindow.new("Really send " + lucky + "?", do_next)
    popup.show()
  end
end

app = Qt::Application.new(ARGV)
window = Qt::Widget.new()
send_mail = MailButton.new("Send Mail")
quit = Qt::PushButton.new("Quit")
Qt::Object::connect(send_mail, SIGNAL('clicked()'), send_mail, SLOT('sendMail()'))
Qt::Object::connect(quit, SIGNAL('clicked()'), app, SLOT('quit()'))
box = Qt::VBoxLayout.new(window)
box.addWidget(send_mail)
box.addWidget(quit)
window.setLayout(box)
window.show()
app.exec()
It's not the UI that needs to be modeled as a state machine; it's the objects being displayed that it can be helpful to model as state machines. Your UI then becomes (oversimplification) a bunch of event handlers for change-of-state in the various objects.
It's a change from:
DoSomethingToTheFooObject();
UpdateDisplay1(); // which is the main display for the Foo object
UpdateDisplay2(); // which has a label showing the Foo's width,
// which may have changed
...
to:
Foo.DoSomething();
void OnFooWidthChanged() { UpdateDisplay2(); }
void OnFooPaletteChanged() { UpdateDisplay1(); }
Thinking about what changes in the data you are displaying should cause what repainting can be clarifying, both from the client UI side and the server Foo side.
If you find that, of the 100 UI thingies that may need to be repainted when Foo's state changes, all of them have to be redrawn when the palette changes, but only 10 when the width changes, it might suggest something about what events/state changes Foo should be signaling. If you find that you have a large event handler OnFooStateChanged() that checks through a number of Foo's properties to see what has changed, in an attempt to minimize UI updates, it suggests something about the granularity of Foo's event model. If you find you want to write a little standalone UI widget you can use in multiple places in your UI, but that it needs to know when Foo changes and you don't want to include all the code that Foo's implementation brings with it, it suggests something about the organization of your data relative to your UI, where you are using classes vs interfaces, etc. Fundamentally, it makes you think more seriously about what your presentation layer is - more seriously than "all the code in my form classes".
-PC
There is a book out there about this topic.
Sadly, it's out of print and the rare used copies available are very expensive.
Constructing the User Interface with Statecharts
by Ian Horrocks, Addison-Wesley, 1998
We were just talking about Horrocks' Constructing the User Interface with Statecharts, prices 2nd-hand range from $250 up to nearly $700. Our software development manager rates it as one of the most important books he's got (sadly, he lives on the other side of the world).
Samek's books on statecharts draw significantly from this work, although in a slightly different domain and reportedly not as clear. "Practical UML Statecharts in C/C++: Event-Driven Programming for Embedded Systems" is also available on Safari.
Horrocks is cited quite heavily - there are twenty papers on the ACM Portal so if you have access there you might find something useful.
There's a book and accompanying software, FlashMX for Interactive Simulation. They have a PDF sample chapter on statecharts.
Objects, Components, and Frameworks with UML: The Catalysis(SM) Approach has a chapter on Behaviour Models which includes about ten pages of useful examples of using statecharts (I note that it is available very cheaply second hand). It is rather formal and heavy going but that section is easy reading.
It's not really a UI problem, to be honest.
I'd do the following:
Define your states.
Define your transitions - which states are accessible from which others?
How are these transitions triggered? What are the events?
Write your state machine - store the current state, receive events, and if that event can cause a valid transition from the current state then change the state accordingly. (A small sketch follows below.)
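For instance, a minimal table-driven C++ version of those four steps; all state, event and member names here are placeholders:

#include <map>
#include <utility>

enum class State { Idle, Editing, Saving };
enum class Event { Edit, Save, Done };

class StateMachine {
public:
    StateMachine() {
        // 2. define the transitions: (current state, event) -> next state
        transitions[{State::Idle,    Event::Edit}] = State::Editing;
        transitions[{State::Editing, Event::Save}] = State::Saving;
        transitions[{State::Saving,  Event::Done}] = State::Idle;
    }

    // 3./4. receive an event; change state only if it triggers a valid transition
    void handle(Event e) {
        auto it = transitions.find({current, e});
        if (it != transitions.end())
            current = it->second;
    }

    State state() const { return current; }

private:
    State current = State::Idle;                       // 1. the stored current state
    std::map<std::pair<State, Event>, State> transitions;
};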
I have a Prezi presentation about a pattern that I call "State First".
It is a combination of MVP/IoC/FSM and I've used it successfully in .NET/WinForms, .NET/Silverlight and Flex (at the moment).
You start by coding your FSM:
class FSM {
    IViewFactory ViewFactory;
    IModelFactory ModelFactory;
    Container Container; // e.g. a StackPanel in SL

    ctor(viewFactory, modelFactory, container) {
        ...assignments...
        start();
    }

    start() {
        var view = ViewFactory.Start();
        var model = ModelFactory.Start();
        view.Context = model;
        view.Login += (s,e) => {
            var loginResult = model.TryLogin(); // vm contains username/password now
            if (loginResult.Error) {
                // show error?
            } else {
                loggedIn(loginResult.UserModel); // jump to the loggedIn state
            }
        };
        show(view);
    }

    loggedIn(UserModel model) {
        var view = ViewFactory.LoggedIn();
        view.Context = model;
        view.Logout += (s,e) => {
            start(); // jump back to start
        };
        show(view);
    }
}
Next up you create your IViewFactory and IModelFactory (your FSM makes it easy to see what you need)
public interface IViewFactory {
    IStartView Start();
    ILoggedInView LoggedIn();
}

public interface IModelFactory {
    IStartModel Start();
}
Now all you need to do is implement IViewFactory, IModelFactory, IStartView, ILoggedInView and the models. The advantage here is that you can see all transitions in the FSM; you get über-low coupling between the views/models, high testability and (if your language permits) a great deal of type safety.
One important point in using the FSM is that you shouldn't just jump between the states - you should also carry all stateful data with you in the jump (as arguments; see loggedIn above). This will help you avoid the global state that usually litters GUI code.
You can watch the presentation at http://prezi.com/bqcr5nhcdhqu/ but it contains no code examples at the moment.
Each interface item that's presented to the user can go to another state from the current one. You basically need to create a map of what button can lead to what other state.
This mapping will allow you to see unused states or ones where multiple buttons or paths can lead to the same state and no others (ones that can be combined).
Hey Morgan, we're building a custom framework in AS3 here at Radical and use the state machine paradigm to power any front end UI activity.
We have a state machine setup for all button events, all display events and more.
AS3, being an event driven language, makes this a very attractive option.
When certain events are caught, states of buttons / display objects are automatically changed.
Having a generalized set of states could definitely help de-spaghettify your code!
A state machine is simply logic that has memory of past events, and that memory shapes how it responds to the next one.
Therefore humans are state machines, and often they expect their software to remember what they've done in the past so that they can proceed.
For instance, you can put an entire survey on one page, but people are more comfortable with multiple smaller pages of questions. The same goes for user registration.
So state machines have a lot of applicability to user interfaces.
They should be understood before being deployed, though, and the entire design must be complete before code is written. State machines can be, are, and will be abused, and if you don't have a very clear idea of why you're using one and what the goal is, you may end up worse off than with other techniques.
-Adam
When a long-running process is being executed, it is a good practice to provide feedback to the user, for example, updating a progress bar.
Some FAQs for GUI libraries suggest something like this:
function long_running_progress()
    do_some_work()
    update_progress_bar()
    while not finished
        do_some_work()
        update_progress_bar()
    end while
end function
Anyway, we know it is a best practice to separate business logic code from user interface code. The example above mixes user interface code into a business logic function.
What is a good technique to implement functions in the business logic layer whose progress could be easily tracked by an user interface without mixing layers?
Answers for any language or platform are welcome.
Provide a callback interface. The business logic will call its method every once in a while, and the user-interface layer will update the progress display accordingly. If you want to allow cancellation, no problem: let the callback method return a value that indicates whether cancellation is requested. This will work regardless of the number of threads.
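A sketch of such a callback interface in C++ (the names, the [0,1] progress scale and the step count are assumptions):

#include <functional>

// The business layer reports a fraction done and reads the return value
// to learn whether the UI requested cancellation.
using ProgressCallback = std::function<bool(double fractionDone)>;

void long_running_process(const ProgressCallback& report) {
    const int steps = 100;
    for (int i = 0; i < steps; ++i) {
        // do_some_work();
        bool keepGoing = report(static_cast<double>(i + 1) / steps);
        if (!keepGoing)
            return;                      // the user asked to cancel
    }
}

// UI layer: updates the progress bar and decides about cancellation, e.g.
//   long_running_process([&](double f) { progressBar.setValue(f); return !cancelRequested; });

The business layer knows nothing about the UI; the UI passes in a function that updates the progress bar and returns false once the user hits Cancel.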
If you used an MVC paradigm you could have the Model publish its current progress state as a property; the Controller could read it every x seconds and then put it into the View. This assumes multi-threading, though, which I'm not sure you allow.
Publishing is a great way to go. It all depends on the platform how this is done. However, when it comes to the user experience there are a couple of things to consider as well:
Don't give the user a progress bar if you don't know how long the task will take. What time is left? What does half-way mean? It's better to use hourglass functionality (spinning wheels, bouncing progress bars, etc.).
The only interesting thing to show progress on is time: what does half-way through a process mean? You want to know whether you have time for that cup of coffee. If you show other things, you are probably displaying the inner workings of the system; most users are not interested or just get confused.
A long-running process should always give the user an escape, a way to cancel the request. You don't want to lock the user up for a long time. Better still is to handle a long-running request completely in the background and let the user come back when the result is ready.