Which one is better? Event or direct reference? - coding-style

Hi, I just want to ask which one is better and why? (Or maybe there's an even better solution; please, I need advice.)
public class Player
{
    GameController _gameController;

    public Player(GameController gameController)
    {
        _gameController = gameController;
        _gameController.OnPlayerGetHit += GetHit;
    }

    private void GetHit(float damage) { }
}

public class GameController
{
    // The delegate signature must match the handler: GetHit takes a float.
    public delegate void PlayerHitCallback(float damage);
    public event PlayerHitCallback OnPlayerGetHit;

    public void OnCollisionHit(float damage)
    {
        OnPlayerGetHit?.Invoke(damage);
    }
}
or
public class Player
{
    public void GetHit(float damage) { }
}

public class GameController
{
    Player _player;

    public void OnCollisionHit(float damage)
    {
        _player.GetHit(damage);
    }
}
I want to ask whether the top solution is better or worse than the latter (or maybe both of them are no good). Please help, I need some advice.
Thanks in advance!

Actually, there are far more alternatives available in Unity for the same goal of passing information and triggering actions between objects:
direct method calls;
C# events;
Unity events;
Unity's message system;
your own event mechanism...
As I and others said in the comments, the "best" choice depends on a lot of things, like the size and expected lifetime of your app, design choices, the experience of the coders, the specific problem to solve, personal preference...
By no means do I intend to give an ultimate answer to this, but I can share a considered point of view, taking those factors into account.
After all, I'd just state the same thing I said in the comment: if you are doing a big application with lots of classes and components, possibly being developed by different people, having the classes know too much about each other can be bad, and a change in one component may result in breakage in all sorts of places across the application. But if your application is small and the relationship between the classes is pretty obvious, you shouldn't bother making your stuff that fancy; you should focus on more important things, like shipping your game.
That said, there are some differences in the intended use of these tools that I'd like to point out.
Calling a method directly on the other object demands two things from your caller: 1) to know who the callee is (have a reference to it), and 2) to know what will happen there (the method name and its purpose). There are many situations where this is fine. Suppose your game controller is responsible for making the player move around the world. You probably already have a reference to the player, and you want to make it, and just it, know when to move. You'd better call player.move().
Calling a method in an object is like calling your friend to say you are available tonight to hang out.
Raising an event is a different story. Your player can die in the world, and many components could be interested in knowing the moment the player dies. An audio manager may want to play a sound effect. A net controller may want to notify a web server. An achievement system may want to give a prize, the enemies may want to celebrate... All those components will have subscribed to player.OnAboutToDie, and all of this will happen at the appropriate moment. With events, you are telling everyone interested in your message (maybe nobody), with no control over what they do with that information. It's like posting on your Twitter that you are available tonight...
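To make this concrete, here is a minimal C# sketch of the many-listeners scenario; the class and member names (OnAboutToDie, AudioManager, AchievementSystem) are illustrative, not from any particular framework:
using System;

public class Player
{
    public event Action OnAboutToDie;

    public void Die()
    {
        // Notify whoever is interested -- possibly nobody.
        OnAboutToDie?.Invoke();
    }
}

public class AudioManager
{
    public AudioManager(Player player) => player.OnAboutToDie += PlayDeathSound;
    private void PlayDeathSound() { /* play the sound effect */ }
}

public class AchievementSystem
{
    public AchievementSystem(Player player) => player.OnAboutToDie += GrantPrize;
    private void GrantPrize() { /* award the prize */ }
}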
C# events and Unity events play the same role; the difference is that C# events are universal across all C# packages and programs, and are more flexible and faster. Unity events can be manipulated in the Inspector by non-programmers and have other Unity-related benefits.
There's also Unity's message system. This is what makes things like Update and Start work on all MonoBehaviours. It is an exaggeration of the message-passing metaphor I described before: all objects that implement a method with a matching name will be triggered by SendMessage, and even their children with BroadcastMessage. It is like paying the local radio station to announce you are available tonight...
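For reference, a small sketch of the message system in use (the Bomb class is invented; SendMessage and BroadcastMessage are the actual Unity calls):
using UnityEngine;

public class Bomb : MonoBehaviour
{
    void Explode()
    {
        // Calls ApplyDamage(10f) on every component of this GameObject
        // that declares a matching method.
        SendMessage("ApplyDamage", 10f, SendMessageOptions.DontRequireReceiver);
        // Same idea, but also reaches components on all child GameObjects.
        BroadcastMessage("ApplyDamage", 10f, SendMessageOptions.DontRequireReceiver);
    }
}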
Edit: I just realized OP never mentioned Unity in the question [:facepalm]. The answer still applies, though.

Related

Organizing GUI code

My question has two parts:
Does anyone have any tips or references to some documentation on the web about how to write GUI code that is easy to read, write, and maintain?
Example.
I find that as my GUI forms become more extensive, I end up with a long list of fairly short event handler methods. If I try to add any private helper methods, they just get lost in the shuffle, and I constantly have to scroll around the page to follow a single line of thought.
How can I easily manage settings across the application?
Example.
If the user selects a new item in a drop-down list, I might need to enable some components on the GUI, update an app config file, and store the new value in a local variable for later. I usually opt not to create event handlers for all the settings (see above), and end up with methods like "LoadGUISettings" and "SaveGUISettings", but then I end up calling these methods all over my code, and they run through a lot of code just to apply very few, if any, actual changes.
Thanks!
Some guidelines for the first question, from an OO perspective:
Break up large classes into smaller ones. Does that panel have a bunch of fairly modular subpanels? Make a smaller class for each subpanel, then have another, higher-level class put them all together.
Reduce duplication. Do you have two trees that share functionality? Make a superclass! Are all of your event handlers doing something similar? Create a method that they all call!
Second question. I see two ways of doing this:
Listeners. If many components should respond to a change that occurred in one component, have that component fire an event.
Global variables. If many components are reading and writing the same data, make it global (however you do that in your chosen language). For extra usefulness, combine the two approaches and let components listen for changes in the global data object, as in the sketch below.
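A small C# sketch of that combined approach, with invented names: a shared settings object raises an event whenever a value changes, so components can react without the writer knowing about the readers.
using System;

public static class AppSettings
{
    private static string _selectedItem;
    public static event Action<string> SelectedItemChanged;

    public static string SelectedItem
    {
        get => _selectedItem;
        set
        {
            if (_selectedItem == value) return;
            _selectedItem = value;
            SelectedItemChanged?.Invoke(value); // notify all listeners
        }
    }
}

// Elsewhere, a panel subscribes once and stays in sync automatically:
// AppSettings.SelectedItemChanged += item => RefreshControlsFor(item);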
If you're using WPF, you might want to read the Composite Application Guidance for WPF.
It discusses many of these topics (as well as many others). The main goal of that guidance is to enable large-scale applications to be built in a flexible, maintainable manner.
You should definitely look at Jeremy Miller's guide to rich client design. It is incomplete, but I believe he is writing a book on the subject.
Another blog you should check out is Rich Newman's. He is writing about Composite Application Block which is a MS best practice guide on how to structure rich clients.
You can also read this book which is only a very light read but gives you some good ideas.

Animation and logic

I have a question not necessarily specific to any platform or API but more specific to interactions in code between animations.
A game is a good example. Let's say the player dies, and there's a death animation that must finish before the object is removed. This is typical for many cases where some animation has to finish before continuing with what ever action that would normally follow. How would you go about doing this?
My question is about the control and logic of animation. How would you design a system which is capable of driving the animation but at the same time implement custom behavior?
The problem that typically arises is that the game logic and animation data become codependent: the animation has to call back into code, or somehow contain metadata for the duration of animation sequences. What's typically even more of a problem is when an animation has to trigger some other code, say spawning a custom sprite after 1.13s; this tends to result in deep nesting of code and animation. A bomb with a timer would be an example of both logic and animation where the two things interact, but I want to keep them as separate as possible.
But what would you do to keep animation and code two separate things?
Recently I've been trying out MGrammar, and I'm thinking a DSL might be the way to go. That would allow the animation, or the animator, to express certain things in a presumably safe manner which would then go into the content pipeline...
The solution depends on the gameplay you're going for. If the gameplay is 100% code-driven, the anim is driven by the entity's state (state-driven animation). If it's graphics/animation-driven, the anim length determines how long the entity is in that state (anim-driven state).
The latter is typically more flexible in a commercial production environment as the designer can just say "we need that death anim to be shorter" and it gets negotiated. But when you have very precise rules in mind or a simulated system like physics, state-driven animation may be preferable. There is no 100% orthogonal and clean solution for the general case.
One thing that helps to keep it from getting too messy is to consider game AI patterns:
Game AI is typically implemented as some form of finite state machine, possibly multiple state machines or layered in some way (the most common division being a high-level scripting format with low-level actions/transitions).
At the low level you can say things like "in the hitreact state, playback my hitreact anim until it finishes, then find out from the high-level logic what state to continue from." At the high level there are a multitude of ways to go about defining the logic, but simple repeating loops like "approach/attack/retreat" are a good way to start.
This helps to keep the two kinds of behaviors - the planned activities, and the reactions to new events - from being too intermingled. Again, it doesn't always work out that way in practice, for the same reasons that sometimes you want code to drive data or the other way around. But that's AI for you. No general solutions here!
I think you should separate the rendering from the game logic.
You have at least two different kind of objects:
An entity that holds the unit's data (hit points, position, speed, strength, etc.) and logic (how it should move, what happens if it runs out of hit points, ...).
Its representation, that is the sprite, colors, particles attached to it, sounds, whatever. The representation can access the entity data, so it knows the position, hit points, etc.
Maybe a controller if the entity can be directly controlled by a human or an AI (like a car in a car simulation).
Yes that sounds like the Model-View-Controller architecture. There are lots of resources about this, see this article from deWiTTERS or The Guerrilla Guide to Game Code by Jorrit Rouwé, for game-specific examples.
About your animation problems: for a dying unit, when the Model updates its entity and figures out it has no more hit points, it could set a flag to say it's dead and remove the entity from the game (and from memory). Then later, when the View updates, it reads the flag and starts the dying animation. But it can be difficult to decide where to store this flag since the entity object should disappear.
There's a better way, IMHO. When your entity dies, you could send an event to all listeners registered to the UnitDiedEvent that belongs to this specific entity, then remove the entity from the game. The entity representation object listens to that event, and its handler starts the dying animation. When the animation is over, the entity representation can finally be removed.
The observer design pattern can be useful here.
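A sketch of that hand-off in C# (names invented): the model raises the event and forgets the entity; the representation listens, starts the dying animation, and removes itself when the animation completes.
using System;

public class Entity
{
    public event Action Died;
    public int HitPoints { get; private set; } = 10;

    public void TakeDamage(int amount)
    {
        HitPoints -= amount;
        if (HitPoints <= 0)
            Died?.Invoke(); // the model is now free to drop the entity
    }
}

public class EntityView
{
    public EntityView(Entity entity) => entity.Died += OnDied;

    private void OnDied()
    {
        // start the dying animation here; remove this view once it finishes
    }
}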
Your enemy needs to have multiple states. Alive and dead are not enough. Alive, dying and dead might be. Your enemy processing loop should check its state and perform different operations:
if (state == alive and hp == 0)
{
    state = dying
    currentanimation = animation(enemy_death)
    currentanimation.play()
}
else if (state == dying and currentanimation.isPlaying)
{
    // do nothing until the animation is finished.
    // more efficiently implemented as a callback.
}
else if (state == dying and not currentanimation.isPlaying)
{
    state = dead
    // at this point either the game engine cleans up the object,
    // or the object deletes itself.
}
I guess I don't really see the problem.
If you had some nest of callbacks I could see why things might be hard to follow, but if you only have one callback for all animation events and one update function which starts animations, it's pretty easy to follow the code.
So what do you gain by separating them?
The animation can call a callback function that you supply, or send a generic event back to the code. It doesn't need anything more than that, which keeps all the logic in the code. Just inject the callback or connect the event when the animation is created, as sketched below.
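A minimal sketch of that injection in C# (all names invented): the animation only knows it must call something on completion, so every decision about what happens next stays with the caller.
using System;

public class TimedAnimation
{
    private readonly float _duration;
    private readonly Action _onComplete;
    private float _elapsed;

    public TimedAnimation(float duration, Action onComplete)
    {
        _duration = duration;
        _onComplete = onComplete;
    }

    public void Update(float deltaTime)
    {
        if (_elapsed >= _duration) return; // already finished
        _elapsed += deltaTime;
        if (_elapsed >= _duration)
            _onComplete?.Invoke(); // fires once; the caller owns what happens next
    }
}

// Usage: the code that created the animation owns the follow-up logic.
// var death = new TimedAnimation(2.0f, () => RemoveEntityFromWorld());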
I try as much as possible to keep callbacks out of child animations. Animations should indicate that they are complete; the actions taken on an animation's completion should be called from the controller level of the application.
In ActionScript this is the beauty of event dispatching/listening - the controller object can create the animation and then assign a handler for an event which the animation dispatches when it is complete.
I've used this pattern for several things in Flash projects and it keeps code independent far better than callbacks.
Especially if you write custom event objects which extend Event to carry the kind of information you need, such as the MouseEvent that carries localX, localY, stageX and stageY. I use a custom event I've named NumberEvent to broadcast any kind of numerical information around my applications.
In the ActionScript controller object:
var animObj:AwsomeAnim = new AwsomeAnim();
animObj.addEventListener(AwsomeAnim.COMPLETE, _onAnimFinish); // listen before starting
animObj.start();

function _onAnimFinish(e:Event):void
{
    // actions to take when animation is complete here
}
In JavaScript, where such custom events do not exist, I just keep a boolean variable in the animation object and check it on a timer from the controller.
In the JavaScript controller object:
var animObj = new AnimObj(); // the constructor must set this.isComplete = false
animObj.start();

function checkAnimComplete()
{
    if (animObj.isComplete)
    {
        animCompleteActions();
    } else {
        setTimeout(checkAnimComplete, 300);
    }
}
checkAnimComplete();

function animCompleteActions()
{
    // anim complete actions here
}
For a couple of games I made, I solved this problem by creating two animation classes:
asyncAnimation - for fire-and-forget type animations;
syncAnimation - for when I wanted to wait for the animation to resolve before returning control.
As games usually have a main loop, it looked something like this C#-style pseudocode:
while (isRunning)
{
    renderStaticItems();
    renderAsyncAnimations();

    if (list_SyncAnimations.Count() > 0)
    {
        syncAnimation = list_SyncAnimations.First();
        syncAnimation.render();
        if (syncAnimation.hasFinished())
        {
            list_SyncAnimations.removeAt(0);
            // May want some logic here saying: if the sync animation was
            // 'player dying', update some variable so that we know the
            // animation has ended and the game can prompt for the
            // 'try again' screen.
        }
    }
    else
    {
        renderInput();
        handleOtherLogic(); // e.g. is the player dead? add a sync animation if required.
    }
}
So what the code does is maintain a list of sync animations that need to be resolved before continuing with the game - if you need to wait for several animations, just stack them up.
Also, it might be a good idea to look into the command pattern, or to provide a callback for when the sync animation has finished, to handle your logic - it's really up to you how you want to do it.
As for your "spawn at sec 1.13" case, perhaps the SyncAnimation class should have an overridable .OnUpdate() method which can do some custom logic (or call a script) - see the sketch below.
It depends what your requirements may be.
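For instance, a sketch of that overridable hook (the SyncAnimation base class here is hypothetical, matching the loop above): a subclass watches the elapsed time and fires its custom logic once the 1.13s mark is crossed.
public class SpawnSpriteAnimation : SyncAnimation
{
    private bool _spawned;

    protected override void OnUpdate(float elapsedSeconds)
    {
        if (!_spawned && elapsedSeconds >= 1.13f)
        {
            _spawned = true;
            // spawn the custom sprite (or call into a script) here
        }
    }
}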

State Machines and User Interface work -- any examples/experience?

I'm looking for ways to de-spaghettify my front-end widget code. It's been suggested that a Finite State Machine is the right way to think about what I'm doing. I know a State Machine paradigm can be applied to almost any problem. I'm wondering if there are some experienced UI programmers who actually make a habit of this.
So, the question is -- do any of you UI programmers think in terms of State Machines in your work? If so, how?
thanks,
-Morgan
I'm currently working with a (proprietary) framework that lends itself well to the UI-as-state-machine paradigm, and it can definitely reduce (but not eliminate) the problems with complex and unforeseen interactions between UI elements.
The main benefit is that it allows you to think at a higher level of abstraction, at a higher granularity. Instead of thinking "If button A is pressed then combobox B is locked, textfield C is cleared and button D is unlocked", you think "Pressing button A puts the app into the CHECKED state" - and entering that state means that certain things happen.
I don't think it's useful (or even possible) to model the entire UI as a single state machine, though. Instead, there's usually a number of smaller state machines that each handles one part of the UI (consisting of several controls that interact and belong together conceptually), and one (maybe more than one) "global" state machine that handles more fundamental issues.
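As a sketch of that granularity (control and state names invented), one small state machine can own a related group of controls, with each state entry configuring them all in one place:
public enum PanelState { Unchecked, Checked }

public class SearchPanel
{
    // Hypothetical stand-ins for the real toolkit's widgets.
    private readonly Control comboBoxB = new();
    private readonly Control buttonD = new();
    private readonly TextControl textFieldC = new();

    public void OnButtonAPressed() => EnterState(PanelState.Checked);

    private void EnterState(PanelState state)
    {
        // Entering a state configures the whole group of controls at once.
        bool isChecked = state == PanelState.Checked;
        comboBoxB.Enabled = !isChecked; // locked while in the CHECKED state
        if (isChecked) textFieldC.Text = "";
        buttonD.Enabled = isChecked;
    }
}

public class Control { public bool Enabled; }
public class TextControl : Control { public string Text = ""; }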
State machines are generally too low-level to help you think about a user interface. They make a nice implementation choice for a UI toolkit, but there are just too many states and transitions to describe in a normal application for you to describe them by hand.
I like to think about UIs with continuations. (Google it -- the term is specific enough that you will get a lot of high quality hits.)
Instead of my apps being in various states represented by status flags and modes, I use continuations to control what the app does next. It's easiest to explain with an example. Say you want to popup a confirmation dialog before sending an email. Step 1 builds an email. Step 2 gets the confirmation. Step 3 sends the email. Most UI toolkits require you to pass control back to an event loop after each step which makes this really ugly if you try to represent it with a state machine. With continuations, you don't think in terms of the steps the toolkit forces upon you -- it's all one process of building and sending an email. However, when the process needs the confirmation, you capture the state of your app in a continuation and hand that continuation to the OK button on the confirmation dialog. When OK is pressed, your app continues from where it was.
Continuations are relatively rare in programming languages, but luckily you can get sort of a poor man's version using closures. Going back to the email sending example, at the point you need to get the confirmation you write the rest of the process as a closure and then hand that closure to the OK button. Closures are sort of like anonymous nested subroutines that remember the values of all your local variables the next time they are called.
Hopefully this gives you some new directions to think about. I'll try to come back later with real code to show you how it works.
Update: Here's a complete example with Qt in Ruby. The interesting parts are in ConfirmationButton and MailButton. I'm not a Qt or Ruby expert so I'd appreciate any improvements you all can offer.
require 'Qt4'

class ConfirmationWindow < Qt::Widget
  def initialize(question, to_do_next)
    super()
    label = Qt::Label.new(question)
    ok = ConfirmationButton.new("OK")
    ok.to_do_next = to_do_next
    cancel = Qt::PushButton.new("Cancel")
    Qt::Object::connect(ok, SIGNAL('clicked()'), ok, SLOT('confirmAction()'))
    Qt::Object::connect(ok, SIGNAL('clicked()'), self, SLOT('close()'))
    Qt::Object::connect(cancel, SIGNAL('clicked()'), self, SLOT('close()'))
    box = Qt::HBoxLayout.new()
    box.addWidget(label)
    box.addWidget(ok)
    box.addWidget(cancel)
    setLayout(box)
  end
end

class ConfirmationButton < Qt::PushButton
  slots 'confirmAction()'
  attr_accessor :to_do_next
  def confirmAction()
    @to_do_next.call()
  end
end

class MailButton < Qt::PushButton
  slots 'sendMail()'
  def sendMail()
    lucky = rand().to_s()
    message = "hello world. here's your lucky number: " + lucky
    do_next = lambda {
      # Everything in this block will be delayed until the
      # confirmation button is clicked. All the local
      # variables calculated earlier in this method will retain
      # their values.
      print "sending mail: " + message + "\n"
    }
    popup = ConfirmationWindow.new("Really send " + lucky + "?", do_next)
    popup.show()
  end
end

app = Qt::Application.new(ARGV)
window = Qt::Widget.new()
send_mail = MailButton.new("Send Mail")
quit = Qt::PushButton.new("Quit")
Qt::Object::connect(send_mail, SIGNAL('clicked()'), send_mail, SLOT('sendMail()'))
Qt::Object::connect(quit, SIGNAL('clicked()'), app, SLOT('quit()'))
box = Qt::VBoxLayout.new(window)
box.addWidget(send_mail)
box.addWidget(quit)
window.setLayout(box)
window.show()
app.exec()
It's not the UI that needs to be modeled as a state machine; it's the objects being displayed that it can be helpful to model as state machines. Your UI then becomes (oversimplification) a bunch of event handlers for change-of-state in the various objects.
It's a change from:
DoSomethingToTheFooObject();
UpdateDisplay1(); // the main display for the Foo object
UpdateDisplay2(); // has a label showing the Foo's width, which may have changed
...
to:
Foo.DoSomething();
void OnFooWidthChanged() { UpdateDisplay2(); }
void OnFooPaletteChanged() { UpdateDisplay1(); }
Thinking about what changes in the data you are displaying should cause what repainting can be clarifying, both from the client UI side and the server Foo side.
If you find that, of the 100 UI thingies that may need to be repainted when Foo's state changes, all of them have to be redrawn when the palette changes, but only 10 when the width changes, it might suggest something about what events/state changes Foo should be signaling. If you find that you have a large event handler OnFooStateChanged() that checks through a number of Foo's properties to see what has changed, in an attempt to minimize UI updates, it suggests something about the granularity of Foo's event model. If you find you want to write a little standalone UI widget you can use in multiple places in your UI, but that it needs to know when Foo changes and you don't want to pull in all the code that Foo's implementation brings with it, it suggests something about the organization of your data relative to your UI, where you are using classes vs. interfaces, etc. Fundamentally, it makes you think more seriously about what your presentation layer is - more seriously than "all the code in my form classes".
-PC
There is a book out there about this topic.
Sadly it's out of print, and the rare used copies available are very expensive.
Constructing the User Interface with Statecharts
by Ian Horrocks, Addison-Wesley, 1998
We were just talking about Horrocks' Constructing the User Interface with Statecharts, prices 2nd-hand range from $250 up to nearly $700. Our software development manager rates it as one of the most important books he's got (sadly, he lives on the other side of the world).
Samek's books on statecharts draw significantly from this work, although in a slightly different domain and reportedly not as clearly. "Practical UML Statecharts in C/C++: Event-Driven Programming for Embedded Systems" is also available on Safari.
Horrocks is cited quite heavily - there are twenty papers on the ACM Portal so if you have access there you might find something useful.
There's a book and software FlashMX for Interactive Simulation. They have a PDF sample chapter on statecharts.
Objects, Components, and Frameworks with UML: The Catalysis(SM) Approach has a chapter on Behaviour Models which includes about ten pages of useful examples of using statecharts (I note that it is available very cheaply second hand). It is rather formal and heavy going but that section is easy reading.
It's not really a UI problem, to be honest.
I'd do the following:
Define your states
Define your transitions - which states are accessible from which others?
How are these transitions triggered? What are the events?
Write your state machine - store the current state, receive events, and if an event can cause a valid transition from the current state, change the state accordingly, as in the sketch below.
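A sketch of those steps in C# (the states, triggers, and transitions are invented for illustration): enumerate the states and events, whitelist the valid transitions, and only change state when the current (state, event) pair is allowed.
using System.Collections.Generic;

public enum State { Idle, Loading, Ready, Error }
public enum Trigger { Load, LoadSucceeded, LoadFailed, Reset }

public class UiStateMachine
{
    private static readonly Dictionary<(State, Trigger), State> Transitions = new()
    {
        { (State.Idle, Trigger.Load), State.Loading },
        { (State.Loading, Trigger.LoadSucceeded), State.Ready },
        { (State.Loading, Trigger.LoadFailed), State.Error },
        { (State.Error, Trigger.Reset), State.Idle },
    };

    public State Current { get; private set; } = State.Idle;

    public bool Receive(Trigger trigger)
    {
        if (!Transitions.TryGetValue((Current, trigger), out var next))
            return false; // not a valid transition from the current state
        Current = next;   // the caller can now update the UI for the new state
        return true;
    }
}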
I have a Prezi presentation about a pattern that I call "State First".
It is a combination of MVP/IoC/FSM, and I've used it successfully in .Net/WinForms, .Net/Silverlight and Flex (at the moment).
You start by coding your FSM:
class FSM {
    IViewFactory ViewFactory;
    IModelFactory ModelFactory;
    Container Container; // e.g. a StackPanel in SL

    ctor(viewFactory, modelFactory, container) {
        ...assignments...
        start();
    }

    start() {
        var view = ViewFactory.Start();
        var model = ModelFactory.Start();
        view.Context = model;
        view.Login += (s,e) => {
            var loginResult = model.TryLogin(); // vm contains username/password now
            if (loginResult.Error) {
                // show error?
            } else {
                loggedIn(loginResult.UserModel); // jump to the loggedIn state
            }
        };
        show(view);
    }

    loggedIn(UserModel model) {
        var view = ViewFactory.LoggedIn();
        view.Context = model;
        view.Logout += (s,e) => {
            start(); // jump back to start
        };
        show(view);
    }
}
Next up you create your IViewFactory and IModelFactory (your FSM makes it easy to see what you need)
public interface IViewFactory {
    IStartView Start();
    ILoggedInView LoggedIn();
}

public interface IModelFactory {
    IStartModel Start();
}
Now all you need to do is implement IViewFactory, IModelFactory, IStartView, ILoggedInView and the models. The advantage here is that you can see all transitions in the FSM, you get über-low coupling between the views/models, high testability and (if your language permits) a great deal of type safety.
One important point in using the FSM is that you shouldn't just jump between the states - you should also carry all stateful data with you in the jump (as arguments; see loggedIn above). This will help you avoid the global state that usually litters GUI code.
You can watch the presentation at http://prezi.com/bqcr5nhcdhqu/ but it contains no code examples at the moment.
Each interface item that's presented to the user can go to another state from the current one. You basically need to create a map of which button can lead to which other state.
This mapping will allow you to spot unused states, and states where multiple buttons or paths lead to the same state and nowhere else (ones that can be combined).
Hey Morgan, we're building a custom framework in AS3 here at Radical and use the state machine paradigm to power all front-end UI activity.
We have a state machine set up for all button events, all display events and more.
AS3, being an event-driven language, makes this a very attractive option.
When certain events are caught, the states of buttons/display objects are automatically changed.
Having a generalized set of states could definitely help de-spaghettify your code!
A state machine is simply logic that has memory of past events.
Therefore humans are state machines, and they often expect their software to remember what they've done in the past so that they can proceed.
For instance, you can put an entire survey on one page, but people are more comfortable with multiple smaller pages of questions. The same goes for user registration.
So state machines have a lot of applicability to user interfaces.
They should be understood before being deployed, though, and the entire design must be complete before code is written - state machines can, are, and will be abused, and if you don't have a very clear idea of why you're using one and what the goal is, you may end up worse off than with other techniques.
-Adam

How much logic should you put in the UI class?

I'm not sure if this has been asked or not yet, but how much logic should you put in your UI classes?
When I started programming I used to put all my code behind events on the form, which, as everyone knows, makes it an absolute pain in the butt to test and maintain. Over time I have come to realize how bad this practice is and have started breaking everything into classes.
Sometimes when refactoring I still have that feeling of "where should I put this stuff?", but because most of the time the code I'm working on is in the UI layer, has no unit tests and will break in unimaginable places, I usually end up leaving it in the UI layer.
Are there any good rules about how much logic to put in your UI classes? What patterns should I be looking for so that I don't do this kind of thing in the future?
Just logic dealing with the UI.
Sometimes people try to put even that into the Business layer. For example, one might have in their BL:
if (totalAmount < 0)
    color = "RED";
else
    color = "BLACK";
And in the UI display totalAmount using color -- which is completely wrong. It should be:
if (totalAmount < 0)
    isNegative = true;
else
    isNegative = false;
And it should be completely up to the UI layer how totalAmount should be displayed when isNegative is true.
As little as possible...
The UI should only have logic related to presentation. My personal preference now is to have the UI/View:
just raise events (with supporting data) to a Presenter, stating that something has happened, and let the Presenter respond to the event (see the sketch after this list);
have methods to render/display the data to be presented;
perform a minimal amount of client-side validation to help the user get it right the first time (preferably done in a declarative manner), screening off invalid input before it even reaches the Presenter, e.g. ensuring that a text field value is within the a-b range by setting the min and max properties.
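A minimal C# sketch of that arrangement (interface and member names invented): the view exposes an event plus render methods, and the presenter owns the response.
using System;

public interface ICustomerView
{
    event Action<int> CustomerSelected; // raised with the selected customer id
    void ShowCustomerName(string name); // render-only method
}

public class CustomerPresenter
{
    private readonly ICustomerView _view;

    public CustomerPresenter(ICustomerView view)
    {
        _view = view;
        _view.CustomerSelected += OnCustomerSelected;
    }

    private void OnCustomerSelected(int id)
    {
        // consult the model, then hand the result back to the view
        _view.ShowCustomerName(LookUpName(id));
    }

    private string LookUpName(int id) => "..."; // placeholder for model access
}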
http://martinfowler.com/eaaDev/uiArchs.html describes the evolution of UI design. An excerpt:
When people talk about self-testing code, user interfaces quickly raise their head as a problem. Many people find that testing GUIs is somewhere between tough and impossible. This is largely because UIs are tightly coupled into the overall UI environment and difficult to tease apart and test in pieces.
But there are occasions where this is impossible, you miss important interactions, there are threading issues, and the tests are too slow to run.
As a result there's been a steady movement to design UIs in such a way that minimizes the behavior in objects that are awkward to test. Michael Feathers crisply summed up this approach in The Humble Dialog Box. Gerard Meszaros generalized this notion to the idea of a Humble Object - any object that is difficult to test should have minimal behavior. That way, if we are unable to include it in our test suites, we minimize the chances of an undetected failure.
The pattern you are looking for may be Model-View-Controller, which basically separates the DB (model) from the GUI (view) and the logic (controller). Here's Jeff Atwood's take on this. I believe one should not be fanatical about any framework, language or pattern - while heavy numerical calculations probably should not sit in the GUI, it is fine to do some basic input validation and output formatting there.
I suggest the UI shouldn't include any sort of business logic - not even the validations. They should all live at the business logic level. That way you make your BLL independent of the UI, and you can easily convert your Windows app to a web app or web services, and vice versa. You may use frameworks like CSLA to achieve this.
Input validations attached to controls, like email, age and date validators on text boxes.
James is correct. As a rule of thumb, your business logic should not make any assumption regarding presentation.
What if you plan on displaying your results on various media? One of them could be a black and white printer. "RED" would not cut it.
When I create a model or even a controller, I try to convince myself that the user interface will be a bubble bath. Believe me, that dramatically reduces the amount of HTML in my code ;)
Always put the minimum amount of logic possible in whatever layer you are working in.
By that I mean: if you are adding code to the UI layer, add the least amount of logic necessary for that layer to perform its UI (only) operations.
Not only does doing that result in a good separation of layers... it also saves you from code bloat.
I have already written a 'compatible' answer to this question here. The rule (according to me) is that there should not be any logic in the UI except UI logic and calls to standard procedures that manage generic/specific cases.
In our situation, we reached a point where a form's code is automatically generated from the list of controls available on the form. Depending on the kind of control (bound text, bound boolean, bound number, bound combobox, unbound label, ...), we automatically generate a set of event procedures (such as beforeUpdate and afterUpdate for text controls, onClick for labels, etc.) that launch generic code located outside the form.
This code can then either do generic things (test whether the field value can be updated in the beforeUpdate event, order the recordset ascending/descending in the onClick event, etc.) or perform specific treatments based on the form's and/or the control's name (for example, doing some work in an afterUpdate event, such as calculating the value of a totalAmount control from the unitPrice and quantity values).
Our system is now fully automated, and form production relies on two tables: Tbl_Form, listing the forms available in the app, and Tbl_Control, listing the controls available on our forms.
Following the referenced answer and other posts on SO, some users have asked me to expand on my ideas. As the subject is quite complex, I finally decided to start a blog to talk about this UI logic. I have already started writing about the UI interface, but it might take a few days (.. weeks!) until I specifically reach the subject you're interested in.

Giving user feedback during long-running process and user-interface/business-logic separation

When a long-running process is being executed, it is a good practice to provide feedback to the user, for example, updating a progress bar.
Some FAQs for GUI libraries suggest something like this:
function long_running_progress()
    do_some_work()
    update_progress_bar()
    while not finished
        do_some_work()
        update_progress_bar()
    end while
end function
Anyway, we know it is best practice to separate business logic from user interface code. The example above mixes user interface code into a business logic function.
What is a good technique to implement functions in the business logic layer whose progress could be easily tracked by an user interface without mixing layers?
Answers for any language or platform are welcome.
Provide a callback interface. The business logic will call its method every once in a while, and the UI layer will update the progress display accordingly. If you want to allow cancellation - no problem: let the callback method have a return value that indicates a need for cancellation. This will work regardless of the number of threads.
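A sketch of that callback interface in C# (names invented; in .NET you could also lean on the built-in IProgress<T> plus a CancellationToken): the business logic stays UI-agnostic, and the boolean return value carries the cancellation request back.
public interface IProgressCallback
{
    // Returns false if the long-running work should stop.
    bool ReportProgress(int percentComplete);
}

public class ReportGenerator // business logic layer, knows nothing about the UI
{
    public void Generate(IProgressCallback callback)
    {
        for (int step = 0; step < 100; step++)
        {
            // ...do one unit of work...
            if (!callback.ReportProgress(step + 1))
                return; // the UI asked us to cancel
        }
    }
}

// The UI layer implements IProgressCallback: its ReportProgress updates the
// progress bar and returns false once the user has clicked Cancel.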
If you use the MVC paradigm, you could have the Model publish its current progress state as a property; the Controller could extract this every x seconds and put it into the View. This assumes multi-threading, though, which I'm not sure you allow.
Publishing is a great way to go; how it is done depends on the platform. However, when it comes to the user experience there are a couple of things to consider as well:
Don't give the user a progress bar if you don't know how long the task will execute. What time is left? What does half-way mean? It's better to use hourglass-style feedback (spinning wheels, bouncing progress bars, etc.).
The only interesting thing to measure progress against is time: what does half-way through a process mean? You want to know whether you have time for that cup of coffee. If you show other things, you are probably exposing the inner workings of the system; most users are not interested or just get confused.
A long-running process should always give the user an escape, a way to cancel the request. You don't want to lock the user up for a long time. Better still, handle a long-running request entirely in the background and let the user come back when the result is ready.
