I've been developing Web applications for a while now and have dipped my toe into GUI and Game application development.
In a web application (PHP, in my case), a request is made to a file, that file includes all the necessary files to process the information into memory, and then the flow is from top to bottom for each request (mostly).
I know that in games the action happens within the game loop, but how are all the different elements of a game layered into that single loop (menu system, GUI, asset loading, and the 3D world), with the constant loading and unloading of certain things?
Same for GUI programs: I believe there's an "application loop" of some sort.
Is most of it loaded into memory up front and then accessed, or are items linked in and loaded into memory only when needed?
What helped me develop web applications faster was understanding the flow of the program. It doesn't have to be detailed, just the general idea or pseudocode.
There's almost always a loop in all of these - but it's not something you would tend to think about during most of your development.
If you take a step back, your web applications are based around a loop - the Web Server's accept() loop:
while(listening) {
get a socket connection;
handle it;
}
... but as a web developer, you're shielded from that, and write 'event-driven' code -- 'when someone requests this URL, do this'.
GUIs are also event driven, and the events are also detected by a loop somewhere:
while(running) {
get mouse/keyboard/whatever event
handle it
}
But a GUI developer doesn't need to think about the loop much. They write 'when a mouse click occurs here, do this'.
Games, again the same. Someone has to write a loop:
while(game is in progress) {
invoke every game object's 'move one frame' method;
poll for an input event;
}
... while other code is written in a more event-driven style: 'when a bullet object coincides with this object, trigger an explosion event'.
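To make the shape of those loops concrete, here is a minimal, runnable sketch in C++; GameObject, Bullet, and pollInputAndDispatch are invented stand-ins for illustration, not any particular engine's API:

#include <cstdio>
#include <vector>

// A game object knows how to advance itself one frame.
struct GameObject {
    virtual void moveOneFrame() = 0;
    virtual ~GameObject() {}
};

struct Bullet : GameObject {
    int x = 0;
    void moveOneFrame() override { ++x; }   // trivial stand-in "physics"
};

// Stand-in for polling the OS/input layer and dispatching events to
// handlers; returns false when the player quits (here: after 100 frames).
bool pollInputAndDispatch(int frame) { return frame < 100; }

int main() {
    std::vector<GameObject*> objects{ new Bullet() };
    int frame = 0;
    bool running = true;
    while (running) {                          // the game loop itself
        for (GameObject* obj : objects)        // move every object one frame
            obj->moveOneFrame();
        running = pollInputAndDispatch(++frame);
    }
    std::printf("ran %d frames\n", frame);
    for (GameObject* obj : objects) delete obj;
}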
For applications, and to a lesser extent games, the software is event-driven. The user does "something" with the keyboard or mouse, and that event is sent to the rest of the software.
In games the game loop is important because it is focused on processing the screen and the game state, and many games need real-time performance. With modern 3D graphics APIs, much of the screen processing can be offloaded to the GPU. The game state, however, is tracked by the main loop, and much of a game team's effort goes into making the processing of that loop very smooth.
In applications, heavy processing is typically spawned onto a separate thread. This is a complex subject because of the issues that arise when two things try to access the same data; there are whole books on the subject.
For applications the sequence is:
1. The user does X, and X plus associated information (like X,Y coordinates) is sent to the UI_Controller.
2. The UI_Controller decides which command to execute.
3. The command is executed.
4. The model/data is modified.
5. The command tells the UI_Controller to update various areas of the UI.
6. The UI_Controller redraws the UI.
7. The command returns.
8. The application waits for the next event.
There are several variants of this. For example, the model can allow listeners to subscribe to changes in the data; when the data changes, the listeners execute and redraw the UI.
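As a toy illustration of that sequence (and the listener variant), here is a sketch in C++; Model, UIController, and IncrementCommand are invented names, not any particular framework:

#include <functional>
#include <iostream>
#include <vector>

// Model that lets listeners subscribe to data changes (the variant above).
struct Model {
    int value = 0;
    std::vector<std::function<void(int)>> listeners;
    void setValue(int v) {
        value = v;
        for (auto& l : listeners) l(value);   // notify listeners of the change
    }
};

struct UIController;

// A command: modifies the model, then asks the controller to update the UI.
struct IncrementCommand {
    void execute(Model& m, UIController& ui);
};

struct UIController {
    Model& model;
    explicit UIController(Model& m) : model(m) {}
    void redraw() { std::cout << "redraw UI, value = " << model.value << "\n"; }
    // User does X: the controller decides which command to execute.
    void onUserEvent() {
        IncrementCommand cmd;
        cmd.execute(model, *this);   // command returns; wait for next event
    }
};

void IncrementCommand::execute(Model& m, UIController& ui) {
    m.setValue(m.value + 1);   // modify the model/data
    ui.redraw();               // tell the controller to update the UI
}

int main() {
    Model model;
    UIController ui(model);
    ui.onUserEvent();   // simulate "user does X"
}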
As far as game programming went, I was merely a hobbyist; however, this is what I usually did:
I had an object that represented a very generic concept of a "Scene" in the game. All the different major sections of the game derived from this Scene object. A scene could really be anything, depending on what type of game it is. Anyway, each more specific scene that derived from Scene had a procedure to Load all of the necessary elements for that scene.
When the game was to change scenes, the pointer to the active scene was set to a new scene, which would then load all of its needed objects.
The generic Scene object had virtual functions such as Load, Draw, and Logic that were called at particular times in the game loop from the active scene pointer. Every specific scene had its own ways of implementing these methods.
I don't know if that's how it's supposed to be done or not, but it was a very easy way for me to control the flow of things. The scene concept also made it easy to store multiple scenes as collections. With multiple scene pointers stored in a stack of sorts at one time, scenes could be kept in reserve and maintain their full state when returned to, or even be dimmed but continue to draw while the active scene drew over them as an overlay of sorts.
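A minimal sketch of that arrangement might look like this in C++ (MenuScene, sceneStack, and everything except the Load/Draw/Logic idea are made up for illustration):

#include <memory>
#include <vector>

// Generic Scene: Load/Draw/Logic are virtual and are invoked at fixed
// points in the game loop through the active-scene pointer.
struct Scene {
    virtual void Load()  = 0;   // load this scene's assets
    virtual void Logic() = 0;   // advance this scene's state one frame
    virtual void Draw()  = 0;   // render this scene
    virtual ~Scene() {}
};

struct MenuScene : Scene {
    void Load()  override { /* load menu textures, fonts, ... */ }
    void Logic() override { /* handle menu selection */ }
    void Draw()  override { /* draw the menu */ }
};

// A stack of scenes: paused scenes keep their full state underneath,
// and can still be drawn dimmed while the active scene overlays them.
std::vector<std::unique_ptr<Scene>> sceneStack;

void pushScene(std::unique_ptr<Scene> next) {
    next->Load();                           // new scene loads its objects
    sceneStack.push_back(std::move(next));  // becomes the active scene
}

void gameLoopFrame() {
    if (sceneStack.empty()) return;
    for (auto& s : sceneStack) s->Draw();   // draw bottom-up, as overlays
    sceneStack.back()->Logic();             // only the active scene runs logic
}

int main() {
    pushScene(std::make_unique<MenuScene>());
    gameLoopFrame();   // one iteration of the loop, for illustration
}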
So anyway, if you do it like that it's not precisely like a web page but I guess if you think about it the right way it's similar enough.
I have a Cocoa application window (NSWindow) whose position on the screen should be updated frequently (depending on some calculation). As noted in the documentation, UI changes should be made on the main thread:
void calculationThread()
{
while(true)
{
calculatePosition();
if(positionChanged)
{
dispatch_async(dispatch_get_main_queue(), ^{ setWindowPos(); });
}
}
}
void setWindowPos()
{
[window setFrame:_newFrame display:YES];
}
Now the problem I have is that the window movement is very slow and delayed. After some profiling I see that the calculation process takes about 40 ms, meaning that I'm queueing up UI updates 25 times a second.
I've read here that this might be faster than they can be processed, and that a timer should be used to fire the changes every tenth of a second or so. But wouldn't that be too slow for the human eye? (I mean, in that case the movement wouldn't be delayed, but it would be laggy, causing pretty much the same effect.)
I would appreciate some knowledge sharing on this. My main two questions are:
Are 25-30 UI updates per second really too many?
If so, what is the recommended frequency for UI changes?
The frequency at which a window can be moved around onscreen without problems will of course depend upon the speed of the user's machine, the video card they have, the size of the window, and probably a bunch of other factors. There is no single good answer to this. However, if you just drag a window around on your screen, you will notice that it can probably be moved very smoothly (unless your machine is very busy or very low on memory or something); I would not expect 25 times per second to produce a problem on a modern Mac. Not even close, in fact.
@RobNapier's points about Core Animation etc. are fine, but overstated I think; there is nothing inherently wrong with changing your UI using a timer or other periodic update if that is what you actually want to do. Core Animation is a toolkit for making some types of animation easier; using it is not required, and it is not suited to every problem. Similarly, if you want to make changes that are actually synced to the screen refresh then CVDisplayLink is useful, but it doesn't really sound like that's what you want to do.
For your purposes, your basic approach seems fine, although I would suggest adding an NSDate check in order to skip updates if the previous update was less than, say, 1/60th of a second ago. After all, the calculation takes 40 ms on your machine, but it might be much faster on some other machine; you want to throttle your drawing to a reasonable rate just to be a good citizen.
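The throttling idea is language-agnostic; here is a small sketch of it in C++ using std::chrono (in Cocoa you would make the same check with NSDate; UpdateThrottle and its names are invented for illustration):

#include <chrono>

using Clock = std::chrono::steady_clock;

// Skips updates that arrive less than minInterval after the last one.
struct UpdateThrottle {
    Clock::time_point last = Clock::now();
    std::chrono::milliseconds minInterval{16};   // roughly 1/60th of a second

    bool shouldUpdate() {
        auto now = Clock::now();
        if (now - last < minInterval)
            return false;   // too soon: drop this update
        last = now;
        return true;
    }
};

int main() {
    UpdateThrottle throttle;
    int performed = 0;
    for (int i = 0; i < 1000000; ++i)
        if (throttle.shouldUpdate()) ++performed;   // most calls are skipped
    return performed > 0 ? 0 : 1;
}

// In the calculation loop, the guard would look like:
//   if (positionChanged && throttle.shouldUpdate())
//       dispatch_async(dispatch_get_main_queue(), ^{ setWindowPos(); });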
So what is the problem, then? I suspect the issue might actually be your call [window setFrame:_newFrame display:YES]. If you look at Apple's docs for that method, they state "When YES the window sends a displayIfNeeded message down its view hierarchy, thus redrawing all views." Each time you call that method, then, you are not only moving your window (which I gather is your intention); you are redrawing all of the contents of the window, too, and that is slow. If you don't need to do that, then that is the overhead you need to eliminate. Call setFrameOrigin: or setFrameTopLeftPoint: instead (which make the semantics clear, that you are moving the window without resizing it or redrawing it), or perhaps just setFrame:display: passing NO instead of YES, and I'm guessing your performance problem will vanish.
If you do in fact need to redraw the window contents every time, then please edit the problem description to reflect that. In that case, the solution will have to involve profiling why your window drawing is slow, and figuring out ways to optimize that, which is an entirely different problem.
As you've discovered, you should never try to drive the UI from a tight loop. You should let the UI drive you. There are three primary tools for that.
For simple problems, AppKit is capable of moving windows around the screen. Just call [NSWindow setFrame:display:animate:]. You can override animationResizeTime: to modify the timing.
In many cases AppKit doesn't give enough control. In those cases, the best tool is almost always Core Animation. Using Core Animation, you tell the system where you want UI elements to wind up, and over what period and path, and let it do the work of getting them there. See the Core Animation Programming Guide for extensive documentation on how to use it. It focuses on animating CALayer, but the techniques are similar for NSWindow. You'll use [NSWindow setAnimations:] to add your animation. Look at the NSAnimatablePropertyContainer protocol (which NSWindow conforms to) for more information. For a simple sample project animating NSWindow, see Just Say No from CIMGF.
In a few cases, you really do need to update the screen manually at the screen update frequency. I must stress how rare this situation is. In almost all cases, Core Animation is the correct tool. But in those rare cases (some kinds of video, for instance), you can use a CVDisplayLink to handle this. It will call you each time the screen is about to refresh, giving you an opportunity to update your content to match.
Is there any general-purpose programming equivalent of low-level interrupts in microcontrollers/embedded systems?
I am vaguely familiar with the concept of events (mouse events and the like) which seems similar but not general enough.
Is there a mechanism (native or otherwise), specifically in C/C++, to handle custom events, that is, events whose triggering is decided by a user-defined condition, say, when the mouse pointer moves into a particular region or when a particular user action occurs?
To provide some context, I am working on an OpenCV based interactive project where I would like to trigger specific actions when the user points to a particular place on the screen.
It seems particularly wasteful to check whether the pointer is currently at such-and-such a location on the screen in each iteration of the video stream, and I would like to automate function calls based on a predefined condition.
Or is there any other (more efficient) mechanism by which I could improve this procedure?
Thanks.
There is no interrupt programming in C/C++ the way there is on microprocessors or microcontrollers.
If your screen is a touch screen, then try to get hold of your operating system's SDK or API to be notified when a touch happens. (The OS internally maintains an interrupt table for touch, keypad presses, and mouse movement; we can only program the logic we want to execute on such an event, nothing more than that.)
If it's not touch, then you have to monitor the position of the user with a sensor, usually a camera (a webcam). For that you have to check each frame from the camera to determine the position of the user.
EDIT:
What you mentioned is the correct way. It's better to check each frame, or else the response of your system will be sluggish. You can initialize a counter to 1, increase it with each frame, and reset it on reaching any desired value. This is almost equivalent to an infinite loop.
Or you can accept a key from the keyboard to break out of the loop (OpenCV has such functions, e.g. waitKey).
A little more advanced approach is to grab frames from the camera in a different thread from the main thread of your program. Then all you need to do is start and stop that thread.
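Here is a rough sketch of that threaded approach using OpenCV's C++ API with std::thread; the division of work between grabLoop and main is just one possible arrangement:

#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

std::atomic<bool> grabbing{true};

// Worker thread: grab frames so the main thread stays free for processing.
void grabLoop(cv::VideoCapture& cap, cv::Mat& shared, std::mutex& m) {
    cv::Mat frame;
    while (grabbing) {
        if (!cap.read(frame)) break;        // camera closed or read error
        std::lock_guard<std::mutex> lock(m);
        frame.copyTo(shared);               // hand the frame to the main thread
    }
}

int main() {
    cv::VideoCapture cap(0);                // default webcam
    if (!cap.isOpened()) return 1;
    cv::Mat shared;
    std::mutex m;
    std::thread worker(grabLoop, std::ref(cap), std::ref(shared), std::ref(m));

    while (true) {
        {
            std::lock_guard<std::mutex> lock(m);
            if (!shared.empty()) cv::imshow("camera", shared);
        }
        if (cv::waitKey(30) == 27) break;   // Esc breaks out of the loop
    }
    grabbing = false;
    worker.join();
}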
I'm fairly new to GUI programming and I'm trying to write a plotting lib in D to use with some otherwise console-based scientific apps. I'm using DFL as my GUI library.
Assume my plot form has a method called showPlot() that's supposed to display the plot on the screen. I would like to be able to have any thread in my app throw up a plot window and either block until the plot is closed or continue working, without the caller of showPlot() having to know what any other thread is doing with regard to plotting, or what plots were created in the past and may still be on screen. (The internals of showPlot() may, of course, have this knowledge.)
I'm still trying to wrap my head around how GUI libs typically work under the hood. It seems like you're supposed to only have one GUI thread, and one main form. I'd appreciate answers at the language/library-agnostic design pattern level in addition to language/library-specific ones.
Edit: To emphasize, this app has no GUI besides the plots that it throws up at interesting points in its execution. It's basically a console app plus a few plots. Therefore, there's no well-defined "main" form.
What you're going to be able to do will likely depend on how DFL works. Typically, in a GUI app, there's an event thread which handles all the events in your app, be it repaint events, button click events, mouse click events, or whatever. That thread calls the event handler which is registered to handle the event (frequently the widget which was clicked on or whatnot). The problem you run into is that if you try to do too much in those event handlers, you lock up the event thread, so the other events (including repaint events) don't get processed in a timely manner. Some GUI toolkits even specifically limit what you're allowed to do in an event handler. Some also limit certain types of operations to that specific thread (like doing anything, especially object creation, with actual GUI code such as the various widget or window classes that the toolkit is bound to have).
Typically, the way to handle this is to have the event thread either fire off separate threads in the event handlers and let those other threads actually handle the events, or you set some amount of state in the event handler, a separate, already-running thread is alerted to this change in state (possibly using the observer pattern), and it handles things appropriately based on that state. In either case, what the event handlers themselves do is generally fairly limited.
How GUIs work is generally very event-based. The program handles events from the user and the system and doesn't tend to do a lot without being signaled to do it. Frequently, GUI apps don't do anything until they're told to by an event (though there are plenty of cases where a background thread is doing some sort of work separate from the events). What you're trying to do doesn't sound particularly event-based, so that complicates things a bit.
My guess would be that you would need to have your app create a new window every time that you want to throw a new plot up. That window would likely be a child window of the main window which would be hidden since you obviously don't need it, and presumably DFL requires you to have a main window of some kind. There's a decent chance that each thread which wanted to create a window would have to tell the main GUI window to do that, but it depends on DFL really. It's also possible that DFL allows any thread to create new GUI elements (such as a new window).
Regardless, it would almost certainly be the main event thread which actually handles the window. The thread which wants to create a window and populate it would likely have to create the window (either directly or indirectly), and then update the state of the window by updating a set of shared variables so that the event thread could appropriately repaint the window. The most the original thread would likely do to actually repaint the window is send it a repaint event after updating the shared variables that hold the data necessary to paint the window. It wouldn't handle the painting itself.
As for the thread blocking, it would likely have to busy-wait until the window-closed event was received by the event thread and some shared variable was updated, or you could make it go to sleep until the event thread wakes it up after receiving the window-closed event. In the case where you didn't want it to block, it just wouldn't bother waiting for the window-closed event and would keep chugging along.
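For the sleep-until-woken variant, something like this condition-variable sketch is the usual shape (shown in C++ for illustration, since D offers the same primitives; PlotWaiter is a made-up name):

#include <condition_variable>
#include <mutex>

// Shared between the thread that called showPlot() and the GUI event thread.
struct PlotWaiter {
    std::mutex m;
    std::condition_variable cv;
    bool closed = false;

    // Called by the thread that showed the plot, in blocking mode.
    void waitUntilClosed() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return closed; });   // sleeps, no busy-wait
    }

    // Called by the GUI event thread when it handles window-closed.
    void notifyClosed() {
        { std::lock_guard<std::mutex> lock(m); closed = true; }
        cv.notify_all();   // wake the blocked caller
    }
};

int main() {
    PlotWaiter w;
    w.notifyClosed();     // pretend the event thread saw the close event
    w.waitUntilClosed();  // returns immediately since closed is already true
}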
In any case, hopefully that's enough information to put you in the right direction. Usually events drive everything, with everything hanging off the GUI, so to speak. But in your case, your app is using the GUI more like stdout. So, you might do better to first work on some small, fairly stupid GUI apps that are actual, normal, event-based GUI apps (like a Tic-Tac-Toe game or something) in order to get a better handle on how the GUI stuff works before trying to get it to work in a somewhat less standard manner like you are.
GUI frameworks typically have their own event loop and call the application code as callbacks in reaction to external events (button clicks, timers, redraws, ...). The main difference between a "typical" console application and a GUI application is that you give up control over when a function is called to the GUI framework.
Threads are typically used when the application code contains some long-running process that cannot be broken into smaller chunks (large computations, copying files, taking control over the world, ...). Then one thread is used to keep the GUI responsive, while the work is done in a separate thread. The main problem is keeping both threads correctly in sync, since most GUI toolkits cannot handle calls from more than one thread. When you only have small pieces of work that don't block the GUI for long (< 0.1 s), it is better to do without worker threads.
I also strongly suggest keeping the GUI code and the application logic separate from each other; having them mixed together is a maintenance nightmare.
Say you're building a Tetris game. Like any proper programmer, you have your view logic on one side and your business logic on the other side; probably a full-on MVC going on.
When the model sends its update(), the view redraws itself, as expected.
But then... if you wanted to add, say, an animation to vanish a line, how would you implement that in the view?
Make any assumptions you want, except that "everything is properly encapsulated".
Personally, I would draw the screen as often as possible, even if there was no update of the block positions. So I would have a loop somewhere with an "update" and a "render" part. Update passes the ball to the logic, which may or may not update positions and/or remove blocks. Render passes the ball to the graphics part, which draws the blocks where they should be.
Now if there are lines to erase, the logic knows and can mark those lines to be removed. I assume here that every piece consists of 4 single blocks and that each of these blocks is a single object. When a block has the "die" flag set, the render part can take some time to vanish the block (let's say, 500 ms to explode). After this time, the object may be disposed of and the blocks a line above fall down. Why 500 ms? Well, you should definitely use time-based movement, as this keeps the game speed the same on different computers.
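A bare-bones version of such a time-based update/render loop might look like this in C++; the update and render stubs stand in for the logic and graphics parts described above:

#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

// Stubs: a real game would move pieces, mark dying blocks, and draw here.
void update(double dt) { /* advance logic; mark completed lines to "die" */ }
void render(double dt) { /* draw blocks; step a 500 ms explode effect by dt */ }

int main() {
    auto last = Clock::now();
    for (int frame = 0; frame < 1000; ++frame) {   // bounded for the sketch
        auto now = Clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        update(dt);   // time-based movement: same speed on every machine
        render(dt);   // e.g. explodeProgress += dt / 0.5 for the 500 ms effect
    }
    std::puts("done");
}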
By the way, there are already so-called game engines which provide such an update-render loop, for example XNA if you go the .NET route. You may also code your own engine, but beware: it's not an easy task and it's very time-consuming. I did this once; don't expect it to be an engine like the Source engine ;-)
Most games execute a loop that constantly redraws the view of the game as fast as possible, rather than waiting for a change in the model state and then refreshing the view.
If you like the model-view pattern, then it might work well for the view to continue to draw some types of objects after they are removed from the model, fading them out over a few milliseconds.
Another approach would be to combine classic MVC with something like differential execution: the 'view' is a model of what is presented, but the drawing code compares the stream of events the 'view' creates with the stream from the previous rendering. So if in one stream there's a line, and in the next there isn't, the drawing code can animate the difference. This allows the drawing to be abstracted away from the view. Frequently the 'view' in MVC is a collection of widgets, rather than something which draws the display directly, so you end up with nested MVC hierarchies anyway: the application is MVC (data model, view objects, app controller), where the view object has a collection of widgets, each of which is MVC (widget state (e.g. button pressed), look and feel/toolkit binding, mapping of toolkit events -> widget state).
I've often wondered this myself.
My own thoughts have been along this line:
1) The view is given the state of the blocks (shape, yada-yada), but with extra "transitional" data.
2) The fact that a line must be removed is encoded in the state, NOT computed in the view.
3) The view now knows how to draw transitions:
- No change: state is the same for this particular block.
- Change from "falling" to "locked": state is "locked in" (by a dropping block).
- Change from "locked" to "removed": state is "removed" (by a line completion).
- Change from "falling" to "removed": state is "removed", but the old state was "falling".
It's interesting to think of a game as MVC. That's a perspective I've never taken (for some odd reason), but definitely an intriguing one that makes a lot of sense. Assuming you do implement your Tetris game with MVC, I think there are two things you might want to take into account regarding communication between your controller and your view: there is state, and there are events.
Your controller is obviously the central point of interaction for the user. When they issue keyboard commands, your controller will interpret them, and make the appropriate state adjustments. However, sometimes the game will enter a state that coincides with a particular event...such as filling a line with blocks that should now be removed.
Scoregraphic has given you a great foundation. Your view should operate on a fixed cycle to maintain consistent speed across computers. But in addition to updating the screen to render new state, it should also have a queue of events that it can perform animations in response to. In the case of filling lines in Tetris, your controller could push strongly typed event objects, derived from some kind of base event type, into the view's event queue; the view could then use them to perform the appropriate animated responses.
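A minimal sketch of such a typed event queue in C++ (ViewEvent, LineClearedEvent, and the queue itself are hypothetical names):

#include <memory>
#include <queue>

// Base event type; concrete events derive from it.
struct ViewEvent {
    virtual ~ViewEvent() {}
};

struct LineClearedEvent : ViewEvent {
    int row;
    explicit LineClearedEvent(int r) : row(r) {}
};

// The controller pushes events; the view drains the queue once per
// cycle and starts the matching animation for each event it finds.
std::queue<std::unique_ptr<ViewEvent>> viewEvents;

int main() {
    viewEvents.push(std::make_unique<LineClearedEvent>(17));   // controller side
    while (!viewEvents.empty()) {                              // view side
        std::unique_ptr<ViewEvent> e = std::move(viewEvents.front());
        viewEvents.pop();
        // dynamic_cast (or a visitor) selects the animation to play
    }
}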
I have a set of non-GUI objects which have a one-to-one relationship with GUI objects.
All events are routed through the top-level window.
Many (not all) events occurring on a GUI object result in calling a method on the associated non-GUI object.
Some methods on the non-GUI objects, when called, change the GUI objects.
One example would be some sort of game like Rogue with a modern GUI.
You have the area a player occupies in one turn (call it a region),
and you have the object (a button) associated with it in the GUI.
Keep in mind it's only an analogy (and not even the real problem), and no analogy is perfect.
The question is, how does one design this sort of thing?
Since the button class is from a third-party library, I cannot embed a reference to the non-GUI object in it, though I can embed a reference to the GUI object in the non-GUI object. So it looks like I will have to create a map from buttons to "regions" somewhere, but where do I put it? In the top-level window? In the top-level model?
Do I spin off some sort of interface class?
Suggestions?
It would help if you mentioned your platform and language, but generally it sounds like you are describing Model-View-Controller.
Your "GUI" object(s) are the View. This is where you keep all the rendering logic for your user interface. User interactions with the View are handled by the Controller.
The Controller is a thin layer of event handlers. User interactions call methods on the Controller, which then routes them to the Model.
Your "non-GUI" object(s) are the Model. This is the object that contains business logic and whose state is ultimately updated by clicking buttons on your GUI.
You mention "embedding" references between the objects. This is not necessary as long as events in your GUI can be routed by some mechanism to your Controller. This design pattern is useful because it separates your UI logic from your business logic. You can "snap on" a new Views very easily because there is very little event wiring between the View and the controller.
The Wikipedia article has more information and links to implementation examples.
Waste a little time looking at Falcon's Eye (though it is NetHack rather than Rogue). There's a long history of skinning roguelike games (or command-line apps in general), which isn't quite classic MVC: the game already has a full UI, and instead you're adding a decorator to that UI with either a direct translation or another metaphor (such as GParted, the GNOME partition editor, which allows construction of a sequence of partition-editing commands by direct manipulation).