Seems like I'm confused by the following:
Screen
ApplicationListener
Game
I've been trying hard but still can't understand the difference between these terms.
What I'm trying to do:
Create a class which extends Game (or implements ApplicationListener or Screen); it will be the area where all the action happens (player movement, spawning enemies, etc.). Then create a class that will probably implement Screen; score, multipliers and some useful text will go there.
Then I want to create a stage in another class that includes both of my classes (a thin line with the score at the top of the screen and the game area below; in other words, I want to separate the game and the text).
Hope you get the idea. So how should I implement the classes for the game and the score, plus a main class which includes them? (Screen, ApplicationListener, Game)
Or is there an easier way to do it?
The ApplicationListener is the entry point of your LibGDX application. LibGDX has several backends to support multiple platforms. All those backends get an ApplicationListener which will be one of your self-made classes.
Game actually implements ApplicationListener which means that you could also supply a Game instead of an ApplicationListener. But a Game has more functionality, since it is a class and not only an interface. It provides you with a basic approach to split your game in several logical parts, which will keep your code more clean. Those parts are called Screens. A Game will always have one Screen which will count as the active one and all the methods from Game are actually just being forwarded to the currently active Screen.
A Screen is one more of your own classes which you implement. It could be a SplashScreen, a LoadingScreen or a GameplayScreen... maybe a MainMenuScreen and an OptionsScreen. What about a HighscoresScreen? You get the idea here. Whenever you want to switch the Screen you will use Game.setScreen() to do so. This in turn will call Screen.hide() on the current Screen and Screen.show() on the next Screen.
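To make the switching concrete, here is a minimal sketch using the standard LibGDX Game/Screen API (the class names MyGame, MainMenuScreen and GameplayScreen are made up for illustration):

import com.badlogic.gdx.Game;
import com.badlogic.gdx.ScreenAdapter;

public class MyGame extends Game {
    @Override
    public void create() {
        // Game forwards all lifecycle calls to whichever Screen is active.
        setScreen(new MainMenuScreen(this));
    }
}

class MainMenuScreen extends ScreenAdapter {
    private final MyGame game;

    MainMenuScreen(MyGame game) {
        this.game = game;
    }

    @Override
    public void render(float delta) {
        // Draw the menu here; when the player starts a game:
        // game.setScreen(new GameplayScreen(game)); // calls hide() here, show() there
    }
}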
The ApplicationListener interface gives you all the methods which are called by the main game loop:
create() is called when your App is started for the first time.
dispose() is called when your App is getting closed.
pause() is called when a call is incoming or you press the home button.
resume() is called when you come back to the app after pause() has been called.
resize() is called on desktop when you resize the window.
render() is called once every iteration of the game loop (at most 60 times per second).
Extending Game does more or less the same, but it already has some default implementations. Those default implementations call these functions on the current screen. So if you start your app, create() is called on your Game class, and this calls create() for your current screen.
A Screen represents the things which have to be rendered on screen. Most of the time there are different Screens for different logic: for example a MainMenuScreen, a GameScreen and an OptionsScreen. Those Screens can be set by calling setScreen() on your Game class. This automatically calls hide() on the current Screen and show() on the new Screen.
I hope this helps you understand LibGDX.
Most Unity tutorials suggest using Mouse events within the Update function, like this:
function Update () {
    // true every frame while the right mouse button is held down
    if (UnityEngine.Input.GetMouseButton(1)) {
    }
}
This strikes me as really inefficient though, similar to using onEnterFrame in AS or setInterval in JS to power the whole application - I'd really prefer to use an event-based system.
The OnMouseDown() method is useful, but it only fires when the mouse-down happens on the object, not anywhere in the scene.
So here's the question: Is there a MouseEvent in Unity for detecting if the mouse button is down globally, or is the Update solution the recommended option?
This strikes me as really inefficient though, similar to using onEnterFrame in AS or setInterval in JS to power the whole application - I'd really prefer to use an event-based system.
As already pointed out in the comments, this isn't necessarily less efficient. Every event-based system is probably using a polling routine like that behind the scenes, updated at a given frequency.
In many game engines/frameworks you are going to find a polling-based approach to input handling. I think this is related to the fact that input update frequency is directly correlated with the frame rate/update loop frequency. In fact, it doesn't make much sense to listen for input at a higher or lower frequency than your game loop's.
So here's the question: Is there a MouseEvent in Unity for detecting if the mouse button is down globally, or is the Update solution the recommended option?
No, there isn't. By the way, if you want, you can wrap mouse input detection inside a single class and expose events from it that other classes can register to.
Something like:
using System;
using UnityEngine;

public class MouseInputHandler : MonoBehaviour
{
    // Other classes subscribe to these instead of polling Input themselves.
    public event Action<Vector2> MousePressed;
    public event Action<Vector2> MouseMoved;
    ...

    void Update()
    {
        // Poll once per frame and raise events for any registered listeners.
        if (Input.GetMouseButton(0))
        {
            if (MousePressed != null)
                MousePressed(Input.mousePosition);
            ...
        }
    }
}
As stated, you can use it without major concerns; Unity will 'make its magic' internally and keep the polling code cheap in terms of processing power. That's the beauty of a modern game engine, after all. You normally shouldn't have to hack your way around a feature as common as mouse click detection.
However, if you don't want to use the main Update(), you can use a coroutine if you feel more comfortable with that; just bear in mind that Unity coroutines are not multi-threaded either, so in the end everything has to wait anyway.
New to GWT here...
I'm using the UIBinder approach to layout an app, somewhat in the style of the GWT Mail sample. The app starts with a DockLayoutPanel added to RootLayoutPanel within the onModuleLoad() method. The DockLayoutPanel has a static North and a static South, using a custom center widget defined like:
public class BigLayoutWidget extends ResizeComposite {
...
}
This custom widget is laid out using BigLayoutWidget.ui.xml, which in turn consists of a TabLayoutPanel (3 tabs), the first of which contains a SplitLayoutPanel divided into WEST (Shortcuts.ui.xml) and CENTER (Workpanel.ui.xml). Shortcuts, in turn, consists of a StackLayoutPanel with 3 stacks, each defined in its own ui.xml file.
I want click events within one of Shortcuts' individual stacks to change the contents of Workpanel, but so far I've only been able to manipulate widgets within the same class. Using the simplest case, I can't get a button click within Shortcuts to clear the contents of Workpanel or make Workpanel non-visible.
A few questions...
Is ResizeComposite the right type of class to extend for this? I'm following the approach from the Mail example for TopPanel, MailList, etc, so maybe not?
How can I make these clicks manipulate the contents of panels in which they do NOT reside?
Are listeners no longer recommended for handling events? I thought I saw somewhere during compilation that ClickHandlers are used these days, and the click listener "subscription" approach is being deprecated (I'm mostly using @UiHandler annotations).
Is there an easy way to get a handle to specific elements in my app/page? (Applying the "ID" field in the UI.XML file generates a deprecation warning.) I'm looking for something like document.getElementById() that gets me a handle to specific elements. If that exists, how do I set the handle/ID on the element, and how can I then call that element by name/ID?
Note that I have the layout itself pretty well nailed; it's the interaction from one ui.xml modularized panel to the next that I can't quite get.
Thanks in advance.
If you don't have a use for resizing events, then just use Composite.
What you want is what the GWT devs called message bus (implemented as HandlerManager). You can get a nice explanation in the widely discussed (for example, on the GWT Google Group, just search for 'mvp') presentation by Ray Ryan from Google I/O 2009 which can be found here. Basically, you "broadcast" an event on that message bus and then a Widget listening for that event gets the message and does its stuff.
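Something like this, as a minimal sketch (ShortcutClickedEvent and its Handler are made-up names following the standard GwtEvent pattern; Shortcuts and Workpanel are the classes from the question):

import com.google.gwt.event.shared.EventHandler;
import com.google.gwt.event.shared.GwtEvent;
import com.google.gwt.event.shared.HandlerManager;

// Hypothetical broadcast event: a shortcut was clicked somewhere.
class ShortcutClickedEvent extends GwtEvent<ShortcutClickedEvent.Handler> {
    public interface Handler extends EventHandler {
        void onShortcutClicked(ShortcutClickedEvent event);
    }

    public static final Type<Handler> TYPE = new Type<Handler>();

    private final String shortcutId;

    public ShortcutClickedEvent(String shortcutId) {
        this.shortcutId = shortcutId;
    }

    public String getShortcutId() {
        return shortcutId;
    }

    @Override
    public Type<Handler> getAssociatedType() {
        return TYPE;
    }

    @Override
    protected void dispatch(Handler handler) {
        handler.onShortcutClicked(this);
    }
}

class Wiring {
    void example() {
        // One bus shared by Shortcuts and Workpanel, created at startup.
        HandlerManager eventBus = new HandlerManager(null);

        // Workpanel registers a listener...
        eventBus.addHandler(ShortcutClickedEvent.TYPE, new ShortcutClickedEvent.Handler() {
            public void onShortcutClicked(ShortcutClickedEvent event) {
                // clear or replace the Workpanel contents here
            }
        });

        // ...and Shortcuts broadcasts from its @UiHandler click method:
        eventBus.fireEvent(new ShortcutClickedEvent("someShortcutId"));
    }
}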
Yep, *Handlers are the current way of handling events - the usage is basically the same so migration shouldn't be a problem (docs). They changed it so that they could introduce custom fields in the future, without breaking existing code.
If you've set an id for any DOM element (for Widgets I use someWidget.getElement().setId(id), usually in combination with DOM.createUniqueId()) you can get it via RootPanel.get(String id). You'll then get a RootPanel which you'll have to cast to the right Widget class - as you can see, it can get a little 'hackish' (what if you change the type of the Widget with that id? Exceptions, or worse), so I'd recommend sticking with MVP (see the first point) and communicating via the message bus. Remember, however, that sometimes it's also good to aggregate - not everything has to be handled via the message bus :)
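As a tiny sketch of that id round trip (IdLookupExample is a made-up name; DOM.createUniqueId() and RootPanel.get() are the real calls):

import com.google.gwt.user.client.DOM;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

class IdLookupExample {
    // Tag a widget's element with a unique id so it can be found later.
    static String tag(Widget someWidget) {
        String id = DOM.createUniqueId();
        someWidget.getElement().setId(id);
        return id;
    }

    // Fetch it back - note you only get a RootPanel wrapper, hence the
    // casting problem described above.
    static RootPanel find(String id) {
        return RootPanel.get(id);
    }
}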
Bottom line is I'd recommend embracing MVP (and History) as soon as possible - it makes GWT development much easier and less messy :) (I know from experience that, with time, the code starts to look like a nightmare if you don't divide it into presentation, view, etc.)
Say you're building a Tetris game. As any proper programmer, you have your view logic on one side, and your business logic on the other side; probably a full-on MVC going on.
When the model sends its update(), the view redraws itself, as expected.
But then... if you wanted to add, say, an animation to vanish a line, how would you implement that in the view?
Make any assumptions you want, excepting that "Everything is properly encapsulated".
Personally, I would draw the screen as often as possible, even if there was no update of the block positions. So I would have a loop somewhere with an "update" and a "render" part. Update passes the ball to the logic, which may or may not update positions and/or remove blocks. Render passes the ball to the graphics part, which draws the blocks where they should be.
Now if there are lines to erase, the logic knows and can mark those lines for removal. I assume here that every piece consists of 4 single blocks and each of these blocks is a single object. Now when a block has its "die" flag set, the render part can take some time to vanish the block (let's say, 500ms to explode). After this time, the object may be disposed and the blocks a line above fall down. Why 500ms? Well, you should definitely use time-based movement, as this keeps the game speed the same on different computers.
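A rough sketch of that idea in Java (the names and the 500ms constant are illustrative, assuming the loop passes in the elapsed frame time):

class Block {
    static final double EXPLODE_MS = 500;

    boolean dying;     // set by the logic when this block's line is complete
    double dyingForMs; // how long the vanish animation has been playing

    // Called from the "update" half of the loop with the elapsed frame time.
    // Returns true once the block can be disposed.
    boolean update(double elapsedMs) {
        if (dying) {
            dyingForMs += elapsedMs;
            return dyingForMs >= EXPLODE_MS;
        }
        return false;
    }

    // Called from the "render" half: fade the block out while it dies.
    void render() {
        double alpha = dying ? 1.0 - dyingForMs / EXPLODE_MS : 1.0;
        // draw the block with this alpha value ...
    }
}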
Btw, there are already so-called game engines which provide such an update-render loop. For example XNA, if you go the .NET route. You may also code your own engine, but beware: it's not an easy task and it's very time consuming. I did this once; don't expect it to be an engine like the Source Engine ;-)
Most games execute a loop that constantly redraws the view of the game as fast as possible, rather than waiting for a change in the model state and then refreshing the view.
If you like the model view pattern, then it might work well for the view to continue to draw some types of objects after they are removed from the model, fading them out over a few milliseconds.
Another approach would be to combine classic MVC with something like differential execution - the 'view' is a model of what is presented, but the drawing code compares the stream of events the 'view' creates with the stream from the previous rendering. So if one stream has a line and the next doesn't, the drawing code can animate the difference. This allows the drawing to be abstracted away from the view. Frequently the 'view' in MVC is a collection of widgets rather than something which draws the display directly, so you end up with nested MVC hierarchies anyway: the application is MVC (data model, view objects, app controller), where the view object has a collection of widgets, each of which is MVC (widget state (e.g. button pressed), look and feel/toolkit binding, mapping of toolkit events -> widget state).
I've often wondered this myself.
My own thoughts have been along this line:
1) The view is given the state of the blocks (shape, yada-yada), but with extra "transitional" data:
2) The fact that a line must be removed is encoded in the state, NOT computed in the view.
3) The view knows how to draw transitions now (see the sketch after this list):
No change: state is the same for this particular block
Change from "falling" to "locked": state is "locked in" (by a dropping block)
Change from "locked" to "remove": state is "removed" (by a line completion)
Change from "falling" to "remove": state is "removed", but old state was "falling"
It's interesting to think of a game as an MVC. That's a perspective I've never taken (for some odd reason), but definitely an intriguing one that makes a lot of sense. Assuming you do implement your Tetris game with an MVC, I think there are two things you might want to take into account in regard to communication between your controller and your view: there is state, and there are events.
Your controller is obviously the central point of interaction for the user. When they issue keyboard commands, your controller will interpret them, and make the appropriate state adjustments. However, sometimes the game will enter a state that coincides with a particular event...such as filling a line with blocks that should now be removed.
Scoregraphic has given you a great foundation. Your view should operate on a fixed cycle to maintain consistent speed across computers. But in addition to updating the screen to render new state, it should also have a queue of events that it can perform animations in response to. In the case of filling lines in Tetris, your controller could push strongly typed event objects, derived from some base event type, into the view's event queue, which the view could then use to perform the appropriate animated responses.
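Something along these lines, as a sketch (the event and view classes are made up):

import java.util.ArrayDeque;
import java.util.Queue;

// Base type for strongly typed controller-to-view events.
abstract class ViewEvent { }

class LinesClearedEvent extends ViewEvent {
    final int[] rows;
    LinesClearedEvent(int[] rows) { this.rows = rows; }
}

class GameView {
    private final Queue<ViewEvent> events = new ArrayDeque<ViewEvent>();

    // The controller pushes events here as they happen.
    void post(ViewEvent event) { events.add(event); }

    // Called once per fixed render cycle.
    void renderFrame() {
        ViewEvent event;
        while ((event = events.poll()) != null) {
            if (event instanceof LinesClearedEvent) {
                // kick off the line-clear animation for those rows
            }
        }
        // ... then draw the current state as usual
    }
}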
I have a set of non-GUI objects which have a one-to-one relationship with GUI objects.
All events are routed through the top level window.
Many (not all) events occurring on the GUI object result in calling a method on the associated object.
Some methods on the non-GUI objects, when called, change the GUI objects.
One example would be some sort of game like Rogue with a modern GUI.
You have the area a player occupies in one turn (call it a region), and you have the object (a button) associated with it on the GUI.
Keep in mind it's only an analogy ( and not even the real problem ) and no analogy is perfect.
The question is, how does one design this sort of thing?
Since the button class is from a third-party library, I cannot embed a reference to the non-GUI object in it, though I can embed a reference to the GUI object in the non-GUI object. So it looks like I will have to create a map from buttons to "regions" somewhere, but where do I put it? In the top-level window? In the top-level model?
Do I spin off some sort of interface class?
Suggestions?
It would help if you mentioned your platform and language, but generally it sounds like you are describing Model-View-Controller.
Your "GUI" object(s) are the View. This is where you keep all the rendering logic for your user interface. User interactions with the View are handled by the Controller.
The Controller is a thin layer of event handlers. User interaction calls methods on the Controller, which then routes them to the Model.
Your "non-GUI" object(s) are the Model. This is the object that contains business logic and whose state is ultimately updated by clicking buttons on your GUI.
You mention "embedding" references between the objects. This is not necessary as long as events in your GUI can be routed by some mechanism to your Controller. This design pattern is useful because it separates your UI logic from your business logic. You can "snap on" a new Views very easily because there is very little event wiring between the View and the controller.
The Wikipedia article has more information and links to implementation examples.
Waste a little time looking at Falcon's Eye (though it is Nethack rather than Rogue). There's a long history of skinning roguelike games (or command-line apps in general), which isn't quite classic MVC - the game already has a full UI, and instead you're adding a decorator to that UI with either a direct translation or another metaphor (such as gparted, the GNOME partition editor, which allows construction of a sequence of partition-editing commands by direct manipulation).
I've been developing Web applications for a while now and have dipped my toe into GUI and Game application development.
In a web application (PHP for me), a request is made to a file; that file includes all the necessary files to process the info into memory, and then the flow is from top to bottom for each request (mainly).
I know that for games the action happens within the game loop, but how are all the different elements of a game layered into that single loop (menu system, GUI, loading of assets and the 3D world), with the constant loading and unloading of certain things?
Same for GUI programs; I believe there's an "application loop" of some sort.
Are most of the items loaded into memory up front and then accessed, or are the items linked in and loaded into memory when needed?
What helped me develop web applications faster was understanding the flow of the program; it doesn't have to be detailed, just the general idea or pseudocode.
There's almost always a loop in all of these - but it's not something you would tend to think about during most of your development.
If you take a step back, your web applications are based around a loop - the Web Server's accept() loop:
while(listening) {
    get a socket connection;
    handle it;
}
.. but as a Web developer, you're shielded from that, and write 'event driven' code -- 'when someone requests this URL, do this'.
GUIs are also event driven, and the events are also detected by a loop somewhere:
while(running) {
    get mouse/keyboard/whatever event
    handle it
}
But a GUI developer doesn't need to think about the loop much. They write 'when a mouse click occurs here, do this'.
Games, again the same. Someone has to write a loop:
while(game is in progress) {
    invoke every game object's 'move one frame' method;
    poll for an input event;
}
... while other code is written in a more event-driven style: 'when a bullet object coincides with this object, trigger an explosion event'.
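To make the game version concrete, here is a minimal runnable sketch in Java (GameObject, pollInput() and render() are illustrative placeholders, not from any particular engine):

import java.util.ArrayList;
import java.util.List;

public class GameLoop {
    interface GameObject {
        void update(double dtSeconds); // "move one frame"
    }

    private final List<GameObject> objects = new ArrayList<GameObject>();
    private volatile boolean running = true;

    public void run() {
        long last = System.nanoTime();
        while (running) {
            long now = System.nanoTime();
            double dt = (now - last) / 1_000_000_000.0; // elapsed seconds
            last = now;

            pollInput();                  // turn raw input into events
            for (GameObject o : objects) {
                o.update(dt);             // advance game state, time-based
            }
            render();                     // draw the current state
        }
    }

    private void pollInput() { /* read keyboard/mouse state here */ }
    private void render()    { /* draw every object here */ }
}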
For applications, and to a lesser extent games, the software is event driven. The user does "something" with the keyboard or mouse, and that event is sent to the rest of the software.
In games the game loop is important because it is focused on processing the screen and the game state, with many games needing real-time performance. With modern 3D graphics APIs, much of the screen processing can be offloaded to the GPU. However, the game state is tracked by the main loop. Much of a team's effort on a game goes into making the processing of that loop very smooth.
For applications, heavy processing is typically spawned onto a separate thread. It is a complex subject because of the issues surrounding two things trying to access the same data. There are whole books on the subject.
For applications, the sequence is:
The user does X; X and associated information (like X,Y coordinates) are sent to the UI_Controller.
The UI decides which command to execute.
The Command is executed.
The model/data is modified.
The Command tells the UI_Controller to update various areas of the UI.
The UI_Controller redraws the UI.
The Command returns.
The Application waits for the next event.
There are several variants of this. The model can allow listeners to wait for changes in the data. When the data changes, the listeners execute and redraw the UI.
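A bare-bones sketch of that listener variant (all names illustrative):

import java.util.ArrayList;
import java.util.List;

interface ModelListener {
    void modelChanged(); // typically redraws part of the UI
}

class Model {
    private final List<ModelListener> listeners = new ArrayList<ModelListener>();
    private int data;

    void addListener(ModelListener listener) {
        listeners.add(listener);
    }

    void setData(int value) {
        data = value;
        // When the data changes, every registered listener runs.
        for (ModelListener listener : listeners) {
            listener.modelChanged();
        }
    }

    int getData() { return data; }
}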
As far as game programming goes I was merely a hobbyist; however, this is what I usually did:
I had an object that represented a very generic concept of a "Scene" in the game. All the different major sections of the game derived from this Scene object. A scene could really be anything, depending on what type of game it is. Anyway, each specific scene derived from the Scene object had a procedure to load all of the necessary elements for that scene.
When the game was to change scenes, the pointer to the active scene was set to a new scene, which would then load all of its needed objects.
The generic Scene object had virtual functions such as Load, Draw, and Logic that were called at particular times in the game loop from the active scene pointer. Every specific scene had its own ways of implementing these methods.
I don't know if that's how it's supposed to be done or not, but it was a very easy way for me to control the flow of things. The Scene concept also made it easy to store multiple scenes as collections. With multiple scene pointers stored in a stack of sorts, scenes could be kept in reserve and maintain their full state when returned to, or even be dimmed but continue to draw while the active scene drew over them as an overlay of sorts.
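Roughly, the structure looked like this (a Java sketch; all names are illustrative and the real thing had more plumbing):

import java.util.ArrayDeque;
import java.util.Deque;

// Generic concept of a "Scene"; every major section of the game derives from it.
abstract class Scene {
    abstract void load();  // load everything this scene needs
    abstract void logic(); // update this scene's state
    abstract void draw();  // render this scene
}

class SceneManager {
    // A stack, so scenes can be kept in reserve and returned to intact.
    private final Deque<Scene> scenes = new ArrayDeque<Scene>();

    void push(Scene scene) {
        scene.load();
        scenes.push(scene);
    }

    void pop() {
        scenes.pop();
    }

    // Called from the game loop; the active scene is the top of the stack.
    void frame() {
        Scene active = scenes.peek();
        if (active != null) {
            active.logic();
            active.draw();
        }
    }
}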
So anyway, if you do it like that it's not precisely like a web page but I guess if you think about it the right way it's similar enough.