Most Unity tutorials suggest handling mouse input within the Update() function, like this:
function Update () {
    if (UnityEngine.Input.GetMouseButton(1)) {
        // runs every frame while the right mouse button is held down
    }
}
This strikes me as really inefficient though, similar to using onEnterFrame in AS or setInterval in JS to power the whole application - I'd really prefer to use an event-based system.
The OnMouseDown() method is useful, but it only fires when the mouse press is on the object, not anywhere in the scene.
So here's the question: Is there a MouseEvent in Unity for detecting if the mouse button is down globally, or is the Update solution the recommended option?
This strikes me as really inefficient though, similar to using onEnterFrame in AS or setInterval in JS to power the whole application - I'd really prefer to use an event-based system.
As already pointed out in the comments, this isn't necessarily less efficient. Every event-based system is probably using a polling routine like that behind the scenes, updated at a given frequency.
In many game engines/frameworks you are going to find a polling-based approach to input handling. I think this is because the input update frequency is directly tied to the frame rate/update loop frequency; it doesn't make much sense to listen for input at a higher or lower frequency than your game loop.
So here's the question: Is there a MouseEvent in Unity for detecting if the mouse button is down globally, or is the Update solution the recommended option?
No, there isn't. By the way, if you want, you can wrap mouse input detection inside a single class and expose events from there that other classes can subscribe to.
Something like:
using System;
using UnityEngine;

public class MouseInputHandler : MonoBehaviour
{
    public event Action<Vector2> MousePressed;
    public event Action<Vector2> MouseMoved;
    // ...

    void Update()
    {
        // GetMouseButton fires every frame while the button is held;
        // use GetMouseButtonDown if you want a single press event instead.
        if (Input.GetMouseButton(0))
        {
            // null-conditional invoke, in case no one has subscribed yet
            MousePressed?.Invoke(Input.mousePosition);
            // ...
        }
    }
}
As stated, you can use it without major concerns; Unity does its magic internally to keep that polling efficient. That's the beauty of a modern game engine, after all. You normally shouldn't have to hack your way around a feature as common as mouse click detection.
However, if you don't want to use the main Update(), you can use a coroutine if you feel more comfortable with that; just bear in mind that Unity coroutines are not multi-threaded either, so in the end everything has to wait anyway.
Related
I have already tried multiple ways of rendering simple animations on WinAPI windows using wingdi, and realized that it is fairly slow even for primitive animations, and inconsistent if I do it by spamming SendMessageW when there are no events for the window. So I thought that instead of rendering whenever I can, I should schedule it at a fixed interval, improving the framerate and quality of the animation. I am now wondering whether this can be done by registering some callback with Windows, or whether there is no such functionality and I have to spin up another thread to act as a timer. Is that generally a better idea? What is the commonly accepted way to do things like this, when event-triggered redraws just won't cut it?
Concerning the scheduled animations, there are SetTimer() and KillTimer(), if I remember their names correctly. In any case, "timer" is what you're looking for.
Concerning the painting in the timer callbacks, don't. Instead, adjust animation parameters (positions, colours etc) and trigger a redraw using InvalidateRect(). This will in turn invoke the regular drawing event handler. The difference is that this will not waste any CPU if your window is hidden or minimized. Also, in any case, when the window is un-hidden, the drawing event handler has to be able to draw the right window content anyway.
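A minimal sketch of that pattern, assuming the usual window-registration boilerplate around it (the timer ID, interval, and animation state are illustrative):

#include <windows.h>

static int g_xPos = 0; // illustrative animation state

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        SetTimer(hwnd, 1, 16, NULL);       // ~60 ticks per second, timer ID 1
        return 0;
    case WM_TIMER:
        g_xPos += 2;                       // adjust animation parameters only...
        InvalidateRect(hwnd, NULL, TRUE);  // ...and request a repaint; no drawing here
        return 0;
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        Ellipse(hdc, g_xPos, 50, g_xPos + 40, 90); // draw the current state
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        KillTimer(hwnd, 1);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

Note that if the window is hidden, WM_PAINT simply never arrives, so no drawing work is wasted.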
There is a derived version of this where you update an image in a memory DC and only blit to the window in the drawing event handler. It's unclear whether that is necessary in your case though. It's only when drawing takes an extended amount of time and you want to keep your UI responsive.
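That variant could look roughly like this, replacing the WM_PAINT case in the sketch above (simplified; in practice you would cache the bitmap rather than recreate it every paint):

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    RECT rc;
    GetClientRect(hwnd, &rc);

    // Draw everything into an off-screen memory DC first...
    HDC memDC = CreateCompatibleDC(hdc);
    HBITMAP bmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HBITMAP old = (HBITMAP)SelectObject(memDC, bmp);
    FillRect(memDC, &rc, (HBRUSH)(COLOR_WINDOW + 1));
    Ellipse(memDC, g_xPos, 50, g_xPos + 40, 90);

    // ...then blit the finished image to the window in one step.
    BitBlt(hdc, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, old);
    DeleteObject(bmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
    return 0;
}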
Well, as my luck usually goes, I found this article on MSDN, and it has a very nice example. Putting it here in case someone has the same question.
I have a Cocoa application window (NSWindow) whose position on the screen should be updated frequently (depending on some calculation). As noted in the documentation, UI changes should be made on the main thread:
void calculationThread()
{
    while (true)
    {
        calculatePosition();
        if (positionChanged)
        {
            // hop over to the main thread for the actual UI update
            dispatch_async(dispatch_get_main_queue(), ^{ setWindowPos(); });
        }
    }
}

void setWindowPos()
{
    [window setFrame:_newFrame display:YES];
}
Now the problem I have is that the window movement is very slow and delayed. After doing some profiling I see that the calculation process takes about 40 ms, meaning that I'm queuing up UI updates about 25 times a second.
I've read here that this might be faster than the updates can be processed, and that a timer should be used to fire the changes every tenth of a second or so. But wouldn't that be too slow for the human eye? (I mean, in that case the movement wouldn't be delayed, but it would be laggy, causing pretty much the same effect.)
I would appreciate some knowledge sharing on this. Actually, my two main questions are:
Are 25-30 UI updates per second really too many?
If yes, what is the recommended frequency for UI changes?
The frequency at which a window can be moved around onscreen without problems will of course depend upon the speed of the user's machine, the video card they have, the size of the window, and probably a bunch of other factors. There is no single good answer to this. However, if you just drag a window around on your screen, you will notice that it can probably be moved very smoothly (unless your machine is very busy or very low on memory or something); I would not expect 25 times per second to produce a problem on a modern Mac. Not even close, in fact.
@RobNapier's points about Core Animation etc. are fine, but overstated I think; there is nothing inherently wrong with changing your UI using a timer or other periodic update if that is what you actually want to do. Core Animation is a toolkit for making some types of animation easier; using it is not required, and it is not suited to every problem. Similarly, if you want to make changes that are actually synced to the screen refresh then CVDisplayLink is useful, but it doesn't really sound like that's what you want to do.
For your purposes, your basic approach seems fine, although I would suggest adding an NSDate check in order to skip updates if the previous update was less than, say, 1/60th of a second ago. After all, the calculation takes 40 ms on your machine, but it might be much faster on some other machine; you want to throttle your updates to a reasonable rate just to be a good citizen.
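The throttling idea, sketched in C++ for illustration (a Cocoa version would compare NSDate timestamps the same way; the function name and interval are illustrative):

#include <chrono>

// Returns true at most once per minInterval; callers skip the update otherwise.
bool shouldUpdate()
{
    using clock = std::chrono::steady_clock;
    static auto lastUpdate = clock::now();
    const auto minInterval = std::chrono::milliseconds(16); // ~1/60th of a second

    const auto now = clock::now();
    if (now - lastUpdate < minInterval)
        return false; // too soon since the last update; drop this one
    lastUpdate = now;
    return true;
}

In the calculation loop above, you would then only dispatch the window move when shouldUpdate() returns true.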
So what is the problem, then? I suspect the issue might actually be your call [window setFrame:_newFrame display:YES]. If you look at Apple's docs for that method, they state "When YES the window sends a displayIfNeeded message down its view hierarchy, thus redrawing all views." Each time you call that method, then, you are not only moving your window (which I gather is your intention); you are redrawing all of the contents of the window, too, and that is slow. If you don't need to do that, then that is the overhead you need to eliminate. Call setFrameOrigin: or setFrameTopLeftPoint: instead (which make the semantics clear, that you are moving the window without resizing it or redrawing it), or perhaps just setFrame:display: passing NO instead of YES, and I'm guessing your performance problem will vanish.
If you do in fact need to redraw the window contents every time, then please edit the problem description to reflect that. In that case, the solution will have to involve profiling why your window drawing is slow, and figuring out ways to optimize that, which is an entirely different problem.
As you've discovered, you should never try to drive the UI from a tight loop. You should let the UI drive you. There are three primary tools for that.
For simple problems, AppKit is capable of moving windows around the screen. Just call [NSWindow setFrame:display:animate:]. You can override animationResizeTime: to modify the timing.
In many cases AppKit doesn't give enough control. In those cases, the best tool is almost always Core Animation. You should tell the system, using Core Animation, where you want UI elements to wind up, and over what period and path, and let it do the work of getting them there. See the Core Animation Programming Guide for extensive documentation on how to use it. It focuses on animating CALayer, but the techniques are similar for NSWindow. You'll use [NSWindow setAnimations:] to add your animation. Look at the NSAnimatablePropertyContainer protocol (which NSWindow conforms to) for more information. For a simple sample project of animating NSWindow, see Just Say No from CIMGF.
In a few cases, you really do need to update the screen manually at the screen refresh frequency. I must stress how rare this situation is; in almost all cases, Core Animation is the correct tool. But in those rare cases (some kinds of video, for instance), you can use a CVDisplayLink to handle this. It will call you each time the screen would like to refresh, giving you an opportunity to update your content to match.
I've written a few relatively trivial GUIs in WxWidgets and Qt, and I'm persistently unsure how to architecturally handle the following situations:
You catch a mouse event to do something to a graphical object in your GUI
What you do with the object depends on which modifier keys the user is holding down
At the start I usually do something like the following:
void MyClass::mouseMoveEvent(QGraphicsSceneMouseEvent* event)
{
    if (event->modifiers() & Qt::AltModifier) {
        // do something
    } else if (event->modifiers() & Qt::ControlModifier) {
        // do something else
    } else {
        // do yet another thing
    }
}
// Repeat ad nauseam for other mouse click/move events...
Eventually, with similar if/else/switch code in lots of mousePressEvent and mouseReleaseEvent handlers, this gets a bit unwieldy, so I try to encapsulate some of the repetition by putting the object into different "modes" depending on which modifiers are down. Nonetheless, I still feel I'm missing some nice elegant solution.
I've tried to look at the code of various open-source tools but I've not found anything tangible (erm, simple) enough to point me in a different direction.
Some tools (say, the GIMP) seem to have so many rich and varied tool- and modifier-dependent behaviours that I feel there's got to be a nice way of architecting this pattern.
Event handling in such GUI toolkits decides what to do according to an event and an event handler you provide. What you need is a way to decide what to do according to an event and a modifier. So, from all of your standard toolkit event handlers, you can delegate to a special event-processing object, passing it the event and the modifier. What you have to implement is that event-processing object, which will invoke the right behaviour (an event + modifier handler) according to the event and the modifier. This is what I would call the Chain of Responsibility design pattern.
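A minimal sketch of such a dispatcher, assuming Qt types (the class and method names are illustrative, not from any toolkit):

#include <QGraphicsSceneMouseEvent>
#include <functional>
#include <map>

// Maps an exact modifier combination to the behaviour that should run for it.
class MouseDispatcher {
public:
    using Handler = std::function<void(QGraphicsSceneMouseEvent*)>;

    void registerHandler(Qt::KeyboardModifiers mods, Handler h) {
        handlers_[int(mods)] = std::move(h);
    }

    void setDefault(Handler h) { defaultHandler_ = std::move(h); }

    // Called from every mouse event handler; the modifier logic lives here.
    void dispatch(QGraphicsSceneMouseEvent* event) {
        auto it = handlers_.find(int(event->modifiers()));
        if (it != handlers_.end())
            it->second(event);
        else if (defaultHandler_)
            defaultHandler_(event);
    }

private:
    std::map<int, Handler> handlers_;
    Handler defaultHandler_;
};

Each mousePressEvent/mouseMoveEvent/mouseReleaseEvent then collapses to a single dispatch(event) call (you would likely keep one dispatcher per event type, or fold the event type into the key), and adding a new modifier behaviour means registering one more handler rather than touching every if/else chain.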
I've done lots of stuff with PyGTK; however, I'm now deciding to learn PyQt. I'm stuck at the QGraphicsView: I have absolutely no idea how to get signals from the items I place on the graphics view, primarily mouse events. How do I get the mouse events from individual items in a scene?
QGraphicsItem is not a QObject and cannot send signals, nor receive slots. Instead, you must handle events. You can do that either through an event filter, by sub-classing the view or scene to intercept events, or simply by sub-classing the items themselves and implementing the event handling functions (see the protected member functions in the documentation). Perhaps this example can be of interest: http://doc.trolltech.com/4.6/graphicsview-diagramscene.html .
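A minimal sketch of the item sub-classing approach, in C++ for brevity (the PyQt version overrides the same method; ClickableItem is an illustrative name):

#include <QDebug>
#include <QGraphicsRectItem>
#include <QGraphicsSceneMouseEvent>

// A rectangle item that reacts to mouse presses itself, no signals needed.
class ClickableItem : public QGraphicsRectItem {
public:
    using QGraphicsRectItem::QGraphicsRectItem; // inherit the rect constructors

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent* event) override {
        qDebug() << "item clicked at" << event->pos();
        QGraphicsRectItem::mousePressEvent(event); // keep the default press behaviour
    }
};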
Right after you create an item, connect the signals you want from it to the instance of the widget that contains it.
Another option is to just give up on using signals and have your QGraphicsItem instance directly call a method of its parent by keeping a reference to it. This is less pretty than using signals, but ultimately it gets the job done.
I've been developing Web applications for a while now and have dipped my toe into GUI and Game application development.
In a web application (PHP for me), a request is made to a file; that file includes all the necessary files to process the info into memory, and then the flow is from top to bottom for each request (mainly).
I know that for games the action happens within the game loop, but how are all the different elements of a game layered into that single loop (menu system, GUI, loading of assets, and the 3D world), with the constant loading and unloading of certain things?
Same for GUI programs; I believe there's an "application loop" of some sort.
Are most of the items loaded into memory up front and then accessed, or are they loaded into memory when needed?
What helped me develop web applications faster was understanding the flow of the program; it doesn't have to be detailed, just the general idea or pseudocode.
There's almost always a loop in all of these - but it's not something you would tend to think about during most of your development.
If you take a step back, your web applications are based around a loop - the Web Server's accept() loop:
while (listening) {
    get a socket connection;
    handle it;
}
... but as a web developer, you're shielded from that, and write 'event driven' code -- 'when someone requests this URL, do this'.
GUIs are also event driven, and the events are also detected by a loop somewhere:
while (running) {
    get mouse/keyboard/whatever event;
    handle it;
}
But a GUI developer doesn't need to think about the loop much. They write 'when a mouse click occurs here, do this'.
Games, again the same. Someone has to write a loop:
while (game is in progress) {
    invoke every game object's 'move one frame' method;
    poll for an input event;
}
... while other code is written in a more event-driven style: 'when a bullet object coincides with this object, trigger an explosion event'.
For applications, and to a lesser extent games, the software is event-driven. The user does "something" with the keyboard or mouse, and that event is sent to the rest of the software.
In games the game loop is important because it is focused on processing the screen and the game state, with many games needing real-time performance. With modern 3D graphics APIs, much of the screen processing can be offloaded onto the GPU. However, the game state is tracked by the main loop. Much of a game team's effort is focused on making the processing of that loop very smooth.
For applications, heavy processing is typically spawned onto a separate thread. It is a complex subject because of the issues surrounding two threads trying to access the same data. There are whole books on the subject.
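A minimal sketch of that idea in C++ (all names are illustrative; real toolkits usually also offer a "post back to the UI thread" mechanism):

#include <mutex>
#include <string>
#include <thread>

std::mutex resultMutex;
std::string sharedResult; // data shared between the worker and the UI thread

// Heavy work runs off the UI thread so the event loop stays responsive.
void startHeavyWork()
{
    std::thread([] {
        std::string computed = "done"; // stand-in for an expensive computation
        std::lock_guard<std::mutex> lock(resultMutex);
        sharedResult = computed; // the UI thread reads this under the same mutex
    }).detach();
}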
For applications the sequence is:
The user does X; X and associated information (like X,Y coordinates) is sent to the UI_Controller.
The UI_Controller decides which command to execute.
The Command is executed.
The model/data is modified.
The Command tells the UI_Controller to update various areas of the UI.
The UI_Controller redraws the UI.
The Command returns.
The application waits for the next event.
There are several variants of this. For example, the model can allow listeners to subscribe to changes in the data; when the data changes, the listeners execute and redraw the UI.
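A minimal sketch of that listener variant in C++ (all names are illustrative):

#include <functional>
#include <vector>

// A model that notifies registered listeners whenever its data changes.
class Model {
public:
    using Listener = std::function<void()>;

    void addListener(Listener l) { listeners_.push_back(std::move(l)); }

    void setValue(int v) {
        value_ = v;
        for (auto& listener : listeners_)
            listener(); // e.g. a listener redraws the part of the UI it owns
    }

    int value() const { return value_; }

private:
    int value_ = 0;
    std::vector<Listener> listeners_;
};

A command then only needs to modify the model; every view that registered a listener repaints itself, and the UI_Controller no longer has to know which areas a given command affects.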
As far as game programming goes, I was merely a hobbyist; however, this is what I usually did:
I had an object that represented a very generic concept of a "Scene" in the game. All the different major sections of the game derived from this Scene object. A scene could really be anything, depending on what type of game it is. Anyway, each more specific scene that derived from scene had a procedure to Load all of the necessary elements for that scene.
When the game was to change scenes, the pointer to the active scene was set to a new scene, which would then load all of its needed objects.
The generic Scene object had virtual functions such as Load, Draw, and Logic that were called at particular times in the game loop from the active scene pointer. Every specific scene had its own ways of implementing these methods.
I don't know if that's how it's supposed to be done or not, but it was a very easy way for me to control the flow of things. The scene concept also made it easy to store multiple scenes as collections. With multiple scene pointers stored in a stack of sorts at one time, scenes could be kept in reserve and maintain their full state when returned to, or even do things like dim but continue to be drawn while the active scene drew over them as an overlay of sorts.
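A minimal C++ sketch of that structure (all names are illustrative):

#include <memory>
#include <stack>

// Generic scene: each major section of the game implements these hooks.
class Scene {
public:
    virtual ~Scene() = default;
    virtual void Load() = 0;   // load the assets this scene needs
    virtual void Logic() = 0;  // advance this scene's state one tick
    virtual void Draw() = 0;   // render this scene
};

class TitleScene : public Scene {
public:
    void Load() override  { /* load menu textures, fonts, ... */ }
    void Logic() override { /* handle menu navigation */ }
    void Draw() override  { /* draw the menu */ }
};

// The game loop only talks to the scene on top of the stack; pushing a
// new scene keeps the old one's state intact underneath it.
std::stack<std::unique_ptr<Scene>> scenes;

void gameLoop()
{
    while (!scenes.empty()) {
        Scene& active = *scenes.top();
        active.Logic();
        active.Draw();
        // input handling, timing, and scene pushes/pops elided
    }
}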
So anyway, if you do it like that, it's not precisely like a web page, but I guess if you think about it the right way it's similar enough.