How come Unity uses polling for all its input events? Isn't it very inefficient to check on every update loop whether there is a new event? If I have 1 million objects doing this each update cycle, I would assume the constant polling would slow down the system significantly.
public void Update() {
if (Input.GetKeyUp(KeyCode.Escape)) {
// escape clicked
}
}
Why is there nothing like this:
public void Start() {
Input.addKeyUpListener(KeyCode.Escape, delegate {
// escape clicked
});
}
Note that Unity is not polling the system every time you call one of those methods - it is instead polling once per frame and then caching the value, as evidenced by the ResetInputAxes function.
This is not the only place Unity does something seemingly insane, but which may be more efficient in the lower levels of code - keep in mind that Unity maintains a lot of customization to the runtime (particularly with garbage collection and construction) that works differently from standard C#.
Note also that callback logic, while great for handlers and generally long-lived objects such as singletons and systems, is not so great for scripts, which are normally created and destroyed many times over the lifetime of a game. Since Unity only exposes the ability to write game logic as scripts, it makes more sense to poll than to use callbacks, which would need attach, handle, and detach behaviours to prevent errors.
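If you do want listener-style registration, you can layer it on top of polling yourself: one dispatcher polls once per frame and fans the result out to subscribers, so a million listeners cost one poll plus a dictionary lookup. A minimal engine-free sketch of the idea (the `KeyUpDispatcher` type and string keys are invented for illustration; in Unity, the single `Poll` call would live in one MonoBehaviour's `Update`, backed by `Input.GetKeyUp`):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical dispatcher: one object polls once per frame and fans the
// result out to any number of subscribers.
class KeyUpDispatcher
{
    private readonly Dictionary<string, Action> _handlers = new Dictionary<string, Action>();

    public void AddKeyUpListener(string key, Action handler)
    {
        if (_handlers.ContainsKey(key)) _handlers[key] += handler;
        else _handlers[key] = handler;
    }

    // Called once per frame; 'keyUp' stands in for the key Input.GetKeyUp
    // reported this frame, or null if none did.
    public void Poll(string keyUp)
    {
        Action handler;
        if (keyUp != null && _handlers.TryGetValue(keyUp, out handler))
            handler();
    }
}
```

This is essentially what an `Input.addKeyUpListener` API would have to do internally anyway: poll once, then dispatch.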
I have to create a library that communicates with a device via a COM port.
In one of the functions, I need to issue a command, then wait for several seconds while the device performs a test (anywhere from 10 to 1000 seconds), and return the result of the test:
One approach is to use async-await pattern:
public async Task<decimal> TaskMeasurementAsync(CancellationToken ctx = default)
{
PerformTheTest();
// Wait till the test is finished
await Task.Delay(_duration, ctx);
return ReadTheResult();
}
The other that comes to mind is to just fire an event upon completion.
The device performs a test and the duration is specified prior to performing it. So in either case I would either have to use Task.Delay() or Thread.Sleep() in order to wait for the completion of the task on the device.
I lean towards async-await, as it is easy to build in cancellation and, for lack of a better term, it is self-contained, i.e. I don't have to declare an event, create an EventArgs class, etc.
Would appreciate any feedback on which approach is better if someone has come across a similar dilemma.
Thank you.
There are several tools available for how to structure your code.
Events are a push model (so is System.Reactive, a.k.a. "LINQ over events"). The idea is that you subscribe to the event, and then your handler is invoked zero or more times.
Tasks are a pull model. The idea is that you start some operation, and the Task will let you know when it completes. One drawback to tasks is that they only represent a single result.
The coming-soon async streams are also a pull model - one that works for multiple results.
In your case, you are starting an operation (the test), waiting for it to complete, and then reading the result. This sounds very much like a pull model would be appropriate here, so I recommend Task<T> over events/Rx.
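The difference between the two models can be sketched on the device example (the `Device` type, `MeasureAsync` name, and the constant result are invented stand-ins for the real hardware calls):

```csharp
using System;
using System.Threading.Tasks;

class Device
{
    // Push model: an event the caller must subscribe to (and later
    // unsubscribe from) to receive the result.
    public event Action<decimal> MeasurementCompleted;

    // Pull model: one awaitable operation that yields one result.
    public async Task<decimal> MeasureAsync(TimeSpan duration)
    {
        await Task.Delay(duration);           // stand-in for the device test
        decimal result = 42m;                 // stand-in for ReadTheResult()
        MeasurementCompleted?.Invoke(result); // push-model equivalent
        return result;                        // pull-model result
    }
}
```

With the pull model the caller writes one line, `decimal r = await device.MeasureAsync(...);`, and cancellation and exceptions flow naturally through the `Task`. With the push model the caller must subscribe before starting, correlate the event with the request, and remember to unsubscribe.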
This is a larger question about how computers are able to fire tasks based off a time trigger.
I'm building a game in Unity with a GameManager singleton Instance that runs a game clock. I want events to happen periodically throughout the game, but it seems inefficient and counter-intuitive to run a time variable through a series of if-statements to know when an event should happen.
I've also been developing mobile apps, and I've always wondered how an alarm works (is this similar to the above context?). Does the device run underlying tasks 24 hours a day, waiting and checking the equivalent of an if-statement to know when to fire the alarm event?
What you've described is basically how computers do events. Most event-driven programs, including the OS itself, use what is called a polling system, where the system goes through a large list of all existing events and checks whether each one's firing condition is met. The firing condition can be a time, or another trigger such as input received from a device like a keyboard or network card.
The Unity engine actually does this too. Behind the scenes, Unity has a main loop running constantly, checking whether the firing conditions for all the built-in MonoBehaviour methods have been met, and firing those methods if they have. For example, the FixedUpdate method is guaranteed to be called a fixed number of times per second: 50 times per second by default (a fixed timestep of 0.02 s, configurable in the project's Time settings). So the calling code for FixedUpdate would be something like this:
timer += Time.deltaTime; // deltaTime is a property, not a method
while (timer >= Time.fixedDeltaTime)
{
    FixedUpdate();
    timer -= Time.fixedDeltaTime;
}
This code runs as often as the processor allows, constantly checking whether enough time has passed to perform another call to FixedUpdate(). It also catches up: if the processor becomes overloaded and this code doesn't get called for, say, 2 seconds, then the next time it runs it will perform enough calls to cover the 2 seconds that were missed.
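The catch-up behaviour is easy to verify with a plain, engine-free version of the accumulator above (`FixedStepLoop` and `Step` are invented names; `Step` stands in for one iteration of the engine's main loop):

```csharp
using System;

class FixedStepLoop
{
    private float _timer;
    public int Calls { get; private set; }
    private const float FixedStep = 0.02f; // Unity's default fixed timestep

    // One iteration of the outer loop; deltaTime is real time elapsed
    // since the previous iteration.
    public void Step(float deltaTime)
    {
        _timer += deltaTime;
        while (_timer >= FixedStep)
        {
            Calls++;          // stands in for FixedUpdate()
            _timer -= FixedStep;
        }
    }
}
```

Feeding `Step(2.0f)` after a simulated 2-second hitch performs roughly 100 catch-up calls in a single iteration (the exact count can be off by one due to float rounding).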
Now for your Unity application: if this is going to be small, say only around 10 time conditions, then I would just use if-statements as you suggest. Any more than that, and I would consider a more OOP approach: an abstract class with two methods.
abstract class ConditionEvent {
    public abstract bool ConditionMet();
    public abstract void Process();
}
Then in your MonoBehaviour I would keep a list filled with these objects, and a loop that calls ConditionMet on each one. If it returns true, call Process.
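Putting the pieces together, a sketch of the list-and-loop idea (the `TimedEvent` class, the clock delegate, and `EventLoop.Tick` are invented for illustration; in Unity the clock could be `() => Time.time` and `Tick` would be called from Update):

```csharp
using System;
using System.Collections.Generic;

abstract class ConditionEvent
{
    public abstract bool ConditionMet();
    public abstract void Process();
}

// Illustrative one-shot timed event; the clock delegate lets it read
// whatever game clock you run.
class TimedEvent : ConditionEvent
{
    private readonly Func<float> _clock;
    private readonly float _fireAt;
    public bool Fired { get; private set; }

    public TimedEvent(Func<float> clock, float fireAt)
    {
        _clock = clock;
        _fireAt = fireAt;
    }

    public override bool ConditionMet() => !Fired && _clock() >= _fireAt;
    public override void Process() => Fired = true;
}

static class EventLoop
{
    // Called once per frame from the MonoBehaviour's Update.
    public static void Tick(List<ConditionEvent> events)
    {
        foreach (var e in events)
            if (e.ConditionMet())
                e.Process();
    }
}
```

New periodic behaviours then become new ConditionEvent subclasses rather than new branches in a growing if-statement chain.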
I am looking for a pub/sub mechanism that behaves like a promise but can resolve multiple times, and behaves like an event except if you subscribe after a notification has happened it triggers with the most recent value.
I am aware of notify, but deferred.notify is order-sensitive, so in that way it behaves just like an event, e.g.:
d.notify('notify before'); // Not observed :(
d.promise.progress(function(data){ console.log(data) });
d.notify('notify after'); // Observed
setTimeout(function(){ d.notify('notify much later') }, 100); // Also Observed
fiddle: http://jsfiddle.net/foLhag3b/
The notification system I'd like is a good fit for a UI component that should update to reflect the state of the data behind it. In these cases, you don't want to care about whether the data has arrived yet or not, and you want updates when they come in.
Maybe this is similar to Immediate mode UIs, but is distinct because it is message based.
The state of the art for message based UI updating, as far as I'm aware, is something which uses a promise or callback to initialize, then also binds an update event:
myUIComponent.gotData(model.data);
model.onUpdate(myUIComponent.gotData); // doing two things is 2x teh workz :(
I don't want to have to do both. I don't think anyone should have to, the use case is common enough to abstract.
model.whenever(myUIComponent.gotData); // yay one intention === one line of code
I could build a mechanism to do what I want, but I wanted to see if a pub/sub mechanism like this already exists. A lot of smart people have done a lot in CS and I figure this concept must exist, I just am looking for the name of it.
To be clear, I'm not looking to change an entire framework, say to Angular or React. I'm looking only for a pub/sub mechanism. Preferably an implementation of a known concept with a snazzy name like notifr or lemme-kno or touch-base.
You'll want to have a look at (functional) reactive programming. The concept you are looking for is known as a Behavior or Signal in FRP. It models the change of a value over time, and can be inspected at any time (continuously holds a value, in contrast to a stream that discretely fires events).
var ui = state.map(render); // whenever state updates, so does ui with render() result
Some popular libraries in JS are Bacon and Rx, though each uses its own terminology: there you'll find properties and observables, respectively.
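Hand-rolling the concept makes the semantics concrete (sketched here in C# for brevity; Bacon's Property and Rx's BehaviorSubject play the same role in JS, and the `Whenever` name is invented): it behaves like an event on every publish, but a late subscriber is immediately called with the most recent value.

```csharp
using System;
using System.Collections.Generic;

// Minimal behavior/signal: holds the latest value continuously, so
// subscription order no longer matters.
class Whenever<T>
{
    private readonly List<Action<T>> _subscribers = new List<Action<T>>();
    private bool _hasValue;
    private T _latest;

    public void Subscribe(Action<T> handler)
    {
        _subscribers.Add(handler);
        if (_hasValue) handler(_latest); // late subscribers catch up immediately
    }

    public void Publish(T value)
    {
        _hasValue = true;
        _latest = value;
        foreach (var s in _subscribers) s(value); // then it acts like an event
    }
}
```

This is exactly the UI-binding fit described above: the component subscribes once and is correct whether the data arrived before or after it mounted.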
I'm currently developing a multiplayer card game for Android, with libgdx as the game engine. My question is more general, though.
I'm not sure what the best practice is for handling callbacks in this architecture. My controller is a big state machine that checks inputs over and over while being called from the render() method of the game engine.
I have two main callbacks: user input from the GUI, and network callbacks from the Android Google Play Services part.
Currently these callback methods / input listeners just set member variables, which are checked by getter methods from the controller/state machine. For example, I call this from the controller over and over, check if the result is != null, and proceed if it is:
@Override
public Boolean allPlayersConnected() {
Boolean allConnected = null;
if (startGame != null) {
allConnected = startGame;
startGame = null;
}
return allConnected;
}
The startGame "flag" is set by callbacks from the Google Play Services API.
I don't know if this is good practice; it doesn't look like it.
I could call controller methods from the Google Play Services callbacks that set a controller member variable, and check it in each render loop, but that's just moving the variable.
I could also design the controller as an observer of those events, but what would I do in the controller's update method that's called when an event happens? I don't think I want to change state there, even though I can access the current state. I'd be spreading state code all over the place: some in different parts of a huge update method and some in the actual state machine code. And just setting a member variable in the update method is quite similar to what I did above.
Another option would be to change controller state directly from the callback methods. That would be less code, fewer variables, and a little faster, but I think it would break the MVP concept, since it takes control away from the controller and lets, e.g., the GUI change the controller's state.
Any input on this?
Edit:
The more I think about it, the more I think a combination of the observer and command patterns is the way to go.
So I could indeed cut a big part of the current state machine and move it into the observer's update() method. Instead of sending commands through a big command enum, I could create command objects with the information available and pass them to the observer (the controller), where I check that the command is viable and call execute() with whatever it needs, e.g. the model interface.
First, I think whether your commands are enums or command objects is independent of the main problem here -- which is how to connect user and network input to state management.
The most common game architecture I've seen is an update loop that checks input, iterates the game simulation, and then renders a frame. In the MVC world, this structure just synchronizes those steps; you still have an encapsulated view and data model, with the controller (the game loop) serving as a bridge between those two worlds.
Input, whether from the local user or one over the net, is generally treated as a request to modify game state. That is, the controller (as the first part of its loop) reads in pending input messages and processes them, modifying state as it goes. This way, the code that changes state is in one place: that controller. You are right, spreading state-modification code throughout the app is a bad practice; basically, it's not MVC.
In other words, all of your callbacks should convert the input to commands and stick them into a queue. You don't want to synchronize the controller -- whose job it is to modify state -- with those input callbacks. You have no idea when input will occur relative to the game loop, so it's best to decouple them. Serializing input processing with game simulation should also make your logic simpler.
You have some choice in how to connect the callbacks to the controller; a shared queue (where one side writes into it and the other reads out from it) is a strong pattern and easy to make thread-safe.
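A sketch of that shared-queue shape (written in C# for consistency with the rest of this page; in Java, `ConcurrentLinkedQueue` plays the role of `ConcurrentQueue`, and the command and state types here are invented names):

```csharp
using System;
using System.Collections.Concurrent;

// Callbacks (GUI thread, network thread) enqueue commands; the game loop
// drains the queue at the start of each frame, so all state changes
// happen in one place, on one thread.
interface ICommand { void Execute(GameState state); }

class GameState { public bool Started; }

// Illustrative command: what the Google Play Services callback would
// enqueue instead of setting a startGame flag.
class AllPlayersConnectedCommand : ICommand
{
    public void Execute(GameState state) { state.Started = true; }
}

class GameLoop
{
    public readonly ConcurrentQueue<ICommand> Pending = new ConcurrentQueue<ICommand>();
    public readonly GameState State = new GameState();

    // First step of each render()/update iteration.
    public void ProcessInput()
    {
        ICommand cmd;
        while (Pending.TryDequeue(out cmd))
            cmd.Execute(State);
    }
}
```

The callbacks never touch game state directly; they only describe what happened, and the controller decides when and whether to apply it.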
I have some ViewModels where I use a Service, which is quite bandwidth intensive. However this service is only required when viewing specific Views in the application.
In MvvmCross vNext I used the ViewUnRegistered/ViewRegistered events to detect when a ViewModel was shown, and had a BaseViewModel which looked something like this:
public class BaseViewModel
: MvxViewModel
, IMvxServiceConsumer
{
public BaseViewModel()
{
ViewUnRegistered += (s, e) =>
{
if (!HasViews)
{
OnViewsDetached();
}
};
ViewRegistered += (s, e) =>
{
if (HasViews)
{
OnViewsAttached();
}
};
}
public virtual void OnViewsAttached()
{
// nothing to do here
}
public virtual void OnViewsDetached()
{
// nothing to do in this base class
}
}
Then in my other ViewModels I would just inherit from this and override OnViewsAttached and OnViewsDetached to start and stop the service.
Now in MvvmCross v3 these two events are not present anymore. As I understand it, they were not working properly on iOS either. v3 also has a new ViewModel lifecycle, with SavedState and ReloadState, although as I understand it SavedState only gets called when the ViewModel is destroyed, which might not happen even when it is no longer showing.
As for detecting whether the associated View is showing, one could assume that a View is showing when ShowViewModel is called and use some Init parameters in the view, but the tricky part is detecting when a View is no longer showing. Any ideas on how to achieve this?
This area of determining View/ViewModel lifecycle across all the platforms is fairly tricky, especially once developers start straying from the 'basic' presentation models and start using tabs, splitviews, popups, flyouts, etc.
MvvmCross v3 doesn't currently have a common way to handle this.
The previous code from vNext broke when iOS 6 removed viewDidUnload (but it was generally misused anyway, as viewDidUnload was not called when ViewModel developers thought it would be!)
There is an issue open still to discuss possible future common ideas... https://github.com/slodge/MvvmCross/issues/74
With that said, some of the patterns I've recently used for this type of situation are:
for most viewmodels I do nothing - since these viewmodels don't consume any resources and can just be garbage collected when the system needs the memory back.
for ViewModels which consume low-intensity resources - like timer ticks, then I generally use the MvxMessenger to connect the ViewModel to those resources. This messenger uses weak referencing by default and itself sends out subscription change messages when clients subscribe/unsubscribe
Using this method, I can allow the background resources to monitor whether the viewmodels are in memory (and referenced by views) - and so the background resources can manage themselves.
... although actually quite often (e.g. for timer ticks) then I leave the background resources constantly running regardless of whether a ViewModel is listening.
for those rare situations where resource monitoring is actively needed - e.g. for the SpheroViewModel, which needs to maintain an active Bluetooth SPP channel - I implement a custom interface on the ViewModel - e.g. IActiveViewModel - and then hook into that interface from the views on each of the various platforms
Generally I do this from ViewDidAppear/Disappear, OnNavigatedTo/From, and OnRestart/Pause - but whether this exact timing works for you depends on the situation.
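A sketch of that custom-interface approach (only the IActiveViewModel and SpheroViewModel names come from the text above; the members and the flag are illustrative):

```csharp
using System;

// The ViewModel opts in to active-lifecycle notifications.
public interface IActiveViewModel
{
    void Activate();    // e.g. open the Bluetooth SPP channel
    void Deactivate();  // e.g. close it again
}

public class SpheroViewModel : IActiveViewModel
{
    public bool ChannelOpen { get; private set; }
    public void Activate() => ChannelOpen = true;    // stand-in for opening SPP
    public void Deactivate() => ChannelOpen = false; // stand-in for closing it
}

// Each platform's view calls into the interface from its own lifecycle
// hooks. On iOS, for example, a view controller with a ViewModel
// property might do:
//
//   public override void ViewDidAppear(bool animated)
//   {
//       base.ViewDidAppear(animated);
//       (ViewModel as IActiveViewModel)?.Activate();
//   }
//
// with the symmetric Deactivate() call in ViewDidDisappear; Android
// would use OnRestart/OnPause, and Windows OnNavigatedTo/From.
```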
I suspect, moving forwards, that these resource-intensive viewmodels will be the exception rather than the norm, but I hope that we'll see some samples/recipes posted which demonstrate some ways of handling them.
It's also likely that we'll see some people experimenting with other ongoing-resource situations - e.g. where the application needs to perform background network operations or needs to monitor geo-location beyond the lifetime of a single viewmodel (and maybe even beyond the app). Doing these sort of things in a cross-platform way is an 'interesting' pattern to consider!