Detecting whether a ViewModel's associated View is showing/not showing - xamarin

I have some ViewModels in which I use a Service that is quite bandwidth-intensive. However, this service is only required when viewing specific Views in the application.
In MvvmCross vNext I used the ViewUnRegistered/ViewRegistered events to detect when a ViewModel was shown, and had a BaseViewModel which looked something like this:
public class BaseViewModel
    : MvxViewModel
    , IMvxServiceConsumer
{
    public BaseViewModel()
    {
        ViewUnRegistered += (s, e) =>
        {
            if (!HasViews)
            {
                OnViewsDetached();
            }
        };
        ViewRegistered += (s, e) =>
        {
            if (HasViews)
            {
                OnViewsAttached();
            }
        };
    }

    public virtual void OnViewsAttached()
    {
        // nothing to do here
    }

    public virtual void OnViewsDetached()
    {
        // nothing to do in this base class
    }
}
Then in my other ViewModels I would just inherit from this and override OnViewsAttached and OnViewsDetached to start and stop the service.
Now in MvvmCross v3 these two events are no longer present. As I understand it, they were not working properly on iOS either. v3 also has a new ViewModel lifecycle, which has SavedState and ReloadState. However, as I understand it, SavedState only gets called when the ViewModel is destroyed, which might not happen even though the View is no longer showing.
As for detecting whether the associated View is showing, one could assume that a View is showing when ShowViewModel is called and pass some Init parameters to the View, but the tricky part is detecting when a View is no longer showing. Any ideas on how to achieve this?

This area of determining View/ViewModel lifecycle across all the platforms is fairly tricky, especially once developers start straying from the 'basic' presentation models and start using tabs, split views, popups, flyouts, etc.
MvvmCross v3 doesn't currently have a common way to handle this.
The previous code from vNext was broken when iOS 6 removed viewDidUnload (and it was generally wrongly used anyway - viewDidUnload was not generally called when ViewModel developers thought it would be!)
There is an issue open still to discuss possible future common ideas... https://github.com/slodge/MvvmCross/issues/74
With that said, some of the patterns I've recently used for this type of situation are:
for most viewmodels I do nothing - since these viewmodels don't consume any resources and can just be garbage collected when the system needs the memory back.
for ViewModels which consume low-intensity resources - like timer ticks - I generally use the MvxMessenger to connect the ViewModel to those resources. This messenger uses weak referencing by default and itself sends out subscription change messages when clients subscribe/unsubscribe.
Using this method, I can allow the background resources to monitor whether the viewmodels are in memory (and referenced by views) - and so the background resources can manage themselves.
... although actually quite often (e.g. for timer ticks) then I leave the background resources constantly running regardless of whether a ViewModel is listening.
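As a rough sketch of that messenger-based wiring - assuming the v3 IMvxMessenger API (constructor injection, Subscribe returning a token) and a TimerTickMessage type of my own invention:
// Hypothetical message type - not part of MvvmCross itself.
public class TimerTickMessage : MvxMessage
{
    public TimerTickMessage(object sender) : base(sender) { }
}

public class TickConsumingViewModel : MvxViewModel
{
    private readonly MvxSubscriptionToken _tickToken;

    public TickConsumingViewModel(IMvxMessenger messenger)
    {
        // Subscriptions are weak by default, so once this ViewModel (and the
        // Views holding it) are gone, the messenger drops the subscription and
        // the background resource can see that nobody is listening any more.
        _tickToken = messenger.Subscribe<TimerTickMessage>(OnTick);
    }

    private void OnTick(TimerTickMessage message)
    {
        // react to the tick
    }
}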
for those rare situations where resource monitoring is actively needed - e.g. for the SpheroViewModel which needs to maintain an active Bluetooth SPP channel - I implement a custom interface on the ViewModel - e.g. IActiveViewModel - and then I hook into that interface from the views on each of the various platforms.
Generally I do this from ViewDidAppear/Disappear, OnNavigatedTo/From, OnRestart/Pause - but whether this exact timing works for you depends on the situation.
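A rough sketch of that last pattern (IActiveViewModel and its method names are my own illustration, not an MvvmCross API): the ViewModel implements a small custom interface and each platform's view drives it from its native lifecycle, e.g. on iOS:
public interface IActiveViewModel
{
    void Activated();
    void Deactivated();
}

public class SpheroViewModel : MvxViewModel, IActiveViewModel
{
    public void Activated()
    {
        // open the Bluetooth SPP channel
    }

    public void Deactivated()
    {
        // close the Bluetooth SPP channel
    }
}

// In the iOS view (similarly OnNavigatedTo/From on Windows, OnRestart/OnPause on Android):
public override void ViewDidAppear(bool animated)
{
    base.ViewDidAppear(animated);
    var active = ViewModel as IActiveViewModel;
    if (active != null)
    {
        active.Activated();
    }
}

public override void ViewDidDisappear(bool animated)
{
    base.ViewDidDisappear(animated);
    var active = ViewModel as IActiveViewModel;
    if (active != null)
    {
        active.Deactivated();
    }
}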
I suspect, moving forwards, that these resource-intensive viewmodels will be the exception rather than the norm, but I hope that we'll see some samples/recipes posted which demonstrate some ways of handling them.
It's also likely that we'll see some people experimenting with other ongoing-resource situations - e.g. where the application needs to perform background network operations or needs to monitor geo-location beyond the lifetime of a single viewmodel (and maybe even beyond the app). Doing these sort of things in a cross-platform way is an 'interesting' pattern to consider!

Related

Am I right in separating integration events from domain events?

I use event sourcing to store my object.
Changes are captured via domain events, holding only the minimal information required, e.g.
GroupRenamedDomainEvent
{
    string Name;
}

GroupMemberAddedDomainEvent
{
    int MemberId;
    string Name;
}
However elsewhere in my application I want to be notified if a Group is updated in general. I don’t want to have to accumulate or respond to a bunch of more granular and less helpful domain events.
My ideal event to subscribe to is:
GroupUpdatedIntegrationEvent
{
    int Id;
    string Name;
    List<Member> Members;
}
So what I have done is the following:
Update group aggregate.
Save down generated domain events.
Use these generated domain events to see whether to trigger my integration event.
For the example above, this might look like:
var groupAggregate = _groupAggregateRepo.Load(id);
groupAggregate.Rename("Test");
groupAggregate.AddMember(1, "John");
_groupAggregateRepo.Save(groupAggregate);

var domainEvents = groupAggregate.GetEvents();
if (domainEvents.Any())
{
    _integrationEventPublisher.Publish(
        new GroupUpdatedIntegrationEvent
        {
            Id = id,
            Name = groupAggregate.Name,
            Members = groupAggregate.Members
        });
}
This means my integration events used throughout the application are not coupled to what data is used in my event sourcing domain events.
Is this a sensible idea? Has anyone got a better alternative? Am I misunderstanding these terms?
Of course you're free to create and publish as many events as you want, but I don't see (m)any benefits there.
You still have coupling: you just shifted the coupling from one event to another. Here it really depends on how many Event Consumers you've got, whether everything is running in-memory or stored in a DB, and whether your Consumers need some kind of Replay mechanism.
Your integration Events can grow over time and use much bandwidth: If your Group contains 1000 Members and you add 5 new Members, you'll trigger 5 integration events that always contain all members, instead of just the small delta. It'll use much more network bandwidth and hard drive space (if persisted).
You're coupling your Integration Event to your Domain Model. I think this is not good at all. You won't be able to simply change the Member class in the future, because all Event Consumers depend on it. A solution could be to instead use a separate MemberDTO class for the Integration Event and write a MemberToMemberDTO converter.
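A minimal sketch of that DTO idea (MemberDto and the mapping code are my own illustration, not from the question):
public class MemberDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MemberDtoMapper
{
    // Map the domain Member onto the DTO used by the integration event,
    // so the published contract no longer depends on the domain class.
    public static MemberDto ToDto(Member member)
    {
        return new MemberDto
        {
            Id = member.Id,
            Name = member.Name
        };
    }
}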
Your Event Consumers can't decide which changes they want to handle, because they always just receive the full blown current state. The information what actually changed is lost.
The only real benefit I see is that you don't have to again write code to apply your Domain Events.
In general it looks a bit like Read Model in CQRS. Maybe that's what you're looking for?
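For illustration, a read-model projection (all names here are hypothetical) would apply the granular domain events to a query-side model instead of publishing a fat integration event:
public class GroupReadModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<MemberDto> Members { get; } = new List<MemberDto>();
}

public class GroupProjection
{
    // Hypothetical persistence abstraction for the read side.
    private readonly IGroupReadModelStore _store;

    public GroupProjection(IGroupReadModelStore store)
    {
        _store = store;
    }

    public void Handle(int groupId, GroupRenamedDomainEvent e)
    {
        var model = _store.Get(groupId);
        model.Name = e.Name;
        _store.Save(model);
    }

    public void Handle(int groupId, GroupMemberAddedDomainEvent e)
    {
        var model = _store.Get(groupId);
        model.Members.Add(new MemberDto { Id = e.MemberId, Name = e.Name });
        _store.Save(model);
    }
}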
But of course it depends. If your solution fits your application's needs, then it'll be fine to do it that way. Rules are made to show you the right direction, but they're also meant to be broken when they get in your way (and you know what you're doing).

Where to validate pagination logic in domain driven design?

In DDD, where should validation logic for pagination queries reside?
For example, if the service layer receives a query for a collection with parameters that look like (in Go), though feel free to answer in any language:
// in one file
package repositories

type Page struct {
    Limit  int
    Offset int
}

// Should Page, which is part of the repository
// layer, have validation behaviour?
func (p *Page) Validate() error {
    if p.Limit > 100 {
        // ...
    }
    return nil
}

type Repository interface {
    getCollection(p *Page) (collection, error)
}
// in another file
package service

import "repositories"

type Service struct {
    repository repositories.Repository
}

// service layer
func (s *Service) getCollection(p *repositories.Page) (collection, error) {
    // delegate validation to the repository layer?
    // i.e. - p.Validate()
    // or call some kind of validation logic in the service layer
    // i.e. - validatePagination(p)
    return s.repository.getCollection(p)
}

func validatePagination(p *repositories.Page) error {
    if p.Limit > 100 {
        // ...
    }
    return nil
}
and I want to enforce a "no Limit greater than 100" rule, does this rule belong in the Service layer or is it more of a Repository concern?
At first glance it seems like it should be enforced at the Repository layer, but on second thought, it's not necessarily an actual limitation of the repository itself. It's more of a rule driven by business constraints that belongs on the entity model. However Page isn't really a domain entity either, it's more a property of the Repository layer.
To me, this kind of validation logic seems stuck somewhere between being a business rule and a repository concern. Where should the validation logic go?
The red flag for me is the same one identified by @plalx. Specifically:
It's more of a rule driven by business constraints that belongs on the
entity model
In all likelihood, one of two things is happening. The less likely of the two is that the business users are trying to define the technical implementation of the domain model. Every once in a while, you have a business user who knows enough about technology to try to interject these things, and they should be listened to - as a concern, not a requirement. Use cases should not define performance attributes, as those are acceptance criteria of the application itself.
That leads into the more likely scenario, in that the business user is describing pagination in terms of the user interface. Again, this is something that should be talked about. However, this is not a use case, as it applies to the domain. There is absolutely nothing wrong with limiting dataset sizes. What is important is how you limit those sizes. There is an obvious concern that too much data could be pulled back. For example, if your domain contains tens of thousands of products, you likely do not want all of those products returned.
To address this, you should also look at why you have a scenario that can return too much data in the first place. When looking at it purely from a repository's perspective, the repository is used simply as a CRUD factory. If your concern is what a developer could do with a repository, there are other ways to paginate large datasets without bleeding either a technological or application concern into the domain. If you can safely deduce that the aspect of pagination is something owned by the implementation of the application, there is absolutely nothing wrong with having the pagination code outside of the domain completely, in an application service. Let the application service perform the heavy lifting of understanding the application's requirement of pagination, interpreting those requirements, and then very specifically telling the domain what it wants.
Instead of having some sort of GetAll() method, consider having a GetById() method that takes an array of identifiers. Your application service performs the dedicated task of "searching" and determining what the application is expecting to see. The benefit may not be immediately apparent, but what do you do when you are searching through millions of records? If you want to consider using something like Lucene, Endeca, FAST, or similar, do you really need to crack open the domain for that? When, or if, you get to the point where you want to change out a technical detail and you find yourself having to actually touch your domain, to me, that is a rather large problem. When your domain starts to serve multiple applications, will all of those applications share the same application requirements?
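As a rough sketch of what that shape might look like (the names here are my own illustration, not from the original answer):
public interface IProductRepository
{
    // The application service decides which products it wants
    // (e.g. the ids coming back from a search index) and asks
    // the domain for exactly those.
    IList<Product> GetByIds(IEnumerable<int> ids);
}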
The last point is the one that I find hits home the most. Several years back, I was in the same situation. Our domain had pagination inside of the repositories, because we had a business user who had enough sway and just enough technical knowledge to be dangerous. Despite the objections of the team, we were overruled (which is a discussion unto itself). Ultimately, we were forced to put pagination inside of the domain. The following year, we started to use the domain in the context of other applications inside the business. The actual business rules never changed, but the way that we searched did - depending on the application. That left us having to bring up another set of methods to accommodate, with the promise of reconciliation in the future.
That reconciliation came with the fourth application to use the domain, which was for an external third-party to consume, when we finally conveyed the message that these continual changes in the domain could have been avoided by allowing the application to own its own requirements and providing a means to facilitate a specific question - such as "give me these specific products". The previous approach of "give me twenty products, sorted in this fashion, with a specific offset" in no way described the domain. Each application determined what a "pagination" ultimately meant to itself and how it wanted to load those results. Top result, reversing order in the middle of a paged set, etc. Those were all eliminated because those were moved nearer their actual responsibilities and we empowered the application while still protecting the domain. We used the service layer as a delineation for what is considered "safe". Since the service layer acted as a go-between, between the domain and the application, we could reject a request at the service-level if, for example, the application requested more than one hundred results. This way, the application could not just do whatever it pleased, and the domain was left gleefully oblivious to the technical limitation being applied to the call being made.
"It's more of a rule driven by business constraints that belongs on
the entity model"
These kinds of rules generally aren't business rules; they are simply put in place (most likely by developers, without business experts' involvement) due to technical system limitations (e.g. to guarantee the system's stability). They usually find their natural home in the Application layer, but could be placed elsewhere if it's more practical to do so.
On the other hand, if business experts are interested by the resource/cost factor and decide to market this so that customers may pay more to view more at a time then that would become a business rule; something the business really cares about.
In that case the rule checking would certainly not go in the Repository because the business rules would get buried in the infrastructure layer. Not only that but the Repository is very low-level and may be used in automated scripts or other processes where you would not want these limitations to apply.
Actually, I usually apply some CQRS principles and avoid going through repositories entirely for queries, but that's another story.
At first glance it seems like it should be enforced at the Repository
layer, but on second thought, it's not necessarily an actual
limitation of the repository itself. It's more of a rule driven by
business constraints that belongs on the entity model.
Actually, repositories are still domain: they're mediators between the domain and the data mapping layer, so you should still consider them part of the domain.
Therefore, a repository interface implementation should enforce domain rules.
In summary, I would ask myself: do I want to allow non-paginated access to the data abstracted by the repository from any domain operation? The answer should probably be no, because such a domain might own thousands of domain objects, and trying to get too many domain objects at once would be a suboptimal retrieval, wouldn't it?
Suggestion
* Since I don't know which language the OP is currently using, and since the programming language doesn't matter for this Q&A, I'll explain a possible approach using C# that the OP can translate to any programming language.
For me, enforcing a "no more than 100 results per query" rule should be a cross-cutting concern. In contrast to what @plalx has said in his answer, I really believe that something that can be expressed in code is the way to go, and that it's not only an optimization concern but a rule enforced across the entire solution.
Based on what I've said above, I would design a Repository abstract class to provide some common behaviors and rules across the entire solution:
public interface IRepository<T>
{
    IList<T> List(int skip = 0, int take = 0);

    // Other method definitions like Add, Remove, GetById...
}

public abstract class Repository<T> : IRepository<T>
{
    protected virtual void EnsureValidPagination(int skip = 0, int take = 0)
    {
        if (take > 100)
        {
            throw new ArgumentException("Cannot take more than 100 objects at once", "take");
        }
    }

    public IList<T> List(int skip = 0, int take = 0)
    {
        EnsureValidPagination(skip, take);

        return DoList(skip, take);
    }

    protected abstract IList<T> DoList(int skip = 0, int take = 0);

    // Other methods like Add, Remove, GetById...
}
Now you would be able to call EnsureValidPagination in any implementation of IRepository<T> that would also inherit Repository<T>, whenever you implement an operation which involves returning object collections.
If you need to enforce such a rule for some specific domain, you could just design another abstract class deriving from the one I've described above, and introduce the rule there.
In my case, I always implement a solution-wide repository base class, specialize it in each domain if needed, and use it as the base class for specific domain repository implementations.
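For example, a domain-specific base class could tighten the rule further (a sketch of my own, reusing the Repository&lt;T&gt; above):
public abstract class CatalogRepository<T> : Repository<T>
{
    protected override void EnsureValidPagination(int skip = 0, int take = 0)
    {
        // Hypothetical stricter rule for this particular domain.
        if (take > 25)
        {
            throw new ArgumentException("The catalog cannot list more than 25 objects at once", "take");
        }
    }
}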
Answering a comment/concern from @guillaume31 on his answer:
I agree that it isn't a domain-specific rule. But Application and
Presentation aren't domain either. Repository is probably a bit too
sweeping and low-level for me -- what if a command line data utility
wants to fetch a vast number of items and still use the same domain
and persistence layers as the other applications?
Imagine you've defined a repository interface as follows:
public interface IProductRepository
{
    IList<Product> List(int skip = 0, int take = 0);
}
The interface doesn't define a limitation on how many products I can take at once, but see the following implementation of IProductRepository:
public class ProductRepository : IProductRepository
{
    public ProductRepository(int defaultMaxListingResults = -1)
    {
        DefaultMaxListingResults = defaultMaxListingResults;
    }

    private int DefaultMaxListingResults { get; }

    private void EnsureListingArguments(int skip = 0, int take = 0)
    {
        // -1 means "no limit configured"
        if (DefaultMaxListingResults > 0 && take > DefaultMaxListingResults)
        {
            throw new InvalidOperationException($"This repository can't take more results than {DefaultMaxListingResults} at once");
        }
    }

    public IList<Product> List(int skip = 0, int take = 0)
    {
        EnsureListingArguments(skip, take);

        // Actual data access omitted for brevity.
        return new List<Product>();
    }
}
Who said we need to hardcode the maximum number of results that can be taken at once? If the same domain is consumed by different application layers, I don't see why you wouldn't be able to inject different constructor parameters depending on the particular requirements of those application layers.
I can see the same service layer being handed exactly the same repository implementation, configured differently depending on the consumer of the whole domain.
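For instance, at each application's composition root (a sketch of my own, assuming plain constructor wiring rather than any particular DI container):
// Public-facing web application: keep the guard rail.
var webRepository = new ProductRepository(defaultMaxListingResults: 100);

// Internal batch/reporting tool: no limit needed.
var batchRepository = new ProductRepository();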
Not a technical requirement at all
I want to throw in my two cents on a consensus reached by the other answerers, which I believe is only partially right.
The consensus is that a limitation like the one required by the OP is a technical requirement rather than a business requirement.
BTW, it seems like no one has focused on the fact that domains can talk to each other. That is, you don't design your domain and other layers to support only the more traditional execution flow: data <-> data mapping <-> repository <-> service layer <-> application service <-> presentation (this is just a sample flow; there might be variants of it).
The domain should be bulletproof in all possible scenarios or use cases in which it will be consumed or interacted with. Hence, you should also consider the scenario of domain-to-domain interactions.
We should be less philosophical and more ready to look at the real-world scenario, and the rule can come about in two ways:
The entire project isn't allowed to take more than 100 domain objects at once.
One or more domains aren't allowed to take more than 100 domain objects at once.
Some argue that we're talking about a technical requirement, but for me it is a domain rule because it also enforces good coding practices. Why? Because I really think that, at the end of the day, there's no case in which you would want to get an entire domain object collection: pagination has many flavors, and one is infinite-scroll pagination, which can also be applied to command-line interfaces to simulate the feel of a get-all operation. So, force your entire solution to do things right and avoid get-all operations, and the domain itself will probably be implemented differently than when there's no pagination limitation.
BTW, you should consider the following strategy: the domain enforces that you can't retrieve more than 100 domain objects, but any other layer on top of it can also define a limit lower than 100: you can't get more than 50 domain objects at once, otherwise the UI would suffer performance issues. This won't break the domain rule, because the domain won't cry if you artificially limit what you can get within the range of its rule.
Probably in the Application layer, or even Presentation.
Choose Application if you want that rule to hold true for all front ends (web, mobile app, etc.), Presentation if the limit has to do with how much a specific device is able to display on screen at a time.
[Edit for clarification]
Judging by the other answers and comments, we're really talking about defensive programming to protect performance.
It cannot be in the Domain layer IMO because it's a programmer-to-programmer thing, not a domain requirement. When you talk to your railway domain expert, do they bring up or care about a maximum number of trains that can be taken out of any set of trains at a time? Probably not. It's not in the Ubiquitous Language.
Infrastructure layer (Repository implementation) is an option but, as I said, I find it inconvenient and overly restrictive to control things at such a low level. Matías's proposed implementation of a parameterized Repository is admittedly an elegant solution though, because each application can specify its own maximum, so why not - if you really want to apply a broad sweeping limit on XRepository.GetAll() to a whole applicative space.

Unity3D Input.GetKeyUp() polling inefficient?

How come Unity uses polling for all the Input events? Isn't it very inefficient to check each update loop whether there is a new event? If I have 1 million objects doing this each update cycle, I would assume the constant polling would slow down the system significantly.
public void Update() {
    if (Input.GetKeyUp(KeyCode.Escape)) {
        // escape clicked
    }
}
Why is there nothing like this:
public void Start() {
    Input.addKeyUpListener(KeyCode.Escape, delegate {
        // escape clicked
    });
}
Note that Unity is not polling the system every time you call one of those methods - it is instead polling once per frame and then caching the value, as evidenced by the ResetInputAxes function.
This is not the only place Unity does something seemingly insane, but which may be more efficient in the lower levels of code - keep in mind that Unity maintains a lot of customization to the runtime (particularly with garbage collection and construction) that works differently from standard C#.
Note also that callback logic, while great for handlers and generally long-lived objects such as singletons and systems, is not so great for scripts which are normally collected and created several times throughout the lifetime of the game. Since Unity only exposes the ability to make scripts with code, it thus makes more sense to poll, rather than use callbacks which would need attach, handle, and detach behaviours to prevent errors.
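If you do want listener-style semantics, a thin wrapper over the per-frame poll is easy to sketch. This is my own illustration built on Unity's standard MonoBehaviour/Input API, not a built-in feature:
using System;
using System.Collections.Generic;
using UnityEngine;

// Central dispatcher: polls once per frame and fans out to registered callbacks,
// so individual scripts don't each have to poll in their own Update().
public class KeyUpDispatcher : MonoBehaviour
{
    private readonly Dictionary<KeyCode, Action> _listeners = new Dictionary<KeyCode, Action>();

    public void AddListener(KeyCode key, Action callback)
    {
        if (_listeners.ContainsKey(key))
            _listeners[key] += callback;
        else
            _listeners[key] = callback;
    }

    public void RemoveListener(KeyCode key, Action callback)
    {
        if (_listeners.ContainsKey(key))
            _listeners[key] -= callback;
    }

    private void Update()
    {
        // Copy the keys so callbacks may safely add/remove listeners.
        var keys = new List<KeyCode>(_listeners.Keys);
        foreach (var key in keys)
        {
            if (!Input.GetKeyUp(key))
                continue;

            Action callback;
            if (_listeners.TryGetValue(key, out callback) && callback != null)
            {
                callback();
            }
        }
    }
}
A script would then call dispatcher.AddListener(KeyCode.Escape, OnEscape) in Start() and RemoveListener in OnDestroy(); under the hood it is still the same one-poll-per-frame that Unity performs anyway.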

Dojo: topics vs events, what design considerations should be taken in account?

I've been using Dojo in various contexts and never found a good explanation on events versus topics. What I understand from using both mechanisms is the following:
Both are event or more generally message mechanisms.
Both work more or less the same, in that you subscribe to a topic/event by setting a callback.
Events are tightly coupled to an object/widget, as in, you need the actual instance of an object or widget to register listeners for specific events.
The topic mechanism on the other hand provides a more decoupled approach, as you can subscribe for any topic without knowing which component is publishing the topic, or even without knowing if the topic will be published at all.
An approach I used a couple of times when developing custom widgets with Dojo was by letting them publish to certain topics. Other components would subscribe to these topics and react appropriately. However, this leads to code that is hard to follow, because when you find a piece of code that subscribes to a certain topic, you start wondering who is publishing to that topic and vice versa. Currently I tend to let my custom widgets submit events and have a controller listening to these events and dispatch them to other widgets that should react on these events.
So in the first approach, the topic mechanism is the glue between widgets, but it is decentralized which makes it hard to maintain the code on the longer term in my experience. In the second approach, a controller class (following the MVC pattern) is the glue, which centralizes event handling.
I'd be interested in knowing if this is a correct understanding of the two mechanisms. I'd also be interested in any design considerations one should take into account when choosing one of the two (or even mixing them). Any pointers to an elaborate discussion on the topic would be appreciated as well. I have been looking at http://dojotoolkit.org/documentation/tutorials/1.9/events/ but that mainly describes how both mechanisms work and gives little insight into how to structure a complex application.
I'm having the exact same idea about topics and events as you. As JavaScript is event-driven both are of course event-ish (like you describe in your first point).
Events are indeed coupled to the widget itself while topics aren't. I usually see it as the following:
When you have master-slave kind of structure (like a list having many items), then using widgets and events is probably the best approach to handle your problem.
When both widgets are unrelated to each other, then topics are probably the best way to communicate between each other.
You're right, topics make it harder to know what the origin is, but if you think about it, you don't need to know the origin. The topics provide you an API that decouples the source from the destination, making it so that you don't need to know the source.
Because both widgets are unrelated (that's the approach I follow, described before), you normally shouldn't need to know what the origin is when maintaining the code.
What you need is a well-written API, and to make sure both source and destination follow it. If the API changes (code maintenance), you can use your IDE to find out which widgets are publishing/subscribing (for example by searching for the topic name) and make sure each of them is updated.
You can also choose to encapsulate the publish/subscribe behavior and providing a more high level API by creating a module like this:
define([ "dojo/topic", "dojo/_base/array" ], function(topic, arrayUtils) {
var MY_TOPIC = "/my/topic";
var module = {
observers: [],
notify: function(/** String */ name, /** Integer */ age) {
topic.publish(MY_TOPIC, {
name: name,
age: age
});
},
addObserver: function(/** Function */ callback) {
return this.observers.push(callback) - 1;
},
removeObserver: function(/** Integer */ index) {
this.observers[index] = null;
}
};
topic.subscribe(MYTOPIC, function(data) {
arrayUtils.forEach(module.observers, function(observer) {
if(observer !== null && data.name !== undefined && data.age !== undefined) {
observer(name, age);
}
});
});
return module;
});
You publish using the notify() function (providing the correct function parameters) and you add/remove observers with the other functions. Then you will make this component your sole subscriber and make it notify all observers.
This way you don't need to know about the topic and the API is uniform. You only need to make sure that the callbacks use the arguments correctly. To maintain your code you just change the high level API and look for modules that use this high level component. This is way easier to detect since it's in the require() function.
When I use topics I usually create a high level API like this (might change a bit depending on the use of it). But I think the point made is clear, it's easier to change the topic and to modify the data that is sent through.
In terms of design patterns and software architecture, topics seem to be the perfect mechanism to implement Flux in Dojo. Found an article with the basic idea here.

Should I prefer NSNotificationCenter or .NET events when using MonoTouch?

When developing in MonoTouch, is it "better" to use real .NET events or NSNotificationCenter?
Simple example: I have a UIViewController. It offers an event "CallbackWhenDisappeared". This event is triggered in ViewDidDisappear. Whoever is interested can register for the event.
I could as well post a "MyFancyControllerHasDisappeared" on the NSNotificationCenter and let interested objects subscribe there.
Which version is to be preferred?
The disadvantage with the .NET events I see: the disappearing controller might hold a reference to the subscribing controller (or the other way round?) and might not be garbage collected.
I also like the loose coupling when using NSNotificationCenter compared to the events where the classes really have to know each other.
Is there a wrong or a right way of doing it?
I actually prefer to use TinyMessenger. Unlike NSNotifications it handles the asynchronicity of the calls for you as part of the framework.
Managed objects also allow for better debuggability; especially since these are usually cross-container calls, I find this to be very, very useful.
var messageHub = new TinyMessengerHub();

// Publishing a message is as simple as calling the "Publish" method.
messageHub.Publish(new MyMessage());

// We can also publish asynchronously if necessary
messageHub.PublishAsync(new MyMessage());

// And we can get a callback when publishing is completed
messageHub.PublishAsync(new MyMessage(), MyCallback);
// MyCallback is executed on completion
https://github.com/grumpydev/TinyMessenger
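For the receiving side, subscribing looks roughly like this (based on the project's README; MyMessage is assumed to implement TinyMessenger's ITinyMessage):
// Subscribe: the hub returns a token that can be used to unsubscribe later.
var token = messageHub.Subscribe<MyMessage>(message =>
{
    // react to the message
});

// Later, when the listener goes away:
messageHub.Unsubscribe<MyMessage>(token);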
There is no real right or wrong, but in my opinion it looks like this:
NotificationCenter - You don't know which objects are interested in the "events"; you send the notification out and any object can receive it.
.NET Events - Use these if there is a direct connection between two objects, for example when a UIViewController shows another UIViewController modally: the modal UIViewController fires an event when it is about to hide, and the presenting UIViewController is subscribed to it.
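A minimal sketch of the .NET-event variant, with explicit unsubscription so neither controller keeps the other alive longer than intended (the names are illustrative, not from the question):
using System;
using MonoTouch.UIKit;

public class MyFancyController : UIViewController
{
    public event EventHandler Disappeared;

    public override void ViewDidDisappear(bool animated)
    {
        base.ViewDidDisappear(animated);

        var handler = Disappeared;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

public class ParentController : UIViewController
{
    private MyFancyController _child;

    private void ShowChild()
    {
        _child = new MyFancyController();
        _child.Disappeared += OnChildDisappeared;
        PresentViewController(_child, true, null);
    }

    private void OnChildDisappeared(object sender, EventArgs e)
    {
        // The child holds a reference to this controller through the delegate,
        // so detach as soon as the notification is no longer needed.
        _child.Disappeared -= OnChildDisappeared;
        // react to the child disappearing...
    }
}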
