The Caveats of Sub States? - ngxs

I am a newbie and hacking around with ngxs.
In the docs there are caveats to Sub States:
This is only intended to work with nested objects, so trying to create stores on nested array objects will not work.
Sub states can only be used once, reuse implies several restrictions that would eliminate some high value features. If you want to re-use them, just create a new state and inherit from it.
I believe I understand the first point to a small degree but I don't fully grasp what the second point means.
Would someone be able to expand on that?

It means that a single state class can't be a child of multiple parent states. The workaround is to create new states by extending, like so:
@State({
  name: 'foo' // you can't have another state with this name
})
class MyState1 {}

// so if you want to reuse the listeners and such from 'foo' you have to extend
@State({
  name: 'bar'
})
class MyState2 extends MyState1 {}


What is the relation (in class diagrams) between those 3 classes?

I have the following code:
class Synchronization
  def initialize
  end

  def perform
    detect_outdated_documents
    update_documents
  end

  private

  attr_reader :documents

  def detect_outdated_documents
    @documents = DetectOutdatedDocument.new.perform
  end

  def update_documents
    UpdateOutdatedDocument.new(documents).perform
  end
end
@documents is an array of hashes returned from a method in DetectOutdatedDocument.
I then use this array of hashes to initialize the UpdateOutdatedDocument class and run its perform method.
Is something like this correct?
Or should I use associations or something else?
Ruby to UML mapping
I'm not a Ruby expert, but what I understand from your snippet given its syntax is:
There's a Ruby class Synchronization: That's one UML class
The Ruby class has 4 methods: initialize, perform, detect_outdated_documents, and update_documents, the last two being private. These would be 4 UML operations.
initialize is the constructor, and since it's empty, you have not mentioned it in your UML class diagram, and that's ok.
The Ruby class has 1 instance variable @documents. In UML, that would be a property, or a role of an association end.
The Ruby class has a getter created with attr_reader. But since it is in a private section, its visibility should be -. This other answer explains how to work with getters and setters elegantly and accurately in UML (big thanks to @engineersmnky for the explanations on getters in Ruby, and for having corrected my initial misunderstanding in this regard).
I understand that SomeClass.new creates a new object of class SomeClass in Ruby.
Ruby and dynamic typing in UML
UML class diagrams are based on well-defined types/classes. You would normally indicate associations, aggregations and compositions only between known classes that surely have a stable relation. Ruby is dynamically typed, and all that is known for sure about an instance variable is that it's of type Object, the highest generalization possible in Ruby.
Moreover, Ruby methods return the value of the last statement/expression in their execution path. If you don't care about the return value, you'd just mark it as Object (thanks @engineersmnky for the explanation).
Additional remarks:
There is no void type in UML (see also this SO question). A UML operation that does not return anything would just be an operation with no return type indicated.
Also keep in mind that the use of types that do not belong to the UML standard (such as Array, Hash, Object, ...) would suppose the use of a language-specific UML profile.
Based on all this, and considering that an array is also an Object, your code would lead to a very simple UML diagram, with 3 classes that are all specializations of Object, and a one-to-many association between Synchronization and Object, with the role @documents at the Object end.
Is that all we can hope for?
This very general class diagram may match the implementation very well, but it might not accurately represent the design.
It's your right to model a design in UML independently of the implementation. Hence, if the types of instance variables are known by design (e.g. you want them to be of a certain type and make sure, via the initialization and the API design, that the type is enforced), you may well show this in your diagram even if it deviates from the code:
You have done some manual type inference to deduce the return type of the UML operations. Since all Ruby methods return something, we'd expect at least an Object return type for every method. But it would be OK for you not to indicate any return type (the UML equivalent of void) to express that the return value is not important.
You have also done some type inference for the instance variable (UML property): you clarify that the only value it can take is the value returned by DetectOutdatedDocument.new.perform.
Your diagram indicates that the class is related to an unspecified number of DetectOutdatedDocument objects, and we guess it's because of the possible values of @documents. At the same time, the property is indicated as an array of objects. It's very misleading to have both on the diagram, so I recommend removing the documents property. Instead, prefer a documents role at the association end on the DetectOutdatedDocument side. This would greatly clarify for non-Ruby-native readers why there is a second class on the diagram. :-) (It took me a while.)
Also, you should not use the black diamond of composition here. Because documents has a reader, other objects could end up referring to the same documents; and since Ruby has reference semantics for objects, such a copy would refer to the same objects. That's shared aggregation (white diamond) at best, and since UML does not define aggregation semantics very precisely, you could even show a simple association.
One last remark: from the code you show, we cannot confirm that there is an aggregation between UpdateOutdatedDocument and DetectOutdatedDocument. If you are sure there is such a relationship, you may keep it; but if it's only based on the snippet you showed us, remove the aggregation relation. You could at best show a usage dependency, but normally in UML you would not show such a dependency if it only concerns the body of a method, since the operation could be implemented very differently without requiring this dependency.
There is no relation, UML or otherwise, in the posted code. In fact, at first glance it might seem like a Synchronization has-many @documents, but the variable and its contents are never defined, initialized, or assigned.
If this is a homework assignment, you probably need to ask your instructor what the objective is, and what the correct answer should be. If it's a real-world project, you haven't done the following:
defined the collaborator objects like Document
initialized @documents in a way that's accessible to the Synchronization class
allowed your class method to accept any dependency injections
Without at least one of the items listed, your UML diagram doesn't really fit the posted code.

Is there a good way to use polymorphism to remove this switch statement?

I've been reading on refactoring and replacing conditional statements with polymorphism. The trouble I have is that it only seems to make sense to me when you have a more complex case where, without polymorphism, you would have to repeat the same switch statements or if-elses many times. I don't see how it makes sense if you're only doing it once - you have to have that conditional somewhere, right?
As an example, I recently wrote the following class, which is responsible for reading an XML file and converting its data into the program's objects. There are 2 possible formats for the file that we are supporting, so I simply wrote a method in the class for handling each one, and used a case-switch to determine which one to use:
public class ComponentXmlReader
{
    public IEnumerable<UserComponent> ImportComponentsFromXml(string path)
    {
        var xmlFile = XElement.Load(path);
        switch (xmlFile.Name.LocalName)
        {
            case "CaseDefinition":
                return ImportComponentsFromA(xmlFile);
            case "Root":
                return ImportComponentsFromB(xmlFile);
            default:
                // without a default, not all code paths return a value
                throw new NotSupportedException("Unknown root element: " + xmlFile.Name.LocalName);
        }
    }

    private IEnumerable<UserComponent> ImportComponentsFromA(XContainer file)
    {
        //do stuff
    }

    private IEnumerable<UserComponent> ImportComponentsFromB(XContainer file)
    {
        //do stuff
    }
}
As far as I can tell, I could write a class hierarchy for this to do the parsing, but I don't see the advantage here - I'd still have to use a case-switch to determine which class to instantiate. It looks to me like it would be extra complexity for no benefit. If I was going to keep these classes around and do more things with them that depended on the file type, then it would eliminate doing the same switch in multiple places, but this is single-use. Is this right, or is there some reason or technique I'm not seeing that makes it a good idea to use a polymorphic class hierarchy to do this?
If you had, say, an abstract ComponentImporter class with concrete subclasses FromA and FromB, you could instantiate one of each and put them in a map. Then you could call componentImporterMap.get(xmlFile.Name.LocalName).importComponents() and avoid the switch.
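In C#, a rough sketch of that suggestion might look like the following. The type names (ComponentImporter, CaseDefinitionImporter, RootImporter) are made up for illustration; UserComponent is the type from the question:

using System.Collections.Generic;
using System.Xml.Linq;

public abstract class ComponentImporter
{
    public abstract IEnumerable<UserComponent> ImportComponents(XContainer file);
}

public class CaseDefinitionImporter : ComponentImporter
{
    public override IEnumerable<UserComponent> ImportComponents(XContainer file)
    {
        // the question's ImportComponentsFromA logic ("CaseDefinition" format) goes here
        yield break;
    }
}

public class RootImporter : ComponentImporter
{
    public override IEnumerable<UserComponent> ImportComponents(XContainer file)
    {
        // the question's ImportComponentsFromB logic ("Root" format) goes here
        yield break;
    }
}

public class ComponentXmlReader
{
    // one importer per known root element name, looked up instead of switched on
    private readonly Dictionary<string, ComponentImporter> importers =
        new Dictionary<string, ComponentImporter>
        {
            { "CaseDefinition", new CaseDefinitionImporter() },
            { "Root", new RootImporter() }
        };

    public IEnumerable<UserComponent> ImportComponentsFromXml(string path)
    {
        var xmlFile = XElement.Load(path);
        return importers[xmlFile.Name.LocalName].ImportComponents(xmlFile);
    }
}

With this shape, supporting a third format means writing a new subclass and adding one dictionary entry rather than another case.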
As with all design choices, context is key. In this case, you have what seems to be a fairly simple class handling two very similar tasks. If the two Import methods contained very little duplicate code, then including them in a single class is perhaps the best choice since, as you say, it reduces complexity.
However, it's possible you'll use this class in the future, and even add new types of imports. In that case, the class would be more reusable if it was polymorphic.
Additionally, since these methods sound very similar, you're likely to have a bunch of duplicate code, which you could keep in a base class and only put import-specific code in the child classes.
Plus, as Carl mentions, there are a number of ways to implement this logic without using a case statement.
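For instance, building on the sketch after Carl's answer, the duplicate code could sit in the hypothetical ComponentImporter base class, leaving only format-specific traversal in each subclass. The helper and element names here are invented purely for illustration:

using System.Collections.Generic;
using System.Xml.Linq;

public abstract class ComponentImporter
{
    public abstract IEnumerable<UserComponent> ImportComponents(XContainer file);

    // Shared, format-independent work lives once in the base class.
    // ReadComponent is a made-up example of such a helper.
    protected UserComponent ReadComponent(XElement element)
    {
        var component = new UserComponent(); // assumes a parameterless constructor
        // ...populate the component from attributes/children common to both formats...
        return component;
    }
}

public class CaseDefinitionImporter : ComponentImporter
{
    public override IEnumerable<UserComponent> ImportComponents(XContainer file)
    {
        // only the "CaseDefinition"-specific traversal stays here;
        // "Component" is an invented element name for illustration
        foreach (var element in file.Descendants("Component"))
            yield return ReadComponent(element);
    }
}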

Overriding backbone model prototype vs extending model

I'm new to Backbone.js and in my recent project I need a custom validation mechanism for models. I see two ways I could do that.
Extending the Backbone.Model.prototype
_.extend(Backbone.Model.prototype, {
...
});
Creating custom model that inherit from Backbone model
MyApp.Model = Backbone.Model.extend({ ... });
I'm quite unsure which one is a good approach in this case. I'm aware that overriding the prototype is not good for native objects, but does that apply to the Backbone model prototype as well? What kind of problems will I face if I go with the first approach?
You are supposed to use the second approach, that's the whole point of Backbone.Model.extend({}).
It already does your first approach plus other neat tricks to actually set up a proper inheritance chain (_.extend only does a copy of the object properties; you can look up the difference in code between Backbone's extend() and Underscore's _.extend, though it is not very interesting reading. Just extending the .prototype isn't enough for 'real' inheritance).
When I first read your question, I misunderstood and thought you were asking whether to extend from your own Model class or directly from Backbone.Model. That's not your question, so I apologize for the first answer; just to keep a summary here: you can use both approaches. Most large sites I've seen or worked on first extend Backbone.Model to create a generic MyApp.Model (which is why I got confused, that's usually the name they give it :)), which is meant to REPLACE Backbone.Model. Then each model (for instance User, Product, Comment, whatever...) extends this MyApp.Model rather than Backbone.Model. This way, they can modify some global Backbone behavior (for all their Models) without changing Backbone's code.
_.extend(Backbone.Model.prototype, {...mystuff...}) would add your properties to every Backbone.Model and to objects based on it. You might have meant to do the opposite, _.extend({...mystuff...}, Backbone.Model), which won't change Backbone itself.
If you look at the annotated Backbone source you'll see lines like
_.extend(Collection.prototype, Events, { ... Collection functions ...} )
This adds the Events object contents to every Collection, along with some other collection functions. Similarly, every Model has Events:
_.extend(Model.prototype, Events, { ... Model functions ...})
This seems to be a common pattern for making "classes" in JavaScript:
function MyClass(args) {
//Do stuff
}
MyClass.prototype = {....}
It's even used in the Firefox source code.

XNA phase management

I am making a tactical RPG game in XNA 4.0 and was wondering what the best way to go about "phases" is? What I mean by phases is creating a phase letting the player place his soldiers on the map, creating a phase for the player's turn, and another phase for the enemy's turn.
I was thinking I could create some sort of enum and set the code in the Update/Draw methods to run accordingly, but I want to make sure this is the best way to go about it first.
Thanks!
Edit: To anaximander below:
I should have mentioned this before, but I already have something implemented in my application that is similar to what you mentioned. Mine is called ScreenManager and Screen, but it works in exactly the same way. I think the problem is that I've been treating screens, phases, states, etc. as different things when in reality they are the same thing.
Basically what I really want is a way to manage different "phases" in a single screen. One of my screens called map will basically represent all of the possible maps in the game. This is where the fighting takes place etc. I want to know what is the best way to go about this:
Either create an enum called FightStage that holds values like PlacementPhase, PlayerPhase, etc., and then split the Draw and Update methods according to the enum,
or create an external class to manage this.
Sorry for the confusion!
An approach I often take with states or phases is to have a manager class. Essentially, you need a GamePhase object which has Initialise(), Update(), Draw() and Dispose() methods, and possibly Pause() and Resume() as well. Also often worth having is some sort of method to handle the handover. More on that later. Once you have this class, inherit from it to create a class for each phase; SoldierPlacementPhase, MovementPhase, AttackPhase, etc.
Then you have a GamePhaseManager class, which has Initialise(), Update(), Draw() and Dispose() methods, and probably a SetCurrentPhase() method of some kind. You'll also need an Add() method to add states to the manager - it'll need a way to store them. I recommend a Dictionary<> using either an int/enum or string as the key. Your SetCurrentPhase() method will take that key as a parameter.
Basically, what you do is to set up an instance of the GamePhaseManager in your game, and then create and initialise each phase object and add it to the manager. Then your game's update loop will call GamePhaseManager.Update(), which simply calls through to the current state's Update method, passing the parameters along.
Your phases will need some way of telling when it's time for them to end, and some way of handling that. I find that the easiest way is to set up your GamePhase objects, and then have a method like GamePhase.SetNextPhase(GamePhase next) which gives each phase a reference to the one that comes next. Then all they need is a boolean Exiting with a protected setter and a public getter, so that they can set Exiting = true in their Update() when their internal logic decides that phase is over, and then in the GamePhaseManager.Update() you can do this:
public void Update(TimeSpan elapsed)
{
    if (CurrentPhase.Exiting)
    {
        CurrentPhase.HandOver();
        CurrentPhase = CurrentPhase.NextPhase;
    }
    CurrentPhase.Update(elapsed);
}
You'll notice I change phase before the update. That's so that the exiting phase can finish its cycle; you get odd behaviour otherwise. The CurrentPhase.HandOver() method basically gets the current phase to pass on anything the next phase needs to know to carry on from the right point. This is probably done by having it call NextPhase.Resume() internally, passing it any info it needs as parameters. Remember to also set Exiting = false in here, or else it'll keep handing over after only one update loop.
The Draw() methods are handled in the same way - your game loop calls GamePhaseManager.Draw(), which just calls CurrentPhase.Draw(), passing the parameters through.
If you have anything that isn't dependent on phase - the map, for example - you can either store it in the GamePhaseManager and call its methods from GamePhaseManager's methods, have the phases pass it around and call its methods themselves, or keep it at the top level and call its methods alongside GamePhaseManager's. It depends how much access the phases need to it.
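For reference, here is a bare-bones sketch of the GamePhase and GamePhaseManager described above. It assumes a string key for the phase dictionary, a SpriteBatch parameter for Draw(), and a parameterless Resume(); the answer leaves those details open, so treat this as one possible shape rather than the implementation:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework.Graphics;

public abstract class GamePhase
{
    public GamePhase NextPhase { get; private set; }
    public bool Exiting { get; protected set; }

    public void SetNextPhase(GamePhase next) { NextPhase = next; }

    public virtual void Initialise() { }
    public abstract void Update(TimeSpan elapsed);
    public abstract void Draw(SpriteBatch spriteBatch);
    public virtual void Dispose() { }

    // called on the next phase so it can pick up from the right point
    public virtual void Resume() { }

    public virtual void HandOver()
    {
        if (NextPhase != null)
            NextPhase.Resume();
        Exiting = false; // otherwise it keeps handing over every update
    }
}

public class GamePhaseManager
{
    private readonly Dictionary<string, GamePhase> phases = new Dictionary<string, GamePhase>();

    public GamePhase CurrentPhase { get; private set; }

    public void Add(string key, GamePhase phase) { phases[key] = phase; }
    public void SetCurrentPhase(string key) { CurrentPhase = phases[key]; }

    public void Initialise() { foreach (var phase in phases.Values) phase.Initialise(); }
    public void Dispose() { foreach (var phase in phases.Values) phase.Dispose(); }

    // same logic as the Update() shown above
    public void Update(TimeSpan elapsed)
    {
        if (CurrentPhase.Exiting)
        {
            CurrentPhase.HandOver();
            CurrentPhase = CurrentPhase.NextPhase;
        }
        CurrentPhase.Update(elapsed);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        CurrentPhase.Draw(spriteBatch);
    }
}

Each concrete phase (SoldierPlacementPhase, MovementPhase, AttackPhase, etc.) would then inherit from GamePhase and set Exiting = true from its Update() when its own logic decides the phase is over.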
EDIT
Your edit shows that a fair portion of what's above is known to you, but I'm leaving it there to help anyone who comes across this question in future.
If you already have a manager to handle stages of the game, my immediate instinct would be to nest it. You saw that your game has stages and built a class to handle them; now you have a stage that has its own stages, so why not reuse the stage-handling code you already wrote? Inherit from your Screen object to create a SubdividedScreen class, or whatever you feel like calling it. This new class is mostly the same as its parent, but it also contains its own instance of the ScreenManager class. Replace the Screen object you're calling map with one of these SubdividedScreen objects, and fill its ScreenManager with Screen instances to represent the various stages (PlacementPhase, PlayerPhase, etc.). You might need a few tweaks to the ScreenManager code to make sure the right info can get to the methods that need it, but it's much neater than having a messy Update() method subdivided by switch cases.

How to implement polymorphism in sproutcore?

I am developing an application that involves a type hierarchy and started by defining the models for each type via inheritance. When it comes to writing the corresponding controllers I am not sure how to approach the whole thing in a clean way. Should I write only one controller for the base type that is able to handle derived models or should there be one controller for each subtype? How should the view-controller bindings be set up to work with the different controllers?
You might want to check out SproutCore's new experimental polymorphism support: http://groups.google.com/group/sproutcore-dev/browse_thread/thread/b63483ab66333d15
Here's some information on defining sub-classes and overriding properties and methods:
http://wiki.sproutcore.com/w/page/12412971/Runtime-Objects.
From my (limited) use of SproutCore, I've only been able to bind 1 view to 1 controller.
As such, if you are planning to use a single view (e.g. ListView) to display your data, then I think you will only be able to bind that view to 1 controller. This means the 1 base type that is able to handle derived models seems to be the way to go.
Typically you populate the content of ArrayController instances with the results of App.store.find calls. SC.Store#find can take an SC.Query instance, which typically looks like:
MyApp.myController.set('content', MyApp.store.find(SC.Query.local(MyApp.MyModel)));
This should return all instances of MyApp.MyModel, including any instances of MyApp.MyModel's subclasses.
The first argument to SC.Query.local can either be an SC.Record subclass or a string referring to the subclass. So if you've got some intermediary SC.Record subclasses, you might want to try using them there.
Controllers should just be proxies for objects when dealing with single instances of your model. In other words, an ObjectController can proxy anything. Here is what I mean in code:
You have two objects, Person and Student.
App.Person = SC.Object.extend({
  // person stuff here
})

App.Student = App.Person.extend({
  // student stuff here; you have all Person things because you are extending Person.
})
You then want to define controllers:
App.personController = SC.ObjectController.create({
  contentBinding: 'App.path.to.person'
})

App.studentController = SC.ObjectController.create({
  contentBinding: 'App.path.to.student'
})
Note that you would only bind the controller's content to something if the person/student is the result of a selection or some other flow where bindings fire. In other words, if you set the person manually (say from a statechart, as the result of an interaction), you would still define the controller but would do:
App.personController.set('content', person);
You set up the controller differently depending on whether the Person is a 'top level' object in your app or some intermediate object that gets selected. Also, you might only need one controller; you would only have both a studentController and a personController if you were acting on a person and a student at the same time. Both are just ObjectControllers, and those can proxy anything.
Finally, in your view you would bind the relevant view element to the controller:
...
nameView: SC.LabelView.design({
  layout: { /* props */ },
  valueBinding: SC.Binding.oneWay('App.personController.name')
})
...
Note that the one-way binding is for when the name will not be changed by the view; if the view can change the name, just use a normal binding. Also note the path here: I am not binding to
'App.personController.content.name'
Since the personController proxies the object, you bind to
'namespace.controller.property', where the property is one defined on the object the controller proxies.
If you are putting a lot of business logic in your controller, you are doing it wrong. Controllers should just be for proxying objects (at least ObjectControllers should be). Business logic should be on the models themselves, and decision making logic should be in statecharts.
