During the bot builder v4 preview release I was able to get my state through the turnContext like so:
var state = await turnContext.GetConversationState<MyConversationState>();
state.CounterState.Count++; // state updated... no other steps
Now, with the non-preview release, I have to set up accessors to get my state, which makes the whole process feel very convoluted, like so:
var state = await _accessors.CounterState.GetAsync(turnContext, () => new CounterState());
state.TurnCount++;
await _accessors.CounterState.SetAsync(turnContext, state);
await _accessors.ConversationState.SaveChangesAsync(turnContext);
await turnContext.SendActivityAsync(responseMessage);
I understand how to use and implement the accessors; I just don't get the point of them. Can someone explain why the second method above is better than the first? In the first method I had a state class that held all my data, which I could manage within that class. Now, from what I understand, that class I had before becomes an accessor?
You do not need to use accessors if you do not need/want to. They exist so that developers can expose only the properties they want to expose to specific components of their application.
An example could be if you were collecting personal data about a user in your app but had to pass your state off to be read or written by another component of your application that does not need the user's personal data. Accessors let you expose pieces of your state without exposing everything.
If you do not need this security/functionality you do not need to use accessors.
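As a rough illustration of that idea (the state classes and accessor classes below are made up for this example, not taken from your bot), two accessor classes can wrap the same state while exposing different slices of it to different components:

using Microsoft.Bot.Builder;

public class PersonalData
{
    public string FullName { get; set; }
    public string Email { get; set; }
}

public class CounterState
{
    public int TurnCount { get; set; }
}

// Hand this to components that legitimately need the personal data...
public class PersonalDataAccessors
{
    public PersonalDataAccessors(UserState userState)
    {
        PersonalData = userState.CreateProperty<PersonalData>("PersonalData");
    }

    public IStatePropertyAccessor<PersonalData> PersonalData { get; }
}

// ...and this to components that only need the counter; they never see PersonalData.
public class CounterAccessors
{
    public CounterAccessors(ConversationState conversationState)
    {
        CounterState = conversationState.CreateProperty<CounterState>("CounterState");
    }

    public IStatePropertyAccessor<CounterState> CounterState { get; }
}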
Is it possible to execute some C# code when checking the "Is Approved" checkbox for a Member?
Our site has a registration form which programmatically creates a user in the Members section; however, the new Members must be approved by an admin, and we would like to send an email to the Member when they are approved.
I think what you will need to do is look at the MemberService.Saving and MemberService.Saved events and attach a custom event handler. See "Determining if an entity is new" for information on how to tell whether you are dealing with a new or an existing member. The following is copied from the documentation:
In v6.2+ and 7.1+ you can use the extension method on any implementation of IEntity (which is nearly all models returned by the Umbraco Services):
var isNew = entity.IsNewEntity();
How it works
This is all possible because of the IRememberBeingDirty interface. Indeed the name of this interface is hilarious but it describes exactly what it does. All entities implement this interface which is extremely handy as it tracks not only the property data that has changed (because it inherits from yet another hilarious interface called ICanBeDirty) but also the property data that was changed before it was committed.
From here you should be able to check the property data you are interested in and send your email accordingly.
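A rough sketch of how that wiring could look, assuming Umbraco 7-era APIs (the exact namespaces and the dirty-property name for approval may differ in your version, so treat this as a starting point rather than a drop-in handler):

using Umbraco.Core;
using Umbraco.Core.Events;
using Umbraco.Core.Models;
using Umbraco.Core.Services;

public class MemberApprovalEvents : ApplicationEventHandler
{
    protected override void ApplicationStarted(
        UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
    {
        MemberService.Saving += OnMemberSaving;
    }

    private static void OnMemberSaving(IMemberService sender, SaveEventArgs<IMember> e)
    {
        foreach (var member in e.SavedEntities)
        {
            // Only react when an existing member has just been switched to approved.
            // "IsApproved" is an assumption for the dirty-property key; verify it in your setup.
            if (!member.IsNewEntity() && member.IsPropertyDirty("IsApproved") && member.IsApproved)
            {
                // Send the notification email here (SmtpClient, your own mail service, etc.).
            }
        }
    }
}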
In the Flux TodoMVC example, I saw that the TodoApp component asks the store for its state.
Should the view create an action and let the dispatcher call the store instead?
The views that are listening for the stores' "change" event are called controller-views, because they have this one controller-like aspect: whenever the stores change, they get data from the stores and pass it to their children through props.
The controller-views are the only views that should be calling the stores' getters. The getters should be the only public API that the stores expose. Stores have no setters.
It's very tempting to call the stores' getters within the render() method of some component deep in the tree, but this is an anti-pattern. It violates the unidirectional data flow, making it more difficult to understand the flow of data through the application, and it makes your rendering more expensive.
In the TodoMVC Flux example, the TodoApp component is the only controller-view.
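For illustration, a controller-view loosely following the TodoMVC example looks roughly like this (store and component names are approximations of that example; treat the details as a sketch):

var TodoApp = React.createClass({
  getInitialState: function(){
    return { allTodos: TodoStore.getAll() };
  },

  componentDidMount: function(){
    TodoStore.addChangeListener(this._onChange);
  },

  componentWillUnmount: function(){
    TodoStore.removeChangeListener(this._onChange);
  },

  render: function(){
    // pass store data down to children via props; the children never call the store's getters
    return <MainSection allTodos={this.state.allTodos} />;
  },

  _onChange: function(){
    this.setState({ allTodos: TodoStore.getAll() });
  }
});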
You have to get values from the stores somehow. Here are the options:
Get the value directly from the store, e.g. postsStore.get('firstPost')
You won't be notified of changes, so don't use this method.
Get & subscribe to the store using lifecycle methods on the component:
componentWillMount: function(){
    var _this = this;
    myStore.subscribe(function(newValue){
        _this.setState({
            myValue: newValue
        });
    });
},
componentWillUnmount: function(){
    // don't forget to unsubscribe from the store here
}
Get & subscribe to the store using mixins. Most Flux implementations give you a mixin for this, so the value from the store is set on the component's state whenever the value in the store changes.
An example from Reflux:
mixins: [Reflux.connect(myStore, 'myValue')],
render: function(){
    // here you have access to this.state.myValue
}
Subscribe to an action. This can be useful for rendering errors that you don't want to keep in a store, but you can use it for whatever you want.
The implementation is the same as the previous one, but subscribe to the action instead of the store.
The best way to stay in sync with stores is to subscribe to them.
So the answer to your question is:
Yes, it's OK, and no, you shouldn't call methods on stores from components.
It's OK to call methods on stores as long as they are pure (they don't change data in the store), so you should only call getters.
But if you want (and you should) to be notified of changes in a store, you should subscribe to it. Since manual subscribing can be wrapped in a mixin (your own, or one from a Flux library), use that: a SubscribingMixin(MyStore) calls methods on the store internally, rather than you doing it directly in the component.
But if you are thinking about reinventing Flux, note that there is no real difference between subscribing to a store and subscribing to an action, so it is possible to implement it so that all data flows through actions.
Views can get the state of Stores directly.
Action + Dispatcher is the Flux way to change a Store's state, not to access existing Store data.
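To make that contrast concrete, here is a rough sketch of the write path (names loosely follow the TodoMVC example and are illustrative):

var TodoActions = {
  create: function(text){
    AppDispatcher.dispatch({
      actionType: 'TODO_CREATE',
      text: text
    });
  }
};

// a view calls TodoActions.create('Buy milk'); the store updates itself in its
// registered dispatcher callback and then emits a change event for the controller-views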
I'm working on my pet project where I have the following scenario:
a user can create an article and becomes its owner
only the article owner can edit a given article
I wonder how to model it correctly. I don't want to have dumb objects like User and Article that only have properties, but would like them to have some behavior. This is how I'd approach it initially:
article = articles_repository.find(id)
if article.changeable_by(user)
  article.change(title, content)
  articles_repository.save(article)
else
  raise NoEditRights
end
My only concern here is that I need to check whether the user can modify the article before I make any modifications.
Another approach would be to pass the current user to the change method and let the article check it, raising an error if the user is not allowed to change it.
I was also thinking about something like this:
article = articles_repository.find(id)
article.as_user(user) do
  article.change(title, content)
  articles_repository.save(article)
end
but I don't know if it is any better.
How would you approach such a case? How do I internally prevent an article from being changed by other users? I know it is quite simple, but I'd like to grasp how to model such cases before I jump into something more difficult.
EDIT: some more info added
So this is a content publishing application: users can write and publish articles, and others can read them and comment on them.
This is a really simple app (just a toy project), and I can see the following bounded contexts here:
publishing article
editing article
some others that are not important, I guess (like commenting on an article)
I'm not sure whether I should introduce different models for each context.
These are not bounded contexts, but use cases.
From what you say, I guess there would be two bounded contexts: publishing and access management. Access management - unless you're willing to introduce some out-of-the-ordinary mechanisms - is a generic concern that probably doesn't require your focus or DDD; just add a good library that already solves this problem, and maybe wrap it with an application service.
So in some app service there would be a method doing something like (pseudocode, sorry, I don't know Ruby):
var user = auth::authenticationService.getUser(...)
if user.hasAccessTo(articleId) then
  var article = pub::articleRepo.get(articleId)
  article.doSomething()
end
Note that the authentication service and the user belong to one context (auth), and the article and article repo to another (pub). There is only a small connection between them. The user doesn't know anything about articles in the pub context (it's just a value object storing the id), and the article doesn't know anything about access management (but probably has a value object for the user that contains his name).
Another way would be to introduce some tiny objects in the pub context, like Author, Editor and Commenter, representing the roles over the article:
var role = pub::roleService.getAuthorFor(articleId, userId)
if role != null then
  role.doSomethingWithArticle()
end
where roleService acts as an anti-corruption layer between auth and pub (it calls the authenticationService, gets a user object full of auth-specific stuff and, based on it, constructs a lightweight role object that contains only pub-specific behavior).
The second example sounds heavier, but it copes better with changes in either of the contexts.
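In case it helps, here is a rough Ruby sketch of what such a role object and roleService could look like (all names are made up for illustration; the auth side is only hinted at):

class Author
  def initialize(article)
    @article = article
  end

  # only the behaviour the pub context needs is exposed here
  def change_article(title, content)
    @article.change(title, content)
  end
end

class RoleService
  def initialize(authentication_service, articles_repository)
    @authentication_service = authentication_service
    @articles_repository = articles_repository
  end

  # anti-corruption layer: translate the auth-context user into a pub-context role
  def author_for(article_id, user_id)
    user = @authentication_service.get_user(user_id)
    return nil unless user && user.has_access_to?(article_id)

    Author.new(@articles_repository.find(article_id))
  end
end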
I am using MVVM Light and figured out that the ViewModelLocator can be used to grab any view model, and thus I can use it to grab values.
I've been doing something like this:
public class ViewModel1
{
    public ViewModel1()
    {
        var vm2 = new ViewModelLocator().ViewModel2;
        string name = vm2.Name;
    }
}
This way, if I need to go between views, I can easily get other values. I am not sure if this is best practice though (it seems so convenient that it makes me wonder if it is bad practice, lol), as I know there is some Messenger class thing and I'm not sure if that is the way I should be doing it.
Edit
static ViewModelLocator()
{
    ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
    SimpleIoc.Default.Register<ViewModel1>();
    SimpleIoc.Default.Register<ViewModel2>();
}

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance",
    "CA1822:MarkMembersAsStatic",
    Justification = "This non-static member is needed for data binding purposes.")]
public ViewModel1 ViewModel1
{
    get
    {
        return ServiceLocator.Current.GetInstance<ViewModel1>();
    }
}
Edit
Here is a scenario that I am trying to solve.
I have a view that you add a price and store name to. When you click on the textbox for the store name, you are transferred to another view. This view has a textbox where you type the store you are looking for; as you type, a select list gets populated with all the possible matches and information about each store.
The user then chooses the store they want. They are transferred back to the view where they "add the price", and now the store name is filled in as well.
If they hit the "add" button, it takes the price, the store name, and the barcode (this came from the view BEFORE the "add price" view) and sends them to a server.
So as you can see I need data from different views.
I'm trying to understand what your scenario is. In the MVVMlight forum, you added the following context to this question:
"I have some data that needs to be passed to multiple screens and possibly back again."
For passing data between VMs, I would also - as Matt above - use the Messenger class of MVVMLight as long as it is "fire and forget". But it is the "possibly back again" comment that sounds tricky.
I can imagine some scenarios where this can be needed. Eg. a wizard interface. In such a case I would model the data that the wizard is responsible for collecting and then bind all Views to the same VM, representing that model object.
But that's just one case.
So maybe if you could provide a little more context, I would be happy to try and help.
Yes, you can do this, inasmuch as the code will work, but there is a big potential issue you may run into in the future.
One of the strong arguments for using the MVVM pattern is that it makes it easier to write code that can be easily tested.
With your code above you can't test ViewModel1 without also having ViewModelLocator and ViewModel2. Maybe that's not too bad in and of itself, but you've set a precedent that this type of strong coupling of classes is acceptable. What happens, in the future, when you need to change ViewModel2, or test ViewModel1 in isolation?
From a testing perspective you would probably benefit from being able to inject your dependencies. This means passing the external objects or information you need, typically to the constructor.
This could mean you have a constructor like this:
public ViewModel1(string vm2Name)
{
    string name = vm2Name;
}
that you call like this:
var vm1 = new ViewModel1(ViewModelLocator.ViewModel2.Name);
There are a few other issues you may want to consider as well.
You're also creating a new ViewModelLocator just to access one of its properties. You probably already have an instance of the locator defined at the application level. You're creating more work for yourself (and the processor) if you're newing up additional, unnecessary instances.
Do you really need a complete instance of ViewModel2 if all you need is the name? Avoid creating and passing more than you need to.
Update
If you capture the store in the first view/VM, then why not pass that (ID and/or name) to the second VM from the second view? The second VM can then send that to the server with the data captured in the second view.
Another approach may be to just use one viewmodel for both views. This may make your whole problem go away.
If you have properties in 1 view or view model that need to be accessed by a second (or additional) views or view models, I'd recommend creating a new class to store these shared properties and then injecting this class into each view model (or accessing it via the locator). See my answer here... Two views - one ViewModel
Here is some sample code, still using SimpleIoc:
public ViewModelLocator()
{
    ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
    SimpleIoc.Default.Register<IMyClass, MyClass>();
}

public IMyClass MyClassInstance
{
    get { return ServiceLocator.Current.GetInstance<IMyClass>(); }
}
Here is a review of SimpleIOC - how to use MVVMLight SimpleIoc?
However, as I mentioned in my comments, I changed to use the Autofac container so that my supporting/shared classes could be injected into multiple view models. This way I did not need to instantiate the Locator to access the shared class. I believe this is a cleaner solution.
This is how I registered MyClass and the ViewModels with the Autofac container:
var builder = new ContainerBuilder();
var myClass = new MyClass();
builder.RegisterInstance(myClass);
builder.RegisterType<ViewModel1>();
builder.RegisterType<ViewModel2>();
_container = builder.Build();
ServiceLocator.SetLocatorProvider(() => new AutofacServiceLocator(_container));
Then each ViewModel (ViewModel1, ViewModel2) that requires an instance of MyClass just adds it as a constructor parameter, as in the answer I linked initially.
MyClass will implement PropertyChanged as necessary for its properties.
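For illustration, the view model end of that might look roughly like this (the StoreName member on MyClass is a placeholder, not something from your code):

using GalaSoft.MvvmLight;

public class ViewModel1 : ViewModelBase
{
    private readonly MyClass _shared;

    // Autofac satisfies this parameter with the single registered MyClass instance,
    // the same one that is injected into ViewModel2.
    public ViewModel1(MyClass shared)
    {
        _shared = shared;
    }

    // hypothetical shared property surfaced for binding
    public string StoreName
    {
        get { return _shared.StoreName; }
        set { _shared.StoreName = value; RaisePropertyChanged(); }
    }
}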
Ok, my shot at an answer for your original question first is: Yes, I think it is bad to access one VM from another VM, at least in the way it is done in the code example of this question. For the same reasons that Matt is getting at - maintainability and testability. By "newing up" another ViewModelLocator in this way you hardcode a dependency into your view model.
So one way to avoid that is to consider Dependency Injection. This will make your dependencies explicit while keeping things testable. Another option is to use the Messenger class of MVVMLight that you also mention.
In order to write maintainable and testable code in the context of MVVM, ViewModels should be as loosely coupled as possible. This is where the Messenger of MVVMLight can help. Here's a quote from Laurent on what the Messenger class was intended for:
I use it where decoupled communication must take place. Typically I use it between VM and view, and between VM and VM. Strictly speaking you can use it in multiple places, but I always recommend people to be careful with it. It is a powerful tool, but because of the very loose coupling, it is easy to lose the overview on what you are doing. Use it where it makes sense, but don't replace all your events and commands with messages.
So, to answer the more specific scenario you mention, where one view pops up another "store selection" view and the latter must set the current store when returning to the first view, this is one way to do it (the "Messenger way"):
1) On the first view, use EventToCommand from MVVMLight on the TextBox to bind the desired event (eg. GotFocus) to a command exposed by the view model. It could eg. be named OpenStoreSelectorCommand.
2) The OpenStoreSelectorCommand uses the Messenger to send a message, requesting that the Store Selector dialog should be opened. The StoreSelectorView (the pop-up view) subscribes to this message (registers with the Messenger for that type of message) and opens the dialog.
3) When the view closes with a new store selected, it uses the Messenger once again to publish a message that the current store has changed. The main view model subscribes to this message and can take whatever action it needs when it receives the message. Eg. update a CurrentStore property, which is bound to a field on the main view.
You may argue that this is a lot of messaging back and forth, but it keeps the view models and views decoupled and does not require a lot of code.
That's one approach. That may be "old style" as Matt is hinting, but it will work, and is better than creating hard dependencies.
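Here is a very rough sketch of what steps 1-3 could look like in code, using MVVM Light's Messenger (the message type and the property/command names are illustrative, not from your project):

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Command;
using GalaSoft.MvvmLight.Messaging;

// hypothetical message type carrying the selected store back to the main view model
public class StoreSelectedMessage
{
    public string StoreName { get; set; }
}

public class AddPriceViewModel : ViewModelBase
{
    private string _currentStore;

    public AddPriceViewModel()
    {
        // step 2: ask for the selector to be opened (a view or service listens for this)
        OpenStoreSelectorCommand = new RelayCommand(
            () => Messenger.Default.Send(new NotificationMessage("OpenStoreSelector")));

        // step 3: receive the selected store when the selector closes
        Messenger.Default.Register<StoreSelectedMessage>(
            this, msg => CurrentStore = msg.StoreName);
    }

    public RelayCommand OpenStoreSelectorCommand { get; private set; }

    public string CurrentStore
    {
        get { return _currentStore; }
        set { Set(ref _currentStore, value); }
    }
}

public class StoreSelectorViewModel : ViewModelBase
{
    // called when the user confirms their selection
    public void ConfirmSelection(string storeName)
    {
        Messenger.Default.Send(new StoreSelectedMessage { StoreName = storeName });
    }
}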
A service-based approach:
For a more service-based approach take a look at this recent MSDN Magazine article, written by the inventor of MVVMLight. It gives code examples of both approaches: The Messenger approach and a DialogService-based approach. It does not, however, go into details on how you get values back from a dialog window.
That issue is tackled, without relying on the Messenger, in this article. Note the IModalDialogService interface:
public interface IModalDialogService
{
    void ShowDialog<TViewModel>(IModalWindow view, TViewModel viewModel, Action<TViewModel> onDialogClose);
    void ShowDialog<TDialogViewModel>(IModalWindow view, TDialogViewModel viewModel);
}
The first overload has an Action delegate parameter that is attached as the event handler for the Close event of the dialog. The TViewModel parameter for the delegate is set as the DataContext of the dialog window. The end result is that the view model that caused the dialog to be shown initially can access the view model of the (updated) dialog when the dialog closes.
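As a rough illustration of the first overload, a call site might look like this (the dialog service field, window and view-model types are placeholders, not from the article or your code):

// _dialogService is an injected IModalDialogService; _storeSelectorWindow is an IModalWindow
var selectorViewModel = new StoreSelectorViewModel();

_dialogService.ShowDialog(
    _storeSelectorWindow,
    selectorViewModel,
    vm => CurrentStore = vm.SelectedStoreName); // runs on Close, with the updated dialog VM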
I hope that helps you further!
I would like to "simulate" global variables in Angular. I would also like to be able to initialize those "global variables" in an App_Init()-type of handler. Such an initialization will require $http calls to populate those variables. I want all of the "global variables" to be completely loaded before Angular "calls up" controllers and views, because controllers would depend on the initialized data.
What are some best practices for that in Angular? Nested controllers? Services?
Example: An app that manages items in a restaurant's menu. Each item will be associated with a category (beverage, appetizer, dessert, etc.). I need to load all of those categories first before Angular "even touches" the controllers and views for the food items.
Are you using ng-view and the $routeProvider service for your app? Assuming you are, or will consider doing so, here is what you can do, step by step:
Build a service that provides access to the categories for its callers. My idea is that this service would load the categories from the server the first time it is called and then cache the loaded data, so that the next time it is called a cached copy is served instead, saving a request to the server. Let's call this service categories for now.
Use the resolve property of the route object to ensure the dependency on the categories service is resolved before loading the view the user is requesting (and the corresponding controller). As a result, you can inject the categories service into the controller and be sure the service is always available, because it has already been resolved. (A rough sketch of these two steps follows at the end of this answer.)
If you have never worked with the resolve property when configuring routing before, here is an example and the part of the official docs that talks about it. I recommend you go through them.
Also, in order to understand how resolve works, you need to be familiar with the concepts of promises and deferreds. If you are not, here is a good starting point on the topic. $q is AngularJS's implementation of promises/deferreds.
I can't say whether the above approach is the best practice, but I know it's good practice to use a service to provide access to data/functions as if they were global variables.
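A rough sketch of steps 1 and 2, assuming ngRoute (the module, URL and route names are illustrative):

var app = angular.module('menuApp', ['ngRoute']);

// step 1: a service that loads the categories once and then serves a cached promise
app.factory('categories', ['$http', function($http) {
    var cached = null;
    return {
        load: function() {
            if (!cached) {
                cached = $http.get('/api/categories').then(function(response) {
                    return response.data;
                });
            }
            return cached; // a promise that resolves to the categories
        }
    };
}]);

// step 2: the view/controller is not instantiated until the resolve entry has resolved
app.config(['$routeProvider', function($routeProvider) {
    $routeProvider.when('/menu', {
        templateUrl: 'menu.html',
        controller: 'MenuCtrl',
        resolve: {
            loadedCategories: ['categories', function(categories) {
                return categories.load();
            }]
        }
    });
}]);

// the resolved value is injected into the controller by the resolve key name
app.controller('MenuCtrl', ['$scope', 'loadedCategories', function($scope, loadedCategories) {
    $scope.categories = loadedCategories;
}]);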
You can create a service that loads the configuration from the server and puts it in $rootScope. In your controller you call this service from a function, and you call that function from ng-init in your view.
View
<mytag ng-init="myFunc()">
Controller
module.controller('MyCtrl', function($scope, myService){
    $scope.myFunc = function(){
        myService();
    };
});
Your service (initializes the app):
module.factory('myService', function($http, $rootScope){
    // a factory returning a function, so the injected myService is callable as myService()
    return function(){
        return $http.get('/config').then(function(response){ // adjust the URL to your endpoint
            $rootScope.config = response.data;
        });
    };
});
Another tip:
If you run into trouble with asynchronous calls, you can try using $q:
var deferred = $q.defer();

// when your call comes back:
deferred.resolve(yourData);

// and in the last line of the function, hand the promise back to the caller:
return deferred.promise;
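Putting those fragments together, a function using this pattern might look roughly like this (someAsyncCall is a placeholder for whatever callback-based API you are wrapping):

function loadData() {
    var deferred = $q.defer();
    someAsyncCall(function(yourData) {
        deferred.resolve(yourData); // when your call comes back
    });
    return deferred.promise; // the last line of the function
}

// the caller can then chain on the promise:
loadData().then(function(data) {
    $rootScope.config = data;
});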