I am creating my first project that uses ui-router.
My project has about 10 views, each with their own controller and state. I am trying to modularise/encapsulate/decouple as best as possible but I am having trouble working out where to put the onExit and onEnter state callbacks.
The first option is to put it in app.js, which currently defines all of my states. However, I feel this would not be a good place, as it could cause the file to blow up and become hard to read as more states are introduced and the logic gets more complex.
The second option I looked into was to put it into a controller (I have one for each state); however, from my research this doesn't seem to be best practice.
The third option is to create a service that is resolved. However, with this option I would end up with either a giant service full of state-change functions for each of the states (not decoupled), or an additional service per state containing the state-change functionality, and I worry that would increase project complexity.
What is the standard way to achieve this?
Are there any other options that I am missing?
Our strategy for this has been to disregard the onEnter and onExit on the state object, because as you are discovering, they feel like they are in the wrong place in terms of separation of concerns (app.js).
For onEnter: we handle setup in an activate() function in each controller, which we manually execute inside the controller. This happens to also match the callback that will get executed in Angular 2.0, which was not an accident ;).
function activate() {
    // your setup code here
}

// execute it; this line can be removed in Angular 2.0
activate();
For onExit: We rarely need an exit callback, but when we do, we listen for the $scope $destroy event.
$scope.$on("$destroy", function() {
    if (timer) {
        $timeout.cancel(timer);
    }
});
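Putting the two together, here is a minimal controller sketch of this pattern (the module name, controller name, and the $timeout-based polling are just illustrative assumptions):

angular.module('app').controller('DashboardCtrl', function($scope, $timeout) {
    var timer;

    activate();

    function activate() {
        // setup that would otherwise live in onEnter,
        // e.g. kick off a poll for fresh data
        timer = $timeout(refresh, 5000);
    }

    function refresh() {
        // ...reload data here, then schedule the next poll
        timer = $timeout(refresh, 5000);
    }

    // teardown that would otherwise live in onExit
    $scope.$on("$destroy", function() {
        if (timer) {
            $timeout.cancel(timer);
        }
    });
});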
I'm using react-redux for a project I'm working on. I noticed that when I grab an object from my store and edit it, the object in state changes without me dispatching the change (but doesn't trigger a re-render on the components attached to that reducer's object). How can I stop state from changing without a dispatch?
For example if I do:
export function changeNeonGreenColourValue(colour) {
    return (dispatch) => {
        var neonColours = store.getState().colours.neon;
        neonColours.green = colour;
        dispatch(push('./home'));
    };
}
And then in the layoutComponent I log:
console.log(this.props.state.colours.neon.green)
The output is still whatever I passed into changeNeonGreenColourValue() as "colour" but the page doesn't re-render to show that change. I know to get the page to re-render, all I have to do is dispatch the appropriate reducer case but I don't want the state object being altered at all unless I have an appropriate dispatch.
Apparently the 'standard' deep-copying technique for solving this is to JSON-stringify and then parse the object: const copiedObj = JSON.parse(JSON.stringify(sourceObj));. Unfortunately, if you use this on large objects that need copying frequently, you're going to run into performance issues in your app, as I did. If anyone has any suggestions for that, I welcome them.
Edit: both jQuery and Lodash have their own implementations of deep cloning that are supposed to perform better:
https://lodash.com/docs/#cloneDeep
I personally used Lodash's cloneDeep to resolve my issue and it worked fine, with little to no performance impact. I highly recommend it over the JSON.stringify approach.
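For illustration, here is roughly how the thunk above could avoid mutating the store directly (the SET_NEON_GREEN action type and its reducer case are hypothetical, and cloneDeep is only needed if you want a detached copy to work with):

import cloneDeep from 'lodash/cloneDeep';

export function changeNeonGreenColourValue(colour) {
    return (dispatch) => {
        // Work on a copy so the object held by the store is never edited in place
        const neonColours = cloneDeep(store.getState().colours.neon);
        neonColours.green = colour;

        // Let a reducer produce the new state; this is what triggers the re-render
        dispatch({ type: 'SET_NEON_GREEN', colour: colour });
        dispatch(push('./home'));
    };
}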
I am working in Vue, and I also use VueRouter, Vuex, and VueWebsocket. My app has a component called App which holds all other components inside itself. I also have a websocket event handler which is set globally like this:
this.$options.sockets.onmessage = (websocket) => { /* sth1 */ }
When any data arrives over the websocket, sth1 is called. It works like a charm. However, deep inside the App component there is another component, let's call it InputComponent. It may or may not be included in App, because this is a single-page application and some pages include InputComponent while others do not. Inside InputComponent there is also:
this.$options.sockets.onmessage = (websocket) => { /* sth2 */ }
And of course this overwrites the onmessage function, so sth1 will never be executed while InputComponent is nested inside the App component. That much is obvious. However, when I move on (to the next SPA page) and InputComponent disappears, the onmessage handler is still overwritten, whereas I would like to have the original version back.
I could, of course, merge the functionality of sth1 and sth2 in the App component or in InputComponent, but that means repeating myself.
Here comes the question: is there a way to restore the original version of the onmessage handler without reloading the whole App component? In other words, can I temporarily overwrite the function and then come back to its original behaviour? Something like extending the event with the new functionality of sth2.
I hope you get the idea!
K.
The general way to do that would be to use addEventListener and removeEventListener. So in the input component
created() {
    this.$options.sockets.addEventListener('message', handleMessage);
},
destroyed() {
    this.$options.sockets.removeEventListener('message', handleMessage);
}
Note that this approach doesn't prevent the original handler from also receiving the events. Without knowing more about the app architecture, it's hard to suggest the best way to avoid this behavior, but perhaps you can set a messageHandled flag on the event in the component's handler; then check that flag in the parent.
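As a rough sketch of how that could look inside InputComponent (assuming the underlying socket object exposes the standard addEventListener/removeEventListener API, as above):

export default {
    name: 'InputComponent',
    methods: {
        handleMessage(websocket) {
            // sth2: handling specific to this component
        }
    },
    created() {
        // Register an extra listener instead of overwriting onmessage,
        // so the App-level handler (sth1) keeps firing
        this.$options.sockets.addEventListener('message', this.handleMessage);
    },
    destroyed() {
        // Remove only this component's listener when it is torn down
        this.$options.sockets.removeEventListener('message', this.handleMessage);
    }
};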
Mark Dalgleish wrote a nice little article about how to use promises in AngularJS views.
Some people asked questions about this in the comments, but Mark didn't answer them (yet). Because I'm asking myself the same question, I will ask on Stack Overflow instead to get an answer:
If you use promises in views, how do you handle a "loading"/"waiting" indication, given that they are async? Does a promise have something like a "resolved" or "withinRequest" property?
How do I handle errors? Normally they would arise in the second callback, but if I use a promise directly in the view I don't handle this case. Is there another way?
Thank you.
EDIT: as of Angular v1.2, the automatic resolution of promises in views is no longer activated by default.
The automatic resolution of promises in a view looks like a handy tool at first, but it has a number of limitations that need to be understood and evaluated carefully. The biggest issue with this approach is that it is AngularJS that adds the callbacks to a promise, and we've got little control over it.
Answering your questions:
1) As indicated, it is ultimately AngularJS that adds the success/error callbacks, so we don't have much control here. What you could do is wrap the original promise in a custom one that tracks resolution, but that kind of defeats the whole purpose of saving a few keystrokes. And no, there is nothing like a 'resolved' property. In short, there is no universal mechanism for tracking progress that would work for all promises. If your promises are $http-based, you might use interceptors or the $http.pendingRequests property to track requests in progress.
2) You can't. Once again, it is AngularJS that adds a handler inside the $parse service, and it looks like this: promise.then(function(val) { promise.$$v = val; }); (see code here). You can see that only a success callback is added, so all failures are going to be silently ignored.
Those are not the only limitations of the automatic promise resolution in the view. The other problem is that promises returned by a function won't be resolved correctly. For example, if you were to rewrite the example like so:
myModule.controller('HelloCtrl', function($scope, HelloWorld) {
    $scope.messages = function() {
        return HelloWorld.getMessages();
    };
});
and try to use the following markup:
<li ng-repeat="message in messages()"></li>
things would not work as expected, which might come as a surprise.
In short: while the automatic resolution of promises might seem like a handy shortcut, it has a number of limitations and non-obvious behaviors. Evaluate those carefully and decide whether saving a few keystrokes is worth it.
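For completeness, here is a minimal sketch of the explicit alternative, which also addresses both original questions: resolve the promise in the controller and expose plain flags for loading and errors (assuming HelloWorld.getMessages() returns a $q- or $http-based promise):

myModule.controller('HelloCtrl', function($scope, HelloWorld) {
    $scope.loading = true;
    $scope.error = null;

    HelloWorld.getMessages()
        .then(function(messages) {
            $scope.messages = messages; // plain data in the view, no promise magic
        })
        .catch(function(error) {
            $scope.error = error;       // failures are no longer silently ignored
        })
        .finally(function() {
            $scope.loading = false;     // drives a loading indicator, e.g. via ng-show
        });
});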
I recently had a problem with multiple form posting in an ASP.NET MVC application. The situation was basically that if someone intentionally hammered the submit button, they could force data to be posted multiple times despite validation logic (both server and client side) that was intended to prohibit this. This occurred because their posts would go through before the Transaction.Commit() method could run on the initial request (this is all done in NHibernate).
The MVC ActionMethod looked kind of like this..
public ActionResult Create(ViewModelObject model)
{
    if (ModelState.IsValid)
    {
        // ...
        var member = membershipRepository.GetMember(User.Identity.Name);
        // do stuff with member
        // update member
    }
}
There were a lot of solutions proposed, but I found the C# lock statement, and gave it a try, so I altered my code to look like this...
public ActionResult Create(ViewModelObject model)
{
    if (ModelState.IsValid)
    {
        // ...
        var member = membershipRepository.GetMember(User.Identity.Name);
        lock (member)
        {
            // do stuff with member
            // update member
        }
    }
}
It worked! None of my testers can reproduce the bug anymore! We've been hammering away at it for over a day and no one can find any flaw. But I'm not all that experienced with this keyword, so I looked it up again to get clarification...
The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock
Okay, that makes sense. Here is my question.
This was too easy
This solution seemed simple, straightforward, clear, efficient, and clean. It was way too simple. I know better than to think something that complicated has that simple a solution. So I wanted to ask more experienced programmers ...
Is there something bad going on I should be aware of?
No, it's not that easy. Locking only works if the same instance is used.
This will not work:
public IActionResult Submit(MyModel model)
{
    lock (model)
    {
        // will not block, since each post generates its own instance
    }
}
Your example could work. It all depends on whether second-level caching is enabled in NHibernate (and thus the same user instance is returned). Note that it will not prevent anything from being posted to the database; it just means that each post will be saved in sequence.
Update
Another solution would be to return false from the submit button once it has been pressed; that will prevent the button from submitting the form multiple times.
Here is a jQuery script that will fix the problem for you (it goes through all submit buttons and makes sure that each will only submit once):
$(document).ready(function() {
    $(':submit').click(function() {
        var $this = $(this);
        if ($this.hasClass('clicked')) {
            alert('You have already clicked on submit, please be patient..');
            return false;
        }
        $this.addClass('clicked');
    });
});
Add it to your layout or to a JavaScript file.
Update2
Note that the jQuery code works in most cases, but remember that any user with a little programming knowledge can use, for instance, HttpWebRequest to spam POSTs to your web server. It's not likely, but it could happen. The point I'm making is that you should not rely on client-side code to handle these problems, since it can be circumvented.
Yeah, it's that easy, but there may be a performance hit. Remember that a Monitor lock restricts that code to be run by only one thread at a time. There is a new thread for each HTTP request, so that means only one of those requests can access that code at any given time. If it's a long-running procedure, or a lot of people are trying to access that part of the site at the same time, you might start to see sluggish responses.
It's that easy, but be careful what object you lock on. It should be the same one for all the threads - for example, it could be a static object.
lock is syntactic sugar for a Monitor, so there is quite a bit going on under the covers.
Also, you should keep an eye out for deadlocks - they can happen when you lock on two or more objects.
What is the best tool / practice to enable browser history for Flash (or AJAX) websites?
I guess the established practice is to set and read a hash-addition to the URL like
http://example.com/#id=1
I am aware of the Flex History Manager, but was wondering if there are any good alternatives to consider. Would also be interested in a general AJAX solution or best practice.
SWFAddress has been widely used and tested. It makes it almost trivial (given you plan ahead) to handle deeplinking in Flash. It provides a JS and AS library that work together and make the whole process pretty foolproof. You'd want to look at something like RSH for AJAX.
I've used SWFAddress for some small stuff.
For AJAX, something like Really Simple History is great.
This will seem a bit roundabout, but I'm currently using the Dojo framework for that. There's a dojo.back module that was very useful when my UI was mostly JS/HTML. Now that I've gone to Flex for more power, fluid animations, and browser stability, the only thing I've needed to keep using has been the back URL.
FlexBuilder seemed to have its own browser history in default projects.
Also, the Flex 3 Cookbook has a recipe for using mx.managers.HistoryManager to create your own custom history management. I have plans to give this a try someday to remove our dependence on the dojo.back, but haven't had time yet.
I've rolled my own solutions that were ultra-simple like this:
(function($) {
    var oldHash, newHash;

    function checkHash() {
        // Grab the hash
        newHash = document.location.hash;

        // Check to see if it changed
        if (oldHash != newHash) {
            // Trigger a custom event if it changed,
            // passing the old and new values as
            // metadata on the event.
            $(document).trigger('hash.changed', {
                old: oldHash,
                new: newHash
            });

            // Update the oldHash for the next check
            oldHash = newHash;
        }
    }

    // Poll the hash every 10 milliseconds.
    // You might need to alter this time based
    // on performance.
    window.setInterval(checkHash, 10);
})(jQuery);
Then you just need event handlers for the 'hash.changed' event that respond accordingly based on what the new value is. The approach works well in super simple cases.
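A corresponding handler might look like this (assuming jQuery and the polling snippet above are both loaded):

$(document).on('hash.changed', function(event, data) {
    // data.old and data.new are the values passed to trigger() above
    console.log('hash changed from', data.old, 'to', data.new);
    // ...restore whatever view state corresponds to data.new
});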