ZF2: Forward controller plugin and performance

Does the ZF2 forward() controller plugin fire off a new request cycle? Or part thereof?
I am writing a ZF2 MVC application with widgetized content. The widgetized content is exposed via its own controller action because sometimes I need to hit it with ajax.
When I need to incorporate the widgetized content as a sub-view of another action (i.e. on a full page load), that action uses the forward() plugin to get the widgetized content. If it's going to introduce significant overhead, though, I will go straight to the service layer instead (even though that approach is less DRY).
I realise that a performance test will answer this question for me, but I'm a few weeks away from being able to run such a test.
EDIT: when I say 'new request cycle', I mean the ZF2 MVC request cycle, i.e. route, dispatch, etc. Intuitively I would doubt it fires route a second time, but it could start the cycle from dispatch. I'm asking because I know that in ZF1 it triggered a whole second cycle, which was a real performance drain.

There are two options to "forward". Understand that PHP, as a server-side language, is a processor that takes an incoming request and returns a response.
That said, the first "forward" uses in-framework forwarding. This means there is only one request and one response. Internally the framework calls one controller action and then another one. Zend Framework calls this method forward.
The second "forward" is a real redirect, where the first response contains a Location header and the 302 http status code. This results in a second request and consecutively in a second response. Zend Framework calls this method redirect.
So, given the above, the forward() you ask about does not involve any sessions or route-match parameters: the second call to an action happens within the same PHP process, so all variables are already known.

REST API for main page - one JSON or many?

I'm providing RESTful API to my (JS) client from (Java Spring) server.
Main site page contains a number of logical blocks (news, latest comments, some trending stuff), and each of them has a corresponding entity on the server. Which is the right way to go: handle one request like
/api/main_page/ ->
{
  news: { ... },
  comments: { ... },
  ...
}
or let the client do a few requests like
/api/news/
/api/comments/
...
I know that in general it's better to have one large request/response, but does that hold in this situation as well?
Ideally, you should have different API calls for fetching the individual configurable content blocks of the page from the same API.
This way your content blocks are loosely coupled to each other: you can extend them, port them (to a new framework), and modify them independently at any time.
This becomes extremely useful as the application grows. Switching off a feature is fairly easy in this case, A/B testing is easy, and writing automation is also easy. Overall it helps in reducing the testing effort.
But if you really want to fetch this in one call, you can add an extra parameter to the request; when the server sees that parameter, it adds the additional, independent JSON blocks to the response by calling its own methods in the business-logic layer.
And, if speed is your concern, then try caching these calls on the server for some time (how long depends on the type of application).
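For concreteness, here is a minimal client-side sketch (not from the original answer) contrasting the two approaches; the endpoint names and the include parameter are illustrative assumptions:
// Option A: independent, loosely coupled block requests, fetched in parallel.
async function loadBlocksSeparately() {
  const [news, comments] = await Promise.all([
    fetch('/api/news/').then(r => r.json()),
    fetch('/api/comments/').then(r => r.json()),
  ]);
  return { news, comments };
}
// Option B: one aggregated call; the hypothetical include parameter tells the
// server which blocks to assemble via its own business-logic methods.
async function loadBlocksAggregated() {
  const response = await fetch('/api/main_page/?include=news,comments');
  return response.json(); // { news: {...}, comments: {...} }
}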
In general I think multiple requests can be justified when the requested resources reflect parts of the system state (my personal rule of thumb, still a work in progress).
That is, if a news item gets displayed in your client application a lot, I would request it once and reuse it wherever I can. If you aggregate instead, you would need to request it again later, some items may never actually get displayed, and you have some magic to do if the representation of a news item differs between the aggregate and the /news/{id} resource.
This approach increases communication when the page is loaded for the first time, but decreases communication throughout your client application the longer it runs.
The state on the server gets copied to your client request by request, and updated when needed (ETags, Last-Modified, etc.).
In your example it looks like /news and /comments mean something like "latest" or "since last visit", not the full collections. If that is true, I would design them as resources as well, e.g. /comments/latest or similar.
In any case I would have them contain only self-links to the respective /news/{id} or /comments/{id}. A request to /news/latest would then result in a list of news self-links, and I would start a request for an individual item only if I don't already have it (or to check whether a cached copy is still up to date).
It is also possible to trigger the request to /news/{id} only when the item actually gets displayed (scrolling, swiping).
The lifespan of a news item or a comment is probably a criterion for answering this question: caching such short-lived items in the client is not that vital to the system, in contrast to, say, a book in a bookstore app.
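A small sketch of the self-links-plus-client-cache idea, assuming (purely for illustration) that /news/latest returns only self-links and using a plain in-memory Map as the cache:
const newsCache = new Map();
async function fetchNewsItem(selfLink) {
  if (newsCache.has(selfLink)) return newsCache.get(selfLink); // reuse, no request
  const item = await fetch(selfLink).then(r => r.json());
  newsCache.set(selfLink, item);
  return item;
}
async function loadLatestNews() {
  // assume /news/latest returns e.g. [{ self: '/news/42' }, ...]
  const links = await fetch('/news/latest').then(r => r.json());
  // ideally fetch each body only when the item actually scrolls into view
  return Promise.all(links.map(link => fetchNewsItem(link.self)));
}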

Is it a good practice to call an action within another action (in Flux)?

I have an action as follows:
SomeActions.doAction1 = function () {
  // ...dispatch "started" event...
  // ...do some processing...
  FewActions.doAnotherAction(); // CAN WE DO THIS?
  // ...do something more...
  // ...dispatch "completed" event...
};
While the above works with no problems, I am just wondering whether it is valid according to the Flux pattern, or whether there is a better way.
Also, I guess calling Actions from Stores is a bad idea. Correct me if I am wrong.
Yes, calling an Action within another Action is a bad practice. Actions should be atomic; all changes in the Stores should be in response to a single action. They should describe one thing that happened in the real world: the user clicked on a button, the server responded with data, the screen refreshed, etc.
Most people get confused by Actions when they are thinking about them as imperative instructions (first do A, then do B) instead of descriptions of what happened and the starting point for reactive processes.
This is why I recommend to people that they name their Action types in the past tense: BUTTON_CLICKED. This reminds the programmer of the fundamentally externally-driven, descriptive nature of Actions.
Actions are like a newspaper that gets delivered to all the stores, describing what happened.
Calling Actions from Stores is almost always the wrong thing to do. I can only think of one exception: when the Store responds to the first Action by starting up an asynchronous process. When the async process completes, you want to fire off a second Action. This is the case with an XHR call to the server. But the better way is to put the XHR-handling code into a Utils module. The Store can then respond to the first Action by calling a method in the Utils module, and the Utils module has the code that calls the second Action when the server response comes back.
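A hedged sketch of that Utils-module pattern; the dispatcher stub, action types, and the WebAPIUtils name are illustrative assumptions, not a specific library's API:
// minimal stand-in dispatcher, just for illustration
const Dispatcher = {
  handlers: [],
  register(fn) { this.handlers.push(fn); },
  dispatch(action) { this.handlers.forEach(fn => fn(action)); },
};
const ServerActions = {
  // the second Action, fired from the Utils module when the response arrives
  itemsReceived(items) {
    Dispatcher.dispatch({ type: 'ITEMS_RECEIVED', items });
  },
};
const WebAPIUtils = {
  fetchItems() {
    fetch('/api/items')
      .then(r => r.json())
      .then(items => ServerActions.itemsReceived(items));
  },
};
// the Store responds to the first Action by delegating to the Utils module
Dispatcher.register(action => {
  if (action.type === 'REFRESH_CLICKED') WebAPIUtils.fetchItems();
  if (action.type === 'ITEMS_RECEIVED') { /* update store state, emit change */ }
});
// first Action: named in the past tense, describing what happened
Dispatcher.dispatch({ type: 'REFRESH_CLICKED' });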

How to deal with out-of-sequence Ajax requests?

What is the best way to deal with out-of-sequence Ajax requests (preferably using jQuery)?
For example, an Ajax request is sent from the user's browser anytime a field changes. A user may change dog_name to "Fluffy", but a moment later, she changes it to "Spot". The first request is delayed for whatever reason, so it arrives at the server after the second, and her dog ends up being called "Fluffy" instead of "Spot".
I could pass a client-side timestamp along with each request, and have the server track it as part of each Dog record and disregard earlier requests to change the same field (but only if the difference is less than 5 minutes, in case the user changes the time on her machine).
Is this approach sufficiently robust, or is there a better, more standardized approach?
EDIT:
Matt made a great point in his comment. It's much better to serialize requests to change the same field, so is there a standard way of implementing Ajax request queues?
EDIT #2
In response to #cherouvim's comment, I don't think I'd have to lock the form. The field changes to reflect the user's edit, and a change request is placed into the queue; if a request to change the same field is already waiting in the queue, that old request is deleted. Two things I would still have to address:
Placing a request into the queue is an asynchronous task. I could have the callback handler from the previous Ajax request send the next request in the queue. JavaScript code isn't multi-threaded (or... is it?)
If a request fails, I would need the user interface to reflect the state of the last successful request. So, if the user changes the dog's name to "Spot" and the Ajax request fails, the field would have to be set back to "Fluffy" (the last value successfully committed).
What issues am I missing?
First of all, you need to serialize server-side processing for each client. If you are programming in Java, then synchronizing execution on the HTTP session object is sufficient. Serializing will help in case the second update arrives while the first is still being processed.
A second enhancement you can implement in your entity updating is optimistic concurrency control (http://en.wikipedia.org/wiki/Optimistic_concurrency_control). You add a version property (and column) to your entity. Each time an update happens, it is incremented by one. The update statement then looks like:
update ... set version=6 ... where id=? and version=5;
If the affected row count of the above pseudo-query is 0, then someone else has managed to update the entity first. What you do then is up to you. Note that you need to render the version on the entity's HTML update form as a hidden parameter and send it back to the server each time you update. On return, you have to write back the updated version.
Generally the first enhancement would be enough. The second one will improve the system in case many people are editing the same entities at the same time. It solves the "lost update" problem.
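A small client-side sketch of that version round-trip, assuming (purely for illustration) a PUT endpoint that answers 409 when zero rows were affected:
async function saveDog(form) {
  const res = await fetch('/api/dogs/' + form.elements.id.value, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      dog_name: form.elements.dog_name.value,
      version: Number(form.elements.version.value), // hidden field rendered by the server
    }),
  });
  if (res.status === 409) {
    // zero rows affected server-side: someone else updated the entity first
    alert('This record was changed by someone else. Please reload.');
    return;
  }
  const updated = await res.json();
  form.elements.version.value = updated.version; // write back the new version
}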
I would implement a queue on the client side with chaining of successful requests or rollbacks on unsuccessful requests.
You need to define "unsuccessful", be it a timeout or a returned value.
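Along the lines of that client-side queue, here is a hedged sketch that serializes updates and drops superseded changes to the same field; the endpoint and names are illustrative assumptions:
const pending = [];   // queued updates, at most one per field
let inFlight = false;
function queueFieldUpdate(field, value) {
  const existing = pending.findIndex(u => u.field === field);
  if (existing !== -1) pending.splice(existing, 1); // supersede the older change
  pending.push({ field, value });
  sendNext();
}
function sendNext() {
  if (inFlight || pending.length === 0) return;
  inFlight = true;
  const { field, value } = pending.shift();
  fetch('/api/dog', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ [field]: value }),
  })
    .then(r => { if (!r.ok) throw new Error('update failed'); })
    .catch(() => { /* roll the UI back to the last committed value here */ })
    .finally(() => { inFlight = false; sendNext(); }); // chain the next request
}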

Best practice for combining requests with possible different return types

Background
I'm working on a web application utilizing AJAX to fetch content/data and what have you - nothing out of the ordinary.
On the server side, certain events can happen that the client-side JavaScript framework needs to be notified about, and vice versa. These events are not always related to the user's immediate actions. It is not an option to wait for the next page refresh to include them in the document, or to stick them in some hidden fields, because the user might never submit a form.
Right now it is designed in such a way that events to and from the server ride along with the user's requests. For instance, if the user clicks a 'view details' link, this fires a request to the server to fetch some HTML or JSON with details about the clicked item. Along with this request, or rather its response, any server-side (invoked) events are returned together with the content.
Question/issue 1:
I'm unsure how to control the queue of events going to the server. They can ride along with user-invoked requests, but if none occur, the events will get lost. I imagine setting up a timer to send these events to the server in case the user does not perform any action. What do you think?
Question/issue 2:
With regard to the responses, some being requested as HTML and some as JSON, it is a bit tricky, as I would have to somehow wrap all this data to allow both formalized (and unrelated) events and, depending on the request, HTML content to be returned to the client. Any suggestions? Anything I should be aware of, for instance when returning HTML content wrapped in a JSON bundle?
Update:
Do you know of any framework that uses an approach like this that I can look at for inspiration (that is, a framework that wraps events/requests in a package along with data)?
I am tackling a similar problem at the moment. On your first question, I was thinking of implementing some sort of timer on the client side that makes an asynchronous call for the content on expiry.
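A minimal sketch of that timer idea, assuming a hypothetical /api/events endpoint and an arbitrary flush interval:
const outgoingEvents = [];
function queueEvent(event) { outgoingEvents.push(event); }
setInterval(() => {
  if (outgoingEvents.length === 0) return;
  const batch = outgoingEvents.splice(0); // take and clear the queue
  fetch('/api/events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(batch),
  }).catch(() => outgoingEvents.unshift(...batch)); // re-queue on failure
}, 5000); // flush every 5 seconds (arbitrary)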
On your second question, I normally just return JSON representing the data I need, and then present it by manipulating the DOM. I prefer to keep things consistent.
As for best practices, I can't say for sure that what I am doing complies with any best practice, but it works for our present requirements.
You might also want to consider the performance impact of having multiple clients making asynchronous calls to your web server at regular intervals.
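For the HTML-wrapped-in-JSON case from the question, one option is a small response "envelope" that carries both the payload and any pending server events. A hedged sketch; the field names, endpoint, and helper functions are illustrative assumptions:
function dispatchServerEvent(evt) { console.log('server event:', evt); }
function renderDetails(data) { /* build DOM nodes from structured data */ }
// Server responds with e.g.:
// { "events": [{ "type": "SESSION_EXPIRING", "in": 120 }],
//   "payload": { "kind": "html", "html": "<ul>...</ul>" } }
function handleEnvelope(envelope) {
  (envelope.events || []).forEach(dispatchServerEvent); // unrelated events first
  const { kind, html, data } = envelope.payload || {};
  if (kind === 'html') {
    document.querySelector('#details').innerHTML = html; // HTML riding in JSON
  } else if (kind === 'json') {
    renderDetails(data);
  }
}
fetch('/api/item/42/details')
  .then(r => r.json())
  .then(handleEnvelope);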

ASP.NET MVC 2 beta Asynchronous RenderAction

Background:
http://www.hanselman.com/blog/HanselminutesPodcast188ASPNETMVC2BetaWithPhilHaack.aspx
Start from 27:15. RenderAction is discussed at 28:43: a RenderAction will not take part in the asynchronicity when an asynchronous action method is called.
(Let's say your home portal's Index action calls 1. GetNews, 2. GetWeather, 3. GetStock asynchronously. You also have a RenderAction on the same view displaying the user's recent posts (GetUserRecentPosts).)
Questions
What if RenderActions themselves are asynchronous?
Would GetUserRecentPosts be called only after the home Index action has completed, even though the Index action itself runs asynchronously?
Should RenderActions be rendered asynchronously on a view by default?
I don't think you can do this successfully. The point where you could benefit from async processing has already passed by the time your views start rendering. The MVC pipeline that sets up the begin/end methods has already completed, and the view has no way to get back into it on the same request. It seems like you are stuck with synchronous processing, OR you could devise some way to retrieve all your data up front and cache it in TempData or something similar for rendering.
The Lift framework in Scala is probably the only one I am aware of that has parallel partial actions, which do not block the rendering of the main content but use Comet push to deliver partial view content for those blocks that may take a while to get data for.
To use it, just wrap a block in your view inside a parallel node:
<lift:parallel>
  <!-- this is where Html.RenderAction("GottaFetchNetworkDataFromSomewhereView")
       and Html.RenderAction("GottaFetchNetworkDataFromSomewhereView2")
       would go -->
</lift:parallel>
Lift will also take care of connection starvation on your page, managing HTTP requests appropriately so that browser pushes are not left "waiting 'round".
Unfortunately, ASP.NET MVC has poor Comet support. There's not much outside of Asynchronous Controllers, which is an improvement but not as elegant as, say, Akka's framework suspend() method for long-polling.
