CakePHP 2.x Performance - Can I bring the View to the user before the Controller has finished working on the data?

I have a CakePHP 2.x app handling a tourism agency/operator's day-to-day sales and operations.
The Sales Model interacts with pretty much every other Model, because it needs client information from Clients, agent information from Agents, passenger information from Passengers (not always the same as the client), payment information from Payments, tour info from Tours, flight info from Flights, etc.
Basically, when a sales staff person navigates to a specific sale, they need the screen to show them lots of information that comes from other Models.
This has made the screen slower and slower to load.
I'm avoiding caching, because this is a boots-on-the-ground type of agency: they're not just selling, they're actually doing the operations themselves, so the information always needs to be the most up to date.
So, the question is:
Is there a way for me to bring the View to the user before the controller finishes handling every piece of data?
Like:
<?php
class SalesController extends AppController {

    public function show($id) {
        // Get the easy/quick data
        // Take the user to the View
        // Get the more time-consuming data
        // Feed it to the View as it becomes ready
    }
}
I've been thinking I should just render the page with the simple data and then have the more complicated data come in after loading, with some Ajax and JavaScript, but is that the best use of the framework?
An average sale screen currently takes about 10,000 ms to fully load, and over 6,000 ms of that is idle frame time. That means it's my Controller working in the background, right?

This is most likely not possible without breaking the "controllers should never echo data" rule, and violating it can cause all sorts of problems: data not being returned in the test environment, headers not being sent, data not being read completely, etc.
If you know what you're doing, and you're aware of the implications, then you could probably get away with it, but the AJAX solution is most likely the safer workaround.
In any case, I wouldn't do anything before identifying where exactly and why exactly the time is spent, and figuring out whether there's a way to speed things up at the root of the problem!
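For illustration, here is a minimal client-side sketch of that AJAX workaround: render the page immediately with the cheap data, then fetch the expensive associations separately. The /sales/details/:id endpoint is an assumption here: it would be a second controller action that returns only the slow data as JSON (in CakePHP 2.x, typically via the RequestHandler component and $this->set('_serialize', ...)).

// Runs after the initial page render and pulls the slow data in separately.
// The endpoint and the 'payments' element ID are hypothetical.
async function loadSaleDetails(saleId: number): Promise<void> {
    const response = await fetch(`/sales/details/${saleId}`);
    if (!response.ok) {
        throw new Error(`Details request failed: ${response.status}`);
    }
    const details = await response.json();
    // Swap the placeholder section for the loaded data.
    const target = document.getElementById('payments');
    if (target) {
        target.textContent = JSON.stringify(details.payments);
    }
}

This keeps the "controllers should never echo data" rule intact: each action renders (or serializes) exactly once, and the browser does the stitching.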

Related

Writing a query in the controller instead of the model

I want to ask: if we write a query in the controller instead of the model, does it have any effect, such as slower data loading or anything else?
The data I want to use is over 1,000 records.
If not, what makes loading data on the web slow?
For example, one AJAX request needs 4-5 seconds, and some take up to a minute.
There is virtually no difference between running a query in a controller or a model, except for the small (negligible) amount of overhead a model adds (if it were me, I wouldn't worry about it). If you want to adhere to the MVC design pattern, database queries are typically done in the model, not the controller. But this is more of a stylistic thing in CodeIgniter, as it is not strictly enforced the way it is in other frameworks that use an ORM.
That being said, if you are experiencing long execution times, I would recommend not getting all the data at once, and instead using "load more", pagination, DataTables, or a similar system to reduce the number of records selected at once to something more manageable, depending on your acceptable execution/response times.
You can test what works in your situation best by setting benchmark points: https://www.codeigniter.com/user_guide/libraries/benchmark.html
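As a rough client-side sketch of the "load more" idea (the /items endpoint and its limit/offset parameters are assumptions, not CodeIgniter APIs):

let offset = 0;
const PAGE_SIZE = 50;

// Fetch one page at a time instead of all 1,000+ records in one request.
async function loadMore(): Promise<unknown[]> {
    const response = await fetch(`/items?limit=${PAGE_SIZE}&offset=${offset}`);
    const rows: unknown[] = await response.json();
    offset += rows.length;
    return rows;
}

The server-side counterpart is simply the same limit/offset applied in the model's query, so each request stays small regardless of the table size.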
I have been working with CodeIgniter for a few years, and what I have realized is that you should do your queries in models.
The idea of MVC is code separation: the M (Model) does the heavy lifting of the application, which mostly relates to databases; the V (View) handles presentation, where the users of the system interact with the application; and the C (Controller) acts as a messenger or intermediary between the Model and the View.
This frees the controller from lots of processing, so it doesn't have to query the database and load lots of data before showing a view.
I always use what I call the little magic, {elapsed_time} and {memory_usage}, to check how my application performs for whatever logic I implement; you can give it a try.
I hope this helps.

How to handle data composition and retrieval with dependencies in Flux?

I'm trying to figure out the best way to handle a quite common situation in moderately complex apps using the Flux architecture: how to retrieve data from the server when the models that compose the data have dependencies between them. For example:
A shop web app has the following models:
Carts (the user can have multiple carts)
Vendors
Products
For each of the models there is a Store associated (CartsStore, VendorsStore, ProductsStore).
Assuming there are too many products and vendors to keep them always loaded, my problem comes when I want to show the list of carts.
I have a hierarchy of React.js components:
CartList.jsx
Cart.jsx
CartItem.jsx
The CartList component is the one that retrieves all the data from the Stores and creates the list of Cart components, passing the specific dependencies to each of them (Carts, Vendors, Products).
Now, if I knew beforehand which products and vendors I needed, I would just launch all three requests to the server and use waitFor in the Stores to sync the data if needed. The problem is that until I get the carts, I don't know which vendors or products I need to request from the server.
My current solution is to handle this in the CartList component: in getState I get the Carts, Vendors and Products from each of the Stores, and in _onChange I run the whole flow.
This works for now, but there a few things I don't like:
1) The flow seems a bit brittle to me, especially because the component is listening to 3 stores but there is only one entry point to react to a "something has changed in the data" event, so I'm not able to distinguish what exactly has changed and react accordingly.
2) When the component triggers one of the nested dependencies, it cannot create any action, because it is inside the _onChange method, which is considered to still be handling the previous action. Flux doesn't like that and throws "Cannot dispatch in the middle of a dispatch.", which means I cannot trigger any action until the whole process is finished.
3) Because of the single entry point, it is quite tricky to react to errors.
So, an alternative solution I'm thinking about is to put the "model composition" logic in the call to the API, having a wrapper model (CartList) that contains all 3 models, and storing that in a Store, which would only be notified when the whole object is assembled. The problem with that is reacting to changes in one of the sub-models coming from outside.
Has anyone figured out a nice way to handle data composition situations?
Not sure if it's possible in your application, or the right way, but I had a similar scenario and we ended up doing a pseudo-implementation of Relay/GraphQL that basically gives you the whole tree on each request. If there's lots of data it can be hard, but we just figured out the dependencies etc. on the server side and then returned everything in a nice hierarchical format, so the React components had everything they needed up to the level where the call came from.
Like I said, depending on the details this might not be feasible, but we found it a lot easier to sort out these dependencies server-side with stuff like SQL/Java available rather than, as you mentioned, making lots of async calls and messing with the stores.
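As a sketch of what such a composed, hierarchical response could look like (all field names here are made up for illustration):

// Hypothetical shape of one composed response: the server resolves the
// cart -> vendor/product dependencies and returns a single tree.
interface ComposedCart {
    id: string;
    items: Array<{
        quantity: number;
        product: { id: string; name: string; price: number };
        vendor: { id: string; name: string };
    }>;
}

// One request, no client-side dependency chasing between stores.
async function fetchCarts(userId: string): Promise<ComposedCart[]> {
    const response = await fetch(`/api/users/${userId}/carts?expand=vendors,products`);
    return response.json();
}

A single store can then hold the assembled tree and emit one change event when it arrives, which also sidesteps the "cannot dispatch in the middle of a dispatch" problem.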

ASP.NET MVC AJAX Database update - ViewModel vs Controller

I've currently got a controller which passes a model to a view. The view will be making AJAX calls to the controller to update the data rather frequently, and the model makes for a rather nice arrangement for these updates.
I know that making changes to the database in the controller is bad form, and I would like to avoid doing it. However, creating a model on every call and handing the update data off to it, although it seems more correct to me, takes longer on each request, as the model needs to be initialized. Since the user is blocked from interacting with certain elements on the page during the update, this time can really add up across dozens of updates.
Which method is best? Simply making the updates in the controller to keep the application as interactive as possible, or initializing an instance of the model on every request to handle the update, at the expense of speedy request processing?
I would suggest optimizing your model and/or creating a lighter version of it.
Why is your model taking so long to initialize? Is the initialization loading things that you don't need when it is called in this particular case?
Bottom line, I would move the saving logic to a model but would make sure the model is fast and optimal.

Edit a model over several views

Is it possible to have one model that you break up into several views, so that the user is not overwhelmed by the amount of data they need to input? I'm trying to build a TurboTax-like interface where a question or two are asked, then the user clicks Next to answer the next set of questions, and so on.
The Model doesn't seem to make sense to break up into more models, since it is a single distinct entity, like a questionnaire.
See similar question for a nice example:
multi-step registration process issues in asp.net mvc (splitted viewmodels, single model)
It is possible to use the same model for multiple views, but you should decide how you want to preserve the state as you go through this "wizard". There are a few approaches (a sketch of the last one follows this list):
1) Cross-post between the views and keep the state in the post data; in that case you have to add a lot of hidden fields for all the model properties that are not otherwise displayed as an input on the current view.
2) Persist the partially filled model, with the additional benefit that the user might be able to continue after a session timeout or another problem; but then you might need to clean up stale data and be flexible with validation at the database level.
3) Preserve the state in the session.
4) Keep the state in the browser, independent of the post data, and only make AJAX calls to the server until you reach the point where you want to save everything.
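As a minimal sketch of option 4, assuming a hypothetical /questionnaire endpoint and plain string answers:

// Accumulate answers across wizard steps in memory and submit once at the end.
const answers: Record<string, string> = {};

// Call this on each 'Next' click with the current step's fields.
function recordStep(fields: Record<string, string>): void {
    Object.assign(answers, fields);
}

// Call this on the final step; it posts the whole questionnaire in one request.
async function submitAll(): Promise<void> {
    await fetch('/questionnaire', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(answers),
    });
}

The trade-off is that a page refresh loses the in-memory state, which is why the persisted-model option can be worth the extra cleanup work.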

Saving Ajax Form Data Best Practices

I am just wondering what general best practice is for saving data in Ajax Forms. In Spree ECommerce for example, every time you change a value in a list of objects (say you change the quantity of a certain Item in an Order), it updates the database with an Ajax call.
Is it better to have the User manually press "Save" or "Update" when they're done editing a form, or if you can (you have setup an ajax alternative), to just automatically save the data every time something changes?
It seems like Stack Overflow Careers saves a "Draft" of your profile every few seconds using some ajax thing.
As such, it seems like there are three ways to save data in a form if you have Ajax going:
The user presses a button and all data is saved at once (not great if the data is important)
Save at a fixed time interval
Save on every change
What do you recommend?
Good question. I don't think there's a one-size-fits-all best practice that covers all situations. Generally, the more user-friendly your solution is, the greater the complexity of implementation, and the less likely you are to end up with a properly gracefully degrading solution (unless you have been very, very careful).
Also, there are implications to whichever approach you have opted to go with. For example, autosaving periodically might not be a good idea where substantial data validation is involved. A user might type some stuff in, and get an error message after a few seconds. Instant feedback would be much more beneficial to the user in such a situation, as it is possible that the input which led to the failed validation was, say, a few actions ago, so it might be somewhat confusing to the user.
Saving whenever the user changes something (a keypress, a checkbox selection, etc.) would seem to be the way to go from a usability perspective, but again, it depends on what you are doing and could have negative side-effects. For example, if the user is on a slow connection, he/she might feel that your site is slow or buggy. It would also yield a lot more database queries than the old-school 'click save' method.
I guess an obvious way to get around some of the above caveats would be to incorporate on-the-spot client-side validation, but what works in the end might well come down to what your hallway testers say.
Final recommendation: create the old-style "click save to save" forms and enhance from there, making sure things don't break without JavaScript (unless you have express permission from a higher authority). Hope that wasn't all nonsense.
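For the save-every-change route, a debounce usually blunts the query-volume concern mentioned above. A minimal sketch, with the /draft endpoint and the 2-second delay as assumptions:

let pending: number | undefined;

// Collapse rapid edits into at most one request per 2 seconds of quiet.
// Text fields only; file inputs would need separate handling.
function scheduleAutosave(form: HTMLFormElement): void {
    window.clearTimeout(pending);
    pending = window.setTimeout(() => {
        const payload = Object.fromEntries(new FormData(form));
        void fetch('/draft', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(payload),
        });
    }, 2000);
}

Wire that up to the form's input events and you get the Stack Overflow Careers style "draft saved" behaviour without a request per keystroke.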
It all depends on the situation. If the form is going to change due to user input, then you may be better served saving/updating the form on every change. Otherwise, wait for an explicit user action.
I can only see trouble on the horizon if you adopt an autosave strategy for a form.
I know this post is old, but I like this simple solution: if the user changes some data on your form and tries to leave the page without saving it, I show a reminder message.
In a global .js file:
var validate = false;
// Warn before leaving the page while there are unsaved changes.
window.onbeforeunload = function () {
    if (validate) return "You made some changes, are you sure you want to leave?";
};
In the form page (I did it in jQuery):
// Flag unsaved changes on any field edit; clear the flag on submit.
$('input, textarea, select').change(function () { validate = true; });
$('form').submit(function () { validate = false; });
