I have developed some websites and I always stumble at the same point: multiple AJAX calls. I have a main page where all the content is loaded asynchronously. When the page loads, there are four INDEPENDENT calls that "draw" the page by area (top, left, right and bottom), and while each area is loading I show the user the typical AJAX spinner. When a response is received by the browser I execute the callback, so the different areas are drawn at different times. The problem is that the responses from the server sometimes get mixed up: the response for the top area is drawn in the left area, or vice versa.
I've tried some things, like adding a timestamp to each request to indicate to the browser and the server that each request is different.
I've also tried adjusting some cache settings on the server, just in case.
The only approach that has worked is chaining: issuing the second request from the callback of the first, and so on.
Does anyone know the proper way to do this, or has anyone solved this issue before? I don't want to chain the requests.
Thanks
Here is an example of what I mean:
$(document).ready(function() {
    $.get('/activity', Common.genSafeId(), function(data) { $('#stream').html(data); $("#load_activity").addClass("empty"); });
    $.get('/messages', Common.genSafeId(), function(data) { $('#message').html(data); $("#load_messages").addClass("empty"); });
    $.get('/deals', Common.genSafeId(), function(data) { $('#new_deals_container').html(data); $("#load_deal").addClass("empty"); });
    $.get('/tasks', Common.genSafeId(), function(data) { $('#task_frames').html(data); $("#load_task").addClass("empty"); });
});
And the HTML is a simple JSP with four containers, each with a different id.
CLOSURES
Closures are a little mind-blowing at first. They are a feature of JavaScript and several other modern programming languages.
A closure is formed by an executed instance of a function that has an inner function (typically an anonymous event handler or named method) that needs access to one or more outer variables (i.e. variables that are declared in the outer function but outside the inner function). The mind-blowing thing is that the inner function retains access to the outer variables even though the outer function has completed and returned by the time the inner function executes!
Moreover, variables trapped by a closure are accessible only to inner functions and not to the further-out environment that brought the closure into being. This feature allows us, for example, to create class-like structures with private as well as public members even in the absence of language keywords "Public" and "Private".
Closures are possible because an inner function's use of outer variables prevents JavaScript's garbage collection from destroying the outer function's environment, which would otherwise happen at some indeterminate point after the outer function completes.
The importance of closures to good, tidy JavaScript programming cannot be overstated.
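As a minimal illustration (not from the original post), here is a closure providing a private variable:
function makeCounter() {
    var count = 0;                          // outer variable trapped by the closure
    return {
        increment: function() { count += 1; return count; },
        current: function() { return count; }
    };
}

var counter = makeCounter();
counter.increment();                        // 1
counter.current();                          // 1 - "count" itself is unreachable from outside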
In the code below the function getData() forms, at each call, a closure trapping id1 and id2 (and url), which remain available to the anonymous ajax response handler ($.get's third argument).
$(document).ready(function() {
    function getData(url, id1, id2) {
        $.get(url, Common.genSafeId(), function(data) {
            $(id1).html(data);
            $(id2).addClass("empty");
        });
    }

    getData('/activity', '#stream', '#load_activity');
    getData('/messages', '#message', '#load_messages');
    getData('/deals', '#new_deals_container', '#load_deal');
    getData('/tasks', '#task_frames', '#load_task');
});
Thus, rather than writing four separate handlers, we exploit the language's ability to form closures and call the same function, getData(), four times. At each call, getData() forms a new closure which allows $.get's response handler (which is called asynchronously when the server responds) to address its DOM elements.
Make sure you have a different callback for each AJAX call. It sounds like you are trying to use the same function for all four, so when they complete out of order (because they take different amounts of time server-side), they render in the wrong place. If you insist on using the same function for all your callbacks, then you have to put something in the payload so that the callback knows where to render (see the sketch below).
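For illustration, a minimal sketch of the payload idea; the target, html and spinner fields are assumptions, not part of the original answer:
function render(response) {
    // The server includes a "target" selector, the rendered "html" and the
    // spinner selector in its JSON payload, so one shared callback always
    // knows where to draw.
    $(response.target).html(response.html);
    $(response.spinner).addClass("empty");
}

$.getJSON('/activity', render);
$.getJSON('/messages', render);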
I want to perform an action based on the result of an asynchronous NGXS action.
In an Angular frontend app I'm using NGXS for state management. Some of the actions involve talking to a backend via REST calls. Those actions are implemented as asynchronous actions, with the reducer functions in my state classes returning an Observable.
What I'm looking for is a way to get my hands on the result of the backend call, to be able to perform some follow-up action.
One use case I'm trying to implement is navigation to just created objects: Business objects are created in the frontend (Angular) app with a couple of domain properties. They get persisted in the backend, and as a result an ID for this object is created and returned to the frontend, and incorporated into the NGXS store. As a direct response to this, I'd like to navigate to a detail view for the new object. To do so, I need
(a) the information that the call has been returned successful, and
(b) the answer from the backend (the ID in this case).
Another slightly more complicated use case is the assignment of a number of tags to a business object. The tags are entities by themselves, each with its own ID. In the UI, the user can either pick existing tags or add new ones. Either way, multiple tags can be added in a single step in the UI, which means I have to
call the backend for each new tag to create the ID
after all missing tags are created, update the business object with the list of tag IDs
In general, there are use cases in the frontend that depend on the result of a backend call, and there is no clean way to find this result in the store (although it's in there)
I know I can subscribe to the Observable returned from the store's dispatch method (as shown in asynchronous actions).
I also know about action handlers. In both cases I can attach code to the event of an action finishing, but neither option gives me the result of the backend call. In the first case, the Observable carries the whole store, while in the latter case I get the original Action, which is unfortunately missing the essential information (the ID).
The part you're missing here are selectors. Dispatching actions is not supposed to give you back a result. The only purpose of the Observable returned by store.dispatch() is to tell you when the action's handlers are done.
To get to the data returned by your calls to the backend, you have to patch the state inside your action handler. And then, outside of your state, you can access the data using store.select() or store.selectSnapshot() depending on what you need. Your state class should look somewhat like this (untested):
import { State, Selector, Action, StateContext } from '@ngxs/store';
import { tap } from 'rxjs/operators';

@State<any>({
    name: 'sample',   // the state name here is illustrative
    defaults: {}
})
export class SampleState {

    @Selector()
    static sampleSelector(state: any) {
        return state.sampleObject;
    }

    @Action(SampleAction)
    sampleAction(ctx: StateContext<any>, action: SampleAction) {
        // SampleAction and sampleBackendCall are placeholders from the answer
        return sampleBackendCall(/* ... */).pipe(
            tap((result) => {
                ctx.patchState({ sampleObject: result });
            })
        );
    }
}
Now you can access this result wherever you need it, using the Store. For the use case of navigating to an element after its creation, you can combine a subscription to store.dispatch() with a store.selectSnapshot() like this:
store.dispatch(new SampleAction()).subscribe(() => {
navigateTo(store.selectSnapshot(SampleState.sampleSelector));
});
Note that in this easy case a selectSnapshot is perfectly fine, as we only want to get the value we just finished writing into the state. In most cases though, you will want to use store.select() or the #Select() decorator because they return Observables which enable you to also correctly display changes in your state.
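For instance, a small sketch (not from the original answer) of the Observable-based variant, reusing the selector defined above:
// React to every change of the stored object, not just the value available
// at dispatch time.
store.select(SampleState.sampleSelector).subscribe((sampleObject) => {
    // update the view, trigger navigation, etc. whenever the state changes
    console.log('sampleObject changed:', sampleObject);
});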
That said, I'd like to add that if saving data inside the state is not necessary for you at all, then NGXS is probably the wrong library for you in the first place, and you could just as well use an ordinary Angular service that directly returns the result of the backend call, as suggested in the comments.
Both promises and AJAX calls are asynchronous operations. A GET/POST request could be made with both. << Edit: that's a WRONG statement
So what's the difference between them? And when would it be best to use one instead of the other?
Also, one more thing:
Recently I encountered a promise which had an AJAX in its body. Why put an async operation inside an async operation? That's like putting a bread loaf in a bread sandwich.
function threadsGet() {
    return new Promise((resolve, reject) => {
        $.getJSON('api/threads')
            .done(resolve)
            .fail(reject);
    });
}
jQuery is used here, and the AJAX call has Promise-like behavior and properties. I didn't get that earlier, but here are my thoughts:
We can do something inside the Promise, then make the AJAX call and, in its done handler, run the resolve logic. In this specific example there is nothing extra.
Now I see that I had confused the two. They're pretty much two different things. Just because they're both asynchronous doesn't mean they're interchangeable.
==============
EDIT 2: Just some materials I found useful:
Promise Anti-Patterns
You are confused about promises and Ajax calls. They are kind of like apples and knives. You can cut an apple with a knife, and the knife is a tool that can be applied to an apple, but the two are very different things.
Promises are a tool for managing asynchronous operations. They keep track of when asynchronous operations complete and what their results are, and let you coordinate that completion and those results (including error conditions) with other code or other asynchronous operations. They aren't actually asynchronous operations in themselves. An Ajax call is a specific asynchronous operation that can be used with a traditional callback interface or wrapped in a promise interface.
So what's the difference between them? And when would it be best to use one instead of the other?
An Ajax call is a specific type of asynchronous operation. You can make an Ajax call either with a traditional callback using the XMLHttpRequest interface or you can make an Ajax call (in modern browsers), using a promise with the fetch() interface.
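As a rough sketch of the two styles (the URL is illustrative), the same GET request could be written either way:
// 1) Callback style with XMLHttpRequest
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/threads');
xhr.onload = function() {
    console.log(JSON.parse(xhr.responseText));
};
xhr.send();

// 2) Promise style with fetch()
fetch('/api/threads')
    .then(function(response) { return response.json(); })
    .then(function(data) { console.log(data); });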
Recently I encountered a promise which had an AJAX in its body. Why put an async operation inside an async operation? That's like putting a bread loaf in a bread sandwich.
You didn't show the specific code you were talking about, but sometimes you want to start async operation 1 and then, when it is done, start async operation 2 (often using the results of the first one). In that case, you will typically nest one inside the other.
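A hedged sketch of that pattern; getUser and getOrders are hypothetical promise-returning functions, not from the question:
function getUserOrders(userId) {
    return getUser(userId).then(function(user) {
        // the second async operation depends on the result of the first
        return getOrders(user.accountId);
    });
}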
Your code example here:
function threadsGet() {
    return new Promise((resolve, reject) => {
        $.getJSON('api/threads')
            .done(resolve)
            .fail(reject);
    });
}
is considered a promise anti-pattern. There's no reason to create a new promise here because $.getJSON() already returns a promise which you can return. You can just do this instead:
function threadsGet() {
    return $.getJSON('api/threads');
}
Or, if you want to "cast" the somewhat non-standard jQuery promise to a standard promise, you can do this:
function threadsGet() {
    return Promise.resolve($.getJSON('api/threads'));
}
I am having someone create a bunch of templates (themes) for a website, and want to keep data passed to the views flexible.
For example, with the users in the system I want to be able to supply the top x users and the most recent x users. In my controller I don't want to pass this data to the view, because he might just need the top 5 users and I am querying the top 10 - or worse, I might only get the top 5 and he wants the top 10.
I am thinking there would be two ways to do this.
1 - A view "helpers" file, which could contain functions like getTopUsers($count) and getNewestUsers($count) which would do the model/repo call.
2 - Create a view presenter to keep these extra functions. I've had a look and there seems to be two main presenter packages - https://github.com/ShawnMcCool/laravel-auto-presenter and https://github.com/laracasts/Presenter
Maybe there is a better way?
There could be half a dozen of these, for various models...
I would pop some client-side code into your views that calls a route to a controller action (which returns JSON by default), and conditionally add that particular snippet to your view (via a variable passed to the view that determines whether the person is logged in). Then you can apply an auth filter to your route to protect it.
Note: with this approach you can pass url parameters to your action. This means you can tell your controller to limit your results more easily.
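A minimal client-side sketch of that approach; the route, element id and count parameter are illustrative, not part of the answer:
$.getJSON('/api/users/top', { count: 5 }, function(users) {
    // build a list from the JSON returned by the controller action
    var html = users.map(function(u) { return '<li>' + u.name + '</li>'; }).join('');
    $('#top-users').html(html);
});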
This is a very interesting question, my friend. What I can think of is the following:
1) The cheap way: just query 10 (or whatever the biggest number is), then pass a variable $count to the view, or let the view pass a variable to the sub-view.
2) An API call: if you'd like to do an AJAX call then, as others suggested, you could just design a new route, e.g. getData?count=5.
Normally it's not easy to meet all requirements. Practically speaking, in the prototype stage it's more cost-effective to write fixed functions like getData5 and getData10, or just make two pages :) It'll be a lot faster than coming up with another new architecture design only to realize in the end that nobody really uses it.
A bit of context: I need to cache the homepage of my CakePHP site - apart from one small part, which displays events local to the user based on their IP address.
You can obviously use the <cake:nocache> tag to dictate a part of the page that shouldn't be cached; but you can't surround a controller-set variable with these tags to make it dynamic. Once a page is cached, that's it for the controller action, as far as I know.
What you can usefully surround with the nocache tags are elements and helpers. As such, I've created an element inside these tags, which calls a helper function to access the model and get the appropriate data. To get at the model from the helper I'm using:
$this->Modelname =& ClassRegistry::init("Modelname");
This seems to me, however, to be a kind of iffy way of doing things, both in terms of CakePHP and general MVC principles. So my question is, is this an appropriate way of getting what I want to do done, or should it ring warning bells? Is there a much better way of achieving my objectives that I'm just missing here?
Rather than using a Helper, try to put your code in an element and use requestAction inside of the element.
see this link
http://bakery.cakephp.org/articles/gwoo/2007/04/12/creating-reusable-elements-with-requestaction
This would be a much better approach than trying to use a model in your helper.
Other than breaking all the carefully-laid principles of MVC?
In addition to putting this item into an element, why not fetch it with a trivial bit of ajax?
Put the call in its own controller action, such that the destination URL -> /controller/action (quite convenient!)
Pass the IP back to that action for use in the find call
Set the ajax update callback to target the element, filling it with the results of the call
No need to muck around calling Models directly from Views, and no need to bog things down with requestAction. :)
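A minimal sketch of that approach on the client; the action URL, element id and userIp variable are illustrative, not from the answer:
// Fill the element with the result of the dedicated controller action
// (userIp is however you obtained the visitor's IP; hypothetical variable)
$.get('/events/local', { ip: userIp }, function(html) {
    $('#local-events').html(html);
});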
HTH
We are using the Dynamic Script Tag with JsonP mechanism to achieve cross-domain Ajax calls. The front end widget is very simple. It just calls a search web service, passing search criteria supplied by the user and receiving and dynamically rendering the results.
Note - For those that aren’t familiar with the Dynamic Script Tag with JsonP method of performing Ajax-like requests to a service that return Json formatted data, I can explain how to utilise it if you think it could be relevant to the problem.
The service is WCF hosted on IIS. It is RESTful, so the first thing we do when the user clicks search is generate a URL containing the criteria. It looks like this...
https://.../service.svc?criteria=john+smith
We then use a dynamically created HTML script tag with its source attribute set to the above URL to make the request to our service. The result is returned and we process it to show the results.
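For illustration, a minimal sketch of the mechanism; the callback name and URL here are placeholders, not our real service:
function handleResults(data) {
    // render the search results returned by the service
    console.log(data);
}

var s = document.createElement('script');
s.src = 'https://example.com/service.svc?criteria=john+smith&callback=handleResults';
document.getElementsByTagName('head')[0].appendChild(s);
// the service responds with JavaScript of the form: handleResults({ ... });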
This all works fine, but we noticed that when using IE the service receives the request from the client twice. I used Fiddler to monitor the traffic leaving the browser and, sure enough, I saw two requests with the following URLs...
Request 1: https://.../service.svc?criteria=john+smith
Request 2: https://.../service.svc?criteria=john+smith&_=123456789
The second request has been appended with some kind of Id. This Id is different for every request.
My immediate thought was that it had something to do with caching. Adding a random number to the end of the URL is one of the classic approaches to disabling browser caching. To prove this I adjusted the cache settings in IE.
I set “Check for newer versions of stored pages” to “Never” – This resulted in only one request being made every time. The one with the random number on the end.
I set this setting value back to the default of “Automatic” and the requests immediately began to be sent twice again.
Interestingly I don’t receive both requests on the client. I found this reference where someone is suggesting this could be a bug with IE. The fact that this doesn’t happen for me on Firefox supports this theory.
Can anyone confirm if this is a bug with IE? It could be by design.
Does anyone know of a way I can stop it happening?
Some of the more vague searches that my users will run take up enough processing resource to make doubling up anything a very bad idea. I really want to avoid this if at all possible :-)
I just wrote an article on how to avoid caching of ajax requests :-)
It basically involves adding no-cache headers to any Ajax request that comes in:
public abstract class MyWebApplication : HttpApplication
{
    protected MyWebApplication()
    {
        this.BeginRequest += new EventHandler(MyWebApplication_BeginRequest);
    }

    void MyWebApplication_BeginRequest(object sender, EventArgs e)
    {
        string requestedWith = this.Request.Headers["x-requested-with"];
        if (!string.IsNullOrEmpty(requestedWith) && requestedWith.Equals("XMLHttpRequest", StringComparison.InvariantCultureIgnoreCase))
        {
            this.Response.Expires = 0;
            this.Response.ExpiresAbsolute = DateTime.Now.AddDays(-1);
            this.Response.AddHeader("pragma", "no-cache");
            this.Response.AddHeader("cache-control", "private");
            this.Response.CacheControl = "no-cache";
        }
    }
}
I eventually established the reason for the duplicate requests. As I said, the mechanism I chose for making Ajax calls was Dynamic Script Tags. I built the request URL, created a new script element and assigned the URL to the src property...
var script = document.createElement("script");
script.src = "https://....";
Then I executed the script by appending it to the document head. Crucially, I was using the jQuery append function...
$("head").append(script);
Inside the append function, jQuery was anticipating that I was trying to make an Ajax call. If the type of element being appended is a script, it executes a special routine that makes an Ajax request using the XMLHttpRequest object. But the script was still being appended to the document head and executed there by the browser too. Hence the double request.
The first came direct from the script – the one I intended to happen.
The second came from inside the JQuery append function. This was the request suffixed with the randomly generated query string argument in the form “&_=123456789”.
I simplified things by avoiding the jQuery library side effect and using the native append function...
document.getElementsByTagName("head")[0].appendChild(script);
One request now happens in the way I intended. I had no idea that the jQuery append function had such a significant side effect built in.
See www.enhanceie.com/redir/?id=httpperf for further discussion.