I have just implemented AJAX calls using fetch in React Native. However, my implementation of queuing these AJAX calls hasn't worked out well. Can anyone help me?
If you want to make parallel requests, you can use the fact that fetch returns a promise, and then use Promise.all to wait for all of the promises to complete.
For example:
var urls = ['http://url1.net', 'http://url2.net'];
var requests = [];
urls.forEach((url) => {
  var request = fetch(url); // You can also pass options or any other parameters
  requests.push(request);
});
// Then, wait for all the promises to finish. The requests run in parallel.
Promise.all(requests).then((results) => {
  // results holds an array with the result of each promise, in order.
}).catch((err) => {
  // Promise.all is fail-fast: if any request fails, the catch handler is called immediately.
});
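If you would rather collect every outcome instead of failing fast, Promise.allSettled is a standard alternative (my addition, not something the original answer used); a minimal sketch, assuming the same urls array:
var requests = urls.map((url) => fetch(url));
Promise.allSettled(requests).then((outcomes) => {
  outcomes.forEach((outcome, i) => {
    if (outcome.status === 'fulfilled') {
      console.log(urls[i] + ' succeeded'); // outcome.value holds the Response
    } else {
      console.log(urls[i] + ' failed: ' + outcome.reason);
    }
  });
});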
I noticed that you added the 'multithreading' tag. It is important to note that this code won't do any threading for you, as JavaScript (generally) runs in a single thread.
I want to be able to call an HTTP endpoint (that I own) from an Azure Function at the end of the Azure Function request.
I do not need to know the result of the request
If there is a problem in the HTTP endpoint that is called I will log it there
I do not want to hold up the return to the client calling the initial Azure Function
Offloading the call of the secondary WebApi onto a background job queue is considered overkill for this requirement
Do I simply call HttpClient.PutAsync without an await?
I realise that the dependencies I have used up until the point that the call is made may well not be available when the call returns. Is there a safe way to check if they are?
My answer may cause some controversy, but you can always start a background task and execute it that way.
For anyone reading this answer, this is far from recommended. The OP has been very clear that they don't care about exceptions or understanding what sort of result the request is returning ...
// Fire-and-forget: the returned Task is deliberately discarded, so any exception is swallowed.
_ = Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync(...);
    }
});
If you want to ensure that the call has fired, it may be worth waiting for a second or two after the call is made to ensure it's actually on its way.
await Task.Delay(1000);
If you're worried about dependencies in the call, be sure to construct your payload (i.e. serialise it, etc.) outside the Task.Run; basically, minimise any work the background task does.
I have a NestJS application, and need to send an HTTP request to another server, so I am using the HttpModule (@nestjs/axios).
I need the data from that request, but the returned type is Observable&lt;AxiosResponse&lt;any, any&gt;&gt;, where I need just the AxiosResponse.
Reading over the RxJS documentation, it looks like the prescribed way to handle this situation is to make use of RxJS lastValueFrom() or firstValueFrom(), after the deprecation of toPromise().
However, there is a warning attached:
Only use lastValueFrom function if you know an Observable will eventually complete. The firstValueFrom function should be used if you know an Observable will emit at least one value or will eventually complete. If the source Observable does not complete or emit, you will end up with a Promise that is hung up, and potentially all of the state of an async function hanging out in memory. To avoid this situation, look into adding something like timeout, take, takeWhile, or takeUntil amongst others.
The solution that I came up with was:
const response = this.httpService.post('the-url').pipe(take(1))
const axiosResponse: AxiosResponse = await lastValueFrom(response)
TypeScript at least is not complaining. Is this a suitable way to get at the underlying Axios response?
A promise, once triggered, will only be resolved or rejected once, and once it resolves the observable wrapping it completes. This is one of the major differences between promises and observables: an observable has the capability to emit multiple times, like a callback.
Therefore there is no need to add pipe(take(1)); using lastValueFrom on its own is sufficient.
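A minimal sketch of the simplified call (same hypothetical 'the-url' endpoint as in the question); the optional timeout operator covers the hung-promise case that the RxJS warning describes:
// timeout and lastValueFrom are imported from 'rxjs'
const response$ = this.httpService.post('the-url').pipe(timeout(5000)); // timeout(5000) is optional
const axiosResponse: AxiosResponse = await lastValueFrom(response$);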
If you construct an observable from a promise, you need neither lastValueFrom nor take. Once subscribed to, it emits once and then completes immediately:
const { from } = rxjs;
const answer$ = from(Promise.resolve(42));
answer$.subscribe({
  next(x) {
    console.log(`answer=${x}`);
  },
  complete() {
    console.log('done');
  }
});
<script src="https://unpkg.com/rxjs@^7/dist/bundles/rxjs.umd.min.js"></script>
I am using Cypress to test our web application.
On certain pages there are different endpoint requests that are executed multiple times (e.g. GET /A, GET /B, GET /A).
What would be the best practice in Cypress to wait for all requests to finish and guarantee that the page has been fully loaded?
I don't want to use a ton of cy.wait() commands to wait for all requests to be processed (there are a lot of different sets of requests on each page).
You can use the cy.route() feature from Cypress. Using this you can intercept your GET requests and wait until all of them have executed:
cy.server()
cy.route('GET', '**/users').as('getusers')
cy.visit('/')
cy.wait('@getusers')
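Note that cy.server() and cy.route() have been deprecated in newer Cypress versions in favor of cy.intercept(). A minimal equivalent sketch, assuming the same **/users endpoint and alias:
cy.intercept('GET', '**/users').as('getusers')
cy.visit('/')
cy.wait('@getusers')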
I'm sure this is not recommended practice, but here's what I came up with. It effectively waits until there has been no response for a certain amount of time:
function debouncedWait({ debounceTimeout = 3000, waitTimeout = 4000 } = {}) {
  cy.intercept('/api/*').as('ignoreMe');
  let done = false;
  const recursiveWait = () => {
    if (!done) {
      // Set a timeout so that if no response arrives within debounceTimeout,
      // a dummy request is sent to satisfy the current wait.
      const x = setTimeout(() => {
        done = true; // end the recursion
        fetch('/api/blah');
      }, debounceTimeout);
      // Wait for a response
      cy.wait('@ignoreMe', { timeout: waitTimeout }).then(() => {
        clearTimeout(x);  // cancel this wait's timeout
        recursiveWait();  // wait for the next response
      });
    }
  };
  recursiveWait();
}
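Hypothetical usage (the route and selector below are placeholders of mine, not from the original answer):
cy.visit('/dashboard');
debouncedWait(); // resolves once the network has been quiet for ~3 s
cy.get('[data-cy=report]').should('be.visible');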
According to the Cypress FAQ there is no definitive way, but I will share some solutions I use:
Use the jQuery syntax supported by Cypress:
$(document).ready(function() {
  // Code to run after it is ready
});
The problem is that after the initial load, some action on the page can initiate a second load.
Select an element, like an image or a select, and wait for it to load. The problem with this method is that some other element might need more time.
Decide on a mandatory time you will wait for the API requests (I personally use 4000 ms for my app) and place a cy.wait(mandatoryWaitTime) where you need your page to be loaded.
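A minimal sketch of that last option (the route is a placeholder):
const mandatoryWaitTime = 4000; // tuned for my app; adjust for yours
cy.visit('/some-page');
cy.wait(mandatoryWaitTime); // give pending API requests time to finish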
I faced the same issue with our large Angular application, which makes tens of requests as you navigate through it.
At first I tried what you are asking: to automatically wait for all requests to complete. I used https://github.com/bahmutov/cypress-network-idle as suggested by @Xiao Wang in this post. This worked and did the job, but I eventually realized I was over-optimizing my tests. Tests became slow, waiting for all kinds of calls to finish, even those that weren't needed at that point in time (like 3rd-party analytics etc.).
So I'd suggest not trying to wait for everything at a step, but instead finding the key API calls (you don't need to know the full path; even api/customers is enough) in your test step, using cy.intercept() to create an alias for them, and then using cy.wait() with your alias. The result is that you wait only when needed and only for the calls that really matter.
// At this point, there are lots of GET requests that need to finish in order to continue the test
// Intercept calls that contain a GET request with a request path containing /api/customer/
cy.intercept({ method: 'GET', url: '**/api/customer/**' }).as("customerData");
// Wait for the intercepted GET request on /api/customer/ to complete
cy.wait("@customerData");
// Continue my test knowing all requested data is available..
cy.get(".continueMyTest").click()
I am new to Angular and want to use it to send data to my app's backend. On several occasions, I have to make several HTTP POST calls that should either all succeed or all fail. This is the scenario that's causing me a headache: given two HTTP POST calls, what if one call succeeds, but the other fails? This will lead to inconsistencies in the database. I want to know if there's a way to cancel the succeeding calls if at least one call has failed. Thanks!
Without knowing more about your specific situation, I would urge you to use promise error handling if you are not already doing so. The only situation I know of where you can cancel a request that has already been sent is by using the timeout option in $http (look at this SO post), but you can definitely prevent future requests. When you make an $http call, it returns a promise object (look at $q here) that exposes two methods you can chain on your $http request, success and error, so it looks like $http(...).success({...stuff...}).error({...more stuff...}). So if you have error handling in each of these scenarios and you get a .error, don't make the next call.
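A minimal sketch of that guard, using the legacy AngularJS success/error chaining the answer describes (the endpoints and payload are hypothetical):
$http.post('/first', payload)
  .success(function(data) {
    // The first call succeeded, so it is safe to fire the next one.
    $http.post('/second', data);
  })
  .error(function(err) {
    // The first call failed: do not make the next call.
  });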
You can cancel the next requests in the chain, but the previous ones have already been sent. You need to provide the necessary backend functionality to reverse them.
If every step is dependent on the other and causes changes in your database, it might be better to do the whole process in the backend, triggered by a single POST request. I think it is easier to model this process synchronously, and that is easier to do on the server than on the client.
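A minimal sketch of that single-request shape (endpoint and payload names are hypothetical):
$http.post('/api/complete-process', { firstPart: firstData, secondPart: secondData })
  .then(function(response) {
    // The backend applies both changes in one transaction,
    // so there is nothing to roll back on the client.
  });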
However, if you must make the POST requests on the client side, you could define each request step as a separate function and chain them via then(successCallback, errorCallback) (nice video example here: https://egghead.io/lessons/angularjs-chained-promises).
In your case, at each step you can check whether the previous one failed and take action to reverse it by using the error callback of then:
var firstStep = function(initialData){
  return $http.post('/some/url', initialData).then(function(dataFromServer){
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,      // placeholder for your processed result
      dataNeededToReverseThisStep: moreData     // placeholder for the rollback data
    };
  });
};
var secondStep = function(dataFromPreviousStep){
  return $http.post('/some/other/url', dataFromPreviousStep.dataNeededByNextStep).then(function(dataFromServer){
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  }, function(){
    // On error
    reversePreviousStep(dataFromPreviousStep.dataNeededToReverseThisStep);
  });
};
var thirdStep = function(){ ... };
...
firstStep(initialData).then(secondStep)
  .then(thirdStep)
...
If any of the steps in the chain fails, its promise will fail, and the next steps will not be executed.
When should I use async: false versus async: true in an AJAX call? In terms of performance, does it make any difference?
For example:
$.ajax({
  url: endpoint,
  type: "post",
  async: false,
  success: function(data) {
    if (i == 1) {
      getMetricData(data);
    } else if (i == 2) {
      capture = data;
    }
  }
});
It's not related to performance...
You set async to false when you need the ajax request to complete before the browser moves on to other code:
<script>
// ...
$.ajax(... async: false ...); // Hey browser! First complete this request,
                              // then move on to the rest of the code.
$.ajax(...); // Executed after the completion of the previous async:false request.
</script>
When the async setting is set to false, a synchronous call is made instead of an asynchronous one.
When the async setting of the jQuery AJAX function is set to true, an asynchronous call is made. AJAX itself stands for Asynchronous JavaScript and XML, so if you make it synchronous by setting async to false, it is no longer really an AJAX call.
For more information, please refer to this link.
It is best practice to go asynchronous if you can do several things in parallel (no inter-dependencies).
If you need the request to complete before the next thing can load, you could go synchronous, but note that this option is deprecated to avoid abuse of sync:
jQuery.ajax() method's async option deprecated, what now?
In basic terms, synchronous requests wait for the response to be received before any further code is allowed to run. At first this may seem like a good thing to do, but it absolutely is not.
As mentioned, while the request is in progress the browser will halt execution of all script and also rendering of the UI, as the JS engine of the majority of browsers is (effectively) single-threaded. This means that to your users the browser will appear unresponsive, and they may even see OS-level warnings that the program is not responding, asking them if its process should be ended. It's for this reason that synchronous JS has been deprecated and you see warnings about its use in the devtools console.
The alternative of asynchronous requests is by far the better practice and should always be used where possible. This means that you need to know how to use callbacks and/or promises in order to handle the responses to your async requests when they complete, and also how to structure your JS to work with this pattern. There are many resources already available covering this, so I won't go into it here.
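For illustration (not part of the original answer), here is a minimal async version of the snippet from the question, using jQuery's promise-style handlers; endpoint and capture are the same hypothetical names used above:
$.ajax({
  url: endpoint, // same hypothetical endpoint as in the question
  type: "post"   // async: true is the default and can be omitted
}).done(function(data) {
  // Runs later, once the response arrives; the UI stays responsive meanwhile.
  capture = data;
}).fail(function(jqXHR, textStatus) {
  console.error("Request failed:", textStatus);
});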
There are very few occasions where a synchronous request is necessary. In fact the only one I can think of is when making a request within the beforeunload event handler, and even then it's not guaranteed to work.
In summary, you should look to learn and employ the async pattern in all requests. Synchronous requests are now an anti-pattern that causes more issues than it generally solves.