UnhandledPromiseRejectionWarning: TimeoutError: waiting for selector `#editSite127` failed: timeout 30000ms exceeded. PUPPETEER - async-await

My code was working fine until some maintenance was done on the website. Now the page no longer scrolls down automatically to find the element.
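For reference, the failing wait looks roughly like this (a simplified sketch; the page URL is a placeholder and the rest of the script is omitted):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/sites'); // placeholder URL

  // This is the call that now times out (default 30000 ms):
  await page.waitForSelector('#editSite127');
  await page.click('#editSite127');

  await browser.close();
})();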

Related

"NetworkError when attempting to fetch resource" in Firefox 84/85

We recently launched a React website powered by a GraphQL backend. From the client-side metrics we collect, we have observed that the error below occurs at a high rate during data fetching in Firefox versions 84 and 85 (the latest at the time this question was posted).
NetworkError when attempting to fetch resource.
Inside the React application, an Apollo GraphQL client is used to fetch data. The whole ApolloError object reads like
{"graphQLErrors":[],"networkError":{},"message":"NetworkError when attempting to fetch resource."}
However, we could not reproduce the error ourselves on the browser versions mentioned.
I have confirmed that this should not be a CORS problem. We have a fairly large user base covering a wide range of Firefox versions, but only versions 84 and 85 show this exceptionally high occurrence rate. Not long ago we fixed a similar issue caused by older browser versions that did not support the Fetch API.
We are unsure what could have caused the issue, and our client-side logging is too limited to reveal more. Any insights or leads are highly appreciated.
Eventually I found out that this error is not tied to Firefox 84/85. As explained here, the Fetch API simply rejects with a network error when the request is aborted by the browser (most likely because the user navigated to a different page or stopped the page from loading).
I have verified this on a few major browsers by deliberately stopping page loading while a fetch request was in progress. The error messages I received are:
Chrome & Edge - Failed to fetch
Firefox - NetworkError when attempting to fetch resource
Safari - cancelled
Unfortunately, only Safari returns a message that actually explains what happened.
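In every case, though, the rejection surfaces as a TypeError from fetch, so it can at least be detected and retried programmatically. A hypothetical wrapper (not taken from our codebase) might look like this:

async function fetchWithOneRetry(url, options) {
  try {
    return await fetch(url, options);
  } catch (err) {
    // fetch rejects with a TypeError when the request fails or is aborted
    // at the network level; the message text differs per browser (see above).
    if (err instanceof TypeError) {
      return fetch(url, options); // single retry
    }
    throw err; // anything else is a programming error, surface it
  }
}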
I think these errors are generally safe to ignore, or to rescue with a single retry attempt like the one above in case they were caused by connection issues. With the Apollo GraphQL client, it is more important to inspect the graphQLErrors and networkError attributes of the ApolloError object, which indicate more serious issues caught by the client.
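For the Apollo setup specifically, a minimal sketch of that policy, assuming @apollo/client v3 (the older apollo-link-error and apollo-link-retry packages expose equivalent links), could be:

import { ApolloClient, InMemoryCache, HttpLink, from } from "@apollo/client";
import { onError } from "@apollo/client/link/error";
import { RetryLink } from "@apollo/client/link/retry";

// Log real GraphQL/server errors loudly, but treat plain network drops as noise.
const errorLink = onError(({ graphQLErrors, networkError }) => {
  if (graphQLErrors) {
    graphQLErrors.forEach((err) => console.error("[GraphQL error]", err.message));
  }
  if (networkError) {
    console.warn("[Network error]", networkError.message);
  }
});

// Retry once when the request failed at the network level (e.g. aborted).
const retryLink = new RetryLink({
  attempts: { max: 2, retryIf: (error) => !!error },
});

const client = new ApolloClient({
  link: from([errorLink, retryLink, new HttpLink({ uri: "/graphql" })]),
  cache: new InMemoryCache(),
});

The retryIf predicate only fires for network-level failures, so GraphQL errors returned by the server are never retried blindly.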

Establish bot to bot communication in Microsoft Bot Framework

I have two published bots in Azure and would like to establish communication between them.
Using the skill bot concept, I am able to send a message from the parent bot (Bot1) to the child bot (Bot2). Once Bot2 receives the message, the error below is thrown:
Error 403 - This web app is stopped.
The web app you have attempted to reach is currently stopped and does not accept any requests. Please try to reload the page or visit it again soon.
If you are the web app administrator, please find the common 403 error scenarios and resolution here. For further troubleshooting tools and recommendations, please visit Azure Portal.
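A quick way to see the raw response from Bot2's messaging endpoint, independent of the Bot Framework SDK (the host name is a placeholder; /api/messages is the conventional Bot Framework route), is something like:

const axios = require("axios");

(async () => {
  try {
    await axios.post(
      "https://<bot2-app-name>.azurewebsites.net/api/messages", // placeholder host
      { type: "message", text: "ping" },
    );
  } catch (err) {
    // While the App Service is stopped, this logs 403 and the
    // "This web app is stopped" HTML page shown above.
    console.log(err.response && err.response.status);
    console.log(err.response && err.response.data);
  }
})();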
Also, I am getting the error below at Bot1:
[onTurnError]: Error: Request failed with status code 500
[onTurnError]: [object Object]
[onTurnError] unhandled error: Error: Request failed with status code 500
I didn't find any other errors besides the information above.
Please suggest what I am doing wrong here.
I look forward to your suggestions/corrections. I really appreciate your responses.

Why do I get a timeout error in Sinch?

I am trying to implement video chat on my website. The user handling is done by my backend, which creates a "signedUserTicket"; that ticket is then used to start the sinchClient. However, when I try starting a call, just after the message
Successfully initiated call, waiting for MXP signalling.
I get an error saying
Call PROGRESSING timeout. Will hangup call.
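For context, the client is started and the call placed roughly like this (a simplified sketch; the key and user id are placeholders, and the method names follow the Sinch JS sample code as I understand it, so treat them as assumptions):

// Simplified sketch; exact method names are assumptions based on the
// Sinch JS sample code, and the key / user id are placeholders.
const sinchClient = new SinchClient({
  applicationKey: 'MY_APPLICATION_KEY',
  capabilities: { calling: true, video: true },
});

// signedUserTicket comes from my backend, as described above.
sinchClient.start({ userTicket: signedUserTicket }).then(() => {
  const callClient = sinchClient.getCallClient();
  const call = callClient.callUser('otherUserId');
  call.addEventListener({
    onCallProgressing: () => console.log('progressing'),
    onCallEstablished: () => console.log('established'),
    onCallEnded: () => console.log('call ended'),
  });
});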
Also I get a Chrome warning saying
Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience.
(But I don't think this is the reason for the timeout.)
After checking the network tab to find out whether the requests are sent correctly, I realized that the problem is always with the requests going to https://rebtelsdk.pubnub.com. For example, the request to https://rebtelsdk.pubnub.com/subscribe/sub-c-56224c36-d446-11e3-a531-02ee2ddab7fe/f590205a-c82d-4ec1-bd72-f2997097cbedS/0/14932956939626496?uuid=3b47f938-609d-4940-bb0e-bd7030cb3697&pnsdk=PubNub-JS-Web%2F3.7.2 takes about 20 seconds, and the request seems to be cancelled after about 10 seconds, giving me the timeout error.
Any ideas on how to fix this?

ERR_EMPTY_RESPONSE using file-uploader safari and mac

My clients report this error from time to time. They receive new::ERR_EMPTY_RESPONSE, the upload retries the maximum number of times, and then it fails. The most recent occurrence was on OS X Mavericks with Chrome. It happens on Safari as well.
The problem is not widespread; it occurs for a relatively small percentage of clients.
Fine Uploader 5.1.3
s3.jquery.fine-uploader.js:3878
new::ERR_EMPTY_RESPONSE
s3.jquery.fine-uploader.js:29
POST request for 0 has failed - response code 0
Received an empty or invalid response from the server!
The lines above are taken from a console screenshot.
The issue is with your server or network. The browser is complaining that the preflight request is coming back with a completely empty response. This is not an issue with Fine Uploader. You'll need to examine the logs on your signature server, or perhaps there are network issues on your end or your client's end.
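As an illustration, if the signature server is an Express app, its preflight handling might look something like this (hypothetical /s3/signature route and placeholder origin; the point is that the OPTIONS request must get a non-empty response with CORS headers):

const express = require("express");
const app = express();

// Answer every preflight explicitly; an empty reply here is what shows up
// in the uploader as ERR_EMPTY_RESPONSE / "response code 0".
app.use((req, res, next) => {
  res.set({
    "Access-Control-Allow-Origin": "https://your-upload-page.example", // placeholder origin
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  });
  if (req.method === "OPTIONS") {
    return res.sendStatus(200);
  }
  next();
});

app.post("/s3/signature", express.json(), (req, res) => {
  // ...sign the S3 policy document here and return it as JSON...
  res.json({ /* signed policy */ });
});

app.listen(3000);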

Why does my Ajax request go directly from state 1 to 4?

I am making a request to a CGI program using AJAX. The response includes a Content-Length header, and my goal is to display the response progress dynamically. For that I need to run a function when the readyState of the XHR object reaches 3. But the request never seems to reach that state; instead it goes directly from state 1 to state 4.
What am I missing?
The response could be coming back so quickly that you just don't notice state 3. Especially if you are running it on localhost, the response can be transmitted very quickly. You could try setting an alert when it gets to state 3 to test whether that state is actually reached. Also, I believe Internet Explorer treats accessing the response during state 3 as an error, so there could be compatibility issues.
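For example, a quick way to check is to log every readyState change (hypothetical URL):

// Log each readyState transition with a timestamp to see whether state 3
// ever fires, even briefly.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/cgi-bin/report.cgi'); // hypothetical CGI URL
xhr.onreadystatechange = () => {
  console.log('readyState', xhr.readyState, 'at', performance.now().toFixed(1), 'ms');
};
xhr.send();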
If you're running on localhost, the browser probably never gets a chance to run between the time it sends the request and the time the whole response has arrived:
browser opens the connection and sets readyState to 1
browser sends the request to the server process and yields the CPU while it waits
server process receives the request and gets priority from the scheduler
server returns the data to the browser and yields control of the CPU; the browser continues execution
browser sees that all the data has already been received and sets readyState to 4
Long story short: don't count on ever seeing the "receiving" state.
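If the goal is to display download progress from the Content-Length, the progress event is a more dependable hook than readyState 3. A minimal sketch (hypothetical URL):

// The progress event fires as data arrives; total is only available when the
// server sends a Content-Length header (lengthComputable is true in that case).
const xhr = new XMLHttpRequest();
xhr.open('GET', '/cgi-bin/report.cgi'); // hypothetical CGI URL
xhr.onprogress = (e) => {
  if (e.lengthComputable) {
    console.log(`received ${e.loaded} of ${e.total} bytes`);
  }
};
xhr.onload = () => console.log('done');
xhr.send();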
