How to turn off network probing in Firefox? - firefox

Recent versions of Firefox make background requests to websites that are immediately aborted (it seems to be related to some IPv4/IPv6 decision or similar).
I am developing a web application, and those probes spam my logs (each one prints a multi-line stack trace).
Is there a setting in Firefox to turn this probing off?

This is a Firefox feature called Race Cache With Network.
Short description: Firefox sends requests for cached resources to both the local disk cache and the actual network server concurrently. Whichever result arrives first is used; the other request is canceled.
The idea is that sometimes the network is faster than the cache on an HDD, so the page loads faster this way.
To turn it off, open about:config and set network.http.rcwn.enabled to false.
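If you want the change to persist with a profile, the same preference can also be set from a user.js file in the Firefox profile directory; this is a sketch of that one-line entry, using the same preference name as in about:config:

    user_pref("network.http.rcwn.enabled", false);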
Sources:
Race Cache With Network experiment on Nightly (dev-platform mailing list)
primefaces.org forum post
Test connections in background mozillaZine forum

Related

25 second load time on initial network request to my website

When debugging using Chrome's performance profiler, I recorded a reload of my own page. The following is the result.
[Chrome performance profiler screenshot]
After reloading the page, there is a 25-second network request that happens first, then the website loads really fast. I've been searching the internet for an answer on how to debug this further or why this could happen, but honestly I can't figure it out.
I've tried using another network, my 4G on mobile, flushing DNS, using other browsers/incognito, and clearing the cache. Nothing works.
My hosting company says that this is a problem on my end, and GTmetrix also shows very good results. But I'm not sure I trust that, since it happens on both my phone and my computer. (4G roaming on the phone)

What exactly happens when we disable history in Firefox?

I have a problem with live video streams in the system I am developing that happens only in Firefox and only in normal mode.
The player correctly loads the stream, but after a few seconds it can't continue loading and just keeps trying forever.
This doesn't happen in Chrome, nor if I load the page in Private Mode, nor with normal videos. Just with live streams, just in Firefox, just in normal mode.
This happens both in local development (at home, over a remote connection) and in the corporate cloud.
It's an Angular 8/Node.js system and the player I use is Clappr. I changed to Video.js and the problem continued.
The stream comes from a load balancer with 6 child servers, each running an Apache server that proxies to an Icecast server, which originates the stream.
[load balancer] < [6 child servers with apache server proxy] > [icecast server]
I work for a very large company that has an IPS system installed. That was the first thing I thought of, but the IPS team could not find any blocked traffic. Besides, if it were the IPS, why would the traffic not be blocked in private mode as well?
So I tried to pinpoint which configuration difference in private mode does the trick, and I figured out that disabling all history (not only navigation and downloads or forms) makes it work too.
Does anyone know what exactly happens when the navigation history is disabled? Besides not saving history, does it have an impact on something else, such as some type of cache or networking? Does anyone have an idea how to make the stream work without disabling history? I can't ask my users to disable history just to use my system.
EDIT
One thing that may be relevant to the issue is that in Firefox the player doesn't show the LIVE label when the transmission starts; it shows a negative number instead. Maybe this creates some problem with the history.
I couldn't find information on what exactly happens when we disable history in Firefox, but I could solve the problem of playing the stream in Firefox, so I won't accept this answer; I'll leave it here for future reference in case someone has a similar problem.
I solved it by adding ?nocache=<random integer of length 10> to the video URL. Note that if your URL already has query parameters, you can't have two ? characters in it; append the extra parameter with & instead.
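For reference, here is a minimal sketch of that workaround in TypeScript; addNoCache and streamUrl are names I made up for illustration, not part of Clappr or Video.js:

    // Append a cache-busting parameter, using & when the URL already has a query string.
    function addNoCache(streamUrl: string): string {
      const separator = streamUrl.includes('?') ? '&' : '?';
      const nonce = Math.floor(1e9 + Math.random() * 9e9); // random 10-digit integer
      return `${streamUrl}${separator}nocache=${nonce}`;
    }

    // e.g. pass addNoCache('https://example.com/live/stream.m3u8') to the player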

Why does selenium chromedriver use fewer resources than regular chrome

I have noticed that when launching Chrome with fresh user data directories via selenium chromedriver, it uses far fewer resources (CPU, memory, and disk) than when launching it normally.
One of the reasons I was able to find is that selenium chromedriver launches Chrome with these arguments:
--disable-background-networking
--disable-client-side-phishing-detection
--disable-default-apps
--disable-hang-monitor
--disable-popup-blocking
--disable-prompt-on-repost
--disable-sync
--disable-web-resources
--enable-automation
--enable-logging
--force-fieldtrials=SiteIsolationExtensions/Control
--ignore-certificate-errors
--log-level=0
--metrics-recording-only
--no-first-run
--password-store=basic
--test-type=webdriver
--use-mock-keychain
After applying those arguments, CPU, memory, and disk usage have gone down massively. However, disk usage is still about 10x higher. Using Windows Resource Monitor, I analyzed the I/O usage and saw a lot of writing to a chrome_url_fetcher directory and to another directory named with two random 5-digit numbers separated by an underscore (RANDOMNUMBER_RANDOMNUMBER). Both of these directories were in the %temp% folder and contained files that included "pepperflashplayer" in their names.
I am assuming that this is Chrome installing a necessary component for Pepper Flash, but why is this not the case with selenium chromedriver? Is there any way I can stop this?
The Selenium-driven, ChromeDriver-initiated google-chrome v87.0.4280.88 browsing context is started with these additional command-line switches:
--disable-background-networking: Disables several subsystems which run network requests in the background.
--disable-client-side-phishing-detection: Disables the client-side phishing detection feature.
--disable-default-apps: Disables installation of default apps on first run. This is used during automated testing.
--disable-hang-monitor: Suppresses hang monitor dialogs in renderer processes.
--disable-popup-blocking: Disables pop-up blocking.
--disable-prompt-on-repost: Disables the prompt that normally asks the user to confirm before re-navigating to a page that was the result of a POST.
--disable-sync: Disables syncing browser data to a Google Account.
--enable-automation: Enables indication that browser is controlled by automation.
--enable-blink-features=ShadowDOMV0: Enables one or more Blink runtime-enabled features.
--enable-logging: Controls whether console logging is enabled and optionally configures where it's routed.
--log-level=0: Sets the minimum log level.
--no-first-run: Skip First Run tasks, whether or not it's actually the First Run.
--no-service-autorun: Disables the service process from adding itself as an autorun process.
--password-store=basic: Specifies which encryption storage backend to use.
--remote-debugging-port=0: Enables remote debug over HTTP on the specified port.
--test-type=webdriver: Type of the current test harness ("browser" or "ui" or "webdriver").
--use-mock-keychain: Uses a mock keychain on Mac to prevent blocking permissions dialogs.
--user-data-dir="C:\Users\username\AppData\Local\Temp\scoped_dir9640_113432031": Directory where the browser stores the user profile.
data:, (the blank data: URL the browser is initially opened with)
Using these additional command-line switches means the initialization of the Google Chrome process requires fewer callbacks and disables a lot more of them.
Apart from that, Flash used by the regular chrome session is:
32.0.0.465 C:\Users\username\AppData\Local\Google\Chrome\User Data\PepperFlash\32.0.0.465\pepflashplayer.dll
Whereas the Flash used by the ChromeDriver-initiated chrome session is:
30.0.0.154 C:\WINDOWS\system32\Macromed\Flash\pepflashplayer64_30_0_0_154.dll
For the above-mentioned reasons, the ChromeDriver-initiated Google Chrome is lighter and consumes less memory than the regular Google Chrome.
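If you want to approximate that lighter startup outside of Selenium, the following is a sketch (TypeScript on Node.js) that launches a regular Chrome binary with a subset of those switches. The binary path and profile directory are assumptions for your machine, and --disable-component-update is only my guess at suppressing the background component (Flash) download seen in %temp%, not something confirmed above:

    import { spawn } from 'child_process';

    // Assumed install path; adjust for your machine.
    const chromeBinary = 'C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe';

    const switches = [
      '--disable-background-networking',
      '--disable-default-apps',
      '--disable-sync',
      '--metrics-recording-only',
      '--no-first-run',
      // Assumption: this should keep Chrome from fetching components
      // (such as Pepper Flash) in the background.
      '--disable-component-update',
      // Throwaway profile, similar to the scoped_dir ChromeDriver creates.
      '--user-data-dir=C:\\Temp\\chrome-test-profile',
    ];

    // Launch Chrome detached so the Node process can exit independently.
    spawn(chromeBinary, switches, { stdio: 'ignore', detached: true }).unref();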

What is the meaning of 'Blocking' in the Firebug Net Panel?

I'm using Firebug 1.5.2, and while testing a site before production release I can see a huge amount of time consumed by the 'Blocking' parts of the requests.
What exactly does the 'Blocking' mean?
"Blocking" previously (earlier versions of FireBug) was called "Queuing". It actually means that request is sitting in queue waiting for available connection. As far as I know number of persistent connections by default is limited in last versions of Firefox to 6, IE8 also 6. Earlier it was only 2. It can be changed by user in browser settings.
Also as I know that while javascript file is loading, all other resources (css, images) are blocked
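For completeness, the Firefox-side limit mentioned above lives in about:config; these are the usual preference names, written as user.js entries (the values shown are illustrative, since defaults differ between Firefox versions):

    // Per-host and global connection limits (same names as in about:config)
    user_pref("network.http.max-persistent-connections-per-server", 6);
    user_pref("network.http.max-connections", 256);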
Blocking is a term used to describe an event that stops other events or code from processing (within the same thread).
For example if you use "blocking" sockets then code after the socket request has been made will not be processed until the request is complete (within the same thread).
Asynchronous activities (non blocking) would simply make the request and let other code run whilst the request happened in the background.
In your situation it basically means that certain parts of Firebug / the browser cannot proceed until other parts are complete, i.e. it is waiting for an image to download before downloading more.
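As an illustration of that difference (plain browser TypeScript, not anything Firebug-specific): a synchronous XHR blocks the script until the response arrives, while fetch() lets the rest of the code keep running:

    // Synchronous (blocking): nothing below runs until the response is in.
    // (Synchronous XHR on the main thread is deprecated; shown only to illustrate blocking.)
    const xhr = new XMLHttpRequest();
    xhr.open('GET', '/some-resource', false); // third argument false = synchronous
    xhr.send();
    console.log('blocking request finished');

    // Asynchronous (non-blocking): the request runs in the background.
    fetch('/some-resource').then(() => console.log('async request finished'));
    console.log('this line runs before the async response arrives');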
As far as I know, two things cause components to block others from loading:
The browser's enforced (but usually configurable) limit on how many resources can be loaded in parallel from a particular host at a time.
Inline JavaScript, which can cause the browser to wait and see whether it needs to download the rest of the components at all (in case the JavaScript redirects or replaces the content of the page).
It means "waiting for connection". As explained in the official documentation by Mozilla, "Blocking" is "Time spent in a queue waiting for a network connection." That can be due to Firefox hitting its internal parallel connections limit, as explained there and in answers here.
It can also mean "waiting because server is busy". One possible reason for "Blocking" times is missing in the official documentation linked above: it can happen when the server cannot provide a connection at the time because it is overloaded. In that case, the connection request goes into a queue on the server until it can be processed once a worker process becomes free [source].
In a technical sense, such a connection is not yet established because the request is awaiting accept() from the server [source]. And maybe that is why it is subsumed under "Blocking" by Firefox, as it could also be considered "Time spent in a queue waiting for a network connection".
(This behaviour is not fully consistent as of Firefox 51 though: for the first URL called up in a new tab, the time before the server accepts the connection request is not counted at all in the "Timings" tab – only for subsequent URLs entered. Either of both behaviours could be a bug, I don't know which one.)

IE save onunload bug

I have a dynamic ajaxy app, and I save the state when the user closes the explorer window.
It works OK in all browsers, but in IE there is a problem: after I close the application tab twice, I can't connect to the server anymore.
My theory is that the connection to the server fails to complete while the tab is being closed, and somehow IE7 thinks that it has 2 outstanding connections to the server and therefore queues new connections indefinitely.
Has anyone experienced this? Is there any workaround or solution?
In IE, if you use long-polling AJAX requests, you have to close down the XHR connection on 'unload'. Otherwise it will be kept alive by the browser, even if you navigate away from your site. These kept-alive connections will then cause the hang, because your browser will hit the maximum open-connection limit.
This problem does not happen in other browsers.
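A minimal sketch of that idea (modern TypeScript syntax; IE7 itself would need attachEvent/onunload, and pollXhr/startLongPoll are names made up for illustration):

    let pollXhr: XMLHttpRequest | null = null;

    function startLongPoll(url: string): void {
      pollXhr = new XMLHttpRequest();
      pollXhr.open('GET', url, true);
      pollXhr.onload = () => startLongPoll(url); // issue the next poll when a response arrives
      pollXhr.send();
    }

    // Abort the open connection when the tab closes so the browser
    // does not keep it alive and exhaust its connection limit.
    window.addEventListener('unload', () => {
      pollXhr?.abort();
    });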
Well, you can get around the connection-limit easily enough; simply create a wildcard domain and instruct your app to round-robin the subdomains; e.g. a.rsrc.dmvnoc.com, b.rsrc.dmvnoc.com, etc, for my netMail application. Without this trick, preloading all the images takes almost 30 seconds on a LAN (because of MSIE's low connection limit), but with it, the images download in about a second.
If you need to combine scripts with this trick, just set document.domain to the parent in the new scripts.
However, you might want to checkpoint the state on change anyway: the user might lose their network connection, or their computer might crash. If you want to reduce network traffic, have the client simply set a cookie that contains the relevant state. You can fit an awful lot in there (3000 bytes or so), and the server gets it automatically on the next connection anyway, where it can save the results (as it presently does) and remove the cookie to signal that it has saved the state.
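A sketch of that cookie-checkpoint idea in TypeScript (checkpointState and the appState cookie name are hypothetical, purely for illustration):

    // Serialize a small state object into a cookie on every change;
    // the server reads it on the next request and can then clear it.
    function checkpointState(state: Record<string, unknown>): void {
      const payload = encodeURIComponent(JSON.stringify(state));
      if (payload.length < 3000) { // stay within the ~3000 bytes mentioned above
        document.cookie = `appState=${payload}; path=/`;
      }
    }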
