How do I find out what makes a browser keep loading? - ajax

I am using Opera, and sometimes a page keeps on loading even though all content has already been presented. How do I find out which elements are still being loaded, or what is causing the ongoing loading process?

Even though all content seems to be 'presented', the page may still be loading images, JavaScript, CSS, or other resources referenced by it. This background loading isn't referred to as "AJAX", as you have tagged your question. AJAX is the asynchronous invocation of JavaScript to retrieve or submit data without requiring page refreshes.
As for examining which resources are causing your page to appear to be still "loading"...
I use Firebug's network tab in Firefox to look at pending requests for resources. It shows every resource your browser requests, how long each takes to retrieve, and the full request and response headers and bodies.
Google Chrome has something similar built in; just hit F12 to bring up the Developer Tools.
I would assume Opera has something similar, although I am not sure of its name.
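If you would rather inspect this from the page itself, a rough alternative is the Resource Timing API, which modern browsers expose in the console (it postdates Firebug-era browsers, so treat this as a sketch). It only lists resources that have already finished, but sorting by finish time shows which requests straggled in last and kept the page "loading":

    // Run in the browser console once the page has settled.
    // Entries are sorted by when they finished, so the requests that
    // arrived last (and kept the loading indicator spinning) print last.
    performance.getEntriesByType('resource')
      .sort((a, b) => a.responseEnd - b.responseEnd)
      .forEach(e => console.log(
        Math.round(e.responseEnd) + ' ms  ' +
        Math.round(e.duration) + ' ms  ' + e.name
      ));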

Related

Find navigation/redirect request with DevTools after button click that executes javascript/ajax

The question is probably easily misunderstood, so I'll go into more detail:
I am trying to automate a task in a certain (very outdated) browser-based idle game written in PHP, in order to polish my portfolio with slightly more varied projects.
I used DevTools to reverse-engineer most of the requests and wrote a small C# request wrapper to test them. I can get most of the actions I want to work using the respective AJAX GET requests and the correct cookies/headers - not really part of the problem.
Example:
Attacking an enemy:
https://somebrowsergame.com/game/ajax.php?mod=location&submod=attack&location=3&stage=2&premium=0&sh=****mysessionhash****
Making a GET request to this URI with the correct headers and cookies, I can perform the in-game action programmatically and successfully from my C# console application and see that the fight has taken place when visiting the site in the browser.
The problem:
When monitoring all requests after clicking the "attack" button via DevTools, even with "Preserve log" enabled, I don't see any redirects or any way of determining how my browser gets told where to navigate to.
Findings
I found out that the button calls a JavaScript function attack() in its onClick event and tried debugging the JavaScript in DevTools in order to find out where something happens (such as setting document.href or similar), but when debugging I ran into a seemingly infinite loop of setInterval handler and setTimeout handler entries in the call stack.
I also cleared the Network tab after the onClick event (and after the AJAX request, which I could find during debugging), but the only request/response I got was the document GET request for the final page - no request telling my browser which site to navigate to.
Monitoring requests
The request made to initiate the action (via button click on website or ajax GET request as outlined above)
The document response / site navigated to
What I want to know is how my browser got told which site to navigate to, as the request URI for the document request (getting the HTML of the target page) has a parameter generated on the server side (logId).
I have also used "All" request types in DevTools, as well as negative filters, when monitoring requests, but I was never able to see how my browser knows which page to navigate to. I tried source breakpoints at "beforeunload", tried inspecting the JavaScript source connected to the onClick event of the button (which didn't give me anything, as the JS is minified and barely readable - I am not even sure if the navigation is done via window.target.href), and googled this question in all possible wordings, which led me nowhere.
I am not too versed in web development, but I am sure my browser has to be told where to navigate to in some fashion after clicking that button?
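For what it's worth, here is a console sketch of the kind of instrumentation that can help in this situation. Whether the game's minified attack() goes through any of these entry points is an assumption; plain location.href assignments cannot be intercepted from page script, but the beforeunload pause at least freezes the page before it navigates away:

    // Paste into the DevTools console before clicking the attack button.
    // Pausing in beforeunload freezes the page right before it navigates,
    // so the Network tab and DOM can still be inspected.
    window.addEventListener('beforeunload', function () { debugger; });

    // Wrap the scriptable navigation entry points to log their callers.
    var origOpen = window.open;
    window.open = function () {
      console.log('window.open', arguments, new Error().stack);
      return origOpen.apply(this, arguments);
    };

    var origSubmit = HTMLFormElement.prototype.submit;
    HTMLFormElement.prototype.submit = function () {
      console.log('form.submit', new Error().stack);
      return origSubmit.apply(this);
    };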

Loading a website without browser showing spinning wheel

I am just curious to know how these websites were made to load only once. If you go to the sites http://fueled.com/ or http://ecap.co.nz/, the browser shows the spinning wheel only the first time the website is loaded. When you navigate to other pages from the navigation menu, like About or Contact or Team, when those pages load, the browser doesn't show the spinning wheel.
How do they make them work like this?
It is because clicking those links does not trigger a full page load. Instead, an asynchronous (AJAX) request is triggered and its response is used to update the page. Further page loads will also be quicker, since scripts, styles and pictures are cached, that is, saved locally on your computer.
You can check what happens in the Network tab of the browser's developer tools. Note the last request, then click on such a link. You will see that the request log is not cleared; new requests are simply appended. That means no full page load happened in between.
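A minimal sketch of that technique follows. The selector names are made up, not taken from fueled.com or ecap.co.nz, and the sites themselves may well use XMLHttpRequest or jQuery rather than fetch:

    // Intercept in-page links, fetch the new content asynchronously and
    // swap it in - the browser never starts a full navigation, so no spinner.
    document.addEventListener('click', async function (event) {
      var link = event.target.closest('a.internal');  // hypothetical marker class
      if (!link) return;
      event.preventDefault();
      var response = await fetch(link.href);
      document.querySelector('#content').innerHTML = await response.text();  // hypothetical container
      history.pushState(null, '', link.href);         // keep the address bar in sync
    });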

Is pushState inferior to hashbangs when it comes to caching?

There are several advantages to HTML5 pushState in comparison to hashbangs; in fact, Google is now encouraging the use of pushState. The only pushState disadvantage being publicly discussed is the fact that non-modern browsers do not support it. However, it seems to me that pushState is also disadvantageous when it comes to caching. I might be wrong, hence this question.
Is pushState inferior to hashbangs when it comes to caching pages?
Here is a case where it seems that pushState is bad at caching.
pushState
Bob navigates to eg.com/page1, the full page is downloaded, rendered and cached.
Bob clicks a button, eg.com/json/page2 is downloaded and cached.
The browser processes the JSON and re-renders parts of Bob's page.
pushState changes the displayed browser address to eg.com/page2.
Bob closes the browser, then re-opens it and directly visits eg.com/page2. The full page is downloaded, rendered and cached.*
* Despite the fact that the content is theoretically already available in the cache under the guise of eg.com/json/page2.
Hashbangs
Alice navigates to eg.com/#!page1, eg.com/index.html is downloaded and cached.
eg.com/json/page1 is downloaded and cached.
The browser processes the JSON and renders Alice's page.
Alice clicks a button; eg.com/json/page2 is downloaded and cached, and the displayed browser address is changed to eg.com/#!page2.
The browser processes the JSON and re-renders Alice's page.
Alice closes the browser, then re-opens it and directly visits eg.com/#!page2. NOTHING is downloaded and everything is loaded from cache, unlike with pushState.
Summary
I have numerous similar cases in mind. The question is whether this reasoning is indeed valid; I may be missing something that is leading me to wrong conclusions. Is pushState inferior to hashbangs when it comes to caching pages?
I think that pushState is inferior here, but if you are building a SPA correctly the difference should not be significant:
Assuming you are using one of the latest frameworks, your index.html page should be relatively small, with a few <script> tags (generated by tools like webpack, SystemJS, etc.).
The JS files referenced by these tags are cached normally, so the only difference between the two methods is fetching index.html for every pushState URL, as opposed to fetching it once in hashbang mode.
I got the idea from the following question:
https://webmasters.stackexchange.com/questions/65694/is-this-way-of-using-pushstate-seo-friendly
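To make the difference concrete, here is a sketch of both routing styles against the hypothetical eg.com/json/... endpoints from the scenario above (render() is a placeholder). On a warm cache the two behave alike; the gap only appears on a cold start, where the pushState variant asks the server for a real path:

    // Hashbang routing: the server is only ever asked for eg.com/
    // (the fragment never leaves the browser), so a cold start on
    // eg.com/#!page2 can reuse the cached index.html.
    window.addEventListener('hashchange', async function () {
      var page = location.hash.replace('#!', '');
      render(await (await fetch('/json/' + page)).json());
    });

    // pushState routing: in-page navigation is identical, but the address
    // bar now shows eg.com/page2, so a cold start requests /page2 itself.
    document.addEventListener('click', async function (event) {
      var link = event.target.closest('a');
      if (!link) return;
      event.preventDefault();
      var page = new URL(link.href).pathname.slice(1);
      render(await (await fetch('/json/' + page)).json());
      history.pushState(null, '', '/' + page);
    });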

How to circumvent cache revalidation on browser refresh?

Sometimes it is useful to measure your site's performance in a fully cached situation. But browsers make this hard to test, because on every manual page refresh they revalidate all items, which results in a request for every resource on the web server. Still-valid cache items respond with an HTTP 304, invalid ones with a 200 OK. So you end up with wrong timings for this particular use case, because of the latency to your web server.
One solution is to open a new tab and enter the site's URL, which results in my expected behaviour: cached items are served from disk. As soon as you hit refresh, the items revalidate again.
This workflow (open tab, open tab, and so on) is kind of bad, so I want to ask if anybody knows a better way to achieve this. Maybe there is a nicely hidden shortcut I have missed so far out there on the internet.
Just add a bookmark and fill in the following in the 'URL' field: javascript:location.href=location.href;
It makes perfect sense for browsers to revalidate all resources of a given URL whenever the 'reload' button is clicked, as this best matches the intention of the user doing so - in other words, a reload is expected to result in the display of fresh data instead of simply serving what's already in the browser cache. In addition, all major browsers have implemented a way of re-fetching every single resource while bypassing the cache entirely (so not even '304 Not Modified' revalidations occur), using shortcuts like Shift/Ctrl + R/F5.
In Firefox, just focus the URL bar (Ctrl-L on Windows and Linux, Cmd-L on Mac) and hit Enter. This is treated like a normal top-level load of the URL, not a reload.
Unfortunately, other browsers handle that sort of thing differently. The simplest cross-browser approach is probably a little harness page that loads the page you want to test in an iframe and has a button that sets subFrameWindow.location.href = subFrameWindow.location.href to trigger a new load.
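A sketch of such a harness page (example.com stands in for the page under test; re-assigning the iframe's src sidesteps reading contentWindow.location, which is blocked for cross-origin frames):

    <!-- harness.html: load once, then click the button for cache-served loads -->
    <!DOCTYPE html>
    <html>
    <body>
      <button id="load">Load page (no revalidation)</button>
      <iframe id="target" width="100%" height="600"></iframe>
      <script>
        var url = 'https://example.com/page-to-test'; // page under test
        document.getElementById('load').onclick = function () {
          // Assigning src triggers a normal navigation in the frame,
          // not a reload, so valid cache items are not revalidated.
          document.getElementById('target').src = url;
        };
      </script>
    </body>
    </html>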

Iframe vs normal / ajax get request

I have a page that gathers environment status from a couple of IBM WebSphere servers using iframes similar to this:
<iframe src="http://server:9060/ibm/console/status?text=true&type=server&node=NODE&name=ServerName_server_NODE"></iframe>
and it happily prints out "Started" or "Unavailable", etc. But if I load the same URL in a normal browser, sometimes it works and sometimes it does not: some of them show a login page, while others simply return HTTP code 500.
So what's the difference between loading the page through an iframe and loading it directly in the browser?
I can tell you that the iframe solution works no matter which machine I am doing it on, so I do not believe it has anything to do with the user who is opening the page. And before you ask why I don't keep the solution that works: it's because it takes a long time to open the page with the iframes, versus a page where everything is requested through AJAX.
Update: Using jQuery to perform the ajax call returns "error" and "undefined" for the servers that I can't see in a normal browser.
One difference is that an iframe has to render the view, while XHR does not.
An iframe is essentially the same as opening the page with the browser. In both cases the browser's credentials are used, so there will be no difference between the two.
Secondly, loading something in an iframe should take about the same amount of time as requesting it through XHR, since in both cases the browser makes an HTTP request and waits for the response. I should add that an iframe will also take time to render the content onto the page; however, if you plan on displaying the result with AJAX anyway, an iframe/XHR solution will be more or less the same.
In the case of an AJAX request, the same-origin policy (which restricts cross-domain calls) comes into the picture, so you can't make a cross-domain call using XHR. An alternative is to embed a Flex SWF file in your page and make the call through JavaScript; Flex can then make the cross-domain call (provided the target domain allows it via crossdomain.xml) and render the result back through JavaScript.
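For reference, a sketch of the jQuery call the questioner probably made; the URL and parameters are the ones from the iframe above, nothing else is taken from the question. Cross-origin, the same-origin policy aborts the request and jQuery reports little more than "error" and an undefined errorThrown, matching the update above:

    $.ajax({
      url: 'http://server:9060/ibm/console/status',
      data: { text: 'true', type: 'server', node: 'NODE', name: 'ServerName_server_NODE' },
      dataType: 'text',
      success: function (status) {
        console.log('Server status:', status);  // "Started", "Unavailable", ...
      },
      error: function (xhr, textStatus, errorThrown) {
        // A blocked cross-origin request typically shows status 0,
        // textStatus "error" and an empty/undefined errorThrown.
        console.log(textStatus, errorThrown, xhr.status);
      }
    });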
