How does Facebook load pages? - AJAX

I have read somewhere that Facebook loads pages through a hidden iframe with an Ajax call... Is this true?

Facebook uses something they call BigPipe, which splits the page up into a bunch of little "pagelets" that are individually loaded via AJAX.
(In the original answer's screenshot, the areas highlighted in light blue are individual pagelets.)
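As a rough client-side illustration of the pagelet idea (not Facebook's actual code; the /pagelet/... endpoints and element IDs below are made up), the page shell can ship with empty placeholders that are filled in as each pagelet's HTML arrives:

    <!-- The page shell ships with empty placeholders for each pagelet. -->
    <div id="pagelet-newsfeed"></div>
    <div id="pagelet-chat"></div>

    <script>
    // Hypothetical endpoints, each returning just the HTML for one pagelet.
    var pagelets = {
      'pagelet-newsfeed': '/pagelet/newsfeed',
      'pagelet-chat': '/pagelet/chat'
    };

    Object.keys(pagelets).forEach(function (id) {
      fetch(pagelets[id])
        .then(function (res) { return res.text(); })
        .then(function (html) {
          // Each pagelet fills its placeholder as soon as its HTML arrives,
          // so a slow section doesn't hold up the fast ones.
          document.getElementById(id).innerHTML = html;
        });
    });
    </script>

The point of splitting the page this way is that each section can be generated and rendered independently, so one slow section doesn't block the rest of the page.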

Related

What are full page reloads, and why did we need them before AJAX?

I was reading up on AJAX and how it lets us exchange data with a server behind the scenes and consequently avoid full page reloads. My confusion lies here: I don't really understand what full-page reloads mean. I think it's probably because I've been working with AJAX/React since the start, so I have not really seen any webpage of mine fully reload when I access data from a database or an API.
It'd be great if someone could explain what they are and why we needed them before AJAX.
A full page load is where the entire page is downloaded from the server. A page typically consists of several sections: header, footer, navigation, and content. In a classic web application without AJAX, a user clicks on a link to another page, and has to download the full page, even though only the main content is changing. The header, footer, and navigation all get downloaded again even though they don't change.
With AJAX there is the opportunity to change only the parts of the page that actually change. When a user clicks on a link, JavaScript loads just the content for that link and inserts it into the current page. The header, footer, and navigation don't need to reload.
This introduces other problems that need attention.
When AJAX inserts new content into the page, the URL doesn't change. That makes it difficult for users to bookmark or link to specific content. Well-written AJAX applications use history.pushState() to update the URL when loading content via AJAX.
There are then two paths to get to every piece of content. Users can either load the URL containing that content directly, or load the content into some other page by following a link. Web developers need to test and ensure both work.
Search engines have trouble crawling AJAX powered sites. For best compatibility, you need to employ server side rendering (SSR) or pre-rendering to serve initial content on a page load that doesn't require JavaScript.
Even for Googlebot (which executes JavaScript) care must be taken to make an AJAX powered site crawlable. Googlebot doesn't simulate user actions like clicking, scrolling, hovering, or moving the mouse.
Content needs to appear on page load without any user interaction.
You must use <a href=...> links for navigation so that Googlebot can find other pages by scanning the document object model (DOM). For users, JavaScript can intercept clicks on those links and prevent a full page load by using return false from the onclick handler or event.preventDefault() in the click handler.
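As a rough sketch of that pattern (the /partial/... endpoint and the element IDs are hypothetical), the same links then work for Googlebot, for direct visits, and for AJAX navigation:

    <nav>
      <!-- Real links that Googlebot can follow and users can open directly. -->
      <a href="/articles/first-post">First post</a>
      <a href="/articles/second-post">Second post</a>
    </nav>
    <main id="content"></main>

    <script>
    function loadInto(url) {
      // Hypothetical endpoint that returns only the main-content HTML for a URL.
      return fetch('/partial' + url)
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
        });
    }

    document.querySelectorAll('nav a').forEach(function (link) {
      link.addEventListener('click', function (event) {
        event.preventDefault(); // skip the full page load for JS-capable browsers
        var url = link.getAttribute('href');
        loadInto(url).then(function () {
          history.pushState({ url: url }, '', url); // keep the address bar in sync
        });
      });
    });

    // Handle Back/Forward so both paths to the content keep working.
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.url) {
        loadInto(event.state.url);
      }
    });
    </script>

With JavaScript disabled (or for a crawler that doesn't click), the links simply trigger normal full page loads, so both paths to the content described above remain testable.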

How to access a specific Joomla File

Within an article in Joomla, I have the following code:
(loadposition file_download)
This line of code loads more code, but I cannot access it from the Articles page. How do I access the loadposition code?
On the web you are always striving to separate content from display. Here you are just setting up the display: when the page renders, whatever is published to the file_download position will render at that spot. In other words, the HTML for whatever is in file_download is generated dynamically when the user views the page in the browser. In the editor you are only creating HTML. If you save the article and view it in rendered form (i.e. on the Joomla frontend), you will see that loadposition does its work of rendering whatever file_download asks it to render. That is, loadposition is a way to dynamically include content (which might be text, JavaScript, whatever) into an article.

"Fetch as Google" renders all pages to look like my homepage

I am trying to figure out why my website's posts and pages, such as my resume, get a "Complete" status with a green check mark (seemingly no errors or redirects) when I fetch and render as Google, but all of them "render" to look like my homepage. The PageSpeed Insights tool seems to use the same rendering engine, as it has the same issue.
Notes:
The HTML served from my website on the initial page load is the correct HTML and content. No redirects occur. The initial page load does not fetch content via JS. I mention this because, although my website is not a single-page application (I'm using WordPress), I do use AJAX in combination with a post variable flag to fetch new page content when the user navigates to the next page (after the initial page load).
I have verified that all of my pages have been indexed using the "site:" trick in Google search. They are indexed properly, but they aren't "rendering" properly.
Should I be worried? Should I just ignore that the pages aren't rendering properly? It doesn't make any sense. Is anyone else having this issue?
Your resume page is served with a response header of Content-Type: image/gif, so Google thinks the page is an image.
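One quick way to confirm what a page is served as (run this from the browser console on your own site, since cross-origin requests may be blocked; the URL is a placeholder) is to check the header with the Fetch API:

    // Logs the Content-Type header the page is actually served with;
    // for an HTML page it should be something like "text/html; charset=UTF-8".
    fetch('https://example.com/resume/', { method: 'HEAD' })
      .then(function (res) {
        console.log(res.headers.get('content-type'));
      });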

Custom title and image for Facebook share button on AJAX result

This question exists in different flavors, but not for AJAX pages.
I use AJAX to pull a single video into my page and I want a custom FB share button for it. Everything I've read so far says that FB pulls the required title and image from meta tags in the page's <head> section (og:image and og:title).
I've tried to change the meta properties when the AJAX call returns, before rendering the share button. This hasn't worked. It uses the values that were present upon initial page load. I have yet to encounter a single answer to this question.
Are there data attributes I can add to the 'fb-like' div to specify a custom title and image (similar to data-href)?
Thanks!
You need an individual URL for each individual piece of content that you want to share. Open Graph objects (and simple shared links automatically "become" such objects) are identified by their URL (og:url).
Now if your whole page is built on AJAX, you still need to create such individual URLs somehow – the Facebook scraper tool does not “speak” JavaScript, and relies solely on the OG meta information that the server delivers for any URL it requests.
Since the hash part of a URL is only relevant client-side (and does not even get sent to the server), "typical" AJAX URLs that rely on it to tell the client which piece of content to load in the background are no good here.
So if you want to share two pieces of content (videos) as http://www.example.com/?v=vid1 and http://www.example.com/?v=vid2, then you have to make sure that your server delivers the meta data for each video under its respective URL.
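A minimal sketch of that server side (plain Node.js with made-up video data; adapt it to whatever actually serves example.com) might look like this:

    // Reads the ?v= query parameter and emits the matching og: meta tags,
    // so the Facebook scraper sees different metadata for each video URL.
    const http = require('http');

    const videos = {
      vid1: { title: 'First video',  image: 'http://www.example.com/thumbs/vid1.jpg' },
      vid2: { title: 'Second video', image: 'http://www.example.com/thumbs/vid2.jpg' }
    };

    http.createServer((req, res) => {
      const query = new URL(req.url, 'http://www.example.com').searchParams;
      const id = query.get('v');
      const video = videos[id];

      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(`<!DOCTYPE html>
    <html>
    <head>
      <meta property="og:url" content="http://www.example.com/${video ? '?v=' + id : ''}" />
      <meta property="og:title" content="${video ? video.title : 'My videos'}" />
      <meta property="og:image" content="${video ? video.image : 'http://www.example.com/thumbs/default.jpg'}" />
    </head>
    <body><!-- the AJAX-driven player page loads here --></body>
    </html>`);
    }).listen(3000);

With something like this in place, sharing http://www.example.com/?v=vid1 and http://www.example.com/?v=vid2 gives the scraper different og:title and og:image values, even though the page itself can still swap videos via AJAX for users.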

How does a website load only part of the page and still display full URLs?

I am looking at the Gawker blogs (http://io9.com, http://lifehacker.com/) and I'm curious about how they are made.
When I click on a link, only the article part of the page reloads, displaying a loading icon while it does.
But what I can't figure out is that the links point to new URLs like io9.com/something/something, not the site.com/#something URLs I usually see on AJAX pages, where JavaScript appends a hash to the URL to mark the page after an AJAX request.
Can I change the full URL from JavaScript, or what is happening here?
When this happens, the website is using the HTML5 History API. This API can change the URL (via JavaScript) without reloading the page.
See caniuse.com for browser support.
If you would like to implement it in your website, backbonejs.org would be very useful.
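At its simplest the pattern looks like this (loadArticle is a hypothetical function that fetches the article HTML and swaps it into the page):

    // Change the address bar to a real path -- no hash, no page reload.
    history.pushState({ article: '/something/something' }, '', '/something/something');

    // When the user presses Back/Forward, load the matching article again.
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.article) {
        loadArticle(event.state.article); // hypothetical fetch-and-swap function
      }
    });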
