I need to load the content of a new tab from a URL, but I cannot get this to work.
I need the same kind of thing as would have been done with an iframe before HTML5.
The website I am going to has no link to the hosting site whatsoever.
I have tried a simple version first:
@(Html.Kendo().PanelBar()
    .Items(panelbar =>
    {
        panelbar.Add().Text("Test").LoadContentFrom(@"<object data='https://www.google.com' type='text/html'/>");
    })
)
Has anyone been able to do this?
I am using the MVC wrappers, as you can see.
Paul
Firstly, you need to specify a name for your panelbar; otherwise it won't work. As for LoadContentFrom, you should simply pass it a URL; including HTML markup there makes no sense. So your code needs to look like this:
@(Html.Kendo().PanelBar()
    .Name("Test")
    .Items(panelbar =>
    {
        panelbar.Add().Text("Test").LoadContentFrom(@"https://www.google.com/");
    })
)
However, this is still not going to work, because the request to google.com (or any other cross-origin site that doesn't send the proper Access-Control-Allow-Origin header) will be blocked by CORS.
So this scenario may or may not work, depending on the external site you want to load from.
Furthermore, even if you do not have a CORS issue and you are able to load the contents, this is still not going to work like an iframe, because you will receive only HTML, which will likely have broken images and no CSS. That's because you will be placing this HTML in your own document, which has no access to those resources unless they are referenced by absolute URLs. You would have to search the entire HTML you receive and rewrite every image, CSS, and JavaScript reference... and that still might not provide the same experience as an iframe.
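If you really need iframe-like behavior, a possible workaround, sketched here with the Kendo UI JavaScript API rather than the MVC wrapper (the element id and the sizing are assumptions), is to make the item's content an actual iframe instead of calling LoadContentFrom. Be aware that many sites, google.com included, send X-Frame-Options headers that refuse framing, so test against your target site:

// assumes a <div id="panelbar"></div> on the page, plus jQuery and kendo scripts
$("#panelbar").kendoPanelBar({
    dataSource: [{
        text: "Test",
        // the content string is injected as-is, so the iframe keeps the external
        // site's own CSS, images, and scripts, unlike LoadContentFrom
        content: "<iframe src='https://www.google.com/' style='width:100%;height:400px;border:0'></iframe>"
    }]
});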
I have tried to set my site up (http://www.diablo3values.com) according to the guidelines set out here: https://developers.google.com/webmasters/ajax-crawling/. However, it appears that Google has updated its index (because I see the revisions to the meta description tags), but the ajax content does not show up in the index.
I am trying to use the “Handle pages without hash fragments” option.
If you view either of the following:
http://www.diablo3values.com/?_escaped_fragment_=
http://www.diablo3values.com/about?_escaped_fragment_=
you will correctly see the HTML snapshot with my content (those are the two pages I am most concerned about).
Any ideas? Am I doing something wrong? How do you get Google to correctly recognize the tag?
I'm typing this as an answer, since it got a little too long to be a comment.
First of all, your links seem to point to localhost:8080/about, and not /about, which is probably why Google doesn't index them in the first place.
Second, here's my experience with pushState URLs and Google AJAX crawling:
My experience is that ajax crawling with pushState URLs is handled a little differently by Google than with hashbang URLs. Since Google won't know that your URL is a pushState URL (it looks just like a regular URL), you need to add <meta name="fragment" content="!"> to all your pages, not only the "root" page. And Google doesn't seem to know that the pages are part of the same application, so it treats every page as a separate ajax application. So the Googlebot will never actually build a navigation structure inside _escaped_fragment_, like _escaped_fragment_=/about, as it would with a hashbang URL (#!/about). Instead, it will request /about?_escaped_fragment_= (which you apparently already have set up). This goes for all your "deep links": instead of /?_escaped_fragment_=/thelink, Google will always request /thelink?_escaped_fragment_=.
But as I said initially, the reason it doesn't work for you is probably that you have localhost:8080 URLs in your _escaped_fragment_-generated HTML.
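To make that concrete, here is a minimal sketch of the server side, assuming a Node/Express app; the framework choice and the renderSnapshot helper are illustrative, not from the question:

var express = require("express");
var app = express();

// hypothetical helper: returns a static HTML snapshot for a route, with
// absolute production URLs in its links (not localhost:8080!)
function renderSnapshot(path) {
    return "<html><body><h1>Snapshot for " + path + "</h1></body></html>";
}

app.get("*", function (req, res, next) {
    // Googlebot turns /about into /about?_escaped_fragment_= for pushState pages
    if ("_escaped_fragment_" in req.query) {
        return res.send(renderSnapshot(req.path));
    }
    next(); // normal visitors fall through to the single-page app
});

app.listen(8080);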
Googlebot only knows to crawl the escaped fragment if your URLs conform to the hashbang standard. As users navigate your site, your URLs need to be:
http://www.diablo3values.com/
http://www.diablo3values.com/#!contact
http://www.diablo3values.com/#!about
Googlebot actually needs to see these URLs in the source code so that it can follow them. Then it knows to download the following URLs:
http://www.diablo3values.com/?_escaped_fragment_=contact
http://www.diablo3values.com/?_escaped_fragment_=about
On your site you appear to be loading a new page on each click, and then loading the content of each page via ajax too. This is not how I would expect an ajax site to work. Usually the purpose of using ajax is that the user never has to load a whole new page: when the user clicks, the new content section is loaded and inserted into the current page. You serve the navigation once, and then you only serve the escaped fragments of the content.
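For reference, here is a minimal client-side sketch of hashbang navigation; the #content element and the /fragments/ endpoint are assumptions for illustration, not from the question:

// map #!about to an ajax request for that page's content fragment
function route() {
    var page = location.hash.replace(/^#!/, "") || "home";
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/fragments/" + page + ".html"); // hypothetical endpoint
    xhr.onload = function () {
        document.getElementById("content").innerHTML = xhr.responseText;
    };
    xhr.send();
}
window.addEventListener("hashchange", route); // fires on #!about -> #!contact
route(); // handle the initial URL too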
I couldn't really word the title very well, but here's my problem: I've got a webpage that reads from a database each time the user clicks a button, and the new content then replaces part of the page.
Because it is an ajax load, everything is done in the background, so the URL stays the same. This wasn't a problem at all until I realised that I will want a different Facebook comments box for each set of content that is loaded, so that if someone comments, it is posted to their Facebook profile, and people who click the link are taken to that specific content.
So... what I need is some way of referencing each set of content, and I've found a site that does exactly that (I'm sure there are a lot of them).
Here's the link.
Each set of content has a different 'hash code' (I don't know the actual name for it) appended to the URL; in this case the code is "#1922934". This allows people to post links to that specific set of content on Facebook etc., and it also allows a different Facebook comment box for each set of content.
Does anyone know how such a set-up can be achieved or how these 'hash codes' work?
Here's a document from Wikipedia on it: http://en.wikipedia.org/wiki/Fragment_identifier
The main idea is that URI fragments are used because they don't cause a page reload. They can also be used to refer to anchors on a web page.
What I would do is, on page load, use JavaScript to read the URI fragment (location.hash) and then make a request to your server to load the comments etc. The URI fragment is never sent to the server; it is only available on the client (browser).
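A minimal sketch of that idea; the #content element and the /content endpoint are made up for illustration:

window.addEventListener("load", function () {
    var id = location.hash.slice(1); // "#1922934" -> "1922934"
    if (!id) return;
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/content?id=" + encodeURIComponent(id)); // hypothetical endpoint
    xhr.onload = function () {
        document.getElementById("content").innerHTML = xhr.responseText;
    };
    xhr.send();
});
// when the user switches content, update the fragment so the URL is shareable:
// location.hash = newContentId;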
Sounds like you want something like SammyJS.
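Roughly, Sammy.js wraps this hash routing in a small API; a sketch in its documented routing style (the route pattern and the loadContent helper are illustrative):

var app = Sammy(function () {
    // runs whenever location.hash matches, e.g. "#/1922934"
    this.get("#/:id", function () {
        loadContent(this.params.id);
    });
});

function loadContent(id) {
    // hypothetical: fetch and insert the content for this id, as in the sketch above
}

app.run("#/");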
I am using a script for Ajax from Dynamic Drive on my site to load content into my div. It has worked great for me until I created a page where I want links. For some reason, if I create a page with even a single link, the page will not load; I can click on it all I want and the page still will not load. If I have a page that is purely text content, it loads. Is this a flaw with Ajax, or am I not doing something right?
My intention with my site is to have a "Store" section so I can use Amazon Affiliates, but I can't get that page to load even with a simple link pointing to Amazon.com. Unfortunately, this Ajax script has been the only successful way I've been able to get my content to load into my main div. For some odd reason, the page in my links section will appear and load, but not my "store" page.
My site is: http://veterinarycare.atspace.cc
I'm not asking for a direct code, but just a step in the right direction.
'store.html' gives a 404 Not Found... does this file exist? That is probably your problem. Your links.html page, for example, has a link to the ASPCA and that works fine.
You may also want to look into jQuery, as it is a bit neater for doing ajax and other JavaScript effects. You could probably get all that JavaScript mumbo jumbo down to 5 lines or so...
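For example, a rough jQuery sketch of the same load-a-page-into-the-main-div behavior; the selectors are assumptions:

// intercept clicks on navigation links and load the target page into #main
$(document).on("click", "a.nav-link", function (e) {
    e.preventDefault();
    $("#main").load(this.href); // fetches the URL and inserts the returned HTML
});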
Also remember that your site isn't going to be particularly Google-friendly with all the content being loaded in via JavaScript.
I have a content listings page on a site I'm developing that uses ajax to pull items based on filtering picked by the user (filter by date, tag, genre, etc.).
In order for the page to work for crawlers and non-JavaScript users, I have a standard listing of the content as well, hidden, but with a noscript tag in the header making it display:block (I have been told this is okay in HTML5).
Trouble is, the site is now doing everything twice: loading the content via ajax as well as the static alternative, only for the latter to be hidden with CSS.
I know for sure this isn't best practice, but I'm struggling to think of a solution where the content only loads once. Any thoughts on the matter would be greatly appreciated.
You could display the static listing by default with CSS, and then hide it with JS (without using noscript).
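A minimal sketch of that suggestion; the class names are made up, and the matching CSS rule would be something like .js .static-listing { display: none; }:

// add a "js" class as early as possible (e.g. an inline script in <head>);
// the CSS rule above then hides the server-rendered listing only when
// JavaScript is available, with no noscript tag needed
document.documentElement.className += " js";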
I am looking at the Gawker blogs (http://io9.com, http://lifehacker.com/) and I'm curious about how they are made.
When I click on a link, only the article part of the page reloads, displaying a loading icon while it does.
But what I can't figure out is that the links point to new URLs like io9.com/something/something, rather than the site.com/#something style I see on other ajax pages, where a hash is appended to the URL from JavaScript to mark the page after an ajax request.
Can I change the full URL from JavaScript, or what is happening?
When that happens, the website is using the HTML5 History API. This API can change the URL (via JavaScript) without reloading the page.
See caniuse.com for browser support.
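A minimal sketch of the pattern; the link selector and the loadArticle function are made up for illustration:

// intercept article links, swap content via ajax, and rewrite the address bar
document.addEventListener("click", function (e) {
    var link = e.target.closest("a.article-link"); // hypothetical selector
    if (!link) return;
    e.preventDefault();
    history.pushState({ url: link.href }, "", link.href); // URL changes, no reload
    loadArticle(link.href);
});

// the back/forward buttons fire popstate instead of reloading
window.addEventListener("popstate", function (e) {
    if (e.state) loadArticle(e.state.url);
});

function loadArticle(url) {
    // hypothetical: fetch the article fragment and insert it into the page
}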
If you would like to implement it in your website, backbonejs.org would be very useful.