We use CentOS-6, a Red Hat EL6 distro rebuild, which ships with Firefox 10.0.12 ESR. We recently changed the favicon.ico image on several internal servers; actually, we just provided the corporate favicon to those sites that had none.
Now, the difficulty is this: when a Firefox user who visited a given URL before the favicon was provided visits that URL again after the update, the new favicon is not displayed in either the address bar or the tab for that page. Instead they see the dashed box outline indicating that no favicon is present.
However, when a Firefox user who had never previously visited that same URL does so, that Firefox instance does display the new favicon in both the address bar and the tab.
I have looked into this briefly and frankly was astounded at how common this problem appears to be and at the absolute lack of any sensible response to it, even on Mozilla's own support forums. I have tried hacking and picking at the places.sqlite store, but even deleting the entire places.sqlite file, or emptying the favicon tables and restarting Firefox, does not get the changed favicon to display in the tab and address bar. All that does is hammer the user's bookmarks.
Now I can, and have, resorted to the trick of adding <link rel="icon" href="favicon.ico"> in the <head></head> block of those URLs that use static pages, but some pages are generated dynamically by third-party applications, and those URLs do not offer a convenient way to modify their output.
Where does Firefox 10 cache the favicon for a newly visited URL, and how does one remove that reference from the user's profile?
A browser does not necessarily request a fresh copy of /favicon.ico (or an icon specified by a link tag) on each visit. Once it has a favicon (or thinks there isn't one), it is often some time before it will request an update; exactly how long depends on the particular browser. I have had some success with unbookmarking the site and clearing the browser cache.
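Where you can touch the markup at all, one common workaround (a sketch, not specific to Firefox 10) is to change the icon's URL whenever the image changes, for example with a version query string, so the browser treats it as a brand-new resource instead of consulting its cached copy:
<!-- "v" is an arbitrary cache-busting value; bump it whenever the icon changes -->
<link rel="shortcut icon" type="image/x-icon" href="/favicon.ico?v=2">
<link rel="icon" type="image/x-icon" href="/favicon.ico?v=2">
This has the same limitation the question already notes, of course: it only helps on pages whose markup you control.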
Using the Open Graph protocol, an image can be signaled to applications such as Facebook to associate with a site link. For example:
<meta property="og:image" content="https://mysite.any/images/thumbnail.jpg">
However, this has a disadvantage: the image is fixed, i.e., it is always the same image no matter which page is requested. Obviously, using PHP, I could select a different existing image depending on the page being called, but doing that for every page on a site with hundreds of constantly updated pages is mission impossible.
Ideally, I would like to be able to generate that image automatically when the server receives the HTTP call from the agent and generates the page content.
But how? Is there a simple way to do this, taking into account that the page is actually rendered by the browser, so the server has no idea how it will be presented? Probably I cannot do that with PHP alone and need some JavaScript as well. Is that correct?
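For the simpler half of the problem, choosing an existing image per page, a minimal PHP sketch might look like the following (the thumbnail directory and naming convention are assumptions, not something your site necessarily has):
<?php
// Hypothetical convention: /images/thumbnails/<page>.jpg holds a per-page thumbnail;
// fall back to the site-wide default image when no page-specific one exists.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$page = $path ? basename($path) : '';
$thumbnail = '/images/thumbnails/' . $page . '.jpg';
if ($page === '' || !is_file($_SERVER['DOCUMENT_ROOT'] . $thumbnail)) {
    $thumbnail = '/images/thumbnail.jpg';
}
?>
<meta property="og:image" content="https://mysite.any<?php echo htmlspecialchars($thumbnail); ?>">
Note that whatever reads og:image (Facebook's scraper, for instance) fetches the image itself and does not execute the page's JavaScript, so a truly generated image would have to be produced server-side (e.g. with PHP's GD or Imagick extensions) rather than in the browser.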
I've been having this problem for some days now and have spent two hours debugging a simple meta tag without result.
The problem I'm facing is that the Facebook debugger picks up a completely different URL from CloudFront and sets the wrong meta image.
The URL
The image it should show
I also filed a bug report with Facebook: https://developers.facebook.com/bugs/807177749469671/
The Facebook object debugger for your given URL mentions that some redirects are occurring on the page, and also where these redirections occur. A redirect can be caused by an HTTP redirect, a meta tag with the og:url property, or a link tag. Your page does contain one of these tags, on line 91 to be specific:
<link rel="dns-prefetch" href="//yummygum.com">
This redirection is why Facebook is trying to load an image named "meta-img.png": that is the og:image the homepage refers to. Try removing that link redirection and see whether it loads the right image.
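As a sketch with placeholder URLs (not taken from the page in question), it is also worth making the canonical URL and the share image explicit, so the scraper has less room to resolve references you did not intend:
<meta property="og:url" content="https://example.com/my-page/">
<meta property="og:image" content="https://example.com/images/my-page-share.png">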
I have a site with all secured content. Everything is loaded using HTTPS. I have verified this using Fiddler2, the built-in debugger, and the DebugBar plugin. Nothing is loaded using HTTP. Nonetheless, I am still getting the "Do you want to view only the webpage content that was delivered securely?" prompt when I try to load the page in IE8. My users are complaining and I don't have a clue how to fix this. They are not computer administrators and cannot change the security policy for IE on their machines.
I figured out the problem and thought I'd post it here in case anyone else ever comes across this issue. The problem is that IE8 was treating a CSS background property with a relative URL as insecure. So I had something like this:
.SomeRule
{
    background: url('/SomeFolder/SomeImage.png') 95% 50% no-repeat;
}
and I had to change it to this to make the warning go away:
.SomeRule
{
    background: url('https://www.SomeSite.com/SomeFolder/SomeImage.png') 95% 50% no-repeat;
}
I had a similar problem with a WordPress site to which I had recently added SSL. Obviously, something was being loaded over HTTP, but what?
First, I checked the obvious:
I checked embedded page and post images for fully qualified paths using the http protocol.
Then I checked links relative to the root as #datadamnation suggested in his solution.
Next I looked in my CSS to see if a background image URL used the http protocol.
I checked my plugins and my plugins' CSS.
I checked the content in the sidebar widgets.
I checked the images loaded in the carousel slider.
Finally, I checked the theme's header image. When I looked at it using Firebug, I could see that it was still using http. To correct it, I had to remove the WordPress header image, and then add it back again and save. Refresh the page, and now the mixed content warning message is gone! It would have saved me a couple of hours of trial and error if I had done this first, so maybe you'll read this and save yourself some time.
I have tried to set my site up (http://www.diablo3values.com) according to the guidelines set out here: https://developers.google.com/webmasters/ajax-crawling/. However, it appears that Google has updated its index (because I see the revisions to the meta description tags), but the AJAX content does not show up in the index.
I am trying to use the “Handle pages without hash fragments” option.
If you view either of the following:
http://www.diablo3values.com/?_escaped_fragment_=
http://www.diablo3values.com/about?_escaped_fragment_=
you will correctly see the HTML snapshot with my content (those are the two pages I am most concerned about).
Any ideas? Am I doing something wrong? How do you get Google to correctly recognize the tag?
I'm typing this as an answer, since it got a little too long to be a comment.
First of all, your links seem to point to localhost:8080/about, and not /about, which is probably why Google doesn't index it in the first place.
Second, here's my experience with pushState URLs and Google AJAX crawling:
My experience is that AJAX crawling with pushState URLs is handled a little differently by Google than with hashbang URLs. Since Google won't know that your URL is a pushState URL (it looks just like a regular URL), you need to add <meta name="fragment" content="!"> to all your pages, not only the "root" page. Google also doesn't seem to know that the pages are part of the same application, so it treats every page as a separate AJAX application. As a result, Googlebot will never actually build a navigation structure inside _escaped_fragment_, like _escaped_fragment_=/about, as it would with a hashbang URL (#!/about). Instead, it will request /about?_escaped_fragment_= (which you apparently already have set up). This goes for all your "deep links": instead of /?_escaped_fragment_=/thelink, Google will always request /thelink?_escaped_fragment_=.
But as I said initially, the reason it doesn't work for you is probably that you have localhost:8080 URLs in the HTML generated for _escaped_fragment_ requests.
Googlebot only knows to crawl the escaped fragment if your URLs conform to the hashbang convention. As users navigate your site, your URLs need to be:
http://www.diablo3values.com/
http://www.diablo3values.com/#!contact
http://www.diablo3values.com/#!about
Googlebot actually needs to see these URLs in the source code so that it can follow them. Then it knows to download the following URLs:
http://www.diablo3values.com/?_escaped_fragment_=contact
http://www.diablo3values.com/?_escaped_fragment_=about
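In the page markup, the hashbang navigation links that Googlebot needs to see would look something like this (the anchor text is made up for the example):
<a href="/#!about">About</a>
<a href="/#!contact">Contact</a>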
On your site you appear to be loading a new page on each click, and then loading the content of each page via AJAX too. This is not how I would expect an AJAX site to work. Usually the purpose of using AJAX is so that the user never has to load a whole new page. When the user clicks, the new content section is loaded and inserted into the page. You serve the navigation once and then you only serve escaped fragments of the content.
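As a rough sketch of the server side (the snapshots directory and file naming are hypothetical), handling the escaped fragment in PHP could look like this:
<?php
// When the crawler requests ?_escaped_fragment_=about, serve the pre-rendered
// HTML snapshot for that page instead of the JavaScript application shell.
if (isset($_GET['_escaped_fragment_'])) {
    $fragment = preg_replace('/[^A-Za-z0-9_-]/', '', $_GET['_escaped_fragment_']);
    $snapshot = __DIR__ . '/snapshots/' . ($fragment === '' ? 'index' : $fragment) . '.html';
    if (is_file($snapshot)) {
        readfile($snapshot);
        exit;
    }
}
// Otherwise fall through and serve the normal AJAX-driven page.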
I’ve created a welcome tab for a Page. When I set the Page Tab URL to pull content from http://journalism.unr.edu/facebook/welcome/, the tab comes up blank. But when I uploaded a copy of the content to a free 000webhost.com hosting account at http://rsj.netii.net/welcome/, and used that address as the Page Tab URL, the content loads just fine. What I want to know is, why?
I’ve experimented with pulling content from other URLs into my page tab:
yahoo.com - works
google.com - doesn’t work
unr.edu - works
journalism.unr.edu - doesn’t work
unr.edu/engineering - doesn’t work (shows error message: “The page cannot be displayed. The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.”)
Does anyone know why page tabs/iframe apps load content from some domains but not others? Can anyone tell me how to fix the journalism.unr.edu web server (I have access to it, I work for the journalism school) so that page tabs can load content from it? We’d like to be able to pull content straight from our website without having to copy it over to a free hosting account.
I'm not sure what the problem is with http://journalism.unr.edu/facebook/welcome/
but I cannot even get it to load inside an iframe. Maybe there's some restriction set up in the hosting or the server that serves that site, or a more complex issue with the server not allowing the page to be framed from a different host domain (for example via an X-Frame-Options response header).
It's simple to test; just make an HTML page like this:
<html>
<body>
    <p>I hope it loads</p>
    <iframe src="http://journalism.unr.edu/facebook/welcome/" width="400" height="300"></iframe>
</body>
</html>
Also: "unr.edu/engineering - doesn’t work (shows error message: “The page cannot be displayed. The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.”)"
That one is because you're pointing to a resource on the server that does not allow HTTP POSTs.
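For context: Facebook loads a Page Tab URL with an HTTP POST (that request carries the signed_request parameter), so whatever serves the tab content must accept POST rather than only GET. A minimal sketch of a tab page in PHP:
<?php
// Facebook POSTs to the Page Tab URL and includes a signed_request field,
// so the page serving the tab must accept POST requests, not just GET.
$signed_request = isset($_POST['signed_request']) ? $_POST['signed_request'] : null;
?>
<html>
<body>
<p>Tab content goes here; it is served the same way for GET and POST.</p>
</body>
</html>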